Sample records for method main results

  1. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach.

    PubMed

    Park, Hyunseok; Magee, Christopher L

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches to main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations: they risk omitting some dominant patents from the identified main paths, and the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from high-persistence patents, which are identified using a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than those identified by the existing approach. Moreover, the proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of them.
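
The backward/forward path-search step described above can be illustrated on a toy citation network. This is a loose sketch only: the patent IDs, the greedy neighbour choice, and the graph itself are invented, and the genetic knowledge persistence algorithm that selects the seed patent is not reproduced here.

```python
def trace_paths(cites, seed):
    """Follow citation links backward and forward from a seed patent.

    `cites` maps each patent to the patents it cites (backward links);
    forward links are derived by inverting that mapping.
    """
    cited_by = {}
    for p, refs in cites.items():
        for r in refs:
            cited_by.setdefault(r, []).append(p)

    def walk(links, node, path):
        path = path + [node]
        nxt = links.get(node, [])
        if not nxt:
            return path
        # greedily follow the most-connected neighbour (a simplification)
        return walk(links, max(nxt, key=lambda n: len(links.get(n, []))), path)

    backward = walk(cites, seed, [])
    forward = walk(cited_by, seed, [])
    return backward[::-1] + forward[1:]   # oldest -> newest through the seed

# toy graph: P3 cites P1; P4 cites P3 and P2; P5 cites P4
cites = {"P3": ["P1"], "P4": ["P3", "P2"], "P5": ["P4"]}
print(trace_paths(cites, "P4"))  # ['P1', 'P3', 'P4', 'P5']
```

Seeding the walk at a high-persistence patent ("P4" here) yields one candidate main path through it; the paper's method repeats this for every high-persistence patent.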

  2. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach

    PubMed Central

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches to main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations: they risk omitting some dominant patents from the identified main paths, and the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from high-persistence patents, which are identified using a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than those identified by the existing approach. Moreover, the proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of them. PMID:28135304

  3. Application of Innovative P&E Method at Technical Universities in Slovakia

    ERIC Educational Resources Information Center

    Nemec, Miroslav; Krišták, Luboš; Hockicko, Peter; Danihelová, Zuzana; Velmovská, Klára

    2017-01-01

    The paper deals with innovative teaching methods at universities. The result of this effort is the interactive P&E method, whose main idea is interactive work with students while solving problem tasks. The main aim of the method is to change the students' position, by means of experiment analyses and qualitative tasks, from a passive…

  4. Determination of main components in the extracellular polymeric substances extracted from activated sludge using a spectral probing method.

    PubMed

    Shen, Rong; Sheng, Guo-Ping; Yu, Han-Qing

    2012-06-01

    In this study, a spectral probing method was applied to determine the content of the main components, i.e., proteins, polysaccharides and humic substances, in the extracellular polymeric substances (EPS) extracted from activated sludge. The measurement results were consistent with those obtained from conventional methods, such as the anthrone method for polysaccharide determination and the modified Lowry method for protein and humic substance determination. The recoveries for the determination of proteins, humic substances and polysaccharides in the EPS extracted from six sludge samples using the standard addition method were between 92.4 and 108.9%, 84.8 and 108.9%, and 75.1 and 117.2%, respectively. These results indicate that the proposed method has good accuracy and precision, and can be used as an effective approach to determine the main components in sludge EPS. Copyright © 2012 Elsevier B.V. All rights reserved.
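
The recoveries quoted above rest on the standard addition method. As background, a minimal sketch of that calculation: the native analyte amount is read off the x-intercept of the signal-vs-amount-added line. All numbers are hypothetical, and the spectral probing chemistry itself is not modelled.

```python
def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def standard_addition(added, signal):
    """Native analyte amount, from the magnitude of the x-intercept of
    the signal vs. amount-added line (signal = k * (native + added))."""
    slope, intercept = linfit(added, signal)
    return intercept / slope

added = [0.0, 1.0, 2.0, 3.0]      # spiked amounts, hypothetical units
signal = [3.0, 5.0, 7.0, 9.0]     # instrument response, hypothetical
print(standard_addition(added, signal))  # 1.5
```

A recovery figure then compares such an estimate against the known spiked amount, expressed as a percentage.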

  5. Simultaneous quantitative analysis of main components in linderae reflexae radix with one single marker.

    PubMed

    Wang, Li-Li; Zhang, Yun-Bin; Sun, Xiao-Ya; Chen, Sui-Qing

    2016-05-08

    The objective was to establish a quantitative analysis of multi-components by single marker (QAMS) method for quality evaluation and to validate its feasibility through the simultaneous quantitative assay of four main components in Linderae Reflexae Radix. Four main components, pinostrobin, pinosylvin, pinocembrin, and 3,5-dihydroxy-2-(1-p-menthenyl)-trans-stilbene, were selected as analytes for quality evaluation by RP-HPLC coupled with a UV detector. The method was evaluated by comparing the quantitative results of the external standard method and QAMS on different HPLC systems. The results showed no significant differences between the contents of the four components of Linderae Reflexae Radix determined by the external standard method and by QAMS (RSD <3%). The contents of the four analytes (pinosylvin, pinocembrin, pinostrobin, and Reflexanbene I) in Linderae Reflexae Radix were determined from the single marker pinosylvin. The fingerprint spectra were determined on Shimadzu LC-20AT and Waters e2695 HPLC systems equipped with three different columns.

  6. A Simplified and Reliable Damage Method for the Prediction of the Composites Pieces

    NASA Astrophysics Data System (ADS)

    Viale, R.; Coquillard, M.; Seytre, C.

    2012-07-01

    Structural engineers are often confronted with test results on composite structures that are considerably tougher than predicted. To narrow this frequent gap, a survey of extensive synthesis works on prediction methods and failure criteria was conducted. This inquiry dealt with the plane stress state only. All classical methods have strengths and weaknesses with respect to practicality and reliability. The main conclusion is that, in the plane stress case, the best usual industrial methods give rather similar predictions; however, they generally do not explain the often large discrepancies with respect to the tests, mainly in cases of strong stress gradients or of biaxial laminate loadings. It seems that only methods that consider the complexity of composite damage (so-called physical methods, or Continuum Damage Mechanics, "CDM") bring a clear improvement over the usual methods. The only drawback of these methods is their relative intricacy, especially under industrial time pressure. A method with an approximate but simplified representation of the CDM phenomenology is presented. Compared to tests and other methods, it brings a fair improvement in correlation with tests over the usual industrial methods, and it gives results very similar to the more painstaking CDM methods and very close to the test results. Several examples are provided. In addition, the method is economical with respect to material characterization as well as modeling and computation effort.

  7. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center

    PubMed Central

    Dou, Chao

    2016-01-01

    The storage volume of an internet data center is a classical time series, and predicting it is valuable for business planning. However, the storage volume series from a data center is always “dirty”: it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the series before prediction. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the “dirty” data; cubic spline interpolation and averaging are then used to reconstruct the main trend. The method is applied to the storage volume series of an internet data center. The experimental results show that the method estimates the main trend of the storage volume series accurately and contributes greatly to predicting future volume values. 
 PMID:28090205
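
The Kalman-filter stage of the pipeline above can be sketched in miniature. This is not the authors' implementation: it uses a scalar random-walk state model, and it substitutes the filter's own estimates at missing points for the paper's cubic spline/averaging reconstruction.

```python
def kalman_smooth(series, q=0.01, r=1.0):
    """Scalar random-walk Kalman filter; None marks a missing sample."""
    x, p, out = None, 1.0, []
    for z in series:
        if x is None:                 # initialise on the first sample
            x = z if z is not None else 0.0
        p += q                        # predict: state variance grows
        if z is not None:
            k = p / (p + r)           # Kalman gain
            x += k * (z - x)          # correct with the observation
            p *= (1.0 - k)
        out.append(x)
    return out

noisy = [0.0, 10.0, 0.0, 10.0, 0.0]   # wildly oscillating "dirty" input
print(kalman_smooth(noisy))           # values pulled toward the running trend
```

With a small process noise `q` relative to the measurement noise `r`, spikes and outliers are heavily discounted, which is what makes the filtered series usable as a trend estimate.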

  8. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center.

    PubMed

    Miao, Beibei; Dou, Chao; Jin, Xuebo

    2016-01-01

    The storage volume of an internet data center is a classical time series, and predicting it is valuable for business planning. However, the storage volume series from a data center is always "dirty": it contains noise, missing data, and outliers, so it is necessary to extract the main trend of the series before prediction. In this paper, we propose an irregular sampling estimation method to extract the main trend of the time series, in which a Kalman filter is used to remove the "dirty" data; cubic spline interpolation and averaging are then used to reconstruct the main trend. The method is applied to the storage volume series of an internet data center. The experimental results show that the method estimates the main trend of the storage volume series accurately and contributes greatly to predicting future volume values. 

  9. Mapping and monitoring potato cropping systems in Maine: geospatial methods and land use assessments

    USDA-ARS?s Scientific Manuscript database

    Geospatial frameworks and GIS-based approaches were used to assess current cropping practices in potato production systems in Maine. Results from the geospatial integration of remotely-sensed cropland layers (2008-2011) and soil datasets for Maine revealed a four-year potato systems footprint estima...

  10. Non-volatile main memory management methods based on a file system.

    PubMed

    Oikawa, Shuichi

    2014-01-01

    There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are such examples. Such NV memory can be used as storage because its data persist without a power supply, and it can be used as main memory because its performance matches that of DRAM. A number of studies have investigated its use for main memory and for storage, but they were conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration allows NV memory to be utilized simultaneously as both main memory and storage. The presented methods use a file system as the basis for NV memory management. We implemented the proposed methods in the Linux kernel and performed the evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects of the longer access latency of NV memory by cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.

  11. Simultaneous stochastic inversion for geomagnetic main field and secular variation. I - A large-scale inverse problem

    NASA Technical Reports Server (NTRS)

    Bloxham, Jeremy

    1987-01-01

    The method of stochastic inversion is extended to the simultaneous inversion of both main field and secular variation. In the present method, the time dependency is represented by an expansion in Legendre polynomials, resulting in a simple diagonal form for the a priori covariance matrix. The efficient preconditioned Broyden-Fletcher-Goldfarb-Shanno algorithm is used to solve the large system of equations resulting from expansion of the field spatially to spherical harmonic degree 14 and temporally to degree 8. Application of the method to observatory data spanning the 1900-1980 period results in a data fit of better than 30 nT, while providing temporally and spatially smoothly varying models of the magnetic field at the core-mantle boundary.

  12. [Comparison of different methods in dealing with HIV viral load data with diversified missing value mechanism on HIV positive MSM].

    PubMed

    Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To compare the results of different methods for handling HIV viral load (VL) data with different missing-value mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data with different missing-value mechanisms from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood using the expectation-maximization algorithm (EM), the regression method, mean imputation, deletion, and Markov chain Monte Carlo (MCMC) were used to impute missing data. The results of the different methods were compared in terms of distribution characteristics, accuracy, and precision. Results: HIV VL data could not be transformed into a normal distribution. All methods performed well on data missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the imputed databases from all methods were close to the original mean. EM, the regression method, mean imputation, and deletion under-estimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can serve as a reference for estimating mean HIV VL in the investigated population.
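
The gap between two of the compared approaches, mean imputation and regression imputation, is easy to reproduce on toy data when the missing variable depends on an observed one. This sketch does not model the actual VL data, and EM and MCMC are beyond it.

```python
import random

random.seed(0)
x = [random.uniform(0, 10) for _ in range(200)]
y = [2 * v + random.gauss(0, 1) for v in x]        # y depends on x
missing = set(range(0, 200, 5))                     # 20% of y missing, MCAR

def mean_impute(y, missing):
    obs = [v for i, v in enumerate(y) if i not in missing]
    m = sum(obs) / len(obs)
    return [m if i in missing else v for i, v in enumerate(y)]

def regression_impute(x, y, missing):
    xs = [v for i, v in enumerate(x) if i not in missing]
    ys = [v for i, v in enumerate(y) if i not in missing]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((a - mx) * (c - my) for a, c in zip(xs, ys)) / \
        sum((a - mx) ** 2 for a in xs)
    a0 = my - b * mx
    return [a0 + b * x[i] if i in missing else v for i, v in enumerate(y)]

def rmse(est, truth, idx):
    return (sum((est[i] - truth[i]) ** 2 for i in idx) / len(idx)) ** 0.5

print(rmse(mean_impute(y, missing), y, missing))           # large error
print(rmse(regression_impute(x, y, missing), y, missing))  # near noise level
```

Regression imputation exploits the x-y relationship, so its error at the missing points approaches the noise standard deviation, while mean imputation's error approaches the full spread of y.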

  13. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study

    PubMed Central

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-01-01

    Purpose: To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Methods: Based on previous findings that breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from true stabilized PDF that resulted from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients’ breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. 
Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Results: Probability-based sorting showed improved similarity of breathing motion PDF from 4D images to reference PDF compared to single cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation on motion artifacts and quantitative evaluation on tumor volume precision and accuracy and accuracy of AIP of the 4D images. Conclusions: In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors’ preliminary results showed that the new method can improve the accuracy of tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management. PMID:27908178
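
Step (2) of the method above, grouping breathing cycles by amplitude and period and keeping any group holding more than 10% of all cycles, can be sketched as follows. The rounding-based binning is an invented simplification; the abstract does not specify the actual grouping rule.

```python
def main_breathing_patterns(cycles, threshold=0.10):
    """cycles: list of (amplitude, period) tuples, one per breathing cycle.

    Returns {bin_key: (mean amplitude, mean period)} for every group that
    contains more than `threshold` of all cycles.
    """
    groups = {}
    for amp, per in cycles:
        key = (round(amp), round(per))   # crude binning, for illustration
        groups.setdefault(key, []).append((amp, per))
    main = {}
    for key, members in groups.items():
        if len(members) / len(cycles) > threshold:
            # represent the group by the average of its member cycles
            n = len(members)
            main[key] = (sum(a for a, _ in members) / n,
                         sum(p for _, p in members) / n)
    return main

cycles = [(10.1, 4.0), (9.9, 4.2), (10.0, 3.9), (15.0, 6.0)]
print(main_breathing_patterns(cycles))   # two main patterns survive
```

Each surviving group's average cycle then seeds one 4D reconstruction in step (3), with the group's share of cycles serving as its weighting.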

  14. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms derived and compared with classical analytical methods. In the method, asymptotic expansions are replaced with integrals evaluated numerically. Resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.

  15. Improving Allergen Prediction in Main Crops Using a Weighted Integrative Method.

    PubMed

    Li, Jing; Wang, Jing; Li, Jing

    2017-12-01

    As a public health problem, food allergy is frequently caused by allergenic food proteins, which trigger a type-I hypersensitivity reaction in the immune system of atopic individuals. The food allergens in our daily lives come mainly from crops including rice, wheat, soybean and maize. However, the allergens in these main crops are far from fully uncovered. Although some bioinformatics tools and methods for predicting the potential allergenicity of proteins have been proposed, each method has its limitations. In this paper, we built a novel algorithm, PREALW, which integrates PREAL, the FAO/WHO criteria and a motif-based method through a weighted average score, to combine the advantages of the different methods. Our results illustrate that PREALW performs significantly better in allergen prediction for these crops. This integrative allergen prediction algorithm could be useful for critical food safety matters. PREALW can be accessed at http://lilab.life.sjtu.edu.cn:8080/prealw .
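
The abstract describes PREALW as integrating three component methods through a weighted average score. Neither the weights nor the component scoring functions are given, so everything in this sketch is a placeholder showing only the combination step.

```python
def weighted_score(scores, weights):
    """Weighted average of per-method allergenicity scores in [0, 1]."""
    if len(scores) != len(weights) or not scores:
        raise ValueError("need exactly one weight per score")
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# hypothetical scores from PREAL, FAO/WHO criteria, and a motif method,
# with hypothetical weights
print(weighted_score([0.9, 0.6, 0.8], [0.5, 0.2, 0.3]))  # ~0.81
```

Normalizing by the weight sum keeps the combined score on the same [0, 1] scale as its inputs regardless of how the weights are chosen.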

  16. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.

    PubMed

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-12-01

    To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from true stabilized PDF that resulted from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. 
Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Probability-based sorting showed improved similarity of breathing motion PDF from 4D images to reference PDF compared to single cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation on motion artifacts and quantitative evaluation on tumor volume precision and accuracy and accuracy of AIP of the 4D images. In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors' preliminary results showed that the new method can improve the accuracy of tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management.

  17. A comparative study of three pillars system and banking methods in accounting long-term purposes of retiree in Indonesian saving account

    NASA Astrophysics Data System (ADS)

    Hasbullah, E. S.; Suyudi, M.; Halim, N. A.; Sukono; Gustaf, F.; Putra, A. S.

    2018-03-01

    Human productivity is the main capital in economic activity. Consequently, the continuity of human resources in the economic sector depends on a limited productive age: once economic agents reach that limit, they enter retirement. The preparation of an ‘old-age’ fund therefore becomes crucial and should begin before retirement to avoid leaving retirees destitute. Two of the simplest and most familiar methods of preparing a pension fund are the three pillars system and the banking method. Here we simulate both methods on synthetic investment program data and analyse the results. The results suggest that the three pillars system is effective for long-term schemes, whereas the banking method is better adapted to short-term plans.

  18. Fine-Grained Indexing of the Biomedical Literature: MeSH Subheading Attachment for a MEDLINE Indexing Tool

    PubMed Central

    Névéol, Aurélie; Shooshan, Sonya E.; Mork, James G.; Aronson, Alan R.

    2007-01-01

    Objective This paper reports on the latest results of an Indexing Initiative effort addressing the automatic attachment of subheadings to MeSH main headings recommended by the NLM’s Medical Text Indexer. Material and Methods Several linguistic and statistical approaches are used to retrieve and attach the subheadings. Continuing collaboration with NLM indexers also provided insight into how automatic methods can better enhance indexing practice. Results The methods were evaluated on a corpus of 50,000 MEDLINE citations. For main heading/subheading pair recommendations, the best precision is obtained with a post-processing rule method (58%), while the best recall is obtained by pooling all methods (64%). For stand-alone subheading recommendations, the best performance is obtained with the PubMed Related Citations algorithm. Conclusion Significant progress has been made in terms of subheading coverage. After further evaluation, some of this work may be integrated into the MEDLINE indexing workflow. PMID:18693897

  19. Analysis of the width-w non-adjacent form in conjunction with hyperelliptic curve cryptography and with lattices☆

    PubMed Central

    Krenn, Daniel

    2013-01-01

    In this work the number of occurrences of a fixed non-zero digit in the width-w non-adjacent forms of all elements of a lattice in some region (e.g. a ball) is analysed. As bases, expanding endomorphisms with eigenvalues of the same absolute value are allowed. Applications of the main result are on numeral systems with an algebraic integer as base. Those come from efficient scalar multiplication methods (Frobenius-and-add methods) in hyperelliptic curves cryptography, and the result is needed for analysing the running time of such algorithms. The counting result itself is an asymptotic formula, where its main term coincides with the full block length analysis. In its second order term a periodic fluctuation is exhibited. The proof follows Delange’s method. PMID:23805020
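
As background to the digit-counting result above, the width-w non-adjacent form itself is straightforward to compute for an ordinary integer in base 2: every nonzero digit is odd, lies in (-2^(w-1), 2^(w-1)), and is followed by at least w-1 zeros. Lattices and algebraic-integer bases, the paper's actual setting, are beyond this sketch.

```python
def wnaf(n, w):
    """Width-w non-adjacent form of a positive integer n (base 2)."""
    digits = []
    while n != 0:
        if n % 2:
            d = n % (1 << w)           # n mod 2^w
            if d >= (1 << (w - 1)):
                d -= (1 << w)          # map into (-2^(w-1), 2^(w-1))
            n -= d                     # n - d is divisible by 2^w here,
        else:                          # forcing the next w-1 digits to 0
            d = 0
        digits.append(d)
        n //= 2
    return digits                      # least significant digit first

print(wnaf(7, 2))  # [-1, 0, 0, 1]  since 7 = 8 - 1
```

Sparse representations like this are what make Frobenius-and-add scalar multiplication fast: each nonzero digit costs one precomputed-point addition, and the non-adjacency guarantee bounds how many there are.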

  20. Analysis of the width-w non-adjacent form in conjunction with hyperelliptic curve cryptography and with lattices.

    PubMed

    Krenn, Daniel

    2013-06-17

    In this work the number of occurrences of a fixed non-zero digit in the width-w non-adjacent forms of all elements of a lattice in some region (e.g. a ball) is analysed. As bases, expanding endomorphisms with eigenvalues of the same absolute value are allowed. Applications of the main result are on numeral systems with an algebraic integer as base. Those come from efficient scalar multiplication methods (Frobenius-and-add methods) in hyperelliptic curves cryptography, and the result is needed for analysing the running time of such algorithms. The counting result itself is an asymptotic formula, where its main term coincides with the full block length analysis. In its second order term a periodic fluctuation is exhibited. The proof follows Delange's method.

  1. A note on the computation of antenna-blocking shadows

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1993-01-01

    A simple and readily applied method is provided to compute the shadow cast on the main reflector of a Cassegrain antenna by the subreflector and the subreflector supports. The method entails some convenient minor approximations that produce results similar to those obtained with a lengthier mainframe computer program.

  2. Diversification of visual media retrieval results using saliency detection

    NASA Astrophysics Data System (ADS)

    Muratov, Oleg; Boato, Giulia; De Natale, Franesco G. B.

    2013-03-01

    Diversification of retrieval results allows for better and faster search. Recently, different methods have been proposed for diversifying image retrieval results, mainly utilizing text information and techniques imported from the natural language processing domain. However, images contain visual information that is impossible to describe in text, and the use of visual features is inevitable. Visual saliency is information about the main object of an image, implicitly included by humans while creating visual content. It is therefore natural to exploit this information for the task of diversification. In this work we study whether visual saliency can be used for diversification and propose a method for re-ranking image retrieval results using saliency. The evaluation has shown that the use of saliency information results in higher diversity of retrieval results.

  3. A qualitative and quantitative HPTLC densitometry method for the analysis of cannabinoids in Cannabis sativa L.

    PubMed

    Fischedick, Justin T; Glas, Ronald; Hazekamp, Arno; Verpoorte, Rob

    2009-01-01

    Cannabis and cannabinoid-based medicines are currently under serious investigation for legitimate development as medicinal agents, necessitating new low-cost, high-throughput analytical methods for quality control. The goal of this study was to develop and validate, according to ICH guidelines, a simple, rapid HPTLC method for the quantification of Delta(9)-tetrahydrocannabinol (Delta(9)-THC) and qualitative analysis of the other main neutral cannabinoids found in cannabis. The method was developed and validated with the use of pure cannabinoid reference standards and two medicinal cannabis cultivars. Accuracy was determined by comparing results obtained from the HPTLC method with those obtained from a validated HPLC method. Delta(9)-THC gives linear calibration curves in the range of 50-500 ng at 206 nm, with a linear regression of y = 11.858x + 125.99 and r(2) = 0.9968. The results show that the HPTLC method is reproducible and accurate for the quantification of Delta(9)-THC in cannabis. The method is also useful for the qualitative screening of the main neutral cannabinoids found in cannabis cultivars.
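
The calibration curve reported above (y = 11.858x + 125.99) is an ordinary least-squares line; this sketch fits such a line and inverts it to quantify an unknown. The data points are synthetic, generated from the reported equation purely for illustration.

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# synthetic points lying exactly on the reported curve y = 11.858x + 125.99
amount = [50, 100, 200, 300, 400, 500]               # ng of Delta(9)-THC
area = [11.858 * a + 125.99 for a in amount]         # detector response
slope, intercept = fit_line(amount, area)

def quantify(peak_area):
    """Invert the calibration curve to recover ng of analyte."""
    return (peak_area - intercept) / slope

print(round(quantify(11.858 * 200 + 125.99)))  # 200
```

In practice the r² of the fit (0.9968 in the paper) and replicate injections, not a single inversion, establish that the method is quantitative.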

  4. [Landscape pattern change of Dongzhai Harbour mangrove, South China analyzed with a patch-based method and its driving forces].

    PubMed

    Huang, Xing; Xin, Kun; Li, Xiu-zhen; Wang, Xue-ping; Ren, Lin-jing; Li, Xi-zhi; Yan, Zhong-zheng

    2015-05-01

    According to the interpreted results of three satellite images of Dongzhai Harbour obtained in 1988, 1998 and 2009, the changes in landscape pattern of the mangrove forest in Dongzhai Harbour and the differences in its driving forces were analyzed with a patch-based method for spatial distribution dynamics. The results showed that the areas of mangrove forest in 1988, 1998 and 2009 were 1809.4, 1738.7 and 1608.2 hm², respectively, a decreasing trend accompanied by an increasing degree of landscape fragmentation. The transformations among different landscape types indicated that mangrove, agricultural land and forest land were mainly changed into built-up land and aquaculture ponds. The statistical results obtained from three different methods, i.e., accumulative counting, percentage counting and main transformation route counting, showed that natural factors were the main reason for changes in patch number, responsible for 58.6%, 72.2% and 72.1% of the change, respectively, while the percentages of patch area change induced by human activities were 70.4%, 70.3% and 76.4%, respectively, indicating that human activities were the primary driver of changes in patch area.

  5. The Mixed Finite Element Multigrid Method for Stokes Equations

    PubMed Central

    Muzhinji, K.; Shateyi, S.; Motsa, S. S.

    2015-01-01

    The stable finite element discretization of the Stokes problem produces a symmetric indefinite system of linear algebraic equations. A variety of iterative solvers have been proposed for such systems in an attempt to construct efficient, fast, and robust solution techniques. This paper investigates one such iterative solver, the geometric multigrid solver, to find the approximate solution of the indefinite systems. The main ingredient of the multigrid method is the choice of an appropriate smoothing strategy. This study considers the application of different smoothers and compares their effects on the overall performance of the multigrid solver. We study the multigrid method with the following smoothers: distributed Gauss-Seidel, inexact Uzawa, preconditioned MINRES, and Braess-Sarazin type smoothers. A comparative study of the smoothers shows that the Braess-Sarazin smoothers enhance good performance of the multigrid method. We study the problem in a two-dimensional domain using the stable Hood-Taylor Q2-Q1 pair of finite rectangular elements. We also give the main theoretical convergence results. We present the numerical results to demonstrate the efficiency and robustness of the multigrid method and confirm the theoretical results. PMID:25945361
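
The smoother's role is easiest to see in a simpler setting than the indefinite Stokes system: the sketch below applies damped Jacobi (not one of the paper's Stokes smoothers) to an oscillatory error mode of the 1D Poisson problem, where the rapid decay of high-frequency error, the property multigrid relies on, is directly visible.

```python
import math

def jacobi_smooth(u, f, h, omega=2/3, steps=5):
    """Damped Jacobi sweeps for -u'' = f; Dirichlet endpoints held fixed."""
    n = len(u)
    for _ in range(steps):
        u = [u[i] if i in (0, n - 1) else
             (1 - omega) * u[i] + omega * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
             for i in range(n)]
    return u

n, h = 33, 1.0 / 32
# an oscillatory error mode: sin(16*pi*x) sampled on the grid
err = [math.sin(16 * math.pi * i * h) for i in range(n)]
smoothed = jacobi_smooth(err, [0.0] * n, h)
print(max(abs(v) for v in smoothed))   # far below 1: the mode is crushed
```

For this mode the per-sweep damping factor is 1 - omega*(1 - cos(16*pi*h)) = 1/3, so five sweeps reduce it by roughly (1/3)^5; the remaining smooth error is what the coarse-grid correction handles.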

  6. Introductory Guide to the Statistics of Molecular Genetics

    ERIC Educational Resources Information Center

    Eley, Thalia C.; Rijsdijk, Fruhling

    2005-01-01

    Background: This introductory guide presents the main two analytical approaches used by molecular geneticists: linkage and association. Methods: Traditional linkage and association methods are described, along with more recent advances in methodologies such as those using a variance components approach. Results: New methods are being developed all…

  7. Main Belt Comet P/2006 VW139: A Fragment from a Recent Collision?

    NASA Astrophysics Data System (ADS)

    Novakovic, B.; Hsieh, H. H.; Cellino, A.

    2012-05-01

    We applied different methods to examine the possibility that main-belt comet P/2006 VW139, recently discovered by Pan-STARRS 1, belongs to a small group of 24 asteroids. The results show strong evidence that P/2006 VW139 does indeed belong to this group.

  8. Concrete airship sheds at Orly, France. Part II

    NASA Technical Reports Server (NTRS)

    FREYSSINET

    1925-01-01

    This report deals mainly with the methods of construction employed after the plan had been approved. The foundation, side walls, doors and roof are all discussed, along with the economic savings resulting from this method of construction.

  9. Investigation of in-vivo skin autofluorescence lifetimes under long-term cw optical excitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lihachev, A; Ferulova, I; Vasiljeva, K

    2014-08-31

    The main results obtained during the last five years in the field of laser-excited in-vivo human skin photobleaching effects are presented. The main achievements and results, as well as the methods and experimental devices, are briefly described. In addition, the impact of long-term 405-nm cw low-power laser excitation on the skin autofluorescence lifetime is experimentally investigated. (laser biophotonics)

  10. Rupture process of 2016, 25 January earthquake, Alboran Sea (South Spain, Mw= 6.4) and aftershocks series

    NASA Astrophysics Data System (ADS)

    Buforn, E.; Pro, C.; del Fresno, C.; Cantavella, J.; Sanz de Galdeano, C.; Udias, A.

    2016-12-01

    We have studied the rupture process of the 25 January 2016 earthquake (Mw = 6.4) that occurred in southern Spain in the Alboran Sea. The main shock, foreshock and largest aftershocks (Mw = 4.5) have been relocated using the NonLinLoc algorithm. The results show a NE-SW distribution of foci at shallow depth (less than 15 km). For the main shock, the focal mechanism has been obtained from slip inversion over the rupture plane of teleseismic data, corresponding to left-lateral strike-slip motion. The rupture starts at 7 km depth and propagates upward with a complex source time function. In order to obtain a more detailed source time function and to validate the results obtained from teleseismic data, we have used the Empirical Green's Functions (EGF) method at regional distances. Finally, results on the directivity effect from teleseismic Rayleigh waves and from the EGF method are consistent with rupture propagation to the NE. These results are interpreted in terms of the main geological features of the region.

  11. Impact of different post-harvest processing methods on the chemical compositions of peony root.

    PubMed

    Zhu, Shu; Shirakawa, Aimi; Shi, Yanhong; Yu, Xiaoli; Tamura, Takayuki; Shibahara, Naotoshi; Yoshimatsu, Kayo; Komatsu, Katsuko

    2018-06-01

    The impact of key processing steps such as boiling, peeling, drying and storing on the chemical composition and morphological features of the produced peony root was investigated in detail by applying 15 processing methods to fresh roots of Paeonia lactiflora and then monitoring the contents of eight main components, as well as internal root color. The results showed that low-temperature (4 °C) storage of fresh roots for approximately 1 month after harvest resulted in a slightly increased and stable content of paeoniflorin, which might be due to suppression of enzymatic degradation. This storage also prevented the roots from discoloring, facilitating production of roots with a favorable bright color. The boiling process triggered decomposition of polygalloylglucoses, thereby leading to a significant increase in the contents of pentagalloylglucose and gallic acid. The peeling process resulted in decreased albiflorin and catechin contents. As a result, an optimized and practicable processing method ensuring high contents of the main active components in the produced root was developed.

  12. Analysis and Countermeasure Study on DC Bias of Main Transformer in a City

    NASA Astrophysics Data System (ADS)

    Wang, PengChao; Wang, Hongtao; Song, Xinpu; Gu, Jun; Liu, yong; Wu, weili

    2017-07-01

    Prompted by the DC magnetic bias phenomenon observed at the Guohua Beijing thermal power plant transformer in December 2015, monitoring data of the direct current over 24 hours are analyzed. We find that the maximum DC current reaches 25, with a trend cycle of about 30 s. On this basis, the mechanisms of the candidate causes (geomagnetic storms, HVDC operation and subway operation) are compared, together with a comprehensive analysis of the plant's geographical location, surrounding environment, electrical connections, etc. The results show that the main cause of the DC bias of the Guohua transformer is subway operation, and that the DC bias current varies periodically. Finally, methods of suppressing the DC magnetic bias of the Guohua transformer are studied; the simulation results show that connecting a small resistance or capacitance at the neutral point can effectively suppress the neutral-point current of the main transformer.

  13. Application of geometry based hysteresis modelling in compensation of hysteresis of piezo bender actuator

    NASA Astrophysics Data System (ADS)

    Milecki, Andrzej; Pelic, Marcin

    2016-10-01

    This paper presents the results of a study applying a new method of modelling piezo bender actuators. A special hysteresis simulation model was developed and is presented. The model is based on geometrical deformation of the main hysteresis loop. The piezoelectric effect is described and the history of hysteresis modelling is briefly reviewed. First, a simple model of the main loop is proposed. Then, a geometrical description of the non-saturated hysteresis is presented and its modelling method is introduced. The modelling makes use of functions describing the geometrical shape of the two main hysteresis curves, which can be defined theoretically or obtained by measurement. These main curves are stored in memory and transformed geometrically in order to obtain the minor curves. The model was prepared in Matlab-Simulink, but can easily be implemented in any programming language and applied in an on-line controller. In comparison to other known simulation methods, the one presented in this paper is easy to understand and uses simple arithmetical equations, allowing the inverse hysteresis model to be obtained quickly. The inverse model was then used to compensate the non-saturated hysteresis of a piezo bender actuator, and these results are also presented in the paper.
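
    As a rough illustration of the geometric idea described above (not the authors' exact transformation), the following sketch stores two measured main curves as lookup tables and generates a descending minor curve by scaling the main descending branch so it passes through the last reversal point; the curve data and the scaling rule are invented for illustration:

```python
# Hypothetical sketch of geometry-based hysteresis modelling: the two
# main curves are stored as lookup tables, and a minor curve is produced
# by geometrically scaling a main branch through the reversal point.

def interp(table, x):
    """Piecewise-linear interpolation in a sorted (x, y) lookup table."""
    xs, ys = zip(*table)
    if x <= xs[0]:
        return ys[0]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])
    return ys[-1]

# Invented main curves of a piezo actuator (input in V, deflection in um):
MAIN_UP = [(0, 0.0), (5, 2.0), (10, 8.0)]     # ascending main branch
MAIN_DOWN = [(0, 0.0), (5, 6.0), (10, 8.0)]   # descending main branch

def minor_descending(x, x_rev):
    """Descending minor curve after an upward reversal at input x_rev:
    the descending main branch is scaled so that it starts at the
    reversal point (x_rev, f_up(x_rev)) and returns to the origin."""
    y_rev = interp(MAIN_UP, x_rev)
    scale = y_rev / interp(MAIN_DOWN, x_rev)  # geometric scaling factor
    return scale * interp(MAIN_DOWN, x)

print(minor_descending(5, 5))  # starts exactly at the reversal point: 2.0
```

    An inverse model for compensation could then be built the same way, with the roles of input and output swapped in the lookup tables.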

  14. Using Vision Metrology System for Quality Control in Automotive Industries

    NASA Astrophysics Data System (ADS)

    Mostofi, N.; Samadzadegan, F.; Roohy, Sh.; Nozari, M.

    2012-07-01

    The need for more accurate measurements at different stages of industrial applications, such as design, production and installation, is the main reason industry has been encouraged to adopt industrial photogrammetry (vision metrology systems). Thanks to its main advantages, such as greater economy, a high level of automation, non-contact measurement, flexibility and high accuracy, this method competes well with traditional industrial measurement methods. For industries that manufacture objects from a physical reference model without any mathematical model of it, the main problem is evaluation of the production line. The problem is complicated by the fact that both the reference and the product exist only as physical objects, so they can be compared only by direct measurement. In such cases, producers build fixtures that fit the reference with limited accuracy; in practical reports the available precision is sometimes no better than millimetres. We used a non-metric high-resolution digital camera for this investigation, and the case study in this paper is an automobile chassis. A stable photogrammetric network was designed for measuring the industrial object (both reference and product), and the differences between the reference and the product were then obtained using bundle adjustment and self-calibration methods. These differences help the producer improve the production workflow and deliver more accurate products. The results demonstrate the high potential, efficiency and reliability of the proposed method in industrial settings in terms of RMSE: the RMSE achieved for this case study is smaller than 200 microns, showing the high capability of the implemented approach.

  15. PRELIMINARY RESULTS OF EPA'S PERFORMANCE EVALUATION OF FEDERAL REFERENCE METHODS AND FEDERAL EQUIVALENT METHODS FOR COARSE PARTICULATE MATTER

    EPA Science Inventory

    The main objective of this study is to evaluate the performance of sampling methods for potential use as a Federal Reference Method (FRM) capable of providing an estimate of coarse particle (PMc: particulate matter with an aerodynamic diameter between 2.5 µm and 10 µm) ...

  16. The relation between periods’ identification and noises in hydrologic series data

    NASA Astrophysics Data System (ADS)

    Sang, Yan-Fang; Wang, Dong; Wu, Ji-Chun; Zhu, Qing-Ping; Wang, Ling

    2009-04-01

    Identification of dominant periods is a typical and important issue in the analysis of hydrologic series data, since it is the basis for building effective stochastic models, understanding complex hydrologic processes, etc. It remains a difficult task, however, owing to the influence of many interrelated factors, such as noise in the data. In this paper, the strong influence of noise on period identification is first analyzed. Then, based on two conventional methods of hydrologic series analysis, wavelet analysis (WA) and maximum entropy spectral analysis (MESA), a new method of period identification, main series spectral analysis (MSSA), is put forward; its main idea is to identify the periods of the main series after first reducing hydrologic noise. Several methods (fast Fourier transform (FFT), MESA and MSSA) were applied to both synthetic and observed hydrologic series. The results show that the conventional methods (FFT and MESA) do not perform as well as expected because of the strong influence of noise, whereas this influence is much weaker with the new MSSA method. In addition, the new de-noising method proposed in this paper, which is suitable for both normal and skewed noise, gives more reasonable results, since noise separated from hydrologic series data generally follows skewed probability distributions. In conclusion, comprehensive analyses show that the proposed MSSA method can improve period identification by effectively reducing the influence of hydrologic noise.
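
    The core MSSA idea, denoise first and then identify the periods of the main series, can be illustrated with a simplified sketch; here a moving average stands in for the paper's wavelet-based de-noising, and a naive periodogram identifies the dominant period (the series and all parameters are invented for illustration):

```python
import cmath, math, random

def moving_average(x, w):
    """Crude denoiser (a stand-in for the paper's wavelet de-noising)."""
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def dominant_period(x):
    """Return the period (in samples) with the largest spectral power,
    using a naive discrete Fourier transform."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    best_k, best_p = 1, 0.0
    for k in range(1, n // 2):          # skip the zero frequency
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        if abs(s) ** 2 > best_p:
            best_k, best_p = k, abs(s) ** 2
    return n / best_k

random.seed(1)
n = 240  # synthetic series: 12-sample period buried in noise
noisy = [math.sin(2 * math.pi * t / 12) + random.gauss(0, 0.8) for t in range(n)]
print(dominant_period(moving_average(noisy, 5)))  # a period close to 12 samples
```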

  17. Application of machine learning on brain cancer multiclass classification

    NASA Astrophysics Data System (ADS)

    Panca, V.; Rustam, Z.

    2017-07-01

    Classification of brain cancer is a multiclass classification problem. One approach is to first transform it into several binary problems. Microarray gene expression datasets have the two main characteristics of medical data: extremely many features (genes) and only a small number of samples. Applying machine learning to microarray gene expression data mainly consists of two steps: feature selection and classification. In this paper, features are selected using a method based on the support vector machine recursive feature elimination (SVM-RFE) principle, improved to handle multiclass classification and called multiple multiclass SVM-RFE. Instead of using the selected features in a single classifier, this method combines the results of multiple classifiers: the features are divided into subsets, SVM-RFE is applied to each subset, and the features selected from each subset are fed to separate classifiers. This enhances the feature selection ability of each single SVM-RFE. The twin support vector machine (TWSVM) is used as the classifier to reduce computational complexity: while an ordinary SVM finds a single optimum hyperplane, the main objective of the twin SVM is to find two non-parallel optimum hyperplanes. Experiments on the brain cancer microarray gene expression dataset show that this method classifies 71.4% of the overall test data correctly, using 100 and 1000 genes selected by the multiple multiclass SVM-RFE feature selection method. Furthermore, the per-class results show that it classifies data of the normal and MD classes with 100% accuracy.
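
    The RFE principle underlying the method can be sketched in a few lines; in this toy version a least-squares linear fit stands in for the SVM (true SVM-RFE ranks features by the magnitude of the SVM weight vector), and the data are synthetic:

```python
import numpy as np

def rfe(X, y, n_keep):
    """Recursive feature elimination sketch: repeatedly fit a linear
    model and drop the feature with the smallest absolute weight
    (a stand-in for the SVM weight vector used in true SVM-RFE)."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        active.pop(int(np.argmin(np.abs(w))))
    return active

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 6))
# only features 1 and 4 carry signal; the rest are pure noise
y = 3 * X[:, 1] - 2 * X[:, 4] + rng.normal(scale=0.1, size=80)
print(sorted(rfe(X, y, 2)))  # -> [1, 4]
```

    The paper's method would run such an elimination independently on several feature subsets and combine the survivors across multiple classifiers.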

  18. Linking main-belt comets to asteroid families

    NASA Astrophysics Data System (ADS)

    Novakovic, B.; Hsieh, H. H.; Cellino, A.

    2012-09-01

    Here we present results obtained by applying different methods to establish a firm link between main-belt comets (MBCs) and collisionally formed asteroid families (AFs), i.e., to find an additional line of evidence supporting the hypothesis that MBCs may be found preferentially among the members of young AFs.

  19. Main Road Extraction from ZY-3 Grayscale Imagery Based on Directional Mathematical Morphology and VGI Prior Knowledge in Urban Areas

    PubMed Central

    Liu, Bo; Wu, Huayi; Wang, Yandong; Liu, Wenming

    2015-01-01

    Main road features extracted from remotely sensed imagery play an important role in many civilian and military applications, such as updating Geographic Information System (GIS) databases, urban structure analysis, spatial data matching and road navigation. Current methods for road feature extraction from high-resolution imagery are typically based on threshold value segmentation. It is difficult, however, to completely separate road features from the background. We present a new method for extracting main roads from high-resolution grayscale imagery based on directional mathematical morphology and prior knowledge obtained from the Volunteered Geographic Information found in OpenStreetMap. The two salient steps in this strategy are: (1) using directional mathematical morphology to enhance the contrast between roads and non-roads; (2) using OpenStreetMap roads as prior knowledge to segment the remotely sensed imagery. Experiments were conducted on two ZiYuan-3 images and one QuickBird high-resolution grayscale image to compare the proposed method with other commonly used road extraction techniques. The results demonstrate the validity and superior performance of the proposed method for urban main road feature extraction. PMID:26397832
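
    Step (1), enhancing elongated bright structures with directional mathematical morphology, can be sketched as the supremum of grey-level openings with line structuring elements at several orientations; this is a generic illustration of the technique, not the authors' exact operator:

```python
import numpy as np

def line_offsets(length, angle_deg):
    """Pixel offsets of a digital line segment centred at the origin."""
    a = np.deg2rad(angle_deg)
    k = np.arange(-(length // 2), length // 2 + 1)
    return np.stack([np.round(k * np.sin(a)),
                     np.round(k * np.cos(a))]).astype(int).T

def grey_filter(img, offsets, op):
    """Min (erosion) or max (dilation) over a stack of shifted images."""
    pad = int(np.abs(offsets).max())
    p = np.pad(img, pad, mode="edge")
    stack = [p[pad + dr:pad + dr + img.shape[0],
               pad + dc:pad + dc + img.shape[1]] for dr, dc in offsets]
    return op(np.stack(stack), axis=0)

def directional_opening(img, length=9, angles=(0, 45, 90, 135)):
    """Supremum of grey openings with line structuring elements: bright
    elongated structures (road candidates) survive, compact blobs vanish."""
    outs = []
    for ang in angles:
        off = line_offsets(length, ang)
        outs.append(grey_filter(grey_filter(img, off, np.min), off, np.max))
    return np.max(np.stack(outs), axis=0)
```

    On a synthetic image, a long bright line survives the opening while a small bright blob is removed, which is exactly the road/non-road contrast the first step aims for.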

  20. Models of convection-driven tectonic plates - A comparison of methods and results

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Gable, Carl W.; Weinstein, Stuart A.

    1992-01-01

    Recent numerical studies of convection in the earth's mantle have included various features of plate tectonics. This paper describes three methods of modeling plates: through material properties, through force balance, and through a thin power-law sheet approximation. The results obtained are compared using each method on a series of simple calculations. From these results, scaling relations between the different parameterizations are developed. While each method produces different degrees of deformation within the surface plate, the surface heat flux and average plate velocity agree to within a few percent. The main results are not dependent upon the plate modeling method and therefore are representative of the physical system modeled.

  1. A comparison between Warner-Bratzler shear force measurement and texture profile analysis of meat and meat products: a review

    NASA Astrophysics Data System (ADS)

    Novaković, S.; Tomašević, I.

    2017-09-01

    Texture is one of the most important characteristics of meat; it can be described as the human physiological and psychological perception of a number of rheological and other properties of foods and their interrelations. In this paper, we discuss instrumental measurement of texture by Warner-Bratzler shear force (WBSF) and texture profile analysis (TPA). The conditions for using the WBSF device are detailed, and the influence of different parameters on the execution of the method and on the final results is shown; the main disadvantages, which stem from the lack of standardization of the method, are then discussed. We also introduce the basic texture parameters that connect and distinguish the TPA and WBSF methods, and mention contemporary methods together with their main advantages.

  2. Searching for an Axis-Parallel Shoreline

    NASA Astrophysics Data System (ADS)

    Langetepe, Elmar

    We search for an unknown horizontal or vertical line in the plane within the competitive framework. We design a framework for lower bounds on all cyclic and monotone strategies, which results in two-sequence functionals. To optimize such functionals we apply a method that combines two main paradigms; the given solution shows that the combination method is of general interest. Finally, we obtain the current best strategy and prove that it is the best among all cyclic and monotone strategies, which is a main step toward a lower bound construction.
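
    Although the shoreline problem is two-dimensional, the competitive framework itself is easiest to see in the classic one-dimensional doubling strategy for finding a point at unknown distance on a line, whose competitive ratio is at most 9; the sketch below (not from the paper) simulates its cost:

```python
def doubling_search_cost(d, eps=1e-9):
    """Total distance walked by the classic doubling strategy
    (go 1 right, 2 left, 4 right, ...) before reaching a target at
    signed position d on a line, with |d| >= 1."""
    cost, step, direction = 0.0, 1.0, 1
    while True:
        if direction * d > 0 and abs(d) <= step + eps:
            return cost + abs(d)      # target reached on this sweep
        cost += 2 * step              # walk out and back
        step *= 2
        direction = -direction

# competitive ratio = (cost of strategy) / (distance an informed searcher walks)
ratios = [doubling_search_cost(d) / d for d in [1.5, 3.7, 10.0, 500.0]]
print(max(ratios))  # never exceeds the known bound of 9
```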

  3. Smart light random memory sprays Retinex: a fast Retinex implementation for high-quality brightness adjustment and color correction.

    PubMed

    Banić, Nikola; Lončarić, Sven

    2015-11-01

    Removing the influence of illumination on image colors and adjusting the brightness across the scene are important image enhancement problems. This is achieved by applying adequate color constancy and brightness adjustment methods. One of the earliest models to deal with both of these problems was the Retinex theory. Some of the Retinex implementations tend to give high-quality results by performing local operations, but they are computationally relatively slow. One of the recent Retinex implementations is light random sprays Retinex (LRSR). In this paper, a new method is proposed for brightness adjustment and color correction that overcomes the main disadvantages of LRSR. There are three main contributions of this paper. First, a concept of memory sprays is proposed to reduce the number of LRSR's per-pixel operations to a constant regardless of the parameter values, thereby enabling a fast Retinex-based local image enhancement. Second, an effective remapping of image intensities is proposed that results in significantly higher quality. Third, the problem of LRSR's halo effect is significantly reduced by using an alternative illumination processing method. The proposed method enables a fast Retinex-based image enhancement by processing Retinex paths in a constant number of steps regardless of the path size. Due to the halo effect removal and remapping of the resulting intensities, the method outperforms many of the well-known image enhancement methods in terms of resulting image quality. The results are presented and discussed. It is shown that the proposed method outperforms most of the tested methods in terms of image brightness adjustment, color correction, and computational speed.
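
    The spray principle shared by this family of Retinex implementations, normalizing each pixel by the maximum intensity found in a random spray of nearby pixels while reusing one fixed "memory" spray everywhere, can be sketched as follows; this is an illustrative toy, not the proposed smart light random memory sprays Retinex method itself:

```python
import numpy as np

def spray_retinex(img, n_points=30, radius=8, seed=0):
    """Minimal random-spray white-patch sketch: each pixel is divided by
    the maximum intensity in a random spray of neighbouring pixels."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    # "memory spray": one fixed set of relative offsets reused at every pixel
    r = radius * np.sqrt(rng.uniform(size=n_points))
    a = rng.uniform(0, 2 * np.pi, size=n_points)
    dr = np.round(r * np.sin(a)).astype(int)
    dc = np.round(r * np.cos(a)).astype(int)
    for i in range(h):
        for j in range(w):
            rr = np.clip(i + dr, 0, h - 1)
            cc = np.clip(j + dc, 0, w - 1)
            local_max = max(img[rr, cc].max(), img[i, j])
            out[i, j] = img[i, j] / local_max
    return out
```

    Because the normalization is local, a dark region next to a bright one is brightened relative to its own neighbourhood, which is the brightness-adjustment behaviour the paper builds on; the paper's contribution is making this per-pixel work constant in cost and removing the halo artifacts.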

  4. Simulation modeling of an automatic control system for steam pressure in the main steam collector acting on the main servomotor of a steam turbine

    NASA Astrophysics Data System (ADS)

    Andriushin, A. V.; Zverkov, V. P.; Kuzishchin, V. F.; Ryzhkov, O. S.; Sabanin, V. R.

    2017-11-01

    The results of research on, and tuning of, an automatic control system (ACS) for steam pressure in the main steam collector (upstream-pressure control) with fast feedback on the steam pressure in the turbine control stage are presented. The ACS is tuned on a simulation model of the controlled object developed for this purpose, with load-dependent static and dynamic characteristics and a nonlinear control algorithm with pulse control of the turbine main servomotor. A method for tuning the nonlinear ACS with a numerical multiparametric optimization algorithm, together with a procedure for separate dynamic adjustment of the controllers in a two-loop ACS, is proposed and implemented. It is shown that the nonlinear ACS tuned by the proposed method, with constant controller parameters, ensures reliable, high-quality operation without oscillations in transient processes over the operating range of turbine loads.

  5. Assessing the Links Between Anthropometrics Data and Akabane Test Results.

    PubMed

    Muzhikov, Valery; Vershinina, Elena; Belenky, Vadim; Muzhikov, Ruslan

    2018-02-01

    According to popular belief, metabolic disorders and imbalances are among the main factors contributing to various human illnesses, and early diagnosis of these disorders is one of the main methods of preventing serious diseases. The goal of this study was to assess correlations between the main physical indicators and the activity of certain acupuncture channels using the thermal Akabane test, which is based on ancient Chinese diagnostic methods. The test measures thermal pain thresholds when a point source of heat is applied to the "entrance-exit" points of each channel. Skin temperature sensitivity is a basic reactive system of the body; it is as significant as indicators such as body temperature and provides a very clear representation of functional and psychophysiological profiles. On the basis of our statistical study, we revealed reliable correspondences between the activity of certain acupuncture channels and the main anthropometric and biometric data. Copyright © 2018. Published by Elsevier B.V.

  6. Method and radial gap machine for high strength undiffused brushless operation

    DOEpatents

    Hsu, John S.

    2006-10-31

    A radial gap brushless electric machine (30) having a stator (31) and a rotor (32) and a main air gap (34) also has at least one stationary excitation coil (35a, 36a) separated from the rotor (32) by a secondary air gap (35e, 35f, 36e, 36f) so as to induce a secondary flux in the rotor (32) which controls a resultant flux in the main air gap (34). Permanent magnetic (PM) material (38) is disposed in spaces between the rotor pole portions (39) to inhibit the second flux from leaking from the pole portions (39) prior to reaching the main air gap (34). By selecting the direction of current in the stationary excitation coil (35a, 36a) both flux enhancement and flux weakening are provided for the main air gap (34). A method of non-diffused flux enhancement and flux weakening for a radial gap machine is also disclosed.

  7. Value of Construction Company and its Dependence on Significant Variables

    NASA Astrophysics Data System (ADS)

    Vítková, E.; Hromádka, V.; Ondrušková, E.

    2017-10-01

    The paper deals with assessing the value of a construction company, considering the applicable approaches and determinable variables. The reasons for assessing a company's value differ; the most important are the sale or purchase of the company, its liquidation, or its merger with another entity. Depending on the reason for the valuation, different theoretical approaches can be chosen, chiefly the yield (income) method and the asset-based method. Both approaches depend on detailed input variables whose quality influences the final assessment of the company's value. The main objective of the paper is to suggest, based on the analysis, possible ways of determining the input variables, mainly expected cash flows or profit. The paper focuses on methods of time series analysis, regression analysis and mathematical simulation. Finally, the results of the analysis are demonstrated on a case study.

  8. [Writing and publication of a medical article].

    PubMed

    Salmi, L R

    1999-11-01

    To advance in their strategies to manage patients, clinicians need new research results. To be accessible, medical research must be published. Writing and publishing medical articles should respect principles that are described in this article. Good writing is based on a logical organization and the application of scientific style. Organization according to the IMRD structure (Introduction, Methods, Results, Discussion) allows one to present the reasons for and objectives of the study (Introduction), details on whatever has been done to answer the question (Methods), data on the actual study population and answers to the main question (Results), and a critical appraisal of these results, given the limits of the study and current knowledge (Discussion). The main elements of scientific style are precision, clarity, fluidity and concision. Finally, submitting a paper to a scientific journal implies presenting the work in a covering letter and respecting rules for formatting a manuscript (order of presentation, typography, etc.).

  9. Surface and allied studies in silicon solar cells

    NASA Technical Reports Server (NTRS)

    Lindholm, F. A.

    1983-01-01

    Two main results are presented. The first deals with a simple method that determines the minority-carrier lifetime and the effective surface recombination velocity of the quasi-neutral base of silicon solar cells. The method requires the observation of only a single transient, and is amenable to automation for in-process monitoring in manufacturing. This method, which is called short-circuit current decay, avoids distortion in the observed transient and consequent inaccuracies that arise from the presence of mobile holes and electrons stored in the p/n junction space-charge region at the initial instant of the transient. The second main result is a formulation of the relevant boundary-value problems that resembles linear two-port network theory. This formulation enables comparisons among various contending methods for measuring material parameters of p/n junction devices, and gives the option of writing the time-domain description of the transient studies as an infinite series, although closed-form solutions are also possible.

  10. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    NASA Astrophysics Data System (ADS)

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

    The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One of the advantages of analytical method validation is the level of confidence it gives in the measurement results reported for a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis is used in extremely wide areas, mainly in materials science and in impurity determinations in geological, biological and food samples; however, little information is reported about validation of the applied methods. Herein, results of the in-house validation of a method for elemental composition determination by WDS are shown. SRM 482, a binary Cu-Au alloy of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for evaluating the validation parameters most frequently requested for accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors; parameters evaluated during the validation process were also considered as part of the uncertainty model.
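
    An uncertainty model combining random and systematic contributions, as mentioned above, commonly takes the GUM-style form sketched below: the type-A standard uncertainty of the mean of repeated readings is combined in quadrature with type-B (systematic) terms and expanded with a coverage factor. The readings and type-B values here are invented for illustration and are not from the paper:

```python
import math, statistics

def combined_uncertainty(repeats, u_systematics, k=2):
    """Combine the type-A standard uncertainty of the mean of repeated
    readings with type-B (systematic) contributions in quadrature, then
    expand with coverage factor k (GUM-style sketch)."""
    u_a = statistics.stdev(repeats) / math.sqrt(len(repeats))
    u_c = math.sqrt(u_a ** 2 + sum(u ** 2 for u in u_systematics))
    return k * u_c

# invented example: five mass-fraction readings (%) and two type-B terms
readings = [40.1, 40.3, 39.9, 40.2, 40.0]
print(combined_uncertainty(readings, [0.05, 0.08]))  # expanded uncertainty, k=2
```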

  11. Artificial Satellites Observations Using the Complex of Telescopes of RI "MAO"

    NASA Astrophysics Data System (ADS)

    Sybiryakova, Ye. S.; Shulga, O. V.; Vovk, V. S.; Kaliuzny, M. P.; Bushuev, F. I.; Kulichenko, M. O.; Haloley, M. I.; Chernozub, V. M.

    2017-02-01

    Special methods, instruments and software for observing space objects and processing the obtained results were developed. The main method used in observations of artificial space objects is the combined method, in which images of reference stars and of artificial objects are accumulated separately. It is applied to artificial objects in all types of orbits.

  12. A method for testing the spectral transmittance of infrared smoke interference

    NASA Astrophysics Data System (ADS)

    Lei, Hao; Zhang, Yazhou; Wang, Guangping; Wu, Jingli

    2018-02-01

    Infrared smoke is mainly used for screening, blinding, deception and identification on the battlefield. Traditional screening smoke is mainly deployed over friendly positions, or between friendly and enemy positions, to reduce the reconnaissance capacity of enemy observation posts. The passive interference capability of a smoke depends on its infrared extinction ability, and the infrared transmittance test is an objective and accurate representation of that ability. In this paper, a method for testing the spectral transmittance of infrared smoke interference is introduced, and the uncertainty of the measurement results is analyzed. The results show that the method can effectively obtain the spectral transmittance of infrared smoke with a measurement uncertainty of 7.16%, providing test parameters to support smoke detection, smoke composition analysis and evaluation of screening effectiveness.

  13. Comparison of the Various Methodologies Used in Studying Runoff and Sediment Load in the Yellow River Basin

    NASA Astrophysics Data System (ADS)

    Xu, M., III; Liu, X.

    2017-12-01

    In the past 60 years, both the runoff and sediment load in the Yellow River Basin showed significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g. precipitation, sediment trapping dams, pasture, terrace, etc.) on the runoff and sediment load is among the key issues to guide the implement of water and soil conservation measures, and to predict the variation trends in the future. Hundreds of methods have been developed for studying the runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physical-based models. The empirical methods, including hydrological method, soil and water conservation method, etc., are widely used in the Yellow River management engineering. These methods generally apply the statistical analyses like the regression analysis to build the empirical relationships between the main characteristic variables in a river basin. The elasticity method extensively used in the hydrological research can be classified into empirical method as it is mathematically deduced to be equivalent with the hydrological method. Physical-based models mainly include conceptual models and distributed models. The conceptual models are usually lumped models (e.g. SYMHD model, etc.) and can be regarded as transition of empirical models and distributed models. Seen from the publications that less studies have been conducted applying distributed models than empirical models as the simulation results of runoff and sediment load based on distributed models (e.g. the Digital Yellow Integrated Model, the Geomorphology-Based Hydrological Model, etc.) were usually not so satisfied owing to the intensive human activities in the Yellow River Basin. 
    Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical methods. In addition, we put forward an assessment framework for methods of studying runoff and sediment load variations in the Yellow River Basin from the points of view of input data, model structure and output results. The assessment framework was then applied to the Huangfuchuan River.
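    The elasticity method mentioned in the abstract is often implemented with a nonparametric estimator of the precipitation elasticity of runoff. The sketch below illustrates the general idea under the common median-ratio formulation; the function name and synthetic data are illustrative, not from the paper.

```python
from statistics import mean, median

def precip_elasticity(P, Q):
    """Nonparametric precipitation elasticity of runoff:
    median of ((Q_t - Qbar) / (P_t - Pbar)) * (Pbar / Qbar)."""
    Pbar, Qbar = mean(P), mean(Q)
    ratios = [((q - Qbar) / (p - Pbar)) * (Pbar / Qbar)
              for p, q in zip(P, Q) if p != Pbar]
    return median(ratios)

# Synthetic check: if Q is proportional to P**2, the estimated
# elasticity should come out close to 2.
P = [95.0, 98.0, 101.0, 104.0, 97.0, 103.0]
Q = [p ** 2 for p in P]
elasticity = precip_elasticity(P, Q)
```

    An elasticity near 2 here means a 1% change in precipitation produces roughly a 2% change in runoff, which is the kind of statement such empirical methods are used to make.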

  14. Wideband characterization of the complex wave number and characteristic impedance of sound absorbers.

    PubMed

    Salissou, Yacoubou; Panneton, Raymond

    2010-11-01

    Several methods for measuring the complex wave number and the characteristic impedance of sound absorbers have been proposed in the literature. These methods can be classified into single-frequency and wideband methods. In this paper, the main existing methods are revisited and discussed. An alternative method that is not well known in the literature, despite its great potential, is also discussed. This method is essentially an improvement of the wideband method described by Iwase et al., rewritten so that the setup is more compliant with the ISO 10534-2 standard. Glass wool, melamine foam and acoustical/thermal insulator wool are used to compare the main existing wideband non-iterative methods with this alternative method. It is found that, in the middle and high frequency ranges, the alternative method yields results comparable in accuracy to the classical two-cavity method and the four-microphone transfer-matrix method. However, in the low frequency range, the alternative method appears to be more accurate than the other methods, especially when measuring the complex wave number.

  15. An improved schlieren method for measurement and automatic reconstruction of the far-field focal spot

    PubMed Central

    Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye

    2017-01-01

    The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, thereby reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct a high-dynamic-range image of far-field focal spots and improve reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was cut from the main-lobe image and shifted within a 100×100 pixel region; the position giving the largest correlation coefficient between the side-lobe image and the main-lobe image with the circle removed was identified as the best matching point. Finally, the least squares method was used to fit the center of the side-lobe schlieren ball, with an error of less than 1 pixel. The experimental results show that this method enables accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than traditional reconstruction based on manual splicing, the method improves the efficiency of focal-spot reconstruction and offers better experimental precision. PMID:28207758
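    The best-matching-point search described above relies on a correlation coefficient computed at each candidate offset. A minimal, generic sketch of such a template search (plain Pearson correlation over a sliding window, not the authors' exact algorithm) might look like:

```python
from statistics import mean, pstdev

def corrcoef(a, b):
    """Pearson correlation coefficient between two equal-length lists."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    sa, sb = pstdev(a), pstdev(b)
    return cov / (sa * sb) if sa and sb else 0.0

def best_match(image, template):
    """Slide the template over the image (2D lists of gray values) and
    return the (row, col) offset with the largest correlation."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    flat_t = [v for row in template for v in row]
    best = (-2.0, (0, 0))
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = [image[y + i][x + j] for i in range(h) for j in range(w)]
            r = corrcoef(patch, flat_t)
            if r > best[0]:
                best = (r, (y, x))
    return best[1]

# Toy example: a small bright pattern embedded at offset (2, 3).
img = [[0] * 6 for _ in range(5)]
img[2][3] = img[2][4] = img[3][3] = 9
tpl = [[9, 9], [9, 0]]
match = best_match(img, tpl)
```

    Real implementations search only a bounded neighborhood (here the paper's 100×100 pixel region) and use fast normalized cross-correlation, but the acceptance criterion is the same.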

  16. Welding studs detection based on line structured light

    NASA Astrophysics Data System (ADS)

    Geng, Lei; Wang, Jia; Wang, Wen; Xiao, Zhitao

    2018-01-01

    The quality of welding studs is significant for the installation and localization of car components in automobile general assembly. A welding stud detection method based on line structured light is proposed. Firstly, an adaptive threshold is designed to binarize the image. Then, the light stripes of the image are extracted by skeleton extraction and morphological filtering. The direction vector of the main light stripe is calculated using the length of the light stripe. Finally, the gray projections along the orientation of the main light stripe and the orientation perpendicular to it are computed to obtain gray-projection curves, which are used to detect the studs. Experimental results demonstrate that the error rate of the proposed method is below 0.1%, making it suitable for automobile manufacturing.
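    The gray-projection step in such pipelines is simply a sum of pixel values along each row and column; peaks in the projection curves mark stripe or stud positions. A minimal sketch with an illustrative toy image (axis-aligned here, whereas the paper projects along the stripe direction):

```python
def gray_projection(img):
    """Row- and column-wise gray projections of a 2D image
    given as a list of equal-length rows of gray values."""
    rows = [sum(r) for r in img]
    cols = [sum(c) for c in zip(*img)]
    return rows, cols

# Toy image: a bright vertical "stripe" at column 2 shows up as a
# peak in the column-projection curve.
img = [[0, 0, 9, 0],
       [0, 0, 9, 0],
       [0, 0, 9, 0]]
rows, cols = gray_projection(img)
stripe_col = cols.index(max(cols))
```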

  17. A quasi-Newton algorithm for large-scale nonlinear equations.

    PubMed

    Huang, Linghua

    2017-01-01

    In this paper, the algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by sub-algorithm is defined as main algorithm, where a new nonmonotone line search technique is presented to get the step length [Formula: see text]. The given nonmonotone line search technique can avoid computing the Jacobian matrix. The global convergence and the [Formula: see text]-order convergent rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.

  18. Dispersion interference in the pulsed-wire measurement method

    NASA Astrophysics Data System (ADS)

    Shahal, O.; Elkonin, B. V.; Sokolowski, J. S.

    1990-10-01

    The magnetic profile of the wiggler to be used in the planned Weizmann Institute FEL has been measured using the pulsed-wire method. The main transverse deflection pattern caused by an electrical current pulse in a wire placed along the wiggler was sometimes accompanied by minor faster and slower parasitic components. These components interfered with the main profile, resulting in distorted mapping of the wiggler magnetic field. Because their periodic structure was very close to that of the main pattern, they could not easily be resolved by a numerical Fourier transform. A strong correlation between the wire tension and the amplitude of the parasitic patterns was found. Significant damping of these oscillations was achieved by applying sufficiently high tension to the wire (close to the yield point), allowing their contribution to the measurement error to be disregarded.

  19. [Results of therapy of children with amblyopia by scanning stimulating laser].

    PubMed

    Chentsova, O B; Magaramova, M D; Grechanyĭ, M P

    1997-01-01

    A new effective method for the treatment of amblyopia was used in 113 children: stimulation with an ophthalmological SLSO-208A scanning laser by two methods differing in transmission coefficient and scanning pattern. Good results were attained; the best outcomes occurred when laser exposure was combined with traditional amblyopia therapy and in patients with central fixation. The results were assessed by the main parameters of visual function and the stability of the effect.

  20. Nonlinear rotordynamics analysis. [Space Shuttle Main Engine turbopumps

    NASA Technical Reports Server (NTRS)

    Noah, Sherif T.

    1991-01-01

    Effective analysis tools were developed for predicting the nonlinear rotordynamic behavior of the Space Shuttle Main Engine (SSME) turbopumps under steady and transient operating conditions. Using these methods, preliminary parametric studies were conducted on both generic and actual HPOTP (high pressure oxygen turbopump) models. In particular, a novel modified harmonic balance/alternating Fourier transform (HB/AFT) method was developed and used to conduct a preliminary study of the effects of fluid, bearing and seal forces on the unbalanced response of a multi-disk rotor in the presence of bearing clearances. The method makes it possible to determine periodic, sub-, super-synchronous and chaotic responses of a rotor system. The method also yields information about the stability of the obtained response, thus allowing bifurcation analyses. This provides a more effective capability for predicting the response under transient conditions by searching in proximity of resonance peaks. Preliminary results were also obtained for the nonlinear transient response of an actual HPOTP model using an efficient, newly developed numerical method based on convolution integration. Currently, the HB/AFT is being extended for determining the aperiodic response of nonlinear systems. Initial results show the method to be promising.

  1. Algebraic methods in system theory

    NASA Technical Reports Server (NTRS)

    Brockett, R. W.; Willems, J. C.; Willsky, A. S.

    1975-01-01

    Investigations on problems of the type which arise in the control of switched electrical networks are reported. The main results concern the algebraic structure and stochastic aspects of these systems. Future reports will contain more detailed applications of these results to engineering studies.

  2. Can Scat Analysis Describe the Feeding Habits of Big Cats? A Case Study with Jaguars (Panthera onca) in Southern Pantanal, Brazil.

    PubMed

    Perilli, Miriam L L; Lima, Fernando; Rodrigues, Flávio H G; Cavalcanti, Sandra M C

    2016-01-01

    The feeding habits of large cats have been studied through two main methods: scat analysis and the carcasses of prey killed by monitored animals. From November 2001 to April 2004, we studied jaguar predation patterns using GPS telemetry location clusters on a cattle ranch in southern Pantanal. During this period, we recorded 431 carcasses of animals preyed upon by monitored jaguars. Concurrently, we collected 125 jaguar scats opportunistically. We compared the frequencies of prey found through each method. We also compared the prey communities using the Bray-Curtis similarity coefficient. These comparisons allowed us to evaluate the use of scat analysis as a means to describe jaguar feeding habits. Both approaches identified prey communities with high similarity (Bray-Curtis coefficient > 70). According to either method, jaguars consume three main prey: cattle (Bos taurus), caiman (Caiman yacare) and peccaries (Tayassu pecari and Pecari tajacu). The two methods did not differ in the frequency of the three main prey over dry and wet seasons or years sampled. Our results show that scat analysis is effective and capable of describing jaguar feeding habits.
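    The Bray-Curtis coefficient used to compare the two prey communities has a simple closed form. A minimal sketch, with hypothetical prey counts rather than the study's data:

```python
def bray_curtis_similarity(u, v):
    """Bray-Curtis similarity (in %) between two abundance vectors:
    100 * (1 - sum|u_i - v_i| / sum(u_i + v_i))."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return 100.0 * (1.0 - num / den)

# Hypothetical prey counts (cattle, caiman, peccary) recovered by
# scat analysis vs. carcass (kill-site) surveys:
scats = [40, 35, 25]
kills = [50, 30, 20]
similarity = bray_curtis_similarity(scats, kills)
```

    A value above 70, as reported in the abstract, indicates that the two methods recover substantially overlapping prey communities.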

  3. Syntactic methods of shape feature description and its application in analysis of medical images

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Tadeusiewicz, Ryszard

    2000-02-01

    The paper presents specialist algorithms for the morphological analysis of the shapes of selected organs of the abdominal cavity, proposed in order to diagnose disease symptoms occurring in the main pancreatic ducts and the upper segments of the ureters. Analysis of the correct morphology of these structures has been conducted with the use of syntactic methods of pattern recognition. Its main objective is computer-aided support for the early diagnosis of neoplastic lesions and pancreatitis based on images taken in the course of examination with the endoscopic retrograde cholangiopancreatography (ERCP) method, and the diagnosis of morphological lesions in the ureter based on kidney radiogram analysis. In the analysis of ERCP images, the main objective is to recognize morphological lesions in the pancreatic ducts characteristic of carcinoma and chronic pancreatitis. In the case of kidney radiogram analysis, the aim is to diagnose local irregularity of the ureter lumen. Diagnosis of the above-mentioned lesions has been conducted with the use of syntactic methods of pattern recognition, in particular languages of shape feature description and context-free attributed grammars. These methods make it possible to recognize and describe the aforementioned lesions very efficiently on images obtained as a result of initial image processing into diagrams of the widths of the examined structures.

  4. Substructures in Clusters of Galaxies

    NASA Astrophysics Data System (ADS)

    Lehodey, Brigitte Tome

    2000-01-01

    This dissertation presents two methods for the detection of substructures in clusters of galaxies and the results of their application to a group of four clusters. In chapters 2 and 3, we review the main properties of clusters of galaxies, give a definition of substructures, and show why the study of substructures in clusters of galaxies is so important for cosmology. Chapters 4 and 5 describe the two methods: the first, an adaptive kernel estimator, is applied to the study of the spatial and kinematical distribution of the cluster galaxies; the second, the MVM (Multiscale Vision Model), is applied to analyse the diffuse X-ray emission of the cluster, i.e., the intracluster gas distribution. At the end of these two chapters, we also present the results of applying these methods to our sample of clusters. In chapter 6, we draw conclusions from the comparison of the results obtained with each method. In the last chapter, we present the main conclusions of this work and point out possible developments. We close with two appendices detailing some questions raised in this work that are not directly linked to the problem of substructure detection.

  5. On the objective identification of flood seasons

    NASA Astrophysics Data System (ADS)

    Cunderlik, Juraj M.; Ouarda, Taha B. M. J.; BobéE, Bernard

    2004-01-01

    The determination of seasons of high and low probability of flood occurrence is a task with many practical applications in contemporary hydrology and water resources management. Flood seasons are generally identified subjectively by visually assessing the temporal distribution of flood occurrences and, then at a regional scale, verified by comparing the temporal distribution with distributions obtained at hydrologically similar neighboring sites. This approach is subjective, time consuming, and potentially unreliable. The main objective of this study is therefore to introduce a new, objective, and systematic method for the identification of flood seasons. The proposed method tests the significance of flood seasons by comparing the observed variability of flood occurrences with the theoretical flood variability in a nonseasonal model. The method also addresses the uncertainty resulting from sampling variability by quantifying the probability associated with the identified flood seasons. The performance of the method was tested on an extensive number of samples with different record lengths generated from several theoretical models of flood seasonality. The proposed approach was then applied on real data from a large set of sites with different flood regimes across Great Britain. The results show that the method can efficiently identify flood seasons from both theoretical and observed distributions of flood occurrence. The results were used for the determination of the main flood seasonality types in Great Britain.
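    A seasonality test of this kind can be sketched by mapping flood dates onto the year-circle, measuring their concentration with the mean resultant length, and comparing it against a nonseasonal (uniform) model by Monte Carlo simulation. The function names and sample dates below are illustrative, not taken from the study.

```python
import math, random

def mean_resultant_length(days, period=365.25):
    """Concentration of dates on the year-circle: R near 1 means
    strongly seasonal, R near 0 means uniform (nonseasonal)."""
    angles = [2 * math.pi * d / period for d in days]
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    return math.hypot(c, s)

def seasonality_p_value(days, n_sim=2000, seed=42):
    """Monte Carlo p-value of the observed concentration against a
    nonseasonal model with uniformly distributed flood dates."""
    rng = random.Random(seed)
    r_obs = mean_resultant_length(days)
    n = len(days)
    exceed = sum(
        mean_resultant_length([rng.uniform(0, 365.25) for _ in range(n)]) >= r_obs
        for _ in range(n_sim))
    return (1 + exceed) / (1 + n_sim)

# Floods clustered around day 150 should yield a very small p-value.
spring_floods = [150 + 10 * math.sin(i) for i in range(30)]
p = seasonality_p_value(spring_floods)
```

    The (1 + exceed) / (1 + n_sim) form keeps the p-value strictly positive, which mirrors the sampling-uncertainty quantification the abstract describes.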

  6. Quantum informatics paradigms and tools for QIPC

    NASA Astrophysics Data System (ADS)

    Gruska, Jozef

    2006-11-01

    Quantum information processing and communication (QIPC) theory has developed very fast recently and brought a variety of interesting and important results of great value also for the whole area of quantum physics. One can also say that the field of QIPC has so far been mainly theory driven, and experiments have mostly been done to show that what theory predicts is indeed possible, and how difficult it is to achieve. One of the main reasons for such a fast and successful development of QIPC science is that the paradigms, models, concepts, value system, methods and results of (theoretical) informatics have been used intensively. The goal of this paper is to look behind this successful application of informatics to QIPC and to present, analyse and illustrate the main ideas, concepts, methods and tools that have been involved. All of this should help more physics-oriented researchers in QIPC understand that, owing to the progress in (theoretical) informatics, new paradigms, concepts and models for exploring the quantum world are now available, and that they could and should be used. We concentrate not only on what has been achieved, but even more on the main new challenges, and more on backgrounds, motivations, goals and implications than on very technical results.

  7. Factor and prevention method of landslide event at FELCRA Semungkis, Hulu Langat, Selangor

    NASA Astrophysics Data System (ADS)

    Manap, N.; Jeyaramah, N.; Syahrom, N.

    2017-12-01

    Landslides are among the powerful geological events that happen unpredictably due to natural or human factors. A study was carried out at FELCRA Semungkis, Hulu Langat, an area affected by a landslide involving 16 casualties. The purpose of this study is to identify the main factors that caused the landslide at FELCRA Semungkis, Hulu Langat, and to identify protection methods. Data were collected from three respondents working in government bodies through interview sessions and were analysed using the content analysis method. From the results, it can be concluded that the landslide was caused by both human and natural factors. The protection method that can be applied to stabilize the slope at FELCRA Semungkis, Hulu Langat, is the soil nailing method supported by a soilcrete system.

  8. Analysis of Vibration and Noise of Construction Machinery Based on Ensemble Empirical Mode Decomposition and Spectral Correlation Analysis Method

    NASA Astrophysics Data System (ADS)

    Chen, Yuebiao; Zhou, Yiqi; Yu, Gang; Lu, Dan

    In order to analyze the effect of engine vibration on cab noise of construction machinery in multiple frequency bands, a new method based on ensemble empirical mode decomposition (EEMD) and spectral correlation analysis is proposed. Firstly, the intrinsic mode functions (IMFs) of the vibration and noise signals were obtained by the EEMD method, and the IMFs occupying the same frequency bands were selected. Secondly, we calculated the spectral correlation coefficients between the selected IMFs, obtaining the main frequency bands in which engine vibration has a significant impact on cab noise. Thirdly, the dominant frequencies were picked out and analyzed by the spectral analysis method. The study results show that the main frequency bands and dominant frequencies in which engine vibration has a serious impact on cab noise can be identified effectively by the proposed method, which provides effective guidance for noise reduction of construction machinery.
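    The spectral correlation step can be illustrated by correlating the magnitude spectra of two band-limited components. A self-contained sketch (naive DFT and illustrative signals; not the authors' implementation):

```python
import cmath, math
from statistics import mean, pstdev

def magnitude_spectrum(x):
    """Magnitude of the DFT of a real signal (naive O(n^2) DFT,
    adequate for a short demo signal)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def spectral_correlation(x, y):
    """Pearson correlation between the magnitude spectra of two signals."""
    a, b = magnitude_spectrum(x), magnitude_spectrum(y)
    ma, mb = mean(a), mean(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)
    return cov / (pstdev(a) * pstdev(b))

# Two components sharing one dominant frequency (bin 5) correlate
# strongly even with different amplitudes and phases.
n = 64
vib = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
noise = [0.5 * math.sin(2 * math.pi * 5 * t / n + 0.3) for t in range(n)]
r = spectral_correlation(vib, noise)
```

    In the paper's setting, a high coefficient between a vibration IMF and a noise IMF in the same band flags that band as a transmission path from engine to cab.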

  9. Simulation Research on Vehicle Active Suspension Controller Based on G1 Method

    NASA Astrophysics Data System (ADS)

    Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui

    2017-09-01

    Based on the order relation analysis method (G1 method), an optimal linear controller for vehicle active suspension is designed. First, the active and passive suspension system of a single-wheel vehicle is modeled and the system input signal model is determined. Secondly, the state-space equation of the system motion is established from kinetic principles and the optimal linear controller design is completed with optimal control theory, with the weighting coefficients of the performance index determined by the order relation analysis method. Finally, the model is simulated in Simulink. The simulation results show that, with the optimal weights determined by the order relation analysis method under the given road conditions, the vehicle body acceleration, suspension stroke and tire displacement are optimized, improving the comprehensive performance of the vehicle while keeping the active control within requirements.
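    The G1 (order relation analysis) method derives weights from the analyst's importance ratios between successively ranked criteria. A sketch of the standard G1 weighting formula, with illustrative ratio values (not taken from the paper):

```python
def g1_weights(ratios):
    """Weights from the G1 (order relation analysis) method.

    ratios[k] is r_{k+2} = w_{k+1} / w_{k+2} for criteria already
    sorted by decreasing importance (each ratio >= 1).  Standard G1
    formula: w_m = 1 / (1 + sum_{k=2..m} prod_{i=k..m} r_i), then
    back-substitute w_{k-1} = r_k * w_k.  Weights sum to 1."""
    m = len(ratios) + 1
    total, prod = 1.0, 1.0
    for r in reversed(ratios):
        prod *= r
        total += prod
    w = [0.0] * m
    w[-1] = 1.0 / total
    for k in range(m - 2, -1, -1):
        w[k] = ratios[k] * w[k + 1]
    return w

# Illustrative ranking: body acceleration > suspension stroke > tire
# displacement, with hypothetical importance ratios 1.4 and 1.2.
weights = g1_weights([1.4, 1.2])
```

    The resulting weights would then multiply the corresponding terms of the quadratic performance index before the optimal-control problem is solved.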

  10. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo 3D segmentation method is employed for kidney initialization in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW method. Multi-object strategy is applied to help the object initialization. The 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.

  11. Perspectives of Maine Forest Cover Change from Landsat Imagery and Forest Inventory Analysis (FIA)

    Treesearch

    Steven Sader; Michael Hoppus; Jacob Metzler; Suming Jin

    2005-01-01

    A forest change detection map was developed to document forest gains and losses during the decade of the 1990s. The effectiveness of the Landsat imagery and methods for detecting Maine forest cover change are indicated by the good accuracy assessment results: forest-no change, forest loss, and forest gain accuracy were 90, 88, and 92% respectively, and the good...

  12. Determination of main components and anaerobic rumen digestibility of aquatic plants in vitro using near-infrared-reflectance spectroscopy.

    PubMed

    Yue, Zheng-Bo; Zhang, Meng-Lin; Sheng, Guo-Ping; Liu, Rong-Hua; Long, Ying; Xiang, Bing-Ren; Wang, Jin; Yu, Han-Qing

    2010-04-01

    A near-infrared-reflectance (NIR) spectroscopy-based method is established to determine the main components of aquatic plants as well as their anaerobic rumen biodegradability. The developed method is more rapid and accurate than conventional chemical analysis and biodegradability tests. Moisture, volatile solids, Klason lignin and ash in entire aquatic plants could be accurately predicted using this method, with coefficients of determination (r²) of 0.952, 0.916, 0.939 and 0.950, respectively. In addition, the anaerobic rumen biodegradability of aquatic plants, represented as biogas and methane yields, could also be predicted well. The continuous wavelet transform algorithm used for NIR spectral data pretreatment greatly enhances the robustness and predictive ability of the NIR spectral analysis. These results indicate that NIR spectroscopy can be used to predict the main components of aquatic plants and their anaerobic biodegradability.

  13. Stability test for power converters in high-powered operations for J-PARC MR main magnets

    NASA Astrophysics Data System (ADS)

    Morita, Yuichi; Kurimoto, Yoshinori; Miura, Kazuki; Sagawa, Ryu; Shimogawa, Tetsushi

    2017-10-01

    The Japan Proton Accelerator Research Complex (J-PARC) aims at achieving a megawatt-class proton accelerator facility. One promising method for increasing the beam power is to shorten the repetition cycle of the main ring from the current cycle of 2.48 s to 1.3 s. In this scheme, however, the increase in the output voltage and the power variation of the electric system are serious concerns for the power supplies of the main magnets. We have been developing a new power supply that provides solutions to these issues. Recently, we proposed a new method for high-powered tests of the converter that does not require a large-scale load and power source. We carried out a high-powered test of ∼100 kVA for the prototype DC/DC converters of the new power supply with this method. This paper introduces the design of the power supply and the results of the high-powered test for the prototype DC/DC converters.

  14. Remarkable Growth of Open Access in the Biomedical Field: Analysis of PubMed Articles from 2006 to 2010

    PubMed Central

    Kurata, Keiko; Morioka, Tomoko; Yokoi, Keiko; Matsubayashi, Mamiko

    2013-01-01

    Introduction This study clarifies the trends observed in open access (OA) in the biomedical field between 2006 and 2010, and explores possible explanations for the differences in OA rates revealed in recent surveys. Methods The study consists of a main survey and two supplementary surveys. In the main survey, a manual Google search was performed to investigate whether full-text versions of articles from PubMed were freely available. Target samples were articles published in 2005, 2007, and 2009; the searches were performed a year after publication in 2006, 2008, and 2010, respectively. Using the search results, we classified the OA provision methods into seven categories. The supplementary surveys calculated the OA rate using two search functions on PubMed: “LinkOut” and “Limits.” Results The main survey concluded that the OA rate increased significantly between 2006 and 2010: the OA rate in 2010 (50.2%) was twice that in 2006 (26.3%). Furthermore, the majority of OA articles were available from OA journal (OAJ) websites, indicating that OAJs have consistently been a significant contributor to OA throughout the period. OA availability through the PubMed Central (PMC) repository also increased significantly. OA rates obtained from the two supplementary surveys were lower than those found in the main survey; “LinkOut” could find only 40% of the OA articles identified in the main survey. Discussion OA articles in the biomedical field have more than a 50% share. OA has been achieved mainly through OAJs. The differences between the OA rates in our surveys and those in recent surveys seem to result from differences in sampling methods and verification procedures. PMID:23658683

  15. Diagnostic Value of Multidetector CT and Its Multiplanar Reformation, Volume Rendering and Virtual Bronchoscopy Postprocessing Techniques for Primary Trachea and Main Bronchus Tumors

    PubMed Central

    Luo, Mingyue; Duan, Chaijie; Qiu, Jianping; Li, Wenru; Zhu, Dongyun; Cai, Wenli

    2015-01-01

    Purpose To evaluate the diagnostic value of multidetector CT (MDCT) and its multiplanar reformation (MPR), volume rendering (VR) and virtual bronchoscopy (VB) postprocessing techniques for primary trachea and main bronchus tumors. Methods Detection results for 31 primary trachea and main bronchus tumors with MDCT and its MPR, VR and VB postprocessing techniques were analyzed retrospectively with regard to tumor locations, tumor morphologies, extramural invasions of tumors, longitudinal involvements of tumors, morphologies and extents of luminal stenoses, distances between main bronchus tumors and trachea carinae, and internal features of tumors. The detection results were compared with those of surgery and pathology. Results Detection results with MDCT and its MPR, VR and VB were consistent with those of surgery and pathology, including tumor locations (tracheae, n = 19; right main bronchi, n = 6; left main bronchi, n = 6), tumor morphologies (endoluminal nodes with narrow bases, n = 2; endoluminal nodes with wide bases, n = 13; both intraluminal and extraluminal masses, n = 16), extramural invasions of tumors (broke through only the serous membrane, n = 1; 4.0 mm–56.0 mm, n = 14; no clear border with right atelectasis, n = 1), longitudinal involvements of tumors (3.0 mm, n = 1; 5.0 mm–68.0 mm, n = 29; whole right main bronchus wall and trachea carina, n = 1), morphologies of luminal stenoses (irregular, n = 26; circular, n = 3; eccentric, n = 1; conical, n = 1) and extents (mild, n = 5; moderate, n = 7; severe, n = 19), distances between main bronchus tumors and trachea carinae (16.0 mm, n = 1; invaded trachea carina, n = 1; >20.0 mm, n = 10), and internal features of tumors (fairly homogeneous densities with rather obvious enhancements, n = 26; homogeneous density with obvious enhancement, n = 1; homogeneous density without obvious enhancement, n = 1; not enough homogeneous density with obvious enhancement, n = 1; punctate calcification with obvious enhancement, n = 1; low density without obvious enhancement, n = 1). Conclusion MDCT and its MPR, VR and VB images have respective advantages and disadvantages. Their combination can complement each other to accurately detect the locations, natures (benign, malignant or low-grade malignant) and quantitative features (extramural invasions, longitudinal involvements, extents of luminal stenoses, distances between main bronchus tumors and trachea carinae) of primary trachea and main bronchus tumors, providing crucial information for surgical treatment, and they are highly useful diagnostic methods for primary trachea and main bronchus tumors. PMID:26332466

  16. High resolution flow field prediction for tail rotor aeroacoustics

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Bliss, Donald B.

    1989-01-01

    The prediction of tail rotor noise due to the impingement of the main rotor wake poses a significant challenge to current analysis methods in rotorcraft aeroacoustics. This paper describes the development of a new treatment of the tail rotor aerodynamic environment that permits highly accurate resolution of the incident flow field with modest computational effort relative to alternative models. The new approach incorporates an advanced full-span free wake model of the main rotor in a scheme which reconstructs high-resolution flow solutions from preliminary, computationally inexpensive simulations with coarse resolution. The heart of the approach is a novel method for using local velocity correction terms to capture the steep velocity gradients characteristic of the vortex-dominated incident flow. Sample calculations have been undertaken to examine the principal types of interactions between the tail rotor and the main rotor wake and to examine the performance of the new method. The results of these sample problems confirm the success of this approach in capturing the high-resolution flows necessary for analysis of rotor-wake/rotor interactions with dramatically reduced computational cost. Computations of radiated sound are also carried out that explore the role of various portions of the main rotor wake in generating tail rotor noise.

  17. Fast cat-eye effect target recognition based on saliency extraction

    NASA Astrophysics Data System (ADS)

    Li, Li; Ren, Jianlin; Wang, Xingbin

    2015-09-01

    Background complexity is a main cause of false detections in cat-eye target recognition. Human vision has a selective attention property that can help find salient targets in complex unknown scenes quickly and precisely. In this paper, we propose a novel cat-eye effect target recognition method named Multi-channel Saliency Processing before Fusion (MSPF). This method combines traditional cat-eye target recognition with the selective character of visual attention, and parallel processing enables fast recognition. Experimental results show that the proposed method performs better in accuracy, robustness and speed compared to other methods.

  18. Consensus methods: review of original methods and their main alternatives used in public health

    PubMed Central

    Bourrée, Fanny; Michel, Philippe; Salmi, Louis Rachid

    2008-01-01

    Summary Background Consensus-based studies are increasingly used as decision-making methods because they have a lower production cost than other methods (observation, experimentation, modelling) and provide results more rapidly. The objective of this paper is to describe the principles and methods of the four main consensus methods, Delphi, nominal group, consensus development conference and RAND/UCLA, their use as it appears in peer-reviewed publications, and validation studies published in the healthcare literature. Methods A bibliographic search was performed in Pubmed/MEDLINE, Banque de Données Santé Publique (BDSP), The Cochrane Library, Pascal and Francis. Keywords, headings and qualifiers corresponding to a list of terms and expressions related to the consensus methods were searched in the thesauri and used in the literature search. A search with the same terms and expressions was performed on the Internet using Google Scholar. Results All methods, precisely described in the literature, are based on common basic principles such as definition of the subject, selection of experts, and direct or remote interaction processes. They sometimes use quantitative assessment for ranking items. Numerous variants of these methods have been described. Few validation studies have been implemented. Failure to implement these basic principles and failure to describe the methods used to reach consensus were both frequent shortcomings that raise suspicion regarding the validity of consensus methods. Conclusion When it is applied to a new domain with important consequences for decision making, a consensus method should first be validated. PMID:19013039

  19. Intra-retinal segmentation of optical coherence tomography images using active contours with a dynamic programming initialization and an adaptive weighting strategy

    NASA Astrophysics Data System (ADS)

    Gholami, Peyman; Roy, Priyanka; Kuppuswamy Parthasarathy, Mohana; Ommani, Abbas; Zelek, John; Lakshminarayanan, Vasudevan

    2018-02-01

    Retinal layer shape and thickness are among the main indicators in the diagnosis of ocular diseases. We present an active contour approach to localize the intra-retinal boundaries of eight retinal layers in OCT images. The initial locations of the active contour curves are determined using a Viterbi dynamic programming method. The main energy function is a Chan-Vese active contour model without edges. A boundary term is added to the energy function using an adaptive weighting method to help the curves converge more precisely to the retinal layer edges in the final iterations, after the curves have evolved towards the boundaries. A wavelet-based denoising method is used to remove speckle from the OCT images while preserving important details and edges. The performance of the proposed method was tested on a set of healthy and diseased eye SD-OCT images. The experimental results, comparing the proposed method with manual segmentation by an optometrist, indicate that our method obtained averages of 95.29%, 92.78%, 95.86%, 87.93%, 82.67%, and 90.25%, respectively, for accuracy, sensitivity, specificity, precision, Jaccard index, and Dice similarity coefficient over all segmented layers. These results demonstrate the robustness of the proposed method in determining the location of the different retinal layers.
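The Viterbi dynamic-programming initialization described above can be sketched generically. The following is a minimal illustration (pure NumPy, with a hypothetical cost matrix), not the authors' implementation: it finds the minimum-cost left-to-right path through a boundary-cost image subject to a smoothness constraint, which is the typical formulation for layer-boundary initialization.

```python
import numpy as np

def viterbi_boundary(cost, max_jump=1):
    """Minimum-cost left-to-right path through a (rows x cols) cost image.

    cost[r, c] is the penalty for the boundary passing through row r at
    column c; max_jump limits the row change between adjacent columns.
    Returns one row index per column (the initial boundary curve).
    """
    rows, cols = cost.shape
    dp = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    dp[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = int(np.argmin(dp[lo:hi, c - 1])) + lo
            dp[r, c] = cost[r, c] + dp[prev, c - 1]
            back[r, c] = prev
    # Backtrack from the cheapest endpoint in the last column.
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(dp[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```

In a real pipeline `cost` would be derived from image gradients along each A-scan; here any matrix works, and the returned row indices would seed the active contour curve.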

  20. DCS-SVM: a novel semi-automated method for human brain MR image segmentation.

    PubMed

    Ahmadvand, Ali; Daliri, Mohammad Reza; Hajiali, Mohammadtaghi

    2017-11-27

    In this paper, a novel method is proposed which appropriately segments magnetic resonance (MR) brain images into three main tissues. It extends our previous work, in which we suggested a combination of multiple classifiers (CMC)-based method named dynamic classifier selection-dynamic local training local Tanimoto index (DCS-DLTLTI) for MR brain image segmentation into three main cerebral tissues. Building on that idea, we develop a novel method that incorporates more complex and accurate classifiers, such as the support vector machine (SVM), into the ensemble. This is challenging because CMC-based methods are time consuming, especially on huge datasets such as three-dimensional (3D) brain MR images. Moreover, although SVM is a powerful method for modeling datasets with complex feature spaces, it also has a huge computational cost for big datasets, especially those with strong interclass variability and more than two classes, such as 3D brain images; SVM therefore cannot be used directly in DCS-DLTLTI. We propose a novel approach named "DCS-SVM" that brings SVM into DCS-DLTLTI to improve the accuracy of the segmentation results. The proposed method is applied to the well-known datasets of the Internet Brain Segmentation Repository (IBSR) and promising results are obtained.

  1. Fast analysis of triterpenoids in Ganoderma lucidum spores by ultra-performance liquid chromatography coupled with triple quadrupole mass spectrometry.

    PubMed

    Yan, Zhou; Xia, Bing; Qiu, Ming Hua; Li Sheng, Ding; Xu, Hong Xi

    2013-11-01

    A rapid and reliable method was established for the simultaneous determination of the main triterpenoids in Ganoderma lucidum spores using ultra-high-performance liquid chromatography coupled with triple quadrupole mass spectrometry (UPLC-TQ-MS). The established method was validated in terms of linearity, sensitivity, precision, accuracy and stability, and was successfully applied to determine the contents of 10 main triterpenoids in different batches of G. lucidum spores. The analysis showed moderate levels of triterpenoids in G. lucidum spores. In addition, an MS full scan with a daughter ion scan experiment was performed to identify the potential derivatives of triterpenoids present in G. lucidum spores. As a result, a total of 22 triterpenoids from different G. lucidum spores were unequivocally or tentatively identified by comparison with authentic standards and the literature. This method provides both qualitative and quantitative results without the need for repetitive UPLC-MS analyses, thereby increasing efficiency and productivity and making it suitable for high-throughput applications. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features

    NASA Astrophysics Data System (ADS)

    Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate

    2017-08-01

    Objective. Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (ICs) as artifact or EEG is not fully automated at present. Approach. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. Main results. We compared the performance of our classifiers with the visual classification results given by experts. The best result, with an accuracy rate of 95%, was achieved using features obtained by range filtering of the topoplots and IC power spectra, combined with an artificial neural network. Significance. Compared with existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or numbers of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for time-consuming manual selection of ICs during artifact removal.
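The kind of feature-based IC screening described here can be illustrated with a minimal sketch. The example below uses excess kurtosis as a single artifact feature with an illustrative threshold; the paper's actual features are range-filtered topoplots and IC power spectra, and its classifier is a neural network, neither of which is reproduced here.

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3: near 0 for Gaussian signals,
    strongly positive for spiky (artifact-like) components."""
    x = x - x.mean()
    s2 = (x ** 2).mean()
    return (x ** 4).mean() / s2 ** 2 - 3.0

def flag_artifact_ics(components, threshold=2.0):
    """Label each independent component as artifact (True) or EEG (False)
    by thresholding its excess kurtosis."""
    return [excess_kurtosis(c) > threshold for c in components]

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
brain_like = np.sin(2 * np.pi * 10 * t)   # smooth 10 Hz oscillation
blink_like = np.zeros_like(t)
blink_like[::200] = 50.0                  # sparse large spikes
flags = flag_artifact_ics([brain_like, blink_like])   # -> [False, True]
```

A complete pipeline would first unmix the EEG with ICA and zero out the flagged components before re-projecting to the sensors.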

  3. Development and applications of optical interferometric micrometrology in the Angstrom and subangstrom range

    NASA Technical Reports Server (NTRS)

    Lauer, James L.; Abel, Phillip B.

    1988-01-01

    The characteristics of the scanning tunneling microscope and atomic force microscope (AFM) are briefly reviewed, and optical methods, mainly interferometry, of sufficient resolution to measure AFM deflections are discussed. The methods include optical resonators, laser interferometry, multiple-beam interferometry, and evanescent wave detection. Experimental results using AFM are reviewed.

  4. Examining the association between motivations for induced abortion and method safety among women in Ghana.

    PubMed

    Biney, Adriana A E; Atiglo, D Yaw

    2017-10-01

    This article draws on data from 552 women interviewed in the 2007 Ghana Maternal Health Survey to examine the association between the motivations for women's pregnancy terminations and the safety of the methods used. Women's reasons for induced abortion represented their vulnerability types at the critical time of decision making. Different motivations can lead to different courses of action, with the most vulnerable potentially resorting to the most harmful behaviors. Analysis of the survey data pointed to spacing/delaying births as the main reason for abortion. Furthermore, women were more likely to terminate pregnancies unsafely if their main motivation for abortion was financial constraint. Especially among rural women, abortions for any other reason were more likely to be associated with safe methods than abortions for financial reasons. These findings suggest that vulnerability resulting from poverty motivates women to resort to harmful abortion methods. Therefore, interventions to reduce unsafe pregnancy terminations should target poverty reduction and capacity building aimed at economic advancement, in addition to curbing the root of the problem: unintended pregnancy.

  5. Incidence of the muffin-tin approximation on the electronic structure of large clusters calculated by the MS-LSD method: The typical case of C{sub 60}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Razafinjanahary, H.; Rogemond, F.; Chermette, H.

    The MS-LSD method remains a method of interest when rapidity and small computer resources are required; its main drawback is some lack of accuracy, mainly due to the muffin-tin distribution of the potential. In the case of large clusters or molecules, the use of an empty sphere to fill, in part, the large intersphere region can greatly improve the results. Calculations on C{sub 60} have been undertaken to illustrate this trend because, on the one hand, the fullerenes offer a remarkable possibility to fit a large empty sphere in the center of the cluster and, on the other hand, numerous accurate calculations have already been published, allowing quantitative comparison of results. The authors' calculations suggest that, when the empty sphere is added, the results compare well with those of more accurate calculations. The calculated electron affinities for C{sub 60} and C{sub 60}{sup {minus}} are in reasonable agreement with experimental values, but the stability of C{sub 60}{sup 2-} in the gas phase is not found. 35 refs., 3 figs., 5 tabs.

  6. Parameters estimation using the first passage times method in a jump-diffusion model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khaldi, K., E-mail: kkhaldi@umbb.dz; LIMOSE Laboratory, Boumerdes University, 35000; Meddahi, S., E-mail: samia.meddahi@gmail.com

    2016-06-02

    This paper makes two main contributions: (1) it presents a new method, the first passage time (FPT) method generalized to all passage times (the GPT method), for estimating the parameters of a stochastic jump-diffusion process; (2) on a time series of gold share prices, it compares the empirical estimation and forecast results obtained with the GPT method against those obtained with the moments method and the FPT method applied to the Merton jump-diffusion (MJD) model.
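As background for the model being estimated, a minimal Merton jump-diffusion path simulator might look as follows (pure NumPy; the parameter values are illustrative and the paper's GPT estimator itself is not reproduced here):

```python
import numpy as np

def simulate_mjd(s0, mu, sigma, lam, jump_mu, jump_sigma, t, n, rng):
    """Euler simulation of a Merton jump-diffusion log-price path:
    Gaussian diffusion increments plus Poisson-arriving Gaussian jumps
    in the log price."""
    dt = t / n
    log_s = np.empty(n + 1)
    log_s[0] = np.log(s0)
    for i in range(n):
        diff = (mu - 0.5 * sigma ** 2) * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal()
        n_jumps = rng.poisson(lam * dt)               # jumps in this step
        jumps = rng.normal(jump_mu, jump_sigma, n_jumps).sum()
        log_s[i + 1] = log_s[i] + diff + jumps
    return np.exp(log_s)

rng = np.random.default_rng(0)
# s0=100, 5% drift, 20% volatility, 3 jumps/year of ~10% size, 1 year.
path = simulate_mjd(100.0, 0.05, 0.2, 3.0, 0.0, 0.1, 1.0, 2000, rng)
```

Parameter estimators (moments, FPT, GPT) would then be fit to the log-returns of such a path or to observed market data.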

  7. Examination of a Rotorcraft Noise Prediction Method and Comparison to Flight Test Data

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.; Greenwood, Eric; Watts, Michael E.; Lopes, Leonard V.

    2017-01-01

    With a view that rotorcraft noise should be included in the preliminary design process, a relatively fast noise prediction method is examined in this paper. A comprehensive rotorcraft analysis is combined with a noise prediction method to compute several noise metrics of interest. These predictions are compared to flight test data. Results show that inclusion of only the main rotor noise will produce results that severely underpredict integrated metrics of interest. Inclusion of the tail rotor frequency content is essential for accurately predicting these integrated noise metrics.

  8. Flow processes in overexpanded chemical rocket nozzles. Part 2: Side loads due to asymmetric separation

    NASA Technical Reports Server (NTRS)

    Schmucker, R. H.

    1984-01-01

    Methods for measuring the lateral forces, occurring as a result of asymmetric nozzle flow separation, are discussed. The effect of some parameters on the side load is explained. A new method was developed for calculation of the side load. The values calculated are compared with side load data of the J-2 engine. Results are used for predicting side loads of the space shuttle main engine.

  9. Application of the enhanced homotopy perturbation method to solve the fractional-order Bagley-Torvik differential equation

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M.; Ghaderi, R.; Sheikhol Eslami, A.; Ranjbar, A.; Hosseinnia, S. H.; Momani, S.; Sadati, J.

    2009-10-01

    The enhanced homotopy perturbation method (EHPM) is applied for finding improved approximate solutions of the well-known Bagley-Torvik equation for three different cases. The main characteristic of the EHPM is using a stabilized linear part, which guarantees the stability and convergence of the overall solution. The results are finally compared with the Adams-Bashforth-Moulton numerical method, the Adomian decomposition method (ADM) and the fractional differential transform method (FDTM) to verify the performance of the EHPM.
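For reference, the Bagley-Torvik equation has the standard form shown below; the three cases treated in the paper correspond to particular choices of the coefficients and forcing term (this statement of the equation is standard background, not taken from the abstract itself):

```latex
A\,y''(t) + B\,D^{3/2} y(t) + C\,y(t) = f(t),
\qquad y(0) = y_0, \quad y'(0) = y_1,
```

where \(D^{3/2}\) denotes the fractional derivative of order 3/2 (Caputo or Riemann-Liouville, depending on the formulation).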

  10. Blind image quality assessment based on aesthetic and statistical quality-aware features

    NASA Astrophysics Data System (ADS)

    Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi

    2017-07-01

    The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation between the objective scores of these methods and human perceptual scores is considered their performance metric. Human judgment of image quality implicitly includes many factors, such as aesthetics, semantics, context, and various types of visual distortion. The main idea of this paper is to use a host of features commonly employed in image aesthetics assessment in order to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of aesthetics image features with natural image statistics features derived from multiple domains. The proposed features have been used to augment five different state-of-the-art BIQA methods that use natural scene statistics features. Experiments were performed on seven benchmark image quality databases. The experimental results showed significant improvements in the accuracy of these methods.

  11. Plant species classification using flower images—A comparative study of local feature representations

    PubMed Central

    Seeland, Marco; Rzanny, Michael; Alaqraa, Nedal; Wäldchen, Jana; Mäder, Patrick

    2017-01-01

    Steady improvements of image description methods induced a growing interest in image-based plant species classification, a task vital to the study of biodiversity and ecological sensitivity. Various techniques have been proposed for general object classification over the past years and several of them have already been studied for plant species classification. However, the results of these studies are selective in the evaluated steps of a classification pipeline, in the datasets utilized for evaluation, and in the compared baseline methods. No study is available that evaluates the main competing methods for building an image representation on the same datasets, allowing for generalized findings regarding flower-based plant species classification. The aim of this paper is to comparatively evaluate methods, method combinations, and their parameters with respect to classification accuracy. The investigated methods span detection, extraction, fusion, pooling, and encoding of local features for quantifying the shape and color information of flower images. We selected the flower image datasets Oxford Flower 17 and Oxford Flower 102 as well as our own Jena Flower 30 dataset for our experiments. Findings show large differences among the various studied techniques and that a wisely chosen orchestration of them allows for high accuracies in species classification. We further found that true local feature detectors in combination with advanced encoding methods yield higher classification accuracy at lower computational cost compared to the commonly used dense sampling and spatial pooling methods. Color was found to be an indispensable feature for high classification results, especially while preserving spatial correspondence to gray-level features. As a result, our study provides a comprehensive overview of competing techniques and the implications of their main parameters for flower-based plant species classification. PMID:28234999

  12. Fuzzy cluster analysis of air quality in Beijing district

    NASA Astrophysics Data System (ADS)

    Liu, Hongkai

    2018-02-01

    The principle of fuzzy clustering analysis is applied in this article: using the transitive closure method, the main air pollutants in 17 districts of Beijing from 2014 to 2016 are classified. The results of the analysis reflect the changes in the main air pollutants in Beijing over these three years, and can provide a scientific basis and digital support for atmospheric governance in the Beijing area.
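The transitive-closure step of fuzzy cluster analysis can be sketched concretely. The example below is a generic illustration with a made-up 3x3 similarity matrix, not the study's pollutant data: the max-min composition is iterated to a fixed point, and a lambda-cut of the closure yields the clusters.

```python
import numpy as np

def transitive_closure(r):
    """Max-min transitive closure of a fuzzy similarity matrix: compose
    R with itself (t2[i,j] = max_k min(t[i,k], t[k,j])) until stable."""
    t = r.copy()
    while True:
        t2 = np.max(np.minimum(t[:, :, None], t[None, :, :]), axis=1)
        if np.array_equal(t2, t):
            return t
        t = t2

def lambda_cut_clusters(closure, lam):
    """Group items whose closure similarity reaches the cut level lam."""
    n = len(closure)
    labels = [-1] * n
    next_label = 0
    for i in range(n):
        if labels[i] == -1:
            for j in range(n):
                if closure[i, j] >= lam:
                    labels[j] = next_label
            next_label += 1
    return labels

# Toy similarity matrix for three districts (illustrative numbers only).
similarity = np.array([[1.0, 0.8, 0.4],
                       [0.8, 1.0, 0.5],
                       [0.4, 0.5, 1.0]])
closure = transitive_closure(similarity)
clusters = lambda_cut_clusters(closure, 0.6)   # -> [0, 0, 1]
```

Raising the cut level lambda splits the data into finer clusters; lowering it merges them, which is how a dendrogram-like family of classifications is obtained.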

  13. Theoretical research program to study chemical reactions in AOTV bow shock tubes

    NASA Technical Reports Server (NTRS)

    Taylor, P.

    1986-01-01

    Progress in the development of computational methods for the characterization of chemical reactions in aerobraking orbit transfer vehicle (AOTV) propulsive flows is reported. Two main areas of code development were undertaken: (1) the implementation of CASSCF (complete active space self-consistent field) and SCF (self-consistent field) analytical first derivatives on the CRAY X-MP; and (2) the installation of the complete set of electronic structure codes on the CRAY 2. In the area of application calculations the main effort was devoted to performing full configuration-interaction calculations and using these results to benchmark other methods. Preprints describing some of the systems studied are included.

  14. Iterative procedures for space shuttle main engine performance models

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1989-01-01

    Performance models of the Space Shuttle Main Engine (SSME) contain iterative strategies for determining approximate solutions to nonlinear equations reflecting fundamental mass, energy, and pressure balances within engine flow systems. Both univariate and multivariate Newton-Raphson algorithms are employed in the current version of the engine Test Information Program (TIP). The computational efficiency and reliability of these procedures are examined. A modified trust region form of the multivariate Newton-Raphson method is implemented and shown to be superior for off-nominal engine performance predictions. A heuristic form of Broyden's Rank One method is also tested and favorable results based on this algorithm are presented.
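The multivariate Newton-Raphson iteration at the core of such balance solvers can be sketched in a few lines. The system below is a hypothetical toy with a known root, standing in for the (much larger) mass/energy/pressure balance equations of the TIP; no trust-region or Broyden safeguards are shown.

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Multivariate Newton-Raphson: linearize f about the current iterate
    and solve the linear system J dx = -f(x) at each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x - np.linalg.solve(jac(x), fx)
    return x

# Toy nonlinear balance system with a known root at (2, 1).
def f(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 5.0, x[0] * x[1] - 2.0])

def jac(x):
    return np.array([[2 * x[0], 2 * x[1]], [x[1], x[0]]])

root = newton_raphson(f, jac, [2.2, 0.8])   # converges to [2, 1]
```

A trust-region modification, as examined in the paper, would additionally bound the step length when the local linear model is untrustworthy, which is what improves robustness at off-nominal operating points.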

  15. Microcolumn-based speciation analysis of thallium in soil and green cabbage.

    PubMed

    Jia, Yanlong; Xiao, Tangfu; Sun, Jialong; Yang, Fei; Baveye, Philippe C

    2018-07-15

    Thallium (Tl) is a toxic trace metal whose geochemical behavior and biological effects are closely controlled by its chemical speciation in the environment. However, little is known about the speciation of Tl in the soil and plant systems that directly affect the safety of food supplies. In this context, the objective of the present study was to elaborate an efficient method to separate and detect Tl(I) and Tl(III) species in soil and plant samples. The method involves the selective adsorption of Tl(I) on microcolumns filled with immobilized oxine, in the presence of DTPA (diethylenetriaminepentaacetic acid), followed by DTPA-enhanced ultrasonic and heating-induced extraction, coupled with ICP-MS detection. The method was characterized by a LOD of 0.037 μg/L for Tl(I) and 0.18 μg/L for Tl(III) in 10 mL samples. With this method, a second objective of the research was to assess the speciation of Tl in pot and field soils and in green cabbage crops. The experimental results suggest that DTPA-extracted Tl was mainly present as Tl(I) in soils (>95%). Tl in the hyperaccumulator plant green cabbage was also mainly present as Tl(I) (>90%). With respect to Tl uptake in plants, this study provides direct evidence that green cabbage mainly takes up Tl(I) from soil and transports it into the aboveground organs. In soils, Tl(III) is reduced to Tl(I) even at the surface, where the chemical environment promotes oxidation. This observation is conducive to understanding the mechanisms of Tl isotope fractionation in the soil-plant system. Based on geochemical fraction studies, the reducible fraction was the main source of the Tl accumulated by plants. These results indicate that the improved analytical method presented in this study offers an economical, simple, fast, and sensitive approach for the separation of Tl species present in soils at trace levels. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Homing in on sweet spots in Cretaceous Austin chalk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, G.E.; Sonnenberg, F.P.

    1993-11-29

    In discussing the nature and causes of fracturing in the Cretaceous Austin chalk of south central Texas, many geologists and operators involved in horizontal drilling of the chalk consider regional rock stress the probable main cause of the fractures. If Austin chalk fractures are mainly the result of regional extensional stress without localizing factors, then fractured sweet spots are randomly distributed and successful exploration is more or less a matter of luck, usually dependent upon the coincidental placement of a seismic line. But if local, deep-seated structure or basement topography is the main cause of sweet spots, then a successful exploration method would be to first delineate the basement paleostructure or topography and then place a seismic line to confirm the delineated features. Finding localities of maximum fracturing and production would then be based on scientific logic rather than luck. It is the purpose of this article to present the results of an examination of these alternative causes of the Austin chalk fracturing, in the hope of determining the most cost-effective exploration method for the fractured chalk reservoir.

  17. Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method

    NASA Astrophysics Data System (ADS)

    Sun, C. J.; Zhou, J. H.; Wu, W.

    2017-10-01

    During its lifetime, a ship may encounter collision or grounding and sustain permanent damage from these types of accidents. Crashworthiness analysis has been based on two main kinds of methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper. Numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate the method. The results of the simplified plastic analysis are in good agreement with the finite element simulation, which shows that the simplified plastic analysis method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can be used as a reliable risk assessment method.

  18. Characterization of archaeological structures using the magnetic method in Thaje archaeological site, Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alkhatib Alkontar, Rozan; Calou, Paul; Rohmer, Jérôme; Munschy, Marc

    2017-04-01

    Among the surface exploration methods that have been developed to meet the new requirements of archaeological research, geophysical methods offer a wide range of applications in the study of buried deposits. As a result of the most recent developments, the magnetic-field prospection method is very efficient at highlighting buried foundations even when the corresponding construction material is weakly magnetized, as is the case, for example, with limestone. The magnetic field measured at a specific place and time is the vector sum of the main regional magnetic field, the effect of subsurface structures, the temporal variation (mainly solar influence) and local disturbances such as power lines, buildings, fences … The measurements are made with an array of fluxgate three-component magnetometers carried 1 m above the ground; the advantage of using vector magnetometers is that magnetic compensation can be achieved. An array of four magnetometers was used to survey the archaeological site of Thaje (100-300 yr BC), Saudi Arabia, and, to obtain a precise location of the measurements, a differential global navigation satellite system was used with an accuracy of about 10 cm relative to the base station. 25 hectares were surveyed within 13 days and the data were compiled into a total magnetic intensity map with a node spacing of 25 cm. The map is interpreted using magnetic field transforms such as reduction to the pole, fractional vertical derivatives, and the tilt angle. The results show a fairly precise plan of the city, in which the main streets, buildings and rampart can be clearly distinguished.
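One of the transforms mentioned, the tilt angle, can be sketched from its standard definition: the arctangent of the vertical derivative of the anomaly over its total horizontal gradient, with the vertical derivative computed in the wavenumber domain. The grid below is a synthetic bump, not the Thaje survey data; the implementation is a generic sketch of the transform, not the authors' processing chain.

```python
import numpy as np

def tilt_angle(grid, dx, dy):
    """Tilt angle of a gridded magnetic anomaly: arctan of the vertical
    derivative over the total horizontal gradient. The vertical
    derivative is obtained in the wavenumber domain (multiply by |k|)."""
    ny, nx = grid.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    kxx, kyy = np.meshgrid(kx, ky)
    k = np.sqrt(kxx ** 2 + kyy ** 2)
    dz = np.real(np.fft.ifft2(np.fft.fft2(grid) * k))
    gy, gx = np.gradient(grid, dy, dx)
    horiz = np.sqrt(gx ** 2 + gy ** 2)
    return np.arctan2(dz, horiz)

# Synthetic anomaly: a smooth bump on a 64 x 64 grid, 0.25 m node
# spacing (matching the survey's 25 cm grid; the anomaly is made up).
coords = np.arange(64) * 0.25 - 8.0
xx, yy = np.meshgrid(coords, coords)
anomaly = np.exp(-(xx ** 2 + yy ** 2) / 4.0)
tilt = tilt_angle(anomaly, 0.25, 0.25)
```

Because the tilt angle is bounded to (-90°, +90°) regardless of anomaly amplitude, it equalizes weak and strong sources, which is why it helps reveal faintly magnetized foundations.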

  19. Numerical Simulation of One- And Two-Phase Flows In Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Gilinsky, Mikhail M.

    2002-01-01

    In this report, we present some results of problems investigated during joint research between the Hampton University (HU) Fluid Mechanics and Acoustics Laboratory (FM&AL), NASA Glenn Research Center (GRC) and the Hyper-X Program of the NASA Langley Research Center (LaRC). This work is supported by joint research between the NASA GRC/HU FM&AL and the Institute of Mechanics at Moscow State University (IM/MSU) in Russia under a Civilian Research and Development Foundation (CRDF) grant, #RE1-2068. The main areas of current scientific interest of the FM&AL include an investigation of the proposed and patented advanced methods for aircraft engine thrust and noise benefits. These methods are based on nontraditional 3D (three-dimensional) corrugated and composite nozzle, inlet, propeller and screw designs such as the Bluebell and Telescope nozzles, Mobius-shaped screws, etc. These are the main subject of our other projects, of which one is the NASA MURED's (Minority University Research and Education Division) FAR (Faculty Awards for Research) Award, #NAG-3-2249. Working jointly with this project team, our team also analyzes additional methods for exhaust jet noise reduction. These methods incur no essential thrust loss and can even provide thrust augmentation. The research is focused on a wide range of problems in the propulsion field as well as on experimental testing and theoretical and numerical simulation analyses for advanced aircraft and rocket engines. The FM&AL Team uses analytical methods, numerical simulations and experimental tests at the Hampton University campus, NASA and IM/MSU. The main results obtained by the FM&AL team have been published in papers and patents.

  20. Analyzing the economic impacts of transportation projects.

    DOT National Transportation Integrated Search

    2013-09-01

    The main goal of the study is to explore methods, approaches and analytical software tools for analyzing the economic activity that results from large-scale transportation investments in Connecticut. The primary conclusion is that the transportation...

  1. [Difficulties of genetic counselling in rare, mainly neurogenetic disorders].

    PubMed

    Horváth, Emese; Nagy, Nikoletta; Széll, Márta

    2014-08-03

    In recent decades, the methods used to investigate the genetic background of rare diseases have improved greatly. The aim of the authors was to demonstrate the difficulties of genetic counselling and investigation in the case of five rare, mainly neurogenetic diseases. During pre-test genetic counselling, the disease suspected from the clinical symptoms and the available genetic tests were considered. During post-test genetic counselling, the results of the genetic tests were discussed. In three of the five cases the genetic tests identified the disease-causing genetic abnormalities, while in two cases the causative abnormalities were not identified. Despite great improvements in the available genetic methods, the causative genetic abnormalities cannot be identified in some cases. The genetic counsellor has a key role in the assessment and interpretation of the results and in helping with family planning.

  2. COPE Method Implementation Program to Reduce Communication Apprehension Level in Full Day Junior High School Students

    NASA Astrophysics Data System (ADS)

    Prasetyo, A. R.

    2017-02-01

    This study aimed to explore the effect of the COPE method in reducing the communication apprehension level of early-adolescent students attending a Full Day Junior High School. Full Day Junior High School students, especially in the Surabaya coastal area, face greater demands to develop communication skills, such as group discussions, presentations and extracurricular activities, and these higher demands may cause them to experience communication apprehension. The subjects were 31 Full Day School students. The research used an experimental design, specifically a non-randomized pretest-posttest control group design with purposive sampling. The COPE method is a process consisting of four main stages in which people try to deal with and control stressful situations arising from the problem they face by making cognitive and behavioral changes. The four main stages of the COPE method are Calming the nervous system, Originating an imaginative plan, Persisting in the face of obstacles and failure, and Evaluating and adjusting the plan. Quantitative analysis based on the Mann-Whitney U test shows a significant effect of the COPE method on decreasing communication apprehension levels (0.000 < 0.005).
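The Mann-Whitney U statistic used in the analysis can be sketched directly from its definition. This computes only the statistic, not the p-value, and the score lists below are made up for illustration:

```python
def mann_whitney_u(a, b):
    """U statistic for group a versus group b: the number of pairs
    (x, y) with x > y, counting ties as one half."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical posttest apprehension scores for two small groups.
u_stat = mann_whitney_u([3, 4, 5], [1, 2, 3])   # -> 8.5
```

In practice the statistic is compared against its null distribution (or a normal approximation for larger samples) to obtain the reported significance level.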

  3. Factors influencing the results of faculty evaluation in Isfahan University of Medical Sciences

    PubMed Central

    Kamali, Farahnaz; Yamani, Nikoo; Changiz, Tahereh; Zoubin, Fatemeh

    2018-01-01

    OBJECTIVE: This study aimed to explore factors influencing the results of faculty member evaluation from the viewpoints of faculty members affiliated with Isfahan University of Medical Sciences, Isfahan, Iran. MATERIALS AND METHODS: This qualitative study was done using a conventional content analysis method. Participants were faculty members of Isfahan University of Medical Sciences who, considering maximum variation in sampling, were chosen with a purposive sampling method. Semi-structured interviews were held with 11 faculty members until data saturation was reached. The interviews were transcribed verbatim and analyzed with the conventional content analysis method for theme development. Further, the MAXQDA software was used for data management. RESULTS: The data analysis led to the development of two main themes, namely, “characteristics of the educational system” and “characteristics of the faculty member evaluation system.” The first main theme consists of three categories, i.e. “characteristics of influential people in evaluation,” “features of the courses,” and “background characteristics.” The other theme has the following as its categories: “evaluation methods,” “evaluation tools,” “evaluation process,” and “application of evaluation results.” Each category has its own subcategories. CONCLUSIONS: Many factors affect the evaluation of faculty members that should be taken into account by educational policymakers for improving the quality of the educational process. In addition to the factors that directly influence the educational system, methodological problems in the evaluation system need special attention. PMID:29417073

  4. Earthquake effect on volcano and the geological structure in central java using tomography travel time method and relocation hypocenter by grid search method

    NASA Astrophysics Data System (ADS)

    Suharsono; Nurdian, S. W.; Palupi, I. R.

    2016-11-01

    Relocating hypocenters is a way to improve the velocity model of the subsurface; one such method is grid search. To obtain the velocity distribution in the subsurface by tomography, the relocated hypocenters serve as a reference for analyzing volcanic and major structural patterns, such as in Central Java. The main data of this study are earthquakes recorded from 1952 to 2012 by 30 stations located in the vicinity of Central Java, comprising 2426 events with 9162 P-wave arrivals. The grid search method can relocate hypocenters more accurately because it divides the model space into lattice blocks, and each grid block can be occupied by only one hypocenter. The tomography is then performed on the relocated travel-time data using the pseudo-bending inversion method. The grid search relocation shows that the hypocenters are shallower than before and shifted southward; their distribution is modeled as the subduction zone between the Eurasian and Indo-Australian plates with an average dip angle of 14°. The tomography results show low velocity anomalies of -8% to -10% beneath the volcanoes, while the pattern of the main fault structure in Central Java can be delineated by high velocity anomalies of 8% to 10% trending northwest and northeast-southwest.
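
    The grid search relocation described above can be sketched as an exhaustive scan over candidate hypocenter nodes, keeping the node with the smallest travel-time residual. The station geometry, uniform velocity, and source below are illustrative assumptions, not the study's data:

```python
import itertools, math

# Hypothetical station coordinates (km) and a synthetic event; a uniform
# P velocity replaces the study's real velocity model.
stations = [(0.0, 0.0, 0.0), (40.0, 0.0, 0.0), (0.0, 40.0, 0.0), (40.0, 40.0, 0.0)]
v_p = 6.0                                   # assumed P velocity, km/s
true_src, t0 = (20.0, 10.0, 15.0), 2.0      # hypocenter (km) and origin time (s)
obs = [t0 + math.dist(s, true_src) / v_p for s in stations]  # observed arrivals

def relocate(obs, stations, step=1.0):
    """Grid search: each lattice block holds one candidate hypocenter;
    keep the node minimizing the RMS travel-time residual."""
    best, best_rms = None, float("inf")
    for x, y, z in itertools.product(
            [i * step for i in range(41)],   # x: 0..40 km
            [i * step for i in range(41)],   # y: 0..40 km
            [i * step for i in range(31)]):  # z: 0..30 km depth
        calc = [math.dist(s, (x, y, z)) / v_p for s in stations]
        # The best origin time for this node is the mean observed-minus-calculated time.
        t0_est = sum(o - c for o, c in zip(obs, calc)) / len(obs)
        rms = math.sqrt(sum((o - c - t0_est) ** 2 for o, c in zip(obs, calc)) / len(obs))
        if rms < best_rms:
            best, best_rms = (x, y, z), rms
    return best, best_rms

hypo, rms = relocate(obs, stations)  # recovers the grid node at the true source
```

    With noise-free synthetic arrivals the search recovers the true node exactly; with real data the residual floor reflects picking errors and the velocity model.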

  5. Structural Noise and Acoustic Characteristics Improvement of Transport Power Plants

    NASA Astrophysics Data System (ADS)

    Chaynov, N. D.; Markov, V. A.; Savastenko, A. A.

    2018-03-01

    Reducing the noise generated during the operation of various machines and mechanisms is an urgent task for power plants and, in particular, for internal combustion engines. Sound emission from the vibrating surfaces of body parts, called structural noise, is one of the main noise manifestations of a running engine. The vibration of the outer surfaces of complex body parts and the resulting acoustic characteristics are determined with numerical methods. A combination of the finite element and boundary element methods has proven very effective: the finite element method is used to calculate the vibrations of the structural elements, and the boundary element method is used to calculate the structural noise. The main points of the methodology and the results of the structural noise analysis applied to a number of automobile engines are presented.

  6. Clustering self-organizing maps (SOM) method for human papillomavirus (HPV) DNA as the main cause of cervical cancer disease

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Aldila, D.; Fatimah, Arimbi, M. D.

    2017-07-01

    One of the most widely used clustering methods, owing to its robustness, is the Self-Organizing Map (SOM). This paper discusses the application of the SOM method to the DNA of Human Papillomavirus (HPV), the main cause of cervical cancer, the most dangerous cancer in developing countries. We use 18 types of HPV DNA based on the newest complete genomes. Using the open-source program R, the clustering process separates the 18 HPV types into two different clusters: two types fall in the first cluster, while the 16 others fall in the second. The 18 HPV types are then analyzed according to the malignancy of the virus (how difficult it is to cure). The two HPV types in the first cluster can be classified as tame HPV, while the 16 others in the second cluster are classified as vicious HPV.
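
    A minimal SOM-style sketch of such a two-cluster split (toy feature vectors standing in for genome features; the study itself used R). With only two map nodes and no neighborhood term, the SOM update degenerates to online competitive learning:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-ins for HPV genome feature vectors: 2 samples near one profile,
# 16 near another, loosely mimicking the 2-vs-16 split reported above.
data = np.vstack([rng.normal(0.0, 0.05, (2, 4)),
                  rng.normal(1.0, 0.05, (16, 4))])

# Two map nodes, seeded from one sample of each group; a full SOM would also
# shrink a neighborhood radius over a 2-D grid of nodes.
weights = data[[0, -1]].copy()
for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)  # decaying learning rate
    for x in data:
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))  # best-matching unit
        weights[bmu] += lr * (x - weights[bmu])  # pull the winner toward the sample

labels = [int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in data]
```

    After training, each sample's nearest node gives its cluster label, reproducing the two-group separation on this toy data.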

  7. Fine-grained indexing of the biomedical literature: MeSH subheading attachment for a MEDLINE indexing tool.

    PubMed

    Névéol, Aurélie; Shooshan, Sonya E; Mork, James G; Aronson, Alan R

    2007-10-11

    This paper reports on the latest results of an Indexing Initiative effort addressing the automatic attachment of subheadings to the MeSH main headings recommended by the NLM's Medical Text Indexer. Several linguistic and statistical approaches are used to retrieve and attach the subheadings. Continuing collaboration with NLM indexers also provided insight into how automatic methods can better enhance indexing practice. The methods were evaluated on a corpus of 50,000 MEDLINE citations. For main heading/subheading pair recommendations, the best precision is obtained with a post-processing rule method (58%), while the best recall is obtained by pooling all methods (64%). For stand-alone subheading recommendations, the best performance is obtained with the PubMed Related Citations algorithm. Significant progress has been made in terms of subheading coverage. After further evaluation, some of this work may be integrated into the MEDLINE indexing workflow.

  8. [Study on commercial specification of atractylodes based on Delphi method].

    PubMed

    Wang, Hao; Chen, Li-Xiao; Huang, Lu-Qi; Zhang, Tian-Tian; Li, Ying; Zheng, Yu-Guang

    2016-03-01

    This research adopts the Delphi method to evaluate the correlation between the traditional traits of atractylodes and its commercial grade ranking. Using methods of mathematical statistics, the relationship between the traditional identification indicators and the grade ranking of atractylodes goods was analyzed. The main characteristics affecting the commodity specifications and grades of atractylodes were found to be the oil points of the cross-section, color of the cross-section, color of the surface, grain of the cross-section, texture of the cross-section, and spoilage. The study points out that the atractylodes entry in the original "Commodity Specification Standards for Seventy-six Kinds of Medicinal Materials" no longer conforms to the actual market situation, so corresponding specifications and grades for atractylodes medicinal products need to be formulated. Combining the Delphi-method results with the actual market situation, this study proposes a new draft of atractylodes commodity specifications and grades to serve as the new standard, providing a reference and theoretical basis. Copyright© by the Chinese Pharmaceutical Association.

  9. Numerical Simulation of One-and Two-Phase Flows in Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Blankson, Isaiah M. (Technical Monitor); Gilinsky, Mikhail

    2002-01-01

    In this report, we present some results of problems investigated during joint research between the Hampton University Fluid Mechanics and Acoustics Laboratory (FM&AL), NASA Glenn Research Center (GRC), and the Hyper-X Program of the NASA Langley Research Center (LaRC). The main areas of current scientific interest of the FM&AL include an investigation of proposed and patented advanced methods for aircraft engine thrust and noise benefits. These methods are based on nontraditional 3D corrugated and composite nozzle, inlet, propeller, and screw designs such as the Bluebell and Telescope nozzles, Mobius-shaped screws, etc. These designs are the main subject of our other projects, one of which is the NASA MURED FAR Award. Working jointly with this project team, our team also analyzes additional methods for exhaust jet noise reduction, methods that avoid essential thrust loss and can even provide thrust augmentation.

  10. Can Scat Analysis Describe the Feeding Habits of Big Cats? A Case Study with Jaguars (Panthera onca) in Southern Pantanal, Brazil

    PubMed Central

    Perilli, Miriam L. L.; Lima, Fernando; Rodrigues, Flávio H. G.; Cavalcanti, Sandra M. C.

    2016-01-01

    The feeding habits of large cats have been studied through two main methods: scat analysis and the carcasses of prey killed by monitored animals. From November 2001 to April 2004, we studied jaguar predation patterns using GPS telemetry location clusters on a cattle ranch in the southern Pantanal. During this period, we recorded 431 carcasses of animals preyed upon by monitored jaguars. Concurrently, we collected 125 jaguar scats opportunistically. We compared the frequencies of prey found through each method, and we compared the prey communities using the Bray-Curtis similarity coefficient. These comparisons allowed us to evaluate the use of scat analysis as a means to describe jaguar feeding habits. Both approaches identified prey communities with high similarity (Bray-Curtis coefficient > 70). According to either method, jaguars consume three main prey: cattle (Bos taurus), caiman (Caiman yacare) and peccaries (Tayassu pecari and Pecari tajacu). The two methods did not differ in the frequency of the three main prey over dry and wet seasons or years sampled. Our results show that scat analysis is effective and capable of describing jaguar feeding habits. PMID:27002524
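
    The Bray-Curtis similarity used above can be computed directly from two prey-count vectors. A minimal sketch with hypothetical counts (illustrative numbers, not the study's data):

```python
def bray_curtis_similarity(a, b):
    """Bray-Curtis similarity (%) between two abundance vectors:
    100 * 2 * sum(min(a_i, b_i)) / (sum(a) + sum(b))."""
    shared = sum(min(x, y) for x, y in zip(a, b))
    return 100.0 * 2.0 * shared / (sum(a) + sum(b))

# Hypothetical prey counts per method (cattle, caiman, peccary, other).
scats = [40, 30, 25, 5]
carcasses = [45, 28, 22, 5]
sim = bray_curtis_similarity(scats, carcasses)  # 95.0, well above the >70 threshold
```

    Values above 70, as in the study, indicate that the two sampling methods recover essentially the same prey community.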

  11. Optimal design of a main driving mechanism for servo punch press based on performance atlases

    NASA Astrophysics Data System (ADS)

    Zhou, Yanhua; Xie, Fugui; Liu, Xinjun

    2013-09-01

    The servomotor-driven turret punch press is attracting increasing attention and being developed intensively due to its advantages of high speed, high accuracy, high flexibility, high productivity, low noise, cleanliness, and energy saving. To effectively improve performance and lower cost, it is necessary to develop new mechanisms and establish a corresponding optimal design method with uniform performance indices. A new patented main driving mechanism and a new optimal design method are proposed. In the optimal design, the performance indices are defined: the local motion/force transmission indices (ITI, OTI), the good transmission workspace (GTW), and the global transmission indices (GTIs). The non-dimensional normalization method is used to obtain all feasible solutions in dimensional synthesis. Thereafter, performance atlases, which can present all possible design solutions, are depicted. As a result, a feasible solution of the mechanism with good motion/force transmission performance is obtained, and the solution can be flexibly adjusted by the designer according to practical design requirements. The proposed mechanism is original, and the presented design method provides a feasible approach to the optimal design of the main driving mechanism for a servo punch press.

  12. [Using neural networks based template matching method to obtain redshifts of normal galaxies].

    PubMed

    Xu, Xin; Luo, A-li; Wu, Fu-chao; Zhao, Yong-heng

    2005-06-01

    Galaxies can be divided into two classes: normal galaxies (NG) and active galaxies (AG). To determine NG redshifts, an automatic and effective method is proposed in this paper, consisting of three main steps: (1) From the normal galaxy template, two sets of samples are simulated, one with redshifts of 0.0-0.3 and the other of 0.3-0.5; PCA is then used to extract the principal components, and the training samples are projected onto the principal component subspace to obtain characteristic spectra. (2) The characteristic spectra are used to train a Probabilistic Neural Network to obtain a Bayes classifier. (3) An unknown real NG spectrum is first input to this Bayes classifier to determine the possible redshift range, then template matching is invoked to locate the redshift value within the estimated range. Compared with the traditional template matching technique over an unconstrained range, our proposed method not only halves the computational load but also increases the estimation accuracy. As a result, the proposed method is particularly useful for the automatic processing of spectra produced by a large-scale sky survey project.
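
    Step (3), template matching over a classifier-narrowed redshift range, can be sketched as a correlation scan in log-wavelength, where redshift becomes a simple shift. The single synthetic emission line and grids below are illustrative assumptions, not the paper's templates:

```python
import numpy as np

loglam = np.linspace(3.6, 3.9, 2000)  # log10(wavelength) grid
# Rest-frame "template": one Gaussian emission line at log10(lambda) = 3.70.
template = np.exp(-0.5 * ((loglam - 3.70) / 0.002) ** 2)

# Synthetic observed spectrum: the same line redshifted by z_true.
z_true = 0.12
observed = np.exp(-0.5 * ((loglam - (3.70 + np.log10(1 + z_true))) / 0.002) ** 2)

def match_redshift(obs, tmpl, loglam, z_grid):
    """Slide the template by log10(1+z) and keep the z maximizing correlation."""
    best_z, best_c = None, -np.inf
    for z in z_grid:
        shifted = np.interp(loglam - np.log10(1 + z), loglam, tmpl, left=0.0, right=0.0)
        c = float(np.dot(obs, shifted))
        if c > best_c:
            best_z, best_c = float(z), c
    return best_z

# The classifier stage narrowed the range to 0.0-0.3, so search only there:
z_est = match_redshift(observed, template, loglam, np.arange(0.0, 0.3, 0.001))
```

    Restricting the scan to the classifier's range is exactly what halves the computational load relative to an unconstrained search.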

  13. [Promising new injection method to prevent angialgia/phlebitis from epirubicin hydrochloride therapy for breast cancer].

    PubMed

    Ono, Chiemi; Yamagami, Mitsue; Kamatani, Rika; Yamamoto, Makoto; Mukouyama, Tomoya; Sugimoto, Masakazu; Suzuki, Taizan; Kamo, Nobuyuki; Seki, Nobuhiko; Eguchi, Kenji; Ikeda, Tadashi

    2012-05-01

    Epirubicin hydrochloride (EPI) is well known to cause phlebitis as a typical adverse drug reaction. By preventing the development of severe phlebitis, patients are expected to continue effective chemotherapy with EPI without a decrease in QOL. We have previously reported promising results of a new injection method to prevent phlebitis from occurring during EPI therapy through a prospective clinical trial in our hospital (Jpn J Cancer Chemother 36: 969-974, 2009). In the present study, we compared the conventional injection method (EPI main-route method, n=15) with our new method, which is now consistently practiced (EPI sub-route method, n=77). We found that with the EPI main-route method, angialgia/phlebitis developed in 14 of 15 cases (Grade 3, 53.3%), leading to alteration of the regimen in 3 cases. On the other hand, with the EPI sub-route method, the incidence of angialgia/phlebitis was markedly decreased, and only 6 of 77 cases developed these adverse reactions (Grade 3, 0%). One possible explanation for these results is that the reduced intimal stimulation with the EPI sub-route method might result from the dilution and washout of EPI by the pre-medication, as well as the shortened infusion times of EPI. Therefore, on the basis of the above hypothesis, we conclude that the EPI sub-route method might be a more effective treatment for the expected prevention of angialgia/phlebitis.

  14. DETERMINING COARSE PARTICULATE MATTER CONCENTRATIONS: A PERFORMANCE EVALUATION OF CANDIDATE METHODOLOGIES - STUDY DESIGN AND RESULTS FROM THE RTP EQUIPMENT SHAKEDOWN

    EPA Science Inventory

    The main objective of this study is to evaluate the performance of candidate sampling methods for potential use as a Federal Reference Method (FRM) capable of providing an estimate of coarse particle (PMc: particulate matter with an aerodynamic diameter between 2.5 um and 10 um...

  15. Estimation of evapotranspiration rate in irrigated lands using stable isotopes

    NASA Astrophysics Data System (ADS)

    Umirzakov, Gulomjon; Windhorst, David; Forkutsa, Irina; Brauer, Lutz; Frede, Hans-Georg

    2013-04-01

    Agriculture in the Aral Sea basin is the main consumer of water resources, and under current agricultural management practices inefficient water use causes huge losses of freshwater. There is great potential to save water and achieve more efficient water use in irrigated areas, so research is required to reveal the mechanisms of hydrological fluxes there. This paper focuses on the estimation of evapotranspiration, one of the crucial components in the water balance of irrigated lands. Our main objective is to estimate the rate of evapotranspiration on irrigated lands and to partition it into evaporation and transpiration using stable isotope measurements. Experiments were conducted on irrigated areas with two different soil types (sandy and sandy loam) in the Ferghana Valley (Uzbekistan). Soil samples were collected during the vegetation period. The soil water was extracted from these samples via a cryogenic extraction method and analyzed for the isotopic ratios of the water isotopes (2H and 18O) using a laser spectroscopy method (DLT-100, Los Gatos, USA). Evapotranspiration rates were estimated with the isotope mass balance method. The evapotranspiration results obtained with the isotope mass balance method are compared with results from the Catchment Modeling Framework 1D model applied to the same area over the same period.

  16. OH as an Alternate Tracer for Molecular Gas: Excitation Temperatures of the OH 18 cm Main Lines in W5

    NASA Astrophysics Data System (ADS)

    Engelke, Philip D.; Allen, Ronald J.

    2018-05-01

    We present excitation temperatures T_ex for the OH 18 cm main lines at 1665 and 1667 MHz measured directly in front of the W5 star-forming region, using observations from the Green Bank Telescope and the Very Large Array. We find unequivocally that T_ex at 1665 MHz is greater than T_ex at 1667 MHz. Our method exploits variations in the continuum emission from W5, and the fact that the continuum brightness temperatures T_C in this nebula are close to the excitation temperatures of the OH lines in the foreground gas. The result is that an OH line can appear in emission in one location and in absorption in a neighboring location, and the value of T_C where the profiles switch from emission to absorption indicates T_ex. Absolute measurements of T_ex for the main lines were subject to greater uncertainty because of unknown effects of the geometry of the OH features. We also employed the traditional “expected profile” method for comparison with our “continuum background” method and found that the continuum background method provided more precise results and was the one to definitively show the T_ex difference. Our best estimate values are T_ex(1665) = 6.0 ± 0.5 K, T_ex(1667) = 5.1 ± 0.2 K, and T_ex(1665) − T_ex(1667) = 0.9 ± 0.5 K. The T_ex values we have measured for the ISM in front of W5 are similar to those found in the quiescent ISM, indicating that proximity to massive star-forming regions does not generally result in widespread anomalous excitation of OH emission.

  17. Implementation of hybrid clustering based on partitioning around medoids algorithm and divisive analysis on human Papillomavirus DNA

    NASA Astrophysics Data System (ADS)

    Arimbi, Mentari Dian; Bustamam, Alhadi; Lestari, Dian

    2017-03-01

    Data clustering can be performed with partitioning or hierarchical methods for many types of data, including DNA sequences. The two approaches can be combined by running a partitioning algorithm at the first level and a hierarchical one at the second level, called hybrid clustering. In the partitioning phase, popular methods such as PAM, K-means, or Fuzzy c-means could be applied; in this study we selected partitioning around medoids (PAM). Following the partitioning stage, we applied the divisive analysis algorithm (DIANA) in the hierarchical stage to obtain more specific cluster and sub-cluster structures. The number of main clusters is determined using the Davies-Bouldin Index (DBI): we choose the number of clusters that minimizes the DBI value. In this work, we cluster 1252 HPV DNA sequences from GenBank. Characteristic extraction is performed first, followed by normalization and genetic distance calculation using the Euclidean distance. In our implementation, we used the hybrid PAM and DIANA with the open-source R programming tool. Using PAM in the first stage, we obtained 3 main clusters with an average DBI value of 0.979. After executing DIANA in the second stage, we obtained 4 sub-clusters for Cluster-1, 9 sub-clusters for Cluster-2, and 2 sub-clusters for Cluster-3, with DBI values of 0.972, 0.771, and 0.768 for each main cluster, respectively. Since the second stage produces lower DBI values compared to the first stage, we conclude that this hybrid approach can improve the accuracy of our clustering results.
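
    The Davies-Bouldin selection rule above can be sketched as follows (toy 2-D feature vectors, not the HPV distance data; lower DBI means tighter, better-separated clusters):

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: mean over clusters of the worst
    (scatter_i + scatter_j) / centroid_distance_ij ratio; lower is better."""
    ks = sorted(set(labels))
    cents = [X[labels == k].mean(axis=0) for k in ks]
    scat = [np.mean(np.linalg.norm(X[labels == k] - c, axis=1))
            for k, c in zip(ks, cents)]
    total = 0.0
    for i in range(len(ks)):
        total += max((scat[i] + scat[j]) / np.linalg.norm(cents[i] - cents[j])
                     for j in range(len(ks)) if j != i)
    return total / len(ks)

rng = np.random.default_rng(1)
# Two well-separated toy clusters standing in for sequence feature vectors.
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
good = np.array([0] * 20 + [1] * 20)  # matches the true grouping
bad = np.array([0, 1] * 20)           # deliberately scrambled labels
db_good, db_bad = davies_bouldin(X, good), davies_bouldin(X, bad)
```

    Comparing DBI across candidate partitions, as the study does for the PAM and DIANA stages, amounts to picking the labeling with the smaller index.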

  18. Comparative analysis of methods for detecting interacting loci

    PubMed Central

    2011-01-01

    Background Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. Results We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. 
Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. Conclusion This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list. PMID:21729295

  19. Estimation of the binding ability of main transport proteins of blood plasma with liver cirrhosis by the fluorescent probe method

    NASA Astrophysics Data System (ADS)

    Korolenko, E. A.; Korolik, E. V.; Korolik, A. K.; Kirkovskii, V. V.

    2007-07-01

    We present results from an investigation of the binding ability of the main transport proteins (albumin, lipoproteins, and α-1-acid glycoprotein) of blood plasma from patients at different stages of liver cirrhosis by the fluorescent probe method. We used the hydrophobic fluorescent probes anionic 8-anilinonaphthalene-1-sulfonate, which interacts in blood plasma mainly with albumin; cationic Quinaldine red, which interacts with α-1-acid glycoprotein; and neutral Nile red, which redistributes between lipoproteins and albumin in whole blood plasma. We show that the binding ability of albumin and α-1-acid glycoprotein to negatively charged and positively charged hydrophobic metabolites, respectively, increases in the compensation stage of liver cirrhosis. As the pathology process deepens and transitions into the decompensation stage, the transport abilities of albumin and α-1-acid glycoprotein decrease whereas the binding ability of lipoproteins remains high.

  20. [Dynamic studies on the metabolic chemistry of lignans from seeds of Arctium lappa].

    PubMed

    Zheng, Yi-min; Cai, Shao-xi; Xu, Xiu-ying; Fu, Shan-quan

    2005-08-01

    To study the metabolic chemistry and pharmacodynamic characteristics of lignans from seeds of Arctium lappa, an HPLC method was used. The analysis was carried out on a C18 column. The mobile phase was CH3CN-0.05% H3PO4 (36:64) with a flow rate of 0.6 mL x min(-1) and a detection wavelength of 210 nm. The column temperature was kept at 25 degrees C. The results indicated that the lignan was detected in plasma and the main organs 5 min after oral administration. The main metabolic product in plasma was arctigenin; in addition, arctigenin and an unknown product were found among the metabolic products in the organs. The method was stable, simple, and reproducible, and can be used to determine the metabolic products of the lignan. The metabolic chemistry of the lignan in plasma was obviously different from that in the main organs.

  1. Researching on Control Device of Prestressing Wire Reinforcement

    NASA Astrophysics Data System (ADS)

    Si, Jianhui; Guo, Yangbo; Liu, Maoshe

    2017-06-01

    This paper mainly introduces a device for controlling prestress and its related research methods. The advantage of this method is that the reinforcement process is easy to operate and the prestress of the wire rope is controlled accurately. The relationship between the stress and strain of the steel wire rope was monitored during the experiment, and a one-to-one relationship between the controllable position and the pretightening force was confirmed for a 5 mm steel wire rope; the results were analyzed theoretically using the measured elastic modulus. The results show that the method can effectively control the prestressing force and provide a reference method for strengthening concrete columns with prestressed steel strand.

  2. Atlanta Integrated Fare Collection Demonstration

    DOT National Transportation Integrated Search

    1982-09-01

    This report describes the evaluation results of the Atlanta Integrated Fare Collection Demonstration. One of the main purposes of the demonstration, which was funded through the UMTA Service and Methods Demonstration Program, was to assess the extent...

  3. An information theory criteria based blind method for enumerating active users in DS-CDMA system

    NASA Astrophysics Data System (ADS)

    Samsami Khodadad, Farid; Abed Hodtani, Ghosheh

    2014-11-01

    In this paper, a new blind algorithm for active user enumeration in asynchronous direct sequence code division multiple access (DS-CDMA) in a multipath channel scenario is proposed. The proposed method is based on information theory criteria. Two main categories of information criteria are widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Due to this difference, MDL is a consistent enumerator with better performance at higher signal-to-noise ratios (SNR), whereas AIC is preferred at lower SNRs. We therefore propose an SNR-adaptive method, based on subspace analysis and a training genetic algorithm, that attains the performance of both. Moreover, our method uses only a single antenna, unlike previous methods, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and demonstrate the efficiency of the method.
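
    The MDL-style enumeration referenced above can be sketched on the eigenvalues of a sample covariance matrix (Wax-Kailath form): the estimated source count is the k minimizing a log-likelihood term plus a penalty. The 8-sensor, 3-source synthetic data below is illustrative, not a CDMA receiver model:

```python
import numpy as np

def mdl_enumerate(eigvals, n_snapshots):
    """Wax-Kailath MDL criterion: pick the model order k minimizing MDL(k),
    where the likelihood term compares geometric and arithmetic means of
    the smallest p-k eigenvalues."""
    lam = np.sort(eigvals)[::-1]
    p = len(lam)
    scores = []
    for k in range(p):
        tail = lam[k:]
        geo = np.exp(np.mean(np.log(tail)))
        arith = np.mean(tail)
        loglik = -n_snapshots * (p - k) * np.log(geo / arith)
        penalty = 0.5 * k * (2 * p - k) * np.log(n_snapshots)
        scores.append(loglik + penalty)
    return int(np.argmin(scores))

# Synthetic example: 8 sensors, 3 sources ("active users") above the noise floor.
rng = np.random.default_rng(2)
n = 500
A = rng.normal(size=(8, 3))           # mixing matrix
S = rng.normal(size=(3, n))           # source signals
X = A @ S + 0.1 * rng.normal(size=(8, n))
eigvals = np.linalg.eigvalsh(X @ X.T / n)
k_hat = mdl_enumerate(eigvals, n)     # should recover 3
```

    The stronger MDL penalty 0.5·k(2p−k)·log N is exactly what makes the estimator consistent at high SNR, as the abstract notes; AIC replaces it with k(2p−k).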

  4. Probabilistic Structural Analysis Methods for select space propulsion system components (PSAM). Volume 2: Literature surveys of critical Space Shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Rajagopal, K. R.

    1992-01-01

    The technical effort and computer code development are summarized. Several formulations for Probabilistic Finite Element Analysis (PFEA) are described, with emphasis on the selected formulation. The strategies being implemented in the first-version computer code to perform linear, elastic PFEA are described. The results of a series of select Space Shuttle Main Engine (SSME) component surveys are presented. These results identify the critical components and provide the information necessary for probabilistic structural analysis. Volume 2 is a summary of critical SSME components.

  5. Detection of main tidal frequencies using least squares harmonic estimation method

    NASA Astrophysics Data System (ADS)

    Mousavian, R.; Hossainali, M. Mashhadi

    2012-11-01

    In this paper the efficiency of the method of Least Squares Harmonic Estimation (LS-HE) for detecting the main tidal frequencies is investigated. Using this method, the tidal spectrum of sea level data is evaluated at two tidal stations: Bandar Abbas in the south of Iran and Workington on the west coast of the UK. The amplitudes of the tidal constituents at these two stations are not the same. Moreover, in contrast to the Workington station, the Bandar Abbas tidal record is not an equispaced time series. Therefore, the analysis of the hourly tidal observations at Bandar Abbas and Workington can provide a reasonable insight into the efficiency of this method for analyzing the frequency content of tidal time series. Furthermore, applying the Fourier transform to the Workington tidal record provides an independent source of information for evaluating the tidal spectrum proposed by the LS-HE method. According to the obtained results, the spectra of these two tidal records contain the components with the maximum amplitudes among those expected in this time span, as well as some new frequencies beyond the list of known constituents. In addition, in terms of the frequencies with maximum amplitude, the power spectra derived from the two aforementioned methods are the same. These results demonstrate the ability of LS-HE to identify the frequencies with maximum amplitude in both tidal records.
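
    The LS-HE idea, fitting a cosine/sine pair at each candidate frequency by least squares so that unevenly spaced records (like the Bandar Abbas data) pose no problem, can be sketched as follows. The irregular synthetic record with M2- and S2-like lines is illustrative, not the station data:

```python
import numpy as np

rng = np.random.default_rng(3)
# Unevenly sampled toy "sea level" record over ~30 days (times in hours).
t = np.sort(rng.uniform(0, 30 * 24, 2000))
f_m2, f_s2 = 1 / 12.42, 1 / 12.0  # cycles per hour
h = (1.0 * np.cos(2 * np.pi * f_m2 * t + 0.3)
     + 0.4 * np.cos(2 * np.pi * f_s2 * t + 1.1)
     + 0.05 * rng.normal(size=t.size))

def lshe_amplitude(t, y, f):
    """Least-squares fit of a cos/sin pair at frequency f; return the amplitude."""
    A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.hypot(*coef))

freqs = np.linspace(0.05, 0.12, 800)  # candidate band around the semidiurnal lines
spec = np.array([lshe_amplitude(t, h, f) for f in freqs])
f_peak = float(freqs[np.argmax(spec)])  # should land on the M2-like line
```

    Scanning the amplitude over candidate frequencies yields a spectrum whose largest peaks identify the dominant constituents, which is how the two stations' spectra are compared above.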

  6. Investigations of Shuttle Main Landing Gear Door Environmental Seals

    NASA Technical Reports Server (NTRS)

    Finkbeiner, Joshua; Dunlap, Pat; Steinetz, Bruce; DeMango, Jeff; Newswander, Daniel

    2005-01-01

    The environmental seals for the main landing gear doors of the Shuttle Orbiters were raised by the Columbia Accident Investigation Board as a potential safety concern. Inspections of seals installed on the Shuttle Discovery revealed that they were permanently deformed and no longer met certified seal compression requirements. Replacement of the seals led to the inability to fully close the main landing gear doors. Johnson Space Center requested that Glenn Research Center conduct tests on the main landing gear door environmental seals to assist in installing the seals in a manner to allow the main landing gear doors to fully close. Further testing was conducted to fill out the seal performance database. Results from the testing indicated that the method of bonding the seals was important in reducing seal loads on the main landing gear doors. Also, the replacement seals installed in Shuttle Discovery were found to have leakage performance sufficient to meet the certification requirements.

  7. Simulation of Plasma Jet Merger and Liner Formation within the PLX- α Project

    NASA Astrophysics Data System (ADS)

    Samulyak, Roman; Chen, Hsin-Chiang; Shih, Wen; Hsu, Scott

    2015-11-01

    Detailed numerical studies of the propagation and merger of high Mach number argon plasma jets and the formation of plasma liners have been performed using the newly developed method of Lagrangian particles (LP). The LP method significantly improves the accuracy and mathematical rigor of common particle-based numerical methods such as smoothed particle hydrodynamics while preserving their main advantages compared to grid-based methods. A brief overview of the LP method will be presented. The Lagrangian particle code implements the main relevant physics models, such as an equation of state for argon undergoing atomic physics transformations, radiation losses in the optically thin limit, and heat conduction. Simulations of the merger of two plasma jets are compared with experimental data from past PLX experiments. Simulations quantify the effect of oblique shock waves, ionization, and radiation processes on the jet merger process. Results of preliminary simulations of future PLX-alpha experiments involving the ~π/2-solid-angle plasma-liner configuration with 9 guns will also be presented. Partially supported by ARPA-E's ALPHA program.

  8. Two-Dimensional Fourier Transform Analysis of Helicopter Flyover Noise

    NASA Technical Reports Server (NTRS)

    SantaMaria, Odilyn L.; Farassat, F.; Morris, Philip J.

    1999-01-01

    A method to separate main rotor and tail rotor noise from a helicopter in flight is explored. Being the sum of two periodic signals of disproportionate, or incommensurate, frequencies, helicopter noise is neither periodic nor stationary. A one-dimensional Fourier transform divides signal energy into frequency bins of equal size, so incommensurate frequencies are not adequately represented by any one chosen data block size. A two-dimensional Fourier analysis method is therefore used to separate main rotor and tail rotor noise. The two-dimensional spectral analysis method is first applied to simulated signals; this initial analysis gives an idea of the characteristics of the two-dimensional autocorrelations and spectra. Data from a helicopter flight test are then analyzed in two dimensions. The test aircraft are a Boeing MD902 Explorer (no tail rotor) and a Sikorsky S-76 (4-bladed tail rotor). The results show that the main rotor and tail rotor signals can indeed be separated in the two-dimensional Fourier transform spectrum. The separation occurs along the diagonals associated with the frequencies of interest; these diagonals are individual spectra containing only information related to one particular frequency.
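
    The block-size issue described above can be illustrated with a small numerical sketch (this is a simplification, not the authors' exact procedure, and all signal parameters are invented). If the analysis block is chosen as exactly one "main rotor" period, reshaping the time series into a 2D array makes the main-rotor component identical in every row; averaging over rows, which corresponds to keeping the zero row of the 2D Fourier spectrum, then recovers the main-rotor waveform and leaves the incommensurate "tail rotor" component in the residual:

```python
import numpy as np

fs = 1024            # sampling rate, Hz (illustrative values throughout)
f_main = 16.0        # "main rotor" tone: period = 64 samples exactly
f_tail = 90.5        # "tail rotor" tone: incommensurate with the block size
P = int(fs / f_main) # samples per main-rotor period (64)
M = 16               # number of periods analyzed

t = np.arange(M * P) / fs
s = np.sin(2 * np.pi * f_main * t) + 0.5 * np.sin(2 * np.pi * f_tail * t)

# Arrange the signal on a 2D grid: one main-rotor period per row.
X = s.reshape(M, P)

# Averaging over rows keeps only the component that is periodic in P
# (equivalently, the zero row of np.fft.fft2(X)); the incommensurate
# component, whose phase drifts from row to row, nearly cancels.
main_est = X.mean(axis=0)            # ~ main-rotor waveform
tail_est = (X - main_est).ravel()    # ~ tail-rotor component
```

    With these values the row average reproduces the 16 Hz waveform to within a few percent; the full 2D spectrum used in the paper generalizes this idea by resolving the second component along its own frequency axis.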

  9. Numerical analysis of a main crack interactions with micro-defects/inhomogeneities using two-scale generalized/extended finite element method

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felício B.

    2017-12-01

    The generalized or extended finite element method (G/XFEM) models the crack by enriching partition-of-unity functions with discontinuous functions that represent well the physical behavior of the problem. However, such enrichment functions are not available for all problem types, so numerically built (global-local) enrichment functions can be used to obtain a better approximation. This paper investigates the effects of micro-defects/inhomogeneities on the behavior of a main crack by modeling the micro-defects/inhomogeneities in the local problem using a two-scale G/XFEM. The global-local enrichment functions are influenced by the micro-defects/inhomogeneities in the local problem and thus change the approximate solution of the global problem containing the main crack. The approach is presented in detail by solving three linear elastic fracture mechanics problems: two plane-stress problems and a Reissner-Mindlin plate problem. The numerical results obtained with the two-scale G/XFEM are compared with reference solutions from analytical models, from numerical solutions using the standard G/XFEM method and ABAQUS, and from the literature.

  10. Hydration: certain basic aspects for developing technical and scientific parameters into the nutrition knowledge

    PubMed

    Perales-García, Aránzazu; Estévez-Martínez, Isabel; Urrialde, Rafael

    2016-07-12

    Introduction: Hydration is defined as the water intake coming from food and beverages, and its study has become a field of its own within nutrition science. In 2010 the European Food Safety Authority (EFSA) approved water intake recommendations, but the study of this topic requires a rigorous methodology, which raises several issues. Objective: To show at a glance the main methodological issues in hydration studies. Material and methods: Bibliographic review of the scientific literature. Results: The main methodological issues identified are: sample selection (field of investigation and sample design); selection of the method to evaluate hydration status (dilution techniques, bioelectrical impedance, plasma and urinary indicators, changes in body composition, water losses and clinical symptoms); selection of the method to evaluate water intake (biomarkers, questionnaires, computer programs, smartphone use, 24-h records, dietary history and food frequency questionnaires); and the main sources of hydration. Conclusions: Hydration status should be understood as a routine model, with daily frequency, according to gender, age, physical activity and environmental conditions. Furthermore, the correct design of the methodology is especially important in order to take into account all the aspects

  11. Comparison of Peak-Flow Estimation Methods for Small Drainage Basins in Maine

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Hebson, Charles; Lombard, Pamela J.; Mann, Alexander

    2007-01-01

    Understanding the accuracy of commonly used methods for estimating peak streamflows is important because the designs of bridges, culverts, and other river structures are based on these flows. Different methods for estimating peak streamflows were analyzed for small drainage basins in Maine. For the smallest basins, with drainage areas of 0.2 to 1.0 square mile, nine peak streamflows from actual rainfall events at four crest-stage gaging stations were modeled by the Rational Method and the Natural Resources Conservation Service TR-20 method and compared to observed peak flows. The Rational Method had a root mean square error (RMSE) of -69.7 to 230 percent (which means that approximately two-thirds of the modeled flows were within -69.7 to 230 percent of the observed flows). The TR-20 method had an RMSE of -98.0 to 5,010 percent. Both the Rational Method and TR-20 underestimated the observed flows in most cases. For small basins, with drainage areas of 1.0 to 10 square miles, modeled peak flows were compared to observed statistical peak flows with return periods of 2, 50, and 100 years for 17 streams in Maine and adjoining parts of New Hampshire. Peak flows were modeled by the Rational Method, the Natural Resources Conservation Service TR-20 method, U.S. Geological Survey regression equations, and the Probabilistic Rational Method. The regression equations were the most accurate method of computing peak flows in Maine for streams with drainage areas of 1.0 to 10 square miles, with an RMSE of -34.3 to 52.2 percent for 50-year peak flows. The Probabilistic Rational Method was the next most accurate method (-38.5 to 62.6 percent). The Rational Method (-56.1 to 128 percent) and particularly the TR-20 method (-76.4 to 323 percent) had much larger errors. Both the TR-20 and regression methods had similar numbers of underpredictions and overpredictions. 
The Rational Method overpredicted most peak flows and the Probabilistic Rational Method tended to overpredict peak flows from the smaller (less than 5 square miles) drainage basins and underpredict peak flows from larger drainage basins. The results of this study are consistent with the most comprehensive analysis of observed and modeled peak streamflows in the United States, which analyzed statistical peak flows from 70 drainage basins in the Midwest and the Northwest.
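
    For reference, the Rational Method compared above computes a peak discharge as Q = CiA; with rainfall intensity i in inches per hour and drainage area A in acres, the product is numerically in cubic feet per second. A minimal sketch with invented basin values, including the percent-error measure used in comparisons like this one:

```python
def rational_peak_flow(runoff_coeff, intensity_in_hr, area_acres):
    """Rational Method: Q = C * i * A.

    With i in in/hr and A in acres, Q comes out in ft^3/s (the exact
    unit-conversion factor, 1.008, is customarily taken as 1.0)."""
    return runoff_coeff * intensity_in_hr * area_acres

def percent_error(modeled, observed):
    return 100.0 * (modeled - observed) / observed

# Hypothetical basin: C = 0.3, a 2 in/hr design storm, 1 sq mi = 640 acres
q = rational_peak_flow(0.3, 2.0, 640.0)   # ~384 ft^3/s
err = percent_error(q, 500.0)             # modeled vs. a hypothetical observed peak
```

    A negative percent error, as in this example, corresponds to the underestimation reported for most of the smallest basins in the study.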

  12. Game theory based models to analyze water conflicts in the Middle Route of the South-to-North Water Transfer Project in China.

    PubMed

    Wei, Shouke; Yang, Hong; Abbaspour, Karim; Mousavi, Jamshid; Gnauck, Albrecht

    2010-04-01

    This study applied game theory based models to analyze and solve water conflicts concerning water allocation and nitrogen reduction in the Middle Route of the South-to-North Water Transfer Project in China. The game simulation comprised two levels: one main game with five players and four sub-games, each containing three sub-players. We used statistical and econometric regression methods to formulate the payoff functions of the players, economic valuation methods (EVMs) to transform non-monetary values into economic ones, cost-benefit analysis (CBA) to compare the game outcomes, and scenario analysis to investigate future uncertainties. The validity of the game simulation was evaluated by comparing predictions with observations. The main results proved that cooperation would make the players collectively better off, though some players would face losses. However, players were not willing to cooperate, which would result in a prisoners' dilemma. Scenario simulation results showed that players in the water-scarce area could not solve its severe water deficit problem without cooperation with other players, even under an optimistic scenario, while the uncertainty of cooperation would come from the main polluters. The results suggest a need to design a mechanism that reduces the risk of losses for those players through a side payment, which provides them with economic incentives to cooperate. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  13. [Essential characteristics of qualitative research and its commonly used methods].

    PubMed

    Zhang, Hong-wei

    2008-02-01

    The main objectives of qualitative research lie in exploring the opinions, attitudes, behavior, and experience of a person in a social role, including that of a patient. This essay introduces the basic characteristics of qualitative research, including its naturalistic character, its inductive method, its openness, and its holistic perspective; the results of qualitative research are presented in text form, and its commonly used methods include observation, individual interviews and focus group discussions.

  14. Round Robin Test of Residual Resistance Ratio of Nb3Sn Composite Superconductors

    DOE PAGES

    Matsushita, Teruo; Otabe, Edmund Soji; Kim, Dong Ho; ...

    2017-12-07

    A round robin test of the residual resistance ratio (RRR) was performed by six institutes on Nb3Sn composite superconductors prepared by the internal tin method, using the international standard test method described in IEC 61788-4. It was found that the uncertainty mainly resulted from determination of the cryogenic resistance from the intersection of two straight lines drawn to fit the voltage vs. temperature curve around the resistive transition. As a result, the measurement clarified that RRR can be measured with an expanded uncertainty not larger than 5% with a coverage factor of 2 by using this test method.
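
    The two-line construction mentioned above can be sketched numerically (synthetic data and invented values, not the IEC 61788-4 reference implementation): fit one straight line to the steep resistive transition and one to the normal-state region just above it, take the voltage at their intersection as the cryogenic voltage, and divide the room-temperature resistance by the resulting cryogenic resistance:

```python
import numpy as np

I_meas = 0.1   # measuring current, A (invented)

# Synthetic V(T) points (volts) around the resistive transition:
T_trans = np.array([18.02, 18.06, 18.10, 18.14, 18.18])
V_trans = 5e-4 * (T_trans - 18.0)            # steep transition branch
T_norm = np.array([18.3, 18.5, 18.7, 18.9])
V_norm = 1e-4 + 1e-6 * (T_norm - 18.2)       # nearly flat normal branch

a1, b1 = np.polyfit(T_trans, V_trans, 1)     # line through the transition
a2, b2 = np.polyfit(T_norm, V_norm, 1)       # line through the normal state
T_star = (b2 - b1) / (a1 - a2)               # intersection temperature
R_cryo = (a1 * T_star + b1) / I_meas         # cryogenic resistance, ohm

R_295 = 0.15   # room-temperature resistance, ohm (invented)
rrr = R_295 / R_cryo                         # residual resistance ratio
```

    With this synthetic curve the intersection falls at about 18.2 K and the sketch returns an RRR of about 150; the round-robin uncertainty arises precisely because different operators choose different fitting ranges for the two lines.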

  15. Application of the probabilistic approximate analysis method to a turbopump blade analysis. [for Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.

    1990-01-01

    An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.

  16. Round Robin Test of Residual Resistance Ratio of Nb3Sn Composite Superconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsushita, Teruo; Otabe, Edmund Soji; Kim, Dong Ho

    A round robin test of the residual resistance ratio (RRR) was performed by six institutes on Nb3Sn composite superconductors prepared by the internal tin method, using the international standard test method described in IEC 61788-4. It was found that the uncertainty mainly resulted from determination of the cryogenic resistance from the intersection of two straight lines drawn to fit the voltage vs. temperature curve around the resistive transition. As a result, the measurement clarified that RRR can be measured with an expanded uncertainty not larger than 5% with a coverage factor of 2 by using this test method.

  17. Tooth shape optimization of brushless permanent magnet motors for reducing torque ripples

    NASA Astrophysics Data System (ADS)

    Hsu, Liang-Yi; Tsai, Mi-Ching

    2004-11-01

    This paper presents a tooth shape optimization method based on a genetic algorithm to reduce the torque ripple of brushless permanent magnet motors under two different magnetization directions. The analysis of this design method mainly focuses on magnetic saturation and cogging torque, and the computation of the optimization process is based on an equivalent magnetic network circuit. Simulation results obtained from finite element analysis are used to confirm the accuracy and performance. Finite element analysis results for different tooth shapes are compared to show the effectiveness of the proposed method.

  18. Preliminary study for understanding the moderating role of government regulations in telecom sector of Pakistan

    NASA Astrophysics Data System (ADS)

    Tariq, Beenish; Mat, Nik Kamariah Nik

    2017-10-01

    The telecommunication sector of Pakistan is a significant contributor to the economic development of Pakistan. However, the sector underwent many regulatory and marketing changes in 2015, resulting in decreased cellular penetration, a drop in cellular subscribers and decreased telecommunication revenue. Hence, this research paper is designed to validate the constructs used in addressing the moderating role of government regulations, based on Oliver's four-stage loyalty model, in the telecom sector of Pakistan. This preliminary study mainly employed a quantitative method (a survey questionnaire) consisting of a total of 72 items related to the eight constructs under study, using a 7-point Likert scale. The main analysis method used is the reliability test of the constructs. The results reveal that the Cronbach alpha readings were between 0.756 and 0.932, indicating internally consistent and reliable measures of the constructs used. This result enables the constructs to be included in the actual data collection without change.

  19. Reduction of CO2 Emissions Due to Wind Energy - Methods and Issues in Estimating Operational Emission Reductions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holttinen, Hannele; Kiviluoma, Juha; McCann, John

    2015-10-05

    This paper presents ways of estimating the CO2 reductions of wind power using different methodologies. Estimates based on historical data have more methodological pitfalls than estimates based on dispatch simulations. Taking into account the exchange of electricity with neighboring regions is challenging for all methods. Results for CO2 emission reductions are shown for several countries. Wind power reduces emissions by about 0.3-0.4 tCO2/MWh when replacing mainly gas, and by up to 0.7 tCO2/MWh when replacing mainly coal-fired generation. The paper focuses on CO2 emissions from the power system operation phase, but long-term impacts are briefly discussed.

  20. Damage monitoring of aircraft structures made of composite materials using wavelet transforms

    NASA Astrophysics Data System (ADS)

    Molchanov, D.; Safin, A.; Luhyna, N.

    2016-10-01

    The present article is dedicated to the study of the acoustic properties of composite materials and the application of non-destructive testing methods to aircraft components. A mathematical model of a wavelet transformed signal is presented. The main acoustic (vibration) properties of different composite material structures were researched. Multiple vibration parameter dependencies on the noise reduction factor were derived. The main steps of a research procedure and new method algorithm are presented. The data obtained was compared with the data from a three dimensional laser-Doppler scanning vibrometer, to validate the results. The new technique was tested in the laboratory and on civil aircraft at a training airfield.

  1. Testing of Raman spectroscopy method for assessment of skin implants

    NASA Astrophysics Data System (ADS)

    Timchenko, E. V.; Timchenko, P. E.; Volova, L. T.; Pershutkina, S. V.; Shalkovskaya, P. Y.

    2016-11-01

    Results of testing the Raman spectroscopy (RS) method for the assessment of skin implants are presented. Samples of rat skin material were used as the objects of study. The main spectral differences between implants prepared with various types of processing appear at wavenumbers 1062 cm-1, 1645 cm-1, 1553 cm-1, 851 cm-1, 863 cm-1, 814 cm-1 and 1410 cm-1. Optical coefficients for the assessment of skin implants were introduced. The research results are confirmed by morphological analysis.

  2. The theory and method of variable frequency directional seismic wave under the complex geologic conditions

    NASA Astrophysics Data System (ADS)

    Jiang, T.; Yue, Y.

    2017-12-01

    It is well known that mono-frequency directional seismic wave technology can concentrate seismic waves into a beam. However, little work has been done on the method and effects of variable frequency directional seismic waves under complex geological conditions. We studied variable frequency directional wave theory in several respects. Firstly, we studied the relation between the directional parameters and the direction of the main beam. Secondly, we analyzed the parameters that significantly affect the width of the main beam, such as vibrator spacing, wavelet dominant frequency, and number of vibrators. In addition, we studied the characteristics of variable frequency directional seismic waves in typical velocity models. To examine the propagation characteristics of directional seismic waves, we designed appropriate parameters according to the character of the directional parameters, which makes it possible to enhance the energy in the main beam direction. Directional seismic waves were further analyzed from the viewpoint of the power spectrum. The results indicate that the energy intensity in the main beam direction increased 2 to 6 times for a multi-ore-body velocity model. This shows that variable frequency directional seismic technology provides an effective way to strengthen the target signals under complex geological conditions. For a concave interface model, we introduced a complicated directional seismic technology that supports multiple main beams to obtain high quality data. Finally, we applied the 9-element variable frequency directional seismic wave technology to process raw data acquired in an oil-shale exploration area. The results show that the depth of exploration increased 4 times with the directional seismic wave method. 
Based on the above analysis, we draw the conclusion that variable frequency directional seismic wave technology can improve the target signals under different geologic conditions and increase exploration depth at little cost. Owing to the inconvenience of deploying hydraulic vibrators in areas with complicated surface conditions, we suggest that the combination of a high-frequency portable vibrator with the variable frequency directional seismic wave method is an alternative technology for increasing the depth of exploration or prospecting.

  3. Andrei Andreevich Bolibrukh's works on the analytic theory of differential equations

    NASA Astrophysics Data System (ADS)

    Anosov, Dmitry V.; Leksin, Vladimir P.

    2011-02-01

    This paper contains an account of A.A. Bolibrukh's results obtained in the new directions of research that arose in the analytic theory of differential equations as a consequence of his sensational counterexample to the Riemann-Hilbert problem. A survey of results of his students in developing topics first considered by Bolibrukh is also presented. The main focus is on the role of the reducibility/irreducibility of systems of linear differential equations and their monodromy representations. A brief synopsis of results on the multidimensional Riemann-Hilbert problem and on isomonodromic deformations of Fuchsian systems is presented, and the main methods in the modern analytic theory of differential equations are sketched. Bibliography: 69 titles.

  4. A Method for Modeling the Intrinsic Dynamics of Intraindividual Variability: Recovering the Parameters of Simulated Oscillators in Multi-Wave Panel Data.

    ERIC Educational Resources Information Center

    Boker, Steven M.; Nesselroade, John R.

    2002-01-01

    Examined two methods for fitting models of intrinsic dynamics to intraindividual variability data by testing these techniques' behavior in equations through simulation studies. Among the main results is the demonstration that a local linear approximation of derivatives can accurately recover the parameters of a simulated linear oscillator, with…
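
    The local linear approximation (LLA) result mentioned above can be sketched as follows (a minimal illustration with an invented sampling setup, not the authors' multi-wave panel design): derivatives are estimated from adjacent occasions by finite differences, and the oscillator parameters are recovered by regressing the second derivative on the state and the first derivative:

```python
import numpy as np

dt, omega = 0.1, 1.0                  # sampling interval and true frequency
t = np.arange(500) * dt
x = np.sin(omega * t)                 # simulated undamped linear oscillator

# Local linear approximation of derivatives from adjacent occasions:
dx = (x[2:] - x[:-2]) / (2 * dt)                # first derivative
d2x = (x[2:] - 2 * x[1:-1] + x[:-2]) / dt**2    # second derivative
xm = x[1:-1]

# Fit the linear oscillator model d2x = eta * x + zeta * dx.
A = np.column_stack([xm, dx])
(eta, zeta), *_ = np.linalg.lstsq(A, d2x, rcond=None)
```

    For this undamped unit-frequency oscillator the regression recovers eta close to -omega² (i.e., -1) and zeta close to 0, illustrating how the simulated oscillator's parameters can be read back off the fitted model.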

  5. Ambivalence and Fluidity in the Teenage Smoking and Quitting Experience: Lessons from a Qualitative Study at an English Secondary School

    ERIC Educational Resources Information Center

    Buswell, Marina; Duncan, Peter

    2013-01-01

    Objective: To evaluate a school-based stop smoking pilot project and to understand the teenage experience of smoking and quitting within that context. Design: Flexible design methods. Setting: A Kent (United Kingdom [UK]) secondary school. Methods: Semi-structured interviews analyzed following a grounded theory approach. Results: The main themes…

  6. Theoretical Estimation of the Processes While Using Casting Methods of Obtaining MMC

    NASA Astrophysics Data System (ADS)

    Popov, V.

    2017-08-01

    It is a well-known problem that silicon carbide particles are poorly wetted by aluminum melt, which is the main factor limiting the wide use of metal matrix composites produced by casting technologies. This paper seeks a theoretical explanation of this problem. As a result, the paper recommends the use of solid-state methods for the preparation of MMCs with nanoreinforcements.

  7. Signal injection as a fault detection technique.

    PubMed

    Cusidó, Jordi; Romeral, Luis; Ortega, Juan Antonio; Garcia, Antoni; Riba, Jordi

    2011-01-01

    Double frequency tests are used for evaluating stator windings and analyzing the temperature. Likewise, signal injection on induction machines is used on sensorless motor control fields to find out the rotor position. Motor Current Signature Analysis (MCSA), which focuses on the spectral analysis of stator current, is the most widely used method for identifying faults in induction motors. Motor faults such as broken rotor bars, bearing damage and eccentricity of the rotor axis can be detected. However, the method presents some problems at low speed and low torque, mainly due to the proximity between the frequencies to be detected and the small amplitude of the resulting harmonics. This paper proposes the injection of an additional voltage into the machine being tested at a frequency different from the fundamental one, and then studying the resulting harmonics around the new frequencies appearing due to the composition between injected and main frequencies.
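
    The classical MCSA signature discussed here, broken-rotor-bar sidebands at (1 ± 2s)f around the supply frequency, can be sketched on a synthetic current (invented amplitudes and slip; real diagnostics would of course use measured stator current):

```python
import numpy as np

fs = 1000.0                        # sampling rate, Hz
N = 10000                          # 10 s record -> 0.1 Hz resolution
t = np.arange(N) / fs
f_supply, slip = 50.0, 0.03        # invented operating point

# Stator current: fundamental plus (1 +/- 2s)f broken-bar sidebands.
i_s = (np.sin(2 * np.pi * f_supply * t)
       + 0.02 * np.sin(2 * np.pi * (1 - 2 * slip) * f_supply * t)
       + 0.02 * np.sin(2 * np.pi * (1 + 2 * slip) * f_supply * t))

freqs = np.fft.rfftfreq(N, 1.0 / fs)
spec = np.abs(np.fft.rfft(i_s)) / (N / 2)    # single-sided amplitude

def amplitude_at(f_hz):
    return spec[np.argmin(np.abs(freqs - f_hz))]

# Sideband level relative to the fundamental, in dB (diagnostic quantity).
ratio_db = 20 * np.log10(amplitude_at((1 - 2 * slip) * f_supply)
                         / amplitude_at(f_supply))
```

    At low slip the sidebands at 47 and 53 Hz crowd the 50 Hz fundamental, which is exactly the resolution problem at low speed and low torque that motivates injecting an additional frequency.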

  8. Ergonomics analyses of five joineries located in Florianópolis-SC, using the LEST Method.

    PubMed

    Vergara, Lizandra Lupi Garcia; Garcia, Carolina Schwinden; Miranda, Felipe Vergara

    2012-01-01

    Considering that the goal of Ergonomic Work Analysis is to establish, from the point of view of workers, safe, healthy, comfortable and efficient environments, this study analyzes the work situation of machine operators at five joineries in Florianópolis-SC. The LEST method was applied to evaluate the tasks performed by the operators, considering the physical, cognitive and organizational work environment. The main ergonomic problems of these workstations were identified, and an ergonomic diagnosis and its implications for the health and safety of workers are presented. It was concluded that the main ergonomic problems at joineries are related to noise, constant load carrying and the postures adopted. Besides these problems, others were diagnosed, for example, the pressure on workers to comply strictly with the stipulated tasks and the poor training and qualification of workers.

  9. Signal Injection as a Fault Detection Technique

    PubMed Central

    Cusidó, Jordi; Romeral, Luis; Ortega, Juan Antonio; Garcia, Antoni; Riba, Jordi

    2011-01-01

    Double frequency tests are used for evaluating stator windings and analyzing the temperature. Likewise, signal injection on induction machines is used on sensorless motor control fields to find out the rotor position. Motor Current Signature Analysis (MCSA), which focuses on the spectral analysis of stator current, is the most widely used method for identifying faults in induction motors. Motor faults such as broken rotor bars, bearing damage and eccentricity of the rotor axis can be detected. However, the method presents some problems at low speed and low torque, mainly due to the proximity between the frequencies to be detected and the small amplitude of the resulting harmonics. This paper proposes the injection of an additional voltage into the machine being tested at a frequency different from the fundamental one, and then studying the resulting harmonics around the new frequencies appearing due to the composition between injected and main frequencies. PMID:22163801

  10. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include Pearson and Spearman correlation, sample and rank regression, analysis of variance, the Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to the sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
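
    The first-order contribution that Sobol's method quantifies, Var(E[y|x_i])/Var(y), can be estimated crudely by binning each input, as in this sketch on an invented toy model with an interaction term (not the SHEDS testbed; the binning estimator is a simple stand-in for the pick-freeze machinery of a full Sobol analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x1, x2 = rng.random(n), rng.random(n)
y = x1 + 0.5 * x2 + x1 * x2      # toy model: additive terms + interaction

def first_order_sobol(x, y, bins=50):
    """Estimate S_i = Var(E[y|x]) / Var(y) by binning x on [0, 1]."""
    edges = np.linspace(0.0, 1.0, bins + 1)[1:-1]
    idx = np.digitize(x, edges)
    cond_means = np.array([y[idx == k].mean() for k in range(bins)])
    return cond_means.var() / y.var()

s1 = first_order_sobol(x1, y)    # ~0.67: x1 drives most of the variance
s2 = first_order_sobol(x2, y)    # ~0.30
```

    The two indices sum to less than one because the x1*x2 interaction contributes the remaining variance, which is exactly what separates total-effect from first-order (main) contributions in the terminology above.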

  11. Determination of acrylamide in coffee and coffee products by GC-MS using an improved SPE clean-up.

    PubMed

    Soares, C; Cunha, S; Fernandes, J

    2006-12-01

    An improved gas chromatography-mass spectrometry (GC-MS) method to determine acrylamide (AA) in coffee and coffee products was developed. The method was based on two main purification steps: the first with ethanol and Carrez solutions in order to precipitate polysaccharides and proteins, respectively; and the second with a layered solid-phase extraction (SPE) column which proved to be efficient in the elimination of the main chromatographic interferences. The method is applicable to a wide range of coffee products. Twenty-six samples of different coffee products were analysed. The levels of AA were in the range 11.4-36.2 microg l-1 for 'espresso coffee' and 200.8-229.4 microg l-1 for coffee blends with cereals. The results indicate that the presence of cereals significantly increased the levels of AA.

  12. Assessment method of accessibility conditions: how to make public buildings accessible?

    PubMed

    Andrade, Isabela Fernandes; Ely, Vera Helena Moro Bins

    2012-01-01

    The enforcement of accessibility today faces several difficulties, such as interventions in historic buildings that now house public services and cultural activities (town halls, museums and theaters) and should therefore allow access, on equal terms, to all people. The paper presents the application of a method for evaluating spatial accessibility conditions and its results. To this end, the theoretical foundations of the main issues involved and the relevant legislation were reviewed. Using the chosen method, guided walks, it was possible to identify the main barriers to accessibility in historic buildings. From the identified barriers, possible solutions are presented according to the four components of accessibility: spatial orientation, displacement, use and communication. It is also hoped that the knowledge gained in this research contributes to an improvement of accessibility legislation in relation to listed buildings.

  13. Space moving target detection using time domain feature

    NASA Astrophysics Data System (ADS)

    Wang, Min; Chen, Jin-yong; Gao, Feng; Zhao, Jin-yu

    2018-01-01

    Traditional space target detection methods mainly use the spatial characteristics of the star map to detect targets, and cannot make full use of time domain information. This paper presents a new space moving target detection method based on time domain features. We first construct the time-spectral data of the star map, then analyze the time domain features of the main objects in star maps (targets, stars and the background), and finally detect moving targets using the single-pulse feature of the time domain signal. Experimental results on real star map sequences show that the proposed method can effectively detect the trajectories of moving targets, with a detection probability of 99% at a false alarm rate of about 8×10^-5, outperforming the compared algorithms.
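
    The single-pulse idea can be sketched as follows (a toy synthetic data cube with invented amplitudes and thresholds, not the paper's detector): a star keeps a pixel bright in every frame, so its temporal median is high, while a moving target brightens each pixel it crosses in only one frame, producing a large max-minus-median pulse:

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 10, 32, 32
cube = rng.normal(0.0, 1.0, (T, H, W))        # background noise frames

for y, x in [(2, 30), (8, 25), (20, 5), (28, 14)]:
    cube[:, y, x] += 50.0                      # stars: bright in all frames

for i in range(T):                             # target: one pixel per frame,
    cube[i, i, i] += 20.0                      # moving along the diagonal

peak = cube.max(axis=0)
med = np.median(cube, axis=0)
pulse = peak - med                             # single-pulse strength
detected = (pulse > 10.0) & (med < 10.0)       # exclude persistent stars
trajectory = sorted(zip(*np.where(detected)))  # recovered target pixels
```

    With these values the detected set is the ten diagonal pixels; taking the per-pixel argmax over time would then order them into a time-resolved trajectory.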

  14. Mechanical performance and parameter sensitivity analysis of 3D braided composites joints.

    PubMed

    Wu, Yue; Nan, Bo; Chen, Liang

    2014-01-01

    3D braided composite joints are important components in CFRP trusses, with significant influence on the reliability and light weight of structures. To investigate the mechanical performance of 3D braided composite joints, a numerical method based on microscopic mechanics is put forward; the modeling techniques, including the selection of material constants, element type, grid size, and boundary conditions, are discussed in detail. A method for determining the ultimate bearing capacity, which accounts for strength failure, is then established. Finally, the effects of load parameters, geometric parameters, and process parameters on the ultimate bearing capacity of the joints are analyzed by a global sensitivity analysis method. The results show that the main pipe diameter-to-thickness ratio γ, the main pipe diameter D, and the braid angle α are the parameters to which the ultimate bearing capacity N is most sensitive.

  15. District heating with geothermally heated culinary water supply systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitts, D.R.; Schmitt, R.C.

    1979-09-01

    An initial feasibility study of using existing culinary water supply systems to provide hot water for space heating and air conditioning to a typical residential community is reported. The Phase I study has centered on methods of using low-to-moderate temperature water for heating purposes, including institutional barriers, identification and description of a suitable residential community water system, evaluation of thermal losses in both the main distribution system and the street mains within the residential district, estimation of the size and cost of the pumping station main heat exchanger, sizing of individual residential heat exchangers, determination of pumping and power requirements due to increased flow through the residential area mains, and pumping and power requirements from the street mains through a typical residence. All results of the engineering study of Phase I are encouraging.

  16. Assigning Main Orientation to an EOH Descriptor on Multispectral Images.

    PubMed

    Li, Yong; Shi, Xiang; Wei, Lijun; Zou, Junwei; Chen, Fang

    2015-07-01

    This paper proposes an approach to compute an EOH (edge-oriented histogram) descriptor with a main orientation. EOH has a better matching ability than SIFT (scale-invariant feature transform) on multispectral images, but does not assign a main orientation to keypoints. Instead, it effectively assigns the same main orientation to every keypoint, e.g., zero degrees, which limits EOH to matching keypoints between images with translation misalignment only. Observing this limitation, we propose assigning to keypoints the main orientation computed with PIIFD (partial intensity invariant feature descriptor). In the proposed method, SIFT keypoints are detected from images as the extrema of difference of Gaussians, and every keypoint is assigned the main orientation computed with PIIFD. Then, EOH is computed for every keypoint with respect to its main orientation. In addition, an implementation variant is proposed for fast computation of the EOH descriptor. Experimental results show that the proposed approach performs more robustly than the original EOH on image pairs that have a rotation misalignment.

  17. Evaluation of differences in quality of experience features for test stimuli of good-only and bad-only overall audiovisual quality

    NASA Astrophysics Data System (ADS)

    Strohmeier, Dominik; Kunze, Kristina; Göbel, Klemens; Liebetrau, Judith

    2013-01-01

    Assessing audiovisual Quality of Experience (QoE) is a key element to ensure quality acceptance of today's multimedia products. The use of descriptive evaluation methods allows evaluating QoE preferences and the underlying QoE features jointly. From our previous evaluations on QoE for mobile 3D video we found that mainly one dimension, video quality, dominates the descriptive models. Large variations of the visual video quality in the tests may be the reason for these findings. A new study was conducted to investigate whether test sets of low QoE are described differently than those of high audiovisual QoE. Reanalysis of previous data sets seems to confirm this hypothesis. Our new study consists of a pre-test and a main test, using the Descriptive Sorted Napping method. Data sets of good-only and bad-only video quality were evaluated separately. The results show that the perception of bad QoE is mainly determined one-dimensionally by visual artifacts, whereas the perception of good quality shows multiple dimensions. Here, mainly semantic-related features of the content and affective descriptors are used by the naïve test participants. The results show that, with increasing QoE of audiovisual systems, content semantics and users' affective involvement will become important for assessing QoE differences.

  18. Stress Corrosion Cracking of Pipeline Steels in Fuel Grade Ethanol and Blends - Study to Evaluate Alternate Standard Tests and Phenomenological Understanding of SCC

    DOT National Transportation Integrated Search

    2011-10-30

    The main aim of this project was to evaluate alternate standard test methods for stress corrosion cracking (SCC) and compare them with slow strain rate test (SSRT) results under equivalent environmental conditions. Another important aim of...

  19. Globally convergent techniques in nonlinear Newton-Krylov

    NASA Technical Reports Server (NTRS)

    Brown, Peter N.; Saad, Youcef

    1989-01-01

    Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
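    The damped inexact-Newton idea can be sketched on a toy system. This is a minimal illustration under stated assumptions: the 2-variable problem is invented, and the inner Krylov solve is replaced by a crude gradient iteration on the normal equations, standing in for GMRES:

```python
# Toy globalized (linesearch-damped) inexact Newton iteration.
# Problem and solver parameters are invented for illustration.

def F(x):
    # nonlinear system F(x) = 0 with a root at (sqrt(2), sqrt(2))
    return [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]]

def J(x):
    # Jacobian of F
    return [[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def norm(v):
    return sum(vi * vi for vi in v) ** 0.5

def inner_solve(A, b, iters=50, tau=0.02):
    # approximate solve of A d = b: gradient descent on ||A d - b||^2,
    # a cheap stand-in for the Krylov subspace step (so the outer
    # iteration is "inexact Newton")
    At = list(map(list, zip(*A)))
    d = [0.0] * len(b)
    for _ in range(iters):
        r = [bi - ri for bi, ri in zip(b, matvec(A, d))]
        g = matvec(At, r)
        d = [di + tau * gi for di, gi in zip(d, g)]
    return d

def newton_linesearch(x, tol=1e-8, max_iter=50):
    for _ in range(max_iter):
        f = F(x)
        if norm(f) < tol:
            break
        d = inner_solve(J(x), [-fi for fi in f])
        t = 1.0  # backtracking linesearch on ||F|| (the global strategy)
        while t > 1e-8 and norm(F([xi + t * di for xi, di in zip(x, d)])) >= norm(f):
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
    return x

root = newton_linesearch([3.0, 1.0])
```

    The linesearch guarantees a decrease of the residual norm even when the approximate Newton direction is inaccurate, which is exactly the role of the global strategies analyzed in the paper.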

  20. Brain vascular image enhancement based on gradient adjust with split Bregman

    NASA Astrophysics Data System (ADS)

    Liang, Xiao; Dong, Di; Hui, Hui; Zhang, Liwen; Fang, Mengjie; Tian, Jie

    2016-04-01

    Light Sheet Microscopy (LSM) is a high-resolution fluorescence microscopic technique which enables clear observation of the mouse brain vascular network with immunostaining. However, micro-vessels are stained with few fluorescence antibodies and their signals are much weaker than those of large vessels, which makes micro-vessels unclear in LSM images. In this work, we developed a vascular image enhancement method to enhance micro-vessel details, which should be useful for vessel statistics analysis. Since the gradient describes the edge information of the vessel, the main idea of our method is to increase the gradient values of the enhanced image to improve micro-vessel contrast. Our method contains two steps: 1) calculate the gradient image of the LSM image, amplify high gradient values of the original image to enhance vessel edges and suppress low gradient values to remove noise, and then formulate a new L1-norm regularization optimization problem to find an image with the expected gradient while keeping the main structural information of the original image; 2) apply the split Bregman iteration method to solve the L1-norm regularization problem and generate the final enhanced image. The main advantage of the split Bregman method is that it has both fast convergence and low memory cost. In order to verify the effectiveness of our method, we applied it to a series of mouse brain vascular images acquired from a commercial LSM system in our lab. The experimental results showed that our method could greatly enhance micro-vessel edges which were unclear in the original images.
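    The d-subproblem in each split Bregman sweep has a closed-form soft-thresholding ("shrinkage") solution, which is what makes the iteration cheap. A minimal sketch with made-up gradient values:

```python
def shrink(x, lam):
    # closed-form minimizer of lam*|d| + 0.5*(d - x)^2, applied
    # elementwise to the gradient variable in each split Bregman sweep
    s = abs(x) - lam
    return (s if s > 0.0 else 0.0) * (1.0 if x >= 0.0 else -1.0)

# small gradient values (noise) are zeroed, large ones (edges) survive;
# the numbers are invented for illustration
grads = [0.05, -0.3, 2.4, -1.8, 0.1]
kept = [shrink(g, 0.5) for g in grads]
```

    In the full method this shrinkage alternates with a quadratic image-update step and a Bregman variable update; only the thresholding core is shown here.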

  1. Individual Fit Testing of Hearing Protection Devices Based on Microphone in Real Ear.

    PubMed

    Biabani, Azam; Aliabadi, Mohsen; Golmohammadi, Rostam; Farhadian, Maryam

    2017-12-01

    Labeled noise reduction (NR) data presented by manufacturers are considered one of the main challenging issues for occupational experts in employing hearing protection devices (HPDs). This study aimed to determine the actual NR data of typical HPDs using the objective fit testing method with a microphone in real ear (MIRE). Five commercially available earmuff protectors were investigated in 30 workers exposed to a reference noise source according to the standard method ISO 11904-1. The personal attenuation rating (PAR) of the earmuffs was measured based on the MIRE method using a noise dosimeter (SVANTEK, model SV 102). The results showed that mean PAR values of the earmuffs range from 49% to 86% of the nominal NR rating. The PAR values of the earmuffs differed statistically when typical eyewear was worn (p < 0.05). It is revealed that typical safety eyewear can reduce the mean PAR value by approximately 2.5 dB. The results also showed that measurements based on the MIRE method resulted in low variability. The variability in NR values between individuals, within individuals, and within earmuffs was not statistically significant (p > 0.05). This study could provide local individual fit data. Ergonomic aspects of the earmuffs and different levels of users' experience and awareness can be considered the main factors affecting individual fitting compared with the laboratory conditions used for acquiring the labeled NR data. Based on the obtained fit testing results, the field application of MIRE can be employed for complementary studies in real workstations while workers perform their regular work duties.
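    The PAR arithmetic itself is straightforward to illustrate; the levels and labeled rating below are invented, not the study's measurements:

```python
# Hypothetical MIRE-style attenuation data (dB) for four workers:
# level outside the earmuff minus level measured under the cup.
labeled_nr = 30.0  # manufacturer's labeled rating, assumed for illustration
outside = [95.0, 93.5, 96.0, 94.0]
under_cup = [72.0, 74.5, 70.0, 76.0]

par = [o - u for o, u in zip(outside, under_cup)]  # per-worker attenuation
mean_par = sum(par) / len(par)
percent_of_label = 100.0 * mean_par / labeled_nr   # field vs. labeled rating
```

    With these invented numbers the mean field attenuation is about 72% of the labeled rating, the same kind of comparison the study reports (49% to 86%).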

  2. Bis(trifluoromethylsulfonyl)imide-based frozen ionic liquid for the hollow-fiber solid-phase microextraction of dichlorodiphenyltrichloroethane and its main metabolites.

    PubMed

    Pang, Long; Yang, Peijie; Pang, Rong; Li, Shunyi

    2017-08-01

    1-Hexadecyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide is a solid ionic organic material at ambient temperature and is considered a kind of "frozen" ionic liquid. Because of their solid state and ultra-hydrophobicity, "frozen" ionic liquids can be confined in the pores of hollow fibers; based on this, a simple method was developed for the hollow-fiber solid-phase microextraction of dichlorodiphenyltrichloroethane and its main metabolites. Under optimized conditions, the proposed method gives good linearity (R² > 0.9965) over the range of 0.5-50 μg/L, with low limits of detection and quantification in the ranges of 0.33-0.38 and 1.00-1.25 μg/L, respectively. Intra- and interday precisions, evaluated as relative standard deviations, were 3-6 and 1-6%, respectively. The spiked recoveries of dichlorodiphenyltrichloroethane and its main metabolites from real water samples were in the ranges of 64-113 and 79-112%, respectively, at two different concentration levels. The results suggest that "frozen" ionic liquids are promising for use as a class of novel sorbents. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
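    Linearity and detection limits of this kind are commonly derived from a least-squares calibration line; the sketch below uses invented calibration points and the common 3.3s/slope and 10s/slope conventions, which may differ from the authors' exact procedure:

```python
# Hypothetical calibration points (concentration in ug/L vs. detector response).
conc = [0.5, 1.0, 5.0, 10.0, 25.0, 50.0]
area = [12.4, 24.9, 126.0, 250.5, 624.0, 1251.0]

n = len(conc)
mx, my = sum(conc) / n, sum(area) / n
sxx = sum((x - mx) ** 2 for x in conc)
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, area))
slope = sxy / sxx
intercept = my - slope * mx

residuals = [y - (slope * x + intercept) for x, y in zip(conc, area)]
s_res = (sum(r * r for r in residuals) / (n - 2)) ** 0.5  # residual SD
ss_tot = sum((y - my) ** 2 for y in area)
r2 = 1.0 - sum(r * r for r in residuals) / ss_tot

lod = 3.3 * s_res / slope   # limit of detection (3.3*s/slope convention)
loq = 10.0 * s_res / slope  # limit of quantification (10*s/slope convention)
```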

  3. Medicinal Plants Used for Neuropsychiatric Disorders Treatment in the Hauts Bassins Region of Burkina Faso

    PubMed Central

    Kinda, Prosper T.; Zerbo, Patrice; Guenné, Samson; Compaoré, Moussa; Ciobica, Alin; Kiendrebeogo, Martin

    2017-01-01

    Background: In Burkina Faso, phytotherapy is the main medical alternative used by populations to manage various diseases that affect the nervous system. The aim of the present study was to report medicinal plants with psychoactive properties used to treat neuropsychiatric disorders in the Hauts Bassins region, in the western zone of Burkina Faso. Methods: Through an ethnobotanical survey using a structured questionnaire, 53 traditional healers (TH) were interviewed about neuropsychiatric disorders, medicinal plants and the medical practices used to treat them. The survey was carried out over a period of three months. Results: The results report 66 plant species used to treat neuropsychiatric pathologies. Roots (36.2%) and leaves (29%) were the main plant parts used. Alone or in combination, these parts were used to prepare drugs, mainly by decoction and trituration. Remedies were administered via drink, fumigation and external applications. Conclusions: This study reveals genuine knowledge of neuropsychiatric disorders in the traditional medicine of the Hauts Bassins area. The therapeutic remedies suggested in this work are of real interest in the fight against psychiatric and neurological diseases. In the future, the identified plants could be screened for antipsychotic or neuroprotective compounds. PMID:28930246

  4. A new model of Ishikawa diagram for quality assessment

    NASA Astrophysics Data System (ADS)

    Liliana, Luca

    2016-11-01

    The paper presents the results of a study concerning the use of the Ishikawa diagram in analyzing the causes that determine errors in the evaluation of part precision in the machine construction field. The studied problem was "errors in the evaluation of part precision", and this constitutes the head of the Ishikawa diagram skeleton. All the possible main and secondary causes that could generate the studied problem were identified. The best-known Ishikawa models are 4M, 5M and 6M, the initials standing for: materials, methods, man, machines, mother nature, measurement. The paper shows the potential causes of the studied problem, which were first grouped in three categories, as follows: causes that lead to errors in assessing dimensional accuracy, causes that determine errors in the evaluation of shape and position deviations, and causes of errors in roughness evaluation. We took into account the main components of part precision in the machine construction field. For each of the three categories of causes, potential secondary causes were distributed into groups of M (man, methods, machines, materials, environment). We opted for a new model of Ishikawa diagram, resulting from the composition of three fish skeletons corresponding to the main categories of part accuracy.

  5. Modelling a Compensation Standard for a Regional Forest Ecosystem: A Case Study in Yanqing District, Beijing, China

    PubMed Central

    Li, Tan; Zhang, Qingguo; Zhang, Ying

    2018-01-01

    The assessment of forest ecosystem services can quantify the impact of these services on human life and is the main basis for formulating a standard of compensation for these services. Moreover, the calculation of the indirect value of forest ecosystem services should not be ignored, as has been the case in some previous publications. A low compensation standard and the lack of a dynamic coordination mechanism are the main problems existing in compensation implementation. Using comparison and analysis, this paper employed accounting for both the costs and benefits of various alternatives. The analytic hierarchy process (AHP) method and the Pearl growth-curve method were used to adjust the results. This research analyzed the contribution of each service value from the aspects of forest produce services, ecology services, and society services. We also conducted separate accounting for cost and benefit, made a comparison of accounting and evaluation methods, and estimated the implementation period of the compensation standard. The main conclusions of this research include the fact that any compensation standard should be determined from the points of view of both benefit and cost in a region. The results presented here allow the range between the benefit and cost compensation to be laid out more reasonably. The practical implications of this research include the proposal that regional decision-makers should consider a dynamic compensation method to meet with the local economic level by using diversified ways to raise the compensation standard, and that compensation channels should offer a mixed mode involving both the market and government. PMID:29561789

  6. Modelling a Compensation Standard for a Regional Forest Ecosystem: A Case Study in Yanqing District, Beijing, China.

    PubMed

    Li, Tan; Zhang, Qingguo; Zhang, Ying

    2018-03-21

    The assessment of forest ecosystem services can quantify the impact of these services on human life and is the main basis for formulating a standard of compensation for these services. Moreover, the calculation of the indirect value of forest ecosystem services should not be ignored, as has been the case in some previous publications. A low compensation standard and the lack of a dynamic coordination mechanism are the main problems existing in compensation implementation. Using comparison and analysis, this paper employed accounting for both the costs and benefits of various alternatives. The analytic hierarchy process (AHP) method and the Pearl growth-curve method were used to adjust the results. This research analyzed the contribution of each service value from the aspects of forest produce services, ecology services, and society services. We also conducted separate accounting for cost and benefit, made a comparison of accounting and evaluation methods, and estimated the implementation period of the compensation standard. The main conclusions of this research include the fact that any compensation standard should be determined from the points of view of both benefit and cost in a region. The results presented here allow the range between the benefit and cost compensation to be laid out more reasonably. The practical implications of this research include the proposal that regional decision-makers should consider a dynamic compensation method to meet with the local economic level by using diversified ways to raise the compensation standard, and that compensation channels should offer a mixed mode involving both the market and government.

  7. Methods Data Qualification Interim Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Sam Alessi; Tami Grimmett; Leng Vang

    The overall goal of the Next Generation Nuclear Plant (NGNP) Data Management and Analysis System (NDMAS) is to maintain data provenance for all NGNP data, including the Methods component of NGNP data. Multiple means are available to access data stored in NDMAS. A web portal environment allows users to access data, view the results of qualification tests, and view graphs and charts of various attributes of the data. NDMAS also has methods for the management of the data output from VHTR simulation models and data generated from experiments designed to verify and validate the simulation codes. These simulation models represent the outcome of mathematical representation of VHTR components and systems. The methods data management approaches described herein handle data that arise from experiment, simulation, and external sources for the main purpose of facilitating parameter estimation and model verification and validation (V&V). A model integration environment entitled ModelCenter is used to automate the storing of data from simulation model runs to the NDMAS repository. This approach does not adversely change the way computational scientists conduct their work. The method is to be used mainly to store the results of model runs that need to be preserved for auditing purposes or for display on the NDMAS web portal. This interim report describes the current development of NDMAS for Methods data and discusses the data, and its qualification, that are currently part of NDMAS.

  8. Students’ Perceptions of Contraceptives in University of Ghana

    PubMed Central

    Kayi, Esinam Afi

    2013-01-01

    Objective This study sought to explore University of Ghana Business School diploma students' knowledge of contraceptives, types of contraceptives, attitudes towards contraceptive users, preference for contraceptives, and benefits and side-effects of contraceptives. Materials and methods Data were collected through three sets of focus group discussions. Participants were systematically sampled from the accounting and public administration departments. Results Findings showed that students had little knowledge of contraceptives. The male and female condoms were the main contraceptive types reported out of the many modern and traditional methods of contraception. The main benefits of contraceptives were the ability to protect against STIs, abortions, unwanted pregnancy and psychological trauma. While most respondents preferred future use of pills, side-effects were reported more for condoms than for other contraceptive methods. Results showed that participants had negative attitudes towards unmarried contraceptive users. Conclusion Generally, our findings show that detailed knowledge about contraceptives is low. There is a gap in information on contraception knowledge, timing, and contraceptive types among university diploma students. Reproductive and maternal services should be available and accessible for tertiary students. PMID:24971101

  9. A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    NASA Technical Reports Server (NTRS)

    Abernethy, K.

    1986-01-01

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods, using samples with a very small number of failures and extensive censoring, are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs that are used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate confidence intervals (using iterative methods) for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, along with the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and numbers of failures in the samples.
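    The random-censoring setup can be sketched directly. To keep the example short, the shape parameter is assumed known so the scale MLE has a closed form (the study fit both parameters iteratively); the true parameters and censoring range are invented:

```python
import math
import random

random.seed(1)
beta_true, eta_true = 2.0, 100.0  # hypothetical Weibull shape and scale

def weibull_draw():
    # inverse-CDF sampling of a Weibull(beta, eta) failure time
    u = random.random()
    return eta_true * (-math.log(1.0 - u)) ** (1.0 / beta_true)

# censoring times drawn from a uniform distribution, as in the study;
# each observation is min(failure, censoring) plus a failed/censored flag
times, failed = [], []
for _ in range(2000):
    t, c = weibull_draw(), random.uniform(0.0, 200.0)
    times.append(min(t, c))
    failed.append(t <= c)

r = sum(failed)  # number of actual (uncensored) failures
# closed-form MLE of the scale when the shape is taken as known:
# all observation times enter the sum, but only failures enter r
eta_hat = (sum(t ** beta_true for t in times) / r) ** (1.0 / beta_true)
```

    Repeating this draw-and-estimate loop many times, for much smaller samples, is the essence of the Monte Carlo study described above.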

  10. Partitioning sources of variation in vertebrate species richness

    USGS Publications Warehouse

    Boone, R.B.; Krohn, W.B.

    2000-01-01

    Aim: To explore biogeographic patterns of terrestrial vertebrates in Maine, USA using techniques that would describe local and spatial correlations with the environment. Location: Maine, USA. Methods: We delineated the ranges within Maine (86,156 km2) of 275 species using literature and expert review. Ranges were combined into species richness maps, and compared to geomorphology, climate, and woody plant distributions. Methods were adapted that compared richness of all vertebrate classes to each environmental correlate, rather than assessing a single explanatory theory. We partitioned variation in species richness into components using tree and multiple linear regression. Methods were used that allowed for useful comparisons between tree and linear regression results. For both methods we partitioned variation into broad-scale (spatially autocorrelated) and fine-scale (spatially uncorrelated) explained and unexplained components. By partitioning variance, and using both tree and linear regression in analyses, we explored the degree of variation in species richness for each vertebrate group that could be explained by the relative contribution of each environmental variable. Results: In tree regression, climate variation explained richness better (92% of mean deviance explained for all species) than woody plant variation (87%) and geomorphology (86%). Reptiles were highly correlated with environmental variation (93%), followed by mammals, amphibians, and birds (each with 84-82% deviance explained). In multiple linear regression, climate was most closely associated with total vertebrate richness (78%), followed by woody plants (67%) and geomorphology (56%). Again, reptiles were closely correlated with the environment (95%), followed by mammals (73%), amphibians (63%) and birds (57%). 
Main conclusions: Comparing variation explained using tree and multiple linear regression quantified the importance of nonlinear relationships and local interactions between species richness and environmental variation, identifying the importance of linear relationships between reptiles and the environment, and nonlinear relationships between birds and woody plants, for example. Conservation planners should capture climatic variation in broad-scale designs; temperatures may shift during climate change, but the underlying correlations between the environment and species richness will presumably remain.
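    The partitioning of explained variation into unique and shared components can be sketched with ordinary least squares on made-up data (the tree-regression half of the comparison is omitted here):

```python
# Variation partitioning sketch: unique and shared fractions of variance in a
# "richness" response explained by two correlated predictors. All numbers
# are invented for illustration.
def r2(y, X):
    # R^2 of least squares on centered data; X is a list of predictor columns
    n = len(y)
    my = sum(y) / n
    yc = [v - my for v in y]
    Xc = [[v - sum(col) / n for v in col] for col in X]
    if len(Xc) == 1:
        sxx = sum(v * v for v in Xc[0])
        sxy = sum(a * b for a, b in zip(Xc[0], yc))
        coef = [sxy / sxx]
    else:  # two predictors: normal equations solved by Cramer's rule
        a = sum(v * v for v in Xc[0])
        b = sum(u * v for u, v in zip(Xc[0], Xc[1]))
        c = sum(v * v for v in Xc[1])
        p = sum(u * v for u, v in zip(Xc[0], yc))
        q = sum(u * v for u, v in zip(Xc[1], yc))
        det = a * c - b * b
        coef = [(p * c - q * b) / det, (a * q - b * p) / det]
    fit = [sum(k * col[i] for k, col in zip(coef, Xc)) for i in range(n)]
    ss_res = sum((u - v) ** 2 for u, v in zip(yc, fit))
    return 1.0 - ss_res / sum(v * v for v in yc)

x1 = [1, 2, 3, 4, 5, 6, 7, 8]                     # e.g. a climate gradient
x2 = [1.2, 1.9, 3.4, 3.8, 5.5, 5.9, 7.1, 8.3]     # a correlated second factor
y  = [3.1, 5.2, 8.9, 11.4, 16.0, 17.2, 21.5, 24.1]

r2_1, r2_2, r2_12 = r2(y, [x1]), r2(y, [x2]), r2(y, [x1, x2])
unique1 = r2_12 - r2_2        # variation explained by x1 alone
unique2 = r2_12 - r2_1        # variation explained by x2 alone
shared = r2_12 - unique1 - unique2   # jointly explained (shared) fraction
```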

  11. Asteroid family dynamics in the inner main belt

    NASA Astrophysics Data System (ADS)

    Dykhuis, Melissa Joy

    The inner main asteroid belt is an important source of near-Earth objects and terrestrial planet impactors; however, the dynamics and history of this region are challenging to understand, due to its high population density and the presence of multiple orbital resonances. This dissertation explores the properties of two of the most populous inner main belt family groups --- the Flora family and the Nysa-Polana complex --- investigating their memberships, ages, spin properties, collision dynamics, and range in orbital and reflectance parameters. Though diffuse, the family associated with asteroid (8) Flora dominates the inner main belt in terms of the extent of its members in orbital parameter space, resulting in its significant overlap with multiple neighboring families. This dissertation introduces a new method for membership determination (the core sample method) which enables the distinction of the Flora family from the background, permitting its further analysis. The Flora family is shown to have a signature in plots of semimajor axis vs. size consistent with that expected for a collisional family dispersed as a result of the Yarkovsky radiation effect. The family's age is determined from the Yarkovsky dispersion to be 950 My. Furthermore, a survey of the spin sense of 21 Flora-region asteroids, accomplished via a time-efficient modification of the epoch method for spin sense determination, confirms the single-collision Yarkovsky-dispersed model for the family's origin. The neighboring Nysa-Polana complex is the likely source region for many of the carbonaceous near-Earth asteroids, several of which are important targets for spacecraft reconnaissance and sample return missions. Family identification in the Nysa-Polana complex via the core sample method reveals two families associated with asteroid (135) Hertha, both with distinct age and reflectance properties. 
The larger of these two families demonstrates a correlation in semimajor axis and eccentricity indicating that its family-forming collision occurred near the parent body's aphelion. In addition, the Eulalia family is connected with a possible second component, suggesting an anisotropic distribution of ejecta from its collision event.

  12. Corrosion performance tests for reinforcing steel in concrete : test procedures.

    DOT National Transportation Integrated Search

    2009-09-01

    The existing test method to assess the corrosion performance of reinforcing steel embedded in concrete, mainly ASTM G109, is labor intensive, time consuming, slow to provide comparative results, and often expensive. However, corrosion of reinforc...

  13. Corrosion performance tests for reinforcing steel in concrete : technical report.

    DOT National Transportation Integrated Search

    2009-10-01

    The existing test method used to assess the corrosion performance of reinforcing steel embedded in concrete, mainly ASTM G 109, is labor intensive, time consuming, slow to provide comparative results, and can be expensive. However, with corrosion...

  14. Trueness verification of actual creatinine assays in the European market demonstrates a disappointing variability that needs substantial improvement. An international study in the framework of the EC4 creatinine standardization working group.

    PubMed

    Delanghe, Joris R; Cobbaert, Christa; Galteau, Marie-Madeleine; Harmoinen, Aimo; Jansen, Rob; Kruse, Rolf; Laitinen, Päivi; Thienpont, Linda M; Wuyts, Birgitte; Weykamp, Cas; Panteghini, Mauro

    2008-01-01

    The European In Vitro Diagnostics (IVD) directive requires traceability of analytes to reference methods and materials. It is a task of the profession to verify the trueness of results and IVD compatibility. The results of a trueness verification study by the European Communities Confederation of Clinical Chemistry (EC4) working group on creatinine standardization are described, in which 189 European laboratories analyzed serum creatinine in a commutable serum-based material, using analytical systems from seven companies. Values were targeted using isotope dilution gas chromatography/mass spectrometry. Results were tested for compliance with a set of three criteria: trueness (i.e., no significant bias relative to the target value), between-laboratory variation, and within-laboratory variation relative to the maximum allowable error. At the lower and intermediate levels, values differed significantly from the target value for the Jaffe and the dry chemistry methods. At the high level, dry chemistry yielded higher results. Between-laboratory coefficients of variation ranged from 4.37% to 8.74%. The total error budget was mainly consumed by the bias. Non-compensated Jaffe methods largely exceeded the total error budget. The best results were obtained for the enzymatic method. The dry chemistry method consumed a large part of its error budget due to calibration bias. Despite the European IVD directive and the growing need for creatinine standardization, an unacceptable inter-laboratory variation was observed, which was mainly due to calibration differences. This calibration variation has major clinical consequences, in particular in pediatrics, where reference ranges for serum and plasma creatinine are low, and in the estimation of glomerular filtration rate.
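    The three compliance checks can be sketched numerically; the laboratory values, target, and the bias-plus-1.65-CV total-error form below are illustrative assumptions, not the study's data:

```python
# Hypothetical results (arbitrary units) from several labs for one
# commutable material with a reference (ID-GC/MS-style) target value.
target = 80.0
labs = [78.5, 82.1, 79.0, 84.2, 81.0, 77.8, 80.6, 83.0]

n = len(labs)
mean = sum(labs) / n
bias_pct = 100.0 * (mean - target) / target          # trueness criterion
sd = (sum((x - mean) ** 2 for x in labs) / (n - 1)) ** 0.5
cv_pct = 100.0 * sd / mean                           # between-lab variation
# total error in the common "bias + 1.65*CV" form (an assumption here)
total_error_pct = abs(bias_pct) + 1.65 * cv_pct
```

    Each quantity would then be compared against its share of the maximum allowable error, mirroring the study's three-criteria compliance test.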

  15. Effect of temperature of supercritical CO2 fluid extraction on phytochemical analysis and antioxidant activity of Zingiber officinale Roscoe

    NASA Astrophysics Data System (ADS)

    Sondari, Dewi; Irawadi, Tun Tedja; Setyaningsih, Dwi; Tursiloadi, Silvester

    2017-11-01

    Supercritical fluid extraction of Zingiber officinale Roscoe was carried out at a pressure of 16 MPa, at temperatures between 20-40 °C, with an extraction time of 6 hours and a CO2 flow rate of 5.5 ml/min. The result of the supercritical method was compared with maceration extraction using a mixture of water and ethanol (70% v/v) for 24 hours. The main constituent of ginger with an antioxidant role is gingerol, a compound that can help neutralize the damaging effects caused by free radicals in the body, act as an anti-coagulant, and inhibit the formation of blood clots. This study aims to determine the effect of temperature on the chemical components contained in the crude extract of Zingiber officinale Roscoe and on its antioxidant activity, total phenol content and total flavonoid content. To determine the chemical components contained in the crude extracts obtained by supercritical fluid and maceration extraction, GC-MS analysis was performed. Meanwhile, the antioxidant activity of the extracts was evaluated with the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging method. The results of the analysis show that ginger extract obtained using the supercritical CO2 extraction method has higher antioxidant activity than that obtained by maceration. The highest total phenol and total flavonoid contents were obtained for ginger extracted using supercritical CO2 fluid extraction, indicating that phenol and flavonoid compounds contribute to the antioxidant activity. Chromatographic analysis showed that the chemical profile of the ginger extract contains oxygenated monoterpenes, monoterpene hydrocarbons, sesquiterpene hydrocarbons, the oxygenated monoterpene gingerol, and esters. In the supercritical fluid extracts obtained at 20-40 °C, 27 compounds could be identified, compared with 11 compounds in the maceration extract. 
    The main components of Zingiber officinale Roscoe extracted using supercritical fluid at a temperature of 40 °C are Hexanal (6.04%), Butan-2-one, 4-(3-hydroxy-2-methoxyphenyl) (27.95%), [6]-Paradol (0.73%), Gingerol (8.22%), Bis(2-ethylhexyl) phthalate (1.62%), α-Citral (12.14%) and α-zingiberene (2.90%). The main components of the Zingiber officinale Roscoe extract obtained by maceration are Hexanal (10.71%), Decanal (3.74%), Butan-2-one, 4-(3-hydroxy-2-methoxyphenyl) (38.33%), Gingerol (4.56%) and Zingiberene (0.99%).

  16. [Classification of Priority Area for Soil Environmental Protection Around Water Sources: Method Proposed and Case Demonstration].

    PubMed

    Li, Lei; Wang, Tie-yu; Wang, Xiaojun; Xiao, Rong-bo; Li, Qi-feng; Peng, Chi; Han, Cun-liang

    2016-04-15

    Based on comprehensive consideration of soil environmental quality, pollution status of the river, environmental vulnerability and the stress of pollution sources, a technical method was established for classifying priority areas of soil environmental protection around river-style water sources. The Shunde channel, an important drinking water source of Foshan City, Guangdong province, was studied as a case for which the classification evaluation system was set up. In detail, several evaluation factors were selected according to the local conditions of nature, society and economy, including the pollution degree of heavy metals in soil and sediment, soil characteristics, groundwater sensitivity, vegetation coverage, and the type and location of pollution sources. Data were mainly obtained by means of field survey, sampling analysis, and remote sensing interpretation. Afterwards, the Analytic Hierarchy Process (AHP) was adopted to decide the weight of each factor. The basic spatial data layers were set up and overlaid based on a weighted summation assessment model in a Geographical Information System (GIS), resulting in a classification map of soil environmental protection levels in the priority area of the Shunde channel. Accordingly, the area was classified into three levels, named polluted zone, risky zone and safe zone, which respectively accounted for 6.37%, 60.90% and 32.73% of the whole study area. Polluted and risky zones were mainly distributed in Lecong, Longjiang and Leliu towns, with pollution mainly resulting from the long-term development of aquaculture and of industries including furniture, plastic construction materials, and textiles and clothing. In accordance with the main pollution sources of soil, targeted and differentiated strategies were put forward. The newly established evaluation method can serve as a reference for the protection and sustainable utilization of the soil environment around water sources.
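    The AHP-weighted overlay and three-level classification can be sketched for a few grid cells; the factor names, weights, scores, and cut-offs below are invented for illustration:

```python
# Weighted-overlay sketch: factor scores (0-1) per grid cell combined with
# AHP-style weights, then thresholded into three protection levels.
weights = {"soil_metal": 0.35, "sediment": 0.20, "groundwater": 0.20,
           "vegetation": 0.10, "sources": 0.15}  # assumed AHP weights, sum to 1

cells = [  # made-up normalized factor scores for three grid cells
    {"soil_metal": 0.9, "sediment": 0.8, "groundwater": 0.7,
     "vegetation": 0.2, "sources": 0.9},
    {"soil_metal": 0.4, "sediment": 0.5, "groundwater": 0.3,
     "vegetation": 0.6, "sources": 0.4},
    {"soil_metal": 0.1, "sediment": 0.1, "groundwater": 0.2,
     "vegetation": 0.9, "sources": 0.1},
]

def classify(score):
    # assumed cut-offs for the three levels
    if score >= 0.7:
        return "polluted"
    if score >= 0.4:
        return "risky"
    return "safe"

scores = [sum(w * c[k] for k, w in weights.items()) for c in cells]
zones = [classify(s) for s in scores]
```

    In the actual study, each factor is a raster layer and the weighted sum is done cell-by-cell in the GIS; the logic per cell is the same.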

  17. Evaluation of environmental flow requirements using eco-hydrologic-hydraulic methods in perennial rivers.

    PubMed

    Abdi, Reza; Yasi, Mehdi

    2015-01-01

    The assessment of environmental flows in rivers is of vital importance for preserving riverine ecosystem processes. This paper addresses the evaluation of environmental flow requirements in three reaches along a typical perennial river (the Zab transboundary river, in north-west Iran), using different hydraulic, hydrological and ecological methods. The main motivation of this study was the construction of three dams and the inter-basin transfer of water from the Zab River to Urmia Lake. Eight hydrological methods (i.e. Tennant, Tessman, flow duration curve analysis, range of variability approach, Smakhtin, flow duration curve shifting, desktop reserve, and 7Q2&10 (7-day low flow with a 2- and 10-year return period)); two hydraulic methods (slope value and maximum curvature); and two habitat simulation methods (hydraulic-ecologic, and a Q equation based on water quality indices) were used. The ecological needs of the riverine key species (mainly the fish Barbus capito), river geometries, the natural flow regime and the environmental status of river management were the main indices for determining the minimum flow requirements. The results indicate that on the order of 35%, 17% and 18% of the mean annual flow should be maintained for the upper, middle and downstream river reaches, respectively. The monthly flow rates allocated in the steering programs of the three dams are not sufficient to preserve the life of the Zab River.
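
    Two of the hydrological rules listed above are simple enough to sketch directly. The daily flow series below is invented; the Tennant method takes a fixed percentage of the mean annual flow, and the "7Q" statistics start from the minimum 7-day average flow (a full 7Q2/7Q10 estimate additionally needs a multi-year return-period analysis).

```python
# Illustrative sketch of two hydrological environmental-flow rules on
# made-up daily flows (m^3/s).

daily_flow = [12, 10, 9, 8, 7, 7, 6, 6, 5, 5, 6, 8, 11, 14, 15, 13, 12, 10, 9, 8]

# Tennant: environmental flow as a fraction of the mean annual flow (MAF);
# 30% is the "good" habitat level in the Tennant classification.
maf = sum(daily_flow) / len(daily_flow)
tennant_30pct = 0.30 * maf

# minimum 7-day moving-average flow (the "7Q" part of 7Q2/7Q10)
seven_day_means = [sum(daily_flow[i:i + 7]) / 7
                   for i in range(len(daily_flow) - 6)]
min_7q = min(seven_day_means)
```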

  18. Image quality evaluation of full reference algorithm

    NASA Astrophysics Data System (ADS)

    He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan

    2018-03-01

    Image quality evaluation is a classic research topic; the goal is to design an algorithm whose evaluation values are consistent with subjective human judgment. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) and Feature Similarity (FSIM). The different evaluation methods were tested in Matlab, and their advantages and disadvantages were obtained by analysis and comparison. MSE and PSNR are simple, but they do not incorporate characteristics of the human visual system (HVS) into the evaluation, so their results are not ideal. SSIM correlates well with subjective judgment and is simple to compute, because it brings the human visual response into the evaluation; however, the SSIM method rests on a hypothesis, so its results are limited. The FSIM method can be used to test both grayscale and color images, and gives better results. Experimental results show that the new image quality evaluation algorithm based on FSIM is more accurate.
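
    The first three metrics discussed above have compact closed forms. The sketch below implements MSE and PSNR as usually defined, plus a single-window simplification of SSIM (the published SSIM averages this statistic over local windows); the images are invented flat grayscale lists.

```python
import math

# Minimal full-reference metrics: MSE, PSNR, and a global (one-window)
# simplification of SSIM, for 8-bit grayscale images given as flat lists.

def mse(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def psnr(x, y, peak=255.0):
    m = mse(x, y)
    return float("inf") if m == 0 else 10.0 * math.log10(peak ** 2 / m)

def ssim_global(x, y, peak=255.0):
    # one-window SSIM; the full method averages this over local windows
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = [50, 60, 70, 80, 90, 100, 110, 120]
noisy = [v + d for v, d in zip(ref, [2, -3, 1, 0, -2, 3, -1, 2])]
```

    An identical image scores MSE 0, infinite PSNR and SSIM 1, which is the consistency check usually run first.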

  19. Methods of reconstruction of multi-particle events in the new coordinate-tracking setup

    NASA Astrophysics Data System (ADS)

    Vorobyev, V. S.; Shutenko, V. V.; Zadeba, E. A.

    2018-01-01

    At the Unique Scientific Facility NEVOD (MEPhI), a large coordinate-tracking detector based on drift chambers is being developed for investigations of muon bundles generated by ultrahigh-energy primary cosmic rays. One of the main characteristics of a bundle is the muon multiplicity. Three methods of reconstructing multiple events were investigated: the sequential search method, the straight-line finding method and the histogram method. The last method determines the number of tracks with the same zenith angle in an event. It is the most suitable for determining muon multiplicity: because of the large distance to the point where the muons are generated, their trajectories are quasi-parallel. The paper presents the results of applying the three reconstruction methods to experimental data, as well as the first results of detector operation.
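
    The histogram method lends itself to a toy illustration: since bundle muons are quasi-parallel, the count in the most populated zenith-angle bin estimates the multiplicity. The angles and bin width below are invented, not detector values.

```python
from collections import Counter

# Toy version of the histogram method: bin reconstructed track zenith
# angles and take the count of the most populated bin as the estimated
# muon multiplicity (bundle tracks share nearly the same zenith angle).

def multiplicity_by_histogram(zenith_angles_deg, bin_width=2.0):
    bins = Counter(int(a // bin_width) for a in zenith_angles_deg)
    return max(bins.values())

# four quasi-parallel bundle tracks near 30 degrees plus two stray tracks
tracks = [30.1, 30.6, 31.2, 31.8, 44.0, 12.5]
estimated_multiplicity = multiplicity_by_histogram(tracks)
```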

  20. Convergence to Diagonal Form of Block Jacobi-type Processes

    NASA Astrophysics Data System (ADS)

    Hari, Vjeran

    2008-09-01

    The main result of recent research on convergence to diagonal form of block Jacobi-type processes is presented. For this purpose, all notions needed to describe the result are introduced. In particular, elementary block transformation matrices, simple and non-simple algorithms, block pivot strategies together with the appropriate equivalence relations are defined. The general block Jacobi-type process considered here can be specialized to take the form of almost any known Jacobi-type method for solving the ordinary or the generalized matrix eigenvalue and singular value problems. The assumptions used in the result are satisfied by many concrete methods.

  1. Factors influencing the results of faculty evaluation in Isfahan University of Medical Sciences.

    PubMed

    Kamali, Farahnaz; Yamani, Nikoo; Changiz, Tahereh; Zoubin, Fatemeh

    2018-01-01

    This study aimed to explore factors influencing the results of faculty member evaluation from the viewpoints of faculty members affiliated with Isfahan University of Medical Sciences, Isfahan, Iran. This qualitative study was conducted using a conventional content analysis method. Participants were faculty members of Isfahan University of Medical Sciences who, with maximum variation in sampling in mind, were chosen by purposive sampling. Semi-structured interviews were held with 11 faculty members until data saturation was reached. The interviews were transcribed verbatim and analyzed with the conventional content analysis method for theme development; the MAXQDA software was used for data management. The data analysis led to the development of two main themes, namely "characteristics of the educational system" and "characteristics of the faculty member evaluation system." The first main theme consists of three categories: "characteristics of influential people in evaluation," "features of the courses," and "background characteristics." The second theme has the following categories: "evaluation methods," "evaluation tools," "evaluation process," and "application of evaluation results." Each category comprises its own subcategories. Many factors affect the evaluation of faculty members and should be taken into account by educational policymakers to improve the quality of the educational process. In addition to the factors that directly influence the educational system, methodological problems in the evaluation system need special attention.

  2. Youth Physical Activity Resources Use and Activity Measured by Accelerometry

    PubMed Central

    Maslow, Andréa L.; Colabianchi, Natalie

    2014-01-01

    Objectives To examine whether utilization of physical activity resources (eg, parks) was associated with daily physical activity measured by accelerometry. Methods 111 adolescents completed a travel diary with concurrent accelerometry. The main exposure was self-reported utilization of a physical activity resource (none/1+ resources). The main outcomes were total minutes spent in daily 1) moderate-vigorous physical activity and 2) vigorous physical activity. Results Utilizing a physical activity resource was significantly associated with total minutes in moderate-vigorous physical activity. African-Americans and males had significantly greater moderate-vigorous physical activity. Conclusions Results from this study support the development and use of physical activity resources. PMID:21204684

  3. Financial time series analysis based on information categorization method

    NASA Astrophysics Data System (ADS)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    This paper applies the information categorization method to the analysis of financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply it to quantify the similarity of different stock markets, and report results for the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between stock markets differs across time periods, and that the similarity of the two stock markets became larger after these two crises. We also obtain similarity results for 10 stock indices in three regions; the method can distinguish the markets of different regions in the resulting phylogenetic trees. The results show that satisfactory information can be extracted from financial markets by this method, which can be applied not only to physiologic time series but also to financial time series.
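
    The abstract does not spell out the distance computation, so the sketch below is only a generic stand-in for this family of symbolization-based similarity methods: each return series is mapped to symbols, short symbol words are counted, and the word distributions are compared. The series, symbol alphabet and distance are all illustrative assumptions, not the paper's algorithm.

```python
from collections import Counter

# Generic stand-in for a symbolization-based sequence distance: map daily
# returns to {u, d, f}, count length-k symbol words, and compare the two
# word-frequency distributions with an L1 distance.

def symbolize(returns, eps=0.001):
    return "".join("u" if r > eps else "d" if r < -eps else "f"
                   for r in returns)

def word_distance(r1, r2, k=2):
    s1, s2 = symbolize(r1), symbolize(r2)
    c1 = Counter(s1[i:i + k] for i in range(len(s1) - k + 1))
    c2 = Counter(s2[i:i + k] for i in range(len(s2) - k + 1))
    words = set(c1) | set(c2)
    n1, n2 = sum(c1.values()), sum(c2.values())
    return sum(abs(c1[w] / n1 - c2[w] / n2) for w in words)

up_trend   = [0.01, 0.02, 0.01, 0.03, 0.01, 0.02]
up_trend2  = [0.02, 0.01, 0.02, 0.01, 0.03, 0.01]
down_trend = [-0.01, -0.02, -0.01, -0.03, -0.02, -0.01]

d_similar = word_distance(up_trend, up_trend2)      # two rising markets
d_dissimilar = word_distance(up_trend, down_trend)  # opposite regimes
```

    A matrix of such pairwise distances is the usual input to the tree-building step mentioned in the abstract.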

  4. ALTERNATIVE ROUTES FOR CATALYST PREPARATION: USE OF ULTRASOUND AND MICROWAVE IRRADIATION FOR THE PREPARATION OF VANADIUM PHOSPHORUS OXIDE CATALYST AND ITS ACTIVITY FOR HYDROCARBON OXIDATION

    EPA Science Inventory

    Vanadium phosphorus oxide (VPO) is a well-known catalyst used for the vapor phase n-butane oxidation to maleic anhydride. It is prepared by a variety of methods, all of which, however, eventually result in the same active phase. The two main methods for the preparation of its pr...

  5. At the origins of the Trojan Horse Method

    NASA Astrophysics Data System (ADS)

    Lattuada, Marcello

    2018-01-01

    During the seventies and eighties, a long experimental research program on quasi-free reactions at low energy was carried out by a small group of nuclear physicists, among whom Claudio Spitaleri was one of the main protagonists. Nowadays, a posteriori, the results of these studies can be considered an essential preparatory step toward the application of the Trojan Horse Method (THM) in Nuclear Astrophysics.

  6. FAST TRACK COMMUNICATION: On the Liouvillian solution of second-order linear differential equations and algebraic invariant curves

    NASA Astrophysics Data System (ADS)

    Man, Yiu-Kwong

    2010-10-01

    In this communication, we present a method for computing the Liouvillian solution of second-order linear differential equations via algebraic invariant curves. The main idea is to integrate Kovacic's results on second-order linear differential equations with the Prelle-Singer method for computing first integrals of differential equations. Some examples on using this approach are provided.

  7. Polyadenylation site prediction using PolyA-iEP method.

    PubMed

    Kavakiotis, Ioannis; Tzanis, George; Vlahavas, Ioannis

    2014-01-01

    This chapter presents a method called PolyA-iEP that has been developed for the prediction of polyadenylation sites. More precisely, PolyA-iEP is a method that recognizes mRNA 3'ends which contain polyadenylation sites. It is a modular system which consists of two main components. The first exploits the advantages of emerging patterns and the second is a distance-based scoring method. The outputs of the two components are finally combined by a classifier. The final results reach very high scores of sensitivity and specificity.

  8. Little meteorological workshop

    NASA Astrophysics Data System (ADS)

    Poler Čanić, K. Å.; Rasol, D.

    2010-09-01

    Little meteorological workshop (LMW) is a project the main goal of which is promotion and popularisation of meteorology in Croatia. The project has been taking place at the Science Festival in Zagreb since 2007 where the audience includes the general public. Since 2009 the project has been introduced as an extracurricular school activity in some primary schools where the main audience are children and teachers. Here, the methods used in the LMWs will be presented. Furthermore, the evaluation results of the LMWs that were held in schools will be shown.

  9. The possibility of concrete production on the Moon

    NASA Technical Reports Server (NTRS)

    Ishikawa, Noboru; Kanamori, Hiroshi; Okada, Takeji

    1992-01-01

    When a long-term lunar base is constructed, most of the materials for the construction will be natural resources on the Moon, mainly for economic reasons. In terms of economy and exploiting natural resources, concrete would be the most suitable material for construction. This paper describes the possibility of concrete production on the Moon. The possible production methods are derived from the results of a series of experiments that were carried out taking two main environmental features, low gravity acceleration and vacuum, into consideration.

  10. Suitability of the methylene blue test for determination of cation exchange capacity of clay minerals related to ammonium acetate method

    NASA Astrophysics Data System (ADS)

    Milošević, Maja; Logar, Mihovil; Dojčinović, Biljana; Erić, Suzana

    2015-04-01

    Cation exchange capacity (CEC) is one of the most important parameters of clay minerals, reflecting their ability to exchange cations with liquid phases in near contact. CEC measurements are used to characterize sample plasticity and adsorbing and swelling properties, which in turn define industrial usage. Several methods have been developed over the years for determination of layer charge, charge density, charge distribution, etc., and have been published in numerous papers (Czimerova et al., 2006; Yukselen and Kaya, 2008). The main goal of the present study is to compare the suitability of a more recent method, the methylene blue test, against an older method, ammonium acetate, for determining CEC. For this study we selected one montmorillonite clay (Bogovina, Serbia) and two mainly kaolinitic clays (Miličinica, Serbia). The chemicals used for CEC determination were a methylene blue (MB) solution (14×10⁻⁶ M/ml) and an ammonium acetate (AA) solution (1 M). The results obtained generally show lower values for the MB method; the main difference is due to molecular aggregation of MB on the clay surface. The AA method is highly sensitive to the presence of CaO: release of Ca ions from the sample into the solution can limit the saturation of exchange sites by the ammonium ion, which is clearly visible in the case of the montmorillonite clay. Fe²⁺ and Mg ions are difficult for the ammonium ion to displace because of their ionic radii, but the MB molecule faces no such restriction in removing them from the exchange sites. The MB solution, even at a low concentration (2×10⁻⁶ M/ml), moves ions from their positions effectively, which is already visible after adding a small quantity of solution (25 cm³).
    Both the MB titration and the MB spot test yield similar results and are much simpler methods than AA; they also provide additional information, such as the specific surface area (external and internal), whereas the AA method only provides information about cations in exchangeable positions. Both the methylene blue test and the ammonium acetate method have advantages and disadvantages and differ in their sample-preparation requirements, but in general the choice of method depends on the specific application of the given sample. References: Czimerova, A., Bujdak, J. and Dohrmann, R., Applied Clay Science 34 (2006) 2-13; Yukselen, Y. and Kaya, A., Engineering Geology 102 (2008) 38-45
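
    As a worked example of the titration arithmetic behind an MB-based CEC value (all numbers invented, not from this study): the millimoles of MB adsorbed at the endpoint, per gram of clay, scaled to 100 g, give the CEC in meq/100 g, since MB carries a single positive charge.

```python
# Hypothetical MB-titration arithmetic for a CEC value; the concentration,
# endpoint volume and sample mass are illustrative, not measured data.

mb_molarity_mmol_per_ml = 0.014   # 14e-6 mol/ml expressed in mmol/ml
volume_at_endpoint_ml = 25.0      # titrant added when the endpoint appears
sample_mass_g = 2.0

mmol_adsorbed = mb_molarity_mmol_per_ml * volume_at_endpoint_ml
# MB+ is monovalent, so mmol adsorbed equals meq exchanged
cec_meq_per_100g = mmol_adsorbed / sample_mass_g * 100.0
```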

  11. A YinYang bipolar fuzzy cognitive TOPSIS method to bipolar disorder diagnosis.

    PubMed

    Han, Ying; Lu, Zhenyu; Du, Zhenguang; Luo, Qi; Chen, Sheng

    2018-05-01

    Bipolar disorder is often misdiagnosed as unipolar depression in clinical practice. The main reason is that, unlike in other diseases, bipolarity is the norm rather than the exception in bipolar disorder diagnosis. The YinYang bipolar fuzzy set captures bipolarity and has been successfully used to construct a unified inference-based mathematical modeling method for bipolar disorder clinical diagnosis. Nevertheless, symptoms and their interrelationships are not considered in the existing method, limiting its ability to describe the complexity of bipolar disorder. In this paper, a YinYang bipolar fuzzy multi-criteria group decision making method for bipolar disorder clinical diagnosis is therefore developed. Compared with the existing method, the new one is more comprehensive. Its merits are as follows. First, multi-criteria group decision making is introduced into bipolar disorder diagnosis to account for different symptoms and multiple doctors' opinions. Second, the discreet diagnosis principle is adopted through a revised TOPSIS method. Last but not least, a YinYang bipolar fuzzy cognitive map is provided for understanding the interrelations among symptoms. An illustrative case demonstrates the feasibility, validity and necessity of the theoretical results, and a comparative analysis shows that the diagnosis is more accurate when the interrelations among symptoms are considered. In conclusion, the main contribution of this paper is a comprehensive mathematical approach for improving the accuracy of bipolar disorder clinical diagnosis, in which both bipolarity and complexity are considered. Copyright © 2018 Elsevier B.V. All rights reserved.
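
    The ranking core that the paper's revised method builds on is classical TOPSIS, which can be sketched without the fuzzy machinery: normalize the decision matrix, weight it, and rank alternatives by relative closeness to the ideal solution. The alternatives, criteria and weights below are invented, and all criteria are treated as benefit criteria.

```python
import math

# Plain (non-fuzzy) TOPSIS: vector-normalize, apply criterion weights,
# then score each alternative by closeness to the ideal solution.

def topsis(matrix, weights):
    n_alt, n_crit = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_alt)))
             for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
         for i in range(n_alt)]
    ideal = [max(v[i][j] for i in range(n_alt)) for j in range(n_crit)]
    anti = [min(v[i][j] for i in range(n_alt)) for j in range(n_crit)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - m) ** 2 for x, m in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# three hypothetical alternatives rated against three criteria
ratings = [[7, 9, 8], [8, 7, 6], [3, 4, 5]]
closeness = topsis(ratings, [0.5, 0.3, 0.2])
best = max(range(len(closeness)), key=closeness.__getitem__)
```

    The paper's variant replaces the crisp ratings with YinYang bipolar fuzzy values and aggregates several decision makers, but the closeness-ranking step has this shape.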

  12. Qualitative risk assessment during polymer mortar test specimens preparation - methods comparison

    NASA Astrophysics Data System (ADS)

    Silva, F.; Sousa, S. P. B.; Arezes, P.; Swuste, P.; Ribeiro, M. C. S.; Baptista, J. S.

    2015-05-01

    Polymer binder modification with inorganic nanomaterials (NM) could be a potential and efficient solution to control the matrix flammability of polymer concrete (PC) materials without sacrificing other important properties. Occupational exposure can occur all along the life cycle of a NM and of “nanoproducts”, from research through scale-up, product development, manufacturing and end of life. The main objective of the present study is to analyse and compare different qualitative risk assessment methods applied during the production of polymer mortars (PM) with NM. The laboratory-scale production process was divided into 3 main phases (pre-production, production and post-production), which allowed the assessment methods to be tested in different situations. The risk assessment involved in the manufacturing process of PM was made using qualitative analyses based on: the French Agency for Food, Environmental and Occupational Health & Safety method (ANSES); Control Banding Nanotool (CB Nanotool); the Ecole Polytechnique Fédérale de Lausanne method (EPFL); the Guidance working safely with nanomaterials and nanoproducts (GWSNN); the Istituto Superiore per la Prevenzione e la Sicurezza del Lavoro, Italy, method (ISPESL); the Precautionary Matrix for Synthetic Nanomaterials (PMSN); and Stoffenmanager Nano. It was verified that the different methods applied produce different final results. In phases 1 and 3 the risk tends to be classified as medium-high, while for phase 2 the more common result is a medium level. The use of qualitative methods needs to be improved by defining narrower criteria for method selection in each assessed situation, bearing in mind that the uncertainties are also a relevant factor when dealing with risks related to the nanotechnology field.

  13. Comparison of Composition and Anticaries Effect of Galla Chinensis Extracts with Different Isolation Methods

    PubMed Central

    Huang, Xuelian; Deng, Meng; Liu, Mingdong; Cheng, Lei; Exterkate, R.A.M.; Li, Jiyao; Zhou, Xuedong; Ten Cate, Jacob. M.

    2017-01-01

    Objectives: Galla chinensis water extract (GCE) has been demonstrated to inhibit dental caries by favorably shifting the demineralization/remineralization balance of enamel and inhibiting the biomass and acid formation of dental biofilm. The present study compared the composition and anticaries effect of Galla chinensis extracts obtained with different isolation methods, aiming to improve the efficacy of caries prevention. Methods: The compositions of the water extract (GCE), the ethanol extract (eGCE) and commercial tannic acid were compared. High performance liquid chromatography coupled to electrospray ionization-time of flight-mass spectrometry (HPLC-ESI-TOF-MS) was used to analyze the main ingredients. An in vitro pH-cycling regime and a polymicrobial biofilm model were used to assess the ability of the different Galla chinensis extracts to inhibit enamel demineralization, acid formation and biofilm formation. Results: GCE, eGCE and tannic acid all contained high levels of total phenolics. HPLC-ESI-TOF-MS analysis showed that the main ingredient of GCE was gallic acid (GA), while eGCE mainly contained 4-7 galloylglucopyranoses (GGs) and tannic acid mainly contained 5-10 GGs. Furthermore, eGCE and tannic acid inhibited enamel demineralization, acid formation and biofilm formation better than GCE. Conclusions: Galla chinensis extracts with higher tannin content are suggested to have higher potential to prevent dental caries. PMID:28979574

  14. Filtering methods for broadcast authentication against PKC-based denial of service in WSN: a survey

    NASA Astrophysics Data System (ADS)

    Afianti, Farah; Wirawan, Iwan; Suryani, Titiek

    2017-11-01

    Broadcast authentication is used to distinguish legitimate packets sent by an authorized user; a received packet can then be forwarded or used for further processing. The use of digital signatures is one promising method, but it carries high complexity, especially in the verification process. Adversaries exploit this by forcing nodes to verify large numbers of false packets. This kind of Denial of Service (DoS) attack against the main signature can be mitigated by using pre-authentication methods as a first layer that filters false packet data. The objective of the filter is not to replace the main signature but to complement the actual verification in the sensor node. This paper compares the computation, storage and communication costs of several filters. The results show that the Pre-Authenticator and the DoS-Attack-Resistant scheme have lower overhead than the others, at the cost of requiring a powerful sender. Moreover, key chains are a promising method because of their efficiency and effectiveness.
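
    The key-chain idea mentioned above can be sketched with a one-way hash chain, as used in TESLA-style broadcast authentication: keys are disclosed in the reverse order of their generation, so each newly disclosed key is authenticated with one cheap hash instead of a public-key verification. The seed and chain length below are illustrative.

```python
import hashlib

# One-way key chain sketch: generate the chain forward by hashing, commit
# to the last element, then authenticate each disclosed key by checking
# that it hashes to the previously authenticated one.

def make_chain(seed: bytes, length: int):
    chain = [seed]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain  # chain[-1] is the public commitment, released first

def verify_next(disclosed_key: bytes, last_authentic_key: bytes) -> bool:
    # a single hash replaces an expensive signature verification
    return hashlib.sha256(disclosed_key).digest() == last_authentic_key

chain = make_chain(b"secret-seed", 4)
commitment = chain[-1]
ok = verify_next(chain[-2], commitment)          # legitimate disclosure
forged = verify_next(b"bogus-key", commitment)   # attacker's guess fails
```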

  15. Can Selforganizing Maps Accurately Predict Photometric Redshifts?

    NASA Technical Reports Server (NTRS)

    Way, Michael J.; Klose, Christian

    2012-01-01

    We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization approach called the self-organizing map (SOM). A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using delta(z) = z(sub phot) - z(sub spec)) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for the PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
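
    A minimal SOM-as-regressor can be sketched in pure Python: train a small one-dimensional map on a photometric feature, attach to each node the mean redshift of the training points it wins, and predict with the best-matching unit. This is a toy on synthetic noiseless data, not the paper's SOM configuration, and the RMSE it reports reflects only the map's quantization error.

```python
import math
import random

# Toy 1-D self-organizing map used as a regressor on synthetic
# (feature, redshift) pairs, with RMSE computed as in the abstract
# (delta z = z_phot - z_spec).

random.seed(0)
train = [(x / 50.0, 0.5 * x / 50.0) for x in range(50)]  # (feature, z)

n_nodes = 8
nodes = [random.random() for _ in range(n_nodes)]

for epoch in range(40):
    lr = 0.5 * (1 - epoch / 40)              # decaying learning rate
    radius = max(1, int(3 * (1 - epoch / 40)))
    for x, _ in train:
        bmu = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
        for i in range(n_nodes):             # pull the BMU neighbourhood
            if abs(i - bmu) <= radius:
                nodes[i] += lr * (x - nodes[i])

# attach the mean training redshift to each node
sums = [0.0] * n_nodes
counts = [0] * n_nodes
for x, z in train:
    bmu = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
    sums[bmu] += z
    counts[bmu] += 1
node_z = [s / c if c else 0.0 for s, c in zip(sums, counts)]

def predict(x):
    return node_z[min(range(n_nodes), key=lambda i: abs(nodes[i] - x))]

rmse = math.sqrt(sum((predict(x) - z) ** 2 for x, z in train) / len(train))
```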

  16. Evaluation of data requirements for computerized constructability analysis of pavement rehabilitation projects.

    DOT National Transportation Integrated Search

    2013-08-01

    This research aimed to evaluate the data requirements for computer assisted construction planning : and staging methods that can be implemented in pavement rehabilitation projects in the state of : Georgia. Results showed that two main issues for the...

  17. Implicit Runge-Kutta Methods with Explicit Internal Stages

    NASA Astrophysics Data System (ADS)

    Skvortsov, L. M.

    2018-03-01

    The main computational cost of implicit Runge-Kutta methods lies in solving a system of algebraic equations at every step. By introducing explicit stages, it is possible to increase the stage (or pseudo-stage) order of the method, which makes it possible to increase the accuracy and avoid order reduction in solving stiff problems, without the additional cost of solving more algebraic equations. The paper presents implicit methods with an explicit first stage and one or two explicit internal stages. The results of solving test problems are compared with similar methods having no explicit internal stages.
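
    The cost referred to above, an algebraic solve at every implicit stage, can be seen in the smallest case: the one-stage implicit midpoint rule applied to a stiff scalar test problem, with the stage equation solved by Newton iteration. The problem, step size and tolerances below are illustrative, not from the paper.

```python
import math

# Implicit midpoint rule (a one-stage implicit Runge-Kutta method) on the
# stiff scalar problem y' = -50*(y - cos t). Each step requires solving
# the stage equation k = f(t + h/2, y + (h/2)*k), done here by Newton.

lam = 50.0

def f(t, y):
    return -lam * (y - math.cos(t))

def midpoint_step(t, y, h):
    k = f(t, y)                       # explicit starting guess for Newton
    for _ in range(20):
        g = k - f(t + h / 2, y + h / 2 * k)
        dg = 1 + lam * h / 2          # dg/dk for this right-hand side
        k_new = k - g / dg
        if abs(k_new - k) < 1e-12:
            k = k_new
            break
        k = k_new
    return y + h * k

t, y, h = 0.0, 1.0, 0.05
for _ in range(200):                  # integrate to t = 10
    y = midpoint_step(t, y, h)
    t += h

# after the stiff transient decays, the solution is attracted to ~cos(t)
err = abs(y - math.cos(t))
```

    For a nonlinear system, the scalar division by dg becomes a linear solve with a Jacobian, which is exactly the per-step cost the abstract seeks to avoid inflating.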

  18. Analysis on the hot spot and trend of the foreign assembly building research

    NASA Astrophysics Data System (ADS)

    Bi, Xiaoqing; Luo, Yanbing

    2017-03-01

    First, the paper analyzes research fronts in assembly (prefabricated) building over the past 15 years. It mainly adopts the co-word analysis method: a co-word matrix is constructed and converted into a correlation matrix and then a dissimilarity matrix; on this basis, factor analysis, cluster analysis and multidimensional scaling are used to display the structure of the prefabricated construction field. Finally, the results of the analysis are discussed, and the current research foci of foreign prefabricated construction are summarized as concentrating on seven aspects: embankment construction, wood construction, bridge construction, crane layout, PCM walls and glass systems, neural-network-based testing, and energy saving and recycling; the future development trend of the field is also forecast.
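
    The matrix pipeline described above (co-word matrix to dissimilarity matrix) can be illustrated with a toy example. The keyword counts are invented, and the association measure shown, the equivalence index common in co-word analysis, stands in for whatever normalization the paper used.

```python
# Toy co-word pipeline: keyword co-occurrence counts -> association
# (equivalence index) -> dissimilarity matrix, the input to clustering
# and multidimensional scaling.

keywords = ["wood construction", "bridge construction", "crane layout"]
# co[i][j]: papers where keywords i and j co-occur (diagonal = total papers)
co = [[10, 6, 1],
      [6, 12, 2],
      [1, 2, 8]]

n = len(keywords)
# equivalence index E_ij = co_ij^2 / (co_ii * co_jj); dissimilarity = 1 - E
dissimilarity = [[0.0 if i == j
                  else 1 - co[i][j] ** 2 / (co[i][i] * co[j][j])
                  for j in range(n)] for i in range(n)]
```

    Feeding this matrix to hierarchical clustering or MDS reproduces the grouping step the paper performs at full scale.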

  19. Far-infrared-light shadowgraphy for high extraction efficiency of extreme ultraviolet light from a CO2-laser-generated tin plasma

    NASA Astrophysics Data System (ADS)

    Matsukuma, Hiraku; Hosoda, Tatsuya; Suzuki, Yosuke; Yogo, Akifumi; Yanagida, Tatsuya; Kodama, Takeshi; Nishimura, Hiroaki

    2016-08-01

    The two-color, double-pulse method is an efficient scheme for generating the extreme ultraviolet light used to fabricate next-generation semiconductor microchips. In this method, a Nd:YAG laser pulse is used to expand a tin droplet several tens of micrometers in scale, and a CO2 laser pulse is subsequently directed at the expanded tin vapor after an appropriate delay time. We propose the use of shadowgraphy with a CO2-laser probe-pulse scheme to optimize the CO2 main drive laser. The distribution of absorption coefficients is derived from the experiment, and the results are converted to a practical absorption rate for the CO2 main drive laser.

  20. Monitoring of rock glacier dynamics by multi-temporal UAV images

    NASA Astrophysics Data System (ADS)

    Morra di Cella, Umberto; Pogliotti, Paolo; Diotri, Fabrizio; Cremonese, Edoardo; Filippa, Gianluca; Galvagno, Marta

    2015-04-01

    In recent years, several steps forward have been made in the comprehension of rock glacier dynamics, mainly because of their potential evolution into rapid mass-movement phenomena. Monitoring the surface movement of creeping mountain permafrost is important for understanding the potential effect of ongoing climate change on such landforms. This study presents the reconstruction of two years of surface movements and DEM changes obtained by multi-temporal analysis of UAV images (acquired with a SenseFly Swinglet CAM drone). The movement rates obtained by photogrammetry are compared to those obtained by repeated differential GNSS campaigns on almost fifty points distributed over the rock glacier. The results reveal very good agreement between the velocities obtained by the two methods, as well as between the vertical displacements at fixed points. The strengths, weaknesses and practical subtleties of this method will be discussed. Such a method is very promising, mainly for remote regions with difficult access.

  1. Atmospheric turbulence characterization with the Keck adaptive optics systems. I. Open-loop data.

    PubMed

    Schöck, Matthias; Le Mignant, David; Chanan, Gary A; Wizinowich, Peter L; van Dam, Marcos A

    2003-07-01

    We present a detailed investigation of different methods of the characterization of atmospheric turbulence with the adaptive optics systems of the W. M. Keck Observatory. The main problems of such a characterization are the separation of instrumental and atmospheric effects and the accurate calibration of the devices involved. Therefore we mostly describe the practical issues of the analysis. We show that two methods, the analysis of differential image motion structure functions and the Zernike decomposition of the wave-front phase, produce values of the atmospheric coherence length r0 that are in excellent agreement with results from long-exposure images. The main error source is the calibration of the wave-front sensor. Values determined for the outer scale L0 are consistent between the methods and with typical L0 values found at other sites, that is, of the order of tens of meters.

  2. Knowledge-attitude-practice survey among Portuguese gynaecologists regarding combined hormonal contraceptives methods.

    PubMed

    Bombas, Teresa; Costa, Ana Rosa; Palma, Fátima; Vicente, Lisa; Sá, José Luís; Nogueira, Ana Maria; Andrade, Sofia

    2012-04-01

    Objectives To evaluate knowledge, attitudes and practices of Portuguese gynaecologists regarding combined hormonal contraceptive methods. Methods A cross-sectional survey was conducted among 303 gynaecologists. Results Ninety percent of the gynaecologists considered that deciding on a contraceptive method is a process in which the woman has her say. Efficacy, safety and the woman's preference were the major factors influencing gynaecologists, while efficacy, tolerability and ease of use were the major factors the specialists perceived to influence women's choices. Gynaecologists believed that only 2% of women taking the pill were 100% compliant, compared to 48% of those using the patch and 75% of those using the ring. The lower risk of omission was the strong point of the latter methods. Side effects were the main reason for changing to another method. Vaginal manipulation was the most difficult topic to discuss. Conclusions Most gynaecologists decided on the contraceptive method together with the woman. The main reasons behind a gynaecologist's recommendation of a given contraceptive method and behind the woman's choice were different. Counselling implies an open discussion, and topics related to sexuality were considered difficult to discuss. Improving communication skills and understanding women's requirements are critical for contraceptive counselling.

  3. An Improved Image Ringing Evaluation Method with Weighted Sum of Gray Extreme Value

    NASA Astrophysics Data System (ADS)

    Yang, Ling; Meng, Yanhua; Wang, Bo; Bai, Xu

    2018-03-01

    Blind image restoration algorithms usually produce ringing that is most visible at edges. The ringing phenomenon is mainly affected by noise, by the type of restoration algorithm, and by errors in blur-kernel estimation during restoration. Based on the physical mechanism of ringing, a method for evaluating ringing in blindly restored images is proposed. The method extracts the overshoot and ripple regions of the ringing image and computes weighted statistics of the regional gradient values. With weights set through multiple experiments, edge information is used to characterize edge details, determine the weights and quantify the severity of the ringing artifact, yielding an evaluation method for the ringing caused by blind restoration. Experimental results show that the method can effectively evaluate the ringing artifact in images restored by different algorithms and with different restoration parameters, and the evaluation results are consistent with visual assessment.

  4. Aspects of rf-heating and gas-phase doping of large scale silicon crystals grown by the Float Zone technique

    NASA Astrophysics Data System (ADS)

    Zobel, F.; Mosel, F.; Sørensen, J.; Dold, P.

    2018-05-01

    Float Zone growth of silicon crystals is known as the method of choice for providing excellent material properties. The basic principle of this technique is radio-frequency induction heating, and the main aspects of this method are discussed in this article. In contrast to other methods, one of the advantages of the Float Zone technique is the possibility of in-situ doping via the gas phase. Experimental results on this topic are shown and discussed.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing Yanfei, E-mail: yanfeijing@uestc.edu.c; Huang Tingzhu, E-mail: tzhuang@uestc.edu.c; Duan Yong, E-mail: duanyong@yahoo.c

    This study focuses on iterative solutions, with simple diagonal preconditioning, of two complex-valued nonsymmetric systems of linear equations arising from a computational chemistry model problem proposed by Sherry Li of NERSC. Numerical experiments show the feasibility of iterative methods for these problems to some extent and reveal the competitiveness of our recently proposed Lanczos biconjugate A-orthonormalization methods with other classic and popular iterative methods. The experimental results also indicate that application-specific preconditioners may be required for accelerating convergence.
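To illustrate what "simple diagonal preconditioning" means, the sketch below runs a plain Jacobi sweep (each equation scaled by the inverse of its diagonal entry) on a small, invented complex-valued nonsymmetric system. This is far simpler than the Lanczos biconjugate A-orthonormalization solvers the study benchmarks; it only shows the diagonal-scaling idea.

```python
# Jacobi fixed-point iteration on a 2x2 diagonally dominant complex system.
# The matrix and right-hand side are invented for illustration.

def jacobi_solve(A, b, iters=50):
    n = len(b)
    x = [0j] * n
    for _ in range(iters):
        # each unknown is updated using the inverse of its diagonal entry
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4 + 1j, 1 + 0j],
     [0 + 1j, 3 - 1j]]   # complex, nonsymmetric, diagonally dominant
b = [5 + 2j, 2 - 3j]
x = jacobi_solve(A, b)
```

Diagonal dominance guarantees convergence here; for the harder chemistry systems in the study, stronger application-specific preconditioners may be needed, as the abstract notes.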

  6. Method of radiation degradation of PTFE under vacuum conditions

    NASA Astrophysics Data System (ADS)

    Korenev, Sergey

    2004-09-01

    A new method for radiation degradation of polytetrafluoroethylene (PTFE) under vacuum conditions is considered in this report. The combination of a glow gas discharge and an electrical surface discharge (on the surface of and inside the PTFE) increases the efficiency of thermal-radiation degradation. The main mechanism of this degradation method is the breaking of C-C and C-F bonds. The vacuum conditions allow the concentration of toxic compounds, such as HF, to be decreased. Experimental results for the degradation of PTFE are presented.

  7. Research on Estimates of Xi’an City Life Garbage Pay-As-You-Throw Based on Two-part Tariff method

    NASA Astrophysics Data System (ADS)

    Yaobo, Shi; Xinxin, Zhao; Fuli, Zheng

    2017-05-01

    Domestic waste is a quasi-public good whose pricing cannot be separated from the pricing principles of public economics. Based on the two-part tariff method used for urban public utilities, this paper designs a pricing model to match the charging method and estimates the pay-as-you-throw standard using data from the past five years in Xi'an. Finally, the paper summarizes the main results and proposes corresponding policy recommendations.
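A two-part tariff combines a fixed access fee (covering fixed collection and disposal capacity costs) with a volumetric pay-as-you-throw charge. The sketch below is a minimal illustration of that structure; the function name and all parameter values are invented, not taken from the paper's estimated standards for Xi'an.

```python
# Hypothetical two-part tariff: fixed access fee plus per-unit charge on the
# quantity of waste discarded. All values are illustrative.

def two_part_tariff(fixed_fee, unit_price, quantity):
    """Total charge = fixed access fee + unit price * discarded quantity."""
    return fixed_fee + unit_price * quantity

# A household discarding 30 kg of waste in a month:
monthly_charge = two_part_tariff(fixed_fee=5.0, unit_price=0.2, quantity=30.0)
print(monthly_charge)
```

The fixed part recovers capacity costs regardless of use, while the volumetric part gives households a marginal incentive to reduce waste.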

  8. Exact analytic solutions of Maxwell's equations describing propagating nonparaxial electromagnetic beams.

    PubMed

    Garay-Avendaño, Roger L; Zamboni-Rached, Michel

    2014-07-10

    In this paper, we propose a method that is capable of describing in exact and analytic form the propagation of nonparaxial scalar and electromagnetic beams. The main features of the method presented here are its mathematical simplicity and the fast convergence in the cases of highly nonparaxial electromagnetic beams, enabling us to obtain high-precision results without the necessity of lengthy numerical simulations or other more complex analytical calculations. The method can be used in electromagnetism (optics, microwaves) as well as in acoustics.

  9. Thin layer activation techniques at the U-120 cyclotron of Bucharest

    NASA Astrophysics Data System (ADS)

    Constantinescu, B.; Ivanov, E. A.; Pascovici, G.; Popa-Simil, L.; Racolta, P. M.

    1994-05-01

    The Thin Layer Activation (TLA) technique is a nuclear method used especially for investigations of different types of wear (or corrosion). Experimental results on selection criteria of nuclear reactions for various tribological studies, using the IPNE U-120 classical variable-energy cyclotron, are presented. Measuring methods for the main types of wear phenomena and home-made instrumentation dedicated to TLA industrial applications are also reported. Some typical TLA tribological applications, including a nuclear scanning method to obtain the wear profile of piston rings, are presented as well.

  10. Study on the Structures of Two Booster Pellets Having High Initiation Capacity

    NASA Astrophysics Data System (ADS)

    Shuang-Qi, Hu; Hong-Rong, Liu; Li-shuang, Hu; Xiong, Cao; Xiang-Chao, Mi; Hai-Xia, Zhao

    2014-05-01

    Insensitive munitions (IM) improve the survivability of both weapons and their associated platforms, which can lead to a reduction in casualties, mission losses, and whole life costs. All weapon systems contain an explosive train that needs to meet IM criteria but reliably initiate a main charge explosive. To ensure that these diametrically opposed requirements can be achieved, new highly effective booster charge structures were designed. The initiation capacity of the two booster pellets was studied using varied composition and axial-steel-dent methods. The results showed that the two new booster pellets can initiate standard main charge pellets with less explosive mass than the ordinary cylindrical booster pellet. The numerical simulation results were in good agreement with the experiment results.

  11. Verification of a neutronic code for transient analysis in reactors with Hex-z geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez-Pintor, S.; Verdu, G.; Ginestar, D.

    Because of the geometry of the fuel bundles, simulating reactors such as VVER reactors requires methods that can deal with hexagonal prisms as the basic elements of the spatial discretization. The main features of a code based on a high-order finite element method for the spatial discretization of the neutron diffusion equation and an implicit difference method for the time discretization are presented, and the performance of the code is tested by solving the first exercise of the AER transient benchmark. The obtained results are compared with the reference results of the benchmark and with the results provided by the PARCS code. (authors)

  12. Comparing the index-flood and multiple-regression methods using L-moments

    NASA Astrophysics Data System (ADS)

    Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.

    In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward's cluster and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was done using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as two regional flood frequency methods. The results of factor analysis showed that the length of the main waterway, the compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward's clustering method. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criteria, the GEV distribution was identified as the most robust among the five candidate distributions for all the proposed sub-regions of the study area; in general, the generalised extreme value distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve-fitting (plotting position) method. In general, the index-flood method gives more reliable estimates of flood magnitudes for different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin in central Iran. To estimate floods of various return periods for gauged catchments in the study area, the mean annual peak flood of a catchment may be multiplied by the corresponding growth factors computed using the GEV distribution.
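The final step described above (index flood times GEV growth factor) can be sketched as follows. The GEV quantile formula uses the common Hosking parameterization; all parameter values below are invented for illustration, not the fitted values from the study.

```python
# Index-flood estimate: T-year flood = site's mean annual peak flood (the
# "index flood") * regional growth factor from a fitted GEV distribution.
# GEV parameters (loc, scale, shape) and the index flood are hypothetical.
import math

def gev_growth_factor(loc, scale, shape, T):
    """T-year growth factor from a regional GEV (Hosking parameterization,
    shape != 0): x(F) = loc + scale * (1 - (-ln F)^shape) / shape."""
    F = 1.0 - 1.0 / T          # non-exceedance probability
    return loc + scale * (1.0 - (-math.log(F)) ** shape) / shape

def index_flood_quantile(mean_annual_flood, growth_factor):
    return mean_annual_flood * growth_factor

gf100 = gev_growth_factor(loc=0.8, scale=0.3, shape=0.1, T=100)
q100 = index_flood_quantile(mean_annual_flood=120.0, growth_factor=gf100)
```

Because the growth curve is estimated regionally, only the site's mean annual peak flood is needed locally, which is what makes the method attractive for short records.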

  13. High-Level Ab Initio Calculations of Intermolecular Interactions: Heavy Main-Group Element π-Interactions.

    PubMed

    Krasowska, Małgorzata; Schneider, Wolfgang B; Mehring, Michael; Auer, Alexander A

    2018-05-02

    This work reports high-level ab initio calculations and a detailed analysis of the nature of intermolecular interactions of heavy main-group element compounds and π systems. For this purpose we have chosen a set of benchmark molecules of the form MR3, in which M = As, Sb, or Bi, and R = CH3, OCH3, or Cl. Several methods for the description of weak intermolecular interactions are benchmarked, including DFT-D, DFT-SAPT, MP2, and high-level coupled cluster methods in the DLPNO-CCSD(T) approximation. Using local energy decomposition (LED) and an analysis of the electron density, details of the nature of this interaction are unraveled. The results yield insight into the nature of dispersion and donor-acceptor interactions in this type of system, including systematic trends in the periodic table, and also provide a benchmark for dispersion interactions in heavy main-group element compounds. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Automated Quantitative Spectral Classification of Stars in Areas of the main Meridional Section of the Galaxy

    NASA Astrophysics Data System (ADS)

    Shvelidze, T. D.; Malyuto, V. D.

    Quantitative spectral classification of F, G and K stars, observed with the 70-cm telescope of the Abastumani Astrophysical Observatory in areas of the main meridional section of the Galaxy and for which proper motion data are available, has been performed. Fundamental parameters have been obtained for 333 stars in four areas. Space densities of stars of different spectral types, the stellar luminosity function and the relationships between the kinematics and metallicity of stars have been studied. The results have confirmed and complemented the conclusions drawn from some previous spectroscopic and photometric surveys. Many plates have been obtained for other important directions in the sky: the Kapteyn areas, the Galactic anticentre and the main meridional section of the Galaxy. These data can be treated with the same quantitative method applied here. The method may also be applied to other available and future spectroscopic data of similar resolution, notably data obtained with large-format CCD detectors on Schmidt-type telescopes.

  15. Leak Location and Classification in the Space Shuttle Main Engine Nozzle by Infrared Testing

    NASA Technical Reports Server (NTRS)

    Russell, Samuel S.; Walker, James L.; Lansing, Mathew

    2003-01-01

    The Space Shuttle Main Engine (SSME) nozzle is composed of cooling tubes brazed to the inside of a conical structural jacket. Because of the geometry, there are regions that cannot be inspected for leaks using the bubble-solution and low-pressure method. The temperature change due to escaping gas is detectable on the surface of the nozzle under the correct conditions. The methods and results presented in this summary address the thermographic identification of leaks in Space Shuttle Main Engine nozzles. A highly sensitive digital infrared camera is used to record the minute temperature change associated with a leak source, such as a crack or pinhole, hidden within the nozzle wall by observing the inner "hot wall" surface as the nozzle is pressurized. These images are enhanced by digitally subtracting a thermal reference image taken before pressurization, greatly diminishing background noise. The method provides a nonintrusive way of localizing the leaking tube and the exact leak source position to within a very small axial distance. Many of the factors that influence the inspectability of the nozzle are addressed, including pressure rate, peak pressure, gas type, ambient temperature and surface preparation.
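The reference-subtraction step described above can be sketched in a few lines: subtracting a pre-pressurization thermal frame from a pressurized frame suppresses the static background so the small temperature change at the leak stands out. The 3x3 "frames" and temperature values below are invented.

```python
# Minimal sketch of thermal reference subtraction for leak localization.
# Frames are tiny lists of lists of temperatures (degrees C); values invented.

reference = [[20.0, 20.1, 20.0],
             [20.0, 20.0, 20.1],
             [20.1, 20.0, 20.0]]
pressurized = [[20.0, 20.1, 20.0],
               [20.0, 20.6, 20.1],   # escaping gas warms the centre pixel
               [20.1, 20.0, 20.0]]

# pixel-wise subtraction removes the static background
diff = [[p - r for p, r in zip(prow, rrow)]
        for prow, rrow in zip(pressurized, reference)]

# locate the strongest temperature change: (delta_T, row, col)
flat = [(v, r, c) for r, row in enumerate(diff) for c, v in enumerate(row)]
hottest = max(flat)
```

The maximum of the difference image points at the leak position, which is the localization idea the abstract describes.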

  16. Musculoskeletal disorders: OWAS review

    PubMed Central

    GÓMEZ-GALÁN, Marta; PÉREZ-ALONSO, José; CALLEJÓN-FERRE, Ángel-Jesús; LÓPEZ-MARTÍNEZ, Javier

    2017-01-01

    The prevention of musculoskeletal disorders (MSDs) is very important worldwide, and governments and companies are the most interested parties. The objective of the present work is to review the literature on applications of the OWAS method in the diverse sectors or fields of knowledge and countries, from its publication to March 2017. The use of the OWAS method has been classified by category of knowledge, by country and by year. The search was made by selecting only the main collection of the Web of Science, using the option "Advanced search" with the term OWAS (ts=OWAS) for the time period 1900 to 2017. A total of 166 results were found, consisting of conference papers and articles in scientific journals. In conclusion, the OWAS method has been applied mainly in two sectors: "Manufacturing industries" and "Healthcare and social assistance activities". The method needs to be complemented with other indirect or direct methods. Also, whenever the OWAS method has been used, whether individually or together with other methods, musculoskeletal disorder risks have been detected; this may indicate a need to review the evaluation parameters, since the method could be overestimating the risk. PMID:28484144

  17. Guided filter-based fusion method for multiexposure images

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei

    2016-11-01

    It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range camera. A weighted-sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a single high-quality image. The method consists of three parts. First, two image features, i.e., gradients and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image; this process reduces the noise in the initial weight maps and preserves more texture consistent with the original images. Finally, the fused image is constructed as a weighted sum of the source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of guided filter-based weight-map refinement, which provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
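The weighted-sum fusion step can be sketched as follows on tiny grayscale "images" stored as lists of lists with values in [0, 1]. Only the well-exposedness cue (distance from mid-gray) is used here; the gradient cue and the guided-filter refinement from the paper are omitted for brevity, and the exposure data are invented.

```python
# Simplified multi-exposure fusion: per-pixel weights from well-exposedness,
# normalized, then a weighted sum of the source images. Data are invented.
import math

def well_exposedness(v, sigma=0.2):
    # Gaussian weight peaking at mid-gray 0.5 (a common well-exposedness cue)
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(images):
    rows, cols = len(images[0]), len(images[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            weights = [well_exposedness(img[r][c]) for img in images]
            total = sum(weights) or 1.0  # guard against all-zero weights
            fused[r][c] = sum(w * img[r][c]
                              for w, img in zip(weights, images)) / total
    return fused

under = [[0.1, 0.2], [0.1, 0.3]]   # under-exposed frame
over = [[0.9, 0.8], [0.9, 0.7]]    # over-exposed frame
result = fuse([under, over])
```

Well-exposed pixels dominate the sum, so each region of the output is taken mostly from the frame that exposed it best.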

  18. Electronic states with nontrivial topology in Dirac materials

    NASA Astrophysics Data System (ADS)

    Turkevich, R. V.; Perov, A. A.; Protogenov, A. P.; Chulkov, E. V.

    2017-08-01

    The theoretical studies of phase states with a linear dispersion of the spectrum of low-energy electron excitations are reviewed. The main properties and methods of experimental study of these states in so-called Dirac materials are discussed in detail, and the results of modern studies of symmetry-protected electronic states with nontrivial topology are reported. Combining geometry-based approaches with the methods of homotopic topology and the results of condensed matter physics makes it possible to clarify new features of topological insulators, as well as of Dirac and Weyl semimetals.

  19. Studies on transonic Double Circular Arc (DCA) profiles of axial flow compressor calculations of profile design

    NASA Astrophysics Data System (ADS)

    Rugun, Y.; Zhaoyan, Q.

    1986-05-01

    In this paper, the concepts and methods for the design of high-Mach-number airfoils for axial-flow compressors are described. Correlation equations for the main parameters, such as airfoil and cascade geometry, stream parameters, and wake characteristic parameters of the compressor, are provided. For obtaining the total-pressure-loss coefficients of the cascade with a simplified calculation method, several curves and charts are provided by the authors. The test results and calculated values are compared and found to be in good agreement.

  20. DSP code optimization based on cache

    NASA Astrophysics Data System (ADS)

    Xu, Chengfa; Li, Chengcheng; Tang, Bin

    2013-03-01

    A DSP program often runs less efficiently on the board than in software simulation during program development, mainly because of the user's improper use and incomplete understanding of the cache-based memory. This paper takes the TI TMS320C6455 DSP as an example, analyzes its two-level internal cache, and summarizes methods of code optimization. The processor can achieve its best performance when these code-optimization methods are used. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.

  1. Review of methods for determination of ammonia volatilization in farmland

    NASA Astrophysics Data System (ADS)

    Yang, J.; Jiao, Y.; Yang, W. Z.; Gu, P.; Bai, S. G.; Liu, L. J.

    2018-02-01

    Ammonia is one of the most abundant alkaline trace gases in the atmosphere and an important factor affecting atmospheric quality. Excessive application of nitrogen fertilizer is the main source of global ammonia emissions, which not only exacerbates greenhouse gas emissions but also leads to eutrophication of water bodies. In this paper, the basic principles, operating procedures, advantages and disadvantages, and previous research results of the main determination methods are summarized in detail, including the enclosure method, the venting method, the continuous-airflow enclosure method, the wind tunnel method and the micro-meteorological method, so as to provide a theoretical basis for selecting an appropriate method for the determination of ammonia volatilization.

  2. Optimization of rotor shaft shrink fit method for motor using "Robust design"

    NASA Astrophysics Data System (ADS)

    Toma, Eiji

    2018-01-01

    This research is a collaborative investigation with a general-purpose motor manufacturer. To review the construction method in the production process, we applied the parameter design method of quality engineering and sought to optimize the construction method. Conventionally, a press-fitting method has been adopted in the process of fitting the rotor core and shaft, which are main components of the motor, but quality defects such as shaft deflection occurred at the time of press fitting. In this research, as a result of the optimization design of a "shrink fitting method by high-frequency induction heating", devised as a new construction method, the construction method was shown to be feasible and the optimum processing conditions could be extracted.

  3. A Factor Graph Approach to Automated GO Annotation

    PubMed Central

    Spetale, Flavio E.; Tapia, Elizabeth; Krsticevic, Flavia; Roda, Fernando; Bulacio, Pilar

    2016-01-01

    As the volume of genomic data grows, computational methods become essential for providing a first glimpse of gene annotations. Automated Gene Ontology (GO) annotation methods based on hierarchical ensemble classification techniques are particularly interesting when interpretability of annotation results is a main concern. In these methods, raw GO-term predictions computed by base binary classifiers are leveraged by checking the consistency of predefined GO relationships. Both formal leveraging strategies, with a main focus on annotation precision, and heuristic alternatives, with a main focus on scalability issues, have been described in the literature. In this contribution, a factor graph approach to the hierarchical ensemble formulation of the automated GO annotation problem is presented. In this formal framework, a core factor graph is first built based on the GO structure and then enriched to take into account the noisy nature of GO-term predictions. Hence, starting from raw GO-term predictions, an iterative message-passing algorithm between nodes of the factor graph is used to compute marginal probabilities of target GO terms. Evaluations on Saccharomyces cerevisiae, Arabidopsis thaliana and Drosophila melanogaster protein sequences from the GO Molecular Function domain showed significant improvements over competing approaches, even when protein sequences were naively characterized by their physicochemical and secondary-structure properties or when loose, noisy annotation datasets were considered. Based on these promising results and using Arabidopsis thaliana annotation data, we extend our approach to the identification of the most promising molecular function annotations for a set of proteins of unknown function in Solanum lycopersicum. PMID:26771463

  4. Antioxidant activity and phenolic compositions of lentil (Lens culinaris var. Morton) extract and its fractions

    PubMed Central

    Zou, Yanping; Chang, Sam K.C.; Gu, Yan; Qian, Steven Y.

    2011-01-01

    Phenolic compounds were extracted from Morton lentils using acidified aqueous acetone. The crude Morton extract (CME) was applied to a macroresin column and desorbed with aqueous methanol to obtain a semi-purified Morton extract (SPME). The SPME was further fractionated over a Sephadex LH-20 column into five main fractions (Fr I – Fr V). The phytochemical contents, such as total phenolic content (TPC), total flavonoid content (TFC), and condensed tannin content (CTC), of the CME, SPME, and its fractions were examined by colorimetric methods. The antioxidant activity of the extracts and fractions was screened by the DPPH scavenging activity, trolox equivalent antioxidant capacity (TEAC), ferric reducing antioxidant power (FRAP), and oxygen radical absorbance capacity (ORAC) methods. In addition, the compositions of the active fractions were determined by HPLC-DAD and HPLC-MS methods. Results showed that the fraction enriched in condensed tannins (Fr V) exhibited significantly higher TPC and CTC values and higher antioxidant activity than the crude extract, SPME and the low-molecular-weight fractions (Fr I – IV). Eighteen compounds were present in these fractions, and seventeen were tentatively identified by UV and MS spectra. HPLC-MS analysis revealed that Fr II contained mainly kaempferol glycoside, Fr III and Fr IV mainly contained flavonoid glycosides, and Fr V was composed of condensed tannins. The results suggest that the extract of Morton lentils is a promising source of antioxidant phenolics and may be used as a dietary supplement for health promotion. PMID:21332205

  5. A Factor Graph Approach to Automated GO Annotation.

    PubMed

    Spetale, Flavio E; Tapia, Elizabeth; Krsticevic, Flavia; Roda, Fernando; Bulacio, Pilar

    2016-01-01

    As the volume of genomic data grows, computational methods become essential for providing a first glimpse of gene annotations. Automated Gene Ontology (GO) annotation methods based on hierarchical ensemble classification techniques are particularly interesting when interpretability of annotation results is a main concern. In these methods, raw GO-term predictions computed by base binary classifiers are leveraged by checking the consistency of predefined GO relationships. Both formal leveraging strategies, with a main focus on annotation precision, and heuristic alternatives, with a main focus on scalability issues, have been described in the literature. In this contribution, a factor graph approach to the hierarchical ensemble formulation of the automated GO annotation problem is presented. In this formal framework, a core factor graph is first built based on the GO structure and then enriched to take into account the noisy nature of GO-term predictions. Hence, starting from raw GO-term predictions, an iterative message-passing algorithm between nodes of the factor graph is used to compute marginal probabilities of target GO terms. Evaluations on Saccharomyces cerevisiae, Arabidopsis thaliana and Drosophila melanogaster protein sequences from the GO Molecular Function domain showed significant improvements over competing approaches, even when protein sequences were naively characterized by their physicochemical and secondary-structure properties or when loose, noisy annotation datasets were considered. Based on these promising results and using Arabidopsis thaliana annotation data, we extend our approach to the identification of the most promising molecular function annotations for a set of proteins of unknown function in Solanum lycopersicum.
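The consistency constraint that such hierarchical methods enforce is the GO "true-path rule": a protein annotated with a term must also be annotated with all its ancestors. The toy sketch below enforces that rule by simple max-propagation up the hierarchy; it is NOT the paper's factor-graph message passing, only an illustration of the kind of inconsistency the formal method resolves. The mini-ontology and scores are invented.

```python
# Toy true-path-rule leveraging: raise each ancestor's score to at least the
# score of its best-scoring descendant. Ontology and scores are invented.

parents = {"binding": None,
           "protein_binding": "binding",
           "kinase_binding": "protein_binding"}
# raw, possibly inconsistent classifier scores (child scored above ancestors)
raw_scores = {"binding": 0.4, "protein_binding": 0.3, "kinase_binding": 0.9}

def leverage(scores, parents):
    fixed = dict(scores)
    for term in scores:
        p = parents[term]
        while p is not None:                      # walk up to the root
            fixed[p] = max(fixed[p], fixed[term])  # ancestor >= descendant
            p = parents[p]
    return fixed

consistent = leverage(raw_scores, parents)
print(consistent)
# {'binding': 0.9, 'protein_binding': 0.9, 'kinase_binding': 0.9}
```

The factor-graph approach replaces this crude heuristic with principled marginal probabilities that account for the noise in each base classifier.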

  6. Novel switching method for single-phase NPC three-level inverter with neutral-point voltage control

    NASA Astrophysics Data System (ADS)

    Lee, June-Seok; Lee, Seung-Joo; Lee, Kyo-Beum

    2018-02-01

    This paper proposes a novel switching method with neutral-point voltage control for a single-phase neutral-point-clamped three-level inverter (SP-NPCI) used in photovoltaic systems. The main concept is to fix the switching state of one leg; as a result, the switching loss decreases and the total efficiency improves. In addition, the method enables maximum power-point-tracking operation by applying the proposed neutral-point voltage control algorithm, which is implemented by modifying the reference signal. Simulation and experimental results verify the performance of the novel switching method with neutral-point voltage control.

  7. OMICS-strategies and methods in the fight against doping.

    PubMed

    Reichel, Christian

    2011-12-10

    During the past decade, OMICS methods not only continued to have an impact on research strategies in the life sciences, and in particular molecular biology, but also started to be used for anti-doping control purposes. Research activities were mainly motivated by the fact that several substances and methods prohibited by the World Anti-Doping Agency (WADA) were, or still are, difficult to detect by direct methods. Transcriptomics, proteomics, and metabolomics in theory offer ideal platforms for the discovery of biomarkers for the indirect detection of the abuse of these substances and methods. Traditionally, the main focus of transcriptomics and proteomics projects has been on the prolonged detection of the misuse of human growth hormone (hGH), recombinant erythropoietin (rhEpo), and autologous blood transfusion. An additional benefit of the indirect or marker approach would be that similarly acting substances might then be detected by a single method, without the need to develop new direct detection methods for each new but comparable prohibited substance (as has been the case, e.g., for the various forms of Epo analogs and biosimilars). While several non-OMICS-derived parameters for the indirect detection of doping are currently in use, for example the blood parameters of the hematological module of the athlete's biological passport, the outcome of most non-targeted OMICS projects has led to no direct application in routine doping control so far. The main reason is the inherent complexity of human transcriptomes, proteomes, and metabolomes and their inter-individual variability. The article reviews previous and recent research projects and their results and discusses future strategies for a more efficient application of OMICS methods in doping control. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  8. The use of multi-criteria decision making models in evaluating anesthesia method options in circumcision surgery.

    PubMed

    Hancerliogullari, Gulsah; Hancerliogullari, Kadir Oymen; Koksalmis, Emrah

    2017-01-23

    Determining the most suitable anesthesia method for circumcision surgery plays a fundamental role in pediatric surgery. This study aims to present pediatric surgeons' perspective on the relative importance of the criteria for selecting an anesthesia method for circumcision surgery by utilizing multi-criteria decision-making methods. Fuzzy set theory offers a useful tool for transforming linguistic terms into numerical assessments. Since the evaluation of anesthesia methods requires linguistic terms, we utilize the fuzzy Analytic Hierarchy Process (AHP) and the fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Both mathematical decision-making methods are based on individual judgements of qualitative factors expressed in a pair-wise comparison matrix. Our model uses four main criteria, eight sub-criteria, and three alternatives. To assess the relative priorities, an online questionnaire was completed by three experts, pediatric surgeons who had experience with circumcision surgery. Discussion of the results with the experts indicates that time-related factors are the most important criteria, followed by psychology, convenience and duration. Moreover, under the discussed main criteria and sub-criteria, general anesthesia with penile block is the preferred choice of anesthesia for circumcision surgery, followed by general anesthesia without penile block and then local anesthesia. The results presented in this study highlight the need to integrate surgeons' criteria into the decision-making process for selecting anesthesia methods. This is the first study in which multi-criteria decision-making tools, specifically fuzzy AHP and fuzzy TOPSIS, are used to evaluate anesthesia methods for a pediatric surgical procedure.
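To show how a pairwise-comparison matrix yields criterion weights, the sketch below uses classic (crisp) AHP with the geometric-mean method. The paper uses the fuzzy extensions (fuzzy AHP / fuzzy TOPSIS); this crisp version only illustrates the underlying idea, and the matrix values and criterion labels are invented.

```python
# Crisp AHP priority vector via the geometric-mean method. The pairwise
# judgements below are hypothetical, not the surveyed surgeons' values.
import math

# pairwise[i][j]: how much more important criterion i is than criterion j
pairwise = [
    [1.0, 3.0, 5.0],   # e.g. "time-related factors"
    [1/3, 1.0, 2.0],   # e.g. "psychology"
    [1/5, 1/2, 1.0],   # e.g. "convenience"
]

def ahp_weights(matrix):
    # geometric mean of each row, normalized so the weights sum to 1
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(pairwise)
```

In fuzzy AHP, each judgement becomes a fuzzy number (e.g., a triangular membership function) before the priorities are derived, which captures the vagueness of linguistic assessments.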

  9. Front panel engineering with CAD simulation tool

    NASA Astrophysics Data System (ADS)

    Delacour, Jacques; Ungar, Serge; Mathieu, Gilles; Hasna, Guenther; Martinez, Pascal; Roche, Jean-Christophe

    1999-04-01

    THe progress made recently in display technology covers many fields of application. The specification of radiance, colorimetry and lighting efficiency creates some new challenges for designers. Photometric design is limited by the capability of correctly predicting the result of a lighting system, to save on the costs and time taken to build multiple prototypes or bread board benches. The second step of the research carried out by company OPTIS is to propose an optimization method to be applied to the lighting system, developed in the software SPEOS. The main features of the tool requires include the CAD interface, to enable fast and efficient transfer between mechanical and light design software, the source modeling, the light transfer model and an optimization tool. The CAD interface is mainly a prototype of transfer, which is not the subjects here. Photometric simulation is efficiently achieved by using the measured source encoding and a simulation by the Monte Carlo method. Today, the advantages and the limitations of the Monte Carlo method are well known. The noise reduction requires a long calculation time, which increases with the complexity of the display panel. A successful optimization is difficult to achieve, due to the long calculation time required for each optimization pass including a Monte Carlo simulation. The problem was initially defined as an engineering method of study. The experience shows that good understanding and mastering of the phenomenon of light transfer is limited by the complexity of non sequential propagation. The engineer must call for the help of a simulation and optimization tool. The main point needed to be able to perform an efficient optimization is a quick method for simulating light transfer. Much work has been done in this area and some interesting results can be observed. 
It must be said that the Monte Carlo method wastes time calculating results and information which are not required for the needs of the simulation. Low-efficiency transfer systems waste a great deal of calculation time. More generally, the light transfer simulation can be treated efficiently when the integrated result is composed of elementary sub-results that include quickly calculated analytical intersections. Two axes of research thus appear: quick integration, and quick calculation of geometric intersections. The first axis brings some general solutions that are also valid for multi-reflection systems. The second axis requires some deep thinking on the intersection calculation. An interesting way is the subdivision of space into voxels: a method of 3D division of space adapted to the objects and their location. Experimental software has been developed to provide a validation of the method. The gain is particularly high in complex systems, where an important reduction in the calculation time has been achieved.
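The voxel acceleration idea can be sketched in a few lines. The following is an illustrative toy only (a uniform grid over sphere primitives, with point-sampled ray traversal), not OPTIS/SPEOS code; all names, cell sizes and scene contents are invented. The point is that a ray only needs to test the objects registered in the voxels it crosses, not the whole scene.

```python
# Illustrative sketch: uniform voxel subdivision to cut down ray-object
# intersection tests in a Monte Carlo ray tracer. Not production code.
from math import floor

CELL = 1.0  # voxel edge length (assumed)

def cell_of(p):
    """Integer voxel coordinates of a 3-D point."""
    return tuple(floor(c / CELL) for c in p)

def build_grid(spheres):
    """Index spheres (center, radius) by every voxel their bounding box overlaps."""
    grid = {}
    for i, (c, r) in enumerate(spheres):
        lo = cell_of((c[0] - r, c[1] - r, c[2] - r))
        hi = cell_of((c[0] + r, c[1] + r, c[2] + r))
        for x in range(lo[0], hi[0] + 1):
            for y in range(lo[1], hi[1] + 1):
                for z in range(lo[2], hi[2] + 1):
                    grid.setdefault((x, y, z), []).append(i)
    return grid

def candidates(grid, origin, direction, t_max, step=0.25):
    """Collect candidate spheres along a ray by sampling the voxels it crosses."""
    seen, t = set(), 0.0
    while t <= t_max:
        p = tuple(o + t * d for o, d in zip(origin, direction))
        seen.update(grid.get(cell_of(p), ()))
        t += step
    return seen

spheres = [((float(i), 0.0, 0.0), 0.3) for i in range(100)]
grid = build_grid(spheres)
cand = candidates(grid, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), t_max=5.0)
# only the few spheres near the ray's path remain to be intersection-tested
```

A real implementation would walk the voxels with a 3D-DDA traversal instead of point sampling, but the gain has the same origin: the candidate set is much smaller than the scene.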

  10. An approach for multi-objective optimization of vehicle suspension system

    NASA Astrophysics Data System (ADS)

    Koulocheris, D.; Papaioannou, G.; Christodoulou, D.

    2017-10-01

In this paper, a half-car model with nonlinear suspension systems is selected in order to study the vertical vibrations and optimize its suspension system with respect to ride comfort and road holding. A road bump was used as the road profile. At first, the optimization problem is solved with the use of Genetic Algorithms with respect to 6 optimization targets. Then the k - ɛ optimization method was implemented to locate one optimum solution. Furthermore, an alternative approach is presented in this work: the previous optimization targets are separated into main and supplementary ones, depending on their importance in the analysis. The supplementary targets are not crucial to the optimization, but they could enhance the main objectives. Thus, the problem was solved again using Genetic Algorithms with respect to the 3 main targets of the optimization. Having obtained the Pareto set of solutions, the k - ɛ optimality method was implemented for the 3 main targets and the supplementary ones, evaluated by the simulation of the vehicle model. The results of both cases are presented and discussed in terms of convergence of the optimization and computational time. The optimum solutions acquired from both cases are compared based on performance metrics as well.
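The two-stage logic (obtain a Pareto set, then select one compromise solution) can be sketched as follows. This is a hedged illustration, not the paper's algorithm: the dominance filter is standard, but the single-solution pick below is a simple distance-to-ideal rule standing in for the k - ɛ optimality method, and the design tuples are invented.

```python
# Hedged sketch: Pareto-set extraction over three minimization targets,
# then selection of one compromise design near the per-target ideal point.
def dominates(a, b):
    """a dominates b if it is no worse in every target and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def compromise(front):
    """Front member nearest (normalized distance) to the ideal point."""
    ideal = [min(p[i] for p in front) for i in range(len(front[0]))]
    worst = [max(p[i] for p in front) for i in range(len(front[0]))]
    def dist(p):
        return sum(((x - lo) / (hi - lo or 1.0)) ** 2
                   for x, lo, hi in zip(p, ideal, worst))
    return min(front, key=dist)

# (ride comfort, road holding, working space) -- all to be minimized; invented
designs = [(1.0, 5.0, 3.0), (2.0, 2.0, 2.0), (5.0, 1.0, 4.0), (3.0, 3.0, 5.0)]
front = pareto_set(designs)
best = compromise(front)
```

In the paper's setting the design tuples would come from evaluating each Genetic Algorithm individual through the half-car simulation.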

  11. Methods of measuring radioactivity in the environment

    NASA Astrophysics Data System (ADS)

    Isaksson, Mats

In this thesis a variety of sampling methods have been utilised to assess the amount of deposited activity, mainly of 137Cs, from the Chernobyl accident and from the nuclear weapons tests. Starting with the Chernobyl accident in 1986, sampling of air and rain was used to determine the composition and amount of radioactive debris from this accident, brought to southern Sweden by the weather systems. The resulting deposition and its removal from urban areas was then studied through measurements on sewage sludge and water. The main part of the thesis considers methods of determining the amount of radiocaesium in the ground through soil sampling. In connection with soil sampling, a method of optimising the sampling procedure has been developed and tested in the areas of Sweden which have a comparatively high amount of 137Cs from the Chernobyl accident. This method was then used in a survey of the activity in soil in Lund and Skane, divided between nuclear weapons fallout and fallout from the Chernobyl accident. By comparing the results from this survey with deposition calculated from precipitation measurements it was found possible to predict the deposition pattern over Skane for both nuclear weapons fallout and fallout from the Chernobyl accident. In addition, the vertical distribution of 137Cs has been modelled and the temporal variation of the depth distribution has been described.

  12. Global model of zenith tropospheric delay proposed based on EOF analysis

    NASA Astrophysics Data System (ADS)

    Sun, Langlang; Chen, Peng; Wei, Erhu; Li, Qinzheng

    2017-07-01

Tropospheric delay is one of the main error budgets in Global Navigation Satellite System (GNSS) measurements. Many empirical correction models have been developed to compensate for this delay, and models which do not require meteorological parameters have received the most attention. This study established a global troposphere zenith total delay (ZTD) model, called Global Empirical Orthogonal Function Troposphere (GEOFT), based on the empirical orthogonal function (EOF, also known as geographically weighted PCA) analysis method and the Global Geodetic Observing System (GGOS) Atmosphere data from 2012 to 2015. The results showed that ZTD variation could be well represented by the characteristics of the EOF base functions Ek and associated coefficients Pk. Here, E1 mainly signifies the equatorial anomaly; E2 represents north-south asymmetry, and E3 and E4 reflect regional variation. Moreover, P1 mainly reflects annual and semiannual variation components; P2 and P3 mainly contain annual variation components, and P4 displays semiannual variation components. We validated the proposed GEOFT model using tropospheric delay data from the GGOS ZTD grids and the tropospheric product of the International GNSS Service (IGS) over the year 2016. The results showed that the GEOFT model has high accuracy, with bias and RMS of -0.3 and 3.9 cm, respectively, with respect to the GGOS ZTD data, and of -0.8 and 4.1 cm, respectively, with respect to the global IGS tropospheric product. The accuracy of GEOFT demonstrates that the use of the EOF analysis method to characterize ZTD variation is reasonable.
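The EOF decomposition underlying such a model can be illustrated on a toy field. The sketch below is pure Python with a synthetic annual-cycle field standing in for the GGOS ZTD grids (all numbers invented): the leading base function E1 is the top eigenvector of the spatial covariance matrix, found here by power iteration, and projecting the anomalies onto E1 gives the coefficient series P1.

```python
# Illustrative EOF sketch: anomalies -> spatial covariance -> leading
# eigenvector (E1) by power iteration -> coefficient time series (P1).
from math import sqrt, sin, pi

nx, nt = 4, 24
pattern = [1.0, 0.5, -0.5, -1.0]                 # assumed spatial mode
field = [[pattern[i] * sin(2 * pi * t / 12.0)    # annual cycle in time
          for i in range(nx)] for t in range(nt)]  # field[t][i]

# anomalies: remove the time mean at each grid point
mean = [sum(field[t][i] for t in range(nt)) / nt for i in range(nx)]
anom = [[field[t][i] - mean[i] for i in range(nx)] for t in range(nt)]

# spatial covariance matrix C[i][j]
C = [[sum(anom[t][i] * anom[t][j] for t in range(nt)) / nt
      for j in range(nx)] for i in range(nx)]

# power iteration for the leading eigenvector E1
v = [1.0] + [0.0] * (nx - 1)
for _ in range(50):
    w = [sum(C[i][j] * v[j] for j in range(nx)) for i in range(nx)]
    n = sqrt(sum(x * x for x in w))
    v = [x / n for x in w]
E1 = v
# coefficients P1: projection of each time step onto E1
P1 = [sum(anom[t][i] * E1[i] for i in range(nx)) for t in range(nt)]
```

On real grids one keeps several modes (E1...E4 and P1...P4 in the abstract) and reconstructs ZTD as the truncated sum of Ek x Pk; numerical work would of course use an SVD routine rather than hand-rolled power iteration.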

  13. [Preliminary study on effective components of Tripterygium wilfordii for liver toxicity based on spectrum-effect correlation analysis].

    PubMed

    Zhao, Xiao-Mei; Pu, Shi-Biao; Zhao, Qing-Guo; Gong, Man; Wang, Jia-Bo; Ma, Zhi-Jie; Xiao, Xiao-He; Zhao, Kui-Jun

    2016-08-01

In this paper, the spectrum-effect correlation analysis method was used to explore the main effective components of Tripterygium wilfordii for liver toxicity, and to provide a reference for promoting the quality control of T. wilfordii. The Chinese medicine T. wilfordii was taken as the study object, and LC-Q-TOF-MS was used to characterize the chemical components in T. wilfordii samples from different areas; their main components were initially identified with reference to the literature. With normal human hepatocytes (LO2 cell line) as the carrier, acetaminophen as the positive control, and cell inhibition rate as the testing index, simple correlation analysis and multivariate linear correlation analysis were used to screen the main components of T. wilfordii for liver toxicity. As a result, 10 main components were identified, and the spectrum-effect correlation analysis showed that triptolide may be the toxic component, which is consistent with previous results in the traditional literature. Meanwhile, the multivariate linear correlation analysis found that tripterine and demethylzeylasteral may also contribute greatly to liver toxicity. T. wilfordii samples of different varieties or origins showed large differences in quality: the T. wilfordii from southwest China showed lower liver toxicity, while those from Hunan and Anhui provinces showed higher liver toxicity. This study provides data support for further rational use of T. wilfordii and research on its liver toxicity ingredients. Copyright© by the Chinese Pharmaceutical Association.
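The simple-correlation half of a spectrum-effect screen reduces to correlating each component's peak area across samples with the measured bioactivity. The sketch below uses invented peak areas and inhibition rates (the component names are illustrative labels, not the paper's data): the component whose area tracks the inhibition rate most strongly is flagged as the candidate toxic constituent.

```python
# Hedged spectrum-effect sketch: Pearson correlation of per-component peak
# areas (rows of samples) against the cell inhibition rate. Synthetic data.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# columns: peak areas of 3 components over 5 samples (invented numbers)
areas = {
    "triptolide": [1.0, 3.0, 2.0, 5.0, 4.0],
    "tripterine": [2.0, 2.5, 2.1, 2.4, 2.2],
    "wilforgine": [5.0, 1.0, 4.0, 2.0, 3.0],
}
inhibition = [10.0, 31.0, 19.0, 52.0, 41.0]  # % inhibition of LO2 cells

# rank components by how strongly their area correlates with toxicity
ranked = sorted(areas, key=lambda k: pearson(areas[k], inhibition), reverse=True)
```

The multivariate step in the paper then regresses the inhibition rate on several component areas jointly, which can reveal contributors (such as tripterine) that a single-variable correlation underrates.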

  14. Mass fraction assignment of folic acid in a high purity material

    NASA Astrophysics Data System (ADS)

    Westwood, Steven; Josephs, Ralf; Choteau, Tiphaine; Daireaux, Adeline; Stoppacher, Norbert; Wielgosz, Robert; Davies, Stephen; de Rego, Eliane; Wollinger, Wagner; Garrido, Bruno; Fernandes, Jane; Lima, Jonathan; Oliveira, Rodrigo; de Sena, Rodrigo; Windust, Anthony; Huang, Ting; Dai, Xinhua; Quan, Can; He, Haihong; Zhang, Wei; Wei, Chao; Li, Na; Gao, Dexin; Liu, Zhao; Lo, Man-fung; Wong, Wai-fun; Pfeifer, Dietmar; Koch, Matthias; Dorgerloh, Ute; Rothe, Robert; Philip, Rosemary; Hirari, Nobuyasu; Fazlin Rezali, Mohd; Salazar Arzate, Claudia Marcela; Pedraza Evelina Berenice, Mercado; Serrano Caballero, Victor; Arce Osuna, Mariana; Krylov, A.; Kharitonov, S.; Lopushanskaya, E.; Liu, Qinde; Tang Lin, Teo; Fernandes-Whaley, Maria; Quinn, Laura; Nhlapo, Nontete; Prevoo-Franzsen, Desiree; Archer, Marcelle; Kim, Byungjoo; Baek, Song-Yee; Lee, Sunyoung; Lee, Joonhee; Marbumrung, Sornkrit; Kankaew, Ponhatai; Chaorenpornpukdee, Kanokrat; Chaipet, Thitiphan; Shearman, Kittiya; Ceyhan Goren, Ahmet; Gunduz, Simay; Yilmaz, Hasibe; Un, Ilker; Bilsel, Gokhan; Clarkson, Cailean; Bedner, Mary; Camara, Johanna E.; Lang, Brian E.; Lippa, Katrice A.; Nelson, Michael A.; Toman, Blaza; Yu, Lee L.

    2018-01-01

The comparison required the assignment of the mass fraction of folic acid present as the main component in the comparison sample. Performance in the comparison is representative of a laboratory's measurement capability for the purity assignment of organic compounds of medium structural complexity [molecular weight range 300-500] and high polarity (pKOW < -2). Methods used by the eighteen participating NMIs or DIs were based on a mass balance (summation of impurities) or qNMR approach, or the combination of data obtained using both methods. The qNMR results tended to give slightly lower values for the content of folic acid, albeit with larger associated uncertainties, compared with the results obtained by mass balance procedures. Possible reasons for this divergence are discussed in the report, without reaching a definitive conclusion as to their origin. The comparison demonstrates that for a structurally complex polar organic compound containing a high water content and presenting a number of additional analytical challenges, the assignment of the mass fraction content property value of the main component can reasonably be achieved with an associated relative standard uncertainty in the assigned value of 0.5%. Main text: To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
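The two assignment routes compared in the study can be shown with a worked toy example. All numbers below are invented (the internal standard and its molar mass are hypothetical); only the two relations are generic: mass balance assigns the main-component fraction as one minus the summed impurity fractions, while qNMR uses the standard integral-ratio equation against an internal standard of known purity.

```python
# Worked toy example of the two purity-assignment routes (invented numbers).

# Mass balance: main-component fraction = 1 - sum of quantified impurities
impurities = {"water": 0.080, "related_structures": 0.004,
              "inorganics": 0.002, "solvents": 0.001}  # mass fractions
w_mass_balance = 1.0 - sum(impurities.values())

# qNMR: w_a = (I_a/I_std) * (N_std/N_a) * (M_a/M_std) * (m_std/m_sample) * P_std
def qnmr_fraction(I_a, I_std, N_a, N_std, M_a, M_std, m_std, m_sample, P_std):
    """Standard qNMR relation from signal integrals I, proton counts N,
    molar masses M, weighed masses m, and standard purity P_std."""
    return (I_a / I_std) * (N_std / N_a) * (M_a / M_std) * (m_std / m_sample) * P_std

w_qnmr = qnmr_fraction(I_a=1.000, I_std=1.000, N_a=1, N_std=1,
                       M_a=441.40,     # folic acid molar mass, g/mol
                       M_std=228.16,   # hypothetical internal standard, g/mol
                       m_std=10.00, m_sample=21.20, P_std=0.9999)
```

With these invented inputs the qNMR value lands slightly below the mass-balance value, mirroring (but not reproducing) the tendency reported in the comparison.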

  15. Chemical composition and antibacterial activities of lupin seeds extracts.

    PubMed

    Lampart-Szczapa, Eleonora; Siger, Aleksander; Trojanowska, Krystyna; Nogala-Kalucka, Małgorzata; Malecka, Maria; Pacholek, Bogdan

    2003-10-01

Determination of the influence of lupin natural phenolic compounds on the antibacterial properties of its seeds was carried out. Raw materials were seeds of Lupinus albus, L. luteus, and L. angustifolius. The methods included the determination of the content of proteins, total phenolic compounds, free phenolic acids, and tannins, as well as the antibacterial properties of ethanol extracts. The content of total phenolic compounds was smaller in testas than in cotyledons, and the highest levels were observed in bitter cultivars of Lupinus albus cv. Bac and L. angustifolius cv. Mirela. Lupin tannins mainly occurred in cotyledons of the white lupin, predominantly in the bitter cultivar Bac. Free phenolic acids were mainly found in testas. Only extracts from the testas displayed antibacterial properties, which excludes the possibility of alkaloid influence on the results. The results suggest that inhibition of test bacteria growth depended mainly upon the content of the total phenolic compounds.

  16. MASW on the standard seismic prospective scale using full spread recording

    NASA Astrophysics Data System (ADS)

    Białas, Sebastian; Majdański, Mariusz; Trzeciak, Maciej; Gałczyński, Edward; Maksym, Andrzej

    2015-04-01

The Multichannel Analysis of Surface Waves (MASW) is one of the seismic survey methods that use the dispersion curve of surface waves in order to describe the stiffness of the surface. It is used mainly at geotechnical engineering scale, with a total spread length between 5-450 m and a spread offset between 1-100 m; a hammer is the seismic source in these surveys. The standard MASW procedure is: data acquisition, dispersion analysis, and inversion of the extracted dispersion curve to obtain the closest theoretical curve. The final result includes shear-wave velocity (Vs) values at different depths along the surveyed lines. The main goal of this work is to expand this engineering method to a bigger scale, with a standard prospecting spread length of 20 km, using 4.5 Hz vertical-component geophones. Standard vibroseis and explosive methods were used as the seismic sources. The acquisition was conducted on the full spread during each single shot. The seismic data used for this analysis were acquired on the Braniewo 2014 project in northern Poland. The results achieved with the standard MASW procedure show that this method can be used on a much bigger scale as well. The different methodology of this analysis requires only a much stronger seismic source.

  17. Statistical results on restorative dentistry experiments: effect of the interaction between main variables

    PubMed Central

    CAVALCANTI, Andrea Nóbrega; MARCHI, Giselle Maria; AMBROSANO, Gláucia Maria Bovi

    2010-01-01

Statistical analysis interpretation is a critical field in scientific research. When there is more than one main variable being studied in a research project, the effect of the interaction between those variables is fundamental to the discussion of experiments. However, some doubts can occur when the p-value of the interaction is greater than the significance level. Objective To determine the most adequate interpretation for factorial experiments with p-values of the interaction slightly higher than the significance level. Materials and methods The p-values of the interactions found in two restorative dentistry experiments (0.053 and 0.068) were interpreted in two distinct ways: considering the interaction as not significant and as significant. Results Different findings were observed between the two analyses, and the study results became more coherent when the significant interaction was used. Conclusion The p-value of the interaction between main variables must be analyzed with caution because it can change the outcomes of research studies. Researchers are strongly advised to interpret the results of their statistical analysis carefully in order to discuss the findings of their experiments properly. PMID:20857003

  18. Flood risk assessment in France: comparison of extreme flood estimation methods (EXTRAFLO project, Task 7)

    NASA Astrophysics Data System (ADS)

    Garavaglia, F.; Paquet, E.; Lang, M.; Renard, B.; Arnaud, P.; Aubert, Y.; Carre, J.

    2013-12-01

In flood risk assessment the methods can be divided into two families: deterministic methods and probabilistic methods. In the French hydrologic community the probabilistic methods are historically preferred to the deterministic ones. Presently a French research project named EXTRAFLO (RiskNat Program of the French National Research Agency, https://extraflo.cemagref.fr) deals with the design values for extreme rainfall and floods. The object of this project is to carry out a comparison of the main methods used in France for estimating extreme values of rainfall and floods, to obtain a better grasp of their respective fields of application. In this framework we present the results of Task 7 of the EXTRAFLO project. Focusing on French watersheds, we compare the main extreme flood estimation methods used in the French context: (i) standard flood frequency analysis (Gumbel and GEV distributions), (ii) regional flood frequency analysis (regional Gumbel and GEV distributions), (iii) local and regional flood frequency analysis improved by historical information (Naulet et al., 2005), (iv) simplified probabilistic methods based on rainfall information (i.e. the Gradex method (CFGB, 1994), Agregee method (Margoum, 1992) and Speed method (Cayla, 1995)), (v) flood frequency analysis by a continuous simulation approach based on rainfall information (i.e. the Schadex method (Paquet et al., 2013; Garavaglia et al., 2010) and Shyreg method (Lavabre et al., 2003)) and (vi) a multifractal approach. The main result of this comparative study is that probabilistic methods based on additional information (i.e. regional, historical and rainfall information) provide better estimations than the standard flood frequency analysis. Another interesting result is that the differences between the extreme flood quantile estimations of the compared methods increase with return period, staying relatively moderate up to 100-year return levels.
Results and discussion are illustrated throughout with the example of five watersheds located in the South of France. References: O. CAYLA: Probability calculation of design floods and inflows - SPEED. Waterpower 1995, San Francisco, California, 1995. CFGB: Design flood determination by the gradex method. Bulletin du Comité Français des Grands Barrages, News 96, 18th congress CIGB-ICOLD n2, nov:108, 1994. F. GARAVAGLIA et al.: Introducing a rainfall compound distribution model based on weather patterns subsampling. Hydrology and Earth System Sciences, 14, 951-964, 2010. J. LAVABRE et al.: SHYREG : une méthode pour l'estimation régionale des débits de crue. Application aux régions méditerranéennes françaises. Ingénierie EAT, 97-111, 2003. M. MARGOUM: Estimation des crues rares et extrêmes : le modèle AGREGEE. Conceptions et premières validations. PhD, Ecole des Mines de Paris, 1992. R. NAULET et al.: Flood frequency analysis on the Ardèche river using French documentary sources from the two last centuries. Journal of Hydrology, 313:58-78, 2005. E. PAQUET et al.: The SCHADEX method: A semi-continuous rainfall-runoff simulation for extreme flood estimation. Journal of Hydrology, 495, 23-37, 2013.
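The baseline against which all other methods are compared, method (i), is standard flood frequency analysis with a Gumbel distribution. A minimal sketch follows, using a method-of-moments fit on synthetic annual maxima (invented numbers, not EXTRAFLO data): fit the location and scale parameters, then read off the T-year return level.

```python
# Gumbel flood-frequency sketch: method-of-moments fit, then return levels.
from math import pi, sqrt, log
import random

random.seed(1)
# synthetic annual-maximum discharges (m3/s), drawn from Gumbel(mu=300, beta=80)
sample = [300.0 - 80.0 * log(-log(random.random())) for _ in range(60)]

n = len(sample)
mean = sum(sample) / n
std = sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

beta = std * sqrt(6.0) / pi   # scale, method of moments
mu = mean - 0.5772 * beta     # location (0.5772 = Euler-Mascheroni constant)

def return_level(T):
    """Discharge exceeded on average once every T years."""
    return mu - beta * log(-log(1.0 - 1.0 / T))

q10 = return_level(10.0)
q100 = return_level(100.0)
```

The comparison's observation that inter-method differences grow with return period is visible even here: the return level grows like beta x log(T), so any error in the fitted scale is amplified at rare quantiles.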

  19. Improving reticle defect disposition via fully automated lithography simulation

    NASA Astrophysics Data System (ADS)

    Mann, Raunak; Goodman, Eliot; Lao, Keith; Ha, Steven; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan

    2016-03-01

Most advanced wafer fabs have embraced complex pattern decoration, which creates numerous challenges during in-fab reticle qualification. These optical proximity correction (OPC) techniques create assist features that tend to be very close in size and shape to the main patterns as seen in Figure 1. A small defect on an assist feature will most likely have little or no impact on the fidelity of the wafer image, whereas the same defect on a main feature could significantly decrease device functionality. In order to properly disposition these defects, reticle inspection technicians need an efficient method that automatically separates main from assist features and predicts the resulting defect impact on the wafer image. One such method is the Automated Defect Analysis System (ADAS) defect simulation system [1]. Up until now, using ADAS simulation was limited to engineers due to the complexity of the settings that need to be manually entered in order to create an accurate result. A single error in entering one of these values can cause erroneous results; therefore full automation is necessary. In this study, we propose a new method where all needed simulation parameters are automatically loaded into ADAS. This is accomplished in two parts. First, we have created a scanner parameter database that is automatically identified from mask product and level names. Second, we automatically determine the appropriate simulation printability threshold by using a new reference image (provided by the inspection tool) that contains a known measured value of the reticle critical dimension (CD). This new method automatically loads the correct scanner conditions, sets the appropriate simulation threshold, and automatically measures the percentage of CD change caused by the defect. This streamlines qualification and reduces the number of reticles being put on hold waiting for engineer review.
We also present data showing the consistency and reliability of the new method, along with the impact on the efficiency of in-fab reticle qualification.

  20. A search for debris disks in the Herschel-ATLAS

    NASA Astrophysics Data System (ADS)

    Thompson, M. A.; Smith, D. J. B.; Stevens, J. A.; Jarvis, M. J.; Vidal Perez, E.; Marshall, J.; Dunne, L.; Eales, S.; White, G. J.; Leeuw, L.; Sibthorpe, B.; Baes, M.; González-Solares, E.; Scott, D.; Vieiria, J.; Amblard, A.; Auld, R.; Bonfield, D. G.; Burgarella, D.; Buttiglione, S.; Cava, A.; Clements, D. L.; Cooray, A.; Dariush, A.; de Zotti, G.; Dye, S.; Frayer, D.; Fritz, J.; Gonzalez-Nuevo, J.; Herranz, D.; Ibar, E.; Ivison, R. J.; Lagache, G.; Lopez-Caniego, M.; Maddox, S.; Negrello, M.; Pascale, E.; Pohlen, M.; Rigby, E.; Rodighiero, G.; Samui, S.; Serjeant, S.; Temi, P.; Valtchanov, I.; Verma, A.

    2010-07-01

    Aims: We aim to demonstrate that the Herschel-ATLAS (H-ATLAS) is suitable for a blind and unbiased survey for debris disks by identifying candidate debris disks associated with main sequence stars in the initial science demonstration field of the survey. We show that H-ATLAS reveals a population of far-infrared/sub-mm sources that are associated with stars or star-like objects on the SDSS main-sequence locus. We validate our approach by comparing the properties of the most likely candidate disks to those of the known population. Methods: We use a photometric selection technique to identify main sequence stars in the SDSS DR7 catalogue and a Bayesian Likelihood Ratio method to identify H-ATLAS catalogue sources associated with these main sequence stars. Following this photometric selection we apply distance cuts to identify the most likely candidate debris disks and rule out the presence of contaminating galaxies using UKIDSS LAS K-band images. Results: We identify 78 H-ATLAS sources associated with SDSS point sources on the main-sequence locus, of which two are the most likely debris disk candidates: H-ATLAS J090315.8 and H-ATLAS J090240.2. We show that they are plausible candidates by comparing their properties to the known population of debris disks. Our initial results indicate that bright debris disks are rare, with only 2 candidates identified in a search sample of 851 stars. We also show that H-ATLAS can derive useful upper limits for debris disks associated with Hipparcos stars in the field and outline the future prospects for our debris disk search programme. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  1. The empirical Gaia G-band extinction coefficient

    NASA Astrophysics Data System (ADS)

    Danielski, C.; Babusiaux, C.; Ruiz-Dern, L.; Sartoretti, P.; Arenou, F.

    2018-06-01

Context. The first Gaia data release unlocked access to photometric information for 1.1 billion sources in the G-band. Yet, given the high level of degeneracy between extinction and spectral energy distribution for large passbands such as the Gaia G-band, a correction for the interstellar reddening is needed in order to exploit Gaia data. Aims: The purpose of this manuscript is to provide the empirical estimation of the Gaia G-band extinction coefficient kG for both red giants and main sequence stars, in order to be able to exploit the first data release DR1. Methods: We selected two samples of single stars: one of red giants and one of main sequence stars. Both samples are the result of a cross-match between the Gaia DR1 and 2MASS catalogues; they consist of high-quality photometry in the G-, J- and KS-bands. These samples were complemented by temperature and metallicity information retrieved from the APOGEE DR13 and LAMOST DR2 surveys, respectively. We implemented a Markov chain Monte Carlo method where we used (G - KS)0 versus Teff and (J - KS)0 versus (G - KS)0 calibration relations to estimate the extinction coefficient kG, and we quantified its corresponding confidence interval via bootstrap resampling. We tested our method on samples of red giants and main sequence stars, finding consistent solutions. Results: We present here the determination of the Gaia extinction coefficient through a completely empirical method. Furthermore we provide the scientific community with a formula for measuring the extinction coefficient as a function of stellar effective temperature, the intrinsic colour (G - KS)0, and absorption.
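The MCMC machinery behind such a fit can be illustrated with a deliberately simplified model. The sketch below is not the paper's method (which fits kG through (G - KS)0 vs Teff calibration relations): here the coefficient is just the slope k in a toy linear relation between a known absorption A0 and an observed color excess, sampled with a Metropolis random walk. All data are synthetic.

```python
# Minimal Metropolis MCMC sketch for a single "extinction coefficient" k
# in a toy linear model y = k * A0 + Gaussian noise. Illustrative only.
import random
from math import exp

random.seed(0)
k_true, sigma = 2.0, 0.05
A0 = [0.1 * i for i in range(1, 21)]                    # known absorptions
y = [k_true * a + random.gauss(0.0, sigma) for a in A0]  # observed excesses

def log_like(k):
    """Gaussian log-likelihood (up to a constant) of slope k."""
    return -sum((yi - k * ai) ** 2 for yi, ai in zip(y, A0)) / (2 * sigma ** 2)

k, chain = 1.0, []
for step in range(20000):
    prop = k + random.gauss(0.0, 0.05)                  # random-walk proposal
    if random.random() < exp(min(0.0, log_like(prop) - log_like(k))):
        k = prop                                        # Metropolis accept
    if step >= 5000:                                    # discard burn-in
        chain.append(k)

k_hat = sum(chain) / len(chain)                         # posterior mean of k
```

The paper additionally bootstraps the input samples to put a confidence interval on kG; in this sketch that would amount to rerunning the chain on resampled (A0, y) pairs.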

  2. An integrated optimization method for river water quality management and risk analysis in a rural system.

    PubMed

    Liu, J; Li, Y P; Huang, G H; Zeng, X T; Nie, S

    2016-01-01

In this study, an interval-stochastic-based risk analysis (RSRA) method is developed for supporting river water quality management in a rural system under uncertainty (i.e., uncertainties exist in a number of system components as well as their interrelationships). The RSRA method is effective in risk management and policy analysis, particularly when the inputs (such as allowable pollutant discharge and pollutant discharge rate) are expressed as probability distributions and interval values. Moreover, decision-makers' attitudes towards system risk can be reflected using a restricted recourse measure by controlling the variability of the recourse cost. The RSRA method is then applied to a real case of water quality management in the Heshui River Basin (a rural area of China), where chemical oxygen demand (COD), total nitrogen (TN), total phosphorus (TP), and soil loss are selected as major indicators to identify the water pollution control strategies. Results reveal that uncertainties and risk attitudes have significant effects on both pollutant discharge and system benefit. A high risk measure level can lead to a reduced system benefit; however, this reduction also corresponds to raised system reliability. Results also disclose that (a) agriculture is the dominant contributor to soil loss, TN, and TP loads, and abatement actions should be mainly carried out for paddy and dry farms; (b) livestock husbandry is the main COD discharger, and abatement measures should be mainly conducted for poultry farms; (c) fishery accounts for a high percentage of TN, TP, and COD discharges but has a low percentage of the overall net benefit, and it may be beneficial to cease fishery activities in the basin. The findings can facilitate the local authority in identifying desired pollution control strategies with a tradeoff between socioeconomic development and environmental sustainability.

  3. Plasmid Flux in Escherichia coli ST131 Sublineages, Analyzed by Plasmid Constellation Network (PLACNET), a New Method for Plasmid Reconstruction from Whole Genome Sequences

    PubMed Central

    Garcillán-Barcia, M. Pilar; Mora, Azucena; Blanco, Jorge; Coque, Teresa M.; de la Cruz, Fernando

    2014-01-01

    Bacterial whole genome sequence (WGS) methods are rapidly overtaking classical sequence analysis. Many bacterial sequencing projects focus on mobilome changes, since macroevolutionary events, such as the acquisition or loss of mobile genetic elements, mainly plasmids, play essential roles in adaptive evolution. Existing WGS analysis protocols do not assort contigs between plasmids and the main chromosome, thus hampering full analysis of plasmid sequences. We developed a method (called plasmid constellation networks or PLACNET) that identifies, visualizes and analyzes plasmids in WGS projects by creating a network of contig interactions, thus allowing comprehensive plasmid analysis within WGS datasets. The workflow of the method is based on three types of data: assembly information (including scaffold links and coverage), comparison to reference sequences and plasmid-diagnostic sequence features. The resulting network is pruned by expert analysis, to eliminate confounding data, and implemented in a Cytoscape-based graphic representation. To demonstrate PLACNET sensitivity and efficacy, the plasmidome of the Escherichia coli lineage ST131 was analyzed. ST131 is a globally spread clonal group of extraintestinal pathogenic E. coli (ExPEC), comprising different sublineages with ability to acquire and spread antibiotic resistance and virulence genes via plasmids. Results show that plasmids flux in the evolution of this lineage, which is wide open for plasmid exchange. MOBF12/IncF plasmids were pervasive, adding just by themselves more than 350 protein families to the ST131 pangenome. Nearly 50% of the most frequent γ–proteobacterial plasmid groups were found to be present in our limited sample of ten analyzed ST131 genomes, which represent the main ST131 sublineages. PMID:25522143

  4. Plasmid flux in Escherichia coli ST131 sublineages, analyzed by plasmid constellation network (PLACNET), a new method for plasmid reconstruction from whole genome sequences.

    PubMed

    Lanza, Val F; de Toro, María; Garcillán-Barcia, M Pilar; Mora, Azucena; Blanco, Jorge; Coque, Teresa M; de la Cruz, Fernando

    2014-12-01

    Bacterial whole genome sequence (WGS) methods are rapidly overtaking classical sequence analysis. Many bacterial sequencing projects focus on mobilome changes, since macroevolutionary events, such as the acquisition or loss of mobile genetic elements, mainly plasmids, play essential roles in adaptive evolution. Existing WGS analysis protocols do not assort contigs between plasmids and the main chromosome, thus hampering full analysis of plasmid sequences. We developed a method (called plasmid constellation networks or PLACNET) that identifies, visualizes and analyzes plasmids in WGS projects by creating a network of contig interactions, thus allowing comprehensive plasmid analysis within WGS datasets. The workflow of the method is based on three types of data: assembly information (including scaffold links and coverage), comparison to reference sequences and plasmid-diagnostic sequence features. The resulting network is pruned by expert analysis, to eliminate confounding data, and implemented in a Cytoscape-based graphic representation. To demonstrate PLACNET sensitivity and efficacy, the plasmidome of the Escherichia coli lineage ST131 was analyzed. ST131 is a globally spread clonal group of extraintestinal pathogenic E. coli (ExPEC), comprising different sublineages with ability to acquire and spread antibiotic resistance and virulence genes via plasmids. Results show that plasmids flux in the evolution of this lineage, which is wide open for plasmid exchange. MOBF12/IncF plasmids were pervasive, adding just by themselves more than 350 protein families to the ST131 pangenome. Nearly 50% of the most frequent γ-proteobacterial plasmid groups were found to be present in our limited sample of ten analyzed ST131 genomes, which represent the main ST131 sublineages.
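The core graph idea, contigs as nodes and scaffold links as edges, with components assigned to replicons, can be sketched as follows. This is a heavily simplified illustration, not the PLACNET pipeline (which also weighs coverage, reference comparisons, and expert pruning in Cytoscape); the contig names, links, and the single replication-initiator marker are all invented.

```python
# Hedged sketch of contig-network partitioning: build an undirected graph
# from scaffold links, find connected components, and call a component
# "plasmid" if it contains a plasmid-diagnostic feature hit.
from collections import defaultdict, deque

links = [("c1", "c2"), ("c2", "c3"),     # chromosome scaffold (invented)
         ("p1", "p2"), ("p2", "p3")]     # putative plasmid scaffold (invented)
markers = {"p1": "rep_FII"}              # plasmid-diagnostic hits (invented)

graph = defaultdict(set)
for a, b in links:
    graph[a].add(b)
    graph[b].add(a)

def components(g):
    """Connected components of an undirected adjacency-set graph."""
    seen, comps = set(), []
    for start in list(g):
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            seen.add(node)
            queue.extend(g[node] - comp)
        comps.append(comp)
    return comps

calls = {}
for comp in components(graph):
    label = "plasmid" if any(c in markers for c in comp) else "chromosome"
    for c in comp:
        calls[c] = label
```

In the real method the components are far noisier (repeated sequences join replicons), which is why the resulting network is pruned by expert analysis before replicons are called.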

  5. Successive ratio subtraction as a novel manipulation of ratio spectra for quantitative determination of a mixture of furosemide, spironolactone and canrenone

    NASA Astrophysics Data System (ADS)

    Emam, Aml A.; Abdelaleem, Eglal A.; Naguib, Ibrahim A.; Abdallah, Fatma F.; Ali, Nouruddin W.

    2018-03-01

Furosemide and spironolactone are commonly prescribed antihypertensive drugs. Canrenone is the main degradation product and main metabolite of spironolactone. Ratio subtraction and extended ratio subtraction spectrophotometric methods were previously applied for the quantitation of binary mixtures only. An extension of the above mentioned methods, successive ratio subtraction, is introduced in the presented work for quantitative determination of ternary mixtures, exemplified by furosemide, spironolactone and canrenone. Manipulating the ratio spectra of the ternary mixture allowed their determination at 273.6 nm, 285 nm and 240 nm and in the concentration ranges of (2-16 μg mL- 1), (4-32 μg mL- 1) and (1-18 μg mL- 1) for furosemide, spironolactone and canrenone, respectively. Method specificity was ensured by application to laboratory-prepared mixtures. The introduced method was shown to be accurate and precise. Validation of the developed method was done with respect to ICH guidelines, and its validity was further ensured by application to the pharmaceutical formulation. Statistical comparison between the obtained results and those obtained from the reported HPLC method was carried out using Student's t-test and the F-ratio test, where no significant difference was observed.
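The single ratio-subtraction step that the method applies successively can be demonstrated on a synthetic binary mixture (Gaussian bands, invented concentrations, not real spectra). For a mixture M = cX*X + cY*Y, dividing by the divisor spectrum Y turns the Y contribution into a constant plateau cY; subtracting that plateau and multiplying back by Y recovers cX*X. The successive version of the paper repeats this peeling step to resolve the ternary mixture.

```python
# Illustrative ratio-subtraction sketch on synthetic Gaussian bands.
from math import exp

wl = [200 + i for i in range(201)]              # 200-400 nm wavelength grid
def band(center, width):
    """Synthetic Gaussian absorption band."""
    return [exp(-((l - center) / width) ** 2) for l in wl]

X = band(250.0, 10.0)    # analyte X: negligible absorbance above ~300 nm
Y = band(350.0, 15.0)    # component Y, used as the divisor spectrum
cX, cY = 2.0, 3.0
M = [cX * x + cY * y for x, y in zip(X, Y)]     # binary mixture spectrum

eps = 1e-30              # guard against division by ~0 far from Y's band
ratio = [m / (y + eps) for m, y in zip(M, Y)]   # = cX*X/Y + cY

# the plateau, read where X does not absorb (here >= 330 nm), equals cY
tail = [r for l, r in zip(wl, ratio) if l >= 330]
plateau = sum(tail) / len(tail)

# subtract the plateau and multiply back by the divisor: recovers cX*X
recovered = [(r - plateau) * (y + eps) for r, y in zip(ratio, Y)]
```

In practice the plateau is read from a genuinely flat region of the measured ratio spectrum, and each recovered component is then quantified at its analytical wavelength.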

  6. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and of how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression for the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on a Taylor-series approximation. Both model-based methods show significant advantages compared to the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function which remains valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  7. Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.

    2018-02-01

    The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper summarizes key features of the method and provides a synopsis of the main results obtained by various groups using it. This will enable new users, or those considering methods of this type, to find details and background collected in one place.
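
    For readers new to the method, the PVI index of a time series b(t) is the magnitude of its increment normalized by the root-mean-square increment, PVI(t) = |Δb(t)| / √⟨|Δb|²⟩. A minimal sketch (the random-walk signal is only a stand-in for real magnetic-field data; for vector data |Δb| would be the vector magnitude):

```python
import numpy as np

def pvi(signal, lag=1):
    """Partial Variance of Increments of a 1-D time series.

    PVI(t) = |db(t)| / sqrt(<|db|^2>), with db(t) = b(t+lag) - b(t)
    and <...> an average over the whole interval.
    """
    db = np.abs(signal[lag:] - signal[:-lag])
    return db / np.sqrt(np.mean(db ** 2))

rng = np.random.default_rng(0)
b = np.cumsum(rng.normal(size=10_000))   # random walk as a synthetic signal
series = pvi(b)
# Thresholds such as PVI > 3 are commonly used to flag candidate
# coherent structures (e.g. current sheets) for further study.
events = np.flatnonzero(series > 3.0)
```

    By construction the mean of PVI² is 1, so the threshold has a uniform statistical meaning across intervals.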

  8. Flow Liner Slot Edge Replication Feasibility Study

    NASA Technical Reports Server (NTRS)

    Newman, John A.; Willard, Scott A.; Smith, Stephen W.; Piascik, Robert S.

    2006-01-01

    Surface replication has been proposed as a method for crack detection in space shuttle main engine flowliner slots. The results of a feasibility study show that examination of surface replicas with a scanning electron microscope can result in the detection of cracks as small as 0.005 inch, and surface flaws as small as 0.001 inch, for the flowliner material.

  9. Using the Computer to Teach Methods and Interpretative Skills in the Humanities: Implementing a Project.

    ERIC Educational Resources Information Center

    Jones, Bruce William

    The results of implementing computer-assisted instruction (CAI) in two religion courses and a logic course at California State College, Bakersfield, are examined along with student responses. The main purpose of the CAI project was to teach interpretive skills. The most positive results came in the logic course. The programs in the New Testament…

  10. Existence results for degenerate p(x)-Laplace equations with Leray-Lions type operators

    NASA Astrophysics Data System (ADS)

    Ho, Ky; Sim, Inbo

    2017-01-01

    We show various existence results for degenerate $p(x)$-Laplace equations with Leray-Lions type operators. A suitable condition on the degeneracy is discussed, and the proofs are mainly based on direct methods and critical point theories in the calculus of variations. In particular, we investigate various situations of the growth rates between the principal operators and the nonlinearities.

  11. Determination of n-alkanes in C. annuum (bell pepper) fruit and seed using GC-MS: comparison of extraction methods and application to samples of different geographical origin.

    PubMed

    de Rijke, E; Fellner, C; Westerveld, J; Lopatka, M; Cerli, C; Kalbitz, K; de Koster, C G

    2015-07-01

    An efficient extraction and analysis method was developed for the isolation and quantification of n-alkanes from bell peppers of different geographical locations. Five extraction techniques, i.e., accelerated solvent extraction (ASE), ball mill extraction, ultrasonication, rinsing, and shaking, were quantitatively compared using gas chromatography coupled to mass spectrometry (GC-MS). Rinsing of the surface wax layer of freeze-dried bell peppers with chloroform proved to be a relatively quick and easy method to efficiently extract the main n-alkanes C27, C29, C31, and C33. A combined cleanup and fractionation approach on Teflon-coated silica SPE columns resulted in clean chromatograms and gave reproducible results (recoveries 90-95%). The GC-MS method was reproducible (R² = 0.994-0.997, peak area standard deviation = 2-5%) and sensitive (LODs, S/N = 3, 0.05-0.15 ng/μL). The total main n-alkane concentrations were in the range of 5-50 μg/g dry weight. Seed extractions resulted in much lower total amounts of extracted n-alkanes compared to flesh and surface extractions, demonstrating the need for further improvement of pre-concentration and cleanup. The method was applied to 131 pepper samples from four different countries, and by using the relative n-alkane concentration ratios, Dutch peppers could be discriminated from those of the other countries, with the exception of peppers from the same cultivar. Graphical abstract: procedure for pepper origin determination.

  12. Water footprint of European cars: potential impacts of water consumption along automobile life cycles.

    PubMed

    Berger, Markus; Warsen, Jens; Krinke, Stephan; Bach, Vanessa; Finkbeiner, Matthias

    2012-04-03

    Due to global increase of freshwater scarcity, knowledge about water consumption in product life cycles is important. This study analyzes water consumption and the resulting impacts of Volkswagen's car models Polo, Golf, and Passat and represents the first application of impact-oriented water footprint methods on complex industrial products. Freshwater consumption throughout the cars' life cycles is allocated to material groups and assigned to countries according to import mix shares or location of production sites. Based on these regionalized water inventories, consequences for human health, ecosystems, and resources are determined by using recently developed impact assessment methods. Water consumption along the life cycles of the three cars ranges from 52 to 83 m³/car, of which more than 95% is consumed in the production phase, mainly resulting from producing iron, steel, precious metals, and polymers. Results show that water consumption takes place in 43 countries worldwide and that only 10% is consumed directly at Volkswagen's production sites. Although impacts on health tend to be dominated by water consumption in South Africa and Mozambique, resulting from the production of precious metals and aluminum, consequences for ecosystems and resources are mainly caused by water consumption of material production in Europe.

  13. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function; minimizing the objective function yields optimal performance of the system. A spacecraft-mounted science instrument line-of-sight pointing control is used to demonstrate the results.

  14. Interaction of methotrexate with trypsin analyzed by spectroscopic and molecular modeling methods

    NASA Astrophysics Data System (ADS)

    Wang, Yanqing; Zhang, Hongmei; Cao, Jian; Zhou, Qiuhua

    2013-11-01

    Trypsin is one of the important digestive enzymes intimately linked to human health and illness. In this work, the interaction of trypsin with methotrexate was investigated by spectroscopic and molecular modeling methods. The results revealed that methotrexate interacts with trypsin at about one binding site. The methotrexate molecule can enter the primary substrate-binding pocket, resulting in inhibition of trypsin activity. Furthermore, thermodynamic analysis implied that electrostatic forces, hydrogen bonding, van der Waals and hydrophobic interactions were the main interactions stabilizing the trypsin-methotrexate system, which agreed well with the results of the molecular modeling study.

  15. Detecting Network Communities: An Application to Phylogenetic Analysis

    PubMed Central

    Andrade, Roberto F. S.; Rocha-Neto, Ivan C.; Santos, Leonardo B. L.; de Santana, Charles N.; Diniz, Marcelo V. C.; Lobão, Thierry Petit; Goés-Neto, Aristóteles; Pinho, Suani T. R.; El-Hani, Charbel N.

    2011-01-01

    This paper proposes a new method to identify communities in generally weighted complex networks and applies it to phylogenetic analysis. In this case, weights correspond to the similarity indexes among protein sequences, which can be used for network construction so that the network structure can be analyzed to recover phylogenetically useful information from its properties. The analyses discussed here are mainly based on the modular character of protein similarity networks, explored through the Newman-Girvan algorithm with the help of the neighborhood matrix. The most relevant networks are found when the network topology changes abruptly, revealing distinct modules related to the sets of organisms to which the proteins belong. Sound biological information can be retrieved by the computational routines used in the network approach, without using biological assumptions other than those incorporated by BLAST. Usually, all the main bacterial phyla and, in some cases, also some bacterial classes corresponded totally (100%) or to a great extent (>70%) to the modules. We checked for internal consistency in the obtained results, and we scored close to 84% of matches for community pertinence when comparisons between the results were performed. To illustrate how to use the network-based method, we employed data for enzymes involved in the chitin metabolic pathway that are present in more than 100 organisms from an original data set containing 1,695 organisms, downloaded from GenBank on May 19, 2007. A preliminary comparison between the outcomes of the network-based method and the results of methods based on Bayesian, distance, likelihood, and parsimony criteria suggests that the former is as reliable as these commonly used methods. We conclude that the network-based method can be used as a powerful tool for retrieving modularity information from weighted networks, which is useful for phylogenetic analysis. PMID:21573202
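
    The modular decomposition step can be sketched with networkx's implementation of the Newman-Girvan algorithm. The toy similarity values below are hypothetical stand-ins for BLAST-derived similarity indexes, not data from the paper; note also that networkx's default `girvan_newman` removes edges by unweighted betweenness unless a custom `most_valuable_edge` function is supplied:

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Hypothetical similarity indexes between six protein sequences:
# two tight clusters bridged by one weak link.
sim = {
    ("p1", "p2"): 0.9, ("p1", "p3"): 0.8, ("p2", "p3"): 0.85,
    ("p4", "p5"): 0.9, ("p4", "p6"): 0.8, ("p5", "p6"): 0.85,
    ("p3", "p4"): 0.1,   # weak bridge between the clusters
}

G = nx.Graph()
for (u, v), w in sim.items():
    G.add_edge(u, v, weight=w)

# Girvan-Newman iteratively removes the highest-betweenness edge;
# the first yielded partition is the first split into communities.
communities = next(girvan_newman(G))
```

    Here the bridge p3-p4 carries all shortest paths between the clusters, so it is removed first and the two three-node modules emerge.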

  16. Partition method and experimental validation for impact dynamics of flexible multibody system

    NASA Astrophysics Data System (ADS)

    Wang, J. Y.; Liu, Z. Y.; Hong, J. Z.

    2018-06-01

    The impact problem of a flexible multibody system is a non-smooth, highly transient, strongly nonlinear dynamic process with variable boundary. How to model the contact/impact process accurately and efficiently is one of the main difficulties in many engineering applications. The numerical approaches widely used in impact analysis come mainly from two fields: multibody system dynamics (MBS) and computational solid mechanics (CSM). Approaches based on MBS provide a more efficient yet less accurate analysis of contact/impact problems, while approaches based on CSM are well suited for particularly high accuracy needs, yet require very high computational effort. To bridge the gap between accuracy and efficiency in the dynamic simulation of a flexible multibody system with contacts/impacts, a partition method is presented in which the contact body is divided into two parts, an impact region and a non-impact region. The impact region is modeled using the finite element method to guarantee local accuracy, while the non-impact region is modeled using the modal reduction approach to raise global efficiency. A three-dimensional rod-plate impact experiment is designed and performed to validate the numerical results. A principle for partitioning the contact bodies is proposed: the maximum radius of the impact region can be estimated by an analytical method, and the modal truncation order of the non-impact region can be estimated from the highest frequency of the measured signal. The simulation results using the presented method are in good agreement with the experimental results, showing that the method is an effective formulation in terms of both accuracy and efficiency. Moreover, a more complicated multibody impact problem of a crank-slider mechanism is investigated to strengthen this conclusion.

  17. Comparative analysis of methods for detecting interacting loci.

    PubMed

    Chen, Li; Yu, Guoqiang; Langefeld, Carl D; Miller, David J; Guy, Richard T; Raghuram, Jayaram; Yuan, Xiguo; Herrington, David M; Wang, Yue

    2011-07-05

    Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR), were compared on a large number of simulated data sets, each consistent with complex disease models and embedding multiple sets of interacting SNPs under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM.
Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list.

  18. First results of the delayed fluorescence velocimetry as applied to diesel spray diagnostics

    NASA Astrophysics Data System (ADS)

    Megahed, M.; Roosen, P.

    1993-08-01

    One of the main parameters governing diesel spray formation is the fuel's velocity just beneath the nozzle. The high density of the injected liquid within the first few millimeters under the injector prohibits accurate measurements of this velocity. The liquid's velocity in this region has been mainly measured using intrusive methods and has been numerically calculated without considering the complex flow fields in the nozzle. A new optical method based on laser induced delayed fluorescence allowing the measurement of the fuel's velocity close to the nozzle is reported. The results are accurate to about 14% and represent the velocities of heavy oils within the first 2-5 mm beneath the nozzle. The development of the velocity over the injection period showed a drastic deceleration of the fuel within the first 3 mm beneath the nozzle. This is assumed to be due to the complex interaction of cavitation in the injection hole and pressure waves in the injection system which causes the start of atomization in the nozzle hole.

  19. Social Support of Patients with Type 2 Diabetes in Marginalized Contexts in Mexico and Its Relation to Compliance with Treatment: A Sociocultural Approach

    PubMed Central

    Juárez-Ramírez, Clara; Théodore, Florence L.; Villalobos, Aremis; Jiménez-Corona, Aida; Lerin, Sergio; Nigenda, Gustavo; Lewis, Sarah

    2015-01-01

    Objective This study aimed to describe the ways social support works in the daily life of patients with type 2 diabetes living in conditions of social and economic marginality, in order to understand how that support relates to treatment compliance. Methods Sequential mixed methods research was used. The sample of patients was obtained from primary health care units and selected considering regional representativeness, and levels of morbidity and mortality for type 2 diabetes. Results Results point to the nuclear family as the main source of support. Regardless of the area of residence, four main dimensions of support were identified: economic support, help with treatment compliance, emotional support, and material aid. Conclusions We conclude that the support network assists the patient in different ways and helps cope with the disease, but in conditions of social and economic marginality, does not guarantee the quality of attention nor enable the self-management of treatment. PMID:26545122

  20. Comparison of Various Similarity Measures for Average Image Hash in Mobile Phone Application

    NASA Astrophysics Data System (ADS)

    Farisa Chaerul Haviana, Sam; Taufik, Muhammad

    2017-04-01

    One of the main issues in Content-Based Image Retrieval (CBIR) is the similarity measure for the resulting image hashes. The key challenge is to find the most beneficial distance or similarity measure for calculating similarity in terms of speed and computing cost, especially on devices with limited computing capability such as mobile phones. In this study we implement twelve of the most common and popular distance or similarity measure techniques in a mobile phone application, to be compared and studied. The results show that all similarity measures implemented in this study performed comparably in the mobile phone application. This opens more possibilities for method combinations to be implemented for image retrieval.
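
    As a sketch of the quantities being compared, an average hash reduces an image to a small bit vector, and the Hamming distance is the simplest of the similarity measures in question. A minimal NumPy version, assuming a grayscale image whose sides are multiples of the hash size:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Average hash: block-average the image down to hash_size x hash_size,
    then set each bit by comparing the block mean to the global mean."""
    h, w = img.shape
    small = img[: h - h % hash_size, : w - w % hash_size]
    small = small.reshape(hash_size, small.shape[0] // hash_size,
                          hash_size, small.shape[1] // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming_distance(h1, h2):
    """Count of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))

img = np.zeros((64, 64))
img[:, :32] = 255.0                      # left half bright, right half dark
d_same = hamming_distance(average_hash(img), average_hash(img))          # 0
d_flip = hamming_distance(average_hash(img), average_hash(255.0 - img))  # 64
```

    Other measures from the study's comparison (e.g. cosine or Euclidean distance) would operate on the same bit vectors; the speed question is which of these computations is cheapest on the phone.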

  1. A Generalised Fault Protection Structure Proposed for Uni-grounded Low-Voltage AC Microgrids

    NASA Astrophysics Data System (ADS)

    Bui, Duong Minh; Chen, Shi-Lin; Lien, Keng-Yu; Jiang, Jheng-Lun

    2016-04-01

    This paper presents three main configurations of uni-grounded low-voltage AC microgrids. Transient situations of a uni-grounded low-voltage (LV) AC microgrid (MG) are simulated through various fault tests and operation transition tests between grid-connected and islanded modes. Based on the transient simulation results, available fault protection methods are proposed for main and back-up protection of a uni-grounded AC microgrid. In addition, the concept of a generalised fault protection structure for uni-grounded LVAC MGs is presented. The main contributions of the paper are: (i) definition of different uni-grounded LVAC MG configurations; (ii) analysis of the transient responses of a uni-grounded LVAC microgrid through line-to-line faults, line-to-ground faults, three-phase faults and a microgrid operation transition test; (iii) proposal of available fault protection methods for uni-grounded microgrids, such as non-directional or directional overcurrent protection, under/over voltage protection, differential current protection, voltage-restrained overcurrent protection, and other fault protection principles not based on phase currents and voltages (e.g. total harmonic distortion detection of currents and voltages, or use of sequence components of current and voltage, 3I0 or 3V0 components); and (iv) development of a generalised fault protection structure with six individual protection zones suitable for different uni-grounded AC MG configurations.

  2. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    PubMed

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Starting from the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
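
    The conventional passive baseline the paper builds on estimates azimuth from the time-averaged acoustic intensity components measured by the AVS (pressure times each particle-velocity channel). A synthetic single-tone sketch, with all signal parameters assumed for illustration:

```python
import numpy as np

fs, f0 = 10_000, 500.0                  # sample rate (Hz) and tone frequency
t = np.arange(0, 0.2, 1 / fs)
theta_true = np.deg2rad(40.0)           # assumed source azimuth

# Synthetic AVS channels: sound pressure p and the two orthogonal
# particle-velocity components vx, vy for a plane wave from theta_true.
p = np.cos(2 * np.pi * f0 * t)
vx = p * np.cos(theta_true)
vy = p * np.sin(theta_true)

# Average acoustic-intensity estimate of azimuth.
Ix = np.mean(p * vx)
Iy = np.mean(p * vy)
theta_est = np.degrees(np.arctan2(Iy, Ix))
```

    The paper's contribution is to precede this intensity averaging with matched filtering of the active-sonar echo, which suppresses noise before the azimuth is formed.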

  3. Research on dynamic characteristics of motor vibration isolation system through mechanical impedance method

    NASA Astrophysics Data System (ADS)

    Zhao, Xingqian; Xu, Wei; Shuai, Changgeng; Hu, Zechao

    2017-12-01

    A mechanical impedance model of a coupled motor-shaft-bearing system has been developed to predict the dynamic characteristics, and it has been partially validated by comparing the computed results with the finite element method (FEM), including comparisons of the displacement amplitudes in the x and z directions at the two ends of the flexible coupling and of the normalized vertical reaction force in the z direction at the bearing pedestals. The results demonstrate that the developed model can precisely predict the dynamic characteristics. The main advantage of such a method is that it clearly illustrates the vibration properties of the motor subsystem, which plays an important role in the design of the isolation system.

  4. The main peculiarities of the processes of the deformation and destruction of lunar soil

    NASA Technical Reports Server (NTRS)

    Leonovich, A. K.; Gromov, V. V.; Dmitriyev, A. D.; Penetrigov, V. N.; Senevov, P. S.; Shvarev, V. V.

    1977-01-01

    The main results of the study of the physical and mechanical properties of lunar soil, obtained by laboratory study of samples returned from the moon by Luna 16 and Luna 20, as well as by operation of the self-propelled Lunokhod 1 and Lunokhod 2 on the surface of the moon, are analyzed in the report. All studies were carried out by uniform methods and by means of unified instruments, allowing a confident comparison of the results obtained. The investigations conducted allowed the following values of the main physical-mechanical properties of lunar soil to be determined: in the natural condition the solid density corresponds to a porosity of 0.8; the modal value of the carrying capacity is 0.4 kg/cm²; adhesion is 0.04 to 0.06 kg/cm²; and the angle of internal friction is 20 to 25 degrees. The main mechanisms of deformation and destruction of the soil are analyzed in the report, and the relationships between the mechanical properties and physical parameters of the soil are presented.

  5. Thermographic Nondestructive Evaluation of the Space Shuttle Main Engine Nozzle

    NASA Technical Reports Server (NTRS)

    Walker, James L.; Lansing, Matthew D.; Russell, Samuel S.; Caraccioli, Paul; Whitaker, Ann F. (Technical Monitor)

    2000-01-01

    The methods and results presented in this summary address the thermographic identification of interstitial leaks in the Space Shuttle Main Engine nozzles. A highly sensitive digital infrared camera is used to record the minute cooling effects associated with a leak source, such as a crack or pinhole, hidden within the nozzle wall by observing the inner "hot wall" surface as the nozzle is pressurized. These images are enhanced by digitally subtracting a thermal reference image taken before pressurization, greatly diminishing background noise. The method provides a nonintrusive way of localizing the tube that is leaking and the exact leak source position to within a very small axial distance. Many of the factors that influence the inspectability of the nozzle are addressed, including pressure rate, peak pressure, gas type, ambient temperature and surface preparation.
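
    The reference-subtraction step can be sketched with synthetic thermal images; the temperatures, noise level, leak size and threshold below are hypothetical stand-ins, not SSME data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical hot-wall thermal images (°C) with sensor noise.
reference = 90.0 + rng.normal(0.0, 0.2, size=(128, 128))  # before pressurization
pressurized = reference + rng.normal(0.0, 0.2, size=(128, 128))
pressurized[60:64, 60:64] -= 2.0        # small cooling spot from a leak

# Digital subtraction of the reference suppresses the static background,
# leaving only pressurization-induced changes plus residual noise.
diff = pressurized - reference
leak_mask = diff < -1.0                 # threshold the cooling signature
rows, cols = np.nonzero(leak_mask)      # pixel coordinates of the leak
```

    Because the static thermal pattern cancels in the subtraction, even a cooling signature much smaller than the scene's temperature variation stands out against the residual noise.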

  6. Kinetic Analysis of the Main Temperature Stage of Fast Pyrolysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiaoxiao; Zhao, Yuying; Xu, Lanshu; Li, Rui

    2017-10-01

    The kinetics of the thermal decomposition of eucalyptus chips was evaluated using a high-rate thermogravimetric analyzer (BL-TGA) designed by our research group. The experiments were carried out under non-isothermal conditions in order to determine the fast pyrolysis behavior of the main temperature stage (350-540 °C) at heating rates of 60, 120, 180, and 360 °C min⁻¹. The Coats-Redfern integral method and four different reaction mechanism models were adopted to calculate the kinetic parameters, including the apparent activation energy and pre-exponential factor, and the Flynn-Wall-Ozawa method was employed to verify the apparent activation energy. The results showed that the estimated values were consistent with those obtained from the linear fitting equations, and the best-fit model for fast pyrolysis was identified.
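
    For a first-order mechanism model g(α) = -ln(1-α), the Coats-Redfern method fits ln(g(α)/T²) against 1/T: the slope gives -Ea/R and the intercept gives the pre-exponential factor. A sketch on synthetic data generated from assumed kinetic parameters (not the eucalyptus measurements):

```python
import numpy as np

R = 8.314                 # gas constant, J mol^-1 K^-1
Ea_true = 150e3           # assumed apparent activation energy, J/mol
A, beta = 1e12, 60 / 60.0  # assumed pre-exponential (s^-1), heating rate (K/s)

T = np.linspace(623, 813, 50)   # main pyrolysis stage, ~350-540 °C in kelvin
# Coats-Redfern line for a first-order model g(a) = -ln(1-a):
#   ln(g(a)/T^2) = ln(A*R/(beta*Ea)) - Ea/(R*T)
y = np.log(A * R / (beta * Ea_true)) - Ea_true / (R * T)

# Linear fit of y against 1/T recovers the kinetic parameters.
slope, intercept = np.polyfit(1.0 / T, y, 1)
Ea_est = -slope * R                         # apparent activation energy
A_est = np.exp(intercept) * beta * Ea_est / R
```

    With real thermogravimetric data, y would be computed from the measured conversion α(T) for each candidate mechanism model, and the model giving the best linearity would be selected, as in the paper.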

  7. PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization

    NASA Astrophysics Data System (ADS)

    Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh

    2017-05-01

    Multiple-point geostatistics is a well-known general statistical framework by which complex geological phenomena have been modeled efficiently. Pixel-based and patch-based are the two major categories of these methods. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis as texture optimization. Our extended version of texture optimization uses the energy concept to model geological phenomena. While honoring hard point data, the minimization of our proposed cost function forces simulation grid pixels to be as similar as possible to the training images. Our algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all surrounding patches of the simulation nodes. Therefore, it preserves pattern continuity in both continuous and categorical variables very well. It also shows a fuzzy result in every realization, similar to the expected result of multiple realizations of other statistical models. While the main core of most previous multiple-point geostatistics methods is sequential, the parallel main core of our algorithm enables it to use the GPU efficiently to reduce CPU time. A new validation method for MPS has also been proposed in this paper.

  8. Determination of Ca content of coral skeleton by analyte additive method using the LIBS technique

    NASA Astrophysics Data System (ADS)

    Haider, A. F. M. Y.; Khan, Z. H.

    2012-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique was used to study the elemental profile of coral skeletons. Apart from calcium and carbon, which are the main elemental constituents of coral skeleton, elements like Sr, Na, Mg, Li, Si, Cu, Ti, K, Mn, Zn, Ba, Mo, Br and Fe were detected in the coral skeletons from the Inani Beach and the Saint Martin's Island of Bangladesh and the coral from the Philippines. In addition to the qualitative analysis, quantitative analysis of the main elemental constituent, calcium (Ca), was performed. The result shows the presence of (36.15±1.43)% by weight of Ca in the coral skeleton collected from the Inani Beach, Cox's Bazar, Bangladesh. It was determined by the standard analyte additive method using six calibration curves, drawn for six emission lines of Ca I (428.301 nm, 428.936 nm, 431.865 nm, 443.544 nm, 443.569 nm, and 445.589 nm). An atomic absorption spectroscopy (AAS) measurement of the same coral skeleton sample gave a Ca content of 39.87% by weight, which compares fairly well with the result obtained by the analyte additive method.
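
    The analyte (standard) additive method determines an unknown concentration by spiking the sample with known amounts of the analyte, fitting the linear response, and extrapolating to zero signal. A sketch with hypothetical intensity readings (not the paper's data):

```python
import numpy as np

# Hypothetical standard-addition data: emission-line intensity of the
# sample alone and with known analyte additions (concentration units).
added = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
signal = np.array([12.1, 18.0, 24.2, 30.1, 36.0])  # assumed linear response

slope, intercept = np.polyfit(added, signal, 1)
# Extrapolating the fitted line to zero signal, the magnitude of the
# x-intercept is the unknown concentration: c = intercept / slope.
c_unknown = intercept / slope
```

    In the paper this is done once per Ca I emission line, giving six independent calibration curves whose results are averaged into the reported (36.15±1.43)% figure.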

  9. Experiences of abortion: A narrative review of qualitative studies

    PubMed Central

    Lie, Mabel LS; Robson, Stephen C; May, Carl R

    2008-01-01

    Background Although abortion or termination of pregnancy (TOP) has become an increasingly normalized component of women's health care over the past forty years, insufficient attention has been paid to women's experiences of surgical or medical methods of TOP. Objective To undertake a narrative review of qualitative studies of women's experiences of TOP and their perspectives on surgical or medical methods. Methods Keyword searches of Medline, CINAHL, ISI, and IBSS databases. Manual searches of other relevant journals and reference lists of primary articles. Results Qualitative studies (n = 18) on women's experiences of abortion were identified. Analysis of the results of studies reviewed revealed three main themes: experiential factors that promote or inhibit the choice to seek TOP; experiences of TOP; and experiential aspects of the environment in which TOP takes place. Conclusion Women's choices about TOP are mainly pragmatic ones that are related to negotiating finite personal, family, and emotional resources. Women who are well informed and supported in their choices experience good psychosocial outcomes from TOP. Home TOP using mifepristone appears attractive to women who are concerned about professionals' negative attitudes and lack of privacy in formal healthcare settings but also leads to concerns about management and safety. PMID:18637178

  10. Calculations of dose distributions using a neural network model

    NASA Astrophysics Data System (ADS)

    Mathieu, R.; Martin, E.; Gschwind, R.; Makovicka, L.; Contassot-Vivier, S.; Bahi, J.

    2005-03-01

    The main goal of external beam radiotherapy is the treatment of tumours while sparing, as much as possible, the surrounding healthy tissue. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods; however, in spite of all efforts to optimize speed (both methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle these problems is to employ neural networks for dosimetric calculation. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of the various approaches above while avoiding their main inconvenience, time-consuming calculations. This permits quick and accurate results during clinical treatment planning. Currently, a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) requires hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journées Scientifiques Francophones, SFRP) provides almost instant results with quite low errors (less than 2%) for a two-dimensional dosimetric map.
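The idea of replacing a slow dose engine with a trained network can be sketched with a toy regression (the depth-dose curve below is a synthetic illustrative shape, not beam data, and the network size and learning rate are arbitrary choices, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D depth-dose curve (illustrative shape only, not measured data):
# a build-up region followed by a quasi-exponential fall-off.
depth = np.linspace(0, 1, 64).reshape(-1, 1)
dose = (1 - np.exp(-8 * depth)) * np.exp(-2 * depth)

# One-hidden-layer network trained by plain full-batch gradient descent.
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
lr = 0.3
for _ in range(8000):
    h = np.tanh(depth @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2                        # predicted dose
    err = pred - dose
    # Backpropagation for the mean-squared-error loss.
    gW2 = h.T @ err / len(depth); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = depth.T @ dh / len(depth); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((pred - dose) ** 2)))
print(f"RMSE on the training curve: {rmse:.4f}")
```

Once trained, evaluating the network is a handful of matrix products, which is why inference is near-instant compared with re-running a Monte Carlo transport code.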

  11. Calculations of dose distributions using a neural network model.

    PubMed

    Mathieu, R; Martin, E; Gschwind, R; Makovicka, L; Contassot-Vivier, S; Bahi, J

    2005-03-07

    The main goal of external beam radiotherapy is the treatment of tumours while sparing, as much as possible, the surrounding healthy tissue. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods; however, in spite of all efforts to optimize speed (both methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle these problems is to employ neural networks for dosimetric calculation. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of the various approaches above while avoiding their main inconvenience, time-consuming calculations. This permits quick and accurate results during clinical treatment planning. Currently, a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) requires hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journées Scientifiques Francophones, SFRP) provides almost instant results with quite low errors (less than 2%) for a two-dimensional dosimetric map.

  12. A Monte-Carlo method which is not based on Markov chain algorithm, used to study electrostatic screening of ion potential

    NASA Astrophysics Data System (ADS)

    Šantić, Branko; Gracin, Davor

    2017-12-01

    A new, simple Monte Carlo method is introduced for the study of electrostatic screening by surrounding ions. The proposed method is not based on the commonly used Markov chain approach to sample generation: each sample is pristine, with no correlation to other samples. As the main novelty, pairs of ions are gradually added to a sample provided that the energy of each ion remains within boundaries determined by the temperature and the size of the ions. The proposed method provides reliable results, as demonstrated by the screening of an ion in plasma and in water.
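The gist of the non-Markov-chain idea can be sketched as follows. This is a hypothetical toy in 2D with illustrative parameters (box size, ion radius, energy bound, pair count are all made up): each sample is grown from scratch by adding +/- ion pairs, and a placement is kept only if every ion's energy stays inside the allowed band, so successive samples are statistically independent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Box size, ion hard-core radius, per-ion energy bound, and target number of
# ion pairs are all illustrative values, not the paper's parameters.
L, R_ION, E_MAX, N_PAIRS = 10.0, 0.5, 2.0, 20

def grow_sample(max_tries=2000):
    """Grow one independent sample by adding +/- ion pairs from scratch."""
    pos = np.empty((0, 2))
    chg = np.empty(0)
    tries = 0
    while len(chg) < 2 * N_PAIRS and tries < max_tries:
        tries += 1
        cand_pos = np.vstack([pos, rng.uniform(0, L, (2, 2))])
        cand_chg = np.append(chg, [+1.0, -1.0])
        d = np.linalg.norm(cand_pos[:, None] - cand_pos[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        if d.min() < 2 * R_ION:
            continue                      # hard-core overlap: reject the pair
        # Per-ion Coulomb-like energy: sum of q_i q_j / r_ij over partners.
        energy = (cand_chg[:, None] * cand_chg[None, :] / d).sum(axis=1)
        if np.abs(energy).max() > E_MAX:
            continue                      # an ion's energy left the allowed band
        pos, chg = cand_pos, cand_chg     # keep the pair
    return pos, chg

pos, chg = grow_sample()
print(len(chg), "ions placed; net charge:", chg.sum())
```

Because each call to `grow_sample` starts from an empty box, averaging an observable over many calls needs no burn-in and no autocorrelation analysis, unlike a Markov chain.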

  13. Structural dynamic analysis of the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Scott, L. P.; Jamison, G. T.; Mccutcheon, W. A.; Price, J. M.

    1981-01-01

    This structural dynamic analysis supports development of the SSME by evaluating components subjected to critical dynamic loads, identifying significant parameters, and evaluating solution methods. Engine operating parameters at both rated and full power levels are considered. Detailed structural dynamic analyses of operationally critical and life limited components support the assessment of engine design modifications and environmental changes. Engine system test results are utilized to verify analytic model simulations. The SSME main chamber injector assembly is an assembly of 600 injector elements which are called LOX posts. The overall LOX post analysis procedure is shown.

  14. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews.

    PubMed

    Pluye, Pierre; Hong, Quan Nha

    2014-01-01

    This article provides an overview of mixed methods research and mixed studies reviews. These two approaches are used to combine the strengths of quantitative and qualitative methods and to compensate for their respective limitations. This article is structured in three main parts. First, the epistemological background for mixed methods will be presented. Afterward, we present the main types of mixed methods research designs and techniques as well as guidance for planning, conducting, and appraising mixed methods research. In the last part, we describe the main types of mixed studies reviews and provide a tool kit and examples. Future research needs to offer guidance for assessing mixed methods research and reporting mixed studies reviews, among other challenges.

  15. Surface doping with Al in Ba-hexaferrite powders (abstract)

    NASA Astrophysics Data System (ADS)

    Turilli, G.; Paoluzi, A.; Lucenti, M.

    1991-04-01

    Barium M-hexaferrites have been intensively studied with different ion substitutions in order to improve their magnetic characteristics for application as permanent magnets. However, substitutions that improve the BHmax energy product have not been found. We propose a new method to modify the extrinsic magnetic characteristics of Ba-hexaferrite powders without drastically reducing the magnetization and the magnetic anisotropy. The method consists of surface doping of the hexaferrite particles, resulting in a modification of the pinning energy of the domain walls at the grain boundary. Ba-ferrite powders with a mean diameter of 3.2 μm were dry mixed with Al2O3 powders with a diameter <0.5 μm. From the mixed powder, a series of 10 cylindrically shaped samples was obtained by isostatically pressing the powders. The samples were thermally treated from 900 to 1200 °C, together with 10 cylindrical samples of pure hexaferrite, for 1 h each. For all the samples we measured the Curie temperature (Tc), the anisotropy field (HA), the coercive field (Hc), and the saturation magnetization σ. The main results are that up to 1000 °C the Al diffusion is mainly localized at the surface of the grain, so that the main part of the grain is undoped, as confirmed by Tc and HA values that are the same as those found in pure hexaferrites. From 900 to 1000 °C the saturation magnetization decreases by 3% while Hc increases by 9% with respect to the pure hexaferrite. This result seems to confirm the validity of the proposed method. Above 1000 °C Al begins to diffuse into the grain, and above 1200 °C thermomagnetic analysis shows that Al has diffused uniformly throughout the grain. In this last temperature range the Al substitution leads to a 10% reduction in σ, as expected [1], while Hc increases by only 12%. These preliminary results suggest that surface doping of the powders could be used to increase or decrease Hc without strongly influencing σ.

  16. A constraint optimization based virtual network mapping method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen

    2013-03-01

    The virtual network mapping problem, which maps different virtual networks onto a substrate network, is extremely challenging. This paper proposes a constraint optimization based mapping method for solving it. The method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard. A node mapping algorithm and a link mapping algorithm are proposed for the respective phases. The node mapping algorithm follows a greedy strategy and mainly considers two factors: the available resources supplied by the nodes and the distance between nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization approach, which guarantees an optimal mapping with minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that it performs very well.
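The greedy node-mapping idea (rank virtual nodes by demand, then place each on the substrate node with the best resource/distance trade-off) can be sketched as below. The data model, scoring function, and numbers are hypothetical, not the paper's formulation:

```python
substrate = {            # substrate node -> (available CPU, (x, y) position)
    "A": (8, (0, 0)), "B": (6, (1, 0)), "C": (9, (5, 5)), "D": (4, (1, 1)),
}

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def greedy_node_mapping(virtual_demand):
    """virtual_demand: dict of virtual node -> required CPU."""
    mapping, used = {}, set()
    # Greedy heuristic: place the largest demands first.
    for vnode, need in sorted(virtual_demand.items(), key=lambda kv: -kv[1]):
        best, best_score = None, None
        for snode, (cpu, pos) in substrate.items():
            if snode in used or cpu < need:
                continue                 # already taken or insufficient resources
            # Score: available resources minus mean distance to nodes chosen so far.
            placed = [substrate[m][1] for m in mapping.values()]
            penalty = sum(dist(pos, p) for p in placed) / len(placed) if placed else 0.0
            score = cpu - penalty
            if best_score is None or score > best_score:
                best, best_score = snode, score
        if best is None:
            return None                  # node mapping infeasible
        mapping[vnode] = best
        used.add(best)
    return mapping

print(greedy_node_mapping({"v1": 5, "v2": 3, "v3": 2}))
```

The link mapping phase would then route virtual links between the chosen substrate nodes, which is where the distributed constraint optimization step of the paper comes in.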

  17. [Simultaneous quantitative analysis of five alkaloids in Sophora flavescens by multi-components assay by single marker].

    PubMed

    Chen, Jing; Wang, Shu-Mei; Meng, Jiang; Sun, Fei; Liang, Sheng-Wang

    2013-05-01

    To establish a new method for quality evaluation and validate its feasibility by the simultaneous quantitative assay of five alkaloids in Sophora flavescens. The new quality evaluation method, quantitative analysis of multi-components by single marker (QAMS), was established and validated with S. flavescens. Five main alkaloids, oxymatrine, sophocarpine, matrine, oxysophocarpine and sophoridine, were selected as analytes to evaluate the quality of the rhizome of S. flavescens, and the relative correction factors showed good repeatability. Their contents in 21 batches of samples, collected from different areas, were determined by both the external standard method and QAMS. The method was evaluated by comparing the quantitative results of the external standard method and QAMS. No significant differences were found between the quantitative results of the five alkaloids in the 21 batches of S. flavescens determined by the two methods. It is feasible and suitable to evaluate the quality of the rhizome of S. flavescens by QAMS.
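The arithmetic behind QAMS can be sketched with made-up numbers (the peak areas, concentrations, and the choice of oxymatrine as marker below are illustrative assumptions, not the paper's data): a relative correction factor links each analyte's detector response to the single marker's, so only the marker needs an external standard when assaying samples.

```python
# Response factor k = peak area / concentration, taken from one mixed standard.
marker_area, marker_conc = 1200.0, 0.50    # single marker (e.g. oxymatrine), made-up values
analyte_area, analyte_conc = 800.0, 0.40   # another alkaloid (e.g. matrine) in the same standard

# Relative correction factor: ratio of the marker's response factor to the analyte's.
f = (marker_area / marker_conc) / (analyte_area / analyte_conc)

# In a sample run: calibrate only the marker externally, then quantify the
# analyte from its own peak area via the marker's response factor and f.
sample_marker_area, sample_marker_conc = 900.0, 0.375   # from the external standard
k_marker = sample_marker_area / sample_marker_conc
sample_analyte_area = 500.0
analyte_in_sample = sample_analyte_area * f / k_marker
print(f"estimated analyte concentration: {analyte_in_sample:.3f}")
```

Validation then amounts to checking that this estimate agrees with a full external-standard determination of the same analyte, which is what the 21-batch comparison in the abstract does.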

  18. Linear least-squares method for global luminescent oil film skin friction field analysis

    NASA Astrophysics Data System (ADS)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
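The core LLS step described above can be sketched in a few lines. The matrix and right-hand side here are synthetic placeholders standing in for the discretised thin-oil-film equations, not the paper's actual system:

```python
import numpy as np

rng = np.random.default_rng(2)

# The discretised thin-oil-film equation yields an overdetermined linear
# system A @ tau = b for the skin-friction unknowns, solved in the
# least-squares sense. A, tau_true, and the noise level are synthetic.
n_eq, n_unknown = 200, 10
A = rng.normal(size=(n_eq, n_unknown))
tau_true = rng.normal(size=n_unknown)
b = A @ tau_true + 0.01 * rng.normal(size=n_eq)   # small measurement noise

tau_hat, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print("max coefficient error:", np.abs(tau_hat - tau_true).max())
```

Solving one global system, rather than averaging many per-snapshot solutions as in the SSA method, is what lets the least-squares formulation pool all images into a single ensemble estimate.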

  19. Sampling based State of Health estimation methodology for Li-ion batteries

    NASA Astrophysics Data System (ADS)

    Camci, Fatih; Ozkurt, Celil; Toker, Onur; Atamuradov, Vepa

    2015-03-01

    Storage and management of energy is becoming a more and more important problem every day, especially for electric and hybrid vehicle applications. Li-ion battery is one of the most important technological alternatives for high capacity energy storage and related industrial applications. State of Health (SoH) of Li-ion batteries plays a critical role in their deployment from economic, safety, and availability aspects. Most, if not all, of the studies related to SoH estimation focus on the measurement of a new parameter/physical phenomena related to SoH, or development of new statistical/computational methods using several parameters. This paper presents a new approach for SoH estimation for Li-ion battery systems with multiple battery cells: The main idea is a new circuit topology which enables separation of battery cells into two groups, main and test batteries, whenever a SoH related measurement is to be conducted. All battery cells will be connected to the main battery during the normal mode of operation. When a measurement is needed for SoH estimation, some of the cells will be separated from the main battery, and SoH estimation related measurements will be performed on these units. Compared to classical SoH measurement methods which deal with whole battery system, the proposed method estimates the SoH of the system by separating a small but representative set of cells. While SoH measurements are conducted on these isolated cells, remaining cells in the main battery continue to function in normal mode, albeit in slightly reduced performance levels. Preliminary experimental results are quite promising, and validate the feasibility of the proposed approach. Technical details of the proposed circuit architecture are also summarized in the paper.

  20. RECOVERY ACT - Methods for Decision under Technological Change Uncertainty and Risk Assessment for Integrated Assessment of Climate Change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webster, Mort David

    2015-03-10

    This report presents the final outcomes and products of the project as performed at the Massachusetts Institute of Technology. The research project consists of three main components: methodology development for decision-making under uncertainty, improving the resolution of the electricity sector to improve integrated assessment, and application of these methods to integrated assessment. Results in each area are described in the report.

  1. Study of process of trueing diamond grinding wheels on metal bonds by method of free abrasive after processing of leucosapphire blanks

    NASA Astrophysics Data System (ADS)

    Fedonin, O. N.; Handozhko, A. V.; Fedukov, A. G.

    2018-03-01

    The problem of mechanically processing, in particular grinding, products made from leucosapphire is considered. The main difficulty with this treatment is the need to true the diamond tool. One method of tool trueing, based on the loose-abrasive technique, is considered. The results of a study on restoring the tool's cutting ability, shape and profile after trueing are given.

  2. Inclusion of Multiple Functional Types in an Automaton Model of Bioturbation and Their Effects on Sediments Properties

    DTIC Science & Technology

    2007-09-30

    if the traditional models adequately parameterize and characterize the actual mixing. As an example of the application of this method, we have...(2) Deterministic Modelling Results. As noted above, we are working on a stochastic method of modelling transient and short-lived tracers...heterogeneity. RELATED PROJECTS We have worked in collaboration with Peter Jumars (Univ. Maine), and his PhD student Kelley Dorgan, who are measuring

  3. Funding California Schools: The Revenue Limit System. Technical Appendices

    ERIC Educational Resources Information Center

    Weston, Margaret

    2010-01-01

    This document presents the technical appendices accompanying the report, "Funding California Schools: The Revenue Limit System." Included are: (1) Revenue Limit Calculation and Decomposition; (2) Data and Methods; and (3) Base Funding Alternative Simulation Results. (Contains 5 tables and 26 footnotes.) [For the main report,…

  4. The parts of a research paper? What your readers expect

    USDA-ARS?s Scientific Manuscript database

    Scientific papers are organized into sections that are easy for scientific readers to follow. This second part of a three part series, summarizes the points that should be considered when writing the main sections of a research report. These sections typically include Introduction Methods, Results,...

  5. Multi-Dimensional Full Boltzmann-Neutrino-Radiation Hydrodynamic Simulations and Their Detailed Comparisons with Monte-Carlo Methods in Core Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Nagakura, H.; Richers, S.; Ott, C. D.; Iwakami, W.; Furusawa, S.; Sumiyoshi, K.; Yamada, S.; Matsufuru, H.; Imakura, A.

    2016-10-01

    We have developed a seven-dimensional full Boltzmann neutrino-radiation-hydrodynamics code and carried out ab initio axisymmetric core-collapse supernova (CCSN) simulations. I will present the main results of our simulations and also discuss current ongoing projects.

  6. Residual effects of applied chemical fertilisers on growth and seed yields of sunflower (Helianthus annuus cv. high sun 33) after the harvests of initial main crops of maize (Zea mays L.), soybean (Glycine max L.) and sunflower (Helianthus annuus).

    PubMed

    Srisa-ard, K

    2007-03-15

    The experiments were carried out at two locations: the first on a grower's upland area at Saraburi Province, in the Central Plain region of Thailand, on the Chatturat soil series (Typic Haplustalfs, fine, mixed), and the second at the Suranaree Technology University Experimental Farm, Northeast Thailand, on the Korat soil series (Oxic Paleustults). The experiments aimed to investigate the residual effects of applied chemical fertilisers on growth and seed yields of sunflower (Helianthus annuus) after the harvests of initial main crops of maize, soybean and sunflower. The experiments comprised four cultural methods practiced by growers in both regions. Methods 1 and 2 each had four fertiliser treatments; Method 3 consisted of two fertiliser treatments; and Method 4 was used as a control treatment. The results showed that the soil pH, organic matter and nutrients of the Korat soil series provided the soil conditions most suited to the growth of sunflower plants, whilst the Chatturat soil series at Saraburi Province was an alkaline soil with a mean soil pH of 7.8. The Chatturat soil series, in most cases, gave higher seed yields (1,943.75 kg ha(-1)) than the Korat soil series. Residual effects of chemical fertilisers applied to a main crop of soybean gave better growth and seed yields of the subsequent sunflower plants, and this is considered the first choice. The use of sunflower or maize as the main crop provided a second choice for a subsequent crop of sunflower.

  7. Automatic Extraction of Urban Built-Up Area Based on Object-Oriented Method and Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Li, L.; Zhou, H.; Wen, Q.; Chen, T.; Guan, F.; Ren, B.; Yu, H.; Wang, Z.

    2018-04-01

    The built-up area marks the use of urban construction land in different periods of development, and its accurate extraction is key to studying urban expansion. This paper studies the automatic extraction of urban built-up areas based on an object-oriented method and remote sensing data, and realizes automatic extraction of a city's main built-up area, which greatly reduces manual effort. First, construction land is extracted with an object-oriented method; the main technical steps are: (1) multi-resolution segmentation; (2) feature construction and selection; and (3) information extraction of construction land based on a rule set. The characteristic parameters used in the rule set mainly include the mean of the red band (Mean R), the Normalized Difference Vegetation Index (NDVI), the Ratio of Residential Index (RRI), and the mean of the blue band (Mean B); through the combination of these parameters, construction-land information can be extracted. Then, based on the adaptability, distance and area of the object domain, the urban built-up area can be quickly and accurately delineated from the construction-land information without depending on other data or expert knowledge, achieving automatic extraction of the urban built-up area. Beijing was used as the experimental area for these technical methods, and the results show that automatic extraction of the built-up area was achieved with a boundary accuracy of 2359.65 m, meeting the requirements. The automatic extraction of the urban built-up area is highly practical and can be applied to monitoring changes in a city's main built-up area.
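A rule set over per-pixel spectral indices, as described above, can be sketched with toy data. The two-by-two band values and the thresholds below are illustrative assumptions, not the paper's calibrated rules: construction pixels are flagged where vegetation response (NDVI) is low and the red and blue responses are high.

```python
import numpy as np

# Hypothetical reflectance values for a 2 x 2 patch (rows x cols) in three bands.
red  = np.array([[0.30, 0.05], [0.28, 0.06]])
nir  = np.array([[0.32, 0.50], [0.30, 0.55]])
blue = np.array([[0.25, 0.04], [0.24, 0.05]])

# NDVI separates vegetation (high) from bare/built surfaces (low).
ndvi = (nir - red) / (nir + red)

# Toy rule set combining the spectral criteria; thresholds are illustrative.
construction = (ndvi < 0.2) & (red > 0.2) & (blue > 0.1)
print(construction)
```

In the paper's workflow this kind of per-object rule is applied to segmented image objects rather than raw pixels, and the built-up area is then delineated from the resulting construction-land mask.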

  8. [Nonuniformity in the evolutionary rate in the virilis: II. group of Drosophilas: application of the method of Tajima's test].

    PubMed

    Kulikov, A M; Lazebnyĭ, O E; Chekunova, A I; Mitrofanov, V G

    2010-01-01

    The steadiness of the molecular clock was estimated in 11 Drosophila species of the virilis group from the sequences of five genes by applying Tajima's simple method. The main characteristic of this method is that its results are independent of phylogenetic reconstruction. The results obtained fully confirmed the conclusions drawn from the two-cluster test and the Takezaki branch-length test. In addition, the deviation of the molecular clock was confirmed in the D. virilis evolutionary lineages.

  9. Utilizing the N beam position monitor method for turn-by-turn optics measurements

    NASA Astrophysics Data System (ADS)

    Langner, A.; Benedetti, G.; Carlà, M.; Iriso, U.; Martí, Z.; de Portugal, J. Coello; Tomás, R.

    2016-09-01

    The N beam position monitor method (N-BPM), which was recently developed for the LHC, has significantly improved the precision of optics measurements based on BPM turn-by-turn data. The main improvement is due to the consideration of correlations among statistical and systematic error sources, as well as an increase in the number of BPM combinations used to derive the β-function at one location. We present how this technique can be applied at light sources like ALBA, and compare the results with other methods.

  10. Lingual straight wire method.

    PubMed

    Takemoto, Kyoto; Scuzzo, Giuseppe; Lombardo, Luca; Takemoto, Yui

    2009-12-01

    The mushroom arch-wire is mainly used in lingual orthodontic treatment, but the complicated wire bending it requires affects both the treatment results and the time spent at the chair. The author proposes a new lingual straight wire (LSW) method in order to facilitate arch coordination and simplify the mechanics. The attention paid to the set-up model and to bracket positioning and bonding, plus the use of the new LSW method, will also improve patient comfort. Copyright 2009 Collège Européen d'Orthodontie. Published by Elsevier Masson SAS. All rights reserved.

  11. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients

    NASA Astrophysics Data System (ADS)

    Gorgees, Hazim Mansoor; Mahdi, Fatimah Assim

    2018-05-01

    This article compares the performance of different types of ordinary ridge regression estimators that have been proposed for estimating the regression parameters when near-exact linear relationships among the explanatory variables are present. For this situation we employ data obtained from the Tagi gas filling company during the period 2008-2010. The main result is that the method based on the condition number performs better than the other stated methods, since it has a smaller mean square error (MSE).
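The setting the article studies, near-collinear regressors where ordinary least squares becomes unstable and ridge shrinkage helps, can be sketched with synthetic data (the data-generating process and ridge parameters below are made up; the article's condition-number-based choice of the ridge parameter is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two regressors that are nearly exact linear copies of each other.
n = 60
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)        # near-exact linear dependence
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + 0.5 * rng.normal(size=n)

def ridge(X, y, k):
    """Ridge estimator beta(k) = (X'X + kI)^{-1} X'y; k = 0 gives OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

for k in [0.0, 0.1, 1.0]:
    b = ridge(X, y, k)
    mse = float(np.mean((b - beta_true) ** 2))   # error against the true coefficients
    print(f"k={k:4.1f}  beta={b.round(2)}  coefficient MSE={mse:.3f}")
```

With this degree of collinearity the OLS (k = 0) coefficients typically swing wildly, while a modest ridge parameter shrinks the estimate along the ill-conditioned direction and usually reduces the coefficient MSE at the cost of some bias.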

  12. Comparison of three nondestructive and contactless techniques for investigations of recombination parameters on an example of silicon samples

    NASA Astrophysics Data System (ADS)

    Chrobak, Ł.; Maliński, M.

    2018-06-01

    This paper presents a comparison of three nondestructive and contactless techniques used for the determination of the recombination parameters of silicon samples: the photoacoustic method, the modulated free-carrier absorption method, and the photothermal radiometry method. The experimental set-ups used to measure the recombination parameters with these methods, as well as the theoretical models used to interpret the experimental data, are presented and described. The experimental results and their respective fits obtained with these nondestructive techniques are shown and discussed. The values of the recombination parameters obtained with the three methods are also presented and compared, and the main advantages and disadvantages of each method are discussed.

  13. Problems and methods of calculating the Legendre functions of arbitrary degree and order

    NASA Astrophysics Data System (ADS)

    Novikova, Elena; Dmitrenko, Alexander

    2016-12-01

    The known standard recursion methods of computing the fully normalized associated Legendre functions do not give the necessary precision due to the application of the IEEE 754-2008 standard, which creates problems of underflow and overflow. Analysis of the calculation of the Legendre functions shows that underflow is not dangerous by itself. The main problem, which generates gross errors in the calculations, is the effect of "absolute zero". Once it appears in a forward column recursion, "absolute zero" converts to zero all values that are multiplied by it, regardless of whether a zero result of the multiplication is genuine or not. Three methods of calculating the Legendre functions that remove the effect of "absolute zero" from the calculations are discussed here. These methods are also of interest because they have almost no limit on the maximum degree of the Legendre functions. It is shown that the numerical accuracy of the three methods is the same, but the CPU time for calculating the Legendre functions with the Fukushima method is minimal; therefore, the Fukushima method is the best. Its main advantage is computational speed, which is an important factor in the calculation of as large a number of Legendre functions as the 2 401 336 needed for EGM2008.
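The "absolute zero" effect is easy to demonstrate in isolation (a toy illustration of IEEE 754 underflow, not the Legendre recursion itself): once a value underflows to exactly 0.0, every later value that multiplies it is exactly 0.0 too, and no subsequent scaling can recover the lost information.

```python
# Repeatedly multiplying by small factors, as happens with high-degree
# sectoral seed values, eventually underflows to exactly 0.0.
x = 1.0
for _ in range(300):
    x *= 1e-3           # 1e-900 is far below the double-precision range

print(x)                # exactly 0.0, not merely "very small"
print(x * 1e300)        # still 0.0: scaling back up cannot undo exact underflow
```

This is why the three methods discussed above all carry the magnitude separately (e.g. via extended exponents or deferred scaling) instead of letting the recursion values themselves underflow.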

  14. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    The 3D Poisson equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used, and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take far too much processing time to converge; yet the PJ method reduces the computational time to some extent for large grid sizes.
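The Jacobi/Gauss-Seidel contrast can be sketched on a 2D analogue (a homogeneous-Dirichlet Poisson problem with a unit source; the grid size and tolerance are illustrative, and the paper's 3D EHD problem is not reproduced). Jacobi updates all points from the previous sweep, which is what makes it data-parallel; Gauss-Seidel uses freshly updated neighbours in-place, which typically converges in fewer sweeps but is inherently sequential.

```python
import numpy as np

# -Laplace(u) = f on the unit square, u = 0 on the boundary, n x n interior grid.
n = 20
h = 1.0 / (n + 1)
f = np.ones((n, n))                  # unit source term

def sweep_count(method, tol=1e-6, max_iter=20000):
    """Iterate until the largest point update falls below tol; return sweeps used."""
    u = np.zeros((n + 2, n + 2))     # includes the zero boundary ring
    for it in range(1, max_iter + 1):
        if method == "jacobi":
            new = u.copy()           # all updates read the previous sweep (parallelizable)
            new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                      u[1:-1, :-2] + u[1:-1, 2:] + h * h * f)
            diff = np.abs(new - u).max()
            u = new
        else:                        # Gauss-Seidel: in-place, uses fresh neighbours
            diff = 0.0
            for i in range(1, n + 1):
                for j in range(1, n + 1):
                    v = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                u[i, j - 1] + u[i, j + 1] + h * h * f[i - 1, j - 1])
                    diff = max(diff, abs(v - u[i, j]))
                    u[i, j] = v
        if diff < tol:
            return it
    return max_iter

it_j = sweep_count("jacobi")
it_gs = sweep_count("gs")
print("Jacobi sweeps:      ", it_j)
print("Gauss-Seidel sweeps:", it_gs)
```

The trade-off in the abstract follows directly: Gauss-Seidel wins on sweep count, but each Jacobi sweep can be split across workers, which is what the PJ variant exploits for large grids.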

  15. Elemental Analysis of Beryllium Samples Using a Microzond-EGP-10 Unit

    NASA Astrophysics Data System (ADS)

    Buzoverya, M. E.; Karpov, I. A.; Gorodnov, A. A.; Shishpor, I. V.; Kireycheva, V. I.

    2017-12-01

    Results of the structural and elemental analysis of beryllium samples obtained via different technologies, performed on a Microzond-EGP-10 unit with the PIXE and RBS methods, are presented. The overall chemical composition and the nature of the inclusions were determined. The mapping method made it possible to reveal the structural features of the beryllium samples: to distinguish grains of the main substance differing in size and chemical composition, to visualize the interfaces between regions of different composition, and to describe the distribution of impurities in the samples.

  16. Gross domestic product estimation based on electricity utilization by artificial neural network

    NASA Astrophysics Data System (ADS)

    Stevanović, Mirjana; Vujičić, Slađana; Gajić, Aleksandar M.

    2018-01-01

    The main goal of the paper was to estimate gross domestic product (GDP) from electricity utilization using an artificial neural network (ANN). Electricity utilization was analyzed for different sources, such as renewable, coal and nuclear sources. The network was trained with two training algorithms, namely the extreme learning method and the back-propagation algorithm, in order to produce the best prediction of GDP. According to the results, it can be concluded that the ANN model with the extreme learning method can produce an acceptable prediction of GDP based on electricity utilization.
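The extreme learning approach mentioned above fits only the output layer: hidden weights are drawn at random and the output weights come from a single least-squares solve, with no back-propagation loop. A sketch on made-up data (the input/response relationship and network size are illustrative, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up regression data standing in for electricity utilization -> GDP.
x = rng.uniform(0, 1, (100, 1))
y = 3.0 * x + 0.5 * np.sin(6 * x)

# Extreme learning machine: random, untrained hidden layer ...
W = rng.normal(size=(1, 30))
b = rng.normal(size=30)
H = np.tanh(x @ W + b)                     # random hidden features

# ... and output weights from one least-squares solve (no iterative training).
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ beta
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"training RMSE: {rmse:.4f}")
```

The single linear solve is why extreme learning is typically much faster to train than back-propagation, which iterates over all weights.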

  17. Eventogram: A Visual Representation of Main Events in Biomedical Signals.

    PubMed

    Elgendi, Mohamed

    2016-09-22

    Biomedical signals carry valuable physiological information, yet many researchers have difficulty interpreting and analyzing long-term, one-dimensional, quasi-periodic biomedical signals. Traditionally, biomedical signals are analyzed and visualized using periodogram, spectrogram, and wavelet methods. However, these methods do not offer an informative visualization of the main events within the processed signal. This paper provides an event-related framework to overcome the drawbacks of the traditional visualization methods and to describe the main events within a biomedical signal in terms of duration and morphology. Electrocardiogram and photoplethysmogram signals are used in the analysis to demonstrate the differences between the traditional visualization methods, and their performance is compared against the proposed method, referred to as the "eventogram" in this paper. The proposed method is based on two event-related moving averages that visualize the main time-domain events in the processed biomedical signals. The traditional visualization methods were unable to find dominant events in the processed signals, while the eventogram was able to visualize dominant events in terms of duration and morphology. Moreover, eventogram-based detection algorithms succeeded in detecting the main events in different biomedical signals with a sensitivity and positive predictivity above 95%. The output of the eventogram captured unique patterns and signatures of physiological events, which could be used to visualize and identify abnormal waveforms in any quasi-periodic signal.
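The two-moving-averages idea can be sketched on a synthetic quasi-periodic signal (the signal, window lengths, and thresholding below are illustrative assumptions, not the paper's tuned parameters): a short "event-scale" average rising above a long "cycle-scale" average marks candidate event blocks.

```python
import numpy as np

# Synthetic quasi-periodic signal: sharp pulses at roughly 2 per second.
fs = 100
t = np.arange(0, 5, 1 / fs)
sig = np.sin(2 * np.pi * 1.0 * t) ** 8      # even power -> narrow positive pulses

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

ma_event = moving_average(sig, 11)           # short window: event scale
ma_cycle = moving_average(sig, 61)           # long window: cycle scale
blocks = ma_event > ma_cycle                 # boolean mask of candidate events

# Count contiguous event blocks (rising edges, plus one if the mask starts True).
n_events = int(np.sum(np.diff(blocks.astype(int)) == 1) + blocks[0])
print("detected event blocks:", n_events)
```

The duration and shape of each `True` block is what the eventogram visualizes; for ECG or PPG the two window lengths would be matched to the physiological event and cycle durations.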

  18. Community pharmacists' views of the use of oral rehydration salt in Nigeria.

    PubMed

    Oyetunde, Olubukola; Williams, Veronika

    2018-06-01

    Background Oral rehydration salt (ORS) is an affordable and effective intervention for the management of acute watery diarrhoea (AWD), especially in children under 5 years. A knowledge/practice gap exists among community pharmacists (CPs) in Lagos, Nigeria, and in many low- to middle-income countries. This gap results in underutilization of ORS for diarrhoea management. Objective The objective was to explore CPs' views of the barriers and facilitators to the use of ORS in practice. Setting Community pharmacy practices, Lagos, Nigeria. Methods Qualitative methods were used to explore pharmacists' views. Recruitment of participants was mainly at zonal meetings. A total of ten CPs participated, selected by maximum-variation and snowball sampling. Semi-structured interviews covered knowledge, experiences and contextual issues. Interviews were audio-recorded, transcribed and analysed using the framework approach to thematic analysis. Main outcome measure Pharmacists' views of barriers and facilitators to the use of ORS. Results Barriers to the use of ORS included caregivers' expectation of an antimicrobial, often explicitly and specifically metronidazole. Also, CPs seemed to doubt the applicability of ORS alone and therefore responded to caregivers' complaints about ORS by dispensing metronidazole. These barriers appeared to have normalised metronidazole for AWD treatment in this setting. Current facilitators included caregivers' improved awareness of ORS and access to primary health centres, which often resulted in increased demand for ORS in pharmacies. Conclusion CPs' views showed that caregivers' expectations of an antimicrobial may be the main barrier to the use of ORS in their practices.

  19. On-orbit calibration for star sensors without priori information.

    PubMed

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang

    2017-07-24

    The star sensor is a prerequisite navigation device for a spacecraft, and on-orbit calibration is an essential guarantee of its operational performance. However, traditional calibration methods rely on ground information and are invalid without a priori information, and uncertain on-orbit parameters will eventually degrade the performance of the guidance, navigation and control system. In this paper, a novel calibration method without a priori information for on-orbit star sensors is proposed. First, a simplified back-propagation neural network is designed for focal length and main point estimation along with system property evaluation, called coarse calibration. Then the unscented Kalman filter is adopted for the precise calibration of all parameters, including focal length, main point and distortion. The proposed method benefits from self-initialization, and no attitude or preinstalled sensor parameter is required. Precise star sensor parameter estimation can be achieved without a priori information, which is a significant improvement for on-orbit devices. Simulation and experimental results demonstrate that the calibration is easy to operate with high accuracy and robustness. The proposed method can satisfy the stringent requirements of most star sensors.

  20. Improving the Method of Roof Fall Susceptibility Assessment based on Fuzzy Approach

    NASA Astrophysics Data System (ADS)

    Ghasemi, Ebrahim; Ataei, Mohammad; Shahriar, Kourosh

    2017-03-01

    Retreat mining is always accompanied by a great number of accidents, most of them due to roof fall. Therefore, the development of methodologies to evaluate roof fall susceptibility (RFS) seems essential. Ghasemi et al. (2012) proposed a systematic methodology to assess roof fall risk during retreat mining based on the classic risk assessment approach. The main shortcomings of this method are its neglect of subjective uncertainties due to the linguistic input values of some factors, low resolution, fixed weighting, sharp class boundaries, etc. To remove these deficiencies and improve the method, this paper presents a novel methodology to assess RFS using a fuzzy approach. The fuzzy approach provides an effective tool to handle subjective uncertainties. Furthermore, the fuzzy analytical hierarchy process (AHP) is used to structure and prioritize the various risk factors and sub-factors during the development of this method. The methodology is applied to identify the susceptibility of roof fall occurrence in the main panel of Tabas Central Mine (TCM), Iran. The results indicate that the methodology is effective and efficient in assessing RFS.

  1. COOMET pilot comparison 473/RU-a/09: Comparison of hydrophone calibrations in the frequency range 250 Hz to 200 kHz

    NASA Astrophysics Data System (ADS)

    Yi, Chen; Isaev, A. E.; Yuebing, Wang; Enyakov, A. M.; Teng, Fei; Matveev, A. N.

    2011-01-01

    A description is given of the COOMET project 473/RU-a/09: a pilot comparison of hydrophone calibrations at frequencies from 250 Hz to 200 kHz between Hangzhou Applied Acoustics Research Institute (HAARI, China)—pilot laboratory—and Russian National Research Institute for Physicotechnical and Radio Engineering Measurements (VNIIFTRI, Designated Institute of Russia of the CIPM MRA). Two standard hydrophones, B&K 8104 and TC 4033, were calibrated and compared to assess the current state of hydrophone calibration of HAARI (China) and Russia. Three different calibration methods were applied: a vibrating column method, a free-field reciprocity method and a comparison method. The standard facilities of each laboratory were used, and three different sound fields were applied: pressure field, free-field and reverberant field. The maximum deviation of the sensitivities of two hydrophones between the participants' results was 0.36 dB. Main text. To reach the main text of this paper, click on Final Report. The final report has been peer-reviewed and approved for publication by the CCAUV-KCWG.

  2. An EGR performance evaluation and decision-making approach based on grey theory and grey entropy analysis

    PubMed Central

    2018-01-01

    Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate EGR performance and determine the optimal EGR rate, for which clear theoretical guidance is currently lacking. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization. PMID:29377956

  3. Radiation noise of the bearing applied to the ceramic motorized spindle based on the sub-source decomposition method

    NASA Astrophysics Data System (ADS)

    Bai, X. T.; Wu, Y. H.; Zhang, K.; Chen, C. Z.; Yan, H. P.

    2017-12-01

    This paper focuses on the calculation and analysis of the radiation noise of the angular contact ball bearing applied to the ceramic motorized spindle. A dynamic model containing the main working conditions and structural parameters is established based on the dynamic theory of rolling bearings. The sub-source decomposition method is introduced for the calculation of the radiation noise of the bearing, and a comparative experiment is used to check the precision of the method. Then the contributions of the different components are compared in the frequency domain based on the sub-source decomposition method. The radiation-noise spectra of the different components under various rotation speeds are used as the basis for assessing the contribution of different eigenfrequencies to the radiation noise of the components, and the proportions of friction noise and impact noise are evaluated as well. The results of the research provide a theoretical basis for the calculation of bearing noise and offer a reference for the impact of different components on the radiation noise of the bearing under different rotation speeds.

  4. An EGR performance evaluation and decision-making approach based on grey theory and grey entropy analysis.

    PubMed

    Zu, Xianghuan; Yang, Chuanlei; Wang, Hechun; Wang, Yinyan

    2018-01-01

    Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate EGR performance and determine the optimal EGR rate, for which clear theoretical guidance is currently lacking. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization.
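    A small, hedged illustration of the grey relational analysis component referenced above: grey relational coefficients score each candidate operating point against an ideal reference series, and their mean gives a grey relational grade. The candidate values and the distinguishing coefficient rho = 0.5 are illustrative, not taken from the engine study:

```python
import numpy as np

def grey_relational_coeffs(series, reference, rho=0.5):
    # Grey relational coefficients between candidate series (rows) and the
    # reference series; rho is the distinguishing coefficient (commonly 0.5).
    delta = np.abs(series - reference)       # absolute deviations
    d_min, d_max = delta.min(), delta.max()  # global min/max deviation
    return (d_min + rho * d_max) / (delta + rho * d_max)

# Toy example: three normalized operating points scored against an ideal
# reference; rows are candidates, columns are criteria (values illustrative).
candidates = np.array([[0.9, 0.8, 0.7],
                       [0.6, 0.9, 0.8],
                       [0.5, 0.4, 0.6]])
reference = np.array([1.0, 1.0, 1.0])  # ideal point after normalization
xi = grey_relational_coeffs(candidates, reference)
grades = xi.mean(axis=1)      # grey relational grade per candidate
print(int(np.argmax(grades))) # index of candidate closest to the ideal
```

    In the paper's full method, the equal-weight mean above is replaced by target weights derived from grey entropy analysis.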

  5. Separation and quantification of monoclonal-antibody aggregates by hollow-fiber-flow field-flow fractionation.

    PubMed

    Fukuda, Jun; Iwura, Takafumi; Yanagihara, Shigehiro; Kano, Kenji

    2014-10-01

    Hollow-fiber-flow field-flow fractionation (HF5) separates protein molecules on the basis of the difference in the diffusion coefficient, and can evaluate the aggregation ratio of proteins. However, HF5 is still a minor technique because information on the separation conditions is limited. We examined in detail the effect of different settings, including the main-flow rate, the cross-flow rate, the focus point, the injection amount, and the ionic strength of the mobile phase, on fractographic characteristics. On the basis of the results, we proposed optimized conditions of the HF5 method for quantification of monoclonal antibody in sample solutions. The HF5 method was qualified regarding the precision, accuracy, linearity of the main peak, and quantitation limit. In addition, the HF5 method was applied to non-heated Mab A and heat-induced-antibody-aggregate-containing samples to evaluate the aggregation ratio and the distribution extent. The separation performance was comparable with or better than that of conventional methods including analytical ultracentrifugation-sedimentation velocity and asymmetric-flow field-flow fractionation.

  6. College Students or Criminals? A Postcolonial Geographic Analysis of the Social Field of Whiteness at an Urban Community College Branch Campus and Suburban Main Campus

    ERIC Educational Resources Information Center

    Dache-Gerbino, Amalia; White, Julie A.

    2016-01-01

    Objective: This study illustrates how external factors of urban and suburban racializations contribute to criminalization and surveillance of an urban community college campus and bus shelters surrounding it. Method: A postcolonial geographic research design is used to analyze geographic and qualitative data. Results: Results show that an urban…

  7. The effect of different propolis harvest methods on its lead contents determined by ET AAS and UV-visS.

    PubMed

    Sales, A; Alvarez, A; Areal, M Rodriguez; Maldonado, L; Marchisio, P; Rodríguez, M; Bedascarrasbure, E

    2006-10-11

    Argentinean propolis is exported to different countries, especially Japan. The market demands propolis quality control according to international standards. The analytical determination of some metals, such as lead, in food is very important because of their high toxicity even at low concentrations and their harmful effects on health. Flavonoids, the main bioactive compounds of propolis, tend to chelate metals such as lead, which has become one of the main pollutants of propolis. The lead found in propolis may come from the atmosphere, or it may be incorporated during harvest, extraction and processing. The aim of this work is to evaluate lead levels in Argentinean propolis determined by electrothermal atomic absorption spectrometry (ET AAS) and UV-vis spectrophotometry (UV-visS), as well as the effect of harvest methods on those contents. A randomized test with three different collection treatments was made to evaluate the effect of harvest methods: separating wedges (traditional), netting plastic meshes and stamping-out plastic meshes. Analysis of variance for multiple comparisons (ANOVA) showed significant differences between the scraped and mesh methods (stamped-out and mosquito-netting meshes). The results of the present test allow us to conclude that mesh methods are more advisable than scraped ones for obtaining innocuous and safe propolis with lower lead contents. A statistical comparison of lead determination by both ET AAS and UV-visS demonstrated no significant difference between the results of the two analytical techniques.

  8. Unsupervised change detection in a particular vegetation land cover type using spectral angle mapper

    NASA Astrophysics Data System (ADS)

    Renza, Diego; Martinez, Estibaliz; Molina, Iñigo; Ballesteros L., Dora M.

    2017-04-01

    This paper presents a new unsupervised change detection methodology for multispectral images applied to specific land covers. The proposed method compares each image against a reference spectrum obtained from the spectral signature of the land-cover type to be detected. The method has been tested using multispectral images (SPOT5) of the community of Madrid (Spain) and multispectral images (Quickbird) of an area of Indonesia that was impacted by the December 26, 2004 tsunami; here, the tests focused on the detection of changes in vegetation. The image comparison is obtained by applying the Spectral Angle Mapper between the reference spectrum and each multitemporal image. A threshold is then applied to produce a single change image corresponding to the vegetation zones. The results for each multitemporal image are combined through an exclusive-or (XOR) operation that selects vegetation zones that have changed over time. Finally, the derived results were compared against a supervised method based on classification with a Support Vector Machine; NDVI differencing and the basic Spectral Angle Mapper technique were selected as unsupervised methods for comparison. The main novelty of the method is the detection of changes in a specific land-cover type (vegetation); therefore, the best comparison scenario is against methods that also aim to detect changes in a specific land-cover type, which is the main reason for selecting the NDVI-based method and the post-classification method (SVM implemented in a standard software tool). To evaluate the improvement from using a reference spectrum vector, the results are also compared with the basic SAM method. For the SPOT5 image, the overall accuracy was 99.36% and the κ index was 90.11%; for the Quickbird image, the overall accuracy was 97.5% and the κ index was 82.16%. Finally, the precision of the method is comparable to that of a supervised method, supported by low rates of false positives and false negatives along with high overall accuracy and a high κ index, while the execution times were comparable to those of unsupervised methods of low computational load.
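    The core of the method (spectral angle against a reference spectrum, thresholding, then XOR across dates) can be sketched as below. The reference spectrum, the toy 2x2 four-band images and the angle threshold are illustrative placeholders, not the paper's calibrated values:

```python
import numpy as np

def spectral_angle(pixels, ref):
    # Angle (radians) between each pixel spectrum and the reference spectrum;
    # small angles mean the pixel resembles the reference land cover.
    dot = pixels @ ref
    norms = np.linalg.norm(pixels, axis=-1) * np.linalg.norm(ref)
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))

# Hypothetical reference spectrum for the cover of interest (e.g. vegetation)
# and two toy 2x2 multispectral images with 4 bands each.
ref = np.array([0.1, 0.2, 0.6, 0.9])
img_t1 = np.array([[[0.10, 0.20, 0.60, 0.90], [0.9, 0.8, 0.2, 0.1]],
                   [[0.12, 0.22, 0.58, 0.88], [0.9, 0.7, 0.3, 0.1]]])
img_t2 = np.array([[[0.90, 0.80, 0.20, 0.10], [0.9, 0.8, 0.2, 0.1]],
                   [[0.11, 0.21, 0.59, 0.90], [0.9, 0.7, 0.3, 0.1]]])

theta = 0.2  # angle threshold (radians) separating the cover from the rest
veg_t1 = spectral_angle(img_t1, ref) < theta  # cover mask at date 1
veg_t2 = spectral_angle(img_t2, ref) < theta  # cover mask at date 2
change = veg_t1 ^ veg_t2  # XOR keeps pixels whose cover label flipped
print(change.astype(int))
```

    Only the top-left pixel flips from the reference cover to a different spectrum between the two dates, so it is the only one flagged as changed.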

  9. Research on the electromagnetic radiation characteristics of the gas main switch of a capacitive intense electron-beam accelerator

    NASA Astrophysics Data System (ADS)

    Qiu, Yongfeng; Liu, Jinliang; Yang, Jianhua; Cheng, Xinbing; Li, Guolin

    2017-11-01

    Strong electromagnetic fields are radiated during the operation of an intense electron-beam accelerator (IEBA), which may cause nearby electronic devices to malfunction. In this paper, the electromagnetic radiation characteristics of the gas main switch of a capacitive IEBA are investigated through theoretical analysis and experiment. The gas main switch is found to be the dominant radiation source. Without electromagnetic shielding of the gas main switch, when the pulse forming line of the IEBA is charged to 700 kV, a radiation field with an amplitude of 3280 V/m, a dominant frequency of 84 MHz and a high-frequency component at 100 MHz is obtained at a distance of 10 meters from the gas main switch. The experimental results for the radiation field agree with the theoretical calculations. An analysis of the achievements of several research groups reveals a relationship between the rise time (T) of the transient current of the gas main switch and the dominant frequency (F) of the radiation field, namely F*T = 1. A contrast experiment was carried out with a metal shield cover over the gas main switch. Experimental results show that for the shielded setup the radiation field is reduced to 115 V/m and the dominant frequency increases to 86.5 MHz at a distance of 10 meters from the gas main switch. These conclusions are beneficial for further research on the electromagnetic radiation and protection of the IEBA.
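    The relation F*T = 1 can be checked directly against the figures quoted above; the snippet below simply inverts the unshielded dominant frequency to recover the implied rise time of the switch current:

```python
# Empirical relation from the abstract: F * T = 1, linking the rise time T of
# the switch's transient current to the dominant radiated frequency F.
f_dominant = 84e6          # Hz, dominant frequency of the unshielded setup
t_rise = 1.0 / f_dominant  # s, rise time implied by F*T = 1
print(t_rise * 1e9)        # rise time in nanoseconds
```

    A dominant frequency of 84 MHz thus corresponds to a current rise time of roughly 12 ns under this relation.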

  10. Comparison between amperometric and true potentiometric end-point detection in the determination of water by the Karl Fischer method.

    PubMed

    Cedergren, A

    1974-06-01

    A rapid and sensitive method using true potentiometric end-point detection has been developed and compared with the conventional amperometric method for the Karl Fischer determination of water. The effect of the sulphur dioxide concentration on the shape of the titration curve is shown. Kinetic data made it possible to calculate the course of titrations and compare them with those found experimentally. The results prove that the main reaction is the slow step in both the amperometric and the potentiometric method. Results obtained in the standardization of the Karl Fischer reagent showed that the potentiometric method, including titration to a preselected potential, gave a standard deviation of 0.001(1) mg of water per ml; the amperometric method using extrapolation, 0.002(4) mg of water per ml; and the amperometric titration to a preselected diffusion current, 0.004(7) mg of water per ml. Theories and results dealing with dilution effects are presented. The time of analysis was 1-1.5 min for the potentiometric method and 4-5 min for the amperometric method using extrapolation.

  11. Non-Linear Steady State Vibrations of Beams Excited by Vortex Shedding

    NASA Astrophysics Data System (ADS)

    LEWANDOWSKI, R.

    2002-05-01

    In this paper the non-linear vibrations of beams excited by vortex shedding are considered. In particular, the steady-state responses of beams near the synchronization region are taken into account. The main aerodynamic properties of wind are described using the semi-empirical model proposed by Hartlen and Currie. The finite element method and the strip method are used to formulate the equation of motion of the system treated. The harmonic balance method is adopted to derive the amplitude equations. These equations are solved with the help of the continuation method, which is very convenient for performing parametric studies of the problem and for determining the response curve in the synchronization region. Moreover, the equations of motion are also integrated using the Newmark method. The results of calculations for several example problems are shown to confirm the efficiency and accuracy of the presented method. The results obtained by the harmonic balance method and by the Newmark method are in good agreement with each other.

  12. Final report on APMP.RF-S21.F

    NASA Astrophysics Data System (ADS)

    Ishii, Masanori; Kim, Jeong Hwan; Ji, Yu; Cho, Chi Hyun; Zhang, Tim

    2018-01-01

    The supplementary comparison report APMP.RF-S21.F describes the comparison of loop antennas, which was conducted between April 2013 and January 2014. The two comparison artefacts were well-characterised active loop antennas of diameter 30 cm and 60 cm respectively, which typically operate in a frequency range from 9 kHz to 30 MHz. These antennas represent the main groups of antennas which are used around the world for EMC measurements in the frequency range below 30 MHz. There are several well-known methods for calibrating the antenna factor of these devices. The calibration systems used in this comparison for the loop antennas employed the standard magnetic field method or the three-antenna method. Despite the limitations of the algorithm, which we used to derive the reference value for each case (particularly for small samples), the actual calculated reference values seem to be reasonable. As a result, the agreement between each participant was very good in all cases. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  13. Simultaneous determination of main reaction components in the reaction mixture during biodiesel production.

    PubMed

    Sánek, Lubomír; Pecha, Jiří; Kolomazník, Karel

    2013-03-01

    The proposed analytical method allows simultaneous determination by GC, using a programmed-temperature vaporization injector and a flame ionization detector, of the main reaction components (i.e. glycerol, methyl esters, and mono-, di-, and triacylglycerols) in the reaction mixture during biodiesel production. The suggested method is convenient for the rapid and simple evaluation of kinetic data gained during the transesterification reaction; it also partially serves as an indicator of the quality of biodiesel and, mainly, of the efficiency of the whole production process (i.e. the conversion of triacylglycerols to biodiesel and its time progress). Optimization of the chromatographic conditions (e.g. the oven temperature program, injector settings, amount of derivatization reagent, and derivatization reaction time) was performed. The method has been validated with crude samples of biodiesel made from waste cooking oils in terms of linearity, precision, accuracy, sensitivity, and limits of detection and quantification. The results confirmed a satisfactory degree of accuracy and repeatability (mean RSDs were usually below 2%) necessary for the reliable quantitative determination of all components over a considerable concentration range (e.g. 10-1100 μg/mL in the case of methyl esters). Compound recoveries ranging from 96 to 104% were obtained. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. State of the Art and Development Trends of the Digital Radiography Systems for Cargo Inspection

    NASA Astrophysics Data System (ADS)

    Udod, V.; Van, J.; Osipov, S.; Chakhlov, S.; Temnik, A.

    2016-01-01

    Increasing requirements for the technical parameters of inspection digital radiography systems are driven by the rising incidence of terrorism and of trafficking in drugs and explosives via a variety of transport modes. These requirements have prompted the search for new technical solutions that can ensure the safety of passengers and cargo in real time. The main efforts in this testing method are aimed at creating new digital radiography systems and modernizing those now in operation, both as a whole and in their main components and elements. These components and elements include X-ray sources, systems for recording and transforming radiometric information, and the algorithms and software that implement the processing, visualization and interpretation of inspection results. Recent developments in X-ray units and betatrons used for the inspection of small and large objects made from different materials deserve special attention. The most effective X-ray detectors are linear arrays and radiometric detector matrices based on various scintillators. Among the algorithms for identifying the material of test objects, dual-energy methods are the most promising. The article describes various models of digital radiography systems applied in Russia and abroad to the inspection of baggage, containers, vehicles and large trucks.

  15. A Time-Space Domain Information Fusion Method for Specific Emitter Identification Based on Dempster-Shafer Evidence Theory.

    PubMed

    Jiang, Wen; Cao, Ying; Yang, Lin; He, Zichang

    2017-08-28

    Specific emitter identification plays an important role in contemporary military affairs. However, most existing specific emitter identification methods have not taken into account the processing of uncertain information. Therefore, this paper proposes a time-space domain information fusion method based on Dempster-Shafer evidence theory, which can deal with uncertain information in the process of specific emitter identification. In this approach, each radar generates a group of evidence based on the information it obtains, and the main task is to fuse the multiple groups of evidence to get a reasonable result. Within the framework of a recursive centralized fusion model, the proposed method incorporates a correlation coefficient, which measures the relevance between bodies of evidence, and a quantum mechanical approach based on the parameters of the radar itself. The simulation results of an illustrative example demonstrate that the proposed method can effectively deal with uncertain information and produce a reasonable recognition result.
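    Dempster-Shafer fusion of this kind rests on Dempster's rule of combination. A minimal sketch of the rule for two mass functions is shown below; the emitter labels and mass values are illustrative, and the paper's correlation-coefficient and quantum-mechanical weightings are not reproduced:

```python
from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule of combination for two mass functions whose focal
    # elements are frozensets; conflicting mass (empty intersections) is
    # discarded and the rest renormalized.
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Toy example: two radars assign belief masses over emitter types {E1, E2}.
E1, E2 = frozenset({"E1"}), frozenset({"E2"})
both = E1 | E2  # ignorance: mass on the whole frame
radar_a = {E1: 0.6, E2: 0.1, both: 0.3}
radar_b = {E1: 0.7, E2: 0.2, both: 0.1}
fused = dempster_combine(radar_a, radar_b)
print({tuple(sorted(s)): round(v, 3) for s, v in fused.items()})
```

    After fusion the mass concentrates on E1, which is the intuitive outcome when both radars lean toward the same emitter type.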

  16. Surveying immigrants without sampling frames - evaluating the success of alternative field methods.

    PubMed

    Reichel, David; Morales, Laura

    2017-01-01

    This paper evaluates the sampling methods of an international survey, the Immigrant Citizens Survey, which aimed at surveying immigrants from outside the European Union (EU) in 15 cities in seven EU countries. In five countries, no sampling frame was available for the target population. Consequently, alternative ways to obtain a representative sample had to be found. In three countries 'location sampling' was employed, while in two countries traditional methods were used with adaptations to reach the target population. The paper assesses the main methodological challenges of carrying out a survey among a group of immigrants for whom no sampling frame exists. The samples obtained in these five countries are compared with official statistics in order to assess the accuracy of the samples obtained through the different sampling methods. The comparison shows that alternative sampling methods can provide meaningful results for core demographic characteristics, although some estimates differ to some extent from the census results.

  17. Does classroom-based Crew Resource Management training improve patient safety culture? A systematic review

    PubMed Central

    de Bruijne, Martine C; Zwijnenberg, Nicolien C; Jansma, Elise P; van Dyck, Cathy; Wagner, Cordula

    2014-01-01

    Aim: To evaluate the evidence of the effectiveness of classroom-based Crew Resource Management training on safety culture by a systematic review of literature. Methods: Studies were identified in PubMed, Cochrane Library, PsycINFO, and Educational Resources Information Center up to 19 December 2012. The Methods Guide for Comparative Effectiveness Reviews was used to assess the risk of bias in the individual studies. Results: In total, 22 manuscripts were included for review. Training settings, study designs, and evaluation methods varied widely. Most studies reporting only a selection of culture dimensions found mainly positive results, whereas studies reporting all safety culture dimensions of the particular survey found mixed results. On average, studies were at moderate risk of bias. Conclusion: Evidence of the effectiveness of Crew Resource Management training in health care on safety culture is scarce and the validity of most studies is limited. The results underline the necessity of more valid study designs, preferably using triangulation methods. PMID:26770720

  18. Ethnocentrism and Black Students with Disabilities: Bridging the Cultural Gap, Volume I.

    ERIC Educational Resources Information Center

    Handy, Adam J.

    This book investigates the educational methods, achievements, and teacher expectations among black and white students with disabilities. It finds that poverty, racism, cultural differences between blacks and whites, and inferior socioeconomic conditions are the main causal factors that result in black children being "labeled" as exceptional and…

  19. Happiness in Midlife Parental Roles: A Contextual Mixed Methods Analysis

    ERIC Educational Resources Information Center

    Mitchell, Barbara A.

    2010-01-01

    This article focuses on midlife parental role satisfaction using data from a culturally diverse sample of 490 Metro Vancouver, British Columbia, Canada, parents. Results show that most parents are happy in their roles. Income satisfaction, intergenerational relationship quality, parents' main activity, health, age, ethnic background, and…

  20. Circuit Riding: A Method for Providing Reference Services.

    ERIC Educational Resources Information Center

    Plunket, Linda; And Others

    1983-01-01

    Discussion of the design and implementation of the Circuit Rider Librarian Program, a shared services project for delivering reference services to eight hospitals in Maine, includes a cost analysis of services and description of user evaluation survey. Five references, composite results of the survey, and postgrant options proposal are appended.…

  1. Biological pretreatment of corn stover with white-rot fungus for enzymatic hydrolysis and bioethanol production

    USDA-ARS?s Scientific Manuscript database

    Pretreatment, as the first step towards conversion of lignocellulosic feedstocks to biofuels and/or chemicals remains one of the main barriers to commercial success. Typically, harsh methods are used to pretreat lignocellulosic biomass prior to its breakdown to sugars by enzymes, which also result ...

  2. Novel deboning method of chilled broiler carcasses (prior to evisceration) and its effect on meat quality

    USDA-ARS?s Scientific Manuscript database

    During traditional poultry processing, the two main sources of contamination of the broiler carcasses are (1) microorganisms on the exterior of the carcasses, that results in skin surface contamination and (2) microorganisms from the gastrointestinal contents of the carcass and subsequent cross cont...

  3. Teachers' and Students' Beliefs regarding Aspects of Language Learning

    ERIC Educational Resources Information Center

    Davis, Adrian

    2003-01-01

    The similarities and dissimilarities between teachers' and students' conceptions of language learning were addressed through a questionnaire survey concerning the nature and methods of language learning. The results indicate points of congruence between teachers' and students' beliefs about language learning in respect of eight main areas.…

  4. A survey of unclassified axial-flow-compressor literature

    NASA Technical Reports Server (NTRS)

    Herzig, Howard Z; Hansen, Arthur G

    1955-01-01

    A survey of unclassified axial-flow-compressor literature is presented in the form of brief reviews of the methods, results, and conclusions of selected reports. The reports are organized into several main categories with subdivisions, and frequent references are made within the individual reviews to pertinent material elsewhere in the survey.

  5. [Enamel damage depending on the method of bracket removal].

    PubMed

    Fischer-Brandies, H; Kremers, L; Reicheneder, C; Kluge, G; Hüsler, K

    1993-04-01

    Two different methods of removing brackets, on the one side by torsion and on the other by bending, were compared for the purpose of analyzing the respective enamel lesions. Each test group consisted of 19 extracted human molars with metal brackets attached to the molars by means of the "concise etching technique". Bracket removal was standardized through the use of a Wolpert "Universalprüfmaschine TZZ 707" with modified torsion and bending mechanism. A scanning electron microscope was used to analyze the enamel surface. When using the torsion method, the mean extension of the enamel lesions was 48.3% of the adhesive free enamel surface. These lesions often reached into the deeper enamel layers and were mainly to be found on the broad side of the bonded area. On the other hand, when using the bending method, the enamel lesions were less frequent. They were mainly superficial and were confined almost exclusively to the pressure zones. The stress required to remove the brackets and the stress distribution were calculated on mechanical models and these results corresponded well with the enamel lesions observed on the molars. It can thus be concluded that the method of removing brackets is clinically relevant in relation to enamel lesions.

  6. Evaluation of Anomaly Detection Method Based on Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Fontugne, Romain; Himura, Yosuke; Fukuda, Kensuke

The number of threats on the Internet is rapidly increasing, and anomaly detection has become increasingly important. High-speed backbone traffic is particularly affected, but its analysis is a complicated task due to the amount of data, the lack of payload data, asymmetric routing, and the use of sampling techniques. Most anomaly detection schemes focus on the statistical properties of network traffic and highlight anomalous traffic through its singularities. In this paper, we concentrate on unusual traffic distributions, which are easily identifiable in temporal-spatial space (e.g., time/address or port). We present an anomaly detection method that uses a pattern recognition technique to identify anomalies in pictures representing traffic. The main advantage of this method is its ability to detect attacks involving mice flows. We evaluate the parameter set and the effectiveness of this approach by analyzing six years of Internet traffic collected from a trans-Pacific link. We show several examples of detected anomalies and compare our results with those of two other methods. The comparison indicates that the anomalies detected only by the pattern-recognition-based method are mainly malicious traffic with a few packets.
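
The "pictures" in which this method looks for anomalies are essentially 2-D histograms of traffic over temporal-spatial bins. A minimal sketch of that representation (the bin sizes and the (time, destination-port) axis choice are illustrative assumptions, not taken from the paper):

```python
from collections import Counter

def traffic_picture(packets, time_bin=1.0, port_bin=1024):
    """Render packets as a sparse 2-D histogram over (time, dst-port)
    bins -- the kind of temporal-spatial 'picture' in which unusual
    traffic concentrations stand out for pattern recognition."""
    pic = Counter()
    for ts, dport in packets:
        pic[(int(ts / time_bin), dport // port_bin)] += 1
    return pic

# Three packets in the first second on low ports, one on a high port later.
pic = traffic_picture([(0.1, 80), (0.4, 80), (0.7, 443), (1.2, 53000)])
```

A real detector would then scan such pictures for dense or oddly shaped regions; the sparse `Counter` keeps memory proportional to occupied bins, which matters for backbone-scale traces.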

  7. A risk assessment methodology using intuitionistic fuzzy set in FMEA

    NASA Astrophysics Data System (ADS)

    Chang, Kuei-Hu; Cheng, Ching-Hsue

    2010-12-01

Most current risk assessment methods use the risk priority number (RPN) value to evaluate the risk of failure. However, conventional RPN methodology has been criticised for five main shortcomings: (1) the assumption that the RPN elements are equally weighted leads to oversimplification; (2) the RPN scale itself has some non-intuitive statistical properties; (3) the RPN elements produce many duplicate numbers; (4) the RPN is derived from only three factors, mainly in terms of safety; and (5) the conventional RPN method does not consider indirect relations between components. To address the above issues, an efficient and comprehensive algorithm to evaluate the risk of failure is needed. This article proposes an innovative approach that integrates the intuitionistic fuzzy set (IFS) and the decision-making trial and evaluation laboratory (DEMATEL) approach for risk assessment. The proposed approach resolves some of the shortcomings of the conventional RPN method. A case study assessing the risk of a 0.15 µm DRAM etching process is used to demonstrate the effectiveness of the proposed approach. Finally, the result of the proposed method is compared with those of the listed risk assessment approaches.
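
The conventional RPN this record critiques is just the unweighted product of three ordinal ratings. A minimal sketch of that baseline (the function name and the common 1-10 scale are conventions, not taken from the paper), which also makes the duplicate-number shortcoming visible:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Conventional risk priority number: the unweighted product of
    three 1-10 ratings (hence the equal-weighting criticism)."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be on a 1-10 scale")
    return severity * occurrence * detection

# Shortcoming (3): quite different failure profiles collapse to the same RPN.
assert rpn(2, 5, 6) == rpn(3, 4, 5) == 60
```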

  8. The identification of helicopter noise using a neural network

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.; Fuller, Chris R.; O'Brien, Walter F.

    1990-01-01

Experiments were carried out to demonstrate the ability of an artificial neural network (ANN) system to distinguish between the noise of two helicopters. The ANN is taught to identify helicopters by using two types of features: one associated with the ratio of the main-rotor to tail-rotor blade passage frequency (BPF), and the other describing the distribution of peaks in the main-rotor spectrum, which is independent of the tail rotor. It is shown that the ability of the ANN to identify helicopters is comparable to that of a conventional recognition system using the ratio of the main-rotor BPF to the tail-rotor BPF (when both main- and tail-rotor noise are present), but the performance of the ANN exceeds the conventional method's when the tail-rotor noise is absent. In addition, the results of the ANN can be obtained as a function of propagation distance.

  9. Haze in Apple-Based Beverages: Detailed Polyphenol, Polysaccharide, Protein, and Mineral Compositions.

    PubMed

    Millet, Melanie; Poupard, Pascal; Le Quéré, Jean-Michel; Bauduin, Remi; Guyot, Sylvain

    2017-08-09

    Producers of apple-based beverages are confronted with colloidal instability. Haze is caused by interactions between molecules that lead to the formation of aggregates. Haze composition in three apple-based beverages, namely, French sparkling cider, apple juice, and pommeau, was studied. Phenolic compounds, proteins, polysaccharides, and minerals were analyzed using global and detailed analytical methods. The results explained <75% (w/w) of haze dry mass. Polyphenols, represented mainly by procyanidins, were the main compounds identified and accounted for 10-31% of haze. However, oxidized phenolic compounds were probably underestimated and may represent a high proportion of haze. Proteins were present in all of the samples in proportions of <6% of haze except in two apple juice hazes, where they were the main constituents (18 and 24%). Polysaccharides accounted for 0-30% of haze. Potassium and calcium were the main minerals.

  10. Applied Swarm-based medicine: collecting decision trees for patterns of algorithms analysis.

    PubMed

    Panje, Cédric M; Glatzer, Markus; von Rappard, Joscha; Rothermundt, Christian; Hundsberger, Thomas; Zumstein, Valentin; Plasswilm, Ludwig; Putora, Paul Martin

    2017-08-16

    The objective consensus methodology has recently been applied in consensus finding in several studies on medical decision-making among clinical experts or guidelines. The main advantages of this method are an automated analysis and comparison of treatment algorithms of the participating centers which can be performed anonymously. Based on the experience from completed consensus analyses, the main steps for the successful implementation of the objective consensus methodology were identified and discussed among the main investigators. The following steps for the successful collection and conversion of decision trees were identified and defined in detail: problem definition, population selection, draft input collection, tree conversion, criteria adaptation, problem re-evaluation, results distribution and refinement, tree finalisation, and analysis. This manuscript provides information on the main steps for successful collection of decision trees and summarizes important aspects at each point of the analysis.

  11. Comparison of Adaline and Multiple Linear Regression Methods for Rainfall Forecasting

    NASA Astrophysics Data System (ADS)

    Sutawinaya, IP; Astawa, INGA; Hariyanti, NKD

    2018-01-01

Heavy rainfall can cause disasters, so forecasts are needed to predict rainfall intensity. The main factor causing flooding is high rainfall intensity, which pushes rivers beyond their capacity and inundates the surrounding area. Because rainfall is a dynamic factor, it is very interesting to study. Methods to support rainfall forecasting range from Artificial Intelligence (AI) to statistics. In this research, we used Adaline as the AI method and regression as the statistical method. The method producing the more accurate forecast is taken to be the better one for rainfall forecasting here.
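
The two methods compared here can be contrasted in a few lines: Adaline fits a linear unit iteratively with the Widrow-Hoff (LMS) rule, while linear regression solves the same least-squares problem in closed form. A minimal single-predictor sketch (the toy data and learning rate are illustrative, not from the paper):

```python
def adaline_fit(xs, ys, eta=0.05, epochs=500):
    """Adaline: stochastic Widrow-Hoff (LMS) updates on a linear unit."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = y - (w * x + b)  # error of the linear activation
            w += eta * err * x     # delta rule
            b += eta * err
    return w, b

def ols_fit(xs, ys):
    """Simple linear regression: closed-form least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return w, my - w * mx

# On a noiseless toy series y = 2x + 1 both recover the same line;
# on real rainfall data their fits (and forecasts) would differ.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
```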

  12. Development of fault tolerant adaptive control laws for aerospace systems

    NASA Astrophysics Data System (ADS)

    Perez Rocha, Andres E.

The main topic of this dissertation is the design, development and implementation of intelligent adaptive control techniques designed to maintain healthy performance of aerospace systems subjected to malfunctions, external parameter changes and/or unmodeled dynamics. The dissertation is focused on the development of novel adaptive control configurations that rely on non-linear functions appearing in the immune system of living organisms as the main source of adaptation. One of the main goals of this dissertation is to demonstrate that these novel adaptive control architectures are able to improve overall performance and protect the system while reducing control effort and maintaining adequate operation outside the bounds of nominal design. This research effort spans several phases, ranging from theoretical stability analysis to simulation and hardware implementation on different types of aerospace systems, including spacecraft, aircraft and quadrotor vehicles. The results presented in this dissertation are focused on two main adaptivity approaches. The first is intended for aerospace systems that do not attain large angles and uses exact feedback linearization of Euler angle kinematics; a proof of stability is presented by means of the Circle Criterion and Lyapunov's direct method. The second is intended for aerospace systems that can attain large attitude angles (e.g. space systems in gravity-less environments); the adaptation is incorporated into a baseline architecture that uses partial feedback linearization of quaternion kinematics, and the closed-loop stability was analyzed using Lyapunov's direct method and Barbalat's Lemma. It is expected that some results presented in this dissertation can contribute towards the validation and certification of direct adaptive controllers.

  13. Identifying and quantifying main components of physiological noise in functional near infrared spectroscopy on the prefrontal cortex.

    PubMed

    Kirilina, Evgeniya; Yu, Na; Jelzow, Alexander; Wabnitz, Heidrun; Jacobs, Arthur M; Tachtsidis, Ilias

    2013-01-01

Functional Near-Infrared Spectroscopy (fNIRS) is a promising method to study the functional organization of the prefrontal cortex. However, in order to realize the high potential of fNIRS, effective discrimination between cerebral signals and physiological noise originating from forehead skin haemodynamics is required. The main sources of physiological noise are global and local blood flow regulation processes on multiple time scales. The goal of the present study was to identify the main physiological noise contributions in fNIRS forehead signals and to develop a method for physiological de-noising of fNIRS data. To achieve this goal we combined concurrent time-domain fNIRS and peripheral physiology recordings with wavelet coherence analysis (WCA). Depth selectivity was achieved by analyzing moments of photon time-of-flight distributions provided by time-domain fNIRS. Simultaneously, mean arterial blood pressure (MAP), heart rate (HR), and skin blood flow (SBF) on the forehead were recorded. WCA was employed to quantify the impact of physiological processes on fNIRS signals separately for different time scales. We identified three main processes contributing to physiological noise in fNIRS signals on the forehead. The first process, with a period of about 3 s, is induced by respiration. The second process is highly correlated with time-lagged MAP and HR fluctuations with a period of about 10 s, often referred to as Mayer waves. The third process is local regulation of the facial SBF, time-locked to the task-evoked fNIRS signals. All processes affect oxygenated haemoglobin concentration more strongly than that of deoxygenated haemoglobin. Based on these results we developed a set of physiological regressors, which were used for physiological de-noising of fNIRS signals. Our results demonstrate that the proposed de-noising method can significantly improve the sensitivity of fNIRS to cerebral signals.

  14. A usability evaluation of Lazada mobile application

    NASA Astrophysics Data System (ADS)

    Hussain, Azham; Mkpojiogu, Emmanuel O. C.; Jamaludin, Nur Hafiza; Moh, Somia T. L.

    2017-10-01

This paper reports on a usability evaluation of the Lazada mobile application, an online shopping app for mobile devices. The evaluation was conducted with 12 users aged 18 to 24; seven (7) were expert users and the other 5 were novice users. The study objectives were to evaluate the perceived effectiveness, efficiency, and satisfaction of the mobile application. The results provide positive feedback and show that the mobile shopping app is effective, efficient, and satisfying as perceived by the study participants. However, there are some observed usability issues with the main menu and the payment method that necessitate improvements to increase the application's effectiveness, efficiency, and satisfaction. The suggested improvements include: 1) the main menu should be capitalized and placed on the left side of the mobile app, and 2) a payment method tutorial should be included as a hyperlink on the payment method page. These observations will be helpful to the owners of the application in developing future versions of the app.

  15. A Comparative Analysis of Perceptions of Pharmacy Students’ Stress and Stressors across Two Multicampus Universities

    PubMed Central

    Gaither, Caroline A.; Crawford, Stephanie Y.; Tieman, Jami

    2016-01-01

    Objective. To compare perceived levels of stress, stressors, and academic self-efficacy among students at two multicampus colleges of pharmacy. Methods. A survey instrument using previously validated items was developed and administered to first-year, second-year, and third-year pharmacy students at two universities with multiple campuses in spring 2013. Results. Eight hundred twenty students out of 1115 responded (73.5% response rate). Institutional differences were found in perceived student stress levels, self-efficacy, and stress-related causes. An interaction effect was demonstrated between institution and campus type (main or branch) for perceived stress and self-efficacy although campus type alone did not demonstrate a direct effect. Institutional and campus differences existed in awareness of campus counseling services, as did a few differences in coping methods. Conclusion. Stress measures were similar for pharmacy students at main or branch campuses. Institutional differences in student stress might be explained by instructional methods, campus support services, institutional climate, and nonuniversity factors. PMID:27402985

  16. Analysis of axial compressive loaded beam under random support excitations

    NASA Astrophysics Data System (ADS)

    Xiao, Wensheng; Wang, Fengde; Liu, Jian

    2017-12-01

    An analytical procedure to investigate the response spectrum of a uniform Bernoulli-Euler beam with axial compressive load subjected to random support excitations is implemented based on the Mindlin-Goodman method and the mode superposition method in the frequency domain. The random response spectrum of the simply supported beam subjected to white noise excitation and to Pierson-Moskowitz spectrum excitation is investigated, and the characteristics of the response spectrum are further explored. Moreover, the effect of axial compressive load is studied and a method to determine the axial load is proposed. The research results show that the response spectrum mainly consists of the beam's additional displacement response spectrum when the excitation is white noise; however, the quasi-static displacement response spectrum is the main component when the excitation is the Pierson-Moskowitz spectrum. Under white noise excitation, the amplitude of the power spectral density function decreased as the axial compressive load increased, while the frequency band of the vibration response spectrum increased with the increase of axial compressive load.

  17. Fiber-optical sensor with intensity compensation model in college teaching of physics experiment

    NASA Astrophysics Data System (ADS)

    Su, Liping; Zhang, Yang; Li, Kun; Zhang, Yu

    2017-08-01

Optical fiber sensor technology is one of the main components of modern information technology and holds a very important position in modern science and technology. Fiber-optic sensor experiments can improve students' enthusiasm and broaden their horizons in college physics experiments. This paper introduces the main structure and working principle of a fiber-optical sensor with an intensity compensation model. The sensor is then applied to micro-displacement measurement in two college physics experiments: measuring Young's modulus and measuring the linear expansion coefficient of metal. Results indicate that micro-displacement measurement with the intensity-compensated fiber-optical sensor is more accurate than with the traditional methods. Meanwhile, this measurement method helps students understand optical fibers, sensors, and the nature of micro-displacement measurement, and it strengthens the relationship and compatibility between the experiments, providing a new idea for the reform of experimental teaching.

  18. A DESIGN METHOD FOR RETAINING WALL BASED ON RETURN PERIOD OF RAINFALL AND SNOWMELT

    NASA Astrophysics Data System (ADS)

    Ebana, Ryo; Uehira, Kenichiro; Yamada, Tadashi

The main purpose of this study is to develop a new design method for retaining walls in cold districts. In cold districts, snowfall and snowmelt are among the main factors in sediment-related disasters. However, the effect of snowmelt is not taken into account in sediment-disaster precaution and evacuation systems. In this study, we examine a past slope failure disaster, quantitatively evaluate the effect of rainfall and snowmelt on the groundwater level, and then verify the stability of the slope. Water supplied to the slope was determined from a probabilistic treatment of snowmelt using the Degree-Day method. Furthermore, a slope stability analysis was carried out based on the groundwater level obtained from unsaturated infiltration flow and saturated seepage flow simulations. The slope stability analysis showed that the effect of the groundwater level on slope stability is much larger than that of the other factors.

  19. Recording 13C-15N HMQC 2D sparse spectra in solids in 30 s

    NASA Astrophysics Data System (ADS)

    Kupče, Ēriks; Trébosc, Julien; Perrone, Barbara; Lafon, Olivier; Amoureux, Jean-Paul

    2018-03-01

    We propose a dipolar HMQC Hadamard-encoded (D-HMQC-Hn) experiment for fast 2D correlations of abundant nuclei in solids. The main limitation of the Hadamard methods resides in the length of the encoding pulses, which results from a compromise between the selectivity and the sensitivity due to losses. For this reason, these methods should mainly be used with sparse spectra, and they profit from the increased separation of the resonances at high magnetic fields. In the case of the D-HMQC-Hn experiments, we give a simple rule that allows directly setting the optimum length of the selective pulses, versus the minimum separation of the resonances in the indirect dimension. The demonstration has been performed on a fully 13C,15N labelled f-MLF sample, and it allowed recording the build-up curves of the 13C-15N cross-peaks within 10 min. However, the method could also be used in the case of less sensitive samples, but with more accumulations.

  20. A Comparative Analysis of Perceptions of Pharmacy Students' Stress and Stressors across Two Multicampus Universities.

    PubMed

    Awé, Clara; Gaither, Caroline A; Crawford, Stephanie Y; Tieman, Jami

    2016-06-25

    Objective. To compare perceived levels of stress, stressors, and academic self-efficacy among students at two multicampus colleges of pharmacy. Methods. A survey instrument using previously validated items was developed and administered to first-year, second-year, and third-year pharmacy students at two universities with multiple campuses in spring 2013. Results. Eight hundred twenty students out of 1115 responded (73.5% response rate). Institutional differences were found in perceived student stress levels, self-efficacy, and stress-related causes. An interaction effect was demonstrated between institution and campus type (main or branch) for perceived stress and self-efficacy although campus type alone did not demonstrate a direct effect. Institutional and campus differences existed in awareness of campus counseling services, as did a few differences in coping methods. Conclusion. Stress measures were similar for pharmacy students at main or branch campuses. Institutional differences in student stress might be explained by instructional methods, campus support services, institutional climate, and nonuniversity factors.

  1. A novel fruit shape classification method based on multi-scale analysis

    NASA Astrophysics Data System (ADS)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

Shape is one of the major concerns in automated inspection and sorting of fruits and remains a difficult problem. In this research, we propose the multi-scale energy distribution (MSED) for object shape description and explore the relationship between an object's shape and the energy distribution of its boundary at multiple scales for shape extraction. MSED captures not only the main energy, which represents primary shape information at the lower scales, but also subordinate energy, which represents local shape information at higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction; 2) shape resampling and shape feature normalization; 3) energy decomposition by wavelet and classification by a BP neural network. In the resampling step, 256 boundary pixels are resampled from a cubic-spline approximation of the original boundary to obtain uniform raw data. A probability function was defined, and an effective method to select a start point through maximal expectation was given, which overcomes the inconvenience of traditional methods and gives the feature rotation invariance. In the experiment, clearly normal and seriously abnormal citrus were classified at a rate above 91.2%; the global correct classification rate is 89.77%, and our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.
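
The per-scale boundary energies that make up MSED can be sketched with a plain orthonormal Haar transform of the resampled boundary signature (the Haar basis is an assumption for illustration; the paper's wavelet choice may differ):

```python
import math

def haar_energy_distribution(signal, levels):
    """Energy of a 1-D boundary signature per wavelet scale.

    Uses an orthonormal Haar transform, so the per-scale detail
    energies plus the final approximation energy sum to the signal
    energy (Parseval) -- a simple stand-in for the MSED feature."""
    approx = list(signal)
    energies = []
    for _ in range(levels):
        a, d = [], []
        for i in range(0, len(approx) - 1, 2):
            a.append((approx[i] + approx[i + 1]) / math.sqrt(2))
            d.append((approx[i] - approx[i + 1]) / math.sqrt(2))
        energies.append(sum(x * x for x in d))   # detail energy at this scale
        approx = a
    energies.append(sum(x * x for x in approx))  # coarsest approximation
    return energies
```

The resulting vector (one entry per scale plus the approximation) is the sort of fixed-length feature one could feed to a BP neural network; a smooth boundary concentrates energy at coarse scales, while local shape defects raise the fine-scale entries.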

  2. From Tornadoes to Earthquakes: Forecast Verification for Binary Events Applied to the 1999 Chi-Chi, Taiwan, Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, C.; Rundle, J. B.; Holliday, J. R.; Nanjo, K.; Turcotte, D. L.; Li, S.; Tiampo, K. F.

    2005-12-01

    Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the m = 7.3 1999 Chi-Chi, Taiwan, earthquake. These forecasts are based on a method, Pattern Informatics (PI), that locates likely sites for future large earthquakes based on large change in activity of the smallest earthquakes. A competing null hypothesis, Relative Intensity (RI), is based on the idea that future large earthquake locations are correlated with sites having the greatest frequency of small earthquakes. We show that for Taiwan, the PI forecast method is superior to the RI forecast null hypothesis. Inspection of the two maps indicates that their forecast locations are indeed quite different. Our results confirm an earlier result suggesting that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous changes in activation or quiescence, and that signatures of these processes can be detected in precursory seismicity data. Furthermore, we find that our methods can accurately forecast the locations of aftershocks from precursory seismicity changes alone, implying that the main shock together with its aftershocks represent a single manifestation of the formation of a high-stress region nucleating prior to the main shock.
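
The contingency-table scores behind a point on an ROC diagram reduce to four counts. A minimal sketch for binary forecast/outcome pairs (the a/b/c/d cell names are the standard convention, not taken from the paper):

```python
def contingency_scores(forecasts, outcomes):
    """Hit rate and false-alarm rate from binary forecast/outcome
    pairs -- the two axes of an ROC diagram."""
    a = sum(1 for f, o in zip(forecasts, outcomes) if f and o)          # hits
    b = sum(1 for f, o in zip(forecasts, outcomes) if f and not o)      # false alarms
    c = sum(1 for f, o in zip(forecasts, outcomes) if not f and o)      # misses
    d = sum(1 for f, o in zip(forecasts, outcomes) if not f and not o)  # correct negatives
    hit_rate = a / (a + c) if a + c else 0.0
    false_alarm_rate = b / (b + d) if b + d else 0.0
    return hit_rate, false_alarm_rate
```

Sweeping a threshold over a forecast map (e.g. PI or RI scores per grid cell) and plotting the resulting (false-alarm rate, hit rate) pairs traces the ROC curve used to compare the two forecasts.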

  3. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System

    PubMed Central

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-01-01

In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Starting from the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity. PMID:28230763
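
The conventional acoustic-intensity idea the authors build on can be sketched in the time domain: average the pressure-velocity products from the AVS channels and take the arctangent. This is a simplified stand-in for the paper's matched-filter method, with illustrative signal parameters:

```python
import math

def azimuth_from_avs(p, vx, vy):
    """Estimate source azimuth from co-located pressure and
    particle-velocity channels via time-averaged acoustic intensity."""
    ix = sum(pi * vxi for pi, vxi in zip(p, vx))  # x-component of intensity
    iy = sum(pi * vyi for pi, vyi in zip(p, vy))  # y-component of intensity
    return math.atan2(iy, ix)

# Synthetic plane wave arriving from an azimuth of 30 degrees.
theta = math.radians(30)
t = [i / 100 for i in range(200)]
p = [math.cos(2 * math.pi * 5 * ti) for ti in t]
vx = [math.cos(theta) * pi for pi in p]
vy = [math.sin(theta) * pi for pi in p]
```

Because `vx` and `vy` are projections of the same wavefield, the averaged intensity vector points along the arrival direction; the paper's contribution is to sharpen this estimate for active sonar by matched filtering the echo first.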

  4. Ink dating using thermal desorption and gas chromatography/mass spectrometry: comparison of results obtained in two laboratories.

    PubMed

    Koenig, Agnès; Bügler, Jürgen; Kirsch, Dieter; Köhler, Fritz; Weyermann, Céline

    2015-01-01

    An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aims of this work were to implement this method in a new laboratory to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to maintenances. Moreover, the aging curves were influenced by ink composition, as well as storage conditions (particularly when the samples were not stored in "normal" room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution to deal with ink sample inhomogeneity. © 2014 American Academy of Forensic Science.

  5. Cognitive-Behavioral Therapy.

    PubMed

    An, Hong; He, Ri-Hui; Zheng, Yun-Rong; Tao, Ran

    2017-01-01

Cognitive-behavioral therapy (CBT) is the main method of psychotherapy generally accepted in the fields of substance and non-substance addiction. This chapter mainly introduces the methods and techniques of cognitive-behavioral therapy for substance addiction, especially for relapse prevention. For the cognitive-behavioral treatment of non-substance addiction, this chapter mainly covers gambling addiction and food addiction.

  6. The difference of level CO2 emissions from the transportation sector between weekdays and weekend days on the City Centre of Pemalang

    NASA Astrophysics Data System (ADS)

    Sawitri, E.; Hardiman, G.; Buchori, I.

    2017-06-01

The high growth of human activity potentially increases the number of vehicles and the use of fossil fuels, which contribute to the increase of CO2 emissions in the atmosphere. Controlling the CO2 emissions that cause the greenhouse effect has become a main agenda item of the Indonesian Government. The first step in controlling CO2 emissions is measuring them, especially CO2 emissions from fossil fuel consumption in the transport sector. This research aims to assess the level of CO2 emissions from the transportation sector on the main roads in the city centre of Pemalang on both weekdays and weekend days. CO2 emissions were calculated using the Intergovernmental Panel on Climate Change (IPCC) 2006 method. For this, a survey of the number of vehicles passing through the main roads was first conducted using hand tally counters. The results show CO2 emissions of 49,006.95 tons/year on weekdays compared to 38,865.50 tons/year on weekend days.
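
The fuel-based IPCC calculation used here is essentially consumption multiplied by an emission factor, summed over fuel types. A hedged sketch of that arithmetic (the factors and fuel volumes below are illustrative placeholders, not the paper's IPCC 2006 values or survey counts):

```python
def co2_emissions_tonnes(fuel_litres, emission_factor_kg_per_litre):
    """Fuel-based (IPCC Tier 1 style) estimate:
    emissions = fuel consumed x emission factor, converted to tonnes."""
    return fuel_litres * emission_factor_kg_per_litre / 1000.0

# Illustrative emission factors in kg CO2 per litre (placeholders).
FACTORS = {"gasoline": 2.3, "diesel": 2.7}
# Illustrative annual fuel consumption derived from a vehicle count survey.
CONSUMPTION_LITRES = {"gasoline": 500_000, "diesel": 200_000}

total = sum(co2_emissions_tonnes(litres, FACTORS[fuel])
            for fuel, litres in CONSUMPTION_LITRES.items())
```

In the study itself, the litres per fuel type would come from the hand-tally vehicle counts combined with per-vehicle fuel consumption assumptions, done separately for weekdays and weekend days.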

  7. Wind Extraction for Natural Ventilation

    NASA Astrophysics Data System (ADS)

    Fagundes, Tadeu; Yaghoobian, Neda; Kumar, Rajan; Ordonez, Juan

    2017-11-01

Due to the depletion of energy resources and the environmental impact of pollution and unsustainable energy sources, energy consumption has become one of the main concerns in our rapidly growing world. Natural ventilation, a traditional method to remove anthropogenic and solar heat gains, has proved to be a cost-effective alternative to mechanical ventilation. However, while natural ventilation is simple in theory, its detailed design can be a challenge, particularly for wind-driven ventilation, whose performance depends heavily on the building's form, the surrounding topography, turbulent flow characteristics, and climate. One of the main challenges with wind-driven natural ventilation schemes is the turbulent and unpredictable nature of the wind around the building, which imposes complex pressure loads on the structure. In practice, these challenges have led designers to base natural ventilation mainly on buoyancy, rather than wind, as the primary force. This study is the initial step in investigating the physical principles of wind extraction over building walls and in exploring strategies to reduce the dependence of wind extraction on the incoming flow characteristics and the target building form.

  8. The Faintest WISE Debris Disks: Enhanced Methods for Detection and Verification

    NASA Astrophysics Data System (ADS)

    Patel, Rahul I.; Metchev, Stanimir A.; Heinze, Aren; Trollo, Joseph

    2017-02-01

    In an earlier study, we reported nearly 100 previously unknown dusty debris disks around Hipparcos main-sequence stars within 75 pc by selecting stars with excesses in individual WISE colors. Here, we further scrutinize the Hipparcos 75 pc sample to (1) gain sensitivity to previously undetected, fainter mid-IR excesses and (2) remove spurious excesses contaminated by previously unidentified blended sources. We improve on our previous method by adopting a more accurate measure of the confidence threshold for excess detection and by adding an optimally weighted color average that incorporates all shorter-wavelength WISE photometry, rather than using only individual WISE colors. The latter is equivalent to spectral energy distribution fitting, but only over WISE bandpasses. In addition, we leverage the higher-resolution WISE images available through the unWISE.me image service to identify contaminated WISE excesses based on photocenter offsets among the W3- and W4-band images. Altogether, we identify 19 previously unreported candidate debris disks. Combined with the results from our earlier study, we have found a total of 107 new debris disks around 75 pc Hipparcos main-sequence stars using precisely calibrated WISE photometry. This expands the 75 pc debris disk sample by 22% around Hipparcos main-sequence stars and by 20% overall (including non-main-sequence and non-Hipparcos stars).
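    The "optimally weighted color average" described above can be read as an inverse-variance weighted mean, the standard way to combine several noisy estimates of a common quantity. A minimal sketch, with invented excess values in place of actual WISE photometry:

```python
# Inverse-variance weighting: estimates with smaller uncertainties get
# proportionally larger weights; the combined uncertainty shrinks.
def weighted_mean(values, sigmas):
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5  # uncertainty of the combined estimate
    return mean, sigma

excesses = [0.12, 0.08]  # hypothetical per-color excess estimates (mag)
errors = [0.03, 0.05]    # their 1-sigma uncertainties
mean, sigma = weighted_mean(excesses, errors)
print(mean > 3 * sigma)  # a simple significance check for an excess
```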

  9. The Association of Chronic Hepatitis C with Respiratory Microbiota Disturbance on the Basis of Decreased Haemophilus Spp. Colonization

    PubMed Central

    Kosikowska, Urszula; Biernasiuk, Anna; Korona-Głowniak, Izabela; Kiciak, Sławomir; Tomasiewicz, Krzysztof; Malm, Anna

    2016-01-01

    Background: Haemophilus species are among the most common members of the human microbiota. The aim of this paper was to investigate the prevalence of Haemophilus spp., mainly H. parainfluenzae, in the upper respiratory tract of chronic hepatitis C (CHC-positive) patients with or without therapy with pegylated interferon alfa and ribavirin. Material/Methods: We collected 462 samples from 54 healthy people and 100 CHC-positive patients at various stages: before (group A), during (group B), and after (group C) antiviral therapy. Identification of bacterial isolates, including biotypes and antimicrobial susceptibility, was accomplished by standard microbiological methods. Results: Haemophili, mainly H. parainfluenzae, were observed in 70.4% of healthy people (control group) and in 27.0% of CHC-positive patients; this difference was statistically significant (p<0.0001). Statistically significant differences in Haemophilus spp. colonization were also observed between healthy people and CHC-positive patients from group A (p=0.0012) and from groups B or C (p<0.0001). Resistance to ampicillin in beta-lactamase-positive isolates and multidrug resistance (MDR) of H. parainfluenzae were detected mainly in group A. Conclusions: The data obtained suggest that chronic hepatitis C, together with antiviral therapy, may influence the composition of the respiratory tract microbiota, as indicated by haemophili, mainly H. parainfluenzae. PMID:26912163

  10. Visual analysis as a method of interpretation of the results of satellite ionospheric measurements for exploratory problems

    NASA Astrophysics Data System (ADS)

    Korneva, N. N.; Mogilevskii, M. M.; Nazarov, V. N.

    2016-05-01

    Traditional methods of time series analysis of satellite ionospheric measurements have some limitations and disadvantages that are mainly associated with the complex nonstationary signal structure. In this paper, the possibility of identifying and studying the temporal characteristics of signals via visual analysis is considered. The proposed approach is illustrated by the example of the visual analysis of wave measurements on the DEMETER microsatellite during its passage over the HAARP facility.

  11. 40 CFR Appendix A-2 to Part 60 - Test Methods 2G through 3C

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the main probe to extend its reach. 3.6 “May,” “Must,” “Shall,” “Should,” and the imperative form of...,” and the imperative form of verbs (such as “record” or “enter”) are used to indicate that a provision... results on a form similar to Table 2F-1 presented in Method 2F. If there is visible damage to the 3-D...

  12. 40 CFR Appendix A-2 to Part 60 - Test Methods 2G through 3C

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the main probe to extend its reach. 3.6 “May,” “Must,” “Shall,” “Should,” and the imperative form of...,” and the imperative form of verbs (such as “record” or “enter”) are used to indicate that a provision... results on a form similar to Table 2F-1 presented in Method 2F. If there is visible damage to the 3-D...

  13. 40 CFR Appendix A-2 to Part 60 - Test Methods 2G through 3C

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the main probe to extend its reach. 3.6 “May,” “Must,” “Shall,” “Should,” and the imperative form of...,” and the imperative form of verbs (such as “record” or “enter”) are used to indicate that a provision... results on a form similar to Table 2F-1 presented in Method 2F. If there is visible damage to the 3-D...

  14. 40 CFR Appendix A-2 to Part 60 - Test Methods 2G through 3C

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the main probe to extend its reach. 3.6 “May,” “Must,” “Shall,” “Should,” and the imperative form of...,” and the imperative form of verbs (such as “record” or “enter”) are used to indicate that a provision... results on a form similar to Table 2F-1 presented in Method 2F. If there is visible damage to the 3-D...

  15. 40 CFR Appendix A-2 to Part 60 - Test Methods 2G through 3C

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the main probe to extend its reach. 3.6 “May,” “Must,” “Shall,” “Should,” and the imperative form of...,” and the imperative form of verbs (such as “record” or “enter”) are used to indicate that a provision... results on a form similar to Table 2F-1 presented in Method 2F. If there is visible damage to the 3-D...

  16. EAS fluctuation approach to primary mass composition investigation

    NASA Technical Reports Server (NTRS)

    Stamenov, J. N.; Janminchev, V. D.

    1985-01-01

    Analysis of the shapes of muon and electron fluctuation distributions by a statistical method of inverse problem solution makes it possible to obtain the relative contributions of the five main groups of primary nuclei. The method is model-independent for a broad class of interaction models and can give good results for observation levels not too far from the shower development maximum, provided showers are selected with fixed sizes and zenith angles no larger than 30 deg.

  17. A multimembership catalogue for 1876 open clusters using UCAC4 data

    NASA Astrophysics Data System (ADS)

    Sampedro, L.; Dias, W. S.; Alfaro, E. J.; Monteiro, H.; Molino, A.

    2017-10-01

    The main objective of this work is to determine the cluster members of 1876 open clusters, using positions and proper motions of the astrometric fourth United States Naval Observatory (USNO) CCD Astrograph Catalog (UCAC4). For this purpose, we apply three different methods, all based on a Bayesian approach, but with different formulations: a purely parametric method, another completely non-parametric algorithm and a third, recently developed by Sampedro & Alfaro, using both formulations at different steps of the whole process. The first and second statistical moments of the members' phase-space subspace, obtained after applying the three methods, are compared for every cluster. Although, on average, the three methods yield similar results, there are also specific differences between them, as well as for some particular clusters. The comparison with other published catalogues shows good agreement. We have also estimated, for the first time, the mean proper motion for a sample of 18 clusters. The results are organized in a single catalogue formed by two main files, one with the most relevant information for each cluster, partially including that in UCAC4, and the other showing the individual membership probabilities for each star in the cluster area. The final catalogue, with an interface design that enables an easy interaction with the user, is available in electronic format at the Stellar Systems Group (SSG-IAA) web site (http://ssg.iaa.es/en/content/sampedro-cluster-catalog).

  18. A method for mapping flood hazard along roads.

    PubMed

    Kalantari, Zahra; Nickman, Alireza; Lyon, Steve W; Olofsson, Bo; Folkeson, Lennart

    2014-01-15

    A method was developed for estimating and mapping flood hazard probability along roads using road and catchment characteristics as physical catchment descriptors (PCDs). The method uses a Geographic Information System (GIS) to derive candidate PCDs and then identifies those PCDs that significantly predict road flooding using a statistical modelling approach. The method thus allows flood hazards to be estimated and also provides insights into the relative roles of landscape characteristics in determining road-related flood hazards. As a case study to demonstrate its utility, the method was applied to an area in western Sweden where severe road flooding had occurred during an intense rain event. The results suggest that for this case study area three categories of PCDs are useful for predicting critical spots prone to flooding along roads: i) topography, ii) soil type, and iii) land use. The main drivers among the PCDs considered were a topographical wetness index, road density in the catchment, soil properties in the catchment (mainly the amount of gravel substrate), and local channel slope at the site of a road-stream intersection. These can be proposed as strong indicators for predicting the flood probability in ungauged river basins in this region, but some care is needed in generalising the case study results, as other potential factors are also likely to influence the flood hazard probability. Overall, the proposed method represents a straightforward and consistent way to estimate flooding hazards to inform both the planning of future roadways and the maintenance of existing roadways. Copyright © 2013 Elsevier Ltd. All rights reserved.
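    A statistical model over PCDs of the general kind described above can be sketched as a logistic regression score. The descriptor set and coefficients below are invented for illustration and do not reproduce the paper's fitted model.

```python
import math

# Hypothetical logistic model: three PCDs -> probability of road flooding.
COEFFS = {"intercept": -2.0, "twi": 0.45, "road_density": 0.30, "channel_slope": -0.25}

def flood_probability(twi, road_density, channel_slope):
    z = (COEFFS["intercept"] + COEFFS["twi"] * twi
         + COEFFS["road_density"] * road_density
         + COEFFS["channel_slope"] * channel_slope)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

# A wet, road-dense, flat site scores higher than a dry, steep one.
print(flood_probability(8.0, 3.0, 0.5) > flood_probability(3.0, 0.5, 4.0))
```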

  19. AC Power at Power Frequencies: Bilateral comparison between SASO NMCC and TÜBİTAK UME

    NASA Astrophysics Data System (ADS)

    Çaycı, Hüseyin; Yılmaz, Özlem; AlRobaish, Abdullah M.; AlAnazi, Shafi S.; AlAyali, Ahmed R.; AlRumie, Rashed A.

    2018-01-01

    A supplementary bilateral comparison of AC power measurements at 50/60 Hz between SASO NMCC (GULFMET) and TÜBİTAK UME (EURAMET) was performed with the primary power standards of each partner. The participants' measurement methods and setups, which are very similar, together with the measurement results, the calculation of differences in the results, and the evaluation of uncertainties, are given in this report. Main text: To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  20. Methodology for the Evaluation of the Algorithms for Text Line Segmentation Based on Extended Binary Classification

    NASA Astrophysics Data System (ADS)

    Brodic, D.

    2011-01-01

    Text line segmentation is a key element of the optical character recognition process; hence, testing of text line segmentation algorithms is of substantial relevance. Previously proposed testing methods deal mainly with a text database as a template, used both for testing and for evaluating the text segmentation algorithm. In this manuscript, a methodology for evaluating text line segmentation algorithms based on extended binary classification is proposed. It is built on various multiline text samples relevant to text segmentation, whose results are recorded according to a binary classification. The final result is obtained by comparative analysis of the cross-linked data. Its suitability for different types of scripts represents its main advantage.
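    The binary-classification bookkeeping behind such an evaluation can be sketched with the standard precision/recall/F-measure computations: each detected line either matches a ground-truth line (true positive) or not (false positive), and unmatched ground-truth lines are false negatives. The counts below are invented.

```python
# Standard binary-classification metrics from TP/FP/FN counts.
def evaluate(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = evaluate(tp=90, fp=10, fn=20)
print(round(p, 3), round(r, 3), round(f1, 3))
```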

  1. Resolvent-Techniques for Multiple Exercise Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Sören, E-mail: christensen@math.uni-kiel.de; Lempa, Jukka, E-mail: jukka.lempa@hioa.no

    2015-02-15

    We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.

  2. An algal model for predicting attainment of tiered biological criteria of Maine's streams and rivers

    USGS Publications Warehouse

    Danielson, Thomas J.; Loftin, Cyndy; Tsomides, Leonidas; DiFranco, Jeanne L.; Connors, Beth; Courtemanch, David L.; Drummond, Francis; Davies, Susan

    2012-01-01

    State water-quality professionals developing new biological assessment methods often have difficulty relating assessment results to narrative criteria in water-quality standards. An alternative to selecting index thresholds arbitrarily is to include the Biological Condition Gradient (BCG) in the development of the assessment method. The BCG describes tiers of biological community condition to help identify and communicate the position of a water body along a gradient of water quality ranging from natural to degraded. Although originally developed for fish and macroinvertebrate communities of streams and rivers, the BCG is easily adapted to other habitats and taxonomic groups. We developed a discriminant analysis model with stream algal data to predict attainment of tiered aquatic-life uses in Maine's water-quality standards. We modified the BCG framework for Maine stream algae, related the BCG tiers to Maine's tiered aquatic-life uses, and identified appropriate algal metrics for describing BCG tiers. Using a modified Delphi method, 5 aquatic biologists independently evaluated algal community metrics for 230 samples from streams and rivers across the state and assigned a BCG tier (1–6) and Maine water quality class (AA/A, B, C, nonattainment of any class) to each sample. We used minimally disturbed reference sites to approximate natural conditions (Tier 1). Biologist class assignments were unanimous for 53% of samples, and 42% of samples differed by 1 class. The biologists debated and developed consensus class assignments. A linear discriminant model built to replicate a priori class assignments correctly classified 95% of 150 samples in the model training set and 91% of 80 samples in the model validation set. Locally derived metrics based on BCG taxon tolerance groupings (e.g., sensitive, intermediate, tolerant) were more effective than were metrics developed in other regions. Adding the algal discriminant model to Maine's existing macroinvertebrate discriminant model will broaden detection of biological impairment and further diagnose sources of impairment. The algal discriminant model is specific to Maine, but our approach of explicitly tying an assessment tool to tiered aquatic-life goals is widely transferable to other regions, taxonomic groups, and waterbody types.

  3. Waste management barriers in developing country hospitals: Case study and AHP analysis.

    PubMed

    Delmonico, Diego V de Godoy; Santos, Hugo H Dos; Pinheiro, Marco Ap; de Castro, Rosani; de Souza, Regiane M

    2018-01-01

    Healthcare waste management is an essential field for both researchers and practitioners. Although it has been the subject of several studies in different contexts, few have used statistical methods for its evaluation. Furthermore, the precarious waste management practices known in developing countries raise questions about their potential barriers. This study aims to investigate the barriers to healthcare waste management and their relevance. For this purpose, this paper analyses waste management practices in two Brazilian hospitals using a case study and the Analytic Hierarchy Process (AHP) method. The barriers were organized into three categories (human factors, management, and infrastructure), and the main findings suggest that cost and employee awareness were the most significant barriers. These results highlight the main barriers to more sustainable waste management and provide an empirical basis for multi-criteria evaluation in the literature.
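    A minimal AHP prioritization step over the three barrier categories can be sketched with the common geometric-mean approximation to the principal eigenvector of a pairwise comparison matrix. The pairwise judgments below are invented for illustration, not the study's elicited data.

```python
import math

CATEGORIES = ["human factors", "management", "infrastructure"]
# M[i][j] = how strongly category i dominates category j (Saaty 1-9 scale).
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]

def ahp_weights(matrix):
    # Geometric mean of each row, normalized to sum to 1.
    geo = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

weights = ahp_weights(M)
print(CATEGORIES[weights.index(max(weights))])  # most significant barrier
```

    In a full AHP application this step would be accompanied by Saaty's consistency-ratio check on the judgment matrix.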

  4. Detecting text in natural scenes with multi-level MSER and SWT

    NASA Astrophysics Data System (ADS)

    Lu, Tongwei; Liu, Renjun

    2018-04-01

    The detection of characters in natural scenes is affected by factors such as complex backgrounds, variable viewing angles, and diverse languages, which lead to poor detection results. To address these problems, a new text detection method is proposed, consisting of two main stages: candidate region extraction and text region detection. In the first stage, the method applies multiple scale transformations to the original image and multiple thresholds of maximally stable extremal regions (MSER), so that character regions are detected comprehensively. In the second stage, stroke width transform (SWT) maps are computed for the candidate regions, and cascaded classifiers are then used to reject non-text regions. The proposed method was evaluated on the standard ICDAR2011 benchmark datasets and on datasets of our own making. The experimental results show that the proposed method is greatly improved compared with other text detection methods.

  5. Effective modern methods of protecting metal road structures from corrosion

    NASA Astrophysics Data System (ADS)

    Panteleeva, Margarita

    2017-10-01

    The article considers ways of protecting road barrier structures from the various external influences that cause the development of irreversible corrosion processes. The author studied modern methods of corrosion protection for metal and chose the most effective of them: directly treating the metal structures themselves. This method was studied in more detail in an experiment. The article describes the experiment with a three-layer polymer coating, which includes a thermally activated primer, an elastomeric thermoplastic layer with a spatial structure, and a strong outer polyolefin layer. The experiment revealed the ingredient ratios that yield treated-metal samples with the best corrosion resistance, elasticity, and strength. The author constructed a regression equation describing the main properties of the protective polymer coating using the simplex-lattice planning method in composition-property diagrams.

  6. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    PubMed

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal [Formula: see text]-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order [Formula: see text], [Formula: see text], in time. These schemes include convolution quadratures generated by backward Euler method and second-order backward difference formula, the L1 scheme, explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.

  7. Monte-Carlo simulation of a stochastic differential equation

    NASA Astrophysics Data System (ADS)

    Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG

    2017-12-01

    For solving higher-dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered more effective than other algorithms, such as the finite element method or the finite difference method. The inhomogeneity of the diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, higher-order methods have been put forward to allow MC codes to take large step sizes. The main focus of this work is to look for operators that can produce converging results at large step sizes. As a first step, our comparative analysis is applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch-angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.
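    A generic MC step of the kind discussed, an Euler-Maruyama scheme for an SDE with a state-dependent (inhomogeneous) diffusion coefficient, can be sketched as follows. The drift and diffusion functions are illustrative stand-ins, not the pitch-angle scattering model of the paper.

```python
import math
import random

# Euler-Maruyama for dX = a(X) dt + b(X) dW, with state-dependent b.
def euler_maruyama(x0, a, b, dt, steps, rng):
    x = x0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment over dt
        x += a(x) * dt + b(x) * dw
    return x

rng = random.Random(0)
# Mean-reverting drift with a spatially varying diffusion coefficient.
paths = [euler_maruyama(1.0, lambda x: -x, lambda x: 0.3 * (1 + x * x) ** 0.5,
                        dt=0.01, steps=1000, rng=rng) for _ in range(500)]
mean = sum(paths) / len(paths)
print(abs(mean) < 0.1)  # the drift pulls the ensemble average toward zero
```

    Higher-order schemes (e.g., Milstein-type corrections) would reduce the step-size restriction this simple operator imposes, which is the convergence question the abstract raises.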

  8. Characterization, Degradation, and Reaction Pathways of Indoor Toluene over Visible-light-driven S, Zn Co-doped TiO2

    NASA Astrophysics Data System (ADS)

    Chu, H.; Lin, Y. H.; Lin, C. Y.

    2017-01-01

    Sulfur- and zinc-co-doped TiO2 prepared by a sol-gel method was investigated for the degradation of toluene under a fluorescent lamp. The results indicate that S,Zn co-doped TiO2 photocatalysts are mainly nano-sized with an anatase phase structure. The degradation reactions of toluene were performed under various operating conditions. The results show that the toluene conversion increases with increasing toluene concentration and decreasing relative humidity. Based on the results of the activity tests, S0.05Zn0.001/TiO2 was chosen for further studies. The main oxidation products of toluene photodegradation are CO2, H2O, benzyl alcohol, acetone, butadiene, and acetic acid. Two possible mechanisms are proposed for the photodegradation of toluene in dry and humid environments.

  9. Using the Surface Renewal Technique to Estimate CO2 Exchange from a Rice Field to the Atmosphere

    NASA Astrophysics Data System (ADS)

    Suvocarev, K.; Reba, M. L.; Runkle, B.

    2015-12-01

    Measuring CO2 emissions as surface fluxes is crucial for climate change predictions. One major set of techniques to measure surface fluxes is through continuous micrometeorological observations over different landscapes. Recent approaches of the surface renewal method (SR) are becoming important for their capacity to independently measure sensible (H) and latent heat (LE) fluxes while avoiding some of the shortcomings of the eddy covariance method (EC). Unlike EC, SR avoids orientation limitations, leveling requirements and instrumentation separation and shadowing issues. The main advantage of SR over EC method is in its applicability in both roughness and inertial sub-layers. Therefore, SR measurements can be planned in cases where fetch requirements are not adequate for EC application. We applied the recent approach as suggested by Castellvi et al. (2008) over two months (May to July, 2015) of high-frequency data collected by EC equipment from a rice field in Arkansas. The main goal was to extend this SR application to CO2 fluxes (Fc) over agricultural fields. The results show high correlation between EC and SR fluxes (H, LE and Fc) when they are compared for all atmospheric stability conditions (R2 > 0.75). Some overestimation is observed for SR with respect to EC fluxes, similar to the findings of Castellvi et al. (2008) for rangeland grass. For all the data, SR analysis results were about 11%, 18% and 17% higher than the EC results for H, LE and Fc, respectively. These higher flux estimates resulted in better energy balance closure. The root mean square error for Fc was 6.55 μmol m-2 s-1. The observed overestimation will be addressed in the future by using additional methods for the turbulent fluxes quantification.
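    The two comparison statistics quoted above, R² and root mean square error between paired flux series (e.g., EC versus SR estimates of Fc), can be computed as in this small sketch. The paired values are invented, not the Arkansas field data.

```python
# R^2 (squared Pearson correlation) and RMSE for two paired series.
def r2_and_rmse(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r2 = sxy * sxy / (sxx * syy)
    rmse = (sum((a - b) ** 2 for a, b in zip(x, y)) / n) ** 0.5
    return r2, rmse

ec = [-10.0, -6.0, -2.0, 1.0, 4.0]   # hypothetical EC fluxes
sr = [-11.5, -7.0, -2.5, 1.5, 4.5]   # SR slightly larger in magnitude
r2, rmse = r2_and_rmse(ec, sr)
print(r2 > 0.99)
```

    A high R² with a systematic amplitude offset, as in this toy pair, mirrors the "high correlation but ~11-18% overestimation" pattern the abstract reports.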

  10. Optimum tuned mass damper design using harmony search with comparison of classical methods

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail; Sayin, Baris

    2017-07-01

    As is well known, tuned mass dampers (TMDs) are added to mechanical systems in order to obtain good vibration damping. The main aim is to reduce the maximum amplitude at resonance. In this study, a metaheuristic algorithm called harmony search is employed for the optimum design of TMDs. As the optimization objective, the transfer function of the acceleration of the system with respect to ground acceleration is minimized. Numerical trials were conducted for four single-degree-of-freedom systems, and the results were compared with classical methods. In conclusion, the proposed method is feasible and more effective than the other documented methods.
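    A bare-bones harmony search loop can be sketched as follows: a harmony memory stores candidate parameter vectors, and each new harmony mixes memory picks, pitch adjustment, and random values. The objective here is a stand-in (the sphere function), not the TMD transfer-function amplitude used in the paper, and all algorithm parameters are illustrative defaults.

```python
import random

def harmony_search(obj, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [obj(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # memory consideration
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-1, 1) * 0.05 * (hi - lo)
            else:                                   # random selection
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        s = obj(new)
        worst = scores.index(max(scores))
        if s < scores[worst]:                       # replace worst harmony
            memory[worst], scores[worst] = new, s
    return min(scores)

best = harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
print(best < 0.1)
```

    For the TMD problem, the parameter vector would hold the damper's mass ratio, frequency, and damping ratio, and `obj` would evaluate the peak of the acceleration transfer function.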

  11. Compensation of kinematic geometric parameters error and comparative study of accuracy testing for robot

    NASA Astrophysics Data System (ADS)

    Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin

    2014-12-01

    Geometric error is the main error of an industrial robot, and it plays a significantly more important role than other error sources. A compensation model for kinematic error is proposed in this article. Many methods can be used to test robot accuracy, which raises the question of how to determine which method is better. In this article, an approach for comparing two robot accuracy testing methods is used: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) were employed to test the robot accuracy according to the standard. Based on the compensation results, the better method, i.e., the one that more clearly improves the robot accuracy, is identified.

  12. Comparison of Phenoldisulfonic Acid, Nondispersive Infrared, and Saltzman methods for the determination of oxides of nitrogen in automotive exhaust

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, G.E.; Huls, T.A.

    1970-10-01

    The Saltzman, Phenoldisulfonic Acid and Nondispersive Infrared methods have been compared for the determination of oxides of nitrogen in automobile exhaust. The main purpose of this investigation was to determine whether the Nondispersive Infrared method could be used as a possible replacement for the Saltzman method. Results show that the Nondispersive Infrared analyzer can be used to measure NO/sub x/ in exhaust gases with advantages over both the Saltzman and Phenoldisulfonic Acid methods. These advantages include simplicity, speed, less complicated analytical technique, and the fact that it is better adapted to be carried out by technicians at the test site.

  13. Helicopter external noise prediction and correlation with flight test

    NASA Technical Reports Server (NTRS)

    Gupta, B. P.

    1978-01-01

    Mathematical analysis procedures for predicting the main and tail rotor rotational and broadband noise are presented. The aerodynamic and acoustical data from Operational Loads Survey (OLS) flight program are used for validating the analysis and noise prediction methodology. For the long method of rotational noise prediction, the spanwise, chordwise, and azimuthwise airloading is used. In the short method, the airloads are assumed to be concentrated at a single spanwise station and for higher harmonics an airloading harmonic exponent of 2.0 is assumed. For the same flight condition, the predictions from long and short methods of rotational noise prediction are compared with the flight test results. The short method correlates as well or better than the long method.

  14. Mobile/android application for QRS detection using zero cross method

    NASA Astrophysics Data System (ADS)

    Rizqyawan, M. I.; Simbolon, A. I.; Suhendra, M. A.; Amri, M. F.; Kusumandari, D. E.

    2018-03-01

    In automatic ECG signal processing, one of the main research topics is QRS complex detection. Detecting the correct QRS complex, or R peak, is important since it is used to derive several other ECG metrics. One robust method for QRS detection is the zero cross method, which adds a high-frequency signal and uses a zero-crossing count to detect the QRS complex, which has a low-frequency oscillation. This paper presents an application of QRS detection using the zero cross algorithm in an Android-based system. The performance of the algorithm in the mobile environment is measured, and the results show that this method is suitable for real-time QRS detection in a mobile application.
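    The zero-crossing idea behind the method can be illustrated with a toy signal: after adding a deterministic high-frequency component, low-frequency high-amplitude segments (QRS-like) dominate the sum and suppress sign changes, so the zero-crossing count drops there. The signal, sampling rate, and amplitude constant below are all synthetic assumptions, not the paper's implementation.

```python
import math

fs = 360                                  # assumed sampling rate (Hz)
n = fs                                    # one second of synthetic signal
sig = [0.05 * math.sin(2 * math.pi * 5 * t / fs) for t in range(n)]
for t in range(170, 190):                 # inject a large QRS-like bump
    sig[t] += 1.0

K = 0.4                                   # amplitude of the added HF component
feat = [s + K * (-1) ** t for t, s in enumerate(sig)]

def zero_cross_count(x, lo, hi):
    return sum(1 for i in range(lo + 1, hi) if x[i - 1] * x[i] < 0)

# Far fewer sign changes around the QRS-like bump than in the baseline.
print(zero_cross_count(feat, 160, 200) < zero_cross_count(feat, 0, 40))
```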

  15. Multi-Sectional Views Textural Based SVM for MS Lesion Segmentation in Multi-Channels MRIs

    PubMed Central

    Abdullah, Bassem A; Younis, Akmal A; John, Nigel M

    2012-01-01

    In this paper, a new technique is proposed for automatic segmentation of multiple sclerosis (MS) lesions from brain magnetic resonance imaging (MRI) data. The technique uses a trained support vector machine (SVM) to discriminate between blocks in MS lesion regions and blocks in non-lesion regions, mainly on the basis of textural features with the aid of other features. The classification is done independently on each of the axial, sagittal, and coronal sectional brain views, and the resulting segmentations are aggregated to provide a more accurate output segmentation. The main contribution of the proposed technique is the use of textural features to detect MS lesions in a fully automated approach that does not rely on manually delineating the MS lesions. In addition, the technique introduces the concept of multi-sectional-view segmentation to produce verified segmentation. The proposed textural-based SVM technique was evaluated using three simulated datasets and more than fifty real MRI datasets, and the results were compared with state-of-the-art methods. The obtained results indicate that the proposed method would be viable for use in clinical practice for the detection of MS lesions in MRI. PMID:22741026

  16. Lebanese household carbon footprint: Measurements, analysis and challenges

    NASA Astrophysics Data System (ADS)

    Nasr, Rawad; Tall, Ibrahim; Nachabe, Nour; Chaaban, Farid

    2016-07-01

    The main purpose of this paper is to estimate the carbon footprint of a typical Lebanese household and compare the results with international standards and trends. The estimate reflects the impact of daily Lebanese household activities on the environment in terms of carbon dioxide emissions. The method used to estimate the carbon emissions is based on gathering the primary footprints from various household activities. Another proposed method, which provides more accurate results, estimates emissions based on the secondary footprint, which reflects the total emissions not only from regular activities but also from a lifecycle perspective. Practical and feasible solutions are proposed to help reduce the amount of CO2 emissions per household. This would lead to better air quality, monetary savings, and reduced greenhouse gas emissions, and would help ensure the sustainability and prosperity of future generations. A detailed survey was conducted in which the questions focused mainly on energy, food, and transportation. The fourteen questions were addressed to one hundred families in different Lebanese regions from different social and economic backgrounds. This diversity constitutes a reflective sample of the actual Lebanese society, allowing the gathered results to be extrapolated to a national level.

  17. Model potentials for main group elements Li through Rn

    NASA Astrophysics Data System (ADS)

    Sakai, Yoshiko; Miyoshi, Eisaku; Klobukowski, Mariusz; Huzinaga, Sigeru

    1997-05-01

    Model potential (MP) parameters and valence basis sets were systematically determined for the main group elements Li through Rn. For alkali and alkaline-earth metal atoms, the outermost core (n-1)p electrons were treated explicitly together with the ns valence electrons. For the remaining atoms, only the valence ns and np electrons were treated explicitly. The major relativistic effects at the level of Cowan and Griffin's quasi-relativistic Hartree-Fock method (QRHF) were incorporated in the MPs for all atoms heavier than Kr. The valence orbitals thus obtained have inner nodal structure. The reliability of the MP method was tested in calculations for X-, X, and X+ (X=Br, I, and At) at the SCF level and the results were compared with the corresponding values given by the numerical HF (or QRHF) calculations. Calculations that include electron correlation were done for X-, X, and X+ (X=Cl and Br) at the SDCI level and for As2 at the CASSCF and MRSDCI levels. These results were compared with those of all-electron (AE) calculations using the well-tempered basis sets. Close agreement between the MP and AE results was obtained at all levels of the treatment.

  18. Calculation of the compounded uncertainty of 14C AMS measurements

    NASA Astrophysics Data System (ADS)

    Nadeau, Marie-Josée; Grootes, Pieter M.

    2013-01-01

    The correct method to calculate conventional 14C ages from carbon isotopic ratios was summarised 35 years ago by Stuiver and Polach (1977) and is now accepted as the only method to calculate 14C ages. There is, however, no consensus regarding the treatment of AMS data, mainly of the uncertainty of the final result. The estimation and treatment of machine background, process blank, and/or in situ contamination is not uniform between laboratories, leading to differences in 14C results, mainly for older ages. As Donahue (1987) and Currie (1994), among others, mentioned, some laboratories find it important to use the scatter of several measurements as the uncertainty, while others prefer to use Poisson statistics. The contribution of the scatter of the standards, machine background, process blank, and in situ contamination to the uncertainty of the final 14C result is also treated in different ways. In the early years of AMS, several laboratories found it important to describe their calculation process in detail. In recent years, this practice has declined. We present an overview of the calculation process for 14C AMS measurements, looking at calculation practices published from the beginning of AMS until the present.
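    The Stuiver and Polach (1977) convention referred to above fixes the age equation as t = -8033 ln(F), where F is the normalized fraction modern and 8033 a is the Libby mean life; a minimal sketch with simple first-order error propagation for the uncertainty (the background and blank corrections debated in the abstract are deliberately omitted):

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years, derived from the Libby half-life of 5568 a

def c14_age(f_modern, f_sigma):
    """Conventional 14C age per Stuiver and Polach: t = -8033 ln(F),
    with the uncertainty propagated as sigma_t = 8033 * sigma_F / F."""
    age = -LIBBY_MEAN_LIFE * math.log(f_modern)
    sigma = LIBBY_MEAN_LIFE * f_sigma / f_modern
    return age, sigma

age, sigma = c14_age(0.5, 0.005)  # F = 0.5 corresponds to one Libby half-life
```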

  19. A two dimensional power spectral estimate for some nonstationary processes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Smith, Gregory L.

    1989-01-01

    A two dimensional estimate for the power spectral density of a nonstationary process is being developed. The estimate will be applied to helicopter noise data, which are clearly nonstationary. The acoustic pressure from the isolated main rotor and isolated tail rotor is known to be periodically correlated (PC), and the combined noise from the main and tail rotors is assumed to be correlation autoregressive (CAR). The results of this nonstationary analysis will be compared with the current method of assuming that the data are stationary and analyzing them as such. Another method of analysis is to introduce a random phase shift into the data, as shown by Papoulis, to produce a time history which can then be accurately modeled as stationary. This method will also be investigated for the helicopter data. A method used to determine the period of a PC process when the period is not known is discussed. The period of a PC process must be known in order to produce an accurate spectral representation of the process. The spectral estimate is developed, and the bias and variability of the estimate are discussed. Finally, the current method for analyzing nonstationary data is compared to that of using a two dimensional spectral representation. In addition, the method of phase shifting the data is examined.
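    One simple way to estimate the period of a PC process, sketched here on synthetic data: the instantaneous power of a variance-modulated noise carries a spectral line at the modulation frequency, so the period can be read off the FFT of the squared signal. This is an illustrative approach under stated assumptions, not necessarily the method developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_period = 4096, 64.0
t = np.arange(n)
# Periodically correlated (PC) process: white noise whose standard
# deviation is modulated with period `true_period`.
x = (1.0 + 0.8 * np.cos(2 * np.pi * t / true_period)) * rng.standard_normal(n)

# The squared signal has a strong spectral line at 1/period; remove the
# mean so the DC bin does not dominate, then pick the peak frequency.
power = np.abs(np.fft.rfft(x**2 - np.mean(x**2)))
freqs = np.fft.rfftfreq(n)
est_period = 1.0 / freqs[np.argmax(power)]
```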

  20. Homogenization versus homogenization-free method to measure muscle glycogen fractions.

    PubMed

    Mojibi, N; Rasouli, M

    2016-12-01

    Glycogen is extracted from animal tissues with or without homogenization, using cold perchloric acid. Three methods were compared for the determination of glycogen in rat muscle at different physiological states. Two groups of five rats were kept at rest or subjected to 45 minutes of muscular activity. The glycogen fractions were extracted and measured using the three methods. The data from the homogenization method show that total glycogen decreased following 45 min of physical activity and that the change occurred entirely in acid soluble glycogen (ASG), while acid insoluble glycogen (AIG) did not change significantly. Similar results were obtained using the "total glycogen fractionation" method. The findings of the "homogenization-free" method indicate that AIG was the main portion of muscle glycogen and that the majority of changes occurred in the AIG fraction. The results of the "homogenization" method are identical with those of "total glycogen fractionation" but differ from the "homogenization-free" protocol. The ASG fraction is the major portion of muscle glycogen and is the more metabolically active form.

  1. The comparative analysis of the current-meter method and the pressure-time method used for discharge measurements in the Kaplan turbine penstocks

    NASA Astrophysics Data System (ADS)

    Adamkowski, A.; Krzemianowski, Z.

    2012-11-01

    The paper presents experiences gathered during many years of utilizing the current-meter and pressure-time methods for flow rate measurements in many hydropower plants. The integration techniques used in both of these methods differ from the recommendations contained in the relevant international standards, mainly the graphical and arithmetical ones. The results of the comparative analysis of both methods, applied at the same time during the hydraulic performance tests of two Kaplan turbines in one of the Polish hydropower plants, are presented in the final part of the paper. In the case of the pressure-time method, the concrete penstocks of the tested turbines required installing special measuring instrumentation inside the penstock. The comparison has shown a satisfactory agreement between the results of discharge measurements executed using both considered methods. Maximum differences between the discharge values have not exceeded 1.0% and the average differences have not been greater than 0.5%.
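    The pressure-time (Gibson) method infers the pre-closure discharge from the integral of the pressure difference recorded between two penstock sections during valve closure, Q = A/(ρL) ∫Δp dt; a minimal numerical sketch that neglects the friction-loss and leakage-flow terms the full standardized method accounts for:

```python
import numpy as np

def pressure_time_discharge(dp, dt, area, length, rho=1000.0):
    """Simplified pressure-time estimate of the initial discharge:
    Q = A / (rho * L) * integral of the measured pressure difference
    over the closure interval, integrated with the trapezoidal rule.
    Friction losses and residual leakage flow are neglected here."""
    dp = np.asarray(dp, float)
    integral = float(np.sum(0.5 * (dp[:-1] + dp[1:])) * dt)
    return area / (rho * length) * integral

# Synthetic record: a constant 5 kPa differential held for about 2 s
# in a penstock section of 1 m^2 cross-section and 10 m length.
q = pressure_time_discharge([5000.0] * 200, 0.01, area=1.0, length=10.0)
```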

  2. Development of a graphical method for choosing the optimal mode of traffic light

    NASA Astrophysics Data System (ADS)

    Novikov, A. N.; Katunin, A. A.; Novikov, I. A.; Kravchenko, A. A.; Shevtsova, A. G.

    2018-05-01

    Changing the transportation infrastructure to improve the main characteristics of the transportation flow is a key problem in transportation planning; the main question therefore lies in the ability to plan changes in the main indicators over the long term. In this investigation, an analysis of the city's population has been performed and the most difficult transportation segment has been identified. During its identification, the main characteristics of the transportation flow have been established. To evaluate these characteristics up to 2025, the available methods of establishing changes in their values have been analyzed. From this analysis of methods for evaluating the change in intensity, based on the method of extrapolation, three scenarios of the development of the transportation system have been identified. It has been established that the most favorable method of controlling the transportation flow at the entrance to the city is long-term control of the traffic system. Based on the investigations of foreign scientists and a mathematical analysis of the changes in intensity on the main routes of the given road, the authors put forward, for the first time, a method of graphically choosing the required control plan. The effectiveness of the proposed organization scheme of the transportation system has been evaluated in the Transyt-14 program, with an analysis of changes in the main characteristics of the transportation flow.
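    The extrapolation step described above can be sketched as a linear trend fitted to observed intensities and projected forward to 2025; the counts below are hypothetical, not the paper's data, and a linear trend is only one of the scenario assumptions such an analysis might use:

```python
import numpy as np

# Hypothetical annual peak-hour intensities (veh/h) on the analysed segment.
years = np.array([2012, 2013, 2014, 2015, 2016, 2017], float)
intensity = np.array([1480, 1530, 1585, 1640, 1690, 1745], float)

# Extrapolation: fit a degree-1 least-squares trend and project to 2025.
slope, intercept = np.polyfit(years, intensity, 1)
forecast_2025 = slope * 2025 + intercept
```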

  3. Development of salt production technology using prism greenhouse method

    NASA Astrophysics Data System (ADS)

    Guntur, G.; Jaziri, A. A.; Prihanto, A. A.; Arisandi, D. M.; Kurniawan, A.

    2018-01-01

    The main problem of salt production in Indonesia is low productivity and quality, because the technology commonly used by Indonesian salt farmers is the traditional method. This research aims to increase salt production by using the prism greenhouse method. The prism greenhouse method is a salt production system combining several salt production technologies, including geomembrane, threaded filter, and prism greenhouse technology. This research used a descriptive method. The results of this study were that productivity increased threefold and the quality of the salt produced also increased, with the NaCl content rising from 85% to 95%. In addition, salt production with the prism greenhouse method has several advantages, such as faster harvest time, weather resistance, ease of use, and higher profit than traditional methods.

  4. A Model for QoS – Aware Wireless Communication in Hospitals

    PubMed Central

    Alavikia, Zahra; Khadivi, Pejman; Hashemi, Masoud Reza

    2012-01-01

    In the past decade, research regarding wireless applications in electronic health (e-Health) services has been increasing. The main benefits of using wireless technologies in e-Health applications are simple communications, fast delivery of medical information, reduced treatment cost and also a reduced error rate among medical workers. However, using wireless communications in sensitive healthcare environments raises electromagnetic interference (EMI) concerns. One of the most effective methods to avoid the EMI problem is power management. To this end, several methods have been proposed in the literature to reduce EMI effects in healthcare environments. However, using these methods may result in inaccurate interference avoidance and may also increase network complexity. To overcome these problems, we introduce two approaches based on per-user location and hospital sectoring for power management in sensitive healthcare environments. Although reducing transmission power can avoid EMI, it causes the number of successful message deliveries to the access point to decrease and, hence, the quality of service requirements cannot be met. In this paper, we propose the use of relays to decrease the probability of outage in the aforementioned scenario. Relay placement is the main factor in realizing the benefits of relay stations in the network and, therefore, we use the genetic algorithm to compute the optimum positions of a fixed number of relays. We have considered delay and maximum blind point coverage as the two main criteria in the relay placement problem. The performance of the proposed method in outage reduction is investigated through simulations. PMID:23493832

  5. A Model for QoS - Aware Wireless Communication in Hospitals.

    PubMed

    Alavikia, Zahra; Khadivi, Pejman; Hashemi, Masoud Reza

    2012-01-01

    In the past decade, research regarding wireless applications in electronic health (e-Health) services has been increasing. The main benefits of using wireless technologies in e-Health applications are simple communications, fast delivery of medical information, reduced treatment cost and also a reduced error rate among medical workers. However, using wireless communications in sensitive healthcare environments raises electromagnetic interference (EMI) concerns. One of the most effective methods to avoid the EMI problem is power management. To this end, several methods have been proposed in the literature to reduce EMI effects in healthcare environments. However, using these methods may result in inaccurate interference avoidance and may also increase network complexity. To overcome these problems, we introduce two approaches based on per-user location and hospital sectoring for power management in sensitive healthcare environments. Although reducing transmission power can avoid EMI, it causes the number of successful message deliveries to the access point to decrease and, hence, the quality of service requirements cannot be met. In this paper, we propose the use of relays to decrease the probability of outage in the aforementioned scenario. Relay placement is the main factor in realizing the benefits of relay stations in the network and, therefore, we use the genetic algorithm to compute the optimum positions of a fixed number of relays. We have considered delay and maximum blind point coverage as the two main criteria in the relay placement problem. The performance of the proposed method in outage reduction is investigated through simulations.
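    The genetic-algorithm relay placement described in the two records above can be sketched as follows, with hypothetical blind-spot coordinates and only the coverage criterion (the delay criterion, the channel model and the exact GA operators of the paper are not given in the abstract and are replaced by generic choices here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical blind-spot coordinates (metres) in a 100 m x 100 m area.
blind = rng.uniform(0, 100, size=(40, 2))
N_RELAYS, POP, GENS = 3, 60, 120

def cost(relays):
    """Worst-case distance from any blind point to its nearest relay."""
    d = np.linalg.norm(blind[:, None, :] - relays[None, :, :], axis=2)
    return float(d.min(axis=1).max())

# Genetic algorithm: elitist selection, uniform crossover over whole
# relay positions, Gaussian mutation, positions clipped to the area.
pop = rng.uniform(0, 100, size=(POP, N_RELAYS, 2))
for _ in range(GENS):
    order = np.argsort([cost(p) for p in pop])
    elite = pop[order[: POP // 2]]
    i1 = rng.integers(0, len(elite), POP - len(elite))
    i2 = rng.integers(0, len(elite), POP - len(elite))
    mask = rng.random((POP - len(elite), N_RELAYS, 1)) < 0.5
    children = np.where(mask, elite[i1], elite[i2])   # uniform crossover
    children = children + rng.normal(0, 3, children.shape)  # mutation
    pop = np.concatenate([elite, np.clip(children, 0, 100)])

best = min(pop, key=cost)
```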

  6. Research of Medical Expenditure among Inpatients with Unstable Angina Pectoris in a Single Center

    PubMed Central

    Wu, Suo-Wei; Pan, Qi; Chen, Tong; Wei, Liang-Yu; Xuan, Yong; Wang, Qin; Li, Chao; Song, Jing-Chen

    2017-01-01

    Background: With the rising incidence as well as the medical expenditure among patients with unstable angina pectoris, this research aimed to investigate inpatient medical expenditure through the combination of diagnosis-related groups (DRGs) among patients with unstable angina pectoris in a Grade A tertiary hospital, in order to establish reference standards of medical costs for the diagnosis. Methods: Single-factor analysis and the multiple linear stepwise regression method were used to investigate 3933 cases between 2014 and 2016 in Beijing Hospital (China) whose main diagnosis was defined as unstable angina pectoris, to determine the main factors influencing inpatient medical expenditure, and the decision tree method was adopted to establish the model of DRGs grouping combinations. Results: The major influential factors of inpatient medical expenditure included age, operative method, therapeutic effects as well as comorbidity and complications (CCs) of the disease, and the 3933 cases were divided into ten DRGs by four factors: age, CCs, therapeutic effects, and the type of surgery, with corresponding inpatient medical expenditure standards set up. Nonparametric tests on medical costs among the different groups were all significant (P < 0.001, by Kruskal-Wallis test), with R2 = 0.53 and coefficient of variation (CV) = 0.524. Conclusions: The classification of DRGs adopting the type of surgery as the main branch node to develop cost control standards for inpatient treatment of unstable angina pectoris is conducive to standardizing the diagnosis and treatment behaviors of the hospital and reducing the economic burden on patients. PMID:28639566
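    The grouping quality statistics reported above, R2 (share of cost variance explained by the groups, i.e. reduction in variance) and CV, can be computed as follows; the toy costs and the choice of averaging per-group CVs are illustrative assumptions, as the abstract does not state the exact CV aggregation used:

```python
import numpy as np

def grouping_stats(costs, groups):
    """Evaluate a DRG-style grouping: R2 is one minus the within-group
    sum of squares over the total sum of squares; CV is here taken as
    the mean of the per-group coefficients of variation (std/mean)."""
    costs = np.asarray(costs, float)
    groups = np.asarray(groups)
    total_ss = ((costs - costs.mean()) ** 2).sum()
    within_ss, cvs = 0.0, []
    for g in np.unique(groups):
        c = costs[groups == g]
        within_ss += ((c - c.mean()) ** 2).sum()
        cvs.append(c.std() / c.mean())
    return 1.0 - within_ss / total_ss, float(np.mean(cvs))

# Toy data: two well-separated cost groups give a high R2 and low CV.
r2, cv = grouping_stats([10, 12, 11, 30, 32, 31],
                        ["A", "A", "A", "B", "B", "B"])
```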

  7. LiNbO3: A photovoltaic substrate for massive parallel manipulation and patterning of nano-objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrascosa, M.; García-Cabañes, A.; Jubera, M.

    The application of evanescent photovoltaic (PV) fields, generated by visible illumination of Fe:LiNbO3 substrates, for parallel massive trapping and manipulation of micro- and nano-objects is critically reviewed. The technique has often been referred to as photovoltaic or photorefractive tweezers. The main advantage of the new method is that the involved electrophoretic and/or dielectrophoretic forces do not require any electrodes, and large-scale manipulation of nano-objects can be easily achieved using the patterning capabilities of light. The paper describes the experimental techniques for particle trapping and the main reported experimental results obtained with a variety of micro- and nano-particles (dielectric and conductive) and different illumination configurations (single beam, holographic geometry, and spatial light modulator projection). The report also pays attention to the physical basis of the method, namely, the coupling of the evanescent photorefractive fields to the dielectric response of the nano-particles. The role of a number of physical parameters such as the contrast and spatial periodicities of the illumination pattern or the particle deposition method is discussed. Moreover, the main properties of the obtained particle patterns in relation to potential applications are summarized, and first demonstrations reviewed. Finally, the PV method is discussed in comparison to other patterning strategies, such as those based on the pyroelectric response and the electric fields associated with domain poling of ferroelectric materials.

  8. Phase transition of a new lattice hydrodynamic model with consideration of on-ramp and off-ramp

    NASA Astrophysics Data System (ADS)

    Zhang, Geng; Sun, Di-hua; Zhao, Min

    2018-01-01

    A new traffic lattice hydrodynamic model with consideration of on-ramp and off-ramp is proposed in this paper. The influence of the on-ramp and off-ramp on the stability of the main road is uncovered by theoretical analysis and computer simulation. Through linear stability theory, the neutral stability condition of the new model is obtained, and the results show that the unstable region in the phase diagram is enlarged by considering the on-ramp effect but shrunk by considering the off-ramp effect. The mKdV equation near the critical point is derived via the nonlinear reductive perturbation method, and the occurrence of the traffic jamming transition can be described by the kink-antikink soliton solution of the mKdV equation. The simulation results of the space-time evolution of traffic density waves show that the on-ramp can worsen the traffic stability of the main road, while the off-ramp helps to stabilize the traffic flow of the main road.

  9. Simple optical method of qualitative assessment of sperm motility: preliminary results

    NASA Astrophysics Data System (ADS)

    Sozanska, Agnieszka; Kolwas, Krystyna; Galas, Jacek; Blocki, Narcyz; Czyzewski, Adam

    2005-09-01

    The examination of the quality of sperm ejaculate is one of the most important steps in the artificial fertilization procedure. The main aim of semen storage centres is to select semen of the best quality for fertilization. Reliable information about sperm motility is also one of the most important parameters for in vitro laboratory procedures. There exist very expensive automated methods for semen analysis, but they are out of reach for most laboratories and semen storage centres. The motivation for this study is to elaborate a simple, cheap, objective and repeatable method for semen motility assessment. The method enables the detection of even small changes in motility introduced by medical, physical or chemical factors. To test the reliability of the method we used cryopreserved bull semen from the Lowicz Semen Storage Centre. The examined sperm specimen was warmed in a water bath and then centrifuged. The best semen was collected by the swim-up technique and diluted to a proper concentration. Several semen concentrations and dilutions were tested in order to find the probe parameters giving repeatable results. For semen visualization we used a phase-contrast microscope with a CCD camera. A PC computer was used to acquire and analyse the data. The microscope table, equipped with a microscope glass pool 0.7 mm deep instead of a conventional plane microscope slide, was stabilised at a temperature of 37°C. The main idea of our method is based on numerical processing of the optical contrast of the sperm images, which illustrates the dynamics of the sperm cells' movement, and on appropriate analysis of the grey-scale level of the superimposed images. An elaborated numerical algorithm allows us to find the relative amount of motile sperm cells. The proposed method of sperm motility assessment appears to be objective and repeatable.
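    The superimposed-image idea can be sketched as frame differencing: motile cells change the grey level between consecutive frames while immotile ones do not, so the fraction of changed pixels acts as a crude motility index. The toy frame stack and threshold below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def motility_index(frames, threshold=10):
    """Fraction of pixels whose grey level changes by more than
    `threshold` between consecutive frames of a grayscale stack."""
    frames = np.asarray(frames, float)
    diffs = np.abs(np.diff(frames, axis=0))
    return float((diffs > threshold).mean())

# Toy stack: one bright 'cell' moves one pixel per frame on a dark field,
# so every consecutive frame pair has exactly two changed pixels.
frames = np.zeros((4, 16, 16))
for i in range(4):
    frames[i, 8, 4 + i] = 255.0
idx = motility_index(frames)
```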

  10. Validity of the assessment method of skeletal maturation by cervical vertebrae: a systematic review and meta-analysis.

    PubMed

    Cericato, G O; Bittencourt, M A V; Paranhos, L R

    2015-01-01

    To perform a systematic review with meta-analysis to answer the question: is the cervical vertebrae maturation index (CVMI) effective as a replacement for the hand-wrist radiograph (gold standard) in determining the pubertal growth spurt in patients undergoing bone growth? A search in three databases was performed, selecting studies that compared one of the two main assessment methods for cervical vertebrae (Hassel B, Farman AG. Skeletal maturation evaluation using cervical vertebrae. Am J Orthod Dentofacial Orthop 1995; 107: 58-66, or Baccetti T, Franchi L, McNamara JA Jr. An improved version of the cervical vertebral maturation (CVM) method for the assessment of mandibular growth. Angle Orthod 2002; 72: 316-23) to a carpal assessment method. The main methodological data from each of the texts were collected and tabulated; the meta-analysis of the obtained correlation coefficients was then performed. 19 articles were selected from an initial 206 articles collected. Regardless of the method used, the results of the meta-analysis showed that every article selected presented a positive correlation between skeletal maturation assessment performed by cervical vertebrae and carpal methods, with a discrepancy of values between genders indicating a higher correlation for females (0.925; 0.878) than for males (0.879; 0.842). When the assessment was performed without gender separation, the correlation was significant (0.592; 0.688) but lower than in the cases where genders were separated. With the results of this meta-analysis, it is safe to affirm that both CVMIs used in the present study are reliable replacements for the hand-wrist radiograph in predicting the pubertal growth spurt, considering that the highest values were found in female samples, especially with the method by Hassel and Farman.
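    Pooling correlation coefficients across studies is commonly done via Fisher's z transform with n - 3 weights, a standard fixed-effect approach; a minimal sketch in which the sample sizes are hypothetical and the correlations merely echo values quoted above (the review's actual pooling model is not stated in the abstract):

```python
import math

def pool_correlations(rs, ns):
    """Fixed-effect pooling of correlation coefficients: transform each
    r with Fisher's z = atanh(r), average with weights n - 3, and
    back-transform the weighted mean with tanh."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    ws = [n - 3 for n in ns]
    zbar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(zbar)

# Correlations from the abstract, with made-up per-study sample sizes.
r_pooled = pool_correlations([0.925, 0.878, 0.842], [40, 55, 60])
```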

  11. Techniques for Sea Ice Characteristics Extraction and Sea Ice Monitoring Using Multi-Sensor Satellite Data in the Bohai Sea-Dragon 3 Programme Final Report (2012-2016)

    NASA Astrophysics Data System (ADS)

    Zhang, Xi; Zhang, Jie; Meng, Junmin

    2016-08-01

    The objectives of the Dragon-3 programme (ID: 10501) are to develop methods for classifying sea ice types and retrieving ice thickness based on multi-sensor data. In this final results paper, we give a brief introduction to our research work and main results. Key words: the Bohai Sea ice, sea ice, optical and

  12. Comment on "Unification of multiqubit polygamy inequalities"

    NASA Astrophysics Data System (ADS)

    Song, Wei; Zhao, Jun-Long; Yu, Long-Bao; Zhang, Li-Hua

    2017-05-01

    Recently, Kim established a unified view of the polygamy of multiqubit entanglement [Phys. Rev. A 85, 032335 (2012), 10.1103/PhysRevA.85.032335]. In order to prove the main results, Kim first proposed an important property, which is stated in Lemma 2. We point out that the proof of Lemma 2 is flawed because of some errors in its derivations. Furthermore, we present an improved method to prove the original results.

  13. Linear Programming and Its Application to Pattern Recognition Problems

    NASA Technical Reports Server (NTRS)

    Omalley, M. J.

    1973-01-01

    Linear programming and linear programming like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.

  14. Some new traveling wave exact solutions of the (2+1)-dimensional Boiti-Leon-Pempinelli equations.

    PubMed

    Qi, Jian-ming; Zhang, Fu; Yuan, Wen-jun; Huang, Zi-feng

    2014-01-01

    We employ the complex method to obtain all meromorphic exact solutions of the complex (2+1)-dimensional Boiti-Leon-Pempinelli equations (BLP system of equations). The idea introduced in this paper can be applied to other nonlinear evolution equations. Our results show that all rational and simply periodic traveling wave exact solutions of the BLP equations are solitary wave solutions, that the complex method is simpler than other methods, and that there exist some rational solutions u_{r,2}(z) and simply periodic solutions u_{s,2-6}(z) which are not only new but also not degenerated successively from the elliptic function solutions. We believe that this method should play an important role in finding exact solutions in mathematical physics. For these new traveling wave solutions, we give some computer simulations to illustrate our main results.

  15. Ultrasound-Assisted Extraction of Stilbenes from Grape Canes.

    PubMed

    Piñeiro, Zulema; Marrufo-Curtido, Almudena; Serrano, Maria Jose; Palma, Miguel

    2016-06-16

    An analytical ultrasound-assisted extraction (UAE) method has been optimized and validated for the rapid extraction of stilbenes from grape canes. The influence of sample pre-treatment (oven or freeze-drying) and of several extraction variables (solvent, sample-solvent ratio and extraction time, among others) on the extraction process was analyzed. The new method allowed the main stilbenes in grape canes to be extracted in just 10 min, with an extraction temperature of 75 °C and 60% ethanol in water as the extraction solvent. Validation of the extraction method was based on analytical properties. The resulting RSDs (n = 5) for interday/intraday precision were less than 10%. Furthermore, the method was successfully applied in the analysis of 20 different grape cane samples. The results showed that grape cane byproducts are potential sources of bioactive compounds of interest for the pharmaceutical and food industries.

  16. An Applied Method for Predicting the Load-Carrying Capacity in Compression of Thin-Wall Composite Structures with Impact Damage

    NASA Astrophysics Data System (ADS)

    Mitrofanov, O.; Pavelko, I.; Varickis, S.; Vagele, A.

    2018-03-01

    The necessity for considering both strength criteria and postbuckling effects in calculating the load-carrying capacity in compression of thin-wall composite structures with impact damage is substantiated. An original applied method ensuring solution of these problems with an accuracy sufficient for practical design tasks is developed. The main advantage of the method is its applicability in terms of computing resources and the set of initial data required. The results of application of the method to solution of the problem of compression of fragments of thin-wall honeycomb panel damaged by impacts of various energies are presented. After a comparison of calculation results with experimental data, a working algorithm for calculating the reduction in the load-carrying capacity of a composite object with impact damage is adopted.

  17. Measurement and calibration of differential Mueller matrix of distributed targets

    NASA Technical Reports Server (NTRS)

    Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.

    1992-01-01

    A rigorous method for calibrating polarimetric backscatter measurements of distributed targets is presented. By characterizing the radar distortions over the entire main lobe of the antenna, the differential Mueller matrix is derived from the measured scattering matrices with a high degree of accuracy. It is shown that the radar distortions can be determined by measuring the polarimetric response of a metallic sphere over the main lobe of the antenna. Comparison of the results obtained with the new algorithm against the results derived from the old calibration method shows that the discrepancy between the two methods is less than 1 dB for the backscattering coefficients. The discrepancy is more drastic for the phase-difference statistics, indicating that removal of the radar distortions from the cross products of the scattering matrix elements cannot be accomplished with the traditional calibration methods.

  18. Magnetic resonance image segmentation using multifractal techniques

    NASA Astrophysics Data System (ADS)

    Yu, Yue-e.; Wang, Fang; Liu, Li-lin

    2015-11-01

    In order to delineate target regions in magnetic resonance images (MRI) with diseases, the classical multifractal spectrum (MFS)-based segmentation method and the latest multifractal detrended fluctuation spectrum (MF-DFS)-based segmentation method are employed in our study. One of our main conclusions from the experiments is that both multifractal-based methods are workable for handling MRIs. The best result is obtained by the MF-DFS-based method using Lh10 as the local characteristic. The anti-noise experiments also support this conclusion. This interesting finding shows that the features are better represented by the strong fluctuations than by the weak fluctuations in MRIs. By comparing the multifractal nature of lesion and non-lesion areas on the basis of the segmentation results, another interesting finding is that the fluctuation of gray values in the lesion area is much more severe than that in the non-lesion area.
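    MF-DFS builds on detrended fluctuation analysis; a minimal monofractal sketch of the fluctuation function F(s) (integrate the series, detrend each window linearly, take the RMS of the residuals), checked on white noise where F(s) scales as s^0.5. MF-DFS additionally weights strong versus weak fluctuations through q-th order moments, which this sketch omits:

```python
import numpy as np

def dfa_fluctuation(x, scale):
    """DFA fluctuation F(s): cumulative-sum the centred series, split
    it into non-overlapping windows of length `scale`, remove a linear
    trend per window, and return the RMS of the residuals."""
    y = np.cumsum(x - np.mean(x))
    t = np.arange(scale)
    resid = []
    for i in range(len(y) // scale):
        seg = y[i * scale:(i + 1) * scale]
        coef = np.polyfit(t, seg, 1)
        resid.append(seg - np.polyval(coef, t))
    return float(np.sqrt(np.mean(np.concatenate(resid) ** 2)))

rng = np.random.default_rng(2)
x = rng.standard_normal(4096)
scales = [16, 32, 64, 128]
fs = [dfa_fluctuation(x, s) for s in scales]
# Scaling exponent from the log-log slope; ~0.5 for white noise.
alpha = np.polyfit(np.log(scales), np.log(fs), 1)[0]
```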

  19. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nedic, Vladimir, E-mail: vnedic@kg.ac.rs; Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs; Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs

    2014-11-15

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the proposed structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period Leq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise using the originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to any other statistical method in traffic noise level prediction. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user-friendly software package. • The results are compared with classical statistical methods. • The results show much better predictive capabilities of the ANN model.
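    The ANN regression described above can be sketched as a small one-hidden-layer network trained on synthetic flow/speed/Leq data; the noise model coefficients, architecture and training schedule below are illustrative assumptions, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: equivalent level Leq (dBA) as a function of traffic
# flow (veh/h) and average speed (km/h); coefficients are made up.
flow = rng.uniform(100, 2000, 400)
speed = rng.uniform(20, 90, 400)
leq = 40 + 10 * np.log10(flow) + 0.05 * speed + rng.normal(0, 0.3, 400)

X = np.column_stack([flow, speed])
X = (X - X.mean(0)) / X.std(0)              # normalise inputs
y = (leq - leq.mean())[:, None]             # centred target

# One tanh hidden layer trained with full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - y                   # prediction error
    dh = (err @ W2.T) * (1 - h ** 2)        # backprop through tanh
    W2 -= lr * (h.T @ err) / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * (X.T @ dh) / len(X); b1 -= lr * dh.mean(0)

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```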

  20. Critical evaluation of the potential energy surface of the CH3 + HO2 reaction system

    NASA Astrophysics Data System (ADS)

    Faragó, E. P.; Szőri, M.; Owen, M. C.; Fittschen, C.; Viskolcz, B.

    2015-02-01

    The CH3 + HO2 reaction system was studied theoretically by the newly developed CHEAT1 protocol, based on the HEAT345-(Q) method, including the combined singlet and triplet potential energy surfaces. The main simplification is based on the CCSDT(Q)/cc-pVDZ calculation, which is computationally inexpensive. Despite this economic and black-box treatment of higher excitations, the results are within 0.6 kcal/mol of the highly accurate literature values. Furthermore, CHEAT1 surpassed the popular standard composite methods such as CBS-4M, CBS-QB3, CBS-APNO, G2, G3, G3MP2B3, G4, W1U, and W1BD, mainly due to their poor performance in characterizing transition states (TS). For TS structures, various standard DFT and MP2 methods have also been tested against the resulting CCSD/cc-pVTZ geometry of our protocol. A fairly good agreement was only found in the cases of the B2PLYP and BHandHLYP functionals, which were able to reproduce the structures of all TS studied within a maximum absolute deviation of 7%. The complex reaction mechanism was extended by three new low-lying reaction channels: indirect water elimination from CH3OOH yielding formaldehyde, H2 elimination yielding methylene peroxide, and an H-shift forming methanol and reactive triplet oxygen in the third channel. The CHEAT1 protocol based on the HEAT345-(Q) method is a robust, general, and cheap alternative for highly accurate kinetic calculations.

  1. Method and machine for high strength undiffused brushless operation

    DOEpatents

    Hsu, John S.

    2003-06-03

    A brushless electric machine (30) having a stator (31) and a rotor (32) and a main air gap (34), the rotor (32) having pairs of rotor pole portions (22b, 22c, 32f, 32l) disposed at least partly around the axis of rotation (32p) and facing the main air gap (24b, 24c, 34), at least one stationary winding (20b, 20c, 33b) separated from the rotor (22b, 22c, 32) by a secondary air gap (23b, 23c, 35) so as to induce a rotor-side flux in the rotor (22b, 22c, 32) which controls a resultant flux in the main air gap (24b, 24c, 34). PM material (27b, 27c) is disposed in spaces between the rotor pole portions (22b, 22c, 32f, 32l) to inhibit the rotor-side flux from leaking from said pole portions (22b, 22c, 32f, 32l) prior to reaching the main air gap (24b, 24c, 34). By selecting the direction of current in the stationary winding (20b, 20c, 33b) both flux enhancement and flux weakening are provided for the main air gap (24b, 24c, 34). The stationary windings (31a, 33b) which are used for both primary and secondary excitation allow for easier adaptation to cooling systems as described. A method of non-diffused flux enhancement and flux weakening is also disclosed.

  2. [Determination and pharmacokinetics of main components for Psoralea corylifolia-Myristica fragrans drug pair by using UPLC-MS/MS].

    PubMed

    Gao, Jia-Rong; Xu, Shuang-Zhi; Han, Yan-Quan; Wei, Liang-Bing; Jiang, Hui; Song, Jun-Mei; Xue, Xue

    2017-05-01

    To conduct multiple-reaction monitoring (MRM) quantitative analysis with an ultra-high performance liquid chromatography coupled with mass spectrometry method (UPLC-MS/MS), determine the concentrations of psoralen, isopsoralen, bakuchiol and dehydrodiisoeugenol in plasma under positive ion mode with chloramphenicol as the internal standard, and investigate the pharmacokinetic process of the main components before and after oral administration of the drug pair Psoralea corylifolia-Myristica fragrans. Thirty-six SD rats were randomly divided into three groups (A, B, C) and received P. corylifolia extract, P. corylifolia-M. fragrans extract, and M. fragrans extract, respectively, by intragastric administration. The plasma samples were collected at different time points. In the plasma samples, psoralen, isopsoralen, bakuchiol and dehydrodiisoeugenol showed good linear relationships within concentration ranges of 0.098125-39.25, 0.08437-33.75, 0.046875-18.75, and 0.11-2.2 mg·L⁻¹, respectively. The precision and stability results showed that the determination method of plasma concentration for these components was stable and reliable. The pharmacokinetic parameters obtained by DAS 2.0 showed varying differences before and after compatibility. According to the experimental results, the compatibility of P. corylifolia and M. fragrans can significantly impact the pharmacokinetic process of the main components, expand their distribution and accelerate their metabolism and elimination in vivo. Copyright© by the Chinese Pharmaceutical Association.
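
    The pharmacokinetic parameters in the record come from the DAS 2.0 package; to make the quantities concrete, here is a minimal non-compartmental sketch with hypothetical concentration-time values (not data from the study), computing Cmax, Tmax and AUC by the linear trapezoidal rule:

```python
# Non-compartmental PK parameters from a concentration-time profile.
# All data are hypothetical, for illustration only.
times = [0.25, 0.5, 1, 2, 4, 8, 12, 24]           # sampling times [h]
conc = [0.8, 2.1, 3.5, 2.9, 1.6, 0.7, 0.3, 0.05]  # plasma conc. [mg/L]

c_max = max(conc)                  # peak concentration
t_max = times[conc.index(c_max)]   # time of the peak

# AUC(0-t) by the linear trapezoidal rule.
auc = sum((t2 - t1) * (c1 + c2) / 2
          for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

print(f"Cmax = {c_max} mg/L at Tmax = {t_max} h, AUC(0-24h) = {auc:.2f} mg*h/L")
```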

  3. Effect of solvents extraction on total phenolics and antioxidant activity of extracts from flaxseed (Linum usitatissimum L.).

    PubMed

    Anwar, Farooq; Przybylski, Roman

    2012-01-01

    Plant-origin food ingredients are the main source of very potent antioxidants. Tocopherols, the main natural antioxidants of oilseeds, are very potent and, when incorporated into cell membranes, are able to scavenge a large number of free radicals. Plant antioxidants are mainly phenolics, a large and diversified group of chemical compounds with different radical-scavenging potential. Defatted flaxseed meals were extracted with pure alcohols and their mixtures with water. The acquired extracts were analysed for the content of phenolics and flavonoids using colorimetric procedures. Antioxidative capacity was assessed using DPPH stable free radicals, inhibition of linoleic acid oxidation, and the reducing power of components. The investigation was conducted on two different batches of flaxseed, assessing the antioxidant capacity of compounds extracted with solvents of different polarity, and the extracts were tested for antioxidant activity with different methods. The highest extraction yield was achieved with 80% methanol, but that extract did not contain the highest amounts of phenolics and flavonoids. When 80% ethanol was used for extraction, the highest amount of flavonoids was detected, along with the best antioxidant capacity. The results clearly showed that utilization of polar solvents enables extraction of significant amounts of phenolics and flavonoids. These components were the most potent antioxidants present in the extracts, and their content correlated well with the results of the applied methods for antioxidant assessment.

  4. Selection method of terrain matching area for TERCOM algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Qieqie; Zhao, Long

    2017-10-01

    The performance of terrain-aided navigation is closely related to the selection of the terrain matching area, and different matching algorithms have different adaptability to terrain. This paper mainly studies the adaptability to terrain of the TERCOM algorithm, analyzes the relation between terrain features and terrain characteristic parameters by qualitative and quantitative methods, and then investigates the relation between matching probability and terrain characteristic parameters by the Monte Carlo method. After that, we propose a selection method of the terrain matching area for the TERCOM algorithm, and verify the correctness of the method with real terrain data in a simulation experiment. Experimental results show that the matching area obtained by the proposed method has good navigation performance, and the matching probability of the TERCOM algorithm is greater than 90%.
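
    TERCOM matches a measured elevation profile against a stored terrain map; a minimal 1-D sketch of that matching step, with synthetic terrain and a mean-absolute-difference metric (both assumptions, not the paper's data or implementation):

```python
import random

random.seed(1)

# Synthetic stored terrain profile (elevations along track, in metres).
terrain = [100 + 30 * random.random() for _ in range(200)]

# The vehicle measures a short profile starting at an (unknown) index 60,
# with small altimeter noise added.
true_start = 60
measured = [terrain[true_start + i] + random.gauss(0, 0.5) for i in range(20)]

# TERCOM-style search: slide the measured profile along the map and pick
# the offset with the smallest mean absolute difference (MAD).
def mad(offset):
    return sum(abs(terrain[offset + i] - m)
               for i, m in enumerate(measured)) / len(measured)

best = min(range(len(terrain) - len(measured)), key=mad)
print(f"estimated start index: {best} (true: {true_start})")
```

    The matching probability studied in the paper is essentially the frequency with which such a search returns the true position as terrain roughness and noise vary.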

  5. METHOD AND APPARATUS FOR TESTING THE PRESENCE OF SPECIFIC ATOMIC ELEMENTS IN A SUBSTANCE

    DOEpatents

    Putman, J.L.

    1960-01-26

    Detection of specific atomic elements in a substance, and particularly the applicability to well logging, is discussed. The principal novelty resides in the determination of several of the auxiliary energy peaks, in addition to the main energy peak, of the gamma-ray energy spectrum of a substance, and comparison of such peaks to the spectrum of the specific atomic element being tested for, thus resulting in identification of same. The invention facilitates the identification of specific elements even in the presence of other elements having similar gamma energy spectra as to the main energy peaks.

  6. Modeling of enhanced spontaneous parametric down-conversion in plasmonic and dielectric structures with realistic waves

    NASA Astrophysics Data System (ADS)

    Loot, A.; Hizhnyakov, V.

    2018-05-01

    A numerical study of the enhancement of the spontaneous parametric down-conversion in plasmonic and dielectric structures is considered. The modeling is done using a nonlinear transfer-matrix method which is extended to include vacuum fluctuations and realistic waves (e.g. Gaussian beam). The results indicate that in the case of short-range surface plasmon polaritons, the main limiting factor of the enhancement is the short length of the coherent buildup. In the case of long-range surface plasmon polaritons or dielectric guided waves, the very narrow resonances are the main limiting factor instead.

  7. Studies on thermal decomposition behaviors of polypropylene using molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Huang, Jinbao; He, Chao; Tong, Hong; Pan, Guiying

    2017-11-01

    Polypropylene (PP) is one of the main components of waste plastics. In order to understand the mechanism of PP thermal decomposition, the pyrolysis behaviour of PP was simulated from 300 to 1000 K under periodic boundary conditions by the molecular dynamics method, based on the AMBER force field. The simulation results show that the pyrolysis process of PP can be divided into three stages: a low-temperature pyrolysis stage, an intermediate-temperature stage and a high-temperature pyrolysis stage. PP pyrolysis proceeds by typical random main-chain scission, and the possible formation mechanisms of the major pyrolysis products were analyzed.
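
    The random main-chain scission identified as the dominant pathway can be illustrated by a toy Monte Carlo model (not the paper's AMBER-based MD simulation): backbone bonds break at random and the resulting fragment-length distribution is tracked.

```python
import random

random.seed(2)

# Toy random main-chain scission: a chain of N monomers has N-1 backbone
# bonds; each scission event breaks one randomly chosen intact bond.
N = 1000
bonds = [True] * (N - 1)          # True = intact

for _ in range(100):              # 100 random scission events
    i = random.choice([k for k, b in enumerate(bonds) if b])
    bonds[i] = False

# Fragment lengths = runs of monomers between broken bonds.
fragments, length = [], 1
for b in bonds:
    if b:
        length += 1
    else:
        fragments.append(length)
        length = 1
fragments.append(length)

print(f"{len(fragments)} fragments, mean length {sum(fragments)/len(fragments):.1f}")
```

    Breaking k bonds always produces k + 1 fragments whose lengths sum to N, so the mean fragment length drops as pyrolysis proceeds, mirroring the progressive chain shortening seen in the simulated stages.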

  8. ["Open" surgery of mitral heart diseases complicated by pulmonary hypertension].

    PubMed

    Abdumazhidov, Kh A; Guliamov, D S; Amanov, A A

    2000-01-01

    Under analysis were the results of 386 operations on the "open" heart performed for mitral diseases complicated by pulmonary hypertension of different degrees. Mitral valve replacement was performed in 251 patients; in 135 patients the so-called "organ-saving" correction of the defect was fulfilled. The decision on the method of defect correction depends on the anatomical particularities and morphological alterations of the valvular apparatus. The main causes of postoperative lethality (9-11%) were cardiac insufficiency and renohepatic failure, noted mainly in patients of the IVth functional class.

  9. Research priorities in the field of HIV and AIDS in Iran

    PubMed Central

    Haghdoost, AliAkbar; Sadeghi, Masoomeh; Nasirian, Maryam; Mirzazadeh, Ali; Navadeh, Soodabeh

    2012-01-01

    Background: HIV is a multidimensional problem; therefore, prioritization of research topics in this field is a serious challenge. We decided to prioritize the major areas of research on HIV/AIDS in Iran. Materials and Methods: In a brainstorming session with the main national and provincial stakeholders and experts from different relevant fields, the direct and indirect dimensions of HIV/AIDS and its related research issues were explored. Afterward, using the Delphi method, we sent questionnaires to 20 experts (13 respondents) from different sectors. In this electronic questionnaire, we requested the experts to evaluate the main topics and their subtopics; scores ranged between 0 and 100. Results: The scores of the main themes were preventive activities (43.2), large-scale planning (25.4), estimation of the HIV/AIDS burden (20.9), and basic scientific research (10.5). The most important priorities within these themes were education, particularly in high-risk groups (52.5), developing a national strategy to address the epidemic (31.8), estimation of the incidence and prevalence among high-risk groups (59.5) and developing new preventive methods (66.7), respectively. Conclusions: The most important research priorities on HIV/AIDS were preventive activities and developing a national strategy. As high-risk groups are the people most involved in the epidemic, and also the hardest-to-reach subpopulations, a well-designed national comprehensive strategy is essential. However, we believe that, with a very specific and directed scheme, special attention to research in basic sciences is necessary, at least in a limited number of institutes. PMID:23626616

  10. Methods comparison for microsatellite marker development: Different isolation methods, different yield efficiency

    NASA Astrophysics Data System (ADS)

    Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie

    2009-06-01

    Microsatellite markers have become one of the most important molecular tools used in various research fields. A large number of microsatellite markers are required for whole-genome surveys in molecular ecology, quantitative genetics and genomics. It is therefore necessary to select versatile, low-cost, efficient, and time- and labor-saving methods to develop a large panel of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods derived from three strategies for microsatellite marker development. The results showed that the strategy of constructing a small-insert genomic DNA library had poor efficiency, while the microsatellite-enriched strategy greatly improved isolation efficiency. Although the public-database mining strategy is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers with it, mainly due to the limited sequence data of non-model species deposited in public databases. Based on the results of this study, we recommend two methods, both derived from the microsatellite-enriched strategy, for large-scale microsatellite marker development: microsatellite-enriched library construction and FIASCO-colony hybridization. The experimental results obtained from the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.

  11. Assessment of changing interdependencies between human electroencephalograms using nonlinear methods

    NASA Astrophysics Data System (ADS)

    Pereda, E.; Rial, R.; Gamundi, A.; González, J.

    2001-01-01

    We investigate the problems that might arise when two recently developed methods for detecting interdependencies between time series using state space embedding are applied to signals of different complexity. With this aim, these methods were used to assess the interdependencies between two electroencephalographic channels from 10 adult human subjects during different vigilance states. The significance and nature of the measured interdependencies were checked by comparing the results of the original data with those of different types of surrogates. We found that even with proper reconstructions of the dynamics of the time series, both methods may give wrong statistical evidence of decreasing interdependencies during deep sleep due to changes in the complexity of each individual channel. The main factor responsible for this result was the use of an insufficient number of neighbors in the calculations. Once this problem was surmounted, both methods showed the existence of a significant relationship between the channels which was mostly of linear type and increased from awake to slow wave sleep. We conclude that the significance of the qualitative results provided by both methods must be carefully tested before drawing any conclusion about the implications of such results.
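
    Both interdependence methods start from time-delay embedding of each channel and nearest-neighbour statistics in the reconstructed state space; a minimal sketch with a synthetic signal and arbitrary embedding parameters (m = 3 and tau = 5 are illustrative choices, not values from the study):

```python
import math

# Time-delay embedding of a scalar signal into m-dimensional state vectors.
def delay_embed(x, m, tau):
    n = len(x) - (m - 1) * tau
    return [[x[i + j * tau] for j in range(m)] for i in range(n)]

# Synthetic "EEG-like" signal: the sum of two incommensurate sines.
x = [math.sin(0.2 * t) + 0.5 * math.sin(0.53 * t) for t in range(500)]

vectors = delay_embed(x, m=3, tau=5)
print(len(vectors), len(vectors[0]))

# Nearest neighbour of the first state vector, excluding temporally close
# points - the building block of the interdependence measures discussed above.
nn = min((i for i in range(len(vectors)) if i > 10),
         key=lambda i: math.dist(vectors[0], vectors[i]))
print("nearest neighbour of vector 0 is at index", nn)
```

    The methods compared in the paper rank such neighbours in one channel's state space and test whether they are also close in the other channel's space; the "insufficient number of neighbors" problem arises exactly in this step.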

  12. Application of syntactic methods of pattern recognition for data mining and knowledge discovery in medicine

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Tadeusiewicz, Ryszard

    2000-04-01

    This paper presents and discusses possible applications of selected algorithms belonging to the group of syntactic methods of pattern recognition, used to analyze and extract features of shapes and to diagnose morphological lesions seen on selected medical images. This method is particularly useful for specialist morphological analysis of the shapes of selected organs of the abdominal cavity, conducted to diagnose disease symptoms occurring in the main pancreatic ducts, upper segments of the ureters and the renal pelvis. Analysis of the correct morphology of these organs is possible with the application of the sequential and tree methods belonging to the group of syntactic methods of pattern recognition. The objective of this analysis is to support early diagnosis of disease lesions, mainly those characteristic of carcinoma and pancreatitis, based on examinations of ERCP images, and diagnosis of morphological lesions in the ureters and renal pelvis based on an analysis of urograms. In the analysis of ERCP images, the main objective is to recognize morphological lesions in the pancreatic ducts characteristic of carcinoma and chronic pancreatitis, while in the case of kidney radiogram analysis the aim is to diagnose local irregularities of the ureter lumen and to examine the morphology of the renal pelvis and renal calyxes. Diagnosis of the above-mentioned lesions has been conducted with the use of syntactic methods of pattern recognition, in particular languages for describing shape features and context-free sequential attributed grammars. These methods allow one to recognize and describe the aforementioned lesions very efficiently on images obtained as a result of initial processing of width diagrams of the examined structures. Additionally, to support analysis of the correct structure of the renal pelvis, a method using a tree grammar for syntactic pattern recognition to define its correct morphological shapes is presented.

  13. Comparison of methods for measuring atmospheric deposition of arsenic, cadmium, nickel and lead.

    PubMed

    Aas, Wenche; Alleman, Laurent Y; Bieber, Elke; Gladtke, Dieter; Houdret, Jean-Luc; Karlsson, Vuokko; Monies, Christian

    2009-06-01

    A comprehensive field intercomparison at four different types of European sites (two rural, one urban and one industrial), comparing three different collectors (wet-only, bulk and Bergerhoff samplers), was conducted in the framework of the European Committee for Standardization (CEN) to create a European standard for the deposition of the four elements As, Cd, Ni and Pb. The purpose was to determine whether the proposed methods lead to results within the uncertainty required by the EU's daughter directive (70%). The main conclusion is that different sampling strategies are needed for rural and industrial sites; thus, the conclusions on uncertainties and sampling approach are presented separately for the different approaches. The wet-only and bulk collectors ("bulk bottle method") are comparable at wet rural sites where the total deposition arises mainly from precipitation; the expanded uncertainty when comparing these two types of sampler is below 45% for As, Cd and Pb, and 67% for Ni. At industrial sites, and possibly at very dry rural and urban sites, it is necessary to use Bergerhoff samplers or a "bulk bottle + funnel method". It is not possible to address total deposition estimation with these methods, but they will give the lowest estimate of the total deposition. The expanded uncertainties when comparing the Bergerhoff and the bulk bottle + funnel methods are below 50% for As and Cd, and 63% for Pb. The uncertainty for Ni was not addressed, since the bulk bottle + funnel method did not include a full digestion procedure, which is necessary for sites with high loads of undissolved metals. The lowest estimate can, however, be calculated by comparing parallel Bergerhoff samplers, where the expanded uncertainty for Ni was 24%. The reproducibility is comparable to the between-sampler/method uncertainties. Sampling and sample preparation proved to be the main factors in the uncertainty budget of deposition measurements.

  14. Automatic Road Gap Detection Using Fuzzy Inference System

    NASA Astrophysics Data System (ADS)

    Hashemi, S.; Valadan Zoej, M. J.; Mokhtarzadeh, M.

    2011-09-01

    Automatic feature extraction from aerial and satellite images is a high-level data processing task which is still one of the most important research topics of the field. In this area, most research is focused on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are some of the mature methods in this field. Although most research focuses on detection algorithms, none of them can extract the road network perfectly. On the other hand, post-processing algorithms, aimed at refining road detection results, are not as well developed. In this article, the main aim is to design an intelligent method to detect and compensate road gaps remaining in the early result of road detection algorithms. The proposed algorithm consists of the following main steps. 1) Short gap coverage: in this step, a multi-scale morphological operator is designed that covers short gaps in a hierarchical scheme. 2) Long gap detection: in this step, the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system; for this purpose, a knowledge base consisting of expert rules is designed, which are fired on gap candidates in the road detection results. 3) Long gap coverage: in this stage, detected long gaps are compensated by two strategies, linear and polynomial; shorter gaps are filled by line fitting, while longer ones are compensated by polynomials. 4) Accuracy assessment: in order to evaluate the obtained results, some accuracy assessment criteria are proposed. These criteria are obtained by comparing the obtained results with truly compensated ones produced by a human expert. The complete evaluation of the obtained results, with their technical discussion, is the material of the full paper.
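
    The paper's fuzzy rule base is not reproduced in the record; the following Mamdani-style sketch shows how such a gap classifier can be built. The gap features (length in pixels, direction difference between the road ends), the triangular memberships and the rules are all hypothetical illustrations:

```python
# Minimal Mamdani-style fuzzy inference for a road-gap candidate.
# Features, membership breakpoints and rules are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def is_road_gap(gap_len_px, direction_diff_deg):
    # Fuzzify the two features.
    len_short = tri(gap_len_px, -1, 0, 40)
    len_long = tri(gap_len_px, 20, 80, 200)
    dir_aligned = tri(direction_diff_deg, -1, 0, 30)
    dir_skewed = tri(direction_diff_deg, 15, 90, 181)

    # Rules (min for AND, max to aggregate):
    #   short gap AND aligned ends -> gap (fill it)
    #   long gap  OR skewed ends   -> not a gap
    gap_strength = min(len_short, dir_aligned)
    nogap_strength = max(len_long, dir_skewed)

    # Centroid defuzzification over two singleton outputs (gap=1, no-gap=0).
    total = gap_strength + nogap_strength
    return gap_strength / total if total else 0.0

print(is_road_gap(10, 5))    # short, well-aligned gap -> high score
print(is_road_gap(150, 60))  # long, skewed break -> low score
```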

  15. Evaluation of dysphagia in early stroke patients by bedside, endoscopic, and electrophysiological methods.

    PubMed

    Umay, Ebru Karaca; Unlu, Ece; Saylam, Guleser Kılıc; Cakci, Aytul; Korkmaz, Hakan

    2013-09-01

    We aimed in this study to evaluate dysphagia in early stroke patients using a bedside screening test, flexible fiberoptic endoscopic evaluation of swallowing (FFEES) and electrophysiological evaluation (EE), and to compare the effectiveness of these methods. Twenty-four patients who were hospitalized in our clinic within the first 3 months after stroke were included in this study. Patients were evaluated using a bedside screening test [including a bedside dysphagia score (BDS), a neurological examination dysphagia score (NEDS), and a total dysphagia score (TDS)] and the FFEES and EE methods. Patients were divided into normal-swallowing and dysphagia groups according to the results of each evaluation method. Patients with dysphagia as determined by any of these methods were compared to the patients with normal swallowing based on the results of the other two methods. Based on the results of our study, a high BDS was positively correlated with dysphagia identified by the FFEES and EE methods. Moreover, the FFEES and EE results were positively correlated with each other. There was no significant correlation between NEDS and TDS levels and either the EE or the FFEES method. Bedside screening tests should be used mainly as an initial screening test; the FFEES and EE methods should then be combined in patients found to be at risk. This diagnostic algorithm may provide a practical and fast solution for selected stroke patients.

  16. [Transcranial magnetic therapy in the treatment of psychoautonomous disturbances in children with diabetes mellitus type 1].

    PubMed

    Filina, N Iu; Bolotova, N V; Manukian, V Iu; Nikolaeva, N V; Kompaniets, O V

    2009-01-01

    Results of a clinical-physiological study of 80 children with diabetes mellitus type 1 and psychoautonomous disturbances are presented. Forty patients of the main group received transcranial magnetic therapy (TcMT); 40 patients of the control group had placebo sessions of TcMT with the magnetic power supply switched off. TcMT was applied using the bitemporal method, in running regime with a modulation frequency of 1-10 Hz. Patients received 10 sessions. Positive changes were found in the main group compared to the controls: TcMT sessions normalized the autonomous status in 75% of the children and improved the psychoemotional state in 55%. The correction of the psychoemotional status of the children changed their behavior towards diabetes and improved control and compensation of the disease.

  17. Rapid analysis of the main components of the total glycosides of Ranunculus japonicus by UPLC/Q-TOF-MS.

    PubMed

    Rui, Wen; Chen, Hongyuan; Tan, Yuzhi; Zhong, Yanmei; Feng, Yifan

    2010-05-01

    A rapid method for the analysis of the main components of the total glycosides of Ranunculus japonicus (TGOR) was developed using ultra-performance liquid chromatography with quadrupole-time-of-flight mass spectrometry (UPLC/Q-TOF-MS). The separation analysis was performed on a Waters Acquity UPLC system and the accurate mass of molecules and their fragment ions were determined by Q-TOF MS. Twenty compounds, including lactone glycosides, flavonoid glycosides and flavonoid aglycones, were identified and tentatively deduced on the basis of their elemental compositions, MS/MS data and relevant literature. The results demonstrated that lactone glycosides and flavonoids were the main constituents of TGOR. Furthermore, an effective and rapid pattern was established allowing for the comprehensive and systematic characterization of the complex samples.

  18. Dynamic Analysis Method for Electromagnetic Artificial Muscle Actuator under PID Control

    NASA Astrophysics Data System (ADS)

    Nakata, Yoshihiro; Ishiguro, Hiroshi; Hirata, Katsuhiro

    We have been studying an interior permanent magnet linear actuator for an artificial muscle. This actuator mainly consists of a mover and a stator. The mover is composed of permanent magnets, magnetic cores and a non-magnetic shaft; the stator is composed of 3-phase coils and a back yoke. In this paper, a dynamic analysis method under PID control is proposed, employing the 3-D finite element method (3-D FEM) to compute the dynamic response and current response when positioning control is active. Computed results show good agreement with measured results from a prototype.
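
    The paper couples the PID controller with a 3-D FEM model of the actuator; as a far simpler stand-in, here is a sketch of PID position control of a lumped mass-damper model integrated with explicit Euler. The plant parameters and gains are hypothetical, not values from the paper:

```python
# PID position control of a 1-DOF mass-damper actuator model.
# Plant parameters and gains are hypothetical; integration is explicit Euler.
m, c = 0.2, 1.0            # mass [kg], damping [N*s/m]
kp, ki, kd = 80.0, 40.0, 8.0
dt, setpoint = 1e-3, 0.01  # time step [s], target position [m]

x = v = integ = 0.0
prev_err = setpoint - x
for _ in range(10_000):    # simulate 10 s
    err = setpoint - x
    integ += err * dt
    deriv = (err - prev_err) / dt
    force = kp * err + ki * integ + kd * deriv   # PID control law
    a = (force - c * v) / m                      # Newton's second law
    v += a * dt
    x += v * dt
    prev_err = err

print(f"final position: {x*1000:.3f} mm (target {setpoint*1000} mm)")
```

    In the paper this simple plant model is replaced by the 3-D FEM field solution at every step, which is what makes the coupled analysis expensive.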

  19. Heuristic algorithm for optical character recognition of Arabic script

    NASA Astrophysics Data System (ADS)

    Yarman-Vural, Fatos T.; Atici, A.

    1996-02-01

    In this paper, a heuristic method is developed for segmentation, feature extraction and recognition of the Arabic script. The study is part of a large project for the transcription of documents in the Ottoman Archives. A geometrical and topological feature analysis method is developed for the segmentation and feature extraction stages. A chain code transformation is applied to the main strokes of the characters, which are then classified by a hidden Markov model (HMM) in the recognition stage. Experimental results indicate that the performance of the proposed method is impressive, provided that the thinning process does not yield spurious branches.
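
    Chain coding, applied here to the main strokes before HMM classification, maps a pixel path to direction symbols; a minimal Freeman 8-direction sketch on a hypothetical stroke skeleton:

```python
# Freeman 8-direction chain code of a pixel path (stroke skeleton).
# Directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE (y grows upward).
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(path):
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(path, path[1:])]

# Hypothetical thinned stroke: right, right, up-right, up.
stroke = [(0, 0), (1, 0), (2, 0), (3, 1), (3, 2)]
print(chain_code(stroke))  # -> [0, 0, 1, 2]
```

    The resulting symbol sequences are exactly the kind of discrete observation streams an HMM classifier consumes, which is why chain coding pairs naturally with the recognition stage described above.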

  20. An approximate methods approach to probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.

    1989-01-01

    A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.
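
    The FPI algorithm approximates response distributions at far lower cost than sampling; as a reference point for the quantity it approximates, here is a crude Monte Carlo sketch of the failure probability of a hypothetical linear limit state g = R - S with normal strength and load (not the FPI method itself, and not an SSME model):

```python
import random

random.seed(0)

# Hypothetical limit state g = R - S: failure when load S exceeds strength R.
# R ~ N(500, 40), S ~ N(350, 30)  =>  g ~ N(150, 50), so P_f = Phi(-3) ~ 0.00135.
N = 200_000
fails = sum(random.gauss(500, 40) - random.gauss(350, 30) < 0 for _ in range(N))
p_f = fails / N
print(f"Monte Carlo P_f ~ {p_f:.5f} (exact ~ 0.00135)")
```

    The point of FPI-style approximate methods is to recover such tail probabilities with a handful of deterministic analyses instead of the hundreds of thousands of samples used here.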

  1. A mathematical model of microbial enhanced oil recovery (MEOR) method for mixed type rock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitnikov, A.A.; Eremin, N.A.; Ibattulin, R.R.

    1994-12-31

    This paper deals with the microbial enhanced oil recovery method. It covers: (1) analysis of the mechanism of microbial influence on the reservoir; (2) determination of the main groups of metabolites affecting the hydrodynamic characteristics of the reservoir; (3) definition of the criteria for use of the microbial influence method on the reservoir. The mathematical model of microbial influence on the reservoir was built on this basis. The injection of a molasses-water solution with Clostridium bacteria into the mixed type of rock was used in this model, and the results of the calculations were compared with experimental data.

  2. PCR-DGGE assessment of the bacterial diversity of breast milk in women with lactational infectious mastitis

    PubMed Central

    Delgado, Susana; Arroyo, Rebeca; Martín, Rocío; Rodríguez, Juan M

    2008-01-01

    Background Infectious mastitis is a common condition during lactation and, in fact, represents one of the main causes leading to precocious weaning. The number of studies dealing with lactational mastitis is low and, up to now, the etiological diagnosis is frequently made on the basis of unspecific clinical signs. The aim of this study was to investigate the microbial diversity of breast milk in 20 women with lactational mastitis employing culture-dependent and culture-independent (PCR-DGGE) approaches. Methods Breast milk samples were cultured in different media to investigate the presence of bacteria and/or yeasts, and a total of 149 representative isolates were identified to the species level by 16S rRNA gene PCR sequencing. The microorganisms recovered were compared with those found by PCR-DGGE analysis. To identify the DGGE profiles, two reference markers of different microbial species were constructed. Sequence analysis of unknown bands was also performed. Results Staphylococci were the dominant bacterial group and Staphylococcus epidermidis was the dominant species. In a lower number of samples, other bacteria (mainly streptococci and a few Gram-negative species) were also identified. Globally, the PCR-DGGE results showed a good correlation with those obtained by culture-based methods. However, although DNA bands corresponding to different lactic acid bacteria were detected, such bacteria could not be isolated from the milk samples. Conclusion Staphylococci seem to be the main etiological agents of human lactational mastitis. The combined use of culture and molecular techniques allowed a better characterization of the bacterial diversity in milk from women suffering from infectious mastitis. Our results suggest that this condition could be the result of a dysbiotic process in which some of the bacterial species usually present in human milk overgrow (staphylococci) while others disappear (lactobacilli or lactococci). PMID:18423017

  3. Comparison of Effectiveness of Collaborative Learning Methods and Traditional Methods in Physics Classes at Northern Maine Technical College.

    ERIC Educational Resources Information Center

    Overlock, Terrence H., Sr.

    To determine the effect of collaborative learning methods on the success rate of physics students at Northern Maine Technical College (NMTC), a study was undertaken to compare the mean final exam scores of students in a physics course taught by traditional lecture/lab methods with those of a group taught by collaborative techniques. The…

  4. [Rapid assessment of critical quality attributes of Chinese materia medica (II): strategy of NIR assignment].

    PubMed

    Pei, Yan-Ling; Wu, Zhi-Sheng; Shi, Xin-Yuan; Zhou, Lu-Wei; Qiao, Yan-Jiang

    2014-09-01

    The present paper first reviews the research progress and main methods of NIR spectral assignment, together with our own research results. Principal component analysis is focused on characteristic signal extraction to reflect spectral differences. The partial least squares method is concerned with variable selection to discover characteristic absorption bands. Two-dimensional correlation spectroscopy is mainly adopted for spectral assignment; autocorrelation peaks are obtained from spectral changes induced by external factors such as concentration, temperature and pressure. Density functional theory is used to calculate energies from the substance's structure to establish the relationship between molecular energy and spectral change. Based on the reviewed methods, and taking the NIR spectral assignment of chlorogenic acid as an example, a reliable spectral assignment for critical quality attributes of Chinese materia medica (CMM) was established using deuterium technology and spectral variable selection. The result demonstrated the consistency of the assignment according to the spectral features of different concentrations of chlorogenic acid and the variable selection region of the online NIR model in the extraction process. Although the spectral assignment was initially performed on a single active pharmaceutical ingredient, it is a meaningful step toward the complex components of CMM, and it provides a methodology for NIR spectral assignment of critical quality attributes in CMM.
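
    Of the reviewed chemometric tools, principal component analysis is the simplest to illustrate; a sketch using NumPy SVD on synthetic two-band "spectra" (all data and band shapes are assumptions, not NIR measurements), showing the leading components capturing the dominant spectral variation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 50 samples x 120 wavelengths, built from two Gaussian
# absorption bands whose concentrations vary from sample to sample, plus noise.
wl = np.linspace(0, 1, 120)
band1 = np.exp(-((wl - 0.3) / 0.05) ** 2)
band2 = np.exp(-((wl - 0.7) / 0.05) ** 2)
c1 = rng.uniform(0.5, 2.0, (50, 1))
c2 = rng.uniform(0.5, 2.0, (50, 1))
spectra = c1 * band1 + c2 * band2 + rng.normal(0, 0.01, (50, 120))

# PCA via SVD of the mean-centred data matrix.
X = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print(f"variance explained by first two PCs: {explained[:2].sum():.3f}")
```

    Because the synthetic data contain exactly two independent concentration factors, the first two loading vectors (rows of Vt) recover the two absorption bands; inspecting such loadings is the "characteristic signal extraction" role of PCA described above.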

  5. Production of stable superhydrophilic surfaces on 316L steel by simultaneous laser texturing and SiO2 deposition

    NASA Astrophysics Data System (ADS)

    Rajab, Fatema H.; Liu, Zhu; Li, Lin

    2018-01-01

    Superhydrophilic surfaces with liquid contact angles of less than 5° have attracted much interest in practical applications including self-cleaning, cell manipulation, adhesion enhancement, anti-fogging, fluid flow control and evaporative cooling. Standard laser metal texturing methods often result in unstable wetting characteristics, i.e. changing from superhydrophilic to hydrophobic within a few days or weeks. In this paper, a simple one-step method is reported for fabricating a stable superhydrophilic metallic surface that lasted for at least 6 months. Here, 316L stainless steel substrates were textured using a nanosecond laser with in-situ SiO2 deposition. The morphology and chemistry of the laser-textured surfaces were characterised using SEM, XRD, XPS and an optical 3D profiler. Static wettability analysis was carried out over a period of 6 months after the laser treatment, and the effect of surface roughness on wettability was also studied. Results showed that the wettability of the textured surfaces could be controlled by changing the scanning speed of the laser beam and the number of passes. The main reason for the realisation of the stable superhydrophilic surface is the combination of the melted glass particles (mainly Si and O) with the stainless steel in the micro-textured patterns. This study presents a useful method

  6. Identification of Mucorales isolates from soil using morphological and molecular methods

    PubMed Central

    Ziaee, A; Zia, M; Bayat, M; Hashemi, J

    2016-01-01

    Background and Purpose: Soil is the main habitat of saprophytic and pathogenic fungi. Mucoromycotina constitutes a large group of soil fungi, with certain opportunistic members causing systemic infections in immunocompromised hosts. The majority of human and animal infections are caused by members of the genera Rhizopus, Mucor, Rhizomucor, Lichtheimia (Absidia), Cunninghamella, and Mortierella. Accordingly, in the present study, we aimed to isolate and identify the main genera of the order Mucorales, using molecular assays and morphological features. Materials and Methods: In total, 340 soil samples were collected from seven public parks throughout the city and sidewalk gardens in 14 municipal districts in Isfahan, Iran. All the samples were cultured on the appropriate media, incubated at 27°C for 2-4 days, and examined daily for visible fungal growth. The polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method was applied, and macroscopic, microscopic, and physiological characteristics were assessed to identify the fungal colonies. Results: In total, 400 pure colonies, belonging to the orders Mucorales and Mortierellales and including the genera Lichtheimia, Rhizopus, Rhizomucor, Mucor, Cunninghamella, and Mortierella, were identified. The genus Rhizopus (35.5%) was the most frequent isolate, followed by Mucor (32.25%) and Rhizomucor (27.5%). Conclusion: The results emphasize the importance of opportunistic fungi in public areas and indicate the risk of exposure for immunocompromised individuals. PMID:28681007

  7. Threshold matrix for digital halftoning by genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero

    1998-10-01

    Digital halftoning is used both in low and high resolution high quality printing technologies. Our method is designed to be mainly used for low resolution ink jet marking machines to produce both gray tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this the random dot patterns used are optimized to contain more blue than pink noise. Several such dot pattern generator threshold matrices have been created automatically by using genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of genetic algorithm with a search method based on local backtracking was developed together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of genetic algorithms, backtracking and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work was focused on developing low resolution marking technology, the resulting family of dot generators can be applied also in other halftoning application areas including high resolution printing technology.
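    The threshold-matrix optimization described above can be sketched in miniature. The following is an illustrative toy, not the authors' implementation: a small threshold matrix is kept as a permutation of threshold levels, a swap mutation preserves that property, and the fitness penalizes low-frequency ("pink") spectral energy of the mid-gray dot pattern so that surviving matrices produce bluer noise. The matrix size, population size, generation count, and frequency cutoff are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # threshold matrix side length (illustrative)

def lowfreq_energy(matrix):
    # Fitness: spectral energy of the 50%-gray dot pattern below a radial
    # frequency cutoff; lower means bluer (less visible) noise.
    dots = (matrix < N * N // 2).astype(float)
    spec = np.abs(np.fft.fft2(dots - dots.mean())) ** 2
    fy, fx = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
    return spec[np.hypot(fx, fy) < 0.2].sum()

def random_matrix():
    return rng.permutation(N * N).reshape(N, N)

def mutate(m):
    # Swap two thresholds, so the matrix stays a permutation of 0..N*N-1.
    child = m.copy()
    (r1, c1), (r2, c2) = rng.integers(0, N, size=(2, 2))
    child[r1, c1], child[r2, c2] = child[r2, c2], child[r1, c1]
    return child

pop = [random_matrix() for _ in range(20)]
start = min(lowfreq_energy(m) for m in pop)
for _ in range(150):
    pop.sort(key=lowfreq_energy)          # elitist selection
    elites = pop[:10]
    pop = elites + [mutate(elites[rng.integers(0, 10)]) for _ in range(10)]
pop.sort(key=lowfreq_energy)
best = pop[0]
print(start, "->", lowfreq_energy(best))
```

    A production halftoner would evaluate many gray levels jointly and weight the spectrum by a model of the eye's visual transfer function rather than a hard frequency cutoff, as the abstract implies.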

  8. Statistical analysis of 59 inspected SSME HPFTP turbine blades (uncracked and cracked)

    NASA Technical Reports Server (NTRS)

    Wheeler, John T.

    1987-01-01

    The numerical results of a statistical analysis of test data from Space Shuttle Main Engine high-pressure fuel turbopump second-stage turbine blades, including some with cracks, are presented. Several statistical methods are applied to the test data to determine whether frequency variations differ between the uncracked and cracked blades.

  9. Implementing Peer-Assisted Writing Support in German Secondary Schools

    ERIC Educational Resources Information Center

    Rensing, Julia; Vierbuchen, Marie-Christine; Hillenbrand, Clemens; Grünke, Matthias

    2016-01-01

    The alarming results of large studies such as the National Assessment of Educational Progress (NAEP; National Center for Education Statistics, 2012) point to an urgent need for writing support and call for specific and effective methods to foster writing competencies. The main purpose of this paper is to describe an innovative peer-assisted…

  10. Commercial NiMH Cells in LEO Cycling: Thermal Vacuum Life Test Performed for the Floating Potential Probe (FPP)

    NASA Technical Reports Server (NTRS)

    Darcy, Eric; Strangways, Brad

    2003-01-01

    Contents include the following: 1. Introduction: What is the Floating Potential Probe (FPP)? Why was a NiMH battery selected? How well would crimped-seal cells perform under long-term vacuum exposure? 2. Verification tests: Battery description. Test methods. Results. Main findings. FPP status.

  11. 2012 National Guard Bureau Posture Statement

    DTIC Science & Technology

    2012-01-01

    Illinois / Poland Indiana / Slovakia Kansas / Armenia Maine / Montenegro Maryland / Estonia Maryland / Bosnia Michigan / Latvia Minnesota / Croatia New Jersey…alternative methods of planting to help increase crop production in the area. 2012 Posture Statement 19 Global Engagement State Partnership…horticulture (plant cultivation), pest control, veterinary/animal husbandry techniques, civil engineering, and energy management. As a result of the

  12. Application of LSP Texts in Translator Training

    ERIC Educational Resources Information Center

    Ilynska, Larisa; Smirnova, Tatjana; Platonova, Marina

    2017-01-01

    The paper presents discussion of the results of extensive empirical research into efficient methods of educating and training translators of LSP (language for special purposes) texts. The methodology is based on using popular LSP texts in the respective fields as one of the main media for translator training. The aim of the paper is to investigate…

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Xue, Junpeng; Gao, Bo

    The correspondence residuals due to the discrepancy between reality and the shape model in use are analyzed for modal phase measuring deflectometry. Slope residuals are calculated from the discrepancies between the modal estimation and the practical acquisition. Since shape mismatch mainly occurs locally, zonal integration methods, which are good at dealing with local variations, are used to reconstruct the height residual for compensation. Finally, results of both simulation and experiment indicate that the proposed height compensation method is effective and can be used as a post-complement for modal phase measuring deflectometry.

  14. Using Avatars for Improving Speaker Identification in Captioning

    NASA Astrophysics Data System (ADS)

    Vy, Quoc V.; Fels, Deborah I.

    Captioning is the main method for accessing television and film content by people who are deaf or hard-of-hearing. One major difficulty consistently identified by the community is knowing who is speaking, particularly for an off-screen narrator. A captioning system was created using a participatory design method to improve speaker identification. The final prototype contained avatars and a coloured border for identifying specific speakers. Evaluation results were very positive; however, participants also wanted to customize various components such as caption and avatar location.

  15. [Treatment of gamma-hydroxybutyrate withdrawal].

    PubMed

    Strand, Niels August Willer; Petersen, Tonny Studsgaard; Nielsen, Lars Martin; Boegevig, Soren

    2017-12-11

    Gamma-hydroxybutyrate (GHB) is a drug of abuse, for which physical addiction develops quickly. GHB withdrawal can develop into a life-threatening condition and has previously been treated mainly with benzodiazepines. These have not always proven effective, leading to long hospitalizations in intensive care units. Based on successful Dutch treatment results for using GHB to treat GHB withdrawal symptoms, we propose to implement a similar method in Denmark. The method requires an interdisciplinary effort for which The Danish Poison Information Centre should be consulted for expertise.

  16. Research on numerical method for multiple pollution source discharge and optimal reduction program

    NASA Astrophysics Data System (ADS)

    Li, Mingchang; Dai, Mingxin; Zhou, Bin; Zou, Bin

    2018-03-01

    In this paper, an optimal reduction program is derived using a nonlinear optimization algorithm, namely a genetic algorithm. The four main rivers in Jiangsu Province, China are selected with the aim of reducing environmental pollution in the nearshore district. Dissolved inorganic nitrogen (DIN) is studied as the only pollutant. The environmental status of the nearshore district and the applicable standard are used to determine the reduction of pollutant discharge from the multiple rivers. The resulting reduction program provides a basis for marine environmental management.

  17. Dynamic stall: An example of strong interaction between viscous and inviscid flows

    NASA Technical Reports Server (NTRS)

    Philippe, J. J.

    1978-01-01

    A study was made of the phenomena concerning profiles in the dynamic stall configuration, and more specifically those related to pitch oscillations. The most characteristic experimental results on flow separations with a vortex character, and their repercussions on local pressures and total forces, were analyzed. Some aspects of methods for predicting flows with or without boundary layer separation are examined, as well as the main simplified methods available to date for calculating total forces in such configurations.

  18. Risk Assessment at the Cosmetic Product Manufacturer by Expert Judgment Method

    NASA Astrophysics Data System (ADS)

    Vtorushina, A. N.; Larionova, E. V.; Mezenceva, I. L.; Nikonova, E. D.

    2017-05-01

    A case study was performed at a cosmetic product manufacturer. We identified the main risk factors of occupational accidents and their causes. The risk of accidents is assessed by the expert judgment method. An event tree for the most probable accident is built, and recommendations for improving the occupational health and safety protection system at the cosmetic product manufacturer are developed. The results of this paper can be used to develop actions to improve occupational safety and health systems in the chemical industry.

  19. The Trojan Horse Method for nuclear astrophysics and its recent applications

    NASA Astrophysics Data System (ADS)

    Lamia, L.; Spitaleri, C.; Mazzocco, M.; Boiano, A.; Boiano, C.; Broggini, C.; Caciolli, A.; Depalo, R.; Di Pietro, A.; Figuera, P.; Galtarossa, F.; Guardo, G. L.; Gulino, M.; Hayakawa, S.; Kubono, S.; La Cognata, M.; La Commara, M.; La Rana, G.; Lattuada, M.; Menegazzo, R.; Pakou, A.; Parascandolo, C.; Piatti, D.; Pierroutsakou, D.; Pizzone, R. G.; Puglia, S. M. R.; Romano, S.; Rapisarda, G. G.; Sanchez-Benitez, A. M.; Sergi, M. L.; Sgouros, O.; Silva, H.; Soramel, F.; Soukeras, V.; Strano, E.; Torresi, D.; Trippella, O.; Tumino, A.; Yamaguchi, H.; Villante, F. L.; Zhang, G. L.

    2018-01-01

    The Trojan Horse Method (THM) has been applied extensively over the last 25 years to measure nuclear reaction cross sections of interest for astrophysics. Although it has mainly been applied to charged-particle-induced reactions, it has recently been found to also play a relevant role for neutron-induced reactions. Here, some advantages of the THM are discussed, together with preliminary results of the measurement of the cosmologically relevant 7Be(n,α)4He cross section.

  20. Interface thermal conductance characterization by infrared thermography: A tool for the study of insertions in bronze ancient Statuary

    NASA Astrophysics Data System (ADS)

    Mercuri, F.; Caruso, G.; Orazi, N.; Zammit, U.; Cicero, C.; Colacicchi Alessandri, O.; Ferretti, M.; Paoloni, S.

    2018-05-01

    In this paper, a new method based on infrared thermography is proposed for the characterization of repairs and inserted parts in ancient bronzes. In particular, the quality of the contact between different kinds of insertions and the main body of bronze statues is investigated by analysing the heat conduction process occurring across the interface between them. The thermographic results have been used to establish the nature of these inserted elements and the way they were coupled to the main body of the statue during and after the manufacturing process. A heat conduction model based on the numerical finite element method has been applied to compare the obtained results with theoretical predictions. Measurements were first carried out on test samples and then in the field on the Boxer at Rest (Museo Nazionale Romano in Rome), a masterpiece of Greek statuary which contains a large variety of inserted items and repairs typical of the manufacturing process of bronze artefacts in general.

  1. Study on the interaction of triadimenol with calf thymus DNA by multispectroscopic methods and molecular modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Yepeng; Zhang, Guowen; Fu, Peng; Ma, Yadi; Zhou, Jia

    2012-10-01

    The binding mechanism of triadimenol (NOL) to calf thymus DNA (ctDNA) in physiological buffer (pH 7.4) was investigated by multispectroscopic methods including UV-vis absorption, fluorescence, circular dichroism (CD), Fourier transform infrared (FT-IR), and nuclear magnetic resonance (1H NMR) spectroscopy, coupled with viscosity measurements and the atomic force microscopy (AFM) technique. The results suggested that NOL interacts with ctDNA in an intercalation mode. CD and AFM assays showed that NOL can damage the base stacking of ctDNA and cause regional cleavage of the two DNA strands. FT-IR and 1H NMR spectra coupled with molecular docking revealed that specific binding mainly occurs between NOL and the G-C base pairs of ctDNA, where two hydrogen bonds form. Moreover, the association constants of NOL with DNA at three different temperatures were determined to be in the 10^3 L mol^-1 range. The calculated thermodynamic parameters suggested that the binding of NOL to ctDNA is driven mainly by hydrogen bonding and van der Waals forces.
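    The reported sign pattern (binding driven by hydrogen bonds and van der Waals forces implies ΔH < 0 and ΔS < 0) follows from a van't Hoff analysis of association constants measured at several temperatures. The sketch below uses hypothetical constants in the 10^3 L mol^-1 range, not the paper's measured values, to show the arithmetic:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1
# Hypothetical association constants (L mol^-1) at three temperatures,
# chosen to sit in the 10^3 range quoted in the abstract.
T = np.array([298.15, 304.15, 310.15])
K = np.array([4.1e3, 3.2e3, 2.5e3])

# van't Hoff: ln K = -dH/(R T) + dS/R, i.e. linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH = -R * slope           # enthalpy change, J mol^-1
dS = R * intercept        # entropy change, J mol^-1 K^-1
dG = dH - T[0] * dS       # Gibbs energy at 298.15 K, J mol^-1

print(dH, dS, dG)  # dH < 0 and dS < 0 point to H-bonds / van der Waals
```

    A spontaneous binding additionally requires ΔG < 0, which holds here because |ΔH| exceeds |TΔS| at room temperature.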

  2. A study on the morphology of polystyrene-grafted poly(ethylene-alt-tetrafluoroethylene) (ETFE) films prepared using a simultaneous radiation grafting method

    NASA Astrophysics Data System (ADS)

    Song, Ju-Myung; Ko, Beom-Seok; Sohn, Joon-Yong; Nho, Young Chang; Shin, Junhwa

    2014-04-01

    The morphology of polystyrene-grafted poly(ethylene-alt-tetrafluoroethylene) (ETFE) films prepared using a simultaneous radiation grafting method was investigated using DMA, DSC, XRD, and SAXS instruments. The DMA study indicates that the ETFE amorphous phase and the PS amorphous phase are well mixed in the PS-grafted ETFE films while the ETFE crystalline phase and the PS amorphous phase are separated, suggesting that the PS chains are grafted mainly onto the ETFE amorphous regions. The DSC and XRD data showed that the native crystalline structure of ETFE in the grafted films is not affected by the degree of grafting. The SAXS profiles showed that the inter-crystalline distance of the ETFE films increases with an increasing degree of grafting, which further implies that the PS graft chains formed by simultaneous irradiation have a significant impact on the amorphous morphology of the resulting grafted ETFE film. Thus, these results indicate that the styrene monomers are grafted mainly onto the ETFE amorphous regions during the simultaneous radiation grafting process.

  3. Solving Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) using BRKGA with local search

    NASA Astrophysics Data System (ADS)

    Prasetyo, H.; Alfatsani, M. A.; Fauza, G.

    2018-05-01

    The main issue in the vehicle routing problem (VRP) is finding the shortest routes for product distribution from the depot to outlets so as to minimize the total cost of distribution. The Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) is a variant of the VRP that accommodates vehicle capacity and the distribution period. Since CCVRPTW is NP-hard, an efficient and effective algorithm is required to solve it. This study aimed to develop a Biased Random Key Genetic Algorithm (BRKGA) combined with local search for CCVRPTW. The algorithm was coded in MATLAB. Using numerical tests, optimum algorithm parameters were set, and the algorithm was compared with a heuristic method and standard BRKGA on a case study of soft drink distribution. Results showed that BRKGA combined with local search achieved a lower total distribution cost than the heuristic method, and the developed algorithm successfully improved on the performance of standard BRKGA.
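    The core of a BRKGA for routing is the random-key decoder plus the biased crossover. A minimal sketch follows on a made-up instance: the coordinates, demands, capacity, population sizes, and crossover bias are all illustrative (not the paper's case study), and time windows are omitted for brevity. The local search here swaps pairs of keys and keeps any swap that lowers the decoded cost.

```python
import random
random.seed(1)

# Hypothetical instance: depot 0 plus six outlets with demands,
# Euclidean distances, vehicle capacity CAP.
coords = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 6), (4, 7), (7, 2)]
demand = [0, 3, 4, 2, 5, 3, 4]
CAP = 8
n = len(coords) - 1

def dist(a, b):
    return ((coords[a][0] - coords[b][0]) ** 2 +
            (coords[a][1] - coords[b][1]) ** 2) ** 0.5

def decode(keys):
    # Random-key decoder: visit outlets in order of their keys, opening a
    # new route whenever the next outlet would exceed vehicle capacity.
    order = sorted(range(1, n + 1), key=lambda i: keys[i - 1])
    routes, route, load = [], [], 0
    for c in order:
        if load + demand[c] > CAP:
            routes.append(route)
            route, load = [], 0
        route.append(c)
        load += demand[c]
    routes.append(route)
    return routes

def cost(keys):
    # Total closed-route length: every route starts and ends at the depot.
    total = 0.0
    for r in decode(keys):
        path = [0] + r + [0]
        total += sum(dist(path[i], path[i + 1]) for i in range(len(path) - 1))
    return total

def local_search(keys):
    # Pairwise key swaps, keeping any swap that lowers the decoded cost.
    keys, best = keys[:], cost(keys)
    for i in range(n):
        for j in range(i + 1, n):
            keys[i], keys[j] = keys[j], keys[i]
            c = cost(keys)
            if c < best:
                best = c
            else:
                keys[i], keys[j] = keys[j], keys[i]
    return keys

pop = [[random.random() for _ in range(n)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=cost)
    elite, rest = pop[:6], pop[6:]
    # Biased crossover: each key comes from the elite parent with p = 0.7.
    children = [[e[i] if random.random() < 0.7 else o[i] for i in range(n)]
                for e, o in [(random.choice(elite), random.choice(rest))
                             for _ in range(20)]]
    mutants = [[random.random() for _ in range(n)] for _ in range(4)]
    pop = elite + children + mutants
pop.sort(key=cost)
best_keys = local_search(pop[0])
routes = decode(best_keys)
print(routes, round(cost(best_keys), 2))
```

    The decoder is the part that makes BRKGA generic: any chromosome of keys in [0, 1) maps to a feasible solution, so the evolutionary loop never has to repair infeasible offspring.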

  4. A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.

    A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low-Earth-orbit objects relies mainly on ground-based radar; owing to the limited capability of existing radar facilities, a large number of ground-based radars must be built in the next few years to meet current space surveillance demands. How to optimize the layout (embattling) of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method is to run detection simulations of all possible stations against cataloged data, comparatively analyze the various simulation results over all station combinations, and then select the best result as the station layout scheme. This method is time-consuming for a single simulation and computationally complex for the combinatorial analysis; as the number of stations increases, the complexity of the optimization problem grows exponentially and cannot be handled by the traditional method, and until now no better way to solve this problem has been available. In this paper, the target detection procedure is simplified. First, the space coverage of ground-based radar is simplified by building a model of the radar facilities' coverage projected onto different orbit altitudes; then a simplified model of objects crossing the radar coverage is established according to the characteristics of orbital motion. After these two simplification steps, the computational complexity of target detection is greatly reduced, and simulation results confirm the correctness of the simplified model.
    In addition, the detection areas of a ground-based radar network can be easily computed with the simplified model, and the layout of the network can then be optimized with an artificial intelligence algorithm, which greatly reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.
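    The simplified-coverage idea can be illustrated with a toy model: once each candidate site's coverage reduces to a precomputed set of ground-track cells, layout optimization becomes a set-coverage problem. The sketch below uses a greedy maximal-coverage rule as a cheap stand-in for the paper's artificial-intelligence optimizer; the grids, angular radius, and site list are all invented for illustration.

```python
import math

# Toy stand-in for the simplified coverage model: candidate radar sites and
# ground-track cells on a lat/lon grid; a site "covers" a cell if the cell
# lies within a fixed angular radius (the projected detection footprint).
cells = [(la, lo) for la in range(-60, 61, 10) for lo in range(-180, 180, 20)]
sites = [(la, lo) for la in range(-40, 41, 20) for lo in range(-160, 161, 40)]

def covers(site, cell, radius=25.0):
    dla = site[0] - cell[0]
    dlo = min(abs(site[1] - cell[1]), 360 - abs(site[1] - cell[1]))  # wrap
    return math.hypot(dla, dlo) <= radius

# Precompute each site's coverage set once; after this, evaluating a layout
# is pure set arithmetic instead of per-object detection simulation.
coverage = {s: {c for c in cells if covers(s, c)} for s in sites}

def greedy_layout(k):
    # Greedy maximal coverage: repeatedly add the site that covers the most
    # not-yet-covered cells.
    chosen, seen = [], set()
    for _ in range(k):
        best = max(coverage, key=lambda s: len(coverage[s] - seen))
        chosen.append(best)
        seen |= coverage[best]
    return chosen, seen

layout, seen = greedy_layout(6)
print(len(seen), "of", len(cells), "cells covered")
```

    A genetic algorithm over site subsets, as the paper suggests, would use the same precomputed coverage sets in its fitness function; only the search strategy differs.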

  5. Factor Structure and Psychometric Properties of the Brief Illness Perception Questionnaire in Turkish Cancer Patients

    PubMed Central

    Karataş, Tuğba; Özen, Şükrü; Kutlutürkan, Sevinç

    2017-01-01

    Objective: The main aim of this study was to investigate the factor structure and psychometric properties of the Brief Illness Perception Questionnaire (BIPQ) in Turkish cancer patients. Methods: This methodological study involved 135 cancer patients. Statistical methods included confirmatory or exploratory factor analysis and Cronbach alpha coefficients for internal consistency. Results: The values of fit indices are within the acceptable range. The alpha coefficients for emotional illness representations, cognitive illness representations, and total scale are 0.83, 0.80, and 0.85, respectively. Conclusions: The results confirm the two-factor structure of the Turkish BIPQ and demonstrate its reliability and validity. PMID:28217734

  6. Methods of Combinatorial Optimization to Reveal Factors Affecting Gene Length

    PubMed Central

    Bolshoy, Alexander; Tatarinova, Tatiana

    2012-01-01

    In this paper we present a novel method for ranking genomes according to gene lengths. The main outcomes described in this paper are the following: the formulation of the genome ranking problem, the presentation of relevant approaches to solve it, and the demonstration of preliminary results from the ordering of prokaryotic genomes. Using a subset of prokaryotic genomes, we attempted to uncover factors affecting gene length. We have demonstrated that hyperthermophilic species have shorter genes than mesophilic organisms, which probably means that environmental factors affect gene length. Moreover, these preliminary results show that environmental factors group together in the ranking of evolutionarily distant species. PMID:23300345

  7. Testing the cosmic anisotropy with supernovae data: Hemisphere comparison and dipole fitting

    NASA Astrophysics Data System (ADS)

    Deng, Hua-Kai; Wei, Hao

    2018-06-01

    The cosmological principle is one of the cornerstones of modern cosmology. It assumes that the universe is homogeneous and isotropic on cosmic scales; both the homogeneity and the isotropy of the universe should be tested carefully. In the present work, we are interested in probing a possible preferred direction in the distribution of type Ia supernovae (SNIa). To the best of our knowledge, two main methods have been used in almost all of the relevant works in the literature, namely the hemisphere comparison (HC) method and the dipole fitting (DF) method. However, the results from these two methods do not always approximately coincide. In this work, we test the cosmic anisotropy by using both methods with the joint light-curve analysis (JLA) and simulated SNIa data sets. In many cases, both methods work well and their results are consistent with each other. However, in cases with two (or even more) preferred directions, the DF method fails while the HC method still works well. This might shed new light on our understanding of the two methods.
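    A toy version of the hemisphere comparison (HC) method illustrates the idea: for each trial axis, the sample is split into "up" and "down" hemispheres, a scalar quantity is estimated in each, and the axis maximizing the normalized difference marks the candidate preferred direction. Here mock data with a dipole injected along z stand in for SNIa fits; all numbers are invented and no cosmological fitting is performed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Mock "SNIa" entries: a unit sky direction plus a scalar best-fit quantity
# with a small dipole injected along the z axis (amplitude 0.05) and noise.
n = 500
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
values = 0.7 + 0.05 * dirs[:, 2] + rng.normal(0, 0.02, n)

def anisotropy(axis):
    # HC statistic: normalized difference of the quantity estimated from
    # the "up" vs "down" hemisphere relative to this axis.
    up = dirs @ axis > 0
    qu, qd = values[up].mean(), values[~up].mean()
    return abs(qu - qd) / ((qu + qd) / 2)

# Scan random trial axes; the maximizer approximates the preferred direction.
axes = rng.normal(size=(200, 3))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)
levels = np.array([anisotropy(a) for a in axes])
best_axis = axes[levels.argmax()]
print(best_axis)  # z component should dominate, matching the injected dipole
```

    The DF method would instead fit a global dipole template to the same residuals; with a single clean dipole both approaches agree, which is consistent with the abstract's observation that they diverge only in multi-direction cases.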

  8. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Morgan, Morris H.; Gilinsky, Mikhail M.

    2001-01-01

    Three connected sub-projects were conducted under the reported project. In part, these sub-projects address problems studied by the HU/FM&AL under two other NASA grants. The fundamental idea uniting the projects is to use untraditional 3D corrugated nozzle designs and additional methods for exhaust jet noise reduction without essential thrust loss, and even with thrust augmentation. Such additional approaches are: (1) adding solid, fluid, or gaseous mass at discrete locations to the main supersonic gas stream to minimize the negative influence of the strong shock waves that form in propulsion systems; this mass addition may be accompanied by heat addition to the main stream through fuel combustion, or by cooling of the stream through evaporation and boiling of the added liquid; (2) using porous or permeable nozzles and additional shells at the nozzle exit for preliminary cooling of the hot exhaust jet and pressure compensation at off-design conditions (a so-called continuous ejector with small mass flow rate); and (3) proposing and analyzing new, effective methods of fuel injection into the flow stream in air-breathing engines. Note that all these problems were formulated from detailed descriptions of the main experimental facts observed at NASA Glenn Research Center. Basically, the HU/FM&AL team has been involved in joint research aimed at finding theoretical explanations for experimental facts and at creating accurate numerical simulation techniques and prediction theory for current problems in propulsion systems addressed by NASA and Navy agencies. The research covers a wide range of problems in the propulsion field, as well as experimental testing and theoretical and numerical simulation analysis for advanced aircraft and rocket engines. The FM&AL team uses analytical methods, numerical simulations, and possible experimental tests at the Hampton University campus.
    We also present some management activity and theoretical numerical simulation results obtained by the FM&AL team in the reporting period, in accordance with the work schedule.

  9. Utility of computed tomography in assessment of pulmonary hypertension secondary to biomass smoke exposure

    PubMed Central

    Sertogullarindan, Bunyamin; Bora, Aydin; Yavuz, Alpaslan; Ekin, Selami; Gunbatar, Hulya; Arisoy, Ahmet; Avcu, Serhat; Ozbay, Bulent

    2014-01-01

    Background: The aim of this study was to investigate the feasibility of main pulmonary artery diameter quantification by thoracic computed tomography (CT) in the diagnosis of pulmonary hypertension secondary to biomass smoke exposure. Material/Methods: One hundred and four women with biomass smoke exposure and 20 healthy women were enrolled in this prospective study. The correlation between the echocardiographic estimate of systolic pulmonary artery pressure and the main pulmonary artery diameter was studied. Results: The main pulmonary artery diameter was 26.9±5.1 mm in the control subjects and 37.1±6.4 mm in the subjects with biomass smoke exposure; this difference was statistically significant (p<0.001). The systolic pulmonary artery pressure was 22.7±12.4 in the control subjects and 57.3±22 in the subjects with biomass smoke exposure; this difference was statistically significant (p<0.001). Systolic pulmonary artery pressure was significantly correlated with the main pulmonary artery diameter (r=0.614, p<0.01). A receiver operating characteristic (ROC) curve analysis showed that a main pulmonary artery diameter of 29 mm differentiated between pulmonary hypertension and non-pulmonary hypertension patients, with a sensitivity of 91% and a specificity of 80%. Conclusions: Our results indicate that main pulmonary artery diameter measurements by CT may suggest the presence of pulmonary hypertension in women exposed to biomass smoke. PMID:24618994
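    The sensitivity and specificity at the 29 mm cutoff correspond to simple counts above and below the threshold. The sketch below reproduces that calculation on simulated diameters drawn from the group means and standard deviations quoted in the abstract; the data themselves are synthetic, not the study's measurements.

```python
import random

random.seed(0)

# Synthetic main pulmonary artery diameters (mm): means/SDs mimic the
# abstract (26.9±5.1 controls, 37.1±6.4 exposed) but the values are simulated.
controls = [random.gauss(26.9, 5.1) for _ in range(20)]
patients = [random.gauss(37.1, 6.4) for _ in range(104)]

def sens_spec(cutoff):
    tp = sum(d >= cutoff for d in patients)   # PH cases correctly flagged
    tn = sum(d < cutoff for d in controls)    # controls correctly cleared
    return tp / len(patients), tn / len(controls)

sens, spec = sens_spec(29.0)
print(round(sens, 2), round(spec, 2))
```

    A full ROC analysis sweeps the cutoff over all observed values and picks the point trading off the two rates; 29 mm is simply the cutoff the study reports as optimal.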

  10. Collective intelligence of the artificial life community on its own successes, failures, and future.

    PubMed

    Rasmussen, Steen; Raven, Michael J; Keating, Gordon N; Bedau, Mark A

    2003-01-01

    We describe a novel Internet-based method for building consensus and clarifying conflicts in large stakeholder groups facing complex issues, and we use the method to survey and map the scientific and organizational perspectives of the artificial life community during the Seventh International Conference on Artificial Life (summer 2000). The issues addressed in this survey included artificial life's main successes, main failures, main open scientific questions, and main strategies for the future, as well as the benefits and pitfalls of creating a professional society for artificial life. By illuminating the artificial life community's collective perspective on these issues, this survey illustrates the value of such methods of harnessing the collective intelligence of large stakeholder groups.

  11. Evolutionary variational-hemivariational inequalities

    NASA Astrophysics Data System (ADS)

    Carl, Siegfried; Le, Vy K.; Motreanu, Dumitru

    2008-09-01

    We consider an evolutionary quasilinear hemivariational inequality under constraints represented by some closed and convex subset. Our main goal is to systematically develop the method of sub-supersolution, on the basis of which we then prove existence, comparison, compactness and extremality results. The obtained results are applied to a general obstacle problem. We improve the corresponding results in the recent monograph [S. Carl, V.K. Le, D. Motreanu, Nonsmooth Variational Problems and Their Inequalities: Comparison Principles and Applications, Springer Monogr. Math., Springer, New York, 2007].

  12. Introduction to a standardized method for the evaluation of the potency of Bacillus thuringiensis serotype H-14 based products*

    PubMed Central

    Rishikesh, N.; Quélennec, G.

    1983-01-01

    Vector resistance and other constraints have necessitated consideration of the use of alternative materials and methods in an integrated approach to vector control. Bacillus thuringiensis serotype H-14 is a promising biological control agent which acts as a conventional larvicide through its delta-endotoxin (active ingredient) and which now has to be suitably formulated for application in vector breeding habitats. The active ingredient in the formulations has so far not been chemically characterized or quantified and therefore recourse has to be taken to a bioassay method. Drawing on past experience and through the assistance mainly of various collaborating centres, the World Health Organization has standardized a bioassay method (described in the Annex), which gives consistent and reproducible results. The method permits the determination of the potency of a B.t. H-14 preparation through comparison with a standard powder. The universal adoption of the standardized bioassay method will ensure comparability of the results of different investigators. PMID:6601545

  13. A Hybrid On-line Verification Method of Relay Setting

    NASA Astrophysics Data System (ADS)

    Gao, Wangyuan; Chen, Qing; Si, Ji; Huang, Xin

    2017-05-01

    Along with the rapid development of the power industry, grid structures have become more sophisticated, and the validity and rationality of protective relay settings are vital to the security of power systems. To increase that security, it is essential to verify relay setting values online. Traditional verification methods mainly include comparison of the protection range and comparison of the calculated setting value. For on-line verification, verification speed is the key. Comparing protection ranges yields accurate results, but the computational burden is heavy and verification is slow; comparing calculated setting values is much faster, but the result is conservative and less accurate. Taking overcurrent protection as an example, this paper analyses the advantages and disadvantages of these two traditional methods and proposes a hybrid on-line verification method that synthesizes their advantages and can meet the requirements of accurate on-line verification.
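    The hybrid strategy can be sketched as a two-stage check. Everything below is an illustrative reconstruction, not the paper's implementation: a fast screen compares the relay's setting with the calculated setting value and returns a verdict only when the deviation is clearly small or clearly large, while inconclusive cases fall through to the slower protection-range comparison, here reduced to trip/security conditions on illustrative currents. The tolerances and all numbers are assumed.

```python
def fast_check(setting, calc_setting, tol=0.05, band=0.15):
    # Fast, conservative screen: decide only when the relative deviation
    # from the calculated setting value is clearly small or clearly large.
    dev = abs(setting - calc_setting) / calc_setting
    if dev <= tol:
        return "pass"
    if dev >= band:
        return "fail"
    return "inconclusive"

def range_check(setting, inzone_fault_currents, max_load):
    # Accurate but slower: the relay must trip for every in-zone fault
    # current yet stay secure above the maximum load current.
    trips = all(i > setting for i in inzone_fault_currents)
    secure = setting > max_load
    return "pass" if trips and secure else "fail"

def hybrid_verify(setting, calc_setting, inzone_fault_currents, max_load):
    # Hybrid: take the fast verdict when it is decisive, otherwise fall
    # back to the protection-range comparison.
    verdict = fast_check(setting, calc_setting)
    if verdict != "inconclusive":
        return verdict
    return range_check(setting, inzone_fault_currents, max_load)

print(hybrid_verify(500.0, 495.0, [900.0, 1200.0], 300.0))  # fast path
print(hybrid_verify(500.0, 460.0, [900.0, 1200.0], 300.0))  # falls through
```

    The speedup comes from how rarely the slow branch runs: only relays whose settings sit in the ambiguous band trigger the full protection-range computation.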

  14. Uncertain decision tree inductive inference

    NASA Astrophysics Data System (ADS)

    Zarban, L.; Jafari, S.; Fakhrahmad, S. M.

    2011-10-01

    Induction is the process of reasoning in which general rules are formulated from limited observations of recurring phenomenal patterns. Decision tree learning is one of the most widely used and practical inductive methods, and it represents its results in a tree scheme. Various decision tree algorithms have been proposed, such as CLS, ID3, Assistant, C4.5, REPTree and Random Tree, but these algorithms suffer from some major shortcomings. In this article, after discussing the main limitations of the existing methods, we introduce a new decision tree induction algorithm that overcomes the main shortcomings of its counterparts. The new method maintains the important information in bit strings; carrying out logical operations on these bit strings makes the induction process fast. It also has several important features: it deals with inconsistencies in the data, avoids overfitting and handles uncertainty. We illustrate further advantages and new features of the proposed method. The experimental results show the effectiveness of the method in comparison with other methods in the literature.
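    The abstract does not detail its bit-string representation, but the general technique it alludes to can be sketched: encode each attribute value and the class label as bitmasks over the training examples, so the counts needed for information gain reduce to AND plus popcount. The function names and the binary-class restriction below are our assumptions.

```python
from math import log2

def bitset(indicator):
    """Pack a boolean indicator over the examples into one integer bitmask."""
    mask = 0
    for i, flag in enumerate(indicator):
        if flag:
            mask |= 1 << i
    return mask

def entropy(pos, total):
    """Binary entropy of a subset with `pos` positives out of `total`."""
    if total == 0 or pos in (0, total):
        return 0.0
    p = pos / total
    return -(p * log2(p) + (1 - p) * log2(1 - p))

def info_gain(value_masks, class_mask, n):
    """Information gain of splitting on an attribute; every count is a
    bitwise AND followed by a popcount, which is what makes it fast."""
    base = entropy(bin(class_mask).count("1"), n)
    rem = 0.0
    for mask in value_masks:
        size = bin(mask).count("1")
        pos = bin(mask & class_mask).count("1")
        rem += size / n * entropy(pos, size)
    return base - rem
```

A perfectly class-separating attribute yields gain 1.0 on a balanced binary sample; an uninformative one yields 0.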

  15. Analysis of stray radiation for infrared optical system

    NASA Astrophysics Data System (ADS)

    Li, Yang; Zhang, Tingcheng; Liao, Zhibo; Mu, Shengbo; Du, Jianxiang; Wang, Xiangdong

    2016-10-01

    Based on the theory of radiative energy transfer in infrared optical systems, two methods are proposed for analyzing stray radiation caused by interior thermal radiation in an infrared optical system: an importance sampling technique using forward ray tracing, and an integral computation method using reverse ray tracing. The two methods are discussed in detail and applied to a concrete infrared optical system. LightTools is used to simulate the passage of radiation from the mirrors and mounts, yielding absolute values of internal irradiance on the detector. The results show that the main part of the energy on the detector is due to the critical objects, which are consistent with the critical objects obtained by reverse ray tracing; mirror self-emission contributes about 87.5% of the total energy. Correspondingly, the irradiances on the detector calculated by the two methods are in good agreement, demonstrating the validity and rationality of both methods.

  16. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and growth of calculation time with increasing problem size remain the major obstacles to applying continuous optimization to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes variables that approach their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be seen as an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe numerical results on the commonly used benchmark problems "CUTEr" to show the effectiveness of the proposed method. Furthermore, test results on large-sized ELD problems (economic load dispatch problems in electric power supply scheduling) are also described as a practical industrial application.
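    The fixing/releasing mechanism could look roughly like the sketch below (this is our own illustration of the idea, not the paper's algorithm, and the full interior point solver is omitted). A variable is fixed when it sits essentially on a bound and the gradient pushes it further into that bound, in the spirit of the KKT sign conditions; otherwise it stays free, which also releases previously fixed variables. The tolerance is an arbitrary assumption.

```python
def estimate_active(x, g, lo, hi, eps=1e-6):
    """Return the index sets of variables to fix at their lower/upper bounds.

    A variable is fixed only if it is within eps of a bound AND the objective
    gradient g pushes it into that bound; variables failing the test are left
    free, which releases any previously fixed ones automatically.
    """
    fix_lo = {i for i, (xi, gi, l) in enumerate(zip(x, g, lo))
              if xi - l <= eps and gi >= 0}
    fix_hi = {i for i, (xi, gi, h) in enumerate(zip(x, g, hi))
              if h - xi <= eps and gi <= 0}
    return fix_lo, fix_hi
```

In an outer loop, the fixed variables would be eliminated from the quadratic subproblem, shrinking the interior point iterations.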

  17. High Energy Ultrasound As An Applicable Tool For Well Regeneration

    NASA Astrophysics Data System (ADS)

    Bott, W.; Hofmann, T.; Wilken, R.-D.

    Drinking water abstraction by groundwater wells is a main part of groundwater management. During well operation, ageing processes cause a decrease of the permeability and productivity of these wells. The processes involved are mainly a combination of chemical, physical and biological factors. This creates the necessity of well regeneration in order to maintain original well conditions, which is linked to major investments. The use of ultrasound as a well regeneration method is a new application for this purpose. In comparison to conventional regeneration methods, mainly mechanical and chemical procedures or a combination of both, high energy ultrasound can be called an environmentally "friendly" application because it avoids any use of harmful chemicals within the well and the aquifer. In addition, the method is gentle on the well structure. But there are conflicting opinions on the efficiency of ultrasound. The goal of a current research project, financed by the German Foundation of Environment (DBU), is to answer the question under which conditions high energy ultrasound is most effective for well regeneration. For this purpose, an experimental station was constructed to carry out laboratory examinations of the influence of different parameters on ultrasound efficiency, i.e. hydrostatic pressure, temperature, different filter gravels and well filters, duration of sonication, frequency and intensity. The whole installation is pressure-stable up to 20 bar, to approximate conditions in real wells. First results show a clear dependence of sonic penetration on the material of the well filter and the size of the filter gravel, as well as on hydrostatic pressure conditions within the well. The contribution presents the experimental setup and further results of the investigations currently being carried out.

  18. Landfill mining: Developing a comprehensive assessment method.

    PubMed

    Hermann, Robert; Wolfsberger, Tanja; Pomberger, Roland; Sarc, Renato

    2016-11-01

    In Austria, the first basic technological and economic examinations of mass-waste landfills with the purpose to recover secondary raw materials have been carried out by the 'LAMIS - Landfill Mining Österreich' pilot project. A main focus of its research, and the subject of this article, is the first conceptual design of a comprehensive assessment method for landfill mining plans, including not only monetary factors (like costs and proceeds) but also non-monetary ones, such as the concerns of adjoining owners or the environmental impact. Detailed reviews of references, the identification of influences and system boundaries to be included in planning landfill mining, several expert workshops and talks with landfill operators have been performed followed by a division of the whole assessment method into preliminary and main assessment. Preliminary assessment is carried out with a questionnaire to rate juridical feasibility, the risk and the expenditure of a landfill mining project. The results of this questionnaire are compiled in a portfolio chart that is used to recommend, or not, further assessment. If a detailed main assessment is recommended, defined economic criteria are rated by net present value calculations, while ecological and socio-economic criteria are examined in a utility analysis and then transferred into a utility-net present value chart. If this chart does not support making a definite statement on the feasibility of the project, the results must be further examined in a cost-effectiveness analysis. Here, the benefit of the particular landfill mining project per capital unit (utility-net present value ratio) is determined to make a final distinct statement on the general benefit of a landfill mining project. © The Author(s) 2016.
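    The monetary part of the main assessment (net present value) and the final utility-net present value ratio can be sketched in a few lines. The discounting convention (first cash flow at year 0) and the use of the absolute NPV in the denominator are our assumptions, as the abstract does not specify them.

```python
def npv(cashflows, rate):
    """Net present value of yearly cash flows, first entry at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def utility_npv_ratio(utility_score, cashflows, rate):
    """Benefit per unit of (absolute) capital, used as the final criterion
    when the utility-NPV chart alone cannot decide."""
    return utility_score / abs(npv(cashflows, rate))
```

A project with cash flows of -1000 now and 600 in each of the next two years at a 10% discount rate has an NPV of about 41.3.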

  19. Yarkovsky effect and V-shapes: New method to compute family ages

    NASA Astrophysics Data System (ADS)

    Spoto, F.; Milani, A.; Cellino, A.; Knezevic, Z.; Novakovic, B.; Paolicchi, P.

    2014-07-01

    The computation of family ages is a high-priority goal. In principle, it can be achieved by using V-shape plots for families that are old enough for the Yarkovsky effect to dominate the spread of the proper semimajor axis a, and large enough for a statistically significant analysis of the shape. By performing an asteroid family classification with a greatly enlarged dataset, the results are not just "more families": there are interesting qualitative changes. These are due to the large-number statistics, but also to the larger fraction of smaller objects among recently numbered asteroids. We are convinced that our method is effective in adding many smaller asteroids to the core families. As a result, we have a large number of families with very well defined V-shapes, and thus a good possibility of age estimation. We have developed our own method to compute ages, which we believe improves on those used previously because it is more objective. Since there are no error models for the absolute magnitude H and the albedo, we have developed a model of the error in the inverse of the diameter and then performed a weighted least-squares fit. We report 5-6 examples of dynamical families for which the computation of the V-shape is possible. These examples reveal the different internal structures of the families; e.g., in the dynamical family of (4) Vesta we have found two collisional families. The main problem in estimating the ages is the calibration. The difficulty of the Yarkovsky calibration, due to the need to extrapolate from near-Earth asteroids (NEAs) with measured da/dt to main-belt asteroids, is in most cases the main limitation on the accuracy of the age estimation. We obtain an age estimate by scaling the results for the NEA with the best Yarkovsky effect determination, namely (101955) Bennu.
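    The age computation from a V-shape can be illustrated roughly as follows: fit 1/D against the distance in proper a from the family center by weighted least squares through the origin, then convert the slope to an age with a Yarkovsky da/dt calibration, here assumed to scale as 1/D. This is a simplified sketch of the idea, not the authors' procedure; the linear drift model and all parameter names are our assumptions.

```python
import numpy as np

def vshape_age(a, inv_d, sigma_inv_d, a_center, dadt_cal, inv_d_cal):
    """Fit the V-shape side 1/D = s * |a - a_center| by weighted least squares
    through the origin and convert the slope s to an age.

    Assumed drift model: da/dt = dadt_cal * (1/D) / inv_d_cal, so after a
    time `age` the displacement is |a - a_center| = dadt_cal*(invd/inv_d_cal)*age,
    giving s = inv_d_cal / (dadt_cal * age).
    """
    x = np.abs(np.asarray(a, float) - a_center)
    y = np.asarray(inv_d, float)
    w = 1.0 / np.asarray(sigma_inv_d, float) ** 2
    s = np.sum(w * x * y) / np.sum(w * x * x)   # slope through the origin
    return inv_d_cal / (dadt_cal * s)
```

With a calibration da/dt of 2e-10 au/yr at D = 1 km, a slope of 5 per au corresponds to an age of 1 Gyr.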

  20. Solar-Induced Plant Fluorescence as seen from space-borne instruments

    NASA Astrophysics Data System (ADS)

    Khosravi, Narges; Vountas, Marco; Rozanov, Vladimir V.; Bracher, Astrid; Burrows, John P.

    2015-04-01

    Solar-induced chlorophyll fluorescence (SIF) retrieval can be linked to the role of vegetation in the global carbon cycle, and could be useful for terrestrial carbon budget assessment as well as for agricultural and environmental purposes. Several investigations have used space-borne SIF retrieval because of its good spatial coverage and time efficiency. These methods are mainly based on the fact that plant leaves absorb sunlight mainly within the visible spectral range and either use it for photosynthesis or release it as heat or fluorescence (in the red and near-infrared, NIR, spectral region) back to the atmosphere. As a result, SIF can be considered an additive signal on top of the ground reflectance reaching the TOA (top of the atmosphere). Chlorophyll fluorescence is mainly emitted in the spectral range from red to the near-infrared, with a pronounced peak at 690 nm and another at 740 nm. Although it is a very weak signal, two orders of magnitude smaller than the received radiance at TOA, it is feasible to retrieve it within spectral windows in the NIR. We developed a novel SIF retrieval method based on a modeled assumption of the emitted fluorescence spectrum at canopy level as it would be seen at TOA. Its application to 10 years of SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric CHartographY) data showed promising results. Comparing our SIF retrieval with results from other studies showed that our SIF values are in general agreement with them, with some variations. As there is no validated SIF retrieval, it is difficult to judge the retrieval quality. Our approach is generic and could therefore be applied to other data sets as well. Hence, the method is being applied to GOME-2 level 1 data, as that instrument has better spatial resolution (in the wavelength range needed) and better global coverage.

  1. Experts’ perceptions of the concept of induced demand in healthcare: A qualitative study in Isfahan, Iran

    PubMed Central

    Keyvanara, Mahmoud; Karimi, Saeed; Khorasani, Elahe; Jazi, Marzie Jafarian

    2014-01-01

    Context: One of the most important subjects in health economics and healthcare management is the theory of induced demand. There are different views about the concept of induced demand. Extensive texts have been written on induced demand; however, a consistent concept has not been provided for this phenomenon and it has not been defined explicitly. Aims: The main aim of this article is to understand the concept of induced demand using the perceptions of experts of Isfahan University of Medical Sciences. Settings and Design: The research was done using a qualitative method. Semi-structured interviews were used for data generation. Participants in this study were people who were informed in this regard, were experienced and were known as experts. Purposive sampling was done until data saturation. Materials and Methods: Seventeen people were interviewed, and criteria such as "reliability of information" and "stability" of the data were considered. The anonymity of the interviewees was preserved. Statistical Analysis Used: The data were transcribed and categorized, and then thematic analysis was used. Results: In this study, 21 sub-categories and three main categories were derived. The three main subjects were: the definition of induced demand, the elements of induced demand, and the methods of induced demand. Each of these subjects contained some sub-subjects. Conclusion: The result of this study provides a framework for examining the concept of induced demand. The most notable findings include the definition of induced demand, the elements of induced demand, and the methods of induced demand. In defining induced demand, an important issue that is often overlooked is that induction, depending on the effectiveness of the clinical services and medical values involved, can lead to better or worse outcomes for patients. These findings help health policy makers study the phenomenon of induced demand with a clear-sighted approach. PMID:25013820

  2. Cnn Based Retinal Image Upscaling Using Zero Component Analysis

    NASA Astrophysics Data System (ADS)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images, which are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning: the dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures like blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
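    Zero Component Analysis here refers to ZCA whitening. A minimal NumPy sketch of the transform, applied to a data matrix with one sample per row, is given below; the epsilon regularization value is an arbitrary assumption, and the paper's exact preprocessing pipeline may differ.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening: decorrelate the features while staying as close as
    possible to the original space (W = U diag(1/sqrt(s+eps)) U^T, built
    from the eigendecomposition of the feature covariance)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    U, s, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(s + eps)) @ U.T
    return Xc @ W
```

After whitening, the sample covariance of the output is (up to the eps regularization) the identity matrix.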

  3. A new method of Quickbird own image fusion

    NASA Astrophysics Data System (ADS)

    Han, Ying; Jiang, Hong; Zhang, Xiuying

    2009-10-01

    With the rapid development of remote sensing technology, the means of acquiring remote sensing data have become increasingly abundant, so the same area can yield a large number of multi-temporal image sequences of different resolutions. At present, the main fusion methods are HPF, the IHS transform, PCA, Brovey, the Mallat algorithm and the wavelet transform. The IHS transform suffers from serious spectral distortion, while the Mallat algorithm omits the low-frequency information of the high spatial resolution image, so its fusion results show obvious blocking effects. Wavelet multi-scale decomposition over different sizes and directions can achieve very good results for details and edges, but different fusion rules and algorithms achieve different effects. This article takes Quickbird own-image fusion as an example, comparing fusion based on the wavelet transform and HVS with fusion based on the wavelet transform and IHS. The results show that the former performs better. This paper uses the correlation coefficient, the relative average spectral error index and other common indices to evaluate image quality.
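    The correlation coefficient used for fusion quality evaluation is the ordinary Pearson correlation between a fused image and a reference image; a minimal sketch:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation between two images of equal shape; values near 1
    indicate the fused image preserves the reference's content."""
    a = np.ravel(np.asarray(a, dtype=float))
    b = np.ravel(np.asarray(b, dtype=float))
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

The measure is invariant to brightness and contrast shifts: any positive affine rescaling of an image correlates perfectly with the original.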

  4. [Spatial Distribution and Potential Ecological Risk Assessment of Heavy Metals in Soils and Sediments in Shunde Waterway, Southern China].

    PubMed

    Cai, Yi-min; Chen, Wei-ping; Peng, Chi; Wang, Tie-yu; Xiao, Rong-bo

    2016-05-15

    Environmental quality of soils and sediments around a water source area can influence the safety of the potable water of rivers. In order to study the pollution characteristics, sources and ecological risks of the heavy metals Zn, Cr, Pb, Cu, Ni and Cd in a water source area, surface soils around the waterway and sediments in the estuaries of the main tributaries were collected in Shunde, and the ecological risks of heavy metals were assessed by two methods of potential ecological risk assessment. The mean contents of Zn, Cr, Pb, Cu, Ni and Cd in the surface soils were 186.80, 65.88, 54.56, 32.47, 22.65 and 0.86 mg · kg⁻¹ respectively, higher than their soil background values except for Cu and Ni. The mean concentrations of Zn, Cr, Pb, Cu, Ni and Cd in the sediments were 312.11, 111.41, 97.87, 92.32, 29.89 and 1.72 mg · kg⁻¹ respectively, higher than their soil background values except for Ni. The results of principal component analysis illustrated that the main source of Cr and Ni in soils was soil parent materials, while Zn, Pb, Cu and Cd in soils mainly came from the wastewater discharge of local manufacturing industry. The six heavy metals in sediments mainly originated from industrial emissions around the Shunde waterway. The results of the potential ecological risk assessment integrating the environmental bioavailability of heavy metals showed that Zn, Cu, Pb and Ni posed a slight potential ecological risk. Cd posed a slight potential ecological risk in surface soils, but a moderate potential ecological risk in surface sediments. Because the assessment integrating environmental bioavailability took the soil properties and heavy metal speciation into account, its risk results were lower than those of the Hakanson method, and it could avoid overestimating the potential risks of heavy metals.
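    The Hakanson method referred to here computes, for each metal, a single-metal risk factor E_r = T_r · C / C_background and sums them into the overall risk index RI. A minimal sketch follows; the example toxic-response factors in the test (Cd = 30, Zn = 1) follow commonly cited Hakanson values, but confirm them against the original reference before use.

```python
def hakanson_risk(concentrations, backgrounds, toxic_factors):
    """Hakanson potential ecological risk: per-metal factor E_r and their
    sum RI, from measured concentrations, background values and
    toxic-response factors T_r (all keyed by metal name)."""
    er = {m: toxic_factors[m] * concentrations[m] / backgrounds[m]
          for m in concentrations}
    return er, sum(er.values())
```

The per-metal factors and RI are then compared against threshold bands (slight, moderate, considerable, ...) to grade the risk.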

  5. A microtremor survey to define the subsoil structure in a mud volcano areas

    NASA Astrophysics Data System (ADS)

    Panzera, Francesco; D'Amico, Sebastiano; Lupi, Matteo; Karyono, Karyono; Mazzini, Adriano

    2017-04-01

    Mud erupting systems have been observed and studied in different localities on the planet. They are characterized by emissions of fluids and fragmented sedimentary rocks creating large structures with different morphologies. This is mainly due to the presence of clay-bearing strata, which can be buoyant with respect to the surrounding regions, and over-pressured fluids that facilitate the formation of diapirs through sedimentary rocks. In this study, we investigate the Lusi mud erupting system mainly by using ambient vibration methods. In particular, the thickness of the sediments and the body wave velocities have been investigated. Results are integrated with gravimetry and electrical resistivity data in order to locate the main geological discontinuities in the area and to reconstruct a 3D model of the buried structure. The approach commonly used for this type of study is based on the ratio of the horizontal to vertical components of ground motion (HVSR) and on passive array techniques. The HVSR generally enables recognition of peaks that point to the fundamental frequency of the site, which usually fits the theoretical resonance curves quite well. The combination of HVSR and shear wave velocities from passive array techniques yields valuable information about the subsurface structures. Here we present new data collected at mud volcano and sediment-hosted hydrothermal system sites in order to investigate the depths of the main discontinuities and of the hypothesized hydrocarbon reservoirs. We present the case studies of Salse di Nirano (northern Italy), Salinelle (Mt. Etna, Sicily) and the Lusi hydrothermal system (Indonesia). Our results indicate that the ambient vibration approach is a swift and simplified method that provides quick information on the shallow subsoil structure of the investigated areas.
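    The core of the HVSR technique is easy to sketch: take the ratio of the horizontal and vertical amplitude spectra and read off the frequency of the main peak. Real processing adds windowing and spectral smoothing, which are omitted here; the fmin cutoff is an illustrative assumption.

```python
import numpy as np

def hvsr(horizontal, vertical, fs):
    """Ratio of the horizontal to vertical amplitude spectra (raw, with no
    smoothing); the main peak frequency approximates the site's fundamental
    resonance."""
    n = len(vertical)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    h = np.abs(np.fft.rfft(horizontal))
    v = np.abs(np.fft.rfft(vertical))
    return freqs, h / np.maximum(v, 1e-12)

def fundamental_frequency(freqs, ratio, fmin=0.5):
    """Pick the HVSR peak above fmin (very low frequencies are unreliable)."""
    sel = freqs >= fmin
    return freqs[sel][np.argmax(ratio[sel])]
```

On a synthetic record whose horizontal component carries a strong 2 Hz resonance, the peak picker recovers 2 Hz.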

  6. Exploiting periodicity to extract the atrial activity in atrial arrhythmias

    NASA Astrophysics Data System (ADS)

    Llinares, Raul; Igual, Jorge

    2011-12-01

    Atrial fibrillation is one of the main arrhythmias of the elderly. The atrial and ventricular activities are decoupled during an atrial fibrillation episode, and very rapid and irregular wavelets replace the usual atrial P-wave of a normal sinus rhythm electrocardiogram (ECG). The estimation of these wavelets is a must for clinical analysis. We propose a new approach to this problem focused on the quasi-periodicity of these wavelets. Atrial activity is characterized by a main atrial rhythm in the interval 3-12 Hz. This enables us to formulate the problem either as the separation of the original sources from the instantaneous linear combination of them recorded in the ECG, or as the extraction of only the atrial component by exploiting the quasi-periodic feature of the atrial signal. This methodology requires a prior estimation of the main atrial period. We present two algorithms that separate and extract the atrial rhythm starting from a prior estimation of the main atrial frequency. The first is an algebraic method based on the maximization of a cost function that measures periodicity. The other is an adaptive algorithm that exploits the decorrelation of the atrial and other signals by diagonalizing the correlation matrices at multiple lags of the period of atrial activity. The algorithms are applied successfully to synthetic and real data. In simulated ECGs, the average correlation indices obtained were 0.811 and 0.847, respectively. In real ECGs, the accuracy of the results was validated using spectral and temporal parameters: the average peak frequencies and spectral concentrations obtained were 5.550 and 5.554 Hz and 56.3 and 54.4%, respectively, and the kurtosis was 0.266 and 0.695. For validation purposes, we compared the proposed algorithms with established methods, obtaining better results on both simulated and real registers.
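    The prior estimation of the main atrial period that both algorithms require can be sketched, under our own simplifying assumptions, as an autocorrelation peak search restricted to lags corresponding to the 3-12 Hz band; the paper's actual estimator may differ.

```python
import numpy as np

def main_period(x, fs, fmin=3.0, fmax=12.0):
    """Estimate the dominant (atrial) period in seconds as the lag of the
    autocorrelation peak whose rate falls inside the fmin-fmax band."""
    x = np.asarray(x, float) - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0 .. N-1
    lags = np.arange(len(ac))
    lo, hi = int(fs / fmax), int(np.ceil(fs / fmin))
    band = (lags >= max(lo, 1)) & (lags <= hi)
    return lags[band][np.argmax(ac[band])] / fs
```

For a clean 6 Hz oscillation sampled at 500 Hz, the estimate is close to 1/6 s, limited by the one-sample lag resolution.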

  7. Comparison of a 50 mL pycnometer and a 500 mL flask, EURAMET.M.FF.S8 (EURAMET 1297)

    NASA Astrophysics Data System (ADS)

    Mićić, Ljiljana; Batista, Elsa

    2018-01-01

    The purpose of this comparison was to compare the results of the participating laboratories in the calibration of a 50 mL pycnometer and a 500 mL volumetric flask using the gravimetric method. Laboratories were asked to determine the 'contained' volume of the 50 mL pycnometer and of the 500 mL flask at a reference temperature of 20 °C. The gravimetric method was used for both instruments by all laboratories. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  8. Chemical composition and antibacterial activity of selected essential oils and some of their main compounds.

    PubMed

    Wanner, Juergen; Schmidt, Erich; Bail, Stefanie; Jirovetz, Leopold; Buchbauer, Gerhard; Gochev, Velizar; Girova, Tanya; Atanasova, Teodora; Stoyanova, Albena

    2010-09-01

    The chemical composition of the essential oils of cabreuva (Myrocarpus fastigiatus Allemao, Fabaceae) from Brazil, cedarwood (Juniperus ashei, Cupressaceae) from Texas, juniper berries (Juniperus communis L., Cupressaceae) and myrrh (Commiphora myrrha (Nees) Engl., Burseraceae) was analyzed using GC/FID and GC/MS. The antimicrobial activity of these essential oils and some of their main compounds was tested against eleven different strains of Gram-positive and Gram-negative bacteria using agar diffusion and agar serial dilution methods. Animal and plant pathogens and food poisoning and spoilage bacteria were selected. The volatile oils exhibited considerable inhibitory effects against all tested organisms except Pseudomonas in both test methods. Higher activity was observed against Gram-positive strains than against Gram-negative bacteria. Cabreuva oil from Brazil showed similar results, but, in comparison with the other oils tested, only at higher oil concentrations.

  9. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    PubMed Central

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
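    A basic point-to-point ICP iteration (brute-force nearest neighbours plus the Kabsch/SVD rigid alignment step) is sketched below for reference. The paper's enhancements (octree hierarchy, early-warning mechanism, heuristic escape from local minima) are not reproduced; this is only the textbook core the enhanced method builds on.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD step, with a reflection guard on the determinant)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Basic point-to-point ICP with brute-force nearest neighbours;
    returns the composite transform from src onto dst."""
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every current point
        idx = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return best_rigid_transform(src, cur)
```

For small initial misalignments the plain scheme converges; the paper's additions address exactly the local-minimum failures that appear when it does not.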

  10. City image towards tourist attraction (case in Solo, Central Java) examining city image of solo as tourist attraction

    NASA Astrophysics Data System (ADS)

    Wiyana, T.; Putranto, T. S.; Zulkarnain, A.; Kusdiana, R. N.

    2018-03-01

    Affective and cognitive image are the two main factors that shape the destination image of Solo. The purpose of this research is to examine these two factors in relation to Solo's tourist attraction. The research method is quantitative; data were collected through observation and a survey, with a total of 113 respondents obtained by accidental sampling. The results distinguish cognitive and affective image: the cognitive image consists of culture, batik, the city tagline, and community, while the affective image consists of tradition, culinary, purposes, climate, and welcoming. The findings show that city image has only a weak correlation with the tourist attraction of Solo, meaning most tourists are not influenced by the city image when they choose Solo as one of their travel destinations. The differences between primary and minor image are also examined. The research implication is directed at the local government, to pursue continuous improvement particularly in the branding of Solo.

  11. Pressure Self-focusing Effect and Novel Methods for Increasing the Maximum Pressure in Traditional and Rotational Diamond Anvil Cells.

    PubMed

    Feng, Biao; Levitas, Valery I

    2017-04-21

    The main principles of producing a region near the center of a sample compressed in a diamond anvil cell (DAC) with a very high pressure gradient and, consequently, very high pressure are predicted theoretically. The revealed phenomenon of generating an extremely high pressure gradient is called the pressure self-focusing effect. Initial analytical predictions utilized a generalization of a simplified equilibrium equation; the results were then refined using our recent advanced model for elastoplastic material under high pressures in finite element method (FEM) simulations. The main points in producing the pressure self-focusing effect are to use beveled anvils and to reach a very thin sample thickness at the center. We find that the superposition of torsion in a rotational DAC (RDAC) drastically enhances the pressure self-focusing effect and allows one to reach the same pressure under a much lower force and smaller deformation of the anvils.

  12. The fossilized size distribution of the main asteroid belt

    NASA Astrophysics Data System (ADS)

    Bottke, William F.; Durda, Daniel D.; Nesvorný, David; Jedicke, Robert; Morbidelli, Alessandro; Vokrouhlický, David; Levison, Hal

    2005-05-01

    Planet formation models suggest the primordial main belt experienced a short but intense period of collisional evolution shortly after the formation of planetary embryos. This period is believed to have lasted until Jupiter reached its full size, when dynamical processes (e.g., sweeping resonances, excitation via planetary embryos) ejected most planetesimals from the main belt zone. The few planetesimals left behind continued to undergo comminution at a reduced rate until the present day. We investigated how this scenario affects the main belt size distribution over Solar System history using a collisional evolution model (CoEM) that accounts for these events. CoEM does not explicitly include results from dynamical models, but instead treats the unknown size of the primordial main belt and the nature/timing of its dynamical depletion using innovative but approximate methods. Model constraints were provided by the observed size frequency distribution of the asteroid belt, the observed population of asteroid families, the cratered surface of differentiated Asteroid (4) Vesta, and the relatively constant crater production rate of the Earth and Moon over the last 3 Gyr. Using CoEM, we solved for both the shape of the initial main belt size distribution after accretion and the asteroid disruption scaling law Q*_D. In contrast to previous efforts, we find our derived Q*_D function is very similar to results produced by numerical hydrocode simulations of asteroid impacts. Our best fit results suggest the asteroid belt experienced as much comminution over its early history as it has since it reached its low-mass state approximately 3.9-4.5 Ga. These results suggest the main belt's wavy-shaped size-frequency distribution is a "fossil" from this violent early epoch. We find that most asteroids with diameter D ≳ 120 km are primordial, with their physical properties likely determined during the accretion epoch. Conversely, most smaller asteroids are byproducts of fragmentation events. The observed changes in the asteroid spin rate and lightcurve distributions near D ~ 100-120 km are likely a byproduct of this difference. Estimates based on our results imply the primordial main belt population (in the form of D < 1000 km bodies) was 150-250 times larger than it is today, in agreement with recent dynamical simulations.

  13. Variable magnetic field (VMF) effect on the heat transfer of a half-annulus cavity filled by Fe3O4-water nanofluid under constant heat flux

    NASA Astrophysics Data System (ADS)

    Hatami, M.; Zhou, J.; Geng, J.; Jing, D.

    2018-04-01

    In this paper, the effect of a variable magnetic field (VMF) on the natural convection heat transfer of Fe3O4-water nanofluid in a half-annulus cavity is studied by the finite element method using the FlexPDE commercial code. After deriving the governing equations and solving the problem with the defined boundary conditions, the effects of three main parameters (the Hartmann number (Ha), the nanoparticle volume fraction (φ) and the Rayleigh number (Ra)) on the local and average Nusselt numbers of the inner wall are investigated. As a main outcome, the results confirm that at low Eckert numbers, increasing the Hartmann number decreases the Nusselt number, due to the Lorentz force resulting from the presence of a stronger magnetic field.

  14. A two-dimensional, iterative solution for the jet flap

    NASA Technical Reports Server (NTRS)

    Herold, A. C.

    1973-01-01

A solution is presented for the jet-flapped wing in two dimensions. The main flow is assumed to be inviscid and incompressible. The flow inside the jet is considered irrotational, and the upper and lower boundaries between the jet and free stream are assumed to behave as vortex sheets which allow no mixing. The solution is found to be in satisfactory agreement with two-dimensional experimental results and other theoretical work for intermediate values of momentum coefficient, but the regions of agreement vary with jet exit angle. At small values of momentum coefficient, the trajectory for the jet, as computed by this method, has more penetration than that of other available data, while at high values of momentum coefficient this solution results in less penetration of the jet into the main flow.

  15. A hypersonic aeroheating calculation method based on inviscid outer edge of boundary layer parameters

    NASA Astrophysics Data System (ADS)

    Meng, ZhuXuan; Fan, Hu; Peng, Ke; Zhang, WeiHua; Yang, HuiXin

    2016-12-01

This article presents a rapid and accurate aeroheating calculation method for hypersonic vehicles. The main innovation is combining the accuracy of a numerical method with the efficiency of an engineering method, which makes aeroheating simulation both more precise and faster. Based on Prandtl boundary layer theory, the entire flow field is divided at the outer edge of the boundary layer into inviscid and viscid flow. The parameters at the outer edge of the boundary layer are calculated numerically under the assumption of inviscid flow. The thermodynamic parameters of constant-volume specific heat, constant-pressure specific heat, and the specific heat ratio are calculated, the streamlines on the vehicle surface are derived, and the heat flux is then obtained. The results for the double cone show that at 0° and 10° angle of attack, the aeroheating calculation method based on inviscid outer-edge boundary layer parameters reproduces the experimental data better than the engineering method. The proposed method's results for the flight vehicle also reproduce the viscid numerical results well. Hence, this method provides a promising way to avoid the high cost of numerical calculation while improving precision.

  16. Path similarity skeleton graph matching.

    PubMed

    Bai, Xiang; Latecki, Longin Jan

    2008-07-01

This paper presents a novel framework for shape recognition based on object silhouettes. The main idea is to match skeleton graphs by comparing the shortest paths between skeleton endpoints. In contrast to typical tree or graph matching methods, we completely ignore the topological graph structure. Our approach is motivated by the fact that visually similar skeleton graphs may have completely different topological structures. The proposed comparison of shortest paths between endpoints of skeleton graphs yields correct matching results in such cases. The skeletons are pruned by contour partitioning with Discrete Curve Evolution, which implies that the endpoints of skeleton branches correspond to visual parts of the objects. The experimental results demonstrate that our method is able to produce correct results in the presence of articulations, stretching, and occlusion.

  17. Wear Resistance of Aluminum Matrix Composites Reinforced with Al2O3 Particles After Multiple Remelting

    NASA Astrophysics Data System (ADS)

    Klasik, Adam; Pietrzak, Krystyna; Makowska, Katarzyna; Sobczak, Jerzy; Rudnik, Dariusz; Wojciechowski, Andrzej

    2016-08-01

Based on previous results, commercial composites of A359 (AlSi9Mg) alloy reinforced with 22 vol.% Al2O3 particles were submitted to multiple remelting by means of gravity casting and squeeze-casting procedures. The studies focused on tribological tests, x-ray phase analyses, and microstructural examinations. More promising results were obtained for the squeeze-casting method, mainly because of the reduction of negative microstructural effects such as shrinkage porosity and other microstructural defects and discontinuities. The results showed that direct remelting may be treated as an economically well-founded alternative to other recycling processes. It was underlined that the multiple remelting method must be analyzed separately for each material.

  18. [Influence of sample surface roughness on mathematical model of NIR quantitative analysis of wood density].

    PubMed

    Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun

    2007-09-01

Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques consist of regression methods used to build prediction models; however, the accuracy of the analysis results is affected by many factors. In the present paper, the influence of different sample roughnesses on the mathematical model of NIR quantitative analysis of wood density was studied. The experiments showed that if the roughness of the predicted samples was consistent with that of the calibration samples, the results were good; otherwise the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughnesses, and its prediction ability was much better than that of the single-roughness model.

  19. Problems with German Science Education

    NASA Astrophysics Data System (ADS)

    Riess, Falk

The main problems of science (especially physics) teaching in Germany are students' lack of interest and motivation in the subject, their poor understanding of scientific concepts, ideas, methods, and results, and their lack of comprehension of the social, political, and epistemological role of science. These circumstances result in a growing 'scientific illiteracy' of the population and a decline in democratic quality concerning decision-making processes about scientific and technological projects. One means of improving this situation lies in the use of history and philosophy of science in science teaching. School science curricula and textbooks neglect almost completely the importance of history and philosophy of science. In this paper, the main empirical results concerning motivation and knowledge are given. Some examples from science curricula and textbooks are presented, and some of the few reform projects in Germany are listed. As a consequence, a compensatory program is proposed in order to create the prerequisites for raising science education in Germany to an international standard.

  20. Dry cleaning of Turkish coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cicek, T.

    2008-07-01

This study dealt with the upgrading of two different types of Turkish coal by a dry cleaning method using a modified air table. The industrial-size air table used in this study is a device for removing stones from agricultural products. The study investigates the technical and economic feasibility of the dry cleaning method, which had never before been applied to coals in Turkey. The main objectives are to apply a dry cleaning method to Turkish coals designated for power generation without generating environmental pollution, while ensuring a stable coal quality. The size fractions of 5-8, 3-5, and 1-3 mm of the investigated coals were used in the upgrading experiments. Satisfactory results were achieved with coal from the Soma region, whereas the upgrading results for Hsamlar coal were objectionable for the coarser size fractions. However, acceptable results were obtained for the 1-3 mm size fraction of Hsamlar coal.

  1. Evaluation of ion release, cytotoxicity, and platelet adhesion of electrochemical anodized 316 L stainless steel cardiovascular stents.

    PubMed

    Díaz, M; Sevilla, P; Galán, A M; Escolar, G; Engel, E; Gil, F J

    2008-11-01

316L stainless steel is one of the most widely used metallic materials in orthopedic prostheses, osteosynthesis plates, and cardiovascular stents. One of the main problems this material presents is nickel and chromium release, especially Ni ion release, which provokes allergy in a high number of patients. Recent experimental applications in vitro and in vivo seem to indicate that thickening the native oxide of the stainless steel strongly reinforces the biological response and reduces ion release. It is possible to grow the natural chromium oxide layer by electrolytic methods such as anodization. In this study, two main anodization methods for growing chromium oxide on 316L stainless steel have been evaluated. Nickel and chromium ion release in human blood at 37 degrees C was measured after 1, 6, 11, and 15 days by means of atomic absorption in a graphite furnace (GAAF). Moreover, cytocompatibility tests were carried out, and perfusion experiments were performed to evaluate platelet interaction with the material morphometrically and to explore its potential thrombogenicity. The results showed good cytocompatibility between the material and the osteoblast-like cells. However, these anodization methods released between 2 and 10 times more nickel and chromium than the original stainless steel, depending on the method used. Besides, the anodized samples showed an increase in the percentage of surface covered by platelets. Consequently, the anodization methods studied do not improve the long-term behavior of the stainless steel for application in cardiovascular stents.

  2. The effect of repeated firings on the color change of dental ceramics using different glazing methods

    PubMed Central

    Yılmaz, Kerem; Ozturk, Caner

    2014-01-01

    PURPOSE Surface color is one of the main criteria to obtain an ideal esthetic. Many factors such as the type of the material, surface specifications, number of firings, firing temperature and thickness of the porcelain are all important to provide an unchanged surface color in dental ceramics. The aim of this study was to evaluate the color changes in dental ceramics according to the material type and glazing methods, during the multiple firings. MATERIALS AND METHODS Three different types of dental ceramics (IPS Classical metal ceramic, Empress Esthetic and Empress 2 ceramics) were used in the study. Porcelains were evaluated under five main groups according to glaze and natural glaze methods. Color changes (ΔE) and changes in color parameters (ΔL, Δa, Δb) were determined using colorimeter during the control, the first, third, fifth, and seventh firings. The statistical analysis of the results was performed using ANOVA and Tukey test. RESULTS The color changes which occurred upon material-method-firing interaction were statistically significant (P<.05). ΔE, ΔL, Δa and Δb values also demonstrated a negative trend. The MC-G group was less affected in terms of color changes compared to other groups. In all-ceramic specimens, the surface color was significantly affected by multiple firings. CONCLUSION Firing detrimentally affected the structure of the porcelain surface and hence caused fading of the color and prominence of yellow and red characters. Compressible all-ceramics were remarkably affected by repeated firings due to their crystalline structure. PMID:25551001

  3. Sexual Risk Taking Among Transgender Male-to-Female Youths With Different Partner Types

    PubMed Central

    Garofalo, Robert; Harris, D. Robert; Belzer, Marvin

    2010-01-01

    Objectives. We examined associations between partner types (categorized as main, casual, or commercial) and sexual risk behaviors of sexually active male-to-female (transgender female) youths. Methods. We interviewed 120 transgender female youths aged 15 to 24 years recruited from clinics, community-based agencies, club and bar venues, referrals, and the streets of Los Angeles, California, and Chicago, Illinois. Results. Sexual risk behaviors varied by partner type. Transgender female youths were less likely to use condoms during receptive anal intercourse with their main partner and were less likely to use condoms with a main partner while under the influence of substances. Youth participants were also more likely to talk to a main partner about their HIV status. Our data identified no demographic or social factors that predicted condom use during receptive anal intercourse by partner type. Conclusions. Research and interventions that focus on understanding and mitigating risk behaviors by partner type, especially those that tackle the unique risks incurred with main partners, may make important contributions to risk reduction among transgender female youths. PMID:20622176

  4. A strategy for optical properties investigation in ABe2BO3F2 (A=K, Rb, Cs) using finite field methods

    NASA Astrophysics Data System (ADS)

    Mushahali, Hahaer; Mu, Baoxia; Wang, Qian; Mamat, Mamatrishat; Cao, Haibin; Yang, Guang; Jing, Qun

    2018-07-01

The finite-field methods can be used to intuitively learn about the optical response and find out the atomic contributions to the birefringence and SHG tensors. In this paper, the linear and second-order nonlinear optical properties of the ABe2BO3F2 family (A = K, Rb, Cs) of compounds are investigated using the finite-field methods within different exchange-correlation functionals. The results show that the obtained birefringence and SHG tensors are in good agreement with the experimental values. The atomic contribution to the total birefringence was further investigated using the variation of the atomic charges and the Born effective charges. The results show that the boron-oxygen groups make the main contribution to the anisotropic birefringence.

  5. Image processing based detection of lung cancer on CT scan images

    NASA Astrophysics Data System (ADS)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

In this paper, we implement and analyze image processing methods for the detection of lung cancer. Image processing techniques are widely used in several medical problems for image enhancement in the detection phase, to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation, an intermediate-level task in image processing. Marker-controlled watershed and region-growing approaches are used to segment the CT scan images. The detection phase comprises image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results show the effectiveness of our approach: the best approach for main feature detection is the watershed-with-masking method, which has high accuracy and is robust.
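The marker/seed-based segmentation step this record describes can be illustrated with a minimal seeded region-growing pass in pure NumPy. This is a simplified stand-in for marker-controlled watershed, not the authors' implementation; the synthetic image, seed, and tolerance are invented for the sketch:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    intensity lies within `tol` of the seed intensity."""
    h, w = image.shape
    ref = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < h and 0 <= cc < w and not mask[rr, cc]
                    and abs(image[rr, cc] - ref) <= tol):
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask

# Toy "CT slice": a bright 3x3 nodule on a dark background.
img = np.zeros((7, 7))
img[2:5, 2:5] = 1.0
mask = region_grow(img, seed=(3, 3), tol=0.1)
print(mask.sum())  # 9 pixels segmented
```

A real pipeline would run this (or a watershed) on the Gabor-enhanced image, with markers placed inside the lung field rather than a hand-picked seed.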

  6. Non-isothermal crystallization kinetics of eucalyptus lignosulfonate/polyvinyl alcohol composite.

    PubMed

    Ye, De-Zhan; Zhang, Xi; Gu, Shaojin; Zhou, Yingshan; Xu, Weilin

    2017-04-01

The non-isothermal crystallization kinetics of polyvinyl alcohol (PVA) mixed with eucalyptus calcium lignosulfonate (HLS) as a bio-based thermal stabilizer were systematically analyzed based on the Jeziorny model, the Ozawa equation, and the Mo method. The results indicated that the entire crystallization process took place through two main stages, involving primary and secondary crystallization. The Mo method described the non-isothermal crystallization behavior well. Based on the crystallization half-time, the k c value in the Jeziorny model, the F(T) value in the Mo method, and the crystallization activation energy, it was concluded that low loadings of HLS accelerated the PVA crystallization process, whereas the growth rate of PVA crystallization was impeded at high HLS content. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. [Analysis of phonosurgical methods of treatment in spasmodic dysphonia].

    PubMed

    Kosztyła-Hojna, Bożena; Berger, Greta; Zdrojkowski, Maciej

    2017-02-20

Spasmodic dysphonia (SD) is a rather rare voice disorder, most often seen in women aged 40-50. The disease is caused by deep emotional and neurological disorders of the extrapyramidal system. Two main clinical forms of SD are distinguished: adductor spasmodic dysphonia (about 90% of cases) and abductor spasmodic dysphonia (roughly 10%). Conservative therapy does not always yield sufficient effects. Botulinum toxin type A injections into the thyroarytenoid muscle are also used in therapy, though the results are temporary and reversible. Phonosurgical methods include type II thyroplasty according to Isshiki and thyroarytenoid muscle myectomy (TAM). The aim of the work is to evaluate the results of conservative and phonosurgical treatment of SD. Spasmodic dysphonia markedly restricts patients' communication and their social and occupational relations.

  8. Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.

    PubMed

    Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng

    2018-05-01

    Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.

  9. Determination of depth and size of defects in carbon-fiber-reinforced plastic with different methods of pulse thermography

    NASA Astrophysics Data System (ADS)

    Popow, Vitalij; Gurka, Martin

    2018-03-01

The main advantage of high-performance composite material is its exceptional light-weight capability due to individual tailoring of the anisotropic fiber lay-up. Its main drawback is a brittle and complex failure behavior under dynamic loading, which requires extensive quality assurance measures and short maintenance intervals. For this reason, efficient test methods are required that not only generate good and reliable results, but are also simple to handle, allow rapid adaptation to different test situations, and need only short measuring times. In particular, knowledge of the size and position of a defect is necessary to decide on the acceptance or rejection of a structure under investigation. As a promising method for contactless in-line and off-line inspection, we used pulsed thermography. To determine the depth of the defects we used the logarithmic peak second derivative, a widely accepted method. Alternatively, an analytical model describing the adiabatic heating of a solid plate by an instantaneous pulse was fitted directly to the measurement data. For the determination of defect size, four different approaches were investigated and compared with the exact values. The measurements were done on continuous carbon-fiber-reinforced materials.

  10. High-resolution stress measurements for microsystem and semiconductor applications

    NASA Astrophysics Data System (ADS)

    Vogel, Dietmar; Keller, Juergen; Michel, Bernd

    2006-04-01

Research results obtained for local stress determination on micro- and nanotechnology components are summarized, addressing the concern of controlling stresses introduced into sensors, MEMS, and electronic devices during different micromachining processes. The method is based on deformation measurement options made available inside focused ion beam (FIB) equipment. When material is removed locally by ion beam milling, existing residual stresses lead to deformation fields around the milled feature. Digital image correlation techniques are used to extract deformation values from micrographs captured before and after milling. In the paper, two main milling features have been analyzed: through-hole and through-slit milling. Analytical solutions for the stress-release fields of in-plane stresses have been derived and compared to the respective experimental findings. Their good agreement allows a method to be established for the determination of residual stress values, which is demonstrated for thin membranes manufactured by silicon microtechnology. Some emphasis is placed on the elimination of the main error sources for stress determination, such as rigid-body object displacements and rotations due to drifts of experimental conditions under FIB imaging. To illustrate potential application areas of the method, the suppression of residual stress by ion implantation is also evaluated and reported here.

  11. FIB-based measurement of local residual stresses on microsystems

    NASA Astrophysics Data System (ADS)

    Vogel, Dietmar; Sabate, Neus; Gollhardt, Astrid; Keller, Juergen; Auersperg, Juergen; Michel, Bernd

    2006-03-01

The paper comprises research results obtained for stress determination on micro- and nanotechnology components, addressing the concern of controlling stresses introduced into sensors, MEMS, and electronic devices during different micromachining processes. The method is based on deformation measurement options made available inside focused ion beam (FIB) equipment. When material is removed locally by ion beam milling, existing residual stresses lead to deformation fields around the milled feature. Digital image correlation techniques are used to extract deformation values from micrographs captured before and after milling. In the paper, two main milling features have been analyzed: through-hole and through-slit milling. Analytical solutions for the stress-release fields of in-plane stresses have been derived and compared to the respective experimental findings. Their good agreement allows a method to be established for the determination of residual stress values, which is demonstrated for thin membranes manufactured by silicon microtechnology. Some emphasis is placed on the elimination of the main error sources for stress determination, such as rigid-body object displacements and rotations due to drifts of experimental conditions under FIB imaging. To illustrate potential application areas of the method, the suppression of residual stress by ion implantation is also evaluated and reported here.

  12. Analysis of 1263 deaths in four general practices.

    PubMed Central

    Holden, J; O'Donnell, S; Brindley, J; Miles, L

    1998-01-01

BACKGROUND: The death of a patient is a significant event that occurs often enough in general practice for it to have the potential to tell us much about the care we provide. There are few large series in the literature and we still know little about the collaborative use of this outcome measure. AIM: To determine the pattern of deaths and potentially preventable factors in our practices. METHOD: We completed a standard data collection form after each death in four general practices over a 40-month period. The results were discussed at quarterly meetings. RESULTS: A total of 1263 deaths occurred among our registered patients during the period of the audit. Preventable factors contributing to deaths were considered to be attributable to: patients (40%): mainly cigarette smoking, poor compliance, and alcohol problems; general practice teams (5%): mainly delayed referral, diagnosis and treatment, and failure to prescribe aspirin to patients with vascular disease; hospitals (6%): mainly delayed diagnosis and perceived treatment problems; the environment (3%): mainly falls, principally resulting in fractured neck of femur. CONCLUSION: A simple audit of deaths along the lines that we describe gives important information about the care provided by general practice teams and those in hospital practice. It has educational value and is a source of ideas for service improvement and further study, particularly when carried out over several years. PMID:9800400

  13. Evidential analysis of difference images for change detection of multitemporal remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Yin; Peng, Lijuan; Cremers, Armin B.

    2018-03-01

In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer theory of evidence (DST). In most unsupervised change detection methods, the probability distribution of the difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we first develop an evidence-theory-based EM method (EEM) which incorporates spatial contextual information into EM by iteratively fusing the belief assignments of neighboring pixels into the central pixel. Second, an evidential labeling method in the maximum a posteriori (MAP) sense is proposed to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of the difference image. Then it iteratively fuses class-conditional information and spatial contextual information, updating the labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
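The neighborhood fusion at the heart of this approach relies on Dempster's rule of combination. A minimal sketch for a two-hypothesis frame {change, no change} follows; the mass values are invented for illustration, and the paper's actual EEM iterates such fusion over all neighbors of each pixel:

```python
def dempster_combine(m1, m2):
    """Combine two basic belief assignments with Dempster's rule.
    Masses are dicts keyed by frozensets over the frame of discernment;
    conflicting mass (empty intersections) is redistributed by
    normalisation."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict  # non-conflicting mass used for normalisation
    return {s: v / k for s, v in combined.items()}

# Frame: C = change, N = no change, CN = ignorance (either).
C, N, CN = frozenset('C'), frozenset('N'), frozenset('CN')
center = {C: 0.6, N: 0.3, CN: 0.1}      # belief at the central pixel
neighbour = {C: 0.5, N: 0.2, CN: 0.3}   # belief at one neighbour
fused = dempster_combine(center, neighbour)
```

Here the fused mass on "change" rises above either input's, which is exactly the smoothing effect that suppresses isolated spurious alarms.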

  14. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has received more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and recognition, focusing on the theory and key technology of the various preprocessing methods in the face detection process and, using the KPCA method, on the recognition results obtained under different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess the face images with erosion and dilation (opening and closing operations) and an illumination compensation method, and then apply a face recognition method based on kernel principal component analysis (KPCA); the experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the nonlinear feature extraction of the kernel-based PCA algorithm makes the extracted features represent the original image information better and yields a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can lead to different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the power of the polynomial kernel function can affect the recognition result.
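Kernel PCA with a polynomial kernel, including the polynomial power whose effect the record mentions, can be sketched directly in NumPy. This is an illustrative implementation, not the paper's code, and the random data stand in for face-image feature vectors:

```python
import numpy as np

def kernel_pca(X, n_components, degree=2):
    """Minimal kernel PCA with polynomial kernel k(x, y) = (x.y + 1)^degree.
    Returns projections of the rows of X onto the leading kernel
    principal components."""
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree
    # Double-centre the kernel matrix in feature space.
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]   # keep the largest
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # normalise coefficients
    return Kc @ alphas

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))   # 20 stand-in "face" feature vectors
Z = kernel_pca(X, n_components=3, degree=2)
print(Z.shape)  # (20, 3)
```

Raising `degree` changes the implicit feature space, which is one concrete way the "power of the polynomial kernel function" can shift the recognition rate.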

  15. Method for appraising model validity of randomised controlled trials of homeopathic treatment: multi-rater concordance study

    PubMed Central

    2012-01-01

    Background A method for assessing the model validity of randomised controlled trials of homeopathy is needed. To date, only conventional standards for assessing intrinsic bias (internal validity) of trials have been invoked, with little recognition of the special characteristics of homeopathy. We aimed to identify relevant judgmental domains to use in assessing the model validity of homeopathic treatment (MVHT). We define MVHT as the extent to which a homeopathic intervention and the main measure of its outcome, as implemented in a randomised controlled trial (RCT), reflect 'state-of-the-art' homeopathic practice. Methods Using an iterative process, an international group of experts developed a set of six judgmental domains, with associated descriptive criteria. The domains address: (I) the rationale for the choice of the particular homeopathic intervention; (II) the homeopathic principles reflected in the intervention; (III) the extent of homeopathic practitioner input; (IV) the nature of the main outcome measure; (V) the capability of the main outcome measure to detect change; (VI) the length of follow-up to the endpoint of the study. Six papers reporting RCTs of homeopathy of varying design were randomly selected from the literature. A standard form was used to record each assessor's independent response per domain, using the optional verdicts 'Yes', 'Unclear', 'No'. Concordance among the eight verdicts per domain, across all six papers, was evaluated using the kappa (κ) statistic. Results The six judgmental domains enabled MVHT to be assessed with 'fair' to 'almost perfect' concordance in each case. For the six RCTs examined, the method allowed MVHT to be classified overall as 'acceptable' in three, 'unclear' in two, and 'inadequate' in one. Conclusion Future systematic reviews of RCTs in homeopathy should adopt the MVHT method as part of a complete appraisal of trial validity. PMID:22510227
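The concordance measure used in this study is a multi-rater kappa over eight assessors; the simpler two-rater Cohen's κ below shows the underlying chance-corrected agreement idea on the study's 'Yes'/'Unclear'/'No' verdict scale (the verdict sequences are invented for illustration):

```python
def cohen_kappa(r1, r2, categories=('Yes', 'Unclear', 'No')):
    """Two-rater Cohen's kappa: observed agreement corrected for the
    agreement expected by chance from each rater's marginal frequencies."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n            # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)          # chance agreement
             for c in categories)
    return (po - pe) / (1.0 - pe)

# Hypothetical verdicts from two assessors on six trial domains.
a = ['Yes', 'Yes', 'No', 'Unclear', 'Yes', 'No']
b = ['Yes', 'No', 'No', 'Unclear', 'Yes', 'Yes']
kappa = cohen_kappa(a, b)
```

Multi-rater concordance, as used in the study, generalises this idea (e.g. Fleiss' kappa), but the chance-correction structure is the same.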

  16. Current Methods to Define Metabolic Tumor Volume in Positron Emission Tomography: Which One is Better?

    PubMed

    Im, Hyung-Jun; Bradshaw, Tyler; Solaiyappan, Meiyappan; Cho, Steve Y

    2018-02-01

Numerous methods to segment tumors using 18F-fluorodeoxyglucose positron emission tomography (FDG PET) have been introduced. Metabolic tumor volume (MTV) refers to the metabolically active volume of the tumor segmented using FDG PET, and has been shown to be useful in predicting patient outcome and in assessing treatment response. Tumor segmentation using FDG PET also has useful applications in radiotherapy treatment planning. Despite extensive research on MTV showing promising results, MTV is not yet used in standard clinical practice, mainly because there is no consensus on the optimal method to segment tumors in FDG PET images. In this review, we discuss currently available methods to measure MTV using FDG PET, and assess the advantages and disadvantages of each method.
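Among the simplest segmentation methods such reviews compare is a fixed-percentage-of-SUVmax threshold (40% is a common choice). A minimal sketch with a toy volume and invented numbers:

```python
import numpy as np

def metabolic_tumor_volume(suv, voxel_volume_ml, frac=0.40):
    """Segment the tumour as all voxels at or above a fixed fraction of
    SUVmax (the common 40%-of-SUVmax rule) and return the metabolic
    tumour volume in ml along with the binary mask."""
    threshold = frac * suv.max()
    mask = suv >= threshold
    return mask.sum() * voxel_volume_ml, mask

# Toy 4x4x4 "PET volume": an 8-voxel hot lesion (SUV 10) in background SUV 1.
suv = np.ones((4, 4, 4))
suv[1:3, 1:3, 1:3] = 10.0
mtv, mask = metabolic_tumor_volume(suv, voxel_volume_ml=0.5)
print(mtv)  # 4.0 ml (8 voxels x 0.5 ml each)
```

The lack-of-consensus problem is visible even here: changing `frac`, or switching to an absolute SUV cutoff or an adaptive/gradient method, yields a different MTV for the same scan.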

  17. Long-Term In-Service Monitoring and Performance Assessment of the Main Cables of Long-Span Suspension Bridges

    PubMed Central

    Deng, Yang; Liu, Yang; Chen, Suren

    2017-01-01

    Despite the recent developments in structural health monitoring, there remain great challenges for accurately, conveniently, and economically assessing the in-service performance of the main cables for long-span suspension bridges. A long-term structural health monitoring technique is developed to measure the tension force with a conventional sensing technology and further provide the in-service performance assessment strategy of the main cable. The monitoring system adopts conventional vibrating strings transducers to monitor the tension forces of separate cable strands of the main cable in the anchor span. The performance evaluation of the main cable is conducted based on the collected health monitoring data: (1) the measured strand forces are used to derive the overall tension force of a main cable, which is further translated into load bearing capacity assessment using the concept of safety factor; and (2) the proposed technique can also evaluate the uniformity of tension forces from different cable strands. The assessment of uniformity of strand forces of a main cable offers critical information in terms of potential risks of partial damage and performance deterioration of the main cable. The results suggest the proposed low-cost monitoring system is an option to provide approximate estimation of tension forces of main cables for suspension bridges. With the long-term monitoring data, the proposed monitoring-based evaluation methods can further provide critical information to assess the safety and serviceability performance of main cables. PMID:28621743
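The two evaluation steps described above (overall tension translated into a safety factor, plus uniformity of strand forces) reduce to simple arithmetic. A sketch with invented strand readings, not data from the paper:

```python
def cable_assessment(strand_forces_kn, ultimate_capacity_kn):
    """Aggregate monitored strand tensions into the overall cable force,
    a safety factor (capacity / demand), and a simple uniformity measure
    (maximum relative deviation of a strand force from the mean)."""
    total = sum(strand_forces_kn)
    mean = total / len(strand_forces_kn)
    safety_factor = ultimate_capacity_kn / total
    uniformity = max(abs(f - mean) / mean for f in strand_forces_kn)
    return total, safety_factor, uniformity

# Hypothetical anchor-span strand readings (kN) and cable capacity.
forces = [1000.0, 1020.0, 980.0, 1000.0]
total, sf, uni = cable_assessment(forces, ultimate_capacity_kn=10000.0)
print(total, sf, uni)  # 4000.0 2.5 0.02
```

A large `uni` value flags non-uniform load sharing among strands, which is the paper's indicator of potential partial damage or deterioration.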

  18. Long-Term In-Service Monitoring and Performance Assessment of the Main Cables of Long-Span Suspension Bridges.

    PubMed

    Deng, Yang; Liu, Yang; Chen, Suren

    2017-06-16

    Despite the recent developments in structural health monitoring, there remain great challenges for accurately, conveniently, and economically assessing the in-service performance of the main cables of long-span suspension bridges. A long-term structural health monitoring technique is developed to measure the tension force with a conventional sensing technology and further provide an in-service performance assessment strategy for the main cable. The monitoring system adopts conventional vibrating string transducers to monitor the tension forces of the separate cable strands of the main cable in the anchor span. The performance evaluation of the main cable is conducted based on the collected health monitoring data: (1) the measured strand forces are used to derive the overall tension force of a main cable, which is further translated into a load bearing capacity assessment using the concept of safety factor; and (2) the proposed technique can also evaluate the uniformity of tension forces across different cable strands. The assessment of uniformity of strand forces of a main cable offers critical information on potential risks of partial damage and performance deterioration of the main cable. The results suggest the proposed low-cost monitoring system is an option for providing approximate estimates of the tension forces of main cables of suspension bridges. With the long-term monitoring data, the proposed monitoring-based evaluation methods can further provide critical information to assess the safety and serviceability performance of main cables.
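The two evaluation steps in the abstract — (1) summing strand forces into an overall cable tension and a safety factor, and (2) checking the uniformity of strand forces — can be sketched as follows. The strand forces, the design capacity, and the use of the coefficient of variation as the uniformity measure are all assumptions for illustration; the paper's actual values and uniformity metric may differ.

```python
# Sketch of monitoring-based main-cable evaluation under assumed numbers.
import statistics

def assess_main_cable(strand_forces_kn, design_capacity_kn):
    """Return (overall tension, safety factor, strand-force coefficient of variation)."""
    total = sum(strand_forces_kn)                      # (1) overall cable tension
    safety_factor = design_capacity_kn / total         # load-bearing margin
    mean = statistics.mean(strand_forces_kn)
    cov = statistics.stdev(strand_forces_kn) / mean    # (2) uniformity of strand forces
    return total, safety_factor, cov

forces = [510.0, 495.0, 505.0, 490.0]   # assumed strand forces from transducers, kN
total, sf, cov = assess_main_cable(forces, design_capacity_kn=5000.0)
# total = 2000.0 kN, safety factor = 2.5; a small cov indicates uniform strands
```

A rising coefficient of variation over the long-term record would flag the partial damage or deterioration risk the abstract mentions, even while the overall safety factor remains acceptable.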

  19. Planting pattern and weed control method influence on yield production of corn (Zea mays L.)

    NASA Astrophysics Data System (ADS)

    Purba, E.; Nasution, D. P.

    2018-02-01

    A field experiment was carried out to evaluate the influence of planting pattern and weed control method on the growth and yield of corn. The effects of planting pattern and weed control method were studied in a split-plot design. The main plots were three planting patterns: single row (25 cm x 60 cm), double row (25 cm x 25 cm x 60 cm), and triangle row (25 cm x 25 cm x 25 cm). The subplots were five weed control methods, namely weed free throughout the growing season, hand weeding, spraying with glyphosate, spraying with paraquat, and no weeding. Results showed that neither planting pattern nor weed control method affected the growth of corn. However, both planting pattern and weed control method significantly affected yield. Yields from the double row and triangle planting patterns were 14% and 41% higher, respectively, than that of the single row pattern. The triangle planting pattern combined with any weed control method produced the highest corn yield.
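The split-plot layout described above — planting patterns randomized to main plots, weed control methods randomized to subplots within each main plot — can be sketched as a layout generator. The block count and RNG seed are assumptions for illustration; the abstract does not state how many replicate blocks were used.

```python
# Sketch of a split-plot randomization matching the design in the abstract.
import random

PATTERNS = ["single row", "double row", "triangle row"]          # main-plot factor
WEED_CONTROL = ["weed free", "hand weeding", "glyphosate",
                "paraquat", "no weeding"]                        # subplot factor

def split_plot_layout(n_blocks=3, seed=42):
    """Return (block, pattern, subplot order) tuples: patterns randomized
    within each block, weed-control methods randomized within each main plot."""
    rng = random.Random(seed)
    layout = []
    for block in range(1, n_blocks + 1):
        mains = PATTERNS[:]
        rng.shuffle(mains)                 # randomize main plots per block
        for pattern in mains:
            subs = WEED_CONTROL[:]
            rng.shuffle(subs)              # randomize subplots per main plot
            layout.append((block, pattern, subs))
    return layout

plots = split_plot_layout()  # 3 blocks x 3 main plots, each with 5 subplots
```

Split-plot designs fit this experiment because planting pattern is hard to vary on small plots, while spray treatments are easy to apply at the subplot level.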

  20. Does Subjective Rating Reflect Behavioural Coding? Personality in 2 Month-Old Dog Puppies: An Open-Field Test and Adjective-Based Questionnaire.

    PubMed

    Barnard, Shanis; Marshall-Pescini, Sarah; Passalacqua, Chiara; Beghelli, Valentina; Capra, Alexa; Normando, Simona; Pelosi, Annalisa; Valsecchi, Paola

    2016-01-01

    A number of studies have recently investigated personality traits in non-human species, with the dog gaining popularity as a subject species for research in this area. Recent research has shown the consistency of personality traits across both context and time for adult dogs, both when using questionnaire-based methods of investigation and behavioural analyses of the dogs' behaviour. However, only a few studies have assessed the correspondence between these two methods, with results varying considerably across studies. Furthermore, most studies have focused on adult dogs, despite the fact that an understanding of personality traits in young puppies may be important for research focusing on the genetic basis of personality traits. In the current study, we sought to evaluate the correspondence between a questionnaire-based method and in-depth analyses of the behaviour of 2-month-old puppies in an open-field test in which a number of both social and non-social stimuli were presented to the subjects. We further evaluated the consistency of traits over time by re-testing a subset of puppies. The correspondence between methods was high, and test-retest consistency (for the main trait) was also good using both evaluation methods. Results showed clear factors referring to the two main personality traits 'extroversion' (i.e., the enthusiastic, exuberant approach to the stimuli) and 'neuroticism' (i.e., the more cautious and fearful approach to the stimuli), potentially similar to the shyness-boldness dimension found in previous studies. Furthermore, both methods identified an 'amicability' dimension, expressing the positive interactions the pups directed at the human stranger, and a 'reservedness' dimension, which identified pups who largely chose not to interact with the stimuli and were defined as quiet and not nosey in the questionnaire.
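The correspondence between the two methods is, at its core, a correlation between trait scores obtained by questionnaire and scores coded from the open-field test. A minimal sketch, with invented data (the real study correlated factor scores derived from many pups and many items):

```python
# Sketch of a questionnaire-vs-behavioural-coding correspondence check
# using a plain Pearson correlation; all data here are invented.
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

questionnaire = [3.1, 4.0, 2.2, 3.6, 1.8]   # assumed 'extroversion' adjective ratings
behavioural = [12, 15, 7, 14, 5]            # assumed coded approach counts per pup
r = pearson_r(questionnaire, behavioural)   # high r = methods agree on the trait
```

A high correlation like this, computed per trait, is what "the correspondence between methods was high" summarizes; a low correlation for some trait would indicate the questionnaire and the behavioural coding capture different constructs.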
