Sample records for method obtained results

  1. Assessment of a solid-phase reagent for urinary specific gravity determination.

    PubMed

    Chu, S Y; Sparks, D

    1984-02-01

    We have compared the specific gravity (S.G.) determined by the N-Multistix method with that obtained from the Total Solids (TS) meter. Overall, 88.7% of the specific gravity results obtained with the reagent strip method were within 0.005 of those obtained with the TS meter. There was a good correlation between the methods and there was no bias for the group means obtained by either method. A good correlation was also found between the S.G. on the strip and osmolality (correlation coefficient of 0.955). The results obtained with the reagent strip for urinary specific gravity therefore appear acceptable for routine laboratory purposes.
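
    As an illustration of this kind of method-agreement analysis (not the authors' code), the sketch below computes the fraction of paired readings within ±0.005 and the Pearson correlation; the function name and the example readings are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    def compare_sg_methods(sg_strip, sg_meter, tolerance=0.005):
        """Return (fraction of pairs within tolerance, Pearson r) for paired S.G. readings."""
        sg_strip = np.asarray(sg_strip, dtype=float)
        sg_meter = np.asarray(sg_meter, dtype=float)
        within = float(np.mean(np.abs(sg_strip - sg_meter) <= tolerance))
        r, _ = pearsonr(sg_strip, sg_meter)
        return within, r

    # Example with made-up readings:
    frac_within, r = compare_sg_methods([1.010, 1.020, 1.015, 1.030],
                                        [1.012, 1.018, 1.019, 1.028])
    ```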

  2. Evaluation of Laboratory Procedures to Quantify the Neutral Detergent Fiber Content in Forage, Concentrate, and Ruminant Feces.

    PubMed

    Barbosa, Marcília Medrado; Detmann, Edenio; Rocha, Gabriel Cipriano; de Oliveira Franco, Marcia; de Campos Valadares Filho, Sebastião

    2015-01-01

    A comparison was made of measurements of neutral detergent fiber concentrations obtained with AOAC Method 2002.04 and modified methods using pressurized environments or direct use of industrial heat-stable α-amylase in samples of forage (n=37), concentrate (n=30), and ruminant feces (n=39). The following method modifications were tested: AOAC Method 2002.04 with replacement of the reflux apparatus with an autoclave or Ankom(220®) extractor and F57 filter bags, and AOAC Method 2002.04 with replacement of the standardization procedures for α-amylase by a single addition of industrial α-amylase [250 μL of Termamyl 2X 240 Kilo Novo Units (KNU)-T/g] prior to heating the neutral detergent solution. For the feces and forage samples, the results obtained with the modified methods with an autoclave or modification of α-amylase use were similar to those obtained using AOAC Method 2002.04, but the use of the Ankom220 extractor resulted in overestimated values. For the concentrate samples, the modified methods using an autoclave or Ankom220 extractor resulted in positive systematic errors. However, the method using industrial α-amylase resulted in systematic error and slope bias despite that the obtained values were close to those obtained with AOAC Method 2002.04.

  3. New approximate orientation averaging of the water molecule interacting with the thermal neutron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markovic, M.I.; Minic, D.M.; Rakic, A.D.

    1992-02-01

    This paper reports that, for an exact description of thermal neutron collisions with water molecules, orientation averaging is performed by an exact method (EOA_k) and four approximate methods (two well known and two less known). Expressions for the microscopic scattering kernel are developed. The two well-known approximate orientation averaging methods are Krieger-Nelkin (K-N) and Koppel-Young (K-Y). The results obtained by one of the two proposed approximate orientation averaging methods agree best with the corresponding results obtained by EOA_k. The largest discrepancies between the EOA_k results and the results of the approximate methods are obtained using the well-known K-N approximate orientation averaging method.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujiwara, K., E-mail: ku.fujiwara@screen.co.jp; Department of Mechanical Engineering, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871; Shibahara, M., E-mail: siba@mech.eng.osaka-u.ac.jp

    A classical molecular dynamics simulation was conducted for a system composed of fluid molecules between two planar solid surfaces, and whose interactions are described by the 12-6 Lennard-Jones form. This paper presents a general description of the pressure components and interfacial tension at a fluid-solid interface obtained by the perturbative method on the basis of statistical thermodynamics, proposes a method to consider the pressure components tangential to an interface which are affected by interactions with solid atoms, and applies this method to the calculation system. The description of the perturbative method is extended to subsystems, and the local pressure components and interfacial tension at a liquid-solid interface are obtained and examined in one- and two-dimensions. The results are compared with those obtained by two alternative methods: (a) an evaluation of the intermolecular force acting on a plane, and (b) the conventional method based on the virial expression. The accuracy of the numerical results is examined through the comparison of the results obtained by each method. The calculated local pressure components and interfacial tension of the fluid at a liquid-solid interface agreed well with the results of the two alternative methods at each local position in one dimension. In two dimensions, the results showed a characteristic profile of the tangential pressure component which depended on the direction tangential to the liquid-solid interface, which agreed with that obtained by the evaluation of the intermolecular force acting on a plane in the present study. Such good agreement suggests that the perturbative method on the basis of statistical thermodynamics used in this study is valid to obtain the local pressure components and interfacial tension at a liquid-solid interface.

  5. Development of the algorithm of measurement data and tomographic section reconstruction results processing for evaluating the respiratory activity of the lungs using the multi-angle electric impedance tomography

    NASA Astrophysics Data System (ADS)

    Aleksanyan, Grayr; Shcherbakov, Ivan; Kucher, Artem; Sulyz, Andrew

    2018-04-01

    Continuous monitoring of the patient's breathing by multi-angle electrical impedance tomography yields images of conductivity change in the chest cavity throughout the monitoring period. Direct analysis of these images is difficult because of the large amount of information and the low resolution of the images obtained by multi-angle electrical impedance tomography. This work presents a method for obtaining a graph of the respiratory activity of the lungs from the results of continuous lung monitoring with the multi-angle electrical impedance tomography method. The method makes it possible to obtain graphs of the respiratory activity of the left and right lungs separately, as well as a summary graph, to which the processing methods used for spirography results can be applied.
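
    A minimal sketch of this kind of region-of-interest processing is given below (an assumption about the implementation, not the authors' code): given a sequence of reconstructed conductivity-change images and boolean masks for the left and right lung regions, the pixel values inside each region are summed frame by frame to produce the separate and summary respiratory-activity curves.

    ```python
    import numpy as np

    def respiratory_activity(frames, left_mask, right_mask):
        """frames: (T, H, W) conductivity-change images; masks: boolean (H, W) arrays.
        Returns left-lung, right-lung and summary activity curves (length T)."""
        frames = np.asarray(frames, dtype=float)
        left = frames[:, left_mask].sum(axis=1)    # sum of pixels in the left-lung ROI per frame
        right = frames[:, right_mask].sum(axis=1)  # sum of pixels in the right-lung ROI per frame
        return left, right, left + right
    ```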

  6. Optimization of digital image processing to determine quantum dots' height and density from atomic force microscopy.

    PubMed

    Ruiz, J E; Paciornik, S; Pinto, L D; Ptak, F; Pires, M P; Souza, P L

    2018-01-01

    An optimized method of digital image processing to interpret quantum dots' height measurements obtained by atomic force microscopy is presented. The method was developed by combining well-known digital image processing techniques and particle recognition algorithms. The properties of quantum dot structures strongly depend on dots' height, among other features. Determination of their height is sensitive to small variations in their digital image processing parameters, which can generate misleading results. Comparing the results obtained with two image processing techniques - a conventional method and the new method proposed herein - with the data obtained by determining the height of quantum dots one by one within a fixed area, showed that the optimized method leads to more accurate results. Moreover, the log-normal distribution, which is often used to represent natural processes, shows a better fit to the quantum dots' height histogram obtained with the proposed method. Finally, the quantum dots' height obtained were used to calculate the predicted photoluminescence peak energies which were compared with the experimental data. Again, a better match was observed when using the proposed method to evaluate the quantum dots' height. Copyright © 2017 Elsevier B.V. All rights reserved.
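
    The log-normal fit mentioned above could be performed, for example, as in the hedged sketch below; the height values are made up and the fitting choices are assumptions, not the authors' procedure.

    ```python
    import numpy as np
    from scipy import stats

    heights_nm = np.array([2.1, 2.4, 3.0, 2.8, 3.5, 2.2, 2.9, 3.1])  # hypothetical QD heights (nm)
    shape, loc, scale = stats.lognorm.fit(heights_nm, floc=0)        # fit with location fixed at 0
    # A simple goodness-of-fit check against the fitted log-normal distribution:
    ks_stat, p_value = stats.kstest(heights_nm, 'lognorm', args=(shape, loc, scale))
    ```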

  7. Statistical evaluation of fatty acid profile and cholesterol content in fish (common carp) lipids obtained by different sample preparation procedures.

    PubMed

    Spiric, Aurelija; Trbovic, Dejana; Vranic, Danijela; Djinovic, Jasna; Petronijevic, Radivoj; Matekalo-Sverak, Vesna

    2010-07-05

    Studies performed on lipid extraction from animal and fish tissues do not provide information on its influence on fatty acid composition of the extracted lipids as well as on cholesterol content. Data presented in this paper indicate the impact of extraction procedures on fatty acid profile of fish lipids extracted by the modified Soxhlet and ASE (accelerated solvent extraction) procedure. Cholesterol was also determined by direct saponification method, too. Student's paired t-test used for comparison of the total fat content in carp fish population obtained by two extraction methods shows that differences between values of the total fat content determined by ASE and modified Soxhlet method are not statistically significant. Values obtained by three different methods (direct saponification, ASE and modified Soxhlet method), used for determination of cholesterol content in carp, were compared by one-way analysis of variance (ANOVA). The obtained results show that modified Soxhlet method gives results which differ significantly from the results obtained by direct saponification and ASE method. However the results obtained by direct saponification and ASE method do not differ significantly from each other. The highest quantities for cholesterol (37.65 to 65.44 mg/100 g) in the analyzed fish muscle were obtained by applying direct saponification method, as less destructive one, followed by ASE (34.16 to 52.60 mg/100 g) and modified Soxhlet extraction method (10.73 to 30.83 mg/100 g). Modified Soxhlet method for extraction of fish lipids gives higher values for n-6 fatty acids than ASE method (t(paired)=3.22 t(c)=2.36), while there is no statistically significant difference in the n-3 content levels between the methods (t(paired)=1.31). The UNSFA/SFA ratio obtained by using modified Soxhlet method is also higher than the ratio obtained using ASE method (t(paired)=4.88 t(c)=2.36). Results of Principal Component Analysis (PCA) showed that the highest positive impact to the second principal component (PC2) is recorded by C18:3 n-3, and C20:3 n-6, being present in a higher amount in the samples treated by the modified Soxhlet extraction, while C22:5 n-3, C20:3 n-3, C22:1 and C20:4, C16 and C18 negatively influence the score values of the PC2, showing significantly increased level in the samples treated by ASE method. Hotelling's paired T-square test used on the first three principal components for confirmation of differences in individual fatty acid content obtained by ASE and Soxhlet method in carp muscle showed statistically significant difference between these two data sets (T(2)=161.308, p<0.001). Copyright 2010 Elsevier B.V. All rights reserved.
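
    For orientation, the core statistical comparisons described above (a paired t-test for total fat and a one-way ANOVA for cholesterol across the three methods) look roughly like the sketch below; all numbers are hypothetical, not the paper's data.

    ```python
    import numpy as np
    from scipy import stats

    fat_ase = np.array([5.1, 6.3, 4.8, 7.0])        # total fat by ASE (hypothetical, g/100 g)
    fat_soxhlet = np.array([5.0, 6.5, 4.9, 7.2])    # total fat by modified Soxhlet (hypothetical)
    t_paired, p_paired = stats.ttest_rel(fat_ase, fat_soxhlet)

    chol_sapon = [45.2, 50.1, 61.3]     # cholesterol by direct saponification (hypothetical, mg/100 g)
    chol_ase = [40.8, 44.9, 50.2]       # cholesterol by ASE (hypothetical)
    chol_soxhlet = [15.3, 22.7, 28.4]   # cholesterol by modified Soxhlet (hypothetical)
    f_stat, p_anova = stats.f_oneway(chol_sapon, chol_ase, chol_soxhlet)
    ```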

  8. Comparisons of Lagrangian and Eulerian PDF methods in simulations of non-premixed turbulent jet flames with moderate-to-strong turbulence-chemistry interactions

    NASA Astrophysics Data System (ADS)

    Jaishree, J.; Haworth, D. C.

    2012-06-01

    Transported probability density function (PDF) methods have been applied widely and effectively for modelling turbulent reacting flows. In most applications of PDF methods to date, Lagrangian particle Monte Carlo algorithms have been used to solve a modelled PDF transport equation. However, Lagrangian particle PDF methods are computationally intensive and are not readily integrated into conventional Eulerian computational fluid dynamics (CFD) codes. Eulerian field PDF methods have been proposed as an alternative. Here a systematic comparison is performed among three methods for solving the same underlying modelled composition PDF transport equation: a consistent hybrid Lagrangian particle/Eulerian mesh (LPEM) method, a stochastic Eulerian field (SEF) method and a deterministic Eulerian field method with a direct-quadrature-method-of-moments closure (a multi-environment PDF-MEPDF method). The comparisons have been made in simulations of a series of three non-premixed, piloted methane-air turbulent jet flames that exhibit progressively increasing levels of local extinction and turbulence-chemistry interactions: Sandia/TUD flames D, E and F. The three PDF methods have been implemented using the same underlying CFD solver, and results obtained using the three methods have been compared using (to the extent possible) equivalent physical models and numerical parameters. Reasonably converged mean and rms scalar profiles are obtained using 40 particles per cell for the LPEM method or 40 Eulerian fields for the SEF method. Results from these stochastic methods are compared with results obtained using two- and three-environment MEPDF methods. The relative advantages and disadvantages of each method in terms of accuracy and computational requirements are explored and identified. In general, the results obtained from the two stochastic methods (LPEM and SEF) are very similar, and are in closer agreement with experimental measurements than those obtained using the MEPDF method, while MEPDF is the most computationally efficient of the three methods. These and other findings are discussed in detail.

  9. Topological soliton solutions for three shallow water waves models

    NASA Astrophysics Data System (ADS)

    Liu, Jiangen; Zhang, Yufeng; Wang, Yan

    2018-07-01

    In this article, we investigate three distinct physical structures for shallow water wave models by the improved ansatz method. The method was improved so that it can be used to obtain more generalized forms of topological soliton solutions than the original method. As a result, some new exact solutions of the shallow water equations are successfully established, and the obtained results are exhibited graphically. The results show that the improved ansatz method can be applied to solve other nonlinear differential equations arising in mathematical physics.

  10. Olive oil polyphenols: A quantitative method by high-performance liquid-chromatography-diode-array detection for their determination and the assessment of the related health claim.

    PubMed

    Ricciutelli, Massimo; Marconi, Shara; Boarelli, Maria Chiara; Caprioli, Giovanni; Sagratini, Gianni; Ballini, Roberto; Fiorini, Dennis

    2017-01-20

    In order to assess whether an extra virgin olive oil (EVOO) can carry the health claim related to olive oil polyphenols (Reg. EU n.432/2012), a new method to quantify these species in EVOO, by means of liquid-liquid extraction followed by HPLC-DAD/MS/MS of the hydroalcoholic extract, has been developed and validated. Different extraction procedures, different types of reverse-phase analytical columns (Synergi Polar, Spherisorb ODS2 and Kinetex) and eluents have been tested. The chromatographic column Synergi Polar (250×4.6mm, 4μm), never used before in this kind of application, provided the best results, with water and methanol/isopropanol (9/1) as eluents. The method allows the quantification of the phenolic alcohols tyrosol and hydroxytyrosol, the phenolic acids vanillic, p-coumaric and ferulic acids, secoiridoid derivatives, the lignans pinoresinol and acetoxypinoresinol, and the flavonoids luteolin and apigenin. The new method has been applied to 20 commercial EVOOs belonging to two different price range categories (3.78-5.80 euros/L and 9.5-25.80 euros/L) and 5 olive oils. The obtained results highlight that acetoxypinoresinol, ferulic acid, vanillic acid and the total non-secoiridoid phenolic substances were found to be significantly higher in the higher-priced EVOOs (HEVOOs) than in the lower-priced ones (LEVOOs) (P=0.0026, 0.0217, 0.0092, 0.0003 respectively). For most of the samples analysed there is excellent agreement between the results obtained by applying the HPLC method adopted by the International Olive Council and the results obtained by applying the presented HPLC method. Results obtained by HPLC methods have also been compared with the ones obtained by the colorimetric Folin-Ciocalteu method. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.

  12. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique by an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to select additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained using single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
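
    For reference, the expected-improvement criterion mentioned above is commonly computed as in the sketch below (a generic formulation for minimization, not necessarily the exact variant used in the paper); mu and sigma stand for the surrogate model's predictive mean and standard deviation at a candidate point.

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, y_best):
        """Expected improvement of a Gaussian prediction (mu, sigma) over the best
        objective value observed so far, y_best (minimization convention)."""
        sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive variance
        z = (y_best - mu) / sigma
        return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    ```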

  13. Method for determination of the frequency-contrast characteristics of electronic-optic systems

    NASA Astrophysics Data System (ADS)

    Mardirossian, Garo; Zhekov, Zhivko

    The frequency-contrast characteristic is an important criterion for judging the quality of electronic-optic systems, which find increasing application in space research, astronomy, military applications, etc. The paper provides a brief description of the methods for determining the frequency-contrast characteristics of optic systems developed at the Space Research Institute of the Bulgarian Academy of Sciences. The suggested methods have been used in the development of several electronic-optic systems incorporated in ground-based and aerospace scientific-research equipment. Based on the practical results obtained, it is concluded that the methods yield sufficiently precise data, which coincide well with the results obtained using other methods.

  14. Natural frequencies of thin rectangular plates clamped on contour using the Finite Element Method

    NASA Astrophysics Data System (ADS)

    Barboni Haţiegan, L.; Haţiegan, C.; Gillich, G. R.; Hamat, C. O.; Vasile, O.; Stroia, M. D.

    2018-01-01

    This paper presents the determination of the natural frequencies of plates, without and with damage, using the finite element method of the SolidWorks program. The first thirty natural frequencies were obtained for thin rectangular plates clamped on the contour, without and with central damage, for different plate dimensions. The relative variation of the natural frequencies was determined, and the results obtained by the finite element method (FEM), namely the relative variation of the natural frequencies, were represented graphically according to the corresponding natural vibration modes. Finally, the obtained results were compared.

  15. Portable system of programmable syringe pump with potentiometer for determination of promethazine in pharmaceutical applications.

    PubMed

    Saleh, Tawfik A; Abulkibash, A M; Ibrahim, Atta E

    2012-04-01

    A simple and fast automated method was developed and validated for the assay of promethazine hydrochloride in pharmaceutical formulations, based on the oxidation of promethazine by cerium in an acidic medium. A portable system, consisting of a programmable syringe pump connected to a potentiometer, was constructed. The change in potential developed during promethazine oxidation was monitored. The related optimum working conditions, such as the supporting electrolyte concentration, cerium(IV) concentration, and flow rate, were optimized. The proposed method was successfully applied to pharmaceutical samples as well as synthetic ones. The obtained results were verified against the official British Pharmacopoeia (BP) method, and comparable results were obtained. The obtained t-value indicates no significant differences between the results of the proposed and BP methods, the proposed method having the advantages of being simple, sensitive, and cost-effective.

  16. Micelle-mediated extraction of elderberry blossom by whey protein and naturally derived surfactants.

    PubMed

    Śliwa, Karolina; Tomaszkiewicz-Potępa, Anna; Sikora, Elżbieta; Ogonowski, Jan

    2013-01-01

    Classical methods for the extraction of active ingredients from plant material are expensive, complicated, and often environmentally unfriendly. The micelle-mediated extraction (MME) method seems to be a good alternative. In this work, extractions of elderberry blossoms (Flos Sambuci) were performed using MME methods. Several popular surfactants and a whey protein concentrate (WPC) were applied in the process. The obtained results were compared with those obtained by extraction with water. The antioxidant properties of the extracts were analyzed using two different methods: reaction with the di(phenyl)-(2,4,6-trinitrophenyl)iminoazanium (DPPH) reagent and Folin's method. Furthermore, the flavonoid content of the extracts was determined. The results confirmed that the MME method using whey protein might be an alternative route to plant extracts rich in natural antioxidants.

  17. Comparison of flow cytometry, fluorescence microscopy and spectrofluorometry for analysis of gene electrotransfer efficiency.

    PubMed

    Marjanovič, Igor; Kandušer, Maša; Miklavčič, Damijan; Keber, Mateja Manček; Pavlin, Mojca

    2014-12-01

    In this study, we compared three different methods used for quantification of gene electrotransfer efficiency: fluorescence microscopy, flow cytometry and spectrofluorometry. We used CHO and B16 cells in a suspension and a plasmid coding for GFP. The aim of this study was to compare and analyse the results obtained by fluorescence microscopy, flow cytometry and spectrofluorometry and, in addition, to analyse the applicability of spectrofluorometry for quantifying gene electrotransfer on cells in a suspension. Our results show that all three methods detected a similar critical electric field strength, around 0.55 kV/cm, for both cell lines. Moreover, results obtained on CHO cells showed that the total fluorescence intensity and the percentage of transfection exhibit a similar increase in response to increasing electric field strength for all three methods. For B16 cells, there was a good correlation at low electric field strengths, but at high field strengths, flow cytometer results deviated from the results obtained by fluorescence microscope and spectrofluorometer. Our study showed that all three methods detected similar critical electric field strengths, and high correlations of results were obtained except for B16 cells at high electric field strengths. The results also demonstrated that flow cytometry measures higher values of the percentage of transfection than microscopy. Furthermore, we have demonstrated that spectrofluorometry can be used as a simple and consistent method to determine gene electrotransfer efficiency on cells in a suspension.

  18. Portable system of programmable syringe pump with potentiometer for determination of promethazine in pharmaceutical applications

    PubMed Central

    Saleh, Tawfik A.; Abulkibash, A.M.; Ibrahim, Atta E.

    2011-01-01

    A simple and fast automated method was developed and validated for the assay of promethazine hydrochloride in pharmaceutical formulations, based on the oxidation of promethazine by cerium in an acidic medium. A portable system, consisting of a programmable syringe pump connected to a potentiometer, was constructed. The change in potential developed during promethazine oxidation was monitored. The related optimum working conditions, such as the supporting electrolyte concentration, cerium(IV) concentration, and flow rate, were optimized. The proposed method was successfully applied to pharmaceutical samples as well as synthetic ones. The obtained results were verified against the official British Pharmacopoeia (BP) method, and comparable results were obtained. The obtained t-value indicates no significant differences between the results of the proposed and BP methods, the proposed method having the advantages of being simple, sensitive, and cost-effective. PMID:23960787

  19. Information-theoretic indices usage for the prediction and calculation of octanol-water partition coefficient.

    PubMed

    Persona, Marek; Kutarov, Vladimir V; Kats, Boris M; Persona, Andrzej; Marczewska, Barbara

    2007-01-01

    The paper describes a new prediction method for the octanol-water partition coefficient, based on molecular graph theory. The results obtained using the new method correlate well with experimental values. These results were compared with the ones obtained using ten other structure-correlation methods. The comparison shows that graph theory can be very useful in structure-correlation research.

  20. Equivalent Circuit Parameter Calculation of Interior Permanent Magnet Motor Involving Iron Loss Resistance Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Yamazaki, Katsumi

    In this paper, we propose a method to calculate the equivalent circuit parameters of interior permanent magnet motors including iron loss resistance using the finite element method. First, the finite element analysis considering harmonics and magnetic saturation is carried out to obtain time variations of magnetic fields in the stator and the rotor core. Second, the iron losses of the stator and the rotor are calculated from the results of the finite element analysis with the considerations of harmonic eddy current losses and the minor hysteresis losses of the core. As a result, we obtain the equivalent circuit parameters i.e. the d-q axis inductance and the iron loss resistance as functions of operating condition of the motor. The proposed method is applied to an interior permanent magnet motor to calculate the characteristics based on the equivalent circuit obtained by the proposed method. The calculated results are compared with the experimental results to verify the accuracy.
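
    For context, a commonly used steady-state d-q model with an iron-loss resistance (a textbook form, not necessarily identical to the circuit identified in this paper) reads:

    ```latex
    % Steady-state d-q equations of an IPM motor with iron-loss resistance R_c
    % (generic textbook form; symbols: R_a armature resistance, L_d, L_q inductances,
    % \Psi_a magnet flux linkage, \omega electrical angular frequency).
    \begin{aligned}
      v_d &= R_a i_d - \omega L_q i_{oq}, &\qquad v_q &= R_a i_q + \omega\left(L_d i_{od} + \Psi_a\right),\\
      i_d &= i_{od} + i_{cd}, & i_q &= i_{oq} + i_{cq},\\
      i_{cd} &= -\frac{\omega L_q i_{oq}}{R_c}, & i_{cq} &= \frac{\omega\left(L_d i_{od} + \Psi_a\right)}{R_c}.
    \end{aligned}
    ```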

  1. Linear least-squares method for global luminescent oil film skin friction field analysis

    NASA Astrophysics Data System (ADS)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
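
    For context, GLOF analyses of this kind are typically built on the thin-oil-film equation relating the film thickness h(x, y, t) to the skin-friction vector (τ_x, τ_y); neglecting pressure-gradient and gravity terms it can be written as below (a standard form, assumed here rather than quoted from the paper):

    ```latex
    % Thin-oil-film equation (pressure-gradient and gravity terms neglected);
    % \mu is the oil viscosity.
    \frac{\partial h}{\partial t}
      + \frac{\partial}{\partial x}\!\left(\frac{\tau_x h^2}{2\mu}\right)
      + \frac{\partial}{\partial y}\!\left(\frac{\tau_y h^2}{2\mu}\right) = 0
    ```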

  2. Modified flotation method with the use of Percoll for the detection of Isospora suis oocysts in suckling piglet faeces.

    PubMed

    Karamon, Jacek; Ziomko, Irena; Cencek, Tomasz; Sroka, Jacek

    2008-10-01

    The modification of flotation method for the examination of diarrhoeic piglet faeces for the detection of Isospora suis oocysts was elaborated. The method was based on removing fractions of fat from the sample of faeces by centrifugation with a 25% Percoll solution. The investigations were carried out in comparison to the McMaster method. From five variants of the Percoll flotation method, the best results were obtained when 2ml of flotation liquid per 1g of faeces were used. The limit of detection in the Percoll flotation method was 160 oocysts per 1g, and was better than with the McMaster method. The efficacy of the modified method was confirmed by results obtained in the examination of the I. suis infected piglets. From all faecal samples, positive samples in the Percoll flotation method were double the results than that of the routine method. Oocysts were first detected by the Percoll flotation method on day 4 post-invasion, i.e. one-day earlier than with the McMaster method. During the experiment (except for 3 days), the extensity of I. suis invasion in the litter examined by the Percoll flotation method was higher than that with the McMaster method. The obtained results show that the modified flotation method with the use of Percoll could be applied in the diagnostics of suckling piglet isosporosis.

  3. Modified harmonic balance method for the solution of nonlinear jerk equations

    NASA Astrophysics Data System (ADS)

    Rahman, M. Saifur; Hasan, A. S. M. Z.

    2018-03-01

    In this paper, a second approximate solution of nonlinear jerk equations (third-order differential equations) is obtained by using a modified harmonic balance method. The method is simpler and easier to apply to nonlinear differential equations because fewer nonlinear algebraic equations need to be solved than in the classical harmonic balance method. The results obtained with this method are compared with those obtained from other existing analytical methods available in the literature and with a numerical method. The solution shows good agreement with the numerical solution as well as with the analytical methods from the available literature.
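
    As a generic illustration (not the specific modified scheme of the paper), a harmonic-balance treatment of a jerk equation substitutes a truncated Fourier ansatz and equates the coefficients of the lowest harmonics to obtain algebraic equations for the amplitudes and the frequency:

    ```latex
    % Generic harmonic-balance setup for a jerk (third-order) equation:
    \dddot{x} = J\!\left(x, \dot{x}, \ddot{x}\right), \qquad
    x(t) \approx A\cos\omega t + B\cos 3\omega t + C\sin 3\omega t .
    % Substituting the ansatz and matching the coefficients of
    % \cos\omega t, \sin\omega t, \cos 3\omega t, \ldots gives a set of
    % algebraic equations for A, B, C and \omega.
    ```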

  4. Comparison of Anaerobic Susceptibility Results Obtained by Different Methods

    PubMed Central

    Rosenblatt, J. E.; Murray, P. R.; Sonnenwirth, A. C.; Joyce, J. L.

    1979-01-01

    Susceptibility tests using 7 antimicrobial agents (carbenicillin, chloramphenicol, clindamycin, penicillin, cephalothin, metronidazole, and tetracycline) were run against 35 anaerobes including Bacteroides fragilis (17), other gram-negative bacilli (7), clostridia (5), peptococci (4), and eubacteria (2). Results in triplicate obtained by the microbroth dilution method and the aerobic modification of the broth disk method were compared with those obtained with an agar dilution method using Wilkins-Chalgren agar. Media used in the microbroth dilution method included Wilkins-Chalgren broth, brain heart infusion broth, brucella broth, tryptic soy broth, thioglycolate broth, and Schaedler's broth. A result differing by more than one dilution from the Wilkins-Chalgren agar result was considered a discrepancy, and when there was a change in susceptibility status this was termed a significant discrepancy. The microbroth dilution method using Wilkins-Chalgren broth and thioglycolate broth produced the fewest total discrepancies (22 and 24, respectively), and Wilkins-Chalgren broth, thioglycolate, and Schaedler's broth had the fewest significant discrepancies (6, 5, and 5, respectively). With the broth disk method, there were 15 significant discrepancies, although half of these were with tetracycline, which was the antimicrobial agent associated with the highest number of significant discrepancies (33), considering all of the test methods and media. PMID:464560

  5. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  6. GHM method for obtaining rational solutions of nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo

    2015-01-01

    In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high-precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic or numerical methods. The obtained results show that GHM is a powerful tool, capable of generating highly accurate rational solutions. AMS subject classification: 34L30.

  7. Metrological characterization of X-ray diffraction methods at different acquisition geometries for determination of crystallite size in nano-scale materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uvarov, Vladimir, E-mail: vladimiru@savion.huji.ac.il; Popov, Inna

    2013-11-15

    Crystallite size values were determined by X-ray diffraction methods for 183 powder samples. The tested size range was from a few to about several hundred nanometers. Crystallite size was calculated with direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld procedure via the application of a series of commercial and free software. The results were statistically treated to estimate the significance of the difference in size resulting from these methods. We also estimated the effect of acquisition conditions (Bragg–Brentano, parallel-beam geometry, step size, counting time) and data processing on the calculated crystallite size values. On the basis of the obtained results it is possible to conclude that direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld refinement employed by a series of software (EVA, PCW and TOPAS, respectively) yield very close results for crystallite sizes less than 60 nm for parallel-beam geometry and less than 100 nm for Bragg–Brentano geometry. However, we found that although the differences between the crystallite sizes calculated by the various methods are small in absolute value, they are statistically significant in some cases. The values of crystallite size determined from XRD were compared with those obtained by imaging in transmission (TEM) and scanning electron microscopes (SEM). It was found that there was a good correlation in size only for crystallites smaller than 50-60 nm. Highlights: • The crystallite sizes for 183 nanopowders were calculated using different XRD methods • Obtained results were subject to statistical treatment • Results obtained with Bragg–Brentano and parallel-beam geometries were compared • The influence of the conditions of XRD pattern acquisition on the results was estimated • Crystallite sizes calculated by XRD were compared with those obtained by TEM and SEM.
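
    For reference, the two line-broadening relations named above are (with D the crystallite size, K ≈ 0.9 the shape factor, λ the X-ray wavelength, β the instrument-corrected peak breadth in radians, θ the Bragg angle and ε the microstrain):

    ```latex
    % Scherrer equation and Williamson-Hall relation
    D = \frac{K\lambda}{\beta\cos\theta},
    \qquad
    \beta\cos\theta = \frac{K\lambda}{D} + 4\,\varepsilon\sin\theta .
    ```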

  8. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularized solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that is generally totally corrupted by errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the regularizing properties of Krylov subspace methods such as conjugate gradients, least-squares QR (LSQR) and the recently proposed hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
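
    The regularizing effect of early stopping can be illustrated with a small sketch (hypothetical data, not the paper's holography system): LSQR is applied to a noisy, ill-conditioned linear system and halted after a fixed number of iterations, which here plays the role of the regularization parameter.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(0)
    # Build an ill-conditioned 200 x 100 matrix with rapidly decaying singular values.
    U, _, Vt = np.linalg.svd(rng.standard_normal((200, 100)), full_matrices=False)
    A = U @ np.diag(10.0 ** -np.linspace(0, 6, 100)) @ Vt
    x_true = rng.standard_normal(100)
    b = A @ x_true + 1e-4 * rng.standard_normal(200)   # noisy "measurements"

    # Early stopping regularizes the solution ("semi-convergence"); iterating to
    # full convergence would amplify the measurement noise.
    x_reg = lsqr(A, b, iter_lim=15)[0]
    ```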

  9. Performance Analysis of Combined Methods of Genetic Algorithm and K-Means Clustering in Determining the Value of Centroid

    NASA Astrophysics Data System (ADS)

    Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    The determination of the centroids in the K-Means algorithm directly affects the quality of the clustering results. Determining centroids by using random numbers has many weaknesses. The GenClust algorithm, which combines genetic algorithms and K-Means, uses a genetic algorithm to determine the centroid of each cluster. The GenClust algorithm uses 50% of chromosomes obtained through deterministic calculations and 50% obtained from the generation of random numbers. This study modifies the GenClust algorithm so that the chromosomes used are obtained entirely (100%) through deterministic calculations. The study yields performance comparisons, expressed as the mean square error as influenced by centroid determination in the K-Means method, among the GenClust method, the modified GenClust method, and classic K-Means.
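
    The sketch below contrasts random centroid initialization with a deterministic (precomputed) initialization and compares the resulting inertia (the sum of squared distances, closely related to the mean-square-error criterion); the quantile rule stands in for the GenClust-style deterministic chromosome calculation and is purely illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc, 0.5, size=(50, 2)) for loc in (0.0, 5.0, 10.0)])

    km_random = KMeans(n_clusters=3, init='random', n_init=1, random_state=0).fit(X)

    # Deterministic initial centroids (a simple quantile rule, used only for illustration).
    init_centroids = np.quantile(X, [0.1, 0.5, 0.9], axis=0)
    km_det = KMeans(n_clusters=3, init=init_centroids, n_init=1).fit(X)

    print(km_random.inertia_, km_det.inertia_)
    ```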

  10. Improved dynamic analysis method using load-dependent Ritz vectors

    NASA Technical Reports Server (NTRS)

    Escobedo-Torres, J.; Ricles, J. M.

    1993-01-01

    The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by having a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
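
    A hedged numpy sketch of the Wilson-type recurrence that generates load-dependent Ritz vectors is shown below (the generic algorithm, not the paper's implementation): each new vector is the static response to the inertia load of the previous one, M-orthonormalized against those already obtained.

    ```python
    import numpy as np

    def load_dependent_ritz(K, M, f, n_vectors):
        """Return an n x n_vectors matrix of load-dependent Ritz vectors for
        stiffness K, mass M and spatial load vector f."""
        K = np.asarray(K, float); M = np.asarray(M, float); f = np.asarray(f, float)
        vectors = []
        r = np.linalg.solve(K, f)                      # static response to the applied load
        for _ in range(n_vectors):
            for v in vectors:                          # M-orthogonalize (Gram-Schmidt)
                r = r - (v @ (M @ r)) * v
            r = r / np.sqrt(r @ (M @ r))               # M-normalize
            vectors.append(r)
            r = np.linalg.solve(K, M @ r)              # next vector from the inertia load
        return np.column_stack(vectors)
    ```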

  11. Comparative study between EDXRF and ASTM E572 methods using two-way ANOVA

    NASA Astrophysics Data System (ADS)

    Krummenauer, A.; Veit, H. M.; Zoppas-Ferreira, J.

    2018-03-01

    Comparison with a reference method is one of the necessary requirements for the validation of non-standard methods. This comparison was made using an experimental design technique with two-way ANOVA. In the ANOVA, the results obtained using the EDXRF method to be validated were compared with the results obtained using the ASTM E572-13 standard test method. Fisher's tests (F-tests) were used for a comparative study of the elements molybdenum, niobium, copper, nickel, manganese, chromium, and vanadium. All F-tests for these elements indicate that the null hypothesis (H0) is not rejected. As a result, there is no significant difference between the methods compared. Therefore, according to this study, it is concluded that the EDXRF method satisfies this method-comparison requirement.
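
    The layout of such a two-way ANOVA (factors: method and element) might look like the sketch below; the data values are hypothetical and only illustrate the structure of the comparison.

    ```python
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.DataFrame({
        'method':  ['EDXRF', 'EDXRF', 'EDXRF', 'ASTM', 'ASTM', 'ASTM'],
        'element': ['Mo', 'Nb', 'Cu', 'Mo', 'Nb', 'Cu'],
        'value':   [0.52, 0.31, 0.12, 0.50, 0.33, 0.11],   # hypothetical concentrations
    })
    model = ols('value ~ C(method) + C(element)', data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)   # F-tests for the method and element factors
    ```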

  12. Assessment of formulas for calculating critical concentration by the agar diffusion method.

    PubMed Central

    Drugeon, H B; Juvin, M E; Caillon, J; Courtieu, A L

    1987-01-01

    The critical concentration of antibiotic was calculated by using the agar diffusion method with disks containing different charges of antibiotic. It is currently possible to use different calculation formulas (based on Fick's law) devised by Cooper and Woodman (the best known) and by Vesterdal. The results obtained with the formulas were compared with the MIC results (obtained by the agar dilution method). A total of 91 strains and two cephalosporins (cefotaxime and ceftriaxone) were studied. The formula of Cooper and Woodman led to critical concentrations that were higher than the MIC, but concentrations obtained with the Vesterdal formula were closer to the MIC. The critical concentration was independent of method parameters (dilution, for example). PMID:3619419

  13. Two Different Points of View through Artificial Intelligence and Vector Autoregressive Models for Ex Post and Ex Ante Forecasting

    PubMed Central

    Aydin, Alev Dilek; Caliskan Cavdar, Seyma

    2015-01-01

    The ANN method has been applied by means of multilayered feedforward neural networks (MLFNs) using different macroeconomic variables, such as the USD/TRY exchange rate, gold prices, and the Borsa Istanbul (BIST) 100 index, based on monthly data over the period from January 2000 to September 2014 for Turkey. The vector autoregressive (VAR) method has also been applied with the same variables for the same period of time. In this study, unlike other studies conducted to date, the ENCOG machine learning framework has been used along with the Java programming language to construct the ANN. The training of the network has been done by the resilient propagation method. The ex post and ex ante estimates obtained by the ANN method have been compared with the results obtained by the econometric forecasting method of VAR. Strikingly, our findings based on the ANN method reveal that there is a possibility of financial distress or a financial crisis in Turkey starting from October 2017. The results obtained with the VAR method also support the results of the ANN method. Additionally, our results indicate that the ANN approach has superior prediction performance compared to the VAR method. PMID:26550010

  14. Two Different Points of View through Artificial Intelligence and Vector Autoregressive Models for Ex Post and Ex Ante Forecasting.

    PubMed

    Aydin, Alev Dilek; Caliskan Cavdar, Seyma

    2015-01-01

    The ANN method has been applied by means of multilayered feedforward neural networks (MLFNs) using different macroeconomic variables, such as the USD/TRY exchange rate, gold prices, and the Borsa Istanbul (BIST) 100 index, based on monthly data over the period from January 2000 to September 2014 for Turkey. The vector autoregressive (VAR) method has also been applied with the same variables for the same period of time. In this study, unlike other studies conducted to date, the ENCOG machine learning framework has been used along with the Java programming language to construct the ANN. The training of the network has been done by the resilient propagation method. The ex post and ex ante estimates obtained by the ANN method have been compared with the results obtained by the econometric forecasting method of VAR. Strikingly, our findings based on the ANN method reveal that there is a possibility of financial distress or a financial crisis in Turkey starting from October 2017. The results obtained with the VAR method also support the results of the ANN method. Additionally, our results indicate that the ANN approach has superior prediction performance compared to the VAR method.

  15. Arthrodesis of the knee after failed knee replacement.

    PubMed

    Wade, P J; Denham, R A

    1984-05-01

    Arthrodesis of the knee is sometimes needed for failed total knee replacement, but fusion can be difficult to obtain. We describe a method of arthrodesis that uses the simple, inexpensive, Portsmouth external fixator. Bony union was obtained in all six patients treated with this technique. These results are compared with those obtained by other methods of arthrodesis.

  16. Comparison of viscous-shock-layer solutions by time-asymptotic and steady-state methods. [flow distribution around a Jupiter entry probe

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Moss, J. N.; Simmonds, A. L.

    1982-01-01

    Two flow-field codes employing the time- and space-marching numerical techniques were evaluated. Both methods were used to analyze the flow field around a massively blown Jupiter entry probe under perfect-gas conditions. In order to obtain a direct point-by-point comparison, the computations were made by using identical grids and turbulence models. For the same degree of accuracy, the space-marching scheme takes much less time as compared to the time-marching method and would appear to provide accurate results for the problems with nonequilibrium chemistry, free from the effect of local differences in time on the final solution which is inherent in time-marching methods. With the time-marching method, however, the solutions are obtainable for the realistic entry probe shapes with massive or uniform surface blowing rates; whereas, with the space-marching technique, it is difficult to obtain converged solutions for such flow conditions. The choice of the numerical method is, therefore, problem dependent. Both methods give equally good results for the cases where results are compared with experimental data.

  17. Objective Amplitude of Accommodation Computed from Optical Quality Metrics Applied to Wavefront Outcomes

    PubMed Central

    López-Gil, Norberto; Fernández-Sánchez, Vicente; Thibos, Larry N.; Montés-Micó, Robert

    2010-01-01

    Purpose We studied the accuracy and precision of 32 objective wavefront methods for finding the amplitude of accommodation in 180 eyes. Methods Ocular accommodation was stimulated with 0.5 D steps in target vergence spanning the full range of accommodation for each subject. Subjective monocular amplitude of accommodation was measured using two clinical methods: with negative lenses and with a custom Badal optometer. Results Both subjective methods gave similar results. Results obtained from the Badal optometer were used to test the accuracy of the objective methods. All objective methods showed a lower amplitude of accommodation than the subjective ones, by an amount that varied from 0.2 to 1.1 D depending on the method. The precision of this prediction also varied between subjects, with an average standard error of the mean of 0.1 D that decreased with age. Conclusions Depth of field increases the subjective amplitude of accommodation, which therefore overestimates the objective amplitude obtained with all the metrics used. The change of spherical aberration in the negative direction during accommodation increases the amplitude of accommodation by an amount that varies with age.

  18. Determination of relative ion chamber calibration coefficients from depth-ionization measurements in clinical electron beams

    NASA Astrophysics Data System (ADS)

    Muir, B. R.; McEwen, M. R.; Rogers, D. W. O.

    2014-10-01

    A method is presented to obtain ion chamber calibration coefficients relative to secondary standard reference chambers in electron beams using depth-ionization measurements. Results are obtained as a function of depth and average electron energy at depth in 4, 8, 12 and 18 MeV electron beams from the NRC Elekta Precise linac. The PTW Roos, Scanditronix NACP-02, PTW Advanced Markus and NE 2571 ion chambers are investigated. The challenges and limitations of the method are discussed. The proposed method produces useful data at shallow depths. At depths past the reference depth, small shifts in positioning or drifts in the incident beam energy affect the results, thereby providing a built-in test of incident electron energy drifts and/or chamber set-up. Polarity corrections for ion chambers as a function of average electron energy at depth agree with literature data. The proposed method produces results consistent with those obtained using the conventional calibration procedure while gaining much more information about the behavior of the ion chamber with similar data acquisition time. Measurement uncertainties in calibration coefficients obtained with this method are estimated to be less than 0.5%. These results open up the possibility of using depth-ionization measurements to yield chamber ratios which may be suitable for primary standards-level dissemination.

  19. Comparative homology agreement search: An effective combination of homology-search methods

    PubMed Central

    Alam, Intikhab; Dress, Andreas; Rehmsmeier, Marc; Fuellen, Georg

    2004-01-01

    Many methods have been developed to search for homologous members of a protein family in databases, and the reliability of results and conclusions may be compromised if only one method is used, neglecting the others. Here we introduce a general scheme for combining such methods. Based on this scheme, we implemented a tool called comparative homology agreement search (chase) that integrates different search strategies to obtain a combined “E value.” Our results show that a consensus method integrating distinct strategies easily outperforms any of its component algorithms. More specifically, an evaluation based on the Structural Classification of Proteins database reveals that, on average, a coverage of 47% can be obtained in searches for distantly related homologues (i.e., members of the same superfamily but not the same family, which is a very difficult task), accepting only 10 false positives, whereas the individual methods obtain a coverage of 28–38%. PMID:15367730

  20. Validation of the concentration profiles obtained from the near infrared/multivariate curve resolution monitoring of reactions of epoxy resins using high performance liquid chromatography as a reference method.

    PubMed

    Garrido, M; Larrechi, M S; Rius, F X

    2007-03-07

    This paper reports the validation of the results obtained by combining near infrared spectroscopy and multivariate curve resolution-alternating least squares (MCR-ALS) and using high performance liquid chromatography as a reference method, for the model reaction of phenylglycidylether (PGE) and aniline. The results are obtained as concentration profiles over the reaction time. The trueness of the proposed method has been evaluated in terms of lack of bias. The joint test for the intercept and the slope showed that there were no significant differences between the profiles calculated spectroscopically and the ones obtained experimentally by means of the chromatographic reference method at an overall level of confidence of 5%. The uncertainty of the results was estimated by using information derived from the process of assessment of trueness. Such operational aspects as the cost and availability of instrumentation and the length and cost of the analysis were evaluated. The method proposed is a good way of monitoring the reactions of epoxy resins, and it adequately shows how the species concentration varies over time.
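
    The joint test for the intercept and the slope mentioned above can be set up as an F-test on the restriction (intercept, slope) = (0, 1) of the regression of the spectroscopic results on the chromatographic reference values; the sketch below uses hypothetical concentrations.

    ```python
    import numpy as np
    import statsmodels.api as sm

    x = np.array([0.10, 0.25, 0.40, 0.55, 0.70])   # HPLC reference concentrations (hypothetical)
    y = np.array([0.11, 0.24, 0.42, 0.53, 0.71])   # NIR/MCR-ALS estimates (hypothetical)

    model = sm.OLS(y, sm.add_constant(x)).fit()
    # Joint null hypothesis: intercept = 0 and slope = 1 (no systematic bias).
    joint_test = model.f_test('const = 0, x1 = 1')
    print(joint_test.fvalue, joint_test.pvalue)
    ```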

  1. Spectroscopic investigations of microwave generated plasmas

    NASA Technical Reports Server (NTRS)

    Hawley, Martin C.; Haraburda, Scott S.; Dinkel, Duane W.

    1991-01-01

    The study deals with the plasma behavior as applied to spacecraft propulsion from the perspective of obtaining better design and modeling capabilities. The general theory of spectroscopy is reviewed, and existing methods for converting emission-line intensities into such quantities as temperatures and densities are outlined. Attention is focused on the single-atomic-line and two-line radiance ratio methods, atomic Boltzmann plot, and species concentration. Electronic temperatures for a helium plasma are determined as a function of pressure and a gas-flow rate using these methods, and the concentrations of ions and electrons are predicted from the Saha-Eggert equations using the sets of temperatures obtained as a function of the gas-flow rate. It is observed that the atomic Boltzmann method produces more reliable results for the electronic temperature, while the results obtained from the single-line method reflect the electron temperatures accurately.
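
    For reference, the atomic Boltzmann-plot relation used in such analyses is typically written as below; plotting the left-hand side against the upper-level energy E_k for several lines gives a straight line of slope -1/(k_B T_e) (a standard form, assumed here rather than quoted from the report).

    ```latex
    % Boltzmann plot: I is the measured line intensity, \lambda the wavelength,
    % g_k and E_k the statistical weight and energy of the upper level,
    % A_{ki} the transition probability, T_e the electronic (excitation) temperature.
    \ln\!\left(\frac{I\,\lambda}{g_k A_{ki}}\right) = -\frac{E_k}{k_B T_e} + C
    ```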

  2. Modified Hawking Radiation from a Kerr-Newman Black Hole due to Back-Reaction

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Wang, Gang; Liu, Wenbiao

    Hawking radiation from a general Kerr-Newman black hole is investigated using Damour-Ruffini's method. Considering the back-reaction of the particle's energy, charge and angular momentum on the spacetime, we obtain a modified nonthermal spectrum. This may offer an explanation of the information loss paradox; furthermore, the result is consistent with that obtained using Parikh and Wilczek's method.

  3. Phase matrix induced symmetries for multiple scattering using the matrix operator method

    NASA Technical Reports Server (NTRS)

    Hitzfelder, S. J.; Kattawar, G. W.

    1973-01-01

    Entirely rigorous proofs of the symmetries induced by the phase matrix into the reflection and transmission operators used in the matrix operator theory are given. Results are obtained for multiple scattering in both homogeneous and inhomogeneous atmospheres. These results will be useful to researchers using the method since large savings in computer time and storage are obtainable.

  4. A Comparison of Presentation Levels to Maximize Word Recognition Scores

    PubMed Central

    Guthrie, Leslie A.; Mackersie, Carol L.

    2010-01-01

    Background While testing suprathreshold word recognition at multiple levels is considered best practice, studies on practice patterns do not suggest that this is common practice. Audiologists often test at a presentation level intended to maximize recognition scores, but methods for selecting this level are not well established for a wide range of hearing losses. Purpose To determine the presentation level methods that resulted in maximum suprathreshold phoneme-recognition scores while avoiding loudness discomfort. Research Design Performance-intensity functions were obtained for 40 participants with sensorineural hearing loss using the Computer-Assisted Speech Perception Assessment. Participants had either gradually sloping (mild, moderate, moderately severe/severe) or steeply sloping losses. Performance-intensity functions were obtained at presentation levels ranging from 10 dB above the SRT to 5 dB below the UCL (uncomfortable level). In addition, categorical loudness ratings were obtained across a range of intensities using speech stimuli. Scores obtained at UCL – 5 dB (maximum level below loudness discomfort) were compared to four alternative presentation-level methods. The alternative presentation-level methods included sensation level (SL; 2 kHz reference, SRT reference), a fixed-level (95 dB SPL) method, and the most comfortable loudness level (MCL). For the SL methods, scores used in the analysis were selected separately for the SRT and 2 kHz references based on several criteria. The general goal was to choose levels that represented asymptotic performance while avoiding loudness discomfort. The selection of SLs varied across the range of hearing losses. Results Scores obtained using the different presentation-level methods were compared to scores obtained using UCL – 5 dB. For the mild hearing loss group, the mean phoneme scores were similar for all presentation levels. For the moderately severe/severe group, the highest mean score was obtained using UCL - 5 dB. For the moderate and steeply sloping groups, the mean scores obtained using 2 kHz SL were equivalent to UCL - 5 dB, whereas scores obtained using the SRT SL were significantly lower than those obtained using UCL - 5 dB. The mean scores corresponding to MCL and 95 dB SPL were significantly lower than scores for UCL - 5 dB for the moderate and the moderately severe/severe group. Conclusions For participants with mild to moderate gradually sloping losses and for those with steeply sloping losses, the UCL – 5 dB and the 2 kHz SL methods resulted in the highest scores without exceeding listeners' UCLs. For participants with moderately severe/severe losses, the UCL - 5 dB method resulted in the highest phoneme recognition scores. PMID:19594086

  5. On solving wave equations on fixed bounded intervals involving Robin boundary conditions with time-dependent coefficients

    NASA Astrophysics Data System (ADS)

    van Horssen, Wim T.; Wang, Yandong; Cao, Guohua

    2018-06-01

    In this paper, it is shown how characteristic coordinates, or equivalently the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin-type boundary conditions with time-dependent coefficients. A Robin boundary condition is a condition that specifies a linear combination of the dependent variable and its first-order space derivative on a boundary of the interval. Analytical methods, such as the method of separation of variables (SOV) or the Laplace transform method, are not applicable to these types of problems. The analytical results obtained by applying the proposed method are in complete agreement with those obtained using the numerical finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also agree completely with those obtained, for instance, by the method of separation of variables or by the finite difference method.

  6. High-performance liquid chromatographic method for potency determination of amoxicillin in commercial preparations and for stability studies.

    PubMed Central

    Hsu, M C; Hsu, P W

    1992-01-01

    A reversed-phase column liquid chromatographic method was developed for the assay of amoxicillin and its preparations. The linear calibration range was 0.2 to 2.0 mg/ml (r = 0.9998), and recoveries were generally greater than 99%. The high-performance liquid chromatographic assay results were compared with those obtained from a microbiological assay of bulk drug substance and capsule, injection, and granule formulations containing amoxicillin and degraded amoxicillin. At the 99% confidence level, no significant intermethod differences were noted for the paired results. Commercial formulations were also analyzed, and the results obtained by the proposed method closely agreed with those found by the microbiological method. The results indicated that the proposed method is a suitable substitute for the microbiological method for assays and stability studies of amoxicillin preparations. PMID:1416827

  7. Parametric and experimental analysis using a power flow approach

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1988-01-01

    A structural power flow approach for analyzing the structure-borne transmission of structural vibrations, defined and developed previously, is used here to analyze the influence of structural parameters on the transmitted energy. As a basis for comparison, the parametric analysis is first performed using a Statistical Energy Analysis approach, and the results are compared with those obtained using the power flow approach. The advantages of using structural power flow are thus demonstrated by comparing the types of results obtained by the two methods. Additionally, to demonstrate the advantages of the power flow method and to show that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental investigation of structural power flow is also presented. Results are presented for an L-shaped beam for which an analytical solution has already been obtained. Furthermore, the various methods available to measure vibrational power flow are compared to investigate the advantages and disadvantages of each.

  8. Shuffling cross-validation-bee algorithm as a new descriptor selection method for retention studies of pesticides in biopartitioning micellar chromatography.

    PubMed

    Zarei, Kobra; Atabati, Morteza; Ahmadi, Monire

    2017-05-04

    The bee algorithm (BA) is an optimization algorithm, inspired by the natural foraging behaviour of honey bees, that can be applied to feature selection. In this paper, shuffling cross-validation-BA (CV-BA) was applied to select the descriptors that best describe the retention factor (log k) in the biopartitioning micellar chromatography (BMC) of 79 heterogeneous pesticides. Six descriptors were obtained using BA, and the selected descriptors were then used for model development with multiple linear regression (MLR). Descriptor selection was also performed using stepwise, genetic algorithm, and simulated annealing methods, with MLR applied for model development, and the results were compared with those obtained from shuffling CV-BA. The results showed that shuffling CV-BA can serve as a powerful descriptor selection method. Support vector machine (SVM) regression was also applied for model development using the six descriptors selected by BA. The statistical results obtained using SVM were better than those obtained using MLR: the root mean square error (RMSE) and correlation coefficient (R) for the whole data set (training and test) were 0.1863 and 0.9426, respectively, for shuffling CV-BA-MLR, and 0.0704 and 0.9922, respectively, for shuffling CV-BA-SVM.
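
    The abstract above describes the workflow only at a high level; as a rough illustration of the cross-validated scoring step that any such descriptor search needs, the sketch below scores a candidate descriptor subset with shuffled cross-validation and MLR in Python. It is not the authors' bee algorithm; the descriptor matrix, the log k values, and the candidate subsets are all hypothetical.

    ```python
    # Minimal sketch: scoring a candidate descriptor subset with shuffled
    # cross-validated multiple linear regression (not the authors' bee algorithm).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import ShuffleSplit, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(79, 20))          # hypothetical descriptor matrix (79 pesticides)
    y = rng.normal(size=79)                # hypothetical log k retention factors

    def score_subset(columns):
        """Mean shuffled-CV R^2 of an MLR model built on the chosen descriptors."""
        cv = ShuffleSplit(n_splits=10, test_size=0.3, random_state=0)
        return cross_val_score(LinearRegression(), X[:, columns], y, cv=cv).mean()

    # A search heuristic (bee algorithm, GA, ...) would propose subsets; here we
    # simply compare two candidate subsets of six descriptors.
    print(score_subset([0, 1, 2, 3, 4, 5]), score_subset([2, 5, 7, 11, 13, 17]))
    ```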

  9. Propagation Constant of a Rectangular Waveguide Completely Full of Ferrite Magnetized Longitudinally

    NASA Astrophysics Data System (ADS)

    Sakli, Hedi; Benzina, Hafedh; Aguili, Taoufik; Tao, Jun Wu

    2009-08-01

    This paper presents an analysis of a rectangular waveguide completely filled with longitudinally magnetized ferrite. The analysis is based on the formulation of the transverse operator method (TOM), followed by the application of the Galerkin method, which yields an eigenvalue equation system. The propagation constant of several homogeneous, anisotropic waveguide structures with ferrite has been obtained. The results presented here show that the transverse operator formulation is not only an elegant theoretical form but also a powerful and efficient analysis method, useful for solving a number of propagation problems in electromagnetics. One advantage of this method is its fast convergence. Numerical examples are given for different cases and compared with published results; good agreement is obtained.

  10. Subsonic aerodynamic characteristics of interacting lifting surfaces with separated flow around sharp edges predicted by a vortex-lattice method

    NASA Technical Reports Server (NTRS)

    Lamar, J. E.; Gloss, B. B.

    1975-01-01

    Because the potential flow suction along the leading and side edges of a planform can be used to determine both leading- and side-edge vortex lift, the present investigation was undertaken to apply the vortex-lattice method to computing side-edge suction force for isolated or interacting planforms. Although there is a small effect of bound vortex sweep on the computation of the side-edge suction force, the results obtained for a number of different isolated planforms produced acceptable agreement with results obtained from a method employing continuous induced-velocity distributions. By using the method outlined, better agreement between theory and experiment was noted for a wing in the presence of a canard than was previously obtained.

  11. Magnetic Field Suppression of Flow in Semiconductor Melt

    NASA Technical Reports Server (NTRS)

    Fedoseyev, A. I.; Kansa, E. J.; Marin, C.; Volz, M. P.; Ostrogorsky, A. G.

    2000-01-01

    One of the most promising approaches for the reduction of convection during the crystal growth of conductive melts (semiconductor crystals) is the application of magnetic fields. Current technology allows experimentation with very intense static fields (up to 80 kGauss), for which nearly convection-free results are expected from simple scaling analysis in stabilized systems (vertical Bridgman method with axial magnetic field). However, controversial experimental results have been obtained. Computational methods are therefore a fundamental tool for understanding the phenomena occurring during the solidification of semiconductor materials. Moreover, effects such as the bending of the isomagnetic lines, different aspect ratios, and misalignments between the directions of the gravity and magnetic field vectors cannot be analyzed with analytical methods. The earliest numerical results led to controversial conclusions and were not able to explain the experimental results. Although the generated flows are extremely low, the computational task is complicated by the thin boundary layers; this is one of the reasons for the discrepancies among the results reported by numerical studies. Modeling of these magnetically damped crystal growth experiments requires advanced numerical methods. We used, for comparison, three different approaches to obtain the solution of the thermal convection flow problem: (1) a spectral method in a spectral superelement implementation, (2) a finite element method with regularization for boundary layers, and (3) the multiquadric method, a novel method with global radial basis functions that is proven to have exponential convergence. The results obtained by these three methods are presented for a wide range of Rayleigh and Hartmann numbers. A comparison and discussion of accuracy, efficiency, reliability, and agreement with experimental results is presented as well.

  12. A comparison of Thellier-type and multispecimen paleointensity determinations on Pleistocene and historical lava flows from Lanzarote (Canary Islands, Spain)

    NASA Astrophysics Data System (ADS)

    Calvo-Rathert, Manuel; Morales-Contreras, Juan; Carrancho, Ángel; Goguitchaichvili, Avto

    2016-09-01

    Sixteen Miocene, Pleistocene, and historic lava flows have been sampled in Lanzarote (Canary Islands) for paleointensity analysis with both the Coe and multispecimen methods. Besides obtaining new data, the main goal of the study was the comparison of paleointensity results determined with two different techniques. Characteristic Remanent Magnetization (ChRM) directions were obtained in 15 flows, and 12 were chosen for paleointensity determination. In Thellier-type experiments, a selection of reliable paleointensity determinations (43 of 78 studied samples) was performed using sets of criteria of different stringency, trying to relate the quality of results to the strictness of the chosen criteria. Uncorrected and fraction and domain-state corrected multispecimen paleointensity results were obtained in all flows. Results with the Coe method on historical flows either agree with the expected values or show moderately lower ones, but multispecimen determinations display a large deviation from the expected result in one case. No relation can be detected between correct or anomalous results and paleointensity determination quality or rock-magnetic properties. However, results on historical flows suggest that agreement between both methods could be a good indicator of correct determinations. Comparison of results obtained with both methods on seven Pleistocene flows yields an excellent agreement in four and disagreements in three cases. Pleistocene determinations were only accepted if either results from both methods agreed or a result was based on a sufficiently large number (n > 4) of individual Thellier-type determinations. In most Pleistocene flows, a VADM around 5 × 1022 Am2 was observed, although two flows displayed higher values around 9 × 1022 Am2.

  13. Rapid identification of bacteria from bioMérieux BacT/ALERT blood culture bottles by MALDI-TOF MS.

    PubMed

    Haigh, J D; Green, I M; Ball, D; Eydmann, M; Millar, M; Wilks, M

    2013-01-01

    Several studies have reported poor results when trying to identify microorganisms directly from the bioMérieux BacT/ALERT blood culture system using matrix-assisted laser desorption/ionisation-time of flight (MALDI-TOF) mass spectrometry. The aim of this study was to evaluate two new methods, Sepsityper and an enrichment method, for direct identification of microorganisms from this system. For both methods the samples were processed using the Bruker Microflex LT mass spectrometer (Biotyper) with the Microflex Control software to obtain spectra. The results from direct analysis were compared with those obtained by subculture and subsequent identification. A total of 350 positive blood cultures were processed simultaneously by the two methods. Fifty-three cultures were polymicrobial or failed to grow any organism on subculture, and these results were not included because there was either no subculture result or, for polymicrobial cultures, it was known that the Biotyper would not be able to distinguish the constituent organisms correctly. Overall, the results showed that, contrary to previous reports, it is possible to identify bacteria directly from bioMérieux blood culture bottles: 219/297 (74%) correct identifications were obtained using the Bruker Sepsityper method and 228/297 (77%) using the enrichment method when only one organism was present. Although the enrichment method was simpler, the reagent costs for the Sepsityper method were approximately £4.00 per sample compared with £0.50. An even simpler and cheaper method, which was less labour-intensive and did not require further reagents, was also investigated. Seventy-seven specimens from positive signalled blood cultures were analysed by inoculating prewarmed blood agar plates and analysing any growth after 1-, 2- and 4-h periods of incubation at 37°C, by either direct transfer or alcohol extraction. This method gave the highest number of correct identifications, 66/77 (86%), and was cheaper and less labour-intensive than either of the two methods above.

  14. [A preparative method for isolating the synaptonemal complexes from mammalian spermatocytes].

    PubMed

    Dadashev, S Ia; Bogdanov, Iu F; Gorach, G G; Kolomiets, O L; Karpova, O I

    1993-01-01

    A method for the isolation of synaptonemal complexes (SCs) from mouse, rat, and Syrian hamster spermatocytes is described. A fraction of pachytene spermatocyte nuclei was obtained by centrifugation of the testis homogenate in a stepwise sucrose gradient and then lysed. The resulting chromatin was hydrolysed with DNAse II, and a fraction of isolated SCs was obtained by ultracentrifugation of the hydrolysate. The method can be applied to obtain an SC fraction from spermatocytes sufficient for cytological, biochemical, and molecular biology studies.

  15. Nonideal isentropic gas flow through converging-diverging nozzles

    NASA Technical Reports Server (NTRS)

    Bober, W.; Chow, W. L.

    1990-01-01

    A method for treating nonideal gas flows through converging-diverging nozzles is described. The method incorporates the Redlich-Kwong equation of state. The Runge-Kutta method is used to obtain a solution. Numerical results were obtained for methane gas. Typical plots of pressure, temperature, and area ratios as functions of Mach number are given. From the plots, it can be seen that there exists a range of reservoir conditions that require the gas to be treated as nonideal if an accurate solution is to be obtained.
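
    As a rough illustration of the two ingredients named in the abstract, the sketch below implements the Redlich-Kwong pressure for methane together with a generic classical Runge-Kutta step; it is not the paper's quasi-one-dimensional nozzle formulation, and the state evaluated in the final line is purely illustrative.

    ```python
    # Minimal sketch (not the paper's formulation): Redlich-Kwong pressure for
    # methane plus a generic fourth-order Runge-Kutta step of the kind used to
    # integrate the quasi-1D nozzle flow equations.
    import numpy as np

    R = 8.314462             # J/(mol K)
    Tc, Pc = 190.6, 4.599e6  # approximate critical constants of methane
    a = 0.42748 * R**2 * Tc**2.5 / Pc
    b = 0.08664 * R * Tc / Pc

    def p_redlich_kwong(T, v):
        """Pressure (Pa) from the Redlich-Kwong equation of state, v in m^3/mol."""
        return R * T / (v - b) - a / (np.sqrt(T) * v * (v + b))

    def rk4_step(f, x, y, h):
        """One classical Runge-Kutta step for dy/dx = f(x, y)."""
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    print(p_redlich_kwong(300.0, 0.01))  # roughly 2.5e5 Pa for this dilute state
    ```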

  16. Homogenization of periodic bi-isotropic composite materials

    NASA Astrophysics Data System (ADS)

    Ouchetto, Ouail; Essakhi, Brahim

    2018-07-01

    In this paper, we present a new method for homogenizing bi-periodic materials with bi-isotropic component phases. The presented method is a numerical method, based on the finite element method, for computing the local electromagnetic properties. The homogenized constitutive parameters are expressed as a function of the macroscopic electromagnetic properties, which are obtained from the local properties. The obtained results are compared with those of the Unfolding Finite Element Method and the Maxwell-Garnett formulas.

  17. New analytical exact solutions of time fractional KdV-KZK equation by Kudryashov methods

    NASA Astrophysics Data System (ADS)

    Saha Ray, S.

    2016-04-01

    In this paper, new exact solutions of the time fractional KdV-Khokhlov-Zabolotskaya-Kuznetsov (KdV-KZK) equation are obtained by the classical Kudryashov method and modified Kudryashov method respectively. For this purpose, the modified Riemann-Liouville derivative is used to convert the nonlinear time fractional KdV-KZK equation into the nonlinear ordinary differential equation. In the present analysis, the classical Kudryashov method and modified Kudryashov method are both used successively to compute the analytical solutions of the time fractional KdV-KZK equation. As a result, new exact solutions involving the symmetrical Fibonacci function, hyperbolic function and exponential function are obtained for the first time. The methods under consideration are reliable and efficient, and can be used as an alternative to establish new exact solutions of different types of fractional differential equations arising from mathematical physics. The obtained results are exhibited graphically in order to demonstrate the efficiencies and applicabilities of these proposed methods of solving the nonlinear time fractional KdV-KZK equation.
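
    For readers unfamiliar with the technique, the display below sketches the general form of the classical and modified Kudryashov ansatz as it is usually stated in the literature; the specific reduced ODE and coefficients for the KdV-KZK equation are not reproduced here.

    ```latex
    % General form of the Kudryashov ansatz (a sketch, not the specific KdV-KZK solution).
    % Classical method: after reducing the fractional PDE to an ODE in \xi,
    % seek a finite series in Q(\xi),
    \[
      u(\xi) = \sum_{i=0}^{N} a_i\,Q^{i}(\xi),
      \qquad
      Q(\xi) = \frac{1}{1 + e^{\xi}},
      \qquad
      Q_{\xi} = Q^{2} - Q .
    \]
    % Modified method: the exponential is replaced by a general base a > 0,
    \[
      Q(\xi) = \frac{1}{1 + d\,a^{\xi}},
      \qquad
      Q_{\xi} = \left(Q^{2} - Q\right)\ln a ,
    \]
    % and substituting the series into the reduced ODE and balancing powers of Q
    % fixes N and the coefficients a_i.
    ```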

  18. Results of Investigative Tests of Gas Turbine Engine Compressor Blades Obtained by Electrochemical Machining

    NASA Astrophysics Data System (ADS)

    Kozhina, T. D.; Kurochkin, A. V.

    2016-04-01

    The paper highlights results of investigative tests of GTE compressor Ti-alloy blades produced by electrochemical machining with oscillating tool-electrodes. The tests were carried out to define the optimal parameters of the ECM process that attain the blade quality parameters specified in the design documentation while providing maximal performance. New technological methods are suggested based on the test results; in particular, the application of vibrating tool-electrodes and the use of locating elements made of high-strength materials significantly extend the capabilities of this machining method.

  19. Self-enhancement learning: target-creating learning and its application to self-organizing maps.

    PubMed

    Kamimura, Ryotaro

    2011-05-01

    In this article, we propose a new learning method called "self-enhancement learning." In this method, targets for learning are not given from the outside, but they can be spontaneously created within a neural network. To realize the method, we consider a neural network with two different states, namely, an enhanced and a relaxed state. The enhanced state is one in which the network responds very selectively to input patterns, while in the relaxed state, the network responds almost equally to input patterns. The gap between the two states can be reduced by minimizing the Kullback-Leibler divergence between the two states with free energy. To demonstrate the effectiveness of this method, we applied self-enhancement learning to the self-organizing maps, or SOM, in which lateral interactions were added to an enhanced state. We applied the method to the well-known Iris, wine, housing and cancer machine learning database problems. In addition, we applied the method to real-life data, a student survey. Experimental results showed that the U-matrices obtained were similar to those produced by the conventional SOM. Class boundaries were made clearer in the housing and cancer data. For all the data, except for the cancer data, better performance could be obtained in terms of quantitative and topological errors. In addition, we could see that the trustworthiness and continuity, referring to the quality of neighborhood preservation, could be improved by the self-enhancement learning. Finally, we used modern dimensionality reduction methods and compared their results with those obtained by the self-enhancement learning. The results obtained by the self-enhancement were not superior to but comparable with those obtained by the modern dimensionality reduction methods.

  20. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and recognition. It reviews the relevant theory and the key technology of various preprocessing methods in the face detection process and, using the KPCA method, focuses on how different preprocessing methods affect the recognition results. We choose the YCbCr color space for skin segmentation and integral projection for face location. We use erosion and dilation (the opening and closing operations) and an illumination compensation method to preprocess the face images, and then apply a face recognition method based on kernel principal component analysis; the experiments were carried out using a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel-based extension of the PCA algorithm makes the extracted features represent the original image information better, because a nonlinear feature extraction method is used, and thus a higher recognition rate can be obtained. In the image preprocessing stage, we found that different operations on the images can lead to different results and therefore to different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the value of the power of the polynomial kernel function can affect the recognition result.
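
    The original work was implemented in MATLAB; as a language-agnostic illustration of the recognition stage only, the Python sketch below combines kernel PCA with a nearest-neighbour classifier. The image arrays, labels, and parameter values are hypothetical, and the preprocessing steps discussed in the abstract are assumed to have been applied beforehand.

    ```python
    # Minimal sketch of the recognition stage: kernel PCA features followed by a
    # nearest-neighbour classifier (YCbCr skin segmentation, morphology and
    # illumination compensation are assumed to have been done already).
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X_train = rng.random((100, 64 * 64))   # hypothetical flattened face images
    y_train = rng.integers(0, 10, 100)     # hypothetical identity labels
    X_test = rng.random((20, 64 * 64))

    # A polynomial kernel: the abstract notes that the polynomial degree ("power")
    # affects the recognition rate.
    model = make_pipeline(KernelPCA(n_components=40, kernel="poly", degree=3),
                          KNeighborsClassifier(n_neighbors=1))
    model.fit(X_train, y_train)
    predicted_ids = model.predict(X_test)
    ```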

  1. Fine-Grained Indexing of the Biomedical Literature: MeSH Subheading Attachment for a MEDLINE Indexing Tool

    PubMed Central

    Névéol, Aurélie; Shooshan, Sonya E.; Mork, James G.; Aronson, Alan R.

    2007-01-01

    Objective: This paper reports on the latest results of an Indexing Initiative effort addressing the automatic attachment of subheadings to MeSH main headings recommended by the NLM's Medical Text Indexer. Material and Methods: Several linguistic and statistical approaches are used to retrieve and attach the subheadings. Continuing collaboration with NLM indexers also provided insight into how automatic methods can better enhance indexing practice. Results: The methods were evaluated on a corpus of 50,000 MEDLINE citations. For main heading/subheading pair recommendations, the best precision is obtained with a post-processing rule method (58%), while the best recall is obtained by pooling all methods (64%). For stand-alone subheading recommendations, the best performance is obtained with the PubMed Related Citations algorithm. Conclusion: Significant progress has been made in terms of subheading coverage. After further evaluation, some of this work may be integrated into the MEDLINE indexing workflow. PMID:18693897

  2. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition; this was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  3. Signal Analysis Algorithms for Optimized Fitting of Nonresonant Laser Induced Thermal Acoustics Damped Sinusoids

    NASA Technical Reports Server (NTRS)

    Balla, R. Jeffrey; Miller, Corey A.

    2008-01-01

    This study seeks a numerical algorithm which optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an AutoRegressive method. Compared with previous results using Prony's method, single-shot waveform frequencies are reduced by approximately 0.4% and frequency errors are reduced by a factor of approximately 20 at 303 K, to approximately 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.
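
    As a rough illustration of the underlying estimation problem, the sketch below fits a noisy damped sinusoid by nonlinear least squares seeded from an FFT peak; it is not the autoregressive estimator that the paper finds best, and all signal parameters are illustrative.

    ```python
    # Minimal sketch (nonlinear least squares rather than the autoregressive
    # estimator favoured in the paper): extracting the frequency of a noisy
    # damped sinusoid of the kind produced by nonresonant LITA.
    import numpy as np
    from scipy.optimize import curve_fit

    def damped_sine(t, amp, tau, freq, phase):
        return amp * np.exp(-t / tau) * np.sin(2 * np.pi * freq * t + phase)

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 5e-6, 500)                        # 5 microseconds of signal
    signal = (damped_sine(t, 1.0, 2e-6, 2.0e6, 0.3)
              + 0.05 * rng.standard_normal(t.size))        # hypothetical 2 MHz waveform

    # Coarse frequency guess from the FFT peak, then refine by least squares.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    f0 = freqs[np.argmax(spectrum[1:]) + 1]
    popt, pcov = curve_fit(damped_sine, t, signal, p0=[1.0, 1e-6, f0, 0.0])
    freq_hz, freq_sigma = popt[2], np.sqrt(pcov[2, 2])     # estimate and 1-sigma error
    ```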

  4. Air data system optimization using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Deshpande, Samir M.; Kumar, Renjith R.; Seywald, Hans; Siemers, Paul M., III

    1992-01-01

    An optimization method for flush-orifice air data system design has been developed using the Genetic Algorithm approach. The optimization of the orifice array minimizes the effect of normally distributed random noise in the pressure readings on the calculation of air data parameters, namely, angle of attack, sideslip angle and freestream dynamic pressure. The optimization method is applied to the design of Pressure Distribution/Air Data System experiment (PD/ADS) proposed for inclusion in the Aeroassist Flight Experiment (AFE). Results obtained by the Genetic Algorithm method are compared to the results obtained by conventional gradient search method.
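
    The sketch below shows the skeleton of a simple genetic algorithm of the kind described: selection of the fittest candidates, single-point crossover, and Gaussian mutation. The cost function is a made-up stand-in, not the PD/ADS noise-sensitivity model used in the paper.

    ```python
    # Minimal sketch of a genetic algorithm of the kind used for the orifice-array
    # design: the cost function here is a stand-in, not the PD/ADS sensitivity model.
    import numpy as np

    rng = np.random.default_rng(0)
    N_ORIFICES, POP, GENERATIONS = 8, 60, 200

    def cost(angles):
        """Hypothetical surrogate for the air-data error induced by pressure noise."""
        return np.var(np.diff(np.sort(angles))) + 1e-3 * np.sum(np.cos(angles) ** 2)

    pop = rng.uniform(0.0, np.pi, size=(POP, N_ORIFICES))   # candidate orifice angles
    for _ in range(GENERATIONS):
        fitness = np.array([cost(ind) for ind in pop])
        order = np.argsort(fitness)
        parents = pop[order[:POP // 2]]                      # selection: keep best half
        children = parents.copy()
        cut = N_ORIFICES // 2
        children[1::2, :cut] = parents[0::2, :cut]           # single-point crossover
        children += rng.normal(0.0, 0.02, children.shape)    # Gaussian mutation
        pop = np.vstack([parents, children])
    best = pop[np.argmin([cost(ind) for ind in pop])]
    ```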

  5. New method for characterization of retroreflective materials

    NASA Astrophysics Data System (ADS)

    Junior, O. S.; Silva, E. S.; Barros, K. N.; Vitro, J. G.

    2018-03-01

    The present article proposes a new method of analyzing the properties of retroreflective materials using a goniophotometer. The aim is to establish a higher-resolution test method with a wide range of viewing angles, taking into account a three-dimensional analysis of the retroreflection of the tested material. The validation was performed by collecting data from specimens of materials used in safety clothing and road signs. The approach showed that the results obtained by the proposed method are comparable to the results obtained by the normative protocols, representing an advance for the metrology of these materials.

  6. Estimating the vibration level of an L-shaped beam using power flow techniques

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.

    1986-01-01

    The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.

  7. Comparison of three nondestructive and contactless techniques for investigations of recombination parameters on an example of silicon samples

    NASA Astrophysics Data System (ADS)

    Chrobak, Ł.; Maliński, M.

    2018-06-01

    This paper presents a comparison of three nondestructive and contactless techniques used for the determination of the recombination parameters of silicon samples: the photoacoustic method, the modulated free carrier absorption method, and the photothermal radiometry method. The experimental set-ups used for measurements of the recombination parameters with these methods, as well as the theoretical models used for the interpretation of the obtained experimental data, are presented and described. The experimental results and their respective fits obtained with these nondestructive techniques are shown and discussed. The values of the recombination parameters obtained with these methods are also presented and compared, and the main advantages and disadvantages of the presented methods are discussed.

  8. Study of viscous flow about airfoils by the integro-differential method

    NASA Technical Reports Server (NTRS)

    Wu, J. C.; Sampath, S.

    1975-01-01

    An integro-differential method was used for numerically solving unsteady incompressible viscous flow problems. A computer program was prepared to solve the problem of an impulsively started 9% thick symmetric Joukowski airfoil at an angle of attack of 15 deg and a Reynolds number of 1000. Some of the results obtained for this problem are discussed and compared with related work completed previously. Two numerical procedures were used, an Alternating Direction Implicit (ADI) method and a Successive Line Relaxation (SLR) method. Generally, the ADI solution agrees well with the SLR solution and with previous results at stations away from the trailing edge. At the trailing edge station, the ADI solution differs substantially from previous results, while the vorticity profiles obtained from the SLR method there are in good qualitative agreement with previous results.

  9. Solution of Grad-Shafranov equation by the method of fundamental solutions

    NASA Astrophysics Data System (ADS)

    Nath, D.; Kalra, M. S.

    2014-06-01

    In this paper we have used the Method of Fundamental Solutions (MFS) to solve the Grad-Shafranov (GS) equation for the axisymmetric equilibria of tokamak plasmas with monomial sources. These monomials are the individual terms appearing on the right-hand side of the GS equation if one expands the nonlinear terms into polynomials. Unlike the Boundary Element Method (BEM), the MFS does not involve any singular integrals and is a meshless boundary-alone method. Its basic idea is to create a fictitious boundary around the actual physical boundary of the computational domain. This automatically removes the involvement of singular integrals. The results obtained by the MFS match well with the earlier results obtained using the BEM. The method is also applied to Solov'ev profiles and it is found that the results are in good agreement with analytical results.
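
    As a small illustration of the Method of Fundamental Solutions idea described above (sources on a fictitious boundary, hence no singular integrals), the sketch below solves a 2-D Laplace boundary value problem on the unit disk; it is a toy stand-in, not the Grad-Shafranov solver of the paper.

    ```python
    # Minimal sketch of the Method of Fundamental Solutions for the 2-D Laplace
    # equation on the unit disk: sources are placed on a fictitious circle outside
    # the physical boundary, so no singular boundary integrals appear.
    import numpy as np

    n_boundary, n_sources = 80, 40
    theta_b = np.linspace(0, 2 * np.pi, n_boundary, endpoint=False)
    theta_s = np.linspace(0, 2 * np.pi, n_sources, endpoint=False)
    xb = np.c_[np.cos(theta_b), np.sin(theta_b)]            # collocation points, r = 1
    xs = 1.5 * np.c_[np.cos(theta_s), np.sin(theta_s)]      # fictitious sources, r = 1.5

    def G(x, y):
        """Fundamental solution of the 2-D Laplacian."""
        return -np.log(np.linalg.norm(x - y, axis=-1)) / (2 * np.pi)

    A = G(xb[:, None, :], xs[None, :, :])                   # collocation matrix
    g = xb[:, 0] ** 2 - xb[:, 1] ** 2                       # prescribed boundary data
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)

    # Evaluate the MFS solution at an interior point; the exact harmonic extension
    # of x^2 - y^2 gives 0.05 at (0.3, 0.2).
    x0 = np.array([0.3, 0.2])
    print(G(x0[None, :], xs) @ coef)   # should be close to 0.05
    ```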

  10. Computation of partially invariant solutions for the Einstein Walker manifolds' identifying equations

    NASA Astrophysics Data System (ADS)

    Nadjafikhah, Mehdi; Jafari, Mehdi

    2013-12-01

    In this paper, the partially invariant solutions (PISs) method is applied in order to obtain new four-dimensional Einstein Walker manifolds. This method is based on subgroup classification for the symmetry group of partial differential equations (PDEs) and can be regarded as a generalization of the similarity reduction method. For this purpose, those PISs which have the defect structure δ=1 and result from two-dimensional subalgebras are considered in the present paper. It is also shown that the obtained PISs are distinct from the invariant solutions obtained by the similarity reduction method.

  11. Ambiguities and completeness of SAS data analysis: investigations of apoferritin by SAXS/SANS EID and SEC-SAXS methods

    NASA Astrophysics Data System (ADS)

    Zabelskii, D. V.; Vlasov, A. V.; Ryzhykau, Yu L.; Murugova, T. N.; Brennich, M.; Soloviov, D. V.; Ivankov, O. I.; Borshchevskiy, V. I.; Mishin, A. V.; Rogachev, A. V.; Round, A.; Dencher, N. A.; Büldt, G.; Gordeliy, V. I.; Kuklin, A. I.

    2018-03-01

    The method of small angle scattering (SAS) is widely used in biophysical research of proteins in aqueous solutions. Obtaining low-resolution structures of proteins is still highly valuable despite the advances in high-resolution methods such as X-ray diffraction, cryo-EM, etc. SAS offers the unique possibility of obtaining structural information under conditions close to those of functional assays, i.e. in solution, without different additives, in the mg/mL concentration range. The SAS method has a long history, but there are still many uncertainties related to data treatment. We compared 1D SAS profiles of apoferritin obtained by X-ray diffraction (XRD) and SAS methods. It is shown that SAS curves calculated from the X-ray crystallographic structure of apoferritin differ more significantly than might be expected from the resolution of the SAS instrument. The extrapolation to infinite dilution (EID) method does not sufficiently exclude dimerization and oligomerization effects and therefore cannot guarantee the total absence of a dimer contribution in the final SAS curve. In this study, we show that the EID SAXS, EID SANS, and SEC-SAXS methods give complementary results, and that using them all together yields the most accurate results and the highest confidence in SAS data analysis of proteins.

  12. New method of extracting information of arterial oxygen saturation based on ∑ | 𝚫 |

    NASA Astrophysics Data System (ADS)

    Dai, Wenting; Lin, Ling; Li, Gang

    2017-04-01

    Noninvasive detection of oxygen saturation with near-infrared spectroscopy has been widely used in clinics. In order to further enhance its detection precision and reliability, this paper proposes a time-domain absolute difference summation (∑|Δ|) method based on the dynamic spectrum. In this method, the ratios of the absolute differences between two differential sampling points at the same moment on the logarithmic photoplethysmography signals of red and infrared light are obtained in turn, yielding a ratio sequence that is screened with a statistical method. Finally, the summation of the screened ratio sequence is used as the oxygen saturation coefficient Q. We collected 120 reference samples of SpO2 and then compared the results of two methods, ∑|Δ| and peak-peak. The average root-mean-square errors of the two methods were 3.02% and 6.80%, respectively, in 20 cases selected randomly. In addition, the average variance of Q for the 10 samples obtained by the new method was reduced to 22.77% of that obtained by the peak-peak method. Compared with the commercial product, the new method makes the results more accurate. Theoretical and experimental analysis indicates that the application of the ∑|Δ| method could enhance the precision and reliability of oxygen saturation detection in real time.
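
    The sketch below encodes one plausible reading of the ∑|Δ| procedure described above: ratios of absolute differences of the log-PPG signals are screened for outliers and summed into the coefficient Q. The percentile screening rule and the synthetic signals are assumptions for illustration, not the paper's exact statistical method.

    ```python
    # Minimal sketch of one reading of the sum-of-absolute-differences idea:
    # ratios of |diff(log PPG)| for red and infrared light are screened and
    # summed into the oxygen-saturation coefficient Q. The screening rule and
    # the calibration from Q to SpO2 are assumptions, not the paper's.
    import numpy as np

    def saturation_coefficient(red, infrared):
        d_red = np.abs(np.diff(np.log(red)))
        d_ir = np.abs(np.diff(np.log(infrared)))
        ratio = d_red / np.maximum(d_ir, 1e-12)          # avoid division by zero
        lo, hi = np.percentile(ratio, [5, 95])           # simple statistical screening
        kept = ratio[(ratio >= lo) & (ratio <= hi)]
        return kept.sum()

    # Usage with hypothetical photoplethysmography traces:
    t = np.linspace(0, 10, 1000)
    red = 2.0 + 0.05 * np.sin(2 * np.pi * 1.2 * t)
    infrared = 2.5 + 0.08 * np.sin(2 * np.pi * 1.2 * t)
    Q = saturation_coefficient(red, infrared)
    ```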

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hess, E.; Hamilton, D.

    The purpose of this ITER is to chronicle the development of the ROST (trademark), its capabilities, associated equipment, and accessories. The report concludes with an evaluation of how closely the results obtained using the technology compare to the results obtained using the reference methods.

  15. Simultaneous quantitative determination of paracetamol and tramadol in tablet formulation using UV spectrophotometry and chemometric methods

    NASA Astrophysics Data System (ADS)

    Glavanović, Siniša; Glavanović, Marija; Tomišić, Vladislav

    2016-03-01

    The UV spectrophotometric methods for simultaneous quantitative determination of paracetamol and tramadol in paracetamol-tramadol tablets were developed. The spectrophotometric data obtained were processed by means of partial least squares (PLS) and genetic algorithm coupled with PLS (GA-PLS) methods in order to determine the content of active substances in the tablets. The results gained by chemometric processing of the spectroscopic data were statistically compared with those obtained by means of validated ultra-high performance liquid chromatographic (UHPLC) method. The accuracy and precision of data obtained by the developed chemometric models were verified by analysing the synthetic mixture of drugs, and by calculating recovery as well as relative standard error (RSE). A statistically good agreement was found between the amounts of paracetamol determined using PLS and GA-PLS algorithms, and that obtained by UHPLC analysis, whereas for tramadol GA-PLS results were proven to be more reliable compared to those of PLS. The simplest and the most accurate and precise models were constructed by using the PLS method for paracetamol (mean recovery 99.5%, RSE 0.89%) and the GA-PLS method for tramadol (mean recovery 99.4%, RSE 1.69%).
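
    As a rough illustration of the chemometric step, the sketch below builds a PLS regression from UV spectra to one analyte concentration with cross-validated prediction; the genetic-algorithm wavelength selection used for tramadol is omitted, and the spectra and concentrations are hypothetical.

    ```python
    # Minimal sketch of the PLS calibration step for one analyte (paracetamol);
    # the GA-PLS wavelength selection of the paper is not reproduced.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    spectra = rng.random((40, 200))          # 40 calibration mixtures x 200 wavelengths
    paracetamol = rng.uniform(5, 15, 40)     # reference concentrations (mg/L)

    pls = PLSRegression(n_components=4)
    predicted = cross_val_predict(pls, spectra, paracetamol, cv=5).ravel()
    recovery = 100.0 * predicted.mean() / paracetamol.mean()   # mean recovery, %
    ```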

  16. Asymptotic modal analysis and statistical energy analysis

    NASA Technical Reports Server (NTRS)

    Dowell, Earl H.

    1992-01-01

    Asymptotic Modal Analysis (AMA) is a method which is used to model linear dynamical systems with many participating modes. The AMA method was originally developed to show the relationship between statistical energy analysis (SEA) and classical modal analysis (CMA). In the limit of a large number of modes of a vibrating system, the classical modal analysis result can be shown to be equivalent to the statistical energy analysis result. As the CMA result evolves into the SEA result, a number of systematic assumptions are made. Most of these assumptions are based upon the supposition that the number of modes approaches infinity. It is for this reason that the term 'asymptotic' is used. AMA is the asymptotic result of taking the limit of CMA as the number of modes approaches infinity. AMA refers to any of the intermediate results between CMA and SEA, as well as the SEA result which is derived from CMA. The main advantage of the AMA method is that individual modal characteristics are not required in the model or computations. By contrast, CMA requires that each modal parameter be evaluated at each frequency. In the latter, contributions from each mode are computed and the final answer is obtained by summing over all the modes in the particular band of interest. AMA evaluates modal parameters only at their center frequency and does not sum the individual contributions from each mode in order to obtain a final result. The method is similar to SEA in this respect. However, SEA is only capable of obtaining spatial averages or means, as it is a statistical method. Since AMA is systematically derived from CMA, it can obtain local spatial information as well.

  17. Parametric design and analysis on the landing gear of a planet lander using the response surface method

    NASA Astrophysics Data System (ADS)

    Zheng, Guang; Nie, Hong; Luo, Min; Chen, Jinbao; Man, Jianfeng; Chen, Chuanzhi; Lee, Heow Pueh

    2018-07-01

    The purpose of this paper is to obtain the design parameter-landing response relations needed to design the configuration of the landing gear of a planet lander quickly. To achieve this, parametric studies on the landing gear are carried out using the response surface method (RSM), based on a single-landing-gear landing model validated by experimental results. According to the design of experiment (DOE) results of the landing model, the RS (response surface) functions of the three crucial landing responses are obtained, and a sensitivity analysis (SA) of the corresponding parameters is performed. In addition, two multi-objective optimization designs of the landing gear are carried out. The analysis results show that the RS model performs well for the landing response design process, with a minimum fitting accuracy of 98.99%. The most sensitive parameters for the three landing responses are the design size of the buffers, the strut friction, and the diameter of the bending beam. Moreover, good agreement between the simulated model and the RS-model results is obtained in the two optimized designs, which shows that the RS model coupled with the FE (finite element) method is an efficient way to obtain the design configuration of the landing gear.
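
    As a small illustration of the response-surface idea, the sketch below fits a quadratic surrogate to design-of-experiment samples of a made-up landing response and then queries the surrogate; it does not reproduce the paper's landing-gear model or its optimization.

    ```python
    # Minimal sketch of the response-surface idea: fit a quadratic polynomial model
    # to design-of-experiment samples of a response (here a made-up cost), then
    # interrogate the cheap surrogate instead of the finite-element model.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    doe = rng.uniform(-1.0, 1.0, size=(50, 3))     # 3 normalized design variables
    response = (1.0 + doe[:, 0] + 0.5 * doe[:, 1] ** 2
                - 0.3 * doe[:, 0] * doe[:, 2]
                + 0.01 * rng.standard_normal(50))  # hypothetical landing response

    surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surrogate.fit(doe, response)
    print(surrogate.score(doe, response))          # fitting accuracy (R^2)
    predicted = surrogate.predict(np.array([[0.2, -0.1, 0.4]]))
    ```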

  18. Fluorimetric determinations of nucleic acids using iron, osmium and samarium complexes of 4,7-diphenyl-1,10-phenanthroline

    NASA Astrophysics Data System (ADS)

    Salem, A. A.

    2006-09-01

    New sensitive, reliable, and reproducible fluorimetric methods for determining microgram amounts of nucleic acids, based on their reactions with the Fe(II), Os(III), or Sm(III) complexes of 4,7-diphenyl-1,10-phenanthroline, are proposed. Two complementary single-stranded synthetic DNA sequences based on calf thymus, as well as their hybridized double-stranded form, were used. Nucleic acids were found to react instantaneously at room temperature, in Tris-Cl buffer at pH 7, with the investigated complexes, resulting in a decrease of their fluorescence emission. Two fluorescence peaks around 388 and 567 nm were obtained for the three complexes using an excitation λmax of 280 nm and were used for this investigation. Linear calibration graphs in the range 1-6 μg/ml and detection limits of 0.35-0.98 μg/ml were obtained. Using the calibration graphs for the synthetic dsDNA, relative standard deviations of 2.0-5.0% were obtained for analyzing DNA in the extraction products from calf thymus and human blood, with corresponding recoveries of 80-114%. Student's t-values at the 95% confidence level showed no significant difference between the real and measured values. Results obtained by these methods were compared with the ethidium bromide method using the F-test, and satisfactory results were obtained. The association constants and numbers of binding sites of synthetic ssDNA and dsDNA with the three complexes were estimated using the Rosenthal graphic method. The interaction mechanism is discussed, and an intercalation mechanism is suggested for the binding reaction between the nucleic acids and the three complexes.

  19. A developed nearly analytic discrete method for forward modeling in the frequency domain

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai

    2018-02-01

    High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling processes. We first derive the discretization of frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of DNAD method with that of the conventional NAD method. The results demonstrate the superiority of our proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inverse results are obtained.
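
    As a minimal illustration of frequency-domain forward modeling in general, the sketch below assembles and solves a second-order finite-difference discretization of the 1-D Helmholtz equation; it uses neither the optimized DNAD stencils of the paper nor absorbing boundaries.

    ```python
    # Minimal sketch of frequency-domain forward modeling: a second-order
    # finite-difference discretization of the 1-D Helmholtz equation
    # u'' + (omega/c)^2 u = -s, assembled into a sparse linear system.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, dx = 400, 5.0                      # grid points and spacing (m)
    c = np.full(n, 2000.0)                # velocity model (m/s)
    omega = 2.0 * np.pi * 10.0            # angular frequency for a 10 Hz component

    main = -2.0 / dx**2 + (omega / c) ** 2
    A = sp.diags([np.ones(n - 1) / dx**2, main, np.ones(n - 1) / dx**2],
                 offsets=[-1, 0, 1], format="csc", dtype=complex)
    s = np.zeros(n, dtype=complex)
    s[n // 2] = 1.0                       # point source in the middle of the model
    u = spla.spsolve(A, -s)               # monochromatic wavefield
    ```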

  20. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
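
    The sketch below shows the basic Tikhonov step for a discretized Fredholm integral of the first kind, A f = b: the regularized normal equations are solved for the distribution f. The kernel, the "true" distribution, and the regularization parameter are illustrative; the rotational-broadening kernel and the parameter-selection rule of the paper are not reproduced.

    ```python
    # Minimal sketch of Tikhonov regularization for a discretized Fredholm integral
    # of the first kind, A f = b, as used to deconvolve the v sin i distribution.
    # The kernel below is a generic smoothing kernel, not the rotational one.
    import numpy as np

    n = 200
    x = np.linspace(0.0, 1.0, n)
    A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.002) * (x[1] - x[0])  # kernel matrix
    f_true = np.exp(-((x - 0.4) ** 2) / 0.01)                              # "true" distribution
    b = A @ f_true + 1e-3 * np.random.default_rng(0).standard_normal(n)    # noisy data

    lam = 1e-4                                           # Tikhonov parameter
    f_est = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    ```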

  1. Thermodynamic properties of hydrogen dissociation reaction from the small system method and reactive force field ReaxFF

    NASA Astrophysics Data System (ADS)

    Trinh, Thuat T.; Meling, Nora; Bedeaux, Dick; Kjelstrup, Signe

    2017-03-01

    We present thermodynamic properties of the H2 dissociation reaction obtained by means of the Small System Method (SSM) using Reactive Force Field (ReaxFF) simulations. Thermodynamic correction factors, partial molar enthalpies, and heat capacities of the reactant and product were obtained in the high-temperature range, up to 30,000 K. The results obtained from the ReaxFF potential agree well with previous results obtained with a three-body potential (TBP). This indicates that the popular reactive force field method can be combined well with the newly developed SSM in realistic simulations of chemical reactions. The approach may be useful in the study of heat and mass transport in combination with chemical reactions.

  2. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.

  3. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGES

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; ...

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  4. Comparison of Sensible Heat Flux from Eddy Covariance and Scintillometer over different land surface conditions

    NASA Astrophysics Data System (ADS)

    Zeweldi, D. A.; Gebremichael, M.; Summis, T.; Wang, J.; Miller, D.

    2008-12-01

    A large source of uncertainty in satellite-based evapotranspiration algorithms results from the estimation of the sensible heat flux H. Traditionally eddy covariance sensors, and more recently large-aperture scintillometers, have been used as ground truth to evaluate satellite-based H estimates. The two methods rely on different physical measurement principles and represent different footprint sizes. In New Mexico, we conducted a field campaign during summer 2008 to compare H estimates obtained from the eddy covariance and scintillometer methods. During this field campaign, we installed sonic anemometers; a one-propeller eddy covariance (OPEC) system equipped with a net radiometer and soil heat flux sensors; a large-aperture scintillometer (LAS); and a weather station consisting of wind speed, wind direction, and radiation sensors over three experimental areas with different roughness conditions (desert, irrigated area, and lake). Our results show the similarities and differences in the H estimates obtained from these methods over the different land surface conditions. Further, our results show that the H estimates obtained from the LAS agree with those obtained from the eddy covariance method when high-frequency thermocouple temperature, instead of the typical weather station temperature measurement, is used in the LAS analysis.

  5. Parameters estimation using the first passage times method in a jump-diffusion model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khaldi, K., E-mail: kkhaldi@umbb.dz; LIMOSE Laboratory, Boumerdes University, 35000; Meddahi, S., E-mail: samia.meddahi@gmail.com

    2016-06-02

    This paper makes two main contributions: (1) it presents a new method, the first passage time (FPT) method generalized to all passage times (the GPT method), for estimating the parameters of a stochastic jump-diffusion process; and (2) it compares, on a time series model of the share price of gold, the empirical estimation and forecasting results obtained with the GPT method with those obtained by the method of moments and by the FPT method applied to the Merton Jump-Diffusion (MJD) model.
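
    As a small illustration of the underlying model, the sketch below simulates a Merton jump-diffusion log-price path with illustrative parameters; the passage-time estimator itself is not implemented here.

    ```python
    # Minimal sketch: simulating a Merton jump-diffusion log-price path, the model
    # whose parameters the paper estimates via passage times. Parameter values are
    # illustrative; at most one jump per (small) time step is assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.05, 0.20                         # annual drift and diffusion volatility
    lam, jump_mu, jump_sigma = 3.0, -0.02, 0.05    # jump intensity and log-jump size
    S0, T, n = 100.0, 1.0, 252
    dt = T / n

    diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    jumps = (rng.random(n) < lam * dt) * rng.normal(jump_mu, jump_sigma, n)
    log_prices = np.log(S0) + np.cumsum(diffusion + jumps)
    prices = np.concatenate(([S0], np.exp(log_prices)))
    ```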

  6. Aircraft Dynamic Modeling in Turbulence

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Cunningham, Kevin

    2012-01-01

    A method for accurately identifying aircraft dynamic models in turbulence was developed and demonstrated. The method uses orthogonal optimized multisine excitation inputs and an analytic method for enhancing signal-to-noise ratio for dynamic modeling in turbulence. A turbulence metric was developed to accurately characterize the turbulence level using flight measurements. The modeling technique was demonstrated in simulation, then applied to a subscale twin-engine jet transport aircraft in flight. Comparisons of modeling results obtained in turbulent air to results obtained in smooth air were used to demonstrate the effectiveness of the approach.

  7. Modified Hawking radiation in a BTZ black hole using Damour Ruffini method

    NASA Astrophysics Data System (ADS)

    He, Xiaokai; Liu, Wenbiao

    2007-09-01

    Considering energy conservation, angular momentum conservation, and the particles' back-reaction on the spacetime, the scalar particles' Hawking radiation from a BTZ black hole was investigated using the Damour-Ruffini method. The exact expression of the emission rate near the horizon is obtained, and the result indicates that the Hawking radiation spectrum is not purely thermal. The result obtained is consistent with the previous literature, is in agreement with an underlying unitary theory, and offers a possible mechanism to explain the information loss paradox. Moreover, the method is more concise and understandable.

  8. [Metabolic surgery in treatment of diabetes mellitus of type II].

    PubMed

    Sedov, V M; Fishman, M B

    2013-01-01

    Nowadays, according to WHO data, diabetes mellitus has been diagnosed in more than 280 million people, and type II diabetes mellitus accounts for 90% of patients. The applied methods of conservative therapy seldom bring patients to a state of euglycemia. In recent years, diabetes mellitus has also been treated by means of various bariatric interventions; the good results obtained should be analyzed and investigated. The results of treatment of 142 of 628 patients with type II diabetes were evaluated. The patients underwent different bariatric interventions, all performed as modern laparoscopic operations. Adjustable gastric banding was performed in 81 patients, gastric resection in 28, gastric bypass surgery in 22, and biliopancreatic diversion in 11. Improved glycemic control was obtained. Type II diabetes can thus be treated by surgical methods. The best results were obtained after combined operations, which could potentially present an alternative method of treatment of type II diabetes.

  9. A motion deblurring method with long/short exposure image pairs

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Hua, Weiping; Zhao, Jufeng; Gong, Xiaoli; Zhu, Liyao

    2018-01-01

    In this paper, a motion deblurring method with long/short exposure image pairs is presented. The long/short exposure image pairs are captured for the same scene under different exposure times, and the pair is treated as the input of the deblurring method so that more information can be used to obtain a deblurring result with high image quality. First, luminance equalization is applied to the short-exposure image, and the blur kernel is estimated from the image pair under the maximum a posteriori (MAP) framework using a conjugate gradient algorithm. Then an L0 image-smoothing-based denoising method is applied to the luminance-equalized image, and the final deblurring result is obtained by a gain-controlled residual image deconvolution process with the edge map as the gain map. Furthermore, a real experimental optical system is built to capture the image pairs, acquired under different exposure times and camera gain settings, in order to demonstrate the effectiveness of the proposed deblurring framework. Experimental results show that the proposed method provides a superior deblurring result in both subjective and objective assessment compared with other deblurring approaches.
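    The gain-controlled residual deconvolution used by the authors is not reproduced here; the sketch below shows only the basic non-blind step, recovering a sharp image from a blurred one when the kernel is already known, via frequency-domain Wiener deconvolution. The function name and the regularization constant are assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    """Non-blind Wiener deconvolution with a known blur kernel.

    nsr is an assumed noise-to-signal power ratio acting as regularization.
    The output is circularly shifted by the kernel offset because the kernel
    is zero-padded at the origin; np.roll can undo that shift if needed.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter in frequency domain
    return np.real(np.fft.ifft2(W * B))
```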

  10. The use of QSAR methods for determination of n-octanol/water partition coefficient using the example of hydroxyester HE-1

    NASA Astrophysics Data System (ADS)

    Guziałowska-Tic, Joanna

    2017-10-01

    According to the Directive of the European Parliament and of the Council on the protection of animals used for scientific purposes, the number of experiments involving the use of animals needs to be reduced. The methods which can replace animal testing include computational prediction methods, for instance, quantitative structure-activity relationships (QSAR). These methods are designed to find a cohesive relationship between differences in the values of the properties of molecules and the biological activity of a series of test compounds. This paper compares the author's own experimental results for the n-octanol/water partition coefficient of the hydroxyester HE-1 with those generated by three models: Kowwin, MlogP and AlogP. The test results indicate that, in the case of molecular similarity, the highest determination coefficient was obtained for the MlogP model and the lowest root-mean-square error was obtained for the Kowwin method. When comparing the mean logP value obtained using the QSAR models with the value resulting from the author's own experiments, the best conformity was that recorded for the AlogP model, with a relative error of 15.2%.
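    As an illustration of the kind of comparison reported (determination coefficient, root-mean-square error, relative error of the mean), the sketch below computes those statistics for hypothetical predicted and experimental logP values; the numbers are placeholders, not data from the paper.

```python
import numpy as np

def compare_logp(experimental, predicted):
    """Return R^2, RMSE, and relative error of the mean prediction (%)."""
    experimental = np.asarray(experimental, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((experimental - predicted) ** 2)
    ss_tot = np.sum((experimental - experimental.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((experimental - predicted) ** 2))
    rel_err = 100.0 * abs(predicted.mean() - experimental.mean()) / abs(experimental.mean())
    return r2, rmse, rel_err

# placeholder values only, not measurements from the study
print(compare_logp([1.8, 2.1, 2.4], [1.7, 2.2, 2.6]))
```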

  11. The exact eigenfunctions and eigenvalues of a two-dimensional rigid rotor obtained using Gaussian wave packet dynamics

    NASA Technical Reports Server (NTRS)

    Reimers, J. R.; Heller, E. J.

    1985-01-01

    Exact eigenfunctions for a two-dimensional rigid rotor are obtained using Gaussian wave packet dynamics. The wave functions are obtained by propagating, without approximation, an infinite set of Gaussian wave packets that collectively have the correct periodicity, being coherent states appropriate to this rotational problem. This result leads to a numerical method for the semiclassical calculation of rovibrational, molecular eigenstates. Also, a simple, almost classical, approximation to full wave packet dynamics is shown to give exact results: this leads to an a posteriori justification of the De Leon-Heller spectral quantization method.

  12. The generalized scattering coefficient method for plane wave scattering in layered structures

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Li, Chao; Wang, Huai-Yu; Zhou, Yun-Song

    2017-02-01

    The generalized scattering coefficient (GSC) method is pedagogically derived and employed to study the scattering of plane waves in homogeneous and inhomogeneous layered structures. The numerical stabilities and accuracies of this method and other commonly used numerical methods are discussed and compared. For homogeneous layered structures, concise scattering formulas with clear physical interpretations and strong numerical stability are obtained by introducing the GSCs. For inhomogeneous layered structures, three numerical methods are employed: the staircase approximation method, the power series expansion method, and the differential equation based on the GSCs. We investigate the accuracies and convergence behaviors of these methods by comparing their predictions to the exact results. The conclusions are as follows. The staircase approximation method has a slow convergence in spite of its simple and intuitive implementation, and a fine stratification within the inhomogeneous layer is required for obtaining accurate results. The expansion method results are sensitive to the expansion order, and the treatment becomes very complicated for relatively complex configurations, which restricts its applicability. By contrast, the GSC-based differential equation possesses a simple implementation while providing fast and accurate results.
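    For comparison with the methods discussed for homogeneous layered structures, a minimal sketch of the standard characteristic-matrix (transfer-matrix) treatment at normal incidence is shown below. It is not the GSC formulation of the paper, and all layer parameters in the example are illustrative assumptions.

```python
import numpy as np

def multilayer_reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.0):
    """Reflectance of a stack of homogeneous layers at normal incidence,
    computed with the standard 2x2 characteristic-matrix method."""
    k0 = 2.0 * np.pi / wavelength
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = k0 * n * d                     # phase thickness of the layer
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ layer
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# quarter-wave high/low stack at its design wavelength (illustrative values)
print(multilayer_reflectance([2.3, 1.38] * 5,
                             [550 / (4 * 2.3), 550 / (4 * 1.38)] * 5, 550.0))
```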

  13. WHO Melting-Point Reference Substances

    PubMed Central

    Bervenmark, H.; Diding, N. Å.; Öhrner, B.

    1963-01-01

    Batches of 13 highly purified chemicals, intended for use as reference substances in the calibration of apparatus for melting-point determinations, have been subjected to a collaborative assay by 15 laboratories in 13 countries. All the laboratories performed melting-point determinations by the capillary methods described in the proposed text for the second edition of the Pharmacopoea Internationalis and some, in addition, carried out determinations by the microscope hot stage (Kofler) method, using both the “going-through” and the “equilibrium” technique. Statistical analysis of the data obtained by the capillary method showed that the within-laboratory variation was small and that the between-laboratory variation, though constituting the greatest part of the whole variance, was not such as to warrant the exclusion of any laboratory from the evaluation of the results. The average values of the melting-points obtained by the laboratories can therefore be used as constants for the substances in question, which have accordingly been established as WHO Melting-Point Reference Substances and included in the WHO collection of authentic chemical substances. As to the microscope hot stage method, analysis of the results indicated that the values obtained by the “going-through” technique did not differ significantly from those obtained by the capillary method, but the values obtained by the “equilibrium” technique were mostly significantly lower. PMID:20604137

  14. Determination of Inorganic Arsenic in a Wide Range of Food Matrices using Hydride Generation - Atomic Absorption Spectrometry.

    PubMed

    de la Calle, Maria B; Devesa, Vicenta; Fiamegos, Yiannis; Vélez, Dinoraz

    2017-09-01

    The European Food Safety Authority (EFSA) underlined in its Scientific Opinion on Arsenic in Food that, in order to support a sound assessment of dietary exposure to inorganic arsenic, information about the distribution of arsenic species in various food types must be generated. A method, previously validated in a collaborative trial, has been applied to determine inorganic arsenic in a wide variety of food matrices, covering grains, mushrooms and food of marine origin (31 samples in total). The method is based on detection by flow injection-hydride generation-atomic absorption spectrometry of the iAs selectively extracted into chloroform after digestion of the proteins with concentrated HCl. The method is characterized by a limit of quantification of 10 µg/kg dry weight, which allowed quantification of inorganic arsenic in a large number of food matrices. Information is provided about the performance scores given to results obtained with this method, as reported by different laboratories in several proficiency tests. The percentage of satisfactory results obtained with the discussed method is higher than that of the results obtained with other analytical approaches.

  15. Evaluation of Grid Modification Methods for On- and Off-Track Sonic Boom Analysis

    NASA Technical Reports Server (NTRS)

    Nayani, Sudheer N.; Campbell, Richard L.

    2013-01-01

    Grid modification methods have been under development at NASA to enable better predictions of low boom pressure signatures from supersonic aircraft. As part of this effort, two new codes, Stretched and Sheared Grid - Modified (SSG) and Boom Grid (BG), have been developed in the past year. The CFD results from these codes have been compared with ones from the earlier grid modification codes Stretched and Sheared Grid (SSGRID) and Mach Cone Aligned Prism (MCAP) and also with the available experimental results. NASA's unstructured grid suite of software TetrUSS and the automatic sourcing code AUTOSRC were used for base grid generation and flow solutions. The BG method has been evaluated on three wind tunnel models. Pressure signatures have been obtained up to two body lengths below a Gulfstream aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 53 degrees) cases. On-track pressure signatures up to ten body lengths below a Straight Line Segmented Leading Edge (SLSLE) wind tunnel model have been extracted, and good agreement with the wind tunnel results has been obtained. Pressure signatures have been obtained at 1.5 body lengths below a Lockheed Martin aircraft wind tunnel model, with good agreement with the wind tunnel results for both on-track and off-track (up to 40 degrees) cases. Grid sensitivity studies have been carried out to investigate any grid size related issues. Methods have been evaluated for fully turbulent, mixed laminar/turbulent and fully laminar flow conditions.

  16. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  17. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K

    2012-10-01

    The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
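    The specific minimal residual criterion of the paper is not reproduced here. As a generic illustration of how a regularization-parameter sweep looks for a linearized Tikhonov step, the sketch below solves (J^T J + λI) x = J^T y over a grid of λ and records the residual and solution norms; the data are synthetic and all names are assumptions.

```python
import numpy as np

def tikhonov_sweep(J, y, lambdas):
    """Solve the regularized normal equations for each lambda and return
    (residual norm, solution norm) pairs, e.g. for an L-curve style plot."""
    JtJ, Jty = J.T @ J, J.T @ y
    out = []
    for lam in lambdas:
        x = np.linalg.solve(JtJ + lam * np.eye(J.shape[1]), Jty)
        out.append((np.linalg.norm(J @ x - y), np.linalg.norm(x)))
    return np.array(out)

# synthetic stand-in for a sensitivity matrix and measurement vector
rng = np.random.default_rng(1)
J = rng.normal(size=(60, 40))
y = J @ rng.normal(size=40) + 0.05 * rng.normal(size=60)
curve = tikhonov_sweep(J, y, np.logspace(-4, 2, 25))
```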

  18. Automatic tracking of labeled red blood cells in microchannels.

    PubMed

    Pinho, Diana; Lima, Rui; Pereira, Ana I; Gayubo, Fernando

    2013-09-01

    The current study proposes an automatic method for the segmentation and tracking of red blood cells flowing through a 100-μm glass capillary. The original images were obtained by means of a confocal system and then processed in MATLAB using the Image Processing Toolbox. The measurements obtained with the proposed automatic method were compared with the results determined by a manual tracking method. The comparison was performed by using both linear regressions and Bland-Altman analysis. The results have shown a good agreement between the two methods. Therefore, the proposed automatic method is a powerful way to provide rapid and accurate measurements for in vitro blood experiments in microchannels. Copyright © 2012 John Wiley & Sons, Ltd.
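    The Bland-Altman comparison used here is straightforward to reproduce. A minimal sketch follows (synthetic data and assumed names, not measurements from the study) that computes the bias and 95% limits of agreement between automatic and manual measurements.

```python
import numpy as np

def bland_altman(auto, manual):
    """Return bias and 95% limits of agreement between two measurement methods."""
    auto, manual = np.asarray(auto, float), np.asarray(manual, float)
    diff = auto - manual
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# synthetic cell displacement values (arbitrary units), not data from the study
auto = np.array([1.02, 0.98, 1.10, 0.95, 1.05])
manual = np.array([1.00, 1.00, 1.08, 0.97, 1.02])
print(bland_altman(auto, manual))
```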

  19. Bidirectional light-scattering image processing method for high-concentration jet sprays

    NASA Astrophysics Data System (ADS)

    Shimizu, I.; Emori, Y.; Yang, W.-J.; Shimoda, M.; Suzuki, T.

    1985-01-01

    In order to study the distributions of droplet size and volume density in high-concentration jet sprays, a new technique is developed, which combines the forward and backward light scattering method and an image processing method. A pulsed ruby laser is used as the light source. The Mie scattering theory is applied to the results obtained from image processing on the scattering photographs. The time history is obtained for the droplet size and volume density distributions, and the method is demonstrated by diesel fuel sprays under various injecting conditions. The validity of the technique is verified by a good agreement in the injected fuel volume distributions obtained by the present method and by injection rate measurements.

  20. Comparing Results from Constant Comparative and Computer Software Methods: A Reflection about Qualitative Data Analysis

    ERIC Educational Resources Information Center

    Putten, Jim Vander; Nolen, Amanda L.

    2010-01-01

    This study compared qualitative research results obtained by manual constant comparative analysis with results obtained by computer software analysis of the same data. An investigation of issues of trustworthiness and accuracy ensued. Results indicated that the inductive constant comparative data analysis generated 51 codes and two coding levels…

  1. Quantitative determination of ambroxol in tablets by derivative UV spectrophotometric method and HPLC.

    PubMed

    Dinçer, Zafer; Basan, Hasan; Göger, Nilgün Günden

    2003-04-01

    A derivative UV spectrophotometric method for the determination of ambroxol in tablets was developed. Determination of ambroxol in tablets was conducted by using a first-order derivative UV spectrophotometric method at 255 nm (n = 5). Standards for the calibration graph ranging from 5.0 to 35.0 microg/ml were prepared from stock solution. The proposed method was accurate, with a recovery of 98.6+/-0.4%, and precise, with a coefficient of variation (CV) of 1.22. These results were compared with those obtained by reference methods, a zero-order UV spectrophotometric method and a reversed-phase high-performance liquid chromatography (HPLC) method. A reversed-phase C(18) column with an aqueous phosphate (0.01 M)-acetonitrile-glacial acetic acid (59:40:1, v/v/v) (pH 3.12) mobile phase was used, and the UV detector was set to 252 nm. Calibration solutions used in HPLC ranged from 5.0 to 20.0 microg/ml. Results obtained by the derivative UV spectrophotometric method were comparable to those obtained by the reference methods, the zero-order UV spectrophotometric method and HPLC, as far as the ANOVA test, F(calculated) = 0.762 and F(theoretical) = 3.89, was concerned. Copyright 2003 Elsevier Science B.V.

  2. Measurement of delta13C and delta18O Isotopic Ratios of CaCO3 by a Thermoquest Finnigan GasBench II Delta Plus XL Continuous Flow Isotope Ratio Mass Spectrometer with Application to Devils Hole Core DH-11 Calcite

    USGS Publications Warehouse

    Revesz, Kinga M.; Landwehr, Jurate Maciunas; Keybl, Jaroslav Edward

    2001-01-01

    A new method was developed to analyze the stable carbon and oxygen isotope ratios of small samples (400±20 µg) of calcium carbonate. This new method streamlines the classical phosphoric acid - calcium carbonate (H3PO4 - CaCO3) reaction method by making use of a Thermoquest-Finnigan GasBench II preparation device and a Delta Plus XL continuous flow isotope ratio mass spectrometer. To obtain reproducible and accurate results, optimal conditions for the H3PO4 - CaCO3 reaction had to be determined. At the acid-carbonate reaction temperature suggested by the equipment manufacturer, the oxygen isotope ratio results were unsatisfactory (standard deviation (σ) greater than 1.5 per mill), probably because of a secondary reaction. When the acid-carbonate reaction temperature was lowered to 26°C and the reaction time was increased to 24 hours, the precision of the carbon and oxygen isotope ratios for duplicate analyses improved to 0.1 and 0.2 per mill, respectively. The method was tested by analyzing calcite from Devils Hole, Nevada, which was formed by precipitation from ground water onto the walls of a sub-aqueous cavern during the last 500,000 years. Isotope-ratio values previously had been obtained by the classical method for Devils Hole core DH-11. The DH-11 core had been recently re-sampled, and isotope-ratio values were obtained using this new method. The results were comparable to those obtained by the classical method. The consistency of the isotopic results is such that an alignment offset could be identified in the re-sampled core material, a cutting error that was then independently confirmed. The reproducibility of the isotopic values is demonstrated by a correlation of approximately 0.96 for both isotopes, after correcting for an alignment offset. This result indicates that the new method is a viable alternative to the classical method. In particular, the new method requires less sample material, permitting finer resolution, and allows automation of some processes, resulting in considerable time savings.

  3. Mobility-based correction for accurate determination of binding constants by capillary electrophoresis-frontal analysis.

    PubMed

    Qian, Cheng; Kovalchik, Kevin A; MacLennan, Matthew S; Huang, Xiaohua; Chen, David D Y

    2017-06-01

    Capillary electrophoresis frontal analysis (CE-FA) can be used to determine the binding affinity of molecular interactions. However, its current data processing method mandates specific requirements on the mobilities of the binding pair in order to obtain accurate binding constants. This work shows that significant errors result when the mobilities of the interacting species do not meet these requirements; therefore, the applicability of CE-FA in many real-world applications becomes questionable. An electrophoretic mobility-based correction method is developed in this work based on the flux of each species. A simulation program and a pair of model compounds are used to verify the new equations and evaluate the effectiveness of this method. Ibuprofen and hydroxypropyl-β-cyclodextrin are used to demonstrate the differences in the binding constants obtained by CE-FA when different calculation methods are used, and the results are compared with those obtained by affinity capillary electrophoresis (ACE). The results suggest that CE-FA, with the mobility-based correction method, can be a generally applicable method for a much wider range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Application of the implicit MacCormack scheme to the PNS equations

    NASA Technical Reports Server (NTRS)

    Lawrence, S. L.; Tannehill, J. C.; Chaussee, D. S.

    1983-01-01

    The two-dimensional parabolized Navier-Stokes equations are solved using MacCormack's (1981) implicit finite-difference scheme. It is shown that this method for solving the parabolized Navier-Stokes equations does not require the inversion of block tridiagonal systems of algebraic equations and allows the original explicit scheme to be employed in those regions where implicit treatment is not needed. The finite-difference algorithm is discussed and the computational results for two laminar test cases are presented. Results obtained using this method for the case of a flat plate boundary layer are compared with those obtained using the conventional Beam-Warming scheme, as well as those obtained from a boundary layer code. The computed results for a more severe test of the method, the hypersonic flow past a 15 deg compression corner, are found to compare favorably with experiment and a numerical solution of the complete Navier-Stokes equations.

  5. Three Dimensional Aerodynamic Analysis of a High-Lift Transport Configuration

    NASA Technical Reports Server (NTRS)

    Dodbele, Simha S.

    1993-01-01

    Two computational methods, a surface panel method and an Euler method employing unstructured grid methodology, were used to analyze a subsonic transport aircraft in cruise and high-lift conditions. The computational results were compared with two separate sets of flight data obtained for the cruise and high-lift configurations. For the cruise configuration, the surface pressures obtained by the panel method and the Euler method agreed fairly well with results from flight test. However, for the high-lift configuration considerable differences were observed when the computational surface pressures were compared with the results from high-lift flight test. On the lower surface of all the elements with the exception of the slat, both the panel and Euler methods predicted pressures which were in good agreement with flight data. On the upper surface of all the elements the panel method predicted slightly higher suction compared to the Euler method. On the upper surface of the slat, pressure coefficients obtained by both the Euler and panel methods did not agree with the results of the flight tests. A sensitivity study of the upward deflection of the slat from the 40 deg. flap setting suggested that the differences in the slat deflection between the computational model and the flight configuration could be one of the sources of this discrepancy. The computation time for the implicit version of the Euler code was about 1/3 the time taken by the explicit version though the implicit code required 3 times the memory taken by the explicit version.

  6. A comparison of manual anthropometric measurements with Kinect-based scanned measurements in terms of precision and reliability.

    PubMed

    Bragança, Sara; Arezes, Pedro; Carvalho, Miguel; Ashdown, Susan P; Castellucci, Ignacio; Leão, Celina

    2018-01-01

    Collecting anthropometric data for real-life applications demands a high degree of precision and reliability, and it is important to test new equipment that will be used for data collection. The objective of this study was to compare two anthropometric data gathering techniques - manual methods and a Kinect-based 3D body scanner - to understand which of them gives more precise and reliable results. The data were collected using a measuring tape and a Kinect-based 3D body scanner and were evaluated in terms of precision by considering the regular and relative Technical Error of Measurement, and in terms of reliability by using the Intraclass Correlation Coefficient, Reliability Coefficient, Standard Error of Measurement and Coefficient of Variation. The results obtained showed that both methods presented better results for reliability than for precision. Both methods showed relatively good results for these two variables; however, manual methods had better results for some body measurements. Despite being considered sufficiently precise and reliable for certain applications (e.g. the apparel industry), the 3D scanner tested showed, for almost every anthropometric measurement, a different result than the manual technique. Many companies design their products based on data obtained from 3D scanners, hence, understanding the precision and reliability of the equipment used is essential to obtain feasible results.
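    A minimal sketch of the precision statistics mentioned (Technical Error of Measurement, relative TEM, and the coefficient of reliability) for two repeated measurement series is given below; the example values are placeholders, not data from the study.

```python
import numpy as np

def tem_stats(trial1, trial2):
    """Technical Error of Measurement (TEM), relative TEM (%), and the
    coefficient of reliability R for two repeated measurement series."""
    trial1, trial2 = np.asarray(trial1, float), np.asarray(trial2, float)
    d = trial1 - trial2
    tem = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
    all_values = np.concatenate([trial1, trial2])
    rtem = 100.0 * tem / np.mean(all_values)
    r = 1.0 - tem ** 2 / np.var(all_values, ddof=1)   # reliability coefficient
    return tem, rtem, r

# placeholder waist-circumference measurements in cm
print(tem_stats([72.1, 80.4, 95.0, 68.3], [72.5, 79.8, 94.2, 68.9]))
```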

  7. Effect of windowing on lithosphere elastic thickness estimates obtained via the coherence method: Results from northern South America

    NASA Astrophysics Data System (ADS)

    Ojeda, GermáN. Y.; Whitman, Dean

    2002-11-01

    The effective elastic thickness (Te) of the lithosphere is a parameter that describes the flexural strength of a plate. A method routinely used to quantify this parameter is to calculate the coherence between the two-dimensional gravity and topography spectra. Prior to spectra calculation, data grids must be "windowed" in order to avoid edge effects. We investigated the sensitivity of Te estimates obtained via the coherence method to mirroring, Hanning and multitaper windowing techniques on synthetic data as well as on data from northern South America. These analyses suggest that the choice of windowing technique plays an important role in Te estimates and may result in discrepancies of several kilometers depending on the selected windowing method. Te results from mirrored grids tend to be greater than those from Hanning smoothed or multitapered grids. Results obtained from mirrored grids are likely to be over-estimates. This effect may be due to artificial long wavelengths introduced into the data at the time of mirroring. Coherence estimates obtained from three subareas in northern South America indicate that the average effective elastic thickness is in the range of 29-30 km, according to Hanning and multitaper windowed data. Lateral variations across the study area could not be unequivocally determined from this study. We suggest that the resolution of the coherence method does not permit evaluation of small (i.e., ˜5 km), local Te variations. However, the efficiency and robustness of the coherence method in rendering continent-scale estimates of elastic thickness has been confirmed.

  8. Ultra-performance liquid chromatography/tandem mass spectrometric quantification of structurally diverse drug mixtures using an ESI-APCI multimode ionization source.

    PubMed

    Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S

    2007-01-01

    We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC(R)/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set of experiments, the eight test compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during an LC run. Approximately 8-10 data points were collected across each LC peak. This was insufficient for a quantitative analysis. In the second set of experiments, four compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes during an LC run. Approximately 15 data points were obtained for each LC peak. Quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. For the third set of experiments, the eight test compounds were analyzed as a batch. During each LC injection, a single compound was analyzed. The mass spectrometer was detecting at a particular ionization mode during each LC injection. More than 20 data points were obtained for each LC peak. Quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced to 1.5 min (UPLC method) from 3.5 min (HPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.

  9. Solitary traveling wave solutions of pressure equation of bubbly liquids with examination for viscosity and heat transfer

    NASA Astrophysics Data System (ADS)

    Khater, Mostafa M. A.; Seadawy, Aly R.; Lu, Dianchen

    2018-03-01

    In this research, we investigate one of the most popular models in nature and industry: the pressure equation of bubbly liquids with viscosity and heat transfer taken into account, which has many applications in nature and engineering. Understanding the physical meaning of exact and solitary traveling wave solutions of this equation gives researchers in this field a clear vision of the pressure waves in a mixture of liquid and gas bubbles, taking into consideration the viscosity of the liquid and the heat transfer, as well as the dynamics of contrast agents in the blood flow in ultrasonic research. To achieve our goal, we apply three different methods to this equation: the extended tanh-function method, the extended simple equation method and a new auxiliary equation method. We obtain exact and solitary traveling wave solutions, discuss the similarities and differences between these three methods, and compare the results we obtained with results obtained by other researchers using different methods. All of these results and discussions show that our new auxiliary equation method is the most general, powerful and result-oriented. These kinds of solutions and discussions allow for an understanding of the phenomenon and its intrinsic properties, as well as the ease of application of the method and its applicability to other phenomena.

  10. The method of lines in analyzing solids containing cracks

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.

    1990-01-01

    A semi-numerical method is reviewed for solving a set of coupled partial differential equations subject to mixed and possibly coupled boundary conditions. The line method of analysis is applied to the Navier-Cauchy equations of elastic and elastoplastic equilibrium to calculate the displacement distributions in various, simple geometry bodies containing cracks. The application of this method to the appropriate field equations leads to coupled sets of simultaneous ordinary differential equations whose solutions are obtained along sets of lines in a discretized region. When decoupling of the equations and their boundary conditions is not possible, the use of a successive approximation procedure permits the analytical solution of the resulting ordinary differential equations. The use of this method is illustrated by reviewing and presenting selected solutions of mixed boundary value problems in three dimensional fracture mechanics. These solutions are of great importance in fracture toughness testing, where accurate stress and displacement distributions are required for the calculation of certain fracture parameters. Computations obtained for typical flawed specimens include that for elastic as well as elastoplastic response. Problems in both Cartesian and cylindrical coordinate systems are included. Results are summarized for a finite geometry rectangular bar with a central through-the-thickness or rectangular surface crack under remote uniaxial tension. In addition, stress and displacement distributions are reviewed for finite circular bars with embedded penny-shaped cracks, and rods with external annular or ring cracks under opening mode tension. The results obtained show that the method of lines presents a systematic approach to the solution of some three-dimensional mechanics problems with arbitrary boundary conditions. The advantage of this method over other numerical solutions is that good results are obtained even from the use of a relatively coarse grid.
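    As a generic illustration of the line method described (not the elastic crack problems of the paper), the sketch below reduces the 1-D heat equation to a system of ordinary differential equations along spatial grid lines and integrates it with a standard ODE solver; all parameter values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def heat_mol(n=50, L=1.0, alpha=1.0, t_end=0.05):
    """Method of lines for u_t = alpha * u_xx with u = 0 at both ends."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    u0 = np.sin(np.pi * x)                  # initial temperature profile

    def rhs(t, u):
        dudt = np.zeros_like(u)
        # second-order central difference on the interior lines;
        # boundary values stay fixed at zero
        dudt[1:-1] = alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        return dudt

    sol = solve_ivp(rhs, (0.0, t_end), u0, method="BDF")
    return x, sol.y[:, -1]

x, u = heat_mol()
# analytic solution for this initial condition: sin(pi x) * exp(-alpha pi^2 t)
```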

  11. Using Riemannian geometry to obtain new results on Dikin and Karmarkar methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, P.; Joao, X.; Piaui, T.

    1994-12-31

    We are motivated by a 1990 Karmarkar paper on Riemannian geometry and Interior Point Methods. In this talk we show 3 results. (1) The Karmarkar direction can be derived from the Dikin one. This is obtained by constructing a certain Z(x) representation of the null space of the unitary simplex (e, x) = 1; then the projective direction is the image under Z(x) of the affine-scaling one, when it is restricted to that simplex. (2) Second order information on Dikin and Karmarkar methods. We establish computable Hessians for each of the metrics corresponding to both directions, thus permitting the generation of "second order" methods. (3) Dikin and Karmarkar geodesic descent methods. For those directions, we make computable the theoretical Luenberger geodesic descent method, since we are able to give very accurate explicit expressions of the corresponding geodesics. Convergence results are given.

  12. Experiences and results multitasking a hydrodynamics code on global and local memory machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.

    1987-01-01

    A one-dimensional, time-dependent Lagrangian hydrodynamics code using a Godunov solution method has been multitasked for the Cray X-MP/48, the Intel iPSC hypercube, the Alliant FX series and the IBM RP3 computers. Actual multitasking results have been obtained for the Cray, Intel and Alliant computers, and simulated results were obtained for the Cray and RP3 machines. The differences in the methods required to multitask on each of the machines are discussed. Results are presented for a sample problem involving a shock wave moving down a channel. Comparisons are made between theoretical speedups, predicted by Amdahl's law, and the actual speedups obtained. The problems of debugging on the different machines are also described.

  13. Obtaining orthotropic elasticity tensor using entries zeroing method.

    NASA Astrophysics Data System (ADS)

    Gierlach, Bartosz; Danek, Tomasz

    2017-04-01

    A generally anisotropic elasticity tensor obtained from measurements can be represented by a tensor belonging to one of eight material symmetry classes. Knowledge of the symmetry class and orientation is helpful for describing the physical properties of a medium. For each non-trivial symmetry class except isotropic, this problem is nonlinear. A common method of obtaining an effective tensor is choosing its non-trivial symmetry class and minimizing the Frobenius norm between the measured and effective tensors in the same coordinate system. A global optimization algorithm has to be used to determine the best rotation of the tensor. In this contribution, we propose a new approach to obtaining the optimal tensor, with the assumption that it is orthotropic (or at least has a shape similar to the orthotropic one). In the orthotropic form of the tensor, 24 out of 36 entries are zero. The idea is to minimize the sum of the squared entries which are supposed to be equal to zero through a rotation calculated with an optimization algorithm - in this case the Particle Swarm Optimization (PSO) algorithm. Quaternions were used to parametrize rotations in 3D space to improve computational efficiency. In order to avoid a choice of local minima, we apply PSO several times, and only if we obtain similar results for the third time do we consider it a correct value and finish the computations. To analyze the obtained results, a Monte Carlo method was used. After thousands of single runs of the PSO optimization, we obtained values of the quaternion parts and plotted them. The points concentrate at several locations on the graph, following a regular pattern, which suggests the existence of a more complex symmetry in the analyzed tensor. Then thousands of realizations of a generally anisotropic tensor were generated; each tensor entry was replaced with a random value drawn from a normal distribution having a mean equal to the measured tensor entry and the standard deviation of the measurement. Each of these tensors was subjected to PSO-based optimization delivering the quaternion for the optimal rotation. Computations were parallelized with OpenMP to decrease computational time, which enables different tensors to be processed by different threads. As a result, the distributions of the rotated tensor entry values were obtained. For the entries which were to be zeroed, we observe almost normal distributions having a mean equal to zero or sums of two normal distributions having inverse means. Non-zero entries represent different distributions with two or three maxima. Analysis of the obtained results shows that the described method produces consistent values of the quaternions used to rotate the tensors. Despite a less complex target function in the optimization process compared with the common approach, the entries zeroing method provides results which can be applied to obtain an orthotropic tensor with good reliability. A modification of the method could also produce a tool for obtaining effective tensors belonging to other symmetry classes. This research was supported by the Polish National Science Center under contract No. DEC-2013/11/B/ST10/0472.
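    A condensed sketch of the entries zeroing idea follows: a quaternion parametrizes the rotation, the rank-4 elasticity tensor is rotated, and the objective is the sum of squares of the Voigt-matrix entries that vanish for an orthotropic medium. A generic gradient-based optimizer stands in for the PSO used by the authors, and the input tensor is a random placeholder, not measured data.

```python
import numpy as np
from scipy.optimize import minimize

VOIGT = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
# Voigt entries (upper triangle) that are non-zero for an orthotropic medium
ORTHO_NONZERO = {(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2),
                 (3, 3), (4, 4), (5, 5)}

def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def rotate_tensor(C, R):
    """Rotate a rank-4 tensor: C'_{ijkl} = R_ia R_jb R_kc R_ld C_abcd."""
    return np.einsum("ia,jb,kc,ld,abcd->ijkl", R, R, R, R, C)

def zeroing_objective(q, C):
    """Sum of squared Voigt entries that must vanish in the orthotropic form."""
    Crot = rotate_tensor(C, quat_to_rot(q))
    total = 0.0
    for I, (i, j) in enumerate(VOIGT):
        for J, (k, l) in enumerate(VOIGT):
            if (min(I, J), max(I, J)) not in ORTHO_NONZERO:
                total += Crot[i, j, k, l] ** 2
    return total

# placeholder rank-4 tensor with the minor and major symmetries enforced
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
A = 0.5 * (A + A.T)
C = np.zeros((3, 3, 3, 3))
for I, (i, j) in enumerate(VOIGT):
    for J, (k, l) in enumerate(VOIGT):
        for a, b in {(i, j), (j, i)}:
            for c, d in {(k, l), (l, k)}:
                C[a, b, c, d] = A[I, J]

res = minimize(zeroing_objective, x0=np.array([1.0, 0.1, 0.1, 0.1]), args=(C,))
print(res.fun, res.x / np.linalg.norm(res.x))
```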

  14. Simultaneous determination of binary mixture of amlodipine besylate and atenolol based on dual wavelengths

    NASA Astrophysics Data System (ADS)

    Lamie, Nesrine T.

    2015-10-01

    Four, accurate, precise, and sensitive spectrophotometric methods are developed for simultaneous determination of a binary mixture of amlodipine besylate (AM) and atenolol (AT). AM is determined at its λmax 360 nm (0D), while atenolol can be determined by four different methods. Method (A) is absorption factor (AF). Method (B) is the new ratio difference method (RD) which measures the difference in amplitudes between 210 and 226 nm. Method (C) is novel constant center spectrophotometric method (CC). Method (D) is mean centering of the ratio spectra (MCR) at 284 nm. The methods are tested by analyzing synthetic mixtures of the cited drugs and they are applied to their commercial pharmaceutical preparation. The validity of results is assessed by applying standard addition technique. The results obtained are found to agree statistically with those obtained by official methods, showing no significant difference with respect to accuracy and precision.

  15. Improved Correction of Atmospheric Pressure Data Obtained by Smartphones through Machine Learning

    PubMed Central

    Kim, Yong-Hyuk; Ha, Ji-Hun; Kim, Na-Young; Im, Hyo-Hyuc; Sim, Sangjin; Choi, Reno K. Y.

    2016-01-01

    A correction method using machine learning aims to improve the conventional linear regression (LR) based method for the correction of atmospheric pressure data obtained by smartphones. The method proposed in this study conducts clustering and regression analysis with time domain classification. Data obtained using smartphones from July 2014 through December 2014 in Gyeonggi-do (one of the most populous provinces in South Korea, surrounding Seoul, with an area of 10,000 km2) were classified with respect to time of day (daytime or nighttime), day of the week (weekday or weekend) and the user's mobility, prior to expectation-maximization (EM) clustering. Subsequently, the results were analyzed for comparison by applying machine learning methods such as multilayer perceptron (MLP) and support vector regression (SVR). The results showed a mean absolute error (MAE) 26% lower on average when regression analysis was performed through EM clustering compared to that obtained without EM clustering. For the machine learning methods, the MAE for SVR was around 31% lower than for LR and about 19% lower than for MLP. It is concluded that pressure data from smartphones are as good as the ones from the national automatic weather station (AWS) network. PMID:27524999
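    A condensed sketch of the cluster-then-regress idea follows: observations are first grouped with a Gaussian mixture fitted by expectation-maximization, and a separate support vector regressor is then trained per cluster. The feature columns, parameters and data are assumptions, not those of the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVR

def fit_cluster_regressors(X, y, n_clusters=3, seed=0):
    """Fit EM (Gaussian mixture) clustering, then one SVR per cluster."""
    gmm = GaussianMixture(n_components=n_clusters, random_state=seed).fit(X)
    labels = gmm.predict(X)
    models = {k: SVR(C=10.0, epsilon=0.1).fit(X[labels == k], y[labels == k])
              for k in range(n_clusters)}
    return gmm, models

def predict(gmm, models, X):
    """Route each sample to the regressor of its most likely cluster."""
    labels = gmm.predict(X)
    return np.array([models[k].predict(x.reshape(1, -1))[0]
                     for k, x in zip(labels, X)])

# synthetic stand-in: columns could be raw smartphone pressure, hour of day, speed
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 1000.0 + 5.0 * X[:, 0] + rng.normal(scale=0.5, size=300)
gmm, models = fit_cluster_regressors(X, y)
pred = predict(gmm, models, X[:10])
```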

  16. Comparison of orbital volume obtained by tomography and rapid prototyping.

    PubMed

    Roça, Guilherme Berto; Foggiatto, José Aguiomar; Ono, Maria Cecilia Closs; Ono, Sergio Eiji; da Silva Freitas, Renato

    2013-11-01

    This study aims to compare orbital volume obtained by helical tomography and rapid prototyping. The study sample was composed of 6 helical tomography scans, in which eleven healthy orbits were identified to have their volumes measured. The volumetric analysis with the helical tomography utilized the same protocol developed by the Plastic Surgery Unit of the Federal University of Paraná. From the CT images, 11 prototypes were created, and their respective volumes were analyzed in 2 ways: using software by SolidWorks and by direct analysis, when the prototype was filled with saline solution. For statistical analysis, the results of the volumes of the 11 orbits were considered independent. The average orbital volume obtained by the method of Ono et al was 20.51 cm3, the average obtained by the SolidWorks program was 20.64 cm3, and the average measured using the prototype method was 21.81 cm3. The 3 methods demonstrated a strong correlation between the measurements, and the right and left orbits of each patient had similar volumes. The tomographic method for the analysis of orbital volume using the Ono protocol yielded consistent values, and by combining this method with rapid prototyping, both the reliability and the validation of the results were enhanced.

  17. A lattice Boltzmann model for the Burgers-Fisher equation.

    PubMed

    Zhang, Jianying; Yan, Guangwu

    2010-06-01

    A lattice Boltzmann model is developed for the one- and two-dimensional Burgers-Fisher equation based on the method of the higher-order moment of equilibrium distribution functions and a series of partial differential equations in different time scales. In order to obtain the two-dimensional Burgers-Fisher equation, vector sigma(j) has been used. And in order to overcome the drawbacks of "error rebound," a new assumption of additional distribution is presented, where two additional terms, in first order and second order separately, are used. Comparisons with the results obtained by other methods reveal that the numerical solutions obtained by the proposed method converge to exact solutions. The model under new assumption gives better results than that with second order assumption. (c) 2010 American Institute of Physics.

  18. An exploratory study of a finite difference method for calculating unsteady transonic potential flow

    NASA Technical Reports Server (NTRS)

    Bennett, R. M.; Bland, S. R.

    1979-01-01

    A method for calculating transonic flow over steady and oscillating airfoils was developed by Isogai. The full potential equation is solved with a semi-implicit, time-marching, finite difference technique. Steady flow solutions are obtained from time asymptotic solutions for a steady airfoil. Corresponding oscillatory solutions are obtained by initiating an oscillation and marching in time for several cycles until a converged periodic solution is achieved. The method is described in general terms and results for the case of an airfoil with an oscillating flap are presented for Mach numbers 0.500 and 0.875. Although satisfactory results are obtained for some reduced frequencies, it is found that the numerical technique generates spurious oscillations in the indicial response functions and in the variation of the aerodynamic coefficients with reduced frequency. These oscillations are examined with a dynamic data reduction method to evaluate their effects and trends with reduced frequency and Mach number. Further development of the numerical method is needed to eliminate these oscillations.

  19. Generalized Lagrangian Jacobi Gauss collocation method for solving unsteady isothermal gas through a micro-nano porous medium

    NASA Astrophysics Data System (ADS)

    Parand, Kourosh; Latifi, Sobhan; Delkhosh, Mehdi; Moayeri, Mohammad M.

    2018-01-01

    In the present paper, a new method based on the Generalized Lagrangian Jacobi Gauss (GLJG) collocation method is proposed. The nonlinear Kidder equation, which describes unsteady isothermal gas flow through a micro-nano porous medium, is a second-order two-point boundary value ordinary differential equation on the unbounded interval [0, ∞). Firstly, using the quasilinearization method, the equation is converted to a sequence of linear ordinary differential equations. Then, by using the GLJG collocation method, the problem is reduced to solving a system of algebraic equations. It must be mentioned that this equation is solved without domain truncation or variable changes. A comparison with some numerical solutions is made, and the obtained results indicate that the presented solution is highly accurate. The important value of the initial slope, y'(0), is obtained as -1.191790649719421734122828603800159364 for η = 0.5. Compared to the best result obtained so far, it is accurate to 36 decimal places.

  20. Gas chromatographic simulated distillation-mass spectrometry for the determination of the boiling point distributions of crude oils

    PubMed

    Roussis; Fitzgerald

    2000-04-01

    The coupling of gas chromatographic simulated distillation with mass spectrometry for the determination of the distillation profiles of crude oils is reported. The method provides the boiling point distributions of both weight and volume percent amounts. The weight percent distribution is obtained from the measured total ion current signal. The total ion current signal is converted to weight percent amount by calibration with a reference crude oil of a known distillation profile. Knowledge of the chemical composition of the crude oil across the boiling range permits the determination of the volume percent distribution. The long-term repeatability is equivalent to or better than the short-term repeatability of the currently available American Society for Testing and Materials (ASTM) gas chromatographic method for simulated distillation. Results obtained by the mass spectrometric method are in very good agreement with results obtained by conventional methods of physical distillation. The compositional information supplied by the method can be used to extensively characterize crude oils.

  1. Application of Grey Model GM(1, 1) to Ultra Short-Term Predictions of Universal Time

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Guo, Min; Zhao, Danning; Cai, Hongbing; Hu, Dandan

    2016-03-01

    A mathematical model known as the one-order one-variable grey differential equation model GM(1, 1) has been employed successfully for ultra short-term (<10 days) predictions of universal time (UT1-UTC). The results of the predictions are analyzed and compared with those obtained by other methods. It is shown that the accuracy of the predictions is comparable with that obtained by other prediction methods. The proposed method is able to yield an exact prediction even when only a few observations are provided; hence it is very valuable in the case of a small dataset, since traditional methods, e.g., least-squares (LS) extrapolation, require a longer data span to make a good forecast. In addition, these results can be obtained without making any assumption about the original dataset, and thus are of high reliability. Another advantage is that the developed method is easy to use. All this reveals a great potential of the GM(1, 1) model for UT1-UTC predictions.
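    The GM(1, 1) recursion itself is compact. The sketch below is a generic implementation, not the authors' code: it builds the accumulated series, solves for the development coefficient a and grey input b by least squares, and extrapolates; the example series is illustrative, not UT1-UTC data.

```python
import numpy as np

def gm11_forecast(x0, steps=5):
    """Grey model GM(1,1): fit on a positive series x0 and forecast ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                           # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # development coefficient, grey input
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[n:]                            # the forecast part only

# illustrative series only
print(gm11_forecast([2.87, 2.91, 2.94, 2.99, 3.05, 3.08], steps=3))
```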

  2. Simultaneous determination of a binary mixture of pantoprazole sodium and itopride hydrochloride by four spectrophotometric methods.

    PubMed

    Ramadan, Nesrin K; El-Ragehy, Nariman A; Ragab, Mona T; El-Zeany, Badr A

    2015-02-25

    Four simple, sensitive, accurate and precise spectrophotometric methods were developed for the simultaneous determination of a binary mixture containing Pantoprazole Sodium Sesquihydrate (PAN) and Itopride Hydrochloride (ITH). Method (A) is the derivative ratio method ((1)DD), method (B) is the mean centering of ratio spectra method (MCR), method (C) is the ratio difference method (RD) and method (D) is the isoabsorptive point coupled with third derivative method ((3)D). Linear correlation was obtained in range 8-44 μg/mL for PAN by the four proposed methods, 8-40 μg/mL for ITH by methods A, B and C and 10-40 μg/mL for ITH by method D. The suggested methods were validated according to ICH guidelines. The obtained results were statistically compared with those obtained by the official and a reported method for PAN and ITH, respectively, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Simultaneous determination of a binary mixture of pantoprazole sodium and itopride hydrochloride by four spectrophotometric methods

    NASA Astrophysics Data System (ADS)

    Ramadan, Nesrin K.; El-Ragehy, Nariman A.; Ragab, Mona T.; El-Zeany, Badr A.

    2015-02-01

    Four simple, sensitive, accurate and precise spectrophotometric methods were developed for the simultaneous determination of a binary mixture containing Pantoprazole Sodium Sesquihydrate (PAN) and Itopride Hydrochloride (ITH). Method (A) is the derivative ratio method (1DD), method (B) is the mean centering of ratio spectra method (MCR), method (C) is the ratio difference method (RD) and method (D) is the isoabsorptive point coupled with third derivative method (3D). Linear correlation was obtained in range 8-44 μg/mL for PAN by the four proposed methods, 8-40 μg/mL for ITH by methods A, B and C and 10-40 μg/mL for ITH by method D. The suggested methods were validated according to ICH guidelines. The obtained results were statistically compared with those obtained by the official and a reported method for PAN and ITH, respectively, showing no significant difference with respect to accuracy and precision.

  4. Effect of joint spacing and joint dip on the stress distribution around tunnels using different numerical methods

    NASA Astrophysics Data System (ADS)

    Nikadat, Nooraddin; Fatehi Marji, Mohammad; Rahmannejad, Reza; Yarahmadi Bafghi, Alireza

    2016-11-01

    The stability of tunnels may be affected by the geometry (spacing and orientation) of joints in the surrounding rock mass under different conditions. In this study, by comparing the results obtained by three numerical methods, i.e., the finite element method (Phase2), the discrete element method (UDEC) and the indirect boundary element method (TFSDDM), the effects of joint spacing and joint dip on the stress distribution around rock tunnels are numerically studied. These comparisons indicate the validity of the stress analyses around circular rock tunnels. These analyses also reveal that, for a semi-continuous environment, the boundary element method gives more accurate results compared to the results of the finite element and distinct element methods. In the indirect boundary element method, the displacements due to joints of different spacing and dip are estimated by using displacement discontinuity (DD) formulations, and the total stress distribution around the tunnel is obtained by using fictitious stress (FS) formulations.

  5. Comparison of GPS receiver DCB estimation methods using a GPS network

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Kyu; Park, Jong-Uk; Min Roh, Kyoung; Lee, Sang-Jeong

    2013-07-01

    Two approaches for receiver differential code biases (DCB) estimation using the GPS data obtained from the Korean GPS network (KGN) in South Korea are suggested: the relative and single (absolute) methods. The relative method uses a GPS network, while the single method determines DCBs from a single station only. Their performance was assessed by comparing the receiver DCB values obtained from the relative method with those estimated by the single method. The daily averaged receiver DCBs obtained from the two different approaches showed good agreement for 7 days. The root mean square (RMS) value of those differences is 0.83 nanoseconds (ns). The standard deviation of the receiver DCBs estimated by the relative method was smaller than that of the single method. From these results, it is clear that the relative method can obtain more stable receiver DCBs compared with the single method over a short-term period. Additionally, the comparison between the receiver DCBs obtained by the Korea Astronomy and Space Science Institute (KASI) and those of the IGS Global Ionosphere Maps (GIM) showed a good agreement at 0.3 ns. As the accuracy of DCB values significantly affects the accuracy of ionospheric total electron content (TEC), more studies are needed to ensure the reliability and stability of the estimated receiver DCBs.

  6. Applying the Multiple Signal Classification Method to Silent Object Detection Using Ambient Noise

    NASA Astrophysics Data System (ADS)

    Mori, Kazuyoshi; Yokoyama, Tomoki; Hasegawa, Akio; Matsuda, Minoru

    2004-05-01

    The revolutionary concept of using ocean ambient noise positively to detect objects, called acoustic daylight imaging, has attracted much attention. The authors attempted the detection of a silent target object using ambient noise and a wide-band beam former consisting of an array of receivers. In experimental results obtained in air, using the wide-band beam former, we successfully applied the delay-sum array (DSA) method to detect a silent target object in an acoustic noise field generated by a large number of transducers. This paper reports some experimental results obtained by applying the multiple signal classification (MUSIC) method to a wide-band beam former to detect silent targets. The ocean ambient noise was simulated by transducers decentralized to many points in air. Both MUSIC and DSA detected a spherical target object in the noise field. The relative power levels near the target obtained with MUSIC were compared with those obtained by DSA. Then the effectiveness of the MUSIC method was evaluated according to the rate of increase in the maximum and minimum relative power levels.
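    A condensed sketch of the MUSIC step for a uniform linear array of receivers follows: the sample covariance matrix is eigendecomposed, the noise subspace is taken from the smallest eigenvalues, and the pseudospectrum peaks where the steering vector is orthogonal to that subspace. The array geometry and signal parameters are illustrative assumptions, not those of the experiment.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """MUSIC pseudospectrum for an M-element uniform linear array.

    X: M x T matrix of array snapshots; d: element spacing in wavelengths.
    """
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    w, V = np.linalg.eigh(R)                        # eigenvalues in ascending order
    En = V[:, : M - n_sources]                      # noise subspace
    P = []
    for th in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(th))  # steering vector
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(P)

# one narrowband source at +20 degrees received by 8 elements, plus noise
rng = np.random.default_rng(0)
M, T, theta = 8, 200, np.deg2rad(20.0)
a = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(theta))
X = np.outer(a, rng.normal(size=T)) \
    + 0.1 * (rng.normal(size=(M, T)) + 1j * rng.normal(size=(M, T)))
angles, P = music_spectrum(X, n_sources=1)
print(angles[np.argmax(P)])   # expected near +20 degrees
```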

  7. Evaluation of the validity of the Bolton Index using cone-beam computed tomography (CBCT)

    PubMed Central

    Llamas, José M.; Cibrián, Rosa; Gandía, José L.; Paredes, Vanessa

    2012-01-01

    Aims: To evaluate the reliability and reproducibility of calculating the Bolton Index using cone-beam computed tomography (CBCT), and to compare this with measurements obtained using the 2D Digital Method. Material and Methods: Traditional study models were obtained from 50 patients, which were then digitized in order to be able to measure them using the Digital Method. Likewise, CBCTs of those same patients were undertaken using the Dental Picasso Master 3D® and the images obtained were then analysed using the InVivoDental programme. Results: By determining the regression lines for both measurement methods, as well as the difference between both of their values, the two methods are shown to be comparable, despite the fact that the measurements analysed presented statistically significant differences. Conclusions: The three-dimensional models obtained from the CBCT are as accurate and reproducible as the digital models obtained from the plaster study casts for calculating the Bolton Index. The differences existing between both methods were clinically acceptable. Key words:Tooth-size, digital models, bolton index, CBCT. PMID:22549690
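    For reference, the Bolton analysis itself reduces to two ratios of summed mesiodistal tooth widths. The sketch below computes the overall and anterior ratios from width measurements; the values and variable names are placeholders, not data from the study.

```python
def bolton_ratios(mandibular_widths, maxillary_widths):
    """Overall (12-tooth) and anterior (6-tooth) Bolton ratios in percent.

    Widths are mesiodistal measurements ordered from one first molar to the
    other (12 values); the anterior ratio uses the canine-to-canine subset.
    """
    overall = 100.0 * sum(mandibular_widths) / sum(maxillary_widths)
    anterior = 100.0 * sum(mandibular_widths[3:9]) / sum(maxillary_widths[3:9])
    return overall, anterior

# placeholder widths in mm (first molar to first molar)
mand = [11.0, 7.0, 7.0, 6.8, 5.9, 5.3, 5.3, 5.9, 6.8, 7.0, 7.0, 11.0]
maxi = [10.5, 6.8, 7.0, 7.9, 6.6, 8.5, 8.5, 6.6, 7.9, 7.0, 6.8, 10.5]
print(bolton_ratios(mand, maxi))
```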

  8. Denoising by coupled partial differential equations and extracting phase by backpropagation neural networks for electronic speckle pattern interferometry.

    PubMed

    Tang, Chen; Lu, Wenjing; Chen, Song; Zhang, Zhen; Li, Botao; Wang, Wenping; Han, Lin

    2007-10-20

    We extend and refine previous work [Appl. Opt. 46, 2907 (2007)]. Combining the coupled nonlinear partial differential equations (PDEs) denoising model with the ordinary differential equations enhancement method, we propose the new denoising and enhancing model for electronic speckle pattern interferometry (ESPI) fringe patterns. Meanwhile, we propose the backpropagation neural networks (BPNN) method to obtain unwrapped phase values based on a skeleton map instead of traditional interpolations. We test the introduced methods on the computer-simulated speckle ESPI fringe patterns and experimentally obtained fringe pattern, respectively. The experimental results show that the coupled nonlinear PDEs denoising model is capable of effectively removing noise, and the unwrapped phase values obtained by the BPNN method are much more accurate than those obtained by the well-known traditional interpolation. In addition, the accuracy of the BPNN method is adjustable by changing the parameters of networks such as the number of neurons.

  9. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
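
    As a minimal illustration of the Lagrange multiplier step mentioned above, the sketch below solves the risk-minimization problem under the budget constraint alone; the investment-concentration constraint of the full primal problem and the replica/random-matrix analysis are not reproduced, and the covariance matrix is an illustrative placeholder.

```python
import numpy as np

# Minimal sketch: minimize (1/2) w' Sigma w subject to a budget constraint
# (here taken as 1'w = N), solved with a single Lagrange multiplier.  The
# paper's primal problem also constrains investment concentration; that
# constraint is omitted in this sketch.
rng = np.random.default_rng(1)
N = 5
A = rng.standard_normal((N, N))
Sigma = A @ A.T + N * np.eye(N)        # illustrative positive-definite covariance

ones = np.ones(N)
Sigma_inv = np.linalg.inv(Sigma)
# Stationarity of L(w, k) = (1/2) w'Sigma w - k (1'w - N) gives w = k Sigma^{-1} 1,
# and the budget constraint fixes k = N / (1' Sigma^{-1} 1).
k = N / (ones @ Sigma_inv @ ones)
w = k * Sigma_inv @ ones

print("weights:", w)
print("budget 1'w =", w.sum())          # equals N by construction
print("risk w'Sigma w =", w @ Sigma @ w)
```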

  10. Non destructive testing of works of art by terahertz analysis

    NASA Astrophysics Data System (ADS)

    Bodnar, Jean-Luc; Metayer, Jean-Jacques; Mouhoubi, Kamel; Detalle, Vincent

    2013-11-01

    Improvements in technology and growing security needs in airport terminals have led to the development of non-destructive testing devices using terahertz waves. These waves have the advantage of being relatively penetrating and, moreover, non-ionizing, so they are potentially an interesting contribution to the non-destructive testing field. With the help of the VISIOM Company, the possibilities of this new industrial analysis method for assisting the restoration of works of art were then explored. The results obtained within this framework are presented here and compared with those obtained by infrared thermography. The results obtained show first that the THz method, like stimulated infrared thermography, allows the detection of delamination located in mural paintings or in marquetry. They then show that the THz method seems to allow the detection of defects located relatively deep (10 mm) and of defects potentially concealed by other defects, which is an advantage over stimulated infrared thermography, which cannot obtain these results. Furthermore, they show that the method does not seem sensitive to the various pigments constituting the pictorial layer, to the presence of a layer of "Japan paper", or to the presence of a layer of whitewash; this is not the case for stimulated infrared thermography and is another advantage of the THz method. Finally, they show that the THz method is limited in the detection of small defects, which is a disadvantage compared with stimulated infrared thermography.

  11. Statistical analysis of activation and reaction energies with quasi-variational coupled-cluster theory

    NASA Astrophysics Data System (ADS)

    Black, Joshua A.; Knowles, Peter J.

    2018-06-01

    The performance of quasi-variational coupled-cluster (QV) theory applied to the calculation of activation and reaction energies has been investigated. A statistical analysis of results obtained for six different sets of reactions has been carried out, and the results have been compared to those from standard single-reference methods. In general, the QV methods lead to increased activation energies and larger absolute reaction energies compared to those obtained with traditional coupled-cluster theory.

  12. Influence of temporary organic bond nature on the properties of compacts and ceramics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ditts, A., E-mail: ditts@tpu.ru; Revva, I., E-mail: revva@tpu.ru; Pogrebenkov, V.

    2016-01-15

    This work presents the results of an investigation into obtaining highly thermally conductive ceramics from commercial powders of aluminum nitride and yttrium oxide by the method of monoaxial compaction of granulate. The principal scheme of preparation is proposed and the technological properties of the granulate are defined. Compaction conditions have been established for simple items used for heat removal in microelectronics and power electrical engineering. Investigations of the thermophysical properties of the obtained ceramics and of its structure by the XRD and SEM methods have been carried out. Ceramics with thermal conductivity from 172 to 174 W/m·K have been obtained as a result of this work.

  13. Method of Curved Models and Its Application to the Study of Curvilinear Flight of Airships. Part II

    NASA Technical Reports Server (NTRS)

    Gourjienko, G A

    1937-01-01

    This report compares the results obtained by the aid of curved models with the results of tests made by the method of damped oscillations, and with flight tests. Consequently we shall be able to judge which method of testing in the tunnel produces results that are in closer agreement with flight test results.

  14. Love waves in functionally graded piezoelectric materials by stiffness matrix method.

    PubMed

    Ben Salah, Issam; Wali, Yassine; Ben Ghozlen, Mohamed Hédi

    2011-04-01

    A numerical matrix method for the propagation of ultrasonic guided waves in functionally graded piezoelectric heterostructures is given in order to make a comparative study with the respective performances of analytical methods proposed in the literature. The preliminary results show good agreement; however, the numerical approach has the advantage of the conceptual simplicity and flexibility brought about by the stiffness matrix method. The propagation behaviour of Love waves in a functionally graded piezoelectric material (FGPM) is investigated in this article. It involves a thin FGPM layer bonded perfectly to an elastic substrate. The inhomogeneous FGPM heterostructure has been stratified along the depth direction, hence each layer can be considered homogeneous and the ordinary differential equation method is applied. The obtained solutions are used to study the effect of an exponential gradient applied to the physical properties. Such a numerical approach allows different gradient variations to be applied to the mechanical and electrical properties. For this case, the obtained results reveal opposite effects. The dispersion curves and phase velocities of Love wave propagation in the layered piezoelectric film are obtained for the electrically open and short cases on the free surface, respectively. The effects of the gradient coefficients on the coupled electromechanical factor, the stress fields, the electrical potential and the mechanical displacement are discussed, respectively. Illustration is provided for the well-known heterostructure PZT-5H/SiO(2); the obtained results are especially useful in the design of high-performance surface acoustic devices and for accurate prediction of the Love wave propagation behaviour. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Analysis of biomolecular solvation sites by 3D-RISM theory.

    PubMed

    Sindhikara, Daniel J; Hirata, Fumio

    2013-06-06

    We derive, implement, and apply equilibrium solvation site analysis for biomolecules. Our method utilizes 3D-RISM calculations to quickly obtain equilibrium solvent distributions without either necessity of simulation or limits of solvent sampling. Our analysis of these distributions extracts highest likelihood poses of solvent as well as localized entropies, enthalpies, and solvation free energies. We demonstrate our method on a structure of HIV-1 protease where excellent structural and thermodynamic data are available for comparison. Our results, obtained within minutes, show systematic agreement with available experimental data. Further, our results are in good agreement with established simulation-based solvent analysis methods. This method can be used not only for visual analysis of active site solvation but also for virtual screening methods and experimental refinement.

  16. Capillary electrophoresis method for the discrimination between natural and artificial vanilla flavor for controlling food frauds.

    PubMed

    Lahouidak, Samah; Salghi, Rachid; Zougagh, Mohammed; Ríos, Angel

    2018-03-06

    A capillary electrophoresis method was developed for the determination of coumarin (COUM), ethyl vanillin (EVA), p-hydroxybenzaldehyde (PHB), p-hydroxybenzoic acid (PHBA), vanillin (VAN), vanillic acid (VANA) and vanillic alcohol (VOH) in vanilla products. The measured concentrations are compared to values obtained by a liquid chromatography (LC) method. Analytical results, method precision, and accuracy data are presented, and the limits of detection for the method ranged from 2 to 5 μg/mL. The results obtained are used in monitoring the composition of vanilla flavorings, as well as for confirmation of the natural or non-natural origin of vanilla in samples, using four selected food samples containing this flavor. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. A Novel Polygonal Finite Element Method: Virtual Node Method

    NASA Astrophysics Data System (ADS)

    Tang, X. H.; Zheng, C.; Zhang, J. H.

    2010-05-01

    The polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEMs, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a great number of integration points has to be used to obtain sufficiently exact results, which increases the computational cost. In this paper, a novel polygonal finite element method is proposed, called the virtual node method (VNM). The features of the present method can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can naturally be used to obtain exact numerical integration; (2) the shape functions of the VNM satisfy all the requirements of the finite element method. To test the performance of the VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, the VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, it is observed that the VNM achieves better results than triangular 3-node elements in the accuracy test.

  18. X-ray and neutron diffraction studies of crystallinity in hydroxyapatite coatings.

    PubMed

    Girardin, E; Millet, P; Lodini, A

    2000-02-01

    To standardize industrial implant production and make comparisons between different experimental results, we have to be able to quantify the crystallinity of hydroxyapatite. Methods of measuring crystallinity ratio were developed for various HA samples before and after plasma spraying. The first series of methods uses X-ray diffraction. The advantage of these methods is that X-ray diffraction equipment is used widely in science and industry. In the second series, a neutron diffraction method is developed and the results recorded are similar to those obtained by the modified X-ray diffraction methods. The advantage of neutron diffraction is the ability to obtain measurements deep inside a component. It is a nondestructive method, owing to the very low absorption of neutrons in most materials. Copyright 2000 John Wiley & Sons, Inc.

  19. Determination of acrylamide in various food matrices: evaluation of LC and GC mass spectrometric methods.

    PubMed

    Becalski, Adam; Lau, Benjamin P Y; Lewis, David; Seaman, Stephen W; Sun, Wing F

    2005-01-01

    Recent concerns surrounding the presence of acrylamide in many types of thermally processed food have brought about the need for the development of analytical methods suitable for determination of acrylamide in diverse matrices with the goals of improving overall confidence in analytical results and better understanding of method capabilities. Consequently, the results are presented of acrylamide testing in commercially available food products--potato fries, potato chips, crispbread, instant coffee, coffee beans, cocoa, chocolate and peanut butter, obtained by using the same sample extract. The results obtained by using LC-MS/MS, GC/MS (EI), GC/HRMS (EI)--with or without derivatization--and the use of different analytical columns, are discussed and compared with respect to matrix borne interferences, detection limits and method complexities.

  20. Infants and young children modeling method for numerical dosimetry studies: application to plane wave exposure

    NASA Astrophysics Data System (ADS)

    Dahdouh, S.; Varsier, N.; Nunez Ochoa, M. A.; Wiart, J.; Peyman, A.; Bloch, I.

    2016-02-01

    Numerical dosimetry studies require the development of accurate numerical 3D models of the human body. This paper proposes a novel method for building 3D heterogeneous young children models combining results obtained from a semi-automatic multi-organ segmentation algorithm and an anatomy deformation method. The data consist of 3D magnetic resonance images, which are first segmented to obtain a set of initial tissues. A deformation procedure guided by the segmentation results is then developed in order to obtain five young children models ranging from the age of 5 to 37 months. By constraining the deformation of an older child model toward a younger one using segmentation results, we assure the anatomical realism of the models. Using the proposed framework, five models, containing thirteen tissues, are built. Three of these models are used in a prospective dosimetry study to analyze young child exposure to radiofrequency electromagnetic fields. The results tend to show the existence of a relationship between age and whole-body exposure. The results also highlight the necessity to specifically study and measure the dielectric properties of child tissues.

  1. Turbulent heat fluxes by profile and inertial dissipation methods: analysis of the atmospheric surface layer from shipboard measurements during the SOFIA/ASTEX and SEMAPHORE experiments

    NASA Astrophysics Data System (ADS)

    Dupuis, Hélène; Weill, Alain; Katsaros, Kristina; Taylor, Peter K.

    1995-10-01

    Heat flux estimates obtained using the inertial dissipation method, and the profile method applied to radiosonde soundings, are assessed with emphasis on the parameterization of the roughness lengths for temperature and specific humidity. Results from the inertial dissipation method show a decrease of the temperature and humidity roughness lengths for increasing neutral wind speed, in agreement with previous studies. The sensible heat flux estimates were obtained using the temperature estimated from the speed of sound determined by a sonic anemometer. This method seems very attractive for estimating heat fluxes over the ocean. However, allowance must be made in the inertial dissipation method for non-neutral stratification. The SOFIA/ASTEX and SEMAPHORE results show that, in unstable stratification, a term due to the transport terms in the turbulent kinetic energy budget has to be included in order to determine the friction velocity with better accuracy. Using the profile method with radiosonde data, the roughness length values showed large scatter. A reliable estimate of the temperature roughness length could not be obtained. The humidity roughness length values were compatible with those found using the inertial dissipation method.

  2. Simultaneous chemometric determination of pyridoxine hydrochloride and isoniazid in tablets by multivariate regression methods.

    PubMed

    Dinç, Erdal; Ustündağ, Ozgür; Baleanu, Dumitru

    2010-08-01

    The sole use of pyridoxine hydrochloride during treatment of tuberculosis gives rise to pyridoxine deficiency. Therefore, a combination of pyridoxine hydrochloride and isoniazid is used in pharmaceutical dosage form in tuberculosis treatment to reduce this side effect. In this study, two chemometric methods, partial least squares (PLS) and principal component regression (PCR), were applied to the simultaneous determination of pyridoxine (PYR) and isoniazid (ISO) in their tablets. A concentration training set comprising binary mixtures of PYR and ISO in 20 different combinations was randomly prepared in 0.1 M HCl. Both multivariate calibration models were constructed using the relationships between the concentration data set (concentration data matrix) and the absorbance data matrix in the spectral region 200-330 nm. The accuracy and the precision of the proposed chemometric methods were validated by analyzing synthetic mixtures containing the investigated drugs. The recoveries obtained by applying the PCR and PLS calibrations to the artificial mixtures were found to be between 100.0 and 100.7%. Satisfactory results were obtained by applying the PLS and PCR methods to both artificial and commercial samples. These results strongly encourage the use of the methods for the quality control and routine analysis of marketed tablets containing PYR and ISO. Copyright © 2010 John Wiley & Sons, Ltd.
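
    The following hedged sketch shows how a PLS and a PCR calibration of a two-component mixture can be set up with scikit-learn; the synthetic Gaussian "spectra", concentration ranges, and noise level are assumptions for illustration and are not the PYR/ISO absorbance data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical two-component mixture calibration (illustrative data only).
rng = np.random.default_rng(2)
wavelengths = np.linspace(200, 330, 131)

def band(center):
    return np.exp(-0.5 * ((wavelengths - center) / 15.0) ** 2)

pure = np.vstack([band(240), band(290)])        # pure-component "spectra"

C_train = rng.uniform(1, 10, size=(20, 2))      # 20 binary training mixtures
A_train = C_train @ pure + 0.01 * rng.standard_normal((20, len(wavelengths)))

pls = PLSRegression(n_components=2).fit(A_train, C_train)
pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(A_train, C_train)

C_test = np.array([[4.0, 6.0]])
A_test = C_test @ pure + 0.01 * rng.standard_normal((1, len(wavelengths)))
print("PLS prediction:", pls.predict(A_test))
print("PCR prediction:", pcr.predict(A_test))
```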

  3. Estimating Durability of Reinforced Concrete

    NASA Astrophysics Data System (ADS)

    Varlamov, A. A.; Shapovalov, E. L.; Gavrilov, V. B.

    2017-11-01

    In this article we propose to use the methods of fracture mechanics to evaluate concrete durability. To implement these methods, we have developed special techniques for evaluating the crack resistance characteristics of concrete directly in the structure. Various experimental studies have been carried out to determine the crack resistance characteristics and the modulus of elasticity of concrete during its service. A comparison was carried out between the results obtained with the proposed methods and those obtained with the standard methods for determining the crack resistance characteristics of concrete.

  4. Computation of Pressurized Gas Bearings Using CE/SE Method

    NASA Technical Reports Server (NTRS)

    Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

    2003-01-01

    The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for solving such problems.

  5. Speckle interferometry with temporal phase evaluation for measuring large-object deformation.

    PubMed

    Joenathan, C; Franze, B; Haible, P; Tiziani, H J

    1998-05-01

    We propose a new method for measuring large-object deformations by using temporal evolution of the speckles in speckle interferometry. The principle of the method is that by deforming the object continuously, one obtains fluctuations in the intensity of the speckle. A large number of frames of the object motion are collected to be analyzed later. The phase data for whole-object deformation are then retrieved by inverse Fourier transformation of a filtered spectrum obtained by Fourier transformation of the signal. With this method one is capable of measuring deformations of more than 100 μm, which is not possible using conventional electronic speckle pattern interferometry. We discuss the underlying principle of the method and the results of the experiments. Some nondestructive testing results are also presented.
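
    A minimal single-pixel sketch of the temporal phase-evaluation idea described above is given below: the intensity history is Fourier transformed, the positive sideband around the temporal carrier is retained, and the unwrapped phase of the inverse transform yields the deformation phase. The carrier frequency and deformation law are illustrative assumptions.

```python
import numpy as np

# Single-pixel sketch of temporal phase evaluation: band-pass the temporal
# spectrum of the speckle intensity around its carrier frequency, inverse
# transform, and unwrap the phase.  Signal parameters are illustrative.
fs, T = 500.0, 4.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f_carrier = 40.0                         # temporal carrier from the object motion
phi = 2 * np.pi * 3.0 * (t / T) ** 2     # slowly growing deformation phase
intensity = 1.0 + 0.8 * np.cos(2 * np.pi * f_carrier * t + phi)

spectrum = np.fft.fft(intensity)
freqs = np.fft.fftfreq(len(t), 1 / fs)
mask = (freqs > 10.0) & (freqs < 80.0)   # keep the positive sideband only
analytic = np.fft.ifft(np.where(mask, spectrum, 0.0))

recovered = np.unwrap(np.angle(analytic)) - 2 * np.pi * f_carrier * t
i1, i2 = len(t) // 4, 3 * len(t) // 4    # compare away from the record edges
print("recovered phase change (rad):", recovered[i2] - recovered[i1])
print("true phase change (rad):     ", phi[i2] - phi[i1])
```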

  6. Comparison of microcrystalline characterization results from oil palm midrib alpha cellulose using different delignization method

    NASA Astrophysics Data System (ADS)

    Yuliasmi, S.; Pardede, T. R.; Nerdy; Syahputra, H.

    2017-03-01

    Oil palm midrib is one of the wastes generated by oil palm plants, containing 34.89% cellulose. This cellulose has the potential to produce microcrystalline cellulose, which can be used as an excipient in tablet formulations by direct compression. Microcrystalline cellulose is the result of a controlled hydrolysis of alpha cellulose, so the alpha cellulose extraction process from oil palm midrib greatly affects the quality of the resulting microcrystalline cellulose. The purpose of this study was to compare the microcrystalline cellulose produced from alpha cellulose extracted from oil palm midrib by two different methods. The first delignization method uses sodium hydroxide. The second method uses a mixture of nitric acid and sodium nitrite, followed by sodium hydroxide and sodium sulfite. The microcrystalline cellulose obtained by each method was characterized separately, including an organoleptic test, color reagent tests, a dissolution test, a pH test and determination of functional groups by FTIR. The results were compared with microcrystalline cellulose available on the market. The characterization results showed that the microcrystalline cellulose obtained by the first method has characteristics most similar to those of the microcrystalline cellulose available on the market.

  7. Improving image-quality of interference fringes of out-of-plane vibration using temporal speckle pattern interferometry and standard deviation for piezoelectric plates.

    PubMed

    Chien-Ching Ma; Ching-Yuan Chang

    2013-07-01

    Interferometry provides a high degree of accuracy in the measurement of sub-micrometer deformations; however, the noise associated with experimental measurement undermines the integrity of interference fringes. This study proposes the use of standard deviation in the temporal domain to improve the image quality of patterns obtained from temporal speckle pattern interferometry. The proposed method combines the advantages of both mean and subtractive methods to remove background noise and ambient disturbance simultaneously, resulting in high-resolution images of excellent quality. The out-of-plane vibration of a thin piezoelectric plate is the main focus of this study, providing information useful to the development of energy harvesters. First, ten resonant states were measured using the proposed method, and both mode shape and resonant frequency were investigated. We then rebuilt the phase distribution of the first resonant mode based on the clear interference patterns obtained using the proposed method. This revealed instantaneous deformations in the dynamic characteristics of the resonant state. The proposed method also provides a frequency-sweeping function, facilitating its practical application in the precise measurement of resonant frequency. In addition, the mode shapes and resonant frequencies obtained using the proposed method were recorded and compared with results obtained using the finite element method and laser Doppler vibrometry, which demonstrated close agreement.
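
    A minimal sketch of the temporal standard-deviation statistic is shown below: the standard deviation of each pixel's intensity over a stack of frames is large where the surface vibrates and small at the nodes, which is what produces the fringe map. The simulated mode shape, speckle phase, and noise level are illustrative assumptions.

```python
import numpy as np

# Sketch: build a fringe map from the temporal standard deviation of each
# pixel in a stack of speckle frames.  The simulated mode shape is illustrative.
rng = np.random.default_rng(3)
ny, nx, nframes = 64, 64, 200
amplitude = np.outer(np.sin(np.linspace(0, np.pi, ny)),
                     np.sin(np.linspace(0, np.pi, nx)))            # "mode shape"

t = np.linspace(0, 1, nframes)
speckle_phase = rng.uniform(0, 2 * np.pi, (ny, nx))                # static speckle
frames = 1.0 + np.cos(speckle_phase[None, :, :]
                      + 4.0 * amplitude[None, :, :]
                      * np.sin(2 * np.pi * 10 * t)[:, None, None])
frames += 0.05 * rng.standard_normal(frames.shape)                 # sensor noise

std_map = frames.std(axis=0)    # high where the surface vibrates, low at nodes
mean_map = frames.mean(axis=0)  # background term suppressed by the std statistic
print("std range:", std_map.min(), std_map.max())
```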

  8. Resolving and quantifying overlapped chromatographic bands by transmutation

    PubMed

    Malinowski

    2000-09-15

    A new chemometric technique called "transmutation" is developed for the purpose of sharpening overlapped chromatographic bands in order to quantify the components. The "transmutation function" is created from the chromatogram of the pure component of interest, obtained from the same instrument, operating under the same experimental conditions used to record the unresolved chromatogram of the sample mixture. The method is used to quantify mixtures containing toluene, ethylbenzene, m-xylene, naphthalene, and biphenyl from unresolved chromatograms previously reported. The results are compared to those obtained using window factor analysis, rank annihilation factor analysis, and matrix regression analysis. Unlike the latter methods, the transmutation method is not restricted to two-dimensional arrays of data, such as those obtained from HPLC/DAD, but is also applicable to chromatograms obtained from single detector experiments. Limitations of the method are discussed.

  9. Multi-beam laser heterodyne measurement with ultra-precision for Young modulus based on oscillating mirror modulation

    NASA Astrophysics Data System (ADS)

    Li, Y. Chao; Ding, Q.; Gao, Y.; Ran, L. Ling; Yang, J. Ru; Liu, C. Yu; Wang, C. Hui; Sun, J. Feng

    2014-07-01

    This paper proposes a novel method of multi-beam laser heterodyne measurement for the Young modulus. Based on the Doppler effect and heterodyne technology, the length variation is loaded onto the frequency difference of the multi-beam laser heterodyne signal through the frequency modulation of the oscillating mirror, so that after demodulation of the signal the method simultaneously obtains many values of the length variation caused by the mass variation. Processing these values by weighted averaging yields the length variation accurately and, ultimately, the Young modulus of the sample. This novel method is used in MATLAB to simulate the measurement of the Young modulus of a wire under different masses; the obtained result shows that the relative measurement error of this method is just 0.3%.

  10. Two-dimensional imaging of two types of radicals by the CW-EPR method

    NASA Astrophysics Data System (ADS)

    Czechowski, Tomasz; Krzyminiewski, Ryszard; Jurga, Jan; Chlewicki, Wojciech

    2008-01-01

    The CW-EPR method of image reconstruction is based on sample rotation in a magnetic field with a constant gradient (50 G/cm). In order to obtain a projection (radical density distribution) along a given direction, the EPR spectra are recorded with and without the gradient. Deconvolution then gives the distribution of the spin density. Projections at 36 different angles give the information necessary for reconstruction of the radical distribution. The problem becomes more complex when there are at least two types of radicals in the sample, because the deconvolution procedure does not give satisfactory results. We propose a method to calculate the projections for each radical, based on iterative procedures. The images of the density distribution for each radical obtained by our procedure have proved that the method of deconvolution, in combination with iterative fitting, provides correct results. The test was performed on a sample of the polymer PPS Br 111 (p-phenylene sulphide) with glass fibres and minerals. The results indicated a heterogeneous distribution of radicals in the sample volume. The images obtained were in agreement with the known shape of the sample.
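
    The tomographic step alone can be sketched as follows with a standard filtered back-projection routine; the phantom stands in for the spin-density distribution, and the deconvolution of gradient-on/gradient-off spectra and the iterative two-radical separation described above are not reproduced.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Sketch of the tomographic step only: reconstruct a 2D density from 36
# projections by filtered back-projection.  The deconvolution of gradient-on /
# gradient-off EPR spectra and the two-radical iterative separation are omitted.
image = resize(shepp_logan_phantom(), (128, 128))     # stand-in spin density
angles = np.linspace(0.0, 180.0, 36, endpoint=False)  # 36 projection angles

sinogram = radon(image, theta=angles)                 # simulated projections
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

err = np.abs(reconstruction - image).mean()
print("mean absolute reconstruction error:", err)
```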

  11. Parametric and experimental analysis using a power flow approach

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1990-01-01

    A structural power flow approach for the analysis of structure-borne transmission of vibrations is used to analyze the influence of structural parameters on transmitted power. The parametric analysis is also performed using the Statistical Energy Analysis approach and the results are compared with those obtained using the power flow approach. The advantages of structural power flow analysis are demonstrated by comparing the type of results that are obtained by the two analytical methods. Also, to demonstrate that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental study of structural power flow is presented. This experimental study presents results for an L shaped beam for which an available solution was already obtained. Various methods to measure vibrational power flow are compared to study their advantages and disadvantages.

  12. Determination of JWL Parameters for Non-Ideal Explosive

    NASA Astrophysics Data System (ADS)

    Hamashima, H.; Kato, Y.; Itoh, S.

    2004-07-01

    The JWL equation of state is widely used in the numerical simulation of detonation phenomena, and JWL parameters are usually determined by the cylinder test. Detonation characteristics of non-ideal explosives depend strongly on confinement, and JWL parameters determined by the cylinder test do not represent the state of the detonation products in many applications. We developed a method to determine JWL parameters from the underwater explosion test. JWL parameters were determined through a method of characteristics applied to the configuration of the underwater shock waves of cylindrical explosives. The numerical results obtained using JWL parameters determined by the underwater explosion test and those obtained using JWL parameters determined by the cylinder test were compared with experimental results for a typical non-ideal explosive, an emulsion explosive. Good agreement was confirmed between the results obtained using JWL parameters determined by the underwater explosion test and the experimental results.
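
    For reference, the sketch below implements the standard JWL pressure form used in such simulations; the parameter values are illustrative placeholders and are not those determined for the emulsion explosive in this work.

```python
import numpy as np

def jwl_pressure(v_rel, e_vol, A, B, R1, R2, omega):
    """Standard JWL form: P = A(1 - w/(R1 V)) exp(-R1 V)
                              + B(1 - w/(R2 V)) exp(-R2 V) + w E / V,
    with V the relative volume and E the internal energy per unit volume."""
    return (A * (1.0 - omega / (R1 * v_rel)) * np.exp(-R1 * v_rel)
            + B * (1.0 - omega / (R2 * v_rel)) * np.exp(-R2 * v_rel)
            + omega * e_vol / v_rel)

# Illustrative placeholder parameters (A, B, E0 in GPa); they are not the
# values determined for the emulsion explosive in the paper.
A, B, R1, R2, omega, E0 = 370.0, 3.2, 4.2, 1.1, 0.30, 7.0
for V in (1.0, 2.0, 4.0):
    print("V =", V, " P =", jwl_pressure(V, E0, A, B, R1, R2, omega), "GPa")
```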

  13. Stress concentration in a cylindrical shell containing a circular hole.

    NASA Technical Reports Server (NTRS)

    Adams, N. J. I.

    1971-01-01

    The state of stress in a cylindrical shell containing a circular cutout was determined for axial tension, torsion, and internal pressure loading. The solution was obtained for the shallow shell equations by a variational method. The results were expressed in terms of a nondimensional curvature parameter which was a function of shell radius, shell thickness, and hole radius. The function chosen for the solution was such that when the radius of the cylindrical shell approaches infinity, the flat-plate solution was obtained. The results are compared with solutions obtained by more rigorous analytical methods, and with some experimental results. For small values of the curvature parameter, the agreement is good. For higher values of the curvature parameter, the present solutions indicate a limiting value of stress concentration, which is in contrast to previous results.

  14. An Artificial Neural Networks Method for Solving Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2010-09-01

    While there already exist many analytical and numerical techniques for solving PDEs, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining a standard numerical method, finite differences, with the Hopfield neural network. The method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, the energy function, the updating equations, and the algorithms are developed for the method. The HFD method has been used successfully to approximate the solution of classical PDEs, such as the wave, heat, Poisson and diffusion equations, and of a system of PDEs. The software Matlab is used to obtain the results in both tabular and graphical form. The results are similar in terms of accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of the Hopfield net method makes it easier to implement on fast parallel computers, while some numerical methods need extra effort for parallelization.
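
    As a hedged illustration of the energy-minimization idea behind such networks (not the paper's exact HFD formulation), the sketch below discretizes a 1D Poisson problem by finite differences and relaxes the unknowns with gradient-descent dynamics on the quadratic energy E(u) = (1/2) u^T A u - b^T u.

```python
import numpy as np

# Sketch: 1D Poisson problem -u'' = f on (0,1), u(0)=u(1)=0, discretized by
# central finite differences and solved by gradient descent on the quadratic
# energy E(u) = (1/2) u'Au - b'u (Hopfield-style continuous dynamics, not the
# exact HFD formulation of the paper).
n, h = 49, 1.0 / 50
x = np.linspace(h, 1 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x)            # exact solution is sin(pi x)

A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2
b = f

u = np.zeros(n)
step = 0.9 * h ** 2 / 2                        # stable step for this SPD matrix
for _ in range(20000):
    u -= step * (A @ u - b)                    # dE/du = Au - b

print("max error vs sin(pi x):", np.max(np.abs(u - np.sin(np.pi * x))))
```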

  15. Valuing inter-sectoral costs and benefits of interventions in the healthcare sector: methods for obtaining unit prices.

    PubMed

    Drost, Ruben M W A; Paulus, Aggie T G; Ruwaard, Dirk; Evers, Silvia M A A

    2017-02-01

    There is a lack of knowledge about methods for valuing health intervention-related costs and monetary benefits in the education and criminal justice sectors, also known as 'inter-sectoral costs and benefits' (ICBs). The objective of this study was to develop methods for obtaining unit prices for the valuation of ICBs. By conducting an exploratory literature study and expert interviews, several generic methods were developed. The methods' feasibility was assessed through application in the Netherlands. Results were validated in an expert meeting, which was attended by policy makers, public health experts, health economists and HTA-experts, and discussed at several international conferences and symposia. The study resulted in four methods, including the opportunity cost method (A) and valuation using available unit prices (B), self-constructed unit prices (C) or hourly labor costs (D). The methods developed can be used internationally and are valuable for the broad international field of HTA.

  16. Critical Evaluation of Kinetic Method Measurements: Possible Origins of Nonlinear Effects

    NASA Astrophysics Data System (ADS)

    Bourgoin-Voillard, Sandrine; Afonso, Carlos; Lesage, Denis; Zins, Emilie-Laure; Tabet, Jean-Claude; Armentrout, P. B.

    2013-03-01

    The kinetic method is a widely used approach for the determination of thermochemical data such as proton affinities (PA) and gas-phase acidities (ΔH°acid). These data are easily obtained from decompositions of noncovalent heterodimers if care is taken in the choice of the method, references used, and experimental conditions. Previously, several papers have focused on theoretical considerations concerning the nature of the references. Few investigations have been devoted to conditions required to validate the quality of the experimental results. In the present work, we are interested in rationalizing the origin of nonlinear effects that can be obtained with the kinetic method. It is shown that such deviations result from intrinsic properties of the systems investigated but can also be enhanced by artifacts resulting from experimental issues. Overall, it is shown that orthogonal distance regression (ODR) analysis of kinetic method data provides the optimum way of acquiring accurate thermodynamic information.
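
    A minimal sketch of an orthogonal distance regression fit, of the kind recommended above for kinetic-method plots, is shown below using scipy.odr; the straight-line model and the synthetic data with errors on both axes are illustrative assumptions, not kinetic-method measurements.

```python
import numpy as np
from scipy import odr

# Sketch of an orthogonal distance regression fit of a straight line, as used
# for kinetic-method plots where both axes carry experimental uncertainty.
# The data below are synthetic placeholders, not kinetic-method measurements.
rng = np.random.default_rng(4)
x_true = np.linspace(0.0, 10.0, 12)
y_true = 1.8 * x_true - 3.0
x_obs = x_true + rng.normal(0.0, 0.15, x_true.size)
y_obs = y_true + rng.normal(0.0, 0.40, y_true.size)

def line(beta, x):
    return beta[0] * x + beta[1]

model = odr.Model(line)
data = odr.RealData(x_obs, y_obs, sx=0.15, sy=0.40)   # uncertainties on both axes
fit = odr.ODR(data, model, beta0=[1.0, 0.0]).run()

print("slope, intercept:", fit.beta)
print("standard errors :", fit.sd_beta)
```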

  17. Full waveform inversion using a decomposed single frequency component from a spectrogram

    NASA Astrophysics Data System (ADS)

    Ha, Jiho; Kim, Seongpil; Koo, Namhyung; Kim, Young-Ju; Woo, Nam-Sub; Han, Sang-Mok; Chung, Wookeen; Shin, Sungryul; Shin, Changsoo; Lee, Jaejoon

    2018-06-01

    Although many full waveform inversion methods have been developed to construct velocity models of the subsurface, various approaches have been presented to obtain an inversion result with long-wavelength features even when the seismic data lack low-frequency components. In this study, a new full waveform inversion algorithm was proposed to recover a long-wavelength velocity model that reflects the inherent characteristics of each frequency component of the seismic data, using a single frequency component decomposed from the spectrogram. We utilized the wavelet transform method to obtain the spectrogram, and the decomposed signal from the spectrogram was used as the transformed data. The Gauss-Newton method with the diagonal elements of an approximate Hessian matrix was used to update the model parameters at each iteration. Based on the results of time-frequency analysis of the spectrogram, numerical tests with some decomposed frequency components were performed using a modified SEG/EAGE salt dome (A-A‧) line to demonstrate the feasibility of the proposed inversion algorithm. This demonstrated that a reasonable inverted velocity model with long-wavelength structures can be obtained using a single frequency component. It was also confirmed that when strong noise occurs in part of the frequency band, it is feasible to obtain a long-wavelength velocity model from the noisy data using a frequency component that is less affected by the noise. Finally, it was confirmed that the results obtained from the spectrogram inversion can be used as an initial velocity model in conventional inversion methods.
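
    The decomposition step alone can be sketched as follows: a single frequency component of a trace is extracted by convolution with a complex Morlet wavelet at the target frequency. The trace, sampling rate, and wavelet parameters are illustrative assumptions, and the inversion machinery itself (forward modelling, Gauss-Newton updates) is not reproduced.

```python
import numpy as np

# Sketch of the decomposition step only: extract one frequency component of a
# trace by convolution with a complex Morlet wavelet.  The inversion itself
# (forward modelling, Gauss-Newton model updates) is not reproduced.
fs = 500.0                                  # sampling rate (Hz), illustrative
t = np.arange(0.0, 2.0, 1.0 / fs)
trace = (np.sin(2 * np.pi * 5.0 * t) +      # low-frequency component
         0.7 * np.sin(2 * np.pi * 20.0 * t) +
         0.1 * np.random.default_rng(5).standard_normal(t.size))

def morlet_component(signal, fs, f0, cycles=6.0):
    """Complex Morlet filtering around f0; returns the analytic band signal."""
    sigma_t = cycles / (2 * np.pi * f0)
    tw = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw ** 2 / (2 * sigma_t ** 2))
    wavelet /= np.sum(np.abs(wavelet))
    return np.convolve(signal, wavelet, mode="same")

component_5hz = morlet_component(trace, fs, 5.0)
print("mean envelope of the 5 Hz component:", np.abs(component_5hz).mean())
```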

  18. Methods for evaluating tensile and compressive properties of plastic laminates reinforced with unwoven glass fibers

    Treesearch

    Karl Romstad

    1964-01-01

    Methods of obtaining strength and elastic properties of plastic laminates reinforced with unwoven glass fibers were evaluated using the criteria of the strength values obtained and the failure characteristics observed. Variables investigated were specimen configuration and the manner of supporting and loading the specimens. Results of this investigation indicate that...

  19. An analysis of burn-off impact on the structure microporous of activated carbons formation

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Mirosław; Kopac, Türkan

    2017-12-01

    The paper presents the results on the application of the LBET numerical method as a tool for analysis of the microporous structure of activated carbons obtained from a bituminous coal. The LBET method was employed particularly to evaluate the impact of the burn-off on the obtained microporous structure parameters of activated carbons.

  20. Internal rotation of 13 low-mass low-luminosity red giants in the Kepler field

    NASA Astrophysics Data System (ADS)

    Triana, S. A.; Corsaro, E.; De Ridder, J.; Bonanno, A.; Pérez Hernández, F.; García, R. A.

    2017-06-01

    Context. The Kepler space telescope has provided time series of red giants of such unprecedented quality that a detailed asteroseismic analysis becomes possible. For a limited set of about a dozen red giants, the observed oscillation frequencies obtained by peak-bagging together with the most recent pulsation codes allowed us to reliably determine the core/envelope rotation ratio. The results so far show that the current models are unable to reproduce the rotation ratios, predicting higher values than what is observed and thus indicating that an efficient angular momentum transport mechanism should be at work. Here we provide an asteroseismic analysis of a sample of 13 low-luminosity low-mass red giant stars observed by Kepler during its first nominal mission. These targets form a subsample of the 19 red giants studied previously, which not only have a large number of extracted oscillation frequencies, but also unambiguous mode identifications. Aims: We aim to extend the sample of red giants for which internal rotation ratios obtained by theoretical modeling of peak-bagged frequencies are available. We also derive the rotation ratios using different methods, and compare the results of these methods with each other. Methods: We built seismic models using a grid search combined with a Nelder-Mead simplex algorithm and obtained rotation averages employing Bayesian inference and inversion methods. We compared these averages with those obtained using a previously developed model-independent method. Results: We find that the cores of the red giants in this sample are rotating 5 to 10 times faster than their envelopes, which is consistent with earlier results. The rotation rates computed from the different methods show good agreement for some targets, while some discrepancies exist for others.

  1. A Method of Flight Measurement of Spins

    NASA Technical Reports Server (NTRS)

    Soule, Hartley A; Scudder, Nathan F

    1932-01-01

    A method is described involving the use of recording turn meters and accelerometers and a sensitive altimeter, by means of which all of the physical quantities necessary for the complete determination of the flight path, motion, attitude, forces, and couples of a fully developed spin can be obtained in flight. Data are given for several spins of two training type airplanes which indicate that the accuracy of the results obtained with the method is satisfactory.

  2. Theoretical analysis of incompressible flow through a radial-inlet centrifugal impeller at various weight flows

    NASA Technical Reports Server (NTRS)

    Kramer, James J; Prian, Vasily D; Wu, Chung-Hua

    1956-01-01

    A method for the solution of the incompressible nonviscous flow through a centrifugal impeller, including the inlet region, is presented. Several numerical solutions are obtained for four weight flows through an impeller at one operating speed. These solutions are refined in the leading-edge region. The results are presented in a series of figures showing streamlines and relative velocity contours. A comparison is made with the results obtained by using a rapid approximate method of analysis.

  3. Trident Technical College 1999 Graduate Follow-Up Report.

    ERIC Educational Resources Information Center

    Trident Technical Coll., Charleston, SC.

    Presents the results of South Carolina's Trident Technical College's (TTC's) 1999 graduate follow-up survey report. Graduates were surveyed and results were obtained for the following items: graduate goals, employment, placement rates, graduates in related fields, when job obtained, job finding methods, job locations, job satisfaction, job…

  4. Enhanced Detection of Surface-Associated Bacteria in Indoor Environments by Quantitative PCR

    PubMed Central

    Buttner, Mark P.; Cruz-Perez, Patricia; Stetzenbach, Linda D.

    2001-01-01

    Methods for detecting microorganisms on surfaces are needed to locate biocontamination sources and to relate surface and airborne concentrations. Research was conducted in an experimental room to evaluate surface sampling methods and quantitative PCR (QPCR) for enhanced detection of a target biocontaminant present on flooring materials. QPCR and culture analyses were used to quantitate Bacillus subtilis (Bacillus globigii) endospores on vinyl tile, commercial carpet, and new and soiled residential carpet with samples obtained by four surface sampling methods: a swab kit, a sponge swipe, a cotton swab, and a bulk method. The initial data showed that greater overall sensitivity was obtained with the QPCR than with culture analysis; however, the QPCR results for bulk samples from residential carpet were negative. The swab kit and the sponge swipe methods were then tested with two levels of background biological contamination consisting of Penicillium chrysogenum spores. The B. subtilis values obtained by the QPCR method were greater than those obtained by culture analysis. The differences between the QPCR and culture data were significant for the samples obtained with the swab kit for all flooring materials except soiled residential carpet and with the sponge swipe for commercial carpet. The QPCR data showed that there were no significant differences between the swab kit and sponge swipe sampling methods for any of the flooring materials. Inhibition of QPCR due solely to biological contamination of flooring materials was not evident. However, some degree of inhibition was observed with the soiled residential carpet, which may have been caused by the presence of abiotic contaminants, alone or in combination with biological contaminants. The results of this research demonstrate the ability of QPCR to enhance detection and enumeration of biocontaminants on surface materials and provide information concerning the comparability of currently available surface sampling methods. PMID:11375164

  5. Soot Volume Fraction Imaging

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Ku, Jerry C.

    1994-01-01

    A new technique is described for the full-field determination of soot volume fractions via laser extinction measurements. This technique differs from previously reported point-wise methods in that a two-dimensional array (i.e., image) of data is acquired simultaneously. In this fashion, the net data rate is increased, allowing the study of time-dependent phenomena and the investigation of spatial and temporal correlations. A telecentric imaging configuration is employed to provide depth-invariant magnification and to permit the specification of the collection angle for scattered light. To improve the threshold measurement sensitivity, a method is employed to suppress undesirable coherent imaging effects. A discussion of the tomographic inversion process is provided, including the results obtained from numerical simulation. Results obtained with this method from an ethylene diffusion flame are shown to be in close agreement with those previously obtained by sequential point-wise interrogation.

  6. Comparison of the results of refractometric measurements in the process of diffusion, obtained by means of the backgroundoriented schlieren method and the holographic interferometry method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraiskii, A V; Mironova, T V

    2015-08-31

    The results of the study of interdiffusion of two liquids, obtained using the holographic recording scheme with a nonstationary reference wave with the frequency linearly varying in space and time, are compared with the results of correlation processing of digital photographs made with a random background screen. The spatio-temporal behaviour of the signal in four basic representations ('space – temporal frequency', 'space – time', 'spatial frequency – temporal frequency' and 'spatial frequency – time') is found in the holographic experiment and calculated (in the appropriate coordinates) based on the background-oriented schlieren method. Practical coincidence of the results of the correlation analysis and the holographic double-exposure interferometry is demonstrated.

  7. Innovative application of the moisture analyzer for determination of dry mass content of processed cheese

    NASA Astrophysics Data System (ADS)

    Kowalska, Małgorzata; Janas, Sławomir; Woźniak, Magdalena

    2018-04-01

    The aim of this work was the presentation of an alternative method for determination of the total dry mass content in processed cheese. The authors claim that the presented method can be used in industrial quality control laboratories for routine testing and for quick in-process control. For the tests, both the reference method for determination of dry mass in processed cheese and the moisture analyzer method were used. The tests were carried out for three different kinds of processed cheese. In accordance with the reference method, the sample was placed on a layer of silica sand and dried at a temperature of 102 °C for about 4 h. The moisture analyzer test required method validation with regard to the drying temperature range and the mass of the analyzed sample. An optimum drying temperature of 110 °C was determined experimentally. For the Hochland cream processed cheese sample, the total dry mass content obtained using the reference method was 38.92%, whereas using the moisture analyzer method it was 38.74%. The average analysis time for the moisture analyzer method was 9 min. For the sample of processed cheese with tomatoes, the reference method result was 40.37% and the alternative method result was 40.67%. For the sample of cream processed cheese with garlic, the reference method gave a value of 36.88% and the alternative method 37.02%. The average time of those determinations was 16 min. The results obtained confirmed that the use of the moisture analyzer is effective: consistent values of dry mass content were obtained with both methods. According to the authors, the fact that the measurement takes far less time with the moisture analyzer is a key criterion in selecting a method for in-process control and final quality control.

  8. A novel approach to the experimental study on methane/steam reforming kinetics using the Orthogonal Least Squares method

    NASA Astrophysics Data System (ADS)

    Sciazko, Anna; Komatsu, Yosuke; Brus, Grzegorz; Kimijima, Shinji; Szmyd, Janusz S.

    2014-09-01

    For a mathematical model based on the results of physical measurements, it becomes possible to determine their influence on the final solution and its accuracy. However, in classical approaches, the influence of different model simplifications on the reliability of the obtained results is usually not comprehensively discussed. This paper presents a novel approach to the study of methane/steam reforming kinetics based on an advanced methodology called the Orthogonal Least Squares method. The kinetics of the reforming process published earlier are divergent among themselves. To obtain the most probable values of the kinetic parameters and enable direct and objective model verification, an appropriate calculation procedure needs to be proposed. The applied Generalized Least Squares (GLS) method incorporates all the experimental results into the mathematical model, which becomes internally contradictory (overdetermined), as the number of equations is greater than the number of unknown variables. The GLS method is adopted to select the most probable values of the results and simultaneously determine the uncertainty coupled with all the variables in the system. In this paper, the reaction rate was evaluated after its pre-determination by a preliminary calculation based on the experimental results obtained over a nickel/yttria-stabilized zirconia catalyst.
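
    The core least-squares step can be illustrated with the hedged sketch below, in which an overdetermined linear system with per-measurement uncertainties is solved through weighted normal equations; the design matrix and "data" are synthetic placeholders and the full GLS uncertainty propagation of the paper is not reproduced.

```python
import numpy as np

# Sketch of the core least-squares step: an overdetermined linear system
# (more measurements than unknowns) solved with weights given by the
# measurement uncertainties.  It is not the full GLS treatment of the paper,
# and the data below are synthetic placeholders.
rng = np.random.default_rng(6)
n_meas, n_par = 30, 2
X = np.column_stack([np.ones(n_meas), np.linspace(1.0, 2.0, n_meas)])  # design
beta_true = np.array([0.5, 1.2])
sigma = rng.uniform(0.02, 0.10, n_meas)                  # per-point uncertainty
y = X @ beta_true + rng.normal(0.0, sigma)

W = np.diag(1.0 / sigma ** 2)                            # weight matrix
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)     # weighted normal eqs
cov_beta = np.linalg.inv(X.T @ W @ X)                    # parameter covariance

print("estimated parameters :", beta_hat)
print("parameter std. errors:", np.sqrt(np.diag(cov_beta)))
```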

  9. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.

  10. Simulation Analysis of Helicopter Ground Resonance Nonlinear Dynamics

    NASA Astrophysics Data System (ADS)

    Zhu, Yan; Lu, Yu-hui; Ling, Ai-min

    2017-07-01

    In order to accurately predict the dynamic instability of helicopter ground resonance, a modeling and simulation method for helicopter ground resonance considering the nonlinear dynamic characteristics of components (rotor lead-lag damper, landing gear wheel and absorber) is presented. A numerical integration method is used to calculate the transient responses of the body and rotor under a simulated disturbance. To quantify the instabilities, a Fast Fourier Transform (FFT) is used to estimate the modal frequencies, and the moving rectangular window method is employed to predict the modal damping from the response time history. Simulation results show that the ground resonance simulation test accurately reproduces the blade lead-lag regressive mode frequency, and that the modal damping obtained from the attenuation curves is close to the test results. The simulation test results are in accordance with the actual accident situation and prove the correctness of the simulation method. This analysis method, used for ground resonance simulation tests, can give results that accord with real helicopter engineering tests.
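
    A minimal sketch of the post-processing described above is given below: the modal frequency is taken from the FFT peak of a simulated decaying transient and the damping ratio from the logarithmic decrement of successive peaks. The synthetic response and modal parameters are illustrative stand-ins for a lead-lag transient.

```python
import numpy as np

# Sketch of the post-processing only: modal frequency from the FFT peak of a
# simulated decaying transient, damping ratio from the logarithmic decrement.
# The synthetic response below stands in for a lead-lag transient.
fs, T = 200.0, 10.0
t = np.arange(0.0, T, 1.0 / fs)
f_n, zeta = 3.0, 0.02                       # illustrative modal parameters
f_d = f_n * np.sqrt(1.0 - zeta ** 2)
resp = np.exp(-2 * np.pi * f_n * zeta * t) * np.sin(2 * np.pi * f_d * t)

# Modal frequency from the FFT peak.
spec = np.abs(np.fft.rfft(resp))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
print("FFT frequency estimate:", freqs[np.argmax(spec)])

# Damping from the logarithmic decrement between successive positive peaks.
peaks = [i for i in range(1, t.size - 1)
         if resp[i] > resp[i - 1] and resp[i] > resp[i + 1] and resp[i] > 0]
delta = np.mean(np.log(resp[peaks[:-1]] / resp[peaks[1:]]))
print("damping ratio estimate:", delta / np.sqrt(4 * np.pi ** 2 + delta ** 2))
```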

  11. Reduction of speckle noise from optical coherence tomography images using multi-frame weighted nuclear norm minimization method

    NASA Astrophysics Data System (ADS)

    Thapa, Damber; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2015-12-01

    In this paper, we propose a speckle noise reduction method for spectral-domain optical coherence tomography (SD-OCT) images called multi-frame weighted nuclear norm minimization (MWNNM). This method is a direct extension of weighted nuclear norm minimization (WNNM) to the multi-frame framework, since an adequately denoised image could not be achieved with single-frame denoising methods. The MWNNM method exploits multiple B-scans collected from a small area of an SD-OCT volumetric image, and then denoises and averages them together to obtain a high signal-to-noise-ratio B-scan. The results show that the image quality metrics obtained by denoising and averaging only five nearby B-scans with the MWNNM method are considerably better than those of the average image obtained by registering and averaging 40 azimuthally repeated B-scans.
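
    The core operator can be sketched as a weighted singular-value shrinkage of a patch matrix, as below; the weighting rule shown is one common choice and is offered as an illustration rather than the exact MWNNM formulation, and the multi-frame patch grouping and registration steps are not reproduced.

```python
import numpy as np

def weighted_sv_shrink(Y, sigma_noise, C=2.0, eps=1e-8):
    """Weighted singular-value shrinkage of a patch matrix Y: singular values
    that are likely noise get large weights and are suppressed, dominant ones
    are kept almost unchanged.  A generic sketch, not the exact published
    MWNNM weighting or its multi-frame patch grouping."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    n = Y.shape[1]
    s_clean = np.sqrt(np.maximum(s ** 2 - n * sigma_noise ** 2, 0.0))
    weights = C * np.sqrt(n) * sigma_noise ** 2 / (s_clean + eps)
    return (U * np.maximum(s - weights, 0.0)) @ Vt

# Illustrative use on a synthetic low-rank "group of similar patches".
rng = np.random.default_rng(7)
clean = rng.standard_normal((64, 3)) @ rng.standard_normal((3, 40))
noisy = clean + 0.5 * rng.standard_normal(clean.shape)
shrunk = weighted_sv_shrink(noisy, sigma_noise=0.5)

print("singular values before:", np.round(np.linalg.svd(noisy, compute_uv=False)[:6], 1))
print("singular values after :", np.round(np.linalg.svd(shrunk, compute_uv=False)[:6], 1))
```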

  12. Comparison of Quantitative Antifungal Testing Methods for Textile Fabrics.

    PubMed

    Imoto, Yasuo; Seino, Satoshi; Nakagawa, Takashi; Yamamoto, Takao A

    2017-01-01

     Quantitative antifungal testing methods for textile fabrics under growth-supportive conditions were studied. Fungal growth activities on unfinished textile fabrics and textile fabrics modified with Ag nanoparticles were investigated using the colony counting method and the luminescence method. Morphological changes of the fungi during incubation were investigated by microscopic observation. Comparison of the results indicated that the fungal growth activity values obtained with the colony counting method depended on the morphological state of the fungi on textile fabrics, whereas those obtained with the luminescence method did not. Our findings indicated that unique characteristics of each testing method must be taken into account for the proper evaluation of antifungal activity.

  13. Simplified adsorption method for detection of antibodies to Candida albicans germ tubes.

    PubMed Central

    Ponton, J; Quindos, G; Arilla, M C; Mackenzie, D W

    1994-01-01

    Two modifications that simplify and shorten a method for adsorption of the antibodies against the antigens expressed on both blastospore and germ tube cell wall surfaces (methods 2 and 3) were compared with the original method of adsorption (method 1) to detect anti-Candida albicans germ tube antibodies in 154 serum specimens. Adsorption of the sera by both modified methods resulted in titers very similar to those obtained by the original method. Only 5.2% of serum specimens tested by method 2 and 5.8% of serum specimens tested by method 3 presented discrepancies of more than one dilution in the titers with respect to the titer observed by method 1. When a test based on method 2 was evaluated with sera from patients with invasive candidiasis, the best discriminatory results (sensitivity, 84.6%; specificity, 87.9%; positive predictive value, 75.9%; negative predictive value, 92.7%; efficiency, 86.9%) were obtained when a titer of > or = 1:160 was considered positive. PMID:8126184

  14. Effect of postmortem sampling technique on the clinical significance of autopsy blood cultures.

    PubMed

    Hove, M; Pencil, S D

    1998-02-01

    Our objective was to investigate the value of postmortem autopsy blood cultures performed with an iodine-subclavian technique relative to the classical method of atrial heat searing and antemortem blood cultures. The study consisted of a prospective autopsy series with each case serving as its own control relative to subsequent testing, and a retrospective survey of patients coming to autopsy who had both autopsy blood cultures and premortem blood cultures. A busy academic autopsy service (600 cases per year) at University of Texas Medical Branch Hospitals, Galveston, Texas, served as the setting for this work. The incidence of non-clinically relevant (false-positive) culture results were compared using different methods for collecting blood samples in a prospective series of 38 adult autopsy specimens. One hundred eleven adult autopsy specimens in which both postmortem and antemortem blood cultures were obtained were studied retrospectively. For both studies, positive culture results were scored as either clinically relevant or false positives based on analysis of the autopsy findings and the clinical summary. The rate of false-positive culture results obtained by an iodine-subclavian technique from blood drawn soon after death were statistically significantly lower (13%) than using the classical method of obtaining blood through the atrium after heat searing at the time of the autopsy (34%) in the same set of autopsy subjects. When autopsy results were compared with subjects' antemortem blood culture results, there was no significant difference in the rate of non-clinically relevant culture results in a paired retrospective series of antemortem blood cultures and postmortem blood cultures using the iodine-subclavian postmortem method (11.7% v 13.5%). The results indicate that autopsy blood cultures obtained using the iodine-subclavian technique have reliability equivalent to that of antemortem blood cultures.

  15. Different spectrophotometric methods applied for the analysis of simeprevir in the presence of its oxidative degradation product: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; El-Abasawi, Nasr M.; El-Olemy, Ahmed; Serag, Ahmed

    2018-02-01

    Five simple spectrophotometric methods were developed for the determination of simeprevir in the presence of its oxidative degradation product, namely ratio difference, mean centering, derivative ratio using Savitzky-Golay filters, second derivative and continuous wavelet transform. These methods are linear in the range of 2.5-40 μg/mL and were validated according to the ICH guidelines. The results obtained for accuracy, repeatability and precision were found to be within the acceptable limits. The specificity of the proposed methods was tested using laboratory-prepared mixtures and assessed by applying the standard addition technique. Furthermore, these methods were statistically comparable to an RP-HPLC method and good results were obtained. They can therefore be used for the routine analysis of simeprevir in quality-control laboratories.
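
    As a generic illustration of the derivative-based processing such methods rely on (not the paper's actual data or settings), the following Python sketch builds a ratio spectrum and differentiates it with a Savitzky-Golay filter; the spectra, wavelength grid, window length and polynomial order are all assumptions.

```python
# Minimal sketch of a derivative-ratio step for two overlapping components.
# Spectra, wavelength grid and filter settings are illustrative assumptions.
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.linspace(200, 400, 401)                      # nm, 0.5 nm step (assumed)
mixture   = np.exp(-((wavelengths - 280) / 15) ** 2) \
          + 0.6 * np.exp(-((wavelengths - 300) / 20) ** 2)    # analyte + degradant (synthetic)
degradant = np.exp(-((wavelengths - 300) / 20) ** 2)          # divisor spectrum (synthetic)

ratio = mixture / degradant                                   # ratio spectrum
deriv_ratio = savgol_filter(ratio, window_length=11, polyorder=3, deriv=1, delta=0.5)

# In a real calibration, the peak amplitude of deriv_ratio would be regressed
# against the analyte concentration over the linear range.
print(deriv_ratio.max())
```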

  16. A Comparison of the Kernel Equating Method with Traditional Equating Methods Using SAT[R] Data

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2008-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT[R] data. The KE results were compared to the results obtained from analogous traditional equating methods in both scenarios. The results indicate that KE results…

  17. Trident Technical College 1998 Graduate Follow-Up.

    ERIC Educational Resources Information Center

    Trident Technical Coll., Charleston, SC.

    Presents the results of South Carolina's Trident Technical College's (TTC's) 1998 graduate follow-up survey report of 915 TTC graduates. Graduates were surveyed and results were obtained for the following items: graduate goals, employment, placement rates, graduates in related fields, when jobs were obtained, job finding methods, job locations, job…

  18. Horizontal-to-vertical spectral ratio variability in the presence of permafrost

    NASA Astrophysics Data System (ADS)

    Kula, Damian; Olszewska, Dorota; Dobiński, Wojciech; Glazer, Michał

    2018-07-01

    Due to fluctuations in the thickness of the permafrost active layer, there exists a seasonal seismic impedance contrast in the permafrost table. The horizontal-to-vertical spectral ratio (HVSR) method is commonly used to estimate the resonant frequency of sedimentary layers on top of bedrock. Results obtained using this method are thought to be stable in time. The aim of the study is to verify whether seasonal variability in the permafrost active layer influences the results of the HVSR method. The research area lies in the direct vicinity of the Polish Polar Station, Hornsund, which is located in Southern Spitsbergen, Svalbard. Velocity models of the subsurface are obtained using the HVSR method, which are juxtaposed with electrical resistivity tomography profiles conducted near the seismic station. Survey results indicate that the active layer of permafrost has a major influence on the high-frequency section of the HVSR results. In addition, the depth of the permafrost table inferred using the HVSR method is comparable to the depth visible in electrical resistivity tomography results. This study proves that, in certain conditions, the HVSR method results vary seasonally, which must be taken into account in their interpretation.
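
    For readers unfamiliar with the technique, the sketch below shows the core H/V computation on a single three-component window; real processing adds window selection, smoothing and averaging over many windows, and the merging of horizontals shown here (quadratic mean) is just one common convention, not necessarily the one used in this study.

```python
# Minimal HVSR sketch: spectral ratio of the merged horizontal components to the vertical.
# Tapering choices, smoothing and window averaging used in practice are omitted here.
import numpy as np

def hvsr(north, east, vertical, fs):
    """Return frequencies and the H/V spectral ratio for one time window."""
    n = len(vertical)
    window = np.hanning(n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec_n = np.abs(np.fft.rfft(north * window))
    spec_e = np.abs(np.fft.rfft(east * window))
    spec_v = np.abs(np.fft.rfft(vertical * window))
    horizontal = np.sqrt(0.5 * (spec_n ** 2 + spec_e ** 2))   # quadratic mean of horizontals
    return freqs[1:], horizontal[1:] / spec_v[1:]             # drop the zero-frequency bin

# Synthetic example; at a real site the resonant frequency appears as a stable
# peak of the ratio averaged over many such windows.
rng = np.random.default_rng(0)
f, ratio = hvsr(rng.normal(size=4096), rng.normal(size=4096), rng.normal(size=4096), fs=100.0)
```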

  19. Usefulness and limitations of various guinea-pig test methods in detecting human skin sensitizers-validation of guinea-pig tests for skin hypersensitivity.

    PubMed

    Marzulli, F; Maguire, H C

    1982-02-01

    Several guinea-pig predictive test methods were evaluated by comparison of results with those obtained with human predictive tests, using ten compounds that have been used in cosmetics. The method involves the statistical analysis of the frequency with which guinea-pig tests agree with the findings of tests in humans. In addition, the frequencies of false positive and false negative predictive findings are considered and statistically analysed. The results clearly demonstrate the superiority of adjuvant tests (complete Freund's adjuvant) in determining skin sensitizers and the overall superiority of the guinea-pig maximization test in providing results similar to those obtained by human testing. A procedure is suggested for utilizing adjuvant and non-adjuvant test methods for characterizing compounds as of weak, moderate or strong sensitizing potential.

  20. Comparison of the Calculations Results of Heat Exchange Between a Single-Family Building and the Ground Obtained with the Quasi-Stationary and 3-D Transient Models. Part 2: Intermittent and Reduced Heating Mode

    NASA Astrophysics Data System (ADS)

    Staszczuk, Anna

    2017-03-01

    The paper provides comparative results of calculations of heat exchange between the ground and typical residential buildings using simplified (quasi-stationary) and more accurate (transient, three-dimensional) methods. Characteristics such as the building's geometry, basement hollow and the construction of ground-contacting assemblies were considered, including intermittent and reduced heating modes. The calculations with the simplified methods were conducted in accordance with the currently valid norm PN-EN ISO 13370:2008, Thermal performance of buildings. Heat transfer via the ground. Calculation methods. Comparative estimates concerning transient, 3-D heat flow were performed with the computer software WUFI®plus. The analysis quantifies the differences in heat exchange obtained with the more exact and the simplified methods.

  1. Representing ductile damage with the dual domain material point method

    DOE PAGES

    Long, C. C.; Zhang, D. Z.; Bronkhorst, C. A.; ...

    2015-12-14

    In this study, we incorporate a ductile damage material model into a computational framework based on the Dual Domain Material Point (DDMP) method. As an example, simulations of a flyer plate experiment involving ductile void growth and material failure are performed. The results are compared with experiments performed on high purity tantalum. We also compare the numerical results obtained from the DDMP method with those obtained from the traditional Material Point Method (MPM). Effects of an overstress model, artificial viscosity, and physical viscosity are investigated. Our results show that a physical bulk viscosity and overstress model are important in this impact and failure problem, while physical shear viscosity and artificial shock viscosity have negligible effects. A simple numerical procedure with guaranteed convergence is introduced to solve for the equilibrium plastic state from the ductile damage model.

  2. Microsurgery within reconstructive surgery of extremities.

    PubMed

    Pheradze, I; Pheradze, T; Tsilosani, G; Goginashvili, Z; Mosiava, T

    2006-05-01

    Reconstructive surgery of extremities is an object of special attention among surgeons. Damage to vessels and nerves and deficiency of soft tissue and bone, often associated with infection, result in a complete loss of extremity function and raise the question of amputation. The goal of the study was to improve the role of microsurgery in the reconstructive surgery of limbs. We operated on 294 patients with various diseases and injuries of the extremities: pathology of nerves and vessels, and tissue loss. An original method for the treatment of large simultaneous functional defects of limbs was used. Good functional and aesthetic results were obtained. The results of reconstructive operations on extremities may be improved by using microsurgical methods. Microsurgery is deemed the method of choice for reconstructive surgery of the extremities, since the outcomes achieved with microsurgical techniques significantly surpass those obtained with routine surgical methods.

  3. Comparison between refractometer and retinoscopy in determining refractive errors in children--false doubt.

    PubMed

    Pokupec, Rajko; Mrazovac, Danijela; Popović-Suić, Smiljka; Mrazovac, Visnja; Kordić, Rajko; Petricek, Igor

    2013-04-01

    Early detection of a refractive error and its correction are extremely important for the prevention of amblyopia (poor vision). The gold standard in the detection of refractive errors is retinoscopy, a method in which the pupils are dilated in order to exclude accommodation. This results in a more accurate measurement of a refractive error. An automatic computer refractometer is also in use. The study included 30 patients (15 boys and 15 girls) aged 4-16 years. The first examination was conducted with the refractometer on narrow pupils. Retinoscopy, followed by another examination with the refractometer, was performed on pupils dilated with mydriatic drops administered three times. The results obtained with the three methods were compared. They indicate that with narrow pupils the autorefractometer gave an increased diopter value in nearsightedness (myopia), i.e. a minus overcorrection, whereas findings obtained with retinoscopy and with the autorefractometer in mydriasis (cycloplegia) were much more accurate. The results were statistically processed, which confirmed the differences between the obtained measurements. These findings are consistent with the results of studies conducted by other authors. Automatic refractometry on narrow pupils has proven to be a method for the detection of refractive errors in children; however, the exact value of the refractive error is obtained only in mydriasis, with retinoscopy or an automatic refractometer on dilated pupils.

  4. Dirac Particles' Hawking Radiation from a Schwarzschild Black Hole

    NASA Astrophysics Data System (ADS)

    He, Xiao-Kai; Liu, Wen-Biao

    2007-08-01

    Considering energy conservation and the backreaction of particles on spacetime, we investigate the Hawking radiation of massless and massive Dirac particles from a Schwarzschild black hole. The exact expression for the emission rate near the horizon is obtained, and the result indicates that the Hawking radiation spectrum is not purely thermal. The result is consistent with previous results; it satisfies the underlying unitary theory and offers a possible mechanism to explain the information loss paradox. Moreover, the improved Damour-Ruffini method is more concise and understandable.

  5. The use of portable equipment for the activity concentration index determination of building materials: method validation and survey of building materials on the Belgian market.

    PubMed

    Stals, M; Verhoeven, S; Bruggeman, M; Pellens, V; Schroeyers, W; Schreurs, S

    2014-01-01

    The Euratom BSS requires that in the near future (2015) building materials for application in dwellings or in buildings such as offices or workshops be screened for NORM nuclides. The screening tool is the activity concentration index (ACI). It is therefore expected that a large number of building materials will be screened for NORM and thus require ACI determination. At present, the proposed standard for the determination of the building material ACI is a laboratory analysis technique using high-purity germanium spectrometry with a 21-day equilibrium delay. In this paper, the B-NORM method for the determination of the building material ACI is assessed as a faster method that can be performed on-site, as an alternative to the aforementioned standard method. The B-NORM method utilizes a LaBr3(Ce) scintillation probe to obtain the spectral data. Commercially available software was applied to comprehensively take into account the factors determining the counting efficiency. The ACI was determined by interpreting the gamma spectrum from (226)Ra and its progeny, (232)Th progeny and (40)K. In order to assess the accuracy of the B-NORM method, a large selection of samples was analyzed by a certified laboratory and the results were compared with the B-NORM results. The results obtained with the B-NORM method correlated well with the results obtained by the certified laboratory, indicating that the B-NORM method is an appropriate screening method to assess the building material ACI. The B-NORM method was applied to analyze more than 120 building materials on the Belgian market. No building materials exceeding the proposed reference level of 1 mSv/year were encountered. Copyright © 2013 Elsevier Ltd. All rights reserved.
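
    The activity concentration index mentioned here is the Euratom BSS screening index computed from the Ra-226, Th-232 and K-40 activity concentrations; a minimal sketch of that arithmetic follows (the example activities are invented, and the denominators are those of the BSS screening formula).

```python
# Activity concentration index (ACI) screening formula from the Euratom BSS:
#   I = C_Ra226/300 + C_Th232/200 + C_K40/3000, with activities in Bq/kg.
def activity_concentration_index(c_ra226, c_th232, c_k40):
    return c_ra226 / 300.0 + c_th232 / 200.0 + c_k40 / 3000.0

# Invented example values; a material passes the screening level when I <= 1.
print(activity_concentration_index(c_ra226=40.0, c_th232=30.0, c_k40=400.0))  # about 0.42
```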

  6. Analysis of Discontinuities in a Rectangular Waveguide Using Dyadic Green's Function Approach in Conjunction with Method of Moments

    NASA Technical Reports Server (NTRS)

    Deshpande, M. D.

    1997-01-01

    The dyadic Green's function for an electric current source placed in a rectangular waveguide is derived using a magnetic vector potential approach. A complete solution for the electric and magnetic fields, including at the source location, is obtained by simple differentiation of the vector potential around the source location. This simple differentiation approach, which gives electric and magnetic fields identical to an earlier derivation, was overlooked by earlier workers in the derivation of the dyadic Green's function, particularly around the source location. Numerical results obtained using the Green's function approach are compared with the results obtained using the Finite Element Method (FEM).

  7. Fast batch injection analysis system for on-site determination of ethanol in gasohol and fuel ethanol.

    PubMed

    Pereira, Polyana F; Marra, Mariana C; Munoz, Rodrigo A A; Richter, Eduardo M

    2012-02-15

    A simple, accurate and fast (180 injections h⁻¹) batch injection analysis (BIA) system with multiple-pulse amperometric detection has been developed for the selective determination of ethanol in gasohol and fuel ethanol. A sample aliquot (100 μL) was directly injected onto a gold electrode immersed in 0.5 mol L⁻¹ NaOH solution (the only reagent). The proposed BIA method requires minimal sample manipulation and can easily be used for on-site analysis. The results obtained with the BIA method were compared to those obtained by gas chromatography and similar results were obtained (at the 95% confidence level). Published by Elsevier B.V.

  8. Structure and performance of polymer-derived bulk ceramics determined by method of filler incorporation

    NASA Astrophysics Data System (ADS)

    Konegger, T.; Schneider, P.; Bauer, V.; Amsüss, A.; Liersch, A.

    2013-12-01

    The effect of four distinct methods of incorporating fillers into a preceramic polymer matrix was investigated with respect to the structural and mechanical properties of the resulting materials. Investigations were conducted with a polysiloxane/Al2O3/ZrO2 model system used as a precursor for mullite/ZrO2 composites. A quantitative evaluation of the uniformity of filler distribution was obtained by employing a novel image analysis procedure. While solvent-free mixing led to a heterogeneous distribution of constituents resulting in limited mechanical property values, a strong improvement in material homogeneity and properties was obtained by using solvent-assisted methods. The results demonstrate the importance of the processing route for the final characteristics of polymer-derived ceramics.

  9. Utilization of Shrimp Skin Waste (Sea Lobster) As Raw Material for the Membrane Filtration

    NASA Astrophysics Data System (ADS)

    Nyoman Rupiasih, Ni; Sumadiyasa, Made; Suyanto, Hery; Windari, Putri

    2017-05-01

    In view of the increasing littering of sea banks by shells of crustaceans, a study was carried out to investigate the extraction and characterization of chitosan from the skin waste of a sea lobster, the 'Bamboo Lobster' (Panulirus versicolor). Chitosan was extracted using conventional methods comprising pretreatment, demineralization, deproteinization, and deacetylation. The results showed that the degree of deacetylation of the chitosan obtained was 70.02%. The FTIR spectrum of the chitosan showed a characteristic -NH2 band at 3447 cm⁻¹ and a carbonyl group band at 1655 cm⁻¹. This chitosan was used to prepare a membrane. A 2% chitosan membrane was prepared using the phase-inversion method with precipitation by solvent evaporation. The membranes were characterized by FTIR spectrophotometry, by a Nova 1200e analyzer using the BJH method, and by filtration. The results show that the thickness of the membrane is about 134 μm. The FTIR spectra show that the functional groups present in the membrane are -NH, -CH, C=O, and -OH. The BJH method gave a pore diameter of 3.382 nm and a pore density of 8.95 × 10⁵ pores/m³. The filtration method gave pure water fluxes (PWF) of 386.662 and 489.627 l/m²·h at pressures of 80-85 kPa and 90-100 kPa, respectively. These results show that the skin waste of sea lobster can serve as a raw material for preparing chitosan membranes. The membrane obtained belongs to the mesoporous group and may be used in microfiltration processes.

  10. High Resolution X-Ray Diffraction of Macromolecules with Synchrotron Radiation

    NASA Technical Reports Server (NTRS)

    Stojanoff, Vivian; Boggon, Titus; Helliwell, John R.; Judge, Russell; Olczak, Alex; Snell, Edward H.; Siddons, D. Peter; Rose, M. Franklin (Technical Monitor)

    2000-01-01

    We recently combined synchrotron-based monochromatic X-ray diffraction topography methods with triple-axis diffractometry and rocking curve measurements: high-resolution X-ray diffraction imaging techniques, to better understand the quality of protein crystals. We discuss these methods in the light of results obtained on crystals grown under different conditions. These non-destructive techniques are powerful tools in the characterization of protein crystals and will ultimately make it possible to improve, develop, and understand protein crystal growth. High-resolution X-ray diffraction imaging methods will be discussed in detail in light of recent results obtained on Hen Egg White Lysozyme crystals and other proteins.

  11. Development of seismic fragility curves for low-rise masonry infilled reinforced concrete buildings by a coefficient-based method

    NASA Astrophysics Data System (ADS)

    Su, Ray Kai Leung; Lee, Chien-Liang

    2013-06-01

    This study presents a seismic fragility analysis and ultimate spectral displacement assessment of regular low-rise masonry infilled (MI) reinforced concrete (RC) buildings using a coefficient-based method. The coefficient-based method does not require a complicated finite element analysis; instead, it is a simplified procedure for assessing the spectral acceleration and displacement of buildings subjected to earthquakes. A regression analysis was first performed to obtain the best-fitting equations for the inter-story drift ratio (IDR) and the period shift factor of low-rise MI RC buildings in response to the peak ground acceleration of earthquakes, using published results obtained from shaking table tests. Both spectral acceleration- and spectral displacement-based fragility curves under various damage states (in terms of IDR) were then constructed using the coefficient-based method. Finally, the spectral displacements of low-rise MI RC buildings at the ultimate (or near-collapse) state obtained from this paper and the literature were compared. The simulation results indicate that the fragility curves obtained from this study and previous work correspond well. Furthermore, most of the spectral displacements of low-rise MI RC buildings at the ultimate state from the literature fall within the bounded spectral displacements predicted by the coefficient-based method.

  12. Sub-Pixel Extraction of Laser Stripe Center Using an Improved Gray-Gravity Method †

    PubMed Central

    Li, Yuehua; Zhou, Jingbo; Huang, Fengshan; Liu, Lijian

    2017-01-01

    Laser stripe center extraction is a key step for the profile measurement of line structured light sensors (LSLS). To accurately obtain the center coordinates at sub-pixel level, an improved gray-gravity method (IGGM) was proposed. Firstly, the center points of the stripe were computed using the gray-gravity method (GGM) for all columns of the image. By fitting these points using the moving least squares algorithm, the tangential vector, the normal vector and the radius of curvature can be robustly obtained. One rectangular region could be defined around each of the center points. Its two sides that are parallel to the tangential vector could alter their lengths according to the radius of the curvature. After that, the coordinate for each center point was recalculated within the rectangular region and in the direction of the normal vector. The center uncertainty was also analyzed based on the Monte Carlo method. The obtained experimental results indicate that the IGGM is suitable for both the smooth stripes and the ones with sharp corners. The high accuracy center points can be obtained at a relatively low computation cost. The measured results of the stairs and the screw surface further demonstrate the effectiveness of the method. PMID:28394288
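
    The column-wise gray-gravity step that the IGGM refines is simply an intensity-weighted centroid per image column; the sketch below shows that baseline step only (the threshold is an assumed placeholder, and the moving-least-squares refitting and recomputation along the local normal described in the paper are not reproduced).

```python
# Baseline gray-gravity method (GGM): for each image column, the stripe center is the
# intensity-weighted centroid of the row coordinates. The IGGM then refits these points
# and recomputes centers along the local normal direction, which is not shown here.
import numpy as np

def gray_gravity_centers(image, threshold=10.0):
    rows = np.arange(image.shape[0], dtype=float)
    centers = np.full(image.shape[1], np.nan)
    for col in range(image.shape[1]):
        weights = image[:, col].astype(float)
        weights[weights < threshold] = 0.0        # suppress background (assumed threshold)
        total = weights.sum()
        if total > 0.0:
            centers[col] = (rows * weights).sum() / total
    return centers                                # sub-pixel row coordinate per column
```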

  13. Design of A Cyclone Separator Using Approximation Method

    NASA Astrophysics Data System (ADS)

    Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee

    2017-12-01

    A separator is a device installed in industrial applications to separate mixed objects. The separator of interest in this research is a cyclone type, which is used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is the collection efficiency, which in this study is predicted by performing CFD (Computational Fluid Dynamics) analysis. This research defines six shape design variables to maximize the collection efficiency; thus, the collection efficiency is set up as the objective function in the optimization process. Since the CFD analysis requires a lot of calculation time, it is impractical to obtain the optimal solution by directly coupling a gradient-based optimization algorithm to it. Thus, two approximation methods are introduced to obtain an optimum design. In this process, an L18 orthogonal array is adopted as the DOE method, and the kriging interpolation method is adopted to generate the metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted using the commercial CFD software ANSYS-CFX.
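
    A kriging metamodel of the kind described can be sketched with a Gaussian-process regressor; the design matrix, efficiency values and kernel below are placeholders, not the study's L18 design, CFD results or kriging settings.

```python
# Kriging (Gaussian-process) surrogate over DOE samples, standing in for the
# expensive CFD evaluations of collection efficiency. All data are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(18, 6))      # 18 runs x 6 normalized shape variables (assumed)
y = rng.uniform(0.80, 0.95, size=18)         # collection efficiencies (placeholder values)

kernel = ConstantKernel() * RBF(length_scale=np.ones(6))
surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# The surrogate can now be queried cheaply inside an optimizer instead of running CFD.
candidate = rng.uniform(0.0, 1.0, size=(1, 6))
eff_mean, eff_std = surrogate.predict(candidate, return_std=True)
```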

  14. Standardization of ¹³¹I: implementation of CIEMAT/NIST method at BARC, India.

    PubMed

    Kulkarni, D B; Anuradha, R; Reddy, P J; Joseph, Leena

    2011-10-01

    The CIEMAT/NIST efficiency tracing method using a ³H standard was implemented at the Radiation Safety Systems Division, Bhabha Atomic Research Centre (BARC), for the standardization of a ¹³¹I radioactive solution. Measurements were also carried out using the 4π β-γ coincidence counting system maintained as a primary standard at the laboratory. The implementation of the CIEMAT/NIST method was verified by comparing the activity concentration obtained in the laboratory with the average value of the APMP intercomparison (Yunoki et al., in progress, (APMP.RI(II)-K2.I-131)). The results obtained by the laboratory are linked to the CIPM Key Comparison Reference Value (KCRV) through the equivalent activity value of the National Metrology Institute of Japan (NMIJ) (Yunoki et al., in progress, (APMP.RI(II)-K2.I-131)), which was the pilot laboratory for the intercomparison. The procedure employed to standardize ¹³¹I by the CIEMAT/NIST efficiency tracing technique is presented. The activity concentrations obtained have been normalized to the activity concentration measured by NMIJ to maintain confidentiality of results until the Draft-A report is accepted by all participants. The normalized activity concentration obtained with the CIEMAT/NIST method was 0.9985 ± 0.0035 kBq/g and that obtained using the 4π β-γ coincidence counting method was 0.9909 ± 0.0046 kBq/g as of 20 March 2009, 0 h UTC. The normalized activity concentration measured by the NMIJ was 1 ± 0.0024 kBq/g. The normalized average of the activity concentrations of all the participating laboratories was 1.004 ± 0.028 kBq/g. The results obtained in the laboratory are comparable with the other international standards within the uncertainty limits. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Accuracy of a new clean-catch technique for diagnosis of urinary tract infection in infants younger than 90 days of age

    PubMed Central

    Herreros, María Luisa; Tagarro, Alfredo; García-Pose, Araceli; Sánchez, Aida; Cañete, Alfonso; Gili, Pablo

    2015-01-01

    OBJECTIVE: To evaluate the accuracy of diagnosing urinary tract infections using a new, recently described, standardized clean-catch collection technique. METHODS: Cross-sectional study of infants <90 days old admitted due to fever without a source, with two matched samples of urine obtained using two different methods: clean-catch standardized stimulation technique and bladder catheterization. RESULTS: Sixty paired urine cultures were obtained. The median age was 44-days-old. Seventeen percent were male infants. Clean-catch technique sensitivity was 97% (95% CI 82% to 100%) and specificity was 89% (95% CI 65% to 98%). The contamination rate of clean-catch samples was lower (5%) than the contamination rate of catheter specimens (8%). CONCLUSIONS: The sensitivity and specificity of urine cultures obtained using the clean-catch method through the new technique were accurate and the contamination rate was low. These results suggest that this technique is a valuable, alternative method for urinary tract infection diagnosis. PMID:26435675

  16. An analysis of the optimal multiobjective inventory clustering decision with small quantity and great variety inventory by applying a DPSO.

    PubMed

    Wang, Shen-Tsu; Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of item varieties in its inventory, the use of a single management method may not be feasible. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the problem of clustering inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives and obtains an overall better solution, with better convergence results and inventory decisions.
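
    The abstract does not give the objective equation itself; purely as an illustration of the weighted-sum form such a multiobjective fitness can take, a sketch follows with invented weights, sign conventions and per-cluster metrics.

```python
# Illustrative weighted-sum fitness combining the four objectives named in the abstract.
# Weights, signs and the cluster summaries are assumptions, not the study's equation;
# all metrics are assumed to be pre-normalised to comparable scales.
def clustering_fitness(clusters, weights=(0.4, 0.2, 0.2, 0.2)):
    w_cost, w_backorder, w_relevance, w_turnover = weights
    score = 0.0
    for c in clusters:                      # each cluster summarised by a dict of metrics
        score += (-w_cost * c["total_cost"]
                  - w_backorder * c["backorder_rate"]
                  + w_relevance * c["demand_relevance"]
                  + w_turnover * c["turnover_rate"])
    return score                            # a particle swarm would seek the partition maximising this

example = [{"total_cost": 0.45, "backorder_rate": 0.05,
            "demand_relevance": 0.70, "turnover_rate": 0.60}]
print(clustering_fitness(example))
```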

  17. Two-Flux Method for Transient Radiative Transfer in a Semitransparent Layer

    NASA Technical Reports Server (NTRS)

    Siegel, Robert

    1996-01-01

    The two-flux method was used to obtain transient solutions for a plane layer including internal reflections and scattering. The layer was initially at uniform temperature, and was heated or cooled by external radiation and convection. The two-flux equations were examined as a means for evaluating the radiative flux gradient in the transient energy equation. Comparisons of transient temperature distributions using the two-flux method were made with results where the radiative flux gradient was evaluated from the exact radiative transfer equations. Good agreement was obtained for optical thicknesses from 0.5 to 5 and for refractive indices of 1 and 2. Illustrative results obtained with the two-flux method demonstrate the effect of isotropic scattering coupled with changing the refractive index. For small absorption with large scattering the maximum layer temperature is increased when the refractive index is increased. For larger absorption the effect is opposite, and the maximum temperature decreases with increased refractive index.

  18. Validation of odor concentration from mechanical-biological treatment piles using static chamber and wind tunnel with different wind speed values.

    PubMed

    Szyłak-Szydłowski, Mirosław

    2017-09-01

    The basic principle of odor sampling from surface sources is based primarily on the amount of air obtained from a specific area of the ground, which acts as a source of malodorous compounds. Wind tunnels and flux chambers are often the only available, direct method of evaluating the odor fluxes from small area sources. There are currently no widely accepted chamber-based methods; thus, there is still a need for standardization of these methods to ensure accuracy and comparability. Previous research has established that there is a significant difference between the odor concentration values obtained using the Lindvall chamber and those obtained by a dynamic flow chamber. Thus, the present study compares sampling methods using a streaming chamber modeled on the Lindvall cover (using different wind speeds), a static chamber, and a direct sampling method without any screens. The volumes of the chambers in the current work were similar, ~0.08 m³. This study was conducted at a mechanical-biological treatment plant in Poland. Samples were taken from a pile covered by a membrane. Measured odor concentration values were between 2 and 150 ouE/m³. Results of the study demonstrated that both chambers can be used interchangeably under the following conditions: the odor concentration is below 60 ouE/m³, the wind speed inside the Lindvall chamber is below 0.2 m/sec, and the flow value is below 0.011 m³/sec. Increasing the wind speed above the aforementioned value results in significant differences between the results obtained with the two methods. In all experiments, the odor concentrations obtained with the static chamber were consistently higher than those measured in the Lindvall chamber. Lastly, the experimental results were used to determine a model function of the relationship between wind speed and odor concentration values. Several researchers have noted that there are no widely accepted chamber-based methods and that there is still a need for standardization to ensure full comparability of these methods. The present study compared the existing methods to improve the standardization of area source sampling. The practical usefulness of the results lies in proving that both examined chambers can be used interchangeably: statistically similar results were achieved when the odor concentration was below 60 ouE/m³ and the wind speed inside the Lindvall chamber was below 0.2 m/sec, while increasing the wind speed over these values results in differences between the methods. A model function of the relationship between wind speed and odor concentration value was determined.

  19. A Cubic Radial Basis Function in the MLPG Method for Beam Problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.

    2002-01-01

    A non-compactly supported cubic radial basis function implementation of the MLPG method for beam problems is presented. The evaluation of the derivatives of the shape functions obtained from the radial basis function interpolation is much simpler than the evaluation of the moving least squares shape function derivatives. The radial basis MLPG yields results as accurate as or better than those obtained by the conventional MLPG method for problems with discontinuous and other complex loading conditions.
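
    The cubic radial basis function referred to is φ(r) = r³; the sketch below shows only the interpolation step on a 1-D set of nodes (the node layout and data are placeholders, and the MLPG local weak form for the beam equation is not reproduced).

```python
# Interpolation with the non-compactly supported cubic RBF, phi(r) = r**3.
# Only the shape-function construction is illustrated; the MLPG weak form is not.
import numpy as np
from scipy.interpolate import RBFInterpolator

nodes = np.linspace(0.0, 1.0, 9)[:, None]        # 1-D node layout (assumed)
values = np.sin(2.0 * np.pi * nodes[:, 0])       # nodal data (placeholder)

interp = RBFInterpolator(nodes, values, kernel="cubic")   # appends the required polynomial tail
print(interp(np.array([[0.37]])))                # evaluate between nodes; derivatives of the
                                                 # r**3 shape functions are likewise simple
```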

  20. Retrieval of constituent mixing ratios from limb thermal emission spectra

    NASA Technical Reports Server (NTRS)

    Shaffer, William A.; Kunde, Virgil G.; Conrath, Barney J.

    1988-01-01

    An onion-peeling iterative, least-squares relaxation method to retrieve mixing ratio profiles from limb thermal emission spectra is presented. The method has been tested on synthetic data, containing various amounts of added random noise for O3, HNO3, and N2O. The retrieval method is used to obtain O3 and HNO3 mixing ratio profiles from high-resolution thermal emission spectra. Results of the retrievals compare favorably with those obtained previously.

  1. Vertical profiles of wind and temperature by remote acoustical sounding

    NASA Technical Reports Server (NTRS)

    Fox, H. L.

    1969-01-01

    An acoustical method was investigated for obtaining meteorological soundings based on the refraction due to the vertical variation of wind and temperature. The method has the potential of yielding horizontally averaged measurements of the vertical variation of wind and temperature up to heights of a few kilometers; the averaging takes place over a radius of 10 to 15 km. An outline of the basic concepts and some of the results obtained with the method are presented.

  2. Estimation of Velocity Structure Using Microtremor Recordings from Arrays: Comparison of Results from the SPAC and the F-K Analysis Methods

    NASA Astrophysics Data System (ADS)

    Flores-Estrella, H.; Aguirre, J.; Boore, D.; Yussim, S.

    2001-12-01

    Microtremor recordings have become a useful tool for microzonation studies in countries with low to moderate seismicity and also in countries where there are few seismographs or the recurrence time for an earthquake is quite long. Microtremor recordings can be made at almost any time and any place without needing to wait for an earthquake. The measurements can be made using one station or an array of stations. Microtremor recordings can be used to estimate site response directly (e.g. by using Nakamura's technique), or they can be used to estimate shear-wave velocities, from which site response can be calculated. A number of studies have found that the direct estimation of site response may be unreliable, except for identifying the fundamental resonant period of a site. Obtaining shear-wave velocities requires inverting measurements of Rayleigh wave phase velocities from microtremors, which are obtained by using the Spatial Autocorrelation (SPAC) (Aki, 1957) or the Frequency-Wave Number (F-K) (Horike, 1985) methods. Estimating shear-wave velocities from microtremor recordings is a cheaper alternative to direct methods, such as the logging of boreholes. In this work we use simultaneous microtremor recordings from triangular arrays located at two sites in Mexico City, Mexico, one ("Texcoco") with a lacustrine sediment layer of about 200 m depth, and the other ("Ciudad Universitaria") underlain by 2,000-year-old basaltic flows from Xitle volcano. The data are analyzed using both the SPAC method and the standard F-K method. The results obtained from the SPAC method are more consistent with expectations from the geological conditions and an empirical transfer function (Montalvo et al., 2001) than those from the F-K method. We also analyze data from the Hollister Municipal Airport in California. The triangular array at this site is located near a borehole from which seismic velocities have been obtained using a downhole logging method (Liu et al., 2000). We compare results from the microtremor recordings analyzed using both the SPAC and F-K methods with those obtained from the downhole logging.
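
    The SPAC analysis referred to rests on Aki's relation that the azimuthally averaged coherency between stations a distance r apart equals J0(2πfr/c(f)); a minimal fitting sketch with synthetic coherencies and a single separation follows (the station spacing, frequency band and constant-velocity model are assumptions for illustration).

```python
# SPAC sketch: fit azimuthally averaged coherency at one station separation r to
# J0(2*pi*f*r/c) to recover a phase velocity. Coherency values here are synthetic.
import numpy as np
from scipy.special import j0
from scipy.optimize import curve_fit

r = 50.0                                          # station separation in metres (assumed)
freqs = np.linspace(1.0, 10.0, 40)                # Hz
c_true = 400.0                                    # m/s, used only to fabricate example data
rng = np.random.default_rng(0)
coherency = j0(2.0 * np.pi * freqs * r / c_true) + 0.02 * rng.normal(size=freqs.size)

def spac_model(f, c):
    return j0(2.0 * np.pi * f * r / c)

c_fit, _ = curve_fit(spac_model, freqs, coherency, p0=[300.0])
print(c_fit)   # recovered phase velocity; inversion to a shear-wave profile is a separate step
```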

  3. Investigation of thickness uniformity of thin metal films by using α-particle energy loss method and successive scanning measurements

    NASA Astrophysics Data System (ADS)

    Li, Gang; Xu, Jiayun; Bai, Lixin

    2017-03-01

    Metal films are widely used in Inertial Confinement Fusion (ICF) experiments to obtain the radiation opacity, and the accuracy of the measured results depends mainly on the accuracy of the film thickness and thickness uniformity. The traditionally used measuring methods all have various disadvantages: the optical method and the stylus method cannot provide the mass thickness, which reflects the internal density distribution of the films, and the weighing method cannot provide the uniformity of the thickness distribution. This paper describes a new method which combines the α-particle energy loss (AEL) method with successive scanning measurements to obtain the film thickness and thickness uniformity. The measuring system was partly installed in a vacuum chamber, and the relationship between chamber pressure and the energy loss caused by the residual air in the chamber was studied for source-to-detector distances ranging from 1 to 5 cm. The results show that the chamber pressure should be less than 10 Pa for the present measuring system. In the process of measurement, the energy spectra of α-particles transmitted through each measuring point were obtained and then recorded automatically by self-developed multi-channel analysis software. At the same time, the central channel numbers of the spectra (CH) were also saved in a text document. In order to automate the data processing and represent the thickness uniformity visually in a 3D plot, a software package was developed to convert the CH values into film thickness and thickness uniformity. The results obtained in this paper make film thickness uniformity measurements more accurate and efficient in ICF experiments.

  4. Selected problems with boron determination in water treatment processes. Part I: comparison of the reference methods for ICP-MS and ICP-OES determinations.

    PubMed

    Kmiecik, Ewa; Tomaszewska, Barbara; Wątor, Katarzyna; Bodzek, Michał

    2016-06-01

    The aim of the study was to compare two reference methods for the determination of boron in water samples and further assess the impact of the method of sample preparation on the results obtained. Samples were collected during different desalination processes: ultrafiltration and a double reverse osmosis system connected in series. From each point, samples were prepared in four different ways: the first was filtered (through a 0.45 μm membrane filter) and acidified (using 1 mL of ultrapure nitric acid for each 100 mL of sample) (FA), the second was unfiltered and not acidified (UFNA), the third was filtered but not acidified (FNA), and finally, the fourth was unfiltered but acidified (UFA). All samples were analysed using two analytical methods: inductively coupled plasma mass spectrometry (ICP-MS) and inductively coupled plasma optical emission spectrometry (ICP-OES). The results obtained were compared and correlated, and the differences between them were studied. The results show that there are statistically significant differences between the concentrations obtained using the ICP-MS and ICP-OES techniques regardless of the method of sample preparation (sample filtration and preservation). Finally, both the ICP-MS and ICP-OES methods can be used for the determination of the boron concentration in water. The differences in the boron concentrations obtained using these two methods can be caused by high concentrations of some constituents in selected whole-water digestates and by matrix effects. Higher concentrations of iron (1 to 20 mg/L) than of chromium (0.02-1 mg/L) in the analysed samples can influence boron determination; when iron concentrations are high, the emission spectrum shows a doubled, overlapping peak.
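
    A paired comparison of the two techniques on the same samples, of the kind reported, can be run as sketched below; the concentrations are invented and the choice of tests (a paired t-test with a Wilcoxon cross-check) is an assumption, not necessarily the statistics used in the study.

```python
# Paired comparison of boron concentrations measured on the same samples by two
# techniques. Values are invented; the study's actual statistical procedure may differ.
import numpy as np
from scipy import stats

icp_ms  = np.array([0.52, 0.61, 0.48, 0.75, 0.90, 1.10, 0.66, 0.58])   # mg/L (invented)
icp_oes = np.array([0.55, 0.64, 0.50, 0.80, 0.95, 1.18, 0.70, 0.60])   # mg/L (invented)

t_stat, p_t = stats.ttest_rel(icp_ms, icp_oes)    # paired t-test on the differences
w_stat, p_w = stats.wilcoxon(icp_ms, icp_oes)     # non-parametric cross-check
print(p_t, p_w)   # small p-values indicate a systematic difference between the methods
```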

  5. Numerical investigation of multi-beam laser heterodyne measurement with ultra-precision for linear expansion coefficient of metal based on oscillating mirror modulation

    NASA Astrophysics Data System (ADS)

    Li, Yan-Chao; Wang, Chun-Hui; Qu, Yang; Gao, Long; Cong, Hai-Fang; Yang, Yan-Ling; Gao, Jie; Wang, Ao-You

    2011-01-01

    This paper proposes a novel method of multi-beam laser heterodyne measurement for the linear expansion coefficient of metal. Based on the Doppler effect and heterodyne technology, the length-variation information is loaded onto the frequency difference of the multi-beam laser heterodyne signal through the frequency modulation of the oscillating mirror; after demodulation of the multi-beam laser heterodyne signal, this method can simultaneously obtain many values of the length variation caused by temperature change. Processing these values by weighted averaging yields the length variation accurately and, ultimately, the linear expansion coefficient of the metal. The novel method was used to simulate the measurement of the linear expansion coefficient of a metal rod at different temperatures in MATLAB; the result shows that the relative measurement error of this method is just 0.4%.

  6. The determination of third order linear models from a seventh order nonlinear jet engine model

    NASA Technical Reports Server (NTRS)

    Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex

    1989-01-01

    Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
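
    Recursive least-squares identification of a low-order linear model from I/O data, as in the second method, can be sketched as follows; the third-order ARX structure, the forgetting factor and the synthetic data-generating system are assumptions for illustration and do not reproduce the turbojet model.

```python
# Recursive least-squares (RLS) identification of a third-order ARX model
#   y[k] = -a1*y[k-1] - a2*y[k-2] - a3*y[k-3] + b1*u[k-1] + b2*u[k-2] + b3*u[k-3]
# from input/output data. The data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(3, 500):                       # "true" system used only to generate the data
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.1 * y[k-3] + 0.5 * u[k-1] + 0.2 * u[k-2]

theta = np.zeros(6)                           # parameter vector [a1 a2 a3 b1 b2 b3]
P = np.eye(6) * 1e4                           # large initial covariance
lam = 0.99                                    # forgetting factor (assumed)
for k in range(3, 500):
    phi = np.array([-y[k-1], -y[k-2], -y[k-3], u[k-1], u[k-2], u[k-3]])
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (y[k] - phi @ theta)
    P = (P - np.outer(gain, phi @ P)) / lam

print(theta)   # identified parameters of the reduced-order model
```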

  7. [Rapid diagnosis of the most common fetal aneuploidies with the QF-PCR method--a study of 100 cases].

    PubMed

    Łaczmańska, Izabela; Gil, Justyna; Stembalska, Agnieszka; Makowska, Izabela; Kozłowska, Joanna; Skiba, Paweł; Czemarmazowicz, Halina; Pesz, Karolina; Slęzak, Ryszard; Smigiel, Robert; Jakubiak, Aleksandra; Doraczyńska-Kowalik, Anna; Sąsiadek, Maria M

    2015-09-01

    The aim of the study was to assess whether a commercial QF-PCR kit can be used as the only method for rapid prenatal diagnosis of chromosome 13, 18, 21, X and Y aneuploidies, omitting cell culture and complete cytogenetic analysis of fetal chromosomes. DNA from amniocytes (94 cases) and trophoblast cells (6 cases) was analyzed with QF-PCR according to the manufacturer's protocol. The obtained products were separated using an ABI 310 Genetic Analyzer and the resulting data were analyzed using GeneMarker software. QF-PCR results were obtained in 95 out of 100 cases (95%). Abnormalities were found in 28 cases (29.5%). All these results were confirmed by subsequent cytogenetic analysis. Normal results were obtained in 62 patients (70.5%); however, in that group we found three chromosomal aberrations other than those analyzed by QF-PCR. Additionally, two abnormal and three normal karyotypes were found in patients with inconclusive QF-PCR results. QF-PCR is a fast and reliable tool for chromosomal aneuploidy analysis and can be used as the only method, without a full karyotype analysis, but only in cases of suspected fetal trisomy 13, 18 or 21 or numerical aberrations of the X chromosome. In other cases, fetal karyotype analysis from cells obtained after cell culture should be offered to the patient.

  8. The Effect of Solar Radiation Pressure on the Motion of an Artificial Satellite

    NASA Technical Reports Server (NTRS)

    Bryant, Robert W.

    1961-01-01

    The effects of solar radiation pressure on the motion of an artificial satellite are obtained, including the effects of the intermittent acceleration which results from the eclipsing of the satellite by the earth. Vectorial methods have been utilized to obtain the nonlinear equations describing the motion, and the method of Kryloff-Bogoliuboff has been applied in their solution.

  9. Simultaneous spectrophotometric determination of salbutamol and bromhexine in tablets.

    PubMed

    Habib, I H I; Hassouna, M E M; Zaki, G A

    2005-03-01

    The typical anti-mucolytic drugs salbutamol hydrochloride and bromhexine sulfate encountered in tablets were determined simultaneously either by linear regression at the zero-crossing wavelengths of the first-derivative UV spectra or by application of a multiple linear partial least squares regression method. The results obtained by the two proposed mathematical methods were compared with those obtained by the HPLC technique.
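
    The partial least squares step mentioned can be sketched with a standard library implementation; the spectra, concentrations and number of latent variables below are synthetic placeholders, not the paper's calibration.

```python
# Partial least squares (PLS) calibration for a two-component mixture from UV spectra.
# Spectra and concentrations are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavelengths = np.linspace(220, 320, 101)
band1 = np.exp(-((wavelengths - 250) / 10) ** 2)          # pure-component spectra (assumed)
band2 = np.exp(-((wavelengths - 280) / 12) ** 2)

conc = rng.uniform(0.0, 1.0, size=(20, 2))                # training concentrations
spectra = conc @ np.vstack([band1, band2]) + 0.01 * rng.normal(size=(20, 101))

pls = PLSRegression(n_components=2).fit(spectra, conc)
unknown = 0.3 * band1 + 0.6 * band2
print(pls.predict(unknown[None, :]))                      # estimated concentrations of both drugs
```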

  10. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  11. Research for the Fluid Field of the Centrifugal Compressor Impeller in Accelerating Startup

    NASA Astrophysics Data System (ADS)

    Li, Xiaozhu; Chen, Gang; Zhu, Changyun; Qin, Guoliang

    2013-03-01

    In order to study the flow field in the impeller during the accelerating start-up process of a centrifugal compressor, the 3-D and 1-D transient accelerated-flow governing equations along streamlines in the impeller of the centrifugal compressor are derived in detail, an assumption for the pressure gradient distribution is presented, and a solving method for the 1-D transient accelerating flow field based on this assumption is given. The solving method is implemented in a program and computational results are obtained. Comparison shows that the computed results agree with the test results, which proves the feasibility and effectiveness of the method presented in this paper for solving the accelerating start-up problem of the centrifugal compressor.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Kim, Seung Jun

    In this study, the classical Welander's oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes. A theoretical stability map was also originally derived from the stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate. The high-order numerical methods give much smaller numerical errors compared to the low-order methods. For stability analysis, the high-order numerical methods could perfectly predict the stability map, while the low-order numerical methods failed to do so. For all theoretically unstable cases, the low-order methods predicted them to be stable. The result obtained in this paper is strong evidence of the benefits of using high-order numerical methods over low-order ones when they are applied to simulate the natural circulation phenomenon, which has already gained increasing interest in many future nuclear reactor designs.

  13. Rye flour enriched with arabinoxylans in rye bread making.

    PubMed

    Buksa, Krzysztof; Nowotna, Anna; Ziobro, Rafał; Gambuś, Halina

    2015-01-01

    The aim of the study was to investigate the physical and chemical properties of preparations of water-soluble arabinoxylans (arabinoxylan-enriched flour) obtained by an industrial method and of their derivatives (obtained by hydrolysis and cross-linking of arabinoxylans), as well as their impact on the baking properties of rye flours. Additionally, these results were compared with highly purified arabinoxylans prepared by a laboratory method and well characterized in the literature. Flour enriched with arabinoxylans was obtained by an industrial method involving air separation of flour particles. It was characterized by an 8.6% arabinoxylan content, a lack of insoluble material and a substantial residue (67%) of starch and dextrins. The addition of the industrially obtained preparations at a level of 10% (i.e. approx. 1% water-soluble arabinoxylans) to rye flours resulted in an increase in water absorption and bread volume and a decrease in the hardness of the bread crumb, and the effect was especially strong in the case of flour type 720. Owing to the ease of the isolation procedure, the industrially obtained preparation can be recommended as an improver for rye bread making. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  14. Development and Validation of New Discriminative Dissolution Method for Carvedilol Tablets

    PubMed Central

    Raju, V.; Murthy, K. V. R.

    2011-01-01

    The objective of the present study was to develop and validate a discriminative dissolution method for the evaluation of carvedilol tablets. Different conditions, such as the type of dissolution medium, the volume of dissolution medium and the rotation speed of the paddle, were evaluated. The best in vitro dissolution profile was obtained using Apparatus II (paddle) at 50 rpm with 900 ml of pH 6.8 phosphate buffer as the dissolution medium. Drug release was evaluated by a high-performance liquid chromatographic method. The dissolution method was validated according to current ICH and FDA guidelines using parameters such as specificity, accuracy, precision and stability; the results obtained were within the acceptable range. The dissolution profiles obtained for three different products were compared using ANOVA-based, model-dependent and model-independent methods; the results showed that there is a significant difference between the products. The dissolution test developed and validated was adequate owing to its high discriminative capacity in differentiating the release characteristics of the products tested and could be applied for the development and quality control of carvedilol tablets. PMID:22923865
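
    The abstract does not say which model-independent metric was applied; one widely used choice is the f2 similarity factor, sketched below with invented dissolution profiles.

```python
# f2 similarity factor, a common model-independent comparison of two dissolution
# profiles (f2 >= 50 is conventionally read as "similar"). Profiles are invented.
import numpy as np

def f2_similarity(reference, test):
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    msd = np.mean((reference - test) ** 2)        # mean squared difference per time point
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

ref  = [22, 41, 63, 80, 91, 96]                   # % dissolved at successive time points
test = [18, 35, 55, 74, 88, 95]
print(f2_similarity(ref, test))
```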

  15. Varying-energy CT imaging method based on EM-TV

    NASA Astrophysics Data System (ADS)

    Chen, Ping; Han, Yan

    2016-11-01

    For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the limit of the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times with a fixed small interval. Next, grey-consistency fusion and logarithmic demodulation are applied to obtain complete, lower-noise projections with a high dynamic range (HDR). In addition, to address the noise suppression problem of the analytical method, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the iteration process, the reconstruction result obtained at one x-ray energy is used as the initial condition for the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provides higher reconstruction quality relative to the fusion reconstruction method.

  16. Method for hygromechanical characterization of graphite/epoxy composite

    NASA Technical Reports Server (NTRS)

    Yaniv, Gershon; Peimanidis, Gus; Daniel, Isaac M.

    1987-01-01

    An experimental method is described for measuring hygroscopic swelling strains and mechanical strains of moisture-conditioned composite specimens. The method consists of embedding encapsulated strain gages in the midplane of the composite laminate; thus it does not interfere with normal moisture diffusion. It is particularly suited for measuring moisture swelling coefficients and for mechanical testing of moisture-conditioned specimens at high strain rates. Results obtained by the embedded gage method were shown to be more reliable and reproducible than those obtained by surface gages, dial gages, or extensometers.

  17. Techniques for obtaining subjective response to vertical vibration

    NASA Technical Reports Server (NTRS)

    Clarke, M. J.; Oborne, D. J.

    1975-01-01

    Laboratory experiments were performed to validate the techniques used for obtaining ratings in the field surveys carried out by the University College of Swansea. In addition, attempts were made to evaluate the basic form of the human response to vibration. Some of the results obtained by different methods are described.

  18. Estimation of parameters of dose volume models and their confidence limits

    NASA Astrophysics Data System (ADS)

    van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.

    2003-07-01

    Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose response data in order to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using three methods: the covariance matrix, the jackknife method and direct evaluation of the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters lying within the one-standard-deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that, for the type of dose response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the use of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
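
    Of the interval-estimation approaches compared, the jackknife is the simplest to illustrate; the sketch below computes a leave-one-out jackknife standard error for a generic estimator (a placeholder sample mean stands in for the maximum-likelihood fit of the CV model used in the paper).

```python
# Leave-one-out jackknife standard error for a fitted quantity. The estimator here
# is a placeholder (sample mean); in the paper's setting it would be the CV-model
# parameter refitted with one observation removed each time.
import numpy as np

def jackknife_se(data, estimator):
    data = np.asarray(data, dtype=float)
    n = len(data)
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.5, scale=0.1, size=30)   # placeholder observations
print(jackknife_se(sample, np.mean))               # close to std(sample)/sqrt(n)
```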

  19. Comparison of different methods to quantify fat classes in bakery products.

    PubMed

    Shin, Jae-Min; Hwang, Young-Ok; Tu, Ock-Ju; Jo, Han-Bin; Kim, Jung-Hun; Chae, Young-Zoo; Rhu, Kyung-Hun; Park, Seung-Kook

    2013-01-15

    The definition of fat differs between countries; thus, the fat value listed on food labels depends on the country. Some countries list the crude fat content in the 'Fat' section of the food label, whereas other countries list total fat. In this study, three methods were used for determining fat classes and content in bakery products: the Folch method, the automated Soxhlet method, and the AOAC 996.06 method, and the results of these methods were compared. Fat (crude) extracted by the Folch and Soxhlet methods was determined gravimetrically and assessed by fat class using capillary gas chromatography (GC). In most samples, the fat (total) content determined by the AOAC 996.06 method was lower than the fat (crude) content determined by the Folch or automated Soxhlet methods. Furthermore, the monounsaturated fat or saturated fat content determined by the AOAC 996.06 method was the lowest. Almost no difference was observed between the fat (crude) content determined by the Folch method and that determined by the automated Soxhlet method for nearly all samples. In three samples (wheat biscuits, butter cookies-1, and chocolate chip cookies), the monounsaturated fat, saturated fat, and trans fat content obtained by the automated Soxhlet method was higher than that obtained by the Folch method. The polyunsaturated fat content obtained by the automated Soxhlet method was not higher than that obtained by the Folch method in any sample. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Novel two wavelength spectrophotometric methods for simultaneous determination of binary mixtures with severely overlapping spectra

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham

    2015-02-01

    This work presents the application of different spectrophotometric techniques based on two wavelengths for the determination of severely overlapped spectral components in a binary mixture without prior separation. Four novel spectrophotometric methods were developed, namely: the induced dual wavelength method (IDW), the dual wavelength resolution technique (DWRT), the advanced amplitude modulation method (AAM) and the induced amplitude modulation method (IAM). The results of the novel methods were compared to those of three well-established methods: the dual wavelength method (DW), Vierordt's method (VD) and the bivariate method (BV). The developed methods were applied for the analysis of the binary mixture of hydrocortisone acetate (HCA) and fusidic acid (FSA) formulated as a topical cream, accompanied by the determination of methyl paraben and propyl paraben present as preservatives. The specificity of the novel methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with those of the official methods, and no significant difference was observed. No difference was observed between the obtained results and those of the reported HPLC method, which proved that the developed methods could be an alternative to HPLC techniques in quality control laboratories.
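
    As a brief sketch of the underlying dual-wavelength idea (the general principle only, not the specific algebra of the four novel methods), two wavelengths λ1 and λ2 are chosen at which the interfering component Y has equal absorbance, so that the absorbance difference depends only on the analyte X:

        \Delta A = A(\lambda_1) - A(\lambda_2)
                 = [\varepsilon_X(\lambda_1) - \varepsilon_X(\lambda_2)]\, b\, C_X
                   + \underbrace{[\varepsilon_Y(\lambda_1) - \varepsilon_Y(\lambda_2)]}_{=\,0\ \text{by choice of }\lambda_1,\lambda_2}\, b\, C_Y ,

    so ΔA is directly proportional to C_X and can be read from a calibration line of ΔA against concentration; exchanging the roles of X and Y resolves the second component.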

  1. Sum-of-squares of polynomials approach to nonlinear stability of fluid flows: an example of application

    PubMed Central

    Tutty, O.

    2015-01-01

    With the goal of providing the first example of application of a recently proposed method, thus demonstrating its ability to give results in principle, global stability of a version of the rotating Couette flow is examined. The flow depends on the Reynolds number and a parameter characterizing the magnitude of the Coriolis force. By converting the original Navier–Stokes equations to a finite-dimensional uncertain dynamical system using a partial Galerkin expansion, high-degree polynomial Lyapunov functionals were found by sum-of-squares of polynomials optimization. It is demonstrated that the proposed method allows obtaining the exact global stability limit for this flow in a range of values of the parameter characterizing the Coriolis force. Outside this range a lower bound for the global stability limit was obtained, which is still better than the energy stability limit. In the course of the study, several results meaningful in the context of the method used were also obtained. Overall, the results obtained demonstrate the applicability of the recently proposed approach to global stability of the fluid flows. To the best of our knowledge, it is the first case in which global stability of a fluid flow has been proved by a generic method for the value of a Reynolds number greater than that which could be achieved with the energy stability approach. PMID:26730219
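
    For orientation, a minimal sketch of the sum-of-squares (SOS) certificate behind such results, written for a generic polynomial truncation dx/dt = f(x) rather than the specific Galerkin system of the paper: global stability of the base flow is certified by finding a polynomial Lyapunov function V such that

        V(x) - \epsilon \lVert x \rVert^{2} \ \text{is SOS}
        \qquad\text{and}\qquad
        -\,\nabla V(x) \cdot f(x) \ \text{is SOS}

    for some ε > 0. Both conditions are linear in the coefficients of V, so the search reduces to a semidefinite program; in the uncertain reduced-order system of the paper the second condition must additionally hold for all admissible values of the unresolved (uncertainty) terms.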

  2. System and method of designing models in a feedback loop

    DOEpatents

    Gosink, Luke C.; Pulsipher, Trenton C.; Sego, Landon H.

    2017-02-14

    A method and system for designing models is disclosed. The method includes selecting a plurality of models for modeling a common event of interest. The method further includes aggregating the results of the models and analyzing each model compared to the aggregate result to obtain comparative information. The method also includes providing the information back to the plurality of models to design more accurate models through a feedback loop.

  3. Monte Carlo simulation of PET/MR scanner and assessment of motion correction strategies

    NASA Astrophysics Data System (ADS)

    Işın, A.; Uzun Ozsahin, D.; Dutta, J.; Haddani, S.; El-Fakhri, G.

    2017-03-01

    Positron Emission Tomography (PET) is widely used in three-dimensional imaging of metabolic body function and in tumor detection. Important research efforts are made to improve this imaging modality, and powerful simulators such as GATE are used to test and develop methods for this purpose. PET requires acquisition times on the order of a few minutes. Therefore, because of natural patient movements such as respiration, the image quality can be adversely affected, which drives scientists to develop motion compensation methods to improve the image quality. The goal of this study is to evaluate various image reconstruction methods with a GATE simulation of a PET acquisition of the torso area. The obtained results show the need to compensate for natural respiratory movements in order to obtain an image with quality similar to the reference image. Improvements are still possible in the applied motion field extraction algorithms. Finally, a statistical analysis should confirm the obtained results.

  4. Summary of water body extraction methods based on ZY-3 satellite

    NASA Astrophysics Data System (ADS)

    Zhu, Yu; Sun, Li Jian; Zhang, Chuan Yin

    2017-12-01

    Extracting water bodies from remote sensing images is one of the main means of water information extraction. Affected by spectral characteristics, many methods cannot be applied to ZY-3 satellite images. To solve this problem, we summarize the extraction methods applicable to ZY-3 and analyze the extraction results of existing methods. According to the characteristics of the extraction results, the method of WI & single-band threshold and the method of texture filtering based on probability statistics are explored. In addition, the advantages and disadvantages of all methods are compared, which provides some reference for research on water extraction from images. The obtained conclusions are as follows. 1) NIR has higher water sensitivity; consequently, when the surface reflectance in the study area is less similar to that of water, using a single-band threshold method or a multi-band operation can obtain the ideal effect. 2) Compared with the water index and HIS optimal index methods, the rule-based object extraction method, which takes into account not only the spectral information of the water but also space and texture feature constraints, can obtain a better extraction effect, yet the image segmentation process is time consuming and the definition of the rules requires certain knowledge. 3) The combination of the spectral relationship and the water index can eliminate the interference of shadow to a certain extent. When there is little small water, or small water bodies are not considered in further study, texture filtering based on probability statistics can effectively reduce the noise in the result and avoid mixing shadows or paddy fields with water to a certain extent.
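
    The "WI & single-band threshold" combination mentioned above can be sketched in a few lines; here the water index is assumed to be a normalized-difference water index built from the green and NIR bands, and the band names, threshold values and array shapes are illustrative assumptions rather than values from the paper.

        import numpy as np

        def water_mask(green, nir, wi_threshold=0.1, nir_threshold=None):
            """Binary water mask from a water index plus an optional single-band test.

            green, nir    : 2-D reflectance arrays of identical shape.
            wi_threshold  : pixels whose index exceeds this value are kept as water.
            nir_threshold : if given, additionally require low NIR reflectance.
            """
            green = green.astype(float)
            nir = nir.astype(float)
            # Normalized-difference water index: water is bright in green, dark in NIR.
            wi = (green - nir) / (green + nir + 1e-12)
            mask = wi > wi_threshold
            if nir_threshold is not None:
                mask &= nir < nir_threshold      # single-band test to suppress shadow
            return mask

        rng = np.random.default_rng(1)           # synthetic reflectance example
        green = rng.uniform(0.02, 0.30, (100, 100))
        nir = rng.uniform(0.01, 0.40, (100, 100))
        print(water_mask(green, nir, 0.1, 0.15).sum(), "pixels flagged as water")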

  5. On-line noninvasive one-point measurements of pulse wave velocity.

    PubMed

    Harada, Akimitsu; Okada, Takashi; Niki, Kiyomi; Chang, Dehua; Sugawara, Motoaki

    2002-12-01

    Pulse wave velocity (PWV) is a basic parameter in the dynamics of pressure and flow waves traveling in arteries. Conventional on-line methods of measuring PWV have mainly been based on "two-point" measurements, i.e., measurements of the time of travel of the wave over a known distance. This paper describes two methods by which on-line "one-point" measurements can be made, and compares the results obtained by the two methods. The principle of one method is to measure blood pressure and velocity at a point, and use the water-hammer equation for forward traveling waves. The principle of the other method is to derive PWV from the stiffness parameter of the artery. Both methods were realized by using an ultrasonic system which we specially developed for noninvasive measurements of wave intensity. We applied the methods to the common carotid artery in 13 normal humans. The regression line of the PWV (m/s) obtained by the former method on the PWV (m/s) obtained by the latter method was y = 1.03x - 0.899 (R(2) = 0.83). Although regional PWV in the human carotid artery has not been reported so far, the correlation between the PWVs obtained by the present two methods was so high that we are convinced of the validity of these methods.
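
    For reference, the water-hammer relation that underlies the first one-point method links the pressure and velocity changes of purely forward-travelling waves through the local wave speed (ρ is blood density):

        dP_{+} = \rho\, c\, dU_{+} \quad\Longrightarrow\quad \mathrm{PWV} = c = \frac{1}{\rho}\,\frac{dP_{+}}{dU_{+}} ,

    so PWV can be estimated at a single site from the slope of simultaneously measured pressure and velocity during the early, reflection-free part of the beat. The second method instead derives PWV from the arterial stiffness parameter; its exact expression is not reproduced here.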

  6. A comparison between Gauss-Newton and Markov chain Monte Carlo basedmethods for inverting spectral induced polarization data for Cole-Coleparameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.

    2008-05-15

    We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
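
    As a rough sketch of the two ingredients compared above, the following Python fragment implements the standard Pelton-type Cole-Cole forward model together with a bare-bones random-walk Metropolis sampler over its four parameters; the priors, step sizes and noise model are illustrative assumptions and do not reproduce the authors' Bayesian formulation.

        import numpy as np

        def cole_cole(omega, rho0, m, tau, c):
            """Pelton-type Cole-Cole complex resistivity spectrum."""
            return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

        def log_likelihood(theta, omega, data, sigma):
            """Gaussian misfit on the real and imaginary parts."""
            pred = cole_cole(omega, *theta)
            resid = np.concatenate([data.real - pred.real, data.imag - pred.imag])
            return -0.5 * np.sum((resid / sigma) ** 2)

        def metropolis(omega, data, sigma, theta0, steps, n_iter=20000, seed=0):
            """Minimal Metropolis sampler for (rho0, m, tau, c) with flat bounded priors."""
            rng = np.random.default_rng(seed)
            theta = np.array(theta0, float)
            logl = log_likelihood(theta, omega, data, sigma)
            chain = []
            for _ in range(n_iter):
                prop = theta + rng.normal(0.0, steps)
                ok = prop[0] > 0 and 0 <= prop[1] <= 1 and prop[2] > 0 and 0 < prop[3] <= 1
                if ok:
                    logl_prop = log_likelihood(prop, omega, data, sigma)
                    if np.log(rng.uniform()) < logl_prop - logl:
                        theta, logl = prop, logl_prop
                chain.append(theta.copy())
            return np.array(chain)

        omega = 2 * np.pi * np.logspace(-2, 3, 30)           # synthetic SIP spectrum
        rng = np.random.default_rng(1)
        data = cole_cole(omega, 100.0, 0.4, 0.01, 0.5) \
               + rng.normal(0, 0.2, omega.size) + 1j * rng.normal(0, 0.2, omega.size)
        chain = metropolis(omega, data, 0.2, theta0=(80.0, 0.3, 0.02, 0.6),
                           steps=(1.0, 0.02, 0.002, 0.02))
        print(chain[5000:].mean(axis=0))                     # posterior means after burn-in

    A Gauss-Newton alternative would instead linearize the same forward model around a starting guess and iterate to a single optimum; as the authors suggest, the chain means above could supply its starting values.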

  7. Robust and fast-converging level set method for side-scan sonar image segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Li, Qingwu; Huo, Guanying

    2017-11-01

    A robust and fast-converging level set method is proposed for side-scan sonar (SSS) image segmentation. First, the noise in each sonar image is removed using the adaptive nonlinear complex diffusion filter. Second, k-means clustering is used to obtain the initial presegmentation image from the denoised image, and then the distance maps of the initial contours are reinitialized to guarantee the accuracy of the numerical calculation used in the level set evolution. Finally, the satisfactory segmentation is achieved using a robust variational level set model, where the evolution control parameters are generated by the presegmentation. The proposed method is successfully applied to both synthetic image with speckle noise and real SSS images. Experimental results show that the proposed method needs much less iteration and therefore is much faster than the fuzzy local information c-means clustering method, the level set method using a gamma observation model, and the enhanced region-scalable fitting method. Moreover, the proposed method can usually obtain more accurate segmentation results compared with other methods.

  8. Is it possible to screen for milk or whey protein adulteration with melamine, urea and ammonium sulphate, combining Kjeldahl and classical spectrophotometric methods?

    PubMed

    Finete, Virgínia de Lourdes Mendes; Gouvêa, Marcos Martins; Marques, Flávia Ferreira de Carvalho; Netto, Annibal Duarte Pereira

    2013-12-15

    The Kjeldahl method and four classic spectrophotometric methods (Biuret, Lowry, Bradford and Markwell) were applied to evaluate the protein content of samples of UHT whole milk deliberately adulterated with melamine, ammonium sulphate or urea, which can be used to fraudulently inflate the apparent protein content of milk and whey. Compared with the Kjeldahl method, the response of the spectrophotometric methods was unaffected by the addition of the nitrogen compounds to milk or whey. The Bradford and Markwell methods were the most robust and did not exhibit composition-dependent interference. However, the simultaneous interpretation of results obtained using these methods together with those obtained using the Kjeldahl method indicated the addition of nitrogen-rich compounds to milk and/or whey. Therefore, this work suggests that a combination of the results of the Kjeldahl and spectrophotometric methods should be used to screen for milk adulteration by these compounds. Copyright © 2013 Elsevier Ltd. All rights reserved.
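
    The screening logic implied above, flagging samples in which the nitrogen-based (Kjeldahl) protein estimate substantially exceeds the dye-binding estimate, can be sketched as follows; the tolerance and the sample figures are hypothetical.

        def adulteration_suspect(kjeldahl_protein, dye_binding_protein, tolerance=0.5):
            """Flag a sample when the Kjeldahl estimate exceeds the spectrophotometric
            (dye-binding) estimate by more than the tolerance (same units, e.g. g/100 mL).

            Nitrogen-rich adulterants inflate Kjeldahl 'protein' but not the
            dye-binding response, so a large positive gap is suspicious."""
            return (kjeldahl_protein - dye_binding_protein) > tolerance

        # Hypothetical milk samples: (Kjeldahl, Bradford) protein estimates in g/100 mL.
        samples = {"control": (3.2, 3.1), "spiked": (4.4, 3.1)}
        for name, (kj, br) in samples.items():
            print(name, "suspect" if adulteration_suspect(kj, br) else "ok")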

  9. Determining pH of strip-mine spoils

    Treesearch

    W. A. Berg

    1969-01-01

    Results with the LaMotte-Morgan method for determining soil pH-or the solution modification of this method-usually agreed fairly well with the results from using a pH meter, the recognized standard. Results obtained with the Soiltex and Hellige-Truog methods often deviated somewhat from the pH meter readings; and the Hydrion papers and the Kelway pH tester often gave...

  10. Automated grain mapping using wide angle convergent beam electron diffraction in transmission electron microscope for nanomaterials.

    PubMed

    Kumar, Vineet

    2011-12-01

    The grain size statistics, commonly derived from the grain map of a material sample, are important microstructure characteristics that greatly influence its properties. The grain map for nanomaterials is usually obtained manually by visual inspection of the transmission electron microscope (TEM) micrographs because automated methods do not perform satisfactorily. While the visual inspection method provides reliable results, it is a labor intensive process and is often prone to human errors. In this article, an automated grain mapping method is developed using TEM diffraction patterns. The presented method uses wide angle convergent beam diffraction in the TEM. The automated technique was applied on a platinum thin film sample to obtain the grain map and subsequently derive grain size statistics from it. The grain size statistics obtained with the automated method were found in good agreement with the visual inspection method.

  11. Efficient IDUA Gene Mutation Detection with Combined Use of dHPLC and Dried Blood Samples

    PubMed Central

    Duarte, Ana Joana; Vieira, Luis

    2013-01-01

    Objectives. Development of a simple mutation-directed method in order to lower the cost of mutation testing using an easily obtainable biological material. The feasibility of such a method was assessed using a GC-rich amplicon. Design and Methods. A method of denaturing high-performance liquid chromatography (dHPLC) was improved and implemented as a technique for the detection of variants in exon 9 of the IDUA gene. The optimized method was tested in 500 genomic DNA samples obtained from dried blood spots (DBS). Results. With this dHPLC approach it was possible to detect different variants, including the common p.Trp402Ter mutation in the IDUA gene. The high GC content did not interfere with the resolution and reliability of this technique, and discrimination of G-C transversions was also achieved. Conclusion. This PCR-based dHPLC method proved to be a rapid, sensitive, and excellent option for screening numerous samples obtained from DBS. Furthermore, it resulted in the consistent detection of clearly distinguishable profiles of the common p.Trp402Ter IDUA mutation with an advantageous balance of cost and technical requirements. PMID:27335677

  12. Comparison of aerodynamic coefficients obtained from theoretical calculations wind tunnel tests and flight tests data reduction for the alpha jet aircraft

    NASA Technical Reports Server (NTRS)

    Guiot, R.; Wunnenberg, H.

    1980-01-01

    The methods by which aerodynamic coefficients are determined are discussed. These include calculations, wind tunnel experiments and experiments in flight for various prototypes of the Alpha Jet. A comparison of the obtained results shows good correlation between expectations and in-flight test results.

  13. Rupture process of 2016, 25 January earthquake, Alboran Sea (South Spain, Mw= 6.4) and aftershocks series

    NASA Astrophysics Data System (ADS)

    Buforn, E.; Pro, C.; del Fresno, C.; Cantavella, J.; Sanz de Galdeano, C.; Udias, A.

    2016-12-01

    We have studied the rupture process of the 25 January 2016 earthquake (Mw = 6.4) that occurred in South Spain, in the Alboran Sea. The main shock, a foreshock and the largest aftershocks (Mw = 4.5) have been relocated using the NonLinLoc algorithm. The results obtained show a NE-SW distribution of foci at shallow depth (less than 15 km). For the main shock, the focal mechanism has been obtained from slip inversion over the rupture plane using teleseismic data, corresponding to left-lateral strike-slip motion. The rupture starts at 7 km depth and propagates upward with a complex source time function. In order to obtain a more detailed source time function and to validate the results obtained from teleseismic data, we have used the Empirical Green Functions (EGF) method at regional distances. Finally, the results of the directivity effect from teleseismic Rayleigh waves and the EGF method are consistent with a rupture propagation to the NE. These results are interpreted in terms of the main geological features in the region.

  14. Continuous ultrasound-assisted extraction of hexavalent chromium from soil with or without on-line preconcentration prior to photometric monitoring.

    PubMed

    Luque-García, J L; Luque de Castro, M D

    2002-08-01

    A continuous ultrasound-assisted extractor was coupled to a photometric detector in order to obtain a fully automated approach for the determination of CrVI in soil. The use of a flow injection (FI) manifold as interface between the extractor and the photometric detector allowed the monitoring of CrVI after extraction in a continuous manner. The coloured complex formed between 1,5-diphenylcarbazide (DPC) and CrVI was used as recommended in EPA method 7196A because it is one of the most sensitive and selective reactions for CrVI determination. A preconcentration minicolumn packed with a strong anion-exchange resin was placed between the extractor and the detector, providing a more sensitive method. The linear dynamic ranges were 1-10 and 0.25-7.5 mg l-1 for the methods without (method A) and with preconcentration (method B), respectively. The limits of detection were 4.52 ng for method A and 1.23 ng for method B. Both methods were applied to a natural contaminated soil and the results obtained agreed well with those obtained by the reference EPA method 3060A. The influence of different amounts of CrIII in the samples was also studied and the results showed that the proposed methods did not disturb the original species distribution.

  15. Backscattering and absorption coefficients for electrons: Solutions of invariant embedding transport equations using a method of convergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, C.; Brizuela, H.; Heluani, S. P.

    2014-05-21

    The backscattering coefficient is a magnitude whose measurement is fundamental for the characterization of materials with techniques that make use of particle beams, particularly when performing microanalysis. In this work, we report the results of an analytic method to calculate the backscattering and absorption coefficients of electrons in conditions similar to those of electron probe microanalysis. Starting from a five-level ladder-of-states model in 3D, we deduced a set of coupled integro-differential equations for the coefficients with a method known as invariant embedding. By means of a procedure proposed by the authors, called the method of convergence, two types of approximate solutions for the set of equations, namely complete and simple solutions, can be obtained. Although the simple solutions were initially proposed as auxiliary forms to solve higher-rank equations, they turned out to be also useful for the estimation of the aforementioned coefficients. In previous reports, we have presented results obtained with the complete solutions. In this paper, we present results obtained with the simple solutions of the coefficients, which exhibit a good degree of fit with the experimental data. Both the model and the calculation method presented here can be generalized to other techniques that make use of different sorts of particle beams.

  16. High-accuracy contouring using projection moiré

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Lamberti, Luciano; Sciammarella, Federico M.

    2005-09-01

    Shadow and projection moiré are the oldest forms of moiré to be used in actual technical applications. In spite of this fact and the extensive number of papers that have been published on this topic, the use of shadow moiré as an accurate tool that can compete with alternative devices poses very many problems that go to the very essence of the mathematical models used to obtain contour information from fringe pattern data. In this paper some recent developments on the projection moiré method are presented. Comparisons between the results obtained with the projection method and the results obtained by mechanical devices that operate with contact probes are presented. These results show that the use of projection moiré makes it possible to achieve the same accuracy that current mechanical touch probe devices can provide.

  17. Determination and discrimination of biodiesel fuels by gas chromatographic and chemometric methods

    NASA Astrophysics Data System (ADS)

    Milina, R.; Mustafa, Z.; Bojilov, D.; Dagnon, S.; Moskovkina, M.

    2016-03-01

    A pattern recognition method (PRM) was applied to gas chromatographic (GC) data on the fatty acid methyl ester (FAME) composition of commercial and laboratory-synthesized biodiesel fuels from vegetable oils including sunflower, rapeseed, corn and palm oils. Two GC quantitative methods to calculate individual FAMEs were compared: area % and internal standard. Both methods were applied for the analysis of two certified reference materials. The statistical processing of the obtained results demonstrates the accuracy and precision of the two methods and allows them to be compared. For further chemometric investigations of biodiesel fuels by their FAME profiles, either of these methods can be used. PRM results for the FAME profiles of samples from different vegetable oils show successful recognition of biodiesels according to the feedstock. The information obtained can be used for the selection of feedstock to produce biodiesels with certain properties, for assessing their interchangeability, and for fuel spillage and remedial actions in the environment.

  18. Sensitive enumeration of Listeria monocytogenes and other Listeria species in various naturally contaminated matrices using a membrane filtration method.

    PubMed

    Barre, Léna; Brasseur, Emilie; Doux, Camille; Lombard, Bertrand; Besse, Nathalie Gnanou

    2015-06-01

    For the enumeration of Listeria monocytogenes (L. monocytogenes) in food, a sensitive enumeration method has recently been developed. This method is based on membrane filtration of the food suspension followed by transfer of the filter onto a selective medium to enumerate L. monocytogenes. An evaluation of this method was performed with several categories of foods naturally contaminated with L. monocytogenes. The results obtained with this technique were compared with those obtained from the modified reference EN ISO 11290-2 method for the enumeration of L. monocytogenes in food, and were found to be more precise. In most cases, the filtration method made it possible to examine a greater quantity of food, thus greatly improving the sensitivity of the enumeration. However, it was hardly applicable to some food categories because of filtration problems and background microbiota interference. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Noncontact Measurement of Doping Profile for Bare Silicon

    NASA Astrophysics Data System (ADS)

    Kohno, Motohiro; Matsubara, Hideaki; Okada, Hiroshi; Hirae, Sadao; Sakai, Takamasa

    1998-10-01

    In this study, we evaluate the doping concentrations of bare silicon wafers by noncontact capacitance-voltage (C-V) measurements. The metal-air-insulator-semiconductor (MAIS) method enables the measurement of C-V characteristics of silicon wafers without oxidation and electrode preparation. This method has the advantage that a doping profile close to the wafer surface can be obtained. In our experiment, epitaxial silicon wafers were used to compare the MAIS method with the conventional MIS method. The experimental results obtained from the two methods showed good agreement. Then, doping profiles of boron-doped Czochralski (CZ) wafers were measured by the MAIS method. The result indicated a significant reduction of the doping concentration near the wafer surface. This observation is attributed to the well-known deactivation of boron with atomic hydrogen which permeated the silicon bulk during the polishing process. This deactivation was recovered by annealing in air at 180°C for 120 min.

  20. Boundary element methods for the analysis of crack growth in the presence of residual stress fields

    NASA Astrophysics Data System (ADS)

    Leitao, V. M. A.; Aliabadi, M. H.; Rooke, D. P.; Cook, R.

    1998-06-01

    Two boundary element methods of simulating crack growth in the presence of residual stress fields are presented, and the results are compared to experimental measurements. The first method utilizes linear elastic fracture mechanics (LEFM) and superimposes the solutions due to the applied load and the residual stress field. In this method, the residual stress fields are obtained from an elastoplastic BEM analysis, and numerical weight functions are used to obtain the stress intensity factors due to the fatigue loading. The second method presented is an elastoplastic fracture mechanics (EPFM) approach for crack growth simulation. A nonlinear J-integral is used in the fatigue life calculations. The methods are shown to agree well with experimental measurements of crack growth in prestressed open hole specimens. Results are also presented for the case where the prestress is applied to specimens that have been precracked.

  1. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic-signal denoising method is proposed. Firstly, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; the function is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of the threshold of each layer in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and on measured sunspot signals show that the proposed method filters the noise of the chaotic signal well, and that the intrinsic chaotic characteristics of the original signal are recovered very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for the chaotic signal.
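
    A minimal sketch of wavelet-domain threshold denoising is given below for orientation; it uses an ordinary discrete wavelet transform with a single universal soft threshold (via PyWavelets), not the synchrosqueezed transform or the adaptive per-level thresholds proposed in the paper.

        import numpy as np
        import pywt  # PyWavelets

        def dwt_soft_denoise(signal, wavelet="db4", level=4):
            """Soft-threshold the detail coefficients with the universal threshold."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # Noise scale from the finest detail level (median absolute deviation).
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
            denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(denoised, wavelet)[: len(signal)]

        t = np.linspace(0, 10, 2048)                 # toy stand-in for a chaotic series
        clean = np.sin(2 * np.pi * t) * np.cos(7 * t)
        noisy = clean + 0.2 * np.random.default_rng(0).normal(size=t.size)
        print(np.mean((dwt_soft_denoise(noisy) - clean) ** 2))   # mean squared error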

  2. Comparative analysis of methods for concentrating venom from jellyfish Rhopilema esculentum Kishinouye

    NASA Astrophysics Data System (ADS)

    Li, Cuiping; Yu, Huahua; Feng, Jinhua; Chen, Xiaolin; Li, Pengcheng

    2009-02-01

    In this study, several methods were compared for their efficiency in concentrating venom from the tentacles of the jellyfish Rhopilema esculentum Kishinouye. The results show that the methods using either freezing-dry or gel absorption to remove water and concentrate the venom are not applicable, due to the low concentration of the dissolved compounds. Although the recovery efficiency and the total venom obtained using the dialysis dehydration method are high, some proteins can be lost during the concentrating process. Compared with the lyophilization method, ultrafiltration is a simple way to concentrate the compounds at a high percentage, but the hemolytic activities of the proteins obtained by ultrafiltration appear to be lower. Our results suggest that, overall, lyophilization is the best and recommended method to concentrate venom from the tentacles of jellyfish. It shows not only high recovery efficiency for the venoms but high hemolytic activities as well.

  3. Fracture Toughness of Advanced Ceramics at Room Temperature

    PubMed Central

    Quinn, George D.; Salem, Jonathan; Bar-on, Isa; Cho, Kyu; Foley, Michael; Fang, Ho

    1992-01-01

    This report presents the results obtained by the five U.S. participating laboratories in the Versailles Advanced Materials and Standards (VAMAS) round-robin for fracture toughness of advanced ceramics. Three test methods were used: indentation fracture, indentation strength, and single-edge pre-cracked beam. Two materials were tested: a gas-pressure sintered silicon nitride and a zirconia toughened alumina. Consistent results were obtained with the latter two test methods. Interpretation of fracture toughness in the zirconia alumina composite was complicated by R-curve and environmentally-assisted crack growth phenomena. PMID:28053447

  4. Elasto-Plastic Behavior of Aluminum Foams Subjected to Compression Loading

    NASA Astrophysics Data System (ADS)

    Silva, H. M.; Carvalho, C. D.; Peixinho, N. R.

    2017-05-01

    The non-linear behavior of uniform-size cellular foams made of aluminum is investigated under compressive loads, comparing numerical results obtained with the finite element method (FEM) software ANSYS Workbench and ANSYS Mechanical APDL (ANSYS Parametric Design Language). The numerical model is built in AUTODESK INVENTOR, imported into ANSYS and solved by the Newton-Raphson iterative method. Conditions in ANSYS Mechanical and ANSYS Workbench were kept as similar as possible. The obtained numerical results and the differences between the two programs are presented and discussed.

  5. Inband radar cross section of phased arrays with parallel feeds

    NASA Astrophysics Data System (ADS)

    Flokas, Vassilios

    1994-06-01

    Approximate formulas for the inband radar cross section of arrays with parallel feeds are presented. To obtain the formulas, multiple reflections are neglected, and devices of the same type are assumed to have identical electrical performance. The approximate results were compared to the results obtained using a scattering matrix formulation. Both methods were in agreement in predicting RCS lobe positions, levels, and behavior with scanning. The advantages of the approximate method are its computational efficiency and its flexibility in handling an arbitrary number of coupler levels.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alekseev, I. S.; Ivanov, I. E.; Strelkov, P. S., E-mail: strelkov@fpl.gpi.ru

    A method based on the detection of emission of a dielectric screen with metal microinclusions in open air is applied to visualize the transverse structure of a high-power microwave beam. In contrast to other visualization techniques, the results obtained in this work provide qualitative information not only on the electric field strength, but also on the structure of electric field lines in the microwave beam cross section. The interpretation of the results obtained with this method is confirmed by numerical simulations of the structure of electric field lines in the microwave beam cross section by means of the CARAT code.

  7. Time-dependent solution for axisymmetric flow over a blunt body with ideal gas, CF4, or equilibrium air chemistry

    NASA Technical Reports Server (NTRS)

    Hamilton, H. H., II; Spall, J. R.

    1986-01-01

    A time-asymptotic method has been used to obtain steady-flow solutions for axisymmetric inviscid flow over several blunt bodies including spheres, paraboloids, ellipsoids, and spherically blunted cones. Comparisons with experimental data and results of other computational methods have demonstrated that accurate solutions can be obtained using this approach. The method should prove useful as an analysis tool for comparing with experimental data and for making engineering calculations for blunt reentry vehicles.

  8. Time-dependent solution for axisymmetric flow over a blunt body with ideal gas, CF4, or equilibrium air chemistry

    NASA Astrophysics Data System (ADS)

    Hamilton, H. H., II; Spall, J. R.

    1986-07-01

    A time-asymptotic method has been used to obtain steady-flow solutions for axisymmetric inviscid flow over several blunt bodies including spheres, paraboloids, ellipsoids, and spherically blunted cones. Comparisons with experimental data and results of other computational methods have demonstrated that accurate solutions can be obtained using this approach. The method should prove useful as an analysis tool for comparing with experimental data and for making engineering calculations for blunt reentry vehicles.

  9. On the convergence of a discrete Kirchhoff triangle method valid for shells of arbitrary shape

    NASA Astrophysics Data System (ADS)

    Bernadou, Michel; Eiroa, Pilar Mato; Trouve, Pascal

    1994-10-01

    In a recent paper by the same authors, we have thoroughly described how to extend to the case of general shells the well known DKT (discrete Kirchhoff triangle) methods which are now classically used to solve plate problems. In that paper we have also detailed how to realize the implementation and reported some numerical results obtained for classical benchmarks. The aim of this paper is to prove the convergence of a closely related method and to obtain corresponding error estimates.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, E.W.

    A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.

  11. An Equation-Free Reduced-Order Modeling Approach to Tropical Pacific Simulation

    NASA Astrophysics Data System (ADS)

    Wang, Ruiwen; Zhu, Jiang; Luo, Zhendong; Navon, I. M.

    2009-03-01

    The “equation-free” (EF) method is often used in complex, multi-scale problems. In such cases it is necessary to know the closed form of the required evolution equations for the macroscopic variables of the applied field. Conceptually such equations exist; however, they are not available in closed form. The EF method can bypass this difficulty. This method obtains macroscopic information by running models at the microscopic level. Given an initial macroscopic variable, through lifting we can obtain an associated microscopic state, which may be evolved using Direct Numerical Simulations (DNS); by restriction we can then recover the necessary macroscopic information, and projective integration is used to obtain the desired quantities. In this paper we apply the EF POD-assisted method to the reduced modeling of a large-scale upper ocean circulation in the tropical Pacific domain. The computational cost is reduced dramatically. Compared with the POD method, the method provided more accurate results and it did not require the availability of any explicit equations or the right-hand side (RHS) of the evolution equation.
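
    The lift-evolve-restrict-extrapolate loop of projective integration described above can be sketched generically; the microscopic model, lifting and restriction operators below are toy placeholders chosen for illustration, not the ocean model or POD basis of the paper.

        import numpy as np

        def projective_integration(macro0, lift, micro_step, restrict,
                                   dt_micro, n_micro, dt_proj, n_outer):
            """Coarse time stepper: short microscopic bursts followed by a large
            extrapolation (projective) step of the macroscopic variable."""
            macro = np.asarray(macro0, float)
            history = [macro.copy()]
            for _ in range(n_outer):
                micro = lift(macro)                      # macro -> consistent micro state
                coarse = []
                for _ in range(n_micro):                 # short burst of micro dynamics
                    micro = micro_step(micro, dt_micro)
                    coarse.append(restrict(micro))       # micro -> macro observable
                slope = (coarse[-1] - coarse[-2]) / dt_micro   # estimated d(macro)/dt
                macro = coarse[-1] + dt_proj * slope           # projective (extrapolation) step
                history.append(macro.copy())
            return np.array(history)

        # Toy placeholders: 50 micro degrees of freedom relaxing toward zero.
        def lift(m): return m + 0.01 * np.random.default_rng(0).normal(size=50)
        def micro_step(x, dt): return x + dt * (-0.1 * x)
        def restrict(x): return np.array([x.mean()])

        print(projective_integration(np.array([1.0]), lift, micro_step, restrict,
                                     dt_micro=0.01, n_micro=10, dt_proj=0.5, n_outer=5)[:, 0])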

  12. A study of selective spectrophotometric methods for simultaneous determination of Itopride hydrochloride and Rabeprazole sodium binary mixture: Resolving sever overlapping spectra

    NASA Astrophysics Data System (ADS)

    Mohamed, Heba M.

    2015-02-01

    Itopride hydrochloride (IT) and Rabeprazole sodium (RB) are co-formulated together for the treatment of gastro-esophageal reflux disease. Three simple, specific and accurate spectrophotometric methods were applied and validated for the simultaneous determination of Itopride hydrochloride (IT) and Rabeprazole sodium (RB), namely the constant center (CC), ratio difference (RD) and mean centering of ratio spectra (MCR) spectrophotometric methods. Linear correlations were obtained in the range of 10-110 μg/μL for Itopride hydrochloride and 4-44 μg/mL for Rabeprazole sodium. No preliminary separation steps were required prior to the analysis of the two drugs using the proposed methods. Specificity was investigated by analyzing synthetic mixtures containing the two cited drugs and their capsule dosage form. The obtained results were statistically compared with those obtained by the reported method; no significant difference was found with respect to accuracy and precision. The three methods were validated in accordance with ICH guidelines and can be used in quality control laboratories for IT and RB.

  13. A study of selective spectrophotometric methods for simultaneous determination of Itopride hydrochloride and Rabeprazole sodium binary mixture: Resolving sever overlapping spectra.

    PubMed

    Mohamed, Heba M

    2015-02-05

    Itopride hydrochloride (IT) and Rabeprazole sodium (RB) are co-formulated together for the treatment of gastro-esophageal reflux disease. Three simple, specific and accurate spectrophotometric methods were applied and validated for the simultaneous determination of Itopride hydrochloride (IT) and Rabeprazole sodium (RB), namely the constant center (CC), ratio difference (RD) and mean centering of ratio spectra (MCR) spectrophotometric methods. Linear correlations were obtained in the range of 10-110 μg/μL for Itopride hydrochloride and 4-44 μg/mL for Rabeprazole sodium. No preliminary separation steps were required prior to the analysis of the two drugs using the proposed methods. Specificity was investigated by analyzing synthetic mixtures containing the two cited drugs and their capsule dosage form. The obtained results were statistically compared with those obtained by the reported method; no significant difference was found with respect to accuracy and precision. The three methods were validated in accordance with ICH guidelines and can be used in quality control laboratories for IT and RB. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. An automated microplate-based method for monitoring DNA strand breaks in plasmids and bacterial artificial chromosomes

    PubMed Central

    Rock, Cassandra; Shamlou, Parviz Ayazi; Levy, M. Susana

    2003-01-01

    A method is described for high-throughput monitoring of DNA backbone integrity in plasmids and artificial chromosomes in solution. The method is based on the denaturation properties of double-stranded DNA in alkaline conditions and uses PicoGreen fluorochrome to monitor denaturation. In the present method, fluorescence enhancement of PicoGreen at pH 12.4 is normalised by its value at pH 8 to give a ratio that is proportional to the average backbone integrity of the DNA molecules in the sample. A good regression fit (r2 > 0.98) was obtained when results derived from the present method and those derived from agarose gel electrophoresis were compared. Spiking experiments indicated that the method is sensitive enough to detect a proportion of 6% (v/v) molecules with an average of less than two breaks per molecule. Under manual operation, validation parameters such as inter-assay and intra-assay variation gave values of <5% coefficient of variation. Automation of the method showed equivalence to the manual procedure with high reproducibility and low variability within wells. The method described requires as little as 0.5 ng of DNA per well, and a 96-well microplate can be analysed in 12 min, providing an attractive option for analysis of high molecular weight vectors. A preparation of a 116 kb bacterial artificial chromosome was subjected to chemical and shear degradation and DNA integrity was tested using the method. Good correlation was obtained between the fluorescence response and both the duration of chemical degradation and the applied shear rate. Results obtained from pulsed-field electrophoresis of sheared samples were in agreement with those obtained using the microplate-based method. PMID:12771229
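
    The normalisation at the heart of the assay, fluorescence at denaturing pH divided by fluorescence at neutral pH, can be written as a short routine; the blank handling and the plate readings below are illustrative assumptions.

        def integrity_ratio(f_ph12_4, f_ph8, blank_ph12_4=0.0, blank_ph8=0.0):
            """Blank-corrected PicoGreen fluorescence at pH 12.4 normalised by pH 8.

            Per the assay description, the ratio is proportional to average backbone
            integrity: molecules carrying strand breaks denature in alkali and lose
            fluorescence, lowering the ratio."""
            return (f_ph12_4 - blank_ph12_4) / (f_ph8 - blank_ph8)

        # Hypothetical 96-well readings (arbitrary fluorescence units).
        wells = {"intact control": (900.0, 1000.0), "sheared sample": (350.0, 1000.0)}
        for name, (f_alkaline, f_neutral) in wells.items():
            print(name, round(integrity_ratio(f_alkaline, f_neutral), 2))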

  15. Research on Methods of High Coherent Target Extraction in Urban Area Based on Psinsar Technology

    NASA Astrophysics Data System (ADS)

    Li, N.; Wu, J.

    2018-04-01

    PSInSAR technology has been widely applied in ground deformation monitoring. Accurate identification of Persistent Scatterers (PS) is key to the success of PSInSAR data processing. In this paper, the theoretical models and specific algorithms of PS point extraction methods are summarized, and the characteristics and applicable conditions of each method, such as the coherence coefficient threshold method, the amplitude threshold method, the dispersion of amplitude method and the dispersion of intensity method, are analyzed. Based on the merits and demerits of the different methods, an improved method for PS point extraction in urban areas is proposed, which simultaneously uses the backscattering characteristics, amplitude stability and phase stability to find PS points among all pixels. Shanghai city is chosen as an example area for checking the improvements of the new method. The results show that the PS points extracted by the new method have high quality and high stability and exhibit the expected strong scattering characteristics. Based on these high-quality PS points, the deformation rate along the line-of-sight (LOS) in the central urban area of Shanghai is obtained using 35 COSMO-SkyMed X-band SAR images acquired from 2008 to 2010; it varies from -14.6 mm/year to 4.9 mm/year. There is a large subsidence funnel at the boundary between the Hongkou and Yangpu districts, with a maximum subsidence rate of more than 14 mm per year. The obtained ground subsidence rates are also compared with the results of spirit leveling and show good consistency. Our new method for PS point extraction is more reasonable and can improve the accuracy of the obtained deformation results.
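
    One of the classical selection criteria listed above, the dispersion-of-amplitude test, can be sketched compactly: across a co-registered stack of SAR amplitude images, a pixel is accepted as a PS candidate when its amplitude dispersion index (standard deviation over mean) is small. The 0.25 threshold below is the value commonly quoted in the PS literature and is used here only for illustration.

        import numpy as np

        def ps_candidates(amplitude_stack, threshold=0.25):
            """Select PS candidates by the amplitude dispersion index.

            amplitude_stack : array (n_images, rows, cols) of calibrated amplitudes.
            Returns a boolean candidate mask and the dispersion index map."""
            mean_amp = amplitude_stack.mean(axis=0)
            std_amp = amplitude_stack.std(axis=0)
            dispersion = std_amp / (mean_amp + 1e-12)
            return dispersion < threshold, dispersion

        rng = np.random.default_rng(0)                    # synthetic 35-image stack
        stack = rng.rayleigh(1.0, size=(35, 64, 64))
        stack[:, 10, 20] = 8.0 + rng.normal(0, 0.3, 35)   # one stable, bright scatterer
        mask, da = ps_candidates(stack)
        print(mask.sum(), "candidate pixels; D_A at (10, 20) =", round(da[10, 20], 3))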

  16. Drying effects on the antioxidant properties of polysaccharides obtained from Agaricus blazei Murrill.

    PubMed

    Wu, Songhai; Li, Feng; Jia, Shaoyi; Ren, Haitao; Gong, Guili; Wang, Yanyan; Lv, Zesheng; Liu, Yong

    2014-03-15

    Three polysaccharides (ABMP-F, ABMP-V, ABMP-A) were obtained from Agaricus blazei Murrill by freeze drying, vacuum drying and air drying, respectively. Their chemical compositions were examined, and antioxidant activities were investigated on the basis of assays for hydroxyl radical, DPPH radical and ABTS free radical scavenging ability and an assay for Fe(2+)-chelating ability. The results showed that the three ABMPs have different physicochemical and antioxidant properties. Compared with the air drying and vacuum drying methods, the freeze drying method resulted in ABMP with higher neutral sugar content, polysaccharide yield and uronic acid content, and stronger hydroxyl radical, DPPH radical and ABTS radical scavenging and Fe(2+)-chelating abilities. Consequently, Agaricus blazei Murrill polysaccharides are natural antioxidants, and freeze drying is a good choice for the preparation of such polysaccharides and should be used to produce antioxidants for the food industry. Copyright © 2014. Published by Elsevier Ltd.

  17. Flows of Newtonian and Power-Law Fluids in Symmetrically Corrugated Capillary Fissures and Tubes

    NASA Astrophysics Data System (ADS)

    Walicka, A.

    2018-02-01

    In this paper, an analytical method is presented for deriving the relationships between the pressure drop and the volumetric flow rate in laminar flow of Newtonian and power-law fluids through symmetrically corrugated capillary fissures and tubes. This method, which is general with regard to fluid and capillary shape, can also be used as a foundation for other fluids, fissures and tubes. It can also be a good basis for numerical integration when analytical expressions are hard to obtain due to mathematical complexity. Five converging-diverging or diverging-converging geometries, viz. wedge and cone, parabolic, hyperbolic, hyperbolic cosine and cosine curve, are used as examples to illustrate the application of this method. For the wedge and cone geometry the present results for the power-law fluid were compared with the results obtained by another method; this comparison indicates good compatibility between the two sets of results.
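
    For orientation, the building block of such pressure-drop/flow-rate relations is the fully developed laminar solution for a power-law fluid (consistency m, flow behaviour index n) in a uniform circular tube of radius R and length L:

        Q = \frac{\pi n R^{3}}{3n+1}\left(\frac{\Delta P\, R}{2 m L}\right)^{1/n},

    which reduces to the Hagen-Poiseuille result Q = \pi R^{4} \Delta P / (8 \mu L) for a Newtonian fluid (n = 1, m = μ). A natural route to the corrugated geometries is then to integrate the local form of this relation along the slowly varying radius, although the paper's exact derivation is not reproduced here.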

  18. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three-dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) adding section design variables to the sensitivity equation in the form of multiple right-hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.

  19. Improved methods of vibration analysis of pretwisted, airfoil blades

    NASA Technical Reports Server (NTRS)

    Subrahmanyam, K. B.; Kaza, K. R. V.

    1984-01-01

    Vibration analysis of pretwisted blades of asymmetric airfoil cross section is performed by using two mixed variational approaches. Numerical results obtained from these two methods are compared to those obtained from an improved finite difference method and also to those given by the ordinary finite difference method. The relative merits, convergence properties and accuracies of all four methods are studied and discussed. The effects of asymmetry and pretwist on natural frequencies and mode shapes are investigated. The improved finite difference method is shown to be far superior to the conventional finite difference method in several respects. Close lower bound solutions are provided by the improved finite difference method for untwisted blades with a relatively coarse mesh while the mixed methods have not indicated any specific bound.

  20. An efficient computational method for solving nonlinear stochastic Itô integral equations: Application for stochastic problems in physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions together with their stochastic operational matrix of Itô-integration is proposed for solving nonlinear stochastic Itô integral equations in large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms problems under consideration into nonlinear systems of algebraic equations which can be simply solved. Error analysis of the proposed method is investigated and also the efficiency of this method is shown on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of the stochastic population growth models and stochastic pendulum problem.

  1. Lateral Load Capacity of Piles: A Comparative Study Between Indian Standards and Theoretical Approach

    NASA Astrophysics Data System (ADS)

    Jayasree, P. K.; Arun, K. V.; Oormila, R.; Sreelakshmi, H.

    2018-05-01

    As per Indian Standards, laterally loaded piles are usually analysed using the method adopted by IS 2911-2010 (Part 1/Section 2), but practising engineers are of the opinion that the IS method is very conservative in design. This work aims at determining the extent to which the conventional IS design approach is conservative. This is done through a comparative study between the IS approach and a theoretical model based on Vesic's equation. Bore log details for six different bridges were collected from the Kerala Public Works Department. Cast-in-situ fixed-head piles embedded in three soil conditions, covering both end-bearing and friction piles, were considered and analysed separately. The piles were also modelled in STAAD.Pro software based on the IS approach, and the results were validated using the Matlock and Reese (In Proceedings of fifth international conference on soil mechanics and foundation engineering, 1961) equation. The results were presented as the percentage variation in the values of bending moment and deflection obtained by the different methods. The results obtained from the mathematical model based on Vesic's equation and those obtained from the IS approach were compared, and the IS method was found to be uneconomical and conservative.

  2. Prediction of forces and moments for hypersonic flight vehicle control effectors

    NASA Technical Reports Server (NTRS)

    Maughmer, Mark D.; Long, Lyle N.; Pagano, Peter J.

    1991-01-01

    The development of methods for predicting flight control forces and moments for hypersonic vehicles included a preliminary assessment of subsonic/supersonic panel methods and hypersonic local flow inclination methods for such predictions. While these findings clearly indicate the usefulness of such methods for conceptual design activities, deficiencies exist in some areas. Thus, a second phase of research was proposed in which a better understanding is sought of the reasons for the successes and failures of the methods considered, particularly for the cases at hypersonic Mach numbers. To obtain this additional understanding, a more careful study of the results obtained relative to the methods used was undertaken. In addition, where appropriate and necessary, a more complete modeling of the flow was performed using well-proven methods of computational fluid dynamics. As a result, assessments will be made which are more quantitative than those of phase 1 regarding the uncertainty involved in the prediction of the aerodynamic derivatives. In addition, with improved understanding, it is anticipated that improvements resulting in better accuracy will be made to the simple force and moment prediction methods.

  3. Comparison of reproducibility of natural head position using two methods.

    PubMed

    Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik

    2012-01-01

    Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of the final treatment outcome. The purpose of this study was to evaluate and compare the reproducibility, with minimum variation, of natural head position obtained using two methods, i.e. the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using the two methods of obtaining natural head position: (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months.
    Inclusion criteria:
    • Subjects were randomly selected, aged between 18 and 26 years.
    Exclusion criteria:
    • History of orthodontic treatment
    • Any history of respiratory tract problems or chronic mouth breathing
    • Any congenital deformity
    • History of traumatically induced deformity
    • History of myofascial pain syndrome
    • Any previous history of head and neck surgery.
    The results showed that the two methods of obtaining natural head position, the mirror method and the fluid level device method, were comparable, but reproducibility was highest with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot, and the variance was lowest with the fluid level device method, as shown by precision and the Pearson correlation. In conclusion, the two methods were comparable without any significant difference, and the fluid level device method was more reproducible and showed less variance than the mirror method for obtaining natural head position.

  4. Fast nuclear staining of head hair roots as a screening method for successful STR analysis in forensics.

    PubMed

    Lepez, Trees; Vandewoestyne, Mado; Van Hoofstat, David; Deforce, Dieter

    2014-11-01

    The success rate of STR profiling of hairs found at a crime scene is quite low, and negative results of hair analysis are frequently reported. To increase the success rate of DNA analysis of hairs in forensics, nuclei in hair roots can be counted after staining the hair root with DAPI. Two staining methods were tested: a longer method with two 1-h incubations, in a DAPI solution and a wash solution respectively, and a fast, direct staining of the hair root on microscope slides. The two staining methods were not significantly different. The results of the STR analysis for both procedures showed that 20 nuclei are necessary to obtain at least partial STR profiles. When more than 50 nuclei were counted, full STR profiles were always obtained. In 96% of the cases where no nuclei were detected, no STR profile could be obtained. However, 4% of the DAPI-negative hair roots resulted in at least partial STR profiles. Therefore, each forensic case has to be evaluated separately as a function of the evidential value of the hair found. The fast staining method was applied in 36 forensic cases on 279 hairs in total. A fast screening method using DAPI can thus be used to increase the success rate of hair analysis in forensics. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  5. A Computer-Aided Diagnosis System for Measuring Carotid Artery Intima-Media Thickness (IMT) Using Quaternion Vectors.

    PubMed

    Kutbay, Uğurhan; Hardalaç, Fırat; Akbulut, Mehmet; Akaslan, Ünsal; Serhatlıoğlu, Selami

    2016-06-01

    This study aims to investigate adjustable distant fuzzy c-means segmentation of carotid Doppler images, as well as quaternion-based convolution filters and saliency mapping procedures. We developed imaging software that simplifies the measurement of carotid artery intima-media thickness (IMT) on saliency mapping images. Additionally, specialists evaluated the original images and compared them with the saliency mapping images. In the present research, we conducted imaging studies of 25 carotid Doppler images obtained by the Department of Cardiology at Fırat University. After implementing fuzzy c-means segmentation and quaternion-based convolution on all Doppler images, we obtained a model that can be analyzed easily by doctors using a bottom-up saliency model. These methods were applied to 25 carotid Doppler images and then interpreted by specialists. In the present study, we used color-filtering methods to obtain carotid color images. Saliency mapping was performed on the obtained images, and the carotid artery IMT was detected and interpreted on the images obtained from both methods and on the raw images, as shown in the Results. These results were also evaluated using the mean square error (MSE) against the raw IMT images, and the method giving the best performance was Quaternion Based Saliency Mapping (QBSM). MSEs of 0.0014 and 0.000191 mm(2) were obtained for the artery lumen diameters and plaque diameters in the carotid arteries, respectively. We found that computer-based image processing methods applied to carotid Doppler could aid doctors in their decision-making process. We developed software that could ease the process of measuring carotid IMT for cardiologists and help them to evaluate their findings.

  6. A direct method for computing extreme value (Gumbel) parameters for gapped biological sequence alignments.

    PubMed

    Quinn, Terrance; Sinkala, Zachariah

    2014-01-01

    We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
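
    The record does not give the authors' mixture-distribution procedure, but once the Gumbel parameters λ and K of a scoring system are known, the significance of a gapped alignment score is usually assessed with the standard Karlin-Altschul relation E = K·m·n·exp(−λS). The sketch below assumes hypothetical parameter values and is only an illustration of that use, not the method of the paper.

```python
import math

def evalue(score, m, n, lam, K):
    """Expected number of chance alignments with score >= `score`,
    for query length m and database length n, given Gumbel parameters lam, K."""
    return K * m * n * math.exp(-lam * score)

def pvalue(score, m, n, lam, K):
    """P-value of the score under the extreme value (Gumbel) model."""
    return 1.0 - math.exp(-evalue(score, m, n, lam, K))

# Hypothetical Gumbel parameters for a gapped scoring system (illustration only)
lam, K = 0.267, 0.041
print(evalue(48, m=250, n=1_000_000, lam=lam, K=K))
print(pvalue(48, m=250, n=1_000_000, lam=lam, K=K))
```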

  7. Solution of linear systems by a singular perturbation technique

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.

    1976-01-01

    An approximate solution is obtained for a singularly perturbed system of initial valued, time invariant, linear differential equations with multiple boundary layers. Conditions are stated under which the approximate solution converges uniformly to the exact solution as the perturbation parameter tends to zero. The solution is obtained by the method of matched asymptotic expansions. Use of the results for obtaining approximate solutions of general linear systems is discussed. An example is considered to illustrate the method and it is shown that the formulas derived give a readily computed uniform approximation.

  8. Relative Contributions of Three Descriptive Methods: Implications for Behavioral Assessment

    ERIC Educational Resources Information Center

    Pence, Sacha T.; Roscoe, Eileen M.; Bourret, Jason C.; Ahearn, William H.

    2009-01-01

    This study compared the outcomes of three descriptive analysis methods--the ABC method, the conditional probability method, and the conditional and background probability method--to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior…

  9. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  10. High-resolution differential mode delay measurement for a multimode optical fiber using a modified optical frequency domain reflectometer.

    PubMed

    Ahn, T-J; Kim, D

    2005-10-03

    A novel differential mode delay (DMD) measurement technique for a multimode optical fiber based on optical frequency domain reflectometry (OFDR) has been proposed. We have obtained a high-resolution DMD value of 0.054 ps/m for a commercial multimode optical fiber with length of 50 m by using a modified OFDR in a Mach-Zehnder interferometer structure with a tunable external cavity laser and a Mach-Zehnder interferometer instead of Michelson interferometer. We have also compared the OFDR measurement results with those obtained using a traditional time-domain measurement method. DMD resolution with our proposed OFDR technique is more than an order of magnitude better than a result obtainable with a conventional time-domain method.

  11. A Modified Jaeger's Method for Measuring Surface Tension.

    ERIC Educational Resources Information Center

    Ntibi, J. Effiom-Edem

    1991-01-01

    A static method of measuring the surface tension of a liquid is presented. Jaeger's method is modified by replacing the pressure source with a variable pressure head. By using this method, stationary air bubbles are obtained thus resulting in controllable external parameters. (Author/KR)

  12. High quality image-pair-based deblurring method using edge mask and improved residual deconvolution

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting

    2017-04-01

    The image deconvolution problem is a challenging task in the field of image processing. Using an image pair can provide a better restored image than deblurring from a single blurred image. In this paper, a high quality image-pair-based deblurring method is presented using an improved RL algorithm and a gain-controlled residual deconvolution technique. The input image pair includes a non-blurred noisy image and a blurred image captured for the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around the edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework obtains superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
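
    The paper's full pipeline (edge mask, saliency-weighted gain map, residual deconvolution) is not reproduced here; the sketch below only illustrates the basic Richardson-Lucy step it builds on, using scikit-image on a toy image with an assumed Gaussian PSF (the paper instead estimates the blur kernel from the noisy/blurred image pair).

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import convolve2d
from skimage import data, img_as_float
from skimage.restoration import richardson_lucy

# Toy example: blur a test image with an assumed Gaussian PSF, then deblur it.
image = img_as_float(data.camera())

# Small normalized Gaussian PSF (assumption, for illustration only)
psf = np.zeros((15, 15))
psf[7, 7] = 1.0
psf = gaussian_filter(psf, sigma=2.0)
psf /= psf.sum()

blurred = convolve2d(image, psf, mode="same", boundary="symm")

# Basic Richardson-Lucy deconvolution (the preliminary deblurring step)
restored = richardson_lucy(blurred, psf, 30)
print(restored.shape, float(restored.min()), float(restored.max()))
```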

  13. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
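
    A minimal sketch of the double cross-validation idea, assuming scikit-learn and synthetic data: the sample is split into two halves, a regression equation is derived on each half and applied to the opposite half, and the correlation between predicted and observed criterion scores in each direction indicates how stable the weights are.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: 200 cases, 4 predictors, one criterion
X = rng.normal(size=(200, 4))
y = X @ np.array([0.6, 0.3, -0.2, 0.1]) + rng.normal(scale=1.0, size=200)

# Split the sample into two random halves
idx = rng.permutation(len(y))
half_a, half_b = idx[:100], idx[100:]

def cross_applied_r(train, test):
    """Fit on one half, predict the other half, return r(observed, predicted)."""
    model = LinearRegression().fit(X[train], y[train])
    pred = model.predict(X[test])
    return np.corrcoef(y[test], pred)[0, 1]

# Double cross-validation: both directions; similar r values suggest stable weights
r_ab = cross_applied_r(half_a, half_b)
r_ba = cross_applied_r(half_b, half_a)
print(round(r_ab, 3), round(r_ba, 3))
```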

  14. A convenient spectrophotometric assay for the determination of l-ergothioneine in blood

    PubMed Central

    Carlsson, Jan; Kierstan, Marek P. J.; Brocklehurst, Keith

    1974-01-01

    1. A convenient spectrophotometric assay for the determination of l-ergothioneine in solution including deproteinized blood haemolysate was developed. 2. The method consists of deproteinization by heat precipitation and Cu2+-catalysed oxidation of thiols such as glutathione and of l-ascorbic acid, both in alkaline media, and titration of l-ergothioneine (which is not oxidized under these conditions) by its virtually instantaneous reaction with 2,2′-dipyridyl disulphide at pH1. 3. This method and the results obtained with it for the analysis of human, horse, sheep and pig blood are compared with existing methods of l-ergothioneine analysis and the results obtained thereby. PMID:4463946

  15. State-to-state quantum dynamics of the F + HCl (vi = 0, ji = 0) → HF(vf, jf) + Cl reaction on the ground state potential energy surface.

    PubMed

    Li, Anyang; Guo, Hua; Sun, Zhigang; Kłos, Jacek; Alexander, Millard H

    2013-10-07

    The state-to-state reaction dynamics of the title reaction is investigated on the ground electronic state potential energy surface using two quantum dynamical methods. The results obtained using the Chebyshev real wave packet method are in excellent agreement with those obtained using the time-independent method, except at low translational energies. It is shown that this exothermic hydrogen abstraction reaction is direct, resulting in a strong back-scattered bias in the product angular distribution. The HF product is highly excited internally. Agreement with available experimental data is only qualitative. We discuss several possible causes of disagreement with experiment.

  16. One-dimensional backreacting holographic superconductors with exponential nonlinear electrodynamics

    NASA Astrophysics Data System (ADS)

    Ghotbabadi, B. Binaei; Zangeneh, M. Kord; Sheykhi, A.

    2018-05-01

    In this paper, we investigate the effects of nonlinear exponential electrodynamics as well as backreaction on the properties of one-dimensional s-wave holographic superconductors. We carry out our study both analytically and numerically. In the analytical study we employ the Sturm-Liouville method, while in the numerical approach we perform the shooting method. We obtain a relation between the critical temperature and the chemical potential analytically. Our results show good agreement between the analytical and numerical methods. We observe that increasing the strength of both the nonlinearity and the backreaction parameters makes the formation of the condensate in the black hole background harder and lowers the critical temperature. These results are consistent with those obtained for two-dimensional s-wave holographic superconductors.

  17. Radioisotope measurements of the liquid-gas flow in the horizontal pipeline using phase method

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Jaszczur, Marek; Petryka, Leszek; Świsulski, Dariusz

    2018-06-01

    The paper presents application of the gamma-absorption method to a two-phase liquid-gas flow investigation in a horizontal pipeline. The water-air mixture was examined by a set of two Am-241 radioactive sources and two NaI(Tl) scintillation probes. For analysis of the electrical signals obtained from detectors the cross-spectral density function (CSDF) was applied. Results of the gas phase average velocity measurements for CSDF were compared with results obtained by application of the classical cross-correlation function (CCF). It was found that the combined uncertainties of the gas-phase velocity in the presented experiments did not exceed 1.6% for CSDF method and 5.5% for CCF.
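
    A rough illustration of how a transit-time delay, and hence an average velocity, can be read from the phase of the cross-spectral density of two detector signals. The signals, distance and frequencies below are synthetic assumptions, not the experimental data of this record; scipy.signal.csd supplies the CSDF estimate.

```python
import numpy as np
from scipy.signal import csd

fs = 1000.0          # sampling frequency, Hz (assumed)
L = 0.1              # axial distance between the two scintillation probes, m (assumed)
true_delay = 0.02    # s, i.e. 5 m/s in this synthetic example

# Synthetic detector signals: correlated noise and a delayed copy plus noise
rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)
x = np.convolve(x, np.ones(25) / 25, mode="same")   # smooth -> band-limited signal
y = np.roll(x, int(true_delay * fs)) + 0.1 * rng.normal(size=n)

# Cross-spectral density; the slope of the phase versus angular frequency gives the delay
f, Pxy = csd(x, y, fs=fs, nperseg=2048)
band = (f > 1) & (f < 40)                            # coherent low-frequency band
phase = np.unwrap(np.angle(Pxy[band]))
slope = np.polyfit(2 * np.pi * f[band], phase, 1)[0]
delay = abs(slope)                                   # sign depends on which probe leads
print("estimated delay [s]:", delay, " velocity [m/s]:", L / delay)
```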

  18. Performance of three reflectance calibration methods for airborne hyperspectral spectrometer data.

    PubMed

    Miura, Tomoaki; Huete, Alfredo R

    2009-01-01

    In this study, the performances and accuracies of three methods for converting airborne hyperspectral spectrometer data to reflectance factors were characterized and compared. The "reflectance mode (RM)" method, which calibrates a spectrometer against a white reference panel prior to mounting on an aircraft, resulted in spectral reflectance retrievals that were biased and distorted. The magnitudes of these bias errors and distortions varied significantly, depending on time of day and length of the flight campaign. The "linear-interpolation (LI)" method, which converts airborne spectrometer data by taking a ratio of linearly-interpolated reference values from the preflight and post-flight reference panel readings, resulted in precise, but inaccurate reflectance retrievals. These reflectance spectra were not distorted, but were subject to bias errors of varying magnitudes dependent on the flight duration length. The "continuous panel (CP)" method uses a multi-band radiometer to obtain continuous measurements over a reference panel throughout the flight campaign, in order to adjust the magnitudes of the linearly-interpolated reference values from the preflight and post-flight reference panel readings. The CP method was found to be the most accurate and reliable of the three reflectance calibration methods for airborne hyperspectral reflectance retrievals. The performance of the CP method in retrieving accurate reflectance factors was consistent throughout the day and for various flight durations. Based on the dataset analyzed in this study, the uncertainty of the CP method has been estimated to be 0.0025 ± 0.0005 reflectance units for the wavelength regions not affected by atmospheric absorptions. The RM method can produce reasonable results only for a very short-term flight (e.g., < 15 minutes) conducted around local solar noon. The flight duration should be kept shorter than 30 minutes for the LI method to produce results with reasonable accuracies. An important advantage of the CP method is that it can be used for long-duration flight campaigns (e.g., 1-2 hours). Although this study focused on reflectance calibration of airborne spectrometer data, the methods evaluated in this study and the results obtained are directly applicable to ground spectrometer measurements.
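
    The following sketch is one plausible reading of the LI and CP corrections described above, with made-up single-band radiance numbers rather than the authors' data: the LI reference is a linear interpolation between the preflight and post-flight panel readings, and the CP adjustment rescales that reference by the ratio of the continuous panel radiometer reading to its own interpolated value.

```python
import numpy as np

def li_reference(t, t_pre, panel_pre, t_post, panel_post):
    """Linearly interpolate the white-panel radiance between the preflight
    and post-flight readings (the LI method)."""
    w = (t - t_pre) / (t_post - t_pre)
    return (1 - w) * panel_pre + w * panel_post

# Hypothetical single-band example (radiances in arbitrary units)
t_pre, t_post = 0.0, 90.0            # minutes
panel_pre, panel_post = 120.0, 132.0 # spectrometer panel readings, pre/post flight
target, t = 41.0, 30.0               # airborne target radiance measured at t = 30 min

refl_li = target / li_reference(t, t_pre, panel_pre, t_post, panel_post)

# CP adjustment: a multi-band radiometer tracks the panel continuously; its reading
# at time t, relative to its own interpolated pre/post value, rescales the reference.
cp_panel_at_t = 128.0                                           # continuous reading at t
cp_panel_interp = li_reference(t, t_pre, 126.0, t_post, 131.0)  # same radiometer, pre/post
refl_cp = target / (li_reference(t, t_pre, panel_pre, t_post, panel_post)
                    * cp_panel_at_t / cp_panel_interp)

print(round(refl_li, 4), round(refl_cp, 4))
```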

  19. The Hubbard Model and Piezoresistivity

    NASA Astrophysics Data System (ADS)

    Celebonovic, V.; Nikolic, M. G.

    2018-02-01

    Piezoresistivity was discovered in the nineteenth century. Numerous applications of this phenomenon exist nowadays. The aim of the present paper is to explore the possibility of applying the Hubbard model to theoretical work on piezoresistivity. Results are encouraging, in the sense that numerical values of the strain gauge factor obtained by using the Hubbard model agree with results obtained by other methods. The calculation is simplified by the fact that it uses results for the electrical conductivity of 1D systems previously obtained within the Hubbard model by one of the present authors.

  20. Bending behaviors of fully covered biodegradable polydioxanone biliary stent for human body by finite element method.

    PubMed

    Liu, Yanhui; Zhu, Guoqing; Yang, Huazhe; Wang, Conger; Zhang, Peihua; Han, Guangting

    2018-01-01

    This paper presents a study of the bending flexibility of fully covered biodegradable polydioxanone biliary stents (FCBPBSs) developed for the human body. To investigate the relationship between the bending load and the structure parameters (monofilament diameter and braid-pin number), biodegradable polydioxanone biliary stents derived from a braiding method were covered with a membrane prepared via an electrospinning method, and nine FCBPBSs were then obtained for bending tests to evaluate the bending flexibility. In addition, nine numerical models based on the actual biliary stents were established and the bending load was calculated through the finite element method. Results demonstrate that the simulation and experimental results are in good agreement with each other, indicating that the simulation results can provide a useful reference for the investigation of biliary stents. Furthermore, the stress distribution on the FCBPBSs was studied, and the plastic dissipation analysis and plastic strain of the FCBPBSs were obtained via the bending simulation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Surveying immigrants without sampling frames - evaluating the success of alternative field methods.

    PubMed

    Reichel, David; Morales, Laura

    2017-01-01

    This paper evaluates the sampling methods of an international survey, the Immigrant Citizens Survey, which aimed at surveying immigrants from outside the European Union (EU) in 15 cities in seven EU countries. In five countries, no sample frame was available for the target population. Consequently, alternative ways to obtain a representative sample had to be found. In three countries 'location sampling' was employed, while in two countries traditional methods were used with adaptations to reach the target population. The paper assesses the main methodological challenges of carrying out a survey among a group of immigrants for whom no sampling frame exists. The samples of the survey in these five countries are compared to results of official statistics in order to assess the accuracy of the samples obtained through the different sampling methods. It can be shown that alternative sampling methods can provide meaningful results in terms of core demographic characteristics although some estimates differ to some extent from the census results.

  2. Atomic mean-square displacement of a solid: A Green's-function approach

    NASA Astrophysics Data System (ADS)

    Shukla, R. C.; Hübschle, Hermann

    1989-07-01

    We have presented a Green's-function method of calculating the atomic mean-square displacement (MSD) of a solid. The method effectively sums a class of all anharmonic contributions to the MSD. From the point of view of perturbation theory (PT) our expression for MSD includes the lowest-order (λ2) PT contributions (cubic and quartic) with correct numerical coefficients. The numerical results obtained by this method in the high-temperature limit for a fcc nearest-neighbor Lennard-Jones-solid model are in excellent agreement with the Monte Carlo (MC) method for the same model over the entire temperature range of the solid. Highly accurate results for the order-λ2 PT contributions to MSD are obtained by eliminating the uncertainty in the convergence of the cubic contributions in the earlier work of Heiser, Shukla, and Cowly and they are now in much better agreement with the MC results but still inferior to the Green's-function method at the highest temperature.

  3. Numerical study on the Welander oscillatory natural circulation problem using high-order numerical methods

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Kim, Seung Jun

    2016-11-16

    In this study, the classical Welander oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes. A theoretical stability map was also originally derived from the stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate, with the high-order methods giving much smaller numerical errors than the low-order ones. For the stability analysis, the high-order numerical methods could perfectly predict the stability map, while the low-order numerical methods failed to do so: for all theoretically unstable cases, the low-order methods predicted them to be stable. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest in many future nuclear reactor designs.

  4. Approximation of the exponential integral (well function) using sampling methods

    NASA Astrophysics Data System (ADS)

    Baalousha, Husam Musa

    2015-04-01

    Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
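
    A minimal sketch of the sampling idea for the well function, assuming SciPy's quasi-Monte Carlo module: after the substitution t = 1/u, E1(x) becomes an integral over (0, 1) that can be averaged over Latin Hypercube points and compared against scipy.special.exp1 as the benchmark. This illustrates the approach only; it is not the authors' OA or OA-LH designs.

```python
import numpy as np
from scipy.stats import qmc
from scipy.special import exp1

def well_function_lhs(x, n=4096, seed=0):
    """Estimate E1(x) = integral_1^inf exp(-x*t)/t dt by Latin Hypercube sampling.

    With t = 1/u the integral becomes integral_0^1 exp(-x/u)/u du, which is
    averaged over LHS points u in (0, 1).
    """
    sampler = qmc.LatinHypercube(d=1, seed=seed)
    u = sampler.random(n).ravel()
    u = np.clip(u, 1e-12, 1.0)          # avoid division by zero at u = 0
    return float(np.mean(np.exp(-x / u) / u))

for x in (0.1, 1.0, 5.0):
    print(x, well_function_lhs(x), float(exp1(x)))   # compare with SciPy's benchmark
```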

  5. High-performance liquid chromatography method for the determination of hydrogen peroxide present or released in teeth bleaching kits and hair cosmetic products.

    PubMed

    Gimeno, Pascal; Bousquet, Claudine; Lassu, Nelly; Maggio, Annie-Françoise; Civade, Corinne; Brenier, Charlotte; Lempereur, Laurent

    2015-03-25

    This manuscript presents an HPLC/UV method for the determination of hydrogen peroxide present or released in teeth bleaching products and hair products. The method is based on the oxidation of triphenylphosphine into triphenylphosphine oxide by hydrogen peroxide. The triphenylphosphine oxide formed is quantified by HPLC/UV. Validation data were obtained using the ISO 12787 standard approach, which is particularly suited when it is not possible to make reconstituted sample matrices. For comparative purposes, hydrogen peroxide was also determined using ceric sulfate titrimetry for both types of products. For hair products, a cross validation of both the ceric titrimetric method and the HPLC/UV method against the cosmetic 82/434/EEC directive (official iodometric titration method) was performed. Results obtained for 6 commercialized teeth whitening products and 5 hair products indicate similar hydrogen peroxide contents using either the HPLC/UV method or the ceric sulfate titrimetric method. For hair products, results were similar to the hydrogen peroxide content obtained using the cosmetic 82/434/EEC directive method, and for the HPLC/UV method, mean recoveries obtained on spiked samples using the ISO 12787 standard ranged from 100% to 110% with a RSD<3.0%. To assess the proposed analytical method, the HPLC method was used to control 35 teeth bleaching products during a market survey and highlighted, for 5 products, hydrogen peroxide contents higher than the regulated limit. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Development of method for experimental determination of wheel-rail contact forces and contact point position by using instrumented wheelset

    NASA Astrophysics Data System (ADS)

    Bižić, Milan B.; Petrović, Dragan Z.; Tomić, Miloš C.; Djinović, Zoran V.

    2017-07-01

    This paper presents the development of a unique method for experimental determination of wheel-rail contact forces and contact point position by using the instrumented wheelset (IWS). Solutions of key problems in the development of the IWS are proposed, such as the determination of the optimal locations, layout, number and way of connecting strain gauges, as well as the development of an inverse identification algorithm (IIA). The basis for the solution of these problems is the wheel model and the results of FEM calculations, while the IIA is based on the method of blind source separation using independent component analysis. In the first phase, the developed method was tested on a wheel model and a high accuracy was obtained (deviations between parameters obtained with the IIA and the actually applied parameters in the model are less than 2%). In the second phase, experimental tests on the real object, or IWS, were carried out. The signal-to-noise ratio was identified as the main parameter influencing the measurement accuracy. The obtained results have shown that the developed method enables measurement of the vertical and lateral wheel-rail contact forces Q and Y and their ratio Y/Q with estimated errors of less than 10%, while the estimated measurement error of the contact point position is less than 15%. At flange contact and higher values of the ratio Y/Q or the Y force, the measurement errors are reduced, which is extremely important for the reliability and quality of experimental tests of safety against derailment of railway vehicles according to the standards UIC 518 and EN 14363. The obtained results have shown that the proposed method can be successfully applied in solving the problem of high accuracy measurement of wheel-rail contact forces and contact point position using an IWS.

  7. Optimal Output of Distributed Generation Based On Complex Power Increment

    NASA Astrophysics Data System (ADS)

    Wu, D.; Bao, H.

    2017-12-01

    In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind power generation, photovoltaic power generation, etc., has been widely used. New energy power generation is connected to the distribution network in the form of distributed generation and is consumed by local loads. However, with the increasing scale of distributed generation access to the network, the optimization of its power output is becoming more and more prominent, which needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different power generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment; the essence of this method is the analysis of the power grid under steady power flow. After analyzing the results we can obtain the complex scaling function equation between the power supplies. The coefficients of the equation are based on the impedance parameters of the network, so the description of the relation of the variables to the coefficients is more precise. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.

  8. Development of a rapid and simplified protocol for direct bacterial identification from positive blood cultures by using matrix assisted laser desorption ionization time-of- flight mass spectrometry.

    PubMed

    Jakovljev, Aleksandra; Bergh, Kåre

    2015-11-06

    Bloodstream infections represent serious conditions carrying a high mortality and morbidity rate. Rapid identification of microorganisms and prompt institution of adequate antimicrobial therapy is of utmost importance for a successful outcome. Aiming at the development of a rapid, simplified and efficient protocol, we developed and compared two in-house preparatory methods for the direct identification of bacteria from positive blood culture flasks (BD BACTEC FX system) by using matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI TOF MS). Both methods employed saponin and distilled water for erythrocyte lysis. In method A the cellular pellet was overlaid with formic acid on the MALDI TOF target plate for protein extraction, whereas in method B the pellet was exposed to formic acid followed by acetonitrile prior to placing on the target plate. Best results were obtained by method A. Direct identification was achieved for 81.9 % and 65.8 % (50.3 % and 26.2 % with scores >2.0) of organisms by method A and method B, respectively. Overall concordance with final identification was 100 % to genus and 97.9 % to species level. By applying a lower cut-off score value, the levels of identification obtained by method A and method B increased to 89.3 % and 77.8 % of organisms (81.9 % and 65.8 % identified with scores >1.7), respectively. Using the lowered score criteria, concordance with final results was obtained for 99.3 % of genus and 96.6 % of species identifications. The reliability of results, rapid performance (approximately 25 min) and applicability of in-house method A have contributed to implementation of this robust and cost-effective method in our laboratory.

  9. Fracture mechanics analysis of cracked structures using weight function and neural network method

    NASA Astrophysics Data System (ADS)

    Chen, J. G.; Zang, F. G.; Yang, Y.; Shi, K. K.; Fu, X. L.

    2018-06-01

    Stress intensity factors (SIFs) due to thermal-mechanical load have been established by using the weight function method. Two reference stress states were used to determine the coefficients in the weight function. Results were evaluated using data from the literature and show good agreement. Thus, the SIFs can be determined quickly using the obtained weight function when cracks are subjected to arbitrary loads, and the presented method can be used for probabilistic fracture mechanics analysis. A probabilistic methodology combining Monte-Carlo simulation with a neural network (MCNN) has been developed. The results indicate that an accurate probabilistic characterization of KI can be obtained by using the developed method. The probability of failure increases with increasing load, and the relationship between them is nonlinear.

  10. Testing the ISP method with the PARIO device: Accuracy of results and influence of homogenization technique

    NASA Astrophysics Data System (ADS)

    Durner, Wolfgang; Huber, Magdalena; Yangxu, Li; Steins, Andi; Pertassek, Thomas; Göttlein, Axel; Iden, Sascha C.; von Unold, Georg

    2017-04-01

    The particle-size distribution (PSD) is one of the main properties of soils. To determine the proportions of the fine fractions silt and clay, sedimentation experiments are used. Most common are the Pipette and Hydrometer methods. Both need manual sampling at specific times. Both are thus time-demanding and rely on experienced operators. Durner et al. (Durner, W., S.C. Iden, and G. von Unold (2017): The integral suspension pressure method (ISP) for precise particle-size analysis by gravitational sedimentation, Water Resources Research, doi:10.1002/2016WR019830) recently developed the integral suspension pressure (ISP) method, which is implemented in the METER Group device PARIO™. This new method estimates continuous PSDs from sedimentation experiments by recording the temporal evolution of the suspension pressure at a certain measurement depth in a sedimentation cylinder. It requires no manual interaction after start and thus no specialized training of the lab personnel. The aim of this study was to test the precision and accuracy of the new method with a variety of materials, to answer the following research questions: (1) Are the results obtained by PARIO reliable and stable? (2) Are the results affected by the initial mixing technique used to homogenize the suspension, or by the presence of sand in the experiment? (3) Are the results identical to those obtained with the Pipette method as reference method? The experiments were performed with a pure quartz silt material and four real soil materials. PARIO measurements were done repetitively on the same samples in a temperature-controlled lab to characterize the repeatability of the measurements. Subsequently, the samples were investigated by the Pipette method to validate the results. We found that the statistical error for the silt fraction from replicate and repetitive measurements was in the range of 1% for the quartz material to 3% for the soil materials. Since the sand fractions, as in any sedimentation method, must be measured explicitly and are used as fixed parameters in the PARIO evaluation, the error of the clay fraction is determined by error propagation from the sand and silt fractions. Homogenization of the suspension by overhead shaking gave lower reproducibility and smaller silt fractions than vertical stirring. However, it turned out that vertical stirring must be performed with sufficient rigour to obtain a fully homogeneous initial distribution. Analysis of material sieved to < 2000 μm and to < 200 μm gave equal results, i.e., there was no hint of dragging effects of large particles. Complete removal of the sand fraction, i.e. sieving to < 63 μm, led to less silt, probably due to a loss of fine material in the sieving process. The PSDs obtained with the PARIO corresponded very well with the results of the Pipette method.
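
    The ISP evaluation itself is not reproduced here, but every gravitational sedimentation method named in this record (Pipette, Hydrometer, ISP) rests on Stokes' law. The sketch below only computes the largest Stokes diameter still in suspension above a given depth after a given settling time, with assumed quartz-in-water properties.

```python
import numpy as np

def stokes_diameter(h, t, rho_s=2650.0, rho_f=998.0, mu=1.002e-3, g=9.81):
    """Largest equivalent spherical diameter (m) still in suspension above
    depth h (m) after settling time t (s), from Stokes' law:
        v = (rho_s - rho_f) * g * d**2 / (18 * mu),  with v = h / t.
    Default density and viscosity values are for quartz particles in water at 20 C.
    """
    return np.sqrt(18.0 * mu * h / ((rho_s - rho_f) * g * t))

# Example: diameters corresponding to a 20 cm measurement depth at several times
for minutes in (1, 10, 60, 8 * 60):
    d = stokes_diameter(h=0.20, t=minutes * 60.0)
    print(f"{minutes:5d} min -> {d * 1e6:7.1f} micrometres")
```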

  11. Application of a Numerical Inverse Laplace Integration Method to Surface Loading on a Viscoelastic Compressible Earth Model

    NASA Astrophysics Data System (ADS)

    Tanaka, Yoshiyuki; Klemann, Volker; Okuno, Jun'ichi

    2009-09-01

    Normal mode approaches for calculating viscoelastic responses of self-gravitating and compressible spherical earth models have an intrinsic problem of determining the roots of the secular equation and the associated residues in the Laplace domain. To bypass this problem, a method based on numerical inverse Laplace integration was developed by Tanaka et al. (2006, 2007) for computations of viscoelastic deformation caused by an internal dislocation. The advantage of this approach is that the root-finding problem is avoided without imposing additional constraints on the governing equations and earth models. In this study, we apply the same algorithm to computations of viscoelastic responses to a surface load and show that the results obtained by this approach agree well with those obtained by a time-domain approach that does not need determinations of the normal modes in the Laplace domain. Using the elastic earth model PREM and a convex viscosity profile, we calculate viscoelastic load Love numbers (h, l, k) for compressible and incompressible models. Comparisons between the results show that effects due to compressibility are consistent with results obtained by previous studies and that the rate differences between the two models total 10-40%. This will serve as an independent method to confirm results obtained by time-domain approaches and will usefully increase the reliability when modeling postglacial rebound.

  12. New statistical analysis of the horizontal phase velocity distribution of gravity waves observed by airglow imaging

    NASA Astrophysics Data System (ADS)

    Matsuda, Takashi S.; Nakamura, Takuji; Ejiri, Mitsumu K.; Tsutsumi, Masaki; Shiokawa, Kazuo

    2014-08-01

    We have developed a new analysis method for obtaining the power spectrum in the horizontal phase velocity domain from airglow intensity image data to study atmospheric gravity waves. This method can deal with extensive amounts of imaging data obtained on different years and at various observation sites without bias caused by different event extraction criteria for the person processing the data. The new method was applied to sodium airglow data obtained in 2011 at Syowa Station (69°S, 40°E), Antarctica. The results were compared with those obtained from a conventional event analysis in which the phase fronts were traced manually in order to estimate horizontal characteristics, such as wavelengths, phase velocities, and wave periods. The horizontal phase velocity of each wave event in the airglow images corresponded closely to a peak in the spectrum. The statistical results of spectral analysis showed an eastward offset of the horizontal phase velocity distribution. This could be interpreted as the existence of wave sources around the stratospheric eastward jet. Similar zonal anisotropy was also seen in the horizontal phase velocity distribution of the gravity waves by the event analysis. Both methods produce similar statistical results about directionality of atmospheric gravity waves. Galactic contamination of the spectrum was examined by calculating the apparent velocity of the stars and found to be limited for phase speeds lower than 30 m/s. In conclusion, our new method is suitable for deriving the horizontal phase velocity characteristics of atmospheric gravity waves from an extensive amount of imaging data.

  13. Fuzzy-C-Means Clustering Based Segmentation and CNN-Classification for Accurate Segmentation of Lung Nodules

    PubMed

    K, Jalal Deen; R, Ganesan; A, Merline

    2017-07-27

    Objective: Accurate segmentation of abnormal and healthy lungs is very crucial for steadfast computer-aided disease diagnostics. Methods: For this purpose a stack of chest CT scans is processed. In this paper, novel methods are proposed for segmentation of the multimodal grayscale lung CT scan. In the conventional methods, the required regions of interest (ROI) are identified using a Markov-Gibbs Random Field (MGRF) model. Result: The results of the proposed FCM and CNN based process are compared with the results obtained from the conventional method using the MGRF model. The results illustrate that the proposed method is able to segment the various kinds of complex multimodal medical images precisely. Conclusion: In this paper, to obtain an exact boundary of the regions, every empirical dispersion of the image is computed by Fuzzy C-Means Clustering segmentation. A classification process based on the Convolutional Neural Network (CNN) classifier is accomplished to distinguish the normal tissue from the abnormal tissue. The experimental evaluation is done using the Interstitial Lung Disease (ILD) database. Creative Commons Attribution License
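
    The record names fuzzy c-means (FCM) as the segmentation core; below is a minimal NumPy implementation of FCM run on synthetic one-dimensional intensities standing in for lung and background voxels. It is a generic sketch, not the paper's CT pipeline, and it omits the CNN classification stage entirely.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-6):
    """Minimal fuzzy c-means: returns cluster centers (c x d) and memberships U (n x c)."""
    # Spread the initial centers over the data range via quantiles (simple, deterministic)
    centers = np.quantile(X, np.linspace(0.1, 0.9, c), axis=0)
    U = np.zeros((len(X), c))
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12  # n x c
        new_U = 1.0 / (d ** (2.0 / (m - 1.0)))
        new_U /= new_U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
        Um = new_U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]     # fuzzy-weighted cluster centers
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return centers, U

# Synthetic "intensities": two populations standing in for lung and background voxels
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-800, 60, 500), rng.normal(40, 60, 500)])[:, None]
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)                                  # hard segmentation from memberships
print(centers.ravel(), np.bincount(labels))
```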

  14. Fuzzy-C-Means Clustering Based Segmentation and CNN-Classification for Accurate Segmentation of Lung Nodules

    PubMed Central

    K, Jalal Deen; R, Ganesan; A, Merline

    2017-01-01

    Objective: Accurate segmentation of abnormal and healthy lungs is very crucial for steadfast computer-aided disease diagnostics. Methods: For this purpose a stack of chest CT scans is processed. In this paper, novel methods are proposed for segmentation of the multimodal grayscale lung CT scan. In the conventional methods, the required regions of interest (ROI) are identified using a Markov-Gibbs Random Field (MGRF) model. Result: The results of the proposed FCM and CNN based process are compared with the results obtained from the conventional method using the MGRF model. The results illustrate that the proposed method is able to segment the various kinds of complex multimodal medical images precisely. Conclusion: In this paper, to obtain an exact boundary of the regions, every empirical dispersion of the image is computed by Fuzzy C-Means Clustering segmentation. A classification process based on the Convolutional Neural Network (CNN) classifier is accomplished to distinguish the normal tissue from the abnormal tissue. The experimental evaluation is done using the Interstitial Lung Disease (ILD) database. PMID:28749127

  15. Characterization and properties of TiO2-SnO2 nanocomposites, obtained by hydrolysis method

    NASA Astrophysics Data System (ADS)

    Kutuzova, Anastasiya S.; Dontsova, Tetiana A.

    2018-04-01

    The paper deals with the process of TiO2-SnO2 nanocomposites synthesis utilizing simple hydrolysis method with further calcination for photocatalytic applications. The obtained nanopowders contain 100, 90, 75, 65 and 25 wt% of TiO2. The synthesized nanocomposite samples were analyzed by X-ray diffraction method, scanning electron microscopy, transmission electron microscopy, Fourier transform infrared spectroscopy and N2 adsorption-desorption method. The correlation between structure and morphology of the obtained nanocrystalline composite powders and their sorption and photocatalytic activity towards methylene blue degradation was established. It was found that the presence of SnO2 in the nanocomposites stabilizes the anatase phase of TiO2. Furthermore, sorption and photocatalytic properties of the obtained composites are significantly influenced not only by specific surface area, but also by pore size distribution and mesopore volume of the samples. In our opinion, the results obtained in this study have shown that the TiO2-SnO2 composites with SnO2 content that does not exceed 10% are promising for photocatalytic applications.

  16. Islanding detection technique using wavelet energy in grid-connected PV system

    NASA Astrophysics Data System (ADS)

    Kim, Il Song

    2016-08-01

    This paper proposes a new islanding detection method using wavelet energy in a grid-connected photovoltaic system. The method detects spectral changes in the higher-frequency components of the point of common coupling voltage and obtains wavelet coefficients by multilevel wavelet analysis. The autocorrelation of the wavelet coefficients can clearly identify the islanding condition, even under variations of the grid voltage harmonics during normal operating conditions. The advantage of the proposed method is that it can detect islanding conditions that the conventional under-voltage/over-voltage/under-frequency/over-frequency methods fail to detect. The theoretical method for obtaining the wavelet energies is developed and verified by the experimental results.
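
    A hedged sketch of the signal-processing core suggested by this record: decompose the point-of-common-coupling voltage with a multilevel wavelet transform (PyWavelets assumed) and compare the energies of the detail coefficients. The sampling rate, signals and any detection threshold below are synthetic illustrations, not the paper's design.

```python
import numpy as np
import pywt

fs = 10_000                          # sampling rate, Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)

def wavelet_energies(v, wavelet="db4", level=5):
    """Energy of the detail coefficients at each decomposition level."""
    coeffs = pywt.wavedec(v, wavelet, level=level)
    return [float(np.sum(d ** 2)) for d in coeffs[1:]]   # skip the approximation band

# Synthetic PCC voltage: normal grid vs. a case with extra high-frequency content
v_grid = np.sin(2 * np.pi * 60 * t) + 0.02 * np.sin(2 * np.pi * 300 * t)
v_island = v_grid + 0.05 * np.random.default_rng(0).normal(size=t.size)

print("grid   :", [round(e, 4) for e in wavelet_energies(v_grid)])
print("island :", [round(e, 4) for e in wavelet_energies(v_island)])
# A detection rule would compare these energies (or the autocorrelation of the
# detail coefficients) against thresholds learned from normal operation.
```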

  17. Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation

    NASA Astrophysics Data System (ADS)

    Wang, Linjun; Han, Xu; Wei, Zhouchao

    The inverse problem of recovering the initial condition from boundary values of the chord vibration equation is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize it by the trapezoidal formula method and obtain a severely ill-conditioned linear system, which is sensitive to disturbances of the data: a tiny error in the right-hand-side data causes large oscillations in the solution, so good results cannot be obtained by the traditional method. In this paper, we solve this problem by the Tikhonov regularization method, and the numerical simulations demonstrate that this method is feasible and effective.
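
    A generic sketch of the approach described above, assuming NumPy and a made-up smooth kernel rather than the chord-vibration kernel of the paper: discretize a first-kind Fredholm equation with the trapezoidal rule, perturb the data slightly, and recover the unknown with a Tikhonov-regularized solve of the ill-conditioned system.

```python
import numpy as np

# Discretize a generic first-kind Fredholm equation (K x)(s) = b(s) on [0, 1]
# with the trapezoidal rule, so that K is approximated by the matrix A.
n = 100
s = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)                                  # trapezoidal weights
kernel = np.exp(-(s[:, None] - s[None, :]) ** 2 / 0.02)       # assumed smoothing kernel
A = kernel * w[None, :]

x_true = np.sin(np.pi * s) + 0.5 * np.sin(3 * np.pi * s)
b = A @ x_true + 1e-3 * np.random.default_rng(0).normal(size=n)   # tiny data error

# A direct solve of the ill-conditioned system amplifies the noise; Tikhonov
# regularization solves (A^T A + alpha I) x = A^T b instead.
alpha = 1e-6
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
print("cond(A) =", np.linalg.cond(A))
print("relative error of regularized solution:",
      np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true))
```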

  18. Metrological activity determination of 133Ba by sum-peak absolute method

    NASA Astrophysics Data System (ADS)

    da Silva, R. L.; de Almeida, M. C. M.; Delgado, J. U.; Poledna, R.; Santos, A.; de Veras, E. V.; Rangel, J.; Trindade, O. L.

    2016-07-01

    The National Laboratory for Metrology of Ionizing Radiation provides radionuclide gamma sources standardized in activity with reduced uncertainties. Relative methods require standards to determine the sample activity, while absolute methods, such as the sum-peak method, do not; the activity is obtained directly with good accuracy and low uncertainties. 133Ba is used in research laboratories and in the calibration of detectors for analysis in different work areas. Classical absolute methods cannot standardize 133Ba due to its complex decay scheme. The sum-peak method using gamma spectrometry with a germanium detector standardizes 133Ba samples. Uncertainties lower than 1% were obtained for the activity results.

  19. The Ultimate Pile Bearing Capacity from Conventional and Spectral Analysis of Surface Wave (SASW) Measurements

    NASA Astrophysics Data System (ADS)

    Faizah Bawadi, Nor; Anuar, Shamilah; Rahim, Mustaqqim A.; Mansor, A. Faizal

    2018-03-01

    Conventional and seismic methods for determining the ultimate pile bearing capacity were proposed and compared. The Spectral Analysis of Surface Wave (SASW) method, one of the non-destructive seismic techniques that does not require drilling and sampling of soils, was used in the determination of the shear wave velocity (Vs) and damping (D) profile of the soil. The soil strength was found to be directly proportional to Vs, and its value has been successfully applied to obtain shallow bearing capacity empirically. A method is proposed in this study to determine the pile bearing capacity using Vs and D measurements for the design of piles, and also as an alternative way to verify the bearing capacity obtained from other conventional methods of evaluation. The objectives of this study are to determine the Vs and D profiles from frequency response data obtained from SASW measurements and to compare the pile bearing capacities obtained from the proposed method and conventional methods. All SASW test arrays were conducted near the borehole and the location of conventional pile load tests. In obtaining the skin and end bearing pile resistance, the Hardin and Drnevich equation was used with reference strains obtained from the method proposed by Abbiss. Back-analysis results of pile bearing capacities from SASW were found to be 18981 kN and 4947 kN, compared to 18014 kN and 4633 kN from IPLT, with differences of 5% and 6% for the Damansara and Kuala Lumpur test sites, respectively. The results of this study indicate that the seismic method proposed in this study has the potential to be used in estimating the pile bearing capacity.

  20. Construction and evaluation of ion selective electrodes for nitrate with a summing operational amplifier. Application to tobacco analysis.

    PubMed

    Pérez-Olmos, R; Rios, A; Fernández, J R; Lapa, R A; Lima, J L

    2001-01-05

    In this paper, the construction and evaluation of an electrode selective to nitrate with improved sensitivity, constructed like a conventional electrode (ISE) but using an operational amplifier to sum the potentials supplied by four membranes (ESOA), is described. The two types of electrodes, without an inner reference solution, were constructed using tetraoctylammonium bromide as the sensor, dibutylphthalate as the solvent mediator and PVC as the plastic matrix; the membranes obtained were applied directly onto a conductive epoxy resin support. After the comparative evaluation of their working characteristics they were used in the determination of nitrate in different types of tobacco. The limit of detection of the direct potentiometric method developed was found to be 0.18 g kg(-1), and the precision and accuracy of the method, when applied to eight different samples of tobacco, expressed in terms of mean R.S.D. and average percentage of spike recovery, were 0.6 and 100.3%, respectively. The comparison of variances showed, on all occasions, that the results obtained by the ESOA were similar to those obtained by the conventional ISE, but with higher precision. Linear regression analysis showed good agreement (r=0.9994) between the results obtained by the developed potentiometric method and those of a spectrophotometric method based on brucine, adopted as the reference method, when applied simultaneously to 32 samples of different types of tobacco.

  1. Characterization of Graphite Oxide and Reduced Graphene Oxide Obtained from Different Graphite Precursors and Oxidized by Different Methods Using Raman Spectroscopy.

    PubMed

    Muzyka, Roksana; Drewniak, Sabina; Pustelny, Tadeusz; Chrubasik, Maciej; Gryglewicz, Grażyna

    2018-06-21

    In this paper, the influences of the graphite precursor and the oxidation method on the resulting reduced graphene oxide (especially its composition and morphology) are shown. Three types of graphite were used to prepare samples for analysis, and each of the precursors was oxidized by two different methods (all samples were reduced by the same method of thermal reduction). Each obtained graphite oxide and reduced graphene oxide was analysed by X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy (RS).

  2. Method for obtaining electron energy-density functions from Langmuir-probe data using a card-programmable calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Longhurst, G.R.

    This paper presents a method for obtaining electron energy density functions from Langmuir probe data taken in cool, dense plasmas where thin-sheath criteria apply and where magnetic effects are not severe. Noise is filtered out by using regression of orthogonal polynomials. The method requires only a programmable calculator (TI-59 or equivalent) to implement and can be used for the most general, nonequilibrium electron energy distribution plasmas. Data from a mercury ion source analyzed using this method are presented and compared with results for the same data using standard numerical techniques.

  3. Comparison of urine analysis using manual and sedimentation methods.

    PubMed

    Kurup, R; Leich, M

    2012-06-01

    Microscopic examination of urine sediment is an essential part of the evaluation of renal and urinary tract diseases. Traditionally, urine sediments are assessed by microscopic examination of centrifuged urine. However, the current method used by the Georgetown Public Hospital Corporation Medical Laboratory involves uncentrifuged urine. To ensure a high level of care, the results provided to the physician must be accurate and reliable for proper diagnosis. The aim of this study is to determine whether the centrifuged method is more clinically useful than the uncentrifuged method. In this study, a comparison between the results obtained from the centrifuged and uncentrifuged methods was performed. A total of 167 urine samples were randomly collected and analysed during the period April-May 2010 at the Medical Laboratory, Georgetown Public Hospital Corporation. The urine samples were first analysed microscopically by the uncentrifuged method, and then by the centrifuged method. The results obtained from both methods were recorded in a log book. These results were then entered into a database created in Microsoft Excel and analysed for differences and similarities using this application. Analysis was further done in SPSS software to compare the results using Pearson's correlation. When compared using Pearson's correlation coefficient analysis, both methods showed a good correlation between urinary sediments with the exception of white blood cells. The centrifuged method had a slightly higher identification rate for all of the parameters. There is substantial agreement between the centrifuged and uncentrifuged methods. However, the uncentrifuged method provides a rapid turnaround time.

  4. Correlation functions in first-order phase transitions

    NASA Astrophysics Data System (ADS)

    Garrido, V.; Crespo, D.

    1997-09-01

    Most of the physical properties of systems undergoing first-order phase transitions can be obtained from the spatial correlation functions. In this paper, we obtain expressions that allow us to calculate all the correlation functions from the droplet size distribution. Nucleation and growth kinetics is considered, and exact solutions are obtained for the case of isotropic growth by using self-similarity properties. The calculation is performed by using the particle size distribution obtained by a recently developed model (populational Kolmogorov-Johnson-Mehl-Avrami model). Since this model is less restrictive than those used in previously existing theories, the correlation functions can be obtained for any dependence of the kinetic parameters. The validity of the method is tested by comparison with the exact correlation functions, which had been obtained in the available cases by the time-cone method. Finally, the correlation functions corresponding to the microstructure developed in partitioning transformations are obtained.

  5. DEVELOPMENT AND VALIDATION OF BROMATOMETRIC, DIAZOTIZATION AND VIS-SPECTROPHOTOMETRIC METHODS FOR THE DETERMINATION OF MESALAZINE IN PHARMACEUTICAL FORMULATION.

    PubMed

    Zawada, Elzabieta; Pirianowicz-Chaber, Elzabieta; Somogi, Aleksander; Pawinski, Tomasz

    2017-03-01

    Three new methods were developed for the quantitative determination of mesalazine in the form of the pure substance or in the form of suppositories and tablets: a bromatometric method, a diazotization method and a visible-light spectrophotometric method. By optimizing the time and the temperature of the bromination reaction (50°C, 50 min), 4-amino-2,3,5,6-tetrabromophenol was obtained. The results obtained were reproducible, accurate and precise. The developed methods were compared to the pharmacopoeial approach, alkalimetry in an aqueous medium. The validation parameters of all methods were comparable. The developed methods for quantification of mesalazine are a viable alternative to other more expensive approaches.

  6. Verification on the use of the Inoue method for precisely determining glomerular filtration rate in Philippine pediatrics

    NASA Astrophysics Data System (ADS)

    Magcase, M. J. D. J.; Duyan, A. Q.; Carpio, J.; Carbonell, C. A.; Trono, J. D.

    2015-06-01

    The objective of this study is to validate the Inoue method so that it would be the preferential choice in determining the glomerular filtration rate (GFR) in Philippine pediatrics. The study consisted of 36 patients ranging from 2 months to 19 years of age. The subjects used were those who had previously been subjected to the in-vitro method. The scintigrams of the in-vitro method were obtained and processed for the split percentage uptake and for the parameters needed to obtain the Inoue GFR. The results of this paper show a strong correlation between the Inoue GFR and the in-vitro method (r = 0.926). Thus, the Inoue method is a viable, simple, and practical technique for determining GFR in pediatric patients.

  7. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    PubMed

    Ho, Yuh-Shan

    2006-01-01

    A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed but they were the same when using the non-linear method. A type 1 pseudo-second-order linear kinetic model has the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
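
    To make the comparison concrete, the sketch below fits synthetic sorption data (hypothetical qe and k values) with the type 1 linearization t/q = 1/(k·qe²) + t/qe and with a direct non-linear least-squares fit of q = k·qe²·t/(1 + k·qe·t), using SciPy; it illustrates the kind of comparison made in this record, not its actual data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def pso(t, qe, k):
    """Pseudo-second-order model: q_t = k*qe^2*t / (1 + k*qe*t)."""
    return k * qe ** 2 * t / (1.0 + k * qe * t)

# Synthetic sorption data (hypothetical): qe = 25 mg/g, k = 0.01 g/(mg*min), 2% noise
rng = np.random.default_rng(0)
t = np.array([2, 5, 10, 20, 40, 60, 90, 120, 180], dtype=float)
q = pso(t, 25.0, 0.01) * (1 + 0.02 * rng.normal(size=t.size))

# Type 1 linearization: t/q = 1/(k*qe^2) + t/qe  ->  slope = 1/qe, intercept = 1/(k*qe^2)
res = linregress(t, t / q)
qe_lin = 1.0 / res.slope
k_lin = 1.0 / (res.intercept * qe_lin ** 2)

# Non-linear least squares on the untransformed model
(qe_nl, k_nl), _ = curve_fit(pso, t, q, p0=[q.max(), 0.001])

print(f"linear    : qe={qe_lin:.2f}, k={k_lin:.4f}, r^2={res.rvalue**2:.4f}")
print(f"non-linear: qe={qe_nl:.2f}, k={k_nl:.4f}")
```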

  8. Preliminary study of the use of radiotracers for leak detection in industrial applications

    NASA Astrophysics Data System (ADS)

    Wetchagarun, S.; Petchrak, A.; Tippayakul, C.

    2015-05-01

    One of the most widespread uses of radiotracers in industrial applications is leak detection. This technique can be applied, for example, to detect leaks in heat exchangers or along buried industrial pipelines. The ability to perform online investigation is one of the most important advantages of the radiotracer technique over other non-radioactive leak detection methods. In this paper, a preliminary study of leak detection using a radiotracer at laboratory scale is presented. Br-82 was selected for this work due to its chemical properties, its suitable half-life and its on-site availability. NH4Br in the form of an aqueous solution was injected into the experimental system as the radiotracer. Three NaI detectors were placed along the pipelines to measure the system flow rate and to detect leakage from the piping system. The results obtained from the radiotracer technique were compared to those measured by other methods. It was found that the flow rate obtained from the radiotracer technique agreed well with the one obtained from the flow meter. The leak rate results, however, showed a discrepancy between the two measuring methods, indicating that further study on leak detection is required before applying this technique in industrial systems.

  9. Single-step scanner-based digital image correlation (SB-DIC) method for large deformation mapping in rubber

    NASA Astrophysics Data System (ADS)

    Goh, C. P.; Ismail, H.; Yen, K. S.; Ratnam, M. M.

    2017-01-01

    The incremental digital image correlation (DIC) method has been applied in the past to determine strain in large deformation materials like rubber. This method is, however, prone to cumulative errors since the total displacement is determined by combining the displacements in numerous stages of the deformation. In this work, a method of mapping large strains in rubber using DIC in a single-step without the need for a series of deformation images is proposed. The reference subsets were deformed using deformation factors obtained from the fitted mean stress-axial stretch ratio curve obtained experimentally and the theoretical Poisson function. The deformed reference subsets were then correlated with the deformed image after loading. The recently developed scanner-based digital image correlation (SB-DIC) method was applied on dumbbell rubber specimens to obtain the in-plane displacement fields up to 350% axial strain. Comparison of the mean axial strains determined from the single-step SB-DIC method with those from the incremental SB-DIC method showed an average difference of 4.7%. Two rectangular rubber specimens containing circular and square holes were deformed and analysed using the proposed method. The resultant strain maps from the single-step SB-DIC method were compared with the results of finite element modeling (FEM). The comparison shows that the proposed single-step SB-DIC method can be used to map the strain distribution accurately in large deformation materials like rubber at much shorter time compared to the incremental DIC method.

  10. Comparison of the depth of an optic nerve head obtained using stereo retinal images and HRT

    NASA Astrophysics Data System (ADS)

    Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Aoyama, Akira; Hara, Takeshi; Kakogawa, Masakatsu; Fujita, Hiroshi; Yamamoto, Tetsuya

    2007-03-01

    The analysis of the optic nerve head (ONH) in the retinal fundus is important for the early detection of glaucoma. In this study, we investigate an automatic reconstruction method for producing the 3-D structure of the ONH from a stereo retinal image pair; the depth value of the ONH measured by using this method was compared with the measurement results determined from the Heidelberg Retina Tomograph (HRT). We propose a technique to obtain the depth value from the stereo image pair, which mainly consists of four steps: (1) cutout of the ONH region from the retinal images, (2) registration of the stereo pair, (3) disparity detection, and (4) depth calculation. In order to evaluate the accuracy of this technique, the shape of the depression of an eyeball phantom that had a circular dent as generated from the stereo image pair and used to model the ONH was compared with a physically measured quantity. The measurement results obtained when the eyeball phantom was used were approximately consistent. The depth of the ONH obtained using the stereo retinal images was in accordance with the results obtained using the HRT. These results indicate that the stereo retinal images could be useful for assessing the depth of the ONH for the diagnosis of glaucoma.

  11. Complex Langevin simulation of QCD at finite density and low temperature using the deformation technique

    NASA Astrophysics Data System (ADS)

    Nagata, Keitro; Nishimura, Jun; Shimasaki, Shinji

    2018-03-01

    We study QCD at finite density and low temperature by using the complex Langevin method. We employ gauge cooling to control the unitarity norm and introduce a deformation parameter in the Dirac operator to avoid the singular-drift problem. The reliability of the obtained results is judged by the probability distribution of the magnitude of the drift term. By making extrapolations with respect to the deformation parameter using only the reliable results, we obtain results for the original system. We perform simulations on a 4³ × 8 lattice and show that our method works well even in the region where the reweighting method fails due to the severe sign problem. As a result we observe a delayed onset of the baryon number density as compared with the phase-quenched model, which is a clear sign of the Silver Blaze phenomenon.
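
    The extrapolation step can be pictured with a minimal sketch: fit the observable as a low-order polynomial in the deformation parameter over the reliable points only, then evaluate the fit at zero. The numbers below are hypothetical placeholders, not data from the paper.

        import numpy as np

        # Deformation parameter values and a measured observable (e.g. baryon number
        # density), restricted to points judged reliable by the drift-term criterion.
        alpha = np.array([0.2, 0.3, 0.4, 0.5])            # hypothetical
        density = np.array([0.012, 0.018, 0.025, 0.033])  # hypothetical

        # Quadratic fit in alpha, extrapolated to alpha -> 0 (the original system).
        coeffs = np.polyfit(alpha, density, deg=2)
        print("extrapolated value at alpha = 0:", np.polyval(coeffs, 0.0))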

  12. Methods comparison for microsatellite marker development: Different isolation methods, different yield efficiency

    NASA Astrophysics Data System (ADS)

    Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie

    2009-06-01

    Microsatellite markers have become one of the most important molecular tools used in many fields of research. A large number of microsatellite markers are required for whole-genome surveys in molecular ecology, quantitative genetics and genomics. It is therefore necessary to select versatile, low-cost, efficient and time- and labor-saving methods to develop a large panel of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods derived from three strategies for microsatellite marker development. The results showed that the strategy of constructing a small-insert genomic DNA library gave poor efficiency, while the microsatellite-enriched strategy greatly improved the isolation efficiency. Although the public-database mining strategy is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers with it, mainly due to the limited sequence data of non-model species deposited in public databases. Based on the results of this study, we recommend two methods, the microsatellite-enriched library construction method and the FIASCO-colony hybridization method, for large-scale microsatellite marker development. Both methods were derived from the microsatellite-enriched strategy. The experimental results obtained from the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.

  13. Strain distribution and band structure of InAs/GaAs quantum ring superlattice

    NASA Astrophysics Data System (ADS)

    Mughnetsyan, Vram; Kirakosyan, Albert

    2017-12-01

    The elastic strain distribution and the band structure of an InAs/GaAs one-layer quantum ring superlattice with square symmetry are considered in this work. The Green's function formalism based on the method of inclusions has been employed to calculate the components of the strain tensor, while the combination of the Green's function method with the Fourier transformation to momentum space in the Pikus-Bir Hamiltonian has been used to obtain the miniband energy dispersion surfaces via an exact diagonalization procedure. The dependencies of the strain tensor components on the spatial coordinates are compared with those for a single quantum ring and are in good agreement with previously obtained results for cylindrical quantum disks. It is shown that strain significantly affects the miniband structure of the superlattice and contributes to the degeneracy-lifting effect due to heavy hole-light hole coupling. The demonstrated method is simple and provides reasonable results for a comparatively small Hamiltonian matrix. The obtained results may be useful for further investigation and for the construction of novel devices based on quantum ring superlattices.

  14. [Analysis of the results of the SEIMC External Quality Control Program for HIV-1 and HCV viral loads. Year 2008].

    PubMed

    Mira, Nieves Orta; Serrano, María del Remedio Guna; Martínez, José Carlos Latorre; Ovies, María Rosario; Pérez, José L; Cardona, Concepción Gimeno

    2010-01-01

    Human immunodeficiency virus type 1 (HIV-1) and hepatitis C virus (HCV) viral load determinations are among the most relevant markers for the follow-up of patients infected with these viruses. External quality control tools are crucial to ensure the accuracy of results obtained by microbiology laboratories. This article summarizes the results obtained from the 2008 SEIMC External Quality Control Program for HIV-1 and HCV viral loads. In the HIV-1 program, a total of five standards were sent. One standard consisted of seronegative human plasma, while the remaining four contained plasma from 3 different viremic patients, in the range of 2-5 log(10) copies/mL; two of these standards were identical in order to assess repeatability. The specificity was complete for all commercial methods, and no false positive results were reported by the participants. A significant proportion of the laboratories (24% on average) obtained values out of the accepted range (mean +/- 0.2 log(10) copies/mL), depending on the standard and on the method used for quantification. Repeatability was very good, with up to 95% of laboratories reporting results within the limits (D < 0.5 log(10) copies/mL). The HCV program consisted of two standards with different viral load contents. Most of the participants (88.7%) obtained results within the accepted range (mean +/- 1.96 SD log(10) IU/mL). Post-analytical errors due to mistranscription of the results were detected for HCV, but not for the HIV-1 program. Data from this analysis reinforce the utility of proficiency programmes to ensure the quality of the results obtained by a particular laboratory, as well as the importance of the post-analytical phase for overall quality. Due to the remarkable interlaboratory variability, it is advisable to use the same method and the same laboratory for patient follow-up. 2010 Elsevier España S.L. All rights reserved.
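
    The two acceptance criteria quoted above reduce to simple checks; the sketch below restates them in code, with the participant values understood as hypothetical placeholders.

        import numpy as np

        def within_hiv_limits(reported, all_lab_values, tol=0.2):
            # HIV-1 criterion: within mean +/- 0.2 log10 copies/mL of the participant mean.
            return abs(reported - np.mean(all_lab_values)) <= tol

        def within_hcv_limits(reported, all_lab_values, k=1.96):
            # HCV criterion: within mean +/- 1.96 SD log10 IU/mL.
            values = np.asarray(all_lab_values)
            return abs(reported - values.mean()) <= k * values.std(ddof=1)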

  15. A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.

    PubMed

    Liu, Chun-Han; Liu, Lian

    2017-05-08

    BACKGROUND Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS We proposed a combined method by merging existing methods. First, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expression method, and the pathway network approach), and differential pathways were evaluated by setting weight thresholds. Subsequently, we combined all pathways by a rank-based algorithm and called this the combined method. Finally, common differential pathways across two or more of the five methods were selected. RESULTS Pathways obtained from the different methods differed. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method solved the problem of inconsistent results. In addition, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS We have proposed a novel method by combining four existing methods based on a rank product algorithm, and identified 13 significant differential pathways with it. These differential pathways might provide insight into the treatment and diagnosis of hippocampus AD.
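
    A minimal sketch of a rank-product combination (the authors' exact scoring and weighting are not reproduced here) is shown below; the input is a hypothetical pathways-by-methods score matrix in which higher scores mean stronger evidence.

        import numpy as np

        def rank_product(score_table):
            # score_table: rows = pathways, columns = methods.
            n_path, n_meth = score_table.shape
            ranks = np.empty_like(score_table, dtype=int)
            for j in range(n_meth):
                order = np.argsort(-score_table[:, j])   # rank 1 = best score
                ranks[order, j] = np.arange(1, n_path + 1)
            # Geometric mean of the per-method ranks; small values indicate
            # pathways ranked consistently high across methods.
            return np.exp(np.log(ranks).mean(axis=1))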

  16. Partition coefficients of methylated DNA bases obtained from free energy calculations with molecular electron density derived atomic charges.

    PubMed

    Lara, A; Riquelme, M; Vöhringer-Martinez, E

    2018-05-11

    Partition coefficients are used in areas such as pharmacology and the environmental sciences to predict the hydrophobicity of different substances. Recently, they have also been used to assess the accuracy of force fields for various organic compounds, and specifically for the methylated DNA bases. In this study, atomic charges were derived by different partitioning methods (Hirshfeld and Minimal Basis Iterative Stockholder) directly from the electron density obtained by electronic structure calculations in a vacuum, with an implicit solvation model, or with explicit solvation taking the dynamics of the solute and the solvent into account. To test the ability of these charges to describe electrostatic interactions in force fields for condensed phases, the original atomic charges of the AMBER99 force field were replaced with the new atomic charges and combined with different solvent models to obtain the hydration and chloroform solvation free energies by molecular dynamics simulations. Chloroform-water partition coefficients derived from the obtained free energies were compared to experimental values and to previously reported values obtained with the GAFF or the AMBER-99 force field. The results show that good agreement with experimental data is obtained when the polarization of the electron density by the solvent is taken into account, and when the energy needed to polarize the electron density of the solute is considered in the transfer free energy. These results were further confirmed by hydration free energies of polar and aromatic amino acid side chain analogs. Comparison of the two partitioning methods, Hirshfeld-I and Minimal Basis Iterative Stockholder (MBIS), revealed some deficiencies in the Hirshfeld-I method related to the unstable isolated anionic nitrogen pro-atom used in the method. Hydration free energies and partition coefficients obtained with atomic charges from the MBIS partitioning method accounting for polarization by the implicit solvation model are in good agreement with the experimental values. © 2018 Wiley Periodicals, Inc.
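
    The conversion from a pair of solvation free energies to a chloroform-water partition coefficient is a one-line relation; a sketch with purely hypothetical energies is shown below.

        R = 0.0019872   # gas constant in kcal mol^-1 K^-1
        T = 298.15      # temperature in K

        def log_p_chloroform_water(dg_water, dg_chloroform):
            # log P = (dG_water - dG_chloroform) / (2.303 R T); a more favourable
            # (more negative) chloroform solvation free energy gives log P > 0.
            return (dg_water - dg_chloroform) / (2.303 * R * T)

        # Hypothetical solvation free energies (kcal/mol) for a methylated base:
        print(log_p_chloroform_water(dg_water=-12.5, dg_chloroform=-14.1))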

  17. Novel asymmetric representation method for solving the higher-order Ginzburg-Landau equation

    PubMed Central

    Wong, Pring; Pang, Lihui; Wu, Ye; Lei, Ming; Liu, Wenjun

    2016-01-01

    In ultrafast optics, optical pulses are generated with ever shorter durations, which is of enormous significance for industrial applications and scientific research. The ultrashort pulse evolution in fiber lasers can be described by the higher-order Ginzburg-Landau (GL) equation. However, analytic soliton solutions for this equation have not been obtained with existing methods. In this paper, a novel method is proposed to deal with this equation. The analytic soliton solution is obtained for the first time, and is proved to be stable against amplitude perturbations. Through the split-step Fourier method, the bright soliton solution is studied numerically. The analytic results here may extend the integrable methods, and could be used to study soliton dynamics for some equations in other disciplines. It may also provide another way to obtain two-soliton solutions for higher-order GL equations. PMID:27086841

  18. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    DOEpatents

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA

    2008-05-13

    A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.

  19. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    DOEpatents

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA

    2012-03-06

    A method of displaying correlations among information objects includes receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.

  20. PIV-based estimation of unsteady loads on a flat plate at high angle of attack using momentum equation approaches

    NASA Astrophysics Data System (ADS)

    Guissart, Amandine; Bernal, Luis; Dimitriadis, Gregorios; Terrapon, Vincent

    2015-11-01

    The direct measurement of loads with a force balance can become challenging when the forces are small or when the body is moving. An alternative is the use of Particle Image Velocimetry (PIV) velocity fields to obtain the aerodynamic coefficients indirectly. This can be done with control volume approaches, which lead to the integration of velocities, and other fields derived from them, on a contour surrounding the studied body and its supporting surface. This work presents and discusses results obtained with two different methods: the direct use of the integral formulation of the Navier-Stokes equations and the so-called Noca method. The latter is a reformulation of the integral Navier-Stokes equations that eliminates the pressure term. Results obtained using the two methods are compared and the influence of different parameters is discussed. The methods are applied to PIV data obtained from water channel testing for the flow around a 16:1 plate. Two cases are considered: a static plate at high angle of attack and a large-amplitude imposed pitching motion. Two-dimensional PIV velocity fields are used to compute the aerodynamic forces. Direct measurements of dynamic loads are also carried out in order to assess the quality of the indirectly calculated coefficients.

  1. Temperature scaling method for Markov chains.

    PubMed

    Crosby, Lonnie D; Windus, Theresa L

    2009-01-22

    The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
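
    A generic reweighting sketch conveys the idea (it is not necessarily the exact TeS prescription of the paper): configurations sampled at one temperature receive Boltzmann weight ratios so that averages at a nearby temperature can be estimated without a new simulation.

        import numpy as np

        def rescale_to_temperature(energies, observable, t_sampled, t_target, k_b=1.0):
            # Weight each sampled configuration by exp[-(beta_target - beta_sampled) * E].
            beta_s = 1.0 / (k_b * t_sampled)
            beta_t = 1.0 / (k_b * t_target)
            log_w = -(beta_t - beta_s) * np.asarray(energies)
            log_w -= log_w.max()                   # guard against overflow
            w = np.exp(log_w)
            return np.sum(w * np.asarray(observable)) / np.sum(w)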

  2. Evaluation of 3 dental unit waterline contamination testing methods

    PubMed Central

    Porteous, Nuala; Sun, Yuyu; Schoolfield, John

    2015-01-01

    Previous studies have found inconsistent results from testing methods used to measure heterotrophic plate count (HPC) bacteria in dental unit waterline (DUWL) samples. This study used 63 samples to compare the results obtained from an in-office chairside method and 2 currently used commercial laboratory HPC methods (Standard Methods 9215C and 9215E). The results suggest that the Standard Method 9215E is not suitable for application to DUWL quality monitoring, due to the detection of limited numbers of heterotrophic organisms at the required 35°C incubation temperature. The results also confirm that while the in-office chairside method is useful for DUWL quality monitoring, the Standard Method 9215C provided the most accurate results. PMID:25574718

  3. Electromagnetic Field Penetration Studies

    NASA Technical Reports Server (NTRS)

    Deshpande, M.D.

    2000-01-01

    A numerical method is presented to determine the electromagnetic shielding effectiveness of a rectangular enclosure with apertures on its wall used for input and output connections, control panels, visual-access windows, ventilation panels, etc. Expressing the EM fields in terms of the cavity Green's function inside the enclosure and the free-space Green's function outside the enclosure, integral equations with the aperture tangential electric fields as unknown variables are obtained by enforcing the continuity of the tangential electric and magnetic fields across the apertures. Using the Method of Moments, the integral equations are solved for the unknown aperture fields. From these aperture fields, the EM field inside a rectangular enclosure due to external electromagnetic sources is determined. Numerical results on the electric field shielding of a rectangular cavity with a thin rectangular slot obtained using the present method are compared with the results obtained using a simple transmission-line technique for code validation. The present technique is applied to determine field penetration inside a Boeing-757 by approximating its passenger cabin as a rectangular cavity filled with a homogeneous medium and its passenger windows by rectangular apertures. Preliminary results for two windows, one on each side of the fuselage, were considered. Numerical results for the Boeing-757 at frequencies of 26 MHz, 171-175 MHz, and 428-432 MHz are presented.

  4. Integrated analysis on static/dynamic aeroelasticity of curved panels based on a modified local piston theory

    NASA Astrophysics Data System (ADS)

    Yang, Zhichun; Zhou, Jian; Gu, Yingsong

    2014-10-01

    A flow-field-modified local piston theory, applied to the integrated analysis of the static/dynamic aeroelastic behavior of curved panels, is proposed in this paper. The local flow field parameters used in the modification are obtained by a CFD technique, which has the advantage of simulating the steady flow field accurately. This flow-field-modified local piston theory for the aerodynamic loading is applied to the analysis of the static aeroelastic deformation and flutter stability of curved panels in hypersonic flow. In addition, comparisons are made between results obtained using the present method and the curvature-modified method. It is shown that when the curvature of the panel is relatively small, the static aeroelastic deformations and flutter stability boundaries obtained by the two methods differ little, while for curved panels with larger curvatures the static aeroelastic deformation obtained by the present method is larger and the flutter stability boundary is smaller than those obtained by the curvature-modified method, and the discrepancy increases with increasing panel curvature. Therefore, from the standpoint of hypersonic flight vehicle safety, the existing curvature-modified method is non-conservative compared with the proposed flow-field-modified method, and the proposed flow-field-modified local piston theory for curved panels enlarges the application range of piston theory.

  5. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
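
    For context, the traditional mono-exponential back-extrapolation that the proposed method improves upon can be sketched as follows; the 2-5 min fit window and the units are assumptions for illustration, and the optimal method of the paper replaces this step with an extrapolation derived from the physiologically based kinetics model.

        import numpy as np

        def plasma_volume_backextrapolation(t_min, conc_mg_per_l, dose_mg, window=(2.0, 5.0)):
            # Fit ln(C) = ln(C0) - k*t over the chosen window and extrapolate to t = 0.
            t = np.asarray(t_min)
            c = np.asarray(conc_mg_per_l)
            mask = (t >= window[0]) & (t <= window[1])
            slope, intercept = np.polyfit(t[mask], np.log(c[mask]), 1)
            c0 = np.exp(intercept)        # back-extrapolated concentration at injection
            return dose_mg / c0           # plasma volume in litres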

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang, E-mail: cyzhao@pmo.ac.cn

    The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, the image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method was developed to improve the astrometry precision for space debris, based on the mathematical morphology operator. Variable structural elements along multiple directions are adopted for image transformation, and then all the resultant images are stacked to obtain a final result. To investigate its efficiency, trial observations were made with Global Positioning System satellites and the astrometry accuracy improvement was assessed by comparison with the reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and the position accuracy of both objects and stars is improved distinctly. Our technique will contribute significantly to optical data reduction and high-precision astrometry for space debris.
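
    A rough sketch of the directional stacking idea is given below (the actual operator, element sizes and combination rule used in the paper are not specified here): a line-shaped structuring element is rotated over several orientations, the image is transformed with each element, and the per-pixel maximum of the results is kept.

        import numpy as np
        from scipy import ndimage

        def directional_morphology_stack(image, length=7, n_directions=8):
            results = []
            for k in range(n_directions):
                angle = np.radians(180.0 * k / n_directions)
                # Build a binary line footprint of the given length and orientation.
                fp = np.zeros((length, length), dtype=bool)
                c = length // 2
                for s in range(-c, c + 1):
                    fp[int(round(c + s * np.sin(angle))), int(round(c + s * np.cos(angle)))] = True
                results.append(ndimage.grey_closing(image, footprint=fp))
            # Stack the directional results and keep the per-pixel maximum.
            return np.max(np.stack(results), axis=0)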

  7. Finite element solution of optimal control problems with inequality constraints

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.

    1990-01-01

    A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.

  8. Salivary defense system alters in vegetarian

    PubMed Central

    Amirmozafari, Nour; Pourghafar, Houra; Sariri, Reyhaneh

    2013-01-01

    Purpose The aim of this research was to investigate antimicrobial and enzymatic antioxidant activities in the salivary fluids of vegetarians as compared with normal subjects. Material & Methods The antimicrobial activity of the saliva samples was evaluated against four clinically important bacteria. The biological activities of three of the main antioxidant enzymes of saliva were measured in both groups using appropriate enzyme assays. Results Saliva obtained from vegetarians showed a reduced inhibitory effect on the growth of Staphylococcus aureus, Klebsiella oxytoca, Pseudomonas aeruginosa and Escherichia coli as compared with that obtained from the non-vegetarian subjects. The activities of salivary peroxidase, catalase and superoxide dismutase showed a statistically marked decrease in the vegetarian group. Conclusions According to our literature survey, this is the first report on the antibacterial and antioxidant capacity of the saliva of vegetarians. The results obtained from the present study open a new line of research using saliva as a research tool. PMID:25737889

  9. A new method for evaluating radon and thoron alpha-activities per unit volume inside and outside various natural material samples by calculating SSNTD detection efficiencies for the emitted alpha-particles and measuring the resulting track densities.

    PubMed

    Misdaq, M A; Aitnouh, F; Khajmi, H; Ezzahery, H; Berrazzouk, S

    2001-08-01

    A Monte Carlo computer code for determining detection efficiencies of the CR-39 and LR-115 II solid-state nuclear track detectors (SSNTD) for alpha-particles emitted by the uranium and thorium series inside different natural material samples was developed. The influence of the alpha-particle initial energy on the SSNTD detection efficiencies was investigated. Radon (222Rn) and thoron (220Rn) alpha-activities per unit volume were evaluated inside and outside the natural material samples by exploiting data obtained for the detection efficiencies of the SSNTD utilized for the emitted alpha-particles, and measuring the resulting track densities. Results obtained were compared to those obtained by other methods. Radon emanation coefficients have been determined for some of the considered material samples.

  10. Study Of Nondestructive Techniques For Testing Composites

    NASA Technical Reports Server (NTRS)

    Roth, D.; Kautz, H.; Draper, S.; Bansal, N.; Bowles, K.; Bashyam, M.; Bishop, C.

    1995-01-01

    Study evaluates some nondestructive methods for characterizing ceramic-, metal-, and polymer-matrix composite materials. Results demonstrated utility of two ultrasonic methods for obtaining quantitative data on microstructural anomalies in composite materials.

  11. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods

    PubMed Central

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared. PMID:26413547
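
    The two estimates being compared reduce to simple ratios once a boolean mask of immunostained pixels is available (for example from colour thresholding); a sketch is given below, with the grid spacing chosen arbitrarily for illustration.

        import numpy as np

        def surface_density_point_counting(stained_mask, grid_step=20):
            # Fraction of regularly spaced test points that land on stained pixels.
            return float(stained_mask[::grid_step, ::grid_step].mean())

        def surface_density_segmentation(stained_mask):
            # Fraction of the whole field classified as stained by colour segmentation.
            return float(stained_mask.mean())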

  12. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods.

    PubMed

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared.

  13. Interactive Visual Least Absolutes Method: Comparison with the Least Squares and the Median Methods

    ERIC Educational Resources Information Center

    Kim, Myung-Hoon; Kim, Michelle S.

    2016-01-01

    A visual regression analysis using the least absolutes method (LAB) was developed, utilizing an interactive approach of visually minimizing the sum of the absolute deviations (SAB) using a bar graph in Excel; the results agree very well with those obtained from nonvisual LAB using a numerical Solver in Excel. These LAB results were compared with…
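
    A non-visual counterpart of the LAB fit (a sketch, not the spreadsheet implementation described in the article) simply minimizes the sum of the absolute deviations numerically:

        import numpy as np
        from scipy.optimize import minimize

        def least_absolutes_fit(x, y):
            # Fit y = m*x + b by minimizing the sum of absolute deviations (SAB).
            def sab(params):
                m, b = params
                return np.sum(np.abs(y - (m * x + b)))
            m0, b0 = np.polyfit(x, y, 1)           # least-squares starting point
            result = minimize(sab, x0=[m0, b0], method="Nelder-Mead")
            return result.x                        # fitted slope and intercept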

  14. A novel and eco-friendly analytical method for phosphorus and sulfur determination in animal feed.

    PubMed

    Novo, Diogo L R; Pereira, Rodrigo M; Costa, Vanize C; Hartwig, Carla A; Mesko, Marcia F

    2018-04-25

    An eco-friendly method for the indirect determination of phosphorus and sulfur in animal feed by ion chromatography is proposed. Using this method, it was possible to digest 500 mg of animal feed in a microwave system under oxygen pressure (20 bar) using only a diluted acid solution (2 mol L⁻¹ HNO₃). The accuracy of the proposed method was evaluated by recovery tests, by analysis of a reference material (RM), and by comparison of the results with those obtained using conventional microwave-assisted digestion. Moreover, the P results were compared with those obtained by the method recommended by AOAC International for animal feed (Method nr. 965.17), and no significant differences were found between the results. Recoveries for P and S were between 94 and 97%, and agreement with the reference values of the RM was better than 94%. Phosphorus and S concentrations in the animal feeds ranged from 10,026 to 28,357 mg kg⁻¹ and 2259 to 4601 mg kg⁻¹, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Comparison of two surface temperature measurement using thermocouples and infrared camera

    NASA Astrophysics Data System (ADS)

    Michalski, Dariusz; Strąk, Kinga; Piasecka, Magdalena

    This paper compares two methods applied to measure surface temperatures at an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were: the contact method, in which thermocouples were mounted at several points in one minichannel, and the contactless method used for the other minichannel, in which the results were provided by an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, taking the method error and the method accuracy into account. The comparative analysis showed that the values and distributions of the surface temperatures obtained with the two methods were similar, but that both methods had certain limitations.

  16. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It has also recently been used in biometric and multimedia information retrieval systems. This technology builds on successive research into audio feature extraction. The Probability Distribution Function (PDF) is a statistical tool that is usually used as one step within more complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed in which the PDF itself is used as the feature extractor for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction. Subsequently, the PDF values for each frame of the sampled voice signals obtained from a number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
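
    A minimal sketch of such a per-frame amplitude PDF feature is shown below; the frame length, hop size, bin count and the assumption of a signal normalized to [-1, 1] are illustrative choices, not the authors' settings.

        import numpy as np

        def frame_pdf_features(signal, frame_len=1024, hop=512, n_bins=32):
            # Histogram each frame's sample amplitudes and normalize to a probability mass.
            features = []
            for start in range(0, len(signal) - frame_len + 1, hop):
                frame = signal[start:start + frame_len]
                hist, _ = np.histogram(frame, bins=n_bins, range=(-1.0, 1.0))
                features.append(hist / frame_len)
            return np.array(features)              # shape: (n_frames, n_bins)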

  17. Image deblurring based on nonlocal regularization with a non-convex sparsity constraint

    NASA Astrophysics Data System (ADS)

    Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi

    2018-04-01

    In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained compared with traditional local regularization methods. Despite the success of this technique, most existing methods exploit a convex regularizing functional in order to obtain computational efficiency, which is equivalent to imposing a convex prior on the output of the nonlocal difference operator. However, our experiments illustrate that the empirical distribution of the output of the nonlocal difference operator, in particular in the seminal work of Kheradmand et al., is better characterized by an extremely heavy-tailed distribution than by one corresponding to a convex prior. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.

  18. A two-hour antibiotic susceptibility test by ATP-bioluminescence.

    PubMed

    March Rosselló, Gabriel Alberto; García-Loygorri Jordán de Urries, María Cristina; Gutiérrez Rodríguez, María Purificación; Simarro Grande, María; Orduña Domingo, Antonio; Bratos Pérez, Miguel Ángel

    2016-01-01

    Antibiotic susceptibility testing (AST) in Clinical Microbiology laboratories is still time-consuming, and most procedures take 24 h to yield results. In this study, a rapid antimicrobial susceptibility test using ATP-bioluminescence has been developed. The method was designed using five ATCC collection strains of known susceptibility. The procedure was then validated against standard commercial methods on 10 strains of enterococci, 10 staphylococci, 10 non-fermenting gram-negative bacilli, and 13 Enterobacteriaceae from patients. The agreement in susceptibility between the ATP-bioluminescence method and the commercial methods (E-test, MicroScan and VITEK2) was 100%. In summary, the preliminary results obtained in this work show that the ATP-bioluminescence method could provide a fast and reliable AST in two hours. Copyright © 2015 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  19. Ship Detection from Ocean SAR Image Based on Local Contrast Variance Weighted Information Entropy

    PubMed Central

    Huang, Yulin; Pei, Jifang; Zhang, Qian; Gu, Qin; Yang, Jianyu

    2018-01-01

    Ship detection from synthetic aperture radar (SAR) images is one of the crucial issues in maritime surveillance. However, due to the varying ocean waves and the strong echo of the sea surface, it is very difficult to detect ships from heterogeneous and strong clutter backgrounds. In this paper, an innovative ship detection method is proposed to effectively distinguish the vessels from complex backgrounds from a SAR image. First, the input SAR image is pre-screened by the maximally-stable extremal region (MSER) method, which can obtain the ship candidate regions with low computational complexity. Then, the proposed local contrast variance weighted information entropy (LCVWIE) is adopted to evaluate the complexity of those candidate regions and the dissimilarity between the candidate regions with their neighborhoods. Finally, the LCVWIE values of the candidate regions are compared with an adaptive threshold to obtain the final detection result. Experimental results based on measured ocean SAR images have shown that the proposed method can obtain stable detection performance both in strong clutter and heterogeneous backgrounds. Meanwhile, it has a low computational complexity compared with some existing detection methods. PMID:29652863
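
    A loose sketch of a contrast-variance-weighted entropy score for one candidate region is given below; the exact LCVWIE definition in the paper may differ in how the weight and the entropy terms are formed, so this is only an illustration of the general idea.

        import numpy as np

        def weighted_region_entropy(region, neighborhood, n_bins=64):
            # Gray-level entropy of the candidate region ...
            hist, _ = np.histogram(region, bins=n_bins)
            p = hist[hist > 0] / hist.sum()
            entropy = -np.sum(p * np.log(p))
            # ... weighted by the variance of its contrast against the surroundings.
            contrast_variance = float(np.var(region.mean() - neighborhood))
            return contrast_variance * entropy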

  20. Ultrasound SIV measurement of helical valvular flow behind the great saphenous vein

    NASA Astrophysics Data System (ADS)

    Park, Jun Hong; Kim, Jeong Ju; Lee, Sang Joon; Yeom, Eunseop; Experimental Fluid Mechanics Laboratory Team; Microthermal and Microfluidic Measurements Laboratory Collaboration

    2017-11-01

    Dysfunction of venous valves and the induced secondary abnormal flow are closely associated with venous diseases. Thus, detailed analysis of venous valvular flow is invaluable from biological and medical perspectives. However, most previous studies on venous perivalvular flows were based on qualitative analyses, and the perivalvular flow has not yet been fully understood quantitatively. In this study, 3D valvular flows under in vitro and in vivo conditions were experimentally investigated using ultrasound speckle image velocimetry (SIV) to analyze their flow characteristics. The results for the in vitro model obtained by the SIV technique were compared with those derived by numerical simulation and the color Doppler method to validate its measurement accuracy. Blood flow in the human great saphenous vein was then measured using SIV with respect to a dimensionless index, the helical intensity. The results obtained by the SIV method matched well with those obtained by the numerical simulation and the color Doppler method. The hemodynamic characteristics of 3D valvular flows measured by the validated SIV method would be helpful in the diagnosis of valve-related venous diseases.

  1. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    PubMed

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

    Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory were processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.
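
    A loose sketch of a scatter-informed kernel for an SVM with probability outputs is shown below; it is not the exact KF-PP-SVM construction (the paper adds the within-class scatter into the RBF kernel itself), and the feature matrices are assumed to come from a common-spatial-patterns step.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import rbf_kernel

        def make_scatter_scaled_rbf(x_train, y_train):
            # Use the mean within-class scatter of the CSP features to set the RBF width.
            scatter = np.mean([x_train[y_train == c].var(axis=0).sum()
                               for c in np.unique(y_train)])
            gamma = 1.0 / (2.0 * scatter + 1e-12)
            return lambda a, b: rbf_kernel(a, b, gamma=gamma)

        # clf = SVC(kernel=make_scatter_scaled_rbf(x_train, y_train), probability=True)
        # clf.fit(x_train, y_train)
        # posteriors = clf.predict_proba(x_test)   # posterior probabilities per class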

  2. Process monitored spectrophotometric titration coupled with chemometrics for simultaneous determination of mixtures of weak acids.

    PubMed

    Liao, Lifu; Yang, Jing; Yuan, Jintao

    2007-05-15

    A new spectrophotometric titration method coupled with chemometrics for the simultaneous determination of mixtures of weak acids has been developed. In this method, the titrant is a mixture of sodium hydroxide and an acid-base indicator, and the indicator is used to monitor the titration process. During a titration, both the added volume of titrant and the solution acidity at each titration point can be obtained simultaneously from an absorption spectrum by a least-squares algorithm, and the concentration of each component in the mixture can then be obtained from the titration curves by principal component regression. The method only needs the absorbance spectra to obtain the analytical results, and is free of volumetric measurements. The analysis is independent of the titration end point and does not need accurate values of the dissociation constants of the indicator and the acids. The method has been applied to the simultaneous determination of mixtures of benzoic acid and salicylic acid, and of phenol, o-chlorophenol and p-chlorophenol, with satisfactory results.
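
    The final calibration step amounts to principal component regression; a sketch using a generic PCA-plus-linear-regression pipeline is shown below, where the rows of X are the quantities monitored along the titration for calibration mixtures and Y holds the known acid concentrations (all names are placeholders).

        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        # Principal component regression: project the titration-curve data onto a few
        # principal components, then regress the known concentrations on those scores.
        pcr = make_pipeline(PCA(n_components=3), LinearRegression())
        # pcr.fit(X_calibration, Y_calibration)
        # predicted_concentrations = pcr.predict(X_unknown)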

  3. An Analysis of the Optimal Multiobjective Inventory Clustering Decision with Small Quantity and Great Variety Inventory by Applying a DPSO

    PubMed Central

    Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of item varieties in its inventory, a single management method is not a feasible approach. A better way to manage this problem is to categorise inventory items into several clusters according to inventory decisions and to use different management methods for different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to the clustering of inventory items. Without requiring prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives and obtain an overall better solution, with better convergence results and inventory decisions. PMID:25197713

  4. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands

    PubMed Central

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to perform several tests evaluating the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods, and that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks, which suggests that it may be interesting to evaluate whether larger networks can increase sEMG classification accuracy too. PMID:27656140

  5. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands.

    PubMed

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to perform several tests evaluating the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods, and that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks, which suggests that it may be interesting to evaluate whether larger networks can increase sEMG classification accuracy too.

  6. Enhancement and restoration of non-uniform illuminated Fundus Image of Retina obtained through thin layer of cataract.

    PubMed

    Mitra, Anirban; Roy, Sudipta; Roy, Somais; Setua, Sanjit Kumar

    2018-03-01

    Retinal fundus images are used extensively, either manually or without human intervention, to identify and analyze various diseases. Due to the comprehensive imaging arrangement, there is large radiance, reflectance and contrast inconsistency within and across images. A novel method based on a physical model of cataract is proposed to reduce the blurriness introduced when the fundus camera acquires the image through a thin layer of cataract. After the blurriness reduction, an enhancement procedure is proposed that focuses on contrast improvement without introducing artifacts. Because the thickness of the cataract is unevenly distributed, the cataract surroundings are first predicted in the frequency domain. Second, the resulting image of the first step is enhanced by intensity histogram equalization in the adapted Hue Saturation Intensity (HSI) color space so that the gamut problem is avoided. The final image, with suitable color and contrast, is obtained using the proposed max-min color correction approach. The results indicate that the proposed method not only more effectively enhances non-uniform retinal images obtained through a thin layer of cataract, but also yields images with appropriate brightness and saturation while maintaining the complete color space information. The proposed enhancement method has been tested on openly available datasets and the results compared with standard image enhancement algorithms and a cataract removal method; the results show noticeable improvement over existing methods. Cataract often prevents the clinician from objectively evaluating fundus features and also affects subjective tests. Enhancement and restoration of non-uniformly illuminated fundus images of the retina obtained through a thin layer of cataract has been shown here to be potentially beneficial. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Coronary arteries segmentation based on the 3D discrete wavelet transform and 3D neutrosophic transform.

    PubMed

    Chen, Shuo-Tsung; Wang, Tzung-Dau; Lee, Wen-Jeng; Huang, Tsai-Wei; Hung, Pei-Kai; Wei, Cheng-Yu; Chen, Chung-Ming; Kung, Woon-Man

    2015-01-01

    Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries that allows automatic and accurate detection of coronary pathologies. The proposed segmentation method consists of two parts. First, 3D region growing is applied to give the initial segmentation of the coronary arteries. Next, the vessel information contained in the HHH subband coefficients of the 3D DWT is detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation can accurately detect the coronary arteries. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground truth values obtained from the commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. The results indicate that the proposed method performs better in terms of the efficiency analyzed. Based on the initial segmentation of the coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.

  8. Stable Numerical Approach for Fractional Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.

    2017-12-01

    In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and hence the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. As special cases, the Jacobi polynomials reduce to well-known polynomials: (1) the Legendre polynomials, (2) the Chebyshev polynomials of the second kind, (3) the Chebyshev polynomials of the third kind, and (4) the Chebyshev polynomials of the fourth kind. Maximum absolute errors and root mean square errors are calculated for the illustrated examples and presented in tables for comparison. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better than those of these methods.

  9. Two Optical Atmospheric Remote Sensing Techniques and AN Associated Analytic Solution to a Class of Integral Equations

    NASA Astrophysics Data System (ADS)

    Manning, Robert Michael

    This work concerns itself with the analysis of two optical remote sensing methods to be used to obtain parameters of the turbulent atmosphere pertinent to stochastic electromagnetic wave propagation studies, and with the well-posed solution to a class of integral equations that are central to the development of these remote sensing methods. A remote sensing technique is theoretically developed whereby the temporal frequency spectrum of the scintillations of a stellar source, or of a point source within the atmosphere, observed through a variable-radius aperture, is related to the space-time spectrum of atmospheric scintillation. The key to this spectral remote sensing method is the spatial filtering performed by a finite aperture. The entire method is developed without resorting to a priori information such as results from stochastic wave propagation theory. Once the space-time spectrum of the scintillations is obtained, an application of known results of atmospheric wave propagation theory and simple geometric considerations are shown to yield such important information as the spectrum of atmospheric turbulence, the cross-wind velocity, and the path profile of the atmospheric refractive index structure parameter. A method is also developed to independently verify the Taylor frozen flow hypothesis. The success of the spectral remote sensing method relies on the solution of a Fredholm integral equation of the first kind. An entire class of such equations, peculiar to inverse diffraction problems, is studied and a well-posed solution (in the sense of Hadamard) is obtained and probed. Conditions of applicability are derived and shown not to limit the useful operating range of the spectral remote sensing method. The general integral equation solution obtained is then applied to another remote sensing problem having to do with the characterization of the particle size distribution of atmospheric aerosols and hydrometeors. By measuring the diffraction pattern created in the focal plane of a lens by the passage of a laser beam through a distribution of particles, it is shown that the particle-size distribution of the particles can be obtained. An intermediate result of the analysis also gives the total volume concentration of the particles.

  10. Calculation of transonic flows using an extended integral equation method

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1976-01-01

    An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.

  11. Acoustic-Liner Admittance in a Duct

    NASA Technical Reports Server (NTRS)

    Watson, W. R.

    1986-01-01

    Method calculates admittance from easily obtainable values. New method for calculating acoustic-liner admittance in rectangular duct with grazing flow based on finite-element discretization of acoustic field and reposing of unknown admittance value as linear eigenvalue problem on admittance value. Problem solved by Gaussian elimination. Unlike existing methods, present method extendable to mean flows with two-dimensional boundary layers as well. In presence of shear, results of method compared well with results of Runge-Kutta integration technique.

  12. Three-dimensional computed tomographic volumetry precisely predicts the postoperative pulmonary function.

    PubMed

    Kobayashi, Keisuke; Saeki, Yusuke; Kitazawa, Shinsuke; Kobayashi, Naohiro; Kikuchi, Shinji; Goto, Yukinobu; Sakai, Mitsuaki; Sato, Yukio

    2017-11-01

    It is important to accurately predict the patient's postoperative pulmonary function. The aim of this study was to compare the accuracy of predictions of the postoperative residual pulmonary function obtained with three-dimensional computed tomographic (3D-CT) volumetry with that of predictions obtained with the conventional segment-counting method. Fifty-three patients scheduled to undergo lung cancer resection, pulmonary function tests, and computed tomography were enrolled in this study. The postoperative residual pulmonary function was predicted based on the segment-counting and 3D-CT volumetry methods. The predicted postoperative values were compared with the results of postoperative pulmonary function tests. Regarding the linear correlation coefficients between the predicted postoperative values and the measured values, those obtained using the 3D-CT volumetry method tended to be higher than those acquired using the segment-counting method. In addition, the variations between the predicted and measured values were smaller with the 3D-CT volumetry method than with the segment-counting method. These results were more obvious in COPD patients than in non-COPD patients. Our findings suggested that the 3D-CT volumetry was able to predict the residual pulmonary function more accurately than the segment-counting method, especially in patients with COPD. This method might lead to the selection of appropriate candidates for surgery among patients with a marginal pulmonary function.
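
    For orientation, the two prediction rules being compared can be written in a generic form: segment counting scales the preoperative value by the fraction of lung segments remaining (19 segments is a widely used convention), while 3D-CT volumetry replaces the segment fraction with the measured fraction of functional lung volume to be resected. The exact formulas, segment counts and volumes used by the authors are not stated here, so the sketch below is a hypothetical illustration only.

        # Hedged sketch of the two prediction rules compared in the study (generic forms;
        # the exact formulas and conventions used by the authors are not stated here).

        TOTAL_SEGMENTS = 19          # common convention for the number of lung segments

        def ppo_segment_counting(preop_fev1_l, segments_resected):
            """Predicted postoperative FEV1 from simple segment counting."""
            return preop_fev1_l * (1.0 - segments_resected / TOTAL_SEGMENTS)

        def ppo_ct_volumetry(preop_fev1_l, resected_volume_ml, total_functional_volume_ml):
            """Predicted postoperative FEV1 using the resected fraction of functional
            lung volume measured by 3D-CT volumetry."""
            return preop_fev1_l * (1.0 - resected_volume_ml / total_functional_volume_ml)

        # Example: right upper lobectomy (3 segments) in a patient with FEV1 = 2.4 L
        print(ppo_segment_counting(2.4, 3))            # ~2.02 L
        print(ppo_ct_volumetry(2.4, 610.0, 4200.0))    # ~2.05 L (hypothetical volumes)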

  13. Dynamics of multiple viscoelastic carbon nanotube based nanocomposites with axial magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karličić, Danilo; Cajić, Milan; Murmu, Tony

    2014-06-21

    Nanocomposites and magnetic field effects on nanostructures have received great attention in recent years. A large amount of research work has been focused on developing the proper theoretical framework for describing many physical effects appearing in structures on the nanoscale level. A great step in this direction was the successful application of the nonlocal continuum field theory of Eringen. In the present paper, the free transverse vibration analysis is carried out for a system composed of multiple single walled carbon nanotubes (MSWCNT) embedded in a polymer matrix and under the influence of an axial magnetic field. An equivalent nonlocal model of the MSWCNT is adopted as a viscoelastically coupled multi-nanobeam system (MNBS) under the influence of a longitudinal magnetic field. Governing equations of motion are derived using Newton's second law and nonlocal Rayleigh beam theory, which take into account small-scale effects, the effect of nanobeam angular acceleration, internal damping and the Maxwell relation. Explicit expressions for the complex natural frequency are derived based on the method of separation of variables and the trigonometric method for the "Clamped-Chain" system. In addition, an analytical method is proposed in order to obtain the asymptotic damped natural frequency and the critical damping ratio, which are independent of boundary conditions and the number of nanobeams in the MNBS. The validity of the obtained results is confirmed by comparing the complex frequencies obtained via the trigonometric method with the results obtained by using numerical methods. The influence of the longitudinal magnetic field on the free vibration response of the viscoelastically coupled MNBS is discussed in detail. In addition, numerical results are presented to point out the effects of the nonlocal parameter, internal damping, and parameters of the viscoelastic medium on the complex natural frequencies of the system. The results demonstrate the efficiency of the suggested methodology to find closed form solutions for the free vibration response of multiple nanostructure systems under the influence of a magnetic field.

  14. Analysis of Artificial Neural Network Backpropagation Using Conjugate Gradient Fletcher Reeves In The Predicting Process

    NASA Astrophysics Data System (ADS)

    Wanto, Anjar; Zarlis, Muhammad; Sawaluddin; Hartama, Dedy

    2017-12-01

    Backpropagation is an artificial neural network algorithm well suited to prediction tasks, one of which is predicting the rate of the Consumer Price Index (CPI) for the foodstuff sector. The conjugate gradient Fletcher-Reeves method is a suitable optimization method to pair with backpropagation, because it can reduce the number of iterations without lowering the quality of the training and testing results. The Consumer Price Index (CPI) data to be predicted come from the Central Statistics Agency (BPS) of Pematangsiantar. The results of this study are expected to help the government in making policies to improve economic growth. In this study, the data obtained are processed by conducting training and testing with backpropagation using a learning rate of 0.01 and a minimum target error of 0.001-0.09. The training network is built with binary and bipolar sigmoid activation functions. After the backpropagation results are obtained, they are then optimized using the conjugate gradient Fletcher-Reeves method by conducting the same training and testing on 5 predefined network architectures. The results show that the method used can increase both the speed and the accuracy.
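
    The optimization step referred to above can be illustrated by the Fletcher-Reeves update rule, in which each new search direction is the negative gradient plus the previous direction weighted by beta_k = ||g_k||^2 / ||g_{k-1}||^2. The sketch below applies it to a small synthetic least-squares problem with an exact line search rather than to the authors' CPI network, so all data and settings are illustrative.

        import numpy as np

        # Minimal sketch of the Fletcher-Reeves nonlinear conjugate gradient update.
        # For clarity it minimises a small synthetic least-squares loss instead of the
        # authors' backpropagation network; the data below are random placeholders.

        rng = np.random.default_rng(0)
        A = rng.normal(size=(50, 5))
        b = A @ rng.normal(size=5) + 0.01 * rng.normal(size=50)

        def loss(w):
            return 0.5 * np.sum((A @ w - b) ** 2)

        def grad(w):
            return A.T @ (A @ w - b)

        w = np.zeros(5)
        g = grad(w)
        d = -g                                       # initial direction: steepest descent
        for k in range(25):
            Ad = A @ d
            alpha = -(d @ g) / (Ad @ Ad)             # exact line search (quadratic loss)
            w = w + alpha * d
            g_new = grad(w)
            if np.linalg.norm(g_new) < 1e-8:
                break
            beta = (g_new @ g_new) / (g @ g)         # Fletcher-Reeves coefficient
            d = -g_new + beta * d                    # conjugate direction update
            g = g_new

        print("iterations:", k + 1, " final loss:", loss(w))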

  15. Electro-optical properties of the metal oxide-carbon thin film system of CdO-LCC

    NASA Astrophysics Data System (ADS)

    Kokshina, A. V.; Smirnov, A. V.; Razina, A. G.

    2016-08-01

    This article presents the results of a study of the electrical and optical properties of the CdO-LCC thin film system. Cadmium oxide films were obtained by the thermal oxidation method. The CdO-LCC thin film system was produced by depositing a 100 nm thick linear-chain carbon (LCC) film onto a CdO film using the ion-plasma method, after which the obtained system was annealed. The studies showed that the obtained CdO-LCC films are quite transparent in the visible region, have a polycrystalline structure, a thickness of around 300 nm, and a band gap of up to 2.3 eV. The obtained thin film system has photosensitive properties.

  16. Chosen interval methods for solving linear interval systems with special type of matrix

    NASA Astrophysics Data System (ADS)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix. This kind of matrix, a band matrix with a parameter, is obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) using the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore, the presented linear interval systems contain elements that represent the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they have no errors of method. All calculations were performed in floating-point interval arithmetic.

  17. U/Th dating of carbonate deposits from Constantina (Sevilla), Spain.

    PubMed

    Alcaraz-Pelegrina, J M; Martínez-Aguirre, A

    2007-07-01

    The uranium-series method has been applied to continental carbonate deposits from Constantina, Seville, in Spain. All samples analysed were impure carbonates, and the leachate-leachate method was used to obtain activity ratios in the carbonate fraction. Leachate-residue methods were applied to one of the samples in order to compare with the leachate-leachate method, but the assumptions of the leachate-residue method were not met and the resulting ages were not valid. Ages obtained by the leachate-leachate method range from 1.8 to 23.5 ky BP and are consistent with the stratigraphical positions of the samples analysed. Initial activity ratios for uranium isotopes are practically constant in this period, indicating that no changes in environmental conditions occurred between 1.8 and 23.5 ky BP.

  18. Investigation on a coupled CFD/DSMC method for continuum-rarefied flows

    NASA Astrophysics Data System (ADS)

    Tang, Zhenyu; He, Bijiao; Cai, Guobiao

    2012-11-01

    The purpose of the present work is to investigate the coupled CFD/DSMC method using the existing CFD and DSMC codes developed by the authors. The interface between the continuum and particle regions is determined by the gradient-length local Knudsen number. A coupling scheme combining both state-based and flux-based coupling methods is proposed in the current study. Overlapping grids are established between the different grid systems of CFD and DSMC codes. A hypersonic flow over a 2D cylinder has been simulated using the present coupled method. Comparison has been made between the results obtained from both methods, which shows that the coupled CFD/DSMC method can achieve the same precision as the pure DSMC method and obtain higher computational efficiency.
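
    A common way to place the CFD/DSMC interface, consistent with the gradient-length local Knudsen number mentioned above, is to compute Kn_GLL = (lambda/Q)|dQ/dx| for a flow quantity Q and hand cells exceeding a breakdown threshold to the particle solver. The sketch below evaluates this on a synthetic 1-D density profile; the threshold of 0.05 is a commonly used value and not necessarily the one adopted by the authors.

        import numpy as np

        # Hedged sketch: gradient-length local Knudsen number Kn_GLL = (lambda/Q)|dQ/dx|
        # on a 1-D field, used to flag cells for the DSMC (particle) solver.  The
        # density profile and mean free path below are synthetic.

        def kn_gradient_length(x, q, mean_free_path, threshold=0.05):
            """Return Kn_GLL and a boolean mask of cells flagged for the particle solver."""
            dqdx = np.gradient(q, x)
            kn_gll = mean_free_path * np.abs(dqdx) / np.abs(q)
            return kn_gll, kn_gll > threshold

        # Synthetic density profile with a steep gradient (e.g. across a shock region)
        x = np.linspace(0.0, 1.0, 200)
        rho = 1.0 + 4.0 / (1.0 + np.exp(-80.0 * (x - 0.5)))
        kn, dsmc_cells = kn_gradient_length(x, rho, mean_free_path=5e-3)

        print("cells assigned to DSMC:", dsmc_cells.sum(), "of", x.size)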

  19. Delta13C and delta18O isotopic composition of CaCO3 measured by continuous flow isotope ratio mass spectrometry: statistical evaluation and verification by application to Devils Hole core DH-11 calcite.

    PubMed

    Révész, Kinga M; Landwehr, Jurate M

    2002-01-01

    A new method was developed to analyze the stable carbon and oxygen isotope ratios of small samples (400 +/- 20 micro g) of calcium carbonate. This new method streamlines the classical phosphoric acid/calcium carbonate (H(3)PO(4)/CaCO(3)) reaction method by making use of a recently available Thermoquest-Finnigan GasBench II preparation device and a Delta Plus XL continuous flow isotope ratio mass spectrometer. Conditions for which the H(3)PO(4)/CaCO(3) reaction produced reproducible and accurate results with minimal error had to be determined. When the acid/carbonate reaction temperature was kept at 26 degrees C and the reaction time was between 24 and 54 h, the precision of the carbon and oxygen isotope ratios for pooled samples from three reference standard materials was

  20. Ensemble Methods for MiRNA Target Prediction from Expression Data

    PubMed Central

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448
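
    The basic aggregation idea can be sketched as follows: each component method scores every candidate miRNA-mRNA pair, and the ensemble combines the per-method rankings (here by simple rank averaging). The scores below are synthetic placeholders; in the study each column would come from an actual predictor such as a Pearson correlation, IDA or Lasso run on expression data.

        import numpy as np
        import pandas as pd

        # Minimal sketch of a rank-aggregation ensemble: each individual method scores
        # every candidate target gene for a given miRNA, and the ensemble averages the
        # per-method ranks.  The scores are synthetic placeholders.

        rng = np.random.default_rng(1)
        genes = [f"gene_{i}" for i in range(10)]
        scores = pd.DataFrame(
            {"pearson": rng.random(10), "ida": rng.random(10), "lasso": rng.random(10)},
            index=genes,
        )

        # Rank within each method (1 = strongest predicted target), then average the ranks
        ranks = scores.rank(ascending=False)
        ensemble = ranks.mean(axis=1).sort_values()

        print(ensemble.head(5))          # top-5 ensemble-predicted targets for the miRNA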

  1. [The importance of memory bias in obtaining age of menarche by recall method in Brazilian adolescents].

    PubMed

    Castilho, Silvia Diez; Nucci, Luciana Bertoldi; Assuino, Samanta Ramos; Hansen, Lucca Ortolan

    2014-06-01

    To compare the age at menarche obtained by the recall method according to the time elapsed since the event, in order to verify the importance of the recall bias. A total of 1,671 girls (7-18 years) at schools in Campinas-SP were evaluated regarding the occurrence of menarche by the status quo method (menarche: yes or no) and by the recall method (date of menarche, for those who mentioned it). The age at menarche obtained by the status quo method was calculated by logit, which considers the whole group, and the age obtained by the recall method was calculated as the average of the mentioned ages at menarche; in this group, the age at menarche was obtained as the difference between the date of the event and the date of birth. Girls who reported menarche (883, 52.8%) were divided into four groups according to the time elapsed since the event. ANOVA and logistic regression were used for the analysis, with a significance level of 0.05. The age at menarche calculated by logit was 12.14 y/o (95% CI 12.08 to 12.20). Mean ages obtained by recall were: for those who experienced menarche within the previous year, 12.26 y/o (±1.14); between >1-2 years before, 12.29 y/o (±1.22); between >2-3 years before, 12.23 y/o (±1.27); and more than 3 years before, 11.55 y/o (±1.24), p < 0.001. The age at menarche obtained by the recall method was similar for girls who menstruated within the previous 3 years (and approaches the age calculated by logit); when more than 3 years had passed, the recall bias was significant.
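
    The status quo ("logit") estimate referred to above can be illustrated with simulated data: each girl contributes only her age and a yes/no menarche status, a logistic curve is fitted, and the median age at menarche is the age at which the fitted probability crosses 0.5. The simulated sample below is hypothetical and stands in for the Campinas-SP data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hedged sketch of the status quo (logit) estimate of median age at menarche.
        # The data are simulated, not the study sample.

        rng = np.random.default_rng(0)
        true_median, spread = 12.1, 1.1
        age = rng.uniform(7.0, 18.0, size=1671)
        # a girl has reached menarche if her (logistic-distributed) menarche age < current age
        status = (rng.logistic(loc=true_median, scale=spread, size=age.size) < age).astype(int)

        model = LogisticRegression(C=1e6).fit(age.reshape(-1, 1), status)
        b, a = model.coef_[0, 0], model.intercept_[0]
        median_age = -a / b                     # P(menarche) = 0.5  <=>  a + b*age = 0

        print(f"estimated median age at menarche: {median_age:.2f} years")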

  2. A three-dimensional, compressible, laminar boundary-layer method for general fuselages. Volume 1: Numerical method

    NASA Technical Reports Server (NTRS)

    Wie, Yong-Sun

    1990-01-01

    A procedure for calculating 3-D, compressible laminar boundary layer flow on general fuselage shapes is described. The boundary layer solutions can be obtained in either nonorthogonal 'body oriented' coordinates or orthogonal streamline coordinates. The numerical procedure is 'second order' accurate, efficient and independent of the cross flow velocity direction. Numerical results are presented for several test cases, including a sharp cone, an ellipsoid of revolution, and a general aircraft fuselage at angle of attack. Comparisons are made between numerical results obtained using nonorthogonal curvilinear 'body oriented' coordinates and streamline coordinates.

  3. The ensemble switch method for computing interfacial tensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmitz, Fabian; Virnau, Peter

    2015-04-14

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.

  4. Design and Development of the Trash Spliter with Three Different Sensors

    NASA Astrophysics Data System (ADS)

    Perangin Angin, Despaleri; Siagian, Hendrik; Dodi Suryanto, Eka; Sashanti, Rahayu; Marcopolo

    2018-04-01

    Trash has become a major problem in everyday life, and until now there has been no adequate method to handle it. This paper discusses the development of a trash splitter with three different sensors: infrared, metal, and light sensors. The results show that the devices achieve a sorting accuracy for single-type waste of metal (98%), organic (26.67%), paper (32%), and plastics (58%). The accuracy of mixed waste sorting is metal (94.67%), organic (28%), paper (12%), and plastics (41.3%).

  5. Flameless atomic-absorption determination of gold in geological materials

    USGS Publications Warehouse

    Meier, A.L.

    1980-01-01

    Gold in geologic material is dissolved using a solution of hydrobromic acid and bromine, extracted with methyl isobutyl ketone, and determined using an atomic-absorption spectrophotometer equipped with a graphite furnace atomizer. A comparison of results obtained by this flameless atomic-absorption method on U.S. Geological Survey reference rocks and geochemical samples with reported values and with results obtained by flame atomic-absorption shows that reasonable accuracy is achieved with improved precision. The sensitivity, accuracy, and precision of the method allows acquisition of data on the distribution of gold at or below its crustal abundance. © 1980.

  6. Preliminary comparative assessment of PM10 hourly measurement results from new monitoring stations type using stochastic and exploratory methodology and models

    NASA Astrophysics Data System (ADS)

    Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł

    2018-01-01

    The paper presents key issues from the preliminary stage of a proposed extended equivalence assessment for new portable devices: the comparability of hourly PM10 concentration series with reference station measurements using statistical methods. The technical aspects of the new portable meters are presented. The emphasis is placed on assessing the comparability of the results using a stochastic and exploratory modelling methodology. The concept is based on the observation that a simple comparison of the result series in the time domain is insufficient; the comparison of regularity should be carried out in three complementary fields of statistical modelling: time, frequency and space. The proposal is based on modelling results for five annual series of measurement results from the new mobile devices and from the WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high agreement of the new devices' measurements with the reference.

  7. A note on the computation of antenna-blocking shadows

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1993-01-01

    A simple and readily applied method is provided to compute the shadow on the main reflector of a Cassegrain antenna, when cast by the subreflector and the subreflector supports. The method entails some convenient minor approximations that will produce results similar to results obtained with a lengthier, mainframe computer program.

  8. Fremdsprachenlernbedarf--Begriff, Ermittlungsverfahren, Resultate vorliegender Untersuchungen (The Demand for Foreign Language Instruction--Method of Inquiry, Research Results).

    ERIC Educational Resources Information Center

    Stedtfeld, Wolfgang

    1979-01-01

    Discusses the concept of the demand for foreign language teaching in various types of West German schools. An account is given of the methods by which data were obtained. One conclusion reached was that currently accepted teaching goals are questionable. (IFS/WGA)

  9. Effect of sintering process and additives on the properties of cordierite based ceramics

    NASA Astrophysics Data System (ADS)

    Rundans, M.; Sperberga, I.; Sedmale, G.; Stinkulis, G.

    2013-12-01

    It is possible to obtain cordierite ceramics by high temperature synthesis using both synthetic and raw natural materials. This paper discusses the possibilities of obtaining cordierite ceramics by replacing part of the required oxides with raw materials from various Latvian deposits of dolomite and clay. The obtained raw cordierite powders were ground in two modes (3 and 12 hours) and fired at 1200 °C. Ceramic samples were characterized by the hydrostatic weighing method; the crystalline phase composition was studied by XRD. The obtained samples were evaluated by their mechanical (compressive) strength and linear coefficient of thermal expansion (CTE). Thermal shock resistance was tested using the water quenching method and afterwards evaluated by using an ultrasonic method to test changes in Young's modulus of elasticity. The results show that an increase in grinding time causes the samples to densify and promotes the formation of the cordierite crystalline phase, which corresponds to an increase in total compressive strength and a decrease in CTE values. The CTE values of samples ground for 12 hours conform to those obtained in other studies.

  10. Analysis of the cylinder’s movement characteristics after entering water based on CFD

    NASA Astrophysics Data System (ADS)

    Liu, Xianlong

    2017-10-01

    After a cylinder enters the water vertically, its motion proceeds at variable speed. Dynamic mesh approaches mostly rely on unstructured grids, and the calculation results are not ideal while consuming huge computing resources. The CFD method is used to calculate the resistance of the cylinder at different velocities. Cubic spline interpolation is then used to obtain the resistance at arbitrary speeds. The finite difference method is used to solve the equation of motion, and the acceleration, velocity, displacement and other physical quantities are obtained after the cylinder enters the water.
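
    The workflow described above can be sketched as follows: CFD supplies the resistance at a few fixed velocities, a cubic spline interpolates it at any speed, and an explicit finite-difference scheme integrates the equation of motion after water entry. All masses, forces and sample values below are illustrative assumptions.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Hedged sketch: spline-interpolated drag from a few CFD samples, then an
        # explicit finite-difference integration of the cylinder's equation of motion.
        # All numbers are illustrative, not the paper's values.

        v_cfd = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])          # m/s, CFD sample speeds
        F_cfd = np.array([0.0, 12.0, 46.0, 104.0, 190.0, 300.0])  # N, computed resistance
        drag = CubicSpline(v_cfd, F_cfd)

        m, g, buoyancy = 8.0, 9.81, 30.0     # kg, m/s^2, N (assumed constant once submerged)
        dt, t_end = 1e-3, 2.0

        t = np.arange(0.0, t_end, dt)
        v = np.zeros_like(t)                 # downward velocity
        z = np.zeros_like(t)                 # penetration depth
        v[0] = 6.0                           # entry velocity
        for k in range(t.size - 1):
            a = (m * g - buoyancy - drag(v[k])) / m      # Newton's second law
            v[k + 1] = v[k] + dt * a                     # explicit (forward) difference
            z[k + 1] = z[k] + dt * v[k]

        print(f"velocity after {t_end} s: {v[-1]:.2f} m/s, depth {z[-1]:.2f} m")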

  11. The holistic analysis of gamma-ray spectra in instrumental neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Blaauw, Menno

    1994-12-01

    A method for the interpretation of γ-ray spectra as obtained in INAA using linear least squares techniques is described. Results obtained using this technique and the traditional method previously in use at IRI are compared. It is concluded that the method presented performs better with respect to the number of detected elements, the resolution of interferences and the estimation of the accuracies of the reported element concentrations. It is also concluded that the technique is robust enough to obviate the deconvolution of multiplets.

  12. METHOD FOR EVALUATING MOLD GROWTH ON CEILING TILE

    EPA Science Inventory

    A method to extract mold spores from porous ceiling tiles was developed using a masticator blender. Ceiling tiles were inoculated and analyzed using four species of mold. Statistical analysis comparing results obtained by masticator extraction and the swab method was performed. T...

  13. Potentiometric detection in UPLC as an easy alternative to determine cocaine in biological samples.

    PubMed

    Daems, Devin; van Nuijs, Alexander L N; Covaci, Adrian; Hamidi-Asl, Ezat; Van Camp, Guy; Nagels, Luc J

    2015-07-01

    The analytical methods which are often used for the determination of cocaine in complex biological matrices are a prescreening immunoassay and confirmation by chromatography combined with mass spectrometry. We suggest an ultra-high-pressure liquid chromatography combined with a potentiometric detector, as a fast and practical method to detect and quantify cocaine in biological samples. An adsorption/desorption model was used to investigate the usefulness of the potentiometric detector to determine cocaine in complex matrices. Detection limits of 6.3 ng mL(-1) were obtained in plasma and urine, which is below the maximum residue limit (MRL) of 25 ng mL(-1). A set of seven plasma samples and 10 urine samples were classified identically by both methods as exceeding the MRL or being inferior to it. The results obtained with the UPLC/potentiometric detection method were compared with the results obtained with the UPLC/MS method for samples spiked with varying cocaine concentrations. The intraclass correlation coefficient was 0.997 for serum (n =7) and 0.977 for urine (n =8). As liquid chromatography is an established technique, and as potentiometry is very simple and cost-effective in terms of equipment, we believe that this method is potentially easy, inexpensive, fast and reliable. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Generation of Protein Crystals Using a Solution-Stirring Technique

    NASA Astrophysics Data System (ADS)

    Adachi, Hiroaki; Niino, Ai; Matsumura, Hiroyoshi; Takano, Kazufumi; Kinoshita, Takayoshi; Warizaya, Masaichi; Inoue, Tsuyoshi; Mori, Yusuke; Sasaki, Takatomo

    2004-06-01

    Crystals of bovine adenosine deaminase (ADA) were grown over a two week period in the presence of an inhibitor, whereas ADA crystals did not form using conventional crystallization methods when the inhibitor was excluded. To obtain ADA crystals in the absence of the inhibitor, a solution-stirring technique was used. The crystals obtained using this technique were found to be of high quality and were shown to have high structural resolution for X-ray diffraction analysis. The results of this study indicate that the stirring technique is a useful method for obtaining crystals of proteins that do not crystallize using conventional techniques.

  15. Comparison of cyclosporine determinations in whole blood by three different methods: HPLC, 125I RIA and 3H RIA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, W.Y.; Lipsey, A.I.; Cheng, M.H.

    1987-04-01

    The authors have analyzed and compared the cyclosporine concentrations in whole blood specimens from pediatric renal transplant patients using three different methods: high-performance liquid chromatography (HPLC) (5-micron C18 reverse-phase column), 3H radioimmunoassay (RIA), and 125I RIA (Sandoz kit with the 3H tracer substituted by a 125I tracer). Results obtained by the 125I RIA correlated well with results obtained by the 3H RIA. Both RIA methods had a similar correlation with the HPLC method. The 125I RIA method showed higher sensitivity and greater precision than the 3H RIA method. The authors conclude that the 125I RIA method can be used for cyclosporine determination in whole blood specimens. The use of the 125I RIA provides a simple and rapid method with higher counting efficiency and less background quenching than the 3H RIA method, which requires cumbersome liquid scintillation counting procedures.

  16. Credit allocation for research institutes

    NASA Astrophysics Data System (ADS)

    Wang, J.-P.; Guo, Q.; Yang, K.; Han, J.-T.; Liu, J.-G.

    2017-05-01

    It is a challenging task to assess the research performance of multiple institutes. Considering that it is unfair to split credit evenly among institutes that appear in different positions in a paper's affiliation list, in this paper we present a credit allocation method (CAM) with a weighted order coefficient for multiple institutes. The results for the APS dataset with 18987 institutes show that the top-ranked institutes obtained by the CAM method correspond to well-known universities or research labs with a high reputation in physics. Moreover, we evaluate the performance of the CAM method when citation links are added or rewired randomly, quantified by Kendall's Tau and the Jaccard index. The experimental results indicate that the CAM method is more robust than the total number of citations (TC) method and Shen's method. Finally, we give the top 20 Chinese universities in physics obtained by the CAM method. The method is, however, valid for any other branch of science, not just for physics. The proposed method also provides universities and policy makers with an effective tool to quantify and balance the academic performance of universities.

  17. Determination of the spin and recovery characteristics of a typical low-wing general aviation design

    NASA Technical Reports Server (NTRS)

    Tischler, M. B.; Barlow, J. B.

    1980-01-01

    The equilibrium spin technique implemented in a graphical form for obtaining spin and recovery characteristics from rotary balance data is outlined. Results of its application to recent rotary balance tests of the NASA Low-Wing General Aviation Aircraft are discussed. The present results, which are an extension of previously published findings, indicate the ability of the equilibrium method to accurately evaluate spin modes and recovery control effectiveness. A comparison of the calculated results with available spin tunnel and full scale findings is presented. The technique is suitable for preliminary design applications as determined from the available results and data base requirements. A full discussion of implementation considerations and a summary of the results obtained from this method to date are presented.

  18. Application of time dependent Green's function method to scattering of elastic waves in anisotropic solids

    NASA Astrophysics Data System (ADS)

    Tewary, Vinod K.; Fortunko, Christopher M.

    The present, time-dependent 3D Green's function method resembles that used to study the propagation of elastic waves in a general, anisotropic half-space in the lattice dynamics of crystals. The method is used to calculate the scattering amplitude of elastic waves from a discontinuity in the half-space; exact results are obtained for 3D pulse propagation in a general, anisotropic half-space that contains either an interior point or a planar scatterer. The results thus obtained are applicable in the design of ultrasonic scattering experiments, especially as an aid in the definition of the spatial and time-domain transducer responses that can maximize detection reliability for specific categories of flaws in highly anisotropic materials.

  19. A finite element analysis of viscoelastically damped sandwich plates

    NASA Astrophysics Data System (ADS)

    Ma, B.-A.; He, J.-F.

    1992-01-01

    A finite element analysis associated with an asymptotic solution method for the harmonic flexural vibration of viscoelastically damped unsymmetrical sandwich plates is given. The element formulation is based on generalization of the discrete Kirchhoff theory (DKT) element formulation. The results obtained with the first order approximation of the asymptotic solution presented here are the same as those obtained by means of the modal strain energy (MSE) method. By taking more terms of the asymptotic solution, with successive calculations and use of the Padé approximants method, accuracy can be improved. The finite element computation has been verified by comparison with an analytical exact solution for rectangular plates with simply supported edges. Results for the same plates with clamped edges are also presented.

  20. UOE Pipe Numerical Model: Manufacturing Process And Von Mises Residual Stresses Resulted After Each Technological Step

    NASA Astrophysics Data System (ADS)

    Delistoian, Dmitri; Chirchor, Mihael

    2017-12-01

    Fluid transportation from production areas to the final customer is carried out by pipelines. For the oil and gas industry, pipeline safety and reliability are a priority. For this reason, guaranteed pipe quality directly influences the designed pipeline life and, above all, protects the environment. A significant number of longitudinally welded pipes for onshore/offshore pipelines are manufactured by the UOE method, which is based on cold forming. In the present study, the UOE pipe manufacturing process is modeled using the finite element method, and the von Mises stresses are obtained for each step. The numerical simulation is performed for an L415 MB (X60) steel plate with a thickness of 7.9 mm, a length of 30 mm and a width of 1250 mm; as a result, a DN 400 pipe is obtained.

  1. Study of different filtering techniques applied to spectra from airborne gamma spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilhelm, Emilien; Gutierrez, Sebastien; Reboli, Anne

    2015-07-01

    One of the features of spectra obtained by airborne gamma spectrometry is low counting statistics due to the short acquisition time (1 s) and the large source-detector distance (40 m). It leads to considerable uncertainty in radionuclide identification and determination of their respective activities from the windows method recommended by the IAEA, especially for low-level radioactivity. The present work compares the results obtained with filters in terms of errors of the filtered spectra with the window method and over the whole gamma energy range. The results are used to determine which filtering technique is the most suitable in combination with some method for total stripping of the spectrum. (authors)

  2. A novel wavelet neural network based pathological stage detection technique for an oral precancerous condition

    PubMed Central

    Paul, R R; Mukherjee, A; Dutta, P K; Banerjee, S; Pal, M; Chatterjee, J; Chaudhuri, K; Mukkerjee, K

    2005-01-01

    Aim: To describe a novel neural network based oral precancer (oral submucous fibrosis; OSF) stage detection method. Method: The wavelet coefficients of transmission electron microscopy images of collagen fibres from normal oral submucosa and OSF tissues were used to choose the feature vector which, in turn, was used to train the artificial neural network. Results: The trained network was able to classify normal and oral precancer stages (less advanced and advanced) after obtaining the image as an input. Conclusions: The results obtained from this proposed technique were promising and suggest that with further optimisation this method could be used to detect and stage OSF, and could be adapted for other conditions. PMID:16126873

  3. Procedures utilized for obtaining direct and remote atmospheric carbon monoxide measurements over the lower Lake Michigan Basin in August of 1976

    NASA Technical Reports Server (NTRS)

    Casas, J. C.; Condon, E.; Campbell, S. A.

    1978-01-01

    In order to establish the applicability of a gas filter correlation radiometer, GFCR, to remote carbon monoxide, CO, measurements on a regional and worldwide basis, Old Dominion University has been engaged in the development of accurate and cost effective techniques for inversion of GFCR CO data and in the development of an independent gas chromatographic technique for measuring CO. This independent method is used to verify the results and the associated inversion method obtained from the GFCR. A description of both methods (direct and remote) will be presented. Data obtained by both techniques during a flight test over the lower Lake Michigan Basin in August of 1976 will also be discussed.

  4. Efficient flow injection and sequential injection methods for spectrophotometric determination of oxybenzone in sunscreens based on reaction with Ni(II).

    PubMed

    Chisvert, A; Salvador, A; Pascual-Martí, M C; March, J G

    2001-04-01

    Spectrophotometric determination of a widely used UV-filter, such as oxybenzone, is proposed. The method is based on the complexation reaction between oxybenzone and Ni(II) in ammoniacal medium. The stoichiometry of the reaction, established by the Job method, was 1:1. Reaction conditions were studied and the experimental parameters were optimized, for both flow injection (FI) and sequential injection (SI) determinations, with comparative purposes. Sunscreen formulations containing oxybenzone were analyzed by the proposed methods and results compared with those obtained by HPLC. Data show that both FI and SI procedures provide accurate and precise results. The ruggedness, sensitivity and LOD are adequate to the analysis requirements. The sample frequency obtained by FI is three-fold higher than that of SI analysis. SI is less reagent-consuming than FI.

  5. Multiframe super resolution reconstruction method based on light field angular images

    NASA Astrophysics Data System (ADS)

    Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao

    2017-12-01

    The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.

  6. New approach to canonical partition functions computation in Nf=2 lattice QCD at finite baryon density

    NASA Astrophysics Data System (ADS)

    Bornyakov, V. G.; Boyda, D. L.; Goy, V. A.; Molochkov, A. V.; Nakamura, Atsushi; Nikolaev, A. A.; Zakharov, V. I.

    2017-05-01

    We propose and test a new approach to the computation of canonical partition functions in lattice QCD at finite density. We suggest a procedure consisting of a few steps. We first compute numerically the quark number density for imaginary chemical potential i μq I . Then we restore the grand canonical partition function for imaginary chemical potential using the fitting procedure for the quark number density. Finally we compute the canonical partition functions using high precision numerical Fourier transformation. Additionally, we compute the canonical partition functions using the known method of the hopping parameter expansion and compare the results obtained by the two methods in the deconfining as well as in the confining phases. The agreement between the two methods indicates the validity of the new method. Our numerical results are obtained in two flavor lattice QCD with clover improved Wilson fermions.

  7. Characterization of Linum usitatissimum L. oil obtained from different extraction technique and in vitro antioxidant potential of supercritical fluid extract

    PubMed Central

    Chauhan, Rishika; Chester, Karishma; Khan, Yasmeen; Tamboli, Ennus Tajuddin; Ahmad, Sayeed

    2015-01-01

    Aim: The present investigation aimed to characterize the fixed oil of Linum usitatissimum L. obtained using five different extraction methods: supercritical fluid extraction (SFE), ultrasound assistance, Soxhlet extraction, solvent extraction, and the three-phase partitioning method. Materials and Methods: The SFE conditions (temperature, pressure, and volume of CO2) were optimized beforehand for better yield. The extracted oils were analyzed and compared for their physiochemical parameters, high performance thin layer chromatography (HPTLC), gas chromatography-mass spectrometry (GC-MS), and Fourier-transformed infrared spectroscopy (FT-IR) fingerprinting. Antioxidant activity was also determined using the 1,1-diphenyl-2-picrylhydrazyl and superoxide scavenging methods. Result: The main fatty acids, as obtained by GC-MS, were α-linolenic acid, linoleic acid, palmitic acid, and stearic acid. HPTLC analysis revealed the presence of similar major components in the chromatograms. Similarly, the patterns of peaks obtained in the FT-IR and GC-MS spectra of the same oils by different extraction methods were superimposable. Conclusion: The analysis showed that the fixed oil of L. usitatissimum L. is a good source of n-3 fatty acids, with significant antioxidant activity for the oil obtained by the SFE method. PMID:26681884

  8. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    PubMed Central

    2012-01-01

    Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strains values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
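
    The strain-estimation step can be illustrated with a first-order (affine) local model: the DENSE displacements in a neighbourhood are fitted by least squares, the fitted gradient G gives the deformation gradient F = I + G, and the Green-Lagrange strain follows as E = 0.5(F^T F - I). The paper uses higher polynomial orders selected by the user; the data below are synthetic.

        import numpy as np

        # Minimal sketch (synthetic data, first-order local model): scattered displacement
        # measurements are fitted with an affine function by least squares; the fitted
        # gradient gives F = I + G and the Green-Lagrange strain E = 0.5(F^T F - I).

        rng = np.random.default_rng(3)
        X = rng.uniform(-1.0, 1.0, size=(200, 3))            # reference positions (mm)

        G_true = np.array([[0.10, 0.02, 0.00],               # prescribed displacement gradient
                           [0.00, -0.05, 0.01],
                           [0.00, 0.00, -0.04]])
        U = X @ G_true.T + 0.002 * rng.normal(size=X.shape)  # noisy displacements

        # Least-squares fit of u_i(X) = c_i + G[i, :] @ X  (design matrix [1, X])
        A = np.hstack([np.ones((X.shape[0], 1)), X])
        coef, *_ = np.linalg.lstsq(A, U, rcond=None)         # shape (4, 3): row 0 = offsets
        G_fit = coef[1:].T                                   # fitted displacement gradient

        F = np.eye(3) + G_fit
        E = 0.5 * (F.T @ F - np.eye(3))                      # Green-Lagrange strain tensor
        print(np.round(E, 4))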

  9. Theory of the dynamical thermal conductivity of metals

    NASA Astrophysics Data System (ADS)

    Bhalla, Pankaj; Kumar, Pradeep; Das, Nabyendu; Singh, Navinder

    2016-09-01

    Mori's projection method, known as the memory function method, is an important theoretical formalism to study various transport coefficients. In the present work, we calculate the dynamical thermal conductivity in the case of metals using the memory function formalism. We introduce thermal memory functions for the first time and discuss the behavior of thermal conductivity both in the zero frequency limit and at nonzero frequencies. We compare our results for the zero frequency case with the results obtained by the Bloch-Boltzmann kinetic approach and find that both approaches agree with each other. Motivated by some recent experimental advancements, we obtain several new results for the ac or dynamical thermal conductivity.

  10. Recovery and Determination of Adsorbed Technetium on Savannah River Site Charcoal Stack Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lahoda, Kristy G.; Engelmann, Mark D.; Farmer, Orville T.

    2008-03-01

    Experimental results are provided for the sample analyses for technetium (Tc) in charcoal samples placed in-line with a Savannah River Site (SRS) processing stack effluent stream as a part of an environmental surveillance program. The method for Tc removal from charcoal was based on that originally developed with high purity charcoal. Presented is the process that allowed for the quantitative analysis of 99Tc in SRS charcoal stack samples with and without 97Tc as a tracer. The results obtained with the method using the 97Tc tracer quantitatively confirm the results obtained with no tracer added. All samples contain 99Tc at the pg g-1 level.

  11. Numerical Simulation of Ballistic Impact on Particulate Composite Target using Discrete Element Method: 1-D and 2-D Models

    NASA Astrophysics Data System (ADS)

    Nair, Rajesh P.; Lakshmana Rao, C.

    2014-01-01

    Ballistic impact (BI) is a study that deals with a projectile hitting a target and observing its effects in terms of deformation and fragmentation of the target. The Discrete Element Method (DEM) is a powerful numerical technique used to model solid and particulate media. Here, an attempt is made to simulate the BI process using DEM. 1-D DEM for BI is developed and depth of penetration (DOP) is obtained. The DOP is compared with results obtained from 2-D DEM. DEM results are found to match empirical results. Effects of strain rate sensitivity of the material response on DOP are also simulated.

  12. A novel line segment detection algorithm based on graph search

    NASA Astrophysics Data System (ADS)

    Zhao, Hong-dan; Liu, Guo-ying; Song, Xu

    2018-02-01

    To address the problem of extracting line segments from an image, a line segment detection method based on a graph search algorithm is proposed. After obtaining the edge detection result of the image, candidate straight line segments are obtained in four directions. The adjacency relationships of the candidate straight line segments are described by a graph model, based on which a depth-first search is employed to determine which adjacent line segments need to be merged. Finally, the least squares method is used to fit the detected straight lines. The comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
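
    The merge-and-fit stage can be sketched as follows: candidate segments whose endpoints are close are linked in a graph, a depth-first search collects each connected group, and the merged point set is fitted with a least-squares line. The toy segments and distance threshold below are illustrative, not the paper's parameters.

        import numpy as np

        # Hedged sketch of the merge step: adjacency graph over candidate segments,
        # depth-first search for connected groups, least-squares line fit per group.

        segments = [np.array([[0, 0], [5, 1]]),        # each segment: two endpoints (x, y)
                    np.array([[5, 1], [10, 2]]),
                    np.array([[20, 5], [25, 5]])]

        def adjacent(s1, s2, tol=1.5):
            """Two segments are adjacent if any pair of their endpoints is within tol."""
            return min(np.linalg.norm(p - q) for p in s1 for q in s2) < tol

        n = len(segments)
        graph = {i: [j for j in range(n) if j != i and adjacent(segments[i], segments[j])]
                 for i in range(n)}

        visited, groups = set(), []
        for start in range(n):                          # depth-first search over the graph
            if start in visited:
                continue
            stack, comp = [start], []
            while stack:
                node = stack.pop()
                if node in visited:
                    continue
                visited.add(node)
                comp.append(node)
                stack.extend(graph[node])
            groups.append(comp)

        for comp in groups:                             # least-squares fit of each merged group
            pts = np.vstack([segments[i] for i in comp])
            slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)
            print(f"segments {comp}: y = {slope:.3f} x + {intercept:.3f}")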

  13. Advances in ultrasonic testing of austenitic stainless steel welds. Towards a 3D description of the material including attenuation and optimisation by inversion

    NASA Astrophysics Data System (ADS)

    Moysan, J.; Gueudré, C.; Ploix, M.-A.; Corneloup, G.; Guy, Ph.; Guerjouma, R. El; Chassignole, B.

    In the case of multi-pass welds, the material is very difficult to describe due to its anisotropic and heterogeneous properties. Anisotropy results from the metal solidification and is correlated with the grain orientation. A precise description of the material is one of the key points for obtaining reliable results with wave propagation codes. A first advance is the MINA model, which predicts the grain orientations in multi-pass 316-L steel welds. For flat position welding, good predictions of the grain orientations were obtained using 2D modelling. In the case of positional welding, the resulting grain structure may be oriented in 3D. We indicate how the MINA model can be improved for a 3D description. A second advance is a good quantification of the attenuation. Precise measurements are obtained using the plane wave angular spectrum method together with the computation of the transmission coefficients for triclinic material. With these first two advances, the third one is now possible: developing an inverse method to obtain the material description through ultrasonic measurements at different positions.

  14. Laser ultrasonics for measurements of high-temperature elastic properties and internal temperature distribution

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takahiro; Nagata, Yasuaki; Nose, Tetsuro; Kawashima, Katsuhiro

    2001-06-01

    We show two kinds of demonstrations using a laser ultrasonic method. First, we present the results of Young's modulus of ceramics at temperatures above 1600 °C. Second, we introduce the method to determine the internal temperature distribution of a hot steel plate with errors of less than 3%. We compare the results obtained by this laser ultrasonic method with conventional contact techniques to show the validity of this method.

  15. Extension of electronic speckle correlation interferometry to large deformations

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Sciammarella, Federico M.

    1998-07-01

    The process of fringe formation under simultaneous illumination in two orthogonal directions is analyzed. Procedures to extend the applicability of this technique to large deformation and high density of fringes are introduced. The proposed techniques are applied to a number of technical problems. Good agreement is obtained when the experimental results are compared with results obtained by other methods.

  16. Determination of carotid disease with the application of STFT and CWT methods.

    PubMed

    Hardalaç, Firat; Yildirim, Hanefi; Serhatlioğlu, Selami

    2007-06-01

    In this study, Doppler signals were recorded from the output of the carotid arteries of 40 subjects and transferred to a personal computer (PC) using a 16-bit sound card. Doppler difference frequencies were recorded from each of the subjects and then analyzed using the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT) methods to obtain their sonograms. These sonograms were then used to determine the relationships of the applied methods with the medical conditions. The sonograms obtained by the CWT method gave better spectral resolution than those of the STFT method. The sonograms of the CWT method offer a clear envelope and better imaging, so that the measurement of blood flow and brain pressure can be made more accurately. Receiver operating characteristic (ROC) analysis was also conducted, and the estimation performance of the spectral resolution for the STFT and CWT was obtained. The STFT showed an 80.45% success rate for the spectral resolution, while the CWT showed an 89.90% success rate.
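
    The two analyses can be sketched on a synthetic test signal standing in for the recorded Doppler difference-frequency signal: an STFT sonogram via scipy's spectrogram, and a simple continuous wavelet transform implemented directly with complex Morlet wavelets and convolution. All parameters are illustrative.

        import numpy as np
        from scipy.signal import spectrogram

        # Hedged sketch: STFT sonogram via scipy, plus a hand-rolled Morlet CWT.
        # The test signal is a synthetic rising tone, not recorded Doppler data.

        fs = 8000.0
        t = np.arange(0, 2.0, 1.0 / fs)
        sig = np.sin(2 * np.pi * (300.0 + 400.0 * t) * t)        # frequency-rising test tone

        # --- STFT sonogram ---
        f_stft, t_stft, Sxx = spectrogram(sig, fs=fs, nperseg=256, noverlap=192)

        # --- Morlet CWT sonogram ---
        def morlet_cwt(x, fs, freqs, w0=6.0):
            """CWT magnitude using complex Morlet wavelets, one row per analysed frequency."""
            out = np.empty((len(freqs), len(x)))
            for i, f in enumerate(freqs):
                s = w0 * fs / (2 * np.pi * f)                    # scale for centre frequency f
                n = int(8 * s) | 1                               # odd wavelet length
                u = (np.arange(n) - n // 2) / s
                psi = np.exp(1j * w0 * u) * np.exp(-0.5 * u**2) / np.sqrt(s)
                out[i] = np.abs(np.convolve(x, psi, mode="same"))
            return out

        f_cwt = np.linspace(100.0, 1500.0, 60)
        Wx = morlet_cwt(sig, fs, f_cwt)

        print("STFT sonogram:", Sxx.shape, " CWT sonogram:", Wx.shape)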

  17. A continuous spectrophotometric assay method for peptidylarginine deiminase type 4 activity.

    PubMed

    Liao, Ya-Fan; Hsieh, Hui-Chieh; Liu, Guang-Yaw; Hung, Hui-Chih

    2005-12-15

    A simple, continuous spectrophotometric assay for peptidylarginine deiminase (PAD) is described. Deimination of peptidylarginine results in the formation of peptidylcitrulline and ammonia. The ammonia released during peptidylarginine hydrolysis is coupled to the glutamate-dehydrogenase-catalyzed reductive amination of alpha-ketoglutarate to glutamate and reduced nicotinamide adenine dinucleotide (NADH) oxidation. The disappearance of absorbance at 340nm due to NADH oxidation is continuously measured. The specific activity obtained by this new protocol for highly purified human PAD is comparable to that obtained by a commonly used colorimetric procedure, which measures the ureido group of peptidylcitrulline by coupling with diacetyl monoxime. The present continuous spectrophotometric method is highly sensitive and accurate and is thus suitable for enzyme kinetic analysis of PAD. The Ca(2+) concentration for half-maximal activity of PAD obtained by this method is comparable to that previously obtained by the colorimetric procedure.
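
    The conversion from the measured absorbance slope to enzyme activity is straightforward Beer-Lambert arithmetic using the standard NADH molar absorptivity of 6220 M^-1 cm^-1 at 340 nm; the cuvette volume, path length, enzyme volume and example slope below are illustrative assumptions.

        # Hedged arithmetic sketch of converting the 340 nm absorbance slope into PAD
        # activity.  The NADH molar absorptivity is the standard literature value; the
        # volumes, path length and example slope are illustrative numbers.

        EPS_NADH = 6220.0        # M^-1 cm^-1 at 340 nm
        PATH_CM = 1.0            # cuvette path length (cm)
        V_TOTAL_ML = 1.0         # reaction volume (mL)
        V_ENZYME_ML = 0.02       # volume of enzyme solution added (mL)

        def pad_activity_u_per_ml(dA340_per_min):
            """One unit (U) taken here as 1 umol NADH oxidised per minute."""
            rate_M_per_min = dA340_per_min / (EPS_NADH * PATH_CM)      # Beer-Lambert law
            rate_umol_per_min = rate_M_per_min * (V_TOTAL_ML / 1000.0) * 1e6
            return rate_umol_per_min / V_ENZYME_ML                     # U per mL of enzyme

        print(f"{pad_activity_u_per_ml(0.050):.3f} U/mL")   # for a slope of -0.050 A/min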

  18. Monitoring of rock glacier dynamics by multi-temporal UAV images

    NASA Astrophysics Data System (ADS)

    Morra di Cella, Umberto; Pogliotti, Paolo; Diotri, Fabrizio; Cremonese, Edoardo; Filippa, Gianluca; Galvagno, Marta

    2015-04-01

    In recent years, several steps forward have been made in the understanding of rock glacier dynamics, mainly because of their potential evolution into rapid mass movement phenomena. Monitoring the surface movement of creeping mountain permafrost is important for understanding the potential effect of ongoing climate change on such landforms. This study presents the reconstruction of two years of surface movements and DEM changes obtained by multi-temporal analysis of UAV images (acquired by a SenseFly Swinglet CAM drone). The movement rates obtained by photogrammetry are compared to those obtained by repeated differential GNSS campaigns on almost fifty points distributed on the rock glacier. The results reveal very good agreement between the velocities obtained by the two methods and the vertical displacements at fixed points. Strengths, weaknesses and practical considerations of this method are discussed. Such a method is very promising, mainly for remote regions with difficult access.

  19. Investigating a hybrid perturbation-Galerkin technique using computer algebra

    NASA Technical Reports Server (NTRS)

    Andersen, Carl M.; Geer, James F.

    1988-01-01

    A two-step hybrid perturbation-Galerkin method is presented for the solution of a variety of differential equations type problems which involve a scalar parameter. The resulting (approximate) solution has the form of a sum where each term consists of the product of two functions. The first function is a function of the independent field variable(s) x, and the second is a function of the parameter lambda. In step one the functions of x are determined by forming a perturbation expansion in lambda. In step two the functions of lambda are determined through the use of the classical Bubnov-Gelerkin method. The resulting hybrid method has the potential of overcoming some of the drawbacks of the perturbation and Bubnov-Galerkin methods applied separately, while combining some of the good features of each. In particular, the results can be useful well beyond the radius of convergence associated with the perturbation expansion. The hybrid method is applied with the aid of computer algebra to a simple two-point boundary value problem where the radius of convergence is finite and to a quantum eigenvalue problem where the radius of convergence is zero. For both problems the hybrid method apparently converges for an infinite range of the parameter lambda. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the applicability of the hybrid method to broader problem areas is discussed.

  20. Comparison of phase velocities from array measurements of Rayleigh waves associated with microtremor and results calculated from borehole shear-wave velocity profiles

    USGS Publications Warehouse

    Liu, Hsi-Ping; Boore, David M.; Joyner, William B.; Oppenheimer, David H.; Warrick, Richard E.; Zhang, Wenbo; Hamilton, John C.; Brown, Leo T.

    2000-01-01

    Shear-wave velocities (VS) are widely used for earthquake ground-motion site characterization. VS data are now largely obtained using borehole methods. Drilling holes, however, is expensive. Nonintrusive surface methods are inexpensive for obtaining VS information, but not many comparisons with direct borehole measurements have been published. Because different assumptions are used in data interpretation of each surface method and public safety is involved in site characterization for engineering structures, it is important to validate the surface methods by additional comparisons with borehole measurements. We compare results obtained from a particular surface method (array measurement of surface waves associated with microtremor) with results obtained from borehole methods. Using a 10-element nested-triangular array of 100-m aperture, we measured surface-wave phase velocities at two California sites, Garner Valley near Hemet and Hollister Municipal Airport. The Garner Valley site is located at an ancient lake bed where water-saturated sediment overlies decomposed granite on top of granite bedrock. Our array was deployed at a location where seismic velocities had been determined to a depth of 500 m by borehole methods. At Hollister, where the near-surface sediment consists of clay, sand, and gravel, we determined phase velocities using an array located close to a 60-m deep borehole where downhole velocity logs already exist. Because we want to assess the measurements uncomplicated by uncertainties introduced by the inversion process, we compare our phase-velocity results with the borehole VS depth profile by calculating fundamental-mode Rayleigh-wave phase velocities from an earth model constructed from the borehole data. For wavelengths less than ~2 times of the array aperture at Garner Valley, phase-velocity results from array measurements agree with the calculated Rayleigh-wave velocities to better than 11%. Measurement errors become larger for wavelengths 2 times greater than the array aperture. At Hollister, the measured phase velocity at 3.9 Hz (near the upper edge of the microtremor frequency band) is within 20% of the calculated Rayleigh-wave velocity. Because shear-wave velocity is the predominant factor controlling Rayleigh-wave phase velocities, the comparisons suggest that this nonintrusive method can provide VS information adequate for ground-motion estimation.

  1. Empirical evaluation of the market price of risk using the CIR model

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Torosantucci, L.; Uboldi, A.

    2007-03-01

    We describe a simple but effective method for the estimation of the market price of risk. The basic idea is to compare the results obtained by following two different approaches in the application of the Cox-Ingersoll-Ross (CIR) model. In the first case, we apply the non-linear least squares method to cross-sectional data (i.e., all rates of a single day). In the second case, we consider the short rate obtained by means of the first procedure as a proxy of the real market short rate. Starting from this new proxy, we evaluate the parameters of the CIR model by means of martingale estimation techniques. The estimate of the market price of risk is provided by comparing results obtained with these two techniques, since this approach makes it possible to isolate the market price of risk and evaluate, under the Local Expectations Hypothesis, the risk premium given by the market for different maturities. As a test case, we apply the method to data of the European Fixed Income Market.
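
    A minimal Python sketch of the first step described above, assuming the standard CIR zero-coupon bond pricing formula: one day's cross-section of yields is fitted by non-linear least squares to recover a proxy for the short rate and the CIR parameters. The maturities, observed yields, starting values and bounds below are illustrative assumptions, not data from the paper.

      # Sketch only: cross-sectional NLS fit of the CIR yield curve (step one above).
      import numpy as np
      from scipy.optimize import least_squares

      def cir_yield(tau, r0, kappa, theta, sigma):
          """Zero-coupon yield implied by the CIR model for maturities tau (years)."""
          h = np.sqrt(kappa**2 + 2.0 * sigma**2)
          e = np.exp(h * tau) - 1.0
          denom = 2.0 * h + (kappa + h) * e
          B = 2.0 * e / denom
          A = (2.0 * h * np.exp((kappa + h) * tau / 2.0) / denom) ** (2.0 * kappa * theta / sigma**2)
          price = A * np.exp(-B * r0)
          return -np.log(price) / tau

      # One day's cross-section: maturities and observed yields (assumed data).
      tau_obs = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10])
      y_obs = np.array([0.031, 0.032, 0.034, 0.037, 0.039, 0.042, 0.043, 0.044])

      def residuals(p):
          r0, kappa, theta, sigma = p
          return cir_yield(tau_obs, r0, kappa, theta, sigma) - y_obs

      fit = least_squares(residuals, x0=[0.03, 0.5, 0.05, 0.1],
                          bounds=([0, 1e-4, 1e-4, 1e-4], [1, 5, 1, 1]))
      r0, kappa, theta, sigma = fit.x
      print("fitted short-rate proxy r0 = %.4f" % r0)
      print("kappa = %.3f, theta = %.4f, sigma = %.3f" % (kappa, theta, sigma))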

  2. Research of obtaining TiO2 by sol-gel method using titanium isopropoxide TIP and tetra-n-butyl orthotitanate TNB

    NASA Astrophysics Data System (ADS)

    Gómez de Salazar, J. M.; Nutescu Duduman, C.; Juárez Gonzalez, M.; Palamarciuc, I.; Barrena Pérez, M. I.; Carcea, I.

    2016-08-01

    Titanium dioxide crystallises in three polymorphs: anatase, rutile and brookite. Rutile is the most stable of the TiO2 polymorphs. In this paper we concentrate on obtaining rutile and anatase, both used in various applications. The chosen method is sol-gel, which is a reliable method for obtaining titanium oxides. We prepared titanium dioxide using titanium isopropoxide (TIP), with the chemical formula C12H28O4Ti, and tetra-n-butyl orthotitanate (TNB), with the chemical formula C16H36O4Ti. The experiments were carried out in order to compare the results of samples prepared under similar reaction conditions but with different precursors, thus concluding which precursor gives the best results. Using different analysis techniques such as X-ray Diffraction (XRD), Fourier Transform Infrared Spectroscopy (FTIR), Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM) and Thermogravimetric Analysis (TGA), we characterised the samples morphologically and structurally.

  3. Finite Element Creep Damage Analyses and Life Prediction of P91 Pipe Containing Local Wall Thinning Defect

    NASA Astrophysics Data System (ADS)

    Xue, Jilin; Zhou, Changyu

    2016-03-01

    Creep continuum damage finite element (FE) analyses were performed for a P91 steel pipe containing a local wall thinning (LWT) defect subjected to monotonic internal pressure, monotonic bending moment and combined internal pressure and bending moment, using the orthogonal experimental design method. The creep damage lives of the pipe containing the LWT defect under different load conditions were obtained. Then, creep damage life formulas were regressed based on the creep damage life results from the FE method. At the same time, a skeletal point rupture stress was found and used for life prediction, which was compared with the creep damage lives obtained by the continuum damage analyses. From the results, the failure lives of the pipe containing the LWT defect can be obtained accurately by using the skeletal point rupture stress method. Finally, the influence of the LWT defect geometry was analysed, which indicated that the relative defect depth was the most significant factor for the creep damage lives of the pipe containing the LWT defect.

  4. Evaluation of retrieval methods of daytime convective boundary layer height based on lidar data

    NASA Astrophysics Data System (ADS)

    Li, Hong; Yang, Yi; Hu, Xiao-Ming; Huang, Zhongwei; Wang, Guoyin; Zhang, Beidou; Zhang, Tiejun

    2017-04-01

    The atmospheric boundary layer height is a basic parameter in describing the structure of the lower atmosphere. Because of their high temporal resolution, ground-based lidar data are widely used to determine the daytime convective boundary layer height (CBLH), but the currently available retrieval methods have their advantages and drawbacks. In this paper, four methods of retrieving the CBLH (i.e., the gradient method, the idealized backscatter method, and two forms of the wavelet covariance transform method) from lidar normalized relative backscatter are evaluated, using two artificial cases (an idealized profile and a case similar to a real profile), to test their stability and accuracy. The results show that the gradient method is suitable for high signal-to-noise ratio conditions. The idealized backscatter method is less sensitive to the first estimate of the CBLH; however, it is computationally expensive. The results obtained from the two forms of the wavelet covariance transform method are influenced by the selection of the initial input value of the wavelet amplitude. A further sensitivity analysis using real profiles under different orders of magnitude of background counts shows that, when different initial input values are set, the idealized backscatter method always yields a consistent CBLH. For the two wavelet methods, different CBLHs are obtained as the wavelet amplitude increases when noise is significant. Finally, the CBLHs measured by the three lidar-based methods are evaluated against those derived from L-band soundings. The boundary layer heights from the two instruments coincide within ±200 m in most situations.
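
    The wavelet covariance transform mentioned above admits a compact illustration. The following Python sketch applies a Haar wavelet covariance transform to a synthetic backscatter profile and takes the height of the maximum as the CBLH; the profile, the noise level and the dilation value are illustrative assumptions, and the evaluated retrieval methods differ in detail from this minimal form.

      # Sketch only: Haar wavelet covariance transform for a CBLH estimate.
      import numpy as np

      def wavelet_covariance_transform(z, profile, a):
          """W(b) = (1/a) * integral of profile(z) * haar((z - b)/a) dz, as a Riemann sum."""
          dz = z[1] - z[0]
          w = np.zeros_like(z, dtype=float)
          for i, b in enumerate(z):
              lower = (z >= b - a / 2.0) & (z <= b)   # +1 branch of the Haar wavelet
              upper = (z > b) & (z <= b + a / 2.0)    # -1 branch
              w[i] = (profile[lower].sum() - profile[upper].sum()) * dz / a
          return w

      # Synthetic backscatter: well-mixed layer up to ~1200 m, then a sharp drop, plus noise.
      z = np.arange(100.0, 3000.0, 10.0)
      profile = 1.0 / (1.0 + np.exp((z - 1200.0) / 80.0)) + 0.05 * np.random.rand(z.size)

      a = 300.0                                  # wavelet dilation (m), an assumed choice
      w = wavelet_covariance_transform(z, profile, a)
      cblh = z[np.argmax(w)]
      print("estimated CBLH ~ %.0f m" % cblh)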

  5. Automatic cloud tracking applied to GOES and Meteosat observations

    NASA Technical Reports Server (NTRS)

    Endlich, R. M.; Wolf, D. E.

    1981-01-01

    An improved automatic processing method for the tracking of cloud motions as revealed by satellite imagery is presented and applications of the method to GOES observations of Hurricane Eloise and Meteosat water vapor and infrared data are presented. The method is shown to involve steps of picture smoothing, target selection and the calculation of cloud motion vectors by the matching of a group at a given time with its best likeness at a later time, or by a cross-correlation computation. Cloud motion computations can be made in as many as four separate layers simultaneously. For data of 4 and 8 km resolution in the eye of Hurricane Eloise, the automatic system is found to provide results comparable in accuracy and coverage to those obtained by NASA analysts using the Atmospheric and Oceanographic Information Processing System, with results obtained by the pattern recognition and cross correlation computations differing by only fractions of a pixel. For Meteosat water vapor data from the tropics and midlatitudes, the automatic motion computations are found to be reliable only in areas where the water vapor fields contained small-scale structure, although excellent results are obtained using Meteosat IR data in the same regions. The automatic method thus appears to be competitive in accuracy and coverage with motion determination by human analysts.
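
    The cross-correlation matching step lends itself to a short illustration. The Python sketch below (an assumed, simplified stand-in for the operational system) matches a small target box from one image against displacements in a search window of the next image and converts the best-matching displacement into a motion vector; the synthetic images, box and search sizes, and the 30-minute interval are illustrative, while the 4-km pixel size echoes the resolution quoted above.

      # Sketch only: cloud motion vector from a cross-correlation match of two images.
      import numpy as np

      def best_match(img1, img2, y0, x0, box=16, search=8):
          """Return (dy, dx) maximizing the correlation of the box around (y0, x0)."""
          tpl = img1[y0:y0 + box, x0:x0 + box].ravel()
          best, best_dydx = -2.0, (0, 0)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  cand = img2[y0 + dy:y0 + dy + box, x0 + dx:x0 + dx + box].ravel()
                  r = np.corrcoef(tpl, cand)[0, 1]
                  if r > best:
                      best, best_dydx = r, (dy, dx)
          return best_dydx, best

      # Synthetic "cloud" field and a copy shifted by (3, 5) pixels.
      rng = np.random.default_rng(0)
      img1 = rng.random((64, 64))
      img2 = np.roll(img1, shift=(3, 5), axis=(0, 1))

      (dy, dx), r = best_match(img1, img2, y0=24, x0=24)
      dt_minutes = 30.0                      # image interval, assumed
      pixel_km = 4.0                         # nominal 4-km resolution quoted above
      print("displacement: dy=%d, dx=%d px, r=%.2f" % (dy, dx, r))
      print("speed ~ %.1f km/h" % (np.hypot(dy, dx) * pixel_km / (dt_minutes / 60.0)))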

  6. Convective heat transfer for a gaseous slip flow in micropipe and parallel-plate microchannel with uniform wall heat flux: effect of axial heat conduction

    NASA Astrophysics Data System (ADS)

    Haddout, Y.; Essaghir, E.; Oubarra, A.; Lahjomri, J.

    2017-12-01

    Thermally developing laminar slip flow through a micropipe and a parallel plate microchannel, with axial heat conduction and uniform wall heat flux, is studied analytically by using a powerful method of self-adjoint formalism. This method results from a decomposition of the elliptic energy equation into a system of two first-order partial differential equations. The advantage of this method over other methods resides in the fact that the decomposition procedure leads to a self-adjoint problem, although the initial problem is apparently not a self-adjoint one. The solution is an extension of prior studies and considers first-order slip boundary conditions at the fluid-wall interface. The analytical expressions for the developing temperature and local Nusselt number in the thermal entrance region are obtained in the general case. Therefore, the solution obtained could be extended easily to any hydrodynamically developed flow and arbitrary heat flux distribution. The analytical results obtained are compared for selected simplified cases with available numerical calculations, and the two agree. The results show that the heat transfer characteristics of flow in the thermal entrance region are strongly influenced by the axial heat conduction and rarefaction effects, which are characterized by the Péclet and Knudsen numbers, respectively.

  7. Study on the wind field and pollutant dispersion in street canyons using a stable numerical method.

    PubMed

    Xia, Ji-Yang; Leung, Dennis Y C

    2005-01-01

    A stable finite element method for the time-dependent Navier-Stokes equations was used for studying the wind flow and pollutant dispersion within street canyons. A three-step fractional method was used to solve the velocity field and the pressure field separately from the governing equations. The Streamline Upwind Petrov-Galerkin (SUPG) method was used to obtain stable numerical results. Numerical oscillation was minimized and satisfactory results can be obtained for flows at high Reynolds numbers. Simulating the flow over a square cylinder within a wide range of Reynolds numbers validates the wind field model. The Strouhal numbers obtained from the numerical simulation were in good agreement with those obtained from experiment. The wind field model developed in the present study is applied to simulate more complex flow phenomena in street canyons with two different building configurations. The results indicated that the flow at the rooftop of buildings should not be assumed to be parallel to the ground, as some numerical modelers have done. A counter-clockwise rotating vortex may be found in street canyons with an inflow from left to right. In addition, increasing building height can increase velocity fluctuations in the street canyon under certain circumstances, which facilitate pollutant dispersion. At high Reynolds numbers, the flow regimes in street canyons do not change with inflow velocity.

  8. Convective heat transfer for a gaseous slip flow in micropipe and parallel-plate microchannel with uniform wall heat flux: effect of axial heat conduction

    NASA Astrophysics Data System (ADS)

    Haddout, Y.; Essaghir, E.; Oubarra, A.; Lahjomri, J.

    2018-06-01

    Thermally developing laminar slip flow through a micropipe and a parallel plate microchannel, with axial heat conduction and uniform wall heat flux, is studied analytically by using a powerful method of self-adjoint formalism. This method results from a decomposition of the elliptic energy equation into a system of two first-order partial differential equations. The advantage of this method over other methods resides in the fact that the decomposition procedure leads to a self-adjoint problem, although the initial problem is apparently not a self-adjoint one. The solution is an extension of prior studies and considers first-order slip boundary conditions at the fluid-wall interface. The analytical expressions for the developing temperature and local Nusselt number in the thermal entrance region are obtained in the general case. Therefore, the solution obtained could be extended easily to any hydrodynamically developed flow and arbitrary heat flux distribution. The analytical results obtained are compared for selected simplified cases with available numerical calculations, and the two agree. The results show that the heat transfer characteristics of flow in the thermal entrance region are strongly influenced by the axial heat conduction and rarefaction effects, which are characterized by the Péclet and Knudsen numbers, respectively.

  9. Vibration analysis based on electronic stroboscopic speckle-shearing pattern interferometry

    NASA Astrophysics Data System (ADS)

    Jia, Dagong; Yu, Changsong; Xu, Tianhua; Jin, Chao; Zhang, Hongxia; Jing, Wencai; Zhang, Yimo

    2008-12-01

    In this paper, an electronic speckle-shearing pattern interferometer with a pulsed laser and a pulse frequency controller is fabricated. The principle of measuring vibration in an object using an electronic stroboscopic speckle-shearing pattern interferometer is analyzed. Using a metal plate clamped at its edge as the experimental specimen, shear interferograms are obtained at two experimental frequencies, 100 Hz and 200 Hz. At the same time, the vibration of this metal plate under the same experimental conditions is measured using the time-average method in order to test the performance of this electronic stroboscopic speckle-shearing pattern interferometer. The results indicate that the fringes of the shear interferogram become denser as the experimental frequency increases. Comparing the fringe pattern obtained by the stroboscopic method with that obtained by the time-average method, the shearing interferogram of the stroboscopic method is clearer. In addition, both the time-average method and the stroboscopic method are suited to qualitative analysis of the vibration of the object. Moreover, the stroboscopic method is well adapted to quantitative vibration analysis.

  10. Solution of second order quasi-linear boundary value problems by a wavelet method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lei; Zhou, Youhe; Wang, Jizeng, E-mail: jzwang@lzu.edu.cn

    2015-03-10

    A wavelet Galerkin method based on expansions of Coiflet-like scaling function bases is applied to solve second-order quasi-linear boundary value problems, which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one on nonlinear heat conduction and the other on bending of elastic beams. Numerical results are obtained by the proposed wavelet method. By comparing with relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and the rate of convergence can even reach orders of 5.8.

  11. The use of National Weather Service Data to Compute the Dose to the MEOI.

    PubMed

    Vickers, Linda

    2018-05-01

    The Turner method is the "benchmark method" for computing the stability class that is used to compute the X/Q (s m). The Turner method should be used to ascertain the validity of X/Q results determined by other methods. This paper used site-specific meteorological data obtained from the National Weather Service. The Turner method described herein is simple, quick, accurate, and transparent because all of the data, calculations, and results are visible for verification and validation with published literature.

  12. Comparative Study on Two Different Methods for Determination of Hydraulic Conductivity of HeLa Cells During Freezing.

    PubMed

    Li, Lei; Gao, Cai; Zhao, Gang; Shu, Zhiquan; Cao, Yunxia; Gao, Dayong

    2016-12-01

    The measurement of hydraulic conductivity of the cell membrane is very important for optimizing the protocol of cryopreservation and cryosurgery. There are two different methods using differential scanning calorimetry (DSC) to measure the freezing response of cells and tissues. Devireddy et al. presented the slow-fast-slow (SFS) cooling method, in which the difference of the heat release during the freezing process between the osmotically active and inactive cells is used to obtain the cell membrane hydraulic conductivity and activation energy. Luo et al. simplified the procedure and introduced the single-slow (SS) cooling protocol, which requires only one cooling process although different cytocrits are required for the determination of the membrane transport properties. To the best of our knowledge, there is still a lack of comparison of experimental processes and requirements for experimental conditions between these two methods. This study made a systematic comparison between these two methods from the aforementioned aspects in detail. The SFS and SS cooling methods mentioned earlier were utilized to obtain the reference hydraulic conductivity (L_pg) and activation energy (E_Lp) of HeLa cells by fitting the model to DSC data. With the SFS method, it was determined that L_pg = 0.10 μm/(min·atm) and E_Lp = 22.9 kcal/mol; whereas the results obtained by the SS cooling method showed that L_pg = 0.10 μm/(min·atm) and E_Lp = 23.6 kcal/mol. The results indicated that the values of the water transport parameters measured by two methods were comparable. In other words, the two parameters can be obtained by comparing the heat releases between two slow cooling processes of the same sample according to the SFS method. However, the SS method required analyzing heat releases of samples with different cytocrits. Thus, more experimental time was required.

  13. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse reflection white plate method and the integrating sphere standard luminance source method for calibrating the luminance parameter. The paper compares the calibration results of these two methods through principle analysis and experimental verification. After both methods were used to calibrate the same radiation luminance meter, the data obtained verify that the results of the two methods are both reliable. The results show that the displayed value using the standard white plate method has smaller errors and better reproducibility. However, the standard luminance source method is more convenient and suitable for on-site calibration. Moreover, the standard luminance source method has a wider range and can test the linear performance of the instruments.

  14. An Aggregated Method for Determining Railway Defects and Obstacle Parameters

    NASA Astrophysics Data System (ADS)

    Loktev, Daniil; Loktev, Alexey; Stepanov, Roman; Pevzner, Viktor; Alenov, Kanat

    2018-03-01

    A method combining image blur analysis and stereo vision algorithms to determine the distance to objects (including external defects of railway tracks) and the speed of moving obstacles is proposed. To estimate the deviation of the distance as a function of blur, a statistical approach and logarithmic, exponential and linear standard functions are used. The statistical approach includes the least-squares method and the method of least modules. The accuracy of determining the distance to the object, its speed and its direction of movement is obtained. The paper develops a method of determining distances to objects by analyzing a series of images and assessing depth from defocus, aggregated with stereoscopic vision. This method is based on the physical dependence of the blur in the obtained image on the distance to the object and on the focal length or aperture of the lens. In the calculation of the blur spot diameter it is assumed that blur spreads from a point equally in all directions. According to the proposed approach, it is possible to determine the distance to the studied object and its blur by analyzing a series of images obtained using the video detector with different settings. The article proposes and scientifically substantiates new methods, and improvements to existing ones, for detecting the parameters of static and moving monitored objects, and also compares the results of the various methods and of the experiments. It is shown that the aggregated method gives the best approximation to the real distances.
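
    As a rough illustration of the aggregation idea (not the paper's algorithm), the Python sketch below fits a logarithmic distance-versus-blur model by least squares, as one of the standard functions mentioned above, and fuses the resulting defocus estimate with a stereo estimate Z = f*B/d by inverse-variance weighting; the calibration pairs, camera constants and variances are all assumed.

      # Sketch only: defocus-based distance model aggregated with a stereo estimate.
      import numpy as np

      # Calibration: blur-spot diameter (px) measured for targets at known distances (m), assumed.
      blur_cal = np.array([12.0, 8.5, 6.2, 4.8, 3.9, 3.3])
      dist_cal = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])

      # Least-squares fit of dist = a + b * ln(blur).
      A = np.column_stack([np.ones_like(blur_cal), np.log(blur_cal)])
      (a, b), *_ = np.linalg.lstsq(A, dist_cal, rcond=None)

      def distance_from_blur(blur_px):
          return a + b * np.log(blur_px)

      def distance_from_stereo(disparity_px, focal_px=1400.0, baseline_m=0.4):
          return focal_px * baseline_m / disparity_px          # Z = f * B / d

      # Aggregate the two estimates with assumed measurement variances.
      z_blur, var_blur = distance_from_blur(5.5), 4.0
      z_stereo, var_stereo = distance_from_stereo(18.0), 1.0
      w1, w2 = 1.0 / var_blur, 1.0 / var_stereo
      z_fused = (w1 * z_blur + w2 * z_stereo) / (w1 + w2)
      print("defocus: %.1f m, stereo: %.1f m, fused: %.1f m" % (z_blur, z_stereo, z_fused))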

  15. Quantification and Statistical Analysis Methods for Vessel Wall Components from Stained Images with Masson's Trichrome

    PubMed Central

    Hernández-Morera, Pablo; Castaño-González, Irene; Travieso-González, Carlos M.; Mompeó-Corredera, Blanca; Ortega-Santana, Francisco

    2016-01-01

    Purpose: To develop a digital image processing method to quantify structural components (smooth muscle fibers and extracellular matrix) in the vessel wall stained with Masson’s trichrome, and a statistical method suitable for small sample sizes to analyze the results previously obtained. Methods: The quantification method comprises two stages. The pre-processing stage improves tissue image appearance and the vessel wall area is delimited. In the feature extraction stage, the vessel wall components are segmented by grouping pixels with a similar color. The area of each component is calculated by normalizing the number of pixels of each group by the vessel wall area. Statistical analyses are implemented by permutation tests, based on resampling without replacement from the set of the observed data to obtain a sampling distribution of an estimator. The implementation can be parallelized on a multicore machine to reduce execution time. Results: The methods have been tested on 48 vessel wall samples of the internal saphenous vein stained with Masson's trichrome. The results show that the segmented areas are consistent with the perception of a team of doctors and demonstrate good correlation between the expert judgments and the measured parameters for evaluating vessel wall changes. Conclusion: The proposed methodology offers a powerful tool to quantify some components of the vessel wall. It is more objective, sensitive and accurate than the biochemical and qualitative methods traditionally used. The permutation tests are suitable statistical techniques to analyze the numerical measurements obtained when the underlying assumptions of the other statistical techniques are not met. PMID:26761643
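
    A permutation test of the kind described above is easy to sketch. The following Python example (an assumed illustration, not the authors' implementation) computes a two-sided p-value for the difference in means of two small groups by resampling without replacement; the group values are made up.

      # Sketch only: two-sample permutation test on the difference of means.
      import numpy as np

      def permutation_test(x, y, n_perm=10000, seed=0):
          """Two-sided p-value for the difference in means under label permutation."""
          rng = np.random.default_rng(seed)
          pooled = np.concatenate([x, y])
          observed = x.mean() - y.mean()
          count = 0
          for _ in range(n_perm):
              perm = rng.permutation(pooled)           # resampling without replacement
              diff = perm[:len(x)].mean() - perm[len(x):].mean()
              if abs(diff) >= abs(observed):
                  count += 1
          return observed, (count + 1) / (n_perm + 1)

      # Example: smooth-muscle area fraction (%) in two small groups (made-up numbers).
      group_a = np.array([42.1, 38.7, 45.0, 40.3, 43.8, 39.9])
      group_b = np.array([35.2, 33.8, 37.1, 36.4, 34.9, 32.7])
      obs, p = permutation_test(group_a, group_b)
      print("observed difference = %.2f, permutation p-value = %.4f" % (obs, p))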

  16. Water stress assessment of cork oak leaves and maritime pine needles based on LIF spectra

    NASA Astrophysics Data System (ADS)

    Lavrov, A.; Utkin, A. B.; Marques da Silva, J.; Vilar, Rui; Santos, N. M.; Alves, B.

    2012-02-01

    The aim of the present work was to develop a method for the remote assessment of the impact of fire and drought stress on Mediterranean forest species such as the cork oak (Quercus suber) and maritime pine (Pinus pinaster). The proposed method is based on laser induced fluorescence (LIF): chlorophyll fluorescence is remotely excited by frequency-doubled YAG:Nd laser radiation pulses and collected and analyzed using a telescope and a gated high sensitivity spectrometer. The plant health criterion used is based on the I685/I740 ratio value, calculated from the fluorescence spectra. The method was benchmarked by comparing the results achieved with those obtained by conventional, continuous excitation fluorometric method and water loss gravimetric measurements. The results obtained with both methods show a strong correlation between them and with the weight-loss measurements, showing that the proposed method is suitable for fire and drought impact assessment on these two species.

  17. A Compact Immunoassay Platform Based on a Multicapillary Glass Plate

    PubMed Central

    Xue, Shuhua; Zeng, Hulie; Yang, Jianmin; Nakajima, Hizuru; Uchiyama, Katsumi

    2014-01-01

    A highly sensitive, rapid immunoassay performed in the multi-channels of a micro-well array consisting of a multicapillary glass plate (MCP) and a polydimethylsiloxane (PDMS) slide is described. The micro-dimensions and large surface area of the MCP permitted the diffusion distance to be decreased and the reaction efficiency to be increased. To confirm the concept of the method, human immunoglobulin A (h-IgA) was measured using both the proposed immunoassay system and the traditional 96-well plate method. The proposed method resulted in a 1/5-fold decrease of immunoassay time, and a 1/56-fold cut in reagent consumption with a 0.05 ng/mL of limit of detection (LOD) for IgA. The method was also applied to saliva samples obtained from healthy volunteers. The results correlated well to those obtained by the 96-well plate method. The method has the potential for use in disease diagnostic or on-site immunoassays. PMID:24859022

  18. Applications of IBSOM and ETEM for solving the nonlinear chains of atoms with long-range interactions

    NASA Astrophysics Data System (ADS)

    Foroutan, Mohammadreza; Zamanpour, Isa; Manafian, Jalil

    2017-10-01

    This paper presents a number of new solutions obtained for solving a complex nonlinear equation describing the dynamics of nonlinear chains of atoms via the improved Bernoulli sub-ODE method (IBSOM) and the extended trial equation method (ETEM). The proposed solutions are kink solitons, anti-kink solitons, soliton solutions, hyperbolic solutions, trigonometric solutions, and bell-shaped soliton solutions. Then our new results are compared with the well-known results. The methods used here are very simple and succinct and can also be applied to other nonlinear models. The balance number of these methods is not constant, contrary to other methods. The proposed methods also allow us to establish many new types of exact solutions. By utilizing the Maple software package, we show that all obtained solutions satisfy the conditions of the studied model. More importantly, the solutions found in this work can have significant applications in Hamilton's equations and generalized momentum where solitons are used for long-range interactions.

  19. Theoretical evaluation of accuracy in position and size of brain activity obtained by near-infrared topography

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji

    2004-06-01

    Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of the brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and the interval of measurement points for topographic imaging on the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method which improve the results of one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping of spatial sensitivity profiles indicates that the reconstruction method may be effective to improve the spatial resolution of a two-dimensional reconstruction of topographic image obtained with larger interval of measurement points. Near-infrared topography with the reconstruction method potentially obtains an accurate distribution of absorption change in the brain even if the size of absorption change is less than 10 mm.

  20. Application of P-wave Hybrid Theory to the Scattering of Electrons from He+ and Resonances in He and H ion

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.

    2012-01-01

    The P-wave hybrid theory of electron-hydrogen elastic scattering [Phys. Rev. A 85, 052708 (2012)] is applied to P-wave scattering from the He+ ion. In this method, both short-range and long-range correlations are included in the Schroedinger equation at the same time, by using a combination of a modified method of polarized orbitals and the optical potential formalism. The short-range correlation functions are of Hylleraas type. It is found that the phase shifts are not significantly affected by the modification of the target function by a method similar to the method of polarized orbitals and they are close to the phase shifts calculated earlier by Bhatia [Phys. Rev. A 69, 032714 (2004)]. This indicates that the correlation function is general enough to include the target distortion (polarization) in the presence of the incident electron. The important fact is that in the present calculation, to obtain similar results, only a 20-term correlation function is needed in the wave function, compared to the 220-term wave function required in the above-mentioned calculation. Results for the phase shifts, obtained in the present hybrid formalism, are rigorous lower bounds to the exact phase shifts. The lowest P-wave resonances in the He atom and the hydrogen ion have been calculated and compared with the results obtained using the Feshbach projection operator formalism [Phys. Rev. A, 11, 2018 (1975)]. It is concluded that accurate resonance parameters can be obtained by the present method, which has the advantage of including corrections due to neighboring resonances, bound states and the continuum in which these resonances are embedded.

  1. Anisotropic Resistivity Forward Modelling Using Automatic Generated Higher-order Finite Element Codes

    NASA Astrophysics Data System (ADS)

    Wang, W.; Liu, J.

    2016-12-01

    Forward modelling is the general way to obtain responses of geoelectrical structures. Field investigators might find it useful for planning surveys and choosing optimal electrode configurations with respect to their targets. During the past few decades much effort has been put into the development of numerical forward codes, such as the integral equation method, the finite difference method and the finite element method. Nowadays, most researchers prefer the finite element method (FEM) for its flexible meshing scheme, which can handle models with complex geometry. Resistivity modelling with commercial software such as ANSYS and COMSOL is convenient, but is like working with a black box. Modifying existing codes or developing new codes can take a long time. We present a new way to obtain resistivity forward modelling codes quickly, based on the commercial software FEPG (Finite element Program Generator). With just several demanding scripts, FEPG can generate a FORTRAN program framework which can easily be altered to suit our targets. By supposing the electric potential is quadratic in each element of a two-layer model, we obtain quite accurate results with errors of less than 1%, while errors of more than 5% can appear with linear FE codes. The anisotropic half-space model is intended to represent vertically distributed fractures. The measured apparent resistivities along the fractures are larger than those from the orthogonal direction, which is the opposite of the true resistivities. Interpretation could be misleading if this anisotropic paradox is ignored. The technique we used can produce scientific codes in a short time. The generated FORTRAN codes can reach accurate results through the higher-order assumption and can handle anisotropy to make better interpretations. The method we used could be extended easily to other domains where FE codes are needed.

  2. Comparison of five pretreatments for the production of fermentable sugars obtained from Pinus pseudostrobus L. wood

    PubMed Central

    Farías-Sánchez, Juan Carlos; López-Miranda, Javier; Castro-Montoya, Agustín Jaime; Saucedo-Luna, Jaime; Carrillo-Parra, Artemio; López-Albarrán, Pablo; Pineda-Pimentel, María Guadalupe; Rutiaga-Quiñones, José Guadalupe

    2015-01-01

    To benefit from the use of a waste product such as pine sawdust from a sawmill in Michoacán, Mexico, five different pretreatments for the production of reducing sugars by enzymatic hydrolysis were evaluated (sodium hydroxide, sulfuric acid, steam explosion, organosolv and a combined nitric acid/sodium hydroxide method). The main finding of the study was that the pretreatment with 6 % HNO3 and 1 % NaOH led to better yields than those obtained with the sodium hydroxide, dilute sulfuric acid, steam explosion, and organosolv pretreatments. Also, the HNO3 yields were maximized by the factorial method. With those results, the maximum concentration of reducing sugars found was 97.83 ± 1.59, obtained after pretreatment with 7.5 % HNO3 at 120 °C for 30 minutes, followed by 1 % NaOH at 90 °C for 30 minutes, at pH 4.5 for 168 hours with an enzyme load of 25 FPU/g of total carbohydrates. Comparing the results obtained by the authors with those reported in the literature, the combined method was found to be suitable for use in the exploitation of sawdust. PMID:26535036

  3. Comparison of five pretreatments for the production of fermentable sugars obtained from Pinus pseudostrobus L. wood.

    PubMed

    Farías-Sánchez, Juan Carlos; López-Miranda, Javier; Castro-Montoya, Agustín Jaime; Saucedo-Luna, Jaime; Carrillo-Parra, Artemio; López-Albarrán, Pablo; Pineda-Pimentel, María Guadalupe; Rutiaga-Quiñones, José Guadalupe

    2015-01-01

    To benefit from the use of a waste product such as pine sawdust from a sawmill in Michoacán, Mexico, five different pretreatments for the production of reducing sugars by enzymatic hydrolysis were evaluated (sodium hydroxide, sulfuric acid, steam explosion, organosolv and a combined nitric acid/sodium hydroxide method). The main finding of the study was that the pretreatment with 6 % HNO3 and 1 % NaOH led to better yields than those obtained with the sodium hydroxide, dilute sulfuric acid, steam explosion, and organosolv pretreatments. Also, the HNO3 yields were maximized by the factorial method. With those results, the maximum concentration of reducing sugars found was 97.83 ± 1.59, obtained after pretreatment with 7.5 % HNO3 at 120 °C for 30 minutes, followed by 1 % NaOH at 90 °C for 30 minutes, at pH 4.5 for 168 hours with an enzyme load of 25 FPU/g of total carbohydrates. Comparing the results obtained by the authors with those reported in the literature, the combined method was found to be suitable for use in the exploitation of sawdust.

  4. Online geometric calibration of cone-beam computed tomography for arbitrary imaging objects.

    PubMed

    Meng, Yuanzheng; Gong, Hui; Yang, Xiaoquan

    2013-02-01

    A novel online method based on the symmetry property of the sum of projections (SOP) is proposed to obtain the geometric parameters in cone-beam computed tomography (CBCT). This method requires no calibration phantom and can be used in circular trajectory CBCT with arbitrary cone angles. An objective function is deduced to illustrate the dependence of the symmetry of SOP on geometric parameters, which will converge to its minimum when the geometric parameters achieve their true values. Thus, by minimizing the objective function, we can obtain the geometric parameters for image reconstruction. To validate this method, numerical phantom studies with different noise levels are simulated. The results show that our method is insensitive to the noise and can determine the skew (in-plane rotation angle of the detector), the roll (rotation angle around the projection of the rotation axis on the detector), and the rotation axis with high accuracy, while the mid-plane and source-to-detector distance will be obtained with slightly lower accuracy. However, our simulation studies validate that the errors of the latter two parameters brought by our method will hardly degrade the quality of reconstructed images. The small animal studies show that our method is able to deal with arbitrary imaging objects. In addition, the results of the reconstructed images in different slices demonstrate that we have achieved comparable image quality in the reconstructions as some offline methods.

  5. Fast two-stream method for computing diurnal-mean actinic flux in vertically inhomogeneous atmospheres

    NASA Technical Reports Server (NTRS)

    Filyushkin, V. V.; Madronich, S.; Brasseur, G. P.; Petropavlovskikh, I. V.

    1994-01-01

    Based on a derivation of the two-stream daytime-mean equations of radiative flux transfer, a method for computing the daytime-mean actinic fluxes in the absorbing and scattering vertically inhomogeneous atmosphere is suggested. The method applies direct daytime integration of the particular solutions of the two-stream approximations or the source functions. It is valid for any duration of period of averaging. The merit of the method is that the multiple scattering computation is carried out only once for the whole averaging period. It can be implemented with a number of widely used two-stream approximations. The method agrees with the results obtained with 200-point multiple scattering calculations. The method was also tested in runs with a 1-km cloud layer with optical depth of 10, as well as with aerosol background. Comparison of the results obtained for a cloud subdivided into 20 layers with those obtained for a one-layer cloud with the same optical parameters showed that direct integration of particular solutions possesses an 'analytical' accuracy. In the case of the source function interpolation, the actinic fluxes calculated above the one-layer and 20-layer clouds agreed within 1%-1.5%, while below the cloud they may differ up to 5% (in the worst case). The ways of enhancing the accuracy (in a 'two-stream sense') and computational efficiency of the method are discussed.

  6. An efficient method for the computation of Legendre moments.

    PubMed

    Yap, Pew-Thian; Paramesran, Raveendran

    2005-12-01

    Legendre moments are continuous moments; hence, when applied to discrete-space images, numerical approximation is involved and error occurs. This paper proposes a method to compute the exact values of the moments by mathematically integrating the Legendre polynomials over the corresponding intervals of the image pixels. Experimental results show that the values obtained match those calculated theoretically, and the image reconstructed from these moments has lower error than that of the conventional methods for the same order. Although the same set of exact Legendre moments can be obtained indirectly from the set of geometric moments, the computation time taken is much longer than with the proposed method.
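
    The exact-integration idea can be sketched directly. Assuming the common normalization for Legendre moments of an image mapped onto [-1, 1] x [-1, 1], the Python example below integrates each Legendre polynomial analytically over the pixel intervals using the antiderivative identity int P_n dx = (P_{n+1} - P_{n-1})/(2n + 1); the test image and the orders computed are illustrative.

      # Sketch only: exact Legendre moments via per-pixel analytic integration.
      import numpy as np
      from scipy.special import eval_legendre

      def legendre_antiderivative(n, x):
          if n == 0:
              return x                                   # integral of P_0 = 1
          return (eval_legendre(n + 1, x) - eval_legendre(n - 1, x)) / (2 * n + 1)

      def pixel_integrals(n, edges):
          """I_n(i) = integral of P_n over [edges[i], edges[i+1]]."""
          F = legendre_antiderivative(n, edges)
          return F[1:] - F[:-1]

      def exact_legendre_moment(img, p, q):
          """lambda_pq for an image mapped onto [-1, 1] x [-1, 1] (assumed normalization)."""
          ny, nx = img.shape
          xe = np.linspace(-1.0, 1.0, nx + 1)            # pixel edges in x
          ye = np.linspace(-1.0, 1.0, ny + 1)            # pixel edges in y
          Ix = pixel_integrals(p, xe)
          Iy = pixel_integrals(q, ye)
          norm = (2 * p + 1) * (2 * q + 1) / 4.0
          return norm * (Iy @ img @ Ix)

      # Tiny example image (assumed); a few low-order moments.
      img = np.outer(np.hanning(32), np.hanning(32))
      for p, q in [(0, 0), (1, 0), (2, 0), (2, 2)]:
          print("lambda_%d%d = %+.5f" % (p, q, exact_legendre_moment(img, p, q)))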

  7. Theoretical Study of the Effect of Enamel Parameters on Laser-Induced Surface Acoustic Waves in Human Incisor

    NASA Astrophysics Data System (ADS)

    Yuan, Ling; Sun, Kaihua; Shen, Zhonghua; Ni, Xiaowu; Lu, Jian

    2015-06-01

    The laser ultrasound technique has great potential for clinical diagnosis of teeth because of its many advantages. To study laser surface acoustic wave (LSAW) propagation in human teeth, two theoretical methods, the finite element method (FEM) and Laguerre polynomial extension method (LPEM), are presented. The full field temperature values and SAW displacements in an incisor can be obtained by the FEM. The SAW phase velocity in a healthy incisor and dental caries is obtained by the LPEM. The methods and results of this work can provide a theoretical basis for nondestructive evaluation of human teeth with LSAWs.

  8. Investigation on filter method for smoothing spiral phase plate

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian

    2018-03-01

    The spiral phase plate (SPP) for generating vortex hollow beams has high efficiency in various applications. However, it is difficult to obtain an ideal spiral phase plate because of its continuously varying helical phase and discontinuous phase step. This paper describes the demonstration of a continuous spiral phase plate using filter methods. The numerical simulations indicate that different filter methods, including spatial-domain and frequency-domain filters, have distinct impacts on the surface topography of the SPP and on the optical vortex characteristics. The experimental results reveal that the spatial Gaussian filter method for smoothing the SPP is suitable for the Computer Controlled Optical Surfacing (CCOS) technique and obtains good optical properties.
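
    A spatial Gaussian filter of the kind referred to above can be sketched in a few lines. The Python example below builds a discretized spiral phase plate height map and smooths it with scipy.ndimage.gaussian_filter, reporting how the maximum height step shrinks; the topological charge, wavelength, index contrast and filter width are illustrative assumptions, not the fabrication parameters of the paper.

      # Sketch only: spatial Gaussian smoothing of a spiral phase plate height map.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      n, charge = 512, 1
      wavelength = 1.053e-6                       # m, assumed
      dn = 0.45                                   # refractive index contrast, assumed

      y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
      phase = np.mod(charge * np.arctan2(y, x), 2.0 * np.pi)      # helical phase, 0..2*pi
      height = phase * wavelength / (2.0 * np.pi * dn)            # SPP surface height (m)

      # Spatial-domain Gaussian filter; mode="wrap" is only a boundary convenience here.
      height_smooth = gaussian_filter(height, sigma=3.0, mode="wrap")

      def max_step(h):
          return max(np.abs(np.diff(h, axis=0)).max(), np.abs(np.diff(h, axis=1)).max())

      print("max height step before: %.2e m" % max_step(height))
      print("max height step after : %.2e m" % max_step(height_smooth))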

  9. Measurement of rheologic property of blood by a falling-ball blood viscometer.

    PubMed

    Eguchi, Yoko; Karino, Takeshi

    2008-04-01

    The viscosity of blood obtained by using a rotational viscometer decreases with the time elapsed from the beginning of measurement until it reaches a constant value determined by the magnitude of shear rate. It is not possible to obtain by this method an initial value of viscosity at time t = 0, which is considered to exhibit an intrinsic property of the fluid. Therefore, we devised a new method by which one can obtain a viscosity for various fluids that is affected neither by the time elapsed from the beginning of measurement nor by the magnitude of shear rate, by considering the balance of the forces acting on a solid spherical particle freely falling in a quiescent viscous fluid. By using the new method, we studied the rheologic behavior of corn syrups, carboxymethyl cellulose, and human blood, and compared the results with those obtained with a cone-and-plate viscometer. It was found that in the case of corn syrups and washed red cell suspensions, in which no red cell aggregates (rouleaux) were formed, the viscosities obtained with the two different methods were almost the same. In contrast, in the case of whole blood, in which massive aggregates were formed, the viscosity obtained with the falling-ball viscometer was much larger than that obtained with the cone-and-plate viscometer.
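
    The force balance underlying a falling-ball measurement reduces, in the Stokes regime, to eta = 2 r^2 g (rho_sphere - rho_fluid) / (9 v). The Python sketch below evaluates this textbook form for an assumed small sphere at terminal velocity; the authors' method is more elaborate than this, and the numbers are illustrative only.

      # Sketch only: Stokes-regime force balance (gravity = buoyancy + drag) for a falling ball.
      def falling_ball_viscosity(radius_m, rho_sphere, rho_fluid, velocity_m_s, g=9.81):
          """Dynamic viscosity (Pa*s) from the terminal velocity of a small sphere."""
          return 2.0 * radius_m**2 * g * (rho_sphere - rho_fluid) / (9.0 * velocity_m_s)

      # Example (assumed): a 50-micron glass microsphere falling at 2 mm/s in a blood-like fluid.
      eta = falling_ball_viscosity(radius_m=50e-6, rho_sphere=2500.0,
                                   rho_fluid=1060.0, velocity_m_s=2e-3)
      print("viscosity ~ %.4f Pa*s (%.1f mPa*s)" % (eta, eta * 1e3))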

  10. New method of scoliosis assessment: preliminary results using computerized photogrammetry.

    PubMed

    Aroeira, Rozilene Maria Cota; Leal, Jefferson Soares; de Melo Pertence, Antônio Eustáquio

    2011-09-01

    A new method for nonradiographic evaluation of scoliosis was independently compared with the Cobb radiographic method, for the quantification of scoliotic curvature. The aim was to develop a protocol for computerized photogrammetry, as a nonradiographic method, for the quantification of scoliosis, and to mathematically relate this proposed method with the Cobb radiographic method. Repeated exposure of children to radiation can be harmful to their health. Nevertheless, no nonradiographic method proposed until now has gained popularity as a routine method of evaluation, mainly due to low correspondence with the Cobb radiographic method. Patients undergoing standing posteroanterior full-length spine radiographs, who were willing to participate in this study, were submitted to dorsal digital photography in the orthostatic position with special surface markers over the spinous processes, specifically of the vertebrae C7 to L5. The radiographic and photographic images were sent separately for independent analysis to two examiners, trained in quantification of scoliosis for the types of images received. The scoliosis curvature angles obtained through computerized photogrammetry (the new method) were compared to those obtained through the Cobb radiographic method. Sixteen individuals were evaluated (14 female and 2 male). All presented idiopathic scoliosis and were 21.4 ± 6.1 years of age, 52.9 ± 5.8 kg in weight and 1.63 ± 0.05 m in height, with a body mass index of 19.8 ± 0.2. There was no statistically significant difference between the scoliosis angle measurements obtained in the comparative analysis of both methods, and a mathematical relationship was formulated between both methods. The preliminary results presented demonstrate equivalence between the two methods. More studies are needed to firmly assess the potential of this new method as a coadjuvant tool in the routine follow-up of scoliosis treatment.

  11. Microorganism Identification Based On MALDI-TOF-MS Fingerprints

    NASA Astrophysics Data System (ADS)

    Elssner, Thomas; Kostrzewa, Markus; Maier, Thomas; Kruppa, Gary

    Advances in MALDI-TOF mass spectrometry have enabled the development of a rapid, accurate and specific method for the identification of bacteria directly from colonies picked from culture plates, which we have named the MALDI Biotyper. The picked colonies are placed on a target plate, a drop of matrix solution is added, and a pattern of protein molecular weights and intensities, "the protein fingerprint" of the bacteria, is produced by the MALDI-TOF mass spectrometer. The obtained protein mass fingerprint representing a molecular signature of the microorganism is then matched against a database containing a library of previously measured protein mass fingerprints, and scores for the match to every library entry are produced. An ID is obtained if a score is returned over a pre-set threshold. The sensitivity of the technique is such that only approximately 10^4 bacterial cells are needed, meaning that an overnight culture is sufficient, and the results are obtained in minutes after culture. The improvement in time to result over biochemical methods, and the capability to perform a non-targeted identification of bacteria and spores, potentially makes this method suitable for use in the detect-to-treat timeframe in a bioterrorism event. In the case of white-powder samples, the infectious spore is present in sufficient quantity in the powder so that the MALDI Biotyper result can be obtained directly from the white powder, without the need for culture. While spores produce very different patterns from the vegetative colonies of the corresponding bacteria, this problem is overcome by simply including protein fingerprints of the spores in the library. Results on spores can be returned within minutes, making the method suitable for use in the "detect-to-protect" timeframe.

  12. Soil Particle Size Analysis by Laser Diffractometry: Result Comparison with Pipette Method

    NASA Astrophysics Data System (ADS)

    Šinkovičová, Miroslava; Igaz, Dušan; Kondrlová, Elena; Jarošová, Miriam

    2017-10-01

    Soil texture, as a basic soil physical property, provides basic information on the soil grain size distribution as well as the grain size fraction representation. Currently, there are several methods of particle dimension measurement available that are based on different physical principles. The pipette method, based on the different sedimentation velocities of particles with different diameters, is considered to be one of the standard methods for determining the distribution of individual grain size fractions. Following technical advancement, optical methods such as laser diffraction can nowadays also be used for grain size distribution determination in the soil. According to the literature review of domestic as well as international sources related to this topic, it is obvious that the results obtained by laser diffractometry do not correspond with the results obtained by the pipette method. The main aim of this paper was to analyse 132 samples of medium-fine soil, taken from the Nitra River catchment in Slovakia, from depths of 15-20 cm and 40-45 cm, respectively, using the laser analysers ANALYSETTE 22 MicroTec plus (Fritsch GmbH) and Mastersizer 2000 (Malvern Instruments Ltd). The results obtained by laser diffractometry were compared with the pipette method, and regression relationships using linear, exponential, power and polynomial trends were derived. The regressions with the three highest regression coefficients (R²) were further investigated. The tightest fit was observed for the polynomial regression. In view of the results obtained, we recommend estimating the representation of the clay fraction (<0.01 mm) with the polynomial regression, which achieved the highest coefficients of determination R²: 0.72 (ANALYSETTE 22 MicroTec plus) and 0.95 (Mastersizer 2000) at depths of 15-20 cm, and 0.90 (ANALYSETTE 22 MicroTec plus) and 0.96 (Mastersizer 2000) at depths of 40-45 cm. Since the percentage representation of clayey particles (2nd fraction according to the methodology of the Complex Soil Survey done in Slovakia) in soil is the determinant for soil type specification, we recommend using the derived relationships in soil science when the soil texture analysis is done by laser diffractometry. The advantages of the laser diffraction method comprise the short analysis time, the use of a small sample amount, applicability to various grain size fraction and soil type classification systems, and a wide range of determined fractions. Therefore, it is necessary to focus on this issue further to address the needs of soil science research and attempt to replace the standard pipette method with the more progressive laser diffraction method.

  13. Acoustic Signature from Flames as a Combustion Diagnostic Tool

    DTIC Science & Technology

    1983-11-01

    empirical visual flame length had to be input to the computer for the inversion method to give good results. That is, if the experiment and inversion...method were asked to yield the flame length, poor results were obtained. Since this was part of the information sought for practical application of the...to small experimental uncertainty. The method gave reasonably good results for the open flame but substantial input (the flame length) had to be

  14. Comparative analysis for strength serum sodium and potassium in three different methods: Flame photometry, ion-selective electrode (ISE) and colorimetric enzymatic.

    PubMed

    Garcia, Rafaela Alvim; Vanelli, Chislene Pereira; Pereira Junior, Olavo Dos Santos; Corrêa, José Otávio do Amaral

    2018-06-19

    Hydroelectrolytic disorders are common in clinical situations and may be harmful to the patient, especially those involving plasma sodium and potassium measurements. Among the possible methods for these measurements are flame photometry, the ion-selective electrode (ISE) and the colorimetric enzymatic method. We analyzed 175 samples from patients attending the laboratory of the University Hospital of the Federal University of Juiz de Fora with the three different methods cited. The values obtained were statistically treated using SPSS 19.0 software. The present study aims to evaluate the impact of the use of these different methods on the determination of plasma sodium and potassium. The averages obtained for sodium and potassium measurements by flame photometry were similar (P > .05) to the means obtained for the two electrolytes by ISE. The averages obtained by the colorimetric enzymatic method presented a statistical difference in relation to ISE, both for sodium and potassium. In the correlation analysis, both flame photometry and the colorimetric enzymatic method showed a strong correlation with the ISE method for both electrolytes. For the first time, sodium and potassium were analyzed in the same work by three different methods, and the results allowed us to conclude that the methods showed a positive and strong correlation and can be applied in the clinical routine. © 2018 Wiley Periodicals, Inc.

  15. An Exponential Finite Difference Technique for Solving Partial Differential Equations. M.S. Thesis - Toledo Univ., Ohio

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.

    1987-01-01

    An exponential finite difference algorithm, as first presented by Bhattacharya for one-dimensional steady-state heat conduction in Cartesian coordinates, has been extended. The finite difference algorithm developed was used to solve the diffusion equation in one-dimensional cylindrical coordinates and applied to two- and three-dimensional problems in Cartesian coordinates. The method was also used to solve nonlinear partial differential equations in one-dimensional (Burger's equation) and two-dimensional (Boundary Layer equations) Cartesian coordinates. Predicted results were compared to exact solutions where available, or to results obtained by other numerical methods. It was found that the exponential finite difference method produced results that were more accurate than those obtained by other numerical methods, especially during the initial transient portion of the solution. Other applications made using the exponential finite difference technique included unsteady one-dimensional heat transfer with temperature-varying thermal conductivity and the development of the temperature field in a laminar Couette flow.

  16. exponential finite difference technique for solving partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handschuh, R.F.

    1987-01-01

    An exponential finite difference algorithm, as first presented by Bhattacharya for one-dimensional steady-state heat conduction in Cartesian coordinates, has been extended. The finite difference algorithm developed was used to solve the diffusion equation in one-dimensional cylindrical coordinates and applied to two- and three-dimensional problems in Cartesian coordinates. The method was also used to solve nonlinear partial differential equations in one-dimensional (Burger's equation) and two-dimensional (Boundary Layer equations) Cartesian coordinates. Predicted results were compared to exact solutions where available, or to results obtained by other numerical methods. It was found that the exponential finite difference method produced results that were more accurate than those obtained by other numerical methods, especially during the initial transient portion of the solution. Other applications made using the exponential finite difference technique included unsteady one-dimensional heat transfer with temperature-varying thermal conductivity and the development of the temperature field in a laminar Couette flow.

  17. A closed-form trim solution yielding minimum trim drag for airplanes with multiple longitudinal-control effectors

    NASA Technical Reports Server (NTRS)

    Goodrich, Kenneth H.; Sliwa, Steven M.; Lallman, Frederick J.

    1989-01-01

    Airplane designs are currently being proposed with a multitude of lifting and control devices. Because of the redundancy in ways to generate moments and forces, there are a variety of strategies for trimming each airplane. A linear optimum trim solution (LOTS) is derived using a Lagrange formulation. LOTS enables the rapid calculation of the longitudinal load distribution resulting in the minimum trim drag in level, steady-state flight for airplanes with a mixture of three or more aerodynamic surfaces and propulsive control effectors. Comparisons of the trim drags obtained using LOTS, a direct constrained optimization method, and several ad hoc methods are presented for vortex-lattice representations of a three-surface airplane and two-surface airplane with thrust vectoring. These comparisons show that LOTS accurately predicts the results obtained from the nonlinear optimization and that the optimum methods result in trim drag reductions of up to 80 percent compared to the ad hoc methods.

  18. New approach application of data transformation in mean centering of ratio spectra method

    NASA Astrophysics Data System (ADS)

    Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.

    2015-05-01

    Most mean centering (MCR) methods are designed to be used with data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time, a normal distribution was assumed, and if a confidence interval included a negative value, it was cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained. Taking the logarithm of each data point is one transformation frequently used. As a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean has been successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with a reported HPLC method.
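
    The log-transformation idea can be shown in a few lines of Python. The sketch below (not the full mean centering of ratio spectra procedure) centers a skewed, strictly positive synthetic signal in log space, which is equivalent to dividing by the geometric mean; the log-normal test data are an assumption.

      # Sketch only: centering skewed, positive data in log space (geometric mean).
      import numpy as np

      rng = np.random.default_rng(1)
      spectrum = rng.lognormal(mean=0.0, sigma=0.8, size=500)   # skewed, positive data

      arithmetic_mean = spectrum.mean()
      geometric_mean = np.exp(np.log(spectrum).mean())

      # Centering in log space is equivalent to dividing by the geometric mean.
      log_centered = np.log(spectrum) - np.log(spectrum).mean()
      ratio_centered = np.exp(log_centered)                     # spectrum / geometric_mean

      print("arithmetic mean = %.3f" % arithmetic_mean)
      print("geometric mean  = %.3f" % geometric_mean)
      print("max |ratio_centered - spectrum/geometric_mean| = %.2e"
            % np.max(np.abs(ratio_centered - spectrum / geometric_mean)))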

  19. Improved fuzzy clustering algorithms in segmentation of DC-enhanced breast MRI.

    PubMed

    Kannan, S R; Ramathilagam, S; Devi, Pandiyarajan; Sathya, A

    2012-02-01

    Segmentation of medical images is a difficult and challenging problem due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. Many researchers have applied various techniques; however, fuzzy c-means (FCM) based algorithms are more effective than other methods. The objective of this work is to develop robust fuzzy clustering segmentation systems for effective segmentation of DCE breast MRI. This paper obtains the robust fuzzy clustering algorithms by incorporating kernel methods, penalty terms, tolerance of the neighborhood attraction, an additional entropy term and fuzzy parameters. The initial centers are obtained using an initialization algorithm to reduce the computational complexity and running time of the proposed algorithms. Experimental work on breast images shows that the proposed algorithms are effective in improving the similarity measurement, in handling large amounts of noise, and in dealing with data corrupted by noise and other artifacts. The clustering results of the proposed methods are validated using the Silhouette method.
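
    For reference, the baseline that the robust variants build on is standard fuzzy c-means. The Python sketch below implements the classical FCM updates on synthetic two-dimensional data; it is not the proposed kernel/penalty algorithm, and the cluster count, fuzzifier and toy data are assumptions.

      # Sketch only: classical fuzzy c-means (FCM) with the standard update equations.
      import numpy as np

      def fcm(X, c=2, m=2.0, n_iter=100, seed=0):
          rng = np.random.default_rng(seed)
          U = rng.random((c, X.shape[0]))
          U /= U.sum(axis=0)                                   # memberships sum to 1
          for _ in range(n_iter):
              Um = U ** m
              centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
              d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-10
              # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
              U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
          return centers, U

      # Two well-separated blobs as a stand-in for tissue / background intensities.
      rng = np.random.default_rng(42)
      X = np.vstack([rng.normal(0.0, 0.3, (100, 2)), rng.normal(2.0, 0.3, (100, 2))])
      centers, U = fcm(X, c=2)
      print("cluster centers:\n", centers)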

  20. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
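
    The stress-strength reliability R = P(strength > stress) and its sensitivity to the assumed distribution can be illustrated with a small Monte Carlo sketch; the distributions and parameters below are invented and only demonstrate how two nearly indistinguishable strength models can give different high-reliability estimates.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1_000_000

        # Assumed (illustrative) populations: stress and strength in arbitrary units.
        stress = rng.normal(loc=100.0, scale=10.0, size=n)

        # Two nearly indistinguishable strength models: a normal and a slightly
        # heavier-tailed mixture.  Their histograms look almost identical, yet the
        # estimated high reliabilities differ noticeably.
        strength_normal = rng.normal(loc=160.0, scale=12.0, size=n)
        strength_mixture = np.where(rng.random(n) < 0.98,
                                    rng.normal(160.0, 12.0, size=n),
                                    rng.normal(160.0, 30.0, size=n))

        for name, strength in [("normal", strength_normal), ("mixture", strength_mixture)]:
            reliability = np.mean(strength > stress)   # Monte Carlo estimate of P(strength > stress)
            print(f"{name:8s}  R = {reliability:.6f}")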

  1. Analysis shear wave velocity structure obtained from surface wave methods in Bornova, Izmir

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pamuk, Eren, E-mail: eren.pamuk@deu.edu.tr; Akgün, Mustafa, E-mail: mustafa.akgun@deu.edu.tr; Özdağ, Özkan Cevdet, E-mail: cevdet.ozdag@deu.edu.tr

    2016-04-18

    The properties of the soil layers above the bedrock must be described accurately and reliably in order to reduce earthquake damage, because seismic waves change their amplitude and frequency content owing to the acoustic impedance difference between soil and bedrock. Firstly, the shear wave velocity and thickness of the layers above the bedrock are needed to detect this change. Shear wave velocity can be obtained by inverting Rayleigh wave dispersion curves obtained from surface wave methods (MASW - the Multichannel Analysis of Surface Waves, ReMi - Refraction Microtremor, SPAC - Spatial Autocorrelation). While the investigation depth of active-source surveys is limited, passive-source methods can reach depths that active-source methods cannot. The ReMi method is used to determine layer thicknesses and velocities down to about 100 m using seismic refraction measurement systems, while SPAC, which is easily applied under urban conditions that restrict active seismic surveys, allows investigation down to a depth that depends on the array radius. Vs profiles, which are required to calculate deformations under static and dynamic loads, can be obtained with high resolution by combining Rayleigh wave dispersion curves from active- and passive-source methods. In this study, surface wave data were collected with MASW, ReMi and SPAC measurements in the İzmir Bornova region. The dispersion curves obtained from the surface wave methods were combined over a wide frequency band and Vs-depth profiles were obtained by inversion. The reliability of the resulting soil profiles was assessed by comparing the theoretical transfer function computed from the soil parameters with the observed transfer function from the Nakamura technique and by examining the fit between these functions. Vs values range between 200 and 830 m/s and the engineering bedrock (Vs>760 m/s) depth is approximately 150 m.

  2. Ultrasonic tracking of shear waves using a particle filter

    PubMed Central

    Ingle, Atul N.; Ma, Chi; Varghese, Tomy

    2015-01-01

    Purpose: This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer areas which is an indicator of potential pathology. Methods: Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Results: Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard method using least-squares fit. Estimates of area obtained from the particle filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. Conclusions: The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom with good image quality properties comparable to existing techniques. PMID:26520761
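
    A bootstrap particle filter applied to a synthetic time-to-peak curve gives a rough picture of the approach; the random-walk state model, noise levels and the final least-squares slope fit are assumptions for illustration, not the authors' exact hidden Markov model.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic time-to-peak (TTP) data versus lateral position: the true TTP
        # increases linearly with distance (constant shear wave speed) plus noise.
        lateral = np.linspace(0, 20e-3, 60)                  # metres
        true_ttp = lateral / 3.0                             # 3 m/s shear wave speed
        observed_ttp = true_ttp + rng.normal(0, 0.4e-3, lateral.size)

        n_particles = 2000
        process_std, obs_std = 0.2e-3, 0.4e-3
        particles = rng.normal(observed_ttp[0], obs_std, n_particles)
        estimates = []

        for z in observed_ttp:
            # Predict: random-walk motion model for the hidden (noiseless) TTP.
            particles = particles + rng.normal(0, process_std, n_particles)
            # Update: weight particles by the Gaussian likelihood of the measurement.
            weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
            weights /= weights.sum()
            estimates.append(np.sum(weights * particles))    # MMSE estimate at this location
            # Resample (multinomial) to avoid weight degeneracy.
            particles = rng.choice(particles, size=n_particles, p=weights)

        estimates = np.array(estimates)
        speed = 1.0 / np.polyfit(lateral, estimates, 1)[0]   # slope of TTP vs x is 1/c
        print(f"estimated shear wave speed: {speed:.2f} m/s")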

  3. High-precision method of binocular camera calibration with a distortion model.

    PubMed

    Li, Weimin; Shan, Siyu; Liu, Hui

    2017-03-10

    A high-precision camera calibration method for a binocular stereo vision system, based on a multi-view template and alternative bundle adjustment, is presented in this paper. The proposed method is achieved by taking several photos of a specially designed calibration template that has diverse encoded points, in different orientations. The method utilizes an existing monocular camera calibration algorithm to obtain the initialization, which involves a camera model including radial lens distortion and tangential distortion. A reference coordinate system based on the left camera coordinates is created to optimize the intrinsic parameters of the left camera through alternative bundle adjustment and obtain optimal values. Then, optimal intrinsic parameters of the right camera can be obtained through alternative bundle adjustment when a reference coordinate system based on the right camera coordinates is created. All acquired intrinsic parameters are then used to optimize the extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters are obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real-data result shows that the reprojection error of our model is about 0.045 pixels, with a relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.

  4. A qualitative and quantitative HPTLC densitometry method for the analysis of cannabinoids in Cannabis sativa L.

    PubMed

    Fischedick, Justin T; Glas, Ronald; Hazekamp, Arno; Verpoorte, Rob

    2009-01-01

    Cannabis and cannabinoid based medicines are currently under serious investigation for legitimate development as medicinal agents, necessitating new low-cost, high-throughput analytical methods for quality control. The goal of this study was to develop and validate, according to ICH guidelines, a simple rapid HPTLC method for the quantification of Delta(9)-tetrahydrocannabinol (Delta(9)-THC) and qualitative analysis of other main neutral cannabinoids found in cannabis. The method was developed and validated with the use of pure cannabinoid reference standards and two medicinal cannabis cultivars. Accuracy was determined by comparing results obtained from the HPTLC method with those obtained from a validated HPLC method. Delta(9)-THC gives linear calibration curves in the range of 50-500 ng at 206 nm with a linear regression of y = 11.858x + 125.99 and r(2) = 0.9968. Results have shown that the HPTLC method is reproducible and accurate for the quantification of Delta(9)-THC in cannabis. The method is also useful for the qualitative screening of the main neutral cannabinoids found in cannabis cultivars.

  5. Supercritical fluid extraction from spent coffee grounds and coffee husks: antioxidant activity and effect of operational variables on extract composition.

    PubMed

    Andrade, Kátia S; Gonçalvez, Ricardo T; Maraschin, Marcelo; Ribeiro-do-Valle, Rosa Maria; Martínez, Julian; Ferreira, Sandra R S

    2012-01-15

    The present study describes the chemical composition and the antioxidant activity of spent coffee grounds and coffee husk extracts, obtained by supercritical fluid extraction (SFE) with CO(2) and with CO(2) and co-solvent. In order to evaluate the high pressure method in terms of process yield, extract composition and antioxidant activity, low pressure methods, such as ultrasound (UE) and soxhlet (SOX) with different organic solvents, were also applied to obtain the extracts. The conditions for the SFE were: temperatures of 313.15K, 323.15K and 333.15K and pressures from 100 bar to 300 bar. The SFE kinetics and the mathematical modeling of the overall extraction curves (OEC) were also investigated. The extracts obtained by LPE (low pressure extraction) with ethanol showed the best results for the global extraction yield (X(0)) when compared to SFE results. The best extraction yield was 15±2% for spent coffee grounds with ethanol and 3.1±0.4% for coffee husks. The antioxidant potential was evaluated by the DPPH, ABTS and Folin-Ciocalteu methods. The best antioxidant activity was shown by coffee husk extracts obtained by LPE. The quantification and the identification of the extracts were accomplished using HPLC analysis. The main compounds identified were caffeine and chlorogenic acid for the supercritical extracts from coffee husks. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Identification of the optic nerve head with genetic algorithms.

    PubMed

    Carmona, Enrique J; Rincón, Mariano; García-Feijoó, Julián; Martínez-de-la-Casa, José M

    2008-07-01

    This work proposes creating an automatic system to locate and segment the optic nerve head (ONH) in eye fundus photographic images using genetic algorithms. Domain knowledge is used to create a set of heuristics that guide the various steps involved in the process. Initially, using an eye fundus colour image as input, a set of hypothesis points was obtained that exhibited geometric properties and intensity levels similar to the ONH contour pixels. Next, a genetic algorithm was used to find an ellipse containing the maximum number of hypothesis points in an offset of its perimeter, considering some constraints. The ellipse thus obtained is the approximation to the ONH. The segmentation method is tested on a sample of 110 eye fundus images, belonging to 55 patients with glaucoma (23.1%) and eye hypertension (76.9%), randomly selected from an eye fundus image base belonging to the Ophthalmology Service at Miguel Servet Hospital, Saragossa (Spain). The results obtained are competitive with those in the literature. The method's generalization capability is reinforced when it is applied to a different image base from the one used in our study and a discrepancy curve is obtained very similar to the one obtained in our image base. In addition, the robustness of the method proposed can be seen in the high percentage of images obtained with a discrepancy delta<5 (96% and 99% in our and a different image base, respectively). The results also confirm the hypothesis that the ONH contour can be properly approached with a non-deformable ellipse. Another important aspect of the method is that it directly provides the parameters characterising the shape of the papilla: lengths of its major and minor axes, its centre of location and its orientation with regard to the horizontal position.
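
    A toy real-coded genetic algorithm for the core step — evolving ellipse parameters so that as many hypothesis points as possible fall within a tolerance band around the perimeter — is sketched below. The encoding, fitness band, operators and all numerical settings are simplifying assumptions rather than the authors' design, and the hypothesis points are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothesis points: noisy samples of a "true" ellipse plus clutter.
        t = rng.uniform(0, 2 * np.pi, 120)
        cx0, cy0, a0, b0, th0 = 50.0, 40.0, 20.0, 14.0, 0.4
        ct, st = np.cos(th0), np.sin(th0)
        px = cx0 + a0 * np.cos(t) * ct - b0 * np.sin(t) * st + rng.normal(0, 0.5, t.size)
        py = cy0 + a0 * np.cos(t) * st + b0 * np.sin(t) * ct + rng.normal(0, 0.5, t.size)
        pts = np.vstack([np.column_stack([px, py]), rng.uniform(0, 100, (60, 2))])

        def fitness(ind, tol=0.05):
            # Count points whose normalised radius lies in a band around the perimeter.
            cx, cy, a, b, th = ind
            c, s = np.cos(th), np.sin(th)
            dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
            xr, yr = dx * c + dy * s, -dx * s + dy * c       # rotate into ellipse frame
            r = np.sqrt((xr / a) ** 2 + (yr / b) ** 2)
            return np.sum(np.abs(r - 1.0) < tol)

        # Initial population: centres near the point centroid, random axes/orientation.
        centroid = pts.mean(axis=0)
        pop = np.column_stack([rng.normal(centroid[0], 15, 80), rng.normal(centroid[1], 15, 80),
                               rng.uniform(5, 40, 80), rng.uniform(5, 40, 80),
                               rng.uniform(0, np.pi, 80)])

        for gen in range(150):
            scores = np.array([fitness(ind) for ind in pop])
            best = pop[np.argmax(scores)].copy()
            idx = rng.integers(0, len(pop), (len(pop), 3))                 # tournament selection
            parents = pop[idx[np.arange(len(pop)), np.argmax(scores[idx], axis=1)]]
            alpha = rng.random((len(pop), 1))                              # blend crossover
            pop = alpha * parents + (1 - alpha) * parents[rng.permutation(len(pop))]
            pop += rng.normal(0, 0.5, pop.shape)                           # Gaussian mutation
            pop[:, 2:4] = np.clip(pop[:, 2:4], 1.0, 60.0)                  # keep axes positive
            pop[0] = best                                                  # elitism

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("best ellipse (cx, cy, a, b, theta):", np.round(best, 2))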

  7. Phase-sensitive spectral estimation by the hybrid filter diagonalization method.

    PubMed

    Celik, Hasan; Ridge, Clark D; Shaka, A J

    2012-01-01

    A more robust way to obtain a high-resolution multidimensional NMR spectrum from limited data sets is described. The Filter Diagonalization Method (FDM) is used to analyze phase-modulated data and cast the spectrum in terms of phase-sensitive Lorentzian "phase-twist" peaks. These spectra are then used to obtain absorption-mode phase-sensitive spectra. In contrast to earlier implementations of multidimensional FDM, the absolute phase of the data need not be known beforehand, and linear phase corrections in each frequency dimension are possible, if they are required. Regularization is employed to improve the conditioning of the linear algebra problems that must be solved to obtain the spectral estimate. While regularization smoothes away noise and small peaks, a hybrid method allows the true noise floor to be correctly represented in the final result. Line shape transformation to a Gaussian-like shape improves the clarity of the spectra, and is achieved by a conventional Lorentzian-to-Gaussian transformation in the time-domain, after inverse Fourier transformation of the FDM spectra. The results obtained highlight the danger of not using proper phase-sensitive line shapes in the spectral estimate. The advantages of the new method for the spectral estimate are the following: (i) the spectrum can be phased by conventional means after it is obtained; (ii) there is a true and accurate noise floor; and (iii) there is some indication of the quality of fit in each local region of the spectrum. The method is illustrated with 2D NMR data for the first time, but is applicable to n-dimensional data without any restriction on the number of time/frequency dimensions. Copyright © 2011. Published by Elsevier Inc.

  8. Validation and transferability study of a method based on near-infrared hyperspectral imaging for the detection and quantification of ergot bodies in cereals.

    PubMed

    Vermeulen, Ph; Fernández Pierna, J A; van Egmond, H P; Zegers, J; Dardenne, P; Baeten, V

    2013-09-01

    In recent years, near-infrared (NIR) hyperspectral imaging has proved its suitability for quality and safety control in the cereal sector by allowing spectroscopic images to be collected at single-kernel level, which is of great interest to cereal control laboratories. Contaminants in cereals include, inter alia, impurities such as straw, grains from other crops, and insects, as well as undesirable substances such as ergot (sclerotium of Claviceps purpurea). For the cereal sector, the presence of ergot creates a high toxicity risk for animals and humans because of its alkaloid content. A study was undertaken, in which a complete procedure for detecting ergot bodies in cereals was developed, based on their NIR spectral characteristics. These were used to build relevant decision rules based on chemometric tools and on the morphological information obtained from the NIR images. The study sought to transfer this procedure from a pilot online NIR hyperspectral imaging system at laboratory level to a NIR hyperspectral imaging system at industrial level and to validate the latter. All the analyses performed showed that the results obtained using both NIR hyperspectral imaging cameras were quite stable and repeatable. In addition, a correlation higher than 0.94 was obtained between the predicted values obtained by NIR hyperspectral imaging and those supplied by the stereo-microscopic method which is the reference method. The validation of the transferred protocol on blind samples showed that the method could identify and quantify ergot contamination, demonstrating the transferability of the method. These results were obtained on samples with an ergot concentration of 0.02% which is less than the EC limit for cereals (intervention grains) destined for humans fixed at 0.05%.

  9. Automatic Road Gap Detection Using Fuzzy Inference System

    NASA Astrophysics Data System (ADS)

    Hashemi, S.; Valadan Zoej, M. J.; Mokhtarzadeh, M.

    2011-09-01

    Automatic feature extraction from aerial and satellite images is a high-level data processing task which is still one of the most important research topics of the field. In this area, most research is focused on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are some of the mature methods in this field. Although most research focuses on detection algorithms, none of them can extract the road network perfectly. On the other hand, post-processing algorithms aimed at refining road detection results are not as well developed. In this article, the main aim is to design an intelligent method to detect and compensate for road gaps remaining in the initial results of road detection algorithms. The proposed algorithm consists of the following main steps: 1) Short gap coverage: a multi-scale morphological operator is designed that covers short gaps in a hierarchical scheme. 2) Long gap detection: the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system; for this purpose, a knowledge base consisting of expert rules is designed and fired on gap candidates from the road detection results. 3) Long gap coverage: detected long gaps are compensated by two strategies, linear and polynomial: shorter gaps are filled by line fitting while longer ones are compensated by polynomials. 4) Accuracy assessment: in order to evaluate the obtained results, accuracy assessment criteria are proposed; these criteria are obtained by comparing the obtained results with correctly compensated ones produced by a human expert. The complete evaluation of the obtained results with their technical discussion is presented in the full paper.

  10. Analytical investigation of different mathematical approaches utilizing manipulation of ratio spectra

    NASA Astrophysics Data System (ADS)

    Osman, Essam Eldin A.

    2018-01-01

    This work represents a comparative study of different approaches of manipulating ratio spectra, applied on a binary mixture of ciprofloxacin HCl and dexamethasone sodium phosphate co-formulated as ear drops. The proposed new spectrophotometric methods are: ratio difference spectrophotometric method (RDSM), amplitude center method (ACM), first derivative of the ratio spectra (1DD) and mean centering of ratio spectra (MCR). The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitations and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.

  11. Novel Imaging Method of Continuous Shear Wave by Ultrasonic Color Flow Mapping

    NASA Astrophysics Data System (ADS)

    Yamakoshi, Yoshiki; Yamamoto, Atsushi; Yuminaka, Yasushi

    Shear wave velocity measurement is a promising method for evaluating tissue stiffness. Several methods have been developed to measure the shear wave velocity; however, it is difficult to obtain a quantitative shear wave image in real time with a low-cost system. In this paper, a novel imaging method for continuous shear waves is proposed. The method uses color flow imaging, which is available in ultrasonic imaging systems, to obtain the shear wave's wavefront map. Two conditions, a shear wave frequency condition and a shear wave displacement amplitude condition, are required; however, these conditions are not severe restrictions in most applications. Using the proposed method, the shear wave velocity of the trapezius muscle is measured. The result is consistent with the velocity calculated from the shear elastic modulus measured by the ARFI method.

  12. Studies of excited states of HeH by the multi-reference configuration-interaction method

    NASA Astrophysics Data System (ADS)

    Lee, Chun-Woo; Gim, Yeongrok

    2013-11-01

    The excited states of a HeH molecule for an n of up to 4 are studied using the multi-reference configuration-interaction method and Kaufmann's Rydberg basis functions. The advantages of using two different ways of locating Rydberg orbitals, either on the atomic nucleus or at the charge centre of molecules, are exploited by limiting their application to different ranges of R. Using this method, the difference between the experimental binding energies of the lower Rydberg states obtained by Ketterle and the ab initio results obtained by van Hemert and Peyerimhoff is reduced from a few hundreds of wave numbers to a few tens of wave numbers. A substantial improvement in the accuracy allows us to obtain quantum defect curves characterized by the correct behaviour. We obtain several Rydberg series that have more than one member, such as the ns series (n = 2, 3 and 4), npσ series (n = 3 and 4), npπ (n = 2, 3, 4) series and ndπ (n = 3, 4) series. These quantum defect curves are compared to the quantum defect curves obtained by the R-matrix or the multichannel quantum defect theory methods.

  13. Ensemble Methods for MiRNA Target Prediction from Expression Data.

    PubMed

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials.
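
    One simple way to integrate the outputs of several target prediction methods is rank averaging (a Borda-style ensemble), sketched below with made-up scores and gene names; the paper's ensembles (e.g., Pearson+IDA+Lasso) may combine results differently.

        import pandas as pd

        # Hypothetical confidence scores (higher = stronger predicted mRNA target)
        # from three individual methods for one miRNA.
        scores = pd.DataFrame({
            "Pearson": [0.90, 0.10, 0.75, 0.40, 0.55],
            "IDA":     [0.80, 0.20, 0.60, 0.70, 0.30],
            "Lasso":   [0.85, 0.05, 0.90, 0.35, 0.50],
        }, index=["GENE_A", "GENE_B", "GENE_C", "GENE_D", "GENE_E"])

        # Borda-style ensemble: rank targets within each method, then average the
        # ranks across methods; a smaller average rank means a stronger ensemble prediction.
        ranks = scores.rank(ascending=False, axis=0)
        ensemble = ranks.mean(axis=1).sort_values()
        print(ensemble)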

  14. Improved phase shift approach to the energy correction of the infinite order sudden approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, B.; Eno, L.; Rabitz, H.

    1980-07-15

    A new method is presented for obtaining energy corrections to the infinite order sudden (IOS) approximation by incorporating the effect of the internal molecular Hamiltonian into the IOS wave function. This is done by utilizing the JWKB approximation to transform the Schroedinger equation into a differential equation for the phase. It is found that the internal Hamiltonian generates an effective potential from which a new improved phase shift is obtained. This phase shift is then used in place of the IOS phase shift to generate new transition probabilities. As an illustration the resulting improved phase shift (IPS) method is applied to the Secrest-Johnson model for the collinear collision of an atom and diatom. In the vicinity of the sudden limit, the IPS method gives results for transition probabilities, P_{n→n+Δn}, in significantly better agreement with the 'exact' close coupling calculations than the IOS method, particularly for large Δn. However, when the IOS results are not even qualitatively correct, the IPS method is unable to satisfactorily provide improvements.

  15. a Method of Generating dem from Dsm Based on Airborne Insar Data

    NASA Astrophysics Data System (ADS)

    Lu, W.; Zhang, J.; Xue, G.; Wang, C.

    2018-04-01

    Traditional terrestrial survey methods for acquiring a DEM cannot meet the requirement of collecting large quantities of data in real time, whereas a DSM can be obtained quickly by dual-antenna synthetic aperture radar interferometry, and a DEM generated from that DSM is faster to produce and accurate. It is therefore important to derive the DEM from the DSM based on airborne InSAR data, and this paper focuses on a method for doing so accurately. Two steps are applied to acquire an accurate DEM. First, when the DSM is generated by interferometry, unavoidable factors such as layover and shadow produce gross errors that affect the data accuracy, so an adaptive threshold segmentation method is adopted to remove the gross errors, with the threshold selected according to the interferometric coherence. Second, the DEM is generated by a progressive triangulated irregular network densification filtering algorithm. Finally, the experimental results are compared with existing high-precision DEM results. The results show that this method can effectively filter out buildings, vegetation and other objects to obtain a high-precision DEM.
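
    The first step — discarding DSM cells whose interferometric coherence falls below an adaptive threshold before ground filtering — can be sketched as follows; the arrays and the threshold rule are hypothetical, not the paper's actual criterion.

        import numpy as np

        rng = np.random.default_rng(7)

        # Hypothetical DSM (heights, metres) and interferometric coherence (0..1).
        dsm = 100 + rng.normal(0, 2, (200, 200))
        coherence = np.clip(rng.normal(0.8, 0.15, (200, 200)), 0, 1)

        # Adaptive threshold from the coherence statistics (illustrative rule):
        # cells affected by layover/shadow usually show low coherence and unreliable heights.
        threshold = max(0.3, coherence.mean() - 2 * coherence.std())
        reliable = coherence >= threshold

        dsm_clean = np.where(reliable, dsm, np.nan)       # gross-error cells removed
        print(f"threshold = {threshold:.2f}, "
              f"{(~reliable).mean() * 100:.1f}% of cells masked")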

  16. The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method

    NASA Astrophysics Data System (ADS)

    Voronina, T. A.; Romanenko, A. A.

    2016-12-01

    Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of a linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least square inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The method proposed allows one to control the instability of a numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to the reconstruction of the initial waveform of the 2013 Solomon Islands tsunami confirms the conclusions drawn for synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
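
    The essence of the r-solution is a truncated-SVD inverse that keeps only the r largest singular values of the forward operator. A small numpy sketch on a synthetic ill-conditioned system (the operator and "source" below are invented) shows how truncation stabilises the inversion.

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative ill-conditioned forward operator G mapping the (discretised)
        # initial water elevation m to the recorded water-level data d.
        n = 40
        G = np.vander(np.linspace(0, 1, n), n, increasing=True)   # badly conditioned
        m_true = np.exp(-((np.arange(n) - 20) ** 2) / 20.0)       # smooth "source"
        d = G @ m_true + rng.normal(0, 1e-3, n)                   # noisy data

        U, s, Vt = np.linalg.svd(G, full_matrices=False)

        def r_solution(r):
            # Keep only the r largest singular values (truncated SVD inverse).
            return Vt[:r].T @ ((U[:, :r].T @ d) / s[:r])

        for r in (5, 10, 20, n):
            err = np.linalg.norm(r_solution(r) - m_true) / np.linalg.norm(m_true)
            print(f"r = {r:2d}: relative error = {err:.3f}")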

  17. Experimental validation of the intrinsic spatial efficiency method over a wide range of sizes for cylindrical sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Larroquette, Philippe; Camilla, S.

    The intrinsic spatial efficiency method is a new absolute method to determine the efficiency of a gamma spectroscopy system for any extended source. In the original work the method was experimentally demonstrated and validated for homogeneous cylindrical sources containing {sup 137}Cs, whose sizes varied over a small range (29.5 mm radius and 15.0 to 25.9 mm height). In this work we present an extension of the validation over a wide range of sizes. The dimensions of the cylindrical sources vary between 10 to 40 mm height and 8 to 30 mm radius. The cylindrical sources were prepared using the reference material IAEA-372, which had a specific activity of 11320 Bq/kg in July 2006. The results obtained were better for the sources with 29 mm radius, showing a relative bias of less than 5%, and for the sources with 10 mm height, showing a relative bias of less than 6%. In comparison with the results obtained in the work in which the method was first presented, the majority of these results show excellent agreement.

  18. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    ERIC Educational Resources Information Center

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  19. The Global Optimization of Pt13 Cluster Using the First-Principle Molecular Dynamics with the Quenching Technique

    NASA Astrophysics Data System (ADS)

    Chen, Xiangping; Duan, Haiming; Cao, Biaobing; Long, Mengqiu

    2018-03-01

    The high-temperature first-principle molecular dynamics method used to obtain the low energy configurations of clusters [L. L. Wang and D. D. Johnson, PRB 75, 235405 (2007)] is extended to a considerably larger temperature range by combination with the quenching technique. Our results show that there are strong correlations between the probability of obtaining the ground-state structure and the temperature. Higher probabilities are obtained at relatively low temperatures (corresponding to the pre-melting temperature range). Details of the structural correlation with the temperature are investigated by taking the Pt13 cluster as an example, which suggests a quite efficient method to obtain the lowest-energy geometries of metal clusters.

  20. Commissioning and initial acceptance tests for a commercial convolution dose calculation algorithm for radiotherapy treatment planning in comparison with Monte Carlo simulation and measurement

    PubMed Central

    Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen

    2012-01-01

    In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system, i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between the experimental results and the two calculation methods was obtained. The results indicate a maximum difference between the two methods of 12% in the lung and 3% in the bone tissue of the phantom, and the CCC algorithm shows more accurate depth dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom and also better results than the ETAR method in bone and lung tissues. PMID:22973081

  1. [Evaluation of the methods of treatment of epithelioma basocellulare at the I Dermatology Clinic, Silesian Medical Academy, in Katowice].

    PubMed

    Brzezińska-Wcisło, L; Bogdanowski, T; Suwała-Jurczyk, B

    1990-01-01

    The therapeutic results are presented in cases of basocellular epithelioma treated by three methods. The best and most radical results were obtained by the surgical method, followed in the order of effectiveness by radiotherapy (45-55 kV) in a total dose of 4500-6000 R. In case of contraindications to these methods local chemotherapy was applied which was associated with a high proportion of failures (28.6%).

  2. Comparison of Arterial Spin-labeling Perfusion Images at Different Spatial Normalization Methods Based on Voxel-based Statistical Analysis.

    PubMed

    Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi

    2017-01-01

    Spatial normalization is a significant image pre-processing operation in statistical parametric mapping (SPM) analysis. The purpose of this study was to clarify the optimal method of spatial normalization for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods obtained by comparing patients with Alzheimer's disease or normal pressure hydrocephalus complicated with dementia and cognitively healthy subjects. We used the following methods: 3DT1-conventional based on spatial normalization using anatomical images; 3DT1-DARTEL based on spatial normalization with DARTEL using anatomical images; 3DT1-conventional template and 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized using the above methods; and ASL-DARTEL template created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. Our results showed that ASL-DARTEL template was small compared with the other two templates. Our SPM results obtained with ASL-DARTEL template method were inaccurate. Also, there were no significant differences between 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity, and precise anatomical location. Our SPM results suggest that we should perform spatial normalization with DARTEL using anatomical images.

  3. Development of a Coordinate Transformation method for direct georeferencing in map projection frames

    NASA Astrophysics Data System (ADS)

    Zhao, Haitao; Zhang, Bing; Wu, Changshan; Zuo, Zhengli; Chen, Zhengchao

    2013-03-01

    This paper develops a novel Coordinate Transformation method (CT-method), with which the orientation angles (roll, pitch, heading) of the local tangent frame of the GPS/INS system are transformed into those (omega, phi, kappa) of the map projection frame for direct georeferencing (DG). In particular, the orientation angles in the map projection frame are derived from a sequence of coordinate transformations. The effectiveness of the orientation angle transformation was verified by comparison with DG results obtained from conventional methods (Legat method and POSPac method) using empirical data. Moreover, the CT-method was also validated with simulated data. One advantage of the proposed method is that the orientation angles can be acquired simultaneously while calculating the position elements of the exterior orientation (EO) parameters and auxiliary point coordinates by coordinate transformation. These three methods were demonstrated and compared using empirical data. Empirical results show that the CT-method is as sound and effective as the Legat method. Compared with the POSPac method, the CT-method is more suitable for calculating EO parameters for DG in map projection frames. The DG accuracy of the CT-method and the Legat method is at the same level. DG results of all three methods have systematic errors in height due to inconsistent length projection distortion in the vertical and horizontal components, and these errors can be significantly reduced using the EO height correction technique in Legat's approach. Similar to the results obtained with empirical data, the effectiveness of the CT-method was also proved with simulated data. POSPac method: the method is presented in an Applanix POSPac software technical note (Hutton and Savina, 1997). It is implemented in the POSEO module of POSPac software.
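
    The angle transformation can be sketched as a composition of rotation matrices followed by extraction of omega-phi-kappa, but the exact rotation sequence, axis conventions and the local-to-map rotation depend on the GPS/INS and photogrammetric conventions in use; everything below (including treating only meridian convergence) is an assumption for illustration, not the CT-method as published.

        import numpy as np

        def rot_x(a): return np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
        def rot_y(a): return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
        def rot_z(a): return np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])

        def omega_phi_kappa(roll, pitch, heading, convergence=0.0):
            """Toy angle transformation (all angles in radians).

            Assumed conventions: the body-to-local-level rotation is built as
            Rz(-heading) @ Ry(pitch) @ Rx(roll); the map projection frame is assumed to
            differ from the local level frame only by the meridian convergence about the
            vertical axis.  Real systems also need boresight and axis-swap matrices.
            """
            R_nb = rot_z(-heading) @ rot_y(pitch) @ rot_x(roll)
            R_map = rot_z(convergence) @ R_nb

            # Extract omega-phi-kappa assuming the photogrammetric factorisation
            # R = Rx(omega) @ Ry(phi) @ Rz(kappa).
            phi = np.arcsin(R_map[0, 2])
            omega = np.arctan2(-R_map[1, 2], R_map[2, 2])
            kappa = np.arctan2(-R_map[0, 1], R_map[0, 0])
            return omega, phi, kappa

        print(np.degrees(omega_phi_kappa(np.radians(1.0), np.radians(-2.0), np.radians(45.0))))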

  4. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth order boundary value problems. The proposed method is based on the Legendre wavelet, in which Legendre polynomials are used. The mechanism of the method is to use collocation points that convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate and close to the exact solution as well as to the results of other methods. The proposed method is computationally more effective and leads to more accurate results than other methods from the literature.

  5. Rendering the "Not-So-Simple" Pendulum Experimentally Accessible.

    ERIC Educational Resources Information Center

    Jackson, David P.

    1996-01-01

    Presents three methods for obtaining experimental data related to acceleration of a simple pendulum. Two of the methods involve angular position measurements and the subsequent calculation of the acceleration while the third method involves a direct measurement of the acceleration. Compares these results with theoretical calculations and…

  6. First international collaborative study to evaluate rabies antibody detection method for use in monitoring the effectiveness of oral vaccination programmes in fox and raccoon dog in Europe.

    PubMed

    Wasniewski, M; Almeida, I; Baur, A; Bedekovic, T; Boncea, D; Chaves, L B; David, D; De Benedictis, P; Dobrostana, M; Giraud, P; Hostnik, P; Jaceviciene, I; Kenklies, S; König, M; Mähar, K; Mojzis, M; Moore, S; Mrenoski, S; Müller, T; Ngoepe, E; Nishimura, M; Nokireki, T; Pejovic, N; Smreczak, M; Strandbygaard, B; Wodak, E; Cliquet, F

    2016-12-01

    The most effective and sustainable method to control and eliminate rabies in wildlife is the oral rabies vaccination (ORV) of target species, namely foxes and raccoon dogs in Europe. According to WHO and OIE, the effectiveness of oral vaccination campaigns should be regularly assessed via disease surveillance and ORV antibody monitoring. Rabies antibodies are generally screened for in field animal cadavers, whose body fluids are often of poor quality. Therefore, the use of alternative methods such as the enzyme-linked immunosorbent assay (ELISA) has been proposed to improve reliability of serological results obtained on wildlife samples. We undertook an international collaborative study to determine if the commercial BioPro ELISA Rabies Ab kit is a reliable and reproducible tool for rabies serological testing. Our results reveal that the overall specificity evaluated on naive samples reached 96.7%, and the coefficients of concordance obtained for fox and raccoon dog samples were 97.2% and 97.5%, respectively. The overall agreement values obtained for the four marketed oral vaccines used in Europe were all equal to or greater than 95%. The coefficients of concordance obtained by laboratories ranged from 87.2% to 100%. The results of this collaborative study show good robustness and reproducibility of the BioPro ELISA Rabies Ab kit. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. The morphological changes of optically cleared cochlea using optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lee, Jaeyul; Song, Jaewon; Jeon, Mansik; Kim, Jeehyun

    2017-02-01

    In this study, we monitored the optical clearing effects of immersing ex vivo guinea pig cochlea samples in ethylenediaminetetraacetic acid (EDTA) in order to study the internal microstructures of the guinea pig cochlea. The imaging limitations due to the cochlea structures were overcome by the optical clearing technique. The study was then carried out to confirm the approximate immersion duration in EDTA-based optical clearing required to obtain the best depth visibility for guinea pig cochlea samples. Thus, we applied decalcification-based optical clearing to guinea pig cochlea samples to enhance the depth visualization of internal microstructures using swept source optical coherence tomography (OCT). The nondestructive two-dimensional OCT images obtained successfully illustrated the feasibility of the proposed method by providing clearly visible microstructures in the depth direction as a result of decalcification. The optimal clearing outcomes for the guinea pig cochlea were obtained after 14 consecutive days. The quantitative assessment results verified the increase of the intensity as well as of the thickness measurements of the internal microstructures. Following this method, difficulties in imaging internal cochlea microstructures of guinea pigs can be avoided. The obtained results verified that the depth visibility of the decalcified ex vivo guinea pig cochlea samples was enhanced. Therefore, the proposed EDTA-based optical clearing method for the guinea pig can be considered a potential approach for depth-enhanced OCT visualization.

  8. Development of a practical costing method for hospitals.

    PubMed

    Cao, Pengyu; Toyabe, Shin-Ichi; Akazawa, Kouhei

    2006-03-01

    To realize an effective cost control, a practical and accurate cost accounting system is indispensable in hospitals. In traditional cost accounting systems, volume-based costing (VBC) is the most popular method. In this method, the indirect costs are allocated to each cost object (services or units of a hospital) using a single indicator named a cost driver (e.g., labor hours, revenues or the number of patients). However, this method often results in rough and inaccurate results. The activity based costing (ABC) method introduced in the mid-1990s can provide more accurate results. With the ABC method, all events or transactions that cause costs are recognized as "activities", and a specific cost driver is prepared for each activity. Finally, the costs of activities are allocated to cost objects by the corresponding cost driver. However, it is much more complex and costly than other traditional cost accounting methods because the data collection for cost drivers is not always easy. In this study, we developed a simplified ABC (S-ABC) costing method to reduce the workload of ABC costing by reducing the number of cost drivers used in the ABC method. Using the S-ABC method, we estimated the cost of the laboratory tests, and results similar in accuracy to those of the ABC method were obtained (largest difference 2.64%). Simultaneously, this new method reduces the seven cost drivers used in the ABC method to four. Moreover, we performed an evaluation using other sample data from the physiological laboratory department to confirm the effectiveness of this new method. In conclusion, the S-ABC method provides two advantages in comparison to the VBC and ABC methods: (1) it can obtain accurate results, and (2) it is simpler to perform. Once we reduce the number of cost drivers by applying the proposed S-ABC method to the data for the ABC method, we can easily perform the cost accounting using few cost drivers after the second round of costing.
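
    The core ABC allocation step — distributing each activity's cost pool to cost objects in proportion to their consumption of that activity's cost driver — can be sketched in a few lines; the activities, drivers and figures below are invented, not the study's data.

        # Minimal activity-based costing allocation sketch (illustrative figures only).
        activity_costs = {          # cost pool per activity (currency units)
            "specimen_handling": 50_000,
            "instrument_runs":   120_000,
            "result_reporting":  30_000,
        }
        # Cost-driver consumption by each cost object (two laboratory test types).
        driver_usage = {
            "specimen_handling": {"test_A": 4_000, "test_B": 1_000},
            "instrument_runs":   {"test_A": 1_500, "test_B": 1_500},
            "result_reporting":  {"test_A": 6_000, "test_B": 2_000},
        }

        allocated = {obj: 0.0 for obj in ("test_A", "test_B")}
        for activity, pool in activity_costs.items():
            usage = driver_usage[activity]
            total = sum(usage.values())
            for obj, units in usage.items():
                allocated[obj] += pool * units / total    # allocate pro rata to driver units

        print(allocated)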

  9. Determination of polyphenolic compounds of red wines by UV-VIS-NIR spectroscopy and chemometrics tools.

    PubMed

    Martelo-Vidal, M J; Vázquez, M

    2014-09-01

    Spectral analysis is a quick and non-destructive method to analyse wine. In this work, trans-resveratrol, oenin, malvin, catechin, epicatechin, quercetin and syringic acid were determined in commercial red wines from DO Rías Baixas and DO Ribeira Sacra (Spain) by UV-VIS-NIR spectroscopy. Calibration models were developed using principal component regression (PCR) or partial least squares (PLS) regression. HPLC was used as the reference method. The results showed that reliable PLS models were obtained to quantify all polyphenols for Rías Baixas wines. For Ribeira Sacra, feasible models were obtained to determine quercetin, epicatechin, oenin and syringic acid. PCR calibration models showed worse prediction reliability than the PLS models. For red wines from mencía grapes, feasible models were obtained for catechin and oenin, regardless of the geographical origin. The results obtained demonstrate that UV-VIS-NIR spectroscopy can be used to determine individual polyphenolic compounds in red wines. Copyright © 2014 Elsevier Ltd. All rights reserved.
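
    A hedged sketch of PLS calibration on synthetic "spectra" using scikit-learn; the wavelength grid, number of latent variables and data are hypothetical, and real wine spectra would of course come from the instrument rather than a simulation.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Synthetic "UV-VIS-NIR spectra": 120 wines x 500 wavelengths, where the
        # absorbance depends linearly on a hidden polyphenol concentration plus noise.
        n_samples, n_wavelengths = 120, 500
        concentration = rng.uniform(5, 50, n_samples)              # e.g. mg/L of one polyphenol
        pure_spectrum = np.exp(-((np.arange(n_wavelengths) - 200) / 60.0) ** 2)
        X = np.outer(concentration, pure_spectrum) + rng.normal(0, 0.05, (n_samples, n_wavelengths))
        y = concentration

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
        pls = PLSRegression(n_components=5)
        pls.fit(X_train, y_train)
        print("R^2 on held-out wines:", round(pls.score(X_test, y_test), 3))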

  10. New solid state forms of antineoplastic 5-fluorouracil with anthelmintic piperazine

    NASA Astrophysics Data System (ADS)

    Moisescu-Goia, C.; Muresan-Pop, M.; Simon, V.

    2017-12-01

    The aim of the present study was to assess the formation of solid forms between the 5-fluorouracil chemotherapy drug and the anthelmintic piperazine. Two new solid forms of the antineoplastic agent 5-fluorouracil with anthelmintic piperazine were obtained by liquid assisted ball milling and slurry crystallization methods. The N-H hydrogen bonding donors and C=O hydrogen bonding acceptors of 5-fluorouracil allow it to form co-crystals with other drugs, delivering improved properties for medical applications, as proved for other compounds of pharmaceutical interest. Both new solid forms were investigated using X-ray powder diffraction (XRD), differential thermal analysis (DTA) and Fourier transform infrared (FTIR) spectroscopy. The XRD results show that new solid forms of 5-fluorouracil with piperazine were successfully synthesized by both methods. According to the FTIR results, the form prepared by the liquid assisted grinding process was obtained as a co-crystal and the other one, prepared by the slurry method, resulted as a salt.

  11. Cooperative parallel adaptive neighbourhood search for the disjunctively constrained knapsack problem

    NASA Astrophysics Data System (ADS)

    Quan, Zhe; Wu, Lei

    2017-09-01

    This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.

  12. Development of Water Softening Method of Intake in Magnitogorsk

    NASA Astrophysics Data System (ADS)

    Meshcherova, E. A.; Novoselova, J. N.; Moreva, J. A.

    2017-11-01

    This article contains an appraisal of the drinking water quality of Magnitogorsk intake. A water analysis was made which led to the conclusion that the standard for general water hardness was exceeded. As a result, it became necessary to develop a number of measures to reduce water hardness. To solve this problem all the necessary studies of the factors affecting the value of increased water hardness were carried out and the water softening method by using an ion exchange filter was proposed. The calculation of the cation-exchanger filling volume of the proposed filter is given in the article, its overall dimensions are chosen. The obtained calculations were confirmed by the results of laboratory studies by using the test installation. The research and laboratory tests results make the authors conclude that the proposed method should be used to obtain softened water for the requirements of SanPin.

  13. Efficient color correction method for smartphone camera-based health monitoring application.

    PubMed

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently been highlighted due to the rapid development of the hardware and software performance of smartphones. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may give non-identical health monitoring results when smartphone health monitoring applications monitor physiological information using their embedded cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that the color-corrected images obtained using the correction method have much smaller color intensity errors compared to the uncorrected images. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among images obtained from different smartphones.
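
    One common correction approach, which may or may not match the paper's exact method, is to fit a least-squares 3x3 matrix mapping one camera's RGB values of reference patches onto another's; the patch data below are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic RGB values (0..1) of 24 colour patches seen by a reference camera
        # and by a second smartphone camera with a different colour response.
        reference = rng.uniform(0, 1, (24, 3))
        true_mix = np.array([[0.90, 0.08, 0.02],
                             [0.05, 0.85, 0.10],
                             [0.02, 0.12, 0.86]])
        captured = reference @ true_mix.T + rng.normal(0, 0.01, (24, 3))

        # Least-squares 3x3 correction matrix M such that captured @ M ~= reference.
        M, *_ = np.linalg.lstsq(captured, reference, rcond=None)
        corrected = captured @ M

        print("mean abs error before:", np.abs(captured - reference).mean().round(4))
        print("mean abs error after: ", np.abs(corrected - reference).mean().round(4))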

  14. Comparison of infinite and wedge fringe settings in Mach Zehnder interferometer for temperature field measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haridas, Divya; P, Vibin Antony; Sajith, V.

    2014-10-15

    An interferometric method, which utilizes the interference of coherent light beams, is used to determine the temperature distribution in the vicinity of a vertical heater plate. The optical components are arranged so as to obtain wedge fringe and infinite fringe patterns, and the isotherms obtained in each case were compared. In the wedge fringe setting, image processing techniques have been used to obtain isotherms by digital subtraction of the initial parallel fringe pattern from the deformed fringe pattern. The experimental results obtained are compared with theoretical correlations. The merits and demerits of the fringe analysis techniques are discussed on the basis of the experimental results.

  15. Fine-grained indexing of the biomedical literature: MeSH subheading attachment for a MEDLINE indexing tool.

    PubMed

    Névéol, Aurélie; Shooshan, Sonya E; Mork, James G; Aronson, Alan R

    2007-10-11

    This paper reports on the latest results of an Indexing Initiative effort addressing the automatic attachment of subheadings to the MeSH main headings recommended by the NLM's Medical Text Indexer. Several linguistic and statistical approaches are used to retrieve and attach the subheadings. Continuing collaboration with NLM indexers also provided insight into how automatic methods can better enhance indexing practice. The methods were evaluated on a corpus of 50,000 MEDLINE citations. For main heading/subheading pair recommendations, the best precision is obtained with a post-processing rule method (58%), while the best recall is obtained by pooling all methods (64%). For stand-alone subheading recommendations, the best performance is obtained with the PubMed Related Citations algorithm. Significant progress has been made in terms of subheading coverage. After further evaluation, some of this work may be integrated into the MEDLINE indexing workflow.

  16. Adaptive DSPI phase denoising using mutual information and 2D variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Xiao, Qiyang; Li, Jian; Wu, Sijin; Li, Weixian; Yang, Lianxiang; Dong, Mingli; Zeng, Zhoumo

    2018-04-01

    In digital speckle pattern interferometry (DSPI), noise interference leads to a low peak signal-to-noise ratio (PSNR) and measurement errors in the phase map. This paper proposes an adaptive DSPI phase denoising method based on two-dimensional variational mode decomposition (2D-VMD) and mutual information. Firstly, the DSPI phase map is subjected to 2D-VMD in order to obtain a series of band-limited intrinsic mode functions (BLIMFs). Then, on the basis of characteristics of the BLIMFs and in combination with mutual information, a self-adaptive denoising method is proposed to obtain noise-free components containing the primary phase information. The noise-free components are reconstructed to obtain the denoising DSPI phase map. Simulation and experimental results show that the proposed method can effectively reduce noise interference, giving a PSNR that is higher than that of two-dimensional empirical mode decomposition methods.
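
    The selection idea — keep only those decomposition components whose mutual information with the noisy input is high — can be sketched with a histogram-based MI estimate. The "modes" below are simple stand-ins rather than true 2D-VMD BLIMFs, and the above-average-MI rule is an assumed placeholder for the paper's adaptive criterion.

        import numpy as np

        def mutual_information(a, b, bins=64):
            """Histogram-based MI estimate between two equally shaped arrays."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

        rng = np.random.default_rng(0)
        x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
        phase = np.sin(8 * np.pi * x) + 0.8 * np.cos(6 * np.pi * y)     # phase-map-like pattern
        noisy = phase + rng.normal(0, 0.6, phase.shape)

        # Stand-in "modes": the clean pattern, a small residual and pure noise,
        # playing the role of BLIMFs produced by 2D-VMD.
        modes = [phase, 0.1 * np.cos(20 * np.pi * x), rng.normal(0, 0.6, phase.shape)]

        mi = np.array([mutual_information(noisy, m) for m in modes])
        keep = mi >= mi.mean()                 # assumed adaptive rule: above-average MI
        denoised = sum(m for m, k in zip(modes, keep) if k)
        print("MI per mode:", np.round(mi, 3), "kept:", keep)
        print("reconstruction error (std):", np.round(np.std(denoised - phase), 3))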

  17. Exact vibration analysis of a double-nanobeam-systems embedded in an elastic medium by a Hamiltonian-based method

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenhuan; Li, Yuejie; Fan, Junhai; Rong, Dalun; Sui, Guohao; Xu, Chenghui

    2018-05-01

    A new Hamiltonian-based approach is presented for finding exact solutions for transverse vibrations of double-nanobeam systems embedded in an elastic medium. The continuum model is established within the framework of the symplectic methodology and the nonlocal Euler-Bernoulli and Timoshenko beam theories. The symplectic eigenfunctions are obtained after expressing the governing equations in Hamiltonian form. Exact frequency equations, vibration modes and displacement amplitudes are obtained by using the symplectic eigenfunctions and end conditions. Comparisons with previously published work are presented to illustrate the accuracy and reliability of the proposed method. The comprehensive results for arbitrary boundary conditions could serve as benchmark results for verifying numerically obtained solutions. In addition, a study on the difference between the nonlocal beam and the nonlocal plate is also included.

  18. Symmetry Reductions, Integrability and Solitary Wave Solutions to High-Order Modified Boussinesq Equations with Damping Term

    NASA Astrophysics Data System (ADS)

    Yan, Zhen-Ya; Xie, Fu-Ding; Zhang, Hong-Qing

    2001-07-01

    Both the direct method due to Clarkson and Kruskal and the improved direct method due to Lou are extended to reduce the high-order modified Boussinesq equation with the damping term (HMBEDT) arising in the general Fermi-Pasta-Ulam model. As a result, several types of similarity reductions are obtained. From the reduction results obtained, it is easy to show that the nonlinear wave equation is not integrable in the sense of Ablowitz's conjecture. In addition, kink-shaped solitary wave solutions, which are of important physical significance, are found for the HMBEDT based on the obtained reduction equation. The project was supported by the National Natural Science Foundation of China under Grant No. 19572022, the National Key Basic Research Development Project Program of China under Grant No. G1998030600 and the Doctoral Foundation of China under Grant No. 98014119.

  19. Ice Shape Scaling for Aircraft in SLD Conditions

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Tsao, Jen-Ching

    2008-01-01

    This paper summarizes recent NASA research into the scaling of SLD conditions, using data from both SLD and Appendix C tests. Scaling results obtained by applying existing methods for size and test-condition scaling are reviewed. Large-feather-growth issues, including scaling approaches, are discussed briefly. The material included applies only to unprotected, unswept geometries. Within the limits of the conditions tested to date, the results show that the similarity parameters needed for Appendix C scaling can also be used for SLD scaling, and no additional parameters are required. These results were based on visual comparisons of reference and scale ice shapes. Nearly all of the experimental results presented were obtained in sea-level tunnels. The currently recommended methods to scale model size, icing limit and test conditions are described.

  20. A Method for Obtaining Large Populations of Synchronized Caenorhabditis elegans Dauer Larvae.

    PubMed

    Ow, Maria C; Hall, Sarah E

    2015-01-01

    The C. elegans dauer is an attractive model with which to investigate fundamental biological questions, such as how environmental cues are sensed and are translated into developmental decisions through a series of signaling cascades that ultimately result in a transformed animal. Here we describe a simple method of using egg white plates to obtain highly synchronized purified dauers that can be used in downstream applications requiring large quantities of dauers or postdauer animals.

  1. An Interactive Scheduling Method for Railway Rolling Stock Allocation

    NASA Astrophysics Data System (ADS)

    Otsuki, Tomoshi; Nakajima, Masayoshi; Fuse, Toru; Shimizu, Tadashi; Aisu, Hideyuki; Yasumoto, Takanori; Kaneko, Kenichi; Yokoyama, Nobuyuki

    Experts working for railway schedule planners still have to devote considerable time and effort to creating rolling stock allocation plans. In this paper, we propose a semiautomatic planning method for creating these plans. Our scheduler is able to interactively deal with flexible constraint-expression inputs and to output easy-to-understand failure messages. Owing to these useful features, the scheduler can provide results that are comparable to those obtained by experts, and it obtains them faster than before.

  2. Application of P-wave hybrid theory to the scattering of electrons from He+ and resonances in He and H-

    NASA Astrophysics Data System (ADS)

    Bhatia, A. K.

    2012-09-01

    The P-wave hybrid theory of electron-hydrogen elastic scattering [Bhatia, Phys. Rev. A 85, 052708 (2012)] is applied to P-wave scattering from the He+ ion. In this method, both short-range and long-range correlations are included in the Schrödinger equation at the same time, by using a combination of a modified method of polarized orbitals and the optical potential formalism. The short-range correlation functions are of Hylleraas type. It is found that the phase shifts are not significantly affected by the modification of the target function by a method similar to the method of polarized orbitals, and they are close to the phase shifts calculated earlier by Bhatia [Phys. Rev. A 69, 032714 (2004)]. This indicates that the correlation function is general enough to include the target distortion (polarization) in the presence of the incident electron. The important fact is that in the present calculation only a 20-term correlation function is needed in the wave function to obtain similar results, compared to the 220-term wave function required in the above-mentioned calculation. Results for the phase shifts, obtained in the present hybrid formalism, are rigorous lower bounds to the exact phase shifts. The lowest P-wave resonances in the He atom and the hydrogen negative ion (H-) have also been calculated and compared with the results obtained using the Feshbach projection operator formalism [Bhatia and Temkin, Phys. Rev. A 11, 2018 (1975)] and also with the results of other calculations. It is concluded that accurate resonance parameters can be obtained by the present method, which has the advantage of including corrections due to neighboring resonances, bound states, and the continuum in which these resonances are embedded.

  3. A FEM-based method to determine the complex material properties of piezoelectric disks.

    PubMed

    Pérez, N; Carbonari, R C; Andrade, M A B; Buiochi, F; Adamowski, J C

    2014-08-01

    Numerical simulations allow the modeling of piezoelectric devices and ultrasonic transducers. However, the accuracy of the results is limited by how precisely the elastic, dielectric and piezoelectric properties of the piezoelectric material are known. To introduce the energy losses, these properties can be represented by complex numbers, where the real part of the model essentially determines the resonance frequencies and the imaginary part determines the amplitude of each resonant mode. In this work, a method based on the Finite Element Method (FEM) is modified to obtain the imaginary material properties of piezoelectric disks. The material properties are determined from the electrical impedance curve of the disk, which is measured with an impedance analyzer. The method consists of finding the material properties that minimize the error between the experimental and numerical impedance curves over a wide range of frequencies. The proposed methodology starts with a sensitivity analysis of each parameter, determining the influence of each parameter on a set of resonant modes. The sensitivity results are used to implement a preliminary algorithm that approaches the solution, in order to prevent the search from becoming trapped in a local minimum. The method is applied to determine the material properties of a Pz27 disk sample from Ferroperm. The obtained properties are used to calculate the electrical impedance curve of the disk with a Finite Element algorithm, which is compared with the experimental electrical impedance curve. Additionally, the results were validated by comparing the numerical displacement profile with the displacements measured by a laser Doppler vibrometer. The comparison between the numerical and experimental results shows excellent agreement for both the electrical impedance curve and the displacement profile over the disk surface. The agreement between the numerical and experimental displacement profiles shows that, although only the electrical impedance curve is considered in the adjustment procedure, the obtained material properties allow the displacement amplitude to be simulated accurately. Copyright © 2014 Elsevier B.V. All rights reserved.
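
    As a rough sketch of the fitting loop described above (not the paper's code), the snippet below adjusts a parameter vector so that a simulated impedance curve matches the measured one; simulate_impedance is a placeholder standing in for the FEM solver, and the complex material constants are assumed to be packed into the real-valued vector p.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_impedance(p, freqs):
    """Placeholder for the FEM model: return the complex electrical impedance of the
    disk at each frequency for the material constants encoded in the vector p."""
    raise NotImplementedError("call the finite element solver here")

def fit_material_properties(p0, freqs, z_measured):
    """Find the parameter vector minimizing the gap between measured and simulated curves."""
    def residuals(p):
        z_sim = simulate_impedance(p, freqs)
        # Compare log-magnitudes so sharp resonance peaks do not dominate the fit.
        return np.log10(np.abs(z_sim)) - np.log10(np.abs(z_measured))
    return least_squares(residuals, p0, method="lm").x
```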

  4. Bioanalysis works in the IAA AMS facility: Comparison of AMS analytical method with LSC method in human mass balance study

    NASA Astrophysics Data System (ADS)

    Miyaoka, Teiji; Isono, Yoshimi; Setani, Kaoru; Sakai, Kumiko; Yamada, Ichimaro; Sato, Yoshiaki; Gunji, Shinobu; Matsui, Takao

    2007-06-01

    Institute of Accelerator Analysis Ltd. (IAA) is the first Contract Research Organization in Japan providing Accelerator Mass Spectrometry (AMS) analysis services for carbon dating and bioanalysis work. The 3 MV AMS machines are maintained under validated analysis methods using multiple control compounds, and these AMS systems have been confirmed to have sufficient reliability and sensitivity for each objective. Samples for bioanalysis are graphitized on our own purification lines, which automatically measure the total carbon content of each sample. In this paper, we present the use of AMS analysis in human mass balance and metabolism profiling studies with the IAA 3 MV AMS, comparing results obtained from the same samples with liquid scintillation counting (LSC). Human samples such as plasma, urine and feces were obtained from four healthy volunteers orally administered the 14C-labeled drug Y-700, a novel xanthine oxidase inhibitor, with a radioactivity of about 3 MBq (85 μCi). For AMS measurement, these samples were diluted 100-10,000-fold with pure water or blank samples. The results indicated that the AMS method had a good correlation with the LSC method (e.g. plasma: r = 0.998, urine: r = 0.997, feces: r = 0.997), and that the drug recovery in the excreta exceeded 92%. The metabolite profiles of plasma, urine and feces obtained with HPLC-AMS corresponded to radio-HPLC results measured at a much higher radioactivity level. These results revealed that AMS analysis at IAA is useful for measuring 14C concentrations in bioanalysis studies at very low radioactivity levels.

  5. Scheduling of House Development Projects with CPM and PERT Method for Time Efficiency (Case Study: House Type 36)

    NASA Astrophysics Data System (ADS)

    Kholil, Muhammad; Nurul Alfa, Bonitasari; Hariadi, Madjumsyah

    2018-04-01

    Network planning is one of the management techniques used to plan and control the implementation of a project, showing the relationships between activities. The objective of this research is to set up network planning for a house construction project at CV. XYZ and to determine the role of network planning in improving time efficiency, so that the optimal project completion period can be obtained. This research uses a descriptive method, with data collected through direct observation of the company, interviews, and a literature study. The result of this research is an optimal time plan for the project work. Based on the results, it can be concluded that the use of both methods in scheduling the house construction project has a very significant effect on the completion time of the project. With the CPM (Critical Path Method), the company can complete the project in 131 days, while the PERT (Program Evaluation and Review Technique) method takes 136 days. The PERT calculation gave Z = -0.66, or 0.2546 from the normal distribution table, and a completion probability of 74.54%. This means that the probability that the house construction project activities can be completed on time is fairly high. Without either method, project completion takes 173 days. Using the CPM method, the company can therefore save up to 42 days and gain time efficiency through network planning.
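
    For illustration only, the following sketch reproduces the generic PERT arithmetic the abstract refers to (expected durations, critical-path variance, Z-score and completion probability); the activity durations and the 131-day target are made-up placeholders, not the project's actual data.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical critical-path activities: (optimistic, most likely, pessimistic) durations in days
activities = [(28, 32, 40), (30, 34, 44), (25, 30, 38), (30, 35, 42)]

te = [(a + 4 * m + b) / 6 for a, m, b in activities]   # PERT expected durations
var = [((b - a) / 6) ** 2 for a, m, b in activities]   # activity variances

expected = sum(te)          # expected duration of the critical path
sigma = sqrt(sum(var))      # standard deviation of the critical path

target = 131                # hypothetical target completion time in days
z = (target - expected) / sigma
print(f"Z = {z:.2f}, P(finish within {target} days) = {norm.cdf(z):.2%}")
```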

  6. An integrated bioanalytical method development and validation approach: case studies.

    PubMed

    Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M

    2012-10-01

    We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on the analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% of incurred samples within 9.2% of the original values. In addition, the rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near perfect results: (1) a zero run failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% of incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.

  7. Accelerated Testing of Polymeric Composites Using the Dynamic Mechanical Analyzer

    NASA Technical Reports Server (NTRS)

    Abdel-Magid, Becky M.; Gates, Thomas S.

    2000-01-01

    Creep properties of IM7/K3B composite material were obtained using three accelerated test methods at elevated temperatures. Results of flexural creep tests using the dynamic mechanical analyzer (DMA) were compared with results of conventional tensile and compression creep tests. The procedures of the three test methods are described and the results are presented. Despite minor differences in the time shift factor of the creep compliance curves, the DMA results compared favorably with the results from the tensile and compressive creep tests. Some insight is given into establishing correlations between creep compliance in flexure and creep compliance in tension and compression. It is shown that with careful consideration of the limitations of flexure creep, a viable and reliable accelerated test procedure can be developed using the DMA to obtain the viscoelastic properties of composites in extreme environments.

  8. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
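
    For context, a minimal sketch of the traditional mono-exponential back-extrapolation that the study compares against (not the authors' proposed optimal method): fit ln(concentration) against time over the early samples, extrapolate to t = 0, and divide the injected dose by the extrapolated concentration. The sample values are invented.

```python
import numpy as np

def plasma_volume_backextrapolation(times_min, conc_mg_per_l, dose_mg):
    """Traditional mono-exponential back-extrapolation:
    fit ln(C) = ln(C0) - k*t over early samples and return PV = dose / C0 (in litres)."""
    slope, intercept = np.polyfit(times_min, np.log(conc_mg_per_l), 1)
    c0 = np.exp(intercept)          # extrapolated concentration at t = 0
    return dose_mg / c0

# Invented ICG samples drawn 2-5 minutes after a 25 mg injection
t = np.array([2.0, 3.0, 4.0, 5.0])
c = np.array([6.1, 5.5, 5.0, 4.5])   # mg/L
print(f"estimated plasma volume: {plasma_volume_backextrapolation(t, c, 25.0):.2f} L")
```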

  9. A novel method of estimation of lipophilicity using distance-based topological indices: dominating role of equalized electronegativity.

    PubMed

    Agrawal, Vijay K; Gupta, Madhu; Singh, Jyoti; Khadikar, Padmakar V

    2005-03-15

    An attempt is made to propose yet another method of estimating the lipophilicity of a heterogeneous set of 223 compounds. The method is based on the use of equalized electronegativity along with topological indices. It was observed that excellent results are obtained in multiparametric regression upon the introduction of indicator parameters. The results are discussed critically on the basis of various statistical parameters.
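
    As a hedged illustration of what such a multiparametric regression with indicator parameters might look like (the descriptor set and indicator definitions here are placeholders, not those of the paper):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_logp_model(topological_indices, indicator_params, log_p):
    """Multiparametric regression of log P on topological indices (including equalized
    electronegativity) plus 0/1 indicator parameters flagging structural classes."""
    X = np.hstack([topological_indices, indicator_params])
    model = LinearRegression().fit(X, log_p)
    return model, model.score(X, log_p)     # fitted model and R^2

# Hypothetical usage with a descriptor matrix, indicator columns and measured log P values:
# model, r2 = fit_logp_model(X_topo, indicators, measured_logp)
```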

  10. Simple and Accurate Method for Central Spin Problems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Manolopoulos, David E.

    2018-06-01

    We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.

  11. A Differential Scanning Calorimetry Method for Construction of Continuous Cooling Transformation Diagram of Blast Furnace Slag

    NASA Astrophysics Data System (ADS)

    Gan, Lei; Zhang, Chunxia; Shangguan, Fangqin; Li, Xiuping

    2012-06-01

    The continuous cooling crystallization of a blast furnace slag was studied by applying the differential scanning calorimetry (DSC) method. A kinetic model describing the evolution of the degree of crystallization with time was obtained. Bulk cooling experiments on the molten slag, coupled with numerical simulation of heat transfer, were conducted to validate the results of the DSC method. The degrees of crystallization of the samples from the bulk cooling experiments were estimated by means of X-ray diffraction (XRD) and the DSC method. It was found that the results from the DSC cooling and bulk cooling experiments are in good agreement. The continuous cooling transformation (CCT) diagram of the blast furnace slag was constructed according to the crystallization kinetic model and the experimental data. The obtained CCT diagram is characterized by two crystallization noses in different temperature ranges.
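
    The abstract does not state the form of the kinetic model, so the sketch below simply assumes an Avrami-type expression fitted to hypothetical DSC crystallization fractions, which is one common way to extract a CCT data point such as the time to 50% crystallization.

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, k, n):
    """Avrami-type isothermal crystallization kinetics: X(t) = 1 - exp(-k * t**n)."""
    return 1.0 - np.exp(-k * t**n)

# Hypothetical crystallized fractions extracted from a DSC curve at one cooling condition
t = np.array([10.0, 20.0, 40.0, 60.0, 90.0, 120.0])   # seconds
X = np.array([0.05, 0.18, 0.55, 0.80, 0.95, 0.99])    # degree of crystallization

(k, n), _ = curve_fit(avrami, t, X, p0=[1e-3, 2.0])
t50 = (np.log(2.0) / k) ** (1.0 / n)   # time to 50% crystallization: one point for the CCT diagram
print(f"k = {k:.2e}, n = {n:.2f}, t50 = {t50:.1f} s")
```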

  12. Implementation of the reduced charge state method of calculating impurity transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crume, E.C. Jr.; Arnurius, D.E.

    1982-07-01

    A recent review article by Hirshman and Sigmar includes expressions needed to calculate the parallel friction coefficients, the essential ingredients of the plateau-Pfirsch-Schluter transport coefficients, using the method of reduced charge states. These expressions have been collected and an expanded notation introduced in some cases to facilitate differentiation between reduced charge state and full charge state quantities. A form of the Coulomb logarithm relevant to the method of reduced charge states is introduced. This method of calculating the friction coefficients f_ij^ab has been implemented in the impurity transport simulation code IMPTAR and has resulted in an overall reduction in computation time of approximately 25% for a typical simulation of impurity transport in the Impurity Study Experiment (ISX-B). Results obtained using this treatment are almost identical to those obtained using an earlier approximate theory of Hirshman.

  13. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
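
    A minimal sketch of the back-projection idea (not the authors' exact formulation): with the current intrinsics and extrinsics, cast a ray through each detected corner, intersect it with the checkerboard plane Z = 0 in board coordinates, and minimize the 3D distance to the ideal corner positions. Lens distortion is ignored and only the extrinsics are refined here for brevity.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def back_project_to_board(uv, K, R, t):
    """Cast rays through the detected pixels and intersect them with the board plane
    Z = 0 (board frame); returns the 3D intersection points in board coordinates."""
    rays = (np.linalg.inv(K) @ np.column_stack([uv, np.ones(len(uv))]).T).T
    n = R[:, 2]                      # board-plane normal expressed in the camera frame
    s = (n @ t) / (rays @ n)         # scale along each ray to reach the plane
    pts_cam = rays * s[:, None]
    return (R.T @ (pts_cam - t).T).T

def residuals(params, uv, board_pts, K):
    """3D error between back-projected corners and the ideal board corners."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:6]
    return (back_project_to_board(uv, K, R, t) - board_pts).ravel()

# Given detected corners uv (N x 2), ideal corners board_pts (N x 3 with Z = 0),
# intrinsics K and an initial guess x0 (rotation vector + translation):
# refined = least_squares(residuals, x0, args=(uv, board_pts, K)).x
```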

  14. Legionella in water samples: how can you interpret the results obtained by quantitative PCR?

    PubMed

    Ditommaso, Savina; Ricciardi, Elisa; Giacomuzzi, Monica; Arauco Rivera, Susan R; Zotti, Carla M

    2015-02-01

    Evaluation of the potential risk associated with Legionella has traditionally been determined from culture-based methods. Quantitative polymerase chain reaction (qPCR) is an alternative tool that offers rapid, sensitive and specific detection of Legionella in environmental water samples. In this study we compare the results obtained by conventional qPCR (iQ-Check™ Quanti Legionella spp.; Bio-Rad) and by the culture method on artificial samples prepared in Page's saline by addition of Legionella pneumophila serogroup 1 (ATCC 33152), and we analyse the selective quantification of viable Legionella cells by the qPCR-PMA method. The amount of Legionella DNA (GU) determined by qPCR was 28-fold higher than the load detected by culture (CFU). Applying qPCR combined with PMA treatment, we obtained a 98.5% reduction of the qPCR signal from dead cells. We observed a dissimilarity in the ability of PMA to suppress the PCR signal in samples with different amounts of bacteria: the effective elimination of detection signals by PMA depended on the concentration of GU, and increasing amounts of cells resulted in higher reduction values. Using the results from this study we created an algorithm to facilitate the interpretation of viable cell level estimation with qPCR-PMA. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    NASA Astrophysics Data System (ADS)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model for the growth of the microalga Botryococcus braunii sp. by the Least-Squares method. The Monod equation is a non-linear equation that can be transformed into a linear form and solved by the Least-Squares linear regression method. Meanwhile, the Gauss-Newton method is an alternative way to solve the non-linear Least-Squares problem, with the aim of obtaining the Monod model parameters by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for the microalga Botryococcus braunii sp. can be estimated by the Least-Squares method. However, the parameter values estimated by the non-linear Least-Squares method are more accurate than those obtained by the linear Least-Squares method, since the SSE of the non-linear method is smaller than that of the linear method.
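
    A small sketch of the two estimation routes described above, using invented growth-rate data: a linear Least-Squares fit on the Lineweaver-Burk (linearized) form of the Monod equation, followed by a non-linear fit (Gauss-Newton/Levenberg-Marquardt type via curve_fit) started from the linear estimates, with the SSE of each reported for comparison.

```python
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    return mu_max * S / (Ks + S)

# Invented substrate concentrations and measured specific growth rates
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
mu = np.array([0.10, 0.17, 0.25, 0.32, 0.38, 0.42])

# Linear Least-Squares on the Lineweaver-Burk form: 1/mu = (Ks/mu_max)*(1/S) + 1/mu_max
slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

# Non-linear Least-Squares directly on the Monod equation, started from the linear estimates
(mu_max_nl, Ks_nl), _ = curve_fit(monod, S, mu, p0=[mu_max_lin, Ks_lin])

for label, m, k in [("linear", mu_max_lin, Ks_lin), ("non-linear", mu_max_nl, Ks_nl)]:
    sse = np.sum((mu - monod(S, m, k)) ** 2)
    print(f"{label:>10}: mu_max = {m:.3f}, Ks = {k:.3f}, SSE = {sse:.2e}")
```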

  16. Systematization method for distinguishing plastic groups by using NIR spectroscopy.

    PubMed

    Kaihara, Mikio; Satoh, Minami; Satoh, Minoru

    2007-07-01

    A systematic classification method for polymers based on near-infrared (NIR) spectra is not yet available, which is why we have been searching for one. Because raw NIR spectra usually have few obvious peaks, the spectra were pretreated with second derivatives to obtain well-modulated spectra. After the pretreatment, we applied classification and regression trees (CART) to discriminate the polymer species from the spectra. As a result, we obtained a relatively simple classification tree. Judging from the obtained splitting conditions and the classified polymers, we concluded that knowledge of the chemical functional groups suggested by the important wavelength regions is not always applicable to this classification tree. However, we clarified the splitting rules for polymer species from the NIR spectral point of view.
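
    As an illustrative sketch only (parameter choices are assumptions, not the paper's), a Savitzky-Golay filter can supply the second-derivative pretreatment and scikit-learn's DecisionTreeClassifier can serve as a CART implementation whose splitting rules can then be inspected:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.tree import DecisionTreeClassifier, export_text

def second_derivative_spectra(spectra, window=21, polyorder=3):
    """Savitzky-Golay second derivative of each NIR spectrum (rows = samples)."""
    return savgol_filter(spectra, window_length=window, polyorder=polyorder,
                         deriv=2, axis=1)

def build_classification_tree(spectra, labels, max_depth=5):
    """Fit a CART-style decision tree on the pretreated spectra."""
    X = second_derivative_spectra(np.asarray(spectra))
    tree = DecisionTreeClassifier(criterion="gini", max_depth=max_depth)
    return tree.fit(X, labels)

# Inspecting the splitting rules (feature indices correspond to wavelength positions):
# tree = build_classification_tree(spectra, polymer_species)
# print(export_text(tree))
```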

  17. Spin related transport in two pyrene and Triphenylene graphene nanodisks using NEGF method

    NASA Astrophysics Data System (ADS)

    Taghilou, Hamed; Fathi, Davood

    2018-07-01

    The present study evaluates the spin polarization in two graphene nanoflakes, pyrene and Triphenylene. All calculations are performed using the non-equilibrium Green's function (NEGF) method. The obtained results show that graphene has no intrinsic magnetic property and that using the pyrene nanoflake results in better spin switching at extreme magnetic fields. In contrast, when magnetized electrodes are applied, different spin polarization diagrams are obtained depending on the direction of magnetization of the two electrodes (parallel or anti-parallel). In this situation, it is observed that, with magnetized electrodes, the Triphenylene nanoflake reaches better spin switching.

  18. Lunar crater depths from orbiter IV long-focus photographs

    USGS Publications Warehouse

    Arthur, D.W.G.

    1974-01-01

    The paper presents the method and results for the determination of the depths of more than 1900 small lunar craters from measures of shadows on the long-focus pictures obtained by Lunar Orbiter IV. The method for converting the measured shadow length into the true length of the shadow hypotenuse in nature is new and is applicable to other planetary bodies, provided comparable spacecraft ephemerides are available. The measures were made with a simple surveyor's plotting scale on the standard Orbiter IV photographic enlargements. The results indicate that the smaller lunar craters (D < 30 km) are appreciably deeper than indicated by earlier work using imagery obtained at terrestrial observatories. © 1974.
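
    The paper's hypotenuse correction is not reproduced here, but the underlying shadow geometry is simple: for a shadow cast on a level floor, the depth is approximately the shadow length times the tangent of the solar elevation, as in this small sketch.

```python
import math

def crater_depth_km(shadow_length_km, solar_elevation_deg):
    """Basic shadow geometry for a shadow falling on a level crater floor;
    the paper adds further corrections for the spacecraft viewing geometry."""
    return shadow_length_km * math.tan(math.radians(solar_elevation_deg))

# e.g. a 2.5 km shadow under a 10-degree sun implies a rim-to-floor depth of about 0.44 km
print(round(crater_depth_km(2.5, 10.0), 2))
```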

  19. A Comparison of Methods to Measure the Magnetic Moment of Magnetotactic Bacteria through Analysis of Their Trajectories in External Magnetic Fields

    PubMed Central

    Fradin, Cécile

    2013-01-01

    Magnetotactic bacteria possess organelles called magnetosomes that confer a magnetic moment on the cells, resulting in their partial alignment with external magnetic fields. Here we show that analysis of the trajectories of cells exposed to an external magnetic field can be used to measure the average magnetic dipole moment of a cell population in at least five different ways. We apply this analysis to movies of Magnetospirillum magneticum AMB-1 cells, and compare the values of the magnetic moment obtained in this way to those obtained by direct measurement of magnetosome dimensions from electron micrographs. We find that methods relying on the viscous relaxation of the cell orientation give results comparable to those obtained by magnetosome measurements, whereas methods relying on statistical mechanics assumptions give systematically lower values of the magnetic moment. Since the observed distribution of magnetic moments in the population is not sufficient to explain this discrepancy, our results suggest that non-thermal random noise is present in the system, implying that a magnetotactic bacterial population should not be considered as similar to a paramagnetic material. PMID:24349185
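
    One of the statistical-mechanics approaches mentioned above can be sketched as inverting the Langevin function, <cos θ> = coth(mB/kT) - kT/(mB), for the moment m; as the abstract notes, this thermal-equilibrium assumption is exactly what leads to underestimated moments when non-thermal noise is present. The numbers below are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

KB = 1.380649e-23   # Boltzmann constant, J/K

def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x

def moment_from_alignment(mean_cos_theta, b_tesla, temp_k=300.0):
    """Invert <cos(theta)> = L(mB/kT) to estimate the magnetic moment m in A m^2,
    assuming purely thermal orientation noise."""
    x = brentq(lambda x: langevin(x) - mean_cos_theta, 1e-6, 1e4)
    return x * KB * temp_k / b_tesla

# Example: a population that is 80% aligned on average in a 0.5 mT field
print(f"{moment_from_alignment(0.80, 0.5e-3):.2e} A m^2")
```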

  20. Comparative spectral analysis of veterinary powder product by continuous wavelet and derivative transforms

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru

    2007-10-01

    Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative transform (classical derivative spectrophotometry). Neither of the two proposed analytical methods requires a chemical separation step. In the first step, several wavelet families were tested to find an optimal CWT for processing the overlapping signals of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of the two analytical approaches was verified by analyzing various synthetic mixtures of chlortetracycline and benzocaine, and they were applied to real samples of the veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.
