NASA Astrophysics Data System (ADS)
Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko
Field measurements using the self-potential method with a copper sulfate electrode were performed at the base of a riverbank of the Watarase River, where leakage is a problem, in order to examine the leakage characteristics. The measured profiles showed the typical S-shape that indicates flowing groundwater. The results agreed well with measurements made by the Ministry of Land, Infrastructure and Transport. Results of 1 m depth ground temperature detection and chain-array detection also agreed well with those of the self-potential method. The correlation between self-potential value and groundwater velocity was examined in a model experiment, which showed a clear correlation. These results indicate that the self-potential method is an effective way to examine the groundwater characteristics at the base of a riverbank with a leakage problem.
Laser notching ceramics for reliable fracture toughness testing
Barth, Holly D.; Elmer, John W.; Freeman, Dennis C.; ...
2015-09-19
A new method for notching ceramics was developed using a picosecond laser for fracture toughness testing of alumina samples. The test geometry incorporated a single-edge V-notch that was cut using picosecond laser micromachining. This technique has been used in the past for cutting ceramics and is known to remove material with little to no thermal effect on the surrounding material matrix. This study showed that laser-assisted machining for fracture toughness testing of ceramics was reliable, quick, and cost effective. In order to assess the laser-notched single-edge-V-notch beam method, fracture toughness results were compared to results from other, more traditional methods, specifically the surface-crack-in-flexure and chevron-notch bend tests. Lastly, post-failure measurements showed that picosecond laser notching produced precise notches, and the measured fracture toughness results showed improved consistency compared to traditional fracture toughness methods.
A Runge-Kutta discontinuous finite element method for high speed flows
NASA Technical Reports Server (NTRS)
Bey, Kim S.; Oden, J. T.
1991-01-01
A Runge-Kutta discontinuous finite element method is developed for hyperbolic systems of conservation laws in two space variables. The discontinuous Galerkin spatial approximation to the conservation laws results in a system of ordinary differential equations which are marched in time using Runge-Kutta methods. Numerical results for the two-dimensional Burgers' equation show that the method is (p+1)-order accurate in time and space, where p is the degree of the polynomial approximation of the solution within an element, and that it is capable of capturing shocks over a single element without oscillations. Results for this problem also show that the accuracy of the solution in smooth regions is unaffected by the local projection and that the accuracy in smooth regions increases as p increases. Numerical results for the Euler equations show that the method captures shocks without oscillations and with higher resolution than a first-order scheme.
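The method-of-lines structure described here (a spatial discretization producing an ODE system du/dt = L(u) that is advanced with Runge-Kutta stages) can be sketched in a few lines. The upwind operator below is only a stand-in for the discontinuous Galerkin residual, and the grid, CFL number, and test problem are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

# Method-of-lines sketch: a spatial discretization (here a first-order
# upwind operator standing in for the DG residual) reduces the PDE
# u_t + u_x = 0 to du/dt = L(u), which is marched in time with a
# strong-stability-preserving Runge-Kutta method.

def L_op(u, dx):
    # periodic first-order upwind difference
    return -(u - np.roll(u, 1)) / dx

def ssp_rk2_step(u, dt, dx):
    # two-stage SSP Runge-Kutta (convex combination of Euler steps)
    u1 = u + dt * L_op(u, dx)
    return 0.5 * u + 0.5 * (u1 + dt * L_op(u1, dx))

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(2 * np.pi * x)
dt = 0.4 * dx                      # CFL-limited time step (assumed)
for _ in range(int(0.5 / dt)):
    u = ssp_rk2_step(u, dt, dx)

mass = u.sum() * dx                # conserved by the periodic scheme
```

Higher-order Runge-Kutta schemes slot in by replacing `ssp_rk2_step`; the spatial operator is the only problem-specific piece.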
A new evaluation method for the fusion quality of infrared and visible images
NASA Astrophysics Data System (ADS)
Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda
2017-03-01
In order to objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge-information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of the method is given, and evaluation experiments are performed on infrared and visible image fusion results under different algorithms and environments on the basis of this index. The experimental results show that the objective evaluation index is consistent with subjective evaluation results, indicating that the method is a practical and effective measure of fused image quality.
ERIC Educational Resources Information Center
Walberg, Herbert J.
2010-01-01
This book summarizes the major research findings that show how to substantially increase student achievement, drawing on a number of investigators who have statistically synthesized many studies. A new education method showing superior results in 90% of the studies concerning it has more credibility than a method that shows results in only…
NASA Astrophysics Data System (ADS)
Su, Yi; Xu, Lei; Liu, Ningning; Huang, Wei; Xu, Xiaojing
2016-10-01
Purpose: to find an efficient, non-destructive examination method for revealing words that have faded after being written with an automatically disappearing-ink pen. Method: an imaging spectrometer is used to reveal the latent words on the paper surface, exploiting the different reflection and absorption properties of the various substances in different spectral bands. Results: words written with different disappearing-ink pens on the same paper, or with the same pen on different papers, could both be revealed well using spectral imaging examination. Conclusion: spectral imaging technology can reveal words that have disappeared after being written with an automatically disappearing-ink pen.
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Ely, Jay J.; Koppen, Sandra V.
2001-01-01
This paper describes the implementation of the mode-stirred method for susceptibility testing according to the current DO-160D standard. Test results on an Engine Data Processor using the implemented procedure, and comparisons with standard anechoic test results, are presented. The comparison shows experimentally that the susceptibility thresholds found with the mode-stirred method are consistently higher than those found with the anechoic method. This is consistent with the recent statistical analysis finding by NIST that the current calibration procedure overstates field strength by a fixed amount. Once the test results are adjusted for this value, the agreement with the anechoic results is excellent. The results also show that the test method has excellent chamber-to-chamber repeatability. Several areas for improvement to the current procedure are also identified and implemented.
NASA Astrophysics Data System (ADS)
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
The T-Method is one of the techniques governed under the Mahalanobis-Taguchi System, developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with a very limited sample size. Users of the T-Method must clearly understand the population data trend, since the method does not account for outliers, which may cause apparent non-normality and make classical methods break down. There exist robust parameter estimates that provide satisfactory results both when the data contain outliers and when they are free of them; among these are the robust location and scale estimators of Shamos-Bickel (SB) and Hodges-Lehmann (HL), which can be used in place of the classical mean and standard deviation. Embedding these into the T-Method normalization stage may enhance the accuracy of the T-Method and allow its robustness to be analyzed. However, the higher-sample-size case study shows that the T-Method has the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB have the lowest error percentage (4.67%) for data without extreme outliers, with only a minimal error difference compared to the T-Method. The prediction error trend is reversed in the lower-sample-size case study. The results show that with a minimum sample size, where the outlier risk is low, the T-Method performs better, and with a larger sample size containing extreme outliers the T-Method likewise predicts better than the alternatives. For the case studies conducted in this research, normalization with the T-Method gives satisfactory results, and it is not worthwhile to substitute HL and SB (or the classical mean and standard deviation) into it, since doing so changes the percentage errors only minimally. Normalization using the T-Method is still considered to carry lower risk with respect to outlier effects.
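For reference, one common textbook form of the two robust estimators named above is sketched here; the data set and the 1.1926 consistency constant for the Shamos scale are illustrative assumptions, not values from the study.

```python
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    # Hodges-Lehmann location: median of pairwise (Walsh) averages, i <= j
    x = np.asarray(x, dtype=float)
    pairs = [(x[i] + x[j]) / 2.0
             for i in range(len(x)) for j in range(i, len(x))]
    return float(np.median(pairs))

def shamos(x):
    # Shamos scale: median of absolute pairwise differences, i < j;
    # the factor 1.1926 makes it consistent for the normal s.d.
    diffs = [abs(a - b) for a, b in combinations(x, 2)]
    return 1.1926 * float(np.median(diffs))

data = [9.8, 10.1, 10.0, 9.9, 10.2, 55.0]   # one extreme outlier
hl = hodges_lehmann(data)                    # stays near 10
sb = shamos(data)                            # small, unlike the s.d.
```

Compared with the classical mean (17.5 here) and standard deviation, both estimators are barely moved by the single outlier, which is the property the abstract exploits in the normalization stage.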
NASA Astrophysics Data System (ADS)
Matsumoto, Takahiro; Nagata, Yasuaki; Nose, Tetsuro; Kawashima, Katsuhiro
2001-06-01
We present two demonstrations of a laser ultrasonic method. First, we present results for the Young's modulus of ceramics at temperatures above 1600 °C. Second, we introduce a method to determine the internal temperature distribution of a hot steel plate with errors of less than 3%. We compare the results obtained by the laser ultrasonic method with those of conventional contact techniques to show the validity of the method.
Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry
NASA Astrophysics Data System (ADS)
Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei
2018-04-01
In this paper, a calibration method is proposed that eliminates the zeroth-order effect in lateral shearing interferometry. An analytical expression for the calibration error function is deduced, and the relationship between the phase-restoration error and the calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase-shifting error and the zeroth-order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase-shifting error and the zeroth-order effect when the phase-shifting error is less than 2° and the zeroth-order effect is less than 0.2. The experimental results show that, compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.
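As background, the phase restoration that the abstract refers to can be illustrated with a generic four-step phase-shifting algorithm (a textbook sketch, not the paper's calibration procedure; the test phase, background, and modulation values are assumptions).

```python
import numpy as np

# Four phase-shifted interferograms I_k = A + B*cos(phi + k*pi/2)
# allow the wrapped phase phi to be restored as
#   phi = atan2(I3 - I1, I0 - I2),
# since I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi).

phi_true = 1.234                    # assumed test phase (radians)
A, B = 1.0, 0.5                     # assumed background and modulation
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = np.arctan2(I[3] - I[1], I[0] - I[2])
```

Phase-shifting errors and the zeroth-order diffraction term perturb the `I_k` away from this ideal model, which is what the paper's calibration compensates for.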
Unconditionally stable finite-difference time-domain methods for modeling the Sagnac effect
NASA Astrophysics Data System (ADS)
Novitski, Roman; Scheuer, Jacob; Steinberg, Ben Z.
2013-02-01
We present two unconditionally stable finite-difference time-domain (FDTD) methods for modeling the Sagnac effect in rotating optical microsensors. The methods are based on the implicit Crank-Nicolson scheme, adapted to hold in the rotating system's reference frame: the rotating Crank-Nicolson (RCN) methods. The first method (RCN-2) is second-order accurate in space, whereas the second method (RCN-4) is fourth-order accurate; both methods are second-order accurate in time. We show that the RCN-4 scheme is more accurate and has better dispersion isotropy. The numerical results show good correspondence with the expression for the classical Sagnac resonant frequency splitting when using the group refractive indices of the resonant modes of a microresonator. We also show that the numerical results are consistent with the perturbation theory for rotating degenerate microcavities. We apply our method to simulate the effect of rotation on an entire coupled resonator optical waveguide (CROW) consisting of a set of coupled microresonators. Preliminary results validate the formation of a rotation-induced gap at the center of the transfer function of a CROW.
Zhang, Zhijun; Zhu, Meihua; Ashraf, Muhammad; Broberg, Craig S; Sahn, David J; Song, Xubo
2014-12-01
Quantitative analysis of right ventricle (RV) motion is important for the study of the mechanisms of congenital and acquired diseases. Unlike for the left ventricle (LV), motion estimation of the RV is more difficult because of its complex shape and thin myocardium. Although attempts with finite element models on MR images and speckle tracking on echocardiography have shown promising results for RV strain analysis, these methods can be improved, since the temporal smoothness of the motion is not considered. In earlier work, the authors proposed a temporally diffeomorphic motion estimation method in which a spatiotemporal transformation is estimated by optimization of a registration energy functional of the velocity field. The proposed motion estimation method is a fully automatic process for general image sequences. The authors apply the method, combined with a semiautomatic myocardium segmentation method, to the RV strain analysis of three-dimensional (3D) echocardiographic sequences of five open-chest pigs under different steady states. The authors compare the peak two-point strains derived by their method with those estimated from sonomicrometry; the results show that the two are highly correlated. The motion of the right ventricular free wall is studied using segmental strains. The baseline sequence results show that the segmental strains from their method are consistent with results obtained by other imaging modalities such as MRI. The image sequences of pacing steady states show that the segments with the largest strain variation coincide with the pacing sites. The high correlation of the peak two-point strains between their method and sonomicrometry under different steady states demonstrates that their RV motion estimation has high accuracy. The closeness of the segmental strains of their method to those from MRI shows the feasibility of their method for the study of RV function using 3D echocardiography.
The strain analysis of the pacing steady states shows the potential utility of their method in study on RV diseases.
NASA Astrophysics Data System (ADS)
Gallagher, H. G.; Sherwood, J. N.; Vrcelj, R. M.
2017-10-01
An examination has been made of the defect structure of crystals of the energetic material β-cyclotetramethylene-tetranitramine (HMX) using both Laboratory (Lang method) and Synchrotron (Bragg Reflection and Laue method) techniques. The results of the three methods are compared with particular attention to the influence of potential radiation damage caused to the samples by the latter, more energetic, technique. The comparison shows that both techniques can be confidently used to evaluate the defect structures yielding closely similar results. The results show that, even under the relatively casual preparative methods used (slow evaporation of unstirred solutions at constant temperature), HMX crystals of high perfection can be produced. The crystals show well defined bulk defect structures characteristic of organic materials in general: growth dislocations, twins, growth sector boundaries, growth banding and solvent inclusions. The distribution of the defects in specific samples is correlated with the morphological variation of the grown crystals. The results show promise for the further evaluation and characterisation of the structure and properties of dislocations and other defects and their involvement in mechanical and energetic processes in this material.
Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R
2016-04-01
Within epidemiology, the stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is huge variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models that have been frequently used in practice were applied to two example data sets. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable, comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different numbers of intervention measurements, whereas in method 4 the changes in the outcome variable between subsequent measurements are analyzed. In the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time. Method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite: methods 2 and 4 showed significant intervention effects, whereas the other two methods did not. For method 4, the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of a method partly depends on the type of intervention and its possible time-dependent effect. Furthermore, it is advisable to combine the results of the different methods to obtain an interpretable overall result. Copyright © 2016 Elsevier Inc. All rights reserved.
Connan, O; Maro, D; Hébert, D; Solier, L; Caldeira Ideas, P; Laguionie, P; St-Amant, N
2015-10-01
The behaviour of tritium in the environment is linked to the water cycle. We compare three methods of calculating the tritium evapotranspiration flux from grassland cover. The gradient and eddy covariance methods, together with a method based on the theoretical Penman-Monteith model, were tested in a study carried out in 2013 in an environment characterised by high levels of tritium activity. The three methods gave similar results, and the various constraints applying to each method are discussed. The results show a tritium evapotranspiration flux of around 15 mBq m(-2) s(-1) in this environment. These results will be used to improve the input parameters of general models of tritium transfer in the environment. Copyright © 2015 Elsevier Ltd. All rights reserved.
InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)
NASA Astrophysics Data System (ADS)
Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.
2018-04-01
Unwrapping error is a common error in InSAR processing that can seriously degrade the accuracy of the monitoring results. Based on a gross-error correction method, quasi-accurate detection (QUAD), a method for automatic correction of unwrapping errors is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented, and the method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods complement each other when the proportion is relatively high. Finally, real SAR data are tested for phase unwrapping error correction; the results show that the new method corrects phase unwrapping errors successfully in practical application.
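To make the notion of an unwrapping error concrete, here is a minimal 1-D toy illustration (not the QUAD estimator itself): the error is a spurious jump by an integer number of 2π cycles, detectable in the first differences of an otherwise smooth phase profile.

```python
import numpy as np

# An unwrapping error adds an integer multiple of 2*pi to part of the
# phase profile. For a smooth signal, the bad jump dominates the first
# differences, so rounding diff/2*pi to the nearest integer locates it
# and the cumulative sum of those integers removes it.

two_pi = 2 * np.pi
x = np.linspace(0.0, 1.0, 100)
true_phase = 8.0 * x ** 2                 # smooth deformation phase
corrupted = true_phase.copy()
corrupted[60:] += 2 * two_pi              # unwrapping error: +2 cycles

d = np.diff(corrupted)
cycles = np.round(d / two_pi)             # nonzero only at the bad jump
corrected = corrupted - np.concatenate(([0.0], np.cumsum(cycles))) * two_pi

residual = np.max(np.abs(corrected - true_phase))
```

Real 2-D interferograms need the kind of statistical identification the paper describes, because noise and genuine steep gradients break this simple nearest-cycle rule.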
Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T
2015-01-01
We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging [corrected].
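The benchmark comparison described above can be sketched on synthetic data (a toy two-sector factor model, not the paper's stock set): hierarchical clustering on the correlation distance d = sqrt(2(1 - ρ)) should recover the sector partition.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy benchmark test: returns in two "sectors" share a common factor,
# and we check whether hierarchical clustering on the correlation
# distance recovers the sector labels.

rng = np.random.default_rng(0)
T, n = 500, 6                       # 500 days, 3 stocks per sector
f1, f2 = rng.normal(size=(2, T))    # one common factor per sector
returns = np.empty((T, n))
for j in range(n):
    factor = f1 if j < 3 else f2
    returns[:, j] = factor + 0.5 * rng.normal(size=T)

rho = np.corrcoef(returns.T)
dist = np.sqrt(np.maximum(2.0 * (1.0 - rho), 0.0))  # correlation distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method='average')
labels = fcluster(Z, t=2, criterion='maxclust')
```

With real data the comparison uses many sectors and an information measure rather than a clean two-way split, but the pipeline (correlation, distance, linkage, partition, benchmark comparison) is the same.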
NASA Astrophysics Data System (ADS)
Latisma D, L.; Kurniawan, W.; Seprima, S.; Nirbayani, E. S.; Ellizar, E.; Hardeli, H.
2018-04-01
The purpose of this study was to determine which teaching methods work well with Chemistry Triangle-oriented learning media. This quasi-experimental research involved first-grade senior high school students in six schools: two SMAN each in Solok city and in Pasaman, and two SMKN in Pariaman. The sampling technique was cluster random sampling. Data were collected by test and analyzed by one-way ANOVA and the Kruskal-Wallis test. The results showed that the high school students in Solok taught by the cooperative method achieved better learning outcomes than those taught by the conventional and individual methods, for students of both high and low initial ability. Research in the SMK showed that, overall, learning outcomes under the conventional method were better than under the cooperative and individual methods. Students with high initial ability taught by the individual method outperformed those taught by the cooperative method, and for students with low initial ability there was no difference in learning outcomes among the cooperative, individual and conventional methods. Learning in the high schools in Pasaman showed no significant difference in learning outcomes among the three methods.
Comparison of analytical methods for the determination of histamine in reference canned fish samples
NASA Astrophysics Data System (ADS)
Jakšić, S.; Baloš, M. Ž.; Mihaljev, Ž.; Prodanov Radulović, J.; Nešić, K.
2017-09-01
Two screening methods for histamine in canned fish, an enzymatic test and a competitive direct enzyme-linked immunosorbent assay (CD-ELISA), were compared with the reversed-phase liquid chromatography (RP-HPLC) standard method. For the enzymatic and CD-ELISA methods, determination was conducted according to the manufacturers' manuals. For RP-HPLC, histamine was derivatized with dansyl chloride, followed by RP-HPLC with diode array detection. Analysis of canned fish, supplied as reference samples for proficiency testing, showed good agreement when histamine was present at higher concentrations (above 100 mg kg-1). At a lower level (16.95 mg kg-1), the enzymatic test produced somewhat higher results. Overall, analysis of four reference samples by CD-ELISA and RP-HPLC showed good agreement for histamine determination (r = 0.977 in the concentration range 16.95-216 mg kg-1). The results show that the enzymatic test and CD-ELISA are suitable screening methods for the determination of histamine in canned fish.
Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems
NASA Astrophysics Data System (ADS)
Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding
2007-09-01
In this paper, we present some comparison theorems on preconditioned iterative methods for solving Z-matrix linear systems. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
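Comparisons of this kind can be illustrated numerically: the asymptotic convergence rate of a stationary iteration is governed by the spectral radius of its iteration matrix. The small Z-matrix and relaxation factor below are illustrative assumptions, not the paper's preconditioned systems.

```python
import numpy as np

# For A = D - L - U, the Gauss-Seidel iteration matrix is (D-L)^-1 U
# and the SOR iteration matrix is (D - w*L)^-1 ((1-w)*D + w*U).
# The smaller the spectral radius, the faster the asymptotic
# convergence; Gauss-Seidel is SOR with w = 1.

A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  5.0, -1.0],
              [-1.0, -2.0,  4.0]])   # a small Z-matrix (assumed example)
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

def spectral_radius(M):
    return float(np.max(np.abs(np.linalg.eigvals(M))))

gs_r = spectral_radius(np.linalg.solve(D - L, U))
omega = 0.8                          # under-relaxation (assumed)
sor_r = spectral_radius(np.linalg.solve(D - omega * L,
                                        (1 - omega) * D + omega * U))
```

For this diagonally dominant Z-matrix both radii are below 1, and under-relaxed SOR converges more slowly than Gauss-Seidel, matching the direction of the abstract's comparison.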
Detecting Diseases in Medical Prescriptions Using Data Mining Tools and Combining Techniques.
Teimouri, Mehdi; Farzadfar, Farshad; Soudi Alamdari, Mahsa; Hashemi-Meshkini, Amir; Adibi Alamdari, Parisa; Rezaei-Darzi, Ehsan; Varmaghani, Mehdi; Zeynalabedini, Aysan
2016-01-01
Data about the prevalence of communicable and non-communicable diseases, one of the most important categories of epidemiological data, are used for interpreting the health status of communities. This study aims to calculate the prevalence of outpatient diseases through the characterization of outpatient prescriptions. The data used in this study were collected from 1412 prescriptions for various types of diseases, from which we focused on the identification of ten diseases. Data mining tools are used to identify the diseases for which the prescriptions were written. In order to evaluate the performance of these methods, we compare the results with a Naïve method, and combining methods are then used to improve the results. The results showed that the Support Vector Machine, with an accuracy of 95.32%, performs better than the other methods. The Naïve method, with an accuracy of 67.71%, performs about 20% worse than the Nearest Neighbor method, which has the lowest accuracy among the classification algorithms. The results indicate that the data mining algorithms achieved good performance in the characterization of outpatient diseases and can help in choosing appropriate methods for the classification of prescriptions at larger scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor-Pashow, K.; Fondeur, F.; White, T.
Savannah River National Laboratory (SRNL) was tasked with identifying and developing at least one, but preferably two, methods for quantifying the suppressor in the Next Generation Solvent (NGS) system. The suppressor is a guanidine derivative, N,N',N"-tris(3,7-dimethyloctyl)guanidine (TiDG). A list of 10 possible methods was generated, and screening experiments were performed for 8 of the 10 methods. After completion of the screening experiments, the non-aqueous acid-base titration was determined to be the most promising and was selected for further development as the primary method. ¹H NMR also showed promising results in the screening experiments, and this method was selected for further development as the secondary method. Other methods, including ³⁶Cl radiocounting and ion chromatography, also showed promise; however, due to the similarity to the primary method (titration) and the inability to differentiate between TiDG and TOA (tri-n-octylamine) in the blended solvent, ¹H NMR was selected over these methods. Analysis of radioactive samples obtained from real waste ESS (extraction, scrub, strip) testing using the titration method showed good results. Based on these results, the titration method was selected as the method of choice for TiDG measurement. ¹H NMR has been selected as the secondary (back-up) method, and additional work is planned to further develop this method and to verify it using radioactive samples. Procedures for analyzing radioactive samples of both pure NGS and blended solvent were developed and issued for both methods.
A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry
NASA Astrophysics Data System (ADS)
Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.
2018-03-01
Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB leads to a refractive phenomenon that generates calibration error; the theory of flat refractive geometry is employed to eliminate this error, so the new method resolves the refraction problem of the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.
A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search
NASA Astrophysics Data System (ADS)
Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd
2017-08-01
The conjugate gradient (CG) method is an important technique in unconstrained optimization, due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization problems. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. Besides, computational results show that our proposed method outperforms other existing CG methods.
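A minimal sketch of the classical Polak-Ribiere-Polyak update (the family this paper builds on) is given below. The paper's analysis assumes a strong Wolfe-Powell line search; for brevity this demo substitutes a backtracking (Armijo) search with a steepest-descent restart safeguard, so it illustrates the CG update only, not the convergence theory.

```python
import numpy as np

# PRP+ conjugate gradient sketch: after each step the new direction is
# d = -g_new + beta * d, with beta the (nonnegative-clipped) PRP value
#   beta = max(g_new.(g_new - g) / g.g, 0).

def prp_cg(f, grad, x0, iters=500, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if g.dot(d) >= 0:          # safeguard: ensure a descent direction
            d = -g
        t = 1.0                    # backtracking (Armijo) line search
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(g_new.dot(g_new - g) / g.dot(g), 0.0)   # PRP+ formula
        d = -g_new + beta * d
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x

# assumed toy objective with minimizer (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
sol = prp_cg(f, grad, np.array([5.0, 5.0]))
```

The low memory footprint the abstract mentions is visible here: only the current point, gradient, and direction are stored, with no matrix of any kind.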
Significant Returns in Engagement and Performance with a Free Teaching App
ERIC Educational Resources Information Center
Green, Alan
2016-01-01
Pedagogical research shows that teaching methods other than traditional lectures may result in better outcomes. However, lecture remains the dominant method in economics, likely due to high implementation costs of methods shown to be effective in the literature. In this article, the author shows significant benefits of using a teaching app for…
Financial time series analysis based on information categorization method
NASA Astrophysics Data System (ADS)
Tian, Qiang; Shang, Pengjian; Feng, Guochen
2014-12-01
This paper applies the information categorization method to the analysis of financial time series. The method examines the similarity of different sequences by calculating the distances between them, and we use it to quantify the similarity of different stock markets. We report the similarity of the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show differences in similarity between the stock markets across time periods, and the similarity of the two markets became larger after these two crises. We also obtain similarity results for 10 stock indices in three regions, showing that the method can distinguish the markets of different regions in the resulting phylogenetic trees. These results show that satisfactory information can be extracted from financial markets by this method, which can be applied not only to physiologic time series but also to financial time series.
2014-01-01
Background The DerSimonian and Laird approach (DL) is widely used for random effects meta-analysis, but this often results in inappropriate type I error rates. The method described by Hartung, Knapp, Sidik and Jonkman (HKSJ) is known to perform better when trials of similar size are combined. However evidence in realistic situations, where one trial might be much larger than the other trials, is lacking. We aimed to evaluate the relative performance of the DL and HKSJ methods when studies of different sizes are combined and to develop a simple method to convert DL results to HKSJ results. Methods We evaluated the performance of the HKSJ versus DL approach in simulated meta-analyses of 2–20 trials with varying sample sizes and between-study heterogeneity, and allowing trials to have various sizes, e.g. 25% of the trials being 10-times larger than the smaller trials. We also compared the number of “positive” (statistically significant at p < 0.05) findings using empirical data of recent meta-analyses with > = 3 studies of interventions from the Cochrane Database of Systematic Reviews. Results The simulations showed that the HKSJ method consistently resulted in more adequate error rates than the DL method. When the significance level was 5%, the HKSJ error rates at most doubled, whereas for DL they could be over 30%. DL, and, far less so, HKSJ had more inflated error rates when the combined studies had unequal sizes and between-study heterogeneity. The empirical data from 689 meta-analyses showed that 25.1% of the significant findings for the DL method were non-significant with the HKSJ method. DL results can be easily converted into HKSJ results. Conclusions Our simulations showed that the HKSJ method consistently results in more adequate error rates than the DL method, especially when the number of studies is small, and can easily be applied routinely in meta-analyses. 
Even with the HKSJ method, extra caution is needed when there are ≤5 studies of very unequal sizes. PMID:24548571
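For readers who want the mechanics behind the comparison, a minimal sketch of the two approaches follows. Both use the same DerSimonian-Laird heterogeneity estimate and pooled effect; they differ only in the variance formula and the reference distribution (normal for DL, t with k-1 degrees of freedom for HKSJ). The example data are hypothetical.

```python
import numpy as np
from scipy import stats

def meta_analysis(y, v, alpha=0.05):
    """Random-effects meta-analysis: DL and HKSJ confidence intervals.

    y : per-study effect estimates;  v : their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)                       # Cochran's Q
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    ws = 1.0 / (v + tau2)                                     # random-effects weights
    mu = np.sum(ws * y) / np.sum(ws)                          # pooled estimate
    # DL: normal quantile, variance 1 / sum of weights
    se_dl = np.sqrt(1.0 / np.sum(ws))
    z = stats.norm.ppf(1 - alpha / 2)
    ci_dl = (mu - z * se_dl, mu + z * se_dl)
    # HKSJ: weighted residual variance, t quantile on k-1 degrees of freedom
    se_hksj = np.sqrt(np.sum(ws * (y - mu) ** 2) / ((k - 1) * np.sum(ws)))
    t = stats.t.ppf(1 - alpha / 2, k - 1)
    ci_hksj = (mu - t * se_hksj, mu + t * se_hksj)
    return mu, ci_dl, ci_hksj

# five hypothetical trials (effect estimates and within-study variances)
mu, ci_dl, ci_hksj = meta_analysis([0.2, 0.5, -0.1, 0.6, 0.3],
                                   [0.04, 0.02, 0.09, 0.01, 0.05])
```

With heterogeneous studies the HKSJ interval is typically wider than the DL one, which is how it avoids the inflated type I error rates described above.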
A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows
NASA Astrophysics Data System (ADS)
Bijleveld, H. A.; Veldman, A. E. P.
2014-12-01
A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software as it is very accurate and computationally reasonably cheap. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that at separation the eigenvalues remain positive, thus avoiding the Goldstein singularity. In 3D, we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method for rotating flows. The demonstrated capabilities indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.
Marjanovič, Igor; Kandušer, Maša; Miklavčič, Damijan; Keber, Mateja Manček; Pavlin, Mojca
2014-12-01
In this study, we compared three different methods used for quantification of gene electrotransfer efficiency: fluorescence microscopy, flow cytometry and spectrofluorometry. We used CHO and B16 cells in suspension and a plasmid coding for GFP. The aim of this study was to compare and analyse the results obtained by fluorescence microscopy, flow cytometry and spectrofluorometry, and in addition to analyse the applicability of spectrofluorometry for quantifying gene electrotransfer in cells in suspension. Our results show that all three methods detected a similar critical electric field strength, around 0.55 kV/cm, for both cell lines. Moreover, results obtained on CHO cells showed that the total fluorescence intensity and the percentage of transfection exhibit a similar increase in response to increasing electric field strength for all three methods. For B16 cells, there was good correlation at low electric field strengths, but at high field strengths, flow cytometer results deviated from those obtained by fluorescence microscopy and spectrofluorometry. Our study showed that all three methods detected similar critical electric field strengths, and high correlations between results were obtained except for B16 cells at high electric field strengths. The results also demonstrated that flow cytometry measures higher percentages of transfection than microscopy. Furthermore, we have demonstrated that spectrofluorometry can be used as a simple and consistent method to determine gene electrotransfer efficiency in cells in suspension.
NASA Astrophysics Data System (ADS)
Hakim, A. A.; Rajagukguk, T. O.; Sumardi, S.
2018-01-01
Growing demand for metal materials brings increasing requirements for quality improvement and material protection, especially with regard to mechanical properties. This research used the hot dip galvanizing coating method. The objectives were to determine the Rockwell hardness (HRb), layer thickness, microstructure and Scanning Electron Microscope (SEM) observations of coatings produced by hot dip galvanizing with immersion times of 3, 6, 9, and 12 minutes at 460°C. The results show that the highest Rockwell hardness (76.012 HRb) was obtained at an immersion time of 3 minutes, while the greatest layer thickness (217.3 μm) was obtained at 12 minutes. Microstructure testing showed that the coating formed eta, zeta, delta and gamma phases, while SEM analysis detected Fe, Zn, Mn, Si and S elements in the specimens after coating.
Sport fishing: a comparison of three indirect methods for estimating benefits.
Darrell L. Hueth; Elizabeth J. Strong; Roger D. Fight
1988-01-01
Three market-based methods for estimating values of sport fishing were compared by using a common database. The three approaches were the travel-cost method, the hedonic travel-cost method, and the household-production method. A theoretical comparison showed that the resulting values were not fully comparable in several ways. The comparison of empirical...
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
Davoren, Jon; Vanek, Daniel; Konjhodzić, Rijad; Crews, John; Huffine, Edwin; Parsons, Thomas J.
2007-01-01
Aim To quantitatively compare a silica extraction method with a commonly used phenol/chloroform extraction method for DNA analysis of specimens exhumed from mass graves. Methods DNA was extracted from twenty randomly chosen femur samples using the International Commission on Missing Persons (ICMP) silica method, based on the Qiagen Blood Maxi Kit, and compared with DNA extracted by the standard phenol/chloroform-based method. The efficacy of the extraction methods was compared by real-time polymerase chain reaction (PCR), to measure DNA quantity and the presence of inhibitors, and by amplification with the PowerPlex 16 (PP16) multiplex nuclear short tandem repeat (STR) kit. Results DNA quantification showed that the silica-based method extracted on average 1.94 ng of DNA per gram of bone (range 0.25-9.58 ng/g), compared with only 0.68 ng/g (range 0.0016-4.4880 ng/g) for the organic method. Inhibition tests showed significantly lower levels of PCR inhibitors, on average, in DNA isolated by the organic method. When amplified with PP16, all samples extracted by the silica-based method produced full 16-locus profiles, while only 75% of the DNA extracts obtained by the organic technique yielded 16-locus profiles. Conclusions The silica-based extraction method showed better results in nuclear STR typing from degraded bone samples than the commonly used phenol/chloroform method. PMID:17696302
Wang, Xueyi; Davidson, Nicholas J.
2011-01-01
Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in previous literature. First we derive the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve > 0.5 prediction accuracy even when the individual classifiers all have < 0.5 prediction accuracy. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify the results; they show that the upper- and lower-bound accuracies are hard to achieve with random individual classifiers, and that better algorithms need to be developed. PMID:21853162
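The flavor of such bounds can be illustrated with a small counting argument (a simplified derivation, not necessarily the paper's exact statement): the total number of correct votes is fixed by the average accuracy, each ensemble-correct instance must consume at least a majority of correct votes, and each ensemble-wrong instance can absorb at most a minority. The 3-classifier example below also shows an ensemble at 0.6 accuracy built from members at 0.4.

```python
from fractions import Fraction

def majority_accuracy(correct):
    """Accuracy of a majority vote given a per-classifier correctness matrix.

    correct[i][j] = 1 if classifier i labels instance j correctly."""
    k, n = len(correct), len(correct[0])
    need = k // 2 + 1                     # votes needed for a correct majority
    hits = sum(1 for j in range(n) if sum(row[j] for row in correct) >= need)
    return Fraction(hits, n)

def vote_counting_bounds(k, p_bar):
    """Counting-argument bounds on majority-vote accuracy for k classifiers
    with average accuracy p_bar, with m = floor(k/2) + 1 votes for a majority."""
    m = k // 2 + 1
    upper = min(Fraction(1), Fraction(k) * p_bar / m)
    lower = max(Fraction(0), (Fraction(k) * p_bar - (m - 1)) / (k - (m - 1)))
    return lower, upper

# three classifiers, each only 40% accurate, whose errors are spread out
correct = [[1, 1, 0, 0, 0],
           [1, 0, 1, 0, 0],
           [0, 1, 1, 0, 0]]
acc = majority_accuracy(correct)          # the ensemble beats every member
lo, up = vote_counting_bounds(3, Fraction(2, 5))
```

Here the constructed ensemble actually attains the upper bound, which is what makes the "weak members, strong ensemble" phenomenon possible when errors are arranged favourably.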
NASA Technical Reports Server (NTRS)
Prost, L.; Pauillac, A.
1978-01-01
Experience has shown that different methods of analysis of SiC products give different results. Methods identified as AFNOR, FEPA, and manufacturer P, currently used to detect SiC, free C, free Si, free Fe, and SiO2 are reviewed. The AFNOR method gives lower SiC content, attributed to destruction of SiC by grinding. Two products sent to independent labs for analysis by the AFNOR and FEPA methods showed somewhat different results, especially for SiC, SiO2, and Al2O3 content, whereas an X-ray analysis showed a SiC content approximately 10 points lower than by chemical methods.
Forest Herbicide Washoff From Foliar Applications
J.L. Michael; Kevin L. Talley; H.C. Fishburn
1992-01-01
Field and laboratory experiments were conducted to develop and test methods for determining washoff of foliar-applied herbicides typically used in forestry in the South. Preliminary results show good agreement between results of the laboratory methods used and observations from field experiments on actual precipitation events. Methods included application of...
SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Elder, E; Roper, J
2015-06-15
Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC was evaluated with the Shepp-Logan phantom and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.
Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image
NASA Astrophysics Data System (ADS)
He, Xingwu; You, Junchen
2018-03-01
Ultrasound image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge, compared with traditional image restoration methods. Even with an inaccurate small initial PSF, the results show that blind deconvolution improves the overall image quality of ultrasound images, yielding better SNR and image resolution. We also report the time consumption of these methods, which shows no significant increase on a GPU platform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Larroquette, Philippe; Camilla, S.
The intrinsic spatial efficiency method is a new absolute method to determine the efficiency of a gamma spectroscopy system for any extended source. In the original work the method was experimentally demonstrated and validated for homogeneous cylindrical sources containing {sup 137}Cs, whose sizes varied over a small range (29.5 mm radius and 15.0 to 25.9 mm height). In this work we present an extension of the validation over a wide range of sizes. The dimensions of the cylindrical sources vary between 10 to 40 mm in height and 8 to 30 mm in radius. The cylindrical sources were prepared using the reference material IAEA-372, which had a specific activity of 11320 Bq/kg in July 2006. The best results were obtained for the sources with 29 mm radius, showing relative bias less than 5%, and for the sources with 10 mm height, showing relative bias less than 6%. Compared with the results obtained in the work where the method was first presented, the majority of these results show excellent agreement.
Bragança, Sara; Arezes, Pedro; Carvalho, Miguel; Ashdown, Susan P; Castellucci, Ignacio; Leão, Celina
2018-01-01
Collecting anthropometric data for real-life applications demands a high degree of precision and reliability, so it is important to test new equipment that will be used for data collection. Objective: To compare two anthropometric data gathering techniques - manual methods and a Kinect-based 3D body scanner - to understand which of them gives more precise and reliable results. The data were collected using a measuring tape and a Kinect-based 3D body scanner, and evaluated in terms of precision by considering the regular and relative Technical Error of Measurement, and in terms of reliability by using the Intraclass Correlation Coefficient, Reliability Coefficient, Standard Error of Measurement and Coefficient of Variation. The results showed that both methods presented better results for reliability than for precision. Both methods showed relatively good results for these two variables; however, manual methods had better results for some body measurements. Despite being considered sufficiently precise and reliable for certain applications (e.g. the apparel industry), the 3D scanner tested showed, for almost every anthropometric measurement, a different result than the manual technique. Many companies design their products based on data obtained from 3D scanners; hence, understanding the precision and reliability of the equipment used is essential to obtain feasible results.
Denoising Medical Images using Calculus of Variations
Kohan, Mahdi Nakhaie; Behnam, Hamid
2011-01-01
We propose a method for medical image denoising using calculus of variations and local variance estimation by shaped windows. This method reduces additive noise while preserving small patterns and edges of images. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method provides visual improvement as well as better SNR, RMSE and PSNR than common medical image denoising methods. Experimental results in denoising a sample magnetic resonance image show that SNR, PSNR and RMSE were improved by 19, 9 and 21 percent, respectively. PMID:22606674
3-D surface profilometry based on modulation measurement by applying wavelet transform method
NASA Astrophysics Data System (ADS)
Zhong, Min; Chen, Feng; Xiao, Chao; Wei, Yongchao
2017-01-01
A new analysis of 3-D surface profilometry based on the modulation measurement technique with the Wavelet Transform method is proposed. As a tool known for its multi-resolution and localization in the time and frequency domains, the Wavelet Transform method, with its good localized time-frequency analysis ability and effective de-noising capacity, can extract the modulation distribution more accurately than the Fourier Transform method. Especially in the analysis of complex objects, more details of the measured object are retained. In this paper, the theoretical derivation of the Wavelet Transform method for obtaining modulation values from a captured fringe pattern is given. Both computer simulation and a preliminary experiment are used to show the validity of the proposed method by comparison with the results of the Fourier Transform method. The results show that the Wavelet Transform method performs better than the Fourier Transform method in modulation retrieval.
Wang, Chih-Hao; Fang, Te-Hua; Cheng, Po-Chien; Chiang, Chia-Chin; Chao, Kuan-Chi
2015-06-01
This paper used numerical and experimental methods to investigate the mechanical properties of amorphous NiAl alloys during the nanoindentation process. A simulation was performed using the many-body tight-binding potential method. Temperature, plastic deformation, elastic recovery, and hardness were evaluated. The experimental method was based on nanoindentation measurements, allowing a precise prediction of Young's modulus and hardness values for comparison with the simulation results. The indentation simulation results showed a significant increase in NiAl hardness and elastic recovery with increasing Ni content. Furthermore, the results showed that hardness and Young's modulus increase with increasing Ni content. The simulation results are in good agreement with the experimental results. An adhesion test of amorphous NiAl alloys at room temperature is also described.
Image restoration by the method of convex projections: part 2 applications and numerical results.
Sezan, M I; Stark, H
1982-01-01
The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image, and the results are compared with those of the well-known Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
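A minimal 1-D sketch of the idea may help: the Gerchberg-Papoulis setting is a special case of POCS with two convex sets, the band-limited signals and the signals consistent with the observed samples, and restoration alternates the projections onto them. The signal length, bandwidth, and 60% sampling mask below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, bandwidth = 128, 10                  # signal length; number of low-frequency bins

# ground truth: a band-limited signal
spec = np.zeros(n // 2 + 1, complex)
spec[:bandwidth] = rng.normal(size=bandwidth) + 1j * rng.normal(size=bandwidth)
spec[0] = spec[0].real
truth = np.fft.irfft(spec, n)

mask = rng.random(n) < 0.6              # only ~60% of samples are observed
observed = truth * mask

def project_band(x):
    """Projection onto the band-limited set: zero out high-frequency bins."""
    s = np.fft.rfft(x)
    s[bandwidth:] = 0
    return np.fft.irfft(s, n)

def project_data(x):
    """Projection onto the data-consistency set: restore the known samples."""
    return np.where(mask, observed, x)

x = observed.copy()
err0 = np.linalg.norm(x - truth)
for _ in range(500):                    # alternate the two convex projections
    x = project_band(project_data(x))
err = np.linalg.norm(x - truth)
```

Each projection enforces one piece of a priori information, and because both sets are convex the iteration cannot diverge; with enough observed samples it recovers the band-limited truth.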
A new method for calculating ecological flow: Distribution flow method
NASA Astrophysics Data System (ADS)
Tan, Guangming; Yi, Ran; Chang, Jianbo; Shu, Caiwen; Yin, Zhi; Han, Shasha; Feng, Zhiyong; Lyu, Yiwei
2018-04-01
A distribution flow method (DFM), together with an ecological flow index and an evaluation grade standard, is proposed to study the ecological flow of rivers based on broadened kernel density estimation. The proposed DFM and its index and grade standard are applied to the calculation of ecological flow in the middle reaches of the Yangtze River and compared with traditional hydrological methods for calculating ecological flow, a flow evaluation method, and calculated fish ecological flow. Results show that the DFM accounts for the intra- and inter-annual variations in natural runoff, thereby reducing the influence of extreme flows and of uneven flow distribution during the year. The method also satisfies the actual runoff demand of river ecosystems, demonstrates superiority over the traditional hydrological methods, and shows high space-time applicability and application value.
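The abstract does not give the DFM formulas, but its core ingredient, a broadened kernel density estimate of the flow distribution from which a most-probable-flow index can be read off, can be sketched as follows. The bandwidth broadening factor and the synthetic lognormal flows are illustrative assumptions, not the paper's values.

```python
import numpy as np

def flow_density(flows, grid, h=None):
    """Gaussian kernel density estimate of the daily-flow distribution."""
    flows = np.asarray(flows, float)
    if h is None:                        # Silverman's rule, slightly broadened
        h = 1.2 * 1.06 * flows.std() * len(flows) ** (-0.2)
    z = (grid[:, None] - flows[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

# synthetic right-skewed daily flows (m^3/s), a common shape for river runoff
rng = np.random.default_rng(42)
flows = rng.lognormal(mean=5.0, sigma=0.8, size=3000)

grid = np.linspace(flows.min(), flows.max(), 2000)
density = flow_density(flows, grid)
most_probable_flow = grid[np.argmax(density)]   # candidate ecological-flow index
```

Pooling flows across many years before estimating the density is what lets such an index reflect intra- and inter-annual variability while damping the influence of extreme flows.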
NASA Technical Reports Server (NTRS)
Lee, Sam; Addy, Harold E. Jr.; Broeren, Andy P.; Orchard, David M.
2017-01-01
A test was conducted at the NASA Icing Research Tunnel (IRT) to evaluate altitude scaling methods for a thermal ice protection system. Two new scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with a previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel (AIWT), where the three scaling methods were also tested and compared along with reference (altitude) icing conditions. In those tests, the Weber number-based scaling methods yielded results much closer to those observed at the reference icing conditions than the Reynolds number-based method. The test in the NASA IRT used a much larger, asymmetric airfoil with an ice protection system that more closely resembled designs used in commercial aircraft. Following the trends observed during the AIWT tests, the Weber number-based scaling methods resulted in smaller runback ice than the Reynolds number-based scaling, and the ice formed farther upstream. The results show that the new Weber number-based scaling methods, particularly the Weber number with water loading scaling, continue to show promise for ice protection system development and evaluation in atmospheric icing tunnels.
Zhao, S M; Leach, J; Gong, L Y; Ding, J; Zheng, B Y
2012-01-02
The effect of atmospheric turbulence on light's spatial structure compromises the information capacity of photons carrying orbital angular momentum (OAM) in free-space optical (FSO) communications. In this paper, we study two aberration correction methods to mitigate this effect. The first is the Shack-Hartmann wavefront correction method, which is based on the Zernike polynomials, and the second is a phase correction method specific to OAM states. Our numerical results show that the phase correction method for OAM states outperforms the Shack-Hartmann wavefront correction method, although both methods significantly improve the purity of a single OAM state and the channel capacity of the FSO communication link. At the same time, our experimental results show that the values of the participation functions decrease with the phase correction method for OAM states, i.e., the correction method effectively mitigates the adverse effect of atmospheric turbulence.
NASA Astrophysics Data System (ADS)
Wang, Pan; Zhang, Yi; Yan, Dong
2018-05-01
The Ant Colony Algorithm (ACA) is a powerful and effective algorithm for solving combinatorial optimization problems, and it has been successfully applied to the traveling salesman problem (TSP). However, it tends to converge prematurely to non-global optima, and its computation time is long. To overcome these shortcomings, a new method is presented: an improved self-adaptive Ant Colony Algorithm based on a genetic strategy. The proposed method adopts an adaptive strategy to adjust the parameters dynamically, and new crossover and inversion operations from the genetic strategy are used. We also carried out experiments using well-known instances from TSPLIB. The experimental results show that the proposed method outperforms the basic Ant Colony Algorithm and several improved ACAs in both solution quality and convergence time. The numerical results also show that the proposed optimization method can achieve results close to the currently best known solutions.
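The paper's exact adaptive and genetic operators are not given in the abstract, so the following is a generic ACO sketch for the TSP, with one crude self-adaptive touch (increasing evaporation when the best tour stagnates) standing in for the paper's parameter control. All parameter values are illustrative.

```python
import math, random

def aco_tsp(points, n_ants=10, n_iters=60, alpha=1.0, beta=3.0, rho=0.5, seed=0):
    """Basic ant colony optimization for the TSP with adaptive evaporation."""
    rnd = random.Random(seed)
    n = len(points)
    d = [[math.dist(points[i], points[j]) or 1e-12 for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]           # pheromone levels
    best_tour, best_len, stall = None, float("inf"), 0
    for _ in range(n_iters):
        improved = False
        for _ in range(n_ants):
            tour = [rnd.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:                      # probabilistic next-city choice
                i = tour[-1]
                cand = list(unvisited)
                weights = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta for j in cand]
                tour.append(rnd.choices(cand, weights)[0])
                unvisited.remove(tour[-1])
            length = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len, improved = tour, length, True
        stall = 0 if improved else stall + 1
        cur_rho = min(0.9, rho + 0.05 * stall)    # adapt evaporation on stagnation
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1.0 - cur_rho
        deposit = 1.0 / best_len                  # reinforce the best tour found
        for k in range(n):
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i][j] += deposit
            tau[j][i] += deposit
    return best_tour, best_len

# ten cities on a circle: the optimal tour follows the perimeter (length ~6.18)
pts = [(math.cos(2 * math.pi * k / 10), math.sin(2 * math.pi * k / 10))
       for k in range(10)]
tour, length = aco_tsp(pts)
```

Raising evaporation when no improvement occurs washes out stale pheromone trails, which is one simple way to counter the premature convergence the abstract describes.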
Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single-discipline analysis (aerodynamics only) to multidisciplinary analysis - in this case, static aero-structural analysis - and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing (single-discipline analysis), the method, as implemented here, may not show significant reduction in the computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.
Passive Seismic for Hydrocarbon Indicator: Between Expectation and Reality
NASA Astrophysics Data System (ADS)
Pandito, Riky H. B.
2018-03-01
Over the past 5-10 years, the passive seismic method has become increasingly popular in our country for hydrocarbon exploration. Low cost, nondestructive acquisition and easy mobilization are the main reasons for choosing the method, although some remain pessimistic about its results. Instrument specification, data condition and processing methods are several factors that influence the character and interpretation of passive seismic results. In 2010, a prospect in the East Java Basin was surveyed with 112 objective points and several calibration points. The measurement results indicated a positive response. In 2013, exploration drilling was conducted on the prospect; a drill stem test showed 22 MMCFD in the objective zone (upper to late Oligocene). In 2015, a remeasurement of the objective area showed responses consistent with the previous measurement. Passive seismic is a unique method: it can give different results over dry, gas and oil areas, in producing fields, and in temporarily suspended areas with hydrocarbon content.
Ensemble Methods for MiRNA Target Prediction from Expression Data
Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong
2015-01-01
Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. 
The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448
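The simplest way to see how an ensemble can surface targets that individual methods rank differently is rank aggregation. The sketch below uses a plain Borda count over hypothetical ranked target lists; the gene names are illustrative only, and the paper's Pearson+IDA+Lasso combination is more involved than this.

```python
from collections import defaultdict

def borda_ensemble(rankings):
    """Combine several methods' ranked target lists by average rank (Borda count).

    rankings: list of lists, each ordered from strongest to weakest prediction.
    Targets missing from a list get the worst rank for that list."""
    targets = {t for r in rankings for t in r}
    scores = defaultdict(float)
    for ranking in rankings:
        worst = len(ranking)                 # rank assigned to unlisted targets
        pos = {t: i for i, t in enumerate(ranking)}
        for t in targets:
            scores[t] += pos.get(t, worst)
    return sorted(targets, key=lambda t: scores[t])

# hypothetical top-ranked targets from three methods (e.g. a correlation method,
# a causal inference method, and a regression method); names are made up
pearson = ["geneA", "geneB", "geneC", "geneD"]
ida     = ["geneB", "geneA", "geneD", "geneC"]
lasso   = ["geneA", "geneC", "geneB", "geneE"]
combined = borda_ensemble([pearson, ida, lasso])
```

A target that is consistently ranked well by most methods rises to the top of the combined list even if no single method ranked it first, which mirrors the complementarity the abstract reports.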
Improving Non-Destructive Concrete Strength Tests Using Support Vector Machines
Shih, Yi-Fan; Wang, Yu-Ren; Lin, Kuo-Liang; Chen, Chin-Wen
2015-01-01
Non-destructive testing (NDT) methods are important alternatives when destructive tests are not feasible to examine in situ concrete properties without damaging the structure. The rebound hammer test and the ultrasonic pulse velocity test are two popular NDT methods for examining the properties of concrete. The rebound of the hammer depends on the hardness of the test specimen, while the ultrasonic pulse travelling speed is related to the density, uniformity, and homogeneity of the specimen. Both methods have been adopted to estimate concrete compressive strength. Statistical analysis has been implemented to establish the relationship between hammer rebound values/ultrasonic pulse velocities and concrete compressive strength; however, the estimated results can be unreliable. As a result, this research proposes an artificial intelligence model using support vector machines (SVMs) for the estimation. Data from 95 concrete cylinder samples were collected to develop and validate the model. The results show that combined NDT methods (also known as the SonReb method) yield better estimations than single NDT methods. The results also show that the SVMs model is more accurate than the statistical regression model. PMID:28793627
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
In this study, the classical Welander's oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes. A theoretical stability map was also originally derived from the stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate, with the high-order methods giving much smaller numerical errors. For stability analysis, the high-order numerical methods could perfectly predict the stability map, while the low-order numerical methods failed to do so: for all theoretically unstable cases, the low-order methods predicted them to be stable. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating the natural circulation phenomena that have gained increasing interest in many future nuclear reactor designs.
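The gap between low- and high-order methods on a neutrally stable oscillation is easy to reproduce on a toy problem. The sketch below (not Welander's loop equations) compares forward Euler with classical RK4 on y'' = -y: Euler's spurious amplification of the oscillation mimics the kind of wrong stability prediction described above.

```python
import numpy as np

def simulate(f, y0, h, steps, method):
    """Integrate y' = f(y) with forward Euler (1st order) or classical RK4 (4th order)."""
    y = np.array(y0, float)
    out = [y.copy()]
    for _ in range(steps):
        if method == "euler":
            y = y + h * f(y)
        else:                               # classical Runge-Kutta 4
            k1 = f(y)
            k2 = f(y + 0.5 * h * k1)
            k3 = f(y + 0.5 * h * k2)
            k4 = f(y + h * k3)
            y = y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return np.array(out)

# marginally stable oscillator y'' = -y, a toy stand-in for a neutrally
# stable natural-circulation mode; exact solution y = cos t
f = lambda y: np.array([y[1], -y[0]])
h, steps = 0.1, 200
t = np.linspace(0.0, h * steps, steps + 1)
euler = simulate(f, [1.0, 0.0], h, steps, "euler")
rk4 = simulate(f, [1.0, 0.0], h, steps, "rk4")
err_euler = np.max(np.abs(euler[:, 0] - np.cos(t)))
err_rk4 = np.max(np.abs(rk4[:, 0] - np.cos(t)))
```

Forward Euler multiplies the amplitude by sqrt(1 + h^2) every step, so a neutrally stable mode appears to grow, while RK4 tracks it to within a tiny error at the same step size.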
Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice
2014-05-07
Over the last decades a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimates, but to our knowledge a real validation has never been completely successful, because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) and to confirm the results by comparing contact forces recalculated using inverse dynamics with those obtained from a force plate. Furthermore, the results are compared with the estimation method recently proposed by Dumas et al. (2007), and are cross-validated with a high-velocity overarm throwing movement. Across all conditions, higher correlations, smaller metrics and smaller RMSE are found for the proposed BSIP identification method (IM), which shows its advantage over recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that it can be of significant advantage compared to conventional methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
A comparative study of cultural methods for the detection of Salmonella in feed and feed ingredients
Koyuncu, Sevinc; Haggblom, Per
2009-01-01
Background Animal feed as a source of infection to food producing animals is much debated. In order to increase our present knowledge about possible feed transmission, it is important to know that the present isolation methods for Salmonella are reliable also for feed materials. In a comparative study, the ability of the standard method used for isolation of Salmonella in feed in the Nordic countries, the NMKL71 method (Nordic Committee on Food Analysis), was compared to the Modified Semisolid Rappaport Vassiliadis method (MSRV) and the international standard method (EN ISO 6579:2002). Five different feed materials were investigated, namely wheat grain, soybean meal, rape seed meal, palm kernel meal and pellets of pig feed, as well as scrapings from a feed mill elevator. Four different levels of the Salmonella serotypes S. Typhimurium, S. Cubana and S. Yoruba were added to each feed material, respectively. For all methods, pre-enrichment in Buffered Peptone Water (BPW) was carried out, followed by enrichment in the different selective media and finally plating on selective agar media. Results The results obtained with all three methods showed no differences in detection levels, with an accuracy and sensitivity of 65% and 56%, respectively. However, Müller-Kauffmann tetrathionate-novobiocin broth (MKTTn) performed less well, due to many false-negative results on Brilliant Green agar (BGA) plates. Compared to other feed materials, palm kernel meal showed a higher detection level with all serotypes and methods tested. Conclusion The results of this study showed that the accuracy, sensitivity and specificity of the investigated cultural methods were equivalent. However, the detection levels for different feed and feed ingredients varied considerably. PMID:19192298
[Optimization of cluster analysis based on drug resistance profiles of MRSA isolates].
Tani, Hiroya; Kishi, Takahiko; Gotoh, Minehiro; Yamagishi, Yuka; Mikamo, Hiroshige
2015-12-01
We examined 402 methicillin-resistant Staphylococcus aureus (MRSA) strains isolated from clinical specimens in our hospital between November 19, 2010 and December 27, 2011 to evaluate the similarity between cluster analysis of drug susceptibility tests and pulsed-field gel electrophoresis (PFGE). The 402 strains tested were classified into 27 PFGE patterns (151 subtypes). Cluster analyses of drug susceptibility tests, with the cut-off distance chosen to yield a similar classification capability, showed favorable results. When the MIC method was used, in which minimum inhibitory concentration (MIC) values enter the analysis directly, the level of agreement with PFGE was 74.2% with 15 drugs tested; the Unweighted Pair Group Method with Arithmetic mean (UPGMA) was effective with a cut-off distance of 16. Using the SIR method, in which susceptible (S), intermediate (I), and resistant (R) were coded as 0, 2, and 3, respectively, according to the Clinical and Laboratory Standards Institute (CLSI) criteria, the level of agreement with PFGE was 75.9% when 17 drugs were tested, UPGMA was used for clustering, and the cut-off distance was 3.6. In addition, to assess reproducibility, 10 strains were randomly sampled from the overall test set and subjected to cluster analysis; this was repeated 100 times under the same conditions. The results indicated good reproducibility, with the level of agreement with PFGE showing a mean of 82.0%, standard deviation of 12.1%, and mode of 90.0% for the MIC method, and a mean of 80.0%, standard deviation of 13.4%, and mode of 90.0% for the SIR method. In summary, cluster analysis of drug susceptibility tests is useful for the epidemiological analysis of MRSA.
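Hierarchical clustering of susceptibility profiles with a fixed cut-off distance, as described above, can be sketched with standard tools: SciPy's average-linkage clustering is UPGMA. The MIC matrix below is a hypothetical stand-in (not the study's data), included only to show the mechanics of cutting the dendrogram at a distance threshold:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical log2 MIC values for 6 isolates x 4 drugs (illustrative only)
mics = np.array([
    [1, 1, 2, 0],
    [1, 1, 2, 1],
    [5, 6, 7, 4],
    [5, 6, 7, 5],
    [1, 2, 2, 0],
    [5, 5, 7, 4],
], dtype=float)

# UPGMA = average-linkage agglomerative clustering on Euclidean distances
Z = linkage(mics, method="average", metric="euclidean")

# Cut the dendrogram at a fixed cophenetic distance (cf. the paper's cut-off)
clusters = fcluster(Z, t=3.0, criterion="distance")
```

With these values, isolates 0, 1 and 4 fall in one cluster and isolates 2, 3 and 5 in another.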
Network-Induced Classification Kernels for Gene Expression Profile Analysis
Dror, Gideon; Shamir, Ron
2012-01-01
Abstract Computational classification of gene expression profiles into distinct disease phenotypes has been highly successful to date. Still, robustness, accuracy, and biological interpretation of the results have been limited, and it has been suggested that using protein interaction information jointly with the expression profiles can improve the results. Here, we study three aspects of this problem. First, we show that interactions are indeed relevant, by showing that co-expressed genes tend to be closer in the network of interactions. Second, we show that the improved performance of one extant method utilizing expression and interactions is not actually due to the biological information in the network, while for another method it is. Finally, we develop a new kernel method—called NICK—that integrates network and expression data for SVM classification, and demonstrate that overall it achieves better results than extant methods while running two orders of magnitude faster. PMID:22697242
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, while the Bayesian method is the statistical approach used to fit the mixture model. The Bayesian method is widely used because its asymptotic properties provide remarkable results. In addition, the Bayesian method shows consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion; identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then used to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
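Choosing the number of mixture components by the Bayesian Information Criterion can be sketched as follows. This uses scikit-learn's GaussianMixture (EM fitting rather than a full Bayesian posterior) on synthetic two-regime data, purely as an illustration of BIC-based model choice, not the paper's actual price data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic sample from two well-separated components
# (hypothetical stand-ins for two price regimes)
x = np.concatenate([rng.normal(0, 1, 300),
                    rng.normal(10, 1, 300)]).reshape(-1, 1)

# Fit k-component mixtures and record the BIC for each k
bics = {}
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(x)
    bics[k] = gm.bic(x)

# BIC penalizes extra parameters; the minimum indicates the preferred k
best_k = min(bics, key=bics.get)
```

For this clearly bimodal sample, the criterion selects two components.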
NASA Astrophysics Data System (ADS)
Dupuis, Hélène; Weill, Alain; Katsaros, Kristina; Taylor, Peter K.
1995-10-01
Heat flux estimates obtained using the inertial dissipation method, and the profile method applied to radiosonde soundings, are assessed with emphasis on the parameterization of the roughness lengths for temperature and specific humidity. Results from the inertial dissipation method show a decrease of the temperature and humidity roughness lengths with increasing neutral wind speed, in agreement with previous studies. The sensible heat flux estimates were obtained using the temperature derived from the speed of sound determined by a sonic anemometer. This method seems very attractive for estimating heat fluxes over the ocean. However, allowance must be made in the inertial dissipation method for non-neutral stratification. The SOFIA/ASTEX and SEMAPHORE results show that, in unstable stratification, a term due to the transport terms in the turbulent kinetic energy budget has to be included in order to determine the friction velocity with better accuracy. Using the profile method with radiosonde data, the roughness length values showed large scatter. A reliable estimate of the temperature roughness length could not be obtained. The humidity roughness length values were compatible with those found using the inertial dissipation method.
Hageman, Philip L.; Seal, Robert R.; Diehl, Sharon F.; Piatak, Nadine M.; Lowers, Heather
2015-01-01
A comparison study of selected static leaching and acid–base accounting (ABA) methods using a mineralogically diverse set of 12 modern-style, metal mine waste samples was undertaken to understand the relative performance of the various tests. To complement this study, in-depth mineralogical studies were conducted in order to elucidate the relationships between sample mineralogy, weathering features, and leachate and ABA characteristics. In part one of the study, splits of the samples were leached using six commonly used leaching tests including paste pH, the U.S. Geological Survey (USGS) Field Leach Test (FLT) (both 5-min and 18-h agitation), the U.S. Environmental Protection Agency (USEPA) Method 1312 SPLP (both leachate pH 4.2 and leachate pH 5.0), and the USEPA Method 1311 TCLP (leachate pH 4.9). Leachate geochemical trends were compared in order to assess differences, if any, produced by the various leaching procedures. Results showed that the FLT (5-min agitation) was just as effective as the 18-h leaching tests in revealing the leachate geochemical characteristics of the samples. Leaching results also showed that the TCLP leaching test produces inconsistent results when compared to results produced from the other leaching tests. In part two of the study, the ABA was determined on splits of the samples using both well-established traditional static testing methods and a relatively quick, simplified net acid–base accounting (NABA) procedure. Results showed that the traditional methods, while time consuming, provide the most in-depth data on both the acid generating, and acid neutralizing tendencies of the samples. However, the simplified NABA method provided a relatively fast, effective estimation of the net acid–base account of the samples. 
Overall, this study showed that while most of the well-established methods are useful and effective, the use of a simplified leaching test and the NABA acid–base accounting method provide investigators fast, quantitative tools that can be used to provide rapid, reliable information about the leachability of metals and other constituents of concern, and the acid-generating potential of metal mining waste.
CNV-seq, a new method to detect copy number variation using high-throughput sequencing.
Xie, Chao; Tammi, Martti T
2009-03-06
DNA copy number variation (CNV) has been recognized as an important source of genetic variation. Array comparative genomic hybridization (aCGH) is commonly used for CNV detection, but the microarray platform has a number of inherent limitations. Here, we describe a method to detect copy number variation using shotgun sequencing, CNV-seq. The method is based on a robust statistical model that describes the complete analysis procedure and allows the computation of essential confidence values for detection of CNV. Our results show that the number of reads, not their length, is the key factor determining the resolution of detection. This favors next-generation sequencing methods that rapidly produce large amounts of short reads. Simulations of various sequencing methods with coverage between 0.1x and 8x show overall specificity between 91.7% and 99.9%, and sensitivity between 72.2% and 96.5%. We also show the results for assessment of CNV between two individual human genomes.
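The core of a window-based CNV comparison (without CNV-seq's full statistical model and confidence values) can be sketched as: count mapped reads per window in each genome, normalise by library size, and take the log2 ratio. The function name, window scheme and pseudo-count below are illustrative assumptions, not CNV-seq's exact implementation:

```python
import math

def cnv_log2_ratios(pos_x, pos_y, window, genome_len):
    """Log2 ratio of normalised read counts per non-overlapping window.

    pos_x, pos_y: read start positions from the two genomes being compared.
    """
    n_win = genome_len // window
    counts_x = [0] * n_win
    counts_y = [0] * n_win
    for p in pos_x:
        if p < n_win * window:
            counts_x[p // window] += 1
    for p in pos_y:
        if p < n_win * window:
            counts_y[p // window] += 1
    total_x, total_y = sum(counts_x), sum(counts_y)
    ratios = []
    for cx, cy in zip(counts_x, counts_y):
        # Normalise by library size; small pseudo-count avoids division by zero
        r = ((cx + 0.5) / total_x) / ((cy + 0.5) / total_y)
        ratios.append(math.log2(r))
    return ratios
```

A positive log2 ratio in a window suggests a relative copy number gain in the first genome, a negative one a relative loss.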
Spiric, Aurelija; Trbovic, Dejana; Vranic, Danijela; Djinovic, Jasna; Petronijevic, Radivoj; Matekalo-Sverak, Vesna
2010-07-05
Studies of lipid extraction from animal and fish tissues do not provide information on its influence on the fatty acid composition of the extracted lipids or on cholesterol content. Data presented in this paper indicate the impact of extraction procedures on the fatty acid profile of fish lipids extracted by the modified Soxhlet and ASE (accelerated solvent extraction) procedures. Cholesterol was also determined by the direct saponification method. Student's paired t-test, used to compare the total fat content in the carp population obtained by the two extraction methods, shows that the differences between the total fat content determined by ASE and by the modified Soxhlet method are not statistically significant. Values obtained by three different methods (direct saponification, ASE and modified Soxhlet), used to determine the cholesterol content in carp, were compared by one-way analysis of variance (ANOVA). The results show that the modified Soxhlet method gives results which differ significantly from those obtained by direct saponification and ASE, whereas the results obtained by direct saponification and ASE do not differ significantly from each other. The highest cholesterol contents (37.65 to 65.44 mg/100 g) in the analyzed fish muscle were obtained by the direct saponification method, as the less destructive one, followed by ASE (34.16 to 52.60 mg/100 g) and the modified Soxhlet extraction method (10.73 to 30.83 mg/100 g). The modified Soxhlet method for extraction of fish lipids gives higher values for n-6 fatty acids than the ASE method (t(paired)=3.22, t(c)=2.36), while there is no statistically significant difference in the n-3 content between the methods (t(paired)=1.31). The UNSFA/SFA ratio obtained using the modified Soxhlet method is also higher than that obtained using the ASE method (t(paired)=4.88, t(c)=2.36). 
Results of Principal Component Analysis (PCA) showed that the highest positive impact to the second principal component (PC2) is recorded by C18:3 n-3, and C20:3 n-6, being present in a higher amount in the samples treated by the modified Soxhlet extraction, while C22:5 n-3, C20:3 n-3, C22:1 and C20:4, C16 and C18 negatively influence the score values of the PC2, showing significantly increased level in the samples treated by ASE method. Hotelling's paired T-square test used on the first three principal components for confirmation of differences in individual fatty acid content obtained by ASE and Soxhlet method in carp muscle showed statistically significant difference between these two data sets (T(2)=161.308, p<0.001). Copyright 2010 Elsevier B.V. All rights reserved.
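A paired t-test of the kind used above to compare total fat content by the two extraction methods can be sketched with SciPy. The sample values here are hypothetical stand-ins constructed for illustration, not the paper's measurements:

```python
from scipy import stats

# Hypothetical total-fat values (%) for the same carp samples by two methods
soxhlet = [5.1, 6.3, 4.8, 7.0, 5.5, 6.1, 4.9, 5.8]
ase     = [5.0, 6.4, 4.7, 7.1, 5.4, 6.2, 5.0, 5.7]

# Paired test: each sample is measured by both methods, so the test
# operates on the per-sample differences
t_stat, p_value = stats.ttest_rel(soxhlet, ase)

# Fail to reject H0 at alpha = 0.05 -> no significant method difference
significant = p_value < 0.05
```

Because the per-sample differences here average to zero, the test statistic is essentially zero and the difference is not significant, mirroring the paper's conclusion for total fat.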
Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.
Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui
2018-02-01
In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most of the multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method aiming to capture the correlations among multiple concepts by leveraging hypergraph that is proved to be beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better show the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.
The intervals method: a new approach to analyse finite element outputs using multivariate statistics
De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep
2017-01-01
Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107
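The variable generation step of the intervals' method described above, the percentage of mandible area whose stress falls in each interval, can be sketched as a weighted histogram. The function name, inputs and bin edges are illustrative assumptions:

```python
import numpy as np

def interval_variables(stress, area, edges):
    """Percent of total area falling in each stress interval.

    stress: per-element stress values from a finite element model;
    area:   per-element area; edges: interval boundaries (ascending).
    """
    stress = np.asarray(stress, dtype=float)
    area = np.asarray(area, dtype=float)
    totals = np.zeros(len(edges) - 1)
    idx = np.digitize(stress, edges) - 1      # interval index for each element
    for i, a in zip(idx, area):
        if 0 <= i < len(totals):              # ignore values outside the range
            totals[i] += a
    return 100.0 * totals / area.sum()
```

The resulting percentages, one variable per interval and one row per specimen, can then be fed into standard multivariate analyses such as PCA or discriminant analysis.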
Non destructive testing of works of art by terahertz analysis
NASA Astrophysics Data System (ADS)
Bodnar, Jean-Luc; Metayer, Jean-Jacques; Mouhoubi, Kamel; Detalle, Vincent
2013-11-01
Improvements in technology and growing security needs in airport terminals have led to the development of non-destructive testing devices using terahertz waves. These waves have the advantage of being relatively penetrating while not being ionizing, and are thus a potentially interesting contribution to the non-destructive testing field. With the help of the VISIOM Company, the possibilities of this new industrial analysis method for assisting the restoration of works of art were investigated. The results obtained within this framework are presented here and compared with those obtained by infrared thermography. They show, first, that the THz method, like stimulated infrared thermography, allows the detection of delamination in mural paintings or in marquetry. They then show that the THz method seems able to detect defects located relatively deep (10 mm) and defects potentially concealed by other defects, an advantage over stimulated infrared thermography, which cannot obtain these results. Furthermore, the THz method does not seem sensitive to the various pigments constituting the pictorial layer, to the presence of a layer of "Japan paper", or to the presence of a layer of whitewash; this is not the case for stimulated infrared thermography, and is another advantage of the THz method. Finally, the THz method is limited in the detection of small defects, a disadvantage compared to stimulated infrared thermography.
Advanced Guidance and Control Methods for Reusable Launch Vehicles: Test Results
NASA Technical Reports Server (NTRS)
Hanson, John M.; Jones, Robert E.; Krupp, Don R.; Fogle, Frank R. (Technical Monitor)
2002-01-01
There are a number of approaches to advanced guidance and control (AG&C) that have the potential for achieving the goals of significantly increasing reusable launch vehicle (RLV) safety/reliability and reducing the cost. In this paper, we examine some of these methods and compare the results. We briefly introduce the various methods under test, list the test cases used to demonstrate that the desired results are achieved, show an automated test scoring method that greatly reduces the evaluation effort required, and display results of the tests. Results are shown for the algorithms that have entered testing so far.
Topological soliton solutions for three shallow water waves models
NASA Astrophysics Data System (ADS)
Liu, Jiangen; Zhang, Yufeng; Wang, Yan
2018-07-01
In this article, we investigate three distinct physical structures of shallow water wave models using an improved ansatz method. The improvement allows more general topological soliton solutions to be obtained than with the original method. As a result, some new exact solutions of the shallow water equations are successfully established, and the obtained results are exhibited graphically. The results show that the improved ansatz method can be applied to other nonlinear differential equations arising in mathematical physics.
Research on the calibration methods of the luminance parameter of radiation luminance meters
NASA Astrophysics Data System (ADS)
Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei
2017-10-01
This paper introduces the standard diffuse-reflection white plate method and the integrating sphere standard luminance source method for calibrating the luminance parameter, and compares the calibration results of the two methods through principle analysis and experimental verification. After calibrating the same radiation luminance meter with both methods, the data obtained verify that the test results of the two methods are both reliable. The results show that the standard white plate method gives displayed values with smaller errors and better reproducibility, whereas the standard luminance source method is more convenient and suitable for on-site calibration; moreover, it has a wider range and can test the linearity of the instruments.
Fernández-Pérez, Rocío; Torres, Carmen; Sanz, Susana; Ruiz-Larrea, Fernanda
2010-12-01
Strain typing of 103 acetic acid bacteria isolates from vinegars produced by the submerged method from ciders, wines and spirit ethanol was carried out in this study. Two different molecular methods were used: pulsed field gel electrophoresis (PFGE) of total DNA digested with a number of restriction enzymes, and enterobacterial repetitive intergenic consensus (ERIC)-PCR analysis. The comparative study of both methods showed that PFGE of SpeI digests of total DNA was a suitable method for strain typing and for determining which strains were present in vinegar fermentations. Results showed that strains of the species Gluconacetobacter europaeus were the most frequent leader strains of fermentations by the submerged method in the studied vinegars, and among them strain R1 was the predominant one. Results also showed that mixed populations (at least two different strains) occurred in vinegars from cider and wine, whereas unique strains were found in spirit vinegars, which offered the most stressful conditions for bacterial growth. Copyright © 2010 Elsevier Ltd. All rights reserved.
Anchoring quartet-based phylogenetic distances and applications to species tree reconstruction.
Sayyari, Erfan; Mirarab, Siavash
2016-11-11
Inferring species trees from gene trees using coalescent-based summary methods has been the subject of much attention, yet new scalable and accurate methods are needed. We introduce DISTIQUE, a new statistically consistent summary method for inferring species trees from gene trees under the coalescent model. We generalize our results to arbitrary phylogenetic inference problems; we show that two arbitrarily chosen leaves, called anchors, can be used to estimate relative distances between all other pairs of leaves by inferring relevant quartet trees. This results in a family of distance-based tree inference methods, with running times ranging from quadratic to quartic in the number of leaves. We show in simulated studies that DISTIQUE has accuracy comparable to leading coalescent-based summary methods and reduced running times.
Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA
2008-05-13
A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA
2012-03-06
A method of displaying correlations among information objects includes receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
Universal multiplex PCR and CE for quantification of SMN1/SMN2 genes in spinal muscular atrophy.
Wang, Chun-Chi; Chang, Jan-Gowth; Jong, Yuh-Jyh; Wu, Shou-Mei
2009-04-01
We established a universal multiplex PCR and CE method to calculate the copy number of the survival motor neuron (SMN1 and SMN2) genes for clinical screening of spinal muscular atrophy (SMA). In this study, one universal fluorescent primer was designed and applied for multiplex PCR of SMN1, SMN2 and two internal standards (CYBB and KRIT1). These amplicons were separated by conformation-sensitive CE. A mixture of hydroxyethyl cellulose and hydroxypropyl cellulose was used in this CE system. Our method is able to separate two 390-bp PCR products that differ by a single nucleotide. Differentiation and quantification of SMN1 and SMN2 are essential for clinical screening of SMA patients and carriers. The DNA samples included 22 SMA patients, 45 parents of SMA patients (obligatory carriers) and 217 controls. To evaluate accuracy, the 284 samples were blind-analyzed by this method and by denaturing high performance liquid chromatography (DHPLC). Eight of the samples showed different results. Among them, two samples were diagnosed as having only the SMN2 gene by DHPLC but contained both SMN1 and SMN2 by our method; these were further confirmed by DNA sequencing, and our method showed good agreement with the sequencing results. Multiplex ligation-dependent probe amplification (MLPA) was used to confirm the other five samples and showed the same results as our CE method. For only one sample did our CE method differ from MLPA and DNA sequencing; thus one out of 284 samples (0.35%) was mismatched. Our method provides a more accurate and convenient approach for clinical genotyping of SMA.
Wang, Hsiaoling; Levi, Mark S; Del Grosso, Alfred V; McCormick, William M; Bhattacharyya, Lokesh
2017-05-10
Size exclusion (SE) high performance liquid chromatography (HPLC) is widely used for the molecular size distribution (MSD) analyses of various therapeutic proteins. We report development and validation of a SE-HPLC method for MSD analyses of immunoglobulin G (IgG) in products using a TSKgel SuperSW3000 column and eluting it with 0.4 M NaClO4, a chaotropic salt, in 40 mM phosphate buffer, pH 6.8. The chromatograms show distinct peaks of aggregates, tetramer, and two dimers, as well as the monomer and fragment peaks. In addition, the method offers about half the run time (12 min), better peak resolution, improved peak shape and a more stable baseline compared to HPLC methods reported in the literature, including that in the European Pharmacopoeia (EP). A comparison of MSD analysis results between our method and the EP method shows interactions between the protein and the stationary phase and partial adsorption of aggregates and tetramer on the stationary phase when the latter method is used. Thus, the EP method shows a lower percentage of aggregates and tetramer than are actually present in the products. In view of the fact that aggregates have been attributed a critical role in adverse reactions to IgG products, our observation raises a major concern regarding the actual aggregate content in these products, since the EP method is widely used for MSD analyses of IgG products. Our method eliminates (or substantially reduces) the interactions between the proteins and the stationary phase as well as the adsorption of proteins onto the column. Our results also show that NaClO4 in the eluent is more effective in overcoming the protein/column interactions than Arg-HCl, another chaotropic salt. NaClO4 is shown not to affect the molecular size and relative distribution of different molecular forms of IgG. 
The method validated as per ICH Q2(R1) guideline using IgG products, shows good specificity, accuracy, precision and a linear concentration dependence of peak areas for different molecular forms. In summary, our method gives more reliable results than the SE-HPLC methods for MSD analyses of IgG reported in the literature, including the EP, particularly for aggregates and tetramer. The results are interpreted in terms of ionic (polar) and hydrophobic interactions between the stationary phase and the IgG protein. Published by Elsevier B.V.
Takács, Péter
2016-01-01
We compared the repeatability, reproducibility (intra- and inter-measurer similarity), separative power and subjectivity (measurer effect on results) of four morphometric methods frequently used in ichthyological research: the “traditional” caliper-based (TRA) and truss-network (TRU) distance methods, and two geometric methods that compare landmark coordinates on the body (GMB) and scales (GMS). In each case, measurements were performed three times by three measurers on the same specimens of three common cyprinid species (roach Rutilus rutilus (Linnaeus, 1758), bleak Alburnus alburnus (Linnaeus, 1758) and Prussian carp Carassius gibelio (Bloch, 1782)) collected from three closely-situated sites in the Lake Balaton catchment (Hungary) in 2014. TRA measurements were made on conserved specimens using a digital caliper, while TRU, GMB and GMS measurements were undertaken on digital images of the bodies and scales. In most cases, intra-measurer repeatability was similar. While all four methods were able to differentiate the source populations, significant differences were observed in their repeatability, reproducibility and subjectivity. GMB displayed the highest overall repeatability and reproducibility and was least burdened by measurer effect. While GMS showed similar repeatability to GMB when fish scales had a characteristic shape, it showed significantly lower reproducibility (compared with its repeatability) for each species than the other methods. TRU showed similar repeatability to GMS. TRA was the least applicable method, as measurements were obtained from the fish itself, resulting in poor repeatability and reproducibility. Although all four methods showed some degree of subjectivity, TRA was the only method where population-level detachment was entirely overwritten by measurer effect. 
Based on these results, we recommend a) avoidance of aggregating different measurer’s datasets when using TRA and GMS methods; and b) use of image-based methods for morphometric surveys. Automation of the morphometric workflow would also reduce any measurer effect and eliminate measurement and data-input errors. PMID:27327896
Multiresolution generalized N dimension PCA for ultrasound image denoising
2014-01-01
Background Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining better diagnosis. Method In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA as a multilinear subspace learning method is used for denoising. Each level is combined to achieve the final denoised image based on Laplacian pyramids. Results The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving the structure. Our method is also robust for the image with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
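The MSE and PSNR quality metrics used above to evaluate denoising performance can be sketched as follows (assuming 8-bit images with a peak value of 255):

```python
import numpy as np

def mse_psnr(clean, denoised, max_val=255.0):
    """MSE and PSNR (dB) between a reference image and its denoised estimate."""
    clean = np.asarray(clean, dtype=float)
    denoised = np.asarray(denoised, dtype=float)
    mse = np.mean((clean - denoised) ** 2)       # mean squared error
    # PSNR grows as the error shrinks; infinite for a perfect reconstruction
    psnr = 10.0 * np.log10(max_val ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr
```

Higher PSNR (and lower MSE) against a clean reference indicates better noise suppression, which is how the synthetic speckle experiments are scored.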
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
2016-11-16
In this study, the classical Welander oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes, and a theoretical stability map was originally derived from the stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate, with the high-order methods giving much smaller numerical errors than the low-order ones. For stability analysis, the high-order numerical methods predicted the stability map perfectly, while the low-order numerical methods failed to do so: all theoretically unstable cases were predicted to be stable by the low-order methods. The results obtained in this paper are strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating the natural circulation phenomena that have gained increasing interest in many future nuclear reactor designs.
Selecting foils for identification lineups: matching suspects or descriptions?
Tunnicliff, J L; Clark, S E
2000-04-01
Two experiments directly compare two methods of selecting foils for identification lineups. The suspect-matched method selects foils based on their match to the suspect, whereas the description-matched method selects foils based on their match to the witness's description of the perpetrator. Theoretical analyses and previous results predict an advantage for description-matched lineups both in terms of correctly identifying the perpetrator and minimizing false identification of innocent suspects. The advantage for description-matched lineups should be particularly pronounced if the foils selected in suspect-matched lineups are too similar to the suspect. In Experiment 1, the lineups were created by trained police officers, and in Experiment 2, the lineups were constructed by undergraduate college students. The results of both experiments showed higher suspect-to-foil similarity for suspect-matched lineups than for description-matched lineups. However, neither experiment showed a difference in correct or false identification rates. Both experiments did, however, show that there may be an advantage for suspect-matched lineups in terms of no-pick and rejection responses. From these results, the endorsement of one method over the other seems premature.
A modified form of conjugate gradient method for unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa
2016-06-01
Conjugate gradient (CG) methods have been recognized as an interesting technique for solving optimization problems, due to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aus. J. Bas. Appl. Sci. 5 (2011) 947-951). We then show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed method is efficient on standard test problems compared with other existing CG methods.
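The CG iteration the abstract refers to can be sketched as follows. Since the paper's specific coefficient is not reproduced here, this illustrative Python sketch uses the classical Fletcher-Reeves and PRP+ coefficients with an Armijo backtracking line search; all function names are our own, not the paper's.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule="PRP+", max_iter=500, tol=1e-6):
    """Generic nonlinear CG sketch: d_{k+1} = -g_{k+1} + beta_k * d_k."""
    def backtracking(x, d, g):
        # Armijo backtracking line search for sufficient decrease
        alpha, c, rho, fx = 1.0, 1e-4, 0.5, f(x)
        while f(x + alpha * d) > fx + c * alpha * g.dot(d) and alpha > 1e-12:
            alpha *= rho
        return alpha

    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:          # safeguard: restart if not a descent direction
            d = -g
        alpha = backtracking(x, d, g)
        x = x + alpha * d
        g_new = grad(x)
        if beta_rule == "FR":      # Fletcher-Reeves coefficient
            beta = g_new.dot(g_new) / g.dot(g)
        else:                      # PRP+ (non-negative Polak-Ribiere-Polyak)
            beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))
        d = -g_new + beta * d
        g = g_new
    return x
```

The restart safeguard guarantees a descent direction even when the inexact line search would otherwise break the sufficient descent property that the paper establishes analytically for its coefficient.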
Ensemble Methods for MiRNA Target Prediction from Expression Data.
Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong
2015-01-01
microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of these methods is developed under certain assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to inconsistent performance across different datasets. Ensemble methods, on the other hand, integrate the results from individual methods and in theory outperform each of their component methods. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods that integrate the results from each combination of the individual methods. Validation against experimentally confirmed databases shows that the results of the ensemble methods complement those obtained by the individual methods, and that the ensemble methods perform better than the individual methods across different datasets. The ensemble method Pearson+IDA+Lasso, which combines methods from different approaches (a correlation method, a causal inference method, and a regression method), is the best-performing ensemble method in this study. Further analysis shows that this ensemble method can obtain targets that could not be found by any of the single methods, and that the discovered targets are more statistically significant and functionally enriched.
The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials.
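The idea of integrating per-method predictions can be illustrated with a simple rank-aggregation (Borda) scheme. This is a generic sketch, not the paper's Pearson+IDA+Lasso pipeline; the function name and data layout are illustrative assumptions.

```python
import numpy as np

def borda_ensemble(score_dict):
    """Combine per-method target scores by summing ranks (Borda count).

    score_dict maps method name -> {gene: score}, higher score = stronger
    predicted target. Returns genes ordered from strongest to weakest
    ensemble support. Genes missing from a method get the lowest rank.
    """
    genes = sorted(set().union(*[set(s) for s in score_dict.values()]))
    ranks = np.zeros(len(genes))
    for scores in score_dict.values():
        vals = np.array([scores.get(g, -np.inf) for g in genes])
        order = vals.argsort()              # ascending score
        r = np.empty(len(genes))
        r[order] = np.arange(len(genes))    # rank 0 = weakest prediction
        ranks += r
    return [g for _, g in sorted(zip(ranks, genes), reverse=True)]
```

A rank-based combination like this sidesteps the problem that the individual methods produce scores on incomparable scales, which is one reason ensembles behave more consistently across datasets.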
Engagement with physics across diverse festival audiences
NASA Astrophysics Data System (ADS)
Roche, Joseph; Stanley, Jessica; Davis, Nicola
2016-07-01
Science shows provide a method of introducing large public audiences to physics concepts in a nonformal learning environment. While these shows have the potential to provide novel means of educational engagement, it is often difficult to measure that engagement. We present a method of producing an interactive physics show that seeks to provide effective and measurable audience engagement. We share our results from piloting this method at a leading music and arts festival as well as a science festival. This method also facilitated the collection of opinions and feedback directly from the audience which helps explore the benefits and limitations of this type of nonformal engagement in physics education.
Hiremath, Mallayya C; Srivastava, Pooja
2016-01-01
The purpose of this in vitro study was to compare four methods of root canal obturation in primary teeth using conventional radiography. A total of 96 root canals of primary molars were prepared and obturated with zinc oxide eugenol. The obturation methods compared were the endodontic pressure syringe, insulin syringe, jiffy tube, and local anesthetic syringe. The root canal obturations were evaluated by conventional radiography for the length of obturation and the presence of voids. The obtained data were analyzed using the Chi-square test. The results showed significant differences between the four groups for the length of obturation (P < 0.05). The endodontic pressure syringe showed the best results (98.5% optimal fillings) and the jiffy tube the poorest results (37.5% optimal fillings) for the length of obturation. The insulin syringe (79.2% optimal fillings) and local anesthetic syringe (66.7% optimal fillings) showed acceptable results for the length of root canal obturation. However, minor voids were present in all four techniques. The endodontic pressure syringe produced the best results in terms of length of obturation and control of paste extrusion from the apical foramen; however, the insulin syringe and local anesthetic syringe can be used as effective alternative methods.
Alcohol Dehydrogenase of Bacillus strain for Measuring Alcohol Electrochemically
NASA Astrophysics Data System (ADS)
Iswantini, D.; Nurhidayat, N.; Ferit, H.
2017-03-01
Alcohol dehydrogenase (ADH) was applied to produce an alcohol biosensor. The enzyme was collected from Bacillus sp. cultured on solid media. Of the 6 isolates tested, bacteria from fermented rice grain (TST.A) showed the highest oxidation current and were therefore used as the bioreceptor. Various ethanol concentrations were measured based on the increase in the maximum oxidation current. However, the current decreased when the ethanol concentration was higher than 5%. Compared with spectrophotometric measurement, the R2 value obtained with the biosensor method was higher, and the proposed method yielded a wider detection range, 0.1-5% ethanol. The results show that the biosensor method has great potential as an alcohol detector in foods and beverages.
Level-set-based reconstruction algorithm for EIT lung images: first clinical results.
Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy
2012-05-01
We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.
The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.
Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie
2016-01-01
In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and a modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results on test problems with dimensions of up to 100,000 variables show that the presented methods are more efficient for large-scale nonsmooth problems.
NASA Astrophysics Data System (ADS)
Chapuis, P.; Montgomery, P. C.; Anstotz, F.; Leong-Hoï, A.; Gauthier, C.; Baschnagel, J.; Reiter, G.; McKenna, G. B.; Rubin, A.
2017-09-01
Glass formation and glassy behavior remain important areas of investigation in soft matter physics, with many aspects still not completely understood, especially at the nanometer size-scale. In the present work, we show an extension of the "nanobubble inflation" method developed by O'Connell and McKenna [Rev. Sci. Instrum. 78, 013901 (2007)], which uses an interferometric method to measure the topography of a large array of 5 μm sized, nanometer-thick films subjected to constant inflation pressures, during which the bubbles grow or creep with time. The interferometric method offers the possibility of making measurements on multiple bubbles at once, as well as having the advantage over the AFM methods of O'Connell and McKenna of being a true non-contact method. Here we demonstrate the method using ultra-thin films of both poly(vinyl acetate) (PVAc) and polystyrene (PS) and discuss the capabilities of the method relative to the AFM method, its advantages and disadvantages. Furthermore, we show that the results from experiments on PVAc are consistent with prior work on PVAc, while high-stress results with PS show signs of a new non-linear response regime that may be related to the plasticity of the ultra-thin film.
NASA Astrophysics Data System (ADS)
Xu, Jing; Liu, Xiaofei; Wang, Yutian
2016-08-01
Parallel factor analysis is a widely used method for extracting qualitative and quantitative information about the analyte of interest from fluorescence excitation-emission matrices containing unknown components. Large-amplitude scattering, however, influences the results of parallel factor analysis, and many methods for eliminating scattering have been proposed, each with its own advantages and disadvantages. Here, the combination of symmetrical subtraction and interpolated values is discussed, where "combination" refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that combining results gives better concentration predictions for all components.
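One common way to suppress scattering before parallel factor analysis is to blank out the scatter band and fill it with interpolated values. The sketch below shows only the interpolation half of the combination discussed, for first-order Rayleigh scatter; the function name, array conventions, and band width are illustrative assumptions.

```python
import numpy as np

def remove_scatter(eem, ex, em, width=10.0):
    """Blank out first-order Rayleigh scatter (emission ~ excitation) in an
    excitation-emission matrix and fill the gap by linear interpolation
    along each emission row.

    Assumed conventions (illustrative): eem[i, j] is the intensity at
    excitation ex[i] and emission em[j], with em sorted in ascending order.
    """
    eem = np.asarray(eem, dtype=float)
    ex = np.asarray(ex, dtype=float)
    em = np.asarray(em, dtype=float)
    out = eem.copy()
    for i, ex_wl in enumerate(ex):
        mask = np.abs(em - ex_wl) < width   # scatter band for this excitation
        if mask.any() and (~mask).sum() >= 2:
            out[i, mask] = np.interp(em[mask], em[~mask], out[i, ~mask])
    return out
```

Interpolation keeps the matrix trilinear-model-friendly, whereas simply zeroing the band introduces an artificial structure that parallel factor analysis would try to fit.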
Examining mixing methods in an evaluation of a smoking cessation program.
Betzner, Anne; Lawrenz, Frances P; Thao, Mao
2016-02-01
Three different methods were used in an evaluation of a smoking cessation study: surveys, focus groups, and phenomenological interviews. The results of each method were analyzed separately and then combined using both a pragmatic and dialectic stance to examine the effects of different approaches to mixing methods. Results show that the further apart the methods are philosophically, the more diverse the findings. Comparisons of decision maker opinions and costs of the different methods are provided along with recommendations for evaluators' uses of different methods. Copyright © 2015. Published by Elsevier Ltd.
Pirogow's Amputation: A Modification of the Operation Method
Bueschges, M.; Muehlberger, T.; Mauss, K. L.; Bruck, J. C.; Ottomann, C.
2013-01-01
Introduction. Pirogow's amputation at the ankle presents a valuable alternative to lower leg amputation for patients with the corresponding indications. Although this method offers the ability to stay mobile without the use of a prosthesis, it is rarely performed. This paper proposes a modification of the operation method of the Pirogow amputation. The results of the modified operation method in ten patients were objectified 12 months after the operation using a patient questionnaire (Ankle Score). Material and Methods. We modified the original method by rotating the calcaneus. To fix the calcaneus to the tibia, Kirschner wire and a 3/0 spongiosa tension screw, as well as a Fixateur externe, were used. Results. 70% of those questioned who were amputated following the modified Pirogow method indicated an excellent or very good total score, whereas in the control group (original Pirogow's amputation) only 40% reported an excellent or very good result. In addition, the level of pain experienced one year after the operation differed in favour of the group operated on with the modified method. Furthermore, patients in the two groups showed differences in radiological results, postoperative leg length difference, and postoperative mobility. Conclusion. The modified Pirogow amputation presents a valuable alternative to the original amputation method for patients with the corresponding indications. The benefits are the significantly reduced pain, the reduction in radiological complications, the increase in mobility without a prosthesis, and the reduction of postoperative leg length difference. PMID:23606976
DBS-LC-MS/MS assay for caffeine: validation and neonatal application.
Bruschettini, Matteo; Barco, Sebastiano; Romantsik, Olga; Risso, Francesco; Gennai, Iulian; Chinea, Benito; Ramenghi, Luca A; Tripodi, Gino; Cangemi, Giuliana
2016-09-01
DBS might be an appropriate microsampling technique for therapeutic drug monitoring of caffeine in infants. Nevertheless, its application presents several issues that still limit its use. This paper describes a validated DBS-LC-MS/MS method for caffeine. The method validation showed a hematocrit dependence. In the analysis of 96 paired plasma and DBS clinical samples, caffeine levels measured in DBS were statistically significantly lower than in plasma, but the observed differences were independent of hematocrit. These results clearly show the need for extensive validation with real-life samples for DBS-based methods. DBS-LC-MS/MS can be considered a good alternative to traditional methods for therapeutic drug monitoring or PK studies in preterm infants.
Acoustic pressure measurement of pulsed ultrasound using acousto-optic diffraction
NASA Astrophysics Data System (ADS)
Jia, Lecheng; Chen, Shili; Xue, Bin; Wu, Hanzhong; Zhang, Kai; Yang, Xiaoxia; Zeng, Zhoumo
2018-01-01
Compared with continuous-wave ultrasound, pulsed ultrasound is widely used in ultrasound imaging. The aim of this work is to show the applicability of acousto-optic diffraction to pulsed ultrasound transducers. In this paper, the acoustic pressure of two ultrasound transducers, with frequencies of 5 MHz and 10 MHz, is measured based on Raman-Nath diffraction. The pulse-echo method and simulation data are used to evaluate the results, which show that the proposed method is capable of measuring the absolute sound pressure. A sectional view of the acoustic pressure was obtained using a displacement platform. Compared with traditional sound pressure measurement methods, the proposed method is non-invasive, with high sensitivity and spatial resolution.
Evaluating the interaction of a tracheobronchial stent in an ovine in-vivo model.
McGrath, Donnacha J; Thiebes, Anja Lena; Cornelissen, Christian G; O'Brien, Barry; Jockenhoevel, Stefan; Bruzzi, Mark; McHugh, Peter E
2018-04-01
Tracheobronchial stents are used to restore patency to stenosed airways. However, these devices are associated with many complications such as stent migration, granulation tissue formation, mucous plugging and stent strut fracture. Of these, granulation tissue formation is the complication that most frequently requires costly secondary interventions. In this study a biomechanical lung modelling framework recently developed by the authors to capture the lung in-vivo stress state under physiological loading is employed in conjunction with ovine pre-clinical stenting results and device experimental data to evaluate the effect of stent interaction on granulation tissue formation. Stenting is simulated using a validated model of a prototype covered laser-cut tracheobronchial stent in a semi-specific biomechanical lung model, and physiological loading is performed. Two computational methods are then used to predict possible granulation tissue formation: the standard method which utilises the increase in maximum principal stress change, and a newly proposed method which compares the change in contact pressure over a respiratory cycle. These computational predictions of granulation tissue formation are then compared to pre-clinical stenting observations after a 6-week implantation period. Experimental results of the pre-clinical stent implantation showed signs of granulation tissue formation both proximally and distally, with a greater proximal reaction. The standard method failed to show a correlation with the experimental results. However, the contact change method showed an apparent correlation with granulation tissue formation. These results suggest that this new method could be used as a tool to improve future device designs.
Cohen, D; Stamnes, S; Tanikawa, T; Sommersten, E R; Stamnes, J J; Lotsberg, J K; Stamnes, K
2013-04-22
A comparison is presented of two different methods for polarized radiative transfer in coupled media consisting of two adjacent slabs with different refractive indices, each slab being a stratified medium with no change in optical properties except in the direction of stratification. One of the methods is based on solving the integro-differential radiative transfer equation for the two coupled slabs using the discrete ordinate approximation. The other method is based on probabilistic and statistical concepts and simulates the propagation of polarized light using the Monte Carlo approach. The emphasis is on non-Rayleigh scattering for particles in the Mie regime. Comparisons with benchmark results available for a slab with constant refractive index show that both methods reproduce these benchmark results when the refractive index is set to be the same in the two slabs. Computed results for test cases with coupling (different refractive indices in the two slabs) show that the two methods produce essentially identical results for identical input in terms of absorption and scattering coefficients and scattering phase matrices.
NASA Astrophysics Data System (ADS)
Li, Cuiping; Yu, Huahua; Feng, Jinhua; Chen, Xiaolin; Li, Pengcheng
2009-02-01
In this study, several methods were compared for their efficiency in concentrating venom from the tentacles of the jellyfish Rhopilema esculentum Kishinouye. The results show that methods using either freeze-drying or gel absorption to remove water are not applicable, due to the low concentration of the dissolved compounds. Although the recovery efficiency and the total venom obtained using the dialysis dehydration method are high, some proteins can be lost during the concentration process. Compared to the lyophilization method, ultrafiltration is a simple way to concentrate the compounds at a high percentage, but the hemolytic activities of the proteins obtained by ultrafiltration appear to be lower. Our results suggest that, overall, lyophilization is the best and recommended method for concentrating venom from jellyfish tentacles: it yields not only high recovery efficiency but also high hemolytic activity.
A robust two-way semi-linear model for normalization of cDNA microarray data
Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento
2005-01-01
Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789
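For reference, the LOWESS-style intensity-dependent normalization against which the TW-SLM is compared works on the MA scale: it fits a smooth trend of the log-ratio M against the average log-intensity A and subtracts it. The sketch below is only illustrative; it substitutes a crude running-mean smoother for a true locally weighted regression, and all names are our own.

```python
import numpy as np

def ma_normalize(red, green, frac=0.3):
    """Intensity-dependent normalization on the MA scale, in the spirit of
    LOWESS normalization (illustrative: a running mean stands in for a
    locally weighted regression).

    red, green: positive channel intensities per spot.
    Returns (normalized M values, A values).
    """
    M = np.log2(red) - np.log2(green)          # log-ratio per spot
    A = 0.5 * (np.log2(red) + np.log2(green))  # average log-intensity
    order = np.argsort(A)
    k = max(3, int(frac * len(M)))             # smoothing window size
    Ms = M[order]
    smooth = np.empty_like(M)
    for j in range(len(M)):
        lo, hi = max(0, j - k // 2), min(len(M), j + k // 2 + 1)
        smooth[order[j]] = Ms[lo:hi].mean()    # local trend around A-rank j
    return M - smooth, A
```

The TW-SLM avoids the key assumption built into this procedure, namely that the local average of M reflects only dye bias and not genuine differential expression.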
Using recurrence plot analysis for software execution interpretation and fault detection
NASA Astrophysics Data System (ADS)
Mosdorf, M.
2015-09-01
This paper presents a method for software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains the executed assembly instructions. The results of this analysis are further processed with PCA (Principal Component Analysis), which reduces the number of coefficients used for software execution classification. The method was used to analyze five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR, and SHA-1. Results show that some of the collected traces could easily be assigned to particular algorithms (logs from the Bubble Sort and FIR algorithms), while others are more difficult to distinguish.
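The core recurrence-plot step can be sketched in a few lines. Here a plain numeric series stands in for the paper's executed-instruction traces, and the threshold eps is an assumed parameter.

```python
import numpy as np

def recurrence_matrix(series, eps):
    """Binary recurrence plot: R[i, j] = 1 iff |x_i - x_j| <= eps.

    Illustrative sketch; in the paper the series would be a numeric
    encoding of executed assembly instructions, and the resulting
    matrix would be summarized (e.g. via PCA) for classification.
    """
    x = np.asarray(series, dtype=float)
    d = np.abs(x[:, None] - x[None, :])  # pairwise distance matrix
    return (d <= eps).astype(int)
```

Periodic or repetitive execution (loops in Bubble Sort, taps in an FIR filter) shows up as diagonal line structure in this matrix, which is what makes such traces easy to assign to their algorithms.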
Research on camera on-orbit radiant calibration based on black body and infrared calibration stars
NASA Astrophysics Data System (ADS)
Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng
2018-05-01
Affected by the launching process and the space environment, the response of a space camera is inevitably attenuated, so an on-orbit radiant calibration is necessary. In this paper, we propose a calibration method based on accurate infrared standard stars to increase the precision of infrared radiation measurement. Because stars can be treated as point targets, we use them as the radiometric calibration source and establish a Taylor expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and verified by an on-orbit test. The experimental results show that the irradiance extrapolation error is about 3% and the accuracy of the calibration method is about 10%, indicating that the method satisfies the requirements of on-orbit calibration.
A method of emotion contagion for crowd evacuation
NASA Astrophysics Data System (ADS)
Cao, Mengxiao; Zhang, Guijuan; Wang, Mengsi; Lu, Dianjie; Liu, Hong
2017-10-01
Current evacuation models do not consider the impact of emotion and personality on crowd evacuation, so there is a large difference between simulated evacuation results and the real-life behavior of a crowd. In order to generate more realistic crowd evacuation results, we present a method of emotion contagion for crowd evacuation. First, we combine the OCEAN (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism) model and the SIS (Susceptible-Infected-Susceptible) model to construct the P-SIS (Personalized SIS) emotional contagion model, which effectively captures the diversity of individuals in a crowd. Second, we couple the P-SIS model with the social force model to simulate the effect of emotional contagion on crowd evacuation. Finally, a photo-realistic rendering method is employed to obtain an animation of the crowd evacuation. Experimental results show that our method can simulate crowd evacuation realistically and offers guidance for crowd evacuation in emergency circumstances.
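A plain SIS update over a crowd contact graph, which the P-SIS model extends with per-agent personality traits, might look like the following sketch; the function name, parameters, and synchronous-update choice are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def sis_step(state, adjacency, beta, gamma, rng):
    """One synchronous SIS update on a contact graph.

    state: 0 = susceptible (calm), 1 = infected (panicked).
    A susceptible agent is infected by each panicked neighbour with
    probability beta; a panicked agent calms down with probability gamma.
    The paper's P-SIS would modulate beta/gamma per agent via OCEAN traits.
    """
    state = np.asarray(state)
    infected = state == 1
    n_inf_neighbors = adjacency @ infected          # panicked contacts per agent
    p_infect = 1.0 - (1.0 - beta) ** n_inf_neighbors
    new_inf = (~infected) & (rng.random(state.size) < p_infect)
    recover = infected & (rng.random(state.size) < gamma)
    out = state.copy()
    out[new_inf] = 1
    out[recover] = 0
    return out
```

Iterating this step while feeding the panic state into the desired-velocity term of a social force model is one simple way to couple the two layers, as the abstract describes.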
Comparison of Manual Refraction Versus Autorefraction in 60 Diabetic Retinopathy Patients
Shirzadi, Keyvan; Shahraki, Kourosh; Yahaghi, Emad; Makateb, Ali; Khosravifard, Keivan
2016-01-01
Aim: The purpose of the study was to compare manual refraction with autorefraction in diabetic retinopathy patients. Material and Methods: The study was conducted at the Be’sat Army Hospital from 2013-2015. Differences between two common refractometry methods (manual refractometry and autorefractometry) in the diagnosis and follow-up of retinopathy in patients with diabetes were investigated. Results: Our results showed a significant difference in the visual acuity scores of patients between manual and autorefractometry. The spherical equivalent scores of the two refractometry methods, by contrast, did not show a statistically significant difference. Conclusion: Although manual refraction is comparable with autorefraction for evaluating spherical equivalent scores in diabetic patients with retinopathy, the visual acuity results from the two methods are not comparable. PMID:27703289
Galy, Bertrand; Lan, André
2018-03-01
Among the many occupational risks construction workers encounter every day, falling from a height is the most dangerous. The objective of this article is to propose a simple analytical design method for horizontal lifelines (HLLs) that considers anchorage flexibility. The article presents a short review of the standards and regulations/acts/codes concerning HLLs in Canada, the USA and Europe. A static analytical approach is proposed that accounts for anchorage flexibility. The analytical results are compared with a series of 42 dynamic fall tests and a SAP2000 numerical model. The experimental results show that the analytical method is slightly conservative and overestimates the line tension in most cases, by a maximum of 17%. The static SAP2000 results show a maximum 2.1% difference from the analytical method. The analytical method is accurate enough to safely design HLLs, and quick design abaci are provided to allow the engineer to make quick on-site verifications if needed.
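As a point of reference for the statics involved, the tension in a lifeline with rigid anchors and a single midspan point load follows from vertical force balance at the load point. The sketch below implements only this textbook case and, unlike the article's method, ignores anchorage flexibility; the function name and parameters are illustrative.

```python
import math

def midspan_line_tension(span, sag, load):
    """Static tension in a horizontal lifeline with rigid anchors and a
    single point load at midspan.

    Vertical equilibrium at the load point: 2 * T * sin(theta) = load,
    with sin(theta) = sag / hypot(span / 2, sag), so
    T = load * hypot(span / 2, sag) / (2 * sag).
    Illustrative only: real HLL design must also account for anchorage
    flexibility, line pre-tension, and dynamic fall-arrest forces.
    """
    half = span / 2.0
    return load * math.hypot(half, sag) / (2.0 * sag)
```

The formula makes the key design trade-off visible: for a shallow sag the tension greatly exceeds the applied load, which is why small anchorage deflections matter enough to motivate the article's flexible-anchorage method.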
Ringing Artefact Reduction By An Efficient Likelihood Improvement Method
NASA Astrophysics Data System (ADS)
Fuderer, Miha
1989-10-01
In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components, known as Gibbs or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is computationally very efficient, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about a 20% decrease in the error energy after processing, where "error energy" is defined as the total power of the difference from a 256-data-lines reference image. The elimination of ringing artefacts then appears almost complete.
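The truncation artefact itself is easy to reproduce: zero-filling the missing high spatial frequencies of a sharp edge produces the characteristic overshoot. Below is a minimal 1D sketch of that effect, not of the likelihood-improvement method described above; the function name and parameterization are illustrative.

```python
import numpy as np

def truncated_reconstruction(profile, keep_fraction):
    """Reconstruct a 1D profile from only the lowest `keep_fraction` of its
    Fourier data, zero-filling the rest (mimicking reduced MR acquisition).

    The hard cutoff in k-space is what produces Gibbs ringing around
    sharp edges in the reconstructed profile.
    """
    n = len(profile)
    spec = np.fft.fft(profile)
    k = np.fft.fftfreq(n)                    # spatial frequency per bin
    mask = np.abs(k) <= keep_fraction / 2.0  # keep only low frequencies
    return np.real(np.fft.ifft(spec * mask))
```

Running this on a step edge shows the ~9% overshoot and undershoot that the likelihood method aims to remove by estimating, rather than zeroing, the missing data lines.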
Boar taint detection: A comparison of three sensory protocols.
Trautmann, Johanna; Meier-Dinkel, Lisa; Gertheiss, Jan; Mörlein, Daniel
2016-01-01
While recent studies indicate an important role for human sensory methods in the daily routine control of so-called boar taint, the evaluation of different heating methods is still incomplete. This study investigated three common heating methods (microwave (MW), hot-water (HW), hot-iron (HI)) for boar fat evaluation. The comparison was carried out on 72 samples with a 10-person sensory panel. The heating method significantly affected the probability of a deviant rating. Compared to an assumed 'gold standard' (chemical analysis), performance was best for HI when both sensitivity and specificity were considered. The results show the superiority of the panel result over individual assessors. However, the consistency of the individual sensory ratings was not significantly different between MW, HW, and HI, and the three protocols showed only fair to moderate agreement. Concluding from the present results, the hot-iron method appears advantageous for boar taint evaluation compared to microwave and hot-water. Copyright © 2015. Published by Elsevier Ltd.
Production analysis of two tree-bucking and product-sorting methods for hardwoods
John E. Baumgras; Chris B. LeDoux
1989-01-01
This paper documents the results of a study to determine the cost and productivity of two tree-bucking and product-sorting methods used by West Virginia loggers harvesting three to four types of roundwood products. The methods include manual chainsaw bucking and bucking with a hydraulically powered chainsaw slasher. Results show that chainsaw bucking of trees...
Junge, Benjamin; Berghof-Jäger, Kornelia
2006-01-01
A method was developed for the detection of L. monocytogenes in food based on real-time polymerase chain reaction (PCR). This advanced PCR method was designed to reduce the time needed to achieve results from PCR reactions and to enable the user to monitor the amplification of the PCR product simultaneously, in real time. After DNA isolation using the Roche/BIOTECON Diagnostics ShortPrep foodproof II Kit (formerly called the Listeria ShortPrep Kit), designed for the rapid preparation of L. monocytogenes DNA for direct use in PCR, the real-time detection of L. monocytogenes DNA is performed using the Roche/BIOTECON Diagnostics LightCycler foodproof L. monocytogenes Detection Kit. This kit provides primers and hybridization probes for sequence-specific detection, convenient premixed reagents, and different controls for reliable interpretation of results. For repeatability studies, 20 different foods, covering the 15 food groups recommended by the AOAC Research Institute (AOAC RI) for L. monocytogenes detection, were analyzed: raw meats, fresh produce/vegetables, processed meats, seafood, egg and egg products, dairy (cultured/noncultured), spices, dry foods, fruit/juices, uncooked pasta, nuts, confectionery, pet food, food dyes and colorings, and miscellaneous. From each food, 20 samples were inoculated with a low level (1-10 colony-forming units (CFU)/25 g) and 20 samples with a high level (10-50 CFU/25 g) of L. monocytogenes. Additionally, 5 uninoculated samples were prepared from each food. The food samples were examined with the test kits and in correlation with the cultural methods according to the U.S. Food and Drug Administration (FDA) Bacteriological Analytical Manual (BAM) or the U.S. Department of Agriculture (USDA)/Food Safety and Inspection Service (FSIS) Microbiology Laboratory Guidebook. After 48 h of incubation, the PCR method in all cases showed equal or better results than the reference cultural FDA/BAM or USDA/FSIS methods.
Fifteen out of 20 tested food types gave exactly the same amount of positive samples for both methods in both inoculation levels. For 5 out of 20 foodstuffs, the PCR method resulted in more positives than the reference method after 48 h of incubation. Following AOAC RI definition, these were false positives because they were not confirmed by the reference method (false-positive rate for low inoculated foodstuffs: 5.4%; for high inoculated foodstuffs: 7.1%). Without calculating these unconfirmed positives, the PCR method showed equal sensitivity results compared to the alternative method. With the unconfirmed PCR-positives included into the calculations, the alternative PCR method showed a higher sensitivity than the microbiological methods (low inoculation level: 100 vs 98.0%; sensitivity rate: 1; high inoculation level: 99.7 vs 97.7%; sensitivity rate, 1). All in-house and independently tested uninoculated food samples were negative for L. monocytogenes. The ruggedness testing of both ShortPrep foodproof II Kit and Roche/BIOTECON LightCycler foodproof L. monocytogenes Detection Kit showed no noteworthy influences to any variation of the parameters component concentration, apparatus comparison, tester comparison, and sample volumes. In total, 102 L. monocytogenes isolates (cultures and pure DNA) were tested and detected for the inclusivity study, including all isolates claimed by the AOAC RI. The exclusivity study included 60 non-L. monocytogenes bacteria. None of the tested isolates gave a false-positive result; specificity was 100%. Three different lots were tested in the lot-to-lot study. All 3 lots gave equal results. The stability study was subdivided into 3 parts: long-term study, stress test, and freeze-defrost test. Three lots were tested in 4 time intervals within a period of 13 months. They all gave comparable results for all test intervals. For the stress test, LightCycler L. 
monocytogenes detection mixes were stored at different temperatures and tested at different time points during 1 month. Stable results were produced at all storage temperatures. The freeze-defrost analysis showed no noteworthy degradation of test results. The independent validation study performed by the Campden and Chorleywood Food Research Association Group (CCFRA) demonstrated again that the LightCycler L. monocytogenes detection system shows sensitivity comparable to the reference methods. With both the LightCycler PCR and BAM methods, 19 out of 20 inoculated food samples were detected. The 24 h PCR results generated by the LightCycler system corresponded directly with the FDA/BAM culture results. However, the 48 h PCR results did not correspond exactly to the FDA/BAM results: one sample found positive by the 48 h PCR could not be culturally confirmed, and another sample that was negative by the 48 h PCR was culturally positive.
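The sensitivity figures quoted above can be reproduced with simple arithmetic. The sketch below (helper name and counts are illustrative, not the study's raw data) computes a sensitivity percentage for each method and the sensitivity rate as their ratio:

```python
def sensitivity_pct(detected, inoculated):
    """Percentage of inoculated samples a method reports as positive."""
    return 100.0 * detected / inoculated

# Illustrative counts, not the study's raw data:
pcr = sensitivity_pct(98, 100)       # PCR method
culture = sensitivity_pct(98, 100)   # reference cultural method
sensitivity_rate = pcr / culture     # 1 when both methods agree
```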
An advanced analysis method of initial orbit determination with too short arc data
NASA Astrophysics Data System (ADS)
Li, Binzhe; Fang, Li
2018-02-01
This paper studies initial orbit determination (IOD) based on space-based angle measurements. Such space-based observations commonly have short durations, so classical IOD algorithms, such as the Laplace and Gauss methods, give poor results. In this paper, an advanced analysis method of initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced into the method, and a genetic algorithm is used to impose constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.
An ultra-wideband microwave tomography system: preliminary results.
Gilmore, Colin; Mojabi, Puyan; Zakaria, Amer; Ostadrahimi, Majid; Kaye, Cam; Noghanian, Sima; Shafai, Lotfollah; Pistorius, Stephen; LoVetri, Joe
2009-01-01
We describe a 2D wide-band multi-frequency microwave imaging system intended for biomedical imaging. The system is capable of collecting data from 2-10 GHz, with 24 antenna elements connected to a vector network analyzer via a 2 x 24 port matrix switch. Using two different nonlinear reconstruction schemes, the Multiplicative-Regularized Contrast Source Inversion method and an enhanced version of the Distorted Born Iterative Method, we show preliminary imaging results from dielectric phantoms where data were collected from 3-6 GHz. The early inversion results show that the system is capable of quantitatively reconstructing dielectric objects.
NASA Astrophysics Data System (ADS)
Chang, Q.; Jiao, W.
2017-12-01
Phenology is a sensitive and critical feature of vegetation change that has been regarded as a good indicator in climate change studies. A variety of remote sensing data sources and phenology extraction methods have been developed to study the spatio-temporal dynamics of vegetation phenology. However, the differences between phenology results caused by the various satellite datasets and extraction methods are not clear, and the reliability of the different remotely sensed phenology products has not been verified and compared against ground observation data. Using the three most popular remote sensing phenology extraction methods, this research calculated the start of the growing season (SOS) for each pixel in the Northern Hemisphere for two long time series satellite datasets: GIMMS NDVIg (SOSg) and GIMMS NDVI3g (SOS3g). The three methods used are the maximum increase method, the dynamic threshold method, and the midpoint method. This study then used SOS calculated from NEE datasets (SOS_NEE), monitored by 48 eddy flux tower sites in the global flux network, to validate the reliability of the six phenology products calculated from the remote sensing datasets. Results showed that neither SOSg nor SOS3g extracted by the maximum increase method was correlated with the ground-observed phenology metrics. SOSg and SOS3g extracted by the dynamic threshold method and the midpoint method were both significantly correlated with SOS_NEE. Compared with SOSg extracted by the dynamic threshold method, SOSg extracted by the midpoint method had a stronger correlation with SOS_NEE, and the same held for SOS3g. Additionally, SOSg showed a stronger correlation with SOS_NEE than SOS3g extracted by the same method. SOS extracted by the midpoint method from the GIMMS NDVIg dataset appeared to be the most reliable result when validated against SOS_NEE. These results can serve as a reference for data and method selection in future phenology studies.
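Of the three extraction methods, the midpoint method is the simplest to state: SOS is the first date at which the NDVI curve crosses the midpoint between its annual minimum and maximum. A minimal sketch of that criterion (function name and the toy series are illustrative, not from the study):

```python
def sos_midpoint(ndvi):
    """First index where an annual NDVI series crosses the midpoint
    between its minimum and maximum (midpoint-method SOS)."""
    lo, hi = min(ndvi), max(ndvi)
    level = lo + 0.5 * (hi - lo)
    for day, value in enumerate(ndvi):
        if value >= level:
            return day
    return None

# Toy annual NDVI curve sampled at 8 composite periods:
sos = sos_midpoint([0.1, 0.1, 0.2, 0.6, 0.8, 0.9, 0.7, 0.3])
```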
An accurate method for solving a class of fractional Sturm-Liouville eigenvalue problems
NASA Astrophysics Data System (ADS)
Kashkari, Bothayna S. H.; Syam, Muhammed I.
2018-06-01
This article is devoted to both theoretical and numerical study of the eigenvalues of nonsingular fractional second-order Sturm-Liouville problem. In this paper, we implement a fractional-order Legendre Tau method to approximate the eigenvalues. This method transforms the Sturm-Liouville problem to a sparse nonsingular linear system which is solved using the continuation method. Theoretical results for the considered problem are provided and proved. Numerical results are presented to show the efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Saito, Toru; Nishihara, Satomichi; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi
2010-10-01
Mukherjee's type of multireference coupled-cluster (MkMRCC), approximate spin-projected spin-unrestricted CC (APUCC), and AP spin-unrestricted Brueckner's (APUBD) methods were applied to didehydronated ethylene, allyl cation, cis-butadiene, and naphthalene. The focus is on descriptions of magnetic properties of these diradical species, such as S-T gaps and diradical characters. Several orbital sets were examined as reference orbitals for the MkMRCC calculations, and it was found that changing the orbital set does not significantly affect the computational results for these species. Comparison of the MkMRCC results with the APUCC and APUBD results shows that the two types of methods yield similar results, indicating that the quantum spin-corrected UCC and UBD methods can effectively account for both the nondynamical and dynamical correlation effects covered by the MkMRCC methods. It was also shown that appropriately parameterized hybrid density functional theory (DFT) with AP corrections (APUDFT) yielded very accurate data that qualitatively agree with the MRCC and APUBD results. This hierarchy of methods, MRCC, APUCC, and APUDFT, is expected to constitute a series of standard ab initio approaches for radical systems, from which one can choose depending on the size of the system and the required accuracy.
Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel
2017-01-01
Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and in extrapolating soil concentrations to air concentrations. The Environmental Protection Agency (EPA)'s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than the others. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM), or a combination of the two. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating its results may be more variable than those of other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos yielded no clear conclusions regarding a preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than in discrete samples.
Utilization of Shrimp Skin Waste (Sea Lobster) As Raw Material for the Membrane Filtration
NASA Astrophysics Data System (ADS)
Nyoman Rupiasih, Ni; Sumadiyasa, Made; Suyanto, Hery; Windari, Putri
2017-05-01
In view of the increasing littering of sea banks by shells of crustaceans, a study was carried out to investigate the extraction and characterization of chitosan from the skin waste of a sea lobster, the bamboo lobster (Panulirus versicolor). Chitosan was extracted using conventional steps: pretreatment, demineralization, deproteinization, and deacetylation. The degree of deacetylation of the obtained chitosan was 70.02%. The FTIR spectra of the chitosan showed a characteristic -NH2 band at 3447 cm-1 and a carbonyl group band at 1655 cm-1. This chitosan was used to prepare a membrane: a 2% chitosan membrane was prepared by the phase inversion method with precipitation by solvent evaporation. The membranes were characterized by FTIR spectrophotometry, a Nova 1200e analyzer using the BJH method, and filtration. The results show that the thickness of the membrane is about 134 μm. The FTIR spectra show that the functional groups present in the membrane are -NH, -CH, C=O, and -OH. The BJH method gave a pore diameter of 3.382 nm with a pore density of 8.95 x 10^5 pores/m^3. The filtration method gave pure water fluxes (PWF) of 386.662 and 489.627 l/(m^2·h) at pressures of 80-85 kPa and 90-100 kPa, respectively. These results show that sea lobster skin waste can serve as a raw material for preparing chitosan membranes. The membrane obtained belongs to the mesoporous group and may be used in microfiltration processes.
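Pure water flux is conventionally the permeate volume per membrane area per unit time, PWF = V/(A·t). A minimal sketch of that relation (the bench numbers below are hypothetical, not the study's measurements):

```python
def pure_water_flux(volume_l, area_m2, time_h):
    """Pure water flux in l/(m^2·h): permeate volume per membrane
    area per unit time."""
    return volume_l / (area_m2 * time_h)

# Hypothetical bench run: 2 l of permeate through 25 cm^2 in 2 h
flux = pure_water_flux(2.0, 25e-4, 2.0)  # l/(m^2·h)
```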
Macro elemental analysis of food samples by nuclear analytical technique
NASA Astrophysics Data System (ADS)
Syahfitri, W. Y. N.; Kurniawati, S.; Adventini, N.; Damastuti, E.; Lestiani, D. D.
2017-06-01
Energy-dispersive X-ray fluorescence (EDXRF) spectrometry is a non-destructive, rapid, multi-elemental, accurate, and environmentally friendly analysis method compared with other detection methods, and is therefore applicable to food inspection. The macro elements calcium and potassium are important nutrients required by the human body for optimal physiological function, so the determination of Ca and K content in various foods is needed. The aim of this work is to demonstrate the applicability of EDXRF for food analysis. The analytical performance of non-destructive EDXRF was compared with two other analytical techniques, neutron activation analysis (NAA) and atomic absorption spectrometry (AAS); the comparison served as a cross-check of the analysis results and helped overcome the limitations of each method. The results showed that Ca contents found by EDXRF and AAS were not significantly different (p-value 0.9687), and the p-value for K between EDXRF and NAA was 0.6575. The correlation between the results was also examined: the Pearson correlations for Ca and K were 0.9871 and 0.9558, respectively. Method validation using SRM NIST 1548a Typical Diet was also applied. The results showed good agreement between methods; therefore, EDXRF can be used as an alternative method for the determination of Ca and K in food samples.
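The Pearson coefficients quoted above compare paired results from two instruments. A self-contained sketch of the computation (the paired values are toy data, not the study's measurements):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Perfectly proportional paired measurements correlate at r = 1:
r = pearson([10.0, 20.0, 30.0], [11.0, 22.0, 33.0])
```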
Using well casing as an electrical source to monitor hydraulic fracture fluid injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilt, Michael; Nieuwenhuis, Greg; MacLennan, Kris
2016-03-09
The depth to surface resistivity (DSR) method transmits current from a source located in a cased or openhole well to a distant surface return electrode while electric field measurements are made at the surface over the target of interest. This paper presents both numerical modelling results and measured data from a hydraulic fracturing field test where conductive water was injected into a resistive shale reservoir during a hydraulic fracturing operation. Modelling experiments show that anomalies due to hydraulic fracturing are small but measurable with highly sensitive sensor technology. The field measurements confirm the model results, showing that measured differences in the surface fields due to hydraulic fracturing have been detected above the noise floor. Our results show that the DSR method is sensitive to the injection of frac fluids; they are detectable above the noise floor in a commercially active hydraulic fracturing operation, and therefore this method can be used for monitoring fracture fluid movement.
Gu, Hai Ting; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Abrupt change is an important manifestation of hydrological processes undergoing dramatic variation in the context of global climate change, and its accurate recognition is of great significance for understanding hydrological process changes and for practical hydrology and water resources work. Traditional methods are unreliable near both ends of a sample series, and the results of different methods are often inconsistent. To solve this problem, we proposed a comprehensive weighted recognition method for hydrological abrupt change built on a comparison of 12 commonly used change-point tests. The reliability of the method was verified by Monte Carlo statistical tests. The results showed that the efficiency of the 12 methods was influenced by the coefficient of variation (Cv) and deviation coefficient (Cs) before the change point, the mean value difference coefficient, the Cv difference coefficient, and the Cs difference coefficient, but had no significant relationship with the mean value of the sequence. Based on the performance of each method in the statistical tests, each test was assigned a weight: the sliding rank-sum test and the sliding run test received the highest weights, whereas the RS test received the lowest. In this way, the change point with the largest comprehensive weight can be selected as the final result when the results of the different methods are inconsistent. The method was used to analyze the daily maximum sequences of Jiajiu station in the lower reaches of the Lancang River (1-day, 3-day, 5-day, 7-day, and 1-month). The results showed that each sequence had an obvious jump in 2004, in agreement with the physical causes of hydrological process change and water conservancy construction, verifying the rationality and reliability of the proposed method.
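The comprehensive weighting step can be sketched as a weighted vote over the change points reported by the individual tests. The method names and weights below are illustrative placeholders, not the study's calibrated values:

```python
from collections import defaultdict

def weighted_change_point(detections, weights):
    """detections: {method_name: detected_year};
    weights: {method_name: weight}.
    Returns the candidate year with the largest total weight."""
    score = defaultdict(float)
    for method, year in detections.items():
        score[year] += weights.get(method, 0.0)
    return max(score, key=score.get)

# Hypothetical disagreement between three tests:
year = weighted_change_point(
    {"sliding_rank_sum": 2004, "sliding_run": 2004, "rs_test": 1998},
    {"sliding_rank_sum": 0.15, "sliding_run": 0.14, "rs_test": 0.02},
)
```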
Contemporary computerized methods for logging planning
Chris B. LeDoux
1988-01-01
Contemporary harvest planning graphic software is highlighted with practical applications. Planning results from a production study of the Clearwater Cable Yarder are summarized. Application of the planning methods to evaluation of proposed silvicultural treatments is included. Results show that 3-dimensional graphic analysis of proposed harvesting or silvicultural...
Key frame extraction based on spatiotemporal motion trajectory
NASA Astrophysics Data System (ADS)
Zhang, Yunzuo; Tao, Ran; Zhang, Feng
2015-05-01
A spatiotemporal motion trajectory can accurately reflect changes of motion state. Motivated by this observation, this letter proposes a method for key frame extraction based on the motion trajectory on the spatiotemporal slice. Different from well-known motion-related methods, the proposed method utilizes the inflexions of the motion trajectories of all moving objects on the spatiotemporal slice. Experimental results show that, compared with state-of-the-art methods based on motion energy or acceleration, the proposed method achieves similar performance on single-object scenes and better performance on multi-object videos.
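An inflexion of a 1-D trajectory can be located where its discrete second derivative changes sign. A toy sketch of that criterion only (not the authors' full spatiotemporal-slice pipeline):

```python
def inflection_frames(traj):
    """Frame indices where the discrete second derivative of a 1-D
    trajectory changes sign (candidate key frames)."""
    d2 = [traj[i + 1] - 2 * traj[i] + traj[i - 1]
          for i in range(1, len(traj) - 1)]
    return [i + 1 for i in range(1, len(d2)) if d2[i - 1] * d2[i] < 0]

# Object accelerates and then decelerates; curvature flips once:
keys = inflection_frames([0.0, 1.0, 3.0, 6.0, 8.0, 9.0, 9.5])
```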
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yost, Shane R.; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720
2016-08-07
In this paper we introduce two size-consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. Van Voorhis, J. Chem. Phys. 139, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent, and that this causes significant errors in large systems like the linear acenes. By contrast, the size-consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation. For NOCI-MP2, however, the number of required determinants to yield similar levels of accuracy is significantly smaller. These results show the promise of the NOCI-MP2 method, though work still needs to be done in creating a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis.
A Novel Preparation Method of Two Polymer Dyes with Low Cytotoxicity
Lv, Dongjun; Zhang, Mingjie; Cui, Jin; Li, Weixue; Zhu, Guohua
2017-01-01
A new preparation method for polymer dyes was developed to improve both the grafting degree of azo dyes onto O-carboxymethyl chitosan (OMCS) and the water solubility of the prepared polymer dyes. First, the coupling compound of two azo edible colorants, sunset yellow (SY) and allura red (AR), was grafted onto OMCS and then coupled with their diazonium salt. The chemical structure of the prepared polymer dyes was determined by Fourier transform-infrared spectroscopy and 1H-NMR, and the results showed that the two azo dyes were successfully grafted onto OMCS. The grafting degree onto OMCS and the water solubility of the polymer dyes were tested, and both were improved as expected. UV-vis spectral analysis showed that the prepared polymer dyes had color performance similar to the original azo dyes. Finally, the cytotoxicity of the prepared polymer dyes was tested and compared with that of the original azo dyes on the human liver cell line LO2, and the results showed that grafting onto OMCS significantly reduced the cytotoxicity. PMID:28772583
Studying Some of Electrical and Mechanical Properties for Kevlar Fiber Reinforced Epoxy
NASA Astrophysics Data System (ADS)
Rafeeq, Sewench N.; Hussein, Samah M.
2011-12-01
Electrically conducting polymer composites can ordinarily be synthesized, but with poor mechanical properties; to solve this problem, this study was carried out to obtain both properties. Three methods were applied to prepare conductive polyaniline (PANI) composites, using Kevlar fiber fabric as the substrate for PANI deposition in one case and the prepared EP/Kevlar fiber composite in the others. The chemical oxidative method was adopted for polymerization of the aniline with simultaneous protonation of the PANI with hydrochloric acid at 1 M concentration. Two kinds of oxidation agents, FeCl3.6H2O and (NH4)2S2O8, were used. The electrical measurements indicate the effect of the preparation method, the kind of oxidant agent, and the kind of material on which the PANI was deposited on the electrical results. The conductivity results showed that the prepared composites lie within the semiconductor region, and the temperature dependence of the electrical conductivity showed semiconductor and conductor behavior of this material within the applied temperature ranges. The mechanical property of tensile strength was studied, and X-ray diffraction showed the crystalline structure of the EP/Kevlar fiber/PANI composites prepared by the three methods. These results give optimism for the synthesis of conductive polymer composites with excellent mechanical properties.
NASA Astrophysics Data System (ADS)
Purwaningsih, Hariyati; Pratiwi, Vania Mitha; Purwana, Siti Annisa Bani; Nurdiansyah, Haniffudin; Rahmawati, Yenny; Susanti, Diah
2018-04-01
Rice husk is an agricultural waste that can potentially be used as a natural silica resource; such natural silica is claimed to be safe to handle, cheap, and obtainable from an inexpensive source. In this study, mesoporous silica was synthesized using sodium silicate extracted from rice husk ash. The aims of this research are to study the optimization of silica extraction from rice husk, to characterize mesoporous silica obtained from the sol-gel method and surfactant templating of rice husk, and to examine the effect of hydrothermal temperature on mesoporous silica nanoparticle (MSNp) formation. Rice husk was extracted by the sol-gel method followed by hydrothermal treatment at 85°C, 100°C, 115°C, 130°C, and 145°C for 24 hours. X-ray diffraction analysis identified an α-SiO2 phase and NaCl impurities. Scherrer analysis gave crystallite sizes of 6.27-40.3 nm. FTIR results of the extracted silica and the MSNp indicated Si-O-Si bonds in the sample. SEM showed a morphology with spherical shape and smooth surface. TEM showed particle sizes ranging between 69.69 and 84.42 nm. BET showed that the pores are classified as mesoporous, with a pore diameter of 19.29 nm.
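The crystallite sizes quoted above come from Scherrer analysis of the diffraction peak widths; the standard Scherrer relation (with the usual shape factor K ≈ 0.9) is:

```latex
D = \frac{K\lambda}{\beta\cos\theta}
```

where D is the crystallite size, λ the X-ray wavelength, β the peak full width at half maximum in radians, and θ the Bragg angle.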
On the numerical calculation of hydrodynamic shock waves in atmospheres by an FCT method
NASA Astrophysics Data System (ADS)
Schmitz, F.; Fleck, B.
1993-11-01
The numerical calculation of vertically propagating hydrodynamic shock waves in a plane atmosphere by the ETBFCT version of the Flux-Corrected Transport (FCT) method of Boris and Book is discussed. The results are compared with results obtained by a characteristic method with shock fitting. We show that using the internal energy density as a dependent variable instead of the total energy density can give very inaccurate results. Consistent discretization rules for the gravitational source terms are derived, and the improvement of the results by an additional iteration step is discussed. The FCT method proves to be an excellent method for the accurate calculation of shock waves in an atmosphere.
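The distinction between the two energy variables matters because a conservative scheme should advect the total energy density, which includes the kinetic contribution on top of the internal energy; schematically, in standard 1-D Euler form with a gravitational source term (not the paper's exact notation):

```latex
E = \rho\varepsilon + \tfrac{1}{2}\rho u^{2}, \qquad
\partial_t E + \partial_z\!\left[(E + p)\,u\right] = -\rho g u
```

where ρ is density, ε specific internal energy, u velocity, p pressure, and g the gravitational acceleration.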
Xu, Jing; Liu, Xiaofei; Wang, Yutian
2016-08-05
Parallel factor analysis is a widely used method to extract qualitative and quantitative information about an analyte of interest from fluorescence excitation-emission matrices containing unknown components. Large-amplitude scattering influences the results of parallel factor analysis, and many methods of eliminating scattering have been proposed, each with its advantages and disadvantages. The combination of symmetrical subtraction and interpolated values is discussed here; "combination" refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that combining results yields better concentration predictions for all components.
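The interpolated-values approach replaces the emission channels inside a scatter band of each excitation row with values interpolated from the band's edges. A minimal 1-D sketch (band indices and the toy row are hypothetical):

```python
def interpolate_scatter(row, lo, hi):
    """Linearly interpolate emission channels lo..hi (a scatter band)
    from the unaffected neighbours row[lo-1] and row[hi+1]."""
    out = list(row)
    left, right = row[lo - 1], row[hi + 1]
    span = (hi + 1) - (lo - 1)
    for k in range(lo, hi + 1):
        t = (k - (lo - 1)) / span
        out[k] = left + t * (right - left)
    return out

# Channels 2-3 sit on a scatter ridge in this toy excitation row:
cleaned = interpolate_scatter([1.0, 2.0, 9.0, 9.0, 5.0], 2, 3)
```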
Sun, Liping; Bai, Xue; Zhuang, Yongliang
2014-11-01
The influences of cooking methods (steaming, pressure-cooking, microwaving, frying, and boiling) on the total phenolic contents and antioxidant activities of the fruit bodies of Boletus mushrooms (B. aereus, B. badius, B. pinophilus, and B. edulis) were evaluated. The results showed that microwaving was better at retaining total phenolics than the other cooking methods, while boiling significantly decreased the total phenolic contents of the samples under study. The effects of the cooking methods on the phenolic acid profiles varied with both the species of mushroom and the cooking method. Effects of the cooking treatments on antioxidant activities were evaluated by in vitro assays of hydroxyl radical (OH·)-scavenging activity, reducing power, and 1,1-diphenyl-2-picrylhydrazyl radical (DPPH·)-scavenging activity; the changes in antioxidant activities of the four Boletus mushrooms differed among the five cooking methods. This study provides information to help the food industry recommend particular cooking methods.
Wang, Shuang; Yue, Bo; Liang, Xuefeng; Jiao, Licheng
2018-03-01
Wisely utilizing internal and external learning methods is a new challenge in the super-resolution problem. To address this issue, we analyze the attributes of the two methodologies and make two observations about their recovered details: 1) they are complementary in both the feature space and the image plane, and 2) they are distributed sparsely in space. These observations inspire us to propose a low-rank solution that effectively integrates the two learning methods and thereby achieves a superior result. To fit this solution, the internal and external learning methods are tailored to produce multiple preliminary results. Our theoretical analysis and experiments prove that the proposed low-rank solution does not require massive inputs to guarantee performance, thereby simplifying the design of the two learning methods. Intensive experiments show the proposed solution improves on either single learning method in both qualitative and quantitative assessments. Surprisingly, it shows even stronger capability on noisy images and outperforms state-of-the-art methods.
Accelerating separable footprint (SF) forward and back projection on GPU
NASA Astrophysics Data System (ADS)
Xie, Xiaobin; McGaffin, Madison G.; Long, Yong; Fessler, Jeffrey A.; Wen, Minhua; Lin, James
2017-03-01
Statistical image reconstruction (SIR) methods for X-ray CT can improve image quality and reduce radiation dosage compared with conventional reconstruction methods such as filtered back projection (FBP). However, SIR methods require much longer computation time. The separable footprint (SF) forward and back projection technique simplifies the calculation of intersecting volumes of image voxels and finite-size beams in a way that is both accurate and efficient for parallel implementation. We propose a new method to accelerate SF forward and back projection on GPU with NVIDIA's CUDA environment. For the forward projection, we parallelize over all detector cells; for the back projection, we parallelize over all 3D image voxels. The simulation results show that the proposed method is faster than the acceleration method of the SF projectors proposed by Wu and Fessler. We further accelerate the proposed method using multiple GPUs; the results show that the computation time is reduced approximately in proportion to the number of GPUs.
Intercomparison of Lab-Based Soil Water Extraction Methods for Stable Water Isotope Analysis
NASA Astrophysics Data System (ADS)
Pratt, D.; Orlowski, N.; McDonnell, J.
2016-12-01
The effect of pore water extraction technique on the resultant isotopic signature is poorly understood. Here we present results of an intercomparison of five common lab-based soil water extraction techniques: high-pressure mechanical squeezing, centrifugation, direct vapor equilibration, microwave extraction, and cryogenic extraction. We applied the five extraction methods to two physicochemically different standard soil types (silty sand and clayey loam) that were oven-dried and rewetted with water of known isotopic composition at three different gravimetric water contents (8, 20, and 30%). We tested the null hypothesis that all extraction techniques would provide the same isotopic result independent of soil type and water content. Our results showed that the extraction technique had a significant effect on the soil water isotopic composition. Each method exhibited deviations from the spiked reference water, with soil type and water content having a secondary effect. Cryogenic extraction showed the largest deviations from the reference water, whereas mechanical squeezing and centrifugation provided the closest match for both soil types. We also compared results for each extraction technique that produced liquid water on both an OA-ICOS and an IRMS; differences between them were negligible.
Determination of elastic modulus of ceramics using ultrasonic testing
NASA Astrophysics Data System (ADS)
Sasmita, Firmansyah; Wibisono, Gatot; Judawisastra, Hermawan; Priambodo, Toni Agung
2018-04-01
Elastic modulus is an important material property in structural ceramics applications. However, the bending test, a common method for determining this property, requires particular specimen preparation, and the elastic modulus of ceramics can vary because it depends on porosity content. For the structural ceramics industry, such as ceramic tiles, this property is very important, which drives the development of new methods to improve effectiveness, as well as of verification methods. In this research, ultrasonic testing was conducted to determine the elastic modulus of soda-lime glass and ceramic tiles. The experimental parameter was the probe frequency (1, 2, and 4 MHz), and density and porosity were also characterized for analysis. Results from ultrasonic testing were compared with the elastic modulus obtained from the bending test. The elastic modulus of soda-lime glass based on ultrasonic testing showed excellent agreement, with an error of 2.69% for the 2 MHz probe relative to the bending test result. Testing on red and white ceramic tiles still contained errors of up to 41% and 158%, respectively. The results for the red ceramic tile showed a trend in which the 1 MHz probe gave better accuracy in determining the elastic modulus; however, testing on the white ceramic tile showed a different trend, due to the presence of porosity and near-field effects.
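For a bulk specimen, the elastic modulus follows from the longitudinal wave velocity, density, and Poisson's ratio via the standard relation E = ρ·v_L²·(1+ν)(1−2ν)/(1−ν). A sketch with representative handbook-style values for soda-lime glass (not the study's measured data):

```python
def elastic_modulus(rho, v_long, nu):
    """Young's modulus (Pa) from density (kg/m^3), longitudinal wave
    velocity (m/s), and Poisson's ratio, bulk-specimen relation."""
    return rho * v_long**2 * (1 + nu) * (1 - 2 * nu) / (1 - nu)

# Representative soda-lime glass values: ~2500 kg/m^3, ~5800 m/s, nu ~0.23
E = elastic_modulus(2500.0, 5800.0, 0.23)  # on the order of 7e10 Pa
```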
Time delayed Ensemble Nudging Method
NASA Astrophysics Data System (ADS)
An, Zhe; Abarbanel, Henry
Optimal nudging based on time-delay embedding theory has shown potential for analysis and data assimilation in the previous literature. To extend its application and promote practical implementation, a new nudging assimilation method based on the time-delay embedding space is presented, and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of data can reduce the number of observations needed to preserve the quality of numerical prediction, making the method a potential alternative in the field of data assimilation for large geophysical models.
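A nudging assimilation step adds a relaxation term that pulls the model state toward the observation. The simplest Euler-step sketch of a generic scalar nudged model (not the time-delay embedding formulation itself, whose construction is more involved):

```python
def nudged_step(x, f, y_obs, gain, dt):
    """One Euler step of dx/dt = f(x) + gain * (y_obs - x)."""
    return x + dt * (f(x) + gain * (y_obs - x))

# Relax a trivial model (f = 0) toward a fixed observation at 1.0:
x = 0.0
for _ in range(100):
    x = nudged_step(x, lambda s: 0.0, 1.0, gain=2.0, dt=0.05)
# x approaches the observed value as the nudging term acts
```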
Kim, Won Kuel; Seo, Kyung Mook; Kang, Si Hyun
2014-01-01
Objective To determine the reliability and validity of a hand-held dynamometer (HHD), depending on its fixation, in measuring isometric knee extensor strength by comparing the results with an isokinetic dynamometer. Methods Twenty-seven healthy female volunteers participated in this study. The subjects were tested in seated and supine positions using three measurement methods: isometric knee extension by isokinetic dynamometer, non-fixed HHD, and fixed HHD. During the measurement, the knee joints of the subjects were fixed at a 35° angle from the extended position. The fixed HHD measurement was conducted with the HHD fixed to the distal tibia with a Velcro strap; the non-fixed HHD measurement was performed with a hand-held method without Velcro fixation. All measurements were repeated three times and, among them, the maximum values of peak torque were used for the analysis. Results The data from the fixed HHD method showed higher validity than the non-fixed method when compared with the results of the isokinetic dynamometer. Pearson correlation coefficients (r) between the fixed HHD and isokinetic dynamometer methods were statistically significant (supine-right: r=0.806, p<0.05; sitting-right: r=0.473, p<0.05; supine-left: r=0.524, p<0.05), whereas Pearson correlation coefficients between the non-fixed and isokinetic dynamometer methods were not statistically significant, except for the result of the supine position of the left leg (r=0.384, p<0.05). Both fixed and non-fixed HHD methods showed excellent inter-rater reliability. However, the fixed HHD method showed higher reliability than the non-fixed HHD method based on the intraclass correlation coefficients (fixed HHD, 0.952-0.984; non-fixed HHD, 0.940-0.963). Conclusion Fixation of the HHD during measurement in the supine position increases the reliability and validity of measuring quadriceps strength. PMID:24639931
Efficient method of image edge detection based on FSVM
NASA Astrophysics Data System (ADS)
Cai, Aiping; Xiong, Xiaomei
2013-07-01
For efficient object edge detection in digital images, this paper studied traditional methods and an algorithm based on SVM. The analysis showed that the Canny edge detection algorithm produces some pseudo-edges and has poor noise immunity. To provide a reliable edge extraction method, a new detection algorithm based on a fuzzy support vector machine (FSVM) is proposed. It contains several steps: first, classification samples are trained and different membership functions are assigned to different samples. Then, a new training sample set is formed by increasing the penalty on misclassified sub-samples, and the new FSVM classification model is trained and tested on it. Finally, the edges of the object image are extracted using the model. Experimental results show that a good edge detection image is obtained, and noise experiments show that the method has good noise immunity.
NASA Astrophysics Data System (ADS)
Lim, J. H.; Ratnam, M. M.; Azid, I. A.; Mutharasu, D.
2011-11-01
Young's moduli of various epoxy-coated polyethylene terephthalate (PET) micro-cantilevers were determined from deflection results obtained using the phase-shift shadow moiré (PSSM) method. The filler materials for the epoxy coatings were aluminum and graphite powders, mixed with epoxy at various percentages. Young's moduli were calculated from theory based on the deflection results. The PET micro-cantilever with the aluminum-epoxy coating showed increasing Young's modulus as the aluminum-to-epoxy ratio was increased. The graphite-epoxy coating on the PET micro-cantilever showed the same trend. The experimental results also show that the Young's modulus of the graphite-epoxy coating is higher than that of the aluminum-epoxy coating at the same mixing ratio.
Evaluation of substrate noise suppression method to mitigate crosstalk among through-silicon vias
NASA Astrophysics Data System (ADS)
Araga, Yuuki; Kikuchi, Katsuya; Aoyagi, Masahiro
2018-04-01
Substrate noise from a single through-silicon via (TSV) and the noise attenuation by a substrate tap and a guard ring are clarified. A CMOS test vehicle is designed, and 6-µm-diameter TSVs are manufactured on a 20-µm-thick silicon substrate by the via-last method. An on-chip waveform-capturing circuitry is embedded in the test vehicle to capture transient waveforms of substrate noise. The embedded waveform-capturing circuitry demonstrates small and local noise propagation. Experimental results show the increased substrate noise level induced by TSVs and the effectiveness of the substrate tap and guard ring for mitigating the crosstalk among TSVs. An analytical model to explain substrate noise propagation is developed to validate the experimental results. Results obtained using the substrate model with a multilayer mesh show good consistency with experimental results, indicating that the model can be used for the examination of noise suppression methods.
Establishment of an efficient transformation system for Pleurotus ostreatus.
Lei, Min; Wu, Xiangli; Zhang, Jinxia; Wang, Hexiang; Huang, Chenyang
2017-11-21
Pleurotus ostreatus is widely cultivated worldwide, but the lack of an efficient transformation system restricts genetic research on this species. The present study developed an improved and efficient Agrobacterium tumefaciens-mediated transformation method for P. ostreatus. Four parameters were optimized to obtain the most efficient transformation method. The strain LBA4404 was the most suitable for the transformation of P. ostreatus. A bacteria-to-protoplast ratio of 100:1, an acetosyringone (AS) concentration of 0.1 mM, and 18 h of co-culture showed the best transformation efficiency. The hygromycin B phosphotransferase gene (HPH) was used as the selective marker, and EGFP was used as the reporter gene in this study. Southern blot analysis combined with an EGFP fluorescence assay showed positive results, and a mitotic stability assay showed that more than 75% of transformants were stable after five generations. These results showed that our transformation method is effective and stable and may facilitate future genetic studies in P. ostreatus.
Yamada, Kageto; Kashiwa, Machiko; Arai, Katsumi; Nagano, Noriyuki; Saito, Ryoichi
2016-09-01
We compared three screening methods for carbapenemase-producing Enterobacteriaceae. While the Modified-Hodge test and Carba NP test produced false-negative results for OXA-48-like and mucoid NDM producers, the carbapenem inactivation method (CIM) showed positive results for these isolates. Although the CIM required cultivation time, it is well suited for general clinical laboratories. Copyright © 2016 Elsevier B.V. All rights reserved.
Demirezer, Lütfiye Ömür; Gürbüz, Perihan; Kelicen Uğur, Emine Pelin; Bodur, Mine; Özenver, Nadire; Uz, Ayse; Güvenalp, Zühal
2015-01-01
To evaluate the acetylcholinesterase (AChE) inhibitory activity and antioxidant capacity of the major molecule from Salvia sp., rosmarinic acid, as a drug candidate for the treatment of Alzheimer's disease (AD). The AChE inhibitory activity of different extracts from Salvia trichoclada, Salvia verticillata, and Salvia fruticosa was determined by the Ellman and isolated guinea pig ileum methods, and the antioxidant capacity was determined with the DPPH assay. The AChE inhibitory activity of the major molecule, rosmarinic acid, was determined by in silico docking and isolated guinea pig ileum methods. The methanol extract of Salvia trichoclada showed the highest inhibition of AChE. The same extract and rosmarinic acid showed significant contraction responses on isolated guinea pig ileum. All the extracts and rosmarinic acid showed high radical scavenging capacities. Docking results for rosmarinic acid showed high affinity for the selected target, AChE. In this study, in vitro and ex vivo studies and in silico docking research on rosmarinic acid were used simultaneously for the first time. Rosmarinic acid showed promising results in all the methods tested.
Evaluation of the Urine Protein/Creatinine Ratio Measured with the Dipsticks Clinitek Atlas PRO 12.
Hermida, Fernando J; Soto, Sonia; Benitez, Alfonso J
2016-01-01
Screening for urine proteins is recommended for the detection of albuminuria in high-risk groups. The aim of this study was to compare the Clinitek Atlas PRO12 reagent urine strip with quantitative methods for the determination of the protein/creatinine ratio, and to evaluate the usefulness of the semi-quantitative Clinitek Atlas PRO12 reagent urine strip as a tool for the early detection of albuminuria in the general population. Six hundred first-morning urine specimens were collected from outpatients with various clinical conditions. The results showed that the urine dipstick Clinitek Atlas PRO12 test data are in good agreement with the quantitative measurements of protein, creatinine, and the protein/creatinine ratio. In addition, this study shows that 97.2% of the samples that gave "normal" protein/creatinine ratios by the semi-quantitative method showed an albumin/creatinine ratio < 30 mg/g by the quantitative methods. Our results show that Clinitek Atlas PRO12 reagent strips can be used for albuminuria screening in the general population.
Comparison of three methods for evaluation of work postures in a truck assembly plant.
Zare, Mohsen; Biau, Sophie; Brunet, Rene; Roquelaure, Yves
2017-11-01
This study compared the results of three risk assessment tools (self-reported questionnaire, observational tool, and direct measurement method) for the upper limbs and back in a truck assembly plant at two cycle times (11 and 8 min). The weighted Kappa factor showed fair agreement between the observational and direct measurement methods for the arm (0.39) and back (0.47). The weighted Kappa factor for these methods was poor for the neck (0) and wrist (0), but the observed proportional agreement (Po) was 0.78 for the neck and 0.83 for the wrist. The weighted Kappa factor between the questionnaire and direct measurement showed poor or slight agreement (0) for different body segments at both cycle times. The results revealed moderate agreement between the observational tool and the direct measurement method, and poor agreement between the self-reported questionnaire and direct measurement. Practitioner Summary: This study provides risk exposure measurement by different common ergonomic methods in the field. The results help to develop valid measurements and improve exposure evaluation. Hence, ergonomists and practitioners should apply these methods with caution, or at least know what the issues and errors are.
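A linearly weighted Cohen's kappa of the kind reported above can be computed from a confusion matrix of ordinal ratings; a minimal sketch (the 3-level category counts below are invented for illustration, not the study's data):

```python
def weighted_kappa(confusion):
    """Linearly weighted Cohen's kappa for a k x k confusion matrix of
    ordinal ratings (rows: method A, columns: method B)."""
    k = len(confusion)
    n = sum(map(sum, confusion))
    row_tot = [sum(row) for row in confusion]
    col_tot = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    observed = expected = 0.0
    for i in range(k):
        for j in range(k):
            w = abs(i - j) / (k - 1)  # linear disagreement weight
            observed += w * confusion[i][j]
            expected += w * row_tot[i] * col_tot[j] / n
    return 1.0 - observed / expected

# Hypothetical low/medium/high posture ratings from two methods:
counts = [[20, 5, 0],
          [4, 15, 3],
          [1, 4, 18]]
print(round(weighted_kappa(counts), 2))  # → 0.71
```

Because the weights penalize near-misses less than distant disagreements, weighted kappa can differ markedly from the raw proportional agreement Po, which is why the study reports both.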
Ruiz, J E; Paciornik, S; Pinto, L D; Ptak, F; Pires, M P; Souza, P L
2018-01-01
An optimized digital image processing method for interpreting quantum dot height measurements obtained by atomic force microscopy is presented. The method was developed by combining well-known digital image processing techniques and particle recognition algorithms. The properties of quantum dot structures strongly depend on dot height, among other features. Determination of their height is sensitive to small variations in the digital image processing parameters, which can generate misleading results. Comparing the results obtained with two image processing techniques - a conventional method and the new method proposed herein - with data obtained by determining the height of quantum dots one by one within a fixed area showed that the optimized method leads to more accurate results. Moreover, the log-normal distribution, which is often used to represent natural processes, shows a better fit to the quantum dot height histogram obtained with the proposed method. Finally, the quantum dot heights obtained were used to calculate predicted photoluminescence peak energies, which were compared with the experimental data. Again, a better match was observed when using the proposed method to evaluate the quantum dot heights. Copyright © 2017 Elsevier B.V. All rights reserved.
A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination.
Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A
2018-02-08
Existing remote depth estimation methods for buried radioactive wastes are either limited to depths of less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method based on an approximate three-dimensional linear attenuation model that exploits the benefits of multiple measurements obtained with a radiation detector from the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations of the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods.
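The paper fits an approximate three-dimensional linear attenuation model to multiple surface measurements; the core physical idea can be illustrated with the simpler one-dimensional Beer-Lambert inversion (the count rates and attenuation coefficient below are assumed values, not from the study):

```python
import math

def burial_depth(surface_rate, unshielded_rate, mu):
    """Depth (cm) from exponential attenuation I = I0 * exp(-mu * d),
    inverted as d = ln(I0 / I) / mu.
    A 1D simplification of the paper's 3D linear attenuation model."""
    return math.log(unshielded_rate / surface_rate) / mu

# Hypothetical: Cs-137 in dry sand, assumed mu = 0.12 cm^-1 at 662 keV
print(round(burial_depth(300.0, 1000.0, 0.12), 1))  # → 10.0
```

In practice the unshielded rate I0 is unknown, which is why the paper combines multiple detector positions on the surface rather than a single measurement.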
Altitude Effects on Thermal Ice Protection System Performance; a Study of an Alternative Approach
NASA Technical Reports Server (NTRS)
Addy, Harold E., Jr.; Orchard, David; Wright, William B.; Oleskiw, Myron
2016-01-01
Research has been conducted to better understand the phenomena involved during operation of an aircraft's thermal ice protection system under running wet icing conditions. In such situations, supercooled water striking a thermally ice-protected surface does not fully evaporate but runs aft to a location where it freezes. The effects of altitude, in terms of air pressure and density, on the processes involved were of particular interest. Initial study results showed that the altitude effects on heat energy transfer were accurately modeled using existing methods, but water mass transport was not. Based upon those results, a new method to account for altitude effects on thermal ice protection system operation was proposed. The method employs a two-step process where heat energy and mass transport are sequentially matched, linked by matched surface temperatures. While not providing exact matching of heat and mass transport to reference conditions, the method produces a better simulation than other methods. Moreover, it does not rely on the application of empirical correction factors, but instead relies on the straightforward application of the primary physics involved. This report describes the method, shows results of testing the method, and discusses its limitations.
Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi
2017-01-01
Spatial normalization is a significant image pre-processing operation in statistical parametric mapping (SPM) analysis. The purpose of this study was to clarify the optimal method of spatial normalization for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods obtained by comparing patients with Alzheimer's disease or normal pressure hydrocephalus complicated with dementia against cognitively healthy subjects. We used the following methods: 3DT1-conventional, based on spatial normalization using anatomical images; 3DT1-DARTEL, based on spatial normalization with DARTEL using anatomical images; the 3DT1-conventional template and 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized using the above methods; and the ASL-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. Our results showed that the ASL-DARTEL template was smaller than the other two templates, and the SPM results obtained with the ASL-DARTEL template method were inaccurate. There were no significant differences between the 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity and more precise anatomical localization. Our SPM results suggest that spatial normalization should be performed with DARTEL using anatomical images.
Evaluating fMRI methods for assessing hemispheric language dominance in healthy subjects.
Baciu, Monica; Juphard, Alexandra; Cousin, Emilie; Bas, Jean François Le
2005-08-01
We evaluated two methods for quantifying hemispheric language dominance in healthy subjects, using a rhyme detection task (deciding whether a pair of words rhyme) and a word fluency task (generating words starting with a given letter). One of the methods, called the "flip method" (FM), was based on a direct statistical comparison between the hemispheres' activity. The second, the classical lateralization indices method (LIM), was based on calculating lateralization indices from the number of activated pixels within each hemisphere. The main difference between the methods is the statistical assessment of the inter-hemispheric difference: while FM shows whether the difference between the hemispheres' activity is statistically significant, LIM shows only whether there is a difference between hemispheres. The robustness of LIM and FM was assessed by calculating correlation coefficients between the LIs obtained with each of these methods and manual lateralization indices (MLI) obtained with the Edinburgh inventory. Our results showed significant correlation between the LIs provided by each method and the MLI, suggesting that both methods are robust for quantifying hemispheric dominance for language in healthy subjects. In the present study, we also evaluated the effect of spatial normalization, smoothing, and "clustering" (NSC) on the intra-hemispheric location of activated regions and the inter-hemispheric asymmetry of the activation. Our results showed that NSC did not affect the hemispheric specialization but increased the value of the inter-hemispheric difference.
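The classical lateralization index used by LIM is conventionally computed from the activated pixel counts per hemisphere; a minimal sketch (the counts below are invented for illustration):

```python
def lateralization_index(n_left, n_right):
    """Classical LI = (L - R) / (L + R) from activated pixel counts:
    +1 is fully left-lateralized, -1 fully right-lateralized."""
    return (n_left - n_right) / (n_left + n_right)

# Hypothetical suprathreshold pixel counts from a word-fluency contrast:
print(round(lateralization_index(840, 360), 2))  # → 0.4
```

Note that this index gives a magnitude but no significance test, which is exactly the gap the flip method addresses by comparing hemispheric activity statistically.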
A Novel Quasi-3D Method for Cascade Flow Considering Axial Velocity Density Ratio
NASA Astrophysics Data System (ADS)
Chen, Zhiqiang; Zhou, Ming; Xu, Quanyong; Huang, Xudong
2018-03-01
A novel quasi-3D Computational Fluid Dynamics (CFD) method for mid-span flow simulation of compressor cascades is proposed. The two-dimensional (2D) Reynolds-Averaged Navier-Stokes (RANS) method is shown to face challenges in predicting mid-span flow with a unity Axial Velocity Density Ratio (AVDR). The three-dimensional (3D) RANS solution also shows distinct discrepancies if the AVDR is not predicted correctly. In this paper, the discrepancies between 2D and 3D CFD results are analyzed and a novel quasi-3D CFD method is proposed. The new quasi-3D model is derived by reducing the 3D RANS Finite Volume Method (FVM) discretization over a one-spanwise-layer structured mesh cell. The sidewall effect is accounted for by two parts. The first part is explicit interface fluxes of mass, momentum, and energy, as well as turbulence. The second part is a cell boundary scaling factor representing sidewall boundary layer contraction. The performance of the novel quasi-3D method is validated on mid-span pressure distribution, pressure loss, and shock prediction for two typical cascades. The results show good agreement with the experimental data for cascades SJ301-20 and AC6-10 at all test conditions. The proposed quasi-3D method shows superior accuracy over the traditional 2D RANS method and the 3D RANS method in performance prediction of compressor cascades.
Waskitho, Dri; Lukitaningsih, Endang; Sudjadi; Rohman, Abdul
2016-01-01
Analysis of lard extracted from a lipstick formulation containing castor oil was performed using an FTIR spectroscopic method combined with multivariate calibration. Three different extraction methods were compared, namely saponification followed by liquid/liquid extraction with hexane/dichloromethane/ethanol/water, saponification followed by liquid/liquid extraction with dichloromethane/ethanol/water, and the Bligh & Dyer method using chloroform/methanol/water as the extracting solvent. Qualitative and quantitative analyses of lard were performed using principal component analysis (PCA) and partial least squares (PLS) analysis, respectively. The results showed that, in all samples prepared by the three extraction methods, PCA was capable of identifying lard in the wavenumber region of 1200-800 cm-1, with the best result obtained by the Bligh & Dyer method. Furthermore, PLS analysis in the same region used for qualification showed that Bligh & Dyer was the most suitable extraction method, with the highest coefficient of determination (R2) and the lowest root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) values.
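RMSEC and RMSEP are the same root mean square error statistic, evaluated on the calibration and the independent prediction sample sets respectively; a minimal sketch with invented lard-content values (not the paper's data):

```python
import math

def rmse(actual, predicted):
    """Root mean square error: called RMSEC when computed on calibration
    samples and RMSEP when computed on independent prediction samples."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# Hypothetical lard contents (%) and their PLS-predicted values:
y_ref = [0.0, 5.0, 10.0, 15.0, 20.0]
y_hat = [0.3, 4.6, 10.5, 14.8, 20.2]
print(round(rmse(y_ref, y_hat), 3))  # → 0.341
```

A large gap between RMSEC and RMSEP usually signals overfitting of the calibration model, which is why both are reported when comparing extraction methods.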
Laterally constrained inversion for CSAMT data interpretation
NASA Astrophysics Data System (ADS)
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of DC resistivity, TEM, and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix: a weighting matrix is applied to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method can recover the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even at a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison for the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global-search simulated annealing (SA) algorithm for the watershed shows that, although both methods deliver very similar good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
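One common way to balance parameter sensitivities is to post-multiply the Jacobian by a diagonal matrix of inverse column norms; the paper's exact weighting matrix is not specified here, so the sketch below is a generic illustration of the idea, not the authors' implementation:

```python
import math

def balance_columns(J):
    """Precondition J by scaling each column to unit Euclidean norm,
    equivalent to forming J W with W = diag(1 / ||column_j||)."""
    rows, cols = len(J), len(J[0])
    norms = [math.sqrt(sum(J[i][j] ** 2 for i in range(rows))) for j in range(cols)]
    return [[J[i][j] / norms[j] for j in range(cols)] for i in range(rows)]

# Toy Jacobian whose two parameter columns differ by six orders of magnitude:
J = [[1000.0, 0.001],
     [2000.0, 0.002]]
Jb = balance_columns(J)
# After balancing, every column has unit norm, so both parameters
# contribute comparably to the inversion update.
```

Without such scaling, gradient-based updates are dominated by the most sensitive parameters, which degrades both convergence and the uniformity of resolution the abstract mentions.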
NASA Astrophysics Data System (ADS)
Li, Jiangui; Wang, Junhua; Zhigang, Zhao; Yan, Weili
2012-04-01
In this paper, an analytical study of the permanent magnet vernier (PMV) machine is presented. The key is to analytically solve the governing Laplacian/quasi-Poissonian field equations in the motor regions. The analytical method is verified using the time-stepping finite element method, and the performance of the PMV machine is quantitatively compared against the analytical results. The analytical results agree well with the finite element method results. Finally, experimental results are given to further show the validity of the analysis.
Preti, Robert A; Chan, Wai Shun; Kurtzberg, Joanne; Dornsife, Ronna E.; Wallace, Paul K.; Furlange, Rosemary; Lin, Anna; Omana-Zapata, Imelda; Bonig, Halvard; Tonn, Thorsten
2018-01-01
Background Evaluation of the BD™ Stem Cell Enumeration (SCE) Kit was conducted at four clinical sites with flow cytometry CD34+ enumeration, to assess agreement between two investigational methods, the BD FACSCanto™ II and BD FACSCalibur™ systems, and the predicate method (Beckman Coulter Stem-Kit™ reagents). Methods Leftover and delinked specimens (n = 1,032) from clinical flow cytometry testing were analyzed on the BD FACSCanto II (n = 918) and BD FACSCalibur (n = 905) in normal and mobilized blood, frozen and thawed bone marrow, and leucopheresis and cord blood anticoagulated with CPD, ACD-A, heparin, and EDTA alone or in combination. Fresh leucopheresis analysis addressed site equivalency for sample preparation, testing, and analysis. Results The mean relative bias showed agreement within predefined parameters for the BD FACSCanto II (−2.81 to 4.31 ±7.1) and BD FACSCalibur (−2.69 to 5.2 ±7.9). Results are reported as absolute and relative differences compared to the predicate for viable CD34+, percentage of CD34+ in CD45+, and viable CD45+ populations (or gates). Bias analyses of the distribution of the predicate low, mid, and high bin values were done using BD FACSCanto II optimal gating and BD FACSCalibur manual gating for viable CD34+, percentage of CD34+ in CD45+, and viable CD45+. Bias results from both investigational methods show agreement. Deming regression analyses showed a linear relationship with R2 >0.92 for both investigational methods. Discussion In conclusion, the results from both investigational methods demonstrated agreement and equivalence with the predicate method for enumeration of absolute viable CD34+, percentage of viable CD34+ in CD45+, and absolute viable CD45+ populations. PMID:24927716
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
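A simple member of the capacity-factor family of approximations is the plant's mean capacity factor over the highest-load hours; the toy sketch below illustrates that idea only (the load and output series are invented, and real implementations weight many more hours, often by loss-of-load probability):

```python
def approx_capacity_value(load, cf, top_hours):
    """Approximate capacity value as the mean PV capacity factor during
    the top_hours highest-load hours of the series."""
    ranked = sorted(range(len(load)), key=lambda h: load[h], reverse=True)
    top = ranked[:top_hours]
    return sum(cf[h] for h in top) / top_hours

# Six hypothetical hours of system load (MW) and PV capacity factor:
load = [50.0, 80.0, 95.0, 90.0, 60.0, 40.0]
cf = [0.1, 0.6, 0.8, 0.7, 0.3, 0.0]
print(round(approx_capacity_value(load, cf, 3), 2))  # → 0.7
```

The approximation works for PV because solar output correlates with high-load afternoon hours; reliability-based methods instead recompute system risk metrics with and without the plant.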
A survey of the broadband shock associated noise prediction methods
NASA Technical Reports Server (NTRS)
Kim, Chan M.; Krejsa, Eugene A.; Khavaran, Abbas
1992-01-01
Several different prediction methods for estimating the broadband shock-associated noise of a supersonic jet are introduced and compared with experimental data at various test conditions. The nozzle geometries considered for comparison include a convergent and a convergent-divergent nozzle, both axisymmetric. The capabilities and limitations of the prediction methods in incorporating the two nozzle geometries, flight effects, and temperature effects are discussed. The predicted noise field shows the best agreement for a convergent nozzle geometry under static conditions. Predicted results for nozzles in flight show larger discrepancies from the data, and more dependable flight data are required for further comparison. Qualitative effects of jet temperature, as observed in experiment, are reproduced in the predicted results.
Comparison of Numerical Modeling Methods for Soil Vibration Cutting
NASA Astrophysics Data System (ADS)
Jiang, Jiandong; Zhang, Enguang
2018-01-01
In this paper, we studied appropriate numerical simulation methods for vibration-assisted soil cutting. Three numerical simulation methods commonly used for constant-speed soil cutting - Lagrange, ALE, and DEM - are analyzed, and three vibration soil cutting simulation models are established using LS-DYNA. The applicability of the three methods to this problem is analyzed in terms of the model mechanism and simulation results. Both the Lagrange method and the DEM method can reproduce the force oscillation on the tool and the large deformation of the soil during vibration cutting, with the Lagrange method better representing the breaking of soil fragments. Because of its poor stability, the ALE method is not suitable for the soil vibration cutting problem.
New Computational Methods for the Prediction and Analysis of Helicopter Noise
NASA Technical Reports Server (NTRS)
Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
This paper describes several new methods to predict and analyze rotorcraft noise. These methods are: 1) a combined computational fluid dynamics and Kirchhoff scheme for far-field noise predictions, 2) parallel computer implementation of the Kirchhoff integrations, 3) audio and visual rendering of the computed acoustic predictions over large far-field regions, and 4) acoustic tracebacks to the Kirchhoff surface to pinpoint the sources of the rotor noise. The paper describes each method and presents sample results for three test cases. The first case consists of in-plane high-speed impulsive noise and the other two cases show idealized parallel and oblique blade-vortex interactions. The computed results show good agreement with available experimental data but convey much more information about the far-field noise propagation. When taken together, these new analysis methods exploit the power of new computer technologies and offer the potential to significantly improve our prediction and understanding of rotorcraft noise.
Analysis of de-noising methods to improve the precision of the ILSF BPM electronic readout system
NASA Astrophysics Data System (ADS)
Shafiee, M.; Feghhi, S. A. H.; Rahighi, J.
2016-12-01
In order to achieve optimum operation and precise control at particle accelerators, the beam position must be measured with sub-μm precision. We developed a BPM electronic readout system at the Iranian Light Source Facility, and it has been experimentally tested at the ALBA accelerator facility. The results show a precision of 0.54 μm in beam position measurements. To improve the precision of this beam position monitoring system to the sub-μm level, we have studied different de-noising methods, such as principal component analysis, wavelet transforms, FIR filtering, and direct averaging. An evaluation of the noise reduction was performed to verify the ability of these methods. The results show that noise reduction based on the Daubechies wavelet transform outperforms the other algorithms, and the method is suitable for signal noise reduction in beam position monitoring systems.
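Of the methods compared, direct averaging is the simplest to sketch: each sample is replaced by the mean of a centered window. This shows only that baseline, not the Daubechies wavelet approach the paper favors, and the signal values are invented:

```python
def direct_average(signal, window=5):
    """De-noise by replacing each sample with the mean of a centered
    window; the edges use only the neighbors that exist."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# A flat position reading with one noise spike gets smeared out:
print(direct_average([0.0, 0.0, 3.0, 0.0, 0.0], window=3))  # → [0.0, 1.0, 1.0, 1.0, 0.0]
```

The example also shows the method's weakness: the spike is attenuated but spread over neighboring samples, whereas wavelet thresholding can suppress noise while preserving sharp features of the beam motion.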
NASA Astrophysics Data System (ADS)
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables automated analysis of a several-log-magnitude higher number of cells than microscopy-based approaches. However, rotational positioning of cells can cause fluorescent spots to overlap, leading to discordant spot counts. To address counting errors caused by overlapping spots, this study proposes a classification method based on a Gaussian mixture model (GMM), using the Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM as global image features. With a Random Forest classifier, the proposed method detects closely overlapping spots that cannot be separated by existing segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
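A minimal sketch of the model-selection idea behind using GMM information criteria: fit 1-D Gaussian mixtures with different component counts by EM and compare their AIC/BIC. The intensity profile below is synthetic, and the paper's Random Forest stage (which consumes these criteria as features) is not reproduced:

```python
import numpy as np

def _densities(x, weights, means, var):
    """Weighted per-component Gaussian densities; shape (n, k)."""
    return (weights / np.sqrt(2.0 * np.pi * var)
            * np.exp(-0.5 * (x[:, None] - means) ** 2 / var))

def fit_gmm_1d(x, k, n_iter=200):
    """Fit a k-component 1-D Gaussian mixture by EM; return the log-likelihood."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    means = np.quantile(x, np.linspace(0.1, 0.9, k))  # deterministic, spread-out init
    var = np.full(k, np.var(x))
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        dens = _densities(x, weights, means, var)        # E-step
        resp = dens / dens.sum(axis=1, keepdims=True)
        nk = resp.sum(axis=0)                            # M-step
        weights = nk / n
        means = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk + 1e-9
    return np.log(_densities(x, weights, means, var).sum(axis=1)).sum()

def aic_bic(loglik, k, n):
    """Information criteria for a 1-D GMM (3k - 1 free parameters)."""
    p = 3 * k - 1
    return 2 * p - 2 * loglik, p * np.log(n) - 2 * loglik

# Synthetic 1-D intensity profile of two closely spaced "spots".
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])
ll1, ll2 = fit_gmm_1d(x, 1), fit_gmm_1d(x, 2)
aic1, bic1 = aic_bic(ll1, 1, len(x))
aic2, bic2 = aic_bic(ll2, 2, len(x))
```

On this bimodal profile both criteria prefer the two-component model, which is the signal such features carry about overlapping spots.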
Reconstruction method for running shape of rotor blade considering nonlinear stiffness and loads
NASA Astrophysics Data System (ADS)
Wang, Yongliang; Kang, Da; Zhong, Jingjun
2017-10-01
The aerodynamic and centrifugal loads acting on a rotating blade deform the blade relative to its shape at rest. Accurate prediction of the running blade configuration plays a significant role in examining and analyzing turbomachinery performance. Considering nonlinear stiffness and loads, a reconstruction method is presented to transform a rotating blade from its cold to its hot state. When calculating blade deformations, the blade stiffness and load conditions are updated simultaneously as the blade shape varies, and the reconstruction procedure is iterated until a converged hot blade shape is obtained. The method has been employed to determine the operating shapes of a test rotor blade and the Stage 37 rotor blade, and the calculated results are compared with experiments. The results show that the proposed method is effective for predicting blade operating shape, and that it can improve the precision of finite element analysis and aerodynamic performance analysis.
Evaluation of bearing capacity of piles from cone penetration test data.
DOT National Transportation Integrated Search
2007-12-01
A statistical analysis and ranking criteria were used to compare the CPT methods and the conventional alpha design method. Based on the results, the de Ruiter/Beringen and LCPC methods showed the best capability in predicting the measured load carryi...
Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations
NASA Technical Reports Server (NTRS)
Stefanski, Philip L.
2014-01-01
A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
Amanat, B; Kardan, M R; Faghihi, R; Hosseini Pooya, S M
2013-01-01
Background: Radon and its daughters are among the most important sources of natural exposure in the world. Soil is a significant source of radon/thoron because it contains both radium and thorium, so thoron emanating from soil may increase the uncertainty of radon measurements. Recently, a diffusion chamber was designed and optimized for passive discriminative measurement of radon/thoron concentrations in soil. Objective: To evaluate the capability of the passive method, comparative measurements against active methods were performed. Method: The method is based on a diffusion chamber containing two Lexan polycarbonate SSNTDs, which discriminates the radon/thoron emanating from soil by the delay method. The comparative measurements were carried out at ten selected points in the HLNRA of Ramsar in Iran, and the linear regression and correlation between the results of the two methods were studied. Results: The radon concentrations ranged from 12.1 to 165 kBq/m3, with a correlation of 0.99 between the active and passive results. The thoron concentrations measured at these points ranged from 1.9 to 29.5 kBq/m3. Conclusion: The sensitivity, together with the strong correlation with active measurements, shows that the new low-cost passive method is appropriate for accurate seasonal measurements of radon and thoron concentrations in soil. PMID:25505760
Reconstructing baryon oscillations: A Lagrangian theory perspective
NASA Astrophysics Data System (ADS)
Padmanabhan, Nikhil; White, Martin; Cohn, J. D.
2009-03-01
Recently Eisenstein and collaborators introduced a method to “reconstruct” the linear power spectrum from a nonlinearly evolved galaxy distribution in order to improve precision in measurements of baryon acoustic oscillations. We reformulate this method within the Lagrangian picture of structure formation, to better understand what such a method does, and what the resulting power spectra are. We show that reconstruction does not reproduce the linear density field, at second order. We however show that it does reduce the damping of the oscillations due to nonlinear structure formation, explaining the improvements seen in simulations. Our results suggest that the reconstructed power spectrum is potentially better modeled as the sum of three different power spectra, each dominating over different wavelength ranges and with different nonlinear damping terms. Finally, we also show that reconstruction reduces the mode-coupling term in the power spectrum, explaining why miscalibrations of the acoustic scale are reduced when one considers the reconstructed power spectrum.
Dexter, Alex; Race, Alan M; Steven, Rory T; Barnes, Jennifer R; Hulme, Heather; Goodwin, Richard J A; Styles, Iain B; Bunch, Josephine
2017-11-07
Clustering is widely used in MSI to segment anatomical features and differentiate tissue types, but existing approaches are both CPU and memory-intensive, limiting their application to small, single data sets. We propose a new approach that uses a graph-based algorithm with a two-phase sampling method that overcomes this limitation. We demonstrate the algorithm on a range of sample types and show that it can segment anatomical features that are not identified using commonly employed algorithms in MSI, and we validate our results on synthetic MSI data. We show that the algorithm is robust to fluctuations in data quality by successfully clustering data with a designed-in variance using data acquired with varying laser fluence. Finally, we show that this method is capable of generating accurate segmentations of large MSI data sets acquired on the newest generation of MSI instruments and evaluate these results by comparison with histopathology.
Ramanujam, Nedunchelian; Kaliappan, Manivannan
2016-01-01
Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from input documents, but they have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among sentences, and redundancy. This paper introduces a timestamp approach combined with Naïve Bayesian classification for multidocument text summarization. The timestamp gives the summary an ordered structure, yielding a more coherent summary while extracting the most relevant information from the multiple documents. A scoring strategy is used to calculate word scores and obtain word frequencies, and linguistic quality is estimated in terms of readability and comprehensibility. To demonstrate the efficiency of the proposed method, this paper compares it with the existing MEAD algorithm; the timestamp procedure is also applied to the MEAD algorithm and the results are examined against the proposed method. The results show that the proposed method executes the summarization process in less time than the existing MEAD algorithm. Moreover, it achieves better precision, recall, and F-score than the existing clustering-with-lexical-chaining approach. PMID:27034971
Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification
NASA Astrophysics Data System (ADS)
Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang
2017-12-01
To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Most of these studies use supervised learning methods such as artificial neural networks, support vector machines, and decision trees. These methods can detect rolling bearing failures effectively, but achieving good detection results often requires many training samples. Motivated by this, a novel clustering method is proposed in this paper that is able to find the correct number of clusters automatically. The effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect fault types from small samples, while remaining highly accurate even for massive samples.
Reproducibility of techniques using Archimedes' principle in measuring cancellous bone volume.
Zou, L; Bloebaum, R D; Bachus, K N
1997-01-01
Researchers have been interested in developing techniques to accurately and reproducibly measure the volume fraction of cancellous bone. Historically, bone researchers have used Archimedes' principle with water to measure the volume fraction of cancellous bone. Preliminary results in our lab suggested that the calibrated water technique did not provide reproducible results. Because of this difficulty, it was decided to compare the conventional water method with a water-with-surfactant method and a helium method using a micropycnometer. The water/surfactant and helium methods were attempts to improve fluid penetration into the small voids present in the cancellous bone structure. To compare the reproducibility of the new methods with the conventional water method, 16 cancellous bone specimens were obtained from femoral condyles of human and greyhound dog femora. The volume fraction measurements on each specimen were repeated three times with all three techniques. The results showed that the helium displacement method was more than an order of magnitude more reproducible than the two water methods (p < 0.05). Statistical analysis also showed that the conventional water method produced the lowest reproducibility (p < 0.05). The data from this study indicate that the helium displacement technique is a very useful, rapid, and reproducible tool for quantitatively characterizing anisotropic porous tissue structures such as cancellous bone.
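The helium micropycnometer measurement rests on simple gas-expansion arithmetic; here is a simplified sketch, assuming ideal gas behavior, absolute pressures, and an initially evacuated sample cell (all numbers are hypothetical, not the study's protocol):

```python
def skeletal_volume(v_cell, v_ref, p1, p2):
    """Skeletal (solid) volume from an idealized gas pycnometer run.

    Gas at absolute pressure p1 in the reference volume v_ref expands
    into the (initially evacuated) sample cell v_cell, settling at p2:
        p1 * v_ref = p2 * (v_ref + v_cell - v_solid)
    All volumes in the same units (e.g. cm^3).
    """
    return v_cell - v_ref * (p1 / p2 - 1.0)

def volume_fraction(v_solid, v_bulk):
    """Bone volume fraction BV/TV: solid volume over bulk (envelope) volume."""
    return v_solid / v_bulk

# Hypothetical run: 5 cm^3 cell and reference, 2.0 bar expanding to 1.25 bar.
v_solid = skeletal_volume(v_cell=5.0, v_ref=5.0, p1=2.0, p2=1.25)
vf = volume_fraction(v_solid, v_bulk=4.0)
```

Because helium penetrates voids far smaller than water can reach, the measured solid volume (and hence the volume fraction) is less sensitive to trapped fluid, which is one plausible reason for the reproducibility reported above.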
Maritime Search and Rescue via Multiple Coordinated UAS
2017-06-12
performed by a set of UAS. Our investigation covers the detection of multiple mobile objects by a heterogeneous collection of UAS. Three methods (two...account for contingencies such as airspace deconfliction. Results are produced using simulation to verify the capability of the proposed method and to...compare the various partitioning methods. Results from this simulation show that great gains in search efficiency can be made when the search space is
Persona, Marek; Kutarov, Vladimir V; Kats, Boris M; Persona, Andrzej; Marczewska, Barbara
2007-01-01
This paper describes a new method for predicting the octanol-water partition coefficient based on molecular graph theory. The results obtained with the new method correlate well with experimental values, and were compared with those obtained by ten other structure-correlation methods. The comparison shows that graph theory can be very useful in structure-correlation research.
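The abstract does not specify which graph descriptors are used; as a hedged example of the kind of molecular-graph quantity used in such structure-correlation work, here is the classic Wiener index computed by breadth-first search:

```python
from collections import deque

def wiener_index(adj):
    """Wiener index: sum of shortest-path distances over all vertex pairs.

    adj: dict mapping each vertex to a list of neighbours (unweighted graph).
    """
    total = 0
    for source in adj:
        # BFS from `source` gives shortest path lengths in an unweighted graph.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2  # each unordered pair was counted twice

# Hydrogen-suppressed carbon skeletons:
butane = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}          # n-butane: a path
isobutane = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}       # isobutane: a star
```

Descriptors of this kind distinguish branching isomers (n-butane scores 10, isobutane 9), which is the sort of structural information that correlates with partitioning behavior.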
NASA Astrophysics Data System (ADS)
Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.
2017-10-01
This article presents the results of computational experiments carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm iteratively determines the missing boundary condition from the overdetermination condition using the conjugate gradient method. Results of calculations on examples with exact solutions, as well as with the additional condition perturbed by random errors, are presented. The results show the high efficiency of the conjugate gradient iterative method for the numerical solution of this inverse problem.
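The conjugate gradient iteration at the core of such algorithms can be sketched on a small symmetric positive-definite system (the inverse problem above reduces, at each step, to solves of this general kind; the matrix below is purely illustrative):

```python
import numpy as np

def conjugate_gradient(a, b, tol=1e-10, max_iter=1000):
    """Solve a @ x = b for a symmetric positive-definite matrix a."""
    x = np.zeros_like(b)
    r = b - a @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        ap = a @ p
        alpha = rs_old / (p @ ap)   # step length minimizing along p
        x += alpha * p
        r -= alpha * ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # conjugate direction update
        rs_old = rs_new
    return x

a = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(a, b)
```

For an n-by-n SPD system the method converges in at most n iterations in exact arithmetic, which is what makes it attractive inside iterative boundary-condition reconstruction.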
Predicting chaos in memristive oscillator via harmonic balance method.
Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai
2012-12-01
This paper studies the possible chaotic behaviors of a memristive oscillator with cubic nonlinearities via the harmonic balance method, also called the describing function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system into a Lur'e model and predict the existence of chaotic behaviors. To ensure the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.
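The describing function that harmonic balance relies on can be checked numerically. For an odd nonlinearity f, N(A) is the gain of the first harmonic of f(A sin θ); for the cubic f(x) = x³ the classical closed form is N(A) = 3A²/4. A sketch with an illustrative amplitude:

```python
import numpy as np

def describing_function(f, amplitude, n=4096):
    """First-harmonic describing function N(A) of an odd nonlinearity f.

    Samples f(A sin θ) over one period and extracts the in-phase
    fundamental Fourier coefficient b1 = (1/pi) * integral(y * sin(θ)).
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    y = f(amplitude * np.sin(theta))
    b1 = 2.0 * np.mean(y * np.sin(theta))  # rectangle rule, exact for periodic y
    return b1 / amplitude

cubic = lambda x: x ** 3
a = 2.0
n_numeric = describing_function(cubic, a)
n_exact = 3.0 * a ** 2 / 4.0   # classical result for f(x) = x^3
```

Replacing the nonlinearity by this amplitude-dependent gain is what lets the Lur'e-model analysis predict limit cycles and candidate chaotic regimes.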
Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M
2017-05-01
Four accurate, sensitive, and reliable stability-indicating chemometric methods were developed for the quantitative determination of agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal-component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of the alkaline and acidic degradants of AGM (DG1 and DG2). The relative mean squared errors of prediction (RMSEP) for the proposed models in the determination of AGM were 1.68, 1.72, 0.68, and 0.22 for PCR, PLS, linSVR, and PC-linANN, respectively, showing the superiority of supervised learning machine methods over principal-component-based methods. The results also suggest that linANN is the method of choice for determining components present in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed comparable performance and quantification power.
Tomuta, Ioan; Iovanov, Rares; Bodoki, Ede; Vonica, Loredana
2014-04-01
Near-infrared (NIR) spectroscopy is an important component of the Process Analytical Technology (PAT) toolbox and a key technology for enabling rapid analysis of pharmaceutical tablets. The aim of this work was to develop and validate NIR-chemometric methods for determining not only the active pharmaceutical ingredient content but also the pharmaceutical properties (crushing strength, disintegration time) of meloxicam tablets. The method for active content assay was developed on samples corresponding to 80%, 90%, 100%, 110%, and 120% of the nominal meloxicam content, and the methods for pharmaceutical characterization were developed on samples prepared at seven different compression forces (ranging from 7 to 45 kN), using NIR transmission spectra of intact tablets and PLS regression. The results show that the developed methods have good trueness, precision, and accuracy and are appropriate for direct active content assay in tablets (ranging from 12 to 18 mg/tablet), as well as for predicting the crushing strength and disintegration time of intact meloxicam tablets. The comparative data show that the proposed methods agree well with the reference methods currently used for the characterization of meloxicam tablets (HPLC-UV for the assay and European Pharmacopoeia methods for crushing strength and disintegration time). These results demonstrate the possibility of predicting both chemical properties (active content) and physical/pharmaceutical properties (crushing strength and disintegration time) directly, without any sample preparation, from the same NIR transmission spectrum of meloxicam tablets.
Dynamical downscaling inter-comparison for high resolution climate reconstruction
NASA Astrophysics Data System (ADS)
Ferreira, J.; Rocha, A.; Castanheira, J. M.; Carvalho, A. C.
2012-04-01
In the scope of the project "High-resolution Rainfall EroSivity analysis and fORecasTing - RESORT", an evaluation of various dynamic downscaling methods is presented. The methods evaluated range from the classic approach of nesting a regional model within a global model (here, the ECMWF reanalysis) to more recently proposed methods that use Newtonian relaxation to nudge the regional model results toward the reanalysis. The best-performing method uses a variational data assimilation system to incorporate observational data into the regional model. The climatology of a 5-year simulation using this method is tested against observations over mainland Portugal and the ocean over the Portuguese continental shelf, which shows that the developed method is suitable for high-resolution climate reconstruction over continental Portugal.
Motsa, S. S.; Magagula, V. M.; Sibanda, P.
2014-01-01
This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252
Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics
Chen, Wenan; Larrabee, Beth R.; Ovsyannikova, Inna G.; Kennedy, Richard B.; Haralambieva, Iana H.; Poland, Gregory A.; Schaid, Daniel J.
2015-01-01
Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. PMID:25948564
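As a hedged sketch of the marginal-statistics Bayesian idea (a Wakefield-style approximate Bayes factor, related to but not identical to CAVIARBF's multivariate model), assuming a single causal variant and equal prior probabilities across SNPs:

```python
import math

def approx_bayes_factor(z, v, w):
    """Approximate Bayes factor for association vs. the null, computed from
    a marginal test statistic alone (Wakefield-style form).

    z: marginal test statistic (beta_hat / se)
    v: variance of beta_hat (se squared)
    w: prior variance of the effect size under association
    """
    r = w / (v + w)
    return math.sqrt(v / (v + w)) * math.exp(0.5 * z * z * r)

def posterior_probs(z_scores, v, w):
    """Posterior probability that each SNP is the causal one, assuming
    exactly one causal variant and equal priors (a common simplification;
    CAVIARBF also models LD among SNPs, omitted here)."""
    bfs = [approx_bayes_factor(z, v, w) for z in z_scores]
    total = sum(bfs)
    return [b / total for b in bfs]

# Hypothetical z-scores at three candidate SNPs.
probs = posterior_probs([1.2, 5.0, 0.3], v=1.0, w=0.2)
```

With a zero prior variance the Bayes factor is exactly 1 (no evidence either way), and larger |z| yields larger evidence, which is why only the marginal statistics are needed.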
Mark A. Bradford; Matthew D. Wallenstein; Steven D. Allison; Kathleen K. Treseder; Serita D. Frey; Brian W. Watts; Christian A. Davies; Thomas R. Maddox; Jerry M. Melillo; Jacqueline E. Mohan; James F. Reynolds
2009-01-01
Hartley et al. question whether reduction in Rmass, under experimental warming, arises because of the biomass method. We show the method they treat as independent yields the same result. We describe why the substrate-depletion hypothesis may not...
Water stress assessment of cork oak leaves and maritime pine needles based on LIF spectra
NASA Astrophysics Data System (ADS)
Lavrov, A.; Utkin, A. B.; Marques da Silva, J.; Vilar, Rui; Santos, N. M.; Alves, B.
2012-02-01
The aim of the present work was to develop a method for remote assessment of the impact of fire and drought stress on Mediterranean forest species such as the cork oak (Quercus suber) and maritime pine (Pinus pinaster). The proposed method is based on laser-induced fluorescence (LIF): chlorophyll fluorescence is excited remotely by frequency-doubled Nd:YAG laser pulses, then collected and analyzed using a telescope and a gated high-sensitivity spectrometer. The plant health criterion is the I685/I740 ratio calculated from the fluorescence spectra. The method was benchmarked against a conventional continuous-excitation fluorometric method and gravimetric water loss measurements. The results obtained with the two fluorescence methods correlate strongly with each other and with the weight-loss measurements, showing that the proposed method is suitable for assessing fire and drought impact on these two species.
Testing an automated method to estimate ground-water recharge from streamflow records
Rutledge, A.T.; Daniel, C.C.
1994-01-01
The computer program RORA allows automated analysis of streamflow hydrographs to estimate ground-water recharge. Output from the program, which is based on the recession-curve-displacement method (often called the Rorabaugh method, for whom the program is named), was compared to recharge estimates obtained from manual analysis of 156 years of streamflow record from 15 streamflow-gaging stations in the eastern United States. Statistical tests showed no significant difference between paired estimates of annual recharge by the two methods. Tests of results produced by the four workers who performed the manual method showed that results can differ significantly between workers; twenty-two percent of the variation between manual and automated estimates could be attributed to having different workers perform the manual method. The program RORA produces recharge estimates equivalent to those produced manually, greatly increases the speed of analysis, and reduces the subjectivity inherent in manual analysis.
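RORA's recession-curve-displacement computation is not reproduced here; as a simpler illustration of automated hydrograph analysis, this sketch applies a one-parameter Lyne-Hollick digital filter to separate baseflow from a hypothetical daily flow series:

```python
def lyne_hollick_baseflow(q, alpha=0.925):
    """One forward pass of the Lyne-Hollick digital filter.

    q: daily streamflow series; alpha: filter parameter (0.9-0.95 typical).
    Returns the baseflow series (total flow minus filtered quickflow,
    bounded between zero and total flow). A simplified illustration, not
    RORA's recession-curve-displacement method.
    """
    quick = [q[0] * 0.5]  # crude initialization of the quickflow state
    for t in range(1, len(q)):
        f = alpha * quick[-1] + 0.5 * (1.0 + alpha) * (q[t] - q[t - 1])
        quick.append(max(f, 0.0))
    return [min(max(q[t] - quick[t], 0.0), q[t]) for t in range(len(q))]

# Hypothetical hydrograph: a storm peak followed by recession (m^3/s).
flow = [5, 5, 20, 60, 35, 18, 10, 7, 6, 5]
base = lyne_hollick_baseflow(flow)
```

The filtered baseflow stays at or below total flow and excludes the storm peak, which is the behavior any automated separation (including RORA's) must reproduce before recharge can be estimated.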
Ammonia synthesis using magnetic induction method (MIM)
NASA Astrophysics Data System (ADS)
Puspitasari, P.; Razak, J. Abd; Yahya, N.
2012-09-01
The most challenging issue in ammonia synthesis is obtaining a high yield. A new approach to ammonia synthesis using the magnetic induction method (MIM) and Helmholtz coils has been proposed. Ammonia was detected using the Kjeldahl method and FTIR, and the system was designed using AutoCAD software. The magnetic field of the MIM was varied from 100 mT to 200 mT, and the magnetic field of the Helmholtz coils was 14 mT. The FTIR results show that ammonia was successfully formed, with stretching peaks at 1097, 1119, 1162, 1236, 1377, and 1464 cm-1; the UV-VIS results show the ammonia band at a wavelength of 195 nm. The ammonia yield increased to 244.72 μmol/g·h using the MIM and six pairs of Helmholtz coils. This new method is therefore promising for achieving high ammonia yields at ambient conditions (25°C and 1 atm) under the magnetic induction method (MIM).
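The centre field of an ideal Helmholtz pair follows a standard closed form, B = (4/5)^(3/2) · μ0 · N · I / R. A sketch with hypothetical coil parameters (the paper's coils produced 14 mT; the turns, current, and radius below are illustrative only):

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def helmholtz_center_field(turns, current, radius):
    """Magnetic flux density at the centre of an ideal Helmholtz pair:
    two coaxial coils of equal radius, separated by that same radius."""
    return (4.0 / 5.0) ** 1.5 * MU_0 * turns * current / radius

# Hypothetical coil: 100 turns per coil, 1.5 A, 10 cm radius.
b = helmholtz_center_field(turns=100, current=1.5, radius=0.10)  # ~1.35 mT
```

The geometry's appeal is that the field near the midpoint is highly uniform, so a sample sits in a well-defined B; reaching tens of mT at bench scale requires correspondingly more turns or current than this toy example.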
Preparation of samples for leaf architecture studies, a method for mounting cleared leaves
Vasco, Alejandra; Thadeo, Marcela; Conover, Margaret; Daly, Douglas C.
2014-01-01
• Premise of the study: Several recent waves of interest in leaf architecture have shown an expanding range of approaches and applications across a number of disciplines. Despite this increased interest, examination of existing archives of cleared and mounted leaves shows that current methods for mounting, in particular, yield unsatisfactory results and deterioration of samples over relatively short periods. Although techniques for clearing and staining leaves are numerous, published techniques for mounting leaves are scarce. • Methods and Results: Here we present a complete protocol and recommendations for clearing, staining, and imaging leaves, and, most importantly, a method to permanently mount cleared leaves. • Conclusions: The mounting protocol is faster than other methods, inexpensive, and straightforward; moreover, it yields clear and permanent samples that can easily be imaged, scanned, and stored. Specimens mounted with this method preserve well, with leaves that were mounted more than 35 years ago showing no signs of bubbling or discoloration. PMID:25225627
Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora
2009-01-01
This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.
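MARS builds its models from hinge (truncated linear) basis functions. A minimal sketch fits fixed-knot hinges by least squares; full MARS additionally searches knot locations and prunes terms, and the data below are synthetic:

```python
import numpy as np

def hinge_features(x, knots):
    """MARS-style basis: a constant column plus the hinge pair
    max(0, x - t) and max(0, t - x) for each knot t."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(x - t, 0.0))
        cols.append(np.maximum(t - x, 0.0))
    return np.column_stack(cols)

def fit_hinge_model(x, y, knots):
    """Least-squares fit of a piecewise-linear model on fixed knots."""
    coef, *_ = np.linalg.lstsq(hinge_features(x, knots), y, rcond=None)
    return coef

# Synthetic target with a kink at zero: y = |x| is exactly representable
# as max(0, x) + max(0, -x).
x = np.linspace(-2.0, 2.0, 81)
y = np.abs(x)
coef = fit_hinge_model(x, y, knots=[0.0])
pred = hinge_features(x, [0.0]) @ coef
```

The ability of hinge bases to capture kinks and thresholds, rather than the smooth boundaries of parametric classifiers, is one plausible reason MARS-based classification can outperform parallelepiped and maximum-likelihood rules on land cover data.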
Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods
NASA Technical Reports Server (NTRS)
Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.
1994-01-01
Two different approaches, the direct simulation Monte Carlo (DSMC) method based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, which are based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various kinds of flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data, both in the density and the rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.
NASA Astrophysics Data System (ADS)
Ferrini, Silvia; Schaafsma, Marije; Bateman, Ian
2014-06-01
Benefit transfer (BT) methods are becoming increasingly important for environmental policy, but the empirical findings regarding transfer validity are mixed. A novel valuation survey was designed to obtain both stated preference (SP) and revealed preference (RP) data concerning river water quality values from a large sample of households. Both dichotomous choice and payment card contingent valuation (CV) and travel cost (TC) data were collected. Resulting valuations were directly compared and used for BT analyses using both unit value and function transfer approaches. WTP estimates are found to pass the convergence validity test. BT results show that the CV data produce lower transfer errors, below 20% for both unit value and function transfer, than TC data especially when using function transfer. Further, comparison of WTP estimates suggests that in all cases, differences between methods are larger than differences between study areas. Results show that when multiple studies are available, using welfare estimates from the same area but based on a different method consistently results in larger errors than transfers across space keeping the method constant.
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study, the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system, collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose, an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between the experimental results and the two calculation methods was obtained. Results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, with the CCC algorithm showing more accurate depth dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom, and better results than the ETAR method in bone and lung tissues. PMID:22973081
Utility-preserving anonymization for health data publishing.
Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn
2017-07-11
Publishing raw electronic health records (EHRs) may be considered a breach of the privacy of individuals because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing so that they satisfy privacy models such as k-anonymity. Among various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and various methods have been proposed to reduce it. However, existing generalization-based data anonymization methods cannot avoid excessive information loss and thus fail to preserve data utility. We propose a utility-preserving anonymization for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method; it applies a full-domain generalization algorithm. We evaluate our method against an existing method on two aspects: information loss, measured through various quality metrics, and the error rate of analysis results. With all the quality metrics, our proposed method shows lower information loss than the existing method. In real-world EHR analysis, the results show only a small error between the data anonymized by the proposed method and the original data. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of EHRs anonymized by previous approaches.
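As a point of reference for the generalization step discussed above, a minimal sketch of a k-anonymity check after full-domain generalization (the attribute, hierarchy, and records are invented for illustration; this is not the authors' utility-preserving algorithm):

```python
from collections import Counter

def generalize_age(age: int, level: int) -> str:
    """Toy full-domain generalization hierarchy for a quasi-identifier
    'age': level 0 keeps the exact value, level 1 maps to 10-year bands."""
    if level == 0:
        return str(age)
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

def is_k_anonymous(records, level, k):
    """A table is k-anonymous if every quasi-identifier combination
    is shared by at least k records."""
    groups = Counter(generalize_age(r["age"], level) for r in records)
    return all(count >= k for count in groups.values())

records = [{"age": a} for a in (23, 27, 31, 35, 38, 24)]
print(is_k_anonymous(records, level=0, k=2))  # False: exact ages are unique
print(is_k_anonymous(records, level=1, k=2))  # True: bands 20-29 and 30-39
```

Generalizing to the coarser level achieves k-anonymity at the cost of information loss, which is the trade-off the counterfeit-record approach above aims to soften.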
Flip-avoiding interpolating surface registration for skull reconstruction.
Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye
2018-03-30
Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.
Buried Man-made Structure Imaging using 2-D Resistivity Inversion
NASA Astrophysics Data System (ADS)
Anderson Bery, Andy; Nordiana, M. M.; El Hidayah Ismail, Noer; Jinmin, M.; Nur Amalina, M. K. A.
2018-04-01
This study was carried out to determine a suitable resistivity inversion method for imaging a buried man-made structure (a bunker). The study comprised two stages. The first stage was the selection of a suitable array using a 2-D computerized modelling method; the chosen array was then used in the infield resistivity survey to determine the dimensions and location of the target. The 2-D resistivity inversion results showed that the robust inversion method is suitable for resolving the top and bottom of the buried bunker. In addition, the dimensions of the buried bunker were successfully determined, with a height of 7 m and a length of 20 m, and the target is located between -10 m and 10 m along the infield resistivity survey line. The 2-D resistivity inversion results obtained in this study show that parameter selection is important for obtaining optimum results. These parameters are the array type, the survey geometry, and the inversion method used in data processing.
Broadband impedance boundary conditions for the simulation of sound propagation in the time domain.
Bin, Jonghoon; Yousuff Hussaini, M; Lee, Soogab
2009-02-01
An accurate and practical surface impedance boundary condition in the time domain has been developed for application to broadband-frequency simulation in aeroacoustic problems. To show the capability of this method, two kinds of numerical simulations are performed and compared with the analytical/experimental results: one is acoustic wave reflection by a monopole source over an impedance surface and the other is acoustic wave propagation in a duct with a finite impedance wall. Both single-frequency and broadband-frequency simulations are performed within the framework of linearized Euler equations. A high-order dispersion-relation-preserving finite-difference method and a low-dissipation, low-dispersion Runge-Kutta method are used for spatial discretization and time integration, respectively. The results show excellent agreement with the analytical/experimental results at various frequencies. The method accurately predicts both the amplitude and the phase of acoustic pressure and ensures the well-posedness of the broadband time-domain impedance boundary condition.
Decoding of quantum dots encoded microbeads using a hyperspectral fluorescence imaging method.
Liu, Yixi; Liu, Le; He, Yonghong; Zhu, Liang; Ma, Hui
2015-05-19
We present a method for decoding quantum-dot-encoded microbeads from their fluorescence spectra using line-scan hyperspectral fluorescence imaging (HFI). An HFI system was developed to acquire both the spectra of the fluorescence signal and the spatial information of the encoded microbeads, and a decoding scheme was adopted to decode the spectra of multicolor microbeads acquired by the system. Comparison experiments between the HFI system and a flow cytometer showed that the HFI system has higher spectral resolution, so more channels in the spectral dimension can be used. A detection and decoding experiment with multicolor beads carrying immobilized single-stranded DNA (ssDNA) demonstrated the efficiency of the HFI system. Surface modification of the microbeads with polydopamine was characterized by scanning electron microscopy, and ssDNA immobilization was characterized by laser confocal microscopy. These results indicate that the designed HFI system can be applied to practical biological and medical applications.
Method comparison for the determination of hydraulic conductivity
NASA Astrophysics Data System (ADS)
Storz, Katharina; Steger, Hagen; Wagner, Valentin; Bayer, Peter; Blum, Philipp
2017-06-01
Knowing the hydraulic conductivity (K) is a precondition for understanding groundwater flow processes in the subsurface. Numerous laboratory and field methods for the determination of hydraulic conductivity exist, which can lead to significantly different results. In order to quantify the variability of these various methods, the hydraulic conductivity of an industrial silica sand (Dorsilit) was examined using four different methods: (1) grain-size analysis, (2) the Kozeny-Carman approach, (3) permeameter tests and (4) flow rate experiments in large-scale tank experiments. Due to the large volume of the artificially built aquifer, the tank experiment results are assumed to be the most representative. Hydraulic conductivity values derived from permeameter tests show only minor deviation, while results of the empirically evaluated grain-size analyses are about one order of magnitude higher and show great variance. The latter was confirmed by an analysis of several methods for the determination of K-values found in the literature; we therefore generally question the suitability of grain-size analyses and strongly recommend the use of permeameter tests.
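For orientation, one common form of the Kozeny-Carman relation mentioned above estimates K from an effective grain diameter d and porosity n as K = (ρg/μ)·(n³/(1-n)²)·d²/180; a sketch with invented parameter values (there are several variants of this equation, and the study's exact form is not specified here):

```python
def kozeny_carman_K(d_m: float, n: float,
                    rho: float = 998.0,   # water density, kg/m^3
                    g: float = 9.81,      # gravity, m/s^2
                    mu: float = 1.0e-3    # dynamic viscosity, Pa*s
                    ) -> float:
    """Hydraulic conductivity K (m/s) from one common form of the
    Kozeny-Carman equation: K = (rho*g/mu) * n^3/(1-n)^2 * d^2/180."""
    return (rho * g / mu) * (n ** 3 / (1.0 - n) ** 2) * d_m ** 2 / 180.0

# Hypothetical medium sand: effective diameter 0.4 mm, porosity 0.35
K = kozeny_carman_K(d_m=0.4e-3, n=0.35)
print(f"{K:.2e} m/s")  # on the order of 1e-3 m/s, typical for clean sand
```

The strong d² dependence is one reason grain-size based estimates scatter so widely compared with direct permeameter measurements.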
Method for computing energy release rate using the elastic work factor approach
NASA Astrophysics Data System (ADS)
Rhee, K. Y.; Ernst, H. A.
1992-01-01
The elastic work factor eta(el) concept was applied to composite structures for the calculation of the total energy release rate using a single specimen. Cracked lap shear specimens with four different unidirectional fiber orientations were used to examine the dependence of eta(el) on material properties, and three different thickness ratios (lap/strap) were used to determine how geometric conditions affect eta(el). The eta(el) values were calculated in two different ways: by the compliance method and by the crack closure method. The results show that the two methods produce comparable eta(el) values and that, while eta(el) is affected significantly by geometric conditions, it is reasonably independent of material properties for a given geometry. The results also show that the elastic work factor can be used to calculate the total energy release rate using a single specimen.
Character Recognition Method by Time-Frequency Analyses Using Writing Pressure
NASA Astrophysics Data System (ADS)
Watanabe, Tatsuhito; Katsura, Seiichiro
With the development of information and communication technology, personal verification becomes more and more important. In the future ubiquitous society, terminals handling personal information will require personal verification technology. The signature is one personal verification method; however, the number of characters in a signature is limited, so a signature is easily forged and personal identification from handwriting alone is difficult. This paper proposes a “haptic pen” that extracts the writing pressure, and presents a character recognition method based on time-frequency analyses. Although the shapes of characters written by different writers are similar, differences appear in the time-frequency domain. As a result, it is possible to use the proposed character recognition for more exact personal identification. The experimental results showed the viability of the proposed method.
[Lateral chromatic aberrations correction for AOTF imaging spectrometer based on doublet prism].
Zhao, Hui-Jie; Zhou, Peng-Wei; Zhang, Ying; Li, Chong-Chong
2013-10-01
A user-defined surface function method is proposed to model the acousto-optic interaction of an AOTF based on the wave-vector matching principle. Assessment experiments show that this model can achieve accurate ray tracing of the AOTF diffracted beam. In addition, an AOTF imaging spectrometer presents large residual lateral color when a traditional chromatic aberration correction method is adopted. In order to reduce lateral chromatic aberrations, a method based on a doublet prism is proposed. The optical material and angle of the prism are optimized automatically using global optimization with the help of the user-defined AOTF surface. Simulation results show that the proposed method reduces the lateral chromatic aberration to less than 0.0003 degrees, an improvement of one order of magnitude, with the spectral image shift effectively corrected.
The Ronnie Gardiner Rhythm and Music Method - a feasibility study in Parkinson's disease.
Pohl, Petra; Dizdar, Nil; Hallert, Eva
2013-01-01
To assess the feasibility of the novel intervention, Ronnie Gardiner Rhythm and Music (RGRM™) Method compared to a control group for patients with Parkinson's disease (PD). Eighteen patients, mean age 68, participating in a disability study within a neurological rehabilitation centre, were randomly allocated to intervention group (n = 12) or control group (n = 6). Feasibility was assessed by comparing effects of the intervention on clinical outcome measures (primary outcome: mobility as assessed by two-dimensional motion analysis, secondary outcomes: mobility, cognition, quality of life, adherence, adverse events and eligibility). Univariable analyses showed no significant differences between groups following intervention. However, analyses suggested that patients in the intervention group improved more on mobility (p = 0.006), cognition and quality of life than patients in the control group. There were no adverse events and a high level of adherence to therapy was observed. In this disability study, the use of the RGRM™ Method showed promising results in the intervention group and the adherence level was high. Our results suggest that most assessments chosen are eligible to use in a larger randomized controlled study for patients with PD. The RGRM™ Method appeared to be a useful and safe method that showed promising results in both motor and cognitive functions as well as quality of life in patients with moderate PD. The RGRM™ Method can be used by physiotherapists, occupational, speech and music therapists in neurological rehabilitation. Most measurements were feasible except for Timed-Up-and-Go.
Wavelet analysis in ecology and epidemiology: impact of statistical tests
Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario
2014-01-01
Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892
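The resampling issue discussed above can be illustrated with a minimal surrogate significance test. For simplicity this sketch uses a red-noise (AR(1)) null model and a plain periodogram peak statistic rather than wavelet quantities; the series and parameters are invented for illustration and this is not the authors' protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_surrogate(x, rng):
    """Red-noise surrogate: an AR(1) process matching the series'
    lag-1 autocorrelation and variance."""
    x = x - x.mean()
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]
    eps = rng.standard_normal(len(x)) * x.std() * np.sqrt(1 - phi ** 2)
    s = np.empty(len(x))
    s[0] = eps[0]
    for t in range(1, len(x)):
        s[t] = phi * s[t - 1] + eps[t]
    return s

def peak_power(x):
    """Largest non-zero-frequency periodogram amplitude."""
    return np.abs(np.fft.rfft(x - x.mean())[1:]).max()

# A periodic component (period 16) buried in white noise
t = np.arange(256)
x = np.sin(2 * np.pi * t / 16) + rng.standard_normal(256)

null = np.array([peak_power(ar1_surrogate(x, rng)) for _ in range(199)])
p_value = (1 + np.sum(null >= peak_power(x))) / (1 + len(null))
print(p_value)  # small: the periodic peak is unlikely under red noise
```

Swapping `ar1_surrogate` for a white-noise shuffle, or for a data-driven scheme such as a hidden Markov model resampler, can change `p_value` for the same series, which is exactly the sensitivity the study documents.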
Test Results for Entry Guidance Methods for Space Vehicles
NASA Technical Reports Server (NTRS)
Hanson, John M.; Jones, Robert E.
2004-01-01
There are a number of approaches to advanced guidance and control that have the potential for achieving the goals of significantly increasing reusable launch vehicle (or any space vehicle that enters an atmosphere) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future vehicle concepts.
Test Results for Entry Guidance Methods for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Hanson, John M.; Jones, Robert E.
2003-01-01
There are a number of approaches to advanced guidance and control (AG&C) that have the potential for achieving the goals of significantly increasing reusable launch vehicle (RLV) safety and reliability, and reducing the cost. This paper examines some approaches to entry guidance. An effort called Integration and Testing of Advanced Guidance and Control Technologies (ITAGCT) has recently completed a rigorous testing phase where these algorithms faced high-fidelity vehicle models and were required to perform a variety of representative tests. The algorithm developers spent substantial effort improving the algorithm performance in the testing. This paper lists the test cases used to demonstrate that the desired results are achieved, shows an automated test scoring method that greatly reduces the evaluation effort required, and displays results of the tests. Results show a significant improvement over previous guidance approaches. The two best-scoring algorithm approaches show roughly equivalent results and are ready to be applied to future reusable vehicle concepts.
Methods of Teaching Reading to EFL Learners: A Case Study
ERIC Educational Resources Information Center
Sanjaya, Dedi; Rahmah; Sinulingga, Johan; Lubis, Azhar Aziz; Yusuf, Muhammad
2014-01-01
Methods of teaching reading skills are not the same in different countries; they depend on the condition and situation of the learners. The purpose of this study was to observe the methods of teaching reading in Malaysia, and the results show that 5 methods are applied in classroom activities, namely Grammar Translation Method (GTM),…
Albertini, Beatrice; Cavallari, Cristina; Passerini, Nadia; Voinovich, Dario; González-Rodríguez, Marisa L; Magarotto, Lorenzo; Rodriguez, Lorenzo
2004-02-01
The aim of this study was to prepare and investigate acetaminophen taste-masked granules obtained in a high-shear mixer using three different wet granulation methods (method A: water granulation; method B: granulation with a polyvinylpyrrolidone (PVP) binding solution; method C: steam granulation). The studied formulation was: acetaminophen 15%, alpha-lactose monohydrate 30%, cornstarch 45%, polyvinylpyrrolidone K30 5% and orange flavour 5% (w/w). In vitro dissolution studies, performed at pH 6.8, showed that the steam granules had the lowest dissolution rate in comparison to the water and binding solution granules; these results were confirmed by their lower surface reactivity (D(R)) during the dissolution process. Moreover, the results of a gustatory sensation test performed by six volunteers confirmed the taste-masking effect of the granules, especially the steam granules (P<0.001). Morphological, fractal and porosity analyses were then performed to explain the dissolution profiles and the results of the gustatory sensation test. Scanning electron microscopy (SEM) analysis revealed the smoother and more regular surface of the steam granules with respect to the samples obtained using methods A and B; these results were also confirmed by their lower fractal dimension (D(s)) and porosity values. Finally, differential scanning calorimetry (DSC) results showed a shift of the melting point of the drug, which was due to the simple mixing of the components and not to the granulation processes. In conclusion, steam granulation proved a suitable method for the purpose of this work, without modifying the availability of the drug.
Risović, Dubravko; Pavlović, Zivko
2013-01-01
Processing of gray scale images in order to determine the corresponding fractal dimension is very important due to the widespread use of imaging technologies and the application of fractal analysis in many areas of science, technology, and medicine. To this end, many methods for the estimation of fractal dimension from gray scale images have been developed and are routinely used. Unfortunately, different methods (dimension estimators) often yield significantly different results, in a manner that makes interpretation difficult. Here, we report the results of a comparative assessment of the performance of several of the most frequently used algorithms/methods for the estimation of fractal dimension. For this purpose, we used scanning electron microscope images of aluminum oxide surfaces with different fractal dimensions. The performance of the algorithms/methods was evaluated using the statistical Z-score approach. The differences between the performances of six methods are discussed and further compared with results obtained by electrochemical impedance spectroscopy (EIS) on the same samples. The analysis shows that the performance of the investigated algorithms varies considerably and that systematically erroneous fractal dimensions can be estimated using certain methods. The differential cube counting, triangulation, and box counting algorithms showed satisfactory performance over the whole investigated range of fractal dimensions. The difference statistic proved less reliable, generating 4% unsatisfactory results. The performance of the power spectrum, partitioning and EIS methods was unsatisfactory in 29%, 38%, and 75% of estimations, respectively. The results of this study should be useful and provide guidelines to researchers applying fractal analysis to images obtained by scanning electron microscopy or atomic force microscopy. © Wiley Periodicals, Inc.
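For reference, the box counting estimator named above reduces to counting occupied boxes at several scales and fitting a line in log-log space; a minimal sketch on a synthetic binary image (real gray scale images would first need a thresholding or differential step, as in the differential cube counting variant):

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box counting (fractal) dimension of a binary image:
    count boxes of side s containing any set pixel, then fit the slope
    of log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        h, w = img.shape
        # partition the image into s-by-s boxes and count non-empty ones
        boxes = img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a straight line should have dimension close to 1
img = np.zeros((64, 64), dtype=bool)
np.fill_diagonal(img, True)
print(round(box_counting_dimension(img), 2))  # → 1.0
```

Even this simple estimator is sensitive to the chosen box sizes and image preprocessing, which is consistent with the estimator-to-estimator scatter the study reports.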
Xu, Lijuan; Liu, Zijian; Li, Yang; Yin, Chao; Hu, Yachen; Xie, Xiaolei; Li, Qiuchun; Jiao, Xinan
2018-06-01
Salmonella enterica serovar Gallinarum biovar Pullorum (S. Pullorum) is the pathogen of pullorum disease, which leads to severe economic losses in many developing countries. Traditional methods to identify S. enterica have relied on biochemical reactions and serotyping, which are time-consuming, although accurate if properly carried out. In this study, we developed a rapid polymerase chain reaction (PCR) method targeting the specific gene ipaJ to detect S. Pullorum. Among 650 S. Pullorum strains isolated from 1962 to 2016 all over China, 644 strains were identified as harbouring the ipaJ gene in the plasmid pSPI12, a detection rate of 99.08%. Six strains were ipaJ negative because pSPI12 was not present in these strains according to whole genome sequencing results. There was no cross-reaction with other Salmonella serotypes, including Salmonella enterica serovar Gallinarum biovar Gallinarum (S. Gallinarum), which shows a close genetic relationship with S. Pullorum. This means the PCR method can distinguish S. Gallinarum from S. Pullorum in a single PCR step without complicated biochemical identification. The limit of detection of this PCR method was as low as 90 fg/μl or 10² CFU, which shows high sensitivity. Moreover, this method was applied to identify Salmonella isolated from a chicken farm, and the results were consistent with those obtained from biochemical reactions and serotyping. Together, these results demonstrate that this one-step PCR method is simple and feasible for the efficient identification of S. Pullorum.
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-24
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with the conventional modeling method shows that it helps to improve model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so that the rotor orientation can be computed from the measured results and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the magnetic flux density values. The experimental results show that the proposed method measures the rotor orientation precisely and that the measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators.
ABO Mistyping of cis-AB Blood Group by the Automated Microplate Technique.
Chun, Sejong; Ryu, Mi Ra; Cha, Seung-Yeon; Seo, Ji-Young; Cho, Duck
2018-01-01
The cis-AB phenotype, although rare, is relatively the most frequent of the ABO subgroups in Koreans. To prevent ABO mistyping of cis-AB samples, our hospital has applied a combination of the manual tile method with automated devices. Herein, we report cases of ABO mistyping detected by this combination testing system. Cases that showed discrepant results by the automated devices and the manual tile method were evaluated. These samples were also tested by the standard tube method. The automated devices used in this study were a QWALYS-3 and a Galileo NEO. Exons 6 and 7 of the ABO gene were sequenced. Thirteen cases that had the cis-AB allele showed results suggestive of the cis-AB subgroup by manual methods, but were interpreted as AB by either automated device; this happened in 87.5% of these cases on the QWALYS-3 and 70.0% on the Galileo NEO. Genotyping results showed that 12 cases were ABO*cis-AB01/ABO*O01 or ABO*cis-AB01/ABO*O02, and one case was ABO*cis-AB01/ABO*A102. Cis-AB samples were mistyped as AB by the automated microplate technique in some cases. We suggest that the manual tile method can be a simple supplemental test for the detection of the cis-AB phenotype, especially in countries with relatively high cis-AB prevalence.
NASA Astrophysics Data System (ADS)
Fernández, Francisco M.
2018-06-01
We show that the kinetic-energy partition method (KEP) is a particular example of the well known Rayleigh-Ritz variational method. We discuss some of the KEP results and compare them with those coming from other approaches.
The Development of a Noncontact Letter Input Interface “Fingual” Using Magnetic Dataset
NASA Astrophysics Data System (ADS)
Fukushima, Taishi; Miyazaki, Fumio; Nishikawa, Atsushi
We have developed a noncontact letter input interface called “Fingual”. Fingual uses a glove fitted with inexpensive, small magnetic sensors. Wearing the glove, users input letters by forming the finger alphabet, a kind of sign language. The proposed method uses a dataset consisting of magnetic-field values and the corresponding letter information. In this paper, we show two recognition methods using this dataset. The first method uses the Euclidean norm; the second additionally uses a Gaussian function as a weighting function. We then conducted verification experiments on the recognition rate of each method in two situations: one in which subjects used their own dataset, and one in which they used another person's dataset. The proposed method could recognize letters at a high rate in both situations, although using one's own dataset works better than using another person's. Though Fingual needs to collect a magnetic dataset for each letter in advance, its feature is the ability to recognize letters without complicated calculations such as solving inverse problems. This paper shows the results of the recognition experiments and demonstrates the utility of the proposed system “Fingual”.
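The two recognition schemes described above amount to template matching on the sensor vector: the first picks the stored sample with the smallest Euclidean distance, the second scores each candidate letter by Gaussian-weighted contributions from all of its stored samples. A minimal sketch (the dataset values, feature dimension, and sigma are invented for illustration):

```python
import math

# Hypothetical dataset: letter -> list of magnetic-sensor feature vectors
dataset = {
    "A": [(0.9, 0.1, 0.2), (1.0, 0.2, 0.1)],
    "B": [(0.1, 0.8, 0.9), (0.2, 0.9, 1.0)],
}

def euclid(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def recognize_nn(x):
    """Method 1: nearest stored sample by Euclidean norm."""
    return min(((euclid(x, s), letter)
                for letter, samples in dataset.items()
                for s in samples))[1]

def recognize_gauss(x, sigma=0.5):
    """Method 2: sum a Gaussian weight over each letter's samples,
    then pick the letter with the highest total score."""
    score = {letter: sum(math.exp(-euclid(x, s) ** 2 / (2 * sigma ** 2))
                         for s in samples)
             for letter, samples in dataset.items()}
    return max(score, key=score.get)

x = (0.85, 0.15, 0.25)  # noisy reading of an "A"-like hand pose
print(recognize_nn(x), recognize_gauss(x))  # A A
```

The Gaussian variant softens the influence of any single outlying sample, which matters when recognizing against another person's dataset.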
[Optimized application of nested PCR method for detection of malaria].
Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C
2017-04-28
Objective To optimize the application of the nested PCR method for the detection of malaria in working practice, so as to improve the efficiency of malaria detection. Methods A PCR premix solution, internal primers for further amplification, and newly designed primers targeting the two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions and P. ovale-specific primers of the routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive blood samples and examination samples of malaria were detected by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results The optimized method showed good specificity, and its sensitivity reached the pg to fg level. When the two methods were used to detect the same positive malarial blood samples simultaneously, the PCR products of the two methods showed no significant difference, but non-specific amplification was obviously reduced, the detection rate of P. ovale subspecies improved, and the overall specificity also increased with the optimized method. The detection results of 111 malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, while those of the optimized method were both 93.48%; the difference between the two methods in sensitivity was not statistically significant (P > 0.05), but the difference in specificity was (P < 0.05). Conclusion The optimized PCR improves the specificity without reducing the sensitivity of the routine nested PCR; it can also save costs and increase the efficiency of malaria detection by reducing experimental steps.
Improved modified energy ratio method using a multi-window approach for accurate arrival picking
NASA Astrophysics Data System (ADS)
Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun
2017-04-01
To identify accurately the location of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method contains calculations of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within five sample errors for the synthetic datasets. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
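The energy-ratio attribute underlying such pickers is simple enough to sketch. The following is a minimal NumPy illustration of the standard MER attribute (trailing-to-leading energy ratio scaled by instantaneous amplitude and cubed), not the authors' multi-window IMER variant; the window length `L` and the epsilon guard are illustrative choices.

```python
import numpy as np

def energy_ratio(x, L):
    """Ratio of trailing to leading energy in windows of L samples."""
    x2 = np.asarray(x, dtype=float) ** 2
    n = len(x2)
    er = np.zeros(n)
    for i in range(L, n - L):
        pre = x2[i - L:i].sum()     # energy before sample i
        post = x2[i:i + L].sum()    # energy from sample i onward
        er[i] = post / (pre + 1e-12)  # epsilon avoids division by zero
    return er

def mer(x, L):
    """Modified energy ratio: (er * |x|)**3 sharpens the onset peak."""
    return (energy_ratio(x, L) * np.abs(x)) ** 3

def pick_first_break(x, L):
    """First-break pick = sample index of the MER maximum."""
    return int(np.argmax(mer(x, L)))
```

On a quiet trace with a wavelet starting at sample 200, the MER maximum lands at or very near the true onset; thresholding or smoothing, as in the IMER refinement, matters mainly when noise is strong.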
Validation of the ANSR Listeria method for detection of Listeria spp. in environmental samples.
Wendorf, Michael; Feldpausch, Emily; Pinkava, Lisa; Luplow, Karen; Hosking, Edan; Norton, Paul; Biswas, Preetha; Mozola, Mark; Rice, Jennifer
2013-01-01
ANSR Listeria is a new diagnostic assay for detection of Listeria spp. in sponge or swab samples taken from a variety of environmental surfaces. The method is an isothermal nucleic acid amplification assay based on the nicking enzyme amplification reaction technology. Following single-step sample enrichment for 16-24 h, the assay is completed in 40 min, requiring only simple instrumentation. In inclusivity testing, 48 of 51 Listeria strains tested positive, with only the three strains of L. grayi producing negative results. Further investigation showed that L. grayi is reactive in the ANSR assay, but its ability to grow under the selective enrichment conditions used in the method is variable. In exclusivity testing, 32 species of non-Listeria, Gram-positive bacteria all produced negative ANSR assay results. Performance of the ANSR method was compared to that of the U.S. Department of Agriculture-Food Safety and Inspection Service reference culture procedure for detection of Listeria spp. in sponge or swab samples taken from inoculated stainless steel, plastic, ceramic tile, sealed concrete, and rubber surfaces. Data were analyzed using Chi-square and probability of detection models. Only one surface, stainless steel, showed a significant difference in performance between the methods, with the ANSR method producing more positive results. Results of internal trials were supported by findings from independent laboratory testing. The ANSR Listeria method can be used as an accurate, rapid, and simple alternative to standard culture methods for detection of Listeria spp. in environmental samples.
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2011-01-01
The purpose of this study was to assess some calculation methods for quantifying the relationships of bearing geometry, material properties, load, deflection, stiffness, and stress. The scope of the work was limited to two-dimensional modeling of straight cylindrical roller bearings. Preparations for studies of dynamic response of bearings with damaged surfaces motivated this work. Studies were selected to exercise and build confidence in the numerical tools. Three calculation methods were used in this work. Two of the methods were numerical solutions of the Hertz contact approach. The third method used was a combined finite element surface integral method. Example calculations were done for a single roller loaded between an inner and outer raceway for code verification. Next, a bearing with 13 rollers and all-steel construction was used as an example to do additional code verification, including an assessment of the leading order of accuracy of the finite element and surface integral method. Results from that study show that the method is at least first-order accurate. Those results also show that the contact grid refinement has a more significant influence on precision as compared to the finite element grid refinement. To explore the influence of material properties, the 13-roller bearing was modeled as made from Nitinol 60, a material with very different properties from steel and showing some potential for bearing applications. The codes were exercised to compare contact areas and stress levels for steel and Nitinol 60 bearings operating at equivalent power density. As a step toward modeling the dynamic response of bearings having surface damage, static analyses were completed to simulate a bearing with a spall or similar damage.
Task 2 Report: Algorithm Development and Performance Analysis
1993-07-01
List-of-figures fragments: ...separated peaks; 7-16, example ILGC data for schedule 3 phosphites showing an analysis method which integrates...; ...more closely follows the baseline; 7-18, example RGC data for schedule 3 phosphites showing an analysis method resulting in unwanted... Text fragment: "...much of the ambiguity that can arise in GC/MS with trace environmental samples, for example. Correlated chromatography, on the other hand, separates the..."
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in a clinical research represents challenging task for analytical methods given by the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography - mass spectrometry (MS) lipidomic approaches represented by three analytical methods in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. Methods are compared in one laboratory using the identical analytical protocol to ensure comparable conditions. Ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and DI-MS method are used for this comparison as the most widely used methods for the lipidomic analysis together with ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomics analyses. The nontargeted analysis of pooled samples is performed using all tested methods and 610 lipid species within 23 lipid classes are identified. DI method provides the most comprehensive results due to identification of some polar lipid classes, which are not identified by UHPLC and UHPSFC methods. On the other hand, UHPSFC method provides an excellent sensitivity for less polar lipid classes and the highest sample throughput within 10min method time. The sample consumption of DI method is 125 times higher than for other methods, while only 40μL of organic solvent is used for one sample analysis compared to 3.5mL and 4.9mL in case of UHPLC and UHPSFC methods, respectively. Methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. Results show applicability of all tested methods for the lipidomic analysis of biological samples depending on the analysis requirements. 
Using string invariants for prediction searching for optimal parameters
NASA Astrophysics Data System (ADS)
Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard
2016-02-01
We have developed a novel prediction method based on string invariants. The method does not require learning but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used data and the results of a prediction competition as a benchmark. The results show that the method performs well in single step prediction but the method's performance for multiple step prediction needs to be improved. The method works well for a wide range of parameters.
Sudo, Hirotaka; O'driscoll, Michael; Nishiwaki, Kenji; Kawamoto, Yuji; Gammell, Philip; Schramm, Gerhard; Wertli, Toni; Prinz, Heino; Mori, Atsuhide; Sako, Kazuhiro
2012-01-01
The application of a head space analyzer for oxygen concentration was examined to develop a novel ampoule leak test method. Studies using ampoules filled with an ethanol-based solution and with nitrogen in the headspace demonstrated that the head space analysis (HSA) method has sufficient sensitivity to detect an ampoule crack. The proposed method uses HSA in conjunction with a pretreatment overpressurising process, known as bombing, to facilitate oxygen flow through the crack in the ampoule. The method was examined in comparative studies with a conventional dye ingress method, and the results showed that the HSA method exhibits sensitivity superior to the dye method. The results indicate that the HSA method in combination with the bombing treatment has potential application as a leak test for the detection of container defects, not only for ampoule products with ethanol-based solutions but also for lyophilized products in vials with nitrogen in the head space.
Photodynamic method used for the treatment of malignant melanoma and Merkel cell carcinoma
NASA Astrophysics Data System (ADS)
Domaniecki, Janusz; Stanowski, Edward; Graczyk, Alfreda; Kalczak, M.; Struzyna, Jerzy; Kwasny, Miroslaw; Mierczyk, Zygmunt; Najdecki, M.; Krupa, J.
1997-10-01
A photodynamic method was successfully applied for tumor diagnosis and treatment. We used this method for a dozen or so patients with primary and metastatic skin tumors. As photosensitizers, HpD(Arg)2 and PP(Ala)2(Arg)2 were used at a concentration of 1 mg/ml. The photosensitizers were administered directly into the tumors at a dose of 0.1-0.2 ml. As a result of tumor irradiation by means of the He-Cd laser, the tumor luminesces intensely, which makes it possible to determine its size and shape accurately. Next, we applied a series of irradiations by means of a He-Ne laser over the following three days, and patients received a full dose of 150 J/cm2 per tumor surface. For extensive tumors this irradiation series was repeated after 7 days. The patients were divided into three groups depending on tumor size. The first group, with tumors up to 0.5 cm in diameter, showed very good treatment results just after the first series of irradiation. The second group, with tumors of 0.5-1.5 cm, showed very good treatment results after two series of irradiation. The third group, with tumors over 1.0 cm in diameter, showed acceptable treatment results after two series of irradiations, judged to be sufficient. The patients from the third group were operated on 7-10 days after completion of the irradiation. The photodynamic method can be used for tumor diagnosis and skin tumor treatment, as well as a supplementary method for surgical intervention.
A temperature match based optimization method for daily load prediction considering DLC effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Z.
This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.
Steady and Unsteady Nozzle Simulations Using the Conservation Element and Solution Element Method
NASA Technical Reports Server (NTRS)
Friedlander, David Joshua; Wang, Xiao-Yen J.
2014-01-01
This paper presents results from computational fluid dynamic (CFD) simulations of a three-stream plug nozzle. Time-accurate, Euler, quasi-1D and 2D-axisymmetric simulations were performed as part of an effort to provide a CFD-based approach to modeling nozzle dynamics. The CFD code used for the simulations is based on the space-time Conservation Element and Solution Element (CESE) method. Steady-state results were validated using the Wind-US code and a code utilizing the MacCormack method, while the unsteady results were partially validated via an aeroacoustic benchmark problem. The CESE steady-state flow field solutions showed excellent agreement with solutions derived from the other methods and codes, and preliminary unsteady results for the three-stream plug nozzle are also shown. Additionally, a study was performed to explore the sensitivity of gross thrust computations to the control surface definition. The results showed that most of the sensitivity in computing the gross thrust is attributed to the control surface stencil resolution and choice of stencil end points, and not to the control surface definition itself. Finally, comparisons between the quasi-1D and 2D-axisymmetric solutions were performed to gain insight into whether a quasi-1D solution can capture the steady and unsteady nozzle phenomena without the cost of a 2D-axisymmetric simulation. Initial results show that while the quasi-1D solutions are similar to the 2D-axisymmetric solutions, the inability of the quasi-1D simulations to predict two-dimensional phenomena limits their accuracy.
Measurement of metabolizable energy in poultry feeds by an in vitro system.
Valdes, E V; Leeson, S
1992-09-01
A two-stage in vitro system (IVDE) for estimating AMEn in poultry feeds was investigated. For 71 diets ranging from 2.2 to 3.4 kcal/g, the average AMEn was 2.889 kcal/g and the mean IVDE value was 3.005 kcal/g. Of the 71 diets, 30 (42.2%) showed differences between AMEn and IVDE of less than 0.100 kcal/g, and these represented diets across the AMEn range of values. Statistical analysis of the data showed a standard error of the estimate (SEE) of 0.152 kcal/g for the 71 diets assayed. No clear differences in the accuracy of AMEn among the diets, as related to the composition and proportion of ingredients, were observed; thus, the IVDE method gave different AMEn for diets of similar composition. Application of the IVDE system to selected ingredients showed that the AMEn of corn was underestimated by the method. However, the AMEn of roasted, extruded soybeans and of oats was estimated accurately by the IVDE method. Other ingredients (soybean meal, corn gluten meal, and barley) were greatly overestimated by the in vitro technique. The results of applying the IVDE method for estimating AMEn showed the limitations of this technique with regard to the universality of its application. Although the method was successful in estimating AMEn values of diets and ingredients, for many samples the IVDE technique did not give acceptable results.
Somasundaram, Karuppanagounder; Ezhilarasan, Kamalanathan
2015-01-01
To develop an automatic skull stripping method for magnetic resonance imaging (MRI) of human head scans. The proposed method is based on gray-scale transformation and morphological operations. It has been tested on 20 volumes of normal T1-weighted images taken from the Internet Brain Segmentation Repository. Experimental results show that the proposed method gives better results than the popular skull stripping methods Brain Extraction Tool and Brain Surface Extractor. The average values of the Jaccard and Dice coefficients are 0.93 and 0.962, respectively. In this article, we have proposed a novel skull stripping method using intensity transformation and morphological operations. It has low computational complexity but gives competitive or better results than the popular skull stripping methods Brain Surface Extractor and Brain Extraction Tool.
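As a toy illustration of the two ingredients named in the abstract, a gray-level threshold (a crude stand-in for the intensity transformation) followed by a morphological opening can be sketched with plain NumPy. The 3×3 cross structuring element and the roll-based shifts (which assume the object does not touch the image border) are illustrative choices, not the published algorithm.

```python
import numpy as np

# 4-connected cross structuring element, expressed as shifts.
CROSS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def erode(mask):
    """Pixel survives only if it and its 4-neighbors are all set."""
    out = mask.copy()
    for dy, dx in CROSS:
        out &= np.roll(np.roll(mask, dy, 0), dx, 1)
    return out

def dilate(mask):
    """Pixel is set if it or any 4-neighbor is set."""
    out = mask.copy()
    for dy, dx in CROSS:
        out |= np.roll(np.roll(mask, dy, 0), dx, 1)
    return out

def skull_strip(img, thresh):
    """Toy pipeline: threshold, then opening (erosion then dilation)
    to detach thin or isolated bright structures from the brain mask."""
    mask = img > thresh
    return dilate(erode(mask))
```

The opening removes structures thinner than the structuring element while largely preserving solid regions, which is the qualitative effect morphological skull stripping relies on.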
NASA Astrophysics Data System (ADS)
Arasoglu, Tülin; Derman, Serap; Mansuroglu, Banu
2016-01-01
The aim of the present study was to evaluate the antimicrobial activity of nanoparticle and free formulations of the CAPE compound using different methods and to compare the results with the literature for the first time. To this end, encapsulation of CAPE in the PLGA nanoparticle system (CAPE-PLGA-NPs) and characterization of the nanoparticles were carried out. Afterwards, the antimicrobial activity of free CAPE and CAPE-PLGA-NPs was determined using the agar well diffusion, disk diffusion, broth microdilution and reduction percentage methods. P. aeruginosa, E. coli, S. aureus and methicillin-resistant S. aureus (MRSA) were chosen as model bacteria since they have different cell wall structures. CAPE-PLGA-NPs with a particle size of 214.0 ± 8.80 nm and an encapsulation efficiency of 91.59 ± 4.97% were prepared using the oil-in-water (o-w) single-emulsion solvent evaporation method. The microbiological results indicated that free CAPE did not have any antimicrobial activity in any of the applied methods, whereas CAPE-PLGA-NPs had significant antimicrobial activity in both the broth microdilution and reduction percentage methods. CAPE-PLGA-NPs showed moderate antimicrobial activity against the S. aureus and MRSA strains, particularly in hourly measurements at 30.63 and 61.25 μg ml-1 concentrations (both p < 0.05), whereas they failed to show antimicrobial activity against the Gram-negative bacteria (P. aeruginosa and E. coli, p > 0.05). In the reduction percentage method, which gave the highest antimicrobial activity results, the antimicrobial effect on S. aureus was longer lasting (3 days) and the reduction percentage higher (over 90%). The antibacterial activity of CAPE-PLGA-NPs may be related to higher penetration into cells, given the low solubility of free CAPE in aqueous medium.
Additionally, the biocompatible and biodegradable PLGA nanoparticles could be an alternative to solvents such as ethanol, methanol or DMSO. Consequently, the results show that the choice of method is extremely important and influences the outcome. Thus, the broth microdilution and reduction percentage methods can be recommended over both the agar well and disk diffusion methods as reliable and useful screening methods for determining the antimicrobial activity of PLGA nanoparticle formulations used particularly in drug delivery systems.
NASA Astrophysics Data System (ADS)
Kang, Ziho
This dissertation is divided into four parts: 1) development of effective methods for comparing visual scanning paths (or scanpaths) in a dynamic task with multiple moving targets, 2) application of the methods to compare the scanpaths of experts and novices in a conflict detection task with multiple aircraft on a radar screen, 3) a post-hoc analysis of other eye movement characteristics of experts and novices, and 4) determining whether the scanpaths of experts can be used to teach novices. To compare experts' and novices' scanpaths, two methods were developed. The first proposed method is matrix comparison using the Mantel test. The second is maximum transition-based agglomerative hierarchical clustering (MTAHC), in which comparisons of multi-level visual groupings are carried out. The matrix comparison method was useful for a small number of targets in the preliminary experiment but turned out to be inapplicable to the realistic case of tens of aircraft on screen; MTAHC, however, remained effective with a large number of aircraft. The experiments with experts and novices on the aircraft conflict detection task showed that their scanpaths differ. The MTAHC result explicitly showed how experts visually grouped multiple aircraft by similar altitude, while novices tended to group them by convergence. The MTAHC results also showed that novices paid much attention to converging aircraft groups even when they were safely separated by altitude; less attention was therefore given to the actual conflicting pairs, resulting in low correct conflict detection rates. Since the analysis showed these scanpath differences, experts' scanpaths were shown to novices to test their instructional effectiveness. The scanpath treatment group showed indications of changing their visual movements from trajectory-based to altitude-based movements.
Between the treatment and non-treatment groups there was no significant difference in the number of correct detections; however, the treatment group made significantly fewer false alarms.
Srisungsitthisunti, Pornsak; Ersoy, Okan K; Xu, Xianfan
2009-01-01
Light diffraction by volume Fresnel zone plates (VFZPs) is simulated by the Hankel transform beam propagation method (Hankel BPM). The method utilizes circularly symmetric geometry and small-step propagation to calculate the wave fields diffracted by the VFZP layers. It is shown that fast and accurate diffraction results can be obtained with the Hankel BPM. The results show excellent agreement with scalar diffraction theory and with experimental results. The numerical method allows more comprehensive studies of the VFZP parameters to achieve higher diffraction efficiency.
The Use of Fractionation Scales for Communication Audits.
ERIC Educational Resources Information Center
Barnett, George A.; And Others
A study investigated a new method of measuring organizational communication other than the audit methods currently in use. The method, which employs fractionation procedures, was used with workers from five different business groups within a large multinational corporation. The results showed that: (1) workers could use the scales reliably, (2)…
2012-01-01
Background While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial block-face scanning electron microscopic data. Previously developed texture-based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block-face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from the previous steps. Results We report the accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that the use of contour-pair classification and level set operations improves segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions We demonstrated that texture-based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline.
While we used a random-forest based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695
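Step 1 of such a pipeline classifies 2D image patches directly. A minimal sketch of the patch-extraction stage is below; the patch size, stride, and grid of centers are illustrative choices, and the random-forest scoring of each patch (scikit-learn or similar) is omitted.

```python
import numpy as np

def extract_patches(img, size, stride):
    """Collect overlapping square patches on a regular grid; each patch
    would later be scored by a trained per-patch classifier (step 1 of
    a Cytoseg-like pipeline)."""
    patches, centers = [], []
    r = size // 2
    for y in range(r, img.shape[0] - r, stride):
        for x in range(r, img.shape[1] - r, stride):
            patches.append(img[y - r:y + r + 1, x - r:x + r + 1])
            centers.append((y, x))
    return np.stack(patches), centers
```

Stacking the patches into one array makes batch prediction straightforward, and the recorded centers map per-patch scores back onto a pixel probability map for the later contour and level-set steps.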
Projection-free approximate balanced truncation of large unstable systems
NASA Astrophysics Data System (ADS)
Flinois, Thibault L. B.; Morgans, Aimee S.; Schmid, Peter J.
2015-08-01
In this article, we show that the projection-free, snapshot-based, balanced truncation method can be applied directly to unstable systems. We prove that even for unstable systems, the unmodified balanced proper orthogonal decomposition algorithm theoretically yields a converged transformation that balances the Gramians (including the unstable subspace). We then apply the method to a spatially developing unstable system and show that it results in reduced-order models of similar quality to the ones obtained with existing methods. Due to the unbounded growth of unstable modes, a practical restriction on the final impulse response simulation time appears, which can be adjusted depending on the desired order of the reduced-order model. Recommendations are given to further reduce the cost of the method if the system is large and to improve the performance of the method if it does not yield acceptable results in its unmodified form. Finally, the method is applied to the linearized flow around a cylinder at Re = 100 to show that it actually is able to accurately reproduce impulse responses for more realistic unstable large-scale systems in practice. The well-established approximate balanced truncation numerical framework therefore can be safely applied to unstable systems without any modifications. Additionally, balanced reduced-order models can readily be obtained even for large systems, where the computational cost of existing methods is prohibitive.
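For the classical background, the square-root balancing step can be sketched in a few lines of NumPy, given low-rank factors X and Y of the two Gramians; this is the textbook construction for stable systems, not the paper's projection-free extension to unstable ones.

```python
import numpy as np

def balancing_transform(X, Y):
    """Square-root balancing. X and Y are factors of the controllability
    and observability Gramians (Wc = X X^T, Wo = Y Y^T). Returns direct
    modes Phi, adjoint modes Psi (with Psi @ Phi = I), and the Hankel
    singular values s; in the balanced coordinates both Gramians equal
    diag(s)."""
    U, s, Vt = np.linalg.svd(Y.T @ X, full_matrices=False)
    s_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    Phi = X @ Vt.T @ s_inv_sqrt      # columns: balancing (direct) modes
    Psi = s_inv_sqrt @ U.T @ Y.T     # rows: adjoint (left) modes
    return Phi, Psi, s
```

Truncating to the leading Hankel singular values then gives the reduced-order model, e.g. A_r = Psi[:r] @ A @ Phi[:, :r] for a chosen order r.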
A new method for water quality assessment: by harmony degree equation.
Zuo, Qiting; Han, Chunhui; Liu, Jing; Ma, Junxia
2018-02-22
Water quality assessment is a fundamental task in the development, utilization, management, and protection of water resources, and a prerequisite for water safety. In this paper, the harmony degree equation (HDE) was introduced into water quality assessment research, and a new method based on the HDE was proposed: water quality assessment by the harmony degree equation (WQA-HDE). First, the calculation steps and ideas of this method are described in detail; then this method, together with several other important water quality assessment methods (the single-factor assessment method, the mean-type comprehensive index assessment method, and the multi-level gray correlation assessment method), was used to assess the water quality of the Shaying River (the largest tributary of the Huaihe in China). For this purpose, a 2-year (2013-2014) dataset of nine water quality variables covering seven monitoring sites, approximately 189 observations in total, was used to compare and analyze the characteristics and advantages of the new method. The results showed that the calculation steps of WQA-HDE are similar to those of the comprehensive assessment method, and that WQA-HDE is more practical than the other water quality assessment methods. In addition, the new method shows good flexibility through the judgment criterion value HD0 of water quality: when HD0 = 0.8, the results are more realistic and reliable. In particular, when HD0 = 1, the results of WQA-HDE are consistent with the single-factor assessment method, both methods being subject to the most stringent "one vote veto" judgment condition. Thus, WQA-HDE is a composite method that combines single-factor and comprehensive assessment. This research not only broadens the theoretical and methodological scope of harmony theory but also promotes the unification of water quality assessment methods, and it can serve as a reference for other comprehensive assessments.
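The contrast between the "one vote veto" single-factor rule and a mean-type comprehensive index can be sketched as follows; the variable names and class boundaries are invented for illustration (they are not the Chinese surface-water standard), and the HDE itself is not reproduced here.

```python
def grade(value, thresholds):
    """Class of one variable: the first threshold the value does not
    exceed gives the class (1 = best); beyond the last -> worst class."""
    for cls, limit in enumerate(thresholds, start=1):
        if value <= limit:
            return cls
    return len(thresholds) + 1

def single_factor(sample, limits):
    """'One vote veto': the worst single-variable class decides."""
    return max(grade(v, limits[k]) for k, v in sample.items())

def mean_index(sample, limits):
    """Mean-type comprehensive index: average of per-variable classes."""
    grades = [grade(v, limits[k]) for k, v in sample.items()]
    return sum(grades) / len(grades)
```

One variable in a poor class drags the single-factor result to that class even when every other variable is excellent, which is exactly the stringency the abstract says WQA-HDE reproduces at HD0 = 1.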
Human Body 3D Posture Estimation Using Significant Points and Two Cameras
Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin
2014-01-01
This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422
NASA Astrophysics Data System (ADS)
Sakai, Naoki; Kawabe, Naoto; Hara, Masayuki; Toyoda, Nozomi; Yabuta, Tetsuro
This paper shows how a compact humanoid robot can acquire a giant-swing motion, without any robot model, by using the Q-Learning method. It is widely held that Q-Learning is not appropriate for learning dynamic motions because the Markov property is not necessarily guaranteed during a dynamic task. We address this problem by embedding angular velocity in the state definition and by an averaging Q-Learning method that reduces dynamic effects, although non-Markov effects remain in the learning results. The result shows how the robot can acquire a giant-swing motion using the Q-Learning algorithm. The successfully acquired motions are analyzed from the viewpoint of dynamics in order to realize a functional giant-swing motion. Finally, the result shows how this method can avoid the stagnant action loop around the bottom of the horizontal bar during the early stage of the giant-swing motion.
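For readers unfamiliar with the baseline algorithm, a plain tabular Q-Learning loop is sketched below on a toy deterministic chain; the swing dynamics, the angular-velocity state augmentation, and the paper's averaging refinement are all omitted, and every parameter is an illustrative choice.

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.3, gamma=0.9, eps=0.2, seed=0):
    """Plain tabular Q-Learning with epsilon-greedy exploration.
    `step(s, a) -> (next_state, reward, done)` supplies the environment."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(100):                      # cap episode length
            if rng.random() < eps:                # explore
                a = rng.randrange(n_actions)
            else:                                 # exploit
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
            if done:
                break
    return Q
```

On a 4-state chain where one action moves right and reaching the last state pays 1, the learned values propagate backward with the discount factor, so the rightward action dominates in every state.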
Using an Outranking Method Supporting the Acquisition of Military Equipment
2009-10-01
selection methodology, taking several criteria into account. We show to what extent the class of PROMETHEE methods is presenting these features. We...functions, the indifference and preference thresholds and some other technical parameters. Then we discuss the capabilities of the PROMETHEE methods to...discuss the interpretation of the results given by these PROMETHEE methods. INTRODUCTION Outranking methods for multicriteria decision aid belong
NASA Technical Reports Server (NTRS)
Mohn, L. W.
1975-01-01
The use of the Boeing TEA-230 Subsonic Flow Analysis method as a primary design tool in the development of cruise overwing nacelle configurations is presented. Surface pressure characteristics at 0.7 Mach number were determined by the TEA-230 method for a selected overwing flow-through nacelle configuration. Results of this analysis show excellent overall agreement with corresponding wind tunnel data. Effects of the presence of the nacelle on the wing pressure field were predicted accurately by the theoretical method. Evidence is provided that differences between theoretical and experimental pressure distributions in the present study would not result in significant discrepancies in the nacelle lines or nacelle drag estimates.
Numerical Manifold Method for the Forced Vibration of Thin Plates during Bending
Jun, Ding; Song, Chen; Wei-Bin, Wen; Shao-Ming, Luo; Xia, Huang
2014-01-01
A novel numerical manifold method was derived from the cubic B-spline basis function. The new interpolation function is characterized by high-order coordination at the boundary of a manifold element. The linear elastic-dynamic equation used to solve the bending vibration of thin plates was derived according to the principle of minimum instantaneous potential energy. The method for the initialization of the dynamic equation and its solution process were provided. Moreover, the analysis showed that the calculated stiffness matrix exhibited favorable performance. Numerical results showed that the generalized degrees of freedom were significantly fewer and that the calculation accuracy was higher for the manifold method than for the conventional finite element method. PMID:24883403
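The cubic B-spline basis from which the new interpolation function is built can be sketched with the standard Cox-de Boor recursion; the knot vector and evaluation point below are illustrative.

```python
# Cox-de Boor recursion for the B-spline basis function N_{i,k}(t).
def bspline_basis(i, k, t, knots):
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) * \
                bspline_basis(i + 1, k - 1, t, knots)
    return left + right

knots = list(range(10))     # uniform knot vector, illustrative
t = 4.5
vals = [bspline_basis(i, 3, t, knots) for i in range(6)]
print(sum(vals))   # cubic bases form a partition of unity inside the valid span
```

The partition-of-unity and local-support properties are what give the manifold element its high-order coordination at the boundary.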
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, J.; Tolson, B.
2017-12-01
The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indices. Bootstrapping, however, might itself become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indices without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indices. To demonstrate its method-independence, we applied the convergence test to two widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indices of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. 
The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables the checking of already processed sensitivity results. This is one step towards reliable, transferable, published sensitivity results.
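The Elementary Effects screening named above can be sketched in a simplified one-at-a-time form (not the full Morris trajectory design); the toy model and all settings are illustrative stand-ins for an environmental model.

```python
import numpy as np

def model(x):
    # toy model: parameter 0 dominates, parameter 2 is nearly inert
    return 2.0 * x[0] + 0.5 * x[1] ** 2 + 0.01 * x[2]

def elementary_effects_mu_star(model, dim, n_reps=50, delta=0.1, seed=1):
    """mu*: mean absolute elementary effect per parameter (importance ranking)."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_reps, dim))
    for t in range(n_reps):
        x = rng.uniform(0, 1 - delta, dim)       # random base point in [0, 1)^dim
        base = model(x)
        for i in range(dim):
            x_step = x.copy()
            x_step[i] += delta                    # perturb one parameter by delta
            effects[t, i] = (model(x_step) - base) / delta
    return np.abs(effects).mean(axis=0)

mu_star = elementary_effects_mu_star(model, dim=3)
print(mu_star)   # parameter 0 ranks first, parameter 2 is negligible
```

A convergence check in the spirit of the paper would ask whether this ranking stays stable as the sampling budget `n_reps` grows.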
2013-01-01
A comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold segmentation methods are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized based on counterpart experimentally captured images. This paper presents the usefulness of the microscopic image synthesis method in the evaluation as well as comparison of image processing results. The results of a thorough analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB allow the selection of pairs (method and image type) for which the method is most efficient considering various criteria, e.g., accuracy and precision in area detection or accuracy in object count detection. The comparison shows that the results of the White, Bernsen and Sauvola methods are better than those of the remaining methods for all types of monochromatic images. All three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively, when treated totally. However, the best results are achieved for the monochromatic image in which intensity represents the brown colour map constructed by the colour deconvolution algorithm. The specificity in the cases of the Bernsen and White methods is 1, and the sensitivities are 0.74 for the White method and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. 
According to the Bland-Altman plot, the objects selected by the Sauvola method are segmented without undercutting the area of true positive objects but with extra false positive objects. The Sauvola and Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed. Virtual Slides: the virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017. PMID:23531405
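One of the compared methods, the Sauvola threshold, can be sketched directly from its formula T(x, y) = m(x, y) * (1 + k * (s(x, y)/R - 1)), where m and s are the mean and standard deviation in a local window; the k and R values below are common defaults, not the paper's settings, and the local statistics are computed by brute force rather than with integral images.

```python
import numpy as np

def sauvola_threshold(image, w=15, k=0.2, R=128.0):
    """Return a boolean mask: True where the pixel exceeds its local Sauvola
    threshold (bright background); dark objects such as nuclei come out False."""
    img = image.astype(float)
    pad = w // 2
    padded = np.pad(img, pad, mode="edge")
    rows, cols = img.shape
    T = np.empty_like(img)
    for y in range(rows):
        for x in range(cols):
            win = padded[y:y + w, x:x + w]
            m, s = win.mean(), win.std()
            T[y, x] = m * (1.0 + k * (s / R - 1.0))
    return img > T

# toy image: a dark "nucleus" (value 40) on a bright background (value 200)
img = np.full((32, 32), 200.0)
img[10:20, 10:20] = 40.0
mask = sauvola_threshold(img)
print(bool(mask[2, 2]), bool(mask[15, 15]))   # background True, nucleus False
```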
Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene
2013-03-25
A multispectral photon-counting double random phase encoding scheme for image authentication.
Yi, Faliu; Moon, Inkyu; Lee, Yeon H
2014-05-20
In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.
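The DRPE core of the scheme can be sketched with numpy FFTs; this shows only classical double random phase encoding for one channel and omits the Bayer down-sampling and photon-counting steps the authors add. The image and mask sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((8, 8))                              # stand-in for one color channel

phase1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane random phase mask
phase2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane random phase mask

# encryption: multiply by the first mask, go to the Fourier plane, apply the
# second mask, and return to the spatial domain (a white-noise-like field)
encrypted = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)

# decryption reverses the steps with the conjugate phase masks
decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phase2)) * np.conj(phase1)
print(np.allclose(decrypted.real, img))               # round trip recovers the image
```

In the paper, the amplitude of `encrypted` would be photon counted, so only a sparse subset of phase values survives for verification rather than full recovery.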
Raknes, Guttorm; Hunskaar, Steinar
2014-01-01
We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.
Nachbar, Markus; El Deeb, Sami; Mozafari, Mona; Alhazmi, Hassan A; Preu, Lutz; Redweik, Sabine; Lehmann, Wolf Dieter; Wätzig, Hermann
2016-03-01
Strong, sequence-specific gas-phase bindings between proline-rich peptides and alkaline earth metal ions in nanoESI-MS experiments were reported by Lehmann et al. (Rapid Commun. Mass Spectrom. 2006, 20, 2404-2410); however, their relevance for a physiological-like aqueous phase is uncertain. Therefore, the complexes should also be studied in aqueous solution, and the relevance of the MS method for binding studies should be evaluated. A mobility shift ACE method was used to determine the binding between the small peptide GAPAGPLIVPY and various metal ions in aqueous solution. The findings were compared to the MS results and further explained using computational methods. While the MS data showed strong alkaline earth ion binding, the ACE results showed no significant binding. The proposed vacuum-state complex also decomposed during a molecular dynamics simulation in aqueous solution. This study shows that the formation of stable peptide-metal ion adducts in the gas phase by ESI-MS does not imply the existence of analogous adducts in the aqueous phase. Comparing peptide-metal ion interactions under the gaseous MS and aqueous ACE conditions showed a huge difference in binding behavior. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dave, Mihika B; Dherai, Alpa J; Udani, Vrajesh P; Hegde, Anaita U; Desai, Neelu A; Ashavaid, Tester F
2018-01-01
Transferrin, a major glycoprotein, has different isoforms depending on the number of sialic acid residues present on its oligosaccharide chain. Genetic variants of transferrin as well as primary (CDG) and secondary glycosylation defects lead to an altered transferrin pattern. Isoform analysis methods are based on charge/mass variations. We aimed to compare the performance of a commercially available capillary electrophoresis CDT kit for diagnosing congenital disorders of glycosylation with our in-house optimized HPLC method for transferrin isoform analysis. The isoform pattern of 30 healthy controls and 50 CDG-suspected patients was determined by CE using a Carbohydrate-Deficient Transferrin kit. The results were compared with the in-house HPLC-based assay for transferrin isoforms. The transferrin isoform pattern of healthy individuals showed a predominant tetrasialotransferrin fraction followed by pentasialo-, trisialo- and disialotransferrin. Two of the 50 CDG-suspected patients showed the presence of asialylated isoforms. The results were comparable with the isoform pattern obtained by HPLC. The commercial controls showed a <20% CV for each isoform. The Bland-Altman plot showed the differences to lie within ±1.96 SD, with no systematic bias between the HPLC and CE results. The CE method is rapid, reproducible and comparable with HPLC, and can be used for screening glycosylation defects. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Nakhjavani, Maryam; Nikkhah, V.; Sarafraz, M. M.; Shoja, Saeed; Sarafraz, Marzieh
2017-10-01
In this paper, silver nanoparticles are produced via a green synthesis method using green tea leaves. The introduced method is cost-effective and accessible, and it provides conditions to manipulate and control the average nanoparticle size. The produced particles were characterized using x-ray diffraction, scanning electron microscopy, UV-visible spectroscopy, dynamic light scattering, zeta potential measurement and thermal conductivity measurement. Results demonstrated that the produced samples of silver nanoparticles are pure in structure (based on the x-ray diffraction test), almost identical in terms of morphology (spherical and to some extent cubic) and show long-term stability when dispersed in deionized water. The UV-visible spectrum showed a peak at 450 nm, which is in accordance with previous studies reported in the literature. Results also showed that small particles have higher thermal and antimicrobial performance. As green tea leaves are used for extracting the silver nanoparticles, the method is eco-friendly. The thermal behaviour of the silver nanoparticles was also analysed by dispersing the nanoparticles in deionized water. Results showed that the thermal conductivity of the silver nanofluid is higher than that obtained for deionized water. The activity of the Ag nanoparticles against some bacteria was also examined to find a suitable antibacterial application for the produced particles.
NASA Astrophysics Data System (ADS)
Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo
We proposed a calculation method for the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection under pedaling exercise. The validity and effectiveness of our proposed method are examined by simultaneous measurement with an expiration gas analyzer. The experimental results showed that a correlation exists between the quasi ventilation thresholds calculated by our proposed method and the ventilation thresholds calculated by the expiration gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.
Applications of the Lattice Boltzmann Method to Complex and Turbulent Flows
NASA Technical Reports Server (NTRS)
Luo, Li-Shi; Qi, Dewei; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
We briefly review the method of the lattice Boltzmann equation (LBE). We show three-dimensional LBE simulation results for a non-spherical particle in Couette flow and for 16 particles in sedimentation in a fluid. We compare the LBE simulation of three-dimensional homogeneous isotropic turbulence in a periodic cubic box of size 128³ with a pseudo-spectral simulation, and find that the two results agree well with each other, but the LBE method is more dissipative than the pseudo-spectral method at small scales, as expected.
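The LBE's collide-and-stream structure can be sketched for the standard two-dimensional D2Q9 lattice with BGK collision; the grid size, relaxation time and initial perturbation below are illustrative, and no boundary treatment is included (the domain is periodic).

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
tau = 0.8                       # BGK relaxation time (illustrative)
nx = ny = 16

f = np.ones((9, nx, ny)) / 9.0  # uniform initial state
f[:, nx//2, ny//2] *= 1.1       # small density bump to make the run non-trivial
mass0 = f.sum()

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

for _ in range(10):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau            # BGK collision
    for i in range(9):                                   # periodic streaming
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))

print(abs(f.sum() - mass0))     # mass conservation check
```

Both collision and streaming conserve mass exactly, which is one of the structural properties that makes the LBE attractive for complex flows.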
Almqvist, M; Holm, A; Persson, H W; Lindström, K
2000-01-01
The aim of this work was to show the applicability of light diffraction tomography on airborne ultrasound in the frequency range 40 kHz-2 MHz. Seven different air-coupled transducers were measured to show the method's performance regarding linearity, absolute pressure measurements, phase measurements, frequency response, S/N ratio and spatial resolution. A calibrated microphone and the pulse-echo method were used to evaluate the results. The absolute measurements agreed within the calibrated microphone's uncertainty range. Pulse waveforms and corresponding FFT diagrams show the method's higher bandwidth compared with the microphone. Further, the method offers non-perturbing measurements with high spatial resolution, which was especially advantageous for measurements close to the transducer surfaces. The S/N ratio was higher than or in the same range as that of the two comparison methods.
Zhang, Honglei; Ding, Jincheng; Zhao, Zengdian
2012-11-01
The traditional heating and microwave assisted methods for biodiesel production using a cation ion-exchange resin particles (CERP)/PES catalytic membrane were comparatively studied to achieve an economical and effective method for the utilization of free fatty acids (FFAs) from waste cooking oil (WCO). The optimal esterification conditions of the two methods were investigated, and the experimental results showed that microwave irradiation exhibited a remarkably enhanced effect on esterification compared with the traditional heating method. The FFA conversion of microwave assisted esterification reached 97.4% under the optimal conditions of reaction temperature 60°C, methanol/acidified oil mass ratio 2.0:1, catalytic membrane (annealed at 120°C) loading 3 g, microwave power 360 W and reaction time 90 min. The results showed that applying microwave irradiation is a fast, easy and green way to produce biodiesel. Copyright © 2012 Elsevier Ltd. All rights reserved.
Kozakov, Dima; Hall, David R.; Napoleon, Raeanne L.; Yueh, Christine; Whitty, Adrian; Vajda, Sandor
2016-01-01
A powerful early approach to evaluating the druggability of proteins involved determining the hit rate in NMR-based screening of a library of small compounds. Here we show that a computational analog of this method, based on mapping proteins using small molecules as probes, can reliably reproduce druggability results from NMR-based screening, and can provide a more meaningful assessment in cases where the two approaches disagree. We apply the method to a large set of proteins. The results show that, because the method is based on the biophysics of binding rather than on empirical parameterization, meaningful information can be gained about classes of proteins and classes of compounds beyond those resembling validated targets and conventionally druglike ligands. In particular, the method identifies targets that, while not druggable by druglike compounds, may become druggable using compound classes such as macrocycles or other large molecules beyond the rule-of-five limit. PMID:26230724
Entrepreneur environment management behavior evaluation method derived from environmental economy.
Zhang, Lili; Hou, Xilin; Xi, Fengru
2013-12-01
An evaluation system can encourage and guide entrepreneurs and impel them to perform well in environmental management. An evaluation method based on advantage structure is established and used to analyze entrepreneur environment management behavior in China. An entrepreneur environment management behavior evaluation index system is constructed based on empirical research. An evaluation method for entrepreneurs is put forward from the standpoint of objective programming theory, which takes the minimized objective function as the comprehensive evaluation result and identifies the disadvantage structure pattern, so as to alert the entrepreneurs concerned to pay attention to it. Application research shows that the overall environment management behavior of Chinese entrepreneurs is good; specifically, environmental strategic behavior is best, environmental management behavior is second, and cultural behavior ranks last. The application results show the efficiency and feasibility of this method. Copyright © 2013 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.
Short-Chain Polysaccharide Analysis in Ethanol-Water Solutions.
Yan, Xun
2017-07-01
This study demonstrates that short-chain polysaccharides, or oligosaccharides, could be sufficiently separated with hydrophilic interaction LC (HILIC) conditions and quantified by evaporative light-scattering detection (ELSD). The multianalyte calibration approach improved the efficiency of calibrating the nonlinear detector response. The method allowed easy quantification of short-chain carbohydrates. Using the HILIC method, the oligosaccharide solubility and its profile in water/alcohol solutions at room temperature were able to be quantified. The results showed that the polysaccharide solubility in ethanol-water solutions decreased as ethanol content increased. The results also showed oligosaccharides to have minimal solubility in pure ethanol. In a saturated maltodextrin ethanol (80%) solution, oligosaccharide components with a degree of polymerization >12 were practically insoluble and contributed less than 0.2% to the total solute dry weight. The HILIC-ELSD method allows for the identification and quantification of low-MW carbohydrates individually and served as an alternative method to current gel permeation chromatography procedures.
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo, Aziz, N.
2017-11-01
In this paper, sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables and to identify the interactions between the parameters. The result of the factorial design is used for screening to reduce the number of parameters and, subsequently, the complexity of the model. It shows that, of the six input parameters, four are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
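The screening step can be sketched with a two-level 2^(4-1) fractional factorial design (generator D = ABC), which halves the number of runs relative to a full factorial; the toy response below is an illustrative stand-in for the furnace model, with factors A and B made significant and C and D negligible.

```python
import itertools
import numpy as np

# build the 2^(4-1) design: full factorial in A, B, C plus the generated column D = ABC
levels = np.array(list(itertools.product([-1, 1], repeat=3)))  # A, B, C
d_col = levels.prod(axis=1, keepdims=True)                     # generator D = ABC
design = np.hstack([levels, d_col])                            # 8 runs, 4 factors

def response(run):
    a, b, c, d = run
    return 10 + 4*a + 2*b + 0.1*c + 0.05*d                     # toy process output

y = np.apply_along_axis(response, 1, design)
# main effect = mean(output at +1) - mean(output at -1), per factor
effects = design.T @ y / (len(y) / 2)
print(effects)   # contrasts are twice the model coefficients: [8, 4, 0.2, 0.1]
```

Ranking the absolute effects reproduces the screening conclusion: two factors (here A and B) dominate and the rest can be dropped from further modeling.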
Fault detection of gearbox using time-frequency method
NASA Astrophysics Data System (ADS)
Widodo, A.; Satrijo, Dj.; Prahasto, T.; Haryanto, I.
2017-04-01
This research deals with fault detection and diagnosis of a gearbox using its vibration signature. In this work, fault detection and diagnosis are approached by employing time-frequency methods, and the results are compared with cepstrum analysis. Experimental work has been conducted to acquire vibration signals through a self-designed gearbox test rig. This test rig is able to demonstrate normal and faulty gearboxes, i.e., with wear and tooth breakage. Three accelerometers were used to acquire vibration signals from the gearbox, and an optical tachometer was used to measure shaft rotation speed. The results show that frequency domain analysis using the fast Fourier transform was less sensitive to wear and tooth breakage conditions. However, the short-time Fourier transform method was able to monitor the faults in the gearbox. The wavelet transform (WT) method also showed good performance in gearbox fault detection using vibration signals after employing time synchronous averaging (TSA).
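The short-time Fourier transform used above can be sketched in numpy alone (production code would typically use `scipy.signal.stft`); the sampling rate, window length and the synthetic burst imitating an impact-like fault are illustrative. A plain FFT of the whole record would show the 200 Hz burst energy but not when it occurs, which is why the time-frequency view is more sensitive to localized faults.

```python
import numpy as np

fs = 1024
t = np.arange(0, 1.0, 1/fs)
signal = np.sin(2*np.pi*50*t)                               # steady gear-mesh tone
signal[fs//2:fs//2 + 32] += 2*np.sin(2*np.pi*200*t[:32])    # transient "fault" burst at 0.5 s

def stft(x, win=128, hop=64):
    """Magnitude spectrogram: Hann-windowed frames, one FFT per frame."""
    window = np.hanning(win)
    frames = [x[i:i + win] * window for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

S = stft(signal)                                  # shape: (frames, win//2 + 1)
fault_bin = round(200 * 128 / fs)                 # frequency bin nearest 200 Hz
burst_frame = int(np.argmax(S[:, fault_bin]))     # frame with most 200 Hz energy
burst_time = burst_frame * 64 / fs
print(burst_time)                                 # time (s) near the 0.5 s burst
```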
Talari, Roya; Varshosaz, Jaleh; Mostafavi, Seyed Abolfazl; Nokhodchi, Ali
2009-01-01
Micronization using a milling process to enhance dissolution rate is extremely inefficient due to high energy input and disruptions in the crystal lattice, which can cause physical or chemical instability. Therefore, the aim of the present study is to use an in situ micronization process through the pH change method to produce micron-sized gliclazide particles for fast dissolution and hence better bioavailability. Gliclazide was recrystallized in the presence of 12 different stabilizers, and the effects of each stabilizer on the micromeritic behavior, morphology, dissolution rate and solid state of the recrystallized drug particles were investigated. The recrystallized samples showed faster dissolution than untreated gliclazide particles, and the fastest dissolution rate was observed for the samples recrystallized in the presence of PEG 1500. Some of the drug samples recrystallized in the presence of stabilizers dissolved 100% within the first 5 min, showing at least 10 times greater dissolution rate than that of untreated gliclazide powders. Micromeritic studies showed that the in situ micronization technique via the pH change method is able to produce smaller particle sizes with a high surface area. The results also showed that the type of stabilizer had a significant impact on the morphology of the recrystallized drug particles. Untreated gliclazide is rod or rectangular shaped, whereas the crystals produced in the presence of stabilizers, depending on the type of stabilizer, were very fine particles with irregular, cubic, rectangular, granular and spherical/modular shapes. The results showed that crystallization of gliclazide in the presence of stabilizers reduced the crystallinity of the samples, as confirmed by XRPD and DSC results. In situ micronization of gliclazide through the pH change method can successfully be used to produce micron-sized drug particles to enhance the dissolution rate.
Analyzing students' attitudes towards science during inquiry-based lessons
NASA Astrophysics Data System (ADS)
Kostenbader, Tracy C.
Due to the logistics of guided-inquiry lessons, students learn to problem solve and develop critical thinking skills. This mixed-methods study analyzed students' attitudes towards science during inquiry lessons. My quantitative results from a repeated-measures survey showed no significant difference between student attitudes when taught with either structured-inquiry or guided-inquiry lessons. The qualitative results, analyzed through a constant-comparative method, did show that students generate positive interest, critical thinking and low-level stress during guided-inquiry lessons. The qualitative research also gave insight into a teacher's transition to guided inquiry. This study showed that, with my students, attitudes did not change during this transition according to the quantitative data; however, the qualitative data did show high levels of excitement. The results imply that students like guided-inquiry laboratories, even though they require more work, just as much as they like traditional laboratories with less work and less opportunity for creativity.
NASA Astrophysics Data System (ADS)
Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.
2016-12-01
The impact of climate change has been observed throughout the globe. Ecosystems experience rapid changes such as vegetation shifts and species extinction. In this context, the Species Distribution Model (SDM) is one of the popular methods to project the impact of climate change on an ecosystem. An SDM is basically based on the niche of a certain species, which means that presence point data are essential to run an SDM and find the biological niche of the species. To run an SDM for plants, there are certain considerations regarding the characteristics of vegetation. Normally, remote sensing techniques are used to produce vegetation data over large areas. In other words, the exact presence points have high uncertainties, as presence data sets are selected from polygon and raster datasets. Thus, sampling methods for modeling vegetation presence data should be carefully selected. In this study, we used three different sampling methods for the selection of vegetation presence data: random sampling, stratified sampling and site-index-based sampling. We used the R package BIOMOD2 to assess the uncertainty from the modeling. At the same time, we included BioCLIM variables and other environmental variables as input data. As a result, despite differences among the 10 SDMs, the sampling methods showed differences in ROC values: the random sampling method showed the lowest ROC value, while the site-index-based sampling method showed the highest. As a result of this study, the uncertainties from presence-data sampling methods and SDMs can be quantified.
Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.
Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J
2015-07-01
Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. Copyright © 2015 by the Genetics Society of America.
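A flavor of computing Bayes factors from marginal test statistics, as in the framework above, can be sketched with a Wakefield-style single-SNP approximate Bayes factor; note this is a generic illustration, not the CAVIARBF implementation (which handles multiple causal variants and LD), and the prior effect-size variance is an assumed value.

```python
import math

def approx_bayes_factor(z, se, w=0.04):
    """Single-SNP approximate Bayes factor from a marginal z-statistic.

    z:  marginal association z-statistic for the SNP
    se: standard error of the marginal effect estimate
    w:  prior variance of the true effect size (an assumption)
    """
    v = se ** 2                       # variance of the effect estimate
    r = w / (v + w)                   # shrinkage factor
    return math.sqrt(1 - r) * math.exp(z * z * r / 2.0)

print(approx_bayes_factor(z=5.0, se=0.1))   # strong signal: BF far above 1
print(approx_bayes_factor(z=0.5, se=0.1))   # weak signal: BF near or below 1
```

Only the marginal statistic and its standard error enter, which is the practical appeal of marginal-statistic methods: no individual-level genotype data are needed.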
NASA Astrophysics Data System (ADS)
Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan
2017-10-01
An optimized method to calculate the error correction capability of a tool influence function (TIF) under given polishing conditions is proposed, based on a smoothing spectral function. The basic mathematical model for this method is established in theory. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results quantitatively indicate the error correction capability of the TIF for different spatial frequency errors under the given polishing conditions. A comparative analysis with the previous method shows that the optimized method is simpler in form and achieves the same accuracy in less computing time.
A flower image retrieval method based on ROI feature.
Hong, An-Xiang; Chen, Gang; Li, Jun-Li; Chi, Zhe-Ru; Zhang, Dan
2004-07-01
Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower, and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of the flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
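The Centroid-Contour Distance signature used for shape matching can be sketched in a few lines of numpy. The binning and normalization choices here are illustrative; the paper's exact parameterization is not reproduced:

```python
import numpy as np

def centroid_contour_distance(contour, n_bins=36):
    """Centroid-Contour Distance (CCD) signature of a closed contour.

    contour : (N, 2) array of (x, y) boundary points
    Returns the distances from the centroid, averaged into n_bins angular
    positions and normalized by the maximum (for scale invariance).
    """
    c = contour.mean(axis=0)
    d = np.linalg.norm(contour - c, axis=1)
    theta = np.arctan2(contour[:, 1] - c[1], contour[:, 0] - c[0])
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    sig = np.zeros(n_bins)
    for b in range(n_bins):
        sel = d[bins == b]
        sig[b] = sel.mean() if len(sel) else 0.0
    return sig / sig.max()
```

Two flower contours can then be compared by a distance between their signatures, optionally minimized over cyclic shifts for rotation invariance.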
Yu, Hong; Kong, Lingfeng; Li, Qi
2016-01-01
In this study, we evaluated the efficacy of 12 mitochondrial protein-coding genes from 238 mitochondrial genomes of 140 molluscan species as potential DNA barcodes for mollusks. Three barcoding methods (distance-, monophyly- and character-based) were used in species identification. The species recovery rates based on genetic distances for the 12 genes ranged from 70.83 to 83.33%. There were no significant differences in intra- or interspecific variability among the 12 genes. The monophyly- and character-based methods provided higher resolution than the distance-based method in species delimitation; especially in closely related taxa, the character-based method showed some advantages. The results suggested that, besides the standard COI barcode, the other 11 mitochondrial protein-coding genes could also potentially be used as molecular diagnostics for molluscan species discrimination. Our results also showed that combining mitochondrial genes did not enhance the efficacy of species identification, and that a single mitochondrial gene is fully competent.
NASA Astrophysics Data System (ADS)
Li, Yung-Hui; Zheng, Bo-Ren; Ji, Dai-Yan; Tien, Chung-Hao; Liu, Po-Tsun
2014-09-01
Cross-sensor iris matching may seriously degrade recognition performance because of the sensor mismatch between iris images at the enrollment and test stages. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods to address this problem. The first method applies recent sparse representation theory, while the second learns the correspondence relationship through PCA in a heterogeneous patch space. Both methods learn the basic atoms of iris textures across different image sensors and build connections between them. Once such connections are built, it is possible at the test stage to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. The experimental results were satisfactory both visually and in terms of recognition rate. Experimenting with an iris database of 3015 images, we show that the proposed method decreases the EER by a relative 39.4%.
Theory and Computation of Optimal Low- and Medium- Thrust Orbit Transfers
NASA Technical Reports Server (NTRS)
Goodson, Troy D.; Chuang, Jason C. H.; Ledsinger, Laura A.
1996-01-01
This report presents new theoretical results which lead to new algorithms for the computation of fuel-optimal multiple-burn orbit transfers of low and medium thrust. Theoretical results introduced herein show how to add burns to an optimal trajectory and show that the traditional set of necessary conditions may be replaced with a much simpler set of equations. Numerical results are presented to demonstrate the utility of the theoretical results and the new algorithms. Two indirect methods from the literature are shown to be effective for the optimal orbit transfer problem with relatively small numbers of burns. These methods are the Minimizing Boundary Condition Method (MBCM) and BOUNDSCO. Both of these methods make use of the first-order necessary conditions exactly as derived by optimal control theory. Perturbations due to Earth's oblateness and atmospheric drag are considered. These perturbations are of greatest interest for transfers that take place between low Earth orbit altitudes and geosynchronous orbit altitudes. Example extremal solutions including these effects and computed by the aforementioned methods are presented. An investigation is also made into a suboptimal multiple-burn guidance scheme. The FORTRAN code developed for this study has been collected together in a package named ORBPACK. ORBPACK's user manual is provided as an appendix to this report.
Chaves, Gabriela Villaça; Pereira, Sílvia Elaine; Saboya, Carlos José; Cortes, Caroline; Ramalho, Rejane
2009-01-01
To evaluate the concordance between abdominal ultrasound and MRI (magnetic resonance imaging) in the diagnosis of non-alcoholic fatty liver disease (NAFLD), and the concordance of these two methods with the histopathological exam. The population studied comprised 145 patients with morbid obesity (BMI ≥ 40 kg/m²) of both genders. NAFLD diagnosis was performed by MRI and ultrasound. Liver biopsy was performed in a sub-sample (n=40). The kappa coefficient was used to evaluate the concordance of the two methods. Concordance between MRI and ultrasound was poor and not significant (adjusted kappa = 0.27; 95% CI = 0.07-0.39). Nevertheless, a slight concordance was found between the diagnosis of NAFLD by ultrasound and hepatic biopsy, with 83.3% concordant results and adjusted kappa = 0.67. Comparison of MRI with the histopathological exam showed 53.6% concordant results and adjusted kappa = 0.07. The concordance found between ultrasound and hepatic biopsy shows a need for further research on the use of ultrasound to validate and reconsider these methods, which would minimize the need to perform biopsies to detect and diagnose the disease.
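The kappa coefficient used for the concordance analysis is simple to compute. A minimal sketch for two binary raters (plain unweighted kappa; the "adjusted" variant reported in the abstract is not reproduced):

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two binary raters (e.g. MRI vs ultrasound calls).

    a, b : equal-length sequences of 0/1 diagnoses
    """
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    p1a, p1b = sum(a) / n, sum(b) / n
    p_e = p1a * p1b + (1 - p1a) * (1 - p1b)          # chance agreement
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1 for perfect agreement and near 0 when agreement is no better than chance, which is why the MRI-vs-biopsy value of 0.07 indicates essentially no concordance.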
NASA Astrophysics Data System (ADS)
Irwandi, Irwandi; Fashbir; Daryono
2018-04-01
The Neo-Deterministic Seismic Hazard Assessment (NDSHA) method is a seismic hazard assessment method whose advantage lies in realistic physical simulation of the source, the propagation, and the geological-geophysical structure. The simulation can generate synthetic seismograms at the sites being observed. At the regional NDSHA scale, calculation of strong ground motion is based on the 1D modal summation technique because it is more computationally efficient. In this article, we verify synthetic seismogram calculations against field observations of the Pidie Jaya earthquake of 7 December 2016 (moment magnitude M6.5), recorded by broadband seismometers installed by BMKG (the Indonesian Agency for Meteorology, Climatology and Geophysics). The synthetic seismograms agree well with observations at some stations, while other stations show discrepancies. Evidently the 1D modal summation technique is well verified for thin-sediment regions (near the pre-Tertiary basement) but less suitable for thick-sediment regions, because it excludes the amplification of seismic waves occurring within thick sediments. Another approach is therefore needed, e.g., the 2D finite difference hybrid method, which is part of the local-scale NDSHA method.
NASA Astrophysics Data System (ADS)
Nandiyanto, Asep Bayu Dani; Iskandar, Ferry; Okuyama, Kikuo
2011-12-01
Monodisperse spherical mesoporous silica nanoparticles were successfully synthesized using a liquid-phase synthesis method. The resulting particles had controllable pore sizes from several to tens of nanometers and outer diameters of several tens of nanometers. Pore size and outer diameter were controlled by adjusting the precursor solution ratios. In addition, we examined the adsorption ability of the prepared particles. Large organic molecules were well adsorbed onto the prepared porous silica particles, a result not obtained with commercial dense silica particles or hollow silica particles. Thus, the prepared mesoporous silica particles may be used efficiently in various applications, such as sensors, pharmaceuticals, and environmentally sensitive pursuits.
A new sampling method for fibre length measurement
NASA Astrophysics Data System (ADS)
Wu, Hongyan; Li, Xianghong; Zhang, Junying
2018-06-01
This paper presents a new sampling method for fibre length measurement. The new method meets the three features of an effective sampling method and produces a beard with two symmetrical ends, which can be scanned from the holding line to obtain two full fibrograms for each sample. The methodology is introduced and experiments were performed to investigate the effectiveness of the new method. The results show that the new sampling method is effective.
Kumar, K Vasanth; Sivanesan, S
2005-08-31
A comparative analysis of the linear least-squares method and the non-linear method for estimating isotherm parameters was made using experimental equilibrium data for safranin adsorption onto activated carbon at two solution temperatures, 305 and 313 K. The equilibrium data were fitted to the Freundlich, Langmuir and Redlich-Peterson isotherm equations. All three isotherm equations fitted the experimental equilibrium data well. The results showed that the non-linear method is a better way to obtain the isotherm parameters. The Redlich-Peterson isotherm reduces to the Langmuir isotherm when the Redlich-Peterson constant g is unity.
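The linear-vs-nonlinear contrast the abstract draws can be made concrete for the Langmuir isotherm, qe = qmax·KL·Ce/(1 + KL·Ce): the linearized route regresses Ce/qe on Ce, while the nonlinear route fits the equation directly. A sketch assuming scipy is available (parameter values below are synthetic, not the paper's safranin data):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1 + kl * ce)

def fit_langmuir_linear(ce, qe):
    """Linearized fit: Ce/qe = Ce/qmax + 1/(KL*qmax), via least squares."""
    slope, intercept = np.polyfit(ce, ce / qe, 1)
    qmax = 1 / slope
    kl = slope / intercept
    return qmax, kl

def fit_langmuir_nonlinear(ce, qe):
    """Direct nonlinear least-squares fit of the Langmuir equation."""
    (qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=(qe.max(), 1.0))
    return qmax, kl
```

On noiseless data both routes agree; with real measurement noise the linearization distorts the error structure, which is the usual reason the nonlinear fit gives better parameter estimates.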
Quantification of mixed chimerism by real time PCR on whole blood-impregnated FTA cards.
Pezzoli, N; Silvy, M; Woronko, A; Le Treut, T; Lévy-Mozziconacci, A; Reviron, D; Gabert, J; Picard, C
2007-09-01
This study investigated the quantification of chimerism in sex-mismatched transplantations by quantitative real-time PCR (RQ-PCR) using FTA paper for blood sampling. First, we demonstrate that the quantification of DNA from EDTA-blood deposited on an FTA card is accurate and reproducible. Secondly, we show that the fraction of recipient cells detected by RQ-PCR was concordant between the FTA method and the salting-out method, the reference DNA extraction method. Furthermore, the sensitivity of detection of recipient cells is similar for the two methods. Our results show that this innovative method can be used for MC assessment by RQ-PCR.
Efficient solution of the simplified P N equations
Hamilton, Steven P.; Evans, Thomas M.
2014-12-23
We show new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
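Plain power iteration, the baseline against which the 30-40x speedup is measured, fits in a few lines. A generic numpy sketch of the eigenvalue iteration (not the SPN-specific operator or the preconditioned Davidson solver):

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=10000):
    """Power iteration for the dominant eigenpair of A (the classic
    outer iteration in k-eigenvalue reactor calculations)."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)   # eigenvalue estimate (positive dominant case)
        x = y / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x
```

Its convergence rate is governed by the dominance ratio |λ2/λ1|, which is close to 1 for large reactor models; that slow convergence is what the shifted, Rayleigh-quotient, Arnoldi, and Davidson variants attack.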
On a new iterative method for solving linear systems and comparison results
NASA Astrophysics Data System (ADS)
Jing, Yan-Fei; Huang, Ting-Zhu
2008-10-01
In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered a modification of the Gauss-Seidel method. In this paper, we show that this is a special case from the point of view of projection techniques, and we establish a different approach that is proven, both theoretically and numerically, to be better than (or at least as good as) Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
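For reference, the classical Gauss-Seidel sweep that these methods modify can be sketched as follows (a textbook implementation, assuming a diagonally dominant system so the iteration converges):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration for Ax = b.

    Each sweep updates x[i] in place, immediately reusing the freshly
    updated components x[:i] within the same sweep.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```

Viewing each update as a one-dimensional projection of the residual is the "projection techniques" perspective under which Ujevic's method appears as a special case.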
[Study on Accurately Controlling Discharge Energy Method Used in External Defibrillator].
Song, Biao; Wang, Jianfei; Jin, Lian; Wu, Xiaomei
2016-01-01
This paper introduces a new method which controls discharge energy accurately. It is achieved by calculating target voltage based on transthoracic impedance and accurately controlling charging voltage and discharge pulse width. A new defibrillator is designed and programmed using this method. The test results show that this method is valid and applicable to all kinds of external defibrillators.
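The target-voltage calculation described follows from the capacitor energy relation E = ½CV². A minimal illustrative sketch only: a real defibrillator, as the abstract notes, also corrects for the measured transthoracic impedance and controls the discharge pulse width, neither of which is modeled here:

```python
import math

def target_voltage(energy_j, capacitance_f):
    """Charging voltage for a desired discharge energy, from E = 1/2 C V^2.

    energy_j      : desired delivered energy in joules
    capacitance_f : storage capacitance in farads
    """
    return math.sqrt(2 * energy_j / capacitance_f)
```

For example, delivering 200 J from a 100 µF capacitor requires charging to about 2 kV, which is why accurate charging-voltage control dominates the energy accuracy.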
Evaluation of finite difference and FFT-based solutions of the transport of intensity equation.
Zhang, Hongbo; Zhou, Wen-Jing; Liu, Ying; Leber, Donald; Banerjee, Partha; Basunia, Mahmudunnabi; Poon, Ting-Chung
2018-01-01
A finite difference method is proposed for solving the transport of intensity equation. Simulation results show that although slower than fast Fourier transform (FFT)-based methods, finite difference methods are able to reconstruct the phase with better accuracy due to relaxed assumptions for solving the transport of intensity equation relative to FFT methods. Finite difference methods are also more flexible than FFT methods in dealing with different boundary conditions.
NASA Astrophysics Data System (ADS)
Ali, Nauman; Ismail, Muhammad; Khan, Adnan; Khan, Hamayun; Haider, Sajjad; Kamal, Tahseen
2018-01-01
In this work, we developed simple, sensitive and inexpensive methods for the spectrophotometric determination of urea in urine samples using silver nanoparticles (AgNPs). The standard addition and second-order derivative methods were adopted for this purpose. AgNPs were prepared by chemical reduction of AgNO3 with hydrazine, using 1,3-di-(1H-imidazol-1-yl)-2-propanol (DIPO) as a stabilizing agent in aqueous medium. The proposed methods are based on the complexation of AgNPs with urea. Using this approach, urea in the urine samples was successfully determined by both spectrophotometric methods, with high percent recoveries (reported ± RSD). The recoveries of urea in the three urine samples by spectrophotometric standard addition were 99.2% ± 5.37, 96.3% ± 4.49 and 104.88% ± 4.99, and those of the spectrophotometric second-order derivative method were 115.3% ± 5.2, 103.4% ± 2.6 and 105.93% ± 0.76. These results show that the methods open the door to a potential role for AgNPs in the clinical determination of urea in urine, blood, and other biological and non-biological fluids.
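The standard-addition calculation reduces to extrapolating a regression line to its x-intercept: the sample is spiked with known amounts of analyte, the signal is regressed on the added concentration, and the unknown concentration is the magnitude of the x-intercept. A minimal numpy sketch (the spike levels and signal values below are hypothetical):

```python
import numpy as np

def standard_addition(added_conc, signal):
    """Analyte concentration by the standard-addition method.

    added_conc : spiked concentrations (first entry typically 0)
    signal     : measured absorbance at each spike level
    The unknown concentration is |x-intercept| of the regression line.
    """
    slope, intercept = np.polyfit(added_conc, signal, 1)
    return intercept / slope
```

Because the calibration is built inside the sample itself, matrix effects (here, everything else in the urine) cancel out of the slope, which is the method's main advantage over an external calibration curve.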
Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments
Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed
2013-01-01
In recent years, RNA-Seq technologies became a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data are yet to be standardized. In particular, it is known that the choice of a normalization procedure leads to a great variability in results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aiming at removing an inherent bias of studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Real RNA-Seq data sets analyses, performed with all the different normalization methods, show that only 50% of significantly differentially expressed genes are common. This result highlights the influence of the normalization step on the differential expression analysis. Real and simulated data sets analyses give similar results showing 3 different groups of procedures having the same behavior. The group including the novel method named “Median Ratio Normalization” (MRN) gives the lower number of false discoveries. Within this group the MRN method is less sensitive to the modification of parameters related to the relative size of transcriptomes such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with intrinsic bias resulting from relative size of studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
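The median-of-ratios idea at the core of MRN can be sketched as follows. This is the generic DESeq-style estimator; the paper's MRN adds a correction for the relative size of the transcriptomes that is not reproduced here:

```python
import numpy as np

def median_ratio_factors(counts):
    """Median-of-ratios size factors for a genes x samples count matrix.

    Each sample's factor is the median, over genes, of the ratio of that
    sample's count to the per-gene geometric mean (the pseudo-reference).
    Genes with a zero count in any sample are excluded.
    """
    log_counts = np.log(counts)
    log_ref = log_counts.mean(axis=1)          # log geometric mean per gene
    keep = np.isfinite(log_ref)                # drop genes with a zero count
    log_ratios = log_counts[keep] - log_ref[keep, None]
    return np.exp(np.median(log_ratios, axis=0))
```

Dividing each sample's counts by its factor puts the samples on a common scale; the median makes the estimate robust to the minority of genes that are genuinely differentially expressed.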
Cappella, Annalisa; Gibelli, Daniele; Muccino, Enrico; Scarpulla, Valentina; Cerutti, Elisa; Caruso, Valentina; Sguazza, Emanuela; Mazzarelli, Debora; Cattaneo, Cristina
2015-01-27
When estimating post-mortem interval (PMI) in forensic anthropology, the only method able to give an unambiguous result is C-14 analysis, although the procedure is expensive. Other methods, such as luminol tests and histological analysis, can be performed as preliminary investigations and may give the operators a preliminary indication of PMI, but they lack scientific verification, although luminol testing has become somewhat more accredited in the past few years. Such methods may in fact provide some help, as they are inexpensive and give a fast response, especially in the phase of preliminary investigations. In this study, 20 court cases of human skeletonized remains were dated by the C-14 method. For two cases, the results were dated after the 1950s; for one case, the analysis was technically not possible. The remaining 17 cases had an archaeological or historical collocation. The same bone samples were also screened with histological examination and the luminol test. Only four cases were simultaneously positive to luminol and had a high Oxford Histology Index (OHI) score; among these, two were dated as recent by the radiocarbon analysis. Thus, the combination of these methods gave only two false-positive results and no false negatives, and the combination of the two qualitative methods (luminol test and microscopic analysis) may represent a promising solution for cases where many fragments need to be quickly tested.
Monitoring biodiesel reactions of soybean oil and sunflower oil using ultrasonic parameters
NASA Astrophysics Data System (ADS)
Figueiredo, M. K. K.; Silva, C. E. R.; Alvarenga, A. V.; Costa-Félix, R. P. B.
2015-01-01
Biodiesel is an innovation that attempts to substitute diesel oil with biomass. The aim of this paper is to present the development of a real-time method to monitor transesterification reactions using low-power ultrasound and pulse/echo techniques. The results showed that it is possible to identify different events during the transesterification process using the proposed parameters, showing that the proposed method is a feasible way to monitor biodiesel reactions during fabrication, in real time, and with relatively low-cost equipment.
Comparison of normalization methods for the analysis of metagenomic gene abundance data.
Pereira, Mariana Buongermino; Wallroth, Mikael; Jonsson, Viktor; Kristiansson, Erik
2018-04-20
In shotgun metagenomics, microbial communities are studied through direct sequencing of DNA without any prior cultivation. By comparing gene abundances estimated from the generated sequencing reads, functional differences between the communities can be identified. However, gene abundance data is affected by high levels of systematic variability, which can greatly reduce the statistical power and introduce false positives. Normalization, which is the process where systematic variability is identified and removed, is therefore a vital part of the data analysis. A wide range of normalization methods for high-dimensional count data has been proposed but their performance on the analysis of shotgun metagenomic data has not been evaluated. Here, we present a systematic evaluation of nine normalization methods for gene abundance data. The methods were evaluated through resampling of three comprehensive datasets, creating a realistic setting that preserved the unique characteristics of metagenomic data. Performance was measured in terms of the methods' ability to identify differentially abundant genes (DAGs), correctly calculate unbiased p-values and control the false discovery rate (FDR). Our results showed that the choice of normalization method has a large impact on the end results. When the DAGs were asymmetrically present between the experimental conditions, many normalization methods had a reduced true positive rate (TPR) and a high false positive rate (FPR). The methods trimmed mean of M-values (TMM) and relative log expression (RLE) had the overall highest performance and are therefore recommended for the analysis of gene abundance data. For larger sample sizes, CSS also showed satisfactory performance. This study emphasizes the importance of selecting a suitable normalization method in the analysis of data from shotgun metagenomics. Our results also demonstrate that improper methods may result in unacceptably high levels of false positives, which in turn may lead to incorrect or obfuscated biological interpretation.
Ahmadi, Nader; Ahmadi, Fereshteh
2017-07-01
In the present article, based on results from a survey study in Sweden among 2,355 cancer patients, the role of religion in coping is discussed. The survey study, in turn, was based on earlier findings from a qualitative study of cancer patients in Sweden. The purpose of the present survey study was to determine to what extent results obtained in the qualitative study can be applied to a wider population of cancer patients in Sweden. The present study shows that use of religious coping methods is infrequent among cancer patients in Sweden. Besides the two methods that are ranked in 12th and 13th place, that is, in the middle (Listening to religious music and Praying to God to make things better), the other religious coping methods receive the lowest rankings, showing how nonsignificant such methods are in coping with cancer in Sweden. However, the question of who turns to God and who is self-reliant in a critical situation is too complicated to be resolved solely in terms of the strength of individuals' religious commitments. In addition to background and situational factors, the culture in which the individual was socialized is an important factor. Regarding the influence of background variables, the present results show that gender, age, and area of upbringing played an important role in almost all of the religious coping methods our respondents used. In general, people in the oldest age-group, women, and people raised in places with 20,000 or fewer residents had a higher average use of religious coping methods than did younger people, men, and those raised in larger towns.
The Use of Religious Coping Methods in a Secular Society
Ahmadi, Nader
2015-01-01
In the present article, based on results from a survey study in Sweden among 2,355 cancer patients, the role of religion in coping is discussed. The survey study, in turn, was based on earlier findings from a qualitative study of cancer patients in Sweden. The purpose of the present survey study was to determine to what extent results obtained in the qualitative study can be applied to a wider population of cancer patients in Sweden. The present study shows that use of religious coping methods is infrequent among cancer patients in Sweden. Besides the two methods that are ranked in 12th and 13th place, that is, in the middle (Listening to religious music and Praying to God to make things better), the other religious coping methods receive the lowest rankings, showing how nonsignificant such methods are in coping with cancer in Sweden. However, the question of who turns to God and who is self-reliant in a critical situation is too complicated to be resolved solely in terms of the strength of individuals’ religious commitments. In addition to background and situational factors, the culture in which the individual was socialized is an important factor. Regarding the influence of background variables, the present results show that gender, age, and area of upbringing played an important role in almost all of the religious coping methods our respondents used. In general, people in the oldest age-group, women, and people raised in places with 20,000 or fewer residents had a higher average use of religious coping methods than did younger people, men, and those raised in larger towns. PMID:28690385
Cytotoxicity and genotoxicity properties of particulate matter fraction 2.5 μm
NASA Astrophysics Data System (ADS)
Bełcik, Maciej K.; Trusz-Zdybek, Agnieszka; Zaczyńska, Ewa; Czarny, Anna; Piekarska, Katarzyna
2017-11-01
Ambient air contains more than 2,000 chemical substances, some of which are adsorbed on the surface of particulate matter and may cause many health problems. Air pollution is responsible for more than 3.2 million premature deaths, which makes it the second-ranked environmental risk factor. Especially dangerous for health are polycyclic aromatic hydrocarbons and their nitro- and amino-derivatives, which show mutagenic and carcinogenic properties. Air pollutants have also been classified by the International Agency for Research on Cancer in the group whose carcinogenicity to humans has been proven by the available evidence. Air pollution, including particulate matter, is one of the biggest problems in Polish cities; the World Health Organization, in a report published in May 2016, placed many Polish cities at the top of the list of the most polluted in the European Union. The article presents the results of mutagenicity, genotoxicity and cytotoxicity studies conducted on the particulate matter fraction 2.5 μm collected throughout the year in the Wroclaw agglomeration. The material was collected on filters using a high-flow air aspirator and extracted with dichloromethane. It was additionally fractionated into two parts containing, respectively, all pollutants and only polycyclic aromatic hydrocarbons. The dry residues of these fractions were dissolved in DMSO and tested using biological methods: the Salmonella (Ames) assay for mutagenicity, the comet assay for genotoxicity, and the four-parameter PAN-I assay for cytotoxicity. The results of the experiments show differences in mutagenic, genotoxic and cytotoxic properties between collection seasons and between fractions of dust pollution. The worst properties were shown by particles collected in the autumn and winter seasons and by the fractions containing only polycyclic aromatic hydrocarbons. The results also showed some correlations among the results obtained with the different methods and properties.
Estimation of relative effectiveness of phylogenetic programs by machine learning.
Krivozubov, Mikhail; Goebels, Florian; Spirin, Sergei
2014-04-01
Reconstruction of the phylogeny of a protein family from a sequence alignment can produce results of varying quality. Our goal is to predict the quality of phylogeny reconstruction based on features that can be extracted from the input alignment. We used the Fitch-Margoliash (FM) method of phylogeny reconstruction and a random forest as the predictor. For training and testing the predictor, alignments of orthologous series (OS) were used, for which the result of phylogeny reconstruction can be evaluated by comparison with the trees of the corresponding organisms. Our results show that the quality of phylogeny reconstruction can be predicted with more than 80% precision. We also tried to predict which phylogeny reconstruction method, FM or UPGMA, is better for a particular alignment. With the feature set used, among the alignments for which the predictor predicts better performance from UPGMA, 56% really do give a better result with UPGMA. Given that only 34% of the alignments in our test set are handled better by UPGMA, this result shows that it is in principle possible to predict the better phylogeny reconstruction method from features of a sequence alignment.
Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images
NASA Astrophysics Data System (ADS)
Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Moon, Kiyoung
2010-06-01
Iris recognition is used for information security with a high confidence level because it shows outstanding recognition accuracy by using human iris patterns with high degrees of freedom. However, iris recognition accuracy can be reduced by noisy iris images with optical and motion blurring. We propose a new iris recognition method based on the fuzzy difference-of-Gaussian (DOG) for noisy iris images. This study is novel in three ways compared to previous works: (1) The proposed method extracts iris feature values using the DOG method, which is robust to local variations of illumination and shows fine texture information, including various frequency components. (2) When determining iris binary codes, image noises that cause the quantization error of the feature values are reduced with the fuzzy membership function. (3) The optimal parameters of the DOG filter and the fuzzy membership function are determined in terms of iris recognition accuracy. Experimental results showed that the performance of the proposed method was better than that of previous methods for noisy iris images.
NASA Astrophysics Data System (ADS)
Mushahali, Hahaer; Mu, Baoxia; Wang, Qian; Mamat, Mamatrishat; Cao, Haibin; Yang, Guang; Jing, Qun
2018-07-01
Finite-field methods can be used to study the optical response intuitively and to identify the atomic contributions to the birefringence and SHG tensors. In this paper, the linear and second-order nonlinear optical properties of the ABe2BO3F2 family (A = K, Rb, Cs) of compounds are investigated using finite-field methods within different exchange-correlation functionals. The results show that the obtained birefringence and SHG tensors are in good agreement with the experimental values. The atomic contributions to the total birefringence were further investigated using the variation of the atomic charges and the Born effective charges. The results show that the boron-oxygen groups give the main contribution to the anisotropic birefringence.
Zheng, Hai-ming; Li, Guang-jie; Wu, Hao
2015-06-01
Differential optical absorption spectroscopy (DOAS) is a commonly used atmospheric pollution monitoring method. Denoising the monitored spectral data improves the inversion accuracy. The Fourier transform filtering method is capable of effectively filtering out noise in the spectral data, but the algorithm itself can introduce errors. In this paper, a chirp-z transform method is put forward. By locally refining the Fourier transform spectrum, it retains the denoising effect of the Fourier transform while compensating for the algorithm's error, further improving the inversion accuracy. The paper studies the concentration retrieval of SO2 and NO2. The results show that simple division causes larger errors and is not very stable, and that the chirp-z transform is more accurate than the Fourier transform. Frequency spectrum analysis shows that the Fourier transform cannot resolve the distortion and weakening of the characteristic absorption spectrum, whereas the chirp-z transform can finely reconstruct a specific frequency band.
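The chirp-z transform's ability to zoom into a chosen band is easy to see from its definition: it evaluates the z-transform on a spiral contour z_k = A·W^(-k), so the frequency samples need not be the uniform FFT grid. A direct O(N·M) numpy sketch (not the fast Bluestein algorithm used in practice):

```python
import numpy as np

def chirp_z(x, m, w, a):
    """Direct chirp-z transform: X_k = sum_n x[n] * (a * w**(-k))**(-n).

    With a = 1 and w = exp(-2j*pi/N) this reduces to the ordinary DFT;
    other choices of w and a place the m output samples on an arbitrary
    spiral arc, zooming into a narrow frequency band.
    """
    n = np.arange(len(x))
    k = np.arange(m)
    z = a * w ** (-k)                    # evaluation points on the contour
    return (x[None, :] * z[:, None] ** (-n[None, :])).sum(axis=1)
```

Choosing w to span only the band containing the characteristic absorption structure gives a locally refined spectrum at no extra input length, which is the refinement the abstract exploits.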
Anti-inflammatory, Analgesic and Antiulcer properties of Porphyra vietnamensis
Bhatia, Saurabh; Sharma, Kiran; Sharma, Ajay; Nagpal, Kalpana; Bera, Tanmoy
2015-01-01
Objectives: The aim of the present work was to investigate the anti-inflammatory, analgesic, and antiulcer effects of the red seaweed Porphyra vietnamensis (P. vietnamensis). Materials and Methods: Aqueous (POR) and alcoholic (PE) fractions were isolated from P. vietnamensis. Biological investigations were then performed using the classic tests of carrageenan-induced paw edema, acetic acid-induced writhing, the hot plate method, and naproxen-induced gastro-duodenal ulcer. Results: Of the two fractions, POR showed better activity. POR and PE significantly (p < 0.05) reduced carrageenan-induced paw edema in a dose-dependent manner. In the writhing test, POR reduced abdominal writhes significantly (p < 0.05) more than PE. In the hot plate method, POR showed better analgesic activity than PE. POR showed ulcer-reducing potential comparable (p < 0.01) to that of omeprazole, and greater than that of PE. Conclusions: The results of this study demonstrate that the P. vietnamensis aqueous fraction possesses biological activity close to the standards used for the treatment of peripheral pain and/or inflammatory and ulcer conditions. PMID:25767759
Full-field stress determination in photoelasticity with phase shifting technique
NASA Astrophysics Data System (ADS)
Guo, Enhai; Liu, Yonggang; Han, Yongsheng; Arola, Dwayne; Zhang, Dongsheng
2018-04-01
Photoelasticity is an effective method for evaluating stress and its spatial variation within a stressed body. In the present study, a method to determine the stress distribution by means of phase shifting and a modified shear-difference scheme is proposed. First, the orientation of the first principal stress and the retardation between the principal stresses are determined over the full field through phase shifting. Then, through bicubic interpolation and a modified shear-difference method, the internal stress is calculated starting from a point on a free boundary along its normal direction. A method to reduce the integration error in the shear-difference scheme is proposed and compared to existing methods; the integration error is reduced when theoretical photoelastic parameters are used to calculate the stress components at the same points. Results show that the error is minimal when the ratio Δx/Δy approaches one, and that although interpolation error is inevitable, it has limited influence on the accuracy of the result. Finally, examples are presented for determining the stresses in a circular plate and a ring subjected to diametral loading. Results show that the proposed approach provides a complete solution for determining the full-field stresses in photoelastic models.
Garrido, M; Larrechi, M S; Rius, F X
2007-03-07
This paper reports the validation of the results obtained by combining near infrared spectroscopy and multivariate curve resolution-alternating least squares (MCR-ALS) and using high performance liquid chromatography as a reference method, for the model reaction of phenylglycidylether (PGE) and aniline. The results are obtained as concentration profiles over the reaction time. The trueness of the proposed method has been evaluated in terms of lack of bias. The joint test for the intercept and the slope showed that there were no significant differences between the profiles calculated spectroscopically and the ones obtained experimentally by means of the chromatographic reference method at an overall level of confidence of 5%. The uncertainty of the results was estimated by using information derived from the process of assessment of trueness. Such operational aspects as the cost and availability of instrumentation and the length and cost of the analysis were evaluated. The method proposed is a good way of monitoring the reactions of epoxy resins, and it adequately shows how the species concentration varies over time.
Alessandri, Stefano
2017-01-01
Introduction: Some release of radionuclides into the environment can be expected from the growing number of nuclear plants, whether in or out of service. Both citizens and large organizations could be interested in simple and innovative methods for checking the radiological safety of their environment and of commodities, starting with food. Methods: In this work, three methods to detect radioactivity are briefly compared, focusing on the most recent, which converts a smartphone into a radiation counter. Results: The results of a simple sensitivity test are presented, showing measurements of the activity of reference sources placed at different distances from each sensor. Discussion: The three methods are discussed in terms of availability, technology, sensitivity, resolution, and usefulness. The reported results could usefully be transferred to a radiological emergency scenario and also carry interesting implications for everyday life, but they show that the hardware of the tested smartphone can detect only high levels of radioactivity. However, the technology could be interesting for building a working detection and measurement chain, starting from a diffuse, networked first screening before a final high-resolution analysis. PMID:28744409
Electrohydrodynamic assisted droplet alignment for lens fabrication by droplet evaporation
NASA Astrophysics Data System (ADS)
Wang, Guangxu; Deng, Jia; Guo, Xing
2018-04-01
Lens fabrication by droplet evaporation has attracted much attention because the approach is simple and moldless. Droplet position accuracy is a critical parameter in this approach, so accurate methods of droplet position alignment are of great importance. In this paper, we propose an electrohydrodynamic (EHD) assisted droplet alignment method. An electrostatic force is induced at the interface between materials to overcome surface tension and gravity; the deviation of the droplet position from the center region is eliminated and alignment is successfully realized. We demonstrate the capability of the proposed method theoretically and experimentally. First, we built a simulation model coupling the three-phase flow formulations with the EHD equations to study the three-phase flow process in an electric field. Results show that it is the uneven electric field distribution that drives the relative movement of the droplet. We then conducted experiments to verify the method; the experimental results are consistent with the numerical simulations. Moreover, we successfully fabricated a crater lens using the proposed method. A light-emitting diode module packaged with the fabricated crater lens shows a significant adjustment of the light intensity distribution compared with a spherical-cap lens.
Further Investigating Method Effects Associated with Negatively Worded Items on Self-Report Surveys
ERIC Educational Resources Information Center
DiStefano, Christine; Motl, Robert W.
2006-01-01
This article used multitrait-multimethod methodology and covariance modeling for an investigation of the presence and correlates of method effects associated with negatively worded items on the Rosenberg Self-Esteem (RSE) scale (Rosenberg, 1989) using a sample of 757 adults. Results showed that method effects associated with negative item phrasing…
The Role of Attention in Somatosensory Processing: A Multi-Trait, Multi-Method Analysis
ERIC Educational Resources Information Center
Wodka, Ericka L.; Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.
2016-01-01
Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different…
Optimal Stratification of Item Pools in a-Stratified Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Chang, Hua-Hua; van der Linden, Wim J.
2003-01-01
Developed a method based on 0-1 linear programming to stratify an item pool optimally for use in alpha-stratified adaptive testing. Applied the method to a previous item pool from the computerized adaptive test of the Graduate Record Examinations. Results show the new method performs well in practical situations. (SLD)
Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory
Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong
2016-01-01
Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it efficiently combines evidence from different sensors. However, when the evidence is highly conflicting, it may yield counterintuitive results. To address this issue, a new method is proposed in this paper. Both the static and the dynamic sensor reliability are taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify conflicting evidence by assigning weights to evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion illustrates the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to existing methods. PMID:26797611
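The combination step at the heart of D-S fusion can be sketched as follows. This is the plain Dempster rule for two mass functions over a frame of discernment — not the paper's reliability-weighted variant — and the fault labels are made up for illustration.

```python
def dempster_combine(m1, m2):
    # Dempster's rule of combination for two basic probability
    # assignments, given as dicts mapping frozenset (focal element)
    # to mass.  Mass assigned to empty intersections is the conflict;
    # the remaining mass is renormalized by 1 - conflict.
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Two sensor reports over hypothetical faults F1, F2:
m1 = {frozenset({"F1"}): 0.9, frozenset({"F1", "F2"}): 0.1}
m2 = {frozenset({"F1"}): 0.8, frozenset({"F2"}): 0.2}
fused = dempster_combine(m1, m2)   # fused mass concentrates on F1
```

When the conflict approaches 1, the renormalization blows up the remaining mass — exactly the counterintuitive behavior that motivates the reliability weighting proposed in the paper.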
Contact angle adjustment in equation-of-state-based pseudopotential model.
Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong
2016-05-01
The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.
Swiderska, Zaneta; Korzynska, Anna; Markiewicz, Tomasz; Lorent, Malgorzata; Zak, Jakub; Wesolowska, Anna; Roszkowiak, Lukasz; Slodkowska, Janina; Grala, Bartlomiej
2015-01-01
Background. This paper presents a study of hot-spot selection in the assessment of whole slide images of tissue sections collected from meningioma patients. The samples were immunohistochemically stained to determine the Ki-67/MIB-1 proliferation index used for prognosis and treatment planning. Objective. Observer performance was examined by comparing the results of the proposed method of automatic hot-spot selection in whole slide images, traditional scoring under a microscope, and a pathologist's manual hot-spot selection. Methods. Ki-67 index scores obtained by optical scoring under a microscope, by quantification software applied to hot spots selected by two pathologists (once and three times, respectively), and by the same software applied to hot spots selected by the proposed automatic methods were compared using Kendall's tau-b statistics. Results. The results show intra- and interobserver agreement. Agreement between Ki-67 scoring with manual and automatic hot-spot selection is high, while agreement between Ki-67 scoring in whole slide images and traditional microscopic examination is lower. Conclusions. The agreement observed for the three scoring methods shows that automating area selection is an effective tool for supporting physicians and increasing the reliability of Ki-67 scoring in meningioma.
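The Kendall tau-b statistic used for the agreement comparison can be computed directly from concordant, discordant, and tied pairs. A plain O(n²) sketch on made-up ordinal scores (a library routine such as `scipy.stats.kendalltau` would normally be preferred):

```python
from math import sqrt
from itertools import combinations

def kendall_tau_b(x, y):
    # Kendall's tau-b rank correlation with tie correction:
    # tau_b = (C - D) / sqrt((n0 - Tx) * (n0 - Ty)),
    # where n0 is the total number of pairs and Tx, Ty count
    # pairs tied in x and in y respectively.
    n0 = conc = disc = tx = ty = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        n0 += 1
        dx, dy = xi - xj, yi - yj
        if dx == 0:
            tx += 1                # pair tied in x
        if dy == 0:
            ty += 1                # pair tied in y
        if dx != 0 and dy != 0:
            if dx * dy > 0:
                conc += 1          # concordant pair
            else:
                disc += 1          # discordant pair
    return (conc - disc) / sqrt((n0 - tx) * (n0 - ty))

# Hypothetical scores from two raters over four hot spots:
tau = kendall_tau_b([1, 2, 2, 3], [1, 2, 3, 3])   # 0.8
```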
Monte Carlo method for magnetic impurities in metals
NASA Technical Reports Server (NTRS)
Hirsch, J. E.; Fye, R. M.
1986-01-01
The paper discusses a Monte Carlo algorithm to study properties of dilute magnetic alloys; the method can treat a small number of magnetic impurities interacting with the conduction electrons in a metal. Results for the susceptibility of a single Anderson impurity in the symmetric case show the expected universal behavior at low temperatures. Some results for two Anderson impurities are also discussed.
The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.
ERIC Educational Resources Information Center
Kaskowitz, Gary S.; De Ayala, R. J.
2001-01-01
Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…
Invariant 2D object recognition using the wavelet transform and structured neural networks
NASA Astrophysics Data System (ADS)
Khalil, Mahmoud I.; Bayoumi, Mohamed M.
1999-03-01
This paper applies the dyadic wavelet transform and the structured neural networks approach to recognize 2D objects under translation, rotation, and scale transformation. Experimental results are presented and compared with traditional methods. The experimental results show that this refined technique successfully classified the objects and outperformed some traditional methods, especially in the presence of noise.
NASA Astrophysics Data System (ADS)
Ames, D. P.; Peterson, M.; Larsen, J.
2016-12-01
A steady flow of manuscripts describing integrated water resources management (IWRM) modelling has been published in Environmental Modelling & Software since the journal's inaugural issue in 1997. These papers represent two decades of peer-reviewed scientific knowledge regarding methods, practices, and protocols for conducting IWRM. We have undertaken to explore this specific assemblage of literature with the intention of identifying commonly reported procedures in terms of data integration methods, modelling techniques, approaches to stakeholder participation, means of communicating model results, and other elements of the model development and application life cycle. Initial results from this effort will be presented, including a summary of commonly used practices and their evolution over the past two decades. We anticipate that the results will show a pattern of movement toward greater use of stakeholder/participatory modelling methods as well as increased use of automated methods for data integration and model preparation. Interestingly, such results could be interpreted to show that the availability of better, faster, and more integrated software tools and technologies frees the modeler to take a less technocratic and more human approach to water resources modelling.
Jia, Zhenyi; Zhou, Shenglu; Su, Quanlong; Yi, Haomin; Wang, Junxiao
2017-12-26
Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results by the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from BP neural network models have a higher accuracy, the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significant skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively. The estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.
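The cross-validation used above to compare interpolation methods generalizes to any point predictor. The sketch below scores a simple inverse-distance-weighted interpolator by leave-one-out MSE; IDW is only a stand-in for the paper's Kriging and BP neural network models, and the coordinates and values are synthetic.

```python
import numpy as np

def idw_predict(pts, vals, q, power=2.0):
    # Inverse-distance-weighted prediction at query point q
    # (a simple stand-in for a spatial interpolator).
    d = np.linalg.norm(pts - q, axis=1)
    if np.any(d == 0):
        return float(vals[np.argmin(d)])   # exact at sampled locations
    w = d ** -power
    return float(w @ vals / w.sum())

def loo_mse(pts, vals, predict):
    # Leave-one-out cross-validation: hold out each sample, predict it
    # from the rest, and average the squared errors.
    idx = np.arange(len(pts))
    errs = [predict(pts[idx != i], vals[idx != i], pts[i]) - vals[i]
            for i in idx]
    return float(np.mean(np.square(errs)))

# Synthetic sampling locations and geo-accumulation values:
pts = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.ones(len(pts))
score = loo_mse(pts, vals, idw_predict)
```

Swapping `idw_predict` for a Kriging routine or a trained network lets the same `loo_mse` harness compare methods on identical held-out points, as the study does.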
Springer, Jan; Schloßnagel, Hannes; Heinz, Werner; Doedt, Thomas; Soeller, Rainer; Einsele, Hermann
2012-01-01
Diagnosis of invasive aspergillosis (IA) is still a major problem in routine clinical practice. Early diagnosis is essential for a good patient prognosis. PCR is a highly sensitive method for the detection of nucleic acids and could play an important role in improving the diagnosis of fungal infections. Therefore, a novel DNA extraction method, ultraclean production (UCP), was developed, allowing purification of both cellular and cell-free circulating fungal DNA. In this prospective study we evaluated the commercially available UCP extraction system and compared it to an in-house system. Sixty-three patients at high risk for IA were screened twice weekly, and DNA extracted by both methods was cross-analyzed, in triplicate, by two different real-time PCR assays. The negative predictive values were high for all methods (94.3 to 100%), qualifying them as screening methods, but the sensitivity and diagnostic odds ratios were higher with the UCP extraction method. Sensitivity ranged from 33.3 to 66.7% with the in-house extracts, versus 100% with the UCP extraction method. Most of the unclassified patients showed no positive PCR results; however, single positive PCR replicates were observed in some cases. These can be clinically relevant but should be interpreted together with additional clinical and laboratory data. The PCR assays from the UCP extracts showed greater reproducibility than those from the in-house method for probable IA patients. The standardized UCP extraction method yielded results superior to those of the in-house method with regard to sensitivity and reproducibility, independent of the PCR assay used to detect fungal DNA in the sample extracts. PMID:22593600
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-01-01
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with the conventional modeling method shows that it improves model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so that the rotor orientation can be computed from the measured results and analytical models. Experiments were conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the magnetic flux density. The experimental results show that the proposed method measures the rotor orientation precisely, and that the measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators. PMID:25342000
Jia, Yuanyuan; He, Zhongshi; Gholipour, Ali; Warfield, Simon K
2016-11-01
In magnetic resonance (MR) imaging, hardware limitations, scanning time, and patient comfort often result in the acquisition of anisotropic 3-D MR images. Enhancing image resolution is desirable but has been very challenging in medical image processing. Super-resolution reconstruction based on sparse representation and an overcomplete dictionary has lately been employed to address this problem; however, these methods require extra training sets, which may not always be available. This paper proposes a novel single anisotropic 3-D MR image upsampling method via sparse representation and an overcomplete dictionary that is trained from in-plane high-resolution slices to upsample the out-of-plane dimensions. The proposed method therefore does not require extra training sets. Extensive experiments, conducted on simulated and clinical brain MR images, show that the proposed method is more accurate than classical interpolation. When compared to a recent upsampling method based on the nonlocal means approach, the proposed method did not show improved results at low upsampling factors with simulated images, but generated comparable results with much better computational efficiency in clinical cases. The proposed approach can therefore be efficiently implemented and routinely used to upsample MR images in the out-of-plane views for radiologic assessment and post-acquisition processing.
Samarakoon, Kalpa; Senevirathne, Mahinda; Lee, Won-Woo; Kim, Young-Tae; Kim, Jae-Il; Oh, Myung-Cheol
2012-01-01
In this study, the antibacterial effect was evaluated to determine the benefits of high-speed drying (HSD) and far-infrared radiation drying (FIR) compared to the freeze drying (FD) method. Citrus press-cakes (CPCs) are released as a by-product of the citrus processing industry. Previous studies have shown that the HSD and FIR drying methods are much more economical in drying time and mass-drying capacity than FD, even though FD is the best-qualified drying method. A disk diffusion assay was conducted, and the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) were determined with methanol extracts of the dried CPCs against 11 fish- and five food-related pathogenic bacteria. The disk diffusion results indicated that the CPCs dried by HSD, FIR, and FD inhibited the growth of all tested bacteria almost identically. The MIC and MBC results ranged from 0.5-8.0 mg/mL and 1.0-16.0 mg/mL, respectively. Scanning electron microscopy indicated that the extracts changed the morphology of the bacterial cell wall, leading to its destruction. These results suggest that CPCs dried by HSD and FIR show strong antibacterial activity against pathogenic bacteria, and that these are more practical drying methods than the classic FD method for CPC utilization. PMID:22808341
Segmentized Clear Channel Assessment for IEEE 802.15.4 Networks.
Son, Kyou Jung; Hong, Sung Hyeuck; Moon, Seong-Pil; Chang, Tae Gyu; Cho, Hanjin
2016-06-03
This paper proposes segmentized clear channel assessment (CCA), which increases the performance of IEEE 802.15.4 networks by improving carrier sense multiple access with collision avoidance (CSMA/CA). Improving CSMA/CA is important because the low-power consumption and throughput performance of IEEE 802.15.4 are greatly affected by CSMA/CA behavior. To improve the performance of CSMA/CA, this paper focuses on increasing the chance to transmit a packet by assessing channel status precisely. The previous CCA method employed by CSMA/CA assesses the channel by measuring its energy level. However, this method shows limited channel-assessment behavior, which stems from its simple threshold-dependent busy-channel decision. The proposed method solves this limited channel-decision problem by dividing the CCA into two groups and comparing their energy levels to obtain a more precise channel status. To evaluate the performance of the segmentized CCA method, a Markov chain model has been developed, and the analytic results are validated by comparison with simulation results. Additionally, simulation results show that the proposed method improves throughput by up to 8.76% and decreases the average number of CCAs per packet transmission by up to 3.9% relative to the IEEE 802.15.4 CCA method.
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. 
The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.
Li, Mei; Li, Jun; Xia, Zhigui; Xiao, Ning; Jiang, Weikang; Wen, Yongkang
2017-04-30
Early and accurate diagnosis of imported malaria cases arriving in clusters is crucial for protecting the health of patients and local populations, especially for confirmed parasite carriers who are asymptomatic. A total of 226 gold miners who had stayed in highly endemic areas of Ghana for more than six months and returned in clusters were selected randomly. Their blood samples were tested with microscopy, nested polymerase chain reaction, and a rapid diagnostic test (RDT). The sensitivity, specificity, predictive values, agreement rate, and Youden's index of each of the three diagnostic methods were calculated and compared against the defined gold standard. A quick and efficient way to screen such a clustered mobile population was predicted and analyzed by evaluating two assumed strategies combining microscopy and RDT, with or without symptoms of illness. The malaria parasite carriage rate among the gold miners was 19.47%, including 39 P. falciparum infections. Among the three diagnostic methods, microscopy showed the highest specificity, while RDT showed the highest sensitivity but the lowest specificity in detecting P. falciparum. The assumed strategy combining RDT and microscopy with symptoms gave the best results of all the tests in screening for P. falciparum. It was too complex and difficult to catch all parasite carriers in a short period of time among populations in a situation as complicated as that in Shanglin County; a strategy combining microscopy and RDT for diagnosis is therefore highly recommended.
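The screening statistics compared in the study all follow from a 2×2 table of test results against the gold standard. A small sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Standard screening-test statistics from a 2x2 confusion table
    # (test result vs. gold standard).
    total = tp + fp + fn + tn
    sens = tp / (tp + fn)              # sensitivity
    spec = tn / (tn + fp)              # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),         # positive predictive value
        "npv": tn / (tn + fn),         # negative predictive value
        "agreement": (tp + tn) / total,
        "youden": sens + spec - 1,     # Youden's index J
    }

# Hypothetical counts: 40 true positives, 10 false positives,
# 5 false negatives, 45 true negatives
m = diagnostic_metrics(40, 10, 5, 45)
```

Youden's index J = sensitivity + specificity - 1 summarizes how much better than chance a method performs, which is why it is used alongside the agreement rate to rank the three diagnostics.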
Rayleigh Wave Tomography of Mid-Continent Rift (MCR) using Earthquake and Ambient Noise Data
NASA Astrophysics Data System (ADS)
Aleqabi, G. I.; Wiens, D.; Wysession, M. E.; Shen, W.; van der Lee, S.; Revenaugh, J.; Frederiksen, A. W.; Darbyshire, F. A.; Stein, S. A.; Jurdy, D. M.; Wolin, E.; Bollmann, T. A.
2015-12-01
The structure of the North American Mid-Continent Rift Zone (MCRZ) is examined using Rayleigh waves from teleseismic earthquakes and ambient seismic noise recorded by the Superior Province Rifting EarthScope Experiment (SPREE). Eighty-four broadband seismometers were deployed during 2011-2013 in Minnesota and Wisconsin, USA, and Ontario, Canada, along three lines: two across the rift axis and a third along it. These stations, together with the EarthScope Transportable Array, provided excellent coverage of the MCRZ. The 1.1 Ga Mesoproterozoic failed rift consists of two arms, buried under post-rifting sedimentary formations, that meet at Lake Superior. We compare two array-based tomography methods using teleseismic fundamental-mode Rayleigh wave phase and amplitude measurements: the two-plane wave method (TPWM; Forsyth, 1998) and the automated surface wave phase velocity measuring system (ASWMS; Jin and Gaherty, 2015). Both array methods and the ambient noise method give relatively similar results, showing low velocity zones extending along the MCRZ arms. The teleseismic Rayleigh wave results from 18-180 s period are combined with short-period phase velocity results (8-30 s) obtained from ambient noise by cross correlation. Phase velocities from the methods are very similar at periods of 18-30 s where the results overlap; in this period range we use the average of the noise and teleseismic results. Finally, the combined phase velocity curve is inverted using a Monte-Carlo inversion method at each geographic point in the model. The results show low velocities at shallow depths (5-10 km) that are the result of very deep sedimentary fill within the MCRZ. Deeper-seated low velocity regions may correspond to mafic underplating of the rift zone.
Alles, Susan; Peng, Linda X; Mozola, Mark A
2009-01-01
A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.
Hybrid DFP-CG method for solving unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa
2017-09-01
The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient and quasi-Newton methods, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as the Hessian approximation for this new hybrid algorithm. Numerical results showed that the new algorithm performs better than the ordinary DFP method and is proven to possess both sufficient descent and global convergence properties.
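The abstract does not spell out the hybrid DFP-CG direction, so the sketch below shows only the DFP building block it is based on: the inverse-Hessian update applied to a small quadratic test problem with exact line searches.

```python
import numpy as np

# Sketch of the DFP inverse-Hessian update on f(x) = 0.5*x'Ax - b'x.
# This is the standard DFP formula, not the paper's specific hybrid.

def dfp_quadratic(A, b, x0, iters=20):
    x = x0.astype(float)
    H = np.eye(len(b))                       # initial inverse-Hessian guess
    for _ in range(iters):
        g = A @ x - b                        # gradient
        if np.linalg.norm(g) < 1e-12:
            break
        d = -H @ g                           # quasi-Newton search direction
        alpha = -(g @ d) / (d @ A @ d)       # exact line search on a quadratic
        s = alpha * d
        x = x + s
        y = (A @ x - b) - g                  # change in gradient
        H = (H + np.outer(s, s) / (s @ y)
               - np.outer(H @ y, H @ y) / (y @ H @ y))  # DFP update
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = dfp_quadratic(A, b, np.zeros(2))    # minimizer solves A x = b
```

On an n-dimensional quadratic with exact line searches, DFP terminates in at most n iterations; a CG hybrid modifies the direction `d` to retain conjugacy properties when line searches are inexact.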
Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing
2015-01-05
This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion.
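The core idea — constraining the fused image so each region carries the same thermal energy as the IR image — can be sketched with the Stefan-Boltzmann law. This is a hypothetical block-wise version with synthetic data, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of the energy-matching idea: rescale each block of the
# high-resolution VIS image so its regional mean matches the radiant exitance
# j = sigma*T**4 (Stefan-Boltzmann law) of the corresponding IR region,
# treating IR pixel values as temperatures in kelvin.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def energy_constrained_fusion(vis, ir, block=8):
    fused = vis.astype(float).copy()
    for i in range(0, vis.shape[0], block):
        for j in range(0, vis.shape[1], block):
            v = fused[i:i + block, j:j + block]          # VIS region (view)
            target = SIGMA * np.mean(ir[i:i + block, j:j + block] ** 4)
            v *= target / max(v.mean(), 1e-12)           # match regional energy
    return fused

rng = np.random.default_rng(0)
vis = rng.random((16, 16)) + 0.5           # synthetic high-resolution VIS
ir = 200.0 + 100.0 * rng.random((16, 16))  # synthetic IR "temperatures" in K
fused = energy_constrained_fusion(vis, ir)
```

The block-wise scaling preserves the VIS image's within-block structure while forcing the regional energy level to agree with the IR image, which is the constraint the paper applies directly without a multi-resolution decomposition.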
A thermoluminescent method for aerosol characterization
NASA Technical Reports Server (NTRS)
Long, E. R., Jr.; Rogowski, R. S.
1976-01-01
A thermoluminescent method has been used to study the interactions of aerosols with ozone. The preliminary results show that ozone reacts with many compounds found in aerosols, and that the thermoluminescence curves obtained from ozonated aerosols are characteristic of the aerosol. The results suggest several important applications of the thermoluminescent method: development of a detector for identification of effluent sources; a sensitive experimental tool for study of heterogeneous chemistry; evaluation of importance of aerosols in atmospheric chemistry; and study of formation of toxic, electronically excited species in airborne particles.
Analysis of drift correction in different simulated weighing schemes
NASA Astrophysics Data System (ADS)
Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.
2015-10-01
In the calibration of high-accuracy mass standards, weighing schemes are used to reduce or eliminate zero-drift effects in mass comparators. There are different sources of drift and different methods for its treatment. Using numerical methods, drift functions were simulated and a random term was included in each function. A comparison between the results obtained from ABABAB and ABBA weighing series was carried out. The results show better efficacy of the ABABAB method for drift with smooth variation and small randomness.
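The drift-cancelling property of these schemes is simple arithmetic: with readings at equally spaced times, the ABBA combination eliminates any linear drift exactly. A sketch with illustrative figures:

```python
# Sketch: with comparator readings taken at equally spaced times, the ABBA
# sequence cancels a linear zero drift exactly. Masses and drift rate below
# are illustrative, not the paper's simulated data.

def abba_difference(mA, mB, drift_rate):
    # readings at times t = 0, 1, 2, 3, each offset by drift_rate * t
    r = [mA + 0 * drift_rate,   # A
         mB + 1 * drift_rate,   # B
         mB + 2 * drift_rate,   # B
         mA + 3 * drift_rate]   # A
    # ABBA estimator of (A - B): the linear drift terms sum to zero
    return (r[0] - r[1] - r[2] + r[3]) / 2.0

diff = abba_difference(mA=100.000012, mB=100.000005, drift_rate=3e-7)
```

Expanding the estimator gives (mA - mB - d - mB - 2d + mA + 3d)/2 = mA - mB: the drift contributions cancel term by term, which is why only the random (non-smooth) part of the drift distinguishes ABBA from ABABAB in the simulations.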
Scaling analysis of stock markets
NASA Astrophysics Data System (ADS)
Bu, Luping; Shang, Pengjian
2014-06-01
In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. The DFA method detects long-range correlations in time series. The LSDFA method reveals more local properties by using local scale exponents. The DCCA method is a development that quantifies the cross-correlation of two non-stationary time series. We report the auto-correlation and cross-correlation behaviors of three Western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the crisis), and 2010-2012 (after the crisis) using the DFA, LSDFA, and DCCA methods. The findings are that stock correlations are influenced by the economic systems of different countries and by the financial crisis. The results indicate stronger auto-correlations in Chinese stocks than in Western stocks in every period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng. The LSDFA shows more comprehensive and detailed features than the traditional DFA method, and reflects the economic integration of China with the world after the global financial crisis. The cross-correlations show different properties across the six stock markets; for the three Chinese stocks, the cross-correlations are weakest during the global financial crisis.
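The DFA procedure itself is short: integrate the series into a profile, detrend it in windows of size n, and read the scaling exponent off the log-log slope of the fluctuation function F(n). A minimal sketch on synthetic white noise (for which the exponent is near 0.5):

```python
import numpy as np

# Minimal DFA sketch: cumulative profile, window-wise linear detrending,
# and a log-log regression of F(n) on n. Scales and data are illustrative.

def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for n in scales:
        nseg = len(y) // n
        ms = []
        for k in range(nseg):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear fit
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))         # fluctuation function F(n)
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope                               # scaling exponent alpha

rng = np.random.default_rng(42)
alpha = dfa_exponent(rng.standard_normal(4096))   # ~0.5 for white noise
```

Exponents above 0.5 indicate persistent long-range correlations, which is the signature the paper compares across markets and periods; LSDFA replaces the single global slope with local slopes, and DCCA applies the same detrending to the covariance of two profiles.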
Nicoś, M; Krawczyk, P; Wojas-Krawczyk, K; Bożyk, A; Jarosz, B; Sawicki, M; Trojanowski, T; Milanowski, J
2017-12-01
The RT-PCR technique has shown promising value as a pre-screening method for detection of mRNA containing abnormal ALK sequences, but its sensitivity and specificity are still debatable. Previously, we determined the incidence of ALK rearrangement in CNS metastases of NSCLC using IHC and FISH methods. We evaluated ALK gene rearrangement using a two-step RT-PCR method with the EML4-ALK Fusion Gene Detection Kit (Entrogen, USA). The studied group included 145 patients (45 females, 100 males) with CNS metastases of NSCLC and was heterogeneous in terms of histology and smoking status. Of the CNS metastases of NSCLC, 21% (30/145) showed the presence of mRNA containing abnormal ALK sequences. FISH and IHC tests confirmed the presence of ALK gene rearrangement and expression of abnormal ALK protein in seven patients with a positive RT-PCR result (4.8% of all patients, 20% of RT-PCR-positive patients). Compared to FISH analysis, the RT-PCR method achieved 100% sensitivity but only 82.7% specificity. The IHC method compared to FISH showed 100% sensitivity and 97.8% specificity. In comparison to IHC, RT-PCR showed identical sensitivity with a high number of false-positive results. The utility of the RT-PCR technique in screening for ALK abnormalities and in qualifying patients for molecularly targeted therapies needs further validation.
Virtual and stereoscopic anatomy: when virtual reality meets medical education.
de Faria, Jose Weber Vieira; Teixeira, Manoel Jacobsen; de Moura Sousa Júnior, Leonardo; Otoch, Jose Pinhata; Figueiredo, Eberval Gadelha
2016-11-01
OBJECTIVE The authors sought to construct, implement, and evaluate an interactive and stereoscopic resource for teaching neuroanatomy, accessible from personal computers. METHODS Forty fresh brains (80 hemispheres) were dissected. Images of areas of interest were captured using a manual turntable and processed and stored in a 5337-image database. Pedagogic evaluation was performed in 84 graduate medical students, divided into 3 groups: 1 (conventional method), 2 (interactive nonstereoscopic), and 3 (interactive and stereoscopic). The method was evaluated through a written theory test and a lab practicum. RESULTS Groups 2 and 3 showed the highest mean scores in pedagogic evaluations and differed significantly from Group 1 (p < 0.05). Group 2 did not differ statistically from Group 3 (p > 0.05). Effect sizes, measured as differences in scores before and after lectures, indicate the effectiveness of the method. ANOVA results showed a significant difference (p < 0.05) between groups, and the Tukey test showed statistical differences between Group 1 and the other 2 groups (p < 0.05). No statistical differences between Groups 2 and 3 were found in the practicum. However, there were significant differences when Groups 2 and 3 were compared with Group 1 (p < 0.05). CONCLUSIONS The authors conclude that this method promoted further improvement in knowledge for students and fostered significantly higher learning when compared with traditional teaching resources.
Yin, Chaomin; Fan, Xiuzhi; Fan, Zhe; Shi, Defang; Gao, Hong
2018-05-01
An enzyme-microwave-ultrasound-assisted extraction (EMUE) method was used to extract Lentinus edodes polysaccharides (LEPs). The enzymatic temperature, enzymatic pH, microwave power, and microwave time were optimized by response surface methodology. The yields, properties, and antioxidant activities of LEPs from EMUE and other extraction methods, including hot-water extraction, enzyme-assisted extraction, microwave-assisted extraction, and ultrasound-assisted extraction, were evaluated. The results showed that the highest LEPs yield of 9.38% was achieved with an enzymatic temperature of 48°C, enzymatic pH of 5.0, microwave power of 440 W, and microwave time of 10 min, which correlated well with the predicted value of 9.79%. Additionally, LEPs from the different extraction methods possessed the typical absorption peaks of polysaccharides, meaning the extraction methods had no significant effect on the type of glycosidic bonds or the sugar rings of the LEPs. However, SEM images of LEPs from the different extraction methods were significantly different. Moreover, all the LEPs showed antioxidant activities, but LEPs from EMUE showed the highest reducing power compared to the other LEPs. The results indicate that LEPs from EMUE can be used as a natural antioxidant component in the pharmaceutical and functional food industries. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalonde, A; Bouchard, H
Purpose: To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and the associated proton stopping power relative to water (SPR) and photon mass absorption coefficients (EAC). Methods: Principal component analysis is used to extract an optimal basis of virtual materials from a reference dataset of tissues. These principal components (PC) are used to perform two-material decomposition using simulated DECT data. The elemental mass fraction and the electron density in each tissue are retrieved by measuring the fraction of each PC. A stoichiometric calibration method is adapted to the technique to make it suitable for clinical use. The present approach is compared with two others: parametrization and three-material decomposition using the water-lipid-protein (WLP) triplet. Results: Monte Carlo simulations using TOPAS for four reference tissues show that characterizing them with only two PC is enough to achieve submillimetric precision on proton range prediction. Based on the simulated DECT data of 43 reference tissues, the proposed method is in agreement with theoretical values of proton SPR and low-kV EAC with RMS errors of 0.11% and 0.35%, respectively. In comparison, parametrization and WLP respectively yield RMS errors of 0.13% and 0.29% on SPR, and 2.72% and 2.19% on EAC. Furthermore, the proposed approach shows potential applications for spectral CT. Using five PC and five energy bins reduces the SPR RMS error to 0.03%. Conclusion: The proposed method shows good performance in determining elemental compositions from DECT data and physical quantities relevant to radiotherapy dose calculation, and generally shows better accuracy and unbiased results compared to reference methods. The proposed method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.
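The PCA step can be sketched in a few lines: extract principal components from a reference set of tissue vectors, then represent an unseen tissue by its fractions of the first two components. The data below are synthetic stand-ins, not the paper's 43-tissue reference set.

```python
import numpy as np

# Toy sketch of the virtual-material idea: build two principal components
# from a reference set of tissue vectors and reconstruct a new tissue from
# its two PC fractions. "basis" rows are hypothetical virtual materials.

rng = np.random.default_rng(1)
basis = np.array([[1.0, 0.2, 0.05],       # virtual material 1 (synthetic)
                  [0.3, 1.0, 0.40]])      # virtual material 2 (synthetic)
weights = rng.random((50, 2))
tissues = weights @ basis + 0.001 * rng.standard_normal((50, 3))  # reference set

mean = tissues.mean(axis=0)
_, _, Vt = np.linalg.svd(tissues - mean, full_matrices=False)
pcs = Vt[:2]                               # first two principal components

new_tissue = 0.6 * basis[0] + 0.4 * basis[1]
coeffs = pcs @ (new_tissue - mean)         # two-material decomposition
reconstructed = mean + coeffs @ pcs
```

Because the reference tissues lie (up to noise) in a two-dimensional subspace, two PC fractions suffice to reconstruct a new tissue accurately — the same reason two components give submillimetric proton range precision in the paper's simulations.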
Remote air pollution measurement
NASA Technical Reports Server (NTRS)
Byer, R. L.
1975-01-01
This paper presents a discussion and comparison of the Raman method, the resonance and fluorescence backscatter method, long path absorption methods and the differential absorption method for remote air pollution measurement. A comparison of the above remote detection methods shows that the absorption methods offer the most sensitivity at the least required transmitted energy. Topographical absorption provides the advantage of a single ended measurement, and differential absorption offers the additional advantage of a fully depth resolved absorption measurement. Recent experimental results confirming the range and sensitivity of the methods are presented.
Evaluation of Piloted Inputs for Onboard Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Martos, Borja
2013-01-01
Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.
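Frequency responses can be estimated from input/output records with averaged spectra, H(f) = Sxy(f)/Sxx(f). The sketch below uses a hypothetical first-order lag with a wide-band input, not the paper's F-16 or Saratoga models or its real-time multisine algorithm:

```python
import numpy as np

# Sketch of cross-spectral frequency response estimation, H = Sxy / Sxx,
# averaged over segments. The test system is a first-order lag with unit
# DC gain; all signals are synthetic.

def csd_frf(u, y, nseg=16):
    n = len(u) // nseg
    sxx = np.zeros(n // 2 + 1)
    sxy = np.zeros(n // 2 + 1, dtype=complex)
    for k in range(nseg):
        U = np.fft.rfft(u[k * n:(k + 1) * n])
        Y = np.fft.rfft(y[k * n:(k + 1) * n])
        sxx += (np.conj(U) * U).real       # input auto-spectrum
        sxy += np.conj(U) * Y              # input-output cross-spectrum
    return sxy / sxx

rng = np.random.default_rng(7)
u = rng.standard_normal(8192)              # wide-band input
a, y = 0.9, np.zeros(8192)
for k in range(1, 8192):
    y[k] = a * y[k - 1] + (1 - a) * u[k]   # first-order lag, DC gain 1

H = csd_frf(u, y)
```

Segment averaging is what makes wide-band (including piloted) inputs usable: individual-record ratios Y/U are noisy, but the averaged cross-spectral estimate converges to the true response.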
Analysis of forecasting and inventory control of raw material supplies in PT INDAC INT’L
NASA Astrophysics Data System (ADS)
Lesmana, E.; Subartini, B.; Riaman; Jabar, D. A.
2018-03-01
This study discusses forecasting of carbon electrode sales data at PT INDAC INT'L using the Winters and double moving average methods, while the Economic Order Quantity (EOQ) model is used to predict the amount of inventory and the cost required for ordering carbon electrode raw material in the next period. Error analysis using MAE, MSE, and MAPE shows that the Winters method is the better forecasting method for carbon electrode product sales. PT INDAC INT'L is therefore advised to stock products in line with the sales volumes forecast by the Winters method.
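The EOQ calculation referred to above is the classic Wilson formula Q* = sqrt(2DS/H). A minimal sketch with illustrative figures, not the company's data:

```python
import math

# EOQ sketch: Q* = sqrt(2*D*S/H), where D is annual demand, S the ordering
# cost per order, and H the holding cost per unit per year. At Q* the annual
# ordering cost equals the annual holding cost. Figures are illustrative.

def eoq(demand, order_cost, holding_cost):
    q = math.sqrt(2 * demand * order_cost / holding_cost)
    total_cost = order_cost * demand / q + holding_cost * q / 2
    return q, total_cost

q_star, cost = eoq(demand=12000, order_cost=50.0, holding_cost=3.0)
```

Once a forecasting method (here, Winters) supplies the demand D for the next period, the EOQ gives the order quantity that minimizes the combined ordering and holding cost.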
Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems
Saberi Nik, Hassan; Rebelo, Paulo
2014-01-01
We present a pseudospectral method application for solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on extending Gauss-Seidel-type relaxation ideas to systems of nonlinear differential equations and using Chebyshev pseudospectral methods to solve the resulting system on a sequence of multiple intervals. In this new application, the MSRM is used to solve famous hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta-based ode45 solver to show that the MSRM gives accurate results. PMID:25386624
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, Juliane; Tolson, Bryan
2017-04-01
The increasing complexity and runtime of environmental models lead to the current situation in which the calibration of all model parameters, or the estimation of all of their uncertainties, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method-independency of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al., 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010), and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known.
This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model-independency by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is the necessity of no further model evaluation and therefore enables checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable, published sensitivity results.
Prior-based artifact correction (PBAC) in computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heußer, Thorsten, E-mail: thorsten.heusser@dkfz-heidelberg.de; Brehm, Marcus; Ritschl, Ludwig
2014-02-15
Purpose: Image quality in computed tomography (CT) often suffers from artifacts which may reduce the diagnostic value of the image. In many cases, these artifacts result from missing or corrupt regions in the projection data, e.g., in the case of metal, truncation, and limited angle artifacts. The authors propose a generalized correction method for different kinds of artifacts resulting from missing or corrupt data by making use of available prior knowledge to perform data completion. Methods: The proposed prior-based artifact correction (PBAC) method requires prior knowledge in form of a planning CT of the same patient or in form of a CT scan of a different patient showing the same body region. In both cases, the prior image is registered to the patient image using a deformable transformation. The registered prior is forward projected and data completion of the patient projections is performed using smooth sinogram inpainting. The obtained projection data are used to reconstruct the corrected image. Results: The authors investigate metal and truncation artifacts in patient data sets acquired with a clinical CT and limited angle artifacts in an anthropomorphic head phantom data set acquired with a gantry-based flat detector CT device. In all cases, the corrected images obtained by PBAC are nearly artifact-free. Compared to conventional correction methods, PBAC achieves better artifact suppression while preserving the patient-specific anatomy at the same time. Further, the authors show that prominent anatomical details in the prior image seem to have only minor impact on the correction result. Conclusions: The results show that PBAC has the potential to effectively correct for metal, truncation, and limited angle artifacts if adequate prior data are available. Since the proposed method makes use of a generalized algorithm, PBAC may also be applicable to other artifacts resulting from missing or corrupt data.
Transverse vibrations of non-uniform beams. [combined finite element and Rayleigh-Ritz methods
NASA Technical Reports Server (NTRS)
Klein, L.
1974-01-01
The free vibrations of elastic beams with nonuniform characteristics are investigated theoretically by a new method. The new method is seen to combine the advantages of a finite element approach and of a Rayleigh-Ritz analysis. Comparison with the known analytical results for uniform beams shows good convergence of the method for natural frequencies and modes. For internal shear forces and bending moments, the rate of convergence is less rapid. Results from experiments conducted with a cantilevered helicopter blade with strong nonuniformities and also from alternative theoretical methods, indicate that the theory adequately predicts natural frequencies and mode shapes. General guidelines for efficient use of the method are presented.
Analysis of titanium content in titanium tetrachloride solution
NASA Astrophysics Data System (ADS)
Bi, Xiaoguo; Dong, Yingnan; Li, Shanshan; Guan, Duojiao; Wang, Jianyu; Tang, Meiling
2018-03-01
Strontium titanate, barium titanate, and lead titanate are new types of functional ceramic materials with good prospects, and titanium tetrachloride is a common raw material in the production of such products, which show excellent electrochemical performance and a ferroelectric temperature-coefficient effect. In this article, three methods are used to calibrate samples of titanium tetrachloride solution: the back titration method, the replacement titration method, and the gravimetric analysis method. The results show that the back titration method has many good points: relatively simple operation, easy judgment of the titration end point, and better accuracy and precision of analytical results, with a relative standard deviation within 0.2%. It is therefore the ideal routine analysis method for mass production.
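The back-titration arithmetic reduces to a difference of moles: a known excess of complexing agent is added, the unreacted portion is titrated back, and the titanium content follows from the difference. A sketch with entirely hypothetical concentrations and volumes (the abstract gives none):

```python
# Back-titration arithmetic sketch. All concentrations, volumes, and the
# sample mass are hypothetical; only the formula structure is the point:
# n(Ti) = n(complexant added) - n(complexant back-titrated).

M_TI = 47.867  # molar mass of titanium, g/mol

def titanium_mass_fraction(c_edta, v_edta_ml, c_back, v_back_ml, sample_g):
    n_ti = c_edta * v_edta_ml / 1000 - c_back * v_back_ml / 1000  # mol Ti
    return n_ti * M_TI / sample_g * 100                           # wt%

pct = titanium_mass_fraction(c_edta=0.05, v_edta_ml=25.0,
                             c_back=0.05, v_back_ml=10.2, sample_g=0.1500)
```

Replicate titrations of the same sample give the relative standard deviation quoted in the abstract as the precision figure of merit.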
NASA Technical Reports Server (NTRS)
Baer-Riedhart, J. L.
1982-01-01
A simplified gross thrust calculation method was evaluated on its ability to predict the gross thrust of a modified J85-21 engine. The method used tailpipe pressure data and ambient pressure data to predict the gross thrust. Its algorithm is based on a one-dimensional analysis of the flow in the afterburner and nozzle. The test results showed that the method was notably accurate over the engine operating envelope, using the altitude-facility measured thrust for comparison. A summary of these results, the simplified gross thrust method and its requirements, and the test techniques used are discussed in this paper.
Efficient Jacobi-Gauss collocation method for solving initial value problems of Bratu type
NASA Astrophysics Data System (ADS)
Doha, E. H.; Bhrawy, A. H.; Baleanu, D.; Hafez, R. M.
2013-09-01
In this paper, we propose the shifted Jacobi-Gauss collocation spectral method for solving initial value problems of Bratu type, which are widely applicable in fuel ignition of combustion theory and in heat transfer. The spatial approximation is based on shifted Jacobi polynomials J_n^(α,β)(x) with α, β ∈ (-1, ∞), x ∈ [0, 1], and polynomial degree n. The shifted Jacobi-Gauss points are used as collocation nodes. Illustrative examples are discussed to demonstrate the validity and applicability of the proposed technique. Comparing the numerical results of the proposed method with some well-known results shows that the method is efficient and gives excellent numerical results.
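The collocation idea can be sketched on a Bratu-type IVP with a known closed form: u'' = 2e^u, u(0) = u'(0) = 0, whose exact solution is u(x) = -2 ln(cos x). The sketch below uses a plain monomial basis with Chebyshev-Gauss nodes standing in for the paper's shifted Jacobi polynomials and collocation points:

```python
import numpy as np

# Collocation sketch for u'' = 2*exp(u), u(0) = u'(0) = 0 on [0, 1].
# A monomial basis (c0 = c1 = 0 enforce the initial conditions) with
# Chebyshev-Gauss nodes replaces the shifted Jacobi machinery of the paper;
# Newton's method with a finite-difference Jacobian solves the system.

N = 10                                           # highest polynomial degree
k = np.arange(2, N + 1)                          # free coefficients c2..cN
m = len(k)
nodes = 0.5 * (1 - np.cos(np.pi * (np.arange(m) + 0.5) / m))   # in (0, 1)

def residual(c):
    u = (nodes[:, None] ** k) @ c                # u at collocation nodes
    upp = (k * (k - 1) * nodes[:, None] ** (k - 2)) @ c        # u''
    return upp - 2.0 * np.exp(u)

c = np.zeros(m)
for _ in range(30):                              # Newton iteration
    r = residual(c)
    if np.max(np.abs(r)) < 1e-10:
        break
    J = np.empty((m, m))
    for j in range(m):
        dc = np.zeros(m)
        dc[j] = 1e-7
        J[:, j] = (residual(c + dc) - r) / 1e-7  # finite-difference Jacobian
    c -= np.linalg.solve(J, r)

u_half = (0.5 ** k) @ c                          # compare with -2*ln(cos(0.5))
```

A degree-10 polynomial already matches the exact solution to well below 1e-3 on [0, 1]; orthogonal Jacobi bases improve conditioning over the raw monomials used here.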
Mirsadraee, Majid; Shafahie, Ahmad; Reza Khakzad, Mohammad; Sankian, Mojtaba
2014-04-01
Anthracofibrosis is black discoloration of the bronchial mucosa with deformity and obstruction. An association of this disease with tuberculosis (TB) has been established. The objective of this study was to determine the additional benefit of assessing TB by the polymerase chain reaction (PCR) method. Bronchoscopy was performed on 103 subjects (54 with anthracofibrosis and 49 controls) who required bronchoscopy for their pulmonary problems. According to bronchoscopic findings, participants were classified into anthracofibrosis and nonanthracotic groups. They were examined for TB with traditional methods such as direct smear (Ziehl-Neelsen staining), Löwenstein-Jensen culture, and histopathology, and with the new "PCR" method for the Mycobacterium tuberculosis genome (IS6110). Age, sex, smoking, and clinical findings were not significantly different between the TB and non-TB groups. Acid-fast bacilli could be detected by direct smear in 12 (25%) of the anthracofibrosis subjects, and adding the results of the culture and histopathology traditional tests indicated TB in 27 (31%) of the cases. Mycobacterium tuberculosis was diagnosed by PCR in 18 (33%) patients, but the difference was not significant. Detection of acid-fast bacilli in control nonanthracotic subjects was significantly lower (3, 6%), but PCR (20, 40%) and the accumulated results from all traditional methods (22, 44%) showed a nonsignificant difference. The PCR method showed a result equal to the traditional methods, including the accumulation of smear, culture, and histopathology.
Pourhoseingholi, Mohamad Amin; Kheirian, Sedigheh; Zali, Mohammad Reza
2017-12-01
Colorectal cancer (CRC) is one of the most common malignancies and causes of cancer mortality worldwide. Given the importance of predicting the survival of CRC patients and the growing use of data mining methods, this study aims to compare the performance of models for predicting 5-year survival of CRC patients using a variety of basic and ensemble data mining methods. The CRC dataset from the Shahid Beheshti University of Medical Sciences Research Center for Gastroenterology and Liver Diseases was used for prediction and for a comparative study of the base and ensemble data mining techniques. Feature selection methods were used to select predictor attributes for classification. The WEKA toolkit and MedCalc software were respectively utilized for creating and comparing the models. The results showed that the predictive performance of the developed models was altogether high (all greater than 90%). Overall, the performance of ensemble models was higher than that of basic classifiers, and the best result was achieved by the ensemble voting model in terms of area under the ROC curve (AUC = 0.96). AUC comparison of the models showed that the ensemble voting method significantly outperformed all models except Random Forest (RF) and Bayesian Network (BN), considering their overlapping 95% confidence intervals. This result may indicate the high predictive power of these two methods, along with ensemble voting, for predicting 5-year survival of CRC patients.
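The AUC used to rank these models has a simple probabilistic reading: it equals the Mann-Whitney statistic, the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch with illustrative scores:

```python
# AUC sketch via the Mann-Whitney statistic: the fraction of
# positive/negative pairs in which the positive case scores higher
# (ties count half). Scores below are illustrative, not the study's.

def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

a = auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.5, 0.3, 0.2])
```

Comparing models by AUC with 95% confidence intervals, as the study does with MedCalc, amounts to asking whether these pairwise-ranking probabilities differ by more than their sampling uncertainty.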
A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields
NASA Astrophysics Data System (ADS)
Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang
2017-03-01
Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.
Assessment of Mudrock Brittleness with Micro-scratch Testing
NASA Astrophysics Data System (ADS)
Hernandez-Uribe, Luis Alberto; Aman, Michael; Espinoza, D. Nicolas
2017-11-01
Mechanical properties are essential for understanding natural and induced deformational behavior of geological formations. Brittleness characterizes energy dissipation rate and strain localization at failure. Brittleness has been investigated in hydrocarbon-bearing mudrocks in order to quantify the impact of hydraulic fracturing on the creation of complex fracture networks and surface area for reservoir drainage. Typical well logging correlations associate brittleness with carbonate content or dynamic elastic properties. However, an index of rock brittleness should involve actual rock failure and have a consistent method to quantify it. Here, we present a systematic method to quantify mudrock brittleness based on micro-mechanical measurements from the scratch test. Brittleness is formulated as the ratio of energy associated with brittle failure to the total energy required to perform a scratch. Soda lime glass and polycarbonate are used for comparison to identify failure in brittle and ductile mode and validate the developed method. Scratch testing results on mudrocks indicate that it is possible to use the recorded transverse force to estimate brittleness. Results show that tested samples rank as follows in increasing degree of brittleness: Woodford, Eagle Ford, Marcellus, Mancos, and Vaca Muerta. Eagle Ford samples show mixed ductile/brittle failure characteristics. There appears to be no definite correlation between micro-scratch brittleness and quartz or total carbonate content. Dolomite content shows a stronger correlation with brittleness than any other major mineral group. The scratch brittleness index correlates positively with increasing Young's modulus and decreasing Poisson's ratio, but shows deviations in rocks with distinct porosity and with stress-sensitive brittle/ductile behavior (Eagle Ford). The results of our study demonstrate that the micro-scratch test method can be used to investigate mudrock brittleness. 
The method is particularly useful for reservoir characterization methods that take advantage of drill cuttings or whenever large samples for triaxial testing or fracture mechanics testing cannot be recovered.
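The energy-ratio brittleness formulation above can be sketched numerically. The force trace, the event-detection rule (a fixed fractional force drop marking a brittle chipping event) and all numbers below are illustrative assumptions, not the paper's actual data or criterion.

```python
# Hedged sketch: brittleness index as the fraction of total scratch work
# associated with abrupt transverse-force drops (brittle events).

def scratch_energy(force, dx):
    """Total work = integral of force over scratch length (trapezoid rule)."""
    return sum(0.5 * (force[i] + force[i + 1]) * dx for i in range(len(force) - 1))

def brittle_energy(force, dx, drop_frac=0.2):
    """Sum work over segments where force drops by more than drop_frac."""
    e = 0.0
    for i in range(len(force) - 1):
        if force[i + 1] < (1.0 - drop_frac) * force[i]:
            e += 0.5 * (force[i] + force[i + 1]) * dx
    return e

def brittleness_index(force, dx):
    return brittle_energy(force, dx) / scratch_energy(force, dx)

force = [1.0, 1.1, 0.5, 1.2, 1.3, 0.6, 1.1]   # two sharp drops = brittle events
print(round(brittleness_index(force, dx=0.01), 3))
```

A fully ductile trace (no sharp drops) yields an index of zero; a trace dominated by chipping events approaches one.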
NASA Astrophysics Data System (ADS)
Wang, Yi-Hong; Wu, Guo-Cheng; Baleanu, Dumitru
2013-10-01
The variational iteration method is given a new role in constructing iterative solutions of integral equations of fractional order. Several iterative schemes are proposed that fully combine the method with the predictor-corrector approach. The fractional Bagley-Torvik equation is then solved as an illustrative multi-order example, and the results show the efficiency of the variational iteration method in this new role.
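For orientation, the generic correction functional at the heart of the variational iteration method has the following form (notation assumed here, not taken from the paper; the fractional-order schemes modify the integral kernel accordingly):

```latex
% L = linear operator, N = nonlinear operator, f = source term,
% \lambda = Lagrange multiplier, \tilde{u}_n = restricted variation.
u_{n+1}(t) = u_n(t) + \int_0^t \lambda(s)\,
    \bigl[ L u_n(s) + N \tilde{u}_n(s) - f(s) \bigr]\, ds
```

Each iterate refines the previous approximation, and the predictor-corrector step supplies the starting values for the next iteration.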
Interlinking backscatter, grain size and benthic community structure
NASA Astrophysics Data System (ADS)
McGonigle, Chris; Collier, Jenny S.
2014-06-01
The relationship between acoustic backscatter, sediment grain size and benthic community structure is examined using three different quantitative methods, covering image- and angular response-based approaches. Multibeam time-series backscatter (300 kHz) data acquired in 2008 off the coast of East Anglia (UK) are compared with grain size properties, macrofaunal abundance and biomass from 130 Hamon and 16 Clamshell grab samples. Three predictive methods are used: 1) image-based (mean backscatter intensity); 2) angular response-based (predicted mean grain size), and 3) image-based (1st principal component and classification) from Quester Tangent Corporation Multiview software. Relationships between grain size and backscatter are explored using linear regression. Differences in grain size and benthic community structure between acoustically defined groups are examined using ANOVA and PERMANOVA+. Results for the Hamon grab stations indicate significant correlations between measured mean grain size and mean backscatter intensity, angular response predicted mean grain size, and 1st principal component of QTC analysis (all p < 0.001). Results for the Clamshell grab for two of the methods have stronger positive correlations; mean backscatter intensity (r2 = 0.619; p < 0.001) and angular response predicted mean grain size (r2 = 0.692; p < 0.001). ANOVA reveals significant differences in mean grain size (Hamon) within acoustic groups for all methods: mean backscatter (p < 0.001), angular response predicted grain size (p < 0.001), and QTC class (p = 0.009). Mean grain size (Clamshell) shows a significant difference between groups for mean backscatter (p = 0.001); other methods were not significant. PERMANOVA for the Hamon abundance shows benthic community structure was significantly different between acoustic groups for all methods (p ≤ 0.001). 
Overall these results show considerable promise in that more than 60% of the variance in the mean grain size of the Clamshell grab samples can be explained by mean backscatter or acoustically-predicted grain size. These results show that there is significant predictive capacity for sediment characteristics from multibeam backscatter and that these acoustic classifications can have ecological validity.
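The backscatter-versus-grain-size regressions reported above reduce to an ordinary least-squares fit with a coefficient of determination. The sketch below uses invented backscatter (dB) and grain size (phi) values purely to illustrate the computation of r².

```python
# Minimal least-squares fit of mean grain size on mean backscatter intensity,
# reporting slope, intercept and coefficient of determination r^2.

def linfit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2

backscatter = [-32.0, -28.5, -25.0, -22.0, -18.5]   # dB (invented)
grain_size = [2.5, 1.8, 1.1, 0.4, -0.6]             # phi units (invented)
slope, intercept, r2 = linfit(backscatter, grain_size)
print(round(r2, 3))
```

An r² of 0.619 or 0.692, as reported for the Clamshell grab, means that fraction of the grain-size variance is explained by the acoustic predictor.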
Study on Sumbawa gold recovery using centrifuge
NASA Astrophysics Data System (ADS)
Ferdana, A. D.; Petrus, H. T. B. M.; Bendiyasa, I. M.; Prijambada, I. D.; Hamada, F.; Sachiko, T.
2018-01-01
Artisanal small-scale gold mining in Sumbawa has been processing gold with mercury (Hg), which poses a serious threat to the local mining environment and to the global environment. One method of gold processing that does not use mercury is the gravity method. Before processing, the ore was first characterized by mineragraphic analysis and by compound analysis with XRD. The mineragraphic results show that gold is associated with chalcopyrite and covellite and also occurs as single native particles of size 58.8 μm, 117 μm and up to 294 μm. Characterization by XRD shows that the Sumbawa gold ore is composed of quartz, pyrite, pyroxene, and sericite. Centrifugation is a gravity separation technique that upgrades the concentrate based on differences in specific gravity. The optimum concentration result is influenced by several variables, such as the water flow rate and the particle size. In the present research, the flow rates were 5 lpm and 10 lpm, and the particle sizes were -100+200 mesh and -200+300 mesh. The gold concentration in the concentrate was measured by EDX. The results show that the optimum condition is obtained at a separation with a flow rate of 5 lpm and a particle size of -100+200 mesh.
A Novel Discrete Optimal Transport Method for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.
2017-12-01
We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
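The matrix-scaling route to the transport linear program mentioned above can be illustrated with Sinkhorn iterations on an entropically regularized discrete optimal transport problem. This is a stand-in sketch, not the AET algorithm itself; the marginals and cost matrix are toy values, not reservoir data.

```python
# Entropic-regularized discrete optimal transport via Sinkhorn matrix scaling.
import math

def sinkhorn(a, b, cost, eps=0.1, n_iter=500):
    """Return a transport plan whose marginals approximate a (rows) and b (cols)."""
    n, m = len(a), len(b)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(n_iter):  # alternate scaling of rows and columns
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

a = [0.5, 0.5]                      # source weights (e.g. prior particles)
b = [0.25, 0.75]                    # target weights (e.g. posterior)
cost = [[0.0, 1.0], [1.0, 0.0]]     # pairwise transport costs
plan = sinkhorn(a, b, cost)
row_sums = [sum(row) for row in plan]
print([round(s, 3) for s in row_sums])   # rows recover the source marginal a
```

In a sequential scheme, such a plan maps weighted prior samples toward posterior samples, from which posterior moments can be computed.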
Modified Spectral Fatigue Methods for S-N Curves With MIL-HDBK-5J Coefficients
NASA Technical Reports Server (NTRS)
Irvine, Tom; Larsen, Curtis
2016-01-01
The rainflow method is used for counting fatigue cycles from a stress response time history, where the fatigue cycles are stress-reversals. The rainflow method allows the application of Palmgren-Miner's rule in order to assess the fatigue life of a structure subject to complex loading. The fatigue damage may also be calculated from a stress response power spectral density (PSD) using the semi-empirical Dirlik, Single Moment, Zhao-Baker and other spectral methods. These methods effectively assume that the PSD has a corresponding time history which is stationary with a normal distribution. This paper shows how the probability density function for rainflow stress cycles can be extracted from each of the spectral methods. This extraction allows for the application of the MIL-HDBK-5J fatigue coefficients in the cumulative damage summation. A numerical example is given in this paper for the stress response of a beam undergoing random base excitation, where the excitation is applied separately by a time history and by its corresponding PSD. The fatigue calculation is performed in the time domain, as well as in the frequency domain via the modified spectral methods. The result comparison shows that the modified spectral methods give comparable results to the time domain rainflow counting method.
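The cumulative damage summation step above reduces to Palmgren-Miner's rule over a rainflow cycle histogram. The sketch below uses a simple Basquin-type S-N curve N = (A/S)^b with invented coefficients; MIL-HDBK-5J tabulates material-specific curve fits of a different functional form, which would replace `cycles_to_failure` in practice.

```python
# Hedged sketch of Palmgren-Miner cumulative damage with a Basquin-type
# S-N curve. Coefficients A and b are illustrative, not handbook values.

def cycles_to_failure(stress, A=1000.0, b=4.0):
    """Basquin-type S-N curve: allowable cycles at a given stress amplitude."""
    return (A / stress) ** b

def miner_damage(cycle_counts):
    """cycle_counts: list of (stress_amplitude, applied_cycles) from rainflow."""
    return sum(n / cycles_to_failure(s) for s, n in cycle_counts)

# Rainflow-style histogram of stress reversals (made-up numbers).
histogram = [(100.0, 5000), (200.0, 300), (400.0, 10)]
damage = miner_damage(histogram)
print(round(damage, 4))   # failure is predicted when damage reaches 1.0
```

In the spectral methods, the histogram is replaced by the rainflow cycle probability density extracted from the PSD, but the damage summation is the same.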
Assessment of forward head posture in females: observational and photogrammetry methods.
Salahzadeh, Zahra; Maroufi, Nader; Ahmadi, Amir; Behtash, Hamid; Razmjoo, Arash; Gohari, Mahmoud; Parnianpour, Mohamad
2014-01-01
There are different methods to assess forward head posture (FHP), but the accuracy and discrimination ability of these methods are not clear. Here, we compare three postural angles for FHP assessment and also study the discrimination accuracy of three photogrammetric methods in differentiating groups categorized by the observational method. Seventy-eight healthy female participants (23 ± 2.63 years) were classified into three groups: moderate-severe FHP, slight FHP and no FHP, based on observational postural assessment rules. Three photogrammetric methods (craniovertebral angle, head tilt angle and head position angle) were applied to measure FHP objectively. A one-way ANOVA test showed a significant difference in the craniovertebral angle across the three groups (P < 0.05, F = 83.07). There was no notable difference between the three groups for the head tilt angle and head position angle methods. According to the Linear Discriminant Analysis (LDA) results, the canonical discriminant function (Wilks' Lambda) was 0.311 for the craniovertebral angle, with 79.5% of cross-validated grouped cases correctly classified. Our results showed that the craniovertebral angle method may discriminate females with moderate-severe FHP from those without FHP more accurately than the head position angle and head tilt angle. The photogrammetric method had excellent inter- and intra-rater reliability for assessing head and cervical posture.
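The craniovertebral angle is conventionally computed from two photographic landmarks: the angle between the horizontal through C7 and the line from C7 to the tragus of the ear. The coordinates below are illustrative, not study data.

```python
# Craniovertebral angle from 2-D photogrammetric landmarks.
import math

def craniovertebral_angle(c7, tragus):
    """Angle (degrees) between the C7 horizontal and the C7-tragus line."""
    dx = tragus[0] - c7[0]
    dy = tragus[1] - c7[1]
    return math.degrees(math.atan2(dy, dx))

c7 = (0.0, 0.0)
tragus = (5.0, 8.0)      # tragus forward of and above C7 (image coordinates)
print(round(craniovertebral_angle(c7, tragus), 1))
```

A smaller craniovertebral angle corresponds to a more forward head position, which is why this angle separates the FHP groups.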
Agbangla, N F; Audiffren, M; Albinet, C T
2017-12-20
Using continuous-wave near-infrared spectroscopy (NIRS), this study compared three different methods, namely the slope method (SM), the amplitude method (AM), and the area under the curve (AUC) method, to determine the variations of intramuscular oxygenation level as a function of workload. Ten right-handed subjects (22 ± 4 years) performed one isometric contraction at each of three different workloads (30%, 50% and 90% of maximal voluntary strength) for a period of twenty seconds. Changes in oxyhemoglobin (Δ[HbO2]) and deoxyhemoglobin (Δ[HHb]) concentrations in the superficial flexor of the fingers were recorded using continuous-wave NIRS. The results showed a strong consistency between the three methods, with standardized Cronbach alphas of 0.87 for Δ[HHb] and 0.95 for Δ[HbO2]. No significant differences between the three methods were observed for Δ[HHb] as a function of workload. However, only the SM showed sufficient sensitivity to detect a significant decrease in Δ[HbO2] between 30% and 50% of workload (p < 0.01). Among these three methods, the SM appeared to be the only one well adapted and sensitive enough to detect slight changes in Δ[HbO2]. Theoretical and methodological implications of these results are discussed.
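The three descriptors compared above can each be computed from a sampled concentration trace: a least-squares slope over the contraction window (SM), the end-minus-start change (AM), and the trapezoid-rule area (AUC). The trace below is synthetic, standing in for a recorded Δ[HbO2] signal.

```python
# Sketch of the three NIRS activity descriptors on a sampled trace.

def slope_method(t, y):
    """Least-squares slope of y against t over the window."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    return sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) / \
           sum((ti - mt) ** 2 for ti in t)

def amplitude_method(y):
    """Net change over the window."""
    return y[-1] - y[0]

def auc_method(t, y):
    """Area under the curve by the trapezoid rule."""
    return sum(0.5 * (y[i] + y[i + 1]) * (t[i + 1] - t[i])
               for i in range(len(y) - 1))

t = [0.0, 5.0, 10.0, 15.0, 20.0]          # seconds of contraction
hbo2 = [0.0, -1.0, -2.0, -3.0, -4.0]      # synthetic delta[HbO2] decline
print(slope_method(t, hbo2), amplitude_method(hbo2), auc_method(t, hbo2))
```

On a noisy signal the slope fit pools all samples in the window, which is consistent with the SM's greater sensitivity to small workload-dependent changes.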
Use of petroleum-based correlations and estimation methods for synthetic fuels
NASA Technical Reports Server (NTRS)
Antoine, A. C.
1980-01-01
Correlations of hydrogen content with aromatics content, heat of combustion, and smoke point are derived for some synthetic fuels prepared from oil and coal syncrudes. Comparing the results of the aromatics content with correlations derived for petroleum fuels shows that the shale-derived fuels fit the petroleum-based correlations, but the coal-derived fuels do not. The correlations derived for heat of combustion and smoke point are comparable to some found for petroleum-based correlations. Calculated values of hydrogen content and of heat of combustion are obtained for the synthetic fuels by use of ASTM estimation methods. Comparisons of the measured and calculated values show biases in the equations that exceed the critical statistics values. Comparison of the measured hydrogen content by the standard ASTM combustion method with that by a nuclear magnetic resonance (NMR) method shows a decided bias. The comparison of the calculated and measured NMR hydrogen contents shows a difference similar to that found with petroleum fuels.
NASA Astrophysics Data System (ADS)
Dong, H.; Kun, Z.; Zhang, L.
2015-12-01
This magnetotelluric (MT) system comprises static shift correction and 3D inversion. The correction method is based on 3D forward-modeling studies and field tests. Static shift can be detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed during inversion. The method is an automatic, zero-cost computer processing technique that avoids additional field work and indoor processing, with good results as shown in Figure 1a-e. Figure 1a shows a normal model (I) without any local heterogeneity. Figure 1b shows a static-shifted model (II) with two local heterogeneous bodies (10 and 1000 ohm.m). Figure 1c is the inversion result (A) for the synthetic data generated from model I. Figure 1d is the inversion result (B) for the static-shifted data generated from model II. Figure 1e is the inversion result (C) for the static-shifted data from model II, but with static shift correction. The results show that the correction method is useful. The 3D inversion algorithm is improved based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). In the algorithm, we added a frequency-based parallel structure, improved the computational efficiency, reduced the memory footprint, added topographic and marine factors, and added geological and geophysical constraints, so the 3D inversion can run even on a pad device with high efficiency and accuracy. An application example of a theoretical assessment in oil and gas exploration is shown in Figure 1f-i. The synthetic geophysical model consists of five layers (from top downwards): shale, limestone, gas, oil, groundwater and limestone, overlying a basement rock. Figures 1f-g show the 3D model and its central profile. Figure 1h shows the central section of the 3D inversion; the results show a high degree of recovery of the synthetic model.
Figure 1i shows that the seismic waveform reflects the interfaces of every layer overall, but the relative positions of the interfaces in two-way travel time vary, and the interface between the limestone and the oil at the sides of the section is not resolved. Thus 3-D MT can make up for deficiencies of the seismic results, such as false sync-phase axes and multiple waves.
Wu, Yue; Jiang, Ying
2016-09-15
Water extractable organic carbon (WEOC) plays an important role in soil dissolved organic matter (DOM) research. In the present study, we characterized the chemical properties and biodegradability of WEOC obtained from one granitic forest soil with four commonly used or suggested extraction methods, to study the potential methodological influence in soil DOM research. The results showed great differences in both the chemical properties and the biodegradation of WEOC from the various methods. For the chosen soil, compared to that from fresh soil, WEOC from dried soil contained a large proportion of HIN and Base fractions and labile O-alkyl components, which might be derived from microbial cell lysis, and showed low fluorescence characteristics, exhibiting great biodegradability. Similarly, WEOC extracted under low-temperature and short-time conditions showed low fluorescence characteristics and exhibited considerable biodegradability. Conversely, WEOC which might have been subjected to decomposition and loss during extraction contained higher percentages of HOA fractions and aromatic alkyl and aryl components, showed high fluorescence characteristics, and exhibited low biodegradability. WEOC extracted at moderate time and temperature showed moderate biodegradability. These method-induced differences imply that direct comparison of results from similar studies is difficult, since we considered here one specific forest soil while other authors considered other soil types and uses. This complexity in comparison is a reminder that methodological influence should receive more attention in future soil WEOC research.
NASA Astrophysics Data System (ADS)
Hernández-Bello, Jimmy; D'Souza, Derek; Rossenberg, Ivan
2002-08-01
A method to determine the electron beam energy and an electron audit based on the current IPEM electron Code of Practice have been devised. During the commissioning of the new Varian 2100CD linear accelerator at The Middlesex Hospital, two methods were devised for the determination of electron energy. The first involves a two-depth method, whereby the ratio of ionisation (presented as a percentage) measured by an ion chamber at two depths in solid water is compared against the baseline ionisation depth value for that energy. The second involves the irradiation of an X-ray film in solid water to obtain a depth-dose curve and, hence, determine the half-value depth and practical range of the electrons. The results showed that the two-depth method has better accuracy, repeatability, reliability and consistency than the X-ray film method. For the electron audit, absolute electron outputs are obtained from ionisation measurements in solid water, with the energy-range parameters, such as the practical range and the depth at which ionisation is 50% of its maximum on the depth-ionisation curve, determined from the measurements.
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2012-01-01
A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, which do not use truncation of the phase function, are compared against the TMS method. DOMAS and DOM2+ use the small-angle modification of the RTE and the single scattering term, respectively, as the anisotropic part. The TMS method uses the Delta-M method for truncation of the phase function along with the single scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The obtained results for cases with high scattering anisotropy show that at a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS and TMS are found to be approximately similar: DOMAS was more accurate in cases with coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results in the case of an ice cloud.
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-02-18
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving equations of point clouds. Despite their high accuracy, these might yield no solution due to ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can then be obtained using matrix transformation theory. Experiments were designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.
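One elementary step of the rotate-and-translate alignment is a rotation that maps one characteristic direction onto another. The sketch below builds such a rotation with Rodrigues' formula; the vectors are illustrative, and the paper's actual construction of characteristic lines is not reproduced here (the antiparallel edge case is also omitted).

```python
# Rotation (Rodrigues' formula) mapping unit vector a onto unit vector b.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def rotation_aligning(a, b):
    """3x3 rotation matrix sending unit vector a to unit vector b."""
    a, b = normalize(a), normalize(b)
    k = cross(a, b)
    c = sum(x * y for x, y in zip(a, b))        # cos(theta)
    s = math.sqrt(sum(x * x for x in k))        # sin(theta)
    if s < 1e-12:                               # already aligned
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    k = [x / s for x in k]                      # unit rotation axis
    K = [[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]]
    # R = I + sin(theta) K + (1 - cos(theta)) K^2
    return [[(1 if i == j else 0) + s * K[i][j]
             + (1 - c) * sum(K[i][m] * K[m][j] for m in range(3))
             for j in range(3)] for i in range(3)]

R = rotation_aligning([1, 0, 0], [0, 1, 0])
mapped = [sum(R[i][j] * v for j, v in enumerate([1, 0, 0])) for i in range(3)]
print([round(x, 6) for x in mapped])   # -> [0.0, 1.0, 0.0]
```

Composing such rotations with translations until the characteristic lines coincide yields the overall transformation matrix, with no linear system to solve and hence no conditioning problem.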
Darwish, Hany W; Bakheit, Ahmed H; Naguib, Ibrahim A
2016-01-01
This paper presents novel methods for spectrophotometric determination of ascorbic acid (AA) in presence of rutin (RU) (coformulated drug) in their combined pharmaceutical formulation. The seven methods are ratio difference (RD), isoabsorptive_RD (Iso_RD), amplitude summation (A_Sum), isoabsorptive point, first derivative of the ratio spectra (1DD), mean centering (MCN), and ratio subtraction (RS). On the other hand, RU was determined directly by measuring the absorbance at 358 nm in addition to the two novel Iso_RD and A_Sum methods. The work introduced in this paper aims to compare these different methods, showing the advantages for each and making a comparison of analysis results. The calibration curve is linear over the concentration range of 4-50 μg/mL for AA and RU. The results show the high performance of proposed methods for the analysis of the binary mixture. The optimum assay conditions were established and the proposed methods were successfully applied for the assay of the two drugs in laboratory prepared mixtures and combined pharmaceutical tablets with excellent recoveries. No interference was observed from common pharmaceutical additives.
A simple method for predicting solar fractions of IPH and space heating systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chauhan, R.; Goodling, J.S.
1982-01-01
In this paper, a method has been developed to evaluate the solar fractions of liquid-based industrial process heat (IPH) and space heating systems without the use of computer simulations. The new method is the result of joining two theories, Lunde's equation for determining the monthly performance of solar heating systems and the utilizability correlations of Collares-Pereira and Rabl, by making appropriate assumptions. The new method requires as input the monthly averages of the utilizable radiation and the collector operating time. These quantities are determined conveniently by the method of Collares-Pereira and Rabl. A comparison of the results of the new method with the most widely accepted design methods shows excellent agreement.
Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method
NASA Astrophysics Data System (ADS)
Sun, C. J.; Zhou, J. H.; Wu, W.
2017-10-01
During its lifetime, a ship may encounter collision or grounding and sustain permanent damage from these types of accidents. Crashworthiness assessment has been based on two main kinds of methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper. Numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate the method. The results show that the simplified plastic analysis is in good agreement with the finite-element simulation in terms of accuracy, which reveals that the simplified plastic analysis method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can be used as a reliable risk assessment method.
Postharvest Monitoring of Tomato Ripening Using the Dynamic Laser Speckle
Pieczywek, Piotr Mariusz; Nowacka, Małgorzata; Dadan, Magdalena; Wiktor, Artur; Rybak, Katarzyna; Witrowa-Rajchert, Dorota; Zdunek, Artur
2018-01-01
The dynamic laser speckle (biospeckle) method was tested as a potential tool for the assessment and monitoring of the maturity stage of tomatoes. Two tomato cultivars, Admiro and Starbuck, were tested. The process of climacteric maturation of tomatoes was monitored during a shelf life storage experiment. The biospeckle phenomena were captured using 640 nm and 830 nm laser light wavelengths, and analysed using two activity descriptors based on biospeckle pattern decorrelation, C4 and ε. The well-established optical parameters of tomato skin were used as a reference method (luminosity, a*/b*, chroma). Both methods were tested with respect to their capability to predict the maturity and destructive indicators of tomatoes: firmness, chlorophyll and carotenoids content. The statistical significance of the tested relationships was investigated by means of linear regression models. The climacteric maturation of tomato fruit was associated with an increase in biospeckle activity. Compared to the 830 nm laser wavelength, the biospeckle activity measured at 640 nm enabled more accurate predictions of firmness, chlorophyll and carotenoids content. At the 640 nm laser wavelength both activity descriptors (C4 and ε) provided similar results, while at 830 nm the ε showed slightly better performance. The linear regression models showed that the biospeckle activity descriptors had a higher correlation with chlorophyll and carotenoids content than the a*/b* ratio and luminosity. The results for chroma were comparable with the results for both biospeckle activity indicators. The biospeckle method showed very good results in terms of maturation monitoring and the prediction of the maturity indices of tomatoes, proving the possibility of practical implementation of this method for the determination of the maturity stage of tomatoes.
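Decorrelation-based activity descriptors such as C4 and ε quantify how quickly successive speckle frames lose correlation. The sketch below uses frame-to-frame Pearson correlation as a generic decorrelation proxy; the exact C4 and ε definitions from the paper are not reproduced, and the "frames" are tiny synthetic intensity grids, not real speckle images.

```python
# Decorrelation-curve sketch: correlation of each frame with the first frame;
# a faster decay indicates higher biospeckle activity.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def decorrelation_curve(frames):
    ref = frames[0]
    return [pearson(ref, f) for f in frames]

frames = [
    [1.0, 2.0, 3.0, 4.0],        # reference frame (flattened pixel values)
    [1.1, 2.1, 2.9, 4.0],        # slightly changed: high correlation
    [4.0, 1.0, 3.0, 2.0],        # strongly changed: low correlation
]
curve = decorrelation_curve(frames)
print(curve[0] == 1.0, curve[1] > curve[2])
```

A ripening (more active) fruit surface would show a steeper drop in this curve than a stable one.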
Influence of Te and Se doping on ZnO films growth by SILAR method
NASA Astrophysics Data System (ADS)
Güney, Harun; Duman, Ćaǧlar
2016-04-01
Successive ionic layer adsorption and reaction (SILAR) is an economical and simple method for growing thin films. In this study, the SILAR method is used to grow selenium (Se) and tellurium (Te) doped zinc oxide (ZnO) thin films with different doping rates. For characterization of the films, X-ray diffraction (XRD), absorbance and scanning electron microscopy (SEM) are used. The XRD results showed a well-defined, strongly (002)-oriented crystal structure for all samples. The absorbance measurements show that the band gap energy is directly proportional to the Te concentration and inversely proportional to the Se concentration. SEM measurements show that the surface morphology and thickness of the material varied with Se and/or Te at varying concentrations.
Influence of Te and Se doping on ZnO films growth by SILAR method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Güney, Harun, E-mail: harunguney25@hotmail.com; Duman, Çağlar, E-mail: caglarduman@erzurum.edu.tr
2016-04-18
Successive ionic layer adsorption and reaction (SILAR) is an economical and simple method for growing thin films. In this study, the SILAR method is used to grow selenium (Se) and tellurium (Te) doped zinc oxide (ZnO) thin films with different doping rates. The films were characterized by X-ray diffraction (XRD), absorbance measurements and scanning electron microscopy (SEM). XRD results showed a well-defined, strongly (002)-oriented crystal structure for all samples. Absorbance measurements show that the band gap energy is proportional to the Te concentration and inversely proportional to the Se concentration. SEM measurements show that the surface morphology and thickness of the material varied with Se and/or Te doping at varying concentrations.
Lee, Eun Gyung; Magrm, Rana; Kusti, Mohannad; Kashon, Michael L; Guffey, Steven; Costas, Michelle M; Boykin, Carie J; Harper, Martin
2017-01-01
This study was conducted to determine occupational exposures to formaldehyde and to compare concentrations of formaldehyde obtained by active and passive sampling methods. In one pathology laboratory and one histology laboratory, exposure measurements were collected with sets of active air samplers (Supelco LpDNPH tubes) and passive badges (ChemDisk Aldehyde Monitor 571). Sixty-six sample pairs (49 personal and 17 area) were collected and analyzed by NIOSH NMAM 2016 for active samples and OSHA Method 1007 (using the manufacturer's updated uptake rate) for passive samples. All active and passive 8-hr time-weighted average (TWA) measurements showed compliance with the OSHA permissible exposure limit (PEL, 0.75 ppm) except for one passive measurement, whereas 78% of the active and 88% of the passive samples exceeded the NIOSH recommended exposure limit (REL, 0.016 ppm). Overall, 73% of the passive samples showed higher concentrations than the active samples, and a statistical test indicated disagreement between the two methods for all data and for data without outliers. The OSHA Method cautions that passive samplers should not be used for sampling situations involving formalin solutions because of low concentration estimates in the presence of reaction products of formaldehyde and methanol (a formalin additive). However, this situation was not observed, perhaps because the formalin solutions used in these laboratories included much less methanol (3%) than those tested in the OSHA Method (up to 15%). The passive samplers in general overestimated concentrations compared to the active method, which is prudent for demonstrating compliance with an occupational exposure limit, but occasional large differences may be a result of collecting aerosolized droplets or splashes on the face of the samplers.
In the situations examined in this study, the passive sampler generally produced higher results than the active sampler, so a body of passive-sampler results demonstrating compliance with the OSHA PEL would support a valid conclusion of compliance. However, an individual passive sample can show a lower result than a paired active sample, so a single result should be treated with caution.
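As a rough illustration of the 8-hr time-weighted-average arithmetic underlying the PEL/REL comparisons above (the concentrations and durations here are hypothetical, not the study's data):

```python
def twa_8hr(samples):
    """8-hour time-weighted average from (concentration_ppm, minutes) pairs.
    Unsampled time out to 480 min is treated as zero exposure."""
    exposure = sum(c * t for c, t in samples)
    return exposure / 480.0

# hypothetical task-based measurements over a shift
shift = [(0.05, 120), (0.30, 60), (0.02, 240)]
twa = twa_8hr(shift)   # (6 + 18 + 4.8) / 480 = 0.06 ppm
print(twa <= 0.75)     # below the OSHA PEL
print(twa >= 0.016)    # above the NIOSH REL
```

This mirrors the pattern reported above: a measurement can comply with the OSHA PEL while still exceeding the much lower NIOSH REL.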
An improved correlation method for determining the period of a torsion pendulum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo Jie; Wang Dianhong
Considering the variation of ambient temperature and the inhomogeneity of the background gravitational field, an improved correlation method is proposed to determine the varying period of a torsion pendulum with high precision. Processing of experimental data shows that the uncertainty in determining the period with this method is improved about twofold compared with the traditional correlation method, which is significant for the determination of the gravitational constant with the time-of-swing method.
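For intuition, a basic correlation-method period estimate can be sketched as follows: shift the record against itself and take the lag that maximizes the correlation. This toy version on a clean synthetic signal does not model the temperature drift or field inhomogeneity the paper's improvement addresses.

```python
import math

def estimate_period(x, lag_min, lag_max):
    """Return the lag (in samples) maximizing the autocorrelation of x,
    a basic correlation-method estimate of the oscillation period."""
    def corr(lag):
        pairs = list(zip(x, x[lag:]))
        return sum(a * b for a, b in pairs) / len(pairs)
    return max(range(lag_min, lag_max + 1), key=corr)

# synthetic pendulum record with a period of exactly 40 samples
signal = [math.sin(2 * math.pi * t / 40) for t in range(2000)]
print(estimate_period(signal, 20, 60))  # → 40
```

With real torsion-balance data the correlation peak is broadened by noise and drift, which is why refinements such as the one proposed above matter.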
Accuracy of the domain method for the material derivative approach to shape design sensitivities
NASA Technical Reports Server (NTRS)
Yang, R. J.; Botkin, M. E.
1987-01-01
Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.
Fast cat-eye effect target recognition based on saliency extraction
NASA Astrophysics Data System (ADS)
Li, Li; Ren, Jianlin; Wang, Xingbin
2015-09-01
Background complexity is a main cause of false detections in cat-eye target recognition. Human vision has a selective attention property which helps to search for salient targets in complex unknown scenes quickly and precisely. In this paper, we propose a novel cat-eye effect target recognition method named Multi-channel Saliency Processing before Fusion (MSPF). This method combines traditional cat-eye target recognition with the selective characteristics of visual attention. Furthermore, parallel processing enables fast recognition. Experimental results show that the proposed method performs better in accuracy, robustness and speed compared to other methods.
An evaluation method for nanoscale wrinkle
NASA Astrophysics Data System (ADS)
Liu, Y. P.; Wang, C. G.; Zhang, L. M.; Tan, H. F.
2016-06-01
In this paper, a spectrum-based wrinkling analysis method via two-dimensional Fourier transformation is proposed, aiming to solve the difficulty of nanoscale wrinkle evaluation. It evaluates the wrinkle characteristics, including wrinkling wavelength and direction, simply using a single wrinkling image. Based on this method, the evaluation results of nanoscale wrinkle characteristics agree with published experimental results within an error of 6%. It is also verified to be appropriate for macro-scale wrinkle evaluation without scale limitations. The spectrum-based wrinkling analysis is an effective method for nanoscale evaluation, which helps to reveal the mechanism of nanoscale wrinkling.
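A one-dimensional toy version of the spectrum-based idea: take a surface height profile, compute a naive discrete Fourier transform, and read the dominant wavelength off the strongest frequency bin. The paper's method does this in two dimensions on an image, which also yields the wrinkle direction.

```python
import math

def dominant_wavelength(profile):
    """Dominant spatial wavelength of a 1-D height profile via a
    naive discrete Fourier transform (DC term excluded)."""
    n = len(profile)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(h * math.cos(2 * math.pi * k * i / n) for i, h in enumerate(profile))
        im = sum(h * math.sin(2 * math.pi * k * i / n) for i, h in enumerate(profile))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return n / best_k   # wavelength in samples

# synthetic wrinkle: wavelength of 8 samples over a 64-sample window
wrinkle = [math.sin(2 * math.pi * i / 8) for i in range(64)]
print(dominant_wavelength(wrinkle))  # → 8.0
```

Converting the result from samples to nanometres only requires the pixel size of the imaging instrument.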
Yu, Xiaojin; Liu, Pei; Min, Jie; Chen, Qiguang
2009-01-01
To explore the application of regression on order statistics (ROS) in estimating nondetects for food exposure assessment, regression on order statistics was applied to a cadmium residual data set from global food contaminant monitoring; the mean residual was estimated using SAS programming and compared with the results from substitution methods. The results show that the ROS method clearly performs better than substitution methods, being robust and convenient for posterior analysis. Regression on order statistics is worth adopting, but more effort should be made on the details of applying this method.
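A simplified sketch of the ROS idea for readers unfamiliar with it: fit a regression of log-concentration against normal quantiles using only the detected values, then read imputed values for the nondetects off the fitted line. This toy assumes a single detection limit below every detect and a lognormal distribution; full ROS (e.g., Helsel's) handles multiple detection limits with more careful plotting positions.

```python
import math
from statistics import NormalDist, mean

def ros_impute(detects, n_nondetects):
    """Simplified regression on order statistics: regress ln(conc) on
    normal quantiles over the detects, then impute the nondetects from
    the fitted line.  Assumes one detection limit below all detects."""
    n = len(detects) + n_nondetects
    z = [NormalDist().inv_cdf((i + 1) / (n + 1)) for i in range(n)]
    zd = z[n_nondetects:]                      # quantiles of the detects
    y = [math.log(v) for v in sorted(detects)]
    zbar, ybar = mean(zd), mean(y)
    b = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(zd, y))
         / sum((zi - zbar) ** 2 for zi in zd))
    a = ybar - b * zbar
    return [math.exp(a + b * zi) for zi in z[:n_nondetects]]

# hypothetical residuals: four detects plus two values below the limit of 1.0
imputed = ros_impute([1.0, 2.0, 4.0, 8.0], n_nondetects=2)
```

The imputed values can then be pooled with the detects to estimate the mean residual without the bias of substituting 0, LOD/2, or LOD.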
New method for characterization of retroreflective materials
NASA Astrophysics Data System (ADS)
Junior, O. S.; Silva, E. S.; Barros, K. N.; Vitro, J. G.
2018-03-01
The present article aims to propose a new method of analyzing the properties of retroreflective materials using a goniophotometer. The aim is to establish a higher resolution test method with a wide range of viewing angles, taking into account a three-dimensional analysis of the retroreflection of the tested material. The validation was performed by collecting data from specimens collected from materials used in safety clothing and road signs. The approach showed that the results obtained by the proposed method are comparable to the results obtained by the normative protocols, representing an evolution for the metrology of these materials.
Teixeira, Camila Palhares; Hirsch, André; Perini, Henrique; Young, Robert John
2006-01-01
We report the development of a new quantitative method of assessing the effects of anthropogenic impacts on living beings; this method allows us to assess current impacts and, by travelling backwards in time, past impacts. In this method, we have crossed data on fluctuating asymmetry (FA, a measure of environmental or genetic stress), using Didelphis albiventris as a model, with geographical information systems data relating to environmental composition. Our results show that more impacted environments resulted in statistically higher levels of FA. Our method appears to be a useful and flexible conservation tool for assessing anthropogenic impacts. PMID:16627287
Comparison of different screening methods for blood pressure disorders in children and adolescents.
Mourato, Felipe Alves; Lima Filho, José Luiz; Mattos, Sandra da Silva
2015-01-01
To compare different methods of screening for blood pressure disorders in children and adolescents. A database with 17,083 medical records of patients from a pediatric cardiology clinic was used. After analyzing the inclusion and exclusion criteria, 5,650 were selected. These were divided into two age groups: between 5 and 13 years and between 13 and 18 years. The blood pressure measurement was classified as normal, pre-hypertensive, or hypertensive, consistent with recent guidelines and the selected screening methods. Sensitivity, specificity, and accuracy were then calculated according to gender and age range. The formulas proposed by Somu and Ardissino's table showed low sensitivity in identifying pre-hypertension in all age groups, whereas the table proposed by Kaelber showed the best results. The ratio between blood pressure and height showed low specificity in the younger age group, but showed good performance in adolescents. Screening tools used for the assessment of blood pressure disorders in children and adolescents may be useful to decrease the current rate of underdiagnosis of this condition. The table proposed by Kaelber showed the best results; however, the ratio between BP and height demonstrated specific advantages, as it does not require tables. Copyright © 2014 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
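The sensitivity, specificity, and accuracy comparisons above reduce to simple arithmetic on a 2x2 table of screening results versus the guideline classification. A minimal sketch with hypothetical counts (not the study's data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy from a 2x2 screening table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# hypothetical counts: screening tool vs. guideline-based diagnosis
sens, spec, acc = screening_metrics(tp=80, fp=30, fn=20, tn=870)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # 0.8 0.97 0.95
```

A tool with low sensitivity, like some of the formulas evaluated above, misses true pre-hypertensive children even when its overall accuracy looks high, because most children screened are normotensive.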
Effect of storytelling on hopefulness in girl students
Shafieyan, Shima; Soleymani, Mohammad Reza; Samouei, Raheleh; Afshar, Mina
2017-01-01
BACKGROUND AND AIM: One of the methods that help students learn critical thinking and decision-making skills is storytelling. A story helps students place themselves in the same situation as the main protagonist, try different ways, and finally select and implement the best possible method. The goal of this study is to investigate the effect of storytelling on the hopefulness of students aged 8–11 in Isfahan's 2nd educational district. METHODS: This is an applied, quasi-experimental study. The study population comprised 34 randomly selected students attending one of the schools in Isfahan's 2nd educational district. The data gathering tool was the standard Kazdin hopefulness scale (α = 0.72), and data were gathered before and after 8 storytelling sessions for the intervention group. The gathered data were analyzed using descriptive and analytical statistics (paired and independent t-tests) with the help of SPSS Version 18 software. RESULTS: The study's findings showed a significant difference in the average hopefulness score of students in the study group between pre- and posttest (P = 0.04). Furthermore, independent t-test results showed a significant difference in the hopefulness scores of the intervention and control groups (P = 0.001). The average hopefulness score of the intervention group after the storytelling sessions was higher than that of the control group. CONCLUSION: The results show the effectiveness of storytelling as a method for improving hopefulness in students. PMID:29296602
[Kriging analysis of vegetation index depression in peak cluster karst area].
Yang, Qi-Yong; Jiang, Zhong-Cheng; Ma, Zu-Lu; Cao, Jian-Hua; Luo, Wei-Qun; Li, Wen-Jun; Duan, Xiao-Fang
2012-04-01
In order to understand the spatial variability of the normalized difference vegetation index (NDVI) of the peak cluster karst area, and taking into account the problem of mountain shadows "missing" information in remote sensing images of the karst area, the NDVI of non-shaded areas was extracted in the Guohua Ecological Experimental Area, Pingguo County, Guangxi, using the image processing software ENVI. The spatial variability of NDVI was analyzed using geostatistical methods, and the NDVI of the mountain shadow areas was predicted and validated. The results indicated that the NDVI of the study area showed strong spatial variability and spatial autocorrelation resulting from the impact of intrinsic factors, and the range was 300 m. The spatial distribution maps of NDVI produced by Kriging interpolation showed that the mean NDVI was 0.196, with an apparent strip-and-block pattern. The higher NDVI values were distributed where the slope of the peak cluster area was greater than 25 degrees, while the lower values were distributed in areas such as the foot of the peak cluster and the depression, where the slope was less than 25 degrees. Kriging validation results show that the interpolation has a very high prediction accuracy and could predict the NDVI of the shadow areas, which provides a new idea and method for the monitoring and evaluation of karst rocky desertification.
Community-based Inquiry Improves Critical Thinking in General Education Biology
Faiola, Celia L.; Johnson, James E.; Kurtz, Martha J.
2008-01-01
National stakeholders are becoming increasingly concerned about the inability of college graduates to think critically. Research shows that, while both faculty and students deem critical thinking essential, only a small fraction of graduates can demonstrate the thinking skills necessary for academic and professional success. Many faculty are considering nontraditional teaching methods that incorporate undergraduate research because they more closely align with the process of doing investigative science. This study compared a research-focused teaching method called community-based inquiry (CBI) with traditional lecture/laboratory in general education biology to discover which method would elicit greater gains in critical thinking. Results showed significant critical-thinking gains in the CBI group but decreases in a traditional group and a mixed CBI/traditional group. Prior critical-thinking skill, instructor, and ethnicity also significantly influenced critical-thinking gains, with nearly all ethnicities in the CBI group outperforming peers in both the mixed and traditional groups. Females, who showed decreased critical thinking in traditional courses relative to males, outperformed their male counterparts in CBI courses. Through the results of this study, it is hoped that faculty who value both research and critical thinking will consider using the CBI method. PMID:18765755
Detecting the sampling rate through observations
NASA Astrophysics Data System (ADS)
Shoji, Isao
2018-09-01
This paper proposes a method to detect the sampling rate of discrete time series of diffusion processes. Using the maximum likelihood estimates of the parameters of a diffusion process, we establish a criterion based on the Kullback-Leibler divergence and thereby estimate the sampling rate. Simulation studies are conducted to check whether the method can detect the sampling rates from data, and their results show a good performance in the detection. In addition, the method is applied to a financial time series sampled on a daily basis, and the detected sampling rate is shown to be different from the conventional rates.
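A much-simplified sketch of the underlying idea: for an Ornstein–Uhlenbeck diffusion with known parameters, the transition density depends on the sampling interval, so the interval can be recovered by comparing likelihoods over candidate values. The paper's actual criterion is KL-divergence based with estimated parameters; this grid search with known θ and σ is only illustrative.

```python
import math, random

def ou_loglik(x, delta, theta=1.0, sigma=1.0):
    """Gaussian transition log-likelihood of an Ornstein-Uhlenbeck
    series observed at sampling interval delta."""
    a = math.exp(-theta * delta)
    v = sigma ** 2 / (2 * theta) * (1 - a * a)   # transition variance
    ll = 0.0
    for x0, x1 in zip(x, x[1:]):
        ll += -0.5 * math.log(2 * math.pi * v) - (x1 - a * x0) ** 2 / (2 * v)
    return ll

# simulate an OU path at a "true" interval of 0.5, then recover it
random.seed(1)
theta, sigma, true_delta = 1.0, 1.0, 0.5
a = math.exp(-theta * true_delta)
sd = math.sqrt(sigma ** 2 / (2 * theta) * (1 - a * a))
x = [0.0]
for _ in range(4000):
    x.append(a * x[-1] + random.gauss(0.0, sd))
candidates = [0.1, 0.25, 0.5, 1.0, 2.0]
detected = max(candidates, key=lambda d: ou_loglik(x, d))
```

With enough observations the likelihood peaks sharply at the true interval, which is the intuition behind detecting whether "daily" data behaves like data sampled at a different effective rate.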
1972-01-01
The membrane methods described in Report 71 on the bacteriological examination of water supplies (Report, 1969) for the enumeration of coliform organisms and Escherichia coli in waters, together with a glutamate membrane method, were compared with the glutamate multiple tube method recommended in Report 71 and an incubation procedure similar to that used for membranes with the first 4 hr. at 30° C., and with MacConkey broth in multiple tubes. Although there were some differences between individual laboratories, the combined results from all participating laboratories showed that standard and extended membrane methods gave significantly higher results than the glutamate tube method for coliform organisms in both chlorinated and unchlorinated waters, but significantly lower results for Esch. coli with chlorinated waters and equivocal results with unchlorinated waters. Extended membranes gave higher results than glutamate tubes in larger proportions of samples than did standard membranes. Although transport membranes did not do so well as standard membrane methods, the results were usually in agreement with glutamate tubes except for Esch. coli in chlorinated waters. The glutamate membranes were unsatisfactory. Preliminary incubation of glutamate at 30° C. made little difference to the results. PMID:4567313
Non-equilibrium STLS approach to transport properties of single impurity Anderson model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rezai, Raheleh, E-mail: R_Rezai@sbu.ac.ir; Ebrahimi, Farshad, E-mail: Ebrahimi@sbu.ac.ir
In this work, using the non-equilibrium Keldysh formalism, we study the effects of the electron–electron interaction and the electron-spin correlation on the non-equilibrium Kondo effect and the transport properties of the symmetric single impurity Anderson model (SIAM) at zero temperature by generalizing the self-consistent method of Singwi, Tosi, Land, and Sjolander (STLS) for a single-band tight-binding model with Hubbard type interaction to out of equilibrium steady-states. We at first determine in a self-consistent manner the non-equilibrium spin correlation function, the effective Hubbard interaction, and the double-occupancy at the impurity site. Then, using the non-equilibrium STLS spin polarization function in the non-equilibrium formalism of the iterative perturbation theory (IPT) of Yosida and Yamada, and Horvatic and Zlatic, we compute the spectral density, the current–voltage characteristics and the differential conductance as functions of the applied bias and the strength of on-site Hubbard interaction. We compare our spectral densities at zero bias with the results of numerical renormalization group (NRG) and depict the effects of the electron–electron interaction and electron-spin correlation at the impurity site on the aforementioned properties by comparing our numerical result with the order-U² IPT. Finally, we show that the obtained numerical results on the differential conductance have a quadratic universal scaling behavior and the resulting Kondo temperature shows an exponential behavior. -- Highlights: •We introduce for the first time the non-equilibrium method of STLS for Hubbard type models. •We determine the transport properties of SIAM using the non-equilibrium STLS method. •We compare our results with order-U² IPT and NRG. •We show that non-equilibrium STLS, contrary to the GW and self-consistent RPA, produces the two Hubbard peaks in DOS. •We show that the method keeps the universal scaling behavior and correct exponential behavior of Kondo temperature.
Attitude motion compensation for imager on Fengyun-4 geostationary meteorological satellite
NASA Astrophysics Data System (ADS)
Lyu, Wang; Dai, Shoulun; Dong, Yaohai; Shen, Yili; Song, Xiaozheng; Wang, Tianshu
2017-09-01
A compensation method is used in the Chinese Fengyun-4 satellite to counteract the line-of-sight disturbance caused by attitude motion during imaging. The method is implemented on-board by adding the compensation amount to the instrument scanning control circuit. Mathematical simulation and three-axis air-bearing test results show that the method works effectively.
Radiation Heat Transfer Between Diffuse-Gray Surfaces Using Higher Order Finite Elements
NASA Technical Reports Server (NTRS)
Gould, Dana C.
2000-01-01
This paper presents recent work on developing methods for analyzing radiation heat transfer between diffuse-gray surfaces using p-version finite elements. The work was motivated by a thermal analysis of a High Speed Civil Transport (HSCT) wing structure which showed the importance of radiation heat transfer throughout the structure. The analysis also showed that refining the finite element mesh to accurately capture the temperature distribution on the internal structure led to very large meshes with unacceptably long execution times. Traditional methods for calculating surface-to-surface radiation are based on assumptions that are not appropriate for p-version finite elements. Two methods for determining internal radiation heat transfer are developed for one and two-dimensional p-version finite elements. In the first method, higher-order elements are divided into a number of sub-elements. Traditional methods are used to determine radiation heat flux along each sub-element and then mapped back to the parent element. In the second method, the radiation heat transfer equations are numerically integrated over the higher-order element. Comparisons with analytical solutions show that the integration scheme is generally more accurate than the sub-element method. Comparison to results from traditional finite elements shows that significant reduction in the number of elements in the mesh is possible using higher-order (p-version) finite elements.
A literature-driven method to calculate similarities among diseases.
Kim, Hyunjin; Yoon, Youngmi; Ahn, Jaegyoon; Park, Sanghyun
2015-11-01
"Our lives are connected by a thousand invisible threads and along these sympathetic fibers, our actions run as causes and return to us as results". It is Herman Melville's famous quote describing connections among human lives. To paraphrase the Melville's quote, diseases are connected by many functional threads and along these sympathetic fibers, diseases run as causes and return as results. The Melville's quote explains the reason for researching disease-disease similarity and disease network. Measuring similarities between diseases and constructing disease network can play an important role in disease function research and in disease treatment. To estimate disease-disease similarities, we proposed a novel literature-based method. The proposed method extracted disease-gene relations and disease-drug relations from literature and used the frequencies of occurrence of the relations as features to calculate similarities among diseases. We also constructed disease network with top-ranking disease pairs from our method. The proposed method discovered a larger number of answer disease pairs than other comparable methods and showed the lowest p-value. We presume that our method showed good results because of using literature data, using all possible gene symbols and drug names for features of a disease, and determining feature values of diseases with the frequencies of co-occurrence of two entities. The disease-disease similarities from the proposed method can be used in computational biology researches which use similarities among diseases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
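The similarity computation described above can be sketched with co-occurrence frequency vectors and cosine similarity. The disease names, gene symbols, drug names, and counts below are invented for illustration, not taken from the paper's corpus.

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse feature dicts (term -> frequency)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# hypothetical co-occurrence counts of genes/drugs mined from literature
asthma   = {"IL4": 12, "IL13": 9, "salbutamol": 20}
copd     = {"IL13": 4, "salbutamol": 15, "TNF": 6}
diabetes = {"INS": 30, "metformin": 25}

print(cosine(asthma, copd) > cosine(asthma, diabetes))  # → True
```

Ranking all disease pairs by such a score, then keeping the top-ranking pairs, yields the edges of a disease network like the one constructed above.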
Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology
NASA Technical Reports Server (NTRS)
Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus
2013-01-01
Changing trends in ecosystem productivity can be quantified using satellite observations of Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annual aggregated time series or based on a seasonal-trend model show better performances than methods that remove the seasonal cycle of the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods on long-term NDVI time series. Particularly, we apply and compare different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed. Here, the multi-method uncertainty of NDVI trends is quantified through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be improved against inter-annual variability to quantify changing trends in ecosystem productivity with higher accuracy.
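One of the better-performing strategies above, trend estimation on annually aggregated series, amounts to an ordinary least-squares slope over yearly mean NDVI. A minimal sketch with a synthetic greening series (values are illustrative):

```python
from statistics import mean

def annual_trend(years, ndvi):
    """OLS slope of annually aggregated NDVI (one mean value per year)."""
    yb, nb = mean(years), mean(ndvi)
    return (sum((y - yb) * (v - nb) for y, v in zip(years, ndvi))
            / sum((y - yb) ** 2 for y in years))

# hypothetical greening series: NDVI rising 0.005 per year
years = list(range(1982, 2012))
ndvi = [0.45 + 0.005 * (y - 1982) for y in years]
slope = annual_trend(years, ndvi)
print(round(slope, 3))  # → 0.005
```

Annual aggregation removes the seasonal cycle before fitting, which is why such methods degrade less with increasing inter-annual variability than methods that model or subtract the seasonal cycle point by point.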
A calibration method of infrared LVF based spectroradiometer
NASA Astrophysics Data System (ADS)
Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin
2017-10-01
In this paper, a calibration method of LVF-based spectroradiometer is summarize, including spectral calibration and radiometric calibration. The spectral calibration process as follow: first, the relationship between stepping motor's step number and transmission wavelength is derivative by theoretical calculation, including a non-linearity correction of LVF;second, a line-to-line method was used to corrected the theoretical wavelength; Finally, the 3.39 μm and 10.69 μm laser is used for spectral calibration validation, show the sought 0.1% accuracy or better is achieved.A new sub-region multi-point calibration method is used for radiometric calibration to improving accuracy, results show the sought 1% accuracy or better is achieved.
A three phase optimization method for precopy based VM live migration.
Sharma, Sangeeta; Chawla, Meenu
2016-01-01
Virtual machine live migration is a method of moving a virtual machine across hosts within a virtualized datacenter. It provides significant benefits for administrators to manage the datacenter efficiently. It reduces service interruption by transferring the virtual machine without stopping it at the source. Transfer of a large number of virtual machine memory pages results in long migration time as well as downtime, which also affects the overall system performance. This situation becomes unbearable when migration takes place over a slower network or over a long distance within a cloud. In this paper, the precopy-based virtual machine live migration method is thoroughly analyzed to trace the issues responsible for its performance drops. In order to address these issues, this paper proposes the three phase optimization (TPO) method. It works in three phases as follows: (i) reduce the transfer of memory pages in the first phase, (ii) reduce the transfer of duplicate pages by classifying frequently and non-frequently updated pages, and (iii) reduce the data sent in the last iteration of migration by applying a simple RLE compression technique. As a result, each phase significantly reduces total pages transferred, total migration time and downtime respectively. The proposed TPO method is evaluated using different representative workloads on a Xen virtualized environment. Experimental results show that the TPO method reduces total pages transferred by 71%, total migration time by 70%, and downtime by 3% for a higher workload, and it does not impose significant overhead compared to the traditional precopy method. The TPO method is also compared with other methods to demonstrate its effectiveness. The TPO and precopy methods are also tested at different numbers of iterations; the TPO method gives better performance even with fewer iterations.
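The third phase relies on run-length encoding, which is effective because memory pages in the final iteration are often dominated by long runs of identical (typically zero) bytes. A minimal sketch of RLE on a hypothetical page:

```python
def rle_encode(data: bytes) -> list:
    """Run-length encode a byte string as [byte, run_length] pairs --
    effective for memory pages dominated by long zero runs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def rle_decode(runs: list) -> bytes:
    return b"".join(bytes([b]) * n for b, n in runs)

# hypothetical dirty page: mostly zeros with a small modified region
page = b"\x00" * 4000 + b"\x17\x17" + b"\x00" * 90
runs = rle_encode(page)
print(len(runs))  # 3 runs instead of 4092 raw bytes
```

The simplicity of RLE matters here: compression happens on the migration critical path, so a cheap encoder keeps the downtime reduction from being eaten by CPU overhead.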
Research on Methods of High Coherent Target Extraction in Urban Area Based on Psinsar Technology
NASA Astrophysics Data System (ADS)
Li, N.; Wu, J.
2018-04-01
PSInSAR technology has been widely applied in ground deformation monitoring. Accurate identification of Persistent Scatterers (PS) is key to the success of PSInSAR data processing. In this paper, the theoretical models and specific algorithms of PS point extraction methods are summarized, and the characteristics and applicable conditions of each method, such as the Coherence Coefficient Threshold method, Amplitude Threshold method, Dispersion of Amplitude method, and Dispersion of Intensity method, are analyzed. Based on the merits and demerits of the different methods, an improved method for PS point extraction in urban areas is proposed, which simultaneously uses backscattering characteristics, amplitude stability and phase stability to find PS points among all pixels. Shanghai city is chosen as an example area for checking the improvements of the new method. The results show that the PS points extracted by the new method have high quality and high stability and exhibit the expected strong scattering characteristics. Based on these high quality PS points, the deformation rate along the line-of-sight (LOS) in the central urban area of Shanghai is obtained using 35 COSMO-SkyMed X-band SAR images acquired from 2008 to 2010; it varies from -14.6 mm/year to 4.9 mm/year. There is a large sedimentation funnel at the boundary of Hongkou and Yangpu districts with a maximum sedimentation rate of more than 14 mm per year. The obtained ground subsidence rates are also compared with the results of spirit leveling and show good consistency. Our new method for PS point extraction is more reasonable and can improve the accuracy of the obtained deformation results.
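One of the classical criteria mentioned above, Dispersion of Amplitude, can be sketched in a few lines: a pixel whose amplitude time series has a low ratio of standard deviation to mean is kept as a PS candidate. The pixels, amplitudes, and threshold below are illustrative, not the paper's data.

```python
from statistics import mean, pstdev

def ps_candidates(amplitude_stacks, threshold=0.25):
    """Dispersion-of-amplitude PS selection: keep pixels whose amplitude
    time series has sigma/mu below the threshold."""
    keep = []
    for pixel, amps in amplitude_stacks.items():
        mu = mean(amps)
        if mu > 0 and pstdev(amps) / mu < threshold:
            keep.append(pixel)
    return keep

# hypothetical per-pixel amplitudes over five acquisitions
stacks = {
    "building":   [100, 104, 98, 101, 99],  # stable -> PS candidate
    "vegetation": [20, 70, 5, 55, 90],      # decorrelated -> rejected
}
print(ps_candidates(stacks))  # → ['building']
```

The improved method above goes further by combining such an amplitude criterion with backscattering and phase-stability criteria rather than relying on any single one.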
Music Retrieval Based on the Relation between Color Association and Lyrics
NASA Astrophysics Data System (ADS)
Nakamur, Tetsuaki; Utsumi, Akira; Sakamoto, Maki
Various methods for music retrieval have been proposed. Recently, many researchers have been developing methods based on the relationship between music and feelings. In our previous psychological study, we found a significant correlation between colors evoked by songs and colors evoked by lyrics alone, and showed that a music retrieval system using lyrics could be developed. In this paper, we focus on the relationship among music, lyrics and colors, and propose a music retrieval method that uses colors as queries and analyzes lyrics. This method estimates the colors evoked by songs by analyzing the lyrics of the songs. In the first step of our method, words associated with colors are extracted from lyrics. We considered two ways to extract such words: in one, words are extracted based on the result of a psychological experiment; in the other, words from corpora for Latent Semantic Analysis are extracted in addition to the words from the psychological experiment. In the second step, the colors evoked by the extracted words are compounded, and the compounded colors are regarded as those evoked by the song. In the last step, the colors given as queries are compared with the colors estimated from the lyrics, and a list of songs is presented based on the similarities. We evaluated the two methods described above and found that the method based on both the psychological experiment and the corpora performed better than the method based on the psychological experiment alone. As a result, we showed that the method using colors as queries and analyzing lyrics is effective for music retrieval.
Profile Optimization Method for Robust Airfoil Shape Optimization in Viscous Flow
NASA Technical Reports Server (NTRS)
Li, Wu
2003-01-01
Simulation results obtained by using FUN2D for robust airfoil shape optimization in transonic viscous flow are included to show the potential of the profile optimization method for generating fairly smooth optimal airfoils with no off-design performance degradation.
Personality, Study Methods and Academic Performance
ERIC Educational Resources Information Center
Entwistle, N. J.; Wilson, J. D.
1970-01-01
A questionnaire measuring four student personality types--stable introvert, unstable introvert, stable extrovert and unstable extrovert--along with the Eysenck Personality Inventory (Form A) were given to 72 graduate students at Aberdeen University, and the results showed a recognizable interaction between study methods, motivation and personality…
Multi-parametric centrality method for graph network models
NASA Astrophysics Data System (ADS)
Ivanov, Sergei Evgenievich; Gorlushkina, Natalia Nikolaevna; Ivanova, Lubov Nikolaevna
2018-04-01
Graph network models are investigated to determine the centrality, weights and significance of vertices. Centrality analysis typically applies a method that uses only one property of the graph vertices. In graph theory, the established centrality measures include degree, closeness, betweenness, radiality, eccentricity, PageRank, status, Katz and eigenvector centrality. We propose a new multi-parametric centrality method that includes a number of basic properties of a network member. A mathematical model of the multi-parametric centrality method is developed, and its results are compared with those of the individual centrality methods. To evaluate the multi-parametric centrality method, a graph model with hundreds of vertices is analyzed. The comparative analysis showed the accuracy of the presented method, which simultaneously incorporates a number of basic properties of the vertices.
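The idea of combining several per-vertex measures into one score can be sketched as follows. This is a minimal illustration assuming an equal-weight average of degree and closeness centrality; the paper's exact combination rule and choice of measures are not reproduced here.

```python
from collections import deque

def centralities(adj):
    """Degree and closeness centrality for an undirected, connected graph
    given as an adjacency dict {vertex: [neighbors]}, normalized to [0, 1]."""
    n = len(adj)
    deg = {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}
    clo = {}
    for s in adj:
        # BFS shortest-path distances from s
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        clo[s] = (n - 1) / sum(dist[v] for v in adj if v != s)
    return deg, clo

def multi_parametric(adj, w_deg=0.5, w_clo=0.5):
    """Weighted combination of the two measures into one score per vertex
    (equal weights are an assumption for this sketch)."""
    deg, clo = centralities(adj)
    return {v: w_deg * deg[v] + w_clo * clo[v] for v in adj}
```

On a star graph the hub scores highest under both measures, so the combined score also ranks it first, as expected.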
Artificial Intelligence Estimation of Carotid-Femoral Pulse Wave Velocity using Carotid Waveform.
Tavallali, Peyman; Razavi, Marianne; Pahlevan, Niema M
2018-01-17
In this article, we offer an artificial intelligence method to estimate the carotid-femoral Pulse Wave Velocity (PWV) non-invasively from one uncalibrated carotid waveform measured by tonometry and a few routine clinical variables. Since the signal processing inputs to this machine learning algorithm are sensor agnostic, the presented method can accompany any medical instrument that provides a calibrated or uncalibrated carotid pressure waveform. Our results show that, for a held-back test population in the age range of 20 to 69, our model can estimate PWV with a Root-Mean-Square Error (RMSE) of 1.12 m/s compared to the reference method. These results convey that the model is a reliable surrogate of PWV. Our study also showed that estimated PWV was significantly associated with an increased risk of CVDs.
Probing fibronectin–antibody interactions using AFM force spectroscopy and lateral force microscopy
Kulik, Andrzej J; Lee, Kyumin; Pyka-Fościak, Grazyna; Nowak, Wieslaw
2015-01-01
Summary The first experiment showing the effects of specific interaction forces using lateral force microscopy (LFM) was demonstrated for lectin–carbohydrate interactions some years ago. Such measurements are possible under the assumption that specific forces strongly dominate over the non-specific ones. However, obtaining quantitative results requires the complex and tedious calibration of a torsional force. Here, a new and relatively simple method for the calibration of the torsional force is presented. The proposed calibration method is validated through the measurement of the interaction forces between human fibronectin and its monoclonal antibody. The results obtained using LFM and AFM-based classical force spectroscopies showed similar unbinding forces recorded at similar loading rates. Our studies verify that the proposed lateral force calibration method can be applied to study single molecule interactions. PMID:26114080
Cardellicchio, Nicola; Di Leo, Antonella; Giandomenico, Santina; Santoro, Stefania
2006-01-01
Optimization of an acid digestion method for mercury determination in marine biological samples (dolphin liver, fish and mussel tissues) using closed-vessel microwave sample preparation is presented. Five digestion procedures with different acid mixtures were investigated: the best results were obtained when the microwave-assisted digestion was based on sample dissolution with an HNO3-H2SO4-K2Cr2O7 mixture. A comparison between microwave digestion and conventional reflux digestion shows that there are considerable losses of mercury in the open digestion system. The microwave digestion method was tested satisfactorily using two certified reference materials, and the analytical results show good agreement with the certified values. Microwave digestion proved to be a reliable and rapid method for the decomposition of biological samples for mercury determination.
Comparison of Manual Refraction Versus Autorefraction in 60 Diabetic Retinopathy Patients.
Shirzadi, Keyvan; Shahraki, Kourosh; Yahaghi, Emad; Makateb, Ali; Khosravifard, Keivan
2016-07-27
The purpose of the study was to compare manual refraction with autorefraction in diabetic retinopathy patients. The study was conducted at the Be'sat Army Hospital from 2013-2015. Differences between the two common refractometry methods (manual refractometry and autorefractometry) in the diagnosis and follow-up of retinopathy in patients with diabetes were investigated. Our results showed a significant difference between manual and autorefractometry in the patients' visual acuity scores. In contrast, the spherical equivalent scores of the two methods did not show a statistically significant difference. Thus, although manual refraction is comparable with autorefraction for evaluating spherical equivalent scores in diabetic patients with retinopathy, the visual acuity results from the two methods are not comparable.
NASA Astrophysics Data System (ADS)
Al-Temeemy, Ali A.
2018-03-01
A descriptor is proposed for use in domiciliary healthcare monitoring systems. The descriptor is produced using chromatic methodology to extract robust features from the monitoring system's images. It has superior discrimination capabilities, is robust to events that normally disturb monitoring systems, and requires less computational time and storage space to achieve recognition. A method of human region segmentation is also used with this descriptor. The performance of the proposed descriptor was evaluated using experimental data sets obtained through a series of experiments performed in the Centre for Intelligent Monitoring Systems, University of Liverpool. The evaluation results show high recognition performance for the proposed descriptor in comparison to traditional descriptors, such as moment invariants. The results also show the effectiveness of the proposed segmentation method against the distortion effects associated with domiciliary healthcare systems.
Nitsche’s Method For Helmholtz Problems with Embedded Interfaces
Zou, Zilong; Aquino, Wilkins; Harari, Isaac
2016-01-01
SUMMARY In this work, we use Nitsche’s formulation to weakly enforce kinematic constraints at an embedded interface in Helmholtz problems. Allowing embedded interfaces in a mesh provides significant ease for discretization, especially when material interfaces have complex geometries. We provide analytical results that establish the well-posedness of Helmholtz variational problems and convergence of the corresponding finite element discretizations when Nitsche’s method is used to enforce kinematic constraints. As in the analysis of conventional Helmholtz problems, we show that the inf-sup constant remains positive provided that Nitsche’s stabilization parameter is judiciously chosen. We then apply our formulation to several 2D plane-wave examples that confirm our analytical findings. In doing so, we demonstrate the asymptotic convergence of the proposed method and show that the numerical results are in accordance with the theoretical analysis. PMID:28713177
NASA Astrophysics Data System (ADS)
Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.
2016-03-01
Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods for the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed at assessing the performance of an automatic CP segmentation algorithm are presented. The first is a univariate non-parametric method using a box-and-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method, supervised classification, was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers were trained using the designed features: linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC); they were evaluated using leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
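The box-whisker rule underlying the first method can be sketched with Tukey's fences. This is an illustrative sketch only: the paper's actual feature design is not reproduced, and the 1.5-IQR multiplier is the conventional default, assumed here.

```python
import numpy as np

def boxplot_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences),
    the rule a box-and-whisker plot uses to mark outliers.
    Returns a boolean mask, True where a value is an outlier."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (values < lo) | (values > hi)
```

Applied to a per-scan feature (e.g. segmented volume), scans flagged by the mask would be the candidate segmentation failures.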
NASA Astrophysics Data System (ADS)
Joulidehsar, Farshad; Moradzadeh, Ali; Doulati Ardejani, Faramarz
2018-06-01
The joint interpretation of two sets of geophysical data related to the same source is an appropriate way to decrease the non-uniqueness of the resulting models during the inversion process. Among the available methods, an approach based on the cross-gradient constraint, which combines the two datasets, is efficient. This method, however, is time-consuming for 3D inversion and cannot provide an exact assessment of the location and extent of the anomaly of interest. In this paper, the first aim is to speed up the required calculation by replacing singular value decomposition with the least-squares QR method to solve the large-scale kernel matrix of the 3D inversion more rapidly. Furthermore, to improve the accuracy of the resulting models, a combination of a depth-weighting matrix and a compactness constraint, with automatic selection of the covariance of the initial parameters, is used in the proposed inversion algorithm. The algorithm was developed in the Matlab environment and first implemented on synthetic data. The joint 3D inversion of synthetic gravity and magnetic data shows a noticeable improvement in the results and increases the efficiency of the algorithm for large-scale problems. Additionally, a real gravity and magnetic dataset from the Jalalabad mine in southeastern Iran was tested. The results obtained by the improved joint 3D cross-gradient inversion with the compactness constraint showed a mineralised zone in the depth interval of about 110-300 m, which is in good agreement with the available drilling data. This further confirms the accuracy and improvement of the inversion algorithm.
A Summary of the Space-Time Conservation Element and Solution Element (CESE) Method
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen J.
2015-01-01
The space-time Conservation Element and Solution Element (CESE) method for solving conservation laws is examined for its development motivation and design requirements. The characteristics of the resulting scheme are discussed. The discretization of the Euler equations is presented to show readers how to construct a scheme based on the CESE method. The differences and similarities between the CESE method and other traditional methods are discussed. The strengths and weaknesses of the method are also addressed.
The numerical solution of linear multi-term fractional differential equations: systems of equations
NASA Astrophysics Data System (ADS)
Edwards, John T.; Ford, Neville J.; Simpson, A. Charles
2002-11-01
In this paper, we show how the numerical approximation of the solution of a linear multi-term fractional differential equation can be calculated by reduction of the problem to a system of ordinary and fractional differential equations each of order at most unity. We begin by showing how our method applies to a simple class of problems and we give a convergence result. We solve the Bagley Torvik equation as an example. We show how the method can be applied to a general linear multi-term equation and give two further examples.
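As a small numerical companion to the abstract above, the fractional derivatives involved can be approximated with the Grünwald-Letnikov scheme. This is an illustrative check of a single fractional derivative against its analytic value, not the authors' reduction of a multi-term equation to a system of order at most unity.

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """First-order Grünwald-Letnikov approximation of the fractional
    derivative D^alpha f at t, lower terminal 0 (Riemann-Liouville and
    Caputo coincide here when f(0) = 0)."""
    n = int(t / h)
    w, total = 1.0, f(t)            # w_0 = 1
    for j in range(1, n + 1):
        w *= (j - 1 - alpha) / j    # recursion for (-1)^j C(alpha, j)
        total += w * f(t - j * h)
    return total / h**alpha
```

For f(t) = t and alpha = 1/2, the exact half-derivative at t = 1 is 2/sqrt(pi) ≈ 1.1284, and the scheme reproduces it to within its O(h) error.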
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puerari, Ivânio; Elmegreen, Bruce G.; Block, David L., E-mail: puerari@inaoep.mx
2014-12-01
We examine 8 μm IRAC images of the grand design two-arm spiral galaxies M81 and M51 using a new method whereby pitch angles are locally determined as a function of scale and position, in contrast to traditional Fourier transform spectral analyses which fit to average pitch angles for whole galaxies. The new analysis is based on a correlation between pieces of a galaxy in circular windows of (lnR,θ) space and logarithmic spirals with various pitch angles. The diameter of the windows is varied to study different scales. The result is a best-fit pitch angle to the spiral structure as a function of position and scale, or a distribution function of pitch angles as a function of scale for a given galactic region or area. We apply the method to determine the distribution of pitch angles in the arm and interarm regions of these two galaxies. In the arms, the method reproduces the known pitch angles for the main spirals on a large scale, but also shows higher pitch angles on smaller scales resulting from dust feathers. For the interarms, there is a broad distribution of pitch angles representing the continuation and evolution of the spiral arm feathers as the flow moves into the interarm regions. Our method shows a multiplicity of spiral structures on different scales, as expected from gas flow processes in a gravitating, turbulent and shearing interstellar medium. We also present results for M81 using classical 1D and 2D Fourier transforms, together with a new correlation method, which shows good agreement with conventional 2D Fourier transforms.
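The core geometric fact behind the method is that a logarithmic spiral of pitch angle p traces a straight line of slope 1/tan(p) in (lnR, θ) coordinates, so correlating a window with candidate lines recovers the local pitch angle. The following sketch, on a synthetic window, assumes this simplified line-summing form of the correlation; the paper's exact windowing and normalization are not reproduced.

```python
import numpy as np

def pitch_angle_correlation(window, pitches_deg):
    """Score each candidate pitch angle by summing window intensity along
    the straight line a logarithmic spiral traces in (ln R, theta) space.
    Rows index ln R, columns index theta; lines pass through the center."""
    n_u, n_v = window.shape
    u = np.arange(n_u) - n_u // 2
    scores = []
    for p in pitches_deg:
        slope = 1.0 / np.tan(np.radians(p))   # d(theta)/d(ln R)
        v = np.round(slope * u).astype(int) + n_v // 2
        ok = (v >= 0) & (v < n_v)
        scores.append(window[u[ok] + n_u // 2, v[ok]].sum())
    return np.array(scores)
```

Drawing a synthetic spiral track with a known pitch angle into a window and scoring a grid of candidates returns the true angle as the best fit.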
Friedrich, Ralf P; Janko, Christina; Poettler, Marina; Tripal, Philipp; Zaloga, Jan; Cicha, Iwona; Dürr, Stephan; Nowak, Johannes; Odenbach, Stefan; Slabu, Ioana; Liebl, Maik; Trahms, Lutz; Stapf, Marcus; Hilger, Ingrid; Lyer, Stefan; Alexiou, Christoph
2015-01-01
Due to their special physicochemical properties, iron nanoparticles offer new promising possibilities for biomedical applications. For bench to bedside translation of superparamagnetic iron oxide nanoparticles (SPIONs), safety issues have to be comprehensively clarified. To understand concentration-dependent nanoparticle-mediated toxicity, the exact quantification of intracellular SPIONs by reliable methods is of great importance. In the present study, we compared three different SPION quantification methods (ultraviolet spectrophotometry, magnetic particle spectroscopy, atomic absorption spectroscopy) and discussed the shortcomings and advantages of each method. Moreover, we used those results to evaluate the possibility of using flow cytometry to determine the cellular SPION content. For this purpose, we correlated the side scatter data received from flow cytometry with the actual cellular SPION amount. We showed that flow cytometry provides a rapid and reliable method to assess the cellular SPION content. Our data also demonstrate that internalization of iron oxide nanoparticles in human umbilical vein endothelial cells depends strongly on the SPION type and results in a dose-dependent increase of toxicity. Thus, treatment with lauric acid-coated SPIONs (SEONLA) resulted in a significant increase in the intensity of side scatter and toxicity, whereas SEONLA with an additional protein corona formed by bovine serum albumin (SEONLA-BSA) and commercially available Rienso® particles showed only a minimal increase in both side scatter intensity and cellular toxicity. The increase in side scatter was in accordance with the measurements of SPION content by the atomic absorption spectroscopy reference method. In summary, our data show that flow cytometry analysis can be used for estimation of the uptake of SPIONs by mammalian cells and provides a fast tool for scientists to evaluate the safety of nanoparticle products. PMID:26170658
Evaluation of a new reagent for anti-cytomegalovirus and anti-Epstein-Barr virus immunoglobulin G.
Gutierrez, J; Maroto, M D; Piédrola, G
1994-01-01
The Enzygnost alpha method was tested against the complement fixation test and anti-VCA immunofluorescence to determine the respective titers of anti-cytomegalovirus and anti-Epstein-Barr virus immunoglobulin G antibodies. For cytomegalovirus, the Enzygnost results showed 97.99% agreement with the readings obtained by the alternative method, with 100% sensitivity and 93.7% specificity. For Epstein-Barr virus, Enzygnost showed 97.71% agreement, 100% sensitivity, and 91.11% specificity. PMID:7814510
An h-p Taylor-Galerkin finite element method for compressible Euler equations
NASA Technical Reports Server (NTRS)
Demkowicz, L.; Oden, J. T.; Rachowicz, W.; Hardy, O.
1991-01-01
An extension of the familiar Taylor-Galerkin method to arbitrary h-p spatial approximations is proposed. Boundary conditions are analyzed, and a linear stability result for arbitrary meshes is given, showing unconditional stability for the implicitness parameter alpha not less than 0.5. The wedge and blunt body problems are solved with linear, quadratic, and cubic elements and h-adaptivity, showing the feasibility of higher orders of approximation for problems with shocks.
Mahboudi, Hossein; Kazemi, Bahram; Soleimani, Masoud; Hanaee-Ahvaz, Hana; Ghanbarian, Hossein; Bandehpour, Mojgan; Enderami, Seyed Ehsan; Kehtari, Mousa; Barati, Ghasem
2018-02-15
Mesenchymal stem cells (MSCs) from bone marrow hold great potential as a cell source for cartilage repair. The objective of our study was to differentiate MSCs toward chondrocytes using a nanofiber-based polyethersulfone (PES) scaffold and to enhance chondrogenic differentiation of BMSCs in vitro. MSCs were harvested from human bone marrow and the PES scaffold was fabricated via electrospinning. The isolated cells were cultured on the PES scaffold and in a scaffold-free condition. After 21 days, real-time PCR was performed to evaluate cartilage-specific genes at the mRNA level. To confirm our results, we also performed immunocytochemistry and SEM imaging. Flow cytometry confirmed the nature of the isolated adherent cells. Immunocytochemistry and SEM imaging confirmed the differentiation of MSCs toward chondrocytes. Real-time PCR showed significantly increased expression of collagen type II and aggrecan on the PES scaffold compared to the mRNA levels measured in the scaffold-free condition. Downregulation of collagen type I was observed on the PES scaffold compared with the scaffold-free condition at day 21. Both methods showed a similar pattern of SOX9 expression. Our results showed that the PES scaffold maintains BMSC proliferation and differentiation and can significantly enhance chondrogenic differentiation of BMSCs. PES scaffold-seeded BMSCs showed the highest capacity for differentiation into chondrocyte-like cells. Copyright © 2017 Elsevier B.V. All rights reserved.
Determination of Failure Point of Asphalt-Mixture Fatigue-Test Results Using the Flow Number Method
NASA Astrophysics Data System (ADS)
Wulan, C. E. P.; Setyawan, A.; Pramesti, F. P.
2018-03-01
The failure point of the results of fatigue tests of asphalt mixtures performed in controlled-stress mode is difficult to determine. However, several methods from empirical studies are available to solve this problem. The objectives of this study are to determine the fatigue failure point of the results of indirect tensile fatigue tests using the Flow Number method and to determine the best Flow Number model for the asphalt mixtures tested. To achieve these goals, the best of three asphalt mixtures was first selected based on their Marshall properties. Next, the Indirect Tensile Fatigue Test was performed on the chosen mixture. The stress-controlled fatigue tests were conducted at a temperature of 20°C and a frequency of 10 Hz, with the application of three loads: 500, 600, and 700 kPa. The last step was the application of the Flow Number methods, namely the Three-Stages Model, FNest Model, Francken Model, and Stepwise Method, to the results of the fatigue tests to determine the failure point of the specimens. The chosen asphalt mixture was an EVA (ethylene vinyl acetate) polymer-modified asphalt mixture with 6.5% OBC (Optimum Bitumen Content). The results of this study show that the failure points of the EVA-modified mixture under loads of 500, 600, and 700 kPa are 6621, 4841, and 611 cycles for the Three-Stages Model; 4271, 3266, and 537 for the FNest Model; 3401, 2431, and 421 for the Francken Model; and 6901, 6841, and 1291 for the Stepwise Method, respectively. These results show that the larger the load, the smaller the number of cycles to failure. The best FN results are given by the Three-Stages Model and the Stepwise Method, which exhibit extreme increases after the constant development of accumulated strain.
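The common thread of the Flow Number models above is locating the transition from the constant (secondary) stage of accumulated strain to the tertiary stage of rapid growth. A generic version, assumed here for illustration, takes the flow number as the cycle at which the permanent-strain rate is minimal; the Three-Stages, FNest, Francken, and Stepwise fitting procedures themselves are not reproduced.

```python
import numpy as np

def flow_number(strain, cycles):
    """Flow number as the cycle at which the permanent-strain rate is
    minimal, i.e. the onset of tertiary flow (generic minimum-rate
    definition, not any one of the paper's fitted models)."""
    rate = np.gradient(strain, cycles)   # numerical strain rate d(eps)/dN
    return cycles[int(np.argmin(rate))]
```

On a synthetic Francken-shaped curve (power-law primary/secondary stage plus an exponential tertiary term), the minimum-rate cycle falls where the exponential term starts to dominate.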
NASA Astrophysics Data System (ADS)
Moreno, Claudia E.; Guevara, Roger; Sánchez-Rojas, Gerardo; Téllez, Dianeis; Verdú, José R.
2008-01-01
Environmental assessment at the community level in highly diverse ecosystems is limited by taxonomic constraints and by statistical methods requiring true replicates. Our objective was to show how diverse systems can be studied at the community level using higher taxa as biodiversity surrogates and re-sampling methods to allow comparisons. To illustrate this we compared the abundance, richness, evenness and diversity of the litter fauna in a pine-oak forest in central Mexico among seasons, sites and collecting methods. We also assessed changes in the abundance of trophic guilds and evaluated the relationships between community parameters and litter attributes. With the direct search method we observed differences in the rate of taxa accumulation between sites. Bootstrap analysis showed that abundance varied significantly between seasons and sampling methods, but not between sites. In contrast, diversity and evenness were significantly higher at the managed than at the non-managed site. Tree regression models show that abundance varied mainly between seasons, whereas taxa richness was affected by litter attributes (composition and moisture content). The abundance of trophic guilds varied among methods and seasons, but overall we found that parasitoids, predators and detritivores decreased under management. Therefore, although our results suggest that management has positive effects on the richness and diversity of litter fauna, the analysis of trophic guilds revealed a contrasting story. Our results indicate that functional groups and re-sampling methods may be used as tools for describing community patterns in highly diverse systems.
Also, the higher taxa surrogacy could be seen as a preliminary approach when it is not possible to identify the specimens at a low taxonomic level in a reasonable period of time and in a context of limited financial resources, but further studies are needed to test whether the results are specific to a system or whether they are general with regards to land management.
[Analysis on microdialysis probe recovery of baicalin in vitro and in vivo based on LC-MS/MS].
Chen, Teng-Fei; Liu, Jian-Xun; Zhang, Ying; Lin, Li; Song, Wen-Ting; Yao, Ming-Jiang
2017-06-01
To further study the brain distribution and pharmacokinetics of baicalin in the intercellular fluid of the brain, the recovery rate and stability of brain and blood microdialysis probes for baicalin were studied in vitro and in vivo. The concentration of baicalin in brain and blood microdialysates was determined by LC-MS/MS and the probe recovery for baicalin was calculated. The effects of different flow rates (0.50, 1.0, 1.5, 2.0, 3.0 μL•min⁻¹) on recovery in vitro were determined by the incremental and decrement methods. The effects of different drug concentrations (50.00, 200.0, 500.0, 1 000 μg•L⁻¹) and numbers of uses (0, 1, 2) on recovery in vitro were determined by the incremental method. The stability of probe recovery and the effect of flow rate on recovery in vivo were determined by the decrement method, and the results were compared with those of the in vitro trial. Under the same concentration, the in vitro recovery of the brain and blood probes of baicalin decreased with increasing flow rate; at the same flow rate, different concentrations of baicalin had little influence on the recovery. Probes that had been used twice and flushed successively with 2% heparin sodium solution and ultrapure water showed no obvious change in recovery. In vitro recovery rates obtained by the incremental and decrement methods were approximately equal under the same conditions, and the in vivo recovery determined by the decrement method was similar to the in vitro results; both showed good stability within 10 h. The results showed that the decrement method can be used for pharmacokinetic studies of baicalin and, at the same time, to determine probe recovery in vivo. Copyright© by the Chinese Pharmaceutical Association.
Detecting long-term growth trends using tree rings: a critical evaluation of methods.
Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A
2015-05-01
Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods strongly differ in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results when applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy in detecting strong imposed trends. However, these were considerably lower in the weak- or no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences the results of growth-trend studies.
We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analysis. Finally, we recommend SCI and RCS, as these methods showed highest reliability to detect long-term growth trends. © 2014 John Wiley & Sons Ltd.
Fast calculation of low altitude disturbing gravity for ballistics
NASA Astrophysics Data System (ADS)
Wang, Jianqiang; Wang, Fanghao; Tian, Shasha
2018-03-01
Fast calculation of disturbing gravity is a key technology in ballistics, and spherical cap harmonic (SCH) theory can be used to solve this problem. In the adjusted spherical cap harmonic (ASCH) method, the spherical cap coordinates are projected onto global coordinates, and the non-integer-degree associated Legendre functions (ALF) of SCH are replaced by the integer-degree ALF of spherical harmonics (SH). This new method is called virtual spherical harmonics (VSH), and numerical experiments were carried out to test its effectiveness. Values from an Earth gravity model were taken as the theoretical observations, and a model of the regional gravity field was constructed with the new method. Simulation results show that the approximation errors are less than 5 mGal in the low-altitude range of the central region. In addition, numerical experiments were conducted to compare the calculation speeds of the SH, SCH and VSH models; the results show that the calculation speed of the VSH model is improved by one order of magnitude over a small region.
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction is often affected. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of an image effectively and provides a sound basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected according to the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
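The sparsity-estimation step described above (2D DCT, energy normalization, descending sort, count of dominant coefficients) can be sketched directly. The 0.99 energy threshold is an assumed value for illustration; the paper's threshold is not specified here.

```python
import numpy as np
from scipy.fft import dctn

def estimate_sparsity(image, energy_threshold=0.99):
    """Estimate image sparsity as the fraction of 2D-DCT coefficients
    needed to capture `energy_threshold` of the total energy:
    normalize coefficient energies, sort descending, count until the
    cumulative energy reaches the threshold."""
    coeffs = dctn(image, norm='ortho')
    energy = np.sort(coeffs.ravel() ** 2)[::-1]
    energy = energy / energy.sum()
    k = np.searchsorted(np.cumsum(energy), energy_threshold) + 1
    return k / image.size
```

A smooth image concentrates its energy in a few DCT coefficients and yields a small sparsity estimate (hence few required measurements), while white noise spreads energy across most coefficients and yields an estimate near one.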
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of video reconstruction is often affected. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are energy-normalized and sorted in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of a video frame effectively and provides a sound basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected according to the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
Investigation of Multiphase Flow in a Packed Bed Reactor Under Microgravity Conditions
NASA Technical Reports Server (NTRS)
Lian, Yongsheng; Motil, Brian; Rame, Enrique
2016-01-01
In this paper we study two-phase flow phenomena in a packed bed reactor using an integrated experimental and numerical method. The cylindrical bed is filled with uniformly sized spheres. In the experiment, water and air are injected into the bed simultaneously, and the pressure distribution along the bed will be measured. The numerical simulation is based on a two-phase flow solver which solves the Navier-Stokes equations on Cartesian grids. A novel coupled level-set and moment-of-fluid method is used to construct the interface. A sequential method is used to position the spheres in the cylinder. Preliminary experimental results showed that the tested flow rates resulted in pulse flow. The numerical simulation revealed that air bubbles could merge into larger bubbles and could also break up into smaller bubbles to pass through the pores in the bed. Preliminary results showed that flow passed through regions where the porosity is high. Comparisons between the experimental and numerical results in terms of pressure distributions at different injection rates will be conducted, as will comparisons of the flow phenomena under terrestrial gravity and microgravity.
Zhang, Shunqi; Yin, Tao; Ma, Ren; Liu, Zhipeng
2015-08-01
Functional imaging of biological electrical characteristics based on the magneto-acoustic effect provides valuable information about tissue for early tumor diagnosis; analysis of the time and frequency characteristics of the magneto-acoustic signal is therefore important for image reconstruction. This paper proposes a wave-summing method based on the Green's function solution for the acoustic source of the magneto-acoustic effect. Simulations and analyses under a quasi-1D transmission condition were carried out on the time and frequency characteristics of the magneto-acoustic signals of models with different thicknesses. The simulated magneto-acoustic signals were verified through experiments. Simulation results for different thicknesses showed that the time-frequency characteristics of the magneto-acoustic signal reflect the thickness of the sample. A thin sample (less than one wavelength of the pulse) and a thick sample (larger than one wavelength) showed different summed waveforms and frequency characteristics, owing to the difference in summing thickness. Experimental results verified the theoretical analysis and simulation results. This research lays a foundation for acoustic-source and conductivity reconstruction in media of different thickness in magneto-acoustic imaging.
NASA Astrophysics Data System (ADS)
Wang, Jiangbo; Liu, Junhui; Li, Tiantian; Yin, Shuo; He, Xinhui
2018-01-01
Monthly electricity sales forecasting is basic work for ensuring the safety of the power system. This paper presents a monthly electricity sales forecasting method that comprehensively considers the coupled factors of temperature, economic growth, electric power replacement and business expansion. The mathematical model is constructed using a regression method. Simulation results show that the proposed method is accurate and effective.
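A minimal sketch of such a multi-factor regression model (the factor values, coefficients and forecast inputs below are invented for illustration, not the paper's data):

```python
import numpy as np

# Hypothetical monthly drivers: mean temperature, an economic index,
# electric-power-replacement load, and new business expansion capacity.
rng = np.random.default_rng(0)
X = rng.uniform(size=(36, 4))                     # 36 months of factor data
true_coef = np.array([120.0, 80.0, 30.0, 15.0])   # assumed sensitivities
sales = X @ true_coef + 500.0 + rng.normal(scale=1.0, size=36)

# Fit sales = b0 + b1*temp + b2*econ + ... by ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, sales, rcond=None)

# Forecast the next month from its (assumed) factor values
next_month = np.array([1.0, 0.6, 0.5, 0.4, 0.3])  # [intercept, factors]
forecast = next_month @ coef
```

In practice each factor series would come from weather records, economic statistics and grid operating data rather than random draws.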
NASA Astrophysics Data System (ADS)
Retheesh, R.; Ansari, Md. Zaheer; Radhakrishnan, P.; Mujeeb, A.
2018-03-01
This study demonstrates the feasibility of a view-based method, the motion history image (MHI), for mapping biospeckle activity around the scar region of a green orange fruit. Comparison of MHI with routine intensity-based methods validated the effectiveness of the proposed method. The results show that MHI can be implemented as an alternative online image-processing tool in biospeckle analysis.
Qi, Tingting; Huang, Chenchen; Yan, Shan; Li, Xiu-Juan; Pan, Si-Yi
2015-11-01
Three kinds of magnetite/reduced graphene oxide (MRGO) nanocomposites were prepared by solvothermal, hydrothermal and co-precipitation methods. The as-prepared nanocomposites were characterized and compared by Fourier transform infrared spectroscopy, scanning electron microscopy, transmission electron microscopy, X-ray diffraction and zeta potential. The results showed that MRGO made by the different methods differed in surface functional groups, crystal structure, particle size, surface morphology and surface charge. Owing to these differing features, the nanocomposites displayed dissimilar performance when used to adsorb drugs, dyes and metal ions. The MRGO prepared by the co-precipitation method showed a special adsorption ability for negative ions, while that synthesized by the solvothermal method had the best extraction ability and reusability of the three and showed good prospects for magnetic solid-phase extraction. Therefore, choosing the right preparation method before application is highly recommended in order to attain the best extraction performance. Copyright © 2015 Elsevier B.V. All rights reserved.
A k-Space Method for Moderately Nonlinear Wave Propagation
Jing, Yun; Wang, Tianren; Clement, Greg T.
2013-01-01
A k-space method for moderately nonlinear wave propagation in absorptive media is presented. The Westervelt equation is first transferred into k-space via Fourier transformation, and is solved by a modified wave-vector time-domain scheme. The present approach is not limited to forward propagation or parabolic approximation. One- and two-dimensional problems are investigated to verify the method by comparing results to analytic solutions and finite-difference time-domain (FDTD) method. It is found that to obtain accurate results in homogeneous media, the grid size can be as little as two points per wavelength, and for a moderately nonlinear problem, the Courant–Friedrichs–Lewy number can be as large as 0.4. Through comparisons with the conventional FDTD method, the k-space method for nonlinear wave propagation is shown here to be computationally more efficient and accurate. The k-space method is then employed to study three-dimensional nonlinear wave propagation through the skull, which shows that a relatively accurate focusing can be achieved in the brain at a high frequency by sending a low frequency from the transducer. Finally, implementations of the k-space method using a single graphics processing unit shows that it required about one-seventh the computation time of a single-core CPU calculation. PMID:22899114
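The core of any k-space scheme is that spatial derivatives become exact multiplications in the Fourier domain, which is what allows the coarse two-points-per-wavelength grids mentioned above. A tiny 1D sketch of that building block (not the full Westervelt solver):

```python
import numpy as np

# Spectral derivative: in k-space, d/dx becomes multiplication by i*k.
N = 64
L = 2 * np.pi
x = np.arange(N) * (L / N)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # angular wavenumbers

f = np.sin(3 * x)
# Exact (to machine precision) for band-limited functions on this grid
df = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
exact = 3 * np.cos(3 * x)
```

A finite-difference stencil of any fixed order would show a truncation error here; the spectral derivative does not, which is the source of the method's accuracy advantage over FDTD.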
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of a lens's limited depth of field, digital cameras cannot acquire an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion can effectively solve this problem, but block-based fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed; it utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm with LUE-SSIM as the objective function is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus-blurred images. A multi-focus image fusion experiment is also carried out to verify the proposed method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and effectively preserves undistorted edge details in the focused regions of the source images.
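A stripped-down sketch of block-based multi-focus fusion. Note the hedges: local variance stands in for the paper's LUE-SSIM focus metric, and the block size is fixed rather than PSO-optimized:

```python
import numpy as np

def fuse_blocks(img_a, img_b, block=8):
    """Per block, keep the source block with higher variance (a simple
    sharpness proxy; the paper instead uses LUE-SSIM and optimizes the
    block size with PSO)."""
    fused = np.empty_like(img_a)
    h, w = img_a.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = img_a[i:i+block, j:j+block]
            b = img_b[i:i+block, j:j+block]
            fused[i:i+block, j:j+block] = a if a.var() >= b.var() else b
    return fused

# Synthetic test: each source image is sharp in one half, flat in the other
rng = np.random.default_rng(1)
sharp = rng.normal(size=(32, 32))
left_focus = sharp.copy();  left_focus[:, 16:] = 0.0
right_focus = sharp.copy(); right_focus[:, :16] = 0.0
out = fuse_blocks(left_focus, right_focus)
```

The blocking artifacts the paper targets arise exactly at the seams of such hard per-block decisions, which is why an edge-aware metric and an adaptive block size help.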
Towards an Optimized Method of Olive Tree Crown Volume Measurement
Miranda-Fuentes, Antonio; Llorens, Jordi; Gamarra-Diezma, Juan L.; Gil-Ribes, Jesús A.; Gil, Emilio
2015-01-01
Accurate crown characterization of large isolated olive trees is vital for adjusting spray doses in three-dimensional crop agriculture. Among the many methodologies available, laser sensors have proved to be the most reliable and accurate. However, their operation is time consuming and requires specialist knowledge and so a simpler crown characterization method is required. To this end, three methods were evaluated and compared with LiDAR measurements to determine their accuracy: Vertical Crown Projected Area method (VCPA), Ellipsoid Volume method (VE) and Tree Silhouette Volume method (VTS). Trials were performed in three different kinds of olive tree plantations: intensive, adapted one-trunked traditional and traditional. In total, 55 trees were characterized. Results show that all three methods are appropriate to estimate the crown volume, reaching high coefficients of determination: R2 = 0.783, 0.843 and 0.824 for VCPA, VE and VTS, respectively. However, discrepancies arise when evaluating tree plantations separately, especially for traditional trees. Here, correlations between LiDAR volume and other parameters showed that the Mean Vector calculated for VCPA method showed the highest correlation for traditional trees, thus its use in traditional plantations is highly recommended. PMID:25658396
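In its simplest form, the Ellipsoid Volume method (VE) reduces to the standard ellipsoid formula applied to field measurements of crown diameters and crown height. A sketch with made-up measurements (the exact measurement protocol in the study may differ):

```python
import math

def ellipsoid_crown_volume(diameter_ns, diameter_ew, crown_height):
    """Ellipsoid Volume method (VE): model the crown as an ellipsoid whose
    axes are the two horizontal crown diameters and the crown height."""
    a, b, c = diameter_ns / 2.0, diameter_ew / 2.0, crown_height / 2.0
    return 4.0 / 3.0 * math.pi * a * b * c

# Hypothetical olive crown: 4 m and 3 m horizontal diameters, 3 m tall
v = ellipsoid_crown_volume(4.0, 3.0, 3.0)   # metres in, cubic metres out
```

Such closed-form estimates are what the study regresses against LiDAR-derived volumes to obtain the reported coefficients of determination.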
NASA Astrophysics Data System (ADS)
Tierney, Craig Cristy
Presented here are several investigations of ocean tides derived from TOPEX/POSEIDON (T/P) altimetry and numerical models. The purpose of these investigations is to study the short wavelength features in the T/P data and to preserve these wavelengths in global ocean tide models that are accurate in shallow and deep waters. With these new estimates, effects of the tides on loading, Earth's rotation, and tidal energetics are studied. To preserve tidal structure, tides have been estimated along the ground track of T/P by the harmonic and response methods using 4.5 years of data. Results show the two along-track (AT) estimates agree with each other and with other tide models for those components with minimal aliasing problems. Comparisons to global models show that there is tidal structure in the T/P data that is not preserved with current gridding methods. Error estimates suggest there is accurate information in the T/P data from shallow waters that can be used to improve tidal models. It has been shown by Ray and Mitchum (1996) that the first mode baroclinic tide can be separated from AT tide estimates by filtering. This method has been used to estimate the first mode semidiurnal baroclinic tides globally. Estimates for M2 show good correlation with known regions of baroclinic tide generation. Using gridded, filtered AT estimates, a lower bound on the energy contained in the M2 baroclinic tide is 50 PJ. Inspired by the structure found in the AT estimates, a gridding method is presented that preserves tidal structure in the T/P data. These estimates are assimilated into a nonlinear, finite difference, global barotropic tidal model. Results from the 8 major tidal constituents show the model performs equivalently to other models in the deep waters, and is significantly better in the shallow waters. Crossover variance is reduced from 14 cm to 10 cm in the shallow waters. Comparisons to Earth rotation show good agreement to results from VLBI data. 
Tidal energetics computed from the models show good agreement with previous results. PE/KE ratios and quality factors are more consistent in each frequency band than in previous results.
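The harmonic method referred to above amounts to least-squares fitting of sinusoids at known constituent frequencies to the sea-level record. A minimal single-constituent (M2) sketch on synthetic hourly data (the amplitudes and noise level are invented):

```python
import numpy as np

# Harmonic tidal analysis: fit A*cos(w t) + B*sin(w t) at a known
# constituent frequency by least squares. Only M2 is used here; a real
# analysis solves for many constituents simultaneously.
M2_PERIOD_H = 12.4206012             # M2 period in hours
w = 2 * np.pi / M2_PERIOD_H

t = np.arange(0, 24 * 30, 1.0)       # 30 days of hourly samples
rng = np.random.default_rng(2)
eta = 0.8 * np.cos(w * t) + 0.3 * np.sin(w * t) \
      + rng.normal(scale=0.05, size=t.size)

G = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
(A, B, mean), *_ = np.linalg.lstsq(G, eta, rcond=None)
amplitude = np.hypot(A, B)           # recovered M2 amplitude
```

Along-track altimeter analysis works the same way, except the sampling is set by the T/P repeat period, which is what creates the aliasing problems mentioned in the abstract.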
Detection of coupling delay: A problem not yet solved
NASA Astrophysics Data System (ADS)
Coufal, David; Jakubík, Jozef; Jajcay, Nikola; Hlinka, Jaroslav; Krakovská, Anna; Paluš, Milan
2017-08-01
Nonparametric detection of coupling delay in unidirectionally and bidirectionally coupled nonlinear dynamical systems is examined. Both continuous and discrete-time systems are considered. Two methods of detection are assessed—the method based on conditional mutual information—the CMI method (also known as the transfer entropy method) and the method of convergent cross mapping—the CCM method. Computer simulations show that neither method is generally reliable in the detection of coupling delays. For continuous-time chaotic systems, the CMI method appears to be more sensitive and applicable in a broader range of coupling parameters than the CCM method. In the case of tested discrete-time dynamical systems, the CCM method has been found to be more sensitive, while the CMI method required much stronger coupling strength in order to bring correct results. However, when studied systems contain a strong oscillatory component in their dynamics, results of both methods become ambiguous. The presented study suggests that results of the tested algorithms should be interpreted with utmost care and the nonparametric detection of coupling delay, in general, is a problem not yet solved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim
2013-03-15
Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. 
Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed comparable nRMSE of ≈ 0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈ 99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results of a phantom, a porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
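A sketch of the sparse-to-dense interpolation step, using SciPy's thin-plate radial basis function. The control-point coordinates and motion vectors below are invented, not taken from the paper's surface model:

```python
import numpy as np
from scipy.interpolate import Rbf

# Sparse 2D motion vectors at a few control points (hypothetical values)
cx = np.array([0.0, 1.0, 0.0, 1.0, 0.5])
cy = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
mvx = np.array([0.1, 0.2, 0.0, 0.3, 0.15])   # x-component of motion
mvy = np.array([0.0, 0.1, 0.2, 0.1, 0.05])   # y-component of motion

# One thin-plate spline per vector component yields a dense MVF;
# TPS interpolates the control points exactly.
tps_x = Rbf(cx, cy, mvx, function='thin_plate')
tps_y = Rbf(cx, cy, mvy, function='thin_plate')

gx, gy = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
dense_mvf = np.stack([tps_x(gx, gy), tps_y(gx, gy)])   # shape (2, 8, 8)
```

The real problem is 3D and time-resolved, but the structure is the same: fit one smooth interpolant per motion component to the sparse control points, then evaluate it on the reconstruction grid.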
Hierarchical Si/ZnO trunk-branch nanostructure for photocurrent enhancement
2014-01-01
Hierarchical Si/ZnO trunk-branch nanostructures (NSs) have been synthesized by the hot-wire-assisted chemical vapor deposition method for trunk Si nanowires (NWs) on an indium tin oxide (ITO) substrate, followed by the vapor transport condensation (VTC) method for zinc oxide (ZnO) nanorods (NRs) grown laterally from each Si NW. A spin coating method was used for ZnO seeding. This compares favorably with the sputtering approach used by other groups for the same process, which results only in the growth of ZnO NRs on top of the Si trunk. Our method shows improvement by having the growth evenly distributed on the lateral sides and caps of the Si trunks, resulting in pine-leaf-like NSs. Field emission scanning electron microscope images show hierarchical nanostructures resembling the leaves of pine trees. The single-crystalline structure of the ZnO branches grown laterally from the crystalline Si trunk was identified using a lattice-resolved transmission electron microscope. A preliminary photoelectrochemical (PEC) cell test was set up to characterize the photocurrent of sole arrays of ZnO NRs grown by both the hydrothermal-grown (HTG) method and the VTC method on ITO substrates. VTC-grown ZnO NRs showed a greater photocurrent effect due to their better structural properties. The measured photocurrent was also compared with that of the array of hierarchical Si/ZnO trunk-branch NSs. The cell with the array of Si/ZnO trunk-branch NSs revealed a four-fold enhancement in photocurrent density compared with the sole array of ZnO NRs obtained from the VTC process. PMID:25246872
Fullerton, Birgit; Pöhlmann, Boris; Krohn, Robert; Adams, John L; Gerlach, Ferdinand M; Erler, Antje
2016-10-01
To present a case study on how to compare various matching methods applying different measures of balance and to point out some pitfalls involved in relying on such measures. Administrative claims data from a German statutory health insurance fund covering the years 2004-2008. We applied three different covariate balance diagnostics to a choice of 12 different matching methods used to evaluate the effectiveness of the German disease management program for type 2 diabetes (DMPDM2). We further compared the effect estimates resulting from applying these different matching techniques in the evaluation of the DMPDM2. The choice of balance measure leads to different results on the performance of the applied matching methods. Exact matching methods performed well across all measures of balance, but resulted in the exclusion of many observations, leading to a change in the baseline characteristics of the study sample and also in the effect estimate of the DMPDM2. All PS-based methods showed similar effect estimates. Applying a higher matching ratio and using a larger variable set generally resulted in better balance. Using a generalized boosted model instead of logistic regression showed slightly better performance for balance diagnostics that take into account imbalances at higher moments. Best practice should include the application of several matching methods and thorough balance diagnostics. Applying matching techniques can provide a useful preprocessing step to reveal areas of the data that lack common support. The use of different balance diagnostics can be helpful for the interpretation of different effect estimates found with different matching methods. © Health Research and Educational Trust.
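One of the most common balance diagnostics in such studies is the standardized mean difference (SMD) of each covariate between treated and control groups. A sketch with invented data (the covariate and group sizes are hypothetical, not from the German claims data):

```python
import numpy as np

def standardized_mean_difference(treated, control):
    """SMD: difference in group means divided by the pooled standard
    deviation; a standard covariate balance diagnostic for matching."""
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2.0)
    return (treated.mean() - control.mean()) / pooled_sd

rng = np.random.default_rng(3)
age_treated = rng.normal(62.0, 8.0, 500)   # hypothetical DMP participants
age_control = rng.normal(58.0, 8.0, 500)   # hypothetical comparison group
smd = standardized_mean_difference(age_treated, age_control)
# |SMD| > 0.1 is a common rule of thumb for meaningful imbalance
```

The paper's point is that different diagnostics (SMD, variance ratios, higher-moment measures) can rank the same matching methods differently, so reporting several is advisable.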
Confidence-based ensemble for GBM brain tumor segmentation
NASA Astrophysics Data System (ADS)
Huo, Jing; van Rikxoort, Eva M.; Okada, Kazunori; Kim, Hyun J.; Pope, Whitney; Goldin, Jonathan; Brown, Matthew
2011-03-01
It is a challenging task to automatically segment glioblastoma multiforme (GBM) brain tumors on T1w post-contrast isotropic MR images. A semi-automated system using fuzzy connectedness has recently been developed for computing the tumor volume that reduces the cost of manual annotation. In this study, we propose an ensemble method that combines multiple segmentation results into a final ensemble segmentation. The method is evaluated on a dataset of 20 cases from a multi-center pharmaceutical drug trial and compared to the fuzzy connectedness method. Three individual methods were used in the framework: fuzzy connectedness, GrowCut, and voxel classification. The combination method is a confidence map averaging (CMA) method. The CMA method shows an improved ROC curve compared to the fuzzy connectedness method (p < 0.001), and the CMA ensemble result is more robust than the three individual methods.
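Confidence map averaging itself is a very simple combiner: average the per-voxel confidence maps of the individual segmenters and threshold the mean. A minimal sketch (the maps and the 0.5 threshold are invented for illustration):

```python
import numpy as np

def cma_ensemble(confidence_maps, threshold=0.5):
    """Average per-voxel confidence maps from several segmenters and
    threshold the mean to obtain the ensemble segmentation."""
    mean_conf = np.mean(confidence_maps, axis=0)
    return mean_conf >= threshold

# Three hypothetical 2x2 confidence maps (standing in for fuzzy
# connectedness, GrowCut and voxel classification), values in [0, 1]
m1 = np.array([[0.9, 0.8], [0.2, 0.1]])
m2 = np.array([[0.7, 0.4], [0.3, 0.2]])
m3 = np.array([[0.8, 0.9], [0.1, 0.0]])
seg = cma_ensemble([m1, m2, m3])
```

Sweeping the threshold over [0, 1] is what traces out the ROC curve used for the comparison in the abstract.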
Hyaluronan- and heparin-reduced silver nanoparticles with antimicrobial properties
Kemp, Melissa M; Kumar, Ashavani; Clement, Dylan; Ajayan, Pulickel; Mousa, Shaker
2009-01-01
Aims Silver nanoparticles exhibit unique antibacterial properties that make these ideal candidates for biological and medical applications. We utilized a clean method involving a single synthetic step to prepare silver nanoparticles that exhibit antimicrobial activity. Materials & methods These nanoparticles were prepared by reducing silver nitrate with diaminopyridinylated heparin (DAPHP) and hyaluronan (HA) polysaccharides and tested for their efficacy in inhibiting microbial growth. Results & discussion The resulting silver nanoparticles exhibit potent antimicrobial activity against Staphylococcus aureus and modest activity against Escherichia coli. Silver–HA showed greater antimicrobial activity than silver–DAPHP, while silver–glucose nanoparticles exhibited very weak antimicrobial activity. Neither HA nor DAPHP showed activity against S. aureus or E. coli. Conclusion These results suggest that DAPHP and HA silver nanoparticles have potential in antimicrobial therapeutic applications. PMID:19505245
NASA Astrophysics Data System (ADS)
Abitew, T. A.; van Griensven, A.; Bauwens, W.
2015-12-01
Evapotranspiration is the main process in hydrology (on average around 60% of the water balance), yet it has not received as much attention in the evaluation and calibration of hydrological models. In this study, remote sensing (RS) derived evapotranspiration (ET) is used to improve the spatially distributed representation of ET in SWAT model applications for the upper Mara basin (Kenya) and the Blue Nile basin (Ethiopia). The RS-derived ET data are obtained from recently compiled global datasets (continuous monthly data at 1 km resolution from the MOD16NBI, SSEBop, ALEXI and CMRSET models) and from regionally applied energy balance models (for several cloud-free days). The RS ET data are used in three ways: Method 1) to evaluate spatially distributed evapotranspiration model results; Method 2) to calibrate the evapotranspiration processes in the hydrological model; Method 3) to bias-correct the evapotranspiration in the hydrological model during simulation, after changing the SWAT code. An inter-comparison of the RS ET products shows that at present there is a significant bias, but at the same time agreement on the spatial variability of ET. The ensemble mean of the different ET products seems the most realistic estimate and was used further in this study. The results show that: Method 1) the spatially mapped evapotranspiration of hydrological models shows clear differences when compared to RS-derived evapotranspiration (low correlations); 
in particular, evapotranspiration in forested areas is strongly underestimated compared to other land covers. Method 2) Calibration improves the correlations between the RS and hydrological model results to some extent. Method 3) Bias corrections are efficient in producing (seasonal or annual) evapotranspiration maps from hydrological models that are very similar to the patterns obtained from RS data. Though the bias correction is very efficient, it is advisable to improve the model results by better representing the ET processes through improved plant/crop computations, improved agricultural management practices, or improved meteorological data.
Estimating Cyanobacteria Community Dynamics and its Relationship with Environmental Factors
Luo, Wenhuai; Chen, Huirong; Lei, Anping; Lu, Jun; Hu, Zhangli
2014-01-01
The cyanobacteria community dynamics in two eutrophic freshwater bodies (Tiegang Reservoir and Shiyan Reservoir) was studied with both a traditional microscopic counting method and a PCR-DGGE genotyping method. Results showed that cyanobacterium Phormidium tenue was the predominant species; twenty-six cyanobacteria species were identified in water samples collected from the two reservoirs, among which fourteen were identified with the morphological method and sixteen with the PCR-DGGE method. The cyanobacteria community composition analysis showed a seasonal fluctuation from July to December. The cyanobacteria population peaked in August in both reservoirs, with cell abundances of 3.78 × 108 cells L-1 and 1.92 × 108 cells L-1 in the Tiegang and Shiyan reservoirs, respectively. Canonical Correspondence Analysis (CCA) was applied to further investigate the correlation between cyanobacteria community dynamics and environmental factors. The result indicated that the cyanobacteria community dynamics was mostly correlated with pH, temperature and total nitrogen. This study demonstrated that data obtained from PCR-DGGE combined with a traditional morphological method could reflect cyanobacteria community dynamics and its correlation with environmental factors in eutrophic freshwater bodies. PMID:24448632
Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard
2010-02-01
The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.
NASA Astrophysics Data System (ADS)
Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining
2017-12-01
We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some error with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. We then use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are regarded as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets randomly scattered within a confined, shallow subsurface volume. A field test was carried out to verify the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
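A bare-bones fuzzy c-means sketch of the clustering step. Hedges: the cluster count is fixed here, whereas the paper's self-adaptive variant also estimates the number of clusters, and the two point clouds below are synthetic stand-ins for the inversion-derived initial locations:

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means: alternate center updates (weighted means under
    memberships^m) and membership updates (inverse-distance weighting)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)               # memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Two clouds of noisy initial source locations around the true positions
rng = np.random.default_rng(4)
cloud_a = rng.normal([0.0, 0.0], 0.1, size=(50, 2))
cloud_b = rng.normal([5.0, 5.0], 0.1, size=(50, 2))
centers, u = fuzzy_c_means(np.vstack([cloud_a, cloud_b]), n_clusters=2)
```

The recovered cluster centroids play the role of the estimated target locations in the paper's pipeline.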
The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children.
Djalal, Farah Mutiasari; Ameel, Eef; Storms, Gert
2016-01-01
An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and which requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided; rather, the two variables are often significantly correlated in older children and even in adults.
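Comparing a ranking-based measure with a rating-based one naturally calls for a rank correlation. A sketch with invented data (the items, ranks and ratings below are hypothetical, not from the study):

```python
from scipy.stats import spearmanr

# Hypothetical typicality data for the concept "bird": a child's ranking
# (1 = most typical) and adults' mean ratings on a 1-7 scale
items = ["sparrow", "robin", "owl", "penguin", "ostrich"]
child_rank = [1, 2, 3, 4, 5]
adult_rating = [6.8, 6.5, 5.1, 2.3, 2.9]

# Negative rho is expected here, because rank 1 pairs with the
# highest rating
rho, p = spearmanr(child_rank, adult_rating)
```

Spearman's rho is invariant to the scale of the rating, which is what makes it suitable for validating a ranking task against numerical ratings.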
Global Optimal Trajectory in Chaos and NP-Hardness
NASA Astrophysics Data System (ADS)
Latorre, Vittorio; Gao, David Yang
This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
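The numerical-error-accumulation point is easy to demonstrate on the logistic equation mentioned above: iterating the same map at two floating-point precisions produces trajectories that soon disagree completely. This is only a demonstration of sensitivity to rounding, not the paper's canonical duality method:

```python
import numpy as np

# Logistic map x_{n+1} = r x_n (1 - x_n) iterated at two precisions.
# Chaotic dynamics (r = 4) amplifies tiny rounding differences roughly
# exponentially, so the trajectories soon decorrelate entirely.
r = 4.0
x64 = np.float64(0.4)
x32 = np.float32(0.4)
gap_at = {}
for n in range(1, 101):
    x64 = r * x64 * (1.0 - x64)
    x32 = np.float32(r) * x32 * (np.float32(1.0) - x32)
    gap_at[n] = abs(float(x64) - float(x32))
```

Neither trajectory is "the" true orbit; both are shadowed approximations, which is why the paper reformulates the discretized system as a global least-squares problem instead of trusting direct iteration.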
The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children
Ameel, Eef; Storms, Gert
2016-01-01
An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and which requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided; rather, the two variables are often significantly correlated in older children and even in adults. PMID:27322371
NASA Astrophysics Data System (ADS)
Givianrad, M. H.; Saber-Tehrani, M.; Aberoomand-Azar, P.; Mohagheghian, M.
2011-03-01
The applicability of the H-point standard additions method (HPSAM) to resolving the overlapping spectra of sulfamethoxazole and trimethoprim is verified by UV-vis spectrophotometry. The results show that the H-point standard additions method with simultaneous addition of both analytes is suitable for the simultaneous determination of sulfamethoxazole and trimethoprim in aqueous media. The results of applying the H-point standard additions method showed that the two drugs could be determined simultaneously with concentration ratios of sulfamethoxazole to trimethoprim varying from 1:18 to 16:1 in the mixed samples. The limits of detection were 0.58 and 0.37 μmol L-1 for sulfamethoxazole and trimethoprim, respectively. In addition, the mean calculated RSD (%) values were 1.63 and 2.01 for SMX and TMP, respectively, in synthetic mixtures. The proposed method has been successfully applied to the simultaneous determination of sulfamethoxazole and trimethoprim in synthetic, pharmaceutical formulation, and biological fluid samples.
High-order time-marching reinitialization for regional level-set functions
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-02-01
In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function, together with a high-order two-step reinitialization method that combines a closest-point finding procedure and the HJ-WENO scheme. The convergence failure of the closest-point finding procedure in three dimensions is addressed by employing a proposed multiple-junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.
Qian, Liwei; Zheng, Haoran; Zhou, Hong; Qin, Ruibin; Li, Jinlong
2013-01-01
The increasing availability of time series expression datasets, although promising, raises a number of new computational challenges. Accordingly, the development of suitable classification methods to make reliable and sound predictions is becoming a pressing issue. We propose, here, a new method to classify time series gene expression via integration of biological networks. We evaluated our approach on 2 different datasets and showed that the use of a hidden Markov model/Gaussian mixture models hybrid explores the time-dependence of the expression data, thereby leading to better prediction results. We demonstrated that the biclustering procedure identifies function-related genes as a whole, giving rise to high accordance in prognosis prediction across independent time series datasets. In addition, we showed that integration of biological networks into our method significantly improves prediction performance. Moreover, we compared our approach with several state-of-the-art algorithms and found that our method outperformed previous approaches with regard to various criteria. Finally, our approach achieved better prediction results on early-stage data, implying the potential of our method for practical prediction. PMID:23516469
Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model
Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.
2012-01-01
This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
Zhou, Shenglu; Su, Quanlong; Yi, Haomin
2017-01-01
Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results from the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows. The data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from BP neural network models have higher accuracy; the MSE values of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significantly skewed distributions, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE values of As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution. PMID:29278363
NASA Astrophysics Data System (ADS)
Murshid, N.; Kamil, N. A. F. M.; Kadir, A. A.
2018-04-01
Petroleum sludge is one of the major solid wastes generated in the petroleum industry. It generally contains a number of heavy metals, and one treatment gaining prominence for a variety of mixed organic and inorganic wastes is the solidification/stabilization (S/S) method. The treatment protects human health and the environment by immobilizing contaminants within the treated material and preventing their migration. In this study, the solidification/stabilization (S/S) method was used to treat petroleum sludge. Two hydration times, namely 7 and 28 days, were compared in these cement-based waste materials using the Synthetic Precipitate Leaching Procedure (SPLP), and the results were compared to United States Environmental Protection Agency (USEPA) standards. The leaching tests showed that a lower OPC percentage gave the maximum concentration of leached heavy metals, owing to a deficiency in calcium oxide (CaO), which can cause weak solidification in the mixture. Physical and mechanical tests conducted included compressive strength and density. The results show that additions of up to 30% PS give results that comply with the minimum landfill disposal limit, and that the correlation between strength and density is strong, with a regression coefficient of 82.7%. In conclusion, the S/S method can be an alternative disposal method for PS that complies with the standard for the minimum landfill disposal limit.
Platelet counting using the Coulter electronic counter.
Eggleton, M J; Sharp, A A
1963-03-01
A method for counting platelets in dilutions of platelet-rich plasma using the Coulter electronic counter is described.(1) The results obtained show that such platelet counts are at least as accurate as the best methods of visual counting. The various technical difficulties encountered are discussed.
How-to-Do-It: A Practical Method for Teaching Seed Stratification.
ERIC Educational Resources Information Center
Englert, Karen M.; Shontz, Nancy N.
1989-01-01
Described is a laboratory procedure for teaching seed stratification. Materials, methods, results, and applicability of the experiment are explained. Diagrams showing the percent of total germination as a function of stratification time and the germination rate of stratified seeds are included. (RT)
Boosting specificity of MEG artifact removal by weighted support vector machine.
Duan, Fang; Phothisonothai, Montri; Kikuchi, Mitsuru; Yoshimura, Yuko; Minabe, Yoshio; Watanabe, Kastumi; Aihara, Kazuyuki
2013-01-01
An automatic artifact removal method for magnetoencephalogram (MEG) recordings is presented in this paper. The proposed method is based on independent component analysis (ICA) and support vector machine (SVM) classification. In contrast to previous studies, we consider two factors that influence performance. First, the class imbalance of the independent components (ICs) of MEG is handled by a weighted SVM. Second, instead of simply setting a fixed weight for each class, a re-weighting scheme is used to preserve useful MEG ICs. Experimental results on a manually marked MEG dataset showed that the proposed method could correctly distinguish the artifacts from the MEG ICs, while 99.72% ± 0.67 of MEG ICs were preserved. The classification accuracy was 97.91% ± 1.39. In addition, the method was found not to be sensitive to individual differences: cross-validation (leave-one-subject-out) results showed an averaged accuracy of 97.41% ± 2.14.
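To illustrate the imbalance handling mentioned above, the sketch below computes the standard "balanced" per-class weights (inversely proportional to class frequency) commonly fed to a weighted SVM's per-class penalty. The paper's re-weighting scheme for preserving brain ICs is more elaborate and is not reproduced here; the class names and counts are invented.

```python
from collections import Counter

def balanced_class_weights(labels):
    """weight_c = n_samples / (n_classes * n_samples_in_class_c)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# e.g. 90 brain ICs vs. 10 artifact ICs: the rare artifact class
# receives a 9x larger misclassification penalty than the brain class.
weights = balanced_class_weights(["brain"] * 90 + ["artifact"] * 10)
```

With such weights, errors on the minority (artifact) class dominate the SVM's penalty term, which is the starting point the paper refines with its re-weighting scheme.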
NASA Astrophysics Data System (ADS)
Cai, Feida; Li, Honglang; Tian, Yahui; Ke, Yabing; Cheng, Lina; Lou, Wei; He, Shitang
2018-03-01
Line-defect piezoelectric phononic crystals (PCs) show good potential applications in surface acoustic wave (SAW) MEMS devices for RF communication systems. To analyze the SAW characteristics in line-defect two-dimensional (2D) piezoelectric PCs, optical methods are commonly used. However, the optical instruments are complex and expensive, whereas conventional electrical methods can only measure SAW transmission of the whole device and lack spatial resolution. In this paper, we propose a new electrical experimental method with multiple receiving interdigital transducers (IDTs) to detect the SAW field distribution, in which an array of receiving IDTs of equal aperture was used to receive the SAW. For this new method, SAW delay lines with perfect and line-defect 2D Al/128°YXLiNbO3 piezoelectric PCs on the transmitting path were designed and fabricated. The experimental results showed that the SAW distributed mainly in the line-defect region, which agrees with the theoretical results.
Near-field noise of a single-rotation propfan at an angle of attack
NASA Technical Reports Server (NTRS)
Nallasamy, M.; Envia, E.; Clark, B. J.; Groeneweg, J. F.
1990-01-01
The near field noise characteristics of a propfan operating at an angle of attack are examined utilizing the unsteady pressure field obtained from a 3-D Euler simulation of the propfan flowfield. The near field noise is calculated employing three different procedures: a direct computation method in which the noise field is extracted directly from the Euler solution, and two acoustic-analogy-based frequency domain methods which utilize the computed unsteady pressure distribution on the propfan blades as the source term. The inflow angles considered are -0.4, 1.6, and 4.6 degrees. The results of the direct computation method and one of the frequency domain methods show qualitative agreement with measurements. They show that an increase in the inflow angle is accompanied by an increase in the sound pressure level at the outboard wing boom locations and a decrease in the sound pressure level at the (inboard) fuselage locations. The trends in the computed azimuthal directivities of the noise field also conform to the measured and expected results.
Non-Deterministic Dynamic Instability of Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2004-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties, in that order.
Dynamic Probabilistic Instability of Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2009-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower is the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio and the fiber longitudinal modulus, dynamic load and loading rate are the dominant uncertainties in that order.
Sekhavati, Mohammad H; Mesgaran, Mohsen Danesh; Nassiri, Mohammad R; Mohammadabadi, Tahereh; Rezaii, Farkhondeh; Fani Maleki, Adham
2009-10-01
This paper describes the design and validation of a quantitative competitive polymerase chain reaction (QC-PCR) assay, using PCR primers to the rRNA locus of rumen fungi and a standard-control DNA. To test the efficiency of this method for quantifying anaerobic rumen fungi, the method was evaluated in vitro by comparison with an assay based on measuring cell-wall chitin. Changes in fungal growth were studied when the fungi were grown in vitro on either untreated (US) or sodium hydroxide-treated wheat straw (TS). Results showed that rumen fungal growth was significantly higher in treated samples than in untreated samples during the 12-d incubation (P<0.05), and plotting the chitin assay's results against the competitive PCR's showed a high positive correlation (R² ≥ 0.87). The low mean values of the coefficients of variance in repeatability for the QC-PCR method, relative to the chitin assay, demonstrated the greater reliability of this new approach. Finally, the efficiency of this method was investigated in vivo. Samples of rumen fluid were collected from four fistulated Holstein steers which were fed four different diets (basal diet, high starch, high sucrose, and starch plus sucrose) in rotation. The QC-PCR results showed that addition of these non-structural carbohydrates to the basal diets caused a significant decrease in rumen anaerobic fungal biomass. The QC-PCR method appears to be reliable and can be used for rumen samples.
Use of the Box-Cox Transformation in Detecting Changepoints in Daily Precipitation Data Series
NASA Astrophysics Data System (ADS)
Wang, X. L.; Chen, H.; Wu, Y.; Pu, Q.
2009-04-01
This study integrates a Box-Cox power transformation procedure into two statistical tests for detecting changepoints in Gaussian data series, to make the changepoint detection methods applicable to non-Gaussian data series, such as daily precipitation amounts. The detection power of the transformed methods in a common-trend two-phase regression setting is assessed by Monte Carlo simulations for data of a log-normal or Gamma distribution. The results show that the transformed methods have increased power of detection in comparison with the corresponding original (untransformed) methods, and that the transformed data approximate a Gaussian distribution much more closely. As an example of application, the new methods are applied to a series of daily precipitation amounts recorded at a station in Canada, showing satisfactory detection power.
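For reference, a minimal pure-Python sketch of the Box-Cox step described above: the power transform and a grid search for the parameter λ that maximizes the Gaussian profile log-likelihood. The grid and the sample data are illustrative; the changepoint tests the paper pairs with the transform are not shown.

```python
import math

def boxcox(x, lam):
    """Box-Cox transform of positive data: (x^lam - 1)/lam, or log(x) at lam = 0."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in x]
    return [(v ** lam - 1.0) / lam for v in x]

def boxcox_loglik(x, lam):
    """Profile log-likelihood of lam under a Gaussian model for the transformed data."""
    y = boxcox(x, lam)
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in x)

def best_lambda(x, grid=None):
    """Pick lam from a grid by maximizing the profile log-likelihood."""
    grid = grid or [i / 100 for i in range(-200, 201)]
    return max(grid, key=lambda lam: boxcox_loglik(x, lam))

# Log-normally distributed sample: the selected lam typically falls near 0,
# i.e. close to a log transform, moving the data toward Gaussianity.
sample = [math.exp(v) for v in (0.5, -1.1, 0.3, 1.7, -0.6, 0.0,
                                2.1, -1.4, 0.9, 0.2, -0.3, 1.2)]
lam_hat = best_lambda(sample)
```

The grid search stands in for the numerical optimization a production implementation would use; by construction the chosen λ fits the data at least as well as leaving it untransformed (λ = 1).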
Micro-scale temperature measurement method using fluorescence polarization
NASA Astrophysics Data System (ADS)
Tatsumi, K.; Hsu, C.-H.; Suzuki, A.; Nakabe, K.
2016-09-01
A novel method that can measure fluid temperature at microscopic scale by measuring fluorescence polarization is described in this paper. The measurement technique is not influenced by the quenching effects that appear in conventional LIF methods and is believed to show higher reliability in temperature measurements. Experiments were performed using a microchannel flow and fluorescent molecular probes, and the effects of fluid temperature, fluid viscosity, measurement time, and solution pH on the measured degree of fluorescence polarization are discussed to establish the basic characteristics of the present method. The results showed that fluorescence polarization is considerably less sensitive to these quenching factors. A good correlation with fluid temperature, on the other hand, was obtained and agreed well with theoretical values, confirming the feasibility of the method.
Hörman, Ari; Hänninen, Marja-Liisa
2006-10-01
In this study we compared the reference membrane filtration (MF) lactose Tergitol-7 (LTTC) method ISO 9308-1:2000 with the MF m-Endo LES method SFS 3016:2001, the defined substrate chromogenic/fluorogenic Colilert 18, Readycult Coliforms and Water Check methods, and ready-made culture media, 3M Petrifilm EC and DryCult Coli methods for the detection of coliforms and Escherichia coli in various water samples. When the results of E. coli detection were compared between test methods, the highest agreement (both tests negative or positive) with the LTTC method was calculated for the m-Endo LES method (83.6%), followed by Colilert 18 (82.7%), Water-Check (81.8%) and Readycult (78.4%), whereas Petrifilm EC (70.6%) and DryCult Coli (68.9%) showed the weakest agreement. The m-Endo LES method was the only method showing no statistical difference in E. coli counts compared with the LTTC method, whereas the Colilert 18 and Readycult methods gave significantly higher counts for E. coli than the LTTC method. In general, those tests based on the analysis of a 1-ml sample (Petrifilm EC and DryCult Coli) showed weak sensitivity (39.5-52.5%) but high specificity (90.9-78.8%).
New methods for engineering site characterization using reflection and surface wave seismic survey
NASA Astrophysics Data System (ADS)
Chaiprakaikeow, Susit
This study presents two new seismic testing methods for engineering application: a new shallow seismic reflection method and Time Filtered Analysis of Surface Waves (TFASW). Both methods are described in this dissertation. The new shallow seismic reflection method was developed to measure reflection at a single point using two to four receivers, assuming homogeneous, horizontal layering. It uses one or more shakers driven by a swept sine function as a source, and the cross-correlation technique to identify wave arrivals. The phase difference between the source forcing function and the ground motion, due to the dynamic response of the shaker-ground interface, was corrected by using a reference geophone. Attenuated high-frequency energy was also recovered using whitening in the frequency domain. The new shallow seismic reflection testing was performed at the crest of Porcupine Dam in Paradise, Utah, using two horizontal Vibroseis sources and four receivers at spacings between 6 and 300 ft. Unfortunately, the results showed no clear evidence of the reflectors despite correction of the magnitude and phase of the signals. However, an improvement in the shape of the cross-correlations was noticed after the corrections: the corrected cross-correlated signals showed distinct primary lobes up to 150 ft offset, and more consistent maximum peaks were observed in the corrected waveforms. TFASW is a new surface (Rayleigh) wave method to determine the shear wave velocity profile at a site. It is a time domain method, as opposed to the Spectral Analysis of Surface Waves (SASW) method, which is a frequency domain method, and it uses digital filtering to optimize the bandwidth used to determine the dispersion curve. Results from tests at three different sites in Utah indicated good agreement between the dispersion curves measured using the TFASW and SASW methods.
The advantage of the TFASW method is that the dispersion curves had less scatter at long wavelengths, a result of the wider bandwidth used in those tests.
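The cross-correlation step used above to identify wave arrivals can be sketched as follows. This toy example, a synthetic pulse delayed by a known number of samples, is illustrative only and omits the phase and whitening corrections the dissertation applies.

```python
import math

def cross_correlate(src, rec):
    """Full cross-correlation of two equal-length signals; returns (lags, values)."""
    n = len(src)
    lags = list(range(-(n - 1), n))
    vals = []
    for lag in lags:
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += src[i] * rec[j]
        vals.append(s)
    return lags, vals

def arrival_lag(src, rec):
    """Lag (in samples) at which the cross-correlation peaks."""
    lags, vals = cross_correlate(src, rec)
    return lags[max(range(len(vals)), key=vals.__getitem__)]

# A localized toy pulse, and a received copy delayed by 7 samples:
pulse = [math.sin(2 * math.pi * i / 8) * math.exp(-((i - 10) ** 2) / 20.0)
         for i in range(64)]
received = [0.0] * 7 + pulse[:-7]
```

The peak of the cross-correlation recovers the travel-time delay of the received waveform, which is the quantity the reflection method interprets.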
NASA Astrophysics Data System (ADS)
Korany, Mohamed A.; Mahgoub, Hoda; Haggag, Rim S.; Ragab, Marwa A. A.; Elmallah, Osama A.
2018-06-01
A green, simple and cost-effective chemometric UV-Vis spectrophotometric method has been developed and validated for correcting interferences that arise during biowaiver studies. Chemometric manipulation was applied to enhance the direct absorbance results arising from very low concentrations (with a high incidence of background noise interference) at the earlier time points of the dissolution profile, using first and second derivative (D1 & D2) methods and their corresponding Fourier-function convoluted methods (D1/FF & D2/FF). The method was applied to a biowaiver study of Donepezil Hydrochloride (DH), as a representative model, by comparing two different dosage forms containing 5 mg DH per tablet, as an application of the developed chemometric method for correcting interferences as well as for assay and dissolution testing of the tablet dosage form. The results showed that the first derivative technique can be used for enhancement of the data over the low concentration range of DH (1-8 μg mL-1) in the three different pH dissolution media, which were used to estimate the low drug concentrations dissolved at the early points of the biowaiver study. Furthermore, the results showed similarity in phosphate buffer pH 6.8 and dissimilarity in the other two pH media. The method was validated according to ICH guidelines and the USP monograph for both assay (HCl of pH 1.2) and dissolution studies in three pH media (HCl of pH 1.2, acetate buffer of pH 4.5 and phosphate buffer of pH 6.8). Finally, the greenness of the method was assessed using two different techniques, the National Environmental Method Index label and Eco Scale methods, both of which ascertained the greenness of the proposed method.
Relative contributions of three descriptive methods: implications for behavioral assessment.
Pence, Sacha T; Roscoe, Eileen M; Bourret, Jason C; Ahearn, William H
2009-01-01
This study compared the outcomes of three descriptive analysis methods (the ABC method, the conditional probability method, and the conditional and background probability method) to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior participated. Functional analyses indicated that participants' problem behavior was maintained by social positive reinforcement (n = 2), social negative reinforcement (n = 2), or automatic reinforcement (n = 2). Results showed that for all but 1 participant, descriptive analysis outcomes were similar across methods. In addition, for all but 1 participant, the descriptive analysis outcome differed substantially from the functional analysis outcome. This supports the general finding that descriptive analysis is a poor means of determining functional relations.
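As a rough illustration of the comparison underlying the conditional and background probability method, the sketch below estimates the probability of an event following a behavior record against the event's overall background rate in a coded observation stream. The coding conventions, window size, and event names are invented, not those of the study.

```python
def conditional_probability(stream, behavior, event, window=1):
    """P(event occurs within `window` records after a behavior record)."""
    hits = total = 0
    for i, rec in enumerate(stream):
        if rec == behavior:
            total += 1
            if event in stream[i + 1:i + 1 + window]:
                hits += 1
    return hits / total if total else 0.0

def background_probability(stream, event):
    """Overall rate of the event across the whole observation stream."""
    return stream.count(event) / len(stream)

# Toy coded stream: "B" = problem behavior, "A" = attention, "X" = other.
stream = ["B", "A", "B", "A", "X", "B", "A", "X", "B", "A"]
cond = conditional_probability(stream, "B", "A")
base = background_probability(stream, "A")
```

A conditional probability well above the background rate would suggest a contingency between behavior and consequence, which is the kind of inference the descriptive methods attempt and the functional analyses test directly.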
Color digital halftoning taking colorimetric color reproduction into account
NASA Astrophysics Data System (ADS)
Haneishi, Hideaki; Suzuki, Toshiaki; Shimoyama, Nobukatsu; Miyake, Yoichi
1996-01-01
Taking colorimetric color reproduction into account, the conventional error diffusion method is modified for color digital halftoning. Assuming that the input to a bilevel color printer is given in CIE-XYZ tristimulus values or CIE-LAB values, instead of the more conventional RGB or YMC values, two modified versions based on vector operation in (1) the XYZ color space and (2) the LAB color space were tested. Experimental results show that the modified methods, especially the one using the LAB color space, give better color reproduction than the conventional methods. Spatial artifacts that appear in the modified methods are presented and analyzed. It is also shown that modified method (2), with a thresholding technique, achieves good spatial image quality.
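For context, the conventional scalar error-diffusion baseline that the paper modifies can be sketched as a Floyd-Steinberg pass over a single grayscale channel. The paper's contribution, diffusing vector-valued errors in the XYZ or LAB color spaces, is not reproduced here.

```python
def floyd_steinberg(img):
    """Binarize a 2D grayscale image (values in [0, 1]) in place,
    diffusing each pixel's quantization error to unvisited neighbors."""
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y][x] = new
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16      # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16  # below-left
                img[y + 1][x] += err * 5 / 16          # below
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16  # below-right
    return img

# A flat 25% gray patch dithers to roughly 25% white pixels:
patch = [[0.25] * 16 for _ in range(16)]
halftoned = floyd_steinberg(patch)
```

The vector variants in the paper follow the same diffusion pattern but carry the error as a 3-component color difference and quantize to the nearest printable color.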
Modelling rollover behaviour of exacavator-based forest machines
M.W. Veal; S.E. Taylor; Robert B. Rummer
2003-01-01
This poster presentation provides results from analytical and computer simulation models of rollover behaviour of hydraulic excavators. These results are being used as input to the operator protective structure standards development process. Results from rigid body mechanics and computer simulation methods agree well with field rollover test data. These results show...
Fan, Qigao; Wu, Yaheng; Hui, Jing; Wu, Lei; Yu, Zhenzhong; Zhou, Lijuan
2014-01-01
In some GPS failure conditions, positioning a mobile target is difficult. This paper proposes a new INS/UWB-based method for synchronous tracking of the attitude angle and position of an indoor carrier. First, an error model of the INS/UWB integrated system is built, including the error equations of INS and UWB, and a combined filtering model of INS/UWB is studied. Simulation results show that the two subsystems are complementary. Second, an integrated-navigation data fusion strategy for INS/UWB based on Kalman filtering theory is proposed. Simulation results show that the FAKF method is better than conventional Kalman filtering. Finally, an indoor experimental platform geared to the needs of a coal-mine working environment is established to verify the INS/UWB integrated navigation theory. Static and dynamic positioning results show that the INS/UWB integrated navigation system is stable and real-time, and its positioning precision meets the requirements of the working condition and is better than that of either independent subsystem.
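The predict/update structure behind the INS/UWB fusion described above can be sketched with a one-dimensional Kalman filter, in which an INS-style dead-reckoned increment is corrected by UWB-style absolute position fixes. The noise levels, drift rate, and motion are invented for the example, and the paper's FAKF variant is not shown.

```python
def kalman_fuse(positions_ins, positions_uwb, q=0.01, r=0.25):
    """Scalar Kalman filter: predict with INS displacement increments,
    update with UWB absolute fixes. q = process noise, r = UWB noise."""
    x = positions_uwb[0]   # initialize state from the first UWB fix
    p = 1.0                # state variance
    fused = [x]
    for k in range(1, len(positions_uwb)):
        # Predict: advance by the INS-measured displacement.
        x += positions_ins[k] - positions_ins[k - 1]
        p += q
        # Update: blend in the UWB measurement via the Kalman gain.
        gain = p / (p + r)
        x += gain * (positions_uwb[k] - x)
        p *= (1.0 - gain)
        fused.append(x)
    return fused

# Synthetic track: INS drifts linearly; UWB is unbiased but noisy.
true = [0.1 * k for k in range(50)]
ins = [t + 0.02 * k for k, t in enumerate(true)]                   # growing drift
uwb = [t + (0.3 if k % 2 else -0.3) for k, t in enumerate(true)]   # noisy fixes
fused = kalman_fuse(ins, uwb)
```

The fused estimate tracks the true position more closely than either subsystem alone: the UWB updates bound the INS drift, while the INS increments smooth the UWB noise, which is the complementarity the abstract notes.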
Researching on Control Device of Prestressing Wire Reinforcement
NASA Astrophysics Data System (ADS)
Si, Jianhui; Guo, Yangbo; Liu, Maoshe
2017-06-01
This paper mainly introduces a device for controlling prestress and the related research methods. The advantage of this method is that the reinforcement process is easy to operate and the prestress of the wire rope can be controlled accurately. The relationship between the stress and strain of the steel wire rope is monitored during the experiment, and the one-to-one relationship between the controllable position and the pretightening force is confirmed for 5 mm steel wire rope; the results are analyzed theoretically using the measured elastic modulus. The results show that the method can effectively control the prestressing force, and they provide a reference method for strengthening concrete columns with prestressed steel strand.
Bujold, M; El Sherif, R; Bush, P L; Johnson-Lafleur, J; Doray, G; Pluye, P
2018-02-01
This mixed methods study content validated the Information Assessment Method for parents (IAM-parent), which allows users to systematically rate and comment on online parenting information. Quantitative data and results: 22,407 IAM ratings were collected; of the initial 32 items, descriptive statistics showed that 10 had low relevance. Qualitative data and results: IAM-based comments were collected, and 20 IAM users were interviewed (maximum variation sample); the qualitative data analysis assessed the representativeness of IAM items and identified items with problematic wording. Researchers, the program director, and Web editors integrated the quantitative and qualitative results, which led to a shorter and clearer IAM-parent. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Water Quality Evaluation of the Yellow River Basin Based on Gray Clustering Method
NASA Astrophysics Data System (ADS)
Fu, X. Q.; Zou, Z. H.
2018-03-01
The water quality of 12 monitoring sections in the Yellow River Basin was evaluated comprehensively by the grey clustering method, based on water quality monitoring data from the Ministry of Environmental Protection of China for May 2016 and on the environmental quality standard for surface water. The results reflect the water quality of the Yellow River Basin objectively, and they are essentially the same as those obtained with the fuzzy comprehensive evaluation method. The results also show that the overall water quality of the Yellow River Basin is good, consistent with the actual situation of the basin. Overall, the grey clustering method is reasonable and feasible for water quality evaluation, and it is also convenient to compute.
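A minimal sketch of grey clustering with triangular whitening functions follows (the grade centers, indicators, and weights below are hypothetical illustrations, not the paper's data or its exact whitening functions):

```python
import numpy as np

def grey_cluster(sample, centers, weights):
    """Assign a sample to the grade with the largest clustering coefficient.

    sample:  observed concentration of each indicator, shape (m,)
    centers: grade-center concentrations, shape (g, m) for g grades
    weights: indicator weights, shape (m,)
    Triangular whitening functions peak at each grade's center and fall
    to zero at the neighboring grade centers (half-open at the extremes).
    """
    g, m = centers.shape
    sigma = np.zeros(g)
    for k in range(g):
        for j in range(m):
            c, x = centers[:, j], sample[j]
            if k == 0:                      # cleanest grade: flat below its center
                f = 1.0 if x <= c[0] else max(0.0, (c[1] - x) / (c[1] - c[0]))
            elif k == g - 1:                # worst grade: flat above its center
                f = 1.0 if x >= c[-1] else max(0.0, (x - c[-2]) / (c[-1] - c[-2]))
            else:                           # interior grades: triangular
                if c[k - 1] < x <= c[k]:
                    f = (x - c[k - 1]) / (c[k] - c[k - 1])
                elif c[k] < x < c[k + 1]:
                    f = (c[k + 1] - x) / (c[k + 1] - c[k])
                else:
                    f = 0.0
            sigma[k] += weights[j] * f
    return int(np.argmax(sigma)) + 1, sigma
```

A sample whose concentrations sit below the cleanest grade's centers is classed as grade I; one near the worst centers is classed into the highest grade.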
van Middelaar, C E; Berentsen, P B M; Dijkstra, J; van Arendonk, J A M; de Boer, I J M
2014-01-01
Current decisions on breeding in dairy farming are mainly based on economic values of heritable traits, as earning an income is a primary objective of farmers. Recent literature, however, shows that breeding also has potential to reduce greenhouse gas (GHG) emissions. The objective of this paper was to compare 2 methods to determine GHG values of genetic traits. Method 1 calculates GHG values using the current strategy (i.e., maximizing labor income), whereas method 2 is based on minimizing GHG per kilogram of milk and shows what can be achieved if the breeding results are fully directed at minimizing GHG emissions. A whole-farm optimization model was used to determine results before and after 1 genetic standard deviation improvement (i.e., unit change) of milk yield and longevity. The objective function of the model differed between methods 1 and 2. Method 1 maximizes labor income; method 2 minimizes GHG emissions per kilogram of milk while maintaining labor income and total milk production at least at the level before the change in trait. Results show that the full potential of the traits to reduce GHG emissions given the boundaries that were set for income and milk production (453 and 441 kg of CO2 equivalents/unit change per cow per year for milk yield and longevity, respectively) is about twice as high as the reduction based on maximizing labor income (247 and 210 kg of CO2 equivalents/unit change per cow per year for milk yield and longevity, respectively). The GHG value of milk yield is higher than that of longevity, especially when the focus is on maximizing labor income. Based on a sensitivity analysis, it was shown that including emissions from land use change and using different methods for handling the interaction between milk and meat production can change results, generally in favor of milk yield. Results can be used by breeding organizations that want to include GHG values in their breeding goal.
To verify GHG values, the effect of prices and emissions factors should be considered, as well as the potential effect of variation between farm types. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Preprocessing the Nintendo Wii Board Signal to Derive More Accurate Descriptors of Statokinesigrams.
Audiffren, Julien; Contal, Emile
2016-08-01
During the past few years, the Nintendo Wii Balance Board (WBB) has been used in postural control research as an affordable but less reliable replacement for laboratory-grade force platforms. However, the WBB suffers from some limitations, such as a lower accuracy and an inconsistent sampling rate. In this study, we focus on the latter, namely the non-uniform acquisition frequency. We show that this problem, combined with the poor signal-to-noise ratio of the WBB, can drastically decrease the quality of the obtained information if not handled properly. We propose a new resampling method, Sliding Window Average with Relevance Interval Interpolation (SWARII), specifically designed with the WBB in mind, for which we provide an open source implementation. We compare it with several existing methods commonly used in postural control, both on synthetic and experimental data. The results show that some methods, such as linear and piecewise constant interpolations, should definitely be avoided, particularly when the resulting signal is differentiated, which is necessary to estimate speed, an important feature in postural control. Other methods, such as averaging on sliding windows or SWARII, perform significantly better on the synthetic dataset and produce results more similar to the laboratory-grade AMTI force plate (AFP) during experiments. Those methods should be preferred when resampling data collected from a WBB.
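The sliding-window averaging idea can be sketched as follows (a simplified illustration under assumed parameters, not the exact SWARII algorithm with its relevance-interval interpolation):

```python
import numpy as np

def sliding_window_resample(t, x, fs, window=0.2):
    """Resample a non-uniformly sampled signal onto a uniform grid at
    rate fs by averaging all samples within +/- window/2 of each grid
    point, holding the previous value when a window is empty.

    t: sorted, non-uniform timestamps; x: samples at those times.
    """
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    out = np.empty_like(grid)
    last = x[0]
    for i, tc in enumerate(grid):
        mask = np.abs(t - tc) <= window / 2
        if mask.any():
            out[i] = x[mask].mean()   # average of nearby samples
        else:
            out[i] = last             # hold on empty windows
        last = out[i]
    return grid, out
```

Unlike naive linear interpolation, window averaging also suppresses sensor noise, which matters when the resampled signal is later differentiated to estimate speed.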
Nonlinear programming extensions to rational function approximations of unsteady aerodynamics
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1987-01-01
This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.
Zanchetta, Priscilla Garozi; Heringer, Otávio; Scherer, Rodrigo; Pacheco, Henrique Poltronieri; Gonçalves, Ricardo; Pena, Angelina
2015-10-01
Pharmaceuticals are emerging contaminants, and approximately 70% of them are excreted via urine. Urine usage therefore carries the risk of transferring pharmaceutical residues to agricultural fields and contaminating the environment. Thus, this study aimed at the development and validation of an LC-MS/MS method for determining D-norgestrel (D-NOR) and progesterone (PRO) in human urine, as well as at evaluating the removal efficiency of two methods (storage and evaporation) and the effects of acidification with sulfuric acid. The storage process was evaluated for 6 weeks, while evaporation was assessed at three different temperatures (50, 75, and 100 °C). All experiments were done with normal urine (pH = 6.0) and acidified urine (pH = 2.0, with sulfuric acid). The validation results showed good method efficiency. Higher hormone degradation was observed in the second week of storage. In the evaporation method, both D-NOR and PRO were almost completely degraded when the volume was reduced to the lowest level. The results also indicate that acidification did not affect degradation. Overall, the results showed that combining the two methods can achieve more efficient hormone removal from urine.
Yao, Yibin; Shan, Lulu; Zhao, Qingzhi
2017-09-29
Global Navigation Satellite System (GNSS) observations can effectively retrieve precipitable water vapor (PWV) with high precision and high temporal resolution. GNSS-derived PWV can be used to reflect water vapor variation during strong convective weather. By studying the relationship between time-varying PWV and rainfall, it can be found that PWV content increases sharply before rain. Therefore, a short-term rainfall forecasting method based on GNSS-derived PWV is proposed. The method is then validated using hourly GNSS-PWV data from the Zhejiang Continuously Operating Reference Station (CORS) network for the period 1 September 2014 to 31 August 2015 and the corresponding hourly rainfall records. The results show that the correct forecast rate can reach about 80%, while the false alarm rate is about 66%. Compared with the results of previous studies, the correct rate is improved by about 7%, and the false alarm rate is comparable. The method is also applied to three other actual rainfall events of different regions, durations, and types. The results show that the method has good applicability and high accuracy; it can be used for rainfall forecasting and, in future work, can be assimilated with traditional weather forecasting techniques to improve forecast accuracy.
Compare diagnostic tests using transformation-invariant smoothed ROC curves
Tang, Liansheng; Du, Pang; Wu, Chengqing
2012-01-01
The receiver operating characteristic (ROC) curve, which plots true positive rates against false positive rates as the threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, the ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of test results, and it is often a curve with a certain level of smoothness when test results from the diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One of the exceptions is Du and Tang (2009), which applies a monotone spline regression procedure to empirical ROC estimates. However, their method does not consider the inherent correlations between empirical ROC estimates, which makes the derivation of the asymptotic properties very difficult. In this paper we propose a penalized weighted least squares estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all the aforementioned properties, and we show that it is also consistent. A resampling approach is then used to extend our method to comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one. PMID:22639484
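The empirical ROC estimates that serve as input to such smoothing can be computed directly (a generic sketch of the standard empirical ROC, not the paper's penalized weighted least squares estimator):

```python
import numpy as np

def empirical_roc(scores_pos, scores_neg):
    """Empirical ROC: sweep a threshold over all observed scores and
    record (FPR, TPR) pairs, plus the (0,0) and (1,1) endpoints.
    The resulting point set is monotone by construction."""
    thresholds = np.unique(np.concatenate([scores_pos, scores_neg]))
    pts = [(1.0, 1.0)]
    for th in thresholds:
        tpr = np.mean(scores_pos >= th)   # sensitivity at this cutoff
        fpr = np.mean(scores_neg >= th)   # 1 - specificity
        pts.append((fpr, tpr))
    pts.append((0.0, 0.0))
    return sorted(pts)
```

For a perfectly separating biomarker the curve passes through (0, 1) and the area under it is exactly 1.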
Kwon, Yong Hwan; Kim, Nayoung; Lee, Ju Yup; Choi, Yoon Jin; Yoon, Kichul; Yoon, Hyuk; Shin, Cheol Min; Park, Young Soo; Lee, Dong Ho
2014-01-01
Background: This study was conducted to evaluate the diagnostic validity of the 13C-urea breath test (13C-UBT) in the remnant stomach after partial gastrectomy for gastric cancer. Methods: The 13C-UBT results after Helicobacter pylori eradication therapy were compared with the results of endoscopic biopsy-based methods in patients who had received partial gastrectomy for gastric cancer. Results: Among the gastrectomized patients who showed positive 13C-UBT results (≥ 2.5‰, n = 47) and negative 13C-UBT results (< 2.5‰, n = 114) after H. pylori eradication, 26 patients (16.1%) and 4 patients (2.5%) were found to show false positive and false negative results based on biopsy-based methods, respectively. The sensitivity, specificity, false positive rate, and false negative rate for the cut-off value of 2.5‰ were 84.0%, 80.9%, 19.1%, and 16.0%, respectively. The positive and negative predictive values were 44.7% and 96.5%, respectively. In the multivariate analysis, two or more H. pylori eradication therapies (odds ratio = 3.248, 95% confidence interval = 1.088-9.695, P = 0.035) were associated with a false positive result of the 13C-UBT. Conclusions: After partial gastrectomy, the positive 13C-UBT results were discordant with the endoscopic biopsy methods for confirming H. pylori status after eradication. Additional endoscopic biopsy-based H. pylori tests would be helpful to avoid unnecessary treatment for H. pylori eradication in these cases. PMID:25574466
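The reported metrics follow from the 2x2 table implied by the abstract's counts, which can be checked directly:

```python
# Reconstructing the 2x2 table from the abstract's counts:
# 47 UBT-positive patients, 26 of them false positives (biopsy-negative);
# 114 UBT-negative patients, 4 of them false negatives (biopsy-positive).
tp, fp = 47 - 26, 26          # 21 true positives
tn, fn = 114 - 4, 4           # 110 true negatives

sensitivity = tp / (tp + fn)  # 21/25  -> 84.0%
specificity = tn / (tn + fp)  # 110/136 -> 80.9%
ppv = tp / (tp + fp)          # 21/47  -> 44.7%
npv = tn / (tn + fn)          # 110/114 -> 96.5%
```

These reproduce the abstract's 84.0% sensitivity, 80.9% specificity, 44.7% positive predictive value, and 96.5% negative predictive value.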
NASA Astrophysics Data System (ADS)
Alfianto, E.; Rusydi, F.; Aisyah, N. D.; Fadilla, R. N.; Dipojono, H. K.; Martoprawiro, M. A.
2017-05-01
This study implemented the DFT method in the C++ programming language with object-oriented programming rules (expressive software). The use of expressive software results in a simple programming structure that is similar to the mathematical formulation, which will make it easier for the scientific community to develop the software. We validate our software by calculating the energy band structures of silicon, carbon, and germanium in the FCC structure using the Projector Augmented Wave (PAW) method, and then compare the results to Quantum Espresso calculations. This study shows that the accuracy of the software is 85% compared to Quantum Espresso.
Using Riemannian geometry to obtain new results on Dikin and Karmarkar methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira, P.; Joao, X.; Piaui, T.
1994-12-31
We are motivated by a 1990 Karmarkar paper on Riemannian geometry and interior point methods. In this talk we show three results. (1) The Karmarkar direction can be derived from the Dikin one. This is obtained by constructing a certain Z(x) representation of the null space of the unitary simplex (e, x) = 1; the projective direction is then the image under Z(x) of the affine-scaling one, when it is restricted to that simplex. (2) Second order information on the Dikin and Karmarkar methods. We establish computable Hessians for each of the metrics corresponding to both directions, thus permitting the generation of "second order" methods. (3) Dikin and Karmarkar geodesic descent methods. For those directions, we make the theoretical Luenberger geodesic descent method computable, since we are able to give very accurate explicit expressions for the corresponding geodesics. Convergence results are given.
Çetinkaya, S.; Çetinkara, H. A.; Bayansal, F.; Kahraman, S.
2013-01-01
CuO interlayers in CuO/p-Si Schottky diodes were fabricated using the CBD and sol-gel methods. The deposited CuO layers were characterized by SEM and XRD techniques. From the SEM images, it was seen that the film grown by the CBD method is denser than the film grown by the sol-gel method. This result is compatible with the XRD results, which show that crystallization is higher in the CBD method than in the sol-gel method. For the electrical investigations, the current-voltage characteristics of the diodes were studied at room temperature. Conventional I-V and Norde's methods were used to determine the ideality factor, barrier height, and series resistance values. The morphological and structural analyses were found to be compatible with the results of the electrical investigations. PMID:23766670
BagReg: Protein inference through machine learning.
Zhao, Can; Liu, Dao; Teng, Ben; He, Zengyou
2015-08-01
Protein inference from the identified peptides is of primary importance in shotgun proteomics. The target of protein inference is to identify whether each candidate protein is truly present in the sample. To date, many computational methods have been proposed to solve this problem. However, there is still no method that can fully utilize the information hidden in the input data. In this article, we propose a learning-based method named BagReg for protein inference. The method first extracts five features from the input data, and then chooses each feature in turn as the class feature to separately build models that predict the presence probabilities of proteins. Finally, the weak results from the five prediction models are aggregated to obtain the final result. We test our method on six publicly available data sets. The experimental results show that our method is superior to the state-of-the-art protein inference algorithms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Image quality evaluation of full reference algorithm
NASA Astrophysics Data System (ADS)
He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan
2018-03-01
Image quality evaluation is a classic research topic; the goal is to design an algorithm whose evaluation values are consistent with subjective perception. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Feature Similarity (FSIM). The different evaluation methods were tested in Matlab, and their advantages and disadvantages were obtained by analysis and comparison. MSE and PSNR are simple, but they do not incorporate characteristics of the human visual system (HVS) into the evaluation, so their results are not ideal. SSIM correlates well with subjective scores and is simple to compute, because it takes the human visual response into account; however, the SSIM method rests on a modeling hypothesis, so its evaluation results are limited. The FSIM method can be used for both grayscale and color images, and its results are better. Experimental results show that the image quality evaluation algorithm based on FSIM is more accurate.
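MSE and PSNR, the two simplest full-reference metrics discussed above, can be computed in a few lines (a generic sketch; a peak value of 255 is assumed for 8-bit images):

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference and a test image."""
    return float(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(ref, test)
    return float('inf') if m == 0 else 10 * np.log10(peak ** 2 / m)
```

SSIM and FSIM add local structure and feature comparisons on top of such pixelwise differences, which is why they track human judgments more closely.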
Using the surface panel method to predict the steady performance of ducted propellers
NASA Astrophysics Data System (ADS)
Cai, Hao-Peng; Su, Yu-Min; Li, Xin; Shen, Hai-Long
2009-12-01
A new numerical method was developed for predicting the steady hydrodynamic performance of ducted propellers. A potential based surface panel method was applied both to the duct and the propeller, and the interaction between them was solved by an induced velocity potential iterative method. Compared with the induced velocity iterative method, the method presented can save programming and calculating time. Numerical results for a JD simplified ducted propeller series showed that the method presented is effective for predicting the steady hydrodynamic performance of ducted propellers.
Examination of a Rotorcraft Noise Prediction Method and Comparison to Flight Test Data
NASA Technical Reports Server (NTRS)
Boyd, D. Douglas, Jr.; Greenwood, Eric; Watts, Michael E.; Lopes, Leonard V.
2017-01-01
With a view that rotorcraft noise should be included in the preliminary design process, a relatively fast noise prediction method is examined in this paper. A comprehensive rotorcraft analysis is combined with a noise prediction method to compute several noise metrics of interest. These predictions are compared to flight test data. Results show that inclusion of only the main rotor noise will produce results that severely underpredict integrated metrics of interest. Inclusion of the tail rotor frequency content is essential for accurately predicting these integrated noise metrics.
Optodynamic Phenomena During Laser-Activated Irrigation Within Root Canals
NASA Astrophysics Data System (ADS)
Lukač, Nejc; Gregorčič, Peter; Jezeršek, Matija
2016-07-01
Laser-activated irrigation is a powerful endodontic treatment for smear layer, bacteria, and debris removal from the root canal. In this study, we use shadow photography and a laser-beam-transmission probe to examine the dynamics of laser-induced vapor bubbles inside a root canal model, and we compare ultrasonic needle irrigation to the laser method. The results confirm important phenomenological differences between the two endodontic methods, with the laser method resulting in much deeper irrigation. Observations of simulated debris particles show liquid vorticity effects, which in our opinion represent the major cleaning mechanism.
RCWA and FDTD modeling of light emission from internally structured OLEDs.
Callens, Michiel Koen; Marsman, Herman; Penninck, Lieven; Peeters, Patrick; de Groot, Harry; ter Meulen, Jan Matthijs; Neyts, Kristiaan
2014-05-05
We report on the fabrication and simulation of a green OLED with an Internal Light Extraction (ILE) layer. The optical behavior of these devices is simulated using both Rigorous Coupled Wave Analysis (RCWA) and Finite Difference Time-Domain (FDTD) methods. Results obtained using these two different techniques show excellent agreement and predict the experimental results with good precision. By verifying the validity of both simulation methods on the internal light extraction structure we pave the way to optimization of ILE layers using either of these methods.
Grid adaption for bluff bodies
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Tiwari, Surendra N.
1986-01-01
Methods of grid adaptation are reviewed and a method is developed with the capability of adapting to several flow variables. This method is based on a variational approach and is an algebraic method which does not require the solution of partial differential equations; it is also formulated in such a way that no matrix inversion is needed. The method is used in conjunction with the calculation of hypersonic flow over a blunt nose. The equations of motion are the compressible Navier-Stokes equations with all viscous terms retained. They are solved by the MacCormack time-splitting method, and a movie was produced which shows simultaneously the transient behavior of the solution and the grid adaptation. The results are compared with experimental and other numerical results.
On High-Order Upwind Methods for Advection
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
2017-01-01
Schemes III (piecewise linear) and V (piecewise parabolic) of Van Leer are shown to yield identical solutions provided the initial conditions are chosen in an appropriate manner. This result is counterintuitive, since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The result also shows a key connection between the approaches of discontinuous and continuous representations.
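For context, the simplest (first-order) upwind scheme for linear advection can be sketched as follows (illustrative background only; Van Leer's Schemes III and V are higher-order piecewise linear and piecewise parabolic methods and are not reproduced here):

```python
import numpy as np

def upwind_step(u, a, dt, dx):
    """One first-order upwind step for u_t + a u_x = 0 with periodic
    boundary conditions. Stable for |a| dt / dx <= 1 (CFL condition)."""
    c = a * dt / dx                      # CFL number
    if a >= 0:
        return u - c * (u - np.roll(u, 1))     # use left neighbor
    return u - c * (np.roll(u, -1) - u)        # use right neighbor
```

At CFL number exactly 1 the scheme translates the grid values by one cell per step, which is the exact solution of the advection equation on the grid.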
NASA Astrophysics Data System (ADS)
Mansouri, E.; Feizi, F.; Karbalaei Ramezanali, A. A.
2015-07-01
Ground magnetic anomaly separation using the reduction-to-the-pole (RTP) technique and the fractal concentration-area (C-A) method has been applied to the Qoja-Kandi prospecting area in NW Iran. The ground magnetic survey was conducted for magnetic element exploration. First, the RTP technique was applied to recognize underground magnetic anomalies, and the RTP anomalies were classified into different populations; determining drilling points from the RTP technique alone was complicated. Next, the C-A method was applied to the RTP magnetic anomalies (RTP-MA) to delineate magnetic susceptibility concentrations. This was appropriate for increasing the resolution of drilling-point determination and decreasing the drilling risk, given the economic costs of underground prospecting. In this study, the results of C-A modeling on the RTP-MA are compared with data from 8 boreholes. The results show good correlation between the anomalies derived via the C-A method and the borehole log reports. Two boreholes were drilled in a magnetic susceptibility concentration, identified by multifractal modeling data analyses, between 63 533.1 and 66 296 nT. Drilling results show appropriate magnetite thickness with grades greater than 20% total Fe. The anomalies are also associated with andesite units that host the iron mineralization.
Yang, Xiao-hua; Guo, Qiao-sheng; Zhu, Zai-biao; Chen, Jun; Miao, Yuan-yuan; Yang, Ying; Sun, Yuan
2015-10-01
The effects of different drying methods, including sun drying, steaming, boiling, and constant-temperature drying (at 40, 50, and 60 °C), on appearance, hardness, rehydration ratio, drying rate, moisture, total ash, extractive, and polysaccharide contents were studied to provide the basis of a standard processing method for Tulipa edulis bulbus. The results showed that sun drying and 40 °C drying gave higher rehydration ratios but a lower drying rate, higher hardness, worse color, longer drying time, and obvious distortion and shrinkage in comparison with the other drying methods. Drying at a constant 60 °C resulted in a shorter drying time, lower water content, and higher polysaccharide content. Steaming and boiling gave shorter drying times and better appearance quality than the other treatments, but the extractive and polysaccharide contents decreased significantly. Drying at a constant 50 °C led to an appearance quality similar to that of commercial bulbs, and it resulted in the lowest hardness and highest drying rate, as well as a higher rehydration ratio, higher extractive and polysaccharide contents, and moderate moisture and total ash contents among these treatments. Based on these results, 50 °C constant-temperature drying is the better way to process T. edulis bulbus.
NASA Astrophysics Data System (ADS)
Zheng, Guang; Nie, Hong; Luo, Min; Chen, Jinbao; Man, Jianfeng; Chen, Chuanzhi; Lee, Heow Pueh
2018-07-01
The purpose of this paper is to obtain the design parameter-landing response relation for quickly designing the configuration of the landing gear of a planet lander. To achieve this, parametric studies on the landing gear are carried out using the response surface method (RSM), based on a single-landing-gear landing model validated by experimental results. From the design of experiments (DOE) results of the landing model, the response surface (RS) functions of the three crucial landing responses are obtained, and a sensitivity analysis (SA) of the corresponding parameters is performed. Two multi-objective optimization designs of the landing gear are also carried out. The analysis results show that the RS model performs well for the landing response design process, with a minimum fitting accuracy of 98.99%. The most sensitive parameters for the three landing responses are the design size of the buffers, the strut friction, and the diameter of the bending beam. Moreover, good agreement between the simulated model and the RS-model results is obtained in the two optimized designs, which shows that the RS model coupled with the finite element (FE) method is an efficient way to obtain the design configuration of the landing gear.
Lyden, Hannah; Gimbel, Sarah I; Del Piero, Larissa; Tsai, A Bryna; Sachs, Matthew E; Kaplan, Jonas T; Margolin, Gayla; Saxbe, Darby
2016-01-01
Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used.
Qiu, Liuming; Wang, Pei; Liao, Ge; Zeng, Yanbo; Cai, Caihong; Kong, Fandong; Guo, Zhikai; Proksch, Peter; Dai, Haofu; Mei, Wenli
2018-03-28
Four new eudesmane-type sesquiterpenoids, penicieudesmol A-D ( 1 - 4 ), were isolated from the fermentation broth of the mangrove-derived endophytic fungus Penicillium sp. J-54. Their structures were determined by spectroscopic methods, the in situ dimolybdenum CD method, and modified Mosher's method. The bioassays results showed that 2 exhibited weak cytotoxicity against K-562 cells.
NASA Astrophysics Data System (ADS)
Ji, Yang; Chen, Hong; Tang, Hongwu
2017-06-01
A highly accurate wide-angle scheme, based on the generalized mutistep scheme in the propagation direction, is developed for the finite difference beam propagation method (FD-BPM). Comparing with the previously presented method, the simulation shows that our method results in a more accurate solution, and the step size can be much larger
ERIC Educational Resources Information Center
Deeb, Raid Mousa Al-Shaik
2016-01-01
The study aimed at identifying knowledge of parents of children with autism spectrum disorder of behavior modification methods and their training needs accordingly. The sample of the study consisted of (98) parents in Jordan. A scale of behavior modification methods was constructed, and then validated. The results of the study showed that the…
Automatic tracking of red blood cells in micro channels using OpenCV
NASA Astrophysics Data System (ADS)
Rodrigues, Vânia; Rodrigues, Pedro J.; Pereira, Ana I.; Lima, Rui
2013-10-01
The present study aims to developan automatic method able to track red blood cells (RBCs) trajectories flowing through a microchannel using the Open Source Computer Vision (OpenCV). The developed method is based on optical flux calculation assisted by the maximization of the template-matching product. The experimental results show a good functional performance of this method.
ERIC Educational Resources Information Center
Coya, Liliam de Barbosa; Perez-Coffie, Jorge
1982-01-01
"Mastery Learning" was compared with the "conventional" method of teaching reading skills to Puerto Rican children with specific learning disabilities. The "Mastery Learning" group showed significant gains in the cognitive and affective domains. Results suggested Mastery Learning is a more effective method of teaching…
Protein-protein interaction network-based detection of functionally similar proteins within species.
Song, Baoxing; Wang, Fen; Guo, Yang; Sang, Qing; Liu, Min; Li, Dengyun; Fang, Wei; Zhang, Deli
2012-07-01
Although functionally similar proteins across species have been widely studied, functionally similar proteins within species showing low sequence similarity have not been examined in detail. Identification of these proteins is of significant importance for understanding biological functions, evolution of protein families, progression of co-evolution, and convergent evolution and others which cannot be obtained by detection of functionally similar proteins across species. Here, we explored a method of detecting functionally similar proteins within species based on graph theory. After denoting protein-protein interaction networks using graphs, we split the graphs into subgraphs using the 1-hop method. Proteins with functional similarities in a species were detected using a method of modified shortest path to compare these subgraphs and to find the eligible optimal results. Using seven protein-protein interaction networks and this method, some functionally similar proteins with low sequence similarity that cannot detected by sequence alignment were identified. By analyzing the results, we found that, sometimes, it is difficult to separate homologous from convergent evolution. Evaluation of the performance of our method by gene ontology term overlap showed that the precision of our method was excellent. Copyright © 2012 Wiley Periodicals, Inc.
Zhao, Xin; Liu, Jun; Yao, Yong-Xin; ...
2018-01-23
Developing accurate and computationally efficient methods to calculate the electronic structure and total energy of correlated-electron materials has been a very challenging task in condensed matter physics and materials science. Recently, we have developed a correlation matrix renormalization (CMR) method which does not assume any empirical Coulomb interaction U parameters and does not have double counting problems in the ground-state total energy calculation. The CMR method has been demonstrated to be accurate in describing both the bonding and bond breaking behaviors of molecules. In this study, we extend the CMR method to the treatment of electron correlations in periodic solidmore » systems. By using a linear hydrogen chain as a benchmark system, we show that the results from the CMR method compare very well with those obtained recently by accurate quantum Monte Carlo (QMC) calculations. We also study the equation of states of three-dimensional crystalline phases of atomic hydrogen. We show that the results from the CMR method agree much better with the available QMC data in comparison with those from density functional theory and Hartree-Fock calculations.« less
NASA Astrophysics Data System (ADS)
Li, De-Chang; Ji, Bao-Hua
2012-06-01
Jarzynski' identity (JI) method was suggested a promising tool for reconstructing free energy landscape of biomolecular interactions in numerical simulations and experiments. However, JI method has not yet been well tested in complex systems such as ligand-receptor molecular pairs. In this paper, we applied a huge number of steered molecular dynamics (SMD) simulations to dissociate the protease of human immunodeficiency type I virus (HIV-1 protease) and its inhibitors. We showed that because of intrinsic complexity of the ligand-receptor system, the energy barrier predicted by JI method at high pulling rates is much higher than experimental results. However, with a slower pulling rate and fewer switch times of simulations, the predictions of JI method can approach to the experiments. These results suggested that the JI method is more appropriate for reconstructing free energy landscape using the data taken from experiments, since the pulling rates used in experiments are often much slower than those in SMD simulations. Furthermore, we showed that a higher loading stiffness can produce higher precision of calculation of energy landscape because it yields a lower mean value and narrower bandwidth of work distribution in SMD simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Xin; Liu, Jun; Yao, Yong-Xin
Developing accurate and computationally efficient methods to calculate the electronic structure and total energy of correlated-electron materials has been a very challenging task in condensed matter physics and materials science. Recently, we have developed a correlation matrix renormalization (CMR) method which does not assume any empirical Coulomb interaction U parameters and does not have double counting problems in the ground-state total energy calculation. The CMR method has been demonstrated to be accurate in describing both the bonding and bond breaking behaviors of molecules. In this study, we extend the CMR method to the treatment of electron correlations in periodic solidmore » systems. By using a linear hydrogen chain as a benchmark system, we show that the results from the CMR method compare very well with those obtained recently by accurate quantum Monte Carlo (QMC) calculations. We also study the equation of states of three-dimensional crystalline phases of atomic hydrogen. We show that the results from the CMR method agree much better with the available QMC data in comparison with those from density functional theory and Hartree-Fock calculations.« less
Rantala, J; Raisamo, R; Lylykangas, J; Surakka, V; Raisamo, J; Salminen, K; Pakkanen, T; Hippula, A
2009-01-01
Three novel interaction methods were designed for reading six-dot Braille characters from the touchscreen of a mobile device. A prototype device with a piezoelectric actuator embedded under the touchscreen was used to create tactile feedback. The three interaction methods, scan, sweep, and rhythm, enabled users to read Braille characters one at a time either by exploring the characters dot by dot or by sensing a rhythmic pattern presented on the screen. The methods were tested with five blind Braille readers as a proof of concept. The results of the first experiment showed that all three methods can be used to convey information as the participants could accurately (91-97 percent) recognize individual characters. In the second experiment the presentation rate of the most efficient and preferred method, the rhythm, was varied. A mean recognition accuracy of 70 percent was found when the speed of presenting a single character was nearly doubled from the first experiment. The results showed that temporal tactile feedback and Braille coding can be used to transmit single-character information while further studies are still needed to evaluate the presentation of serial information, i.e., multiple Braille characters.
Hashimoto, Haruo; Mizushima, Tomoko; Chijiwa, Tsuyoshi; Nakamura, Masato; Suemizu, Hiroshi
2017-06-15
The purpose of this study was to establish an efficient method for the preparation of an adeno-associated viral (AAV), serotype DJ/8, carrying the GFP gene (AAV-DJ/8-GFP). We compared the yields of AAV-DJ/8 vector, which were produced by three different combination methods, consisting of two plasmid DNA transfection methods (lipofectamine and calcium phosphate co-precipitation; CaPi) and two virus DNA purification methods (iodixanol and cesium chloride; CsCl). The results showed that the highest yield of AAV-DJ/8-GFP vector was accomplished with the combination method of lipofectamine transfection and iodixanol purification. The viral protein expression levels and the transduction efficacy in HEK293 and CHO cells were not different among four different combination methods for AAV-DJ/8-GFP vectors. We confirmed that the AAV-DJ/8-GFP vector could transduce to human and murine hepatocyte-derived cell lines. These results show that AAV-DJ/8-GFP, purified by the combination of lipofectamine and iodixanol, produces an efficient yield without altering the characteristics of protein expression and AAV gene transduction. Copyright © 2017 Elsevier B.V. All rights reserved.
Sanada, Akira; Tanaka, Nobuo
2012-08-01
This study deals with the feedforward active control of sound transmission through a simply supported rectangular panel using vibration actuators. The control effect largely depends on the excitation method, including the number and locations of actuators. In order to obtain a large control effect at low frequencies over a wide frequency, an active transmission control method based on single structural mode actuation is proposed. Then, with the goal of examining the feasibility of the proposed method, the (1, 3) mode is selected as the target mode and a modal actuation method in combination with six point force actuators is considered. Assuming that a single input single output feedforward control is used, sound transmission in the case minimizing the transmitted sound power is calculated for some actuation methods. Simulation results showed that the (1, 3) modal actuation is globally effective at reducing the sound transmission by more than 10 dB in the low-frequency range for both normal and oblique incidences. Finally, experimental results also showed that a large reduction could be achieved in the low-frequency range, which proves the validity and feasibility of the proposed method.
Qiu, Xi-Zhen; Zhang, Fang-Hui
2013-01-01
The high-power white LED was prepared based on the high thermal conductivity aluminum, blue chips and YAG phosphor. By studying the spectral of different junction temperature, we found that the radiation spectrum of white LED has a minimum at 485 nm. The radiation intensity at this wavelength and the junction temperature show a good linear relationship. The LED junction temperature was measured based on the formula of relative spectral intensity and junction temperature. The result measured by radiation intensity method was compared with the forward voltage method and spectral method. The experiment results reveal that the junction temperature measured by this method was no more than 2 degrees C compared with the forward voltage method. It maintains the accuracy of the forward voltage method and overcomes the small spectral shift of spectral method, which brings the shortcoming on the results. It also had the advantages of practical, efficient and intuitive, noncontact measurement, and non-destruction to the lamp structure.
Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li
2010-07-01
The identification of differences in protein expression resulting from methodical variations is an essential component to the interpretation of true, biologically significant results. We used the Lowry and Bradford methods- two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. MATERIAL #ENTITYSTARTX00026; Differential protein expression patterns was assessed by western blot following protein quantification by the Lowry and Bradford methods. We have observed significant variations in protein concentrations following assessment with the Lowry versus Bradford methods, using identical samples. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations, with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on Western blot. We show for the first time that methodical variations observed in these protein assay techniques, can potentially translate into differential protein expression patterns, that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider methodical approaches to protein quantification in techniques that report quantitative differences.
Evaluation of normalization methods in mammalian microRNA-Seq data
Garmire, Lana Xia; Subramaniam, Shankar
2012-01-01
Simple total tag count normalization is inadequate for microRNA sequencing data generated from the next generation sequencing technology. However, so far systematic evaluation of normalization methods on microRNA sequencing data is lacking. We comprehensively evaluate seven commonly used normalization methods including global normalization, Lowess normalization, Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods with results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method applied to the RNA-Sequencing normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Comparing with the models used for DE, the choice of normalization method is the primary factor that affects the results of DE. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
NASA Astrophysics Data System (ADS)
Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd
2018-03-01
Controller that uses PID parameters requires a good tuning method in order to improve the control system performance. Tuning PID control method is divided into two namely the classical methods and the methods of artificial intelligence. Particle swarm optimization algorithm (PSO) is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms in the PID parameter tuning process. This research aims to improve the PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey- Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted by using the proposed PSO- PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods. They are implemented on the hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE has reduced the rise time by 48.13% and settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved the PSO-PID parameter by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method in the hydraulic positioning system.
The premixed flame in uniform straining flow
NASA Technical Reports Server (NTRS)
Durbin, P. A.
1982-01-01
Characteristics of the premixed flame in uniform straining flow are investigated by the technique of activation-energy asymptotics. An inverse method is used, which avoids some of the restrictions of previous analyses. It is shown that this method recovers known results for adiabatic flames. New results for flames with heat loss are obtained, and it is shown that, in the presence of finite heat loss, straining can extinguish flames. A stability analysis shows that straining can suppress the cellular instability of flames with Lewis number less than unity. Strain can produce instability of flames with Lewis number greater than unity. A comparison shows quite good agreement between theoretical deductions and experimental observations of Ishizuka, Miyasaka & Law (1981).
Measurement of phenols dearomatization via electrolysis: the UV-Vis solid phase extraction method.
Vargas, Ronald; Borrás, Carlos; Mostany, Jorge; Scharifker, Benjamin R
2010-02-01
Dearomatization levels during electrochemical oxidation of p-methoxyphenol (PMP) and p-nitrophenol (PNP) have been determined through UV-Vis spectroscopy using solid phase extraction (UV-Vis/SPE). The results show that the method is satisfactory to determine the ratio between aromatic compounds and aliphatic acids and reaction kinetics parameters during treatment of wastewater, in agreement with results obtained from numerical deconvolution of UV-Vis spectra. Analysis of solutions obtained from electrolysis of substituted phenols on antimony-doped tin oxide (SnO(2)--Sb) showed that an electron acceptor substituting group favored the aromatic ring opening reaction, preventing formation of intermediate quinone during oxidation. (c) 2009 Elsevier Ltd. All rights reserved.
Water-assisted extrusion of bio-based PETG/clay nanocomposites
NASA Astrophysics Data System (ADS)
Lee, Naeun; Lee, Sangmook
2018-02-01
Bio-based polyethylene terephthalate glycol-modified (PETG)/clay nanocomposites were prepared using the water-assisted extrusion process. The effects of different types of clay and clay mixing methods (with or without the use of water) and the resulting nanocomposites properties were investigated by measuring the rheological and tensile properties and morphologies. The valuable properties were achieved when Cloisite 30B was mixed in a slurry state. The results of the X-ray diffraction (XRD) and transmission electron microscopy (TEM) studies showed that the nano-clay was well dispersed within the PETG matrix. This shows that the slurry process could be an effective exfoliation method for many nanocomposites systems as well as for bio-based PETG/clay nanocomposites.
Computation of neutron fluxes in clusters of fuel pins arranged in hexagonal assemblies (2D and 3D)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabha, H.; Marleau, G.
2012-07-01
For computations of fluxes, we have used Carvik's method of collision probabilities. This method requires tracking algorithms. An algorithm to compute tracks (in 2D and 3D) has been developed for seven hexagonal geometries with cluster of fuel pins. This has been implemented in the NXT module of the code DRAGON. The flux distribution in cluster of pins has been computed by using this code. For testing the results, they are compared when possible with the EXCELT module of the code DRAGON. Tracks are plotted in the NXT module by using MATLAB, these plots are also presented here. Results are presentedmore » with increasing number of lines to show the convergence of these results. We have numerically computed volumes, surface areas and the percentage errors in these computations. These results show that 2D results converge faster than 3D results. The accuracy on the computation of fluxes up to second decimal is achieved with fewer lines. (authors)« less
Dynamic one-dimensional modeling of secondary settling tanks and system robustness evaluation.
Li, Ben; Stenstrom, M K
2014-01-01
One-dimensional secondary settling tank models are widely used in current engineering practice for design and optimization, and usually can be expressed as a nonlinear hyperbolic or nonlinear strongly degenerate parabolic partial differential equation (PDE). Reliable numerical methods are needed to produce approximate solutions that converge to the exact analytical solutions. In this study, we introduced a reliable numerical technique, the Yee-Roe-Davis (YRD) method as the governing PDE solver, and compared its reliability with the prevalent Stenstrom-Vitasovic-Takács (SVT) method by assessing their simulation results at various operating conditions. The YRD method also produced a similar solution to the previously developed Method G and Enquist-Osher method. The YRD and SVT methods were also used for a time-to-failure evaluation, and the results show that the choice of numerical method can greatly impact the solution. Reliable numerical methods, such as the YRD method, are strongly recommended.
NASA Astrophysics Data System (ADS)
Lamie, Nesrine T.
2015-10-01
Four, accurate, precise, and sensitive spectrophotometric methods are developed for simultaneous determination of a binary mixture of amlodipine besylate (AM) and atenolol (AT). AM is determined at its λmax 360 nm (0D), while atenolol can be determined by four different methods. Method (A) is absorption factor (AF). Method (B) is the new ratio difference method (RD) which measures the difference in amplitudes between 210 and 226 nm. Method (C) is novel constant center spectrophotometric method (CC). Method (D) is mean centering of the ratio spectra (MCR) at 284 nm. The methods are tested by analyzing synthetic mixtures of the cited drugs and they are applied to their commercial pharmaceutical preparation. The validity of results is assessed by applying standard addition technique. The results obtained are found to agree statistically with those obtained by official methods, showing no significant difference with respect to accuracy and precision.
Ender, Andreas; Mehl, Albert
2015-01-01
To investigate the accuracy of conventional and digital impression methods used to obtain full-arch impressions by using an in-vitro reference model. Eight different conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; and irreversible hydrocolloid, ALG) and digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; and Lava COS, LAV) full-arch impressions were obtained from a reference model with a known morphology, using a highly accurate reference scanner. The impressions obtained were then compared with the original geometry of the reference model and within each test group. A point-to-point measurement of the surface of the model using the signed nearest neighbour method resulted in a mean (10%-90%)/2 percentile value for the difference between the impression and original model (trueness) as well as the difference between impressions within a test group (precision). Trueness values ranged from 11.5 μm (VSE) to 60.2 μm (POE), and precision ranged from 12.3 μm (VSE) to 66.7 μm (POE). Among the test groups, VSE, VSES, and CER showed the highest trueness and precision. The deviation pattern varied with the impression method. Conventional impressions showed high accuracy across the full dental arch in all groups, except POE and ALG. Conventional and digital impression methods show differences regarding full-arch accuracy. Digital impression systems reveal higher local deviations of the full-arch model. Digital intraoral impression systems do not show superior accuracy compared to highly accurate conventional impression techniques. However, they provide excellent clinical results within their indications applying the correct scanning technique.
Berenguer, Roberto; de la Vara, Victoria; Lopez-Honrubia, Veronica; Nuñez, Ana Teresa; Rivera, Miguel; Villas, Maria Victoria; Sabater, Sebastia
2018-01-01
To analyse the influence of the image registration method on the adaptive radiotherapy of an IMRT prostate treatment, and to compare the dose accumulation according to 3 different image registration methods with the planned dose. The IMRT prostate patient was CT imaged 3 times throughout his treatment. The prostate, PTV, rectum and bladder were segmented on each CT. A Rigid, a deformable (DIR) B-spline and a DIR with landmarks registration algorithms were employed. The difference between the accumulated doses and planned doses were evaluated by the gamma index. The Dice coefficient and Hausdorff distance was used to evaluate the overlap between volumes, to quantify the quality of the registration. When comparing adaptive vs no adaptive RT, the gamma index calculation showed large differences depending on the image registration method (as much as 87.6% in the case of DIR B-spline). The quality of the registration was evaluated using an index such as the Dice coefficient. This showed that the best result was obtained with DIR with landmarks compared with the rest and it was always above 0.77, reported as a recommended minimum value for prostate studies in a multi-centre review. Apart from showing the importance of the application of an adaptive RT protocol in a particular treatment, this work shows that the election of the registration method is decisive in the result of the adaptive radiotherapy and dose accumulation. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Extended release of hyaluronic acid from hydrogel contact lenses for dry eye syndrome.
Maulvi, Furqan A; Soni, Tejal G; Shah, Dinesh O
2015-01-01
Current dry eye treatment includes delivering comfort enhancing agents to the eye via eye drops, but low residence time of eye drops leads to low bioavailability. Frequent administration leads to incompliance in patients, so there is a great need for medical device such as contact lenses to treat dry eye. Studies in the past have demonstrated the efficacy of hyaluronic acid (HA) in the treatment of dry eyes using eye drops. In this paper, we present two methods to load HA in hydrogel contact lenses, soaking method and direct entrapment. The contact lenses were characterized by studying their optical and physical properties to determine their suitability as extended wear contact lenses. HA-laden hydrogel contact lenses prepared by soaking method showed release up to 48 h with acceptable physical and optical properties. Hydrogel contact lenses prepared by direct entrapment method showed significant sustained release in comparison to soaking method. HA entrapped in hydrogels resulted in reduction in % transmittance, sodium ion permeability and surface contact angle, while increase in % swelling. The impact on each of these properties was proportional to HA loading. The batch with 200-μg HA loading showed all acceptable values (parameters) for contact lens use. Results of cytotoxicity study indicated the safety of hydrogel contact lenses. In vivo pharmacokinetics studies in rabbit tear fluid showed dramatic increase in HA mean residence time and area under the curve with lenses in comparison to eye drop treatment. The study demonstrates the promising potential of delivering HA through contact lenses for the treatment of dry eye syndrome.