Sample records for "presented method showed"

  1. An entropy correction method for unsteady full potential flows with strong shocks

    NASA Technical Reports Server (NTRS)

    Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.

    1986-01-01

    An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.

  2. Methods for Presenting Braille Characters on a Mobile Device with a Touchscreen and Tactile Feedback.

    PubMed

    Rantala, J; Raisamo, R; Lylykangas, J; Surakka, V; Raisamo, J; Salminen, K; Pakkanen, T; Hippula, A

    2009-01-01

    Three novel interaction methods were designed for reading six-dot Braille characters from the touchscreen of a mobile device. A prototype device with a piezoelectric actuator embedded under the touchscreen was used to create tactile feedback. The three interaction methods, scan, sweep, and rhythm, enabled users to read Braille characters one at a time either by exploring the characters dot by dot or by sensing a rhythmic pattern presented on the screen. The methods were tested with five blind Braille readers as a proof of concept. The results of the first experiment showed that all three methods can be used to convey information as the participants could accurately (91-97 percent) recognize individual characters. In the second experiment the presentation rate of the most efficient and preferred method, the rhythm, was varied. A mean recognition accuracy of 70 percent was found when the speed of presenting a single character was nearly doubled from the first experiment. The results showed that temporal tactile feedback and Braille coding can be used to transmit single-character information while further studies are still needed to evaluate the presentation of serial information, i.e., multiple Braille characters.

  3. A Revised Method of Presenting Wavenumber-Frequency Power Spectrum Diagrams That Reveals the Asymmetric Nature of Tropical Large-scale Waves

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Yang, Bo; Fu, Xiouhua

    2007-01-01

    The popular method of presenting wavenumber-frequency power spectrum diagrams for studying tropical large-scale waves in the literature is shown to give an incomplete presentation of these waves. The so-called "convectively-coupled Kelvin (mixed Rossby-gravity) waves" are presented as existing only in the symmetric (antisymmetric) component of the diagrams. This is obviously not consistent with the published composite/regression studies of "convectively-coupled Kelvin waves," which illustrate the asymmetric nature of these waves. The cause of this inconsistency is revealed in this note and a revised method of presenting the power spectrum diagrams is proposed. When this revised method is used, "convectively-coupled Kelvin waves" do show anti-symmetric components, and "convectively-coupled mixed Rossby-gravity waves (also known as Yanai waves)" do show a hint of symmetric components. These results bolster a published proposal that these waves be called "chimeric Kelvin waves," "chimeric mixed Rossby-gravity waves," etc. This revised method of presenting power spectrum diagrams offers a more rigorous means of comparing the General Circulation Models (GCM) output with observations by calling attention to the capability of GCMs in correctly simulating the asymmetric characteristics of the equatorial waves.
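
    As background for the decomposition discussed above, the sketch below shows the conventional presentation: the field is split into components symmetric and antisymmetric about the equator and a wavenumber-frequency power spectrum is formed with a 2-D FFT. This is the presentation the note argues is incomplete, not the revised method itself (which the abstract does not spell out); the array shapes and the random stand-in field are illustrative assumptions.

    ```python
    import numpy as np

    # Split a field v(time, lat, lon) into parts symmetric/antisymmetric about
    # the equator, then form the wavenumber-frequency power spectrum of each.
    rng = np.random.default_rng(0)
    nt, nlat, nlon = 256, 16, 128
    v = rng.standard_normal((nt, nlat, nlon))      # stand-in for OLR or a similar field

    v_sym  = 0.5 * (v + v[:, ::-1, :])             # symmetric about the equator
    v_anti = 0.5 * (v - v[:, ::-1, :])             # antisymmetric part

    def wk_power(field):
        """Latitude-averaged squared magnitude of the FFT over (time, lon)."""
        spec = np.fft.fft2(field, axes=(0, 2))     # frequency x wavenumber
        return (np.abs(spec) ** 2).mean(axis=1)

    p_sym, p_anti = wk_power(v_sym), wk_power(v_anti)
    print(p_sym.shape, p_anti.shape)               # (nt, nlon) each
    ```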

  4. Direct application of Padé approximant for solving nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario

    2014-01-01

    This work presents a direct procedure to apply the Padé method to find approximate solutions for nonlinear differential equations. Moreover, we present some case studies showing the strength of the method to generate highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximative method like the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, among others, as tools to obtain power series solutions to post-treat with the Padé approximant.
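
    For background only, the snippet below builds a classical Padé approximant from power-series coefficients with SciPy, i.e., the series-based post-processing step that the abstract says its direct procedure avoids; it is not the authors' method. The choice of exp(x) and of the [2/2] order is purely illustrative.

    ```python
    from math import factorial
    import numpy as np
    from scipy.interpolate import pade

    # Classical series-based Padé construction (for comparison only).
    taylor = [1.0 / factorial(k) for k in range(5)]   # coefficients of exp(x) up to x^4
    p, q = pade(taylor, 2)                            # [2/2] approximant: numerator p, denominator q
    print("Pade [2/2] at x = 1:", p(1.0) / q(1.0))    # ~ 2.7143
    print("exp(1)             :", np.exp(1.0))        # 2.71828...
    ```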

  5. Engagement with Physics across Diverse Festival Audiences

    ERIC Educational Resources Information Center

    Roche, Joseph; Stanley, Jessica; Davis, Nicola

    2016-01-01

    Science shows provide a method of introducing large public audiences to physics concepts in a nonformal learning environment. While these shows have the potential to provide novel means of educational engagement, it is often difficult to measure that engagement. We present a method of producing an interactive physics show that seeks to provide…

  6. Remote air pollution measurement

    NASA Technical Reports Server (NTRS)

    Byer, R. L.

    1975-01-01

    This paper presents a discussion and comparison of the Raman method, the resonance and fluorescence backscatter method, long path absorption methods and the differential absorption method for remote air pollution measurement. A comparison of the above remote detection methods shows that the absorption methods offer the most sensitivity at the least required transmitted energy. Topographical absorption provides the advantage of a single ended measurement, and differential absorption offers the additional advantage of a fully depth resolved absorption measurement. Recent experimental results confirming the range and sensitivity of the methods are presented.

  7. Critical study of higher order numerical methods for solving the boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Wornom, S. F.

    1978-01-01

    A fourth order box method is presented for calculating numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations. The method, which is the natural extension of the second order box scheme to fourth order, was demonstrated with application to the incompressible, laminar and turbulent, boundary layer equations. The efficiency of the present method is compared with two point and three point higher order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than higher order methods for both laminar and turbulent flows.

  8. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    PubMed

    Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie

    2016-01-01

    In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient for large-scale nonsmooth problems; several problems are tested, with dimensions of up to 100,000 variables.
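
    For context, the classical Hager-Zhang direction update is reproduced below in LaTeX notation; the abstract does not spell out the modification used in the MHZ variant, so only the standard formula is given.

        d_{k+1} = -g_{k+1} + \beta_k^{HZ} d_k, \qquad
        \beta_k^{HZ} = \frac{1}{d_k^{\top} y_k}\left( y_k - 2\, d_k \frac{\lVert y_k \rVert^2}{d_k^{\top} y_k} \right)^{\!\top} g_{k+1}, \qquad
        y_k = g_{k+1} - g_k,

    where g_k is the (sub)gradient at iterate x_k and d_k the current search direction.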

  9. An equivalent domain integral method in the two-dimensional analysis of mixed mode crack problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1990-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented.

  10. A fast finite-difference algorithm for topology optimization of permanent magnets

    NASA Astrophysics Data System (ADS)

    Abert, Claas; Huber, Christian; Bruckner, Florian; Vogler, Christoph; Wautischer, Gregor; Suess, Dieter

    2017-09-01

    We present a finite-difference method for the topology optimization of permanent magnets that is based on the fast-Fourier-transform (FFT) accelerated computation of the stray field. The presented method employs the density approach for topology optimization and uses an adjoint method for the gradient computation. Comparison to various state-of-the-art finite-element implementations shows superior performance and accuracy. Moreover, the presented method is very flexible and easy to implement due to various preexisting FFT stray-field implementations that can be used.
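
    A minimal one-dimensional sketch of the underlying idea, not the authors' implementation: the stray field is a convolution of the magnetization with a long-range kernel, so zero-padded FFTs evaluate it in O(N log N) instead of O(N^2). The kernel below is a placeholder; a real micromagnetic code uses the analytic demagnetization tensor in three dimensions.

    ```python
    import numpy as np

    n = 64
    m = np.zeros(n)
    m[20:40] = 1.0                                     # 1-D "magnetization" profile
    offsets = np.arange(-(n - 1), n)                   # relative cell offsets
    kernel = 1.0 / (offsets.astype(float) ** 2 + 1.0)  # placeholder long-range kernel

    nfft = 4 * n                                       # enough zero padding for a linear convolution
    conv = np.fft.irfft(np.fft.rfft(m, nfft) * np.fft.rfft(kernel, nfft), nfft)
    field_fft = conv[n - 1:2 * n - 1]                  # field at the n original cells

    field_direct = np.array([np.sum(kernel[(i - np.arange(n)) + (n - 1)] * m)
                             for i in range(n)])       # O(N^2) reference sum
    print(np.max(np.abs(field_fft - field_direct)))    # agreement to machine precision
    ```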

  11. A revised method of presenting wavenumber-frequency power spectrum diagrams that reveals the asymmetric nature of tropical large-scale waves

    NASA Astrophysics Data System (ADS)

    Chao, Winston C.; Yang, Bo; Fu, Xiouhua

    2009-11-01

    The popular method of presenting wavenumber-frequency power spectrum diagrams for studying tropical large-scale waves in the literature is shown to give an incomplete presentation of these waves. The so-called “convectively coupled Kelvin (mixed Rossby-gravity) waves” are presented as existing only in the symmetric (anti-symmetric) component of the diagrams. This is obviously not consistent with the published composite/regression studies of “convectively coupled Kelvin waves,” which illustrate the asymmetric nature of these waves. The cause of this inconsistency is revealed in this note and a revised method of presenting the power spectrum diagrams is proposed. When this revised method is used, “convectively coupled Kelvin waves” do show anti-symmetric components, and “convectively coupled mixed Rossby-gravity waves (also known as Yanai waves)” do show a hint of symmetric components. These results bolster a published proposal that these waves should be called “chimeric Kelvin waves,” “chimeric mixed Rossby-gravity waves,” etc. This revised method of presenting power spectrum diagrams offers an additional means of comparing the GCM output with observations by calling attention to the capability of GCMs to correctly simulate the asymmetric characteristics of equatorial waves.

  12. Using the surface panel method to predict the steady performance of ducted propellers

    NASA Astrophysics Data System (ADS)

    Cai, Hao-Peng; Su, Yu-Min; Li, Xin; Shen, Hai-Long

    2009-12-01

    A new numerical method was developed for predicting the steady hydrodynamic performance of ducted propellers. A potential-based surface panel method was applied both to the duct and to the propeller, and the interaction between them was solved by an induced velocity potential iterative method. Compared with the induced velocity iterative method, the presented method saves programming and computation time. Numerical results for a JD simplified ducted propeller series showed that the presented method is effective for predicting the steady hydrodynamic performance of ducted propellers.

  13. A new multi-step technique with differential transform method for analytical solution of some nonlinear variable delay differential equations.

    PubMed

    Benhammouda, Brahim; Vazquez-Leal, Hector

    2016-01-01

    This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general-purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it possesses a simple procedure based on a few straightforward steps and that it can be combined with any analytical method other than the DTM, such as the homotopy perturbation method.
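
    An illustrative sketch of the method of steps for a constant-delay DDE, with a standard numerical integrator standing in for the DTM (the paper handles variable delays analytically); the test equation y'(t) = -y(t - 1) and its history function are assumptions made for this example.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    tau = 1.0                      # constant delay (the paper treats variable delays)
    history = lambda t: 1.0        # y(t) = 1 for t <= 0

    # Method of steps: on each interval [k*tau, (k+1)*tau] the delayed term is
    # already known from the previous segment, so the DDE reduces to an ODE.
    segments = []                  # dense solutions of completed segments

    def delayed_value(t):
        s = t - tau
        if s <= 0.0:
            return history(s)
        for t0, t1, sol in segments:
            if t0 <= s <= t1:
                return sol(s)[0]
        raise ValueError("delayed time outside the computed range")

    y0 = [history(0.0)]
    for k in range(4):             # integrate over four delay intervals
        t0, t1 = k * tau, (k + 1) * tau
        sol = solve_ivp(lambda t, y: [-delayed_value(t)], (t0, t1), y0,
                        dense_output=True, max_step=tau / 50)
        segments.append((t0, t1, sol.sol))
        y0 = sol.y[:, -1]
        print(f"y({t1:.1f}) = {y0[0]:.6f}")
    ```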

  14. Seismic passive earth resistance using modified pseudo-dynamic method

    NASA Astrophysics Data System (ADS)

    Pain, Anindya; Choudhury, Deepankar; Bhattacharyya, S. K.

    2017-04-01

    In earthquake prone areas, understanding of the seismic passive earth resistance is very important for the design of different geotechnical earth retaining structures. In this study, the limit equilibrium method is used for estimation of critical seismic passive earth resistance for an inclined wall supporting horizontal cohesionless backfill. A composite failure surface is considered in the present analysis. Seismic forces are computed assuming the backfill soil as a viscoelastic material overlying a rigid stratum and the rigid stratum is subjected to a harmonic shaking. The present method satisfies the boundary conditions. The amplification of acceleration depends on the properties of the backfill soil and on the characteristics of the input motion. The acceleration distribution along the depth of the backfill is found to be nonlinear in nature. The present study shows that the horizontal and vertical acceleration distribution in the backfill soil is not always in-phase for the critical value of the seismic passive earth pressure coefficient. The effect of different parameters on the seismic passive earth pressure is studied in detail. A comparison of the present method with other theories is also presented, which shows the merits of the present study.

  15. Laser ultrasonics for measurements of high-temperature elastic properties and internal temperature distribution

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takahiro; Nagata, Yasuaki; Nose, Tetsuro; Kawashima, Katsuhiro

    2001-06-01

    We show two kinds of demonstrations using a laser ultrasonic method. First, we present the results of Young's modulus of ceramics at temperatures above 1600 °C. Second, we introduce the method to determine the internal temperature distribution of a hot steel plate with errors of less than 3%. We compare the results obtained by this laser ultrasonic method with conventional contact techniques to show the validity of this method.

  16. A Radioactivity Based Quantitative Analysis of the Amount of Thorium Present in Ores and Metallurgical Products; ANALYSE QUANTITATIVE DU THORIUM DANS LES MINERAIS ET LES PRODUITS THORIFERES PAR UNE METHODE BASEE SUR LA RADIOACTIVITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collee, R.; Govaerts, J.; Winand, L.

    1959-10-31

    A brief resume of the classical methods of quantitative determination of thorium in ores and thoriferous products is given to show that a rapid, accurate, and precise physical method based on the radioactivity of thorium would be of great utility. A method based on the utilization of the characteristic spectrum of the thorium gamma radiation is presented. The preparation of the samples and the instruments needed for the measurements is discussed. The experimental results show that the reproducibility is very satisfactory and that it is possible to detect Th contents of 1% or smaller. (J.S.R.)

  17. Ethanol production from lignocellulose

    DOEpatents

    Ingram, Lonnie O.; Wood, Brent E.

    2001-01-01

    This invention presents a method of improving enzymatic degradation of lignocellulose, as in the production of ethanol from lignocellulosic material, through the use of ultrasonic treatment. The invention shows that ultrasonic treatment reduces cellulase requirements by 1/3 to 1/2. With the cost of enzymes being a major problem in the cost-effective production of ethanol from lignocellulosic material, this invention presents a significant improvement over presently available methods.

  18. Making chaotic behavior in a damped linear harmonic oscillator

    NASA Astrophysics Data System (ADS)

    Konishi, Keiji

    2001-06-01

    The present Letter proposes a simple control method which makes chaotic behavior in a damped linear harmonic oscillator. The method is a modification of the scheme proposed by Wang and Chen (IEEE CAS-I 47 (2000) 410), which presents an anti-control method for making chaotic behavior in discrete-time linear systems. We provide a systematic procedure to design the parameters and sampling period of a feedback controller. Furthermore, we show that our method works well in numerical simulations.

  19. Wavelet imaging cleaning method for atmospheric Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.

    2002-07-01

    We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method is based on the utilization of wavelets to identify noise pixels in images of gamma-ray and hadronic induced air showers. This method selects more signal pixels with Cherenkov photons than traditional image processing techniques. In addition, the method is equally efficient at rejecting pixels with noise alone. The inclusion of more signal pixels in an image of an air shower allows for a more accurate reconstruction, especially at lower gamma-ray energies that produce low levels of light. We present the results of Monte Carlo simulations of gamma-ray and hadronic air showers which show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are utilized to show the efficacy of the method for extracting a gamma-ray signal from the background of hadronic generated images.
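
    A generic wavelet-thresholding sketch using the PyWavelets package, shown only to illustrate the kind of wavelet-based separation of signal and noise pixels; the wavelet, threshold rule, and toy image below are assumptions, not the specific cleaning procedure of the paper.

    ```python
    import numpy as np
    import pywt

    def wavelet_clean(image, wavelet="db2", level=3, k=3.0):
        """Soft-threshold the detail coefficients and reconstruct the image."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        # Estimate the noise level from the finest-scale details (MAD estimator).
        finest = np.concatenate([c.ravel() for c in coeffs[-1]])
        sigma = np.median(np.abs(finest)) / 0.6745
        cleaned = [coeffs[0]] + [
            tuple(pywt.threshold(c, k * sigma, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        out = pywt.waverec2(cleaned, wavelet)
        return out[: image.shape[0], : image.shape[1]]

    # Toy "camera image": a compact shower on top of uniform noise.
    rng = np.random.default_rng(0)
    img = rng.normal(0.0, 1.0, (64, 64))
    img[28:36, 30:40] += 5.0
    signal_pixels = wavelet_clean(img) > 2.0
    print("pixels kept as signal:", int(signal_pixels.sum()))
    ```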

  20. Re-imagining a Stata/Python Combination

    NASA Technical Reports Server (NTRS)

    Fiedler, James

    2013-01-01

    At last year's Stata Conference, I presented some ideas for combining Stata and the Python programming language within a single interface. Two methods were presented: in one, Python was used to automate Stata; in the other, Python was used to send simulated keystrokes to the Stata GUI. The first method has the drawback of only working in Windows, and the second can be slow and subject to character input limits. In this presentation, I will demonstrate a method for achieving interaction between Stata and Python that does not suffer these drawbacks, and I will present some examples to show how this interaction can be useful.

  1. Towards a robust HDR imaging system

    NASA Astrophysics Data System (ADS)

    Long, Xin; Zeng, Xiangrong; Huangpeng, Qizi; Zhou, Jinglun; Feng, Jing

    2016-07-01

    High dynamic range (HDR) images can show more detail and luminance information on a general display device than low dynamic range (LDR) images. We present a robust HDR imaging system which can deal with blurry LDR images, overcoming the limitations of most existing HDR methods. Experiments on real images show the effectiveness and competitiveness of the proposed method.

  2. A Data Driven Model for Predicting RNA-Protein Interactions based on Gradient Boosting Machine.

    PubMed

    Jain, Dharm Skandh; Gupte, Sanket Rajan; Aduri, Raviprasad

    2018-06-22

    RNA protein interactions (RPI) play a pivotal role in the regulation of various biological processes. Experimental validation of RPI has been time-consuming, paving the way for computational prediction methods. The major limiting factor of these methods has been the accuracy and confidence of the predictions, and our in-house experiments show that they fail to accurately predict RPI involving short RNA sequences such as TERRA RNA. Here, we present a data-driven model for RPI prediction using a gradient boosting classifier. Amino acids and nucleotides are classified based on the high-resolution structural data of RNA protein complexes. The minimum structural unit consisting of five residues is used as the descriptor. Comparative analysis of existing methods shows the consistently higher performance of our method irrespective of the length of RNA present in the RPI. The method has been successfully applied to map RPI networks involving both long noncoding RNA as well as TERRA RNA. The method is also shown to successfully predict RNA and protein hubs present in RPI networks of four different organisms. The robustness of this method will provide a way for predicting RPI networks of yet unknown interactions for both long noncoding RNA and microRNA.
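
    A minimal sketch of fitting a gradient boosting classifier with scikit-learn; the paper's descriptors are structure-derived five-residue units, so the synthetic features, labels, and hyperparameters below are purely illustrative stand-ins.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 20))                                 # stand-in descriptor vectors
    y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=2000)) > 0    # stand-in interaction labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```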

  3. The Use of Religious Coping Methods in a Secular Society: A Survey Study Among Cancer Patients in Sweden.

    PubMed

    Ahmadi, Nader; Ahmadi, Fereshteh

    2017-07-01

    In the present article, based on results from a survey study in Sweden among 2,355 cancer patients, the role of religion in coping is discussed. The survey study, in turn, was based on earlier findings from a qualitative study of cancer patients in Sweden. The purpose of the present survey study was to determine to what extent results obtained in the qualitative study can be applied to a wider population of cancer patients in Sweden. The present study shows that use of religious coping methods is infrequent among cancer patients in Sweden. Besides the two methods that are ranked in 12th and 13th place, that is, in the middle (Listening to religious music and Praying to God to make things better), the other religious coping methods receive the lowest rankings, showing how nonsignificant such methods are in coping with cancer in Sweden. However, the question of who turns to God and who is self-reliant in a critical situation is too complicated to be resolved solely in terms of the strength of individuals' religious commitments. In addition to background and situational factors, the culture in which the individual was socialized is an important factor. Regarding the influence of background variables, the present results show that gender, age, and area of upbringing played an important role in almost all of the religious coping methods our respondents used. In general, people in the oldest age-group, women, and people raised in places with 20,000 or fewer residents had a higher average use of religious coping methods than did younger people, men, and those raised in larger towns.

  4. The Use of Religious Coping Methods in a Secular Society

    PubMed Central

    Ahmadi, Nader

    2015-01-01

    In the present article, based on results from a survey study in Sweden among 2,355 cancer patients, the role of religion in coping is discussed. The survey study, in turn, was based on earlier findings from a qualitative study of cancer patients in Sweden. The purpose of the present survey study was to determine to what extent results obtained in the qualitative study can be applied to a wider population of cancer patients in Sweden. The present study shows that use of religious coping methods is infrequent among cancer patients in Sweden. Besides the two methods that are ranked in 12th and 13th place, that is, in the middle (Listening to religious music and Praying to God to make things better), the other religious coping methods receive the lowest rankings, showing how nonsignificant such methods are in coping with cancer in Sweden. However, the question of who turns to God and who is self-reliant in a critical situation is too complicated to be resolved solely in terms of the strength of individuals’ religious commitments. In addition to background and situational factors, the culture in which the individual was socialized is an important factor. Regarding the influence of background variables, the present results show that gender, age, and area of upbringing played an important role in almost all of the religious coping methods our respondents used. In general, people in the oldest age-group, women, and people raised in places with 20,000 or fewer residents had a higher average use of religious coping methods than did younger people, men, and those raised in larger towns. PMID:28690385

  5. Implementation of equivalent domain integral method in the two-dimensional analysis of mixed mode problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The total and product integrals consist of the sum of an area or domain integral and line integrals on the crack faces. The line integrals vanish only when the crack faces are traction free and the loading is either pure mode 1 or pure mode 2 or a combination of both with only the square-root singular term in the stress field. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all problems analyzed. The EDI method when applied to a problem of an interface crack in two different materials showed that the mode 1 and mode 2 components are domain dependent while the total integral is not. This behavior is caused by the presence of the oscillatory part of the singularity in bimaterial crack problems. The EDI method, thus, shows behavior similar to the virtual crack closure method for bimaterial problems.

  6. Predicting chaos in memristive oscillator via harmonic balance method.

    PubMed

    Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai

    2012-12-01

    This paper studies the possible chaotic behaviors in a memristive oscillator with cubic nonlinearities via the harmonic balance method, which is also called the describing function method. This method was proposed to detect chaos in the classical Chua's circuit. We first transform the considered memristive oscillator system into a Lur'e model and present the prediction of the existence of chaotic behaviors. To ensure the prediction result is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.

  7. An iterative method for obtaining the optimum lightning location on a spherical surface

    NASA Technical Reports Server (NTRS)

    Chao, Gao; Qiming, MA

    1991-01-01

    A brief introduction to the basic principles of an eigen method used to obtain the optimum source location of lightning is presented. The location of the optimum source is obtained by using multiple direction finders (DF's) on a spherical surface. An improvement of this method, which takes the distance of source-DF's as a constant, is presented. It is pointed out that using a weight factor of signal strength is not the most ideal method because of the inexact inverse signal strength-distance relation and the inaccurate signal amplitude. An iterative calculation method is presented using the distance from the source to the DF as a weight factor. This improved method has higher accuracy and needs only a little more calculation time. Some computer simulations for a 4DF system are presented to show the improvement of location through use of the iterative method.

  8. A vectorized Lanczos eigensolver for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1990-01-01

    The computational strategies used to implement a Lanczos-based-method eigensolver on the latest generation of supercomputers are described. Several examples of structural vibration and buckling problems are presented that show the effects of using optimization techniques to increase the vectorization of the computational steps. The data storage and access schemes and the tools and strategies that best exploit the computer resources are presented. The method is implemented on the Convex C220, the Cray 2, and the Cray Y-MP computers. Results show that very good computation rates are achieved for the most computationally intensive steps of the Lanczos algorithm and that the Lanczos algorithm is many times faster than other methods extensively used in the past.
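
    A minimal serial Lanczos tridiagonalization in Python (no reorthogonalization, no vectorization tuning), shown only to illustrate the algorithm the eigensolver is built on; the test matrix and the number of Lanczos steps are arbitrary.

    ```python
    import numpy as np

    def lanczos(A, v0, m):
        """Return Ritz values and Lanczos vectors after m steps on symmetric A."""
        n = len(v0)
        V = np.zeros((n, m))
        alpha = np.zeros(m)
        beta = np.zeros(m - 1)
        V[:, 0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = A @ V[:, j]
            alpha[j] = V[:, j] @ w
            w -= alpha[j] * V[:, j]
            if j > 0:
                w -= beta[j - 1] * V[:, j - 1]
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                V[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return np.linalg.eigvalsh(T), V

    rng = np.random.default_rng(1)
    B = rng.standard_normal((200, 200))
    A = B @ B.T                                   # symmetric test matrix
    ritz, _ = lanczos(A, rng.standard_normal(200), 30)
    print("largest Ritz value :", ritz[-1])
    print("largest eigenvalue :", np.linalg.eigvalsh(A)[-1])
    ```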

  9. An Accurate and Stable FFT-based Method for Pricing Options under Exp-Lévy Processes

    NASA Astrophysics Data System (ADS)

    Ding, Deng; Chong U, Sio

    2010-05-01

    An accurate and stable method for pricing European options in exp-Lévy models is presented. The main idea of this new method is to combine the quadrature technique with the Carr-Madan fast Fourier transform method. The theoretical analysis shows that the overall complexity of this new method is still O(N log N) with N grid points, as for the fast Fourier transform methods. Numerical experiments for different exp-Lévy processes also show that the numerical algorithm proposed by this new method is accurate and stable for small strike prices K. This develops and improves the Carr-Madan method.
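
    For background, the standard Carr-Madan representation that such FFT pricers build on is reproduced below in LaTeX notation (the paper's quadrature-based modification is not reproduced here):

        C(k) = \frac{e^{-\alpha k}}{\pi} \int_0^{\infty} e^{-ivk}\, \psi(v)\, dv, \qquad
        \psi(v) = \frac{e^{-rT}\, \varphi\big(v - (\alpha + 1)i\big)}{\alpha^2 + \alpha - v^2 + i(2\alpha + 1)v},

    where k = ln K is the log-strike, alpha > 0 a damping parameter, and phi the risk-neutral characteristic function of the log-price at maturity T; sampling v on a uniform grid turns the integral into a single FFT of size N, evaluated in O(N log N).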

  10. Experimental validation of the intrinsic spatial efficiency method over a wide range of sizes for cylindrical sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Larroquette, Philippe; Camilla, S.

    The intrinsic spatial efficiency method is a new absolute method to determine the efficiency of a gamma spectroscopy system for any extended source. In the original work the method was experimentally demonstrated and validated for homogeneous cylindrical sources containing 137Cs, whose sizes varied over a small range (29.5 mm radius and 15.0 to 25.9 mm height). In this work we present an extension of the validation over a wide range of sizes. The dimensions of the cylindrical sources vary between 10 and 40 mm in height and 8 and 30 mm in radius. The cylindrical sources were prepared using the reference material IAEA-372, which had a specific activity of 11320 Bq/kg in July 2006. The results obtained were better for the sources with 29 mm radius, showing relative bias of less than 5%, and for the sources with 10 mm height, showing relative bias of less than 6%. In comparison with the results obtained in the work where the method was first presented, the majority of these results show excellent agreement.

  11. Graphic report of the results from propensity score method analyses.

    PubMed

    Shrier, Ian; Pang, Menglan; Platt, Robert W

    2017-08-01

    To increase transparency in studies reporting propensity scores by using graphical methods that clearly illustrate (1) the number of participant exclusions that occur as a consequence of the analytic strategy and (2) whether treatment effects are constant or heterogeneous across propensity scores. We applied graphical methods to a real-world pharmacoepidemiologic study that evaluated the effect of initiating statin medication on the 1-year all-cause mortality post-myocardial infarction. We propose graphical methods to show the consequences of trimming and matching on the exclusion of participants from the analysis. We also propose the use of meta-analytical forest plots to show the magnitude of effect heterogeneity. A density plot with vertical lines demonstrated the proportion of subjects excluded because of trimming. A frequency plot with horizontal lines demonstrated the proportion of subjects excluded because of matching. An augmented forest plot illustrates the amount of effect heterogeneity present in the data. Our proposed techniques present additional and useful information that helps readers understand the sample that is analyzed with propensity score methods and whether effect heterogeneity is present. Copyright © 2017 Elsevier Inc. All rights reserved.
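
    A hypothetical sketch of the first proposed graphic, a density plot of propensity scores with trimming bounds marked; the simulated confounder, the logistic propensity model, and the 0.1/0.9 bounds are illustrative assumptions, not values from the study.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    x = rng.normal(size=5000)                        # a single simulated confounder
    p = 1.0 / (1.0 + np.exp(-(0.8 * x - 0.2)))       # true propensity score
    treated = rng.uniform(size=5000) < p

    lo, hi = 0.1, 0.9                                # hypothetical trimming bounds
    excluded = np.mean((p < lo) | (p > hi))

    for group, label in [(treated, "treated"), (~treated, "untreated")]:
        plt.hist(p[group], bins=40, density=True, histtype="step", label=label)
    plt.axvline(lo, linestyle="--")
    plt.axvline(hi, linestyle="--")
    plt.xlabel("propensity score")
    plt.ylabel("density")
    plt.legend()
    plt.title(f"{excluded:.1%} of subjects fall outside the trimming bounds")
    plt.show()
    ```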

  12. Engagement with physics across diverse festival audiences

    NASA Astrophysics Data System (ADS)

    Roche, Joseph; Stanley, Jessica; Davis, Nicola

    2016-07-01

    Science shows provide a method of introducing large public audiences to physics concepts in a nonformal learning environment. While these shows have the potential to provide novel means of educational engagement, it is often difficult to measure that engagement. We present a method of producing an interactive physics show that seeks to provide effective and measurable audience engagement. We share our results from piloting this method at a leading music and arts festival as well as a science festival. This method also facilitated the collection of opinions and feedback directly from the audience which helps explore the benefits and limitations of this type of nonformal engagement in physics education.

  13. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving linear systems with Z-matrices. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than the rate of convergence of the SOR-type iterative method.
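
    For reference, a plain (unpreconditioned) SOR iteration, which reduces to Gauss-Seidel for omega = 1; the small Z-matrix below is an illustrative example, and the preconditioners compared in the paper are not included.

    ```python
    import numpy as np

    def sor(A, b, omega, tol=1e-10, max_iter=10_000):
        """Plain SOR iteration; omega = 1 gives Gauss-Seidel."""
        n = len(b)
        x = np.zeros(n)
        for k in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                return x, k + 1
        return x, max_iter

    # A small diagonally dominant Z-matrix (non-positive off-diagonal entries).
    A = np.array([[ 4., -1.,  0., -1.],
                  [-1.,  4., -1.,  0.],
                  [ 0., -1.,  4., -1.],
                  [-1.,  0., -1.,  4.]])
    b = np.ones(4)
    for omega in (1.0, 1.2):
        _, iters = sor(A, b, omega)
        print(f"omega = {omega}: converged in {iters} iterations")
    ```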

  14. Method of preliminary localization of the iris in biometric access control systems

    NASA Astrophysics Data System (ADS)

    Minacova, N.; Petrov, I.

    2015-10-01

    This paper presents a method of preliminary localization of the iris, based on the stable brightness features of the iris in images of the eye. In tests on eye images from publicly available databases, the method showed good accuracy and speed compared to existing preliminary localization methods.

  15. Finite difference time domain (FDTD) method for modeling the effect of switched gradients on the human body in MRI.

    PubMed

    Zhao, Huawei; Crozier, Stuart; Liu, Feng

    2002-12-01

    Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model. Copyright 2002 Wiley-Liss, Inc.
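
    A generic 1-D FDTD leapfrog sketch in normalized units, shown only to illustrate the update scheme the method above builds on; the grid size, Gaussian source, and perfectly conducting boundaries are arbitrary choices, and the paper's low-frequency source-injection technique is not reproduced.

    ```python
    import numpy as np

    nz, nt = 400, 900
    c, dz = 1.0, 1.0
    dt = 0.5 * dz / c                                   # Courant-stable time step
    ez = np.zeros(nz)                                   # electric field (PEC ends)
    hy = np.zeros(nz - 1)                               # magnetic field on the staggered grid

    for n in range(nt):
        hy += dt / dz * (ez[1:] - ez[:-1])              # update H from the curl of E
        ez[1:-1] += dt / dz * (hy[1:] - hy[:-1])        # update E from the curl of H
        ez[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source

    print("peak |Ez| on the grid:", np.abs(ez).max())
    ```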

  16. A New Multimedia Application for Teaching and Learning Chemical Equilibrium

    ERIC Educational Resources Information Center

    Ollino, Mario; Aldoney, Jenny; Domínguez, Ana M.; Merino, Cristian

    2018-01-01

    This study presents a method for teaching the subject of chemical equilibrium in which students engage in self-learning mediated by the use of a new multimedia animation (SEQ-alfa©). This method is presented together with evidence supporting its advantages. At a microscopic level, the simulator shows the mutual transformation of A molecules into B…

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Ch.; Gao, X. W.; Sladek, J.

    This paper reports our recent research work on crack analysis in continuously non-homogeneous and linear elastic functionally graded materials. A meshless boundary element method is developed for this purpose. Numerical examples are presented and discussed to demonstrate the efficiency and the accuracy of the present numerical method, and to show the effects of the material gradation on the crack-opening displacements and the stress intensity factors.

  18. A Unified Approach to Teaching Quadratic and Cubic Equations.

    ERIC Educational Resources Information Center

    Ward, A. J. B.

    2003-01-01

    Presents a simple method for teaching the algebraic solution of cubic equations via completion of the cube. Shows that this method is readily accepted by students already familiar with completion of the square as a method for quadratic equations. (Author/KHR)

  19. Evaluation of physicochemical properties of root-end filling materials using conventional and Micro-CT tests

    PubMed Central

    TORRES, Fernanda Ferrari Esteves; BOSSO-MARTELO, Roberta; ESPIR, Camila Galletti; CIRELLI, Joni Augusto; GUERREIRO-TANOMARU, Juliane Maria; TANOMARU-FILHO, Mario

    2017-01-01

    Abstract. Objective: To evaluate solubility, dimensional stability, filling ability and volumetric change of root-end filling materials using conventional tests and new Micro-CT-based methods. Material and Methods: [truncated in the source record]. Results: The results suggested correlated or complementary data between the proposed tests. At 7 days, BIO showed higher solubility and at 30 days, showed higher volumetric change in comparison with MTA (p<0.05). With regard to volumetric change, the tested materials were similar (p>0.05) at 7 days. At 30 days, they presented similar solubility. BIO and MTA showed higher dimensional stability than ZOE (p<0.05). ZOE and BIO showed higher filling ability (p<0.05). Conclusions: ZOE presented a higher dimensional change, and BIO had greater solubility after 7 days. BIO presented filling ability and dimensional stability, but greater volumetric change than MTA after 30 days. Micro-CT can provide important data on the physicochemical properties of materials, complementing conventional tests. PMID:28877275

  20. Stopping power of dense plasmas: The collisional method and limitations of the dielectric formalism.

    PubMed

    Clauser, C F; Arista, N R

    2018-02-01

    We present a study of the stopping power of plasmas using two main approaches: the collisional (scattering theory) and the dielectric formalisms. In the former case, we use a semiclassical method based on quantum scattering theory. In the latter case, we use the full description given by the extension of the Lindhard dielectric function for plasmas of all degeneracies. We compare these two theories and show that the dielectric formalism has limitations when it is used for slow heavy ions or atoms in dense plasmas. We present a study of these limitations and show the regimes where the dielectric formalism can be used, with appropriate corrections to include the usual quantum and classical limits. On the other hand, the semiclassical method shows the correct behavior for all plasma conditions and projectile velocity and charge. We consider different models for the ion charge distributions, including bare and dressed ions as well as neutral atoms.

  1. On finite element methods for the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Aziz, A. K.; Werschulz, A. G.

    1979-01-01

    The numerical solution of the Helmholtz equation is considered via finite element methods. A two-stage method which gives the same accuracy in the computed gradient as in the computed solution is discussed. Error estimates for the method using a newly developed proof are given, and the computational considerations which show this method to be computationally superior to previous methods are presented.
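
    For background (not the two-stage scheme of this record), the standard Galerkin finite element formulation of the Helmholtz problem -Δu - k²u = f with homogeneous Dirichlet data seeks u_h in a finite element space V_h such that

        \int_{\Omega} \nabla u_h \cdot \nabla v_h \, dx \;-\; k^2 \int_{\Omega} u_h\, v_h \, dx \;=\; \int_{\Omega} f\, v_h \, dx \qquad \forall\, v_h \in V_h .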

  2. Weak Presentations in Introductory Physics Texts.

    ERIC Educational Resources Information Center

    Jacobs, Samuel

    1978-01-01

    Presents a few illustrations of physics areas such as capacitors, free fall, vectors, and waves, to show that methods of presentation of specific topics, in some physics textbooks, produce in the average student the wrong impression and ignorance of important scientific facts. (GA)

  3. Practical security and privacy attacks against biometric hashing using sparse recovery

    NASA Astrophysics Data System (ADS)

    Topcu, Berkay; Karabat, Cagatay; Azadmanesh, Matin; Erdogan, Hakan

    2016-12-01

    Biometric hashing is a cancelable biometric verification method that has received research interest recently. This method can be considered as a two-factor authentication method which combines a personal password (or secret key) with a biometric to obtain a secure binary template which is used for authentication. We present novel practical security and privacy attacks against biometric hashing when the attacker is assumed to know the user's password in order to quantify the additional protection due to biometrics when the password is compromised. We present four methods that can reconstruct a biometric feature and/or the image from a hash and one method which can find the closest biometric data (i.e., face image) from a database. Two of the reconstruction methods are based on 1-bit compressed sensing signal reconstruction for which the data acquisition scenario is very similar to biometric hashing. Previous literature introduced simple attack methods, but we show that we can achieve a higher level of security threat using compressed sensing recovery techniques. In addition, we present privacy attacks which reconstruct a biometric image which resembles the original image. We quantify the performance of the attacks using detection error tradeoff curves and equal error rates under advanced attack scenarios. We show that conventional biometric hashing methods suffer from significant security and privacy leaks under practical attacks, and we believe more advanced hash generation methods are necessary to avoid these attacks.

  4. Analytical and quasi-Bayesian methods as development of the iterative approach for mixed radiation biodosimetry.

    PubMed

    Słonecka, Iwona; Łukasik, Krzysztof; Fornalski, Krzysztof W

    2018-06-04

    The present paper proposes two methods of calculating the components of the dose absorbed by the human body after exposure to a mixed neutron and gamma radiation field. The article presents a novel approach that replaces the common iterative method with an analytical form, thus reducing the calculation time. It also shows a possibility of estimating the neutron and gamma doses when their ratio in a mixed beam is not precisely known.

  5. Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.

    PubMed

    Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A

    2013-11-01

    We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of a previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.

  6. Time delayed Ensemble Nudging Method

    NASA Astrophysics Data System (ADS)

    An, Zhe; Abarbanel, Henry

    Optimal nudging methods based on time-delayed embedding theory have shown potential for analysis and data assimilation in the previous literature. To extend the application and promote practical implementation, a new nudging assimilation method based on the time-delayed embedding space is presented, and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of data can reduce the number of observations needed to preserve the quality of the numerical prediction, making it a potential alternative in the field of data assimilation for large geophysical models.

  7. Gradients estimation from random points with volumetric tensor in turbulence

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2017-12-01

    We present an estimation method of fully-resolved/coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by a geometric distribution of the points. The coarse-grained gradient can be considered as a low-pass-filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in an incompressible planar jet and a mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse-grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as a velocity vector in incompressible flows, especially when the number of the points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of the anisotropic distribution of the points. Increasing the number of points from 4 significantly improves the accuracy. Although the coarse-grained gradient changes with the cutoff length, the volumetric tensor approximation yields a coarse-grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method well captures turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
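
    A sketch of how such a linear least-squares gradient estimate can be assembled from scattered points, under the assumption that the "volumetric tensor" plays the role of the 3 × 3 moment matrix of the displacement vectors; the neighbour count and the linear test field are illustrative.

    ```python
    import numpy as np

    def gradient_from_points(x0, f0, xs, fs):
        """Least-squares estimate of grad f at x0 from scattered neighbours."""
        dx = xs - x0                 # (N, 3) displacements to neighbouring points
        df = fs - f0                 # (N,) value differences
        M = dx.T @ dx                # 3x3 moment ("volumetric") tensor
        return np.linalg.solve(M, dx.T @ df)

    # Check against a known linear field f(x) = a . x (exact for the linear model).
    rng = np.random.default_rng(2)
    a = np.array([1.0, -2.0, 0.5])
    x0 = np.zeros(3)
    xs = rng.normal(scale=0.1, size=(8, 3))          # eight random neighbours
    fs = xs @ a
    print(gradient_from_points(x0, float(a @ x0), xs, fs))   # ~ [ 1. -2.  0.5]
    ```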

  8. 76 FR 38282 - Federal Employees Health Benefits Program: New Premium Rating Method for Most Community Rated Plans

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-29

    ...-AM39 Federal Employees Health Benefits Program: New Premium Rating Method for Most Community Rated... TCR will be required to continue using the SSSG methodology. Background There are two methods of... groups; standardized presentation of the carrier's rating method (age, sex, etc.) showing that the factor...

  9. Moxibustion and other acupuncture point stimulation methods to treat breech presentation: a systematic review of clinical trials

    PubMed Central

    Li, Xun; Hu, Jun; Wang, Xiaoyi; Zhang, Huirui; Liu, Jianping

    2009-01-01

    Background Moxibustion, acupuncture and other acupoint stimulations are commonly used for the correction of breech presentation. This systematic review aims to evaluate the efficacy and safety of moxibustion and other acupoint stimulations to treat breech presentation. Methods We included randomized controlled trials (RCTs) and controlled clinical trials (CCTs) on moxibustion, acupuncture or any other acupoint stimulating methods for breech presentation in pregnant women. All searches in PubMed, the Cochrane Library (2008 Issue 2), China National Knowledge Information (CNKI), Chinese Scientific Journal Database (VIP) and WanFang Database ended in July 2008. Two authors extracted and analyzed the data independently. Results Ten RCTs involving 2090 participants and seven CCTs involving 1409 participants were included in the present study. Meta-analysis showed significant differences between moxibustion and no treatment (RR 1.35, 95% CI 1.20 to 1.51; 3 RCTs). Comparison between moxibustion and knee-chest position did not show significant differences (RR 1.30, 95% CI 0.95 to 1.79; 3 RCTs). Moxibustion plus other therapeutic methods showed significant beneficial effects (RR 1.36, 95% CI 1.21 to 1.54; 2 RCTs). Laser stimulation was more effective than assuming the knee-chest position plus pelvis rotating. Moxibustion was more effective than no treatment (RR 1.29, 95% CI 1.17 to 1.42; 2 CCTs) but was not more effective than the knee-chest position treatment (RR 1.22, 95% CI 1.11 to 1.34; 2 CCTs). Laser stimulation at Zhiyin (BL67) was more effective than the knee-chest position treatment (RR 1.30, 95% CI 1.10 to 1.54; 2 CCTs). Conclusion Moxibustion, acupuncture and laser acupoint stimulation tend to be effective in the correction of breech presentation. PMID:19245719

  10. General framework for dynamic large deformation contact problems based on phantom-node X-FEM

    NASA Astrophysics Data System (ADS)

    Broumand, P.; Khoei, A. R.

    2018-04-01

    This paper presents a general framework for modeling dynamic large deformation contact-impact problems based on the phantom-node extended finite element method. The large sliding penalty contact formulation is presented based on a master-slave approach which is implemented within the phantom-node X-FEM and an explicit central difference scheme is used to model the inertial effects. The method is compared with conventional contact X-FEM; advantages, limitations and implementational aspects are also addressed. Several numerical examples are presented to show the robustness and accuracy of the proposed method.

  11. A Bivariate Chebyshev Spectral Collocation Quasilinearization Method for Nonlinear Evolution Parabolic Equations

    PubMed Central

    Motsa, S. S.; Magagula, V. M.; Sibanda, P.

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252
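
    For background, a standard Chebyshev differentiation matrix (Trefethen's construction), the basic building block of a Chebyshev spectral collocation method; the quasilinearization and bivariate interpolation of the paper are not reproduced here, and the test function is arbitrary.

    ```python
    import numpy as np

    def cheb(N):
        """Chebyshev differentiation matrix D and Gauss-Lobatto points x on [-1, 1]."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))
        return D, x

    # Differentiate u(x) = sin(pi x) and compare with the exact derivative.
    D, x = cheb(16)
    err = np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)))
    print(f"max error with 17 collocation points: {err:.2e}")
    ```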

  12. A bivariate Chebyshev spectral collocation quasilinearization method for nonlinear evolution parabolic equations.

    PubMed

    Motsa, S S; Magagula, V M; Sibanda, P

    2014-01-01

    This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature.

  13. On a partial differential equation method for determining the free energies and coexisting phase compositions of ternary mixtures from light scattering data.

    PubMed

    Ross, David S; Thurston, George M; Lutzer, Carl V

    2008-08-14

    In this paper we present a method for determining the free energies of ternary mixtures from light scattering data. We use an approximation that is appropriate for liquid mixtures, which we formulate as a second-order nonlinear partial differential equation. This partial differential equation (PDE) relates the Hessian of the intensive free energy to the efficiency of light scattering in the forward direction. This basic equation applies in regions of the phase diagram in which the mixtures are thermodynamically stable. In regions in which the mixtures are unstable or metastable, the appropriate PDE is the nonlinear equation for the convex hull. We formulate this equation along with continuity conditions for the transition between the two equations at cloud point loci. We show how to discretize this problem to obtain a finite-difference approximation to it, and we present an iterative method for solving the discretized problem. We present the results of calculations that were done with a computer program that implements our method. These calculations show that our method is capable of reconstructing test free energy functions from simulated light scattering data. If the cloud point loci are known, the method also finds the tie lines and tie triangles that describe thermodynamic equilibrium between two or among three liquid phases. A robust method for solving this PDE problem, such as the one presented here, can be a basis for optical, noninvasive means of characterizing the thermodynamics of multicomponent mixtures.

  14. How To Solve Problems. For Success in Freshman Physics, Engineering, and Beyond. Third Edition.

    ERIC Educational Resources Information Center

    Scarl, Donald

    To expertly solve engineering and science problems one needs to know science and engineering as well as have a tool kit of problem-solving methods. This book is about problem-solving methods: it presents the methods professional problem solvers use, explains why these methods have evolved, and shows how a student can make these methods his/her…

  15. Clinical applications of angiocardiography

    NASA Technical Reports Server (NTRS)

    Dodge, H. T.; Sandler, H.

    1974-01-01

    Several tables are presented giving left ventricular (LV) data for normal patients and patients with heart disease of varied etiologies, pointing out the salient features. Graphs showing LV pressure-volume relationships (compliance) are presented and discussed. The method developed by Rackley et al. (1964) for determining left ventricular mass in man is described, and limitations to the method are discussed. Some clinical methods for determining LV oxygen consumption are briefly described, and the relation of various abnormalities of ventricular performance to coronary artery disease and ischemic heart disease is characterized.

  16. Comparative Evaluation of Dimensional Accuracy of Elastomeric Impression Materials when Treated with Autoclave, Microwave, and Chemical Disinfection

    PubMed Central

    Kamble, Suresh S; Khandeparker, Rakshit Vijay; Somasundaram, P; Raghav, Shweta; Babaji, Rashmi P; Varghese, T Joju

    2015-01-01

    Background: Impression materials often get contaminated with various infectious agents during the impression procedure. Hence, disinfection of impression materials with various disinfectants is advised to protect the dental team. Disinfection can alter the dimensional accuracy of impression materials. The present study aimed to evaluate the dimensional accuracy of elastomeric impression materials when treated with different disinfection methods: autoclave, chemical, and microwave. Materials and Methods: The impression materials used for the study were dentsply aquasil (addition silicone polyvinylsiloxane syringe and putty), zetaplus (condensation silicone putty and light body), and impregum penta soft (polyether). All impressions were made according to the manufacturer's instructions. Dimensional changes were measured before and after the different disinfection procedures. Result: Dentsply aquasil showed the smallest dimensional change (−0.0046%) and impregum penta soft the highest linear dimensional change (−0.026%). All the tested elastomeric impression materials showed some degree of dimensional change. Conclusion: The present study showed that all the disinfection procedures produce minor dimensional changes in the impression materials; however, these changes were within the American Dental Association specification. Hence, steam autoclaving and the microwave method can be used as effective alternatives to chemical sterilization. PMID:26435611

  17. Comparative Evaluation of Dimensional Accuracy of Elastomeric Impression Materials when Treated with Autoclave, Microwave, and Chemical Disinfection.

    PubMed

    Kamble, Suresh S; Khandeparker, Rakshit Vijay; Somasundaram, P; Raghav, Shweta; Babaji, Rashmi P; Varghese, T Joju

    2015-09-01

    Impression materials are often contaminated with infectious agents during the impression procedure. Hence, disinfection of impression materials is advised to protect the dental team, but disinfection can alter their dimensional accuracy. The present study aimed to evaluate the dimensional accuracy of elastomeric impression materials treated with three disinfection methods: autoclave, chemical, and microwave. The impression materials used for the study were Dentsply Aquasil (addition silicone polyvinylsiloxane, syringe and putty), Zetaplus (condensation silicone, putty and light body), and Impregum Penta Soft (polyether). All impressions were made according to the manufacturers' instructions, and dimensional changes were measured before and after the different disinfection procedures. Dentsply Aquasil showed the smallest dimensional change (-0.0046%) and Impregum Penta Soft the largest (-0.026%); all the tested elastomeric impression materials showed some degree of dimensional change. All the disinfection procedures produced minor dimensional changes in the impression materials, but the changes remained within the American Dental Association specification. Hence, steam autoclaving and the microwave method can be used as effective alternatives to chemical sterilization.

  18. Decoding spike timing: the differential reverse correlation method

    PubMed Central

    Tkačik, Gašper; Magnasco, Marcelo O.

    2009-01-01

    It is widely acknowledged that the detailed timing of action potentials is used to encode information, for example in auditory pathways; however, the computational tools required to analyze encoding through timing are still in their infancy. We present a simple example of encoding, based on a recent model of time-frequency analysis, in which units fire action potentials when a certain condition is met but the timing of the action potential also depends on other features of the stimulus. We show that, as a result, spike-triggered averages are smoothed so much that they do not represent the true features of the encoding. Inspired by this example, we present a simple method, differential reverse correlation, that can separate the analysis of what causes a neuron to spike from what controls its timing. We analyze the leaky integrate-and-fire neuron with this method and show that it accurately reconstructs the model's kernel. PMID:18597928
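
    For orientation, the sketch below computes an ordinary spike-triggered average (the baseline that the abstract argues can be misleadingly smoothed) from a toy stimulus and threshold-crossing spike times. The stimulus model, filter, and threshold are invented for illustration; the differential reverse correlation method itself, which separates spike causes from spike timing, is not reproduced here.

```python
import numpy as np

def spike_triggered_average(stimulus, spike_indices, window):
    """Average the stimulus over `window` samples preceding each spike."""
    snippets = [stimulus[i - window:i]
                for i in spike_indices if i >= window]
    return np.mean(snippets, axis=0)

# Toy example: white-noise stimulus, spikes emitted whenever a filtered
# version of the stimulus crosses a threshold (purely illustrative).
rng = np.random.default_rng(0)
stim = rng.standard_normal(100_000)
kernel = np.exp(-np.arange(30) / 10.0)          # hypothetical causal filter
drive = np.convolve(stim, kernel, mode="full")[:stim.size]
spikes = np.where((drive[1:] > 3.0) & (drive[:-1] <= 3.0))[0] + 1

sta = spike_triggered_average(stim, spikes, window=60)
print("spikes used:", len(spikes), "| STA peak at sample:", int(np.argmax(sta)))
```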

  19. Experimental and Monte Carlo evaluation of Eclipse treatment planning system for effects on dose distribution of the hip prostheses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Çatlı, Serap, E-mail: serapcatli@hotmail.com; Tanır, Güneş

    2013-10-01

    The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if a high-Z material is used in a prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased, backscattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method. The findings show that for accurate dose calculation a Monte Carlo-based TPS should be used in patients with hip prostheses.

  20. Crossing statistic: reconstructing the expansion history of the universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman, E-mail: arman@ewha.ac.kr

    2012-08-01

    We show that by combining the Crossing Statistic [1,2] and the Smoothing method [3-5] one can reconstruct the expansion history of the universe with very high precision without assuming any prior on cosmological quantities such as the equation of state of dark energy. The presented method performs very well in reconstructing the expansion history of the universe independent of the underlying model, and it works well even for non-trivial dark energy models with fast or slow changes in the equation of state of dark energy. The accuracy of the reconstructed quantities, along with the independence of the method from any prior or assumption, gives the proposed method advantages over other non-parametric methods proposed previously in the literature. Applying the method to the Union 2.1 supernovae combined with WiggleZ BAO data, we present the reconstructed results and test the consistency of the two data sets in a model-independent manner. The results show that the latest available supernovae and BAO data are in good agreement with each other and that the spatially flat ΛCDM model is in concordance with the current data.

  1. Teaching the Indirect Method of the Statement of Cash Flows in Introductory Financial Accounting: A Comprehensive, Problem-Based Approach

    ERIC Educational Resources Information Center

    Brickner, Daniel R.; McCombs, Gary B.

    2004-01-01

    In this article, the authors provide an instructional resource for presenting the indirect method of the statement of cash flows (SCF) in an introductory financial accounting course. The authors focus primarily on presenting a comprehensive example that illustrates the "why" of SCF preparation and show how journal entries and T-accounts can be…

  2. Turboexpander calculations using a generalized equation of state correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, M.S.; Starling, K.E.

    1975-01-01

    A generalized method for predicting the thermodynamic properties of natural gas fluids has been developed and tested. The results of several comparisons between thermodynamic property values predicted by the method and experimental data are presented. Comparisons of predicted and experimental vapor-liquid equilibrium are presented. These comparisons indicate that the generalized correlation can be used to predict many thermodynamic properties of natural gas and LNG. Turboexpander calculations are presented to show the utility of the generalized correlation for process design calculations.

  3. Numerical analysis for the fractional diffusion and fractional Buckmaster equation by the two-step Laplace Adam-Bashforth method

    NASA Astrophysics Data System (ADS)

    Jain, Sonal

    2018-01-01

    In this paper, we aim to use the alternative numerical scheme given by Gnitchogna and Atangana for solving partial differential equations with integer and non-integer differential operators. We apply this method to the fractional diffusion model and the fractional Buckmaster model with non-local fading memory. The method yields a powerful, easy-to-implement numerical algorithm for fractional-order derivatives. We also present in detail the stability analysis of the numerical method for solving the diffusion equation. This analysis shows that the method is very stable and converges quickly to the exact solution; finally, some numerical simulations are presented.
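
    For reference, the classical two-step Adams-Bashforth formula underlying such schemes, u_{n+1} = u_n + Δt(3/2·F_n − 1/2·F_{n−1}), is sketched below for an ordinary (integer-order) 1D diffusion equation; the Laplace-transform treatment of the fractional operators used in the paper is not reproduced, and all grid parameters are illustrative assumptions.

```python
import numpy as np

# Two-step Adams-Bashforth (AB2) time stepping for u_t = D u_xx on [0, 1]
# with homogeneous Dirichlet boundaries. Integer-order analog only; the
# fractional/Laplace variant of the paper is not implemented here.
D, nx, nt = 0.1, 101, 5000
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D                      # well inside explicit stability

def rhs(u):
    f = np.zeros_like(u)
    f[1:-1] = D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return f

u_prev = np.sin(np.pi * x)               # initial condition
u_curr = u_prev + dt * rhs(u_prev)        # bootstrap first step with Euler
for _ in range(nt):
    f_curr, f_prev = rhs(u_curr), rhs(u_prev)
    u_next = u_curr + dt * (1.5 * f_curr - 0.5 * f_prev)
    u_prev, u_curr = u_curr, u_next

t_end = (nt + 1) * dt
exact = np.exp(-D * np.pi**2 * t_end) * np.sin(np.pi * x)
print("max error vs exact solution:", np.max(np.abs(u_curr - exact)))
```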

  4. Noninvasive vacuum integrity tests on fast warm-up traveling-wave tubes

    NASA Astrophysics Data System (ADS)

    Dallos, A.; Carignan, R. G.

    1989-04-01

    A method of tube vacuum monitoring that uses the tube's existing internal electrodes as an ion gage is discussed. This method has been refined using present-day instrumentation and has proved to be a precise, simple, and fast method of tube vacuum measurement. The method is noninvasive due to operation of the cathode at low temperature, which minimizes pumping or outgassing. Because of the low current levels to be measured, anode insulator leakage must be low, and the leads must be properly shielded to minimize charging effects. A description of the method, instrumentation used, limitations, and data showing results over a period of 600 days are presented.

  5. An improved self-adaptive ant colony algorithm based on genetic strategy for the traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Wang, Pan; Zhang, Yi; Yan, Dong

    2018-05-01

    The Ant Colony Algorithm (ACA) is a powerful and effective algorithm for solving combinatorial optimization problems, and it has been used successfully for the traveling salesman problem (TSP). However, it tends to converge prematurely to non-global optima and its computation time is long. To overcome those shortcomings, a new method is presented: an improved self-adaptive Ant Colony Algorithm based on a genetic strategy. The proposed method adopts an adaptive strategy to adjust the parameters dynamically, and new crossover and inversion operations from the genetic strategy are used. Experiments were performed using well-known instances from TSPLIB. The results show that the proposed method outperforms the basic Ant Colony Algorithm and some improved ACAs in both solution quality and convergence time. The numerical results also show that the proposed optimization method can achieve results close to the best known solutions.
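
    A minimal sketch of the baseline Ant Colony Algorithm for a random TSP instance is given below, to make the construction-evaporation-deposit loop concrete; the self-adaptive parameter control and the genetic crossover/inversion operators of the proposed method are not implemented, and all parameter values are assumptions.

```python
import numpy as np

# Minimal basic Ant Colony Algorithm on a random TSP instance (illustrative
# baseline only; the adaptive/genetic improvements are not implemented).
rng = np.random.default_rng(1)
n_cities, n_ants, n_iters = 20, 20, 100
alpha, beta, rho, Q = 1.0, 3.0, 0.5, 1.0        # assumed parameter values

pts = rng.random((n_cities, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)
eta = 1.0 / dist                                 # heuristic visibility
tau = np.ones((n_cities, n_cities))              # pheromone trails

best_len, best_tour = np.inf, None
for _ in range(n_iters):
    tours = []
    for _ in range(n_ants):
        tour = [int(rng.integers(n_cities))]
        unvisited = set(range(n_cities)) - {tour[0]}
        while unvisited:
            i = tour[-1]
            cand = np.array(sorted(unvisited))
            w = tau[i, cand] ** alpha * eta[i, cand] ** beta
            nxt = int(rng.choice(cand, p=w / w.sum()))
            tour.append(nxt)
            unvisited.remove(nxt)
        length = sum(dist[tour[k], tour[(k + 1) % n_cities]]
                     for k in range(n_cities))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= 1.0 - rho                             # evaporation
    for length, tour in tours:                   # pheromone deposit
        for k in range(n_cities):
            a, b = tour[k], tour[(k + 1) % n_cities]
            tau[a, b] += Q / length
            tau[b, a] += Q / length

print("best tour length found:", round(best_len, 3))
```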

  6. SVMs for Vibration-Based Terrain Classification

    NASA Astrophysics Data System (ADS)

    Weiss, Christian; Stark, Matthias; Zell, Andreas

    When an outdoor mobile robot traverses different types of ground surfaces, different types of vibrations are induced in the body of the robot. These vibrations can be used to learn a discrimination between different surfaces and to classify the current terrain. Recently, we presented a method that uses Support Vector Machines for classification, and we showed results on data collected with a hand-pulled cart. In this paper, we show that our approach also works well on an outdoor robot. Furthermore, we more closely investigate in which direction the vibration should be measured. Finally, we present a simple but effective method to improve the classification by combining measurements taken in multiple directions.
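
    A minimal sketch of the classification step is given below using scikit-learn: synthetic vibration segments for two hypothetical terrains are converted to spectral feature vectors and classified with an RBF-kernel SVM. The signal model and feature choice are assumptions for illustration, not the features used in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative stand-in for vibration-based terrain classification:
# synthetic acceleration segments for two "terrains" differing in their
# dominant vibration frequency; features are FFT magnitude spectra.
rng = np.random.default_rng(0)

def make_segments(freq, n, length=256, fs=100.0):
    t = np.arange(length) / fs
    return np.array([np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
                     + 0.5 * rng.standard_normal(length) for _ in range(n)])

segments = np.vstack([make_segments(5.0, 200), make_segments(12.0, 200)])
labels = np.array([0] * 200 + [1] * 200)
features = np.abs(np.fft.rfft(segments, axis=1))   # spectral features

X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                          test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```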

  7. Using an Outranking Method Supporting the Acquisition of Military Equipment

    DTIC Science & Technology

    2009-10-01

    selection methodology, taking several criteria into account. We show to what extent the class of PROMETHEE methods is presenting these features. We...functions, the indifference and preference thresholds and some other technical parameters. Then we discuss the capabilities of the PROMETHEE methods to...discuss the interpretation of the results given by these PROMETHEE methods. INTRODUCTION Outranking methods for multicriteria decision aid belong

  8. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-15

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.

  9. A stabilized Runge-Kutta-Legendre method for explicit super-time-stepping of parabolic and mixed equations

    NASA Astrophysics Data System (ADS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge-Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge-Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems - a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.
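
    The sketch below illustrates the restrictive explicit (forward-Euler) time-step limit for a parabolic term, Δt ≤ Δx²/(2D), which is the constraint that super-time-stepping schemes such as RKL1 and RKL2 are designed to relax: a superstep of s stages advances the solution by a time of order s² explicit steps. The Legendre recursion itself is not reproduced; the grid and diffusion coefficient are illustrative assumptions.

```python
import numpy as np

# Forward-Euler update of u_t = D u_xx, showing the parabolic stability
# limit dt <= dx**2 / (2 D) that motivates super-time-stepping.
D, nx = 1.0, 201
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt_limit = dx**2 / (2.0 * D)

def run(dt, nsteps=400):
    u = np.sin(np.pi * x)
    for _ in range(nsteps):
        u[1:-1] += dt * D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return u

stable = run(0.9 * dt_limit)
unstable = run(1.1 * dt_limit)
print("max |u| with dt below the limit:", np.max(np.abs(stable)))
print("max |u| with dt above the limit:", np.max(np.abs(unstable)))
```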

  10. Iterative solution of the inverse Cauchy problem for an elliptic equation by the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.

    2017-10-01

    This article presents the results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves an iterative determination of the missing boundary condition from the overdetermination condition using the conjugate gradient method. Calculations are presented for examples with exact solutions, as well as for cases in which the additional condition is specified with random errors. The results showed a high efficiency of the conjugate gradient iteration for the numerical solution of this inverse problem.
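
    The conjugate gradient iteration referred to above is, at its core, the standard scheme sketched below for a symmetric positive-definite system Ax = b; the paper applies it to the operator arising from the discretized inverse Cauchy problem, which is not reproduced here, and the test matrix is an illustrative assumption.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Standard CG iteration for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system (illustrative only).
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```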

  11. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with discontinuous refractive index is a kind of challenging inverse problem. We employ the primal dual theory and fast solution of integral equations, and propose a new iterative imaging method. The selection criteria of regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm has been presented with respect to the frequency from low to high. We also discuss the initial guess selection strategy by semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.

  12. A unified convergence theory of a numerical method, and applications to the replenishment policies.

    PubMed

    Mi, Xiang-jiang; Wang, Xing-hua

    2004-01-01

    In determining the replenishment policy for an inventory system, some researchers advocated that the iterative method of Newton could be applied to the derivative of the total cost function in order to get the optimal solution. But this approach requires calculation of the second derivative of the function. Avoiding this complex computation we use another iterative method presented by the second author. One of the goals of this paper is to present a unified convergence theory of this method. Then we give a numerical example to show the application of our theory.

  13. Fully automated motion correction in first-pass myocardial perfusion MR image sequences.

    PubMed

    Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2008-11-01

    This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with the average error between registered data and the manual gold standard reduced from 2.65+/-7.89% to 0.87+/-3.88%. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows accuracy, robustness, and computation speed adequate for use in a clinical environment.
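
    A minimal sketch of the ICA feature-extraction step is given below using scikit-learn's FastICA on synthetic pixel time courses built from two hypothetical temporal sources (a perfusion-like bolus and a baseline drift); the reference-image construction and two-pass registration of the paper are not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-in for first-pass perfusion time courses: every "pixel"
# is a mixture of two hypothetical temporal sources plus noise.
rng = np.random.default_rng(0)
n_frames, n_pixels = 100, 500
t = np.linspace(0.0, 1.0, n_frames)

uptake = np.exp(-((t - 0.3) / 0.1) ** 2)          # perfusion-like bolus
drift = 0.5 + 0.3 * t                             # slow baseline drift
sources = np.vstack([uptake, drift])              # (2, n_frames)
mixing = rng.random((n_pixels, 2))
data = mixing @ sources + 0.02 * rng.standard_normal((n_pixels, n_frames))

# ICA over the frame dimension: recover temporal components and pixel maps.
ica = FastICA(n_components=2, random_state=0)
time_courses = ica.fit_transform(data.T)          # (n_frames, 2)
pixel_maps = ica.mixing_                          # (n_pixels, 2)
print("recovered component shapes:", time_courses.shape, pixel_maps.shape)
```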

  14. Symmetric functions and wavefunctions of XXZ-type six-vertex models and elliptic Felderhof models by Izergin-Korepin analysis

    NASA Astrophysics Data System (ADS)

    Motegi, Kohei

    2018-05-01

    We present a method to analyze the wavefunctions of six-vertex models by extending the Izergin-Korepin analysis originally developed for domain wall boundary partition functions. First, we apply the method to the case of the basic wavefunctions of the XXZ-type six-vertex model. By giving the Izergin-Korepin characterization of the wavefunctions, we show that these wavefunctions can be expressed as multiparameter deformations of the quantum group deformed Grothendieck polynomials. As a second example, we show that the Izergin-Korepin analysis is effective for analysis of the wavefunctions for a triangular boundary and present the explicit forms of the symmetric functions representing these wavefunctions. As a third example, we apply the method to the elliptic Felderhof model which is a face-type version and an elliptic extension of the trigonometric Felderhof model. We show that the wavefunctions can be expressed as one-parameter deformations of an elliptic analog of the Vandermonde determinant and elliptic symmetric functions.

  15. Numerical investigation of velocity slip and temperature jump effects on unsteady flow over a stretching permeable surface

    NASA Astrophysics Data System (ADS)

    Hosseini, E.; Loghmani, G. B.; Heydari, M.; Rashidi, M. M.

    2017-02-01

    In this paper, the boundary layer flow and heat transfer of unsteady flow over a porous accelerating stretching surface in the presence of the velocity slip and temperature jump effects are investigated numerically. A new effective collocation method based on rational Bernstein functions is applied to solve the governing system of nonlinear ordinary differential equations. This method solves the problem on the semi-infinite domain without truncating or transforming it to a finite domain. In addition, the presented method reduces the solution of the problem to the solution of a system of algebraic equations. Graphical and tabular results are presented to investigate the influence of the unsteadiness parameter A , Prandtl number Pr, suction parameter fw, velocity slip parameter γ and thermal slip parameter φ on the velocity and temperature profiles of the fluid. The numerical experiments are reported to show the accuracy and efficiency of the novel proposed computational procedure. Comparisons of present results are made with those obtained by previous works and show excellent agreement.

  16. Communication: Overcoming the root search problem in complex quantum trajectory calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamstein, Noa; Tannor, David J.

    2014-01-28

    Three new developments are presented regarding the semiclassical coherent state propagator. First, we present a conceptually different derivation of Huber and Heller's method for identifying complex root trajectories and their equations of motion [D. Huber and E. J. Heller, J. Chem. Phys. 87, 5302 (1987)]. Our method proceeds directly from the time-dependent Schrödinger equation and therefore allows various generalizations of the formalism. Second, we obtain an analytic expression for the semiclassical coherent state propagator. We show that the prefactor can be expressed in a form that requires solving significantly fewer equations of motion than in alternative expressions. Third, the semiclassical coherent state propagator is used to formulate a final value representation of the time-dependent wavefunction that avoids the root search, eliminates problems with caustics and automatically includes interference. We present numerical results for the 1D Morse oscillator showing that the method may become an attractive alternative to existing semiclassical approaches.

  17. Investigation on a coupled CFD/DSMC method for continuum-rarefied flows

    NASA Astrophysics Data System (ADS)

    Tang, Zhenyu; He, Bijiao; Cai, Guobiao

    2012-11-01

    The purpose of the present work is to investigate the coupled CFD/DSMC method using the existing CFD and DSMC codes developed by the authors. The interface between the continuum and particle regions is determined by the gradient-length local Knudsen number. A coupling scheme combining both state-based and flux-based coupling methods is proposed in the current study. Overlapping grids are established between the different grid systems of CFD and DSMC codes. A hypersonic flow over a 2D cylinder has been simulated using the present coupled method. Comparison has been made between the results obtained from both methods, which shows that the coupled CFD/DSMC method can achieve the same precision as the pure DSMC method and obtain higher computational efficiency.

  18. A simple mass-conserved level set method for simulation of multiphase flows

    NASA Astrophysics Data System (ADS)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method is capable of accurately capturing the interface while keeping the mass conserved. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with a high density ratio, and the Rayleigh-Taylor instability at a high Reynolds number. Numerical results show that mass is well conserved by the present method.

  19. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    ERIC Educational Resources Information Center

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  20. Deuterium depth profile quantification in a ASDEX Upgrade divertor tile using secondary ion mass spectrometry

    NASA Astrophysics Data System (ADS)

    Ghezzi, F.; Caniello, R.; Giubertoni, D.; Bersani, M.; Hakola, A.; Mayer, M.; Rohde, V.; Anderle, M.; ASDEX Upgrade Team

    2014-10-01

    We present the results of a study in which secondary ion mass spectrometry (SIMS) has been used to obtain depth profiles of deuterium concentration on plasma-facing components of the first wall of the ASDEX Upgrade tokamak. The method uses primary and secondary standards to quantify the amount of deuterium retained. Samples of bulk graphite coated with tungsten or tantalum-doped tungsten were independently profiled with three different SIMS instruments, and their deuterium concentration profiles show good agreement. To assess the validity of the method, the integrated deuterium concentrations in the coatings given by one of the SIMS devices were compared with nuclear reaction analysis (NRA) data. Although in the case of tungsten the agreement between NRA and SIMS is satisfactory, for the tantalum-doped tungsten samples the discrepancy is significant because of the matrix effect induced by tantalum and the differently eroded surfaces (W + Ta always exposed to plasma, W largely shadowed). A further comparison, in which the SIMS deuterium concentration is obtained by calibrating the measurements against NRA values, is also presented. For the tungsten samples, where no Ta-induced matrix effects are present, the two methods are almost equivalent. The results show the potential of the method provided that the standards used for the calibration faithfully reproduce the matrix nature of the samples.

  1. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies but generally deteriorated in performance at higher frequencies, with worst case errors being many orders of magnitude times the correct values.
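
    The sketch below shows the structure such a scheme can exploit for the open-loop case: in normal-mode coordinates the frequency response is a sum of independent single-mode contributions, so the cost per frequency point grows linearly with the number of modes. The modal data are invented for illustration, and the closed-loop formulation of the paper is not reproduced.

```python
import numpy as np

# Open-loop frequency response of a flexible structure in normal-mode
# coordinates: H(w) = sum_k b_k * c_k / (wk**2 - w**2 + 2j*zeta_k*wk*w).
# Cost per frequency point is O(number of modes). All modal data invented.
rng = np.random.default_rng(0)
n_modes = 703
wk = np.sort(rng.uniform(1.0, 1e3, n_modes))      # modal frequencies (rad/s)
zeta = np.full(n_modes, 0.005)                    # modal damping ratios
b = rng.standard_normal(n_modes)                  # input participation
c = rng.standard_normal(n_modes)                  # output participation

def freq_response(w):
    denom = wk**2 - w**2 + 2j * zeta * wk * w
    return np.sum(b * c / denom)

freqs = np.linspace(0.1, 1.2e3, 2000)
H = np.array([freq_response(w) for w in freqs])
print("peak |H| over the test band:", np.max(np.abs(H)))
```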

  2. Telomerecat: A ploidy-agnostic method for estimating telomere length from whole genome sequencing data.

    PubMed

    Farmery, James H R; Smith, Mike L; Lynch, Andy G

    2018-01-22

    Telomere length is a risk factor in disease and the dynamics of telomere length are crucial to our understanding of cell replication and vitality. The proliferation of whole genome sequencing represents an unprecedented opportunity to glean new insights into telomere biology on a previously unimaginable scale. To this end, a number of approaches for estimating telomere length from whole-genome sequencing data have been proposed. Here we present Telomerecat, a novel approach to the estimation of telomere length. Previous methods have been dependent on the number of telomeres present in a cell being known, which may be problematic when analysing aneuploid cancer data and non-human samples. Telomerecat is designed to be agnostic to the number of telomeres present, making it suited for the purpose of estimating telomere length in cancer studies. Telomerecat also accounts for interstitial telomeric reads and presents a novel approach to dealing with sequencing errors. We show that Telomerecat performs well at telomere length estimation when compared to leading experimental and computational methods. Furthermore, we show that it detects expected patterns in longitudinal data, repeated measurements, and cross-species comparisons. We also apply the method to cancer cell data, uncovering an interesting relationship with the underlying telomerase genotype.

  3. CART V: recent advancements in computer-aided camouflage assessment

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Müller, Markus

    2011-05-01

    In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was built for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007-2010 [1], [2], [3], [4]). It comprises semi-automatic marking of target objects (ground truth generation), including their propagation over the image sequence, evaluation via user-defined feature extractors, and methods to assess an object's movement conspicuity. In this fifth part of an annual series at the SPIE conference in Orlando, this paper presents the enhancements of the past year and addresses the camouflage assessment of static and moving objects in multispectral image data that can show noise or image artefacts. The presented methods explore the relationship between image processing and camouflage assessment. A novel algorithm based on template matching is presented to assess the structural inconspicuity of an object objectively and quantitatively. The results can easily be combined with an MTI (moving target indication) based movement conspicuity assessment function in order to explore the influence of object movement on the camouflage effect in different environments. As the results show, the presented methods provide a significant benefit in the field of camouflage assessment.

  4. Using Toulmin Analysis to Analyse an Instructor's Proof Presentation in Abstract Algebra

    ERIC Educational Resources Information Center

    Fukawa-Connelly, Timothy

    2014-01-01

    This paper provides a method for analysing undergraduate teaching of proof-based courses using Toulmin's model (1969) of argumentation. It presents a case study of one instructor's presentation of proofs. The analysis shows that the instructor presents different levels of detail in different proofs; thus, the students have an inconsistent set of…

  5. Identifying online user reputation of user-object bipartite networks

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Lu; Liu, Jian-Guo; Yang, Kai; Guo, Qiang; Han, Jing-Ti

    2017-02-01

    Identifying online user reputation based on the rating information of the user-object bipartite networks is important for understanding online user collective behaviors. Based on the Bayesian analysis, we present a parameter-free algorithm for ranking online user reputation, where the user reputation is calculated based on the probability that their ratings are consistent with the main part of all user opinions. The experimental results show that the AUC values of the presented algorithm could reach 0.8929 and 0.8483 for the MovieLens and Netflix data sets, respectively, which is better than the results generated by the CR and IARR methods. Furthermore, the experimental results for different user groups indicate that the presented algorithm outperforms the iterative ranking methods in both ranking accuracy and computation complexity. Moreover, the results for the synthetic networks show that the computation complexity of the presented algorithm is a linear function of the network size, which suggests that the presented algorithm is very effective and efficient for the large scale dynamic online systems.
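
    As a simple illustration of rating-consistency-based reputation (not the Bayesian, parameter-free algorithm of the paper), the sketch below scores each user by the fraction of their ratings that fall within one point of the per-object median on a synthetic user-object rating matrix; all sizes and noise levels are assumptions.

```python
import numpy as np

# Illustrative user-reputation score on a user-object rating matrix:
# a user's reputation is the fraction of their ratings within one point of
# the object's median rating. A simple consistency proxy, not the paper's
# Bayesian parameter-free algorithm.
rng = np.random.default_rng(0)
n_users, n_objects = 200, 50
true_quality = rng.integers(1, 6, n_objects)

ratings = np.clip(true_quality + rng.integers(-1, 2, (n_users, n_objects)),
                  1, 5).astype(float)
spammers = rng.choice(n_users, 20, replace=False)
ratings[spammers] = rng.integers(1, 6, (20, n_objects))   # random raters

median = np.median(ratings, axis=0)
reputation = np.mean(np.abs(ratings - median) <= 1.0, axis=1)
normal = np.setdiff1d(np.arange(n_users), spammers)
print("mean reputation, consistent users:", reputation[normal].mean())
print("mean reputation, random raters:  ", reputation[spammers].mean())
```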

  6. Multi-instrumental characterization of carbon nanotubes dispersed in aqueous solutions

    EPA Science Inventory

    Previous studies showed that the dispersion extent and physicochemical properties of carbon nanotubes are highly dependent upon the preparation methods (e.g., dispersion methods and dispersants). In the present work, multiwalled carbon nanotubes (MWNTs) are dispersed in aqueous s...

  7. A new sampling method for fibre length measurement

    NASA Astrophysics Data System (ADS)

    Wu, Hongyan; Li, Xianghong; Zhang, Junying

    2018-06-01

    This paper presents a new sampling method for fibre length measurement. This new method can meet the three features of an effective sampling method, also it can produce the beard with two symmetrical ends which can be scanned from the holding line to get two full fibrograms for each sample. The methodology was introduced and experiments were performed to investigate effectiveness of the new method. The results show that the new sampling method is an effective sampling method.

  8. Multi-parametric centrality method for graph network models

    NASA Astrophysics Data System (ADS)

    Ivanov, Sergei Evgenievich; Gorlushkina, Natalia Nikolaevna; Ivanova, Lubov Nikolaevna

    2018-04-01

    Graph network models are investigated to determine the centrality, weights, and significance of vertices. Centrality analysis typically applies a method based on a single property of the graph vertices. In graph theory, centrality is commonly analyzed in terms of degree, closeness, betweenness, radiality, eccentricity, PageRank, status, Katz centrality, or the eigenvector. We propose a new method of multi-parametric centrality, which includes a number of basic properties of a network member simultaneously. A mathematical model of the multi-parametric centrality method is developed, and the results of the presented method are compared with those of the individual centrality methods. To evaluate the multi-parametric centrality method, a graph model with hundreds of vertices is analyzed. The comparative analysis showed the accuracy of the presented method, which simultaneously accounts for a number of basic properties of the vertices.
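
    A minimal sketch of a multi-parametric score is given below using networkx: several classical centralities are computed, min-max normalized, and averaged into one value per vertex. The equal-weight averaging is an assumption made for illustration and is not necessarily the combination rule used in the paper.

```python
import networkx as nx
import numpy as np

# Combine several classical centrality measures into a single normalized
# score per vertex (illustrative equal-weight combination).
G = nx.erdos_renyi_graph(100, 0.05, seed=1)

measures = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

def normalize(d):
    vals = np.array(list(d.values()))
    span = vals.max() - vals.min()
    return {k: (v - vals.min()) / span if span else 0.0 for k, v in d.items()}

norm = {name: normalize(d) for name, d in measures.items()}
combined = {v: np.mean([norm[name][v] for name in norm]) for v in G.nodes()}
top = sorted(combined, key=combined.get, reverse=True)[:5]
print("top-5 vertices by combined centrality:", top)
```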

  9. Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Shu, C.; Tan, D.

    2018-05-01

    An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires lower virtual memories, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.

  10. Newton's method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, J. J.; Sorensen, D. C.

    1982-02-01

    Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
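
    A minimal sketch of the line search approach is given below: a damped Newton iteration with Armijo backtracking applied to the Rosenbrock test function. The tolerances and the test problem are illustrative assumptions; the trust region approach discussed in the paper is not shown.

```python
import numpy as np

def newton_minimize(f, grad, hess, x0, tol=1e-10, max_iter=100):
    """Damped Newton iteration with Armijo backtracking (line search
    globalization; the trust region variant is not shown here)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)          # Newton direction
        t = 1.0
        for _ in range(60):                        # backtracking steps
            if f(x + t * d) <= f(x) + 1e-4 * t * (g @ d):
                break
            t *= 0.5
        x = x + t * d
    return x

# Standard unconstrained test problem: the Rosenbrock function.
def f(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([-2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
                     200.0 * (x[1] - x[0]**2)])

def hess(x):
    return np.array([[2.0 - 400.0 * (x[1] - 3.0 * x[0]**2), -400.0 * x[0]],
                     [-400.0 * x[0], 200.0]])

print("computed minimizer:", newton_minimize(f, grad, hess, x0=[-1.2, 1.0]))
```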

  11. Prediction of noise field of a propfan at angle of attack

    NASA Technical Reports Server (NTRS)

    Envia, Edmane

    1991-01-01

    A method for predicting the noise field of a propfan operating at an angle of attack to the oncoming flow is presented. The method takes advantage of the high-blade-count of the advanced propeller designs to provide an accurate and efficient formula for predicting their noise field. The formula, which is written in terms of the Airy function and its derivative, provides a very attractive alternative to the use of numerical integration. A preliminary comparison shows rather favorable agreement between the predictions from the present method and the experimental data.

  12. Theory and Computation of Optimal Low- and Medium- Thrust Orbit Transfers

    NASA Technical Reports Server (NTRS)

    Goodson, Troy D.; Chuang, Jason C. H.; Ledsinger, Laura A.

    1996-01-01

    This report presents new theoretical results which lead to new algorithms for the computation of fuel-optimal multiple-burn orbit transfers of low and medium thrust. Theoretical results introduced herein show how to add burns to an optimal trajectory and show that the traditional set of necessary conditions may be replaced with a much simpler set of equations. Numerical results are presented to demonstrate the utility of the theoretical results and the new algorithms. Two indirect methods from the literature are shown to be effective for the optimal orbit transfer problem with relatively small numbers of burns. These methods are the Minimizing Boundary Condition Method (MBCM) and BOUNDSCO. Both of these methods make use of the first-order necessary conditions exactly as derived by optimal control theory. Perturbations due to Earth's oblateness and atmospheric drag are considered. These perturbations are of greatest interest for transfers that take place between low Earth orbit altitudes and geosynchronous orbit altitudes. Example extremal solutions including these effects and computed by the aforementioned methods are presented. An investigation is also made into a suboptimal multiple-burn guidance scheme. The FORTRAN code developed for this study has been collected together in a package named ORBPACK. ORBPACK's user manual is provided as an appendix to this report.

  13. On a new iterative method for solving linear systems and comparison results

    NASA Astrophysics Data System (ADS)

    Jing, Yan-Fei; Huang, Ting-Zhu

    2008-10-01

    In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered a modification of the Gauss-Seidel method. In this paper, we show that this is a special case from the point of view of projection techniques, and we establish a different approach that is both theoretically and numerically proven to be better than (or at least as good as) Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
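
    For reference, the classical Gauss-Seidel iteration that serves as the point of comparison above is sketched below for a strictly diagonally dominant test system; Ujevic's modification and the projection-based variant proposed in the paper are not reproduced.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-12, max_iter=10_000):
    """Classical Gauss-Seidel iteration for A x = b."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Strictly diagonally dominant test system (guarantees convergence).
rng = np.random.default_rng(0)
A = rng.random((20, 20)) + 20 * np.eye(20)
b = rng.random(20)
x = gauss_seidel(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```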

  14. A concept for holistic whole body MRI data analysis, Imiomics

    PubMed Central

    Malmberg, Filip; Johansson, Lars; Lind, Lars; Sundbom, Magnus; Ahlström, Håkan; Kullberg, Joel

    2017-01-01

    Purpose To present and evaluate a whole-body image analysis concept, Imiomics (imaging–omics) and an image registration method that enables Imiomics analyses by deforming all image data to a common coordinate system, so that the information in each voxel can be compared between persons or within a person over time and integrated with non-imaging data. Methods The presented image registration method utilizes relative elasticity constraints of different tissue obtained from whole-body water-fat MRI. The registration method is evaluated by inverse consistency and Dice coefficients and the Imiomics concept is evaluated by example analyses of importance for metabolic research using non-imaging parameters where we know what to expect. The example analyses include whole body imaging atlas creation, anomaly detection, and cross-sectional and longitudinal analysis. Results The image registration method evaluation on 128 subjects shows low inverse consistency errors and high Dice coefficients. Also, the statistical atlas with fat content intensity values shows low standard deviation values, indicating successful deformations to the common coordinate system. The example analyses show expected associations and correlations which agree with explicit measurements, and thereby illustrate the usefulness of the proposed Imiomics concept. Conclusions The registration method is well-suited for Imiomics analyses, which enable analyses of relationships to non-imaging data, e.g. clinical data, in new types of holistic targeted and untargeted big-data analysis. PMID:28241015

  15. Phased Array Ultrasound: Initial Development of PAUT Inspection of Self-Reacting Friction Stir Welds

    NASA Technical Reports Server (NTRS)

    Rairigh, Ryan

    2008-01-01

    This slide presentation reviews the development of Phased Array Ultrasound (PAUT) as a non-destructive examination method for Self Reacting Friction Stir Welds (SR-FSW). PAUT is the only NDE method which has been shown to detect detrimental levels of Residual Oxide Defect (ROD), which can result in significant decrease in weld strength. The presentation reviews the PAUT process, and shows the results in comparison with x-ray radiography.

  16. New Learning Method of a Lecture of ‘Machine Fabrication’ by Self-study with Investigation and Presentation Incorporated

    NASA Astrophysics Data System (ADS)

    Kasuga, Yukio

    A new teaching method was developed for learning ‘machine fabrication’ at the undergraduate level. It consists of a few lectures, grouping of the students, selection of the industrial products that each group wants to investigate, investigation using library books and the internet, arrangement of the data covering the characteristics of the products, the materials employed and the processing methods, a presentation, discussions, and a revision followed by another presentation. This new method is derived from one of the approaches used in Finland's primary school education, which is believed to have boosted the country to the top ranking in the OECD PISA tests. After starting the new way of learning, students had fresh impressions of this lesson, especially regarding the self-study, the way of investigation, the collaborative work and the presentations. After four years of implementation, some improvements have also been made, including less use of the internet and deciding in advance which products and fabrication methods should be investigated. With these changes, the students' lecture assessments show further encouraging results.

  17. Educating for Critical Thinking: Thought-Encouraging Questions in a Community of Inquiry

    ERIC Educational Resources Information Center

    Golding, Clinton

    2011-01-01

    This paper presents one method for educating for critical thinking in Higher Education. It elaborates Richard Paul's method of Socratic questioning to show how students can learn to be critical thinkers. This method combines and uses the wider pedagogical and critical thinking literature in a new way: it emphasises a thinking-encouraging approach…

  18. A robust, efficient equidistribution 2D grid generation method

    NASA Astrophysics Data System (ADS)

    Chacon, Luis; Delzanno, Gian Luca; Finn, John; Chung, Jeojin; Lapenta, Giovanni

    2007-11-01

    We present a new cell-area equidistribution method for two-dimensional grid adaptation [1]. The method is able to satisfy the equidistribution constraint to arbitrary precision while optimizing desired grid properties (such as isotropy and smoothness). The method is based on the minimization of the grid smoothness integral, constrained to produce a given positive-definite cell volume distribution. The procedure gives rise to a single, non-linear scalar equation with no free parameters. We solve this equation numerically with the Newton-Krylov technique. The ellipticity property of the linearized scalar equation allows multigrid preconditioning techniques to be used effectively. We demonstrate that a solution exists and is unique. Therefore, once the solution is found, the adapted grid cannot be folded, due to the positivity of the constraint on the cell volumes. We present several challenging tests to show that our new method produces optimal grids in which the constraint is satisfied numerically to arbitrary precision. We also compare the new method to the deformation method [2] and show that our new method produces better quality grids. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, in preparation. [2] G. Liao and D. Anderson, A new approach to grid generation, Appl. Anal. 44, 285-297 (1992).

  19. Dyeing Insects for Behavioral Assays: the Mating Behavior of Anesthetized Drosophila

    PubMed Central

    Verspoor, Rudi L.; Heys, Chloe; Price, Thomas A. R.

    2015-01-01

    Mating experiments using Drosophila have contributed greatly to the understanding of sexual selection and behavior. Experiments often require simple, easy and cheap methods to distinguish between individuals in a trial. A standard technique for this is CO2 anaesthesia and then labelling or wing clipping each fly. However, this is invasive and has been shown to affect behavior. Other techniques have used coloration to identify flies. This article presents a simple and non-invasive method for labelling Drosophila that allows them to be individually identified within experiments, using food coloring. This method is used in trials where two males compete to mate with a female. Dyeing allowed quick and easy identification. There was, however, some difference in the strength of the coloration across the three species tested. Data is presented showing the dye has a lower impact on mating behavior than CO2 in Drosophila melanogaster. The impact of CO2 anaesthesia is shown to depend on the species of Drosophila, with D. pseudoobscura and D. subobscura showing no impact, whereas D. melanogaster males had reduced mating success. The dye method presented is applicable to a wide range of experimental designs. PMID:25938821

  20. Automated cloud screening of AVHRR imagery using split-and-merge clustering

    NASA Technical Reports Server (NTRS)

    Gallaudet, Timothy C.; Simpson, James J.

    1991-01-01

    Previous methods to segment clouds from ocean in AVHRR imagery have shown varying degrees of success, with nighttime approaches being the most limited. An improved method of automatic image segmentation, the principal component transformation split-and-merge clustering (PCTSMC) algorithm, is presented and applied to cloud screening of both nighttime and daytime AVHRR data. The method combines spectral differencing, the principal component transformation, and split-and-merge clustering to sample objectively the natural classes in the data. This segmentation method is then augmented by supervised classification techniques to screen clouds from the imagery. Comparisons with other nighttime methods demonstrate its improved capability in this application. The sensitivity of the method to clustering parameters is presented; the results show that the method is insensitive to the split-and-merge thresholds.

  1. Comparison Of Methods Used In Cartography For The Skeletonisation Of Areal Objects

    NASA Astrophysics Data System (ADS)

    Szombara, Stanisław

    2015-12-01

    The article presents a method for comparing skeletonisation methods for areal objects. The skeleton of an areal object, being its linear representation, is used, among other things, in cartographic visualisation. The method allows any skeletonisation methods to be compared in terms of, on the one hand, the deviations of the distance differences between the object's skeleton and its border and, on the other hand, the distortions introduced by the skeletonisation. In the article, five methods were compared: Voronoi diagrams, densified Voronoi diagrams, constrained Delaunay triangulation, the Straight Skeleton, and the Medial Axis (Transform). The results of the comparison are presented for several example areal objects. The comparison showed that for all the analysed objects the Medial Axis (Transform) gives the smallest distortion and deviation values, which allows us to recommend it.
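
    As a concrete counterpart to such a comparison, the sketch below extracts the medial axis of a simple rasterized areal object with scikit-image and reports the distance from skeleton pixels to the object border; the object, raster resolution, and metric are illustrative assumptions, and the other skeletonisation variants compared in the article are not reproduced.

```python
import numpy as np
from skimage.draw import ellipse
from skimage.morphology import medial_axis

# Rasterize a simple areal object (an ellipse) and extract its medial axis.
mask = np.zeros((200, 300), dtype=bool)
rr, cc = ellipse(100, 150, 60, 120)
mask[rr, cc] = True

skeleton, distance = medial_axis(mask, return_distance=True)

# Distance from each skeleton pixel to the object border (in pixels):
d_on_skel = distance[skeleton]
print("skeleton pixels:", int(skeleton.sum()))
print("mean / max distance to border:", d_on_skel.mean(), d_on_skel.max())
```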

  2. New methods for indexing multi-lattice diffraction data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gildea, Richard J.; Waterman, David G.; CCP4, Research Complex at Harwell, Rutherford Appleton Laboratory, Didcot OX11 0FA

    2014-10-01

    A new indexing method is presented which is capable of indexing multiple crystal lattices from narrow wedges of diffraction data. The method takes advantage of a simplification of Fourier transform-based methods that is applicable when the unit-cell dimensions are known a priori. The efficacy of this method is demonstrated with both semi-synthetic multi-lattice data and real multi-lattice data recorded from crystals of ∼1 µm in size, where it is shown that up to six lattices can be successfully indexed and subsequently integrated from a 1° wedge of data. Analysis is presented which shows that improvements in data-quality indicators can be obtained through accurate identification and rejection of overlapping reflections prior to scaling.

  3. Alternative Asbestos Control Method and the Asbestos Releasability Research

    EPA Science Inventory

    Alternative Asbestos Control Method shows promise in speed, cost, and efficiency if equally protective. ORD conducted side by side test of AACM vs NESHAP on identical asbestos-containing buildings at Fort Chaffee. This abstract and presentation are based, at least in part, on pr...

  4. Fully automated registration of first-pass myocardial perfusion MRI using independent component analysis.

    PubMed

    Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F

    2007-01-01

    This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction using Independent Component Analysis to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA, and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration with an average error reduced from 2.65+/-7.89% to 0.87+/-3.88% between registered data and manual gold standard. We conclude that this fully automatic ICA-based method shows an excellent accuracy, robustness and computation speed, adequate for use in a clinical environment.

  5. Ohmic Heating: An Emerging Concept in Organic Synthesis.

    PubMed

    Silva, Vera L M; Santos, Luis M N B F; Silva, Artur M S

    2017-06-12

    Ohmic heating, also known as direct Joule heating, is an advanced thermal processing method, mainly used in the food industry to rapidly increase the temperature for either cooking or sterilization purposes. Its use in organic synthesis, for the heating of chemical reactors, is an emerging method that shows great potential and whose development has started only recently. This Concept article focuses on the use of ohmic heating as a new tool for organic synthesis. It presents the fundamentals of ohmic heating and makes a qualitative and quantitative comparison with other common heating methods. A brief description of the ohmic reactor prototype in operation is presented, as well as recent examples of its use in organic synthesis at the laboratory scale, thus showing the current state of the research. The advantages and limitations of this heating method, as well as its main current applications, are also discussed. Finally, the prospects and potential implications of ohmic heating for future research in chemical synthesis are proposed. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Highly efficient phosphorescent, TADF, and fluorescent OLEDs (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Jang-Joo; Kim, Kwon-Hyeon; Moon, Chang-Ki; Shin, Hyun

    2016-09-01

    High-efficiency OLEDs based on phosphorescent, thermally activated delayed fluorescent (TADF) and fluorescent emitters will be presented. We will show that EQEs over 60% are achievable if OLEDs are fabricated using organic semiconductors with refractive indices of 1.5 and fully horizontal emitting dipoles, without any extra light-extraction structure. We will also show that the reverse intersystem crossing (RISC) rate plays an important role in reducing the efficiency roll-off of efficient TADF and fluorescent OLEDs, and a couple of methods to increase the RISC rate in the devices will be presented.

  7. Comparison of concepts in easy-to-use methods for MSD risk assessment.

    PubMed

    Roman-Liu, Danuta

    2014-05-01

    This article presents a comparative analysis of easy-to-use methods for assessing musculoskeletal load and the risk for developing musculoskeletal disorders. In all such methods, assessment of load consists in defining input data, the procedure and the system of assessment. This article shows what assessment steps the methods have in common; it also shows how those methods differ in each step. In addition, the methods are grouped according to their characteristic features. The conclusion is that the concepts of assessing risk in different methods can be used to develop solutions leading to a comprehensive method appropriate for all work tasks and all parts of the body. However, studies are necessary to verify the accepted premises and to introduce some standardization that would make consolidation possible. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  8. Numerical evaluation of discontinuous and nonconforming finite element methods in nonlinear solid mechanics

    NASA Astrophysics Data System (ADS)

    Bayat, Hamid Reza; Krämer, Julian; Wunderlich, Linus; Wulfinghoff, Stephan; Reese, Stefanie; Wohlmuth, Barbara; Wieners, Christian

    2018-03-01

    This work presents a systematic study of discontinuous and nonconforming finite element methods for linear elasticity, finite elasticity, and small strain plasticity. In particular, we consider new hybrid methods with additional degrees of freedom on the skeleton of the mesh and allowing for a local elimination of the element-wise degrees of freedom. We show that this process leads to a well-posed approximation scheme. The quality of the new methods with respect to locking and anisotropy is compared with standard and in addition locking-free conforming methods as well as established (non-) symmetric discontinuous Galerkin methods with interior penalty. For several benchmark configurations, we show that all methods converge asymptotically for fine meshes and that in many cases the hybrid methods are more accurate for a fixed size of the discrete system.

  9. An unsupervised method for estimating the global horizontal irradiance from photovoltaic power measurements

    NASA Astrophysics Data System (ADS)

    Nespoli, Lorenzo; Medici, Vasco

    2017-12-01

    In this paper, we present a method to determine the global horizontal irradiance (GHI) from the power measurements of one or more PV systems, located in the same neighborhood. The method is completely unsupervised and is based on a physical model of a PV plant. The precise assessment of solar irradiance is pivotal for the forecast of the electric power generated by photovoltaic (PV) plants. However, on-ground measurements are expensive and are generally not performed for small and medium-sized PV plants. Satellite-based services represent a valid alternative to on site measurements, but their space-time resolution is limited. Results from two case studies located in Switzerland are presented. The performance of the proposed method at assessing GHI is compared with that of free and commercial satellite services. Our results show that the presented method is generally better than satellite-based services, especially at high temporal resolutions.

  10. Comparative study of 2D ultrasound imaging methods in the f-k domain and evaluation of their performances in a realistic NDT configuration

    NASA Astrophysics Data System (ADS)

    Merabet, Lucas; Robert, Sébastien; Prada, Claire

    2018-04-01

    In this paper, we present two frequency-domain algorithms for 2D imaging with plane wave emissions, namely Stolt's migration and Lu's method. The theoretical background is first presented, followed by an analysis of the algorithm complexities. The frequency-domain methods are then compared to time-domain plane wave imaging in a realistic inspection configuration where the array elements are not in contact with the specimen. Imaging of defects located far away from the array aperture is assessed, and computation times for the three methods are presented as a function of the number of pixels of the reconstructed image. We show that Lu's method provides a speed-up of up to a factor of 33 compared to the time-domain algorithm, and demonstrate the limitations of Stolt's migration for defects far away from the aperture.

  11. Global Optimal Trajectory in Chaos and NP-Hardness

    NASA Astrophysics Data System (ADS)

    Latorre, Vittorio; Gao, David Yang

    This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of the direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed, which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring some new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
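    For orientation, a generic form of the least-squares reformulation described above might look as follows; the midpoint discretization and the symbols are our assumptions, not the authors' exact canonical-duality formulation.

```latex
% Generic illustration (assumed midpoint discretization, not the authors' exact
% formulation): the initial value problem dx/dt = f(x) on the grid t_k = t_0 + k h
% is recast as one global least-squares problem over the whole trajectory,
\[
  \min_{x_{1},\dots,x_{N}}\;
  \Pi(x_{1},\dots,x_{N})
  = \sum_{k=0}^{N-1}
    \Bigl\lVert x_{k+1} - x_{k}
      - h\, f\!\Bigl(\tfrac{x_{k} + x_{k+1}}{2}\Bigr) \Bigr\rVert^{2},
\]
% and canonical duality theory is then used to decide whether this nonconvex
% problem admits a deterministic, polynomial-time global solution.
```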

  12. The Research of Regression Method for Forecasting Monthly Electricity Sales Considering Coupled Multi-factor

    NASA Astrophysics Data System (ADS)

    Wang, Jiangbo; Liu, Junhui; Li, Tiantian; Yin, Shuo; He, Xinhui

    2018-01-01

    Monthly electricity sales forecasting is a basic task for ensuring the safety of the power system. This paper presents a monthly electricity sales forecasting method that comprehensively considers the coupled factors of temperature, economic growth, electric power replacement and business expansion. The mathematical model is constructed using a regression method. The simulation results show that the proposed method is accurate and effective.
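    A minimal sketch of such a multi-factor regression is given below; the feature set, the synthetic numbers and the scikit-learn model are illustrative assumptions, not the paper's data or code.

```python
# Minimal sketch (hypothetical features and synthetic data, not the paper's
# model): monthly electricity sales regressed on coupled factors.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_months = 48
X = np.column_stack([
    rng.normal(20, 8, n_months),    # mean monthly temperature [deg C]
    rng.normal(3, 1, n_months),     # economic growth index [%]
    rng.normal(1, 0.3, n_months),   # electric power replacement volume
    rng.normal(5, 2, n_months),     # business-expansion capacity added
])
# Synthetic sales series, just to make the example runnable.
y = (100 + 2.0 * X[:, 0] + 8.0 * X[:, 1] + 5.0 * X[:, 2] + 1.5 * X[:, 3]
     + rng.normal(0, 3, n_months))

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("next-month forecast:", model.predict([[22.0, 3.1, 1.2, 6.0]]))
```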

  13. A k-Space Method for Moderately Nonlinear Wave Propagation

    PubMed Central

    Jing, Yun; Wang, Tianren; Clement, Greg T.

    2013-01-01

    A k-space method for moderately nonlinear wave propagation in absorptive media is presented. The Westervelt equation is first transformed into k-space via Fourier transformation and is solved by a modified wave-vector time-domain scheme. The present approach is not limited to forward propagation or the parabolic approximation. One- and two-dimensional problems are investigated to verify the method by comparing results to analytic solutions and to the finite-difference time-domain (FDTD) method. It is found that to obtain accurate results in homogeneous media, the grid size can be as few as two points per wavelength, and for a moderately nonlinear problem, the Courant-Friedrichs-Lewy number can be as large as 0.4. Through comparisons with the conventional FDTD method, the k-space method for nonlinear wave propagation is shown here to be computationally more efficient and accurate. The k-space method is then employed to study three-dimensional nonlinear wave propagation through the skull, which shows that a relatively accurate focus can be achieved in the brain at a high frequency by sending a low frequency from the transducer. Finally, an implementation of the k-space method using a single graphics processing unit shows that it requires about one-seventh of the computation time of a single-core CPU calculation. PMID:22899114
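    As background to the time-stepping idea, the linear, lossless, homogeneous part of a k-space scheme is commonly advanced with the exact propagator below. This standard form is given for orientation only and is an assumption about the paper's exact discretization, which additionally handles nonlinearity and absorption.

```latex
% Background form (assumed, stated for orientation): the linear, lossless,
% homogeneous part of a k-space scheme advanced exactly in the spatial
% Fourier domain,
\[
  \hat{p}(\mathbf{k},\,t+\Delta t)
  = 2\cos\!\bigl(c_{0}\,\lvert\mathbf{k}\rvert\,\Delta t\bigr)\,
    \hat{p}(\mathbf{k},\,t)
  - \hat{p}(\mathbf{k},\,t-\Delta t),
\]
% where \hat{p} is the spatial Fourier transform of the acoustic pressure and
% c_0 the sound speed. Because this step satisfies the linear dispersion
% relation exactly, coarse grids (about two points per wavelength) and large
% CFL numbers remain accurate; nonlinear, absorptive and source terms enter as
% additional contributions in the modified scheme.
```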

  14. Natural Environment Illumination: Coherent Interactive Augmented Reality for Mobile and Non-Mobile Devices.

    PubMed

    Rohmer, Kai; Jendersie, Johannes; Grosch, Thorsten

    2017-11-01

    Augmented Reality offers many applications today, especially on mobile devices. Due to the lack of mobile hardware for illumination measurements, photorealistic rendering with consistent appearance of virtual objects is still an area of active research. In this paper, we present a full two-stage pipeline for environment acquisition and augmentation of live camera images using a mobile device with a depth sensor. We show how to directly work on a recorded 3D point cloud of the real environment containing high dynamic range color values. For unknown and automatically changing camera settings, a color compensation method is introduced. Based on this, we show photorealistic augmentations using variants of differential light simulation techniques. The presented methods are tailored for mobile devices and run at interactive frame rates. However, our methods are scalable to trade performance for quality and can produce quality renderings on desktop hardware.

  15. Markov chain sampling of the O(n) loop models on the infinite plane

    NASA Astrophysics Data System (ADS)

    Herdeiro, Victor

    2017-07-01

    A numerical method was recently proposed in Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] showing a precise sampling of the infinite-plane two-dimensional critical Ising model for finite lattice subsections. The present note extends the method to a larger class of models, namely the O(n) loop gas models for n ∈ (1, 2]. We argue that even though the Gibbs measure is nonlocal, it is factorizable on finite subsections when sufficient information on the loops touching the boundaries is stored. Our results attempt to show that, provided an efficient Markov chain mixing algorithm and an improved discrete lattice dilation procedure, the planar limit of the O(n) models can be numerically studied with efficiency similar to the Ising case. This confirms that scale invariance is the only requirement for the present numerical method to work.

  16. A comparative study of cultural methods for the detection of Salmonella in feed and feed ingredients

    PubMed Central

    Koyuncu, Sevinc; Haggblom, Per

    2009-01-01

    Background Animal feed as a source of infection to food-producing animals is much debated. In order to increase our present knowledge about possible feed transmission, it is important to know that the present isolation methods for Salmonella are reliable also for feed materials. In a comparative study, the ability of the standard method used for isolation of Salmonella in feed in the Nordic countries, the NMKL71 method (Nordic Committee on Food Analysis), was compared to the Modified Semisolid Rappaport Vassiliadis method (MSRV) and the international standard method (EN ISO 6579:2002). Five different feed materials were investigated, namely wheat grain, soybean meal, rape seed meal, palm kernel meal and pellets of pig feed, together with scrapings from a feed mill elevator. Four different levels of the Salmonella serotypes S. Typhimurium, S. Cubana and S. Yoruba were added to each feed material, respectively. For all methods, pre-enrichment in Buffered Peptone Water (BPW) was carried out, followed by enrichment in the different selective media and finally plating on selective agar media. Results The results obtained with all three methods showed no differences in detection levels, with an accuracy and sensitivity of 65% and 56%, respectively. However, Müller-Kauffmann tetrathionate-novobiocin broth (MKTTn) performed less well due to many false-negative results on Brilliant Green agar (BGA) plates. Compared to other feed materials, palm kernel meal showed a higher detection level with all serotypes and methods tested. Conclusion The results of this study showed that the accuracy, sensitivity and specificity of the investigated cultural methods were equivalent. However, the detection levels for different feed and feed ingredients varied considerably. PMID:19192298

  17. Recent advances in PDF modeling of turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Leonard, Andrew D.; Dai, F.

    1995-01-01

    This viewgraph presentation concludes that a Monte Carlo probability density function (PDF) solution successfully couples with an existing finite volume code; PDF solution method applied to turbulent reacting flows shows good agreement with data; and PDF methods must be run on parallel machines for practical use.

  18. Approaching the Limit in Atomic Spectrochemical Analysis.

    ERIC Educational Resources Information Center

    Hieftje, Gary M.

    1982-01-01

    To assess the ability of current analytical methods to approach the single-atom detection level, theoretical and experimentally determined detection levels are presented for several chemical elements. A comparison of these methods shows that the most sensitive atomic spectrochemical technique currently available is based on emission from…

  19. A Summary of the Space-Time Conservation Element and Solution Element (CESE) Method

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.

    2015-01-01

    The space-time Conservation Element and Solution Element (CESE) method for solving conservation laws is examined for its development motivation and design requirements. The characteristics of the resulting scheme are discussed. The discretization of the Euler equations is presented to show readers how to construct a scheme based on the CESE method. The differences and similarities between the CESE method and other traditional methods are discussed. The strengths and weaknesses of the method are also addressed.

  20. Correction of energy-dependent systematic errors in dual-energy X-ray CT using a basis material coefficients transformation method

    NASA Astrophysics Data System (ADS)

    Goh, K. L.; Liew, S. C.; Hasegawa, B. H.

    1997-12-01

    Computer simulation results from our previous studies showed that energy-dependent systematic errors exist in the values of attenuation coefficient synthesized using the basis material decomposition technique with acrylic and aluminum as the basis materials, especially when a high atomic number element (e.g., iodine from radiographic contrast media) was present in the body. The errors were reduced when a basis set was chosen from materials mimicking those found in the phantom. In the present study, we employed a basis material coefficients transformation method to correct for the energy-dependent systematic errors. In this method, the basis material coefficients were first reconstructed using the conventional basis materials (acrylic and aluminum) as the calibration basis set. The coefficients were then numerically transformed to those for a more desirable set of materials. The transformation was done at the energies of the low and high energy windows of the X-ray spectrum. With this correction method, using acrylic and an iodine-water mixture as our desired basis set, computer simulation results showed that accuracy of better than 2% could be achieved even when iodine was present in the body at a concentration as high as 10% by mass. Simulation work was also carried out on a more inhomogeneous 2D thorax phantom derived from the 3D MCAT phantom. The results on the accuracy of quantitation are presented here.
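    One plausible reading of the coefficient transformation, written out for the two energy windows, is sketched below; the symbols and the per-voxel 2x2 solve are our assumptions about the algebra, not a verbatim statement of the paper's implementation.

```latex
% Sketch of the basis-coefficient transformation as we read it (the paper's
% exact implementation may differ). Attenuation synthesized from the
% calibration basis (acrylic, aluminium):
\[
  \mu(E) \;=\; a_{1}\,\mu_{\mathrm{acr}}(E) \;+\; a_{2}\,\mu_{\mathrm{Al}}(E).
\]
% The coefficients are then mapped to a desired basis, e.g. acrylic and an
% iodine-water mixture, by matching the synthesized attenuation at the low-
% and high-energy windows E_L and E_H:
\[
  \begin{pmatrix}
    \mu_{\mathrm{acr}}(E_{L}) & \mu_{\mathrm{iod}}(E_{L})\\
    \mu_{\mathrm{acr}}(E_{H}) & \mu_{\mathrm{iod}}(E_{H})
  \end{pmatrix}
  \begin{pmatrix} a_{1}' \\ a_{2}' \end{pmatrix}
  =
  \begin{pmatrix}
    \mu_{\mathrm{acr}}(E_{L}) & \mu_{\mathrm{Al}}(E_{L})\\
    \mu_{\mathrm{acr}}(E_{H}) & \mu_{\mathrm{Al}}(E_{H})
  \end{pmatrix}
  \begin{pmatrix} a_{1} \\ a_{2} \end{pmatrix},
\]
% i.e. a 2x2 linear solve per voxel, carried out after the conventional
% basis-material reconstruction.
```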

  1. Girls Not Boys Show Gender-Connotation Encoding from Print.

    ERIC Educational Resources Information Center

    Perez, Susan M.; Kee, Daniel W.

    2000-01-01

    Investigated possible gender differences in third grade students' encoding of gender-connotation from words using the release from proactive interference method to measure gender-connotation encoding. Students were presented with stimulus word triads in print. Results showed reliable proactive interference buildup and release for…

  2. An analytic data analysis method for oscillatory slug tests.

    PubMed

    Chen, Chia-Shyun

    2006-01-01

    An analytical data analysis method is developed for slug tests in partially penetrating wells in confined or unconfined aquifers of high hydraulic conductivity. As adapted from the van der Kamp method, the determination of the hydraulic conductivity is based on the occurrence times and the displacements of the extreme points measured from the oscillatory data and their theoretical counterparts available in the literature. This method is applied to two sets of slug test response data presented by Butler et al.: one set shows slow damping with seven discernible extreme points, and the other shows rapid damping with three extreme points. The estimates of the hydraulic conductivity obtained by the analytic method are in good agreement with those determined by an available curve-matching technique.

  3. A Novel Numerical Method for Fuzzy Boundary Value Problems

    NASA Astrophysics Data System (ADS)

    Can, E.; Bayrak, M. A.; Hicdurmaz

    2016-05-01

    In the present paper, a new numerical method is proposed for solving fuzzy differential equations which are utilized for the modeling problems in science and engineering. Fuzzy approach is selected due to its important applications on processing uncertainty or subjective information for mathematical models of physical problems. A second-order fuzzy linear boundary value problem is considered in particular due to its important applications in physics. Moreover, numerical experiments are presented to show the effectiveness of the proposed numerical method on specific physical problems such as heat conduction in an infinite plate and a fin.

  4. Mapcurves: a quantitative method for comparing categorical maps.

    Treesearch

    William W. Hargrove; M. Hoffman Forrest; Paul F. Hessburg

    2006-01-01

    We present Mapcurves, a quantitative goodness-of-fit (GOF) method that unambiguously shows the degree of spatial concordance between two or more categorical maps. Mapcurves graphically and quantitatively evaluate the degree of fit among any number of maps and quantify a GOF for each polygon, as well as the entire map. The Mapcurve method indicates a perfect fit even if...

  5. A Novel WA-BPM Based on the Generalized Multistep Scheme in the Propagation Direction in the Waveguide

    NASA Astrophysics Data System (ADS)

    Ji, Yang; Chen, Hong; Tang, Hongwu

    2017-06-01

    A highly accurate wide-angle scheme, based on the generalized multistep scheme in the propagation direction, is developed for the finite difference beam propagation method (FD-BPM). Compared with the previously presented method, simulations show that our method yields a more accurate solution and allows a much larger step size.

  6. Automatic tracking of red blood cells in micro channels using OpenCV

    NASA Astrophysics Data System (ADS)

    Rodrigues, Vânia; Rodrigues, Pedro J.; Pereira, Ana I.; Lima, Rui

    2013-10-01

    The present study aims to develop an automatic method able to track the trajectories of red blood cells (RBCs) flowing through a microchannel using the Open Source Computer Vision library (OpenCV). The developed method is based on optical flow calculation assisted by maximization of the template-matching product. The experimental results show good functional performance of this method.
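    A minimal sketch of the template-matching part of such a tracker, written with the OpenCV Python bindings, is shown below; the search-window size, the adaptive template update and the function name are illustrative assumptions rather than the authors' code.

```python
# Minimal sketch of template-matching-assisted tracking with OpenCV.
# Assumes uint8 grayscale frames and that the cell stays away from the image
# border (no boundary handling, for brevity).
import cv2
import numpy as np

def track_template(frames, bbox):
    """frames: list of grayscale images; bbox: (x, y, w, h) of the RBC in frame 0."""
    x, y, w, h = bbox
    template = frames[0][y:y + h, x:x + w]
    trajectory = [(x, y)]
    for frame in frames[1:]:
        # Search in a window around the last position to keep the match local.
        px, py = trajectory[-1]
        x0, y0 = max(px - 15, 0), max(py - 15, 0)
        roi = frame[y0:y0 + h + 30, x0:x0 + w + 30]
        res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)
        trajectory.append((x0 + max_loc[0], y0 + max_loc[1]))
        # Adaptive template update at the new position.
        nx, ny = trajectory[-1]
        template = frame[ny:ny + h, nx:nx + w]
    return np.array(trajectory)
```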

  7. Recent statistical methods for orientation data

    NASA Technical Reports Server (NTRS)

    Batschelet, E.

    1972-01-01

    The application of statistical methods for determining the areas of animal orientation and navigation are discussed. The method employed is limited to the two-dimensional case. Various tests for determining the validity of the statistical analysis are presented. Mathematical models are included to support the theoretical considerations and tables of data are developed to show the value of information obtained by statistical analysis.
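    As a small illustration of the kind of two-dimensional (circular) statistics involved, the sketch below computes the mean direction, the mean resultant length and an approximate Rayleigh test p-value; the data and the function name are assumptions for illustration only, not taken from the report.

```python
# Minimal sketch of basic circular statistics for 2-D orientation data:
# mean direction, mean resultant length, and an approximate Rayleigh test.
import numpy as np

def circular_summary(angles_rad):
    """angles_rad: 1-D array of directions in radians."""
    n = len(angles_rad)
    C, S = np.cos(angles_rad).sum(), np.sin(angles_rad).sum()
    R = np.hypot(C, S) / n                         # mean resultant length in [0, 1]
    mean_dir = np.arctan2(S, C)                    # mean direction
    z = n * R**2                                   # Rayleigh statistic
    p = np.exp(-z) * (1 + (2*z - z**2) / (4*n))    # approximate p-value
    return mean_dir, R, p

# Example: homing directions clustered around 40 degrees.
rng = np.random.default_rng(1)
angles = np.deg2rad(40 + 15 * rng.standard_normal(50))
print(circular_summary(angles))
```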

  8. Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.

    PubMed

    Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick

    2009-08-17

    In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce the ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods, including other LMS and constant-statistics based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts but has a slightly longer convergence time. (c) 2009 Optical Society of America
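    A minimal sketch of a gated LMS update of per-pixel gain and offset is given below; the step size, gating threshold and box-filter "desired" image are assumptions chosen for illustration, not the values or exact algorithm of the paper.

```python
# Minimal sketch of a gated-LMS scene-based NUC (illustrative; parameters are
# assumptions). Gain/offset updates are gated by per-pixel temporal change,
# which is what suppresses ghosting from static scene content.
import numpy as np

def box_blur(img, k=5):
    # Simple box filter as the "true scene" estimate used by LMS-based NUC.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def gated_lms_nuc(frames, mu=0.05, gate_thresh=2.0):
    """frames: (T, H, W) raw sequence; returns the corrected sequence."""
    T, H, W = frames.shape
    gain = np.ones((H, W))
    offset = np.zeros((H, W))
    prev = frames[0].astype(float)
    corrected = []
    for raw in frames.astype(float):
        y = gain * raw + offset                 # per-pixel correction
        desired = box_blur(y)                   # spatial low-pass as desired output
        err = desired - y
        gate = np.abs(raw - prev) > gate_thresh  # update only where the scene changed
        gain += mu * err * raw * gate
        offset += mu * err * gate
        prev = raw
        corrected.append(y)
    return np.array(corrected)
```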

  9. New methods for indexing multi-lattice diffraction data

    PubMed Central

    Gildea, Richard J.; Waterman, David G.; Parkhurst, James M.; Axford, Danny; Sutton, Geoff; Stuart, David I.; Sauter, Nicholas K.; Evans, Gwyndaf; Winter, Graeme

    2014-01-01

    A new indexing method is presented which is capable of indexing multiple crystal lattices from narrow wedges of diffraction data. The method takes advantage of a simplification of Fourier transform-based methods that is applicable when the unit-cell dimensions are known a priori. The efficacy of this method is demonstrated with both semi-synthetic multi-lattice data and real multi-lattice data recorded from crystals of ∼1 µm in size, where it is shown that up to six lattices can be successfully indexed and subsequently integrated from a 1° wedge of data. Analysis is presented which shows that improvements in data-quality indicators can be obtained through accurate identification and rejection of overlapping reflections prior to scaling. PMID:25286849

  10. New methods for indexing multi-lattice diffraction data.

    PubMed

    Gildea, Richard J; Waterman, David G; Parkhurst, James M; Axford, Danny; Sutton, Geoff; Stuart, David I; Sauter, Nicholas K; Evans, Gwyndaf; Winter, Graeme

    2014-10-01

    A new indexing method is presented which is capable of indexing multiple crystal lattices from narrow wedges of diffraction data. The method takes advantage of a simplification of Fourier transform-based methods that is applicable when the unit-cell dimensions are known a priori. The efficacy of this method is demonstrated with both semi-synthetic multi-lattice data and real multi-lattice data recorded from crystals of ∼1 µm in size, where it is shown that up to six lattices can be successfully indexed and subsequently integrated from a 1° wedge of data. Analysis is presented which shows that improvements in data-quality indicators can be obtained through accurate identification and rejection of overlapping reflections prior to scaling.

  11. New methods for indexing multi-lattice diffraction data

    DOE PAGES

    Gildea, Richard J.; Waterman, David G.; Parkhurst, James M.; ...

    2014-09-27

    A new indexing method is presented which is capable of indexing multiple crystal lattices from narrow wedges of diffraction data. The method takes advantage of a simplification of Fourier transform-based methods that is applicable when the unit-cell dimensions are known a priori. The efficacy of this method is demonstrated with both semi-synthetic multi-lattice data and real multi-lattice data recorded from crystals of ~1 µm in size, where it is shown that up to six lattices can be successfully indexed and subsequently integrated from a 1° wedge of data. Analysis is presented which shows that improvements in data-quality indicators can be obtained through accurate identification and rejection of overlapping reflections prior to scaling.

  12. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    DOE PAGES

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-01-27

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.

  13. Investigation of diffusion length distribution on polycrystalline silicon wafers via photoluminescence methods

    PubMed Central

    Lou, Shishu; Zhu, Huishi; Hu, Shaoxu; Zhao, Chunhua; Han, Peide

    2015-01-01

    Characterization of the diffusion length of solar cells in space has been widely studied using various methods, but few studies have focused on a fast, simple way to obtain the quantified diffusion length distribution on a silicon wafer. In this work, we present two different facile methods of doing this by fitting photoluminescence images taken in two different wavelength ranges or from different sides. These methods, which are based on measuring the ratio of two photoluminescence images, yield absolute values of the diffusion length and are less sensitive to the inhomogeneity of the incident laser beam. A theoretical simulation and experimental demonstration of this method are presented. The diffusion length distributions on a polycrystalline silicon wafer obtained by the two methods show good agreement. PMID:26364565

  14. Sound reproduction in personal audio systems using the least-squares approach with acoustic contrast control constraint.

    PubMed

    Cai, Yefeng; Wu, Ming; Yang, Jun

    2014-02-01

    This paper describes a method for focusing the reproduced sound in the bright zone of a personal audio system without disturbing other people in the dark zone. The proposed method combines the least-squares and acoustic contrast criteria. A constrained parameter is introduced to tune the balance between the two performance indices, namely the acoustic contrast and the spatial average error. An efficient implementation of this method using convex optimization is presented. Offline simulations and real-time experiments using a linear loudspeaker array are conducted to evaluate the performance of the presented method. Results show that, compared with the traditional acoustic contrast control method, the proposed method can improve the flatness of the response in the bright zone at the cost of some acoustic contrast.
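    One plausible way to write such a combined criterion is the penalized least-squares problem below; the notation and the exact form of the trade-off are our assumptions, not necessarily the paper's formulation.

```latex
% One plausible combined criterion (notation assumed, not the paper's exact form):
\[
  \mathbf{w}^{\star}
  = \arg\min_{\mathbf{w}}\;
    \underbrace{\lVert \mathbf{G}_{B}\,\mathbf{w} - \mathbf{p}_{t} \rVert^{2}}_{\text{spatial average error (bright zone)}}
    \;+\;
    \kappa\,
    \underbrace{\lVert \mathbf{G}_{D}\,\mathbf{w} \rVert^{2}}_{\text{energy radiated into the dark zone}},
\]
% where G_B and G_D are the transfer matrices from the loudspeaker array to the
% bright and dark zones, p_t is the target pressure, and the constrained
% parameter kappa tunes the balance between acoustic contrast and reproduction
% error. The problem is convex in w for kappa >= 0, which is consistent with a
% convex-optimization implementation.
```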

  15. Real-time absorption and scattering characterization of slab-shaped turbid samples obtained by a combination of angular and spatially resolved measurements.

    PubMed

    Dam, Jan S; Yavari, Nazila; Sørensen, Søren; Andersson-Engels, Stefan

    2005-07-10

    We present a fast and accurate method for real-time determination of the absorption coefficient, the scattering coefficient, and the anisotropy factor of thin turbid samples by using simple continuous-wave noncoherent light sources. The three optical properties are extracted from recordings of angularly resolved transmittance in addition to spatially resolved diffuse reflectance and transmittance. The applied multivariate calibration and prediction techniques are based on multiple polynomial regression in combination with a Newton-Raphson algorithm. The numerical test results based on Monte Carlo simulations showed mean prediction errors of approximately 0.5% for all three optical properties within ranges typical for biological media. Preliminary experimental results are also presented yielding errors of approximately 5%. Thus the presented methods show a substantial potential for simultaneous absorption and scattering characterization of turbid media.

  16. Unconditionally stable finite-difference time-domain methods for modeling the Sagnac effect

    NASA Astrophysics Data System (ADS)

    Novitski, Roman; Scheuer, Jacob; Steinberg, Ben Z.

    2013-02-01

    We present two unconditionally stable finite-difference time-domain (FDTD) methods for modeling the Sagnac effect in rotating optical microsensors. The methods are based on the implicit Crank-Nicolson scheme, adapted to hold in the rotating system reference frame—the rotating Crank-Nicolson (RCN) methods. The first method (RCN-2) is second order accurate in space whereas the second method (RCN-4) is fourth order accurate. Both methods are second order accurate in time. We show that the RCN-4 scheme is more accurate and has better dispersion isotropy. The numerical results show good correspondence with the expression for the classical Sagnac resonant frequency splitting when using group refractive indices of the resonant modes of a microresonator. Also we show that the numerical results are consistent with the perturbation theory for the rotating degenerate microcavities. We apply our method to simulate the effect of rotation on an entire Coupled Resonator Optical Waveguide (CROW) consisting of a set of coupled microresonators. Preliminary results validate the formation of a rotation-induced gap at the center of a transfer function of a CROW.

  17. Identifiability and identification of trace continuous pollutant source.

    PubMed

    Qu, Hongquan; Liu, Shouwen; Pang, Liping; Hu, Tao

    2014-01-01

    Accidental pollution events often threaten people's health and lives, so prompt identification of the pollutant source is necessary for timely remedial actions. In this paper, a trace continuous pollutant source identification method is developed to identify a sudden continuous-emission pollutant source in an enclosed space. The location probability model is set up first, and the identification is then realized by searching for a global optimal objective value of the location probability. In order to discuss the identifiability performance of the presented method, the concept of a synergy degree of velocity fields is introduced to quantitatively analyze the impact of the velocity field on the identification performance. Based on this concept, several simulation cases were conducted, and the application conditions of the method were obtained from these simulation studies. In order to verify the presented method, we designed an experiment and identified an unknown source appearing in the experimental space. The result showed that the method can identify a sudden trace continuous source when the studied situation satisfies the application conditions.

  18. A penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography.

    PubMed

    Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn

    2007-01-01

    The conjugate gradient method is verified to be efficient for nonlinear optimization problems with large-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method based on a restart strategy, in order to take advantage of both kinds of conjugate gradient methods and compensate for their disadvantages. A quadratic penalty method is adopted to impose a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast. It has a better performance than the conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstruct fluorochrome information for FMT.
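    A generic quadratic-penalty form of the nonnegatively constrained reconstruction problem is given below for orientation; the paper's exact penalty and regularization may differ.

```latex
% Generic quadratic-penalty form (assumed, for orientation only):
\[
  \min_{\mathbf{x}}\;
  \lVert \mathbf{A}\mathbf{x} - \mathbf{b} \rVert^{2}
  \;+\;
  \beta \sum_{i} \bigl(\min(x_{i},\,0)\bigr)^{2},
\]
% where A is the FMT forward operator, b the boundary measurements, and the
% beta-weighted term penalizes negative fluorochrome concentrations while
% reducing the ill-posedness of the inversion.
```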

  19. A composition joint PDF method for the modeling of spray flames

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1995-01-01

    This viewgraph presentation discusses an extension of the probability density function (PDF) method to the modeling of spray flames to evaluate the limitations and capabilities of this method in the modeling of gas-turbine combustor flows. The comparisons show that the general features of the flowfield are correctly predicted by the present solution procedure. The present solution appears to provide a better representation of the temperature field, particularly, in the reverse-velocity zone. The overpredictions in the centerline velocity could be attributed to the following reasons: (1) the use of k-epsilon turbulence model is known to be less precise in highly swirling flows and (2) the swirl number used here is reported to be estimated rather than measured.

  20. The application of virtual prototyping methods to determine the dynamic parameters of mobile robot

    NASA Astrophysics Data System (ADS)

    Kurc, Krzysztof; Szybicki, Dariusz; Burghardt, Andrzej; Muszyńska, Magdalena

    2016-04-01

    The paper presents methods used to determine the parameters necessary to build a mathematical model of an underwater robot with a crawler drive. The parameters appearing in the dynamics equation will be determined by means of advanced mechatronic design tools, including CAD/CAE software and MES modules. The virtual prototyping process is described, as well as the various possible uses (design adaptability) depending on the optional accessories added to the vehicle. A mathematical model is presented to show the kinematics and dynamics of the underwater crawler robot, essential for the design stage.

  1. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images. It shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods for the area of the skin of a human foot and face. The full source code of the developed application is also provided as an attachment. (Figure: the main window of the program during dynamic analysis of the foot thermal image.) © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Adaptive nonlinear control for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Black, William S.

    We present the background and motivation for ground vehicle autonomy, and focus on uses for space-exploration. Using a simple design example of an autonomous ground vehicle we derive the equations of motion. After providing the mathematical background for nonlinear systems and control we present two common methods for exactly linearizing nonlinear systems, feedback linearization and backstepping. We use these in combination with three adaptive control methods: model reference adaptive control, adaptive sliding mode control, and extremum-seeking model reference adaptive control. We show the performances of each combination through several simulation results. We then consider disturbances in the system, and design nonlinear disturbance observers for both single-input-single-output and multi-input-multi-output systems. Finally, we show the performance of these observers with simulation results.

  3. Discrete Molecular Dynamics Approach to the Study of Disordered and Aggregating Proteins.

    PubMed

    Emperador, Agustí; Orozco, Modesto

    2017-03-14

    We present a refinement of the Coarse Grained PACSAB force field for Discrete Molecular Dynamics (DMD) simulations of proteins in aqueous conditions. Like the original version, the refined method provides a good representation of the structure and dynamics of folded proteins, but it provides much better representations of a variety of unfolded proteins, including some very large ones that are impossible to analyze by atomistic simulation methods. The PACSAB/DMD method also reproduces aggregation properties accurately, providing good pictures of the structural ensembles of proteins showing a folded core and an intrinsically disordered region. The combination of accuracy and speed makes the method presented here a good alternative for the exploration of unstructured protein systems.

  4. A biologically inspired neural network for dynamic programming.

    PubMed

    Francelin Romero, R A; Kacpryzk, J; Gomide, F

    2001-12-01

    An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural network based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Some illustrative application examples are presented to show how this approach works out, including the shortest path and fuzzy decision making problems.

  5. Hybrid massively parallel fast sweeping method for static Hamilton-Jacobi equations

    NASA Astrophysics Data System (ADS)

    Detrixhe, Miles; Gibou, Frédéric

    2016-10-01

    The fast sweeping method is a popular algorithm for solving a variety of static Hamilton-Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence, parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
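    For readers unfamiliar with the baseline algorithm, a minimal serial sketch of fast sweeping for the eikonal equation |∇T| = 1/F is given below; it illustrates the Godunov upwind update and the four sweep orderings only, and is not the paper's hybrid parallel algorithm.

```python
# Minimal serial sketch of the fast sweeping method for the eikonal equation
# |grad T| = 1/F on a uniform 2-D grid (illustrative only).
import numpy as np

def fast_sweep_eikonal(speed, src, h=1.0, n_sweeps=4):
    """speed: (ny, nx) speed map F > 0; src: (i, j) source index; returns travel time T."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for ii, jj in orders:                  # four alternating sweep directions
            for i in ii:
                for j in jj:
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < ny - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < nx - 1 else np.inf)
                    f = h / speed[i, j]
                    # Godunov upwind solution of the local quadratic.
                    if abs(a - b) >= f:
                        t_new = min(a, b) + f
                    else:
                        t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T

T = fast_sweep_eikonal(np.ones((50, 50)), (25, 25))
print(T[25, 45])   # approximately 20 for unit speed and unit grid spacing
```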

  6. Renormalized stress-energy tensor for stationary black holes

    NASA Astrophysics Data System (ADS)

    Levi, Adam

    2017-01-01

    We continue the presentation of the pragmatic mode-sum regularization (PMR) method for computing the renormalized stress-energy tensor (RSET). We show in detail how to employ the t -splitting variant of the method, which was first presented for ⟨ϕ2⟩ren , to compute the RSET in a stationary, asymptotically flat background. This variant of the PMR method was recently used to compute the RSET for an evaporating spinning black hole. As an example for regularization, we demonstrate here the computation of the RSET for a minimally coupled, massless scalar field on Schwarzschild background in all three vacuum states. We discuss future work and possible improvements of the regularization schemes in the PMR method.

  7. Polarization ratio property and material classification method in passive millimeter wave polarimetric imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Yayun; Qi, Bo; Liu, Siyuan; Hu, Fei; Gui, Liangqi; Peng, Xiaohui

    2016-10-01

    Polarimetric measurements can provide additional information as compared to unpolarized ones. In this paper, the linear polarization ratio (LPR) is created to be a feature discriminator. The LPR properties of several materials are investigated using Fresnel theory. The theoretical results show that the LPR is sensitive to the material type (metal or dielectric). Then a linear polarization ratio-based (LPR-based) method is presented to distinguish between metal and dielectric materials. In order to apply this method to practical applications, the optimal range of incident angles has been discussed. Typical outdoor experiments, including various objects such as an aluminum plate, grass, concrete, soil and wood, have been conducted to validate the presented classification method.

  8. Risk Evaluation of Bogie System Based on Extension Theory and Entropy Weight Method

    PubMed Central

    Du, Yanping; Zhang, Yuan; Zhao, Xiaogang; Wang, Xiaohui

    2014-01-01

    A bogie system is the key equipment of railway vehicles. Rigorous practical evaluation of bogies is still a challenge. Presently, there is overreliance on part-specific experiments in practice. In the present work, a risk evaluation index system of a bogie system has been established based on the inspection data and experts' evaluation. Then, considering quantitative and qualitative aspects, the risk state of a bogie system has been evaluated using an extension theory and an entropy weight method. Finally, the method has been used to assess the bogie system of four different samples. Results show that this method can assess the risk state of a bogie system exactly. PMID:25574159

  9. Risk evaluation of bogie system based on extension theory and entropy weight method.

    PubMed

    Du, Yanping; Zhang, Yuan; Zhao, Xiaogang; Wang, Xiaohui

    2014-01-01

    A bogie system is the key equipment of railway vehicles. Rigorous practical evaluation of bogies is still a challenge. Presently, there is overreliance on part-specific experiments in practice. In the present work, a risk evaluation index system of a bogie system has been established based on the inspection data and experts' evaluation. Then, considering quantitative and qualitative aspects, the risk state of a bogie system has been evaluated using an extension theory and an entropy weight method. Finally, the method has been used to assess the bogie system of four different samples. Results show that this method can assess the risk state of a bogie system exactly.
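    A minimal sketch of the entropy weight computation used in this kind of evaluation is shown below; the decision matrix values are made up for illustration and are not the bogie inspection data.

```python
# Minimal sketch of the entropy weight method for a decision matrix
# (alternatives x indicators); illustrative values only.
import numpy as np

def entropy_weights(X):
    """X: (m alternatives, n indicators), larger-is-better, all positive."""
    m, n = X.shape
    P = X / X.sum(axis=0)                          # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)             # entropy of each indicator
    d = 1.0 - e                                    # degree of divergence
    return d / d.sum()                             # objective entropy weights

X = np.array([[0.82, 0.40, 0.65],
              [0.75, 0.55, 0.70],
              [0.90, 0.35, 0.60],
              [0.70, 0.60, 0.72]])
print(entropy_weights(X))   # indicators with more spread receive larger weight
```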

  10. ANOVA with Rasch Measures.

    ERIC Educational Resources Information Center

    Linacre, John Michael

    Various methods of estimating main effects from ordinal data are presented and contrasted. Problems discussed include: (1) at what level to accumulate ordinal data into linear measures; (2) how to maintain scaling across analyses; and (3) the inevitable confounding of within cell variance with measurement error. An example shows three methods of…

  11. Percentage Problems in Bridging Courses

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    Research on teaching high school mathematics shows that the topic of percentages often causes learning difficulties. This article describes a method of teaching percentages that the authors used in university bridging courses. In this method, the information from a word problem about percentages is presented in a two-way table. Such a table gives…

  12. Intraocular lens power calculations for cataract surgery after phototherapeutic keratectomy in granular corneal dystrophy type 2.

    PubMed

    Jung, Se Hwan; Han, Kyung Eun; Sgrignoli, Bradford; Kim, Tae-Im; Lee, Hyung Keun; Kim, Eung Kweon

    2012-10-01

    To investigate the predictability of various intraocular lens (IOL) power calculation methods in granular corneal dystrophy type 2 (GCD2) with prior phototherapeutic keratectomy (PTK) and to suggest the more predictable IOL power calculation method. Medical records of 20 eyes from 16 patients with GCD2, all having undergone cataract surgery after PTK, were retrospectively evaluated. Postoperative cataract refractive errors were compared with target diopters (D) using the following IOL power calculation methods: 1) myopic and 2) hyperopic Haigis-L formula in the IOLMaster (Carl Zeiss Meditec); 3) SRK/T formula using 4.5-mm zone Holladay equivalent keratometry readings (EKRs) (single-K Holladay EKRs method); 4) central keratometry power of the true net power map in the Pentacam system (Oculus Optikgeräte GmbH); and 5) clinical history, Aramberri double-K, and double-K Holladay EKRs methods. The topographic status of corneal curvature after PTK was evaluated. Fourteen (70%) of 20 eyes showed central island formation after PTK. When a central island was present, the mean absolute error (MAE) using the hyperopic Haigis-L formula was 0.25±0.15 D. When a central island was not present, the myopic Haigis-L formula showed an MAE of 0.33±0.16 D. When central island formation and IOLMaster keratometry underestimation were present, the hyperopic Haigis-L formula showed the lowest MAE of 0.26±0.08 D when the IOLMaster keratometry values were set equal to the 4.5-mm zone Holladay EKRs. In planning for cataract surgery after PTK in GCD2, topographic analysis for central island formation is necessary. With or without central island formation, the hyperopic or myopic Haigis-L formula can be applied. When IOLMaster keratometry shows underestimation, the Haigis-L formula using 4.5-mm zone Holladay EKRs can be considered. Copyright 2012, SLACK Incorporated.

  13. Determination of the human spine curve based on laser triangulation.

    PubMed

    Poredoš, Primož; Čelan, Dušan; Možina, Janez; Jezeršek, Matija

    2015-02-05

    The main objective of the present method was to automatically obtain a spatial curve of the thoracic and lumbar spine based on a 3D shape measurement of a human torso with developed scoliosis. Manual determination of the spine curve, based on palpation of the thoracic and lumbar spinous processes, was found to be an appropriate way to validate the method. A new, noninvasive, optical 3D method for human torso evaluation in medical practice is therefore introduced. Twenty-four patients with a confirmed clinical diagnosis of scoliosis were scanned using a specially developed 3D laser profilometer. The measuring principle of the system is based on laser triangulation with one-laser-plane illumination. The measurement took approximately 10 seconds over 700 mm of longitudinal translation along the back. The single-point measurement accuracy was 0.1 mm. Computer analysis of the measured surface returned two 3D curves: the first determined by manual marking (manual curve) and the second determined by detecting surface curvature extremes (automatic curve). The manual and automatic curve comparison was given as the root mean square deviation (RMSD) for each patient. The intra-operator study involved assessing 20 successive measurements of the same person, and the inter-operator study involved assessing measurements from 8 operators. The results obtained for the 24 patients showed that the typical RMSD between the manual and automatic curve was 5.0 mm in the frontal plane and 1.0 mm in the sagittal plane, which is a good result compared with palpatory accuracy (9.8 mm). The intra-operator repeatability of the presented method in the frontal and sagittal planes was 0.45 mm and 0.06 mm, respectively. The inter-operator repeatability assessment shows that the presented method is insensitive to the operator of the computer program. The main novelty of the presented paper is the development of a new, non-contact method that provides a quick, precise and non-invasive way to determine the spatial spine curve for patients with developed scoliosis, without any harmful ionizing radiation, and its validation against palpation of the spinous processes.

  14. Teaching Introductory Chemistry with Videocassette Presentations.

    ERIC Educational Resources Information Center

    Enger, John; And Others

    Reported here is the development and evaluation of an extensive series of video-cassette presentations developed for introductory chemical education. In measures of course achievement, students instructed by the video-cassette-discussion format received higher average scores than those taught by live lecture methods. A survey showed that the…

  15. Insar Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (quad)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping errors are common in InSAR processing and seriously degrade the accuracy of the monitoring results. Based on a gross error correction method, quasi-accurate detection (QUAD), a method for the automatic correction of unwrapping errors is established in this paper. This method identifies and corrects the unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method using simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods can complement each other when the proportion is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can correct phase unwrapping errors successfully in practical applications.

  16. Stochastic rainfall synthesis for urban applications using different regionalization methods

    NASA Astrophysics Data System (ADS)

    Callau Poduje, A. C.; Leimbach, S.; Haberlandt, U.

    2017-12-01

    The proper design and efficient operation of urban drainage systems require long and continuous rainfall series at a high temporal resolution. Unfortunately, such time series are usually available at only a few locations, and it is therefore useful to develop a stochastic precipitation model to generate rainfall at locations without observations. The model presented is based on an alternating renewal process and involves an external and an internal structure. The members of these structures are described by probability distributions which are site specific. Different regionalization methods based on site descriptors are presented, which are used for estimating the distributions for locations without observations. Regional frequency analysis, multiple linear regression and a vine-copula method are applied for this purpose. An area located in the north-west of Germany, involving a total of 81 stations with 5 min rainfall records, is used to compare the different methods. The site descriptors include information available for the whole region: position, topography and hydrometeorological characteristics estimated from long-term observations. The methods are compared directly by cross validation of different rainfall statistics. Given that the model is stochastic, the evaluation is performed on ensembles of many long synthetic time series which are compared with observed ones. The performance is also evaluated indirectly by setting up a fictional urban hydrological system to test the capability of the different methods with regard to flooding and overflow characteristics. The results show a good representation of the seasonal variability and good performance in reproducing the sample statistics of the rainfall characteristics. The copula-based method proves to be the most robust of the three methods. Advantages and disadvantages of the different methods are presented and discussed.

  17. A family of conjugate gradient methods for large-scale nonlinear equations.

    PubMed

    Feng, Dexiang; Sun, Min; Wang, Xueyong

    2017-01-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, it needs low storage and the subproblem can be easily solved. Compared with the existing solution methods for solving the problem, its global convergence is established without the restriction of the Lipschitz continuity on the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.

  18. Mathematic models for a ray tracing method and its applications in wireless optical communications.

    PubMed

    Zhang, Minglun; Zhang, Yangan; Yuan, Xueguang; Zhang, Jinnan

    2010-08-16

    This paper presents a new ray tracing method, which contains a whole set of mathematic models, and its validity is verified by simulations. In addition, both theoretical analysis and simulation results show that the computational complexity of the method is much lower than that of previous ones. Therefore, the method can be used to rapidly calculate the impulse response of wireless optical channels for complicated systems.

  19. A Simple Joint Estimation Method of Residual Frequency Offset and Sampling Frequency Offset for DVB Systems

    NASA Astrophysics Data System (ADS)

    Kwon, Ki-Won; Cho, Yongsoo

    This letter presents a simple joint estimation method for the residual frequency offset (RFO) and sampling frequency offset (SFO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.

  20. Non-LTE line-blanketed model atmospheres of hot stars. 1: Hybrid complete linearization/accelerated lambda iteration method

    NASA Technical Reports Server (NTRS)

    Hubeny, I.; Lanz, T.

    1995-01-01

    A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.

  1. Modal method for Second Harmonic Generation in nanostructures

    NASA Astrophysics Data System (ADS)

    Héron, S.; Pardo, F.; Bouchon, P.; Pelouard, J.-L.; Haïdar, R.

    2015-05-01

    Nanophotonic devices show interesting features for nonlinear response enhancement, but numerical tools are mandatory to fully determine their behaviour. To address this need, we present a numerical modal method dedicated to nonlinear optics calculations under the undepleted pump approximation. It is briefly explained in the frame of Second Harmonic Generation for both plane waves and focused beams. The nonlinear behaviour of selected nanostructures is then investigated to compare with existing analytical results and to study the convergence of the code.

  2. Indispensable finite time corrections for Fokker-Planck equations from time series data.

    PubMed

    Ragwitz, M; Kantz, H

    2001-12-17

    The reconstruction of Fokker-Planck equations from observed time series data suffers strongly from finite sampling rates. We show that previously published results are degraded considerably by such effects. We present correction terms which yield a robust estimation of the diffusion terms, together with a novel method for one-dimensional problems. We apply these methods to time series data of local surface wind velocities, where the dependence of the diffusion constant on the state variable shows a different behavior than previously suggested.
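
    The finite-sampling issue is easiest to see against the naive Kramers-Moyal estimator that the paper's correction terms are meant to fix; the sketch below, with an assumed sampling interval dt and bin count, implements only that uncorrected baseline.

```python
import numpy as np

def estimate_drift_diffusion(x, dt, n_bins=40):
    """Naive (uncorrected) Kramers-Moyal estimates of drift D1 and diffusion D2
    from a sampled time series x; the standard estimator whose finite-sampling
    bias the paper's correction terms address."""
    dx = np.diff(x)
    xc = x[:-1]
    edges = np.linspace(xc.min(), xc.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    which = np.digitize(xc, edges[1:-1])          # bin index of each starting state
    D1 = np.full(n_bins, np.nan)
    D2 = np.full(n_bins, np.nan)
    for b in range(n_bins):
        m = which == b
        if m.sum() > 10:
            D1[b] = dx[m].mean() / dt             # drift:     <dx | x> / dt
            D2[b] = (dx[m] ** 2).mean() / (2 * dt)  # diffusion: <dx^2 | x> / (2 dt)
    return centers, D1, D2
```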

  3. An efficient transport solver for tokamak plasmas

    DOE PAGES

    Park, Jin Myung; Murakami, Masanori; St. John, H. E.; ...

    2017-01-03

    A simple approach to efficiently solve a coupled set of 1-D diffusion-type transport equations with a stiff transport model for tokamak plasmas is presented based on the 4th order accurate Interpolated Differential Operator scheme along with a nonlinear iteration method derived from a root-finding algorithm. Here, numerical tests using the Trapped Gyro-Landau-Fluid model show that the presented high order method provides an accurate transport solution using a small number of grid points with robust nonlinear convergence.

  4. Design and fabrication of multimode interference couplers based on digital micro-mirror system

    NASA Astrophysics Data System (ADS)

    Wu, Sumei; He, Xingdao; Shen, Chenbo

    2008-03-01

    Multimode interference (MMI) couplers, based on the self-imaging effect (SIE), are widely used in integrated optics. Given the importance of MMI devices, in this paper we present a novel method to design and fabricate MMI couplers. A maskless lithography technology for making MMI couplers based on a smart digital micro-mirror device (DMD) system is proposed. A 1×4 MMI device is designed as an example, which shows that the present method is efficient and cost-effective.

  5. Time evolution of a Gaussian class of quasi-distribution functions under quadratic Hamiltonian.

    PubMed

    Ginzburg, D; Mann, A

    2014-03-10

    A Lie algebraic method for propagation of the Wigner quasi-distribution function (QDF) under quadratic Hamiltonian was presented by Zoubi and Ben-Aryeh. We show that the same method can be used in order to propagate a rather general class of QDFs, which we call the "Gaussian class." This class contains as special cases the well-known Wigner, Husimi, Glauber, and Kirkwood-Rihaczek QDFs. We present some examples of the calculation of the time evolution of those functions.

  6. Theoretical analysis of incompressible flow through a radial-inlet centrifugal impeller at various weight flows

    NASA Technical Reports Server (NTRS)

    Kramer, James J; Prian, Vasily D; Wu, Chung-Hua

    1956-01-01

    A method for the solution of the incompressible nonviscous flow through a centrifugal impeller, including the inlet region, is presented. Several numerical solutions are obtained for four weight flows through an impeller at one operating speed. These solutions are refined in the leading-edge region. The results are presented in a series of figures showing streamlines and relative velocity contours. A comparison is made with the results obtained by using a rapid approximate method of analysis.

  7. Cost-volume-profit and net present value analysis of health information systems.

    PubMed

    McLean, R A

    1998-08-01

    The adoption of any information system should be justified by an economic analysis demonstrating that its projected benefits outweigh its projected costs. Analysts differ, however, on which methods to employ for such a justification. Accountants prefer cost-volume-profit analysis, and economists prefer net present value analysis. The article explains the strengths and weaknesses of each method and shows how they can be used together so that well-informed investments in information systems can be made.
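
    Both calculations are short enough to sketch side by side; the figures below are hypothetical, and the helper functions are illustrative, not taken from the article.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at time 0, later entries at yearly intervals."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def break_even_volume(fixed_cost, price_per_unit, variable_cost_per_unit):
    """Cost-volume-profit break-even point: volume at which profit is zero."""
    return fixed_cost / (price_per_unit - variable_cost_per_unit)

# Hypothetical information-system investment: $500k up front, $150k net benefit per year for 5 years.
project_npv = npv(0.08, [-500_000] + [150_000] * 5)
volume = break_even_volume(fixed_cost=200_000, price_per_unit=25.0, variable_cost_per_unit=10.0)
```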

  8. Comparative analysis for strength serum sodium and potassium in three different methods: Flame photometry, ion-selective electrode (ISE) and colorimetric enzymatic.

    PubMed

    Garcia, Rafaela Alvim; Vanelli, Chislene Pereira; Pereira Junior, Olavo Dos Santos; Corrêa, José Otávio do Amaral

    2018-06-19

    Hydroelectrolytic disorders are common in clinical situations and may be harmful to the patient, especially those involving plasma sodium and potassium measurements. Among the possible measurement methods are flame photometry, ion-selective electrode (ISE) and the colorimetric enzymatic method. We analyzed 175 samples with the three different methods cited, from patients attending the laboratory of the University Hospital of the Federal University of Juiz de Fora. The values obtained were statistically treated using SPSS 19.0 software. The present study aims to evaluate the impact of using these different methods in the determination of plasma sodium and potassium. The averages obtained for sodium and potassium measurements by flame photometry were similar (P > .05) to the averages obtained for the two electrolytes by ISE. The averages obtained by the colorimetric enzymatic method presented a statistical difference in relation to ISE, both for sodium and potassium. In the correlation analysis, both flame photometry and the colorimetric enzymatic method showed a strong correlation with the ISE method for both measurements. For the first time in the same work, sodium and potassium were analyzed by three different methods, and the results allowed us to conclude that the methods showed a positive and strong correlation and can be applied in the clinical routine. © 2018 Wiley Periodicals, Inc.

  9. A fourth-order box method for solving the boundary layer equations

    NASA Technical Reports Server (NTRS)

    Wornom, S. F.

    1977-01-01

    A fourth-order box method for calculating high-accuracy numerical solutions to parabolic partial differential equations in two variables or to ordinary differential equations is presented. The method is the natural extension of the second-order Keller box scheme to fourth order and is demonstrated with application to the incompressible laminar and turbulent boundary layer equations. Numerical results for high-accuracy test cases show the method to be significantly faster than other higher-order and second-order methods.

  10. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in affine projection algorithm (APA) by keeping the condition number of an input data matrix small. We present an improved method, and a complexity reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number for an input data matrix than both the conventional APA and the APA with the previous data-selective method.
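
    A rough sketch of how a condition-number check can be bolted onto a textbook affine projection update; the threshold kappa_max, the filter settings, and the simple skip rule are assumptions for illustration, not the authors' selection rule or complexity-reduction scheme.

```python
import numpy as np

def apa_data_selective(x, d, filt_len=8, proj_order=4, mu=0.5, delta=1e-6, kappa_max=100.0):
    """Affine projection adaptation that skips coefficient updates when the input
    data matrix is ill-conditioned (data-selective step).  Toy sketch only."""
    w = np.zeros(filt_len)
    for k in range(filt_len + proj_order - 1, len(x)):
        # Columns are the last proj_order input regressor vectors.
        X = np.column_stack([np.asarray(x[k - p - filt_len + 1 : k - p + 1])[::-1]
                             for p in range(proj_order)])
        dk = np.array([d[k - p] for p in range(proj_order)])
        if np.linalg.cond(X.T @ X) > kappa_max:
            continue                       # reject ill-conditioned data blocks
        e = dk - X.T @ w
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(proj_order), e)
    return w
```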

  11. Efficient method for computing the electronic transport properties of a multiterminal system

    NASA Astrophysics Data System (ADS)

    Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio

    2018-04-01

    We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.

  12. An accurate method for solving a class of fractional Sturm-Liouville eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Kashkari, Bothayna S. H.; Syam, Muhammed I.

    2018-06-01

    This article is devoted to both theoretical and numerical study of the eigenvalues of nonsingular fractional second-order Sturm-Liouville problem. In this paper, we implement a fractional-order Legendre Tau method to approximate the eigenvalues. This method transforms the Sturm-Liouville problem to a sparse nonsingular linear system which is solved using the continuation method. Theoretical results for the considered problem are provided and proved. Numerical results are presented to show the efficiency of the proposed method.

  13. Fremdsprachenerwerb in einer individualisierten Lernsituation. Eine Beschreibung von Lernverhalten (Foreign Language Acquisition in an Individualized Learning Situation. A Description of Learning Behavior)

    ERIC Educational Resources Information Center

    Extra, G.

    1974-01-01

    The introduction reviews and compares the audiolingual and cognitive code-learning methods. An experiment was conducted using audiolingual methods to show that learning behavior diverges considerably from the expectations set up by that method. Several charts and diagrams present the analyzed results. (Text is in German.) See FL 507 969 for…

  14. A Mixed Methods Approach for Identifying Influence on Public Policy

    ERIC Educational Resources Information Center

    Weaver-Hightower, Marcus B.

    2014-01-01

    Fields from political science to critical education policy studies have long explored power relations in policy processes, showing who influences policy agendas, policy creation, and policy implementation. Yet showing particular actors' influence on specific points in a policy text remains a methodological challenge. This article presents a…

  15. A New Way of Presenting Atomic Orbitals

    ERIC Educational Resources Information Center

    Bordass, W. T.; Linnett, J. W.

    1970-01-01

    Describes how the isometric projection with a transparent grid showing the x, y, and z axes drawn at 120 degrees to each other is used. This method of presenting atomic orbitals was developed using the Cambridge University Titan computer and has the advantage over contour maps that there is no distortion. (LS)

  16. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of the TGCB leads to a refractive phenomenon which can generate calibration error, and the theory of flat refractive geometry is employed to eliminate this error, so the new method accounts for the refraction introduced by the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, the four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.

  17. Fluorescent Nanomaterials for the Development of Latent Fingerprints in Forensic Sciences

    PubMed Central

    Li, Ming; Yu, Aoyang; Zhu, Ye

    2018-01-01

    This review presents an overview of the application of latent fingerprint development techniques in forensic sciences. At present, traditional developing methods such as powder dusting, cyanoacrylate fuming, the chemical method, and the small particle reagent method have all been gradually compromised given their emerging drawbacks, such as low contrast, sensitivity, and selectivity, as well as high toxicity. Recently, much attention has been paid to the use of fluorescent nanomaterials, including quantum dots (QDs) and rare earth upconversion fluorescent nanomaterials (UCNMs), due to their unique optical and chemical properties. Thus, this review lays emphasis on latent fingerprint development based on QDs and UCNMs. Compared to latent fingerprint development by traditional methods, the new methods using fluorescent nanomaterials can achieve high contrast, sensitivity, and selectivity while showing reduced toxicity. Overall, this review provides a systematic overview of such methods. PMID:29657570

  18. Micro-scale temperature measurement method using fluorescence polarization

    NASA Astrophysics Data System (ADS)

    Tatsumi, K.; Hsu, C.-H.; Suzuki, A.; Nakabe, K.

    2016-09-01

    A novel method that can measure fluid temperature on the microscopic scale by measuring fluorescence polarization is described in this paper. The measurement technique is not influenced by the quenching effects that appear in conventional LIF methods and is believed to show higher reliability in temperature measurements. Experiments were performed using a microchannel flow and fluorescent molecular probes, and the effects of the fluid temperature, fluid viscosity, measurement time, and pH of the solution on the measured fluorescence polarization degree are discussed to understand the basic characteristics of the present method. The results showed that fluorescence polarization is considerably less sensitive to these quenching factors. A good correlation with the fluid temperature, on the other hand, was obtained and agreed well with the theoretical values, confirming the feasibility of the method.

  19. Mode-Stirred Method Implementation for HIRF Susceptibility Testing and Results Comparison with Anechoic Method

    NASA Technical Reports Server (NTRS)

    Nguyen, Truong X.; Ely, Jay J.; Koppen, Sandra V.

    2001-01-01

    This paper describes the implementation of the mode-stirred method for susceptibility testing according to the current DO-160D standard. Test results on an Engine Data Processor using the implemented procedure and comparisons with the standard anechoic test results are presented. The comparison experimentally shows that the susceptibility thresholds found with the mode-stirred method are consistently higher than those found with the anechoic method. This is consistent with the recent statistical analysis finding by NIST that the current calibration procedure overstates field strength by a fixed amount. Once the test results are adjusted for this value, the comparisons with the anechoic results are excellent. The results also show that the test method has excellent chamber-to-chamber repeatability. Several areas for improvement to the current procedure are also identified and implemented.

  20. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson, "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972) and L. B. Lucy, "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. The conclusions support the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
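
    The underlying Richardson-Lucy iteration is standard and compact enough to sketch; this is the classic multiplicative update for nonnegative data under an assumed shift-invariant PSF, not the authors' specific implementation or its missing-cone extension.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Classic Richardson-Lucy / maximum-likelihood deconvolution for nonnegative
    images; 'psf' is assumed normalized and shift-invariant."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)          # data / re-blurred estimate
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```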

  1. Optimal homotopy asymptotic method for flow and heat transfer of a viscoelastic fluid in an axisymmetric channel with a porous wall.

    PubMed

    Mabood, Fazle; Khan, Waqar A; Ismail, Ahmad Izani Md

    2013-01-01

    In this article, an approximate analytical solution of flow and heat transfer for a viscoelastic fluid in an axisymmetric channel with porous wall is presented. The solution is obtained through the use of a powerful method known as Optimal Homotopy Asymptotic Method (OHAM). We obtained the approximate analytical solution for dimensionless velocity and temperature for various parameters. The influence and effect of different parameters on dimensionless velocity, temperature, friction factor, and rate of heat transfer are presented graphically. We also compared our solution with those obtained by other methods and it is found that OHAM solution is better than the other methods considered. This shows that OHAM is reliable for use to solve strongly nonlinear problems in heat transfer phenomena.

  2. Comparison of wind tunnel test results at free stream Mach 0.7 with results from the Boeing TEA-230 subsonic flow method. [wing flow method tests

    NASA Technical Reports Server (NTRS)

    Mohn, L. W.

    1975-01-01

    The use of the Boeing TEA-230 Subsonic Flow Analysis method as a primary design tool in the development of cruise overwing nacelle configurations is presented. Surface pressure characteristics at 0.7 Mach number were determined by the TEA-230 method for a selected overwing flow-through nacelle configuration. Results of this analysis show excellent overall agreement with corresponding wind tunnel data. Effects of the presence of the nacelle on the wing pressure field were predicted accurately by the theoretical method. Evidence is provided that differences between theoretical and experimental pressure distributions in the present study would not result in significant discrepancies in the nacelle lines or nacelle drag estimates.

  3. Optimal Homotopy Asymptotic Method for Flow and Heat Transfer of a Viscoelastic Fluid in an Axisymmetric Channel with a Porous Wall

    PubMed Central

    Mabood, Fazle; Khan, Waqar A.; Ismail, Ahmad Izani

    2013-01-01

    In this article, an approximate analytical solution of flow and heat transfer for a viscoelastic fluid in an axisymmetric channel with porous wall is presented. The solution is obtained through the use of a powerful method known as Optimal Homotopy Asymptotic Method (OHAM). We obtained the approximate analytical solution for dimensionless velocity and temperature for various parameters. The influence and effect of different parameters on dimensionless velocity, temperature, friction factor, and rate of heat transfer are presented graphically. We also compared our solution with those obtained by other methods and it is found that OHAM solution is better than the other methods considered. This shows that OHAM is reliable for use to solve strongly nonlinear problems in heat transfer phenomena. PMID:24376722

  4. Phased Array Beamforming and Imaging in Composite Laminates Using Guided Waves

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara A. C.; Yu, Lingyu

    2016-01-01

    This paper presents the phased array beamforming and imaging using guided waves in anisotropic composite laminates. A generic phased array beamforming formula is presented, based on the classic delay-and-sum principle. The generic formula considers direction-dependent guided wave properties induced by the anisotropic material properties of composites. Moreover, the array beamforming and imaging are performed in frequency domain where the guided wave dispersion effect has been considered. The presented phased array method is implemented with a non-contact scanning laser Doppler vibrometer (SLDV) to detect multiple defects at different locations in an anisotropic composite plate. The array is constructed of scan points in a small area rapidly scanned by the SLDV. Using the phased array method, multiple defects at different locations are successfully detected. Our study shows that the guided wave phased array method is a potential effective method for rapid inspection of large composite structures.
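
    The delay-and-sum principle itself is compact enough to sketch; the version below is a simplified time-domain variant with an assumed direction-dependent group-velocity function and a one-way travel-time model, whereas the paper works in the frequency domain with dispersion included.

```python
import numpy as np

def delay_and_sum_image(signals, sensor_xy, fs, grid_x, grid_y, group_velocity):
    """Toy time-domain delay-and-sum imaging.  `signals` holds one trace per array
    point (shape n_sensors x n_samples), `group_velocity(theta)` returns an assumed
    direction-dependent guided-wave group velocity, which is how the anisotropy of a
    composite can be folded in.  One-way travel times only; no dispersion handling."""
    n_sensors, n_samples = signals.shape
    image = np.zeros((len(grid_y), len(grid_x)))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            acc = 0.0
            for s in range(n_sensors):
                dx, dy = x - sensor_xy[s, 0], y - sensor_xy[s, 1]
                theta = np.arctan2(dy, dx)
                delay = int(round(np.hypot(dx, dy) / group_velocity(theta) * fs))
                if delay < n_samples:
                    acc += signals[s, delay]   # sum traces aligned on the pixel's travel time
            image[iy, ix] = abs(acc)
    return image
```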

  5. Fiber tracking of brain white matter based on graph theory.

    PubMed

    Lu, Meng

    2015-01-01

    Brain white matter tractography is reconstructed from diffusion-weighted magnetic resonance images. Due to the complex structure of brain white matter fiber bundles, fiber crossing and fiber branching are abundant in the human brain, and regular methods based on diffusion tensor imaging (DTI) cannot accurately handle this problem, which is one of the biggest problems in brain tractography. Therefore, this paper presents a novel brain white matter tractography method based on graph theory, in which fiber tracking between two voxels is transformed into locating the shortest path in a graph. Besides, the presented method uses Q-ball imaging (QBI) as the source data instead of DTI, because QBI can provide accurate information about multiple fiber crossings and branchings in one voxel using the orientation distribution function (ODF). Experiments showed that the presented method can accurately handle the problem of brain white matter fiber crossing and branching, and reconstruct brain tractography in both phantom data and real brain data.
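
    A toy version of the graph formulation: build a 6-connected voxel graph, assign edge costs from a scalar volume (here an assumed cost map such as 1/FA rather than the paper's ODF-derived costs), and run Dijkstra between the two voxels.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def track_shortest_path(cost_volume, start, end):
    """Fiber tracking between voxels `start` and `end` cast as a shortest-path
    search on a 6-connected voxel graph; `cost_volume` is an assumed 3D scalar
    cost map, not the paper's ODF-derived edge weights."""
    shape = cost_volume.shape
    n = cost_volume.size
    idx = np.arange(n).reshape(shape)
    flat = cost_volume.ravel()
    rows, cols, data = [], [], []
    for axis in range(3):
        a = np.take(idx, range(shape[axis] - 1), axis=axis).ravel()
        b = np.take(idx, range(1, shape[axis]), axis=axis).ravel()
        w = 0.5 * (flat[a] + flat[b])          # cost of stepping between neighbours
        rows += [a, b]; cols += [b, a]; data += [w, w]
    graph = coo_matrix((np.concatenate(data), (np.concatenate(rows), np.concatenate(cols))),
                       shape=(n, n)).tocsr()
    src, dst = int(idx[start]), int(idx[end])
    _, pred = dijkstra(graph, indices=src, return_predecessors=True)
    path, node = [], dst
    while node != src and node >= 0:           # walk predecessors back to the seed voxel
        path.append(np.unravel_index(node, shape))
        node = pred[node]
    path.append(start)
    return path[::-1]
```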

  6. A Novel Defect Inspection Method for Semiconductor Wafer Based on Magneto-Optic Imaging

    NASA Astrophysics Data System (ADS)

    Pan, Z.; Chen, L.; Li, W.; Zhang, G.; Wu, P.

    2013-03-01

    Defects in semiconductor wafers may be generated during the manufacturing processes. A novel defect inspection method for semiconductor wafers is presented in this paper. The method is based on magneto-optic imaging, which involves inducing eddy currents into the wafer under test and detecting the magnetic flux associated with the eddy current distribution in the wafer by exploiting the Faraday rotation effect. The magneto-optic image being generated may contain noise that degrades the overall image quality; therefore, in order to remove the unwanted noise present in the magneto-optic image, an image enhancement approach using multi-scale wavelets is presented, and an image segmentation approach based on the integration of the watershed algorithm and a clustering strategy is given. The experimental results show that many types of wafer defects, such as holes and scratches, can be detected by the method proposed in this paper.

  7. Initial interlaboratory validation of an analytical method for the determination of lead in canned tuna to be used for monitoring and regulatory purposes.

    PubMed

    Santiago, E C; Bello, F B B

    2003-06-01

    The Association of Official Analytical Chemists (AOAC) Standard Method 972.23 (dry ashing and flame atomic absorption spectrophotometry (FAAS)), applied to the analysis of lead in tuna, was validated in three selected local laboratories to determine the acceptability of the method to both the Codex Alimentarius Commission (Codex) and the European Union (EU) Commission for monitoring lead in canned tuna. Initial validation showed that the standard AOAC method as performed in the three participating laboratories cannot satisfy the Codex/EU proposed criteria for the method detection limit for monitoring lead in fish at the present regulation level of 0.5 mg x kg(-1). Modification of the standard method by chelation/concentration of the digest solution before FAAS analysis showed that the modified method has the potential to meet Codex/EU criteria on sensitivity, accuracy and precision at the specified regulation level.

  8. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    NASA Astrophysics Data System (ADS)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for extracting discontinuity orientation automatically from rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of discontinuity plane. The method is first validated by the point cloud of a small piece of a rock slope acquired by photogrammetry. The extracted discontinuity orientations are compared with measured ones in the field. Then it is applied to a publicly available LiDAR data of a road cut rock slope at Rockbench repository. The extracted discontinuity orientations are compared with the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable and of high accuracy, and can meet the engineering needs.
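
    Step (3) is the most generic part of the pipeline and can be sketched directly; this is a plain RANSAC plane fit with an assumed inlier distance threshold, followed by an SVD refit, not the authors' implementation.

```python
import numpy as np

def ransac_plane(points, n_iter=500, dist_thresh=0.02, seed=None):
    """Fit a plane (unit normal n and offset d with n.x + d = 0) to a 3D point set
    with RANSAC, then refine on the inliers by least squares (SVD).  The distance
    threshold and iteration count are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                                   # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    centroid = points[best_inliers].mean(axis=0)
    _, _, vt = np.linalg.svd(points[best_inliers] - centroid)
    normal = vt[-1]                                    # smallest singular vector = plane normal
    # Dip direction and dip angle of the discontinuity follow from this normal.
    return normal, -normal @ centroid, best_inliers
```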

  9. PSQP: Puzzle Solving by Quadratic Programming.

    PubMed

    Andalo, Fernanda A; Taubin, Gabriel; Goldenstein, Siome

    2017-02-01

    In this article we present the first effective method based on global optimization for the reconstruction of image puzzles comprising rectangle pieces-Puzzle Solving by Quadratic Programming (PSQP). The proposed novel mathematical formulation reduces the problem to the maximization of a constrained quadratic function, which is solved via a gradient ascent approach. The proposed method is deterministic and can deal with arbitrary identical rectangular pieces. We provide experimental results showing its effectiveness when compared to state-of-the-art approaches. Although the method was developed to solve image puzzles, we also show how to apply it to the reconstruction of simulated strip-shredded documents, broadening its applicability.

  10. Computational aspects of geometric correction data generation in the LANDSAT-D imagery processing

    NASA Technical Reports Server (NTRS)

    Levine, I.

    1981-01-01

    A method is presented for systematic and geodetic correction data calculation. It is based on the representation of image distortions as a sum of nominal distortions and linear effects caused by variations of the spacecraft position and attitude variables from their nominals. The method may be used for both MSS and TM image data, and it is incorporated into the processing by means of mostly offline calculations. Modeling shows that the maximal errors of the method are of the order of 5 m at the worst point in a frame; the standard deviations of the average errors are less than 0.8 m.

  11. Noise-free recovery of optodigital encrypted and multiplexed images.

    PubMed

    Henao, Rodrigo; Rueda, Edgar; Barrera, John F; Torroba, Roberto

    2010-02-01

    We present a method that allows storing multiple encrypted data using digital holography and a joint transform correlator architecture with a controllable angle reference wave. In this method, the information is multiplexed by using a key and a different reference wave angle for each object. In the recovering process, the use of different reference wave angles prevents noise produced by the nonrecovered objects from being superimposed on the recovered object; moreover, the position of the recovered object in the exit plane can be fully controlled. We present the theoretical analysis and the experimental results that show the potential and applicability of the method.

  12. Creative display of museum objects within their cultural context

    NASA Astrophysics Data System (ADS)

    Wang, Shuo; Osanlou, Ardieshir; Excell, Peter

    2014-02-01

    Most existing holographic display methods concentrate on real object reconstruction, but there is a lack of research on object stories (revealing and presenting histories). To address this challenge, we propose a method, called 4'ER' (leader, manager, implementer, presenter), to experience and respond to objects in a special immersive environment. The key innovation of the 4'ER' method is to introduce stories (political, historical, etc.) into hard-copy holography, so as to synergize art and science for museum object display. A hologram of an imitation of a blue and white porcelain jar from The Palace Museum, Beijing, China has been made, showing good performance and reflecting a different pathway to knowledge.

  13. Density measurements in low pressure, weakly magnetized, RF plasmas: experimental verification of the sheath expansion effect

    NASA Astrophysics Data System (ADS)

    Zhang, Yunchao; Charles, Christine; Boswell, Roderick W.

    2017-07-01

    This experimental study shows the validity of Sheridan's method in determining plasma density in low pressure, weakly magnetized, RF plasmas using ion saturation current data measured by a planar Langmuir probe. The ion density derived from Sheridan's method, which takes into account the sheath expansion around the negatively biased probe tip, shows good consistency with the electron density measured by a cylindrical RF-compensated Langmuir probe using the Druyvesteyn theory. The ion density obtained from the simplified method, which neglects the sheath expansion effect, overestimates the true density magnitude, e.g., by a factor of 3 to 12 for the present experiment.

  14. Fast optimization of glide vehicle reentry trajectory based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Jia, Jun; Dong, Ruixing; Yuan, Xuejun; Wang, Chuangwei

    2018-02-01

    An optimization method for reentry trajectories based on a genetic algorithm is presented to meet the need for reentry trajectory optimization of glide vehicles. The dynamic model of the glide vehicle during the reentry period is established. Considering the constraints of heat flux, dynamic pressure, overload, etc., the optimization of the reentry trajectory is investigated using the genetic algorithm. The simulation shows that the method presented in this paper is effective for the optimization of glide vehicle reentry trajectories, and its efficiency and speed are comparable with those of the references. Optimization results meet all constraints, and fast on-line optimization is feasible by pre-processing the offline samples.

  15. Partitioning Ocean Wave Spectra Obtained from Radar Observations

    NASA Astrophysics Data System (ADS)

    Delaye, Lauriane; Vergely, Jean-Luc; Hauser, Daniele; Guitton, Gilles; Mouche, Alexis; Tison, Celine

    2016-08-01

    2D wave spectra of ocean waves can be partitioned into several wave components to better characterize the scene. We present here two methods of component detection: one based on a watershed algorithm and the other based on a Bayesian approach. We tested both methods on a set of simulated SWIM data, the Ku-band real aperture radar carried on the CFOSAT (China-France Oceanography Satellite) mission, whose launch is planned for mid-2018. We present the results and the limits of both approaches and show that the Bayesian method can also be applied to other kinds of wave spectra observations, such as those obtained with the radar KuROS, an airborne radar wave spectrometer.

  16. Inhibition of recombinase polymerase amplification by background DNA: a lateral flow-based method for enriching target DNA.

    PubMed

    Rohrman, Brittany; Richards-Kortum, Rebecca

    2015-02-03

    Recombinase polymerase amplification (RPA) may be used to detect a variety of pathogens, often after minimal sample preparation. However, previous work has shown that whole blood inhibits RPA. In this paper, we show that the concentrations of background DNA found in whole blood prevent the amplification of target DNA by RPA. First, using an HIV-1 RPA assay with known concentrations of nonspecific background DNA, we show that RPA tolerates more background DNA when higher HIV-1 target concentrations are present. Then, using three additional assays, we demonstrate that the maximum amount of background DNA that may be tolerated in RPA reactions depends on the DNA sequences used in the assay. We also show that changing the RPA reaction conditions, such as incubation time and primer concentration, has little effect on the ability of RPA to function when high concentrations of background DNA are present. Finally, we develop and characterize a lateral flow-based method for enriching the target DNA concentration relative to the background DNA concentration. This sample processing method enables RPA of 10(4) copies of HIV-1 DNA in a background of 0-14 μg of background DNA. Without lateral flow sample enrichment, the maximum amount of background DNA tolerated is 2 μg when 10(6) copies of HIV-1 DNA are present. This method requires no heating or other external equipment, may be integrated with upstream DNA extraction and purification processes, is compatible with the components of lysed blood, and has the potential to detect HIV-1 DNA in infant whole blood with high proviral loads.

  17. Knowledge acquisition for a simple expert controller

    NASA Technical Reports Server (NTRS)

    Bieker, B.

    1987-01-01

    A method is presented for process control which has the properties of being incremental, cyclic and top-down. It is described on the basis of the development of an expert controller for a simple but nonlinear controlled process. A quality comparison between the expert controller and a process operator shows the suitability of the method for knowledge acquisition.

  18. A Rationale for Mixed Methods (Integrative) Research Programmes in Education

    ERIC Educational Resources Information Center

    Niaz, Mansoor

    2008-01-01

    Recent research shows that research programmes (quantitative, qualitative and mixed) in education are not displaced (as suggested by Kuhn) but rather lead to integration. The objective of this study is to present a rationale for mixed methods (integrative) research programs based on contemporary philosophy of science (Lakatos, Giere, Cartwright,…

  19. The Application of Selected Network Methods for Reliable and Safe Transport by Small Commercial Vehicles

    NASA Astrophysics Data System (ADS)

    Matuszak, Zbigniew; Bartosz, Michał; Barta, Dalibor

    2016-09-01

    The article characterizes two network methods: the critical path method (CPM) and the program evaluation and review technique (PERT). Using the example of an international furniture company's product, it illustrates the application of these methods to the transport of cargo (furniture elements). Moreover, the study presents diagrams for the transportation of cargo from the individual component producers to the final destination, the showroom. Calculations were based on the transportation of furniture elements by small commercial vehicles.
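
    The CPM forward/backward pass is simple enough to sketch; the activity network below is hypothetical and only illustrates how the critical chain of transport activities would be identified.

```python
def critical_path(tasks):
    """Classic CPM.  `tasks` maps name -> (duration, [predecessors]); durations are
    assumed positive.  Returns project duration and the critical activities."""
    es, ef = {}, {}
    remaining = dict(tasks)
    while remaining:                                  # forward pass: earliest start/finish
        for name, (dur, preds) in list(remaining.items()):
            if all(p in ef for p in preds):
                es[name] = max((ef[p] for p in preds), default=0.0)
                ef[name] = es[name] + dur
                del remaining[name]
    duration = max(ef.values())
    lf, ls = {}, {}
    for name in sorted(ef, key=ef.get, reverse=True):  # backward pass in reverse EF order
        succs = [s for s, (_, preds) in tasks.items() if name in preds]
        lf[name] = min((ls[s] for s in succs), default=duration)
        ls[name] = lf[name] - tasks[name][0]
    critical = {n for n in tasks if abs(ls[n] - es[n]) < 1e-9}  # zero total float
    return duration, critical

# Hypothetical furniture-transport network (durations in hours).
tasks = {
    "load_components":  (2.0, []),
    "drive_leg_1":      (5.0, ["load_components"]),
    "drive_leg_2":      (4.0, ["load_components"]),
    "consolidate":      (1.0, ["drive_leg_1", "drive_leg_2"]),
    "deliver_showroom": (3.0, ["consolidate"]),
}
duration, critical = critical_path(tasks)
```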

  20. Lattice Boltzmann model for simulation of magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Chen, Shiyi; Chen, Hudong; Martinez, Daniel; Matthaeus, William

    1991-01-01

    A numerical method, based on a discrete Boltzmann equation, is presented for solving the equations of magnetohydrodynamics (MHD). The algorithm provides advantages similar to the cellular automaton method in that it is local and easily adapted to parallel computing environments. Because of much lower noise levels and less stringent requirements on lattice size, the method appears to be more competitive with traditional solution methods. Examples show that the model accurately reproduces both linear and nonlinear MHD phenomena.

  1. Deterministic analysis of extrinsic and intrinsic noise in an epidemiological model.

    PubMed

    Bayati, Basil S

    2016-05-01

    We couple a stochastic collocation method with an analytical expansion of the canonical epidemiological master equation to analyze the effects of both extrinsic and intrinsic noise. It is shown that, depending on the distribution of the extrinsic noise, the master equation yields quantitatively different results compared to using the expectation of the distribution for the stochastic parameter. This difference arises from the nonlinear terms in the master equation, and we show that the deviation away from the expectation of the extrinsic noise scales nonlinearly with the variance of the distribution. The method presented here converges linearly with respect to the number of particles in the system and exponentially with respect to the order of the polynomials used in the stochastic collocation calculation. This makes the method presented here more accurate than standard Monte Carlo methods, which suffer from slow, nonmonotonic convergence. In epidemiological terms, the results show that extrinsic fluctuations should be taken into account since they affect the speed of disease outbreaks, and that the gamma distribution should be used to model the basic reproductive number.

  2. Full-field stress determination in photoelasticity with phase shifting technique

    NASA Astrophysics Data System (ADS)

    Guo, Enhai; Liu, Yonggang; Han, Yongsheng; Arola, Dwayne; Zhang, Dongsheng

    2018-04-01

    Photoelasticity is an effective method for evaluating the stress and its spatial variations within a stressed body. In the present study, a method to determine the stress distribution by means of phase shifting and a modified shear-difference is proposed. First, the orientation of the first principal stress and the retardation between the principal stresses are determined in the full-field through phase shifting. Then, through bicubic interpolation and derivation of a modified shear-difference method, the internal stress is calculated from the point with a free boundary along its normal direction. A method to reduce integration error in the shear difference scheme is proposed and compared to the existing methods; the integration error is reduced when using theoretical photoelastic parameters to calculate the stress component with the same points. Results show that when the value of Δx/Δy approaches one, the error is minimum, and although the interpolation error is inevitable, it has limited influence on the accuracy of the result. Finally, examples are presented for determining the stresses in a circular plate and ring subjected to diametric loading. Results show that the proposed approach provides a complete solution for determining the full-field stresses in photoelastic models.

  3. Molecular Characterization of Thiols in Fossil Fuels by Michael Addition Reaction Derivatization and Electrospray Ionization Fourier Transform Ion Cyclotron Resonance Mass Spectrometry.

    PubMed

    Wang, Meng; Zhao, Suoqi; Liu, Xuxia; Shi, Quan

    2016-10-04

    Thiols widely occur in sediments and fossil fuels. However, the molecular composition of these compounds is unclear due to the lack of appropriate analytical methods. In this work, a characterization method for thiols in fossil fuels was developed on the basis of Michael addition reaction derivatization followed by electrospray ionization Fourier transform ion cyclotron resonance mass spectrometry (ESI FT-ICR MS). Model thiol compound studies showed that thiols were selectively reacted with phenylvinylsulfone and transformed to sulfones with greater than 98% conversions. This method was applied to a coker naphtha, light and heavy gas oils, and crude oils from various geological sources. The results showed that long alkyl chain thiols are readily present in petroleum, which have up to 30 carbon atoms. Large DBE dispersity of thiols indicates that naphthenic and aromatic thiols are also present in the petroleum. This method is capable of detecting thiol compounds in the part per million range by weight. This method allows characterization of thiols in a complex hydrocarbon matrix, which is complementary to the comprehensive analysis of sulfur compounds in fossil fuels.

  4. Effect of Facet Displacement on Radiation Field and Its Application for Panel Adjustment of Large Reflector Antenna

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Lian, Peiyuan; Zhang, Shuxin; Xiang, Binbin; Xu, Qian

    2017-05-01

    Large reflector antennas are widely used in radar, satellite communication, radio astronomy, and so on. The rapid developments in these fields have created demands for better performance and higher surface accuracy. However, low accuracy and low efficiency are common disadvantages of traditional panel alignment and adjustment. In order to improve the surface accuracy of large reflector antennas, a new method is presented to determine panel adjustment values from the far-field pattern. Based on the method of Physical Optics (PO), the effect of panel facet displacement on the radiation field is derived. Then a linear system relating the panel adjustment vector to the far-field pattern is constructed. Using Singular Value Decomposition (SVD), the adjustment values for all panel adjusters are obtained by solving the linear equations. An experiment is conducted on a 3.7 m reflector antenna with 12 segmented panels. The results of simulation and test are similar, which shows that the presented method is feasible. Moreover, the discussion of validation shows that the method can be used for many cases of reflector shape. The proposed research provides guidance for adjusting surface panels efficiently and accurately.
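
    The final solve is a least-squares problem that a truncated-SVD pseudo-inverse handles directly; in the sketch below the sensitivity matrix A, the sampled pattern correction, and the truncation level rcond are all assumptions standing in for the quantities derived in the paper.

```python
import numpy as np

def solve_panel_adjustments(A, delta_pattern, rcond=1e-3):
    """Least-squares solve of a linearized relation  A @ x ≈ delta_pattern, where
    column j of A holds the (assumed) sensitivity of the sampled far-field values
    to adjuster j and delta_pattern is the desired field correction.  Truncating
    small singular values regularizes poorly observable adjuster combinations."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]                              # drop near-null singular directions
    x = Vt[keep].T @ ((U[:, keep].T @ delta_pattern) / s[keep])
    return x
```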

  5. Comparison of analytical methods for the determination of histamine in reference canned fish samples

    NASA Astrophysics Data System (ADS)

    Jakšić, S.; Baloš, M. Ž.; Mihaljev, Ž.; Prodanov Radulović, J.; Nešić, K.

    2017-09-01

    Two screening methods for histamine in canned fish, an enzymatic test and a competitive direct enzyme-linked immunosorbent assay (CD-ELISA), were compared with the reversed-phase liquid chromatography (RP-HPLC) standard method. For the enzymatic and CD-ELISA methods, determination was conducted according to the producers' manuals. For RP-HPLC, histamine was derivatized with dansyl chloride, followed by RP-HPLC and diode array detection. Results of the analysis of canned fish, supplied as reference samples for proficiency testing, showed good agreement when histamine was present at higher concentrations (above 100 mg kg-1). At a lower level (16.95 mg kg-1), the enzymatic test produced somewhat higher results. Generally, analysis of four reference samples by CD-ELISA and RP-HPLC showed good agreement for histamine determination (r = 0.977 in the concentration range 16.95-216 mg kg-1). The results show that the applied enzymatic test and CD-ELISA appear to be suitable screening methods for the determination of histamine in canned fish.

  6. Historical overfishing and the recent collapse of coastal ecosystems

    USGS Publications Warehouse

    Jackson, J.B.C.; Kirby, M.X.; Berger, W.H.; Bjorndal, K.A.; Botsford, L.W.; Bourque, B.J.; Bradbury, R.; Cooke, R.; Erlandson, J.; Estes, J.A.; Hughes, T.P.; Kidwell, S.; Lange, C.B.; Lenihan, H.S.; Pandolfi, J.M.; Peterson, C.H.; Steneck, R.S.; Tegner, M.J.; Warner, R.

    2001-01-01

    A method for calculating parameters necessary to maintain stable populations is described and the management implications of the method are discussed. This method depends upon knowledge of the population mortality rate schedule, the age at which the species reaches maturity, and recruitment rates or age ratios in the population. Four approaches are presented which yield information about the status of the population: (1) necessary production for a stable population, (2) allowable mortality for a stable population, (3) annual rate of change in population size, and (4) age ratios in the population which yield a stable condition. General formulas for these relationships, and formulas for several special cases, are presented. Tables are also presented showing production required to maintain a stable population with the simpler (more common) mortality and fecundity schedules.

  7. Using Toulmin analysis to analyse an instructor's proof presentation in abstract algebra

    NASA Astrophysics Data System (ADS)

    Fukawa-connelly, Timothy

    2014-01-01

    This paper provides a method for analysing undergraduate teaching of proof-based courses using Toulmin's model (1969) of argumentation. It presents a case study of one instructor's presentation of proofs. The analysis shows that the instructor presents different levels of detail in different proofs; thus, the students have an inconsistent set of written models for their work. Similarly, the analysis shows that the details the instructor says aloud differ from what she writes down. Although her verbal commentary provides additional detail and appears to have pedagogical value, for instance, by modelling thinking that supports proof writing, this value might be better realized if she were to change her teaching practices.

  8. Simulation of two-phase flow in horizontal fracture networks with numerical manifold method

    NASA Astrophysics Data System (ADS)

    Ma, G. W.; Wang, H. D.; Fan, L. F.; Wang, B.

    2017-10-01

    The paper presents the simulation of two-phase flow in discrete fracture networks with the numerical manifold method (NMM). In the present method, each fluid phase is considered to be confined within the assumed discrete interfaces. The homogeneous model is modified to approximate the mixed fluids. A new mathematical cover formation for fracture intersections is proposed to satisfy mass conservation. NMM simulations of two-phase flow in a single fracture, an intersection, and a fracture network are illustrated graphically and validated against the analytical method or the finite element method. Results show that the motion of the discrete interface depends significantly on the ratio of the mobilities of the two fluids rather than on the value of the mobility itself. The variation of fluid velocity in each fracture segment and the driven fluid content are also influenced by the mobility ratio. The advantages of NMM in the simulation of two-phase flow in a fracture network are demonstrated in the present study, which can be further developed for practical engineering applications.

  9. A new method for constructing networks from binary data

    NASA Astrophysics Data System (ADS)

    van Borkulo, Claudia D.; Borsboom, Denny; Epskamp, Sacha; Blanken, Tessa F.; Boschloo, Lynn; Schoevers, Robert A.; Waldorp, Lourens J.

    2014-08-01

    Network analysis is entering fields where network structures are unknown, such as psychology and the educational sciences. A crucial step in the application of network models lies in the assessment of network structure. Current methods either have serious drawbacks or are only suitable for Gaussian data. In the present paper, we present a method for assessing network structures from binary data. Although models for binary data are infamous for their computational intractability, we present a computationally efficient model for estimating network structures. The approach, which is based on Ising models as used in physics, combines logistic regression with model selection based on a Goodness-of-Fit measure to identify relevant relationships between variables that define connections in a network. A validation study shows that this method succeeds in revealing the most relevant features of a network for realistic sample sizes. We apply our proposed method to estimate the network of depression and anxiety symptoms from symptom scores of 1108 subjects. Possible extensions of the model are discussed.
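
    The core idea, node-wise penalized logistic regression followed by symmetrization, can be sketched with standard tooling; the fixed L1 penalty C below is an assumption for illustration, whereas the paper selects the penalty per node with a goodness-of-fit criterion.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def binary_network(data, C=0.3):
    """Estimate an Ising-like network from binary (0/1) data of shape
    (n_subjects, n_variables) by regressing each variable on all others with
    L1-penalized logistic regression and symmetrizing the coefficients.
    Sketch of the neighborhood-selection idea only; penalty choice is naive."""
    n_vars = data.shape[1]
    W = np.zeros((n_vars, n_vars))
    for j in range(n_vars):
        X = np.delete(data, j, axis=1)
        y = data[:, j]
        model = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
        W[j, :] = np.insert(model.coef_.ravel(), j, 0.0)   # re-align to full variable set
    return 0.5 * (W + W.T)                                 # average to symmetrize edge weights
```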

  10. DBS-LC-MS/MS assay for caffeine: validation and neonatal application.

    PubMed

    Bruschettini, Matteo; Barco, Sebastiano; Romantsik, Olga; Risso, Francesco; Gennai, Iulian; Chinea, Benito; Ramenghi, Luca A; Tripodi, Gino; Cangemi, Giuliana

    2016-09-01

    DBS might be an appropriate microsampling technique for therapeutic drug monitoring of caffeine in infants. Nevertheless, its application presents several issues that still limit its use. This paper describes a validated DBS-LC-MS/MS method for caffeine. The results of the method validation showed a hematocrit dependence. In the analysis of 96 paired plasma and DBS clinical samples, caffeine levels measured in DBS were statistically significantly lower than in plasma, but the observed differences were independent of hematocrit. These results clearly show the need for extensive validation with real-life samples for DBS-based methods. DBS-LC-MS/MS can be considered a good alternative to traditional methods for therapeutic drug monitoring or PK studies in preterm infants.

  11. Comparison of direct boiling method with commercial kits for extracting fecal microbiome DNA by Illumina sequencing of 16S rRNA tags.

    PubMed

    Peng, Xin; Yu, Ke-Qiang; Deng, Guan-Hua; Jiang, Yun-Xia; Wang, Yu; Zhang, Guo-Xia; Zhou, Hong-Wei

    2013-12-01

    Low cost and high throughput capacity are major advantages of using next generation sequencing (NGS) techniques to determine metagenomic 16S rRNA tag sequences. These methods have significantly changed our view of microorganisms in the fields of human health and environmental science. However, DNA extraction using commercial kits has the shortcomings of high cost and time constraints. In the present study, we evaluated the determination of fecal microbiomes using a direct boiling method compared with five different commercial extraction methods, e.g., Qiagen and MO BIO kits. Principal coordinate analysis (PCoA) using UniFrac distances and clustering showed that direct boiling of a wide range of feces concentrations gave a similar pattern of bacterial communities as those obtained from most of the commercial kits, with the exception of the MO BIO method. The fecal concentration used in the boiling method affected the estimation of α-diversity indices; otherwise, results were generally comparable between the boiling and commercial methods. The operational taxonomic units (OTUs) determined through direct boiling showed frequencies highly consistent with those determined through most of the commercial methods; even those for the MO BIO kit were also obtained by the direct boiling method with high confidence. The present study suggests that direct boiling could be used to determine the fecal microbiome, and using this method would significantly reduce the cost and improve the efficiency of sample preparation for studying gut microbiome diversity. © 2013 Elsevier B.V. All rights reserved.

  12. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.

  13. Hybrid massively parallel fast sweeping method for static Hamilton–Jacobi equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detrixhe, Miles, E-mail: mdetrixhe@engineering.ucsb.edu; University of California Santa Barbara, Santa Barbara, CA, 93106; Gibou, Frédéric, E-mail: fgibou@engineering.ucsb.edu

    The fast sweeping method is a popular algorithm for solving a variety of static Hamilton–Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of the heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence and parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
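
    For reference, the serial baseline that such parallel schemes accelerate is the standard Gauss-Seidel-style fast sweeping iteration; the sketch below is that single-threaded baseline for the eikonal special case on a 2D grid, not the hybrid parallel algorithm of the paper.

```python
import numpy as np

def fast_sweeping_2d(speed, sources, h=1.0, n_sweeps=8):
    """Serial fast sweeping solver for the 2D eikonal equation |grad T| = 1/speed
    with Godunov upwinding; `sources` is a list of (i, j) grid indices where T = 0."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    for (i, j) in sources:
        T[i, j] = 0.0
    orders = [(range(ny), range(nx)), (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)), (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for sweep in range(n_sweeps):                       # alternate the four sweep directions
        iy, ix = orders[sweep % 4]
        for i in iy:
            for j in ix:
                a = min(T[i - 1, j] if i > 0 else np.inf, T[i + 1, j] if i < ny - 1 else np.inf)
                b = min(T[i, j - 1] if j > 0 else np.inf, T[i, j + 1] if j < nx - 1 else np.inf)
                if np.isinf(a) and np.isinf(b):
                    continue                            # front has not reached this cell yet
                f = h / speed[i, j]
                if abs(a - b) >= f:
                    t_new = min(a, b) + f               # one-sided update
                else:
                    t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                T[i, j] = min(T[i, j], t_new)
    return T
```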

  14. Particle Swarm-Based Translation Control for Immersed Tunnel Element in the Hong Kong-Zhuhai-Macao Bridge Project

    NASA Astrophysics Data System (ADS)

    Li, Jun-jun; Yang, Xiao-jun; Xiao, Ying-jie; Xu, Bo-wei; Wu, Hua-feng

    2018-03-01

    The immersed tunnel is an important part of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project. In immersed tunnel element floating transportation, translation, which includes straight and transverse movements, is the main working mode. To decide the magnitude and direction of the towing force for each tug, a particle swarm-based translation control method is presented for the non-powered immersed tunnel element. A linear weighted logarithmic function is exploited to avoid weak subgoals. In simulation, the particle swarm-based control method is evaluated and compared with the traditional empirical method in the case of the HZMB project. Simulation results show that the presented method delivers a performance improvement in terms of the enhanced surplus towing force.
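
    A generic particle swarm loop of the kind such a controller can be built around; the decision vector, bounds, and cost function (penalizing position error and weak subgoals) are hypothetical placeholders, not the paper's formulation.

```python
import numpy as np

def pso_minimize(cost, dim, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=None):
    """Standard global-best particle swarm optimizer.  `cost` maps a decision vector
    (e.g., towing force magnitude and direction per tug, an assumption of this sketch)
    to a scalar penalty; `bounds` = (lower, upper) box constraints."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # inertia + cognitive + social
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```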

  15. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    NASA Astrophysics Data System (ADS)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but giving the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).

  16. Ranking the spreading ability of nodes in network core

    NASA Astrophysics Data System (ADS)

    Tong, Xiao-Lei; Liu, Jian-Guo; Wang, Jiang-Pan; Guo, Qiang; Ni, Jing

    2015-11-01

    Ranking nodes by their spreading ability in complex networks is of vital significance to better understand the network structure and to spread information more efficiently. The k-shell decomposition method identifies the most influential nodes, namely the network core, but assigns them the same ks value regardless of their different spreading influence. In this paper, we present an improved method, based on the k-shell decomposition method and closeness centrality (CC), to rank the node spreading influence of the network core. Experimental results on data from the scientific collaboration network and the U.S. aviation network show that the accuracy of the presented method is higher by 31% and 45% than that obtained with the degree k, and by 32% and 31% than that obtained with the betweenness.
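
    Both ingredients are available in standard graph libraries, so the general idea can be sketched directly; the tie-breaking combination below (sort by ks, then by CC) is an illustrative assumption rather than the paper's exact ranking formula.

```python
import networkx as nx

def rank_core_nodes(G):
    """Rank nodes by k-shell index, breaking ties within each shell by closeness
    centrality; a sketch of the ks + CC idea, not the paper's exact combination."""
    G = nx.Graph(G)
    G.remove_edges_from(nx.selfloop_edges(G))   # core_number requires no self-loops
    ks = nx.core_number(G)                      # k-shell index of every node
    cc = nx.closeness_centrality(G)             # distinguishes nodes inside the same shell
    return sorted(G.nodes, key=lambda n: (ks[n], cc[n]), reverse=True)

# Example on a small benchmark graph.
ranking = rank_core_nodes(nx.karate_club_graph())
```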

  17. A new Lagrangian method for three-dimensional steady supersonic flows

    NASA Technical Reports Server (NTRS)

    Loh, Ching-Yuen; Liou, Meng-Sing

    1993-01-01

    In this report, the new Lagrangian method introduced by Loh and Hui is extended for three-dimensional, steady supersonic flow computation. The derivation of the conservation form and the solution of the local Riemann problem using the Godunov and high-resolution TVD (total variation diminishing) schemes are presented. This new approach is accurate and robust, capable of handling complicated geometry and interactions between discontinuous waves. Test problems show that the extended Lagrangian method retains all the advantages of the two-dimensional method (e.g., crisp resolution of a slip surface (contact discontinuity) and automatic grid generation). In this report, we also suggest a novel three-dimensional Riemann problem in which interesting and intricate flow features are present.

  18. [Development and validation of an analytical method to quantify residues of cleaning products able to inactivate prion].

    PubMed

    Briot, T; Robelet, A; Morin, N; Riou, J; Lelièvre, B; Lebelle-Dehaut, A-V

    2016-07-01

    In this study, a novel analytical method to quantify prion-inactivating detergent in rinsing waters coming from the washer-disinfector of a hospital sterilization unit has been developed. The final aim was to obtain an easy and functional method for routine hospital use which does not require the services of the cleaning product manufacturer. An ICP-MS method based on the potassium assay of the washer-disinfector's rinsing waters was developed. Potassium hydroxide is present in the composition of the three prion-inactivating detergents currently on the French market. The detergent used in this study was Actanios LDI(®) (Anios laboratories). A Passing and Bablok regression was used to compare the concentrations measured with the developed method and with the manufacturer's HPLC-UV method. According to the results obtained, the developed method is easy to use in a routine hospital process. The Passing and Bablok regression showed that there is no statistical difference between the two analytical methods during the second rinsing step. Besides, both methods were linear on the third rinsing step, with a 1.5 ppm difference between the concentrations measured by each method. This study shows that the ICP-MS method developed is nonspecific for the detergent, but specific for the potassium element, which is present in all prion-inactivating detergents currently on the French market. This method should be functional for all prion-inactivating detergents containing potassium, provided the sensitivity of the method is sufficient when the potassium concentration is very low in the detergent formulation. Copyright © 2016. Published by Elsevier Masson SAS.

  19. Radiation Heat Transfer Between Diffuse-Gray Surfaces Using Higher Order Finite Elements

    NASA Technical Reports Server (NTRS)

    Gould, Dana C.

    2000-01-01

    This paper presents recent work on developing methods for analyzing radiation heat transfer between diffuse-gray surfaces using p-version finite elements. The work was motivated by a thermal analysis of a High Speed Civil Transport (HSCT) wing structure which showed the importance of radiation heat transfer throughout the structure. The analysis also showed that refining the finite element mesh to accurately capture the temperature distribution on the internal structure led to very large meshes with unacceptably long execution times. Traditional methods for calculating surface-to-surface radiation are based on assumptions that are not appropriate for p-version finite elements. Two methods for determining internal radiation heat transfer are developed for one and two-dimensional p-version finite elements. In the first method, higher-order elements are divided into a number of sub-elements. Traditional methods are used to determine radiation heat flux along each sub-element and then mapped back to the parent element. In the second method, the radiation heat transfer equations are numerically integrated over the higher-order element. Comparisons with analytical solutions show that the integration scheme is generally more accurate than the sub-element method. Comparison to results from traditional finite elements shows that significant reduction in the number of elements in the mesh is possible using higher-order (p-version) finite elements.

  20. Measurement of Intramolecular Energy Dissipation and Stiffness of a Single Peptide Molecule by Magnetically Modulated Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Kageshima, Masami; Takeda, Seiji; Ptak, Arkadiusz; Nakamura, Chikashi; Jarvis, Suzanne P.; Tokumoto, Hiroshi; Miyake, Jun

    2004-12-01

    A method for measuring intramolecular energy dissipation as well as stiffness variation in a single biomolecule in situ by atomic force microscopy (AFM) is presented. An AFM cantilever is magnetically modulated at an off-resonance frequency while it elongates a single peptide molecule in buffer solution. The molecular stiffness and the energy dissipation are measured via the amplitude and phase lag of the response signal. Data showing a peculiar feature in both the stiffness and dissipation profiles are presented. This suggests that the present method is more sensitive to the state of the molecule than the conventional force-elongation measurement is.

  1. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in a closed form but in infinite series which converge slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using FFT are discussed. Moreover, the boundary integral equations of combined single- and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.
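
    For reference, one standard combined single- and double-layer (Brakhage-Werner type) representation for two-dimensional exterior Helmholtz scattering is sketched below; the paper's exact formulation and coupling parameter may differ:

```latex
% Combined-layer representation of the scattered field with coupling parameter \eta \neq 0;
% enforcing the boundary condition on \Gamma yields a second-kind integral equation that is
% uniquely solvable at every wavenumber k > 0.
u^{s}(x) = \int_{\Gamma}\left[\frac{\partial G(x,y)}{\partial n_y} - i\eta\, G(x,y)\right]\varphi(y)\,ds_y,
\qquad
G(x,y) = \frac{i}{4}\, H_0^{(1)}\!\bigl(k\,\lvert x-y\rvert\bigr).
```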

  2. Application of the string method to the study of critical nuclei in capillary condensation.

    PubMed

    Qiu, Chunyin; Qian, Tiezheng; Ren, Weiqing

    2008-10-21

    We adopt a continuum description for the liquid-vapor phase transition in the framework of mean-field theory and use the string method to numerically investigate the critical nuclei for capillary condensation in a slit pore. This numerical approach allows us to determine the critical nuclei corresponding to saddle points of the grand potential function, in which the chemical potential is specified at the outset. The string method locates the minimal energy path (MEP), which is the most probable transition pathway connecting two metastable/stable states in configuration space. From the MEP, the saddle point is determined and the corresponding energy barrier (in grand potential) is also obtained. Moreover, the MEP shows how the new phase (liquid) grows out of the old phase (vapor) along the most probable transition pathway, from the birth of a critical nucleus to its subsequent expansion. Our calculations run from partial wetting to complete wetting with a variable strength of the attractive wall potential. In the latter case, the string method presents a unified way of computing the critical nuclei, from film formation at the solid surface to bulk condensation via a liquid bridge. The present application of the string method to the numerical study of capillary condensation shows the great power of this method in evaluating the critical nuclei in various liquid-vapor phase transitions.
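
    A minimal sketch of the simplified string method on a two-dimensional model potential is given below; the double-well potential, step size, and image count are illustrative assumptions, not the mean-field free-energy functional used in the paper:

```python
# Simplified string method: relax each image by steepest descent, then redistribute
# the images to equal arc length along the string; the converged string approximates
# the minimal energy path (MEP) and its highest-energy image approximates the saddle.
import numpy as np

def V_grad(x):
    # gradient of a double-well potential V = (x^2 - 1)^2 + y^2 (illustrative only)
    return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])

def string_method(a, b, n_images=20, dt=0.01, n_iter=5000):
    s = np.linspace(0.0, 1.0, n_images)[:, None]
    path = (1 - s) * a + s * b
    path[:, 1] += np.sin(np.pi * s[:, 0])               # start from a bent string
    for _ in range(n_iter):
        path -= dt * np.array([V_grad(x) for x in path])        # descent step
        # reparameterize to equal arc length
        d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(path, axis=0), axis=1))]
        d /= d[-1]
        path = np.column_stack([np.interp(np.linspace(0, 1, n_images), d, path[:, k])
                                for k in range(path.shape[1])])
    return path

mep = string_method(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))
```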

  3. Using Semantic Association to Extend and Infer Literature-Oriented Relativity Between Terms.

    PubMed

    Cheng, Liang; Li, Jie; Hu, Yang; Jiang, Yue; Liu, Yongzhuang; Chu, Yanshuo; Wang, Zhenxing; Wang, Yadong

    2015-01-01

    Related terms often appear together in the literature. Methods have been presented for weighting the relativity of pairwise terms by their co-occurring literature and inferring new relationships. Terms in the literature also appear in the directed acyclic graphs of ontologies, such as Gene Ontology and Disease Ontology. Therefore, semantic association between terms may help establish relativity between terms in the literature. However, current methods do not use these associations. In this paper, an adjusted R-scaled score (ARSS) based on information content (ARSSIC) method is introduced to infer new relationships between terms. First, the set inclusion relationship between ontology terms was exploited to extend relationships between these terms and the literature. Next, the ARSS method was presented to measure relativity between terms across ontologies according to these extended relationships. Then, the ARSSIC method, using ratios of the information content shared by terms' ancestors, was designed to infer new relationships between terms across ontologies. The result of the experiment shows that ARSS identified more pairs of statistically significant terms based on corresponding gene sets than other methods. And the high average area under the receiver operating characteristic curve (0.9293) shows that ARSSIC achieved a high true positive rate and a low false positive rate. Data is available at http://mlg.hit.edu.cn/ARSSIC/.

  4. An unconditionally stable method for numerically solving solar sail spacecraft equations of motion

    NASA Astrophysics Data System (ADS)

    Karwas, Alex

    Solar sails use the endless supply of the Sun's radiation to propel spacecraft through space. The sails use the momentum transfer from the impinging solar radiation to provide thrust to the spacecraft while expending zero fuel. Recently, the first solar sail spacecraft, or sailcraft, named IKAROS completed a successful mission to Venus and proved the concept of solar sail propulsion. Sailcraft experimental data are difficult to gather due to the large expense of space travel; therefore, a reliable and accurate computational method is needed to make the process more efficient. Presented in this document is a new approach to simulating solar sail spacecraft trajectories. The new method provides unconditionally stable numerical solutions for trajectory propagation and includes an improved physical description over other methods. The unconditional stability of the new method means that a unique numerical solution is always determined. The improved physical description of the trajectory provides a numerical solution and time derivatives that are continuous throughout the entire trajectory. The error of the continuous numerical solution is also known for the entire trajectory. Optimal control for maximizing thrust is also provided within the framework of the new method. Verification of the new approach is presented through a mathematical description and through numerical simulations. The mathematical description provides details of the sailcraft equations of motion, the numerical method used to solve the equations, and the formulation for implementing the equations of motion into the numerical solver. Previous work in the field is summarized to show that the new approach can act as a replacement for previous trajectory propagation methods. A code was developed to perform the simulations and it is also described in this document. Results of the simulations are compared to the flight data from the IKAROS mission. Comparison of the two sets of data shows that the new approach is capable of accurately simulating sailcraft motion. Sailcraft and spacecraft simulations are compared to flight data and to other numerical solution techniques. The new formulation shows an increase in accuracy over a widely used trajectory propagation technique. Simulations for two-dimensional, three-dimensional, and variable attitude trajectories are presented to show the multiple capabilities of the new technique. An element of optimal control is also part of the new technique. An additional equation is added to the sailcraft equations of motion that maximizes thrust in a specific direction. A technical description and results of an example optimization problem are presented. The spacecraft attitude dynamics equations take the simulation a step further by providing control torques using the angular rate and acceleration outputs of the numerical formulation.

  5. The Formation of Initial Components of Number Concepts in Mexican Children

    ERIC Educational Resources Information Center

    Solovieva, Yulia; Quintanar, Luis; Ortiz, Gerardo

    2012-01-01

    The initial formation of the number concept represents one of the essential aspects of learning mathematics at primary school. Children commonly show strong difficulties and an absence of comprehension of the symbolic and abstract nature of the concept of number. The objective of the present study was to show the effectiveness of an original method for…

  6. The dimension split element-free Galerkin method for three-dimensional potential problems

    NASA Astrophysics Data System (ADS)

    Meng, Z. J.; Cheng, H.; Ma, L. D.; Cheng, Y. M.

    2018-06-01

    This paper presents the dimension split element-free Galerkin (DSEFG) method for three-dimensional potential problems, and the corresponding formulae are obtained. The main idea of the DSEFG method is that a three-dimensional potential problem can be transformed into a series of two-dimensional problems. For these two-dimensional problems, the improved moving least-squares (IMLS) approximation is applied to construct the shape function, which uses an orthogonal function system with a weight function as the basis functions. The Galerkin weak form is applied to obtain a discretized system equation, and the penalty method is employed to impose the essential boundary condition. The finite difference method is selected in the splitting direction. For the purposes of demonstration, some selected numerical examples are solved using the DSEFG method. The convergence study and error analysis of the DSEFG method are presented. The numerical examples show that the DSEFG method has greater computational precision and computational efficiency than the IEFG method.

  7. The Introduction of Crystallographic Concepts Using Lap-Dissolve Slide Techniques.

    ERIC Educational Resources Information Center

    Bodner, George M.; And Others

    1980-01-01

    Describes a method using lap-dissolve slide techniques with two or more slide projectors focused on a single screen for presenting visual effects that show structural features in extended arrays of atoms, or ions involving up to several hundred atoms. Presents an outline of an introduction to the structures of crystalline solids. (CS)

  8. Show the Data, Don't Conceal Them

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2011-01-01

    Current standards of data presentation and analysis in biological journals often fall short of ideal. This is the first of a planned series of short articles, to be published in a number of journals, aiming to highlight the principles of clear data presentation and appropriate statistical analysis. This article considers the methods used to show…

  9. Strain typing of acetic acid bacteria responsible for vinegar production by the submerged elaboration method.

    PubMed

    Fernández-Pérez, Rocío; Torres, Carmen; Sanz, Susana; Ruiz-Larrea, Fernanda

    2010-12-01

    Strain typing of 103 acetic acid bacteria isolates from vinegars elaborated by the submerged method from ciders, wines and spirit ethanol was carried out in this study. Two different molecular methods were utilised: pulsed field gel electrophoresis (PFGE) of total DNA digests with a number of restriction enzymes, and enterobacterial repetitive intergenic consensus (ERIC)-PCR analysis. The comparative study of both methods showed that PFGE of SpeI restriction digests of total DNA was a suitable method for strain typing and for determining which strains were present in vinegar fermentations. Results showed that strains of the species Gluconacetobacter europaeus were the most frequent leader strains of fermentations by the submerged method in the studied vinegars, and among them strain R1 was the predominant one. Results showed as well that mixed populations (at least two different strains) occurred in vinegars from cider and wine, whereas unique strains were found in spirit vinegars, which offered the most stressful conditions for bacterial growth. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Subsurface Void Characterization with 3-D Time Domain Full Waveform Tomography.

    NASA Astrophysics Data System (ADS)

    Nguyen, T. D.

    2017-12-01

    A new three dimensional full waveform inversion (3-D FWI) method is presented for subsurface site characterization at engineering scales (less than 30 m in depth). The method is based on a solution of 3-D elastic wave equations for forward modeling, and a cross-adjoint gradient approach for model updating. The staggered-grid finite-difference technique is used to solve the wave equations, together with implementation of the perfectly matched layer condition for boundary truncation. The gradient is calculated from the forward and backward wavefields. Reversed-in-time displacement residuals are induced as multiple sources at all receiver locations for the backward wavefield. The capability of the presented FWI method is tested on both synthetic and field experimental datasets. The test configuration uses 96 receivers and 117 shots at equal spacing (Fig 1). The inversion results from synthetic data show the ability of characterizing variable low- and high-velocity layers with embedded void (Figs 2-3). The synthetic study shows good potential for detection of voids and abnormalities in the field.

  11. Identifiability and Identification of Trace Continuous Pollutant Source

    PubMed Central

    Qu, Hongquan; Liu, Shouwen; Pang, Liping; Hu, Tao

    2014-01-01

    Accidental pollution events often threaten people's health and lives, and prompt identification of the pollutant source is necessary so that remedial actions can be taken. In this paper, a trace continuous pollutant source identification method is developed to identify a sudden continuous emission pollutant source in an enclosed space. The location probability model is set up first, and then the identification method is realized by searching for the global optimal objective value of the location probability. In order to discuss the identifiability performance of the presented method, the concept of a synergy degree of velocity fields is introduced in order to quantitatively analyze the impact of the velocity field on the identification performance. Based on this concept, some simulation cases were conducted. The application conditions of this method are obtained from the simulation studies. In order to verify the presented method, we designed an experiment and identified an unknown source appearing in the experimental space. The result showed that the method can identify a sudden trace continuous source when the studied situation satisfies the application conditions. PMID:24892041

  12. Towards a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Hilburger, Mark W.

    2003-01-01

    A probability-based analysis method for predicting buckling loads of compression-loaded laminated-composite shells is presented, and its potential as a basis for a new shell-stability design criterion is demonstrated and discussed. In particular, a database containing information about specimen geometry, material properties, and measured initial geometric imperfections for a selected group of laminated-composite cylindrical shells is used to calculate new buckling-load "knockdown factors". These knockdown factors are shown to be substantially improved, and hence much less conservative, than the corresponding deterministic knockdown factors that are presently used by industry. The probability integral associated with the analysis is evaluated by using two methods; that is, by using the exact Monte Carlo method and by using an approximate First-Order Second-Moment method. A comparison of the results from these two methods indicates that the First-Order Second-Moment method yields results that are conservative for the shells considered. Furthermore, the results show that the improved, reliability-based knockdown factor presented always yields a safe estimate of the buckling load for the shells examined.

  13. Differences between Presentation Methods in Working Memory Procedures: A Matter of Working Memory Consolidation

    PubMed Central

    Ricker, Timothy J.; Cowan, Nelson

    2014-01-01

    Understanding forgetting from working memory, the memory used in ongoing cognitive processing, is critical to understanding human cognition. In the last decade a number of conflicting findings have been reported regarding the role of time in forgetting from working memory. This has led to a debate concerning whether longer retention intervals necessarily result in more forgetting. An obstacle to directly comparing conflicting reports is a divergence in methodology across studies. Studies which find no forgetting as a function of retention-interval duration tend to use sequential presentation of memory items, while studies which find forgetting as a function of retention-interval duration tend to use simultaneous presentation of memory items. Here, we manipulate the duration of retention and the presentation method of memory items, presenting items either sequentially or simultaneously. We find that these differing presentation methods can lead to different rates of forgetting because they tend to differ in the time available for consolidation into working memory. The experiments detailed here show that equating the time available for working memory consolidation equates the rates of forgetting across presentation methods. We discuss the meaning of this finding in the interpretation of previous forgetting studies and in the construction of working memory models. PMID:24059859

  14. The complexity of classical music networks

    NASA Astrophysics Data System (ADS)

    Rolla, Vitor; Kestenberg, Juliano; Velho, Luiz

    2018-02-01

    Previous works suggest that musical networks often present the scale-free and the small-world properties. From a musician's perspective, the most important aspect missing in those studies was harmony. In addition to that, the previous works made use of outdated statistical methods. Traditionally, least-squares linear regression is utilised to fit a power law to a given data set. However, according to Clauset et al. such a traditional method can produce inaccurate estimates for the power law exponent. In this paper, we present an analysis of musical networks which considers the existence of chords (an essential element of harmony). Here we show that only 52.5% of music in our database presents the scale-free property, while 62.5% of those pieces present the small-world property. Previous works argue that music is highly scale-free; consequently, it sounds appealing and coherent. In contrast, our results show that not all pieces of music present the scale-free and the small-world properties. In summary, this research is focused on the relationship between musical notes (Do, Re, Mi, Fa, Sol, La, Si, and their sharps) and accompaniment in classical music compositions. More information about this research project is available at https://eden.dei.uc.pt/~vitorgr/MS.html.
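
    For reference, the continuous-data maximum-likelihood estimator of the power-law exponent recommended by Clauset et al. (the discrete-data version used for degree distributions differs only in detail) is:

```latex
\hat{\alpha} \;=\; 1 + n\left[\sum_{i=1}^{n}\ln\frac{x_i}{x_{\min}}\right]^{-1},
\qquad
\sigma_{\hat{\alpha}} \;\approx\; \frac{\hat{\alpha}-1}{\sqrt{n}},
```

    where x_min is the lower cutoff above which power-law behaviour is assumed to hold.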

  15. Cell tracking for cell image analysis

    NASA Astrophysics Data System (ADS)

    Bise, Ryoma; Sato, Yoichi

    2017-04-01

    Cell image analysis is important for research and discovery in biology and medicine. In this paper, we present our cell tracking methods, which are capable of obtaining fine-grained cell behavior metrics. In order to address difficulties under dense culture conditions, where cell detection cannot be done reliably since cells often touch with blurry intercellular boundaries, we propose two methods: global data association and jointly solving cell detection and association. We also show the effectiveness of the proposed methods by applying them to biological research.

  16. A new Newton-like method for solving nonlinear equations.

    PubMed

    Saheya, B; Chen, Guo-Qing; Sui, Yun-Kang; Wu, Cai-Ying

    2016-01-01

    This paper presents an iterative scheme for solving nonlinear equations. We establish a new rational approximation model with linear numerator and denominator which generalizes the local linear model. We then employ the new approximation for nonlinear equations and propose an improved Newton's method to solve them. The new method revises the Jacobian matrix by a rank-one matrix at each iteration and attains the quadratic convergence property. The numerical performance and comparison show that the proposed method is efficient.
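
    The abstract does not give the rational-approximation update itself, so the sketch below shows the classical Broyden rank-one Jacobian update as a reference point for this family of Newton-like schemes; the test function and starting point are illustrative assumptions:

```python
# Broyden's classical rank-one update: the Jacobian estimate B is corrected by a
# rank-one matrix at every iteration, avoiding repeated Jacobian evaluations.
import numpy as np

def broyden(F, x0, B0, tol=1e-10, max_iter=100):
    x, B = np.asarray(x0, float), np.asarray(B0, float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)                 # Newton-like step
        x_new = x + s
        y = F(x_new) - Fx
        B += np.outer(y - B @ s, s) / (s @ s)       # rank-one correction
        x = x_new
    return x

# example: F(x, y) = (x^2 + y^2 - 4, x*y - 1)
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
root = broyden(F, x0=[2.0, 0.5], B0=np.eye(2))
print(root, F(root))
```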

  17. The convergence study of the homotopy analysis method for solving nonlinear Volterra-Fredholm integrodifferential equations.

    PubMed

    Ghanbari, Behzad

    2014-01-01

    We aim to study the convergence of the homotopy analysis method (HAM in short) for solving special nonlinear Volterra-Fredholm integrodifferential equations. The sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are also presented to demonstrate the validity and applicability of the technique. Comparison of the results obtained by HAM with the exact solution shows that the method is reliable and capable of providing analytic treatment for solving such equations.

  18. Adaptive allocation for binary outcomes using decreasingly informative priors.

    PubMed

    Sabo, Roy T

    2014-01-01

    A method of outcome-adaptive allocation is presented using Bayes methods, where a natural lead-in is incorporated through the use of informative yet skeptical prior distributions for each treatment group. These prior distributions are modeled on unobserved data in such a way that their influence on the allocation scheme decreases as the trial progresses. Simulation studies show this method to behave comparably to the Bayesian adaptive allocation method described by Thall and Wathen (2007), who incorporate a natural lead-in through sample-size-based exponents.
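
    A hedged Beta-Binomial sketch in the spirit of such designs is given below: each arm receives a skeptical prior whose pseudo-sample weight shrinks as enrollment grows, and the allocation probability is tempered in the Thall-Wathen style. The function name, prior parameters, and decay rule are illustrative assumptions, not the paper's exact specification:

```python
# Outcome-adaptive allocation sketch with a decreasingly informative (skeptical) prior.
import numpy as np

rng = np.random.default_rng(0)

def allocation_prob(yA, nA, yB, nB, n_total, N_max, a0=1.0, b0=1.0, n_draws=10000):
    w = max(0.0, 1.0 - n_total / N_max)          # prior weight decays with enrollment
    pA = rng.beta(w * a0 + yA + 1e-9, w * b0 + nA - yA + 1e-9, n_draws)
    pB = rng.beta(w * a0 + yB + 1e-9, w * b0 + nB - yB + 1e-9, n_draws)
    prob_A_better = np.mean(pA > pB)             # posterior P(p_A > p_B | data)
    c = n_total / (2.0 * N_max)                  # Thall-Wathen style tempering exponent
    num = prob_A_better ** c
    return num / (num + (1.0 - prob_A_better) ** c)   # probability of assigning arm A

# early in the trial the allocation stays near 0.5; it drifts toward the better arm later
print(allocation_prob(yA=8, nA=20, yB=4, nB=20, n_total=40, N_max=200))
```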

  19. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    PubMed

    de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics

  20. A reconsideration of negative ratings for network-based recommendation

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Ren, Liang; Lin, Wenbin

    2018-01-01

    Recommendation algorithms based on bipartite networks have become increasingly popular, thanks to their accuracy and flexibility. Currently, many of these methods ignore users' negative ratings. In this work, we propose a method to exploit negative ratings for the network-based inference algorithm. We find that negative ratings play a positive role regardless of sparsity of data sets. Furthermore, we improve the efficiency of our method and compare it with the state-of-the-art algorithms. Experimental results show that the present method outperforms the existing algorithms.
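
    For context, a minimal sketch of the standard network-based inference (mass-diffusion) recommender on a binary user-item matrix is shown below; the paper's treatment of negative ratings is not reproduced, and the toy matrix is an illustrative assumption:

```python
# Network-based inference (mass diffusion / ProbS) on a bipartite user-item network.
import numpy as np

def nbi_scores(A):
    """A: (n_users, n_items) 0/1 adoption matrix. Returns item scores per user."""
    ku = A.sum(axis=1, keepdims=True)            # user degrees
    ki = A.sum(axis=0, keepdims=True)            # item degrees
    ku[ku == 0] = 1
    ki[ki == 0] = 1
    # W[i, j] = sum_l A[l, i] * A[l, j] / (k(item_j) * k(user_l))
    W = (A / ku).T @ A / ki
    scores = A @ W.T                             # two diffusion steps: items -> users -> items
    scores[A > 0] = -np.inf                      # do not re-recommend collected items
    return scores

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(np.argsort(-nbi_scores(A), axis=1))       # items ranked per user
```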

  1. [Combined fat products: methodological possibilities for their identification].

    PubMed

    Viktorova, E V; Kulakova, S N; Mikhaĭlov, N A

    2006-01-01

    At the present time, a very topical problem is the falsification of milk fat. A number of methods were considered for detecting the authenticity of milk fat and the possibilities of distinguishing it from combined fat products. The analysis of modern approaches to the evaluation of milk fat authenticity showed that the main method for determining the nature of a fat is gas chromatography. A computer method for rapid identification of fat products is proposed for quickly obtaining information on whether the examined fat belongs to natural milk fat or is a combined fat product.

  2. Hybrid ODE/SSA methods and the cell cycle model

    NASA Astrophysics Data System (ADS)

    Wang, S.; Chen, M.; Cao, Y.

    2017-07-01

    Stochastic effects in cellular systems have been an important topic in systems biology. Stochastic modeling and simulation methods are important tools to study these effects. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows unique advantages in the modeling and simulation of biochemical systems. The efficiency of the hybrid method is usually limited by reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of hybrid methods with three widely used ODE solvers: RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented. A detailed discussion is presented for the performance of the three ODE solvers.
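
    A minimal Gillespie direct-method (SSA) sketch for a stochastic subsystem is given below for orientation; the hybrid coupling to an ODE solver developed in the paper is not shown, and the birth-death example rates are illustrative assumptions:

```python
# Gillespie direct-method stochastic simulation algorithm (SSA).
import numpy as np

rng = np.random.default_rng(1)

def ssa(x0, stoich, propensities, t_end):
    """x0: initial counts; stoich: (n_reactions, n_species) state changes;
    propensities: function x -> array of reaction rates."""
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)           # waiting time to the next reaction
        j = rng.choice(len(a), p=a / a0)         # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# example: birth-death process, 0 -> X (rate k1), X -> 0 (rate k2 * X)
stoich = np.array([[1], [-1]])
k1, k2 = 10.0, 0.1
t, x = ssa([0], stoich, lambda x: np.array([k1, k2 * x[0]]), t_end=100.0)
```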

  3. Correction of motion artifacts in endoscopic optical coherence tomography and autofluorescence images based on azimuthal en face image registration.

    PubMed

    Abouei, Elham; Lee, Anthony M D; Pahlevaninezhad, Hamid; Hohert, Geoffrey; Cua, Michelle; Lane, Pierre; Lam, Stephen; MacAulay, Calum

    2018-01-01

    We present a method for the correction of motion artifacts present in two- and three-dimensional in vivo endoscopic images produced by rotary-pullback catheters. This method can correct for cardiac/breathing-based motion artifacts and catheter-based motion artifacts such as nonuniform rotational distortion (NURD). This method assumes that en face tissue imaging contains slowly varying structures that are roughly parallel to the pullback axis. The method reduces motion artifacts using a dynamic time warping solution through a cost matrix that measures similarities between adjacent frames in en face images. We optimize and demonstrate the suitability of this method using a real and simulated NURD phantom and in vivo endoscopic pulmonary optical coherence tomography and autofluorescence images. Qualitative and quantitative evaluations of the method show an enhancement of the image quality. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
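
    A generic dynamic time warping sketch over a precomputed cost matrix is shown below, analogous in spirit to the frame-alignment step described above; the construction of the en face similarity cost matrix itself is not reproduced:

```python
# Dynamic time warping over a local cost matrix: accumulate costs, then backtrack
# the optimal alignment path.
import numpy as np

def dtw_path(cost):
    """cost: (n, m) local cost matrix. Returns accumulated cost and warping path."""
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```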

  4. Producing a Brighter Future by Changing a Trend

    NASA Astrophysics Data System (ADS)

    Gwinn, Elaine

    2006-12-01

    As production of physics teachers declines across the nation, efforts are being made at Ball State University to change that trend. BSU has been a PhysTEC coalition member since the Fall of 2001 and continues to be a Principal Project Institution for the project. This presentation will show various methods that have shown success toward the goal of recruiting more and better prepared science teachers. This will be an overview of the entire project, showing the current methods being utilized as well as the successful techniques used in the past.

  5. Generalized empirical Bayesian methods for discovery of differential data in high-throughput biology.

    PubMed

    Hardcastle, Thomas J

    2016-01-15

    High-throughput data are now commonplace in biological research. Rapidly changing technologies and application mean that novel methods for detecting differential behaviour that account for a 'large P, small n' setting are required at an increasing rate. The development of such methods is, in general, being done on an ad hoc basis, requiring further development cycles and a lack of standardization between analyses. We present here a generalized method for identifying differential behaviour within high-throughput biological data through empirical Bayesian methods. This approach is based on our baySeq algorithm for identification of differential expression in RNA-seq data based on a negative binomial distribution, and in paired data based on a beta-binomial distribution. Here we show how the same empirical Bayesian approach can be applied to any parametric distribution, removing the need for lengthy development of novel methods for differently distributed data. Comparisons with existing methods developed to address specific problems in high-throughput biological data show that these generic methods can achieve equivalent or better performance. A number of enhancements to the basic algorithm are also presented to increase flexibility and reduce computational costs. The methods are implemented in the R baySeq (v2) package, available on Bioconductor http://www.bioconductor.org/packages/release/bioc/html/baySeq.html. tjh48@cam.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grice, Warren P; Bennink, Ryan S; Evans, Philip G

    A growing number of experiments make use of multiple pairs of photons generated in the process of spontaneous parametric down-conversion. We show that entanglement in unwanted degrees of freedom can adversely affect the results of these experiments. We also discuss techniques to reduce or eliminate spectral and spatial entanglement, and we present results from a two-photon polarization-entangled source with almost no entanglement in these degrees of freedom. Finally, we present two methods for the generation of four-photon polarization-entangled states. In one of these methods, four-photon states can be generated without the need for intermediate two-photon entanglement.

  7. Analysis of Flexural Fatigue Strength of Self Compacting Fibre Reinforced Concrete Beams

    NASA Astrophysics Data System (ADS)

    Murali, G.; Sudar Celestina, J. P. Arul; Subhashini, N.; Vigneshwari, M.

    2017-07-01

    This study presents an extensive statistical investigation of variations in the flexural fatigue life of self-compacting fibrous concrete (FC) beams. For this purpose, the experimental data of earlier researchers were examined using the two-parameter Weibull distribution. Two methods, namely the graphical method and the method of moments, were used to analyse the variations in the experimental data, and the results have been presented in the form of probability of survival. The Weibull parameter values obtained from the graphical method and the method of moments are precise. At the 0.7 stress level, the fatigue life is 59,861 cycles for a reliability of 90%.
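
    A sketch of the graphical (median-rank regression) fit of the two-parameter Weibull distribution, and of the fatigue life at a given reliability, is shown below; the sample lives are illustrative assumptions, not the paper's measurements:

```python
# Graphical (median-rank regression) fit of a two-parameter Weibull distribution
# to fatigue-life data, and the life corresponding to a target reliability.
import numpy as np

def weibull_graphical_fit(lives):
    n = len(lives)
    N = np.sort(np.asarray(lives, float))
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)        # Bernard's median ranks
    x, y = np.log(N), np.log(-np.log(1.0 - F))
    beta, c = np.polyfit(x, y, 1)                      # slope = shape parameter
    eta = np.exp(-c / beta)                            # scale parameter
    return beta, eta

def life_at_reliability(beta, eta, R):
    return eta * (-np.log(R)) ** (1.0 / beta)          # N such that P(life > N) = R

beta, eta = weibull_graphical_fit([52000, 61000, 74000, 83000, 95000, 120000])
print(beta, eta, life_at_reliability(beta, eta, 0.90))
```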

  8. High-Throughput Method for Strontium Isotope Analysis by Multi-Collector-Inductively Coupled Plasma-Mass Spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wall, Andrew J.; Capo, Rosemary C.; Stewart, Brian W.

    2016-09-22

    This technical report presents the details of the Sr column configuration and the high-throughput Sr separation protocol. Data showing the performance of the method as well as the best practices for optimizing Sr isotope analysis by MC-ICP-MS is presented. Lastly, this report offers tools for data handling and data reduction of Sr isotope results from the Thermo Scientific Neptune software to assist in data quality assurance, which help avoid issues of data glut associated with high sample throughput rapid analysis.

  9. High-Throughput Method for Strontium Isotope Analysis by Multi-Collector-Inductively Coupled Plasma-Mass Spectrometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hakala, Jacqueline Alexandra

    2016-11-22

    This technical report presents the details of the Sr column configuration and the high-throughput Sr separation protocol. Data showing the performance of the method as well as the best practices for optimizing Sr isotope analysis by MC-ICP-MS is presented. Lastly, this report offers tools for data handling and data reduction of Sr isotope results from the Thermo Scientific Neptune software to assist in data quality assurance, which help avoid issues of data glut associated with high sample throughput rapid analysis.

  10. A Novel Method for Depositing Precious Metal Films on Difficult Surfaces

    NASA Technical Reports Server (NTRS)

    Veitch, L. C.; Phillip, W. H.

    1994-01-01

    A guanidine-based vehicle was developed to deposit precious metal coatings on surfaces known to be difficult to coat. To demonstrate this method, a platinum coating was deposited on alumina fibers using a guanidine-platinum solution. X-ray diffraction confirmed that the only species present in the coating was platinum and that all of the carbon species had been removed upon heat treatment. SEM results showed that some porosity was present but that the coatings uniformly covered the fiber surface and adhered well to the fiber.

  11. Laterally constrained inversion for CSAMT data interpretation

    NASA Astrophysics Data System (ADS)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method can recover the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global searching algorithm, simulated annealing (SA), in the watershed shows that both methods deliver very similar good results but the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.

  12. Influence of different extraction methods on the yield and linalool content of the extracts of Eugenia uniflora L.

    PubMed

    Galhiane, Mário S; Rissato, Sandra R; Chierice, Gilberto O; Almeida, Marcos V; Silva, Letícia C

    2006-09-15

    This work was developed using a sylvestral fruit tree native to the Brazilian forest, Eugenia uniflora L., of the Myrtaceae family. The main goal of the analytical study was focused on the extraction methods themselves. The method development pointed to Clevenger extraction as giving the best yield in relation to SFE and Soxhlet. The SFE method presented a good yield but showed a large number of components in the final extract, demonstrating low selectivity. The essential oil extracted was analyzed by GC/FID, showing compounds covering a large range of polarity and boiling point, among which linalool, a widely used compound, was identified. Furthermore, an analytical solid phase extraction method was used to clean up the extract and obtain separate classes of compounds that were fractionated and studied by GC/FID and GC/MS.

  13. A Method for Scheduling Air Traffic with Uncertain En Route Capacity Constraints

    NASA Technical Reports Server (NTRS)

    Arneson, Heather; Bloem, Michael

    2009-01-01

    A method for scheduling ground delay and airborne holding for flights scheduled to fly through airspace with uncertain capacity constraints is presented. The method iteratively solves linear programs for departure rates and airborne holding as new probabilistic information about future airspace constraints becomes available. The objective function is the expected value of the weighted sum of ground and airborne delay. In order to limit operationally costly changes to departure rates, they are updated only when such an update would lead to a significant cost reduction. Simulation results show a 13% cost reduction over a rough approximation of current practices. Comparison between the proposed as needed replanning method and a similar method that uses fixed frequency replanning shows a typical cost reduction of 1% to 2%, and even up to a 20% cost reduction in some cases.

  14. A human-machine cooperation route planning method based on improved A* algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Zhengsheng; Cai, Chao

    2011-12-01

    To avoid the limitation of common route planning methods that blindly pursue higher machine intelligence and automation, this paper presents a human-machine cooperation route planning method. The proposed method includes a new A* path searching strategy based on dynamic heuristic searching and a human-cooperative decision strategy to prune the search area. It can overcome the tendency of the A* algorithm to fall into prolonged local searching. Experiments showed that this method can quickly plan a feasible route that is consistent with the operator's macro-level policy.
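
    For reference, a standard grid A* sketch is given below; the paper's dynamic heuristic and the human-in-the-loop pruning of the search area are not reproduced, and the grid representation is an illustrative assumption:

```python
# Standard A* search on a 4-connected grid with a Manhattan-distance heuristic.
import heapq
from itertools import count

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = blocked; start/goal: (row, col) tuples."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = count()                                   # tie-breaker for the heap
    open_heap = [(h(start), next(tie), start, None)]
    came_from, g_best = {}, {start: 0}
    while open_heap:
        _, _, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:                            # reconstruct the route
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g_best[node] + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), next(tie), (nr, nc), node))
    return None
```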

  15. Using single-case experimental design methodology to evaluate the effects of the ABC method for nursing staff on verbal aggressive behaviour after acquired brain injury.

    PubMed

    Winkens, Ieke; Ponds, Rudolf; Pouwels, Climmy; Eilander, Henk; van Heugten, Caroline

    2014-01-01

    The ABC method is a basic and simplified form of behavioural modification therapy for use by nurses. ABC refers to the identification of Antecedent events, target Behaviours, and Consequent events. A single-case experimental AB design was used to evaluate the effects of the ABC method on a woman diagnosed with olivo-ponto-cerebellar ataxia. Target behaviour was verbal aggressive behaviour during ADL care, assessed at 9 time points immediately before implementation of the ABC method and at 36 time points after implementation. A randomisation test showed a significant treatment effect between the baseline and intervention phases (t = .58, p = .03; ES [Nonoverlap All Pairs] = .62). Visual analysis, however, showed that the target behaviour was still present after implementation of the method and that on some days the nurses even judged the behaviour to be more severe than at baseline. Although the target behaviour was still present after treatment, the ABC method seems to be a promising tool for decreasing problem behaviour in patients with acquired brain injury. It is worth investigating the effects of this method in future studies. When interpreting single-subject data, both visual inspection and statistical analysis are needed to determine whether treatment is effective and whether the effects lead to clinically desirable results.

  16. A look-ahead variant of the Lanczos algorithm and its application to the quasi-minimal residual method for non-Hermitian linear systems. Ph.D. Thesis - Massachusetts Inst. of Technology, Aug. 1991

    NASA Technical Reports Server (NTRS)

    Nachtigal, Noel M.

    1991-01-01

    The Lanczos algorithm can be used both for eigenvalue problems and to solve linear systems. However, when applied to non-Hermitian matrices, the classical Lanczos algorithm is susceptible to breakdowns and potential instabilities. In addition, the biconjugate gradient (BCG) algorithm, which is the natural generalization of the conjugate gradient algorithm to non-Hermitian linear systems, has a second source of breakdowns, independent of the Lanczos breakdowns. Here, we present two new results. We propose an implementation of a look-ahead variant of the Lanczos algorithm which overcomes the breakdowns by skipping over those steps where a breakdown or a near-breakdown would occur. The new algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products per step as the classical Lanczos algorithm without look-ahead. Based on the proposed look-ahead Lanczos algorithm, we then present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which avoids the second source of breakdowns in the BCG algorithm. We present details of the new method and discuss some of its properties. In particular, we discuss the relationship between QMR and BCG, showing how one can recover the BCG iterates, when they exist, from the QMR iterates. We also present convergence results for QMR, showing the connection between QMR and the generalized minimal residual (GMRES) algorithm, the optimal method in this class of methods. Finally, we give some numerical examples, both for eigenvalue computations and for non-Hermitian linear systems.
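
    A QMR implementation is available in SciPy; a minimal usage sketch on a small non-Hermitian system is shown below (this calls SciPy's solver, not the look-ahead variant developed in the thesis):

```python
# Solve a small non-Hermitian linear system with the quasi-minimal residual method.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import qmr

A = csr_matrix(np.array([[4.0, 1.0, 0.0],
                         [2.0, 5.0, 1.0],
                         [0.0, 3.0, 6.0]]))
b = np.array([1.0, 2.0, 3.0])
x, info = qmr(A, b)          # info == 0 indicates convergence
print(x, info)
```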

  17. The interaction between a solid body and viscous fluid by marker-and-cell method

    NASA Technical Reports Server (NTRS)

    Cheng, R. Y. K.

    1976-01-01

    A computational method for solving nonlinear problems relating to impact and penetration of a rigid body into a fluid-type medium is presented. The numerical technique, based on the Marker-and-Cell method, gives the pressure and velocity of the flow field. An important feature of this method is that the force and displacement of the rigid body interacting with the fluid during the impact and sinking phases are evaluated from the boundary stresses imposed by the fluid on the rigid body. A sample problem of low velocity penetration of a rigid block into still water is solved by this method. The computed time histories of the acceleration, pressure, and displacement of the block show good agreement with experimental measurements. A sample problem of high velocity impact of a rigid block into soft clay is also presented.

  18. An exact noniterative linear method for locating sources based on measuring receiver arrival times.

    PubMed

    Militello, C; Buenafuente, S R

    2007-06-01

    In this paper an exact, linear solution to the source localization problem based on the time of arrival at the receivers is presented. The method is unique in that the source's position can be obtained by solving a system of linear equations, three for a plane and four for a volume. This simplification means adding an additional receiver to the minimum mathematically required (3+1 in two dimensions and 4+1 in three dimensions). The equations are easily worked out for any receiver configuration and their geometrical interpretation is straightforward. Unlike other methods, the system of reference used to describe the receivers' positions is completely arbitrary. The relationship between this method and previously published ones is discussed, showing how the present, more general, method overcomes nonlinearity and unknown dependency issues.
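
    One standard way such a linear system arises is sketched below as a hedged illustration; the paper's construction may differ in detail. For a source at position x emitting at the unknown time t_0, receiver i at position r_i records the arrival time t_i, so that ||x - r_i|| = c (t_i - t_0). Squaring and subtracting the equation of a reference receiver (i = 1) cancels ||x||^2 and t_0^2:

```latex
2\,(\mathbf{r}_1-\mathbf{r}_i)\cdot\mathbf{x} \;+\; 2c^{2}\,(t_i-t_1)\,t_0
 \;=\; c^{2}\,\bigl(t_i^{2}-t_1^{2}\bigr) \;+\; \lVert\mathbf{r}_1\rVert^{2} \;-\; \lVert\mathbf{r}_i\rVert^{2},
\qquad i = 2,\dots,N,
```

    which is linear in the unknowns (x, t_0): three unknowns in the plane and four in a volume, consistent with the system sizes quoted above, so one receiver beyond the geometric minimum closes the system.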

  19. Hybrid immersed interface-immersed boundary methods for AC dielectrophoresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hossan, Mohammad Robiul; Department of Engineering and Physics, University of Central Oklahoma, Edmond, OK 73034-5209; Dillon, Robert

    2014-08-01

    Dielectrophoresis, a nonlinear electrokinetic transport mechanism, has become popular in many engineering applications including manipulation, characterization and actuation of biomaterials, particles and biological cells. In this paper, we present a hybrid immersed interface–immersed boundary method to study AC dielectrophoresis where an algorithm is developed to solve the complex Poisson equation using a real variable formulation. An immersed interface method is employed to obtain the AC electric field in a fluid medium with suspended particles and an immersed boundary method is used for the fluid equations and particle transport. The convergence of the proposed algorithm as well as validation of the hybrid scheme with experimental results is presented. In this paper, the Maxwell stress tensor is used to calculate the dielectrophoretic force acting on particles by considering the physical effect of particles in the computational domain. Thus, this study eliminates the approximations used in point dipole methods for calculating dielectrophoretic force. A comparative study between Maxwell stress tensor and point dipole methods for computing dielectrophoretic forces is presented. The hybrid method is used to investigate the physics of dielectrophoresis in microfluidic devices using an AC electric field. The numerical results show that with proper design and appropriate selection of applied potential and frequency, global electric field minima can be obtained to facilitate multiple particle trapping by exploiting the mechanism of negative dielectrophoresis. Our numerical results also show that electrically neutral particles form a chain parallel to the applied electric field irrespective of their initial orientation when an AC electric field is applied. This proposed hybrid numerical scheme will help to better understand dielectrophoresis and to design and optimize microfluidic devices.
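
    For reference, the standard time-averaged point-dipole expression that serves as the comparison baseline for the Maxwell-stress-tensor computation is, for a spherical particle of radius a in a medium of permittivity ε_m:

```latex
\langle \mathbf{F}_{\mathrm{DEP}} \rangle
 = 2\pi\,\varepsilon_m a^{3}\,\mathrm{Re}\!\left[K(\omega)\right]\,\nabla\lvert\mathbf{E}_{\mathrm{rms}}\rvert^{2},
\qquad
K(\omega) = \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}{\varepsilon_p^{*}+2\,\varepsilon_m^{*}},
\qquad
\varepsilon^{*} = \varepsilon - i\,\frac{\sigma}{\omega},
```

    where K(ω) is the Clausius-Mossotti factor and the sign of its real part determines positive or negative dielectrophoresis.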

  20. Evaluation of physicochemical properties of root-end filling materials using conventional and Micro-CT tests.

    PubMed

    Torres, Fernanda Ferrari Esteves; Bosso-Martelo, Roberta; Espir, Camila Galletti; Cirelli, Joni Augusto; Guerreiro-Tanomaru, Juliane Maria; Tanomaru-Filho, Mario

    2017-01-01

    To evaluate solubility, dimensional stability, filling ability and volumetric change of root-end filling materials using conventional tests and new Micro-CT-based methods. The results suggested correlated or complementary data between the proposed tests. At 7 days, BIO showed higher solubility and at 30 days, showed higher volumetric change in comparison with MTA (p<0.05). With regard to volumetric change, the tested materials were similar (p>0.05) at 7 days. At 30 days, they presented similar solubility. BIO and MTA showed higher dimensional stability than ZOE (p<0.05). ZOE and BIO showed higher filling ability (p<0.05). ZOE presented a higher dimensional change, and BIO had greater solubility after 7 days. BIO presented filling ability and dimensional stability, but greater volumetric change than MTA after 30 days. Micro-CT can provide important data on the physicochemical properties of materials complementing conventional tests.

  1. Pressurized liquid extraction and chemical characterization of safflower oil: A comparison between methods.

    PubMed

    Conte, Rogério; Gullich, Letícia M D; Bilibio, Denise; Zanella, Odivan; Bender, João P; Carniel, Naira; Priamo, Wagner L

    2016-12-15

    This work investigates the extraction process of safflower oil using pressurized ethanol, and compares the chemical composition obtained (in terms of fatty acids) with other extraction techniques. Soxhlet and Ultrasound showed maximum global yield of 36.53% and 30.41%, respectively (70°C and 240min). PLE presented maximum global yields of 25.62% (3mLmin(-1)), 19.94% (2mLmin(-1)) and 12.37% (1mLmin(-1)) at 40°C, 100bar and 60min. Palmitic acid showed the lower concentration in all experimental conditions (from 5.70% to 7.17%); Stearic and Linoleic acid presented intermediate concentrations (from 2.93% to 25.09% and 14.09% to 19.06%, respectively); Oleic acid showed higher composition (from 55.12% to 83.26%). Differences between percentages of fatty acids, depending on method were observed. Results may be applied to maximize global yields and select fatty acids, reducing the energetic costs and process time. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Use of a New Set of Linguistic Features to Improve Automatic Assessment of Text Readability

    ERIC Educational Resources Information Center

    Yoshimi, Takehiko; Kotani, Katsunori; Isahara, Hitoshi

    2012-01-01

    The present paper proposes and evaluates a readability assessment method designed for Japanese learners of EFL (English as a foreign language). The proposed readability assessment method is constructed by a regression algorithm using a new set of linguistic features that were employed separately in previous studies. The results showed that the…

  3. HOPI: on-line injection optimization program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LeMaire, J L

    1977-10-26

    A method of matching the beam from the 200 MeV linac to the AGS without the necessity of making emittance measurements is presented. An on-line computer program written on the PDP10 computer performs the matching by modifying independently the horizontal and vertical emittance. Experimental results show success with this method, which can be applied to any matching section.

  4. Network-Oriented Approach to Distributed Generation Planning

    NASA Astrophysics Data System (ADS)

    Kochukov, O.; Mutule, A.

    2017-06-01

    The main objective of the paper is to present an innovative complex approach to distributed generation planning and show the advantages over existing methods. The approach will be most suitable for DNOs and authorities and has specific calculation targets to support the decision-making process. The method can be used for complex distribution networks with different arrangement and legal base.

  5. Grid-based sampling designs and area estimation

    Treesearch

    Joseph M. McCollum

    2007-01-01

    The author discusses some area and variance estimation methods that have been used by personnel of the U.S. Department of Agriculture Forest Service Southern Research Station and its predecessors. The author also presents the methods of Horvitz and Thompson (1952), especially as they have been popularized by Stevens (1997), and shows how they could be used to produce...
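
    For reference, the Horvitz-Thompson estimator of a population total from a sample s in which unit i is included with probability π_i > 0 is:

```latex
\hat{Y}_{\mathrm{HT}} \;=\; \sum_{i \in s} \frac{y_i}{\pi_i},
\qquad
\mathrm{E}\!\left[\hat{Y}_{\mathrm{HT}}\right] \;=\; \sum_{i=1}^{N} y_i ,
```

    which is design-unbiased whenever every inclusion probability is positive.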

  6. Probabilistic composite micromechanics

    NASA Technical Reports Server (NTRS)

    Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.

    1988-01-01

    Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.
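
    A minimal Monte Carlo sketch in the same spirit is shown below: the longitudinal ply modulus from the rule of mixtures with random constituent properties. The distributions and values are illustrative assumptions; the micromechanics equations used in the study are more extensive:

```python
# Monte Carlo simulation of the longitudinal ply modulus E11 = Vf*Ef + (1-Vf)*Em
# with normally distributed fiber, matrix, and volume-fraction properties.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

Ef = rng.normal(230e9, 10e9, n)     # fiber modulus [Pa], assumed distribution
Em = rng.normal(3.5e9, 0.2e9, n)    # matrix modulus [Pa], assumed distribution
Vf = rng.normal(0.60, 0.02, n)      # fiber volume fraction, assumed distribution

E11 = Vf * Ef + (1.0 - Vf) * Em     # rule of mixtures, per Monte Carlo sample
print(E11.mean(), E11.std())        # expected value and scatter of the ply modulus
```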

  7. Innovative Solutions for Words with Emphasis: Alternative Methods of Braille Transcription

    ERIC Educational Resources Information Center

    Kamei-Hannan, Cheryl

    2009-01-01

    The author of this study proposed two alternative methods for transcribing words with emphasis into braille and compared the use of the symbols for emphasis with the current braille code. The results showed that students were faster at locating words presented in one of the alternate formats, but that there was no difference in students' accuracy…

  8. Science: Oil Slick.

    ERIC Educational Resources Information Center

    VanCleave, Janice

    2000-01-01

    Presents a science experiment about oil spills and oil pollution for 7th- and 8th-grade science students. This variation on a method used by pollution control experts to clean up oil spills shows students how oil is collected after an oil spill, explaining that with this method, much of the damage from an oil spill can be averted. (SM)

  9. Simultaneous determination of antiretroviral drugs in human hair with liquid chromatography-electrospray ionization-tandem mass spectrometry.

    PubMed

    Wu, Yan; Yang, Jin; Duan, Cailing; Chu, Liuxi; Chen, Shenghuo; Qiao, Shan; Li, Xiaoming; Deng, Huihua

    2018-04-15

    The determination of the concentrations of antiretroviral drugs in hair is believed to be an important means of assessing long-term adherence to highly active antiretroviral therapy. At present, the combination of tenofovir, lamivudine and nevirapine is widely used in China; however, there has been no research reporting simultaneous determination of the three drugs in hair. The present study aimed to develop a sensitive method for simultaneous determination of the three drugs in 2-mg and 10-mg natural hair samples (Method 1 and Method 2). Hair samples were incubated in methanol at 37 °C for 16 h after being rinsed with methanol twice. The analysis was performed by high performance liquid chromatography tandem mass spectrometry with electrospray ionization in positive mode and multiple reaction monitoring. Method 1 and Method 2 showed limits of detection of 160 and 30 pg/mg for tenofovir, 5 and 6 pg/mg for lamivudine, and 15 and 3 pg/mg for nevirapine. The two methods showed good linearity, with the square of the correlation coefficient >0.99, over the ranges 416-5000 and 77-5000 pg/mg for tenofovir, 12-5000 and 15-5000 pg/mg for lamivudine, and 39-50,000 and 6-50,000 pg/mg for nevirapine. They gave intra-day and inter-day coefficients of variation <15% and recoveries ranging from 80.6 to 122.3% and from 83.1 to 114.4%. Method 2 showed better LOD and LOQ than Method 1 for tenofovir and nevirapine and matched Method 1 for lamivudine, and there was high consistency between the two methods in the determination of the three drugs in hair. The population analysis with Method 2 revealed that, for all three antiretroviral drugs, the concentrations in hair decreased with the distance of the hair segment from the scalp. Copyright © 2018 Elsevier B.V. All rights reserved.
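
    The limits of detection and quantification quoted above are typically derived from a calibration curve. The Python sketch below shows one common convention (LOD = 3.3·σ/slope, LOQ = 10·σ/slope from an ordinary least-squares fit); the standard concentrations and responses are invented for illustration and are not the study's data.

      import numpy as np

      # Illustrative calibration standards (pg/mg) and instrument responses (peak-area ratios);
      # the numbers are invented, not taken from the study.
      conc     = np.array([15, 50, 150, 500, 1500, 5000], dtype=float)
      response = np.array([0.012, 0.041, 0.118, 0.402, 1.21, 4.05])

      slope, intercept = np.polyfit(conc, response, 1)
      pred = slope * conc + intercept
      r2 = 1 - np.sum((response - pred) ** 2) / np.sum((response - response.mean()) ** 2)

      # One common convention: LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope,
      # with sigma the standard deviation of the regression residuals.
      sigma = np.std(response - pred, ddof=2)
      lod = 3.3 * sigma / slope
      loq = 10.0 * sigma / slope
      print(f"R^2 = {r2:.4f}, LOD ~ {lod:.0f} pg/mg, LOQ ~ {loq:.0f} pg/mg")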

  10. Heat transfer technology for internal passages of air-cooled blades for heavy-duty gas turbines.

    PubMed

    Weigand, B; Semmler, K; von Wolfersdorf, J

    2001-05-01

    The present review paper, although far from complete, aims to give an overview of the present state of the art in the field of heat transfer technology for internal cooling of gas turbine blades. After showing some typical modern cooled blades, the different methods to enhance heat transfer in the internal passages of air-cooled blades are discussed. The complicated flows occurring in bends are described in detail because of their increasing importance for modern cooling designs. A short review of the testing of cooling design elements is given, showing the interaction of the different cooling features as well. The special focus of the present review is on the cooling of blades for heavy-duty gas turbines, which show several differences compared to aero-engine blades.

  11. A simplified parsimonious higher order multivariate Markov chain model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for the model is given. Numerical experiments show the effectiveness of SPHOMMCM.
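
    As a rough illustration of the building block behind multivariate Markov chain models of this kind, the sketch below estimates within- and cross-series transition matrices by frequency counting; the mixing weights of the parsimonious model would then be fitted in the paper's parameter-estimation step. The data and the three-state alphabet are invented for the example.

      import numpy as np

      def transition_matrix(src, dst, n_states):
          """Frequency estimate of P[i, j] = P(dst at t+1 = j | src at t = i)."""
          counts = np.zeros((n_states, n_states))
          for a, b in zip(src[:-1], dst[1:]):
              counts[a, b] += 1
          rows = counts.sum(axis=1, keepdims=True)
          rows[rows == 0] = 1.0          # rows with no observations stay at zero probability
          return counts / rows

      # Two categorical sequences with 3 states each (illustrative data)
      s1 = [0, 1, 2, 1, 0, 0, 2, 1, 1, 0]
      s2 = [1, 1, 0, 2, 2, 0, 1, 0, 2, 1]

      P_11 = transition_matrix(s1, s1, 3)   # series 1 -> series 1
      P_21 = transition_matrix(s2, s1, 3)   # series 2 -> series 1
      # A multivariate Markov model then predicts the next state distribution of series 1
      # as a convex combination lambda_1 * P_11.T @ x1 + lambda_2 * P_21.T @ x2,
      # where the lambdas are the weights estimated in the model's fitting step.
      print(P_11, P_21, sep="\n")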

  12. An Assessment Tool of Performance Based Logistics Appropriateness

    DTIC Science & Technology

    2012-03-01

    weighted tool score. The reason might be the willingness to use PBL as an acquisition method. There is an 8.51% positive difference present. Figure 20 shows...performance-based acquisition methods to the maximum extent practicable when acquiring services with little exclusion' is mandated. Although PBL...determines the factors affecting the success in selecting PBL as an acquisition method. Each factor is examined in detail and built into a spreadsheet tool

  13. Term Cancellations in Computing Floating-Point Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Sasaki, Tateaki; Kako, Fujio

    We discuss the term cancellation that makes floating-point Gröbner basis computation unstable and show that error accumulation is never negligible in our previous method. We then present a new method, which removes accumulated errors as far as possible by reducing matrices constructed from coefficient vectors via Gaussian elimination. The method also reveals the amount of term cancellation caused by approximately linearly dependent relations among the input polynomials.

  14. Gravimetric method for in vitro calibration of skin hydration measurements.

    PubMed

    Martinsen, Ørjan G; Grimnes, Sverre; Nilsen, Jon K; Tronstad, Christian; Jang, Wooyoung; Kim, Hongsig; Shin, Kunsoo; Naderi, Majid; Thielmann, Frank

    2008-02-01

    A novel method for in vitro calibration of skin hydration measurements is presented. The method combines gravimetric and electrical measurements and reveals an exponential dependency of measured electrical susceptance to absolute water content in the epidermal stratum corneum. The results also show that absorption of water into the stratum corneum exhibits three different phases with significant differences in absorption time constant. These phases probably correspond to bound, loosely bound, and bulk water.

  15. Stochastic symplectic and multi-symplectic methods for nonlinear Schrödinger equation with white noise dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Jianbo, E-mail: jianbocui@lsec.cc.ac.cn; Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn; Liu, Zhihui, E-mail: liuzhihui@lsec.cc.ac.cn

    We indicate that the nonlinear Schrödinger equation with white noise dispersion possesses stochastic symplectic and multi-symplectic structures. Based on these structures, we propose the stochastic symplectic and multi-symplectic methods, which preserve the continuous and discrete charge conservation laws, respectively. Moreover, we show that the proposed methods are convergent with temporal order one in probability. Numerical experiments are presented to verify our theoretical results.

  16. Computer-generated graphical presentations: use of multimedia to enhance communication.

    PubMed

    Marks, L S; Penson, D F; Maller, J J; Nielsen, R T; deKernion, J B

    1997-01-01

    Personal computers may be used to create, store, and deliver graphical presentations. With computer-generated combinations of the five media (text, images, sound, video, and animation)--that is, multimedia presentations--the effectiveness of message delivery can be greatly increased. The basic tools are (1) a personal computer; (2) presentation software; and (3) a projector to enlarge the monitor images for audience viewing. Use of this new method has grown rapidly in the business-conference world, but has yet to gain widespread acceptance at medical meetings. We review herein the rationale for multimedia presentations in medicine (vis-à-vis traditional slide shows) as an improved means for increasing audience attention, comprehension, and retention. The evolution of multimedia is traced from earliest times to the present. The steps involved in making a multimedia presentation are summarized, emphasizing advances in technology that bring the new method within practical reach of busy physicians. Specific attention is given to software, digital image processing, storage devices, and delivery methods. Our development of a urology multimedia presentation--delivered May 4, 1996, before the Society for Urology and Engineering and now Internet-accessible at http://www.usrf.org--was the impetus for this work.

  17. A two-step method for retrieving the longitudinal profile of an electron bunch from its coherent radiation

    NASA Astrophysics Data System (ADS)

    Pelliccia, Daniele; Sen, Tanaji

    2014-11-01

    The coherent radiation emitted by an electron bunch provides a diagnostic signal that can be used to estimate its longitudinal distribution. Commonly only the amplitude of the intensity spectrum can be measured, and the associated phase must be calculated to obtain the bunch profile. Very recently an iterative method was proposed to retrieve this phase. However, ambiguities associated with the non-uniqueness of the solution are always present in the phase retrieval procedure. Here we present a method to overcome the ambiguity problem by first performing multiple independent runs of the phase retrieval procedure and then sorting the good solutions by means of cross-correlation analysis. Results obtained with simulated bunches of various shapes and with experimentally measured spectra are presented, discussed and compared with the established Kramers-Kronig method. It is shown that even when the effect of the ambiguities is strong, as is the case for a double peak in the profile, the cross-correlation post-processing is able to filter out unwanted solutions. We show that, unlike the Kramers-Kronig method, the combined approach presented here is able to faithfully reconstruct complicated bunch profiles.
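
    The following Python sketch illustrates the multiple-run-plus-cross-correlation idea in heavily simplified form: a generic error-reduction phase retrieval (not the authors' exact algorithm) is run from several random starting phases, and the recovered profiles that cross-correlate best with the others are kept as the consensus solution. All data are synthetic.

      import numpy as np

      def retrieve_profile(spectrum_amp, n_iter=200, rng=None):
          """Generic error-reduction loop: keep the measured Fourier amplitude,
          enforce a real, non-negative profile (illustrative constraint set)."""
          rng = np.random.default_rng() if rng is None else rng
          phase = rng.uniform(-np.pi, np.pi, spectrum_amp.size)
          for _ in range(n_iter):
              profile = np.fft.irfft(spectrum_amp * np.exp(1j * phase), n=2 * (spectrum_amp.size - 1))
              profile = np.clip(profile.real, 0, None)     # physical constraint: non-negative density
              phase = np.angle(np.fft.rfft(profile))
          return profile

      def best_cluster(profiles):
          """Keep the solutions that cross-correlate best with the others (simple consensus filter)."""
          scores = []
          for p in profiles:
              c = [np.max(np.correlate(p, q, mode="full")) / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
                   for q in profiles]
              scores.append(np.mean(c))
          order = np.argsort(scores)[::-1]
          return [profiles[i] for i in order[: len(profiles) // 2]]

      # amp = measured |spectrum| of the coherent radiation (synthetic placeholder data)
      amp = np.abs(np.fft.rfft(np.exp(-np.linspace(-3, 3, 128) ** 2)))
      runs = [retrieve_profile(amp, rng=np.random.default_rng(k)) for k in range(8)]
      good = runs_kept = best_cluster(runs)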

  18. Improving precise positioning of surgical robotic instruments by a three-side-view presentation system on telesurgery.

    PubMed

    Hori, Kenta; Kuroda, Tomohiro; Oyama, Hiroshi; Ozaki, Yasuhiko; Nakamura, Takehiko; Takahashi, Takashi

    2005-12-01

    For faultless collaboration among the surgeon, surgical staff, and surgical robots in telesurgery, communication must include environmental information about the remote operating room, such as the behavior of robots and staff and the vital signs of the patient (termed supporting information), in addition to the view of the surgical field. The "Surgical Cockpit System", a telesurgery support system developed by the authors, is mainly focused on the exchange of supporting information between remote sites. Live video presentation is an important technology for the Surgical Cockpit System, and a visualization method that conveys the precise location and posture of the surgical instruments is indispensable for accurate control and faultless operation. In this paper, the authors propose a three-side-view presentation method for precise location/posture control of surgical instruments in telesurgery. The experimental results show that the proposed method improved the positioning accuracy of a telemanipulator.

  19. Self Calibrated Wireless Distributed Environmental Sensory Networks

    PubMed Central

    Fishbain, Barak; Moreno-Centeno, Erick

    2016-01-01

    Recent advances in sensory and communication technologies have made Wireless Distributed Environmental Sensory Networks (WDESN) technically and economically feasible. WDESNs present an unprecedented tool for studying many environmental processes in a new way. However, the calibration process is a major obstacle to WDESNs becoming common practice. Here, we present a new, robust and efficient method for aggregating measurements acquired by an uncalibrated WDESN and producing accurate estimates of the observed environmental variable's true levels, rendering the network self-calibrated. The suggested method is novel both in group decision-making and in environmental sensing, as it offers a valuable tool for aggregating distributed environmental monitoring data. Applying the method to an extensive real-life air-pollution dataset showed markedly more accurate results than the common practice and the state-of-the-art. PMID:27098279

  20. Improving Multidimensional Wireless Sensor Network Lifetime Using Pearson Correlation and Fractal Clustering

    PubMed Central

    Almeida, Fernando R.; Brayner, Angelo; Rodrigues, Joel J. P. C.; Maia, Jose E. Bessa

    2017-01-01

    An efficient strategy for reducing message transmission in a wireless sensor network (WSN) is to group sensors by means of an abstraction denoted cluster. The key idea behind the cluster formation process is to identify a set of sensors whose sensed values present some data correlation. Nowadays, sensors are able to simultaneously sense multiple different physical phenomena, yielding in this way multidimensional data. This paper presents three methods for clustering sensors in WSNs whose sensors collect multidimensional data. The proposed approaches implement the concept of multidimensional behavioral clustering. To show the benefits introduced by the proposed methods, a prototype has been implemented and experiments have been carried out on real data. The results prove that the proposed methods decrease the amount of data flowing in the network and present low root-mean-square error (RMSE). PMID:28590450
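
    A minimal sketch of the correlation-based grouping idea (not the fractal clustering algorithm of the paper): each sensor's multidimensional readings are compared by Pearson correlation, and sensors joining a cluster would then report through a single representative. The data, threshold and greedy grouping rule are illustrative assumptions.

      import numpy as np

      # readings[s] is a (time, dimensions) array for sensor s, e.g. temperature, humidity, light
      rng = np.random.default_rng(1)
      base = rng.normal(size=(100, 3))
      readings = {
          "s1": base + rng.normal(scale=0.05, size=base.shape),
          "s2": base + rng.normal(scale=0.05, size=base.shape),   # correlated with s1
          "s3": rng.normal(size=(100, 3)),                        # independent sensor
      }

      def pearson(a, b):
          return np.corrcoef(a.ravel(), b.ravel())[0, 1]

      # Greedy grouping: a sensor joins an existing cluster if it correlates strongly
      # with that cluster's representative (first member)
      threshold, clusters = 0.9, []
      for name, data in readings.items():
          for cluster in clusters:
              if pearson(data, readings[cluster[0]]) >= threshold:
                  cluster.append(name)
                  break
          else:
              clusters.append([name])
      print(clusters)   # only one representative per cluster would then transmit to the sink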

  1. Improving Multidimensional Wireless Sensor Network Lifetime Using Pearson Correlation and Fractal Clustering.

    PubMed

    Almeida, Fernando R; Brayner, Angelo; Rodrigues, Joel J P C; Maia, Jose E Bessa

    2017-06-07

    An efficient strategy for reducing message transmission in a wireless sensor network (WSN) is to group sensors by means of an abstraction denoted cluster. The key idea behind the cluster formation process is to identify a set of sensors whose sensed values present some data correlation. Nowadays, sensors are able to simultaneously sense multiple different physical phenomena, yielding in this way multidimensional data. This paper presents three methods for clustering sensors in WSNs whose sensors collect multidimensional data. The proposed approaches implement the concept of multidimensional behavioral clustering. To show the benefits introduced by the proposed methods, a prototype has been implemented and experiments have been carried out on real data. The results prove that the proposed methods decrease the amount of data flowing in the network and present low root-mean-square error (RMSE).

  2. Subaperture correlation based digital adaptive optics for full field optical coherence tomography.

    PubMed

    Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A

    2013-05-06

    This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full field imaging systems, provided the complex object field information can be extracted. The method corrects for the wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics, spatial light modulators (SLMs) or additional cameras. We show that this method does not require knowledge of any system parameters. In the simulation study, we consider a full field swept source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample to demonstrate proof of principle.

  3. The fast multipole method and point dipole moment polarizable force fields.

    PubMed

    Coles, Jonathan P; Masella, Michel

    2015-01-14

    We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force-fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of that approach by performing single energy point calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show the long time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.

  4. Prediction of the effect of formulation on the toxicity of chemicals.

    PubMed

    Mistry, Pritesh; Neagu, Daniel; Sanchez-Ruiz, Antonio; Trundle, Paul R; Vessey, Jonathan D; Gosling, John Paul

    2017-01-01

    Two approaches for the prediction of which of two vehicles will result in lower toxicity for anticancer agents are presented. Machine-learning models are developed using decision tree, random forest and partial least squares methodologies and statistical evidence is presented to demonstrate that they represent valid models. Separately, a clustering method is presented that allows the ordering of vehicles by the toxicity they show for chemically-related compounds.
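
    A hedged sketch of the machine-learning part using scikit-learn: a random forest classifier predicting which of two vehicles gives the lower toxicity from a feature vector describing the compound and the candidate vehicles. The features and labels below are random placeholders, not the paper's data set.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      # Illustrative feature matrix: a few molecular descriptors per compound plus an
      # encoding of the two candidate vehicles (all values invented for the sketch).
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 6))
      y = rng.integers(0, 2, size=200)    # 1 if vehicle A gave the lower toxicity, else 0

      model = RandomForestClassifier(n_estimators=200, random_state=0)
      scores = cross_val_score(model, X, y, cv=5)
      print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))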

  5. Method of Harmonic Balance in Full-Scale-Model Tests of Electrical Devices

    NASA Astrophysics Data System (ADS)

    Gorbatenko, N. I.; Lankin, A. M.; Lankin, M. V.

    2017-01-01

    Methods for determining the weber-ampere characteristics of electrical devices are suggested, one based on the solution of the direct problem of harmonic balance and the other on the solution of the inverse problem of harmonic balance by the method of full-scale-model tests. The mathematical model of the device is constructed using the describing function and simplex optimization methods. The presented results of experimental applications of the method show its efficiency. An advantage of the method is that it can be applied to nondestructive inspection of electrical devices during their production and operation.

  6. Experimental preparation and characterization of four-dimensional quantum states using polarization and time-bin modes of a single photon

    NASA Astrophysics Data System (ADS)

    Yoo, Jinwon; Choi, Yujun; Cho, Young-Wook; Han, Sang-Wook; Lee, Sang-Yun; Moon, Sung; Oh, Kyunghwan; Kim, Yong-Su

    2018-07-01

    We present a detailed method to prepare and characterize four-dimensional pure quantum states or ququarts using polarization and time-bin modes of a single-photon. In particular, we provide a simple method to generate an arbitrary pure ququart and fully characterize the state with quantum state tomography. We also verify the reliability of the recipe by showing experimental preparation and characterization of 20 ququart states in mutually unbiased bases. As qudits provide superior properties over qubits in many fundamental tests of quantum physics and applications in quantum information processing, the presented method will be useful for photonic quantum information science.

  7. Classification of Magnetic Nanoparticle Systems—Synthesis, Standardization and Analysis Methods in the NanoMag Project

    PubMed Central

    Bogren, Sara; Fornara, Andrea; Ludwig, Frank; del Puerto Morales, Maria; Steinhoff, Uwe; Fougt Hansen, Mikkel; Kazakova, Olga; Johansson, Christer

    2015-01-01

    This study presents classification of different magnetic single- and multi-core particle systems using their measured dynamic magnetic properties together with their nanocrystal and particle sizes. The dynamic magnetic properties are measured with AC (dynamical) susceptometry and magnetorelaxometry and the size parameters are determined from electron microscopy and dynamic light scattering. Using these methods, we also show that the nanocrystal size and particle morphology determines the dynamic magnetic properties for both single- and multi-core particles. The presented results are obtained from the four year EU NMP FP7 project, NanoMag, which is focused on standardization of analysis methods for magnetic nanoparticles. PMID:26343639

  8. Solution of Poisson equations for 3-dimensional grid generations. [computations of a flow field over a thin delta wing

    NASA Technical Reports Server (NTRS)

    Fujii, K.

    1983-01-01

    A method for generating three dimensional, finite difference grids about complicated geometries by using Poisson equations is developed. The inhomogeneous terms are automatically chosen such that orthogonality and spacing restrictions at the body surface are satisfied. Spherical variables are used to avoid the axis singularity, and an alternating-direction-implicit (ADI) solution scheme is used to accelerate the computations. Computed results are presented that show the capability of the method. Since most of the results presented have been used as grids for flow-field computations, this indicates that the method is a useful tool for generating three-dimensional grids about complicated geometries.

  9. Phase-step retrieval for tunable phase-shifting algorithms

    NASA Astrophysics Data System (ADS)

    Ayubi, Gastón A.; Duarte, Ignacio; Perciante, César D.; Flores, Jorge L.; Ferrari, José A.

    2017-12-01

    Phase-shifting (PS) is a well-known technique for phase retrieval in interferometry, with applications in deflectometry and 3D-profiling, which requires a series of intensity measurements with certain phase-steps. Usually the phase-steps are evenly spaced, and knowledge of them is crucial for the phase retrieval. In this work we present a method to extract the phase-step between consecutive interferograms. We test the proposed technique with images corrupted by additive noise, and the results are compared with other known methods. We also present experimental results showing the performance of the method when spatial filters are applied to the interferograms and the effect they have on the relative phase-steps.

  10. Analysis of the internal anatomy of maxillary first molars by using different methods.

    PubMed

    Baratto Filho, Flares; Zaitter, Suellen; Haragushiku, Gisele Aihara; de Campos, Edson Alves; Abuabara, Allan; Correr, Gisele Maria

    2009-03-01

    The success of endodontic treatment depends on the identification of all root canals so that they can be cleaned, shaped, and obturated. This study investigated internal morphology of maxillary first molars by 3 different methods: ex vivo, clinical, and cone beam computed tomography (CBCT) analysis. In all these different methods, the number of additional root canals and their locations, the number of foramina, and the frequency of canals that could or could not be negotiated were recorded. In the ex vivo study, 140 extracted maxillary first molars were evaluated. After canals were accessed and detected by using an operating microscope, the teeth with significant anatomic variances were cleared. In the clinical analysis, the records of 291 patients who had undergone endodontic treatment in a dental school during a 2-year period were used. In the CBCT analysis, 54 maxillary first molars were evaluated. The ex vivo assessment results showed a fourth canal frequency in 67.14% of the teeth, besides a tooth with 7 root canals (0.72%). Additional root canals were located in the mesiobuccal root in 92.85% of the teeth (17.35% could not be negotiated), and when they were present, 65.30% exhibited 1 foramen. Clinical assessment showed that 53.26%, 0.35%, and 0.35% of the teeth exhibited 4, 5, and 6 root canals, respectively. Additional root canals were located in this assessment in mesiobuccal root in 95.63% (27.50% could not be negotiated), and when they were present, 59.38% exhibited 1 foramen. CBCT results showed 2, 4, and 5 root canals in 1.85%, 37.05%, and 1.85% of the teeth, respectively. When present, additional canals showed 1 foramen in 90.90% of the teeth studied. This study demonstrated that operating microscope and CBCT have been important for locating and identifying root canals, and CBCT can be used as a good method for initial identification of maxillary first molar internal morphology.

  11. Edge-based nonlinear diffusion for finite element approximations of convection-diffusion equations and its relation to algebraic flux-correction schemes.

    PubMed

    Barrenechea, Gabriel R; Burman, Erik; Karakatsani, Fotini

    2017-01-01

    For the case of approximation of convection-diffusion equations using piecewise affine continuous finite elements a new edge-based nonlinear diffusion operator is proposed that makes the scheme satisfy a discrete maximum principle. The diffusion operator is shown to be Lipschitz continuous and linearity preserving. Using these properties we provide a full stability and error analysis, which, in the diffusion dominated regime, shows existence, uniqueness and optimal convergence. Then the algebraic flux correction method is recalled and we show that the present method can be interpreted as an algebraic flux correction method for a particular definition of the flux limiters. The performance of the method is illustrated on some numerical test cases in two space dimensions.

  12. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  13. A novel interferometric method for the study of the viscoelastic properties of ultra-thin polymer films determined from nanobubble inflation

    NASA Astrophysics Data System (ADS)

    Chapuis, P.; Montgomery, P. C.; Anstotz, F.; Leong-Hoï, A.; Gauthier, C.; Baschnagel, J.; Reiter, G.; McKenna, G. B.; Rubin, A.

    2017-09-01

    Glass formation and glassy behavior remain important areas of investigation in soft matter physics, with many aspects that are still not completely understood, especially at the nanometer size-scale. In the present work, we show an extension of the "nanobubble inflation" method developed by O'Connell and McKenna [Rev. Sci. Instrum. 78, 013901 (2007)] that uses an interferometric method to measure the topography of a large array of 5 μm sized, nanometer-thick films subjected to constant inflation pressures, during which the bubbles grow or creep with time. The interferometric method offers the possibility of making measurements on multiple bubbles at once, as well as having the advantage over the AFM methods of O'Connell and McKenna of being a true non-contact method. Here we demonstrate the method using ultra-thin films of both poly(vinyl acetate) (PVAc) and polystyrene (PS) and discuss the capabilities of the method relative to the AFM method, along with its advantages and disadvantages. Furthermore, we show that the results from experiments on PVAc are consistent with the prior work on PVAc, while high-stress results with PS show signs of a new non-linear response regime that may be related to the plasticity of the ultra-thin film.

  14. Influence of Prosolv and Prosolv:Mannitol 200 direct compression fillers on the physicomechanical properties of atorvastatin oral dispersible tablets.

    PubMed

    Gowda, Veeran; Pabari, Ritesh M; Kelly, John G; Ramtoola, Zebunnissa

    2015-06-01

    The objective of the present study was to evaluate the influence of Prosolv® and Prosolv®:Mannitol 200 direct compression (DC) fillers on the physicomechanical characteristics of oral dispersible tablets (ODTs) of crystalline atorvastatin calcium. ODTs were formulated by DC and were analyzed for weight uniformity, hardness, friability, drug content, disintegration and dissolution. Three disintegration time (DT) test methods were compared as part of this study: the European Pharmacopoeia (EP) method for conventional tablets (Method 1), a modification of this method (Method 2), and the EP method for oral lyophilisates (Method 3). All ODTs showed low weight variation of <2.5%. Prosolv® only ODTs showed the highest tablet hardness of ∼73 N; hardness decreased with increasing mannitol content. Friability of all formulations was <1%, although the friability of Prosolv®:Mannitol ODTs was higher than that of pure Prosolv®. DT of all ODTs was <30 s. Method 2 showed the fastest DT. Method 3 was non-discriminatory, giving a DT of 13-15 s for all formulations. Atorvastatin dissolution from all ODTs was >60% within 5 min despite the drug being crystalline. Prosolv® and Prosolv®:Mannitol-based fillers are suitable for ODT formulation by DC, giving ODTs with high mechanical strength, rapid disintegration and rapid dissolution.

  15. Development and Verification of the Charring Ablating Thermal Protection Implicit System Solver

    NASA Technical Reports Server (NTRS)

    Amar, Adam J.; Calvert, Nathan D.; Kirk, Benjamin S.

    2010-01-01

    The development and verification of the Charring Ablating Thermal Protection Implicit System Solver is presented. This work concentrates on the derivation and verification of the stationary grid terms in the equations that govern three-dimensional heat and mass transfer for charring thermal protection systems including pyrolysis gas flow through the porous char layer. The governing equations are discretized according to the Galerkin finite element method with first and second order implicit time integrators. The governing equations are fully coupled and are solved in parallel via Newton's method, while the fully implicit linear system is solved with the Generalized Minimal Residual method. Verification results from exact solutions and the Method of Manufactured Solutions are presented to show spatial and temporal orders of accuracy as well as nonlinear convergence rates.
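
    The "Newton's method with a GMRES inner solve" pattern described above can be illustrated on a toy nonlinear system with SciPy, using a matrix-free (Jacobian-free) Newton-Krylov loop. This is a sketch of the general solution strategy, not the authors' solver.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def residual(u):
          # Toy nonlinear system standing in for the discretized governing equations
          return np.array([u[0] ** 2 + u[1] - 3.0,
                           u[0] + u[1] ** 2 - 5.0])

      def jac_vec(u, v, eps=1e-7):
          # Matrix-free Jacobian-vector product by finite differences
          return (residual(u + eps * v) - residual(u)) / eps

      u = np.array([1.0, 1.0])
      for it in range(20):
          r = residual(u)
          if np.linalg.norm(r) < 1e-10:
              break
          J = LinearOperator((2, 2), matvec=lambda v: jac_vec(u, v))
          du, info = gmres(J, -r)          # inner Krylov solve of J * du = -r
          u = u + du
      print(it, u, residual(u))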

  16. Blurred image recognition by legendre moment invariants

    PubMed Central

    Zhang, Hui; Shu, Huazhong; Han, Guo-Niu; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis

    2010-01-01

    Processing blurred images is a key problem in many image applications. Existing methods to obtain blur invariants which are invariant with respect to centrally symmetric blur are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for the blurred image are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different image noises. The comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than the methods based on geometric or complex moments. PMID:19933003
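
    The quantity underlying the proposed descriptors is the orthogonal Legendre moment of an image. The snippet below computes low-order Legendre moments on the [-1, 1] x [-1, 1] domain as a starting point; the construction of the blur invariants themselves (combinations of these moments) follows the paper and is not reproduced here. The toy image is arbitrary.

      import numpy as np
      from numpy.polynomial.legendre import legval

      def legendre_moment(img, p, q):
          """Legendre moment L_pq of a grayscale image mapped onto [-1, 1] x [-1, 1]."""
          ny, nx = img.shape
          x = np.linspace(-1, 1, nx)
          y = np.linspace(-1, 1, ny)
          Pp = legval(x, [0] * p + [1])    # Legendre polynomial P_p evaluated on x
          Pq = legval(y, [0] * q + [1])    # Legendre polynomial P_q evaluated on y
          norm = (2 * p + 1) * (2 * q + 1) / 4.0
          return norm * np.sum(np.outer(Pq, Pp) * img) * (2.0 / nx) * (2.0 / ny)

      img = np.zeros((64, 64)); img[20:40, 24:44] = 1.0   # toy binary image
      print([legendre_moment(img, p, q) for p, q in [(0, 0), (1, 0), (0, 1), (2, 0)]])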

  17. Development and Verification of the Charring, Ablating Thermal Protection Implicit System Simulator

    NASA Technical Reports Server (NTRS)

    Amar, Adam J.; Calvert, Nathan; Kirk, Benjamin S.

    2011-01-01

    The development and verification of the Charring Ablating Thermal Protection Implicit System Solver (CATPISS) is presented. This work concentrates on the derivation and verification of the stationary grid terms in the equations that govern three-dimensional heat and mass transfer for charring thermal protection systems including pyrolysis gas flow through the porous char layer. The governing equations are discretized according to the Galerkin finite element method (FEM) with first and second order fully implicit time integrators. The governing equations are fully coupled and are solved in parallel via Newton's method, while the linear system is solved via the Generalized Minimum Residual method (GMRES). Verification results from exact solutions and the Method of Manufactured Solutions (MMS) are presented to show spatial and temporal orders of accuracy as well as nonlinear convergence rates.

  18. [The background sky subtraction around the [OIII] line in LAMOST QSO spectra].

    PubMed

    Shi, Zhi-Xin; Comte, Georges; Luo, A-Li; Tu, Liang-Ping; Zhao, Yong-Heng; Wu, Fu-Chao

    2014-11-01

    At present, most sky-subtraction methods focus on the full spectrum rather than on a particular spectral region, especially the background sky around the [OIII] line, which is very important for low-redshift quasars. A new method to precisely subtract sky lines in a local region is proposed in the present paper, which solves the problem that the width of the Hβ-[OIII] line is affected by the background sky subtraction. The experimental results show that, for quasars at different redshifts, the spectral quality is significantly improved using our method relative to the original batch program of LAMOST. It provides a complementary solution for the small part of LAMOST spectra that are not well handled by the LAMOST 2D pipeline. Meanwhile, this method has been used in searching for candidates of double-peaked active galactic nuclei.

  19. Ice Growth Measurements from Image Data to Support Ice Crystal and Mixed-Phase Accretion Testing

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Lynch, Christopher J.

    2012-01-01

    This paper describes the imaging techniques as well as the analysis methods used to measure the ice thickness and growth rate in support of ice-crystal icing tests performed at the National Research Council of Canada (NRC) Research Altitude Test Facility (RATFac). A detailed description of the camera setup, which involves both still and video cameras, as well as the analysis methods using the NASA Spotlight software, are presented. Two cases, one from two different test entries, showing significant ice growth are analyzed in detail describing the ice thickness and growth rate which is generally linear. Estimates of the bias uncertainty are presented for all measurements. Finally some of the challenges related to the imaging and analysis methods are discussed as well as methods used to overcome them.

  20. Ice Growth Measurements from Image Data to Support Ice-Crystal and Mixed-Phase Accretion Testing

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Lynch, Christopher J.

    2012-01-01

    This paper describes the imaging techniques as well as the analysis methods used to measure the ice thickness and growth rate in support of ice-crystal icing tests performed at the National Research Council of Canada (NRC) Research Altitude Test Facility (RATFac). A detailed description of the camera setup, which involves both still and video cameras, as well as the analysis methods using the NASA Spotlight software, are presented. Two cases, one from two different test entries, showing significant ice growth are analyzed in detail describing the ice thickness and growth rate which is generally linear. Estimates of the bias uncertainty are presented for all measurements. Finally some of the challenges related to the imaging and analysis methods are discussed as well as methods used to overcome them.

  1. The uniform asymptotic swallowtail approximation - Practical methods for oscillating integrals with four coalescing saddle points

    NASA Technical Reports Server (NTRS)

    Connor, J. N. L.; Curtis, P. R.; Farrelly, D.

    1984-01-01

    Methods that can be used in the numerical implementation of the uniform swallowtail approximation are described. An explicit expression for the approximation is presented to lowest order, showing that there are three problems which must be overcome in practice before the approximation can be applied to any given problem. It is shown that a recently developed quadrature method can be used for the accurate numerical evaluation of the swallowtail canonical integral and its partial derivatives. Isometric plots of these are presented to illustrate some of their properties. The problem of obtaining the arguments of the swallowtail integral from an analytical function of its argument is considered, and two methods of solving this problem are described. The asymptotic evaluation of the butterfly canonical integral is also addressed.

  2. The developmental stages of the middle phalanx of the third finger (MP3): a sole indicator in assessing the skeletal maturity?

    PubMed

    Madhu, S; Hegde, Amitha M; Munshi, A K

    2003-01-01

    Assessment of skeletal maturity is an integral part of interceptive diagnosis and treatment planning. Present-day methods of skeletal maturity assessment, such as hand-wrist or cervical vertebrae radiographs, are expensive, require elaborate equipment, and involve high radiation exposure, especially for growing children. The present study was thus undertaken to provide a simple and practical method of skeletal maturity assessment using the developmental stages of the middle phalanx of the third finger (MP3) as seen on an IOPA film taken with a standard dental x-ray machine. The results of the study showed that this simple method was highly reliable and could be used as an alternative method to assess the skeletal maturity of growing children.

  3. Propagation Constant of a Rectangular Waveguide Completely Filled with Ferrite Magnetized Longitudinally

    NASA Astrophysics Data System (ADS)

    Sakli, Hedi; Benzina, Hafedh; Aguili, Taoufik; Tao, Jun Wu

    2009-08-01

    This paper presents an analysis of a rectangular waveguide completely filled with longitudinally magnetized ferrite. The analysis is based on the formulation of the transverse operator method (TOM), followed by the application of the Galerkin method, which yields an eigenvalue equation system. The propagation constants of several homogeneous and anisotropic waveguide structures with ferrite have been obtained. The results presented here show that the transverse operator formulation is not only an elegant theoretical form but also a powerful and efficient analysis method, useful for solving a number of propagation problems in electromagnetics. One advantage of this method is its fast convergence. Numerical examples are given for different cases and compared with published results; good agreement is obtained.

  4. Surface Cleaning of Iron Artefacts by Lasers

    NASA Astrophysics Data System (ADS)

    Koh, Y. S.; Sárady, I.

    In this paper the general method and ethics of the laser cleaning technique for conservation are presented. The results of two experiments are also presented: experiment 1 compares the cleaning of rust by an Nd:YAG laser and by micro-blasting, whilst experiment 2 deals with removing the wax coating from iron samples with a TEA CO2 laser. The first experiment showed that cleaning with a pulsed laser and higher photon energy produced a better surface structure than micro-blasting. The second experiment showed how differences in energy density affect the same surface.

  5. A network of automatic atmospherics analyzer

    NASA Technical Reports Server (NTRS)

    Schaefer, J.; Volland, H.; Ingmann, P.; Eriksson, A. J.; Heydt, G.

    1980-01-01

    The design and function of an atmospheric analyzer which uses a computer are discussed. Mathematical models which show the method of measurement are presented. The data analysis and recording procedures of the analyzer are discussed.

  6. The Amateur Scientist.

    ERIC Educational Resources Information Center

    Walker, Jearl

    1989-01-01

    Describes a method for making holograms viewable in white light without the problem of vibrations. Presents diagrams which show the arrangement for construction, viewing procedures, and a device for generating holograms. A list of supplies is included. (RT)

  7. Numerical methods on European option second order asymptotic expansions for multiscale stochastic volatility

    NASA Astrophysics Data System (ADS)

    Canhanga, Betuel; Ni, Ying; Rančić, Milica; Malyarenko, Anatoliy; Silvestrov, Sergei

    2017-01-01

    After Black-Scholes proposed a model for pricing European options in 1973, Cox, Ross and Rubinstein in 1979, and Heston in 1993, showed that the constant volatility assumption made by Black-Scholes was one of the main reasons for the model's inability to capture some market details. Instead of constant volatilities, they introduced stochastic volatilities into the asset dynamics. In 2009, Christoffersen empirically showed "why multifactor stochastic volatility models work so well". Four years later, Chiarella and Ziveyi solved the model proposed by Christoffersen. They considered an underlying asset whose price is governed by two-factor stochastic volatilities of mean reversion type. Applying Fourier transforms, Laplace transforms and the method of characteristics, they presented a semi-analytical formula to compute an approximate price for American options. The heavy computation involved in the Chiarella and Ziveyi approach motivated the authors of this paper, in 2014, to investigate another methodology for computing European option prices on a Christoffersen-type model. Using the first and second order asymptotic expansion method, we presented a closed-form solution for European options and provided experimental and numerical studies investigating the accuracy of the approximation formulae given by the first order asymptotic expansion. In the present paper we perform experimental and numerical studies for the second order asymptotic expansion and compare the obtained results with the results presented by Chiarella and Ziveyi.

  8. Moving Computational Domain Method and Its Application to Flow Around a High-Speed Car Passing Through a Hairpin Curve

    NASA Astrophysics Data System (ADS)

    Watanabe, Koji; Matsuno, Kenichi

    This paper presents a new method for simulating flows driven by a body traveling without restrictions on its motion or on the size of the region. In the present method, named the "Moving Computational Domain Method", the whole computational domain, including the bodies inside it, moves in physical space without any limit on the region size. Since the entire grid of the computational domain moves according to the movement of the body, the flow solver has to be constructed on a moving grid system, and it is important for the solver to satisfy the physical and geometric conservation laws simultaneously on the moving grid. For this purpose, the Moving-Grid Finite-Volume Method is employed as the flow solver. The present Moving Computational Domain Method makes it possible to simulate flows driven by any kind of body motion in a region of any size while satisfying the physical and geometric conservation laws simultaneously. In this paper, the method is applied to the flow around a high-speed car passing through a hairpin curve. The distinctive flow field generated by the car at the hairpin curve is demonstrated in detail. The results show the promising features of the method.

  9. Spectral solver for multi-scale plasma physics simulations with dynamically adaptive number of moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec

    2015-06-01

    A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% saving of total simulation time in the Landau and two-stream instability test cases, respectively.
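
    A rough illustration of the Hermite-expansion idea in one velocity dimension: project a distribution function onto probabilists' Hermite functions with Gauss-Hermite quadrature and truncate adaptively by dropping negligible coefficients. The normalization convention, quadrature order and tolerance are illustrative choices, not the paper's implementation.

      import numpy as np
      from math import factorial
      from numpy.polynomial.hermite_e import hermegauss, hermeval

      # Expand f(v) ~ sum_n c_n He_n(v) exp(-v**2/2) and truncate adaptively.
      def hermite_coeffs(f, n_max, n_quad=80):
          v, w = hermegauss(n_quad)                 # quadrature for the weight exp(-v**2/2)
          g = f(v) * np.exp(v ** 2 / 2)             # so sum(w * He_n * g) approximates the integral of He_n * f
          return np.array([np.sum(w * hermeval(v, [0] * n + [1]) * g) / (np.sqrt(2 * np.pi) * factorial(n))
                           for n in range(n_max)])

      # Bump-on-tail distribution as a test function (arbitrary parameters)
      f = lambda v: (0.9 * np.exp(-v ** 2 / 2) + 0.1 * np.exp(-(v - 3.0) ** 2 / 2)) / np.sqrt(2 * np.pi)
      c = hermite_coeffs(f, 40)
      kept = np.flatnonzero(np.abs(c) > 1e-8 * np.abs(c).max()).max() + 1
      print("adaptive truncation keeps", kept, "of 40 Hermite modes")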

  10. Facile Preparation of Nanostructured, Superhydrophobic Filter Paper for Efficient Water/Oil Separation

    PubMed Central

    Wang, Jianhua; Wong, Jessica X. H.; Kwok, Honoria; Li, Xiaochun; Yu, Hua-Zhong

    2016-01-01

    In this paper, we present a facile and cost-effective method to obtain superhydrophobic filter paper and demonstrate its application for efficient water/oil separation. By coupling structurally distinct organosilane precursors (e.g., octadecyltrichlorosilane and methyltrichlorosilane) to paper fibers under controlled reaction conditions, we have formulated a simple, inexpensive, and efficient protocol to achieve a desirable superhydrophobic and superoleophilic surface on conventional filter paper. The silanized superhydrophobic filter paper showed nanostructured morphology and demonstrated great separation efficiency (up to 99.4%) for water/oil mixtures. The modified filter paper is stable in both aqueous solutions and organic solvents, and can be reused multiple times. The present study shows that our newly developed binary silanization is a promising method of modifying cellulose-based materials for practical applications, in particular the treatment of industrial waste water and ecosystem recovery. PMID:26982055

  11. Framing Electronic Medical Records as Polylingual Documents in Query Expansion

    PubMed Central

    Huang, Edward W; Wang, Sheng; Lee, Doris Jung-Lin; Zhang, Runshun; Liu, Baoyan; Zhou, Xuezhong; Zhai, ChengXiang

    2017-01-01

    We present a study of electronic medical record (EMR) retrieval that emulates situations in which a doctor treats a new patient. Given a query consisting of a new patient’s symptoms, the retrieval system returns the set of most relevant records of previously treated patients. However, due to semantic, functional, and treatment synonyms in medical terminology, queries are often incomplete and thus require enhancement. In this paper, we present a topic model that frames symptoms and treatments as separate languages. Our experimental results show that this method improves retrieval performance over several baselines with statistical significance. These baselines include methods used in prior studies as well as state-of-the-art embedding techniques. Finally, we show that our proposed topic model discovers all three types of synonyms to improve medical record retrieval. PMID:29854161

  12. Influencing clinicians and healthcare managers: can ROC be more persuasive?

    NASA Astrophysics Data System (ADS)

    Taylor-Phillips, S.; Wallis, M. G.; Duncan, A.; Gale, A. G.

    2010-02-01

    Receiver Operating Characteristic (ROC) analysis provides a reliable and cost-effective performance measurement tool without requiring full clinical trials. However, when ROC analysis shows that performance is statistically superior in one condition than in another, it is difficult to relate this result to effects in practice, or even to determine whether it is clinically significant. In this paper we present two concurrent analyses, using ROC methods alongside single-threshold recall rate data, and suggest that reporting both provides complementary information. Four mammographers read 160 difficult cases (41% malignant) twice, with and without prior mammograms. Lesion location and probability of malignancy were reported for each case and analyzed using JAFROC. Concurrently, each participant chose recall or return to screen for each case. JAFROC analysis showed that the presence of prior mammograms improved performance (p<.05). The single-threshold data showed a trend towards a 26% increase in the number of false positive recalls without prior mammograms (p=.056). If this trend were present throughout the NHS Breast Screening Programme, then discarding prior mammograms would correspond to an increase in recall rate from 4.6% to 5.3%, and 12,414 extra women recalled annually for assessment. Whilst ROC methods account for all possible thresholds of recall and have higher power, providing a single-threshold example of false positive, false negative, and recall rates when reporting results could be more influential for clinicians. This paper discusses whether this is a useful additional method of presenting data, or whether it is misleading and inaccurate.
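
    A small illustration of why the two analyses complement each other: the area under the ROC curve summarizes performance over all possible thresholds, while the recall rate and false-positive count describe the single operating point clinicians actually work at. The ratings below are simulated placeholders, not the study's reader data.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      truth = rng.integers(0, 2, size=160)                     # 1 = malignant (illustrative data)
      scores = truth * 0.6 + rng.normal(0.3, 0.25, size=160)   # reader's probability-of-malignancy ratings

      auc = roc_auc_score(truth, scores)                       # threshold-free summary

      recall_threshold = 0.5                                   # the single operating point used in clinic
      recalled = scores >= recall_threshold
      false_positives = np.sum(recalled & (truth == 0))
      recall_rate = recalled.mean()
      print(f"AUC = {auc:.3f}, recall rate = {recall_rate:.1%}, false-positive recalls = {false_positives}")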

  13. A two-step non-flowcytometry-based naïve B cell isolation method and its application in Staphylococcal enterotoxin B (SEB) presentation.

    PubMed

    Chokeshai-u-saha, Kaj; Buranapraditkun, Supranee; Jacquet, Alain; Nguyen, Catherine; Ruxrungtham, Kiat

    2012-09-01

    To study the role of human naïve B cells in antigen presentation and stimulation to naïve CD4+ T cell, a suitable method to reproducibly isolate sufficient naïve B cells is required. To improve the purity of isolated naive B cells obtained from a conventional one-step magnetic bead method, we added a rosetting step to enrich total B cell isolates from human whole blood samples prior to negative cell sorting by magnetic beads. The acquired naïve B cells were analyzed for phenotypes and for their role in Staphylococcal enterotoxin B (SEB) presentation to naïve CD4+ T cells. The mean (SD) naïve B cell (CD19+/CD27-) purity obtained from this two-step method compared with the one-step method was 97% (1.0) versus 90% (1.2), respectively. This two-step method can be used with a sample of whole blood as small as 10 ml. The isolated naive B cells were phenotypically at a resting state and were able to prime naïve CD4+ T cell activation by Staphylococcal enterotoxin B (SEB) presentation. This two-step non-flow cytometry-based approach improved the purity of isolated naïve B cells compared with conventional one-step magnetic bead method. It also worked well with a small blood volume. In addition, this study showed that the isolated naïve B cells can present a super-antigen "SEB" to activate naïve CD4 cells. These methods may thus be useful for further in vitro characterization of human naïve B cells and their roles as antigen presenting cells in various diseases.

  14. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. But, the spatial and temporal resolution of such devices is limited and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm is demonstrated by synthetic images. A population of 17 temporal and spatial image sequences are utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms the linear and shape-based interpolation statistically significantly. The interpolation method presented is able to generate image sequences with appropriate spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method definitely has advantages over linear and shape-based interpolation.
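
    The general flow-based interpolation idea (estimate a dense displacement field between two neighbouring slices, warp each slice part-way toward the other and blend intensities) can be sketched with OpenCV's Farneback optical flow as below. This is a simplified stand-in for the paper's iterative optical-flow registration, and the symmetric warp/blend is an approximation; inputs are assumed to be 8-bit single-channel slices.

      import numpy as np
      import cv2

      def interpolate_slice(slice_a, slice_b, alpha=0.5):
          """Approximate intermediate slice between two 8-bit grayscale neighbours via optical flow."""
          # Dense flow from slice_a to slice_b (Farneback parameters: pyr_scale, levels, winsize,
          # iterations, poly_n, poly_sigma, flags)
          flow = cv2.calcOpticalFlowFarneback(slice_a, slice_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
          h, w = slice_a.shape
          gx, gy = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
          # Warp A part-way forward and B part-way backward along the flow, then blend intensities
          warped_a = cv2.remap(slice_a, gx + alpha * flow[..., 0], gy + alpha * flow[..., 1], cv2.INTER_LINEAR)
          warped_b = cv2.remap(slice_b, gx - (1 - alpha) * flow[..., 0], gy - (1 - alpha) * flow[..., 1], cv2.INTER_LINEAR)
          blended = (1 - alpha) * warped_a.astype(np.float32) + alpha * warped_b.astype(np.float32)
          return blended.astype(slice_a.dtype)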

  15. Pirogow's Amputation: A Modification of the Operation Method

    PubMed Central

    Bueschges, M.; Muehlberger, T.; Mauss, K. L.; Bruck, J. C.; Ottomann, C.

    2013-01-01

    Introduction. Pirogow's amputation at the ankle presents a valuable alternative to lower leg amputation for patients with the corresponding indications. Although this method offers the ability to stay mobile without the use of a prosthesis, it is rarely performed. This paper proposes a modification of the operation method of the Pirogow amputation. The results of the modified operation method in ten patients were objectified 12 months after the operation using a patient questionnaire (Ankle Score). Material and Methods. We modified the original method by rotating the calcaneus. To fix the calcaneus to the tibia, Kirschner wire, a 3/0 spongiosa tension screw and an external fixator were used. Results. 70% of those questioned who were amputated following the modified Pirogow method indicated an excellent or very good overall result, whereas in the control group (original Pirogow amputation) only 40% reported an excellent or very good result. In addition, the level of pain experienced one year after the operation favoured the group operated on with the modified method. Furthermore, the two groups differed in radiological results, postoperative leg length difference, and postoperative mobility. Conclusion. The modified Pirogow amputation presents a valuable alternative to the original amputation method for patients with the corresponding indications. The benefits are significantly reduced pain, fewer radiological complications, increased mobility without a prosthesis, and a reduction in postoperative leg length difference. PMID:23606976

  16. Biological characteristics and population status of anadromous salmon in southeast Alaska.

    Treesearch

    Karl C. Halupka; Mason D. Bryant; Mary F. Willson; Fred H. Everest

    2000-01-01

    Populations of Pacific salmon (Oncorhynchus spp.) in southeast Alaska and adjacent areas of British Columbia and the Yukon Territory show great variation in biological characteristics. An introduction presents goals and methods common to the series of reviews of regional salmon diversity presented in the five subsequent chapters. Our primary goals were to (1) describe...

  17. Prospecting in Ultracool Dwarfs: Measuring the Metallicities of Mid- and Late-M Dwarfs

    NASA Astrophysics Data System (ADS)

    Mann, Andrew W.; Deacon, Niall R.; Gaidos, Eric; Ansdell, Megan; Brewer, John M.; Liu, Michael C.; Magnier, Eugene A.; Aller, Kimberly M.

    2014-06-01

    Metallicity is a fundamental parameter that contributes to the physical characteristics of a star. The low temperatures and complex molecules present in M dwarf atmospheres make it difficult to measure their metallicities using techniques that have been commonly used for Sun-like stars. Although there has been significant progress in developing empirical methods to measure M dwarf metallicities over the last few years, these techniques have been developed primarily for early- to mid-M dwarfs. We present a method to measure the metallicity of mid- to late-M dwarfs from moderate resolution (R ~ 2000) K-band (≃2.2 μm) spectra. We calibrate our formula using 44 wide binaries containing an F, G, K, or early-M primary of known metallicity and a mid- to late-M dwarf companion. We show that similar features and techniques used for early-M dwarfs are still effective for late-M dwarfs. Our revised calibration is accurate to ~0.07 dex for M4.5-M9.5 dwarfs with -0.58 < [Fe/H] < +0.56 and shows no systematic trends with spectral type, metallicity, or the method used to determine the primary star metallicity. We show that our method gives consistent metallicities for the components of M+M wide binaries. We verify that our new formula works for unresolved binaries by combining spectra of single stars. Lastly, we show that our calibration gives consistent metallicities with the Mann et al. study for overlapping (M4-M5) stars, establishing that the two calibrations can be used in combination to determine metallicities across the entire M dwarf sequence.
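
    The calibration step described above amounts to regressing the metallicity of the FGK primary on spectral feature strengths measured in the M-dwarf companion. The sketch below fits such a linear calibration by least squares; the chosen features, equivalent widths and metallicities are invented placeholders, not the paper's measurements or coefficients.

      import numpy as np

      # Illustrative calibration: [Fe/H] of the primary regressed on K-band feature
      # strengths measured in the M-dwarf companion spectrum (all numbers invented).
      ew_na   = np.array([2.1, 3.4, 2.8, 4.0, 1.9, 3.1])    # Na I equivalent width (Angstrom)
      ew_ca   = np.array([1.0, 1.8, 1.4, 2.2, 0.9, 1.6])    # Ca I equivalent width (Angstrom)
      feh_ref = np.array([-0.3, 0.2, 0.0, 0.4, -0.4, 0.1])  # primary-star metallicity (dex)

      A = np.column_stack([ew_na, ew_ca, np.ones_like(ew_na)])
      coef, *_ = np.linalg.lstsq(A, feh_ref, rcond=None)
      residual_rms = np.sqrt(np.mean((A @ coef - feh_ref) ** 2))
      print("calibration coefficients:", coef, "rms ~ %.2f dex" % residual_rms)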

  18. Visualization of medical data based on EHR standards.

    PubMed

    Kopanitsa, G; Hildebrand, C; Stausberg, J; Englmeier, K H

    2013-01-01

    To organize efficient interaction between a doctor and an EHR, the data have to be presented in the most convenient way. Medical data presentation methods and models must be flexible in order to cover the needs of users with different backgrounds and requirements. Most visualization methods are doctor-oriented; however, there are indications that the involvement of patients can optimize healthcare. The research aims at specifying the state of the art of medical data visualization. The paper analyzes a number of projects and defines requirements for a generic ISO 13606-based data visualization method. It starts with a systematic search for studies on EHR user interfaces. In order to identify best practices, visualization methods were evaluated according to the following criteria: limits of application, customizability, and re-usability. The visualization methods were compared using the specified criteria. The review showed that the analyzed projects can contribute knowledge to the development of a generic visualization method. However, none of them proposed a model that meets all the necessary criteria for a re-usable, standard-based visualization method. The shortcomings were mostly related to the structure of current medical concept specifications. The analysis showed that medical data visualization methods use hardcoded GUIs, which give little flexibility. Medical data visualization therefore has to move from hardcoded user interfaces to generic methods. This requires great effort because current standards are not suitable for organizing the management of visualization data. This contradiction between a generic method and a flexible, user-friendly data layout has to be overcome.

  19. Technologies for Turbofan Noise Reduction

    NASA Technical Reports Server (NTRS)

    Huff, Dennis

    2005-01-01

    An overview presentation of NASA's engine noise research since 1992 is given for subsonic commercial aircraft applications. Highlights are included from the Advanced Subsonic Technology (AST) Noise Reduction Program and the Quiet Aircraft Technology (QAT) project, with emphasis on engine source noise reduction. Noise reduction goals of 10 EPNdB by 2007 and 20 EPNdB by 2022 are reviewed. Fan and jet noise technologies are highlighted from the AST program, including higher bypass ratio propulsion, scarf inlets, forward-swept fans, swept/leaned stators, chevron nozzles, noise prediction methods, and active noise control for fans. Source diagnostic tests for fans and jets that have been completed over the past few years are presented, showing how new flow measurement methods such as Particle Image Velocimetry (PIV) have played a key role in understanding turbulence, the noise generation process, and how to improve noise prediction methods. Tests focused on source decomposition have helped identify which engine components need further noise reduction. The role of Computational AeroAcoustics (CAA) for fan noise prediction is presented. Advanced noise reduction methods such as Herschel-Quincke tubes and trailing edge blowing for fan noise, currently being pursued in the QAT program, are also presented. Highlights are shown from engine validation and flight demonstrations that were done in the late 1990s with Pratt & Whitney on their PW4098 engine and Honeywell on their TFE-731-60 engine. Finally, future propulsion configurations currently being studied that show promise toward meeting NASA's long-term goal of 20 dB noise reduction are shown, including a Dual Fan Engine concept on a Blended Wing Body aircraft.

  20. Injection of thermal and suprathermal seed particles into coronal shocks of varying obliquity

    NASA Astrophysics Data System (ADS)

    Battarbee, M.; Vainio, R.; Laitinen, T.; Hietala, H.

    2013-10-01

    Context. Diffusive shock acceleration in the solar corona can accelerate solar energetic particles to very high energies. Acceleration efficiency is increased by entrapment through self-generated waves, which is highly dependent on the amount of accelerated particles. This, in turn, is determined by the efficiency of particle injection into the acceleration process. Aims: We present an analysis of the injection efficiency at coronal shocks of varying obliquity. We assessed injection through reflection and downstream scattering, including the effect of a cross-shock potential. Both quasi-thermal and suprathermal seed populations were analysed. We present results on the effect of cross-field diffusion downstream of the shock on the injection efficiency. Methods: Using analytical methods, we present applicable injection speed thresholds that were compared with both semi-analytical flux integration and Monte Carlo simulations, which do not resort to binary thresholds. Shock-normal angle θBn and shock-normal velocity Vs were varied to assess the injection efficiency with respect to these parameters. Results: We present evidence of a significant bias of thermal seed particle injection at small shock-normal angles. We show that downstream isotropisation methods affect the θBn-dependence of this result. We show a non-negligible effect caused by the cross-shock potential, and that the effect of downstream cross-field diffusion is highly dependent on boundary definitions. Conclusions: Our results show that for Monte Carlo simulations of coronal shock acceleration a full distribution function assessment with downstream isotropisation through scatterings is necessary to realistically model particle injection. Based on our results, seed particle injection at quasi-parallel coronal shocks can result in significant acceleration efficiency, especially when combined with varying field-line geometry. Appendices are available in electronic form at http://www.aanda.org

  1. Nonlinear features for classification and pose estimation of machined parts from single views

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-10-01

    A new nonlinear feature extraction method is presented for classification and pose estimation of objects from single views. The feature extraction method is called the maximum representation and discrimination feature (MRDF) method. The nonlinear MRDF transformations to use are obtained in closed form, and offer significant advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We consider MRDFs on image data, provide a new 2-stage nonlinear MRDF solution, and show it specializes to well-known linear and nonlinear image processing transforms under certain conditions. We show the use of MRDF in estimating the class and pose of images of rendered solid CAD models of machine parts from single views using a feature-space trajectory neural network classifier. We show new results with better classification and pose estimation accuracy than are achieved by standard principal component analysis and Fukunaga-Koontz feature extraction methods.

  2. Rapid identification of staphylococci by Raman spectroscopy.

    PubMed

    Rebrošová, Katarína; Šiler, Martin; Samek, Ota; Růžička, Filip; Bernatová, Silvie; Holá, Veronika; Ježek, Jan; Zemánek, Pavel; Sokolová, Jana; Petráš, Petr

    2017-11-01

    Clinical treatment of infections caused by the various staphylococcal species differs depending on the actual cause of infection. Therefore, it is necessary to develop a fast and reliable method for the identification of staphylococci. Raman spectroscopy is an optical method used in multiple scientific fields. Recent studies have shown that the method also has potential for use in microbiological research. Our work shows the possibility of identifying staphylococci by Raman spectroscopy. We present a method that enables almost 100% successful identification of 16 of the clinically most important staphylococcal species directly from bacterial colonies grown on a Mueller-Hinton agar plate. We obtained characteristic Raman spectra of 277 staphylococcal strains belonging to 16 species from a 24-hour culture of each strain grown on the Mueller-Hinton agar plate using the Raman instrument. The results show that it is possible to distinguish among the tested species using Raman spectroscopy, and it therefore has great potential for use in routine clinical diagnostics.

  3. Differential expression analysis for RNAseq using Poisson mixed models

    PubMed Central

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny

    2017-01-01

    Abstract Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random-effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
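
    The generative model the abstract describes can be made concrete with a small simulation: counts whose log-rate contains both a correlated random effect (relatedness) and an independent noise term (over-dispersion). This is a sketch of the model structure only, not the MACAU software or its inference algorithm; the relatedness matrix and all parameter values are placeholders.

```python
# Minimal simulation of the Poisson mixed model structure:
#   y_i ~ Poisson(exp(mu + x_i*beta + g_i + e_i)),  g ~ N(0, sigma_g^2 * K),
# with K a relatedness matrix and e_i independent over-dispersion noise.
import numpy as np

rng = np.random.default_rng(1)
n = 100
# Hypothetical relatedness matrix K: blocks of 4 related samples.
K = np.kron(np.eye(n // 4), np.full((4, 4), 0.5)) + 0.5 * np.eye(n)
x = rng.binomial(1, 0.5, size=n)                      # condition indicator
mu, beta = 2.0, 0.8
g = rng.multivariate_normal(np.zeros(n), 0.3 * K)     # correlated random effect
e = rng.normal(scale=0.2, size=n)                     # independent over-dispersion
y = rng.poisson(np.exp(mu + beta * x + g + e))        # observed read counts

# A naive Poisson regression that ignores g and e understates the variance of
# the estimated beta; a Poisson mixed model estimates both variance components.
print(y[:10])
```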

  4. HPLC analysis and standardization of Brahmi vati – An Ayurvedic poly-herbal formulation

    PubMed Central

    Mishra, Amrita; Mishra, Arun K.; Tiwari, Om Prakash; Jha, Shivesh

    2013-01-01

    Objectives The aim of the present study was to standardize Brahmi vati (BV) by simultaneous quantitative estimation of Bacoside A3 and Piperine using an HPLC–UV method. BV is a very important Ayurvedic poly-herbal formulation used to treat epilepsy and mental disorders, containing thirty-eight ingredients including Bacopa monnieri L. and Piper longum L. Materials and methods An HPLC–UV method was developed for the standardization of BV in light of simultaneous quantitative estimation of Bacoside A3 and Piperine, the major constituents of B. monnieri L. and P. longum L., respectively. The developed method was validated on parameters including linearity, precision, accuracy and robustness. Results The HPLC analysis showed a significantly higher amount of Bacoside A3 and Piperine in the in-house sample of BV than in all three different marketed samples of the same. Results showed variations in the amounts of Bacoside A3 and Piperine in the different samples, indicating non-uniformity in their quality, which will lead to differences in their therapeutic effects. Conclusion The outcome of the present investigation underlines the importance of standardization of Ayurvedic formulations. The developed method may be further used to standardize other samples of BV or other formulations containing Bacoside A3 and Piperine. PMID:24396246

  5. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    NASA Astrophysics Data System (ADS)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
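
    The core Monte Carlo idea above can be sketched briefly: simulate many pairs of unrelated red-noise light curves with the assumed power-law spectrum and build the null distribution of the peak cross-correlation. The sketch below uses evenly sampled series and a randomized-phase simulation, which is a simplification of the full Timmer & Koenig prescription and of the paper's treatment of uneven sampling; all parameter values are placeholders.

```python
# Null distribution of peak cross-correlations between unrelated red-noise
# light curves with a power-law PSD ~ f^(-slope).
import numpy as np

def rednoise(n, slope, rng):
    """Evenly sampled Gaussian noise with a power-law spectrum (randomized phases)."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-slope / 2.0)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    x = np.fft.irfft(amp * np.exp(1j * phases), n=n)
    return (x - x.mean()) / x.std()

def peak_ccf(a, b):
    return np.max(np.abs(np.correlate(a, b, mode="full"))) / len(a)

rng = np.random.default_rng(2)
n, slope, n_sims = 256, 2.0, 2000
null_peaks = np.array([peak_ccf(rednoise(n, slope, rng), rednoise(n, slope, rng))
                       for _ in range(n_sims)])
# Steep spectra routinely produce large spurious peaks; e.g. the 99th percentile:
print(np.percentile(null_peaks, 99))
```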

  6. Thermodynamic integration from classical to quantum mechanics.

    PubMed

    Habershon, Scott; Manolopoulos, David E

    2011-12-14

    We present a new method for calculating quantum mechanical corrections to classical free energies, based on thermodynamic integration from classical to quantum mechanics. In contrast to previous methods, our method is numerically stable even in the presence of strong quantum delocalization. We first illustrate the method and its relationship to a well-established method with an analysis of a one-dimensional harmonic oscillator. We then show that our method can be used to calculate the quantum mechanical contributions to the free energies of ice and water for a flexible water model, a problem for which the established method is unstable. © 2011 American Institute of Physics
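
    For reference, the generic thermodynamic-integration identity underlying such a calculation is shown below; the specific λ-path from the classical to the quantum (path-integral) system is the paper's choice and is not reproduced here.

```latex
% Generic thermodynamic-integration identity (illustrative form only):
\begin{equation}
  \Delta F_{\mathrm{cl}\to\mathrm{qm}}
  = \int_0^1 \left\langle
      \frac{\partial H_\lambda}{\partial \lambda}
    \right\rangle_{\lambda} \, d\lambda ,
  \qquad
  H_{\lambda=0} = H_{\mathrm{classical}}, \quad
  H_{\lambda=1} = H_{\mathrm{quantum}} .
\end{equation}
```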

  7. Improved methods for predicting peptide binding affinity to MHC class II molecules.

    PubMed

    Jensen, Kamilla Kjaergaard; Andreatta, Massimo; Marcatili, Paolo; Buus, Søren; Greenbaum, Jason A; Yan, Zhen; Sette, Alessandro; Peters, Bjoern; Nielsen, Morten

    2018-07-01

    Major histocompatibility complex class II (MHC-II) molecules are expressed on the surface of professional antigen-presenting cells where they display peptides to T helper cells, which orchestrate the onset and outcome of many host immune responses. Understanding which peptides will be presented by the MHC-II molecule is therefore important for understanding the activation of T helper cells and can be used to identify T-cell epitopes. We here present updated versions of two MHC-II-peptide binding affinity prediction methods, NetMHCII and NetMHCIIpan. These were constructed using an extended data set of quantitative MHC-peptide binding affinity data obtained from the Immune Epitope Database covering HLA-DR, HLA-DQ, HLA-DP and H-2 mouse molecules. We show that training with this extended data set improved the performance for peptide binding predictions for both methods. Both methods are publicly available at www.cbs.dtu.dk/services/NetMHCII-2.3 and www.cbs.dtu.dk/services/NetMHCIIpan-3.2. © 2018 John Wiley & Sons Ltd.

  8. Chatter detection in milling process based on VMD and energy entropy

    NASA Astrophysics Data System (ADS)

    Liu, Changfu; Zhu, Lida; Ni, Chenbing

    2018-05-01

    This paper presents a novel approach to detecting milling chatter based on Variational Mode Decomposition (VMD) and energy entropy. VMD has already been employed for feature extraction from non-stationary signals. Parameters such as the number of modes (K) and the quadratic penalty (α) need to be selected empirically when a raw signal is decomposed by VMD. To solve the problem of how to select K and α, an automatic selection method for the VMD parameters based on kurtosis is proposed in this paper. When chatter occurs in the milling process, energy is absorbed into the chatter frequency bands. To detect the chatter frequency bands automatically, a chatter detection method based on energy entropy is presented. A vibration signal containing chatter frequencies is simulated, and three groups of experiments representing three cutting conditions are conducted. To verify the effectiveness of the method presented in this paper, chatter feature extraction has been successfully applied to the simulated and experimental signals. The simulation and experimental results show that the proposed method can effectively detect chatter.
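
    The energy-entropy indicator can be sketched in a few lines. The VMD step itself is assumed to have been performed elsewhere (for example with a third-party VMD implementation); here `modes` is simply an array of K decomposed signals, and the toy data are placeholders.

```python
# Shannon entropy of the normalized per-mode energies of a VMD decomposition.
import numpy as np

def energy_entropy(modes):
    """modes: (K, N) array of band-limited modes.
    When chatter concentrates energy in a few frequency bands, the energy
    distribution becomes peaked and the entropy drops."""
    energies = np.sum(modes ** 2, axis=1)
    p = energies / energies.sum()
    return -np.sum(p * np.log(p + 1e-12))

# Toy usage: stable cutting ~ energy spread across modes; chatter ~ concentrated.
rng = np.random.default_rng(3)
stable = rng.normal(size=(4, 1000))
chatter = stable.copy()
chatter[1] *= 10.0                       # one mode dominates
print(energy_entropy(stable), energy_entropy(chatter))
```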

  9. Enhancing Learning Management Systems Utility for Blind Students: A Task-Oriented, User-Centered, Multi-Method Evaluation Technique

    ERIC Educational Resources Information Center

    Babu, Rakesh; Singh, Rahul

    2013-01-01

    This paper presents a novel task-oriented, user-centered, multi-method evaluation (TUME) technique and shows how it is useful in providing a more complete, practical and solution-oriented assessment of the accessibility and usability of Learning Management Systems (LMS) for blind and visually impaired (BVI) students. Novel components of TUME…

  10. A centrifugal method for the evaluation of polymer membranes for reverse osmosis

    NASA Technical Reports Server (NTRS)

    Hollahan, J. R.; Wydeven, T.; Mccullough, R. P.

    1973-01-01

    A rapid and simple method employing the laboratory centrifuge shows promise for evaluation of membrane performance during reverse osmosis. Results are presented for cellulose acetate membranes for rejection of salt and urea dissolved solids. Implications of the study are to rapid screening of membrane performance, use in laboratories with limited facilities, and possible space waste water purification.

  11. Invariant 2D object recognition using the wavelet transform and structured neural networks

    NASA Astrophysics Data System (ADS)

    Khalil, Mahmoud I.; Bayoumi, Mohamed M.

    1999-03-01

    This paper applies the dyadic wavelet transform and the structured neural networks approach to recognize 2D objects under translation, rotation, and scale transformation. Experimental results are presented and compared with traditional methods. The experimental results showed that this refined technique successfully classified the objects and outperformed some traditional methods especially in the presence of noise.

  12. Flow Injection Technique for Biochemical Analysis with Chemiluminescence Detection in Acidic Media

    PubMed Central

    Chen, Jing; Fang, Yanjun

    2007-01-01

    A review with 90 references is presented to show the development of acidic chemiluminescence methods for biochemical analysis by use of flow injection technique in the last 10 years. A brief discussion of both the chemiluminescence and flow injection technique is given. The proposed methods for biochemical analysis are described and compared according to the used chemiluminescence system.

  13. Various Solution Methods, Accompanied by Dynamic Investigation, for the Same Problem as a Means for Enriching the Mathematical Toolbox

    ERIC Educational Resources Information Center

    Oxman, Victor; Stupel, Moshe

    2018-01-01

    A geometrical task is presented with multiple solutions using different methods, in order to show the connection between various branches of mathematics and to highlight the importance of providing the students with an extensive 'mathematical toolbox'. Investigation of the property that appears in the task was carried out using a computerized tool.

  14. Various solution methods, accompanied by dynamic investigation, for the same problem as a means for enriching the mathematical toolbox

    NASA Astrophysics Data System (ADS)

    Oxman, Victor; Stupel, Moshe

    2018-04-01

    A geometrical task is presented with multiple solutions using different methods, in order to show the connection between various branches of mathematics and to highlight the importance of providing the students with an extensive 'mathematical toolbox'. Investigation of the property that appears in the task was carried out using a computerized tool.

  15. A probabilistic approach to composite micromechanics

    NASA Technical Reports Server (NTRS)

    Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.

    1988-01-01

    Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.
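
    The Monte Carlo idea can be illustrated with a toy propagation of uncertain constituent properties through a simple micromechanics relation. The distributions and the rule of mixtures below are illustrative assumptions, not the specific micromechanics equations of the report.

```python
# Monte Carlo propagation of fiber/matrix property scatter to the longitudinal
# ply modulus E11 via the rule of mixtures.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
Ef = rng.normal(230e9, 10e9, n)     # fiber modulus [Pa], assumed scatter
Em = rng.normal(3.5e9, 0.2e9, n)    # matrix modulus [Pa]
Vf = rng.normal(0.60, 0.02, n)      # fiber volume fraction

E11 = Vf * Ef + (1.0 - Vf) * Em     # rule of mixtures

print(f"E11 mean = {E11.mean():.3e} Pa, std = {E11.std():.3e} Pa")
# Regressing E11 on (Ef, Em, Vf) indicates which input uncertainty correlates
# most strongly with the response, in the spirit of the regression results.
```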

  16. Fly ash carbon passivation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Count, Robert B; Baltrus, John P; Kern, Douglas G

    A thermal method to passivate the carbon and/or other components in fly ash significantly decreases adsorption; the passivated carbon remains in the fly ash. Heating the fly ash to about 500 to 800 degrees C under inert gas conditions sharply decreases the amount of surfactant adsorbed by the fly ash recovered after thermal treatment, even though the carbon content remains in the fly ash. Using oxygen and inert gas mixtures, the present invention shows that thermal treatment to about 500 degrees C also sharply decreases the surfactant adsorption of the recovered fly ash even though most of the carbon remains intact. Thermal treatment to about 800 degrees C under the same oxidative conditions likewise shows a sharp decrease in surfactant adsorption of the recovered fly ash, in this case because the carbon has been removed; this experiment simulates the various "carbon burnout" methods and is not claimed as part of this method. The present invention provides a thermal method of deactivating high-carbon fly ash toward adsorption of AEAs while retaining the fly ash carbon. The fly ash can be used, for example, as a partial Portland cement replacement in air-entrained concrete, in conductive and other concretes, and for other applications.

  17. HPLC analysis and standardization of Brahmi vati - An Ayurvedic poly-herbal formulation.

    PubMed

    Mishra, Amrita; Mishra, Arun K; Tiwari, Om Prakash; Jha, Shivesh

    2013-09-01

    The aim of the present study was to standardize Brahmi vati (BV) by simultaneous quantitative estimation of Bacoside A3 and Piperine using an HPLC-UV method. BV is a very important Ayurvedic poly-herbal formulation used to treat epilepsy and mental disorders, containing thirty-eight ingredients including Bacopa monnieri L. and Piper longum L. An HPLC-UV method was developed for the standardization of BV in light of simultaneous quantitative estimation of Bacoside A3 and Piperine, the major constituents of B. monnieri L. and P. longum L., respectively. The developed method was validated on parameters including linearity, precision, accuracy and robustness. The HPLC analysis showed a significantly higher amount of Bacoside A3 and Piperine in the in-house sample of BV than in all three different marketed samples of the same. Results showed variations in the amounts of Bacoside A3 and Piperine in the different samples, indicating non-uniformity in their quality, which will lead to differences in their therapeutic effects. The outcome of the present investigation underlines the importance of standardization of Ayurvedic formulations. The developed method may be further used to standardize other samples of BV or other formulations containing Bacoside A3 and Piperine.

  18. New Computational Methods for the Prediction and Analysis of Helicopter Noise

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    This paper describes several new methods to predict and analyze rotorcraft noise. These methods are: 1) a combined computational fluid dynamics and Kirchhoff scheme for far-field noise predictions, 2) parallel computer implementation of the Kirchhoff integrations, 3) audio and visual rendering of the computed acoustic predictions over large far-field regions, and 4) acoustic tracebacks to the Kirchhoff surface to pinpoint the sources of the rotor noise. The paper describes each method and presents sample results for three test cases. The first case consists of in-plane high-speed impulsive noise and the other two cases show idealized parallel and oblique blade-vortex interactions. The computed results show good agreement with available experimental data but convey much more information about the far-field noise propagation. When taken together, these new analysis methods exploit the power of new computer technologies and offer the potential to significantly improve our prediction and understanding of rotorcraft noise.

  19. Preparation of samples for leaf architecture studies, a method for mounting cleared leaves.

    PubMed

    Vasco, Alejandra; Thadeo, Marcela; Conover, Margaret; Daly, Douglas C

    2014-09-01

    Several recent waves of interest in leaf architecture have shown an expanding range of approaches and applications across a number of disciplines. Despite this increased interest, examination of existing archives of cleared and mounted leaves shows that current methods for mounting, in particular, yield unsatisfactory results and deterioration of samples over relatively short periods. Although techniques for clearing and staining leaves are numerous, published techniques for mounting leaves are scarce. • Here we present a complete protocol and recommendations for clearing, staining, and imaging leaves, and, most importantly, a method to permanently mount cleared leaves. • The mounting protocol is faster than other methods, inexpensive, and straightforward; moreover, it yields clear and permanent samples that can easily be imaged, scanned, and stored. Specimens mounted with this method preserve well, with leaves that were mounted more than 35 years ago showing no signs of bubbling or discoloration.

  20. Generalized sample entropy analysis for traffic signals based on similarity measure

    NASA Astrophysics Data System (ADS)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper, a modified method of generalized sample entropy combined with surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on a similarity distance, matches signal patterns in a different way and reveals distinct complexity behaviors. Simulations are conducted on synthetic data and traffic signals to provide a comparative study and to show the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. First, it overcomes the limitation concerning the relationship between the dimension parameter and the length of the series. Second, the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.
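
    For reference, a standard implementation of conventional sample entropy is sketched below to make the quantity being generalized concrete. The paper's modification replaces the Chebyshev template match with a different similarity measure; that variant is not reproduced here.

```python
# Conventional sample entropy SampEn(m, r) of a 1-D series.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """r is given as a fraction of the series standard deviation."""
    x = np.asarray(x, dtype=float)
    r = r * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (no self-matches).
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    B = count_matches(m)        # matches of length m
    A = count_matches(m + 1)    # matches of length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(5)
print(sample_entropy(rng.normal(size=1000)))             # irregular: higher value
print(sample_entropy(np.sin(np.linspace(0, 60, 1000))))  # regular: lower value
```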

  1. Reconstruction method for running shape of rotor blade considering nonlinear stiffness and loads

    NASA Astrophysics Data System (ADS)

    Wang, Yongliang; Kang, Da; Zhong, Jingjun

    2017-10-01

    The aerodynamic and centrifugal loads acting on a rotating blade deform the blade relative to its shape at rest. Accurate prediction of the running blade configuration plays a significant role in examining and analyzing turbomachinery performance. Considering nonlinear stiffness and loads, a reconstruction method is presented to address the transformation of a rotating blade from the cold to the hot state. When calculating the blade deformations, the blade stiffness and load conditions are updated simultaneously as the blade shape varies. The reconstruction procedure is iterated until a converged hot blade shape is obtained. This method has been employed to determine the operating blade shapes of a test rotor blade and the Stage 37 rotor blade. The calculated results are compared with experiments. The results show that the proposed method for blade operating shape prediction is effective. The studies also show that this method can improve the precision of finite element analysis and aerodynamic performance analysis.
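
    The iteration described above is, in essence, a fixed-point loop in which both the stiffness and the loads are re-evaluated on the current shape. The one-degree-of-freedom toy below is purely illustrative; the stiffness and load expressions are invented placeholders, not the blade model of the paper.

```python
# Toy fixed-point iteration from the cold shape to a converged "hot" shape:
# both stiffness k(u) and load f(u) depend on the current deflection u.
def running_shape(u0=0.0, tol=1e-10, max_iter=100):
    u = u0
    for _ in range(max_iter):
        k = 1.0e4 * (1.0 + 0.05 * u)     # shape-dependent (nonlinear) stiffness
        f = 500.0 + 20.0 * u             # shape-dependent aero/centrifugal load
        u_new = f / k                    # solve k(u) * u_new = f(u)
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

print(running_shape())   # converged deflection of the toy blade
```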

  2. Iodine Absorption Cells Purity Testing.

    PubMed

    Hrabina, Jan; Zucco, Massimo; Philippe, Charles; Pham, Tuan Minh; Holá, Miroslava; Acef, Ouali; Lazar, Josef; Číp, Ondřej

    2017-01-06

    This article deals with the evaluation of the chemical purity of iodine-filled absorption cells and the optical frequency references used for the frequency locking of laser standards. We summarize the recent trends and progress in absorption cell technology and we focus on methods for iodine cell purity testing. We compare two independent experimental systems based on the laser-induced fluorescence method, showing an improvement of measurement uncertainty by introducing a compensation system reducing unwanted influences. We show the advantages of this technique, which is relatively simple and does not require extensive hardware equipment. As an alternative to the traditionally used methods we propose an approach of hyperfine transitions' spectral linewidth measurement. The key characteristic of this method is demonstrated on a set of testing iodine cells. The relationship between laser-induced fluorescence and transition linewidth methods will be presented as well as a summary of the advantages and disadvantages of the proposed technique (in comparison with traditional measurement approaches).

  3. Iodine Absorption Cells Purity Testing

    PubMed Central

    Hrabina, Jan; Zucco, Massimo; Philippe, Charles; Pham, Tuan Minh; Holá, Miroslava; Acef, Ouali; Lazar, Josef; Číp, Ondřej

    2017-01-01

    This article deals with the evaluation of the chemical purity of iodine-filled absorption cells and the optical frequency references used for the frequency locking of laser standards. We summarize the recent trends and progress in absorption cell technology and we focus on methods for iodine cell purity testing. We compare two independent experimental systems based on the laser-induced fluorescence method, showing an improvement of measurement uncertainty by introducing a compensation system reducing unwanted influences. We show the advantages of this technique, which is relatively simple and does not require extensive hardware equipment. As an alternative to the traditionally used methods we propose an approach of hyperfine transitions’ spectral linewidth measurement. The key characteristic of this method is demonstrated on a set of testing iodine cells. The relationship between laser-induced fluorescence and transition linewidth methods will be presented as well as a summary of the advantages and disadvantages of the proposed technique (in comparison with traditional measurement approaches). PMID:28067834

  4. Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning.

    PubMed

    Yang, Yimin; Wu, Q M Jonathan

    2016-11-01

    The extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feedforward neural networks, provides efficient unified learning solutions for clustering, regression, and classification applications. It presents competitive accuracy with superb efficiency in many applications. However, the ELM with a subnetwork-node architecture has not attracted much research attention. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally only work for one type of problem. This paper studies the general architecture of the multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning; and 2) experimental results on ten image datasets and 16 classification datasets show that the proposed ML-ELM with subnetwork nodes performs competitively with, or much better than, other conventional feature learning methods.
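
    For context, the base single-hidden-layer ELM is sketched below: random, untrained hidden weights followed by a closed-form least-squares solve for the output weights. The multilayer and subnetwork-node extensions of the paper are not reproduced here, and the toy data are placeholders.

```python
# Minimal single-hidden-layer extreme learning machine.
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))  # random, fixed
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # random hidden-layer features
        self.beta = np.linalg.pinv(H) @ Y       # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy regression usage:
rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
model = ELM().fit(X, y)
print(np.mean((model.predict(X) - y) ** 2))     # training mean squared error
```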

  5. Establishment of an efficient transformation system for Pleurotus ostreatus.

    PubMed

    Lei, Min; Wu, Xiangli; Zhang, Jinxia; Wang, Hexiang; Huang, Chenyang

    2017-11-21

    Pleurotus ostreatus is widely cultivated worldwide, but the lack of an efficient transformation system regarding its use restricts its genetic research. The present study developed an improved and efficient Agrobacterium tumefaciens-mediated transformation method in P. ostreatus. Four parameters were optimized to obtain the most efficient transformation method. The strain LBA4404 was the most suitable for the transformation of P. ostreatus. A bacteria-to-protoplast ratio of 100:1, an acetosyringone (AS) concentration of 0.1 mM, and 18 h of co-culture showed the best transformation efficiency. The hygromycin B phosphotransferase gene (HPH) was used as the selective marker, and EGFP was used as the reporter gene in this study. Southern blot analysis combined with EGFP fluorescence assay showed positive results, and mitotic stability assay showed that more than 75% transformants were stable after five generations. These results showed that our transformation method is effective and stable and may facilitate future genetic studies in P. ostreatus.

  6. [Measurement of free urinary cortisol and cortisone using liquid chromatography associated with tandem mass spectrometry method].

    PubMed

    Vieira, José Gilberto H; Nakamura, Odete H; Carvalho, Valdemir M

    2005-04-01

    Free urinary cortisol (UFF) measurement is one of the most useful screening tests for Cushing's syndrome. The immunoassays employed today by most clinical laboratories present limitations, especially concerning specificity. These limitations restrain widespread application of the method, as well as the comparison of results obtained with different methods. We present the development and characterization of a UFF and cortisone method based on liquid chromatography and tandem mass spectrometry (LC-MS/MS). A 200 microL aliquot from a 24 h urine sample is mixed with a solution containing a known quantity of deuterated cortisol and extracted on-line in solid phase (C18). The eluate is transferred to a second C18 column (Phenomenex Luna, 3 µm, 50 x 2 mm) and the isocratic-mode elution profile is applied directly to a Quattro Micro tandem mass spectrometer operating in positive-mode atmospheric pressure chemical ionization (APCI). The entire process is automated and quantification is performed by isotopic dilution, based on the peak area ratios of the analyte and the deuterated internal standard. The specificity study showed that all the steroids tested presented cross-reactivity of <1% for cortisol and cortisone. Functional sensitivity is <1 microg/L for both steroids, and the interassay CV is <8%. Recovery and linearity studies were satisfactory, and a comparison of the results obtained with an RIA for UFF and the present method in 98 routine samples showed a correlation of r = 0.838, with the results obtained with LC-MS/MS significantly lower (medians of 22.0 vs. 49.4 microg/24 h for RIA) (P<0.0001). Reference values for cortisol were defined as values between 11 and 43 microg/24 h, compatible with those recently described for similar methods. The concomitant measurement of urinary free cortisone allows the study of the activity of the enzyme 11beta-HSD2 and the diagnosis of the apparent mineralocorticoid excess syndrome. The method represents the first steroid assay of a new generation, based on automated preparative methods and tandem mass spectrometry, described in our country.

  7. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation, with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works as well as the actual level set for the weight function. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases with moving interfaces are presented to demonstrate the method's potential usefulness for arbitrary Lagrangian Eulerian (ALE) methods.

  8. Movie denoising by average of warped lines.

    PubMed

    Bertalmío, Marcelo; Caselles, Vicent; Pardo, Alvaro

    2007-09-01

    Here, we present an efficient method for movie denoising that does not require any motion estimation. The method is based on the well-known fact that averaging several realizations of a random variable reduces the variance. For each pixel to be denoised, we look for close similar samples along the level surface passing through it, and with these similar samples we estimate the denoised pixel. Close similar samples are found by warping lines in spatiotemporal neighborhoods. To that end, we present an algorithm based on a method for epipolar line matching in stereo pairs which has per-line complexity O(N), where N is the number of columns in the image. In this way, when applied to the image sequence, our algorithm is computationally efficient, having a complexity of the order of the total number of pixels. Furthermore, we show that the presented method is unsupervised and is suited to denoising image sequences with additive white noise while respecting the visual details of the movie frames. We have also experimented with other types of noise, with satisfactory results.

  9. Evaluation of use of MPAD trajectory tape and number of orbit points for orbiter mission thermal predictions

    NASA Technical Reports Server (NTRS)

    Vogt, R. A.

    1979-01-01

    The application of the mission planning and analysis division (MPAD) common format trajectory data tape to predicting temperatures for preflight and postflight mission analysis is presented and evaluated. All of the analyses utilized the latest Space Transportation System 1 flight (STS-1) MPAD trajectory tape and the simplified '136 node' midsection/payload bay thermal math model. For the first 6.7 hours of the STS-1 flight profile, transient temperatures are presented for selected nodal locations with the current standard method and with the trajectory tape method. Whether the differences are considered significant or not depends upon the viewpoint. Other transient temperature predictions are also presented. These results were obtained to investigate an initial concern that the predicted temperature differences between the two methods would be caused not only by the inaccuracies of the current method's assumed nominal attitude profile but also by a lack of a sufficient number of orbit points in the current method. Comparison between 6, 12, and 24 orbit point parameters showed a surprising insensitivity to the number of orbit points.

  10. Robust Identification of Alzheimer's Disease subtypes based on cortical atrophy patterns.

    PubMed

    Park, Jong-Yun; Na, Han Kyu; Kim, Sungsoo; Kim, Hyunwook; Kim, Hee Jin; Seo, Sang Won; Na, Duk L; Han, Cheol E; Seong, Joon-Kyung

    2017-03-09

    Accumulating evidence suggests that Alzheimer's disease (AD) is heterogeneous and can be classified into several subtypes. Here, we propose a robust subtyping method for AD based on cortical atrophy patterns and graph theory. We calculated similarities between subjects in their atrophy patterns throughout the whole brain, and clustered subjects with similar atrophy patterns using the Louvain method for modular organization extraction. We applied our method to AD patients recruited at Samsung Medical Center and externally validated it using the AD Neuroimaging Initiative (ADNI) dataset. Our method categorized very mild AD into three clinically distinct subtypes with high reproducibility (>90%): the parietal-predominant (P), medial temporal-predominant (MT), and diffuse (D) atrophy subtypes. The P subtype showed the worst clinical presentation across the cognitive domains, while the MT and D subtypes exhibited relatively mild presentation. The MT subtype revealed more impaired language and executive function compared to the D subtype.
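
    The clustering step described above can be sketched as building a weighted graph from a subject-by-subject similarity matrix and extracting modules with the Louvain method. The sketch assumes networkx >= 2.8 for louvain_communities; the similarity matrix below is a random placeholder with planted groups, not ADNI or Samsung Medical Center data.

```python
# Louvain community detection on a subject-similarity graph.
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

rng = np.random.default_rng(7)
n_subjects = 60
# Placeholder similarity matrix with three planted groups of 20 subjects each.
S = 0.2 * rng.random((n_subjects, n_subjects))
for k in range(3):
    idx = slice(20 * k, 20 * (k + 1))
    S[idx, idx] += 0.6
S = (S + S.T) / 2.0
np.fill_diagonal(S, 0.0)

G = nx.from_numpy_array(S)   # weighted graph; edge attribute 'weight'
communities = louvain_communities(G, weight="weight", seed=0)
print([sorted(c)[:5] for c in communities])   # first members of each cluster
```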

  11. Robust Identification of Alzheimer’s Disease subtypes based on cortical atrophy patterns

    NASA Astrophysics Data System (ADS)

    Park, Jong-Yun; Na, Han Kyu; Kim, Sungsoo; Kim, Hyunwook; Kim, Hee Jin; Seo, Sang Won; Na, Duk L.; Han, Cheol E.; Seong, Joon-Kyung; Weiner, Michael; Aisen, Paul; Petersen, Ronald; Jack, Clifford R.; Jagust, William; Trojanowki, John Q.; Toga, Arthur W.; Beckett, Laurel; Green, Robert C.; Saykin, Andrew J.; Morris, John; Shaw, Leslie M.; Liu, Enchi; Montine, Tom; Thomas, Ronald G.; Donohue, Michael; Walter, Sarah; Gessert, Devon; Sather, Tamie; Jiminez, Gus; Harvey, Danielle; Bernstein, Matthew; Fox, Nick; Thompson, Paul; Schuff, Norbert; Decarli, Charles; Borowski, Bret; Gunter, Jeff; Senjem, Matt; Vemuri, Prashanthi; Jones, David; Kantarci, Kejal; Ward, Chad; Koeppe, Robert A.; Foster, Norm; Reiman, Eric M.; Chen, Kewei; Mathis, Chet; Landau, Susan; Cairns, Nigel J.; Householder, Erin; Taylor Reinwald, Lisa; Lee, Virginia; Korecka, Magdalena; Figurski, Michal; Crawford, Karen; Neu, Scott; Foroud, Tatiana M.; Potkin, Steven G.; Shen, Li; Kelley, Faber; Kim, Sungeun; Nho, Kwangsik; Kachaturian, Zaven; Frank, Richard; Snyder, Peter J.; Molchan, Susan; Kaye, Jeffrey; Quinn, Joseph; Lind, Betty; Carter, Raina; Dolen, Sara; Schneider, Lon S.; Pawluczyk, Sonia; Beccera, Mauricio; Teodoro, Liberty; Spann, Bryan M.; Brewer, James; Vanderswag, Helen; Fleisher, Adam; Heidebrink, Judith L.; Lord, Joanne L.; Mason, Sara S.; Albers, Colleen S.; Knopman, David; Johnson, Kris; Doody, Rachelle S.; Villanueva Meyer, Javier; Chowdhury, Munir; Rountree, Susan; Dang, Mimi; Stern, Yaakov; Honig, Lawrence S.; Bell, Karen L.; Ances, Beau; Carroll, Maria; Leon, Sue; Mintun, Mark A.; Schneider, Stacy; Oliver, Angela; Marson, Daniel; Griffith, Randall; Clark, David; Geldmacher, David; Brockington, John; Roberson, Erik; Grossman, Hillel; Mitsis, Effie; de Toledo-Morrell, Leyla; Shah, Raj C.; Duara, Ranjan; Varon, Daniel; Greig, Maria T.; Roberts, Peggy; Albert, Marilyn; Onyike, Chiadi; D'Agostino, Daniel, II; Kielb, Stephanie; Galvin, James E.; Pogorelec, Dana M.; Cerbone, Brittany; Michel, Christina A.; Rusinek, Henry; de Leon, Mony J.; Glodzik, Lidia; de Santi, Susan; Doraiswamy, P. Murali; Petrella, Jeffrey R.; Wong, Terence Z.; Arnold, Steven E.; Karlawish, Jason H.; Wolk, David; Smith, Charles D.; Jicha, Greg; Hardy, Peter; Sinha, Partha; Oates, Elizabeth; Conrad, Gary; Lopez, Oscar L.; Oakley, Maryann; Simpson, Donna M.; Porsteinsson, Anton P.; Goldstein, Bonnie S.; Martin, Kim; Makino, Kelly M.; Ismail, M. Saleem; Brand, Connie; Mulnard, Ruth A.; Thai, Gaby; Mc Adams Ortiz, Catherine; Womack, Kyle; Mathews, Dana; Quiceno, Mary; Diaz Arrastia, Ramon; King, Richard; Weiner, Myron; Martin Cook, Kristen; Devous, Michael; Levey, Allan I.; Lah, James J.; Cellar, Janet S.; Burns, Jeffrey M.; Anderson, Heather S.; Swerdlow, Russell H.; Apostolova, Liana; Tingus, Kathleen; Woo, Ellen; Silverman, Daniel H. 
S.; Lu, Po H.; Bartzokis, George; Graff Radford, Neill R.; Parfitt, Francine; Kendall, Tracy; Johnson, Heather; Farlow, Martin R.; Marie Hake, Ann; Matthews, Brandy R.; Herring, Scott; Hunt, Cynthia; van Dyck, Christopher H.; Carson, Richard E.; Macavoy, Martha G.; Chertkow, Howard; Bergman, Howard; Hosein, Chris; Black, Sandra; Stefanovic, Bojana; Caldwell, Curtis; Robin Hsiung, Ging Yuek; Feldman, Howard; Mudge, Benita; Assaly, Michele; Trost, Dick; Bernick, Charles; Munic, Donna; Kerwin, Diana; Marsel Mesulam, Marek; Lipowski, Kristine; Kuo Wu, Chuang; Johnson, Nancy; Sadowsky, Carl; Martinez, Walter; Villena, Teresa; Scott Turner, Raymond; Johnson, Kathleen; Reynolds, Brigid; Sperling, Reisa A.; Johnson, Keith A.; Marshall, Gad; Frey, Meghan; Yesavage, Jerome; Taylor, Joy L.; Lane, Barton; Rosen, Allyson; Tinklenberg, Jared; Sabbagh, Marwan N.; Belden, Christine M.; Jacobson, Sandra A.; Sirrel, Sherye A.; Kowall, Neil; Killiany, Ronald; Budson, Andrew E.; Norbash, Alexander; Lynn Johnson, Patricia; Obisesan, Thomas O.; Wolday, Saba; Allard, Joanne; Lerner, Alan; Ogrocki, Paula; Hudson, Leon; Fletcher, Evan; Carmichael, Owen; Olichney, John; Kittur, Smita; Borrie, Michael; Lee, T. Y.; Bartha, Rob; Johnson, Sterling; Asthana, Sanjay; Carlsson, Cynthia M.; Preda, Adrian; Nguyen, Dana; Tariot, Pierre; Reeder, Stephanie; Bates, Vernice; Capote, Horacio; Rainka, Michelle; Scharre, Douglas W.; Kataki, Maria; Adeli, Anahita; Zimmerman, Earl A.; Celmins, Dzintra; Brown, Alice D.; Pearlson, Godfrey D.; Blank, Karen; Anderson, Karen; Santulli, Robert B.; Kitzmiller, Tamar J.; Schwartz, Eben S.; Sink, Kaycee M.; Williamson, Jeff D.; Garg, Pradeep; Watkins, Franklin; Ott, Brian R.; Querfurth, Henry; Tremont, Geoffrey; Salloway, Stephen; Malloy, Paul; Correia, Stephen; Rosen, Howard J.; Miller, Bruce L.; Mintzer, Jacobo; Spicer, Kenneth; Bachman, David; Finger, Elizabether; Pasternak, Stephen; Rachinsky, Irina; Rogers, John; Kertesz, Andrew; Pomara, Nunzio; Hernando, Raymundo; Sarrael, Antero; Schultz, Susan K.; Boles Ponto, Laura L.; Shim, Hyungsub; Smith, Karen Elizabeth; Relkin, Norman; Chaing, Gloria; Raudin, Lisa; Smith, Amanda; Fargher, Kristin; Raj, Balebail Ashok

    2017-03-01

    Accumulating evidence suggests that Alzheimer’s disease (AD) is heterogeneous and can be classified into several subtypes. Here, we propose a robust subtyping method for AD based on cortical atrophy patterns and graph theory. We calculated similarities between subjects in their atrophy patterns throughout the whole brain, and clustered subjects with similar atrophy patterns using the Louvain method for modular organization extraction. We applied our method to AD patients recruited at Samsung Medical Center and externally validated it using the AD Neuroimaging Initiative (ADNI) dataset. Our method categorized very mild AD into three clinically distinct subtypes with high reproducibility (>90%): the parietal-predominant (P), medial temporal-predominant (MT), and diffuse (D) atrophy subtypes. The P subtype showed the worst clinical presentation across the cognitive domains, while the MT and D subtypes exhibited relatively mild presentation. The MT subtype revealed more impaired language and executive function compared to the D subtype.

  12. [Text Comprehensibility of Hospital Report Cards].

    PubMed

    Sander, U; Kolb, B; Christoph, C; Emmert, M

    2016-12-01

    Objectives: Recently, the number of hospital report cards that compare the quality of hospitals and present information from German quality reports has greatly increased. The objectives of this study were to a) identify suitable methods for measuring the readability and comprehensibility of hospital report cards, b) obtain reliable information on the comprehensibility of the texts for laymen, c) give recommendations for improvements, and d) recommend public health actions. Methods: The readability and comprehensibility of the texts were tested with a) a computer-aided evaluation of formal text characteristics (the Flesch reading-ease index in its German formula and the first Wiener Sachtextformel), b) an expert-based heuristic analysis of the readability and comprehensibility of the texts (counting technical terms and analysing text simplicity as well as brevity and conciseness using the Hamburg comprehensibility model), and c) a survey in which subjects rated the comprehensibility of individual technical terms and of the presentations and decided in favour of one of the 5 presented clinics on the basis of the better quality data. In addition, the correlation between the results of the text analysis and the results of the survey was tested. Results: The computer-aided evaluations of the texts showed poor comprehensibility values. The assessment of text simplicity using the Hamburg comprehensibility model also showed poor values (-0.3). On average, 6.8% of the words used were technical terms. A review of 10 technical terms revealed that in all cases only a minority of respondents (from 4.4% to 39.1%) knew exactly what was meant by each of them. Most subjects (62.4%) also believed that unclear terms worsened their understanding of the information offered. The correlation analysis showed that presentations with a lower frequency of technical terms and better values for text simplicity were better understood. Conclusion: Determining the frequency of technical terms and assessing text simplicity using the Hamburg comprehensibility model were suitable methods for determining the readability and comprehensibility of presentations of quality indicators. The analysis showed predominantly poor comprehensibility values and indicated the need to improve the texts of report cards. © Georg Thieme Verlag KG Stuttgart · New York.
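
    The two readability measures named above are simple formulas over word, sentence, and syllable counts. The coefficients below are the commonly cited versions of the German (Amstad) Flesch adaptation and the first Wiener Sachtextformel, quoted from memory and worth checking against the original sources; the counts are taken as inputs rather than computed from raw text.

```python
# Sketch of the two German-language readability formulas.
def flesch_german(n_words, n_sentences, n_syllables):
    asl = n_words / n_sentences          # average sentence length (words)
    asw = n_syllables / n_words          # average syllables per word
    return 180.0 - asl - 58.5 * asw      # higher score = easier to read

def wiener_sachtextformel_1(n_words, n_sentences, n_three_plus_syll,
                            n_long_words, n_monosyllabic):
    ms = 100.0 * n_three_plus_syll / n_words   # % words with >= 3 syllables
    sl = n_words / n_sentences                  # mean sentence length (words)
    iw = 100.0 * n_long_words / n_words         # % words with > 6 letters
    es = 100.0 * n_monosyllabic / n_words       # % monosyllabic words
    return 0.1935 * ms + 0.1672 * sl + 0.1297 * iw - 0.0327 * es - 0.875

# Toy usage with made-up counts for a 120-word, 8-sentence passage:
print(flesch_german(120, 8, 210))
print(wiener_sachtextformel_1(120, 8, 30, 40, 45))
```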

  13. Finite-surface method for the Maxwell equations with corner singularities

    NASA Technical Reports Server (NTRS)

    Vinokur, Marcel; Yarrow, Maurice

    1994-01-01

    The finite-surface method for the two-dimensional Maxwell equations in generalized coordinates is extended to treat perfect conductor boundaries with sharp corners. Known singular forms of the grid and the electromagnetic fields in the neighborhood of each corner are used to obtain accurate approximations to the surface and line integrals appearing in the method. Numerical results are presented for a harmonic plane wave incident on a finite flat plate. Comparisons with exact solutions show good agreement.

  14. A Multiphysics Finite Element and Peridynamics Model of Dielectric Breakdown

    DTIC Science & Technology

    2017-09-01

    A method for simulating dielectric breakdown in solid materials is presented that couples electro-quasi-statics, the adiabatic heat equation, and...temperatures or high strains. The Kelvin force computation used in the method is verified against a 1-D solution and the linearization scheme used to treat the...plane problems, a 2-D composite capacitor with a conductive flaw, and a 3-D point–plane problem. The results show that the method is capable of

  15. A novel approach to identifying regulatory motifs in distantly related genomes

    PubMed Central

    Van Hellemont, Ruth; Monsieurs, Pieter; Thijs, Gert; De Moor, Bart; Van de Peer, Yves; Marchal, Kathleen

    2005-01-01

    Although proven successful in the identification of regulatory motifs, phylogenetic footprinting methods still show some shortcomings. To assess these difficulties, most apparent when applying phylogenetic footprinting to distantly related organisms, we developed a two-step procedure that combines the advantages of sequence alignment and motif detection approaches. The results on well-studied benchmark datasets indicate that the presented method outperforms other methods when the sequences become either too long or too heterogeneous in size. PMID:16420672

  16. A New Method for Negative Bias Temperature Instability Assessment in P-Channel Metal Oxide Semiconductor Transistors

    NASA Astrophysics Data System (ADS)

    Djezzar, Boualem; Tahi, Hakim; Benabdelmoumene, Abdelmadjid; Chenouf, Amel; Kribes, Youcef

    2012-11-01

    In this paper, we present a new method, named on-the-fly oxide trap (OTFOT), to extract bias temperature instability (BTI) in MOS transistors. The OTFOT method is based on the charge pumping (CP) technique at low and high frequencies. We emphasize the theoretical concept behind the method, giving clear insight into the ease of use of the OTFOT methodology and demonstrating its viability for characterizing negative BTI (NBTI). Using high and low frequencies alternately, the OTFOT method separates the interface-trap (ΔNit) and border-trap (ΔNbt) (switching oxide-trap) densities independently, as well as their contributions to the threshold voltage shift (ΔVth), without needing additional methods. Experimental results from two experimental scenarios, showing the extraction of NBTI-induced shifts caused by increases in interface and oxide traps, are also presented. In the first scenario, all stresses are performed on the same transistor; it exhibits an artifact value of the exponent n. In the second scenario, each voltage stress is applied to only one transistor; its results show an average n of 0.16, 0.05, and 0.11 for the NBTI-induced ΔNit, ΔNbt, and ΔVth, respectively. Therefore, the OTFOT method can contribute to a further understanding of the behavior of NBTI degradation, especially through the threshold voltage shift components ΔVit and ΔVot caused by interface traps and border traps, respectively.

  17. An n -material thresholding method for improving integerness of solutions in topology optimization

    DOE PAGES

    Watts, Seth; Tortorelli, Daniel A.

    2016-04-10

    It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
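
    The flavor of such a thresholding can be illustrated on volume fractions treated as points on a simplex (like barycentric coordinates). The power-law sharpening below is an illustrative assumption, not the paper's thresholding function, but it shares the key properties: it keeps the fractions summing to one, pushes designs toward pure (integer) materials, and is differentiable.

```python
# Illustrative smooth thresholding of n-material volume fractions.
import numpy as np

def sharpen(volume_fractions, p=3.0):
    """volume_fractions: (..., n_materials) array with rows summing to 1."""
    v = np.asarray(volume_fractions, dtype=float) ** p
    return v / v.sum(axis=-1, keepdims=True)

mix = np.array([[0.50, 0.30, 0.20],
                [0.34, 0.33, 0.33]])
print(sharpen(mix))          # fractions pushed toward the dominant material
print(sharpen(mix, p=8.0))   # stronger thresholding, closer to an integer design
```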

  18. A method for measuring the nonlinear response in dielectric spectroscopy through third harmonics detection.

    PubMed

    Thibierge, C; L'Hôte, D; Ladieu, F; Tourbot, R

    2008-10-01

    We present a high-sensitivity method allowing the measurement of the nonlinear dielectric susceptibility of an insulating material at finite frequency. It has been developed for the study of dynamic heterogeneities in supercooled liquids using dielectric spectroscopy at frequencies 0.05 Hz ≤ f ≤ 3×10⁴ Hz. It relies on the measurement of the third harmonics component of the current flowing out of a capacitor. We first show that the nonlinearities of standard laboratory electronics (amplifiers and voltage sources) lead to limits on the third harmonics measurements that preclude reaching the level needed by our physical goal, a ratio of the third harmonics to the fundamental signal of about 10⁻⁷. We show that reaching such a sensitivity requires a method able to get rid of the nonlinear contributions both of the measuring device (lock-in amplifier) and of the excitation voltage source. A bridge using two sources fulfills only the first of these two requirements, but allows the nonlinearities of the sources to be measured. Our final method is based on a bridge with two plane capacitors characterized by different dielectric layer thicknesses. It gets rid of the source and amplifier nonlinearities because, despite a strong frequency dependence of the capacitor impedance, it is equilibrated at any frequency. We present the first measurements of the physical nonlinear response using our method. Two extensions of the method are suggested.

  19. Three-phase Discussion Sessions.

    ERIC Educational Resources Information Center

    Karr, M. C.; And Others

    1988-01-01

    Describes the procedures, organizational pattern and design of basic soils course used by teaching assistants. Cites studies which support small-group discussion for promoting higher levels of intellectual functioning. Presents tables showing survey evaluation results of this method. (RT)

  20. Applications: Cloud Height at Night.

    ERIC Educational Resources Information Center

    Mathematics Teacher, 1980

    1980-01-01

    The method used at airports in determining the cloud height at night is presented. Several problems, the equation used, and a simple design of an alidade (an instrument that shows cloud heights directly) are also included. (MP)
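
    The equation referred to is presumably the standard triangulation relation: a ceiling-light projector a known horizontal baseline b from the observer shines a beam vertically onto the cloud base, the alidade measures the elevation angle θ of the lighted spot, and the cloud height follows as h = b·tan(θ). As a worked illustration with assumed numbers, a baseline of 1000 ft and a measured angle of 60° give h = 1000 × tan 60° ≈ 1732 ft.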

  1. Photorealistic scene presentation: virtual video camera

    NASA Astrophysics Data System (ADS)

    Johnson, Michael J.; Rogers, Joel Clark W.

    1994-07-01

    This paper presents a low-cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on "a priori" information. It accesses out-the-window "snapshots" from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a "clear-day" video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.

  2. Sustained modelling ability of artificial neural networks in the analysis of two pharmaceuticals (dextropropoxyphene and dipyrone) present in unequal concentrations.

    PubMed

    Cámara, María S; Ferroni, Félix M; De Zan, Mercedes; Goicoechea, Héctor C

    2003-07-01

    An improvement is presented on the simultaneous determination of two active ingredients present in unequal concentrations in injections. The analysis was carried out with spectrophotometric data and non-linear multivariate calibration methods, in particular artificial neural networks (ANNs). The presence of non-linearities, caused by the major analyte concentrations deviating from Beer's law, was confirmed by plotting actual vs. predicted concentrations and observing curvature in the residuals of the concentrations estimated with linear methods. Mixtures of dextropropoxyphene and dipyrone have been analysed by using linear and non-linear partial least-squares (PLS and NPLS) and ANNs. Notwithstanding the high degree of spectral overlap and the occurrence of non-linearities, rapid and simultaneous analysis has been achieved, with reasonably good accuracy and precision. A commercial sample was analysed by using the present methodology, and the obtained results show reasonably good agreement with those obtained by high-performance liquid chromatography (HPLC) and a UV-spectrophotometric comparative method.

  3. A complete study of the precision of the concentric MacLaurin spheroid method to calculate Jupiter's gravitational moments

    NASA Astrophysics Data System (ADS)

    Debras, F.; Chabrier, G.

    2018-01-01

    A few years ago, Hubbard (2012, ApJ, 756, L15; 2013, ApJ, 768, 43) presented an elegant, non-perturbative method, called the concentric MacLaurin spheroid (CMS) method, to calculate with very high accuracy the gravitational moments of a rotating fluid body following a barotropic pressure-density relationship. Having such an accurate method is of great importance for taking full advantage of the Juno mission, and its extremely precise determination of Jupiter's gravitational moments, to better constrain the internal structure of the planet. Recently, several authors have applied this method to the Juno mission with 512 spheroids linearly spaced in altitude. We demonstrate in this paper that such calculations lead to errors larger than Juno's error bars, invalidating the Jupiter models so derived at the level required by Juno's precision. We show that, in order to fulfill Juno's observational constraints, at least 1500 spheroids must be used, with a cubic, square, or exponential repartition, which are the most reliable choices. When using a realistic equation of state instead of a polytrope, we highlight the necessity of properly describing the outermost layers to derive an accurate boundary condition, excluding in particular a zero-pressure outer condition. Provided all these constraints are fulfilled, the CMS method can indeed be used to derive Jupiter models within Juno's present observational constraints. However, we show that the treatment of the outermost layers leads to irreducible errors in the calculation of the gravitational moments and thus in the inferred physical quantities for the planet. We have quantified these errors and evaluated the maximum precision that can be reached with the CMS method in the present and future exploitation of Juno's data.
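
    For readers unfamiliar with the repartitions mentioned above, the sketch below generates linearly, quadratically, cubically, and exponentially spaced spheroid radii; the exact grids used in the paper may differ, so this is only an illustration of how the spacing choices compare.

        import numpy as np

        def spheroid_radii(n, kind="cubic"):
            # Normalized radii of n concentric spheroids (1 at the surface, toward 0 at depth).
            k = np.arange(n)
            if kind == "linear":
                return 1.0 - k / n
            if kind == "square":
                return 1.0 - (k / n) ** 2
            if kind == "cubic":
                return 1.0 - (k / n) ** 3
            if kind == "exponential":
                return np.exp(-5.0 * k / n)            # decay rate is an arbitrary choice
            raise ValueError(kind)

        for kind in ("linear", "cubic", "exponential"):
            r = spheroid_radii(1500, kind)
            print(kind, r[:2], r[-2:])                 # non-linear grids refine the outer layers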

  4. A vessel length-based method to compute coronary fractional flow reserve from optical coherence tomography images.

    PubMed

    Lee, Kyung Eun; Lee, Seo Ho; Shin, Eun-Seok; Shim, Eun Bo

    2017-06-26

    Hemodynamic simulation for quantifying fractional flow reserve (FFR) is often performed in a patient-specific geometry of coronary arteries reconstructed from images from various imaging modalities. Because optical coherence tomography (OCT) images can provide more precise vascular lumen geometry, regardless of stenotic severity, hemodynamic simulation based on OCT images may be effective. The aim of this study is to perform OCT-based FFR simulations by coupling a three-dimensional (3D) computational fluid dynamics (CFD) model, built from geometrically correct OCT images, with a lumped parameter model (LPM) based on vessel lengths extracted from coronary X-ray angiography (CAG) data, and to validate the method clinically. To simulate coronary hemodynamics, we developed a fast and accurate method that combines the CFD model of an OCT-based region of interest (ROI) with an LPM of the coronary microvasculature and veins, where the LPM is based on vessel lengths extracted from CAG images. Based on this vessel length-based approach, we describe a theoretical formulation for the total resistance of the LPM from the 3D CFD model of the ROI. To show the utility of this method, we present calculated examples of FFR from OCT images. To validate the OCT-based FFR calculation (OCT-FFR) clinically, we compared the computed OCT-FFR values for 17 vessels of 13 patients with clinically measured FFR (M-FFR) values. A novel formulation for the total resistance of the LPM is introduced to accurately simulate the 3D CFD model of the ROI. The simulated FFR values compared well with clinically measured ones, showing the accuracy of the method. Moreover, the present method is computationally fast, enabling solutions to be obtained within the hospital workflow.

  5. Biomimetic synthesis of chiral erbium-doped silver/peptide/silica core-shell nanoparticles (ESPN)

    NASA Astrophysics Data System (ADS)

    Mantion, Alexandre; Graf, Philipp; Florea, Ileana; Haase, Andrea; Thünemann, Andreas F.; Mašić, Admir; Ersen, Ovidiu; Rabu, Pierre; Meier, Wolfgang; Luch, Andreas; Taubert, Andreas

    2011-12-01

    Peptide-modified silver nanoparticles have been coated with an erbium-doped silica layer using a method inspired by silica biomineralization. Electron microscopy and small-angle X-ray scattering confirm the presence of an Ag/peptide core and silica shell. The erbium is present as small Er2O3 particles in and on the silica shell. Raman, IR, UV-Vis, and circular dichroism spectroscopies show that the peptide is still present after shell formation and the nanoparticles conserve a chiral plasmon resonance. Magnetic measurements find a paramagnetic behavior. In vitro tests using a macrophage cell line model show that the resulting multicomponent nanoparticles have a low toxicity for macrophages, even on partial dissolution of the silica shell. Electronic supplementary information (ESI) available: Figures S1 to S12, Tables S1 and S2. See DOI: 10.1039/c1nr10930h

  6. Meta-analysis of haplotype-association studies: comparison of methods and empirical evaluation of the literature

    PubMed Central

    2011-01-01

    Background Meta-analysis is a popular methodology in several fields of medical research, including genetic association studies. However, the methods used for meta-analysis of association studies that report haplotypes have not been studied in detail. In this work, methods for performing meta-analysis of haplotype association studies are summarized, compared, and presented in a unified framework, along with an empirical evaluation of the literature. Results We present multivariate methods that use summary-based data as well as methods that use binary and count data in a generalized linear mixed model framework (logistic regression, multinomial regression, and Poisson regression). The methods presented here avoid the inflation of the type I error rate that could result from the traditional approach of comparing a haplotype against the remaining ones, and they can be fitted using standard software. Moreover, formal global tests are presented for assessing the statistical significance of the overall association. Although the methods presented here assume that the haplotypes are directly observed, they can easily be extended to allow for such uncertainty by weighting the haplotypes by their probability. Conclusions An empirical evaluation of the published literature, and a comparison against meta-analyses that use single nucleotide polymorphisms, suggests that meta-analyses of haplotypes include approximately half as many studies and produce significant results twice as often. We show that this excess of statistically significant results stems from the sub-optimal method of analysis used and that, in approximately half of the cases, the statistical significance is refuted if the data are properly re-analyzed. Illustrative examples of code are given in Stata, and it is anticipated that the methods developed in this work will be widely applied in the meta-analysis of haplotype association studies. PMID:21247440

  7. Principle and engineering implementation of 3D visual representation and indexing of medical diagnostic records (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Shi, Liehang; Sun, Jianyong; Yang, Yuanyuan; Ling, Tonghui; Wang, Mingqing; Zhang, Jianguo

    2017-03-01

    Purpose: Due to the generation of a large number of electronic imaging diagnostic records (IDR) year after year in a digital hospital, the IDR has become the main component of medical big data, which brings great value to healthcare services, professionals, and administration. However, the large volume of IDRs in a hospital also brings new challenges to healthcare professionals and services, as there may be too many IDRs per patient for a doctor to review them all in a limited appointment time slot. In this presentation, we present an innovative method that uses an anatomical 3D structure object to visually represent and index the historical medical status of each patient, called Visual Patient (VP), based on the long-term archived electronic IDRs in a hospital, so that a doctor can quickly learn the patient's medical history and quickly locate and retrieve the IDRs of interest within a limited appointment time slot. Method: The engineering implementation of VP was to build a 3D visual representation and indexing system, called the VP system (VPS), including components for natural language processing (NLP) of Chinese, a Visual Index Creator (VIC), and a 3D visual rendering engine. There were three steps in this implementation: (1) an XML-based electronic anatomic structure of the human body was created for each patient and used to visually index the abstract information of each IDR; (2) a number of specifically designed IDR parsing processors were developed and used to extract various kinds of abstract information from IDRs retrieved from hospital information systems; (3) a 3D anatomic rendering object was introduced to visually represent and display the content of the VIO for each patient. Results: The VPS was implemented in a simulated clinical environment including PACS/RIS to show VP instances to doctors. We set up two evaluation scenarios in a hospital radiology department to evaluate whether radiologists accept the VPS and how the VP affects radiologists' efficiency and accuracy in reviewing patients' historical medical records. The statistical results showed that more than 70% of the participating radiologists would like to use the VPS in their radiological imaging services. In comparative testing of the VPS against RIS/PACS for reviewing patients' historical medical records, the statistical results showed that the efficiency of using the VPS was higher than that of using PACS/RIS. New Technologies and Results to be presented: This presentation describes an innovative method, called VP, that uses an anatomical 3D structure object to visually represent and index the historical medical records (IDRs) of each patient, so that a doctor can quickly learn the patient's medical history through the VPS. The evaluation results showed that the VPS performed better than RIS-integrated PACS in the efficiency of reviewing patients' historical medical records. Conclusions: In this presentation, we presented an innovative method called VP that uses an anatomical 3D structure object to visually represent and index the historical IDRs of each patient, and described an engineering implementation of a VPS realizing the major features and functions of VP. We set up two evaluation scenarios in a hospital radiology department to evaluate the VPS, and the evaluation results showed that the VPS performed better than RIS-integrated PACS in the efficiency of reviewing patients' historical medical records.

  8. Online and unsupervised face recognition for continuous video stream

    NASA Astrophysics Data System (ADS)

    Huo, Hongwen; Feng, Jufu

    2009-10-01

    We present a novel online face recognition approach for video stream in this paper. Our method includes two stages: pre-training and online training. In the pre-training phase, our method observes interactions, collects batches of input data, and attempts to estimate their distributions (Box-Cox transformation is adopted here to normalize rough estimates). In the online training phase, our method incrementally improves classifiers' knowledge of the face space and updates it continuously with incremental eigenspace analysis. The performance achieved by our method shows its great potential in video stream processing.
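
    A minimal sketch of the two stages, assuming off-the-shelf components: scipy's Box-Cox transform stands in for the normalization of rough distribution estimates, and scikit-learn's IncrementalPCA stands in for the incremental eigenspace update; the feature data are random placeholders, not face images.

        import numpy as np
        from scipy.stats import boxcox
        from sklearn.decomposition import IncrementalPCA

        rng = np.random.default_rng(0)
        batches = [rng.lognormal(size=(64, 100)) for _ in range(5)]  # placeholder "face" features

        ipca = IncrementalPCA(n_components=10)
        for batch in batches:
            # pre-training style step: normalize each feature with a Box-Cox transform
            normalized = np.column_stack([boxcox(col)[0] for col in batch.T])
            # online step: update the eigenspace with the new batch
            ipca.partial_fit(normalized)

        print(ipca.explained_variance_ratio_.sum())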

  9. A new cannulation method for isolated mitral valve surgery--"apicoaortic-pa" cannulation.

    PubMed

    Wada, J; Komatsu, S; Nakae, S; Kazui, T

    1976-06-01

    The present paper describes experimental and clinical studies of a new method "Apicoaortic-PA" cannulation for mitral valve surgery. Our experimental study showed that this method was more rapid and more physiological for cardiopulmonary bypass. We used this technique in 55 cases of isolated mitral valve surgery with successful results. Our general philosophy of surgical approach to the mitral valve diseases is also discussed. We advocate the utilization of the "Apicoaortic Pulmonary Artery" cannulation method for clinical use in isolated mitral valve surgery through the left thoracotomy.

  10. An integrated algorithm for hypersonic fluid-thermal-structural numerical simulation

    NASA Astrophysics Data System (ADS)

    Li, Jia-Wei; Wang, Jiang-Feng

    2018-05-01

    In this paper, a fluid-structural-thermal integrated method based on the finite volume method is presented. A unified system of integral equations is developed as the governing equations for the physical processes of aero-heating and structural heat transfer. The whole physical field is discretized using an upwind finite volume method. To demonstrate its capability, a numerical simulation of Mach 6.47 flow over a stainless steel cylinder shows good agreement with measured values, and the method dynamically simulates the physical processes of interest. Thus, the integrated algorithm proves to be efficient and reliable.

  11. Acceleration of low order finite element computation with GPUs (Invited)

    NASA Astrophysics Data System (ADS)

    Knepley, M. G.

    2010-12-01

    Considerable effort has been focused on GPU acceleration of high-order spectral element methods and discontinuous Galerkin finite element methods. However, these methods are not universally applicable, and much of the existing FEM software base employs low-order methods. In this talk, we present a formulation of FEM, using the PETSc framework from ANL, which is amenable to GPU acceleration even at very low order. In addition, using the FEniCS system for FEM, we show that the relevant kernels can be automatically generated and optimized using a symbolic manipulation system.

  12. DNA Base-Calling from a Nanopore Using a Viterbi Algorithm

    PubMed Central

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-01-01

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (∼98%), even with a poor signal/noise ratio. PMID:22677395
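
    The decoding step can be illustrated with a generic log-domain Viterbi routine like the sketch below; in the paper's setting the hidden states would be 3-base words and the emissions would be measured blockade currents, whereas here the matrices are small toy placeholders.

        import numpy as np

        def viterbi(log_emission, log_transition, log_prior):
            # Most likely hidden-state path for a hidden Markov model.
            # log_emission: (T, S) per-observation state log-likelihoods
            # log_transition: (S, S) log P(next state | current state)
            T, S = log_emission.shape
            dp = np.empty((T, S))
            back = np.zeros((T, S), dtype=int)
            dp[0] = log_prior + log_emission[0]
            for t in range(1, T):
                scores = dp[t - 1][:, None] + log_transition   # previous state -> current state
                back[t] = scores.argmax(axis=0)
                dp[t] = scores.max(axis=0) + log_emission[t]
            path = [int(dp[-1].argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]

        # toy two-state example
        logE = np.log(np.array([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7]]))
        logA = np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))
        print(viterbi(logE, logA, np.log([0.5, 0.5])))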

  13. Rolling bearing fault diagnosis based on information fusion using Dempster-Shafer evidence theory

    NASA Astrophysics Data System (ADS)

    Pei, Di; Yue, Jianhai; Jiao, Jing

    2017-10-01

    This paper presents a fault diagnosis method for rolling bearings based on information fusion. Acceleration sensors are arranged at different positions to obtain bearing vibration data as diagnostic evidence. Dempster-Shafer (D-S) evidence theory is used to fuse the multi-sensor data to improve diagnostic accuracy. The efficiency of the proposed method is demonstrated on a high-speed train transmission test bench. The experimental results show that the proposed method improves rolling bearing fault diagnosis accuracy compared with traditional signal analysis methods.
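
    As an illustration of the fusion step, the sketch below applies Dempster's rule of combination to two basic probability assignments; the fault hypotheses and mass values are invented for the example and are not taken from the paper.

        def dempster_combine(m1, m2):
            # Dempster's rule: combine two basic probability assignments,
            # each a dict mapping frozenset hypotheses to mass, normalizing out conflict.
            combined, conflict = {}, 0.0
            for a, ma in m1.items():
                for b, mb in m2.items():
                    inter = a & b
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + ma * mb
                    else:
                        conflict += ma * mb
            return {s: v / (1.0 - conflict) for s, v in combined.items()}

        # invented evidence from two vibration sensors about the bearing state
        m1 = {frozenset({"inner"}): 0.6, frozenset({"inner", "outer"}): 0.3,
              frozenset({"inner", "outer", "normal"}): 0.1}
        m2 = {frozenset({"inner"}): 0.5, frozenset({"outer"}): 0.2,
              frozenset({"inner", "outer", "normal"}): 0.3}
        print(dempster_combine(m1, m2))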

  14. A Versatile Method for Nanostructuring Metals, Alloys and Metal Based Composites

    NASA Astrophysics Data System (ADS)

    Gurau, G.; Gurau, C.; Bujoreanu, L. G.; Sampath, V.

    2017-06-01

    A new severe plastic deformation method based on high pressure torsion is described. The method, patented as High Speed High Pressure Torsion (HSHPT), shows wide scope and excellent adaptability, ensuring a large degree of plastic deformation in metals and alloys, even in hard-to-deform or brittle alloys. The paper presents results obtained on aluminium, magnesium, titanium, iron, and copper alloys. In addition, the capability of HSHPT to process metallic composites is described. OM, SEM, TEM, DSC, XRD, and HV investigation methods were employed to confirm the fine and ultrafine structure.

  15. Time-splitting combined with exponential wave integrator fourier pseudospectral method for Schrödinger-Boussinesq system

    NASA Astrophysics Data System (ADS)

    Liao, Feng; Zhang, Luming; Wang, Shanshan

    2018-02-01

    In this article, we formulate an efficient and accurate numerical method for approximating the coupled Schrödinger-Boussinesq (SBq) system. The main features of our method are: (i) the application of a time-splitting Fourier spectral method for the Schrödinger-like equation in the SBq system, and (ii) the use of an exponential wave integrator Fourier pseudospectral method for the spatial derivatives in the Boussinesq-like equation. The scheme is fully explicit and efficient due to the fast Fourier transform. Numerical examples are presented to show the efficiency and accuracy of our method.
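
    A minimal sketch of ingredient (i) alone, applied to a toy linear Schrödinger equation i·u_t = -u_xx + V·u with Strang splitting and FFTs; the Boussinesq part, the exponential wave integrator, and the actual SBq coupling are not shown, and the grid and potential are arbitrary choices.

        import numpy as np

        def strang_step(u, V, k, dt):
            # Strang split step for i u_t = -u_xx + V u: half potential step,
            # full dispersion step in Fourier space, half potential step.
            u = np.exp(-0.5j * dt * V) * u
            u = np.fft.ifft(np.exp(-1j * dt * k ** 2) * np.fft.fft(u))
            return np.exp(-0.5j * dt * V) * u

        N, L, dt = 256, 2 * np.pi, 1e-3
        x = np.linspace(0, L, N, endpoint=False)
        k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
        V = np.cos(x)                                       # toy potential
        u = np.exp(-((x - np.pi) ** 2)) * np.exp(1j * x)    # toy initial wave packet
        for _ in range(1000):
            u = strang_step(u, V, k, dt)
        print(np.sum(np.abs(u) ** 2) * (L / N))             # L2 norm is conserved by the scheme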

  16. A quantitative experimental phantom study on MRI image uniformity.

    PubMed

    Felemban, Doaa; Verdonschot, Rinus G; Iwamoto, Yuri; Uchiyama, Yuka; Kakimoto, Naoya; Kreiborg, Sven; Murakami, Shumei

    2018-05-23

    Our goal was to assess MR image uniformity by investigating aspects influencing said uniformity via a method laid out by the National Electrical Manufacturers Association (NEMA). Six metallic materials embedded in a glass phantom were scanned (i.e. Au, Ag, Al, Au-Ag-Pd alloy, Ti, and Co-Cr alloy), along with a reference image. Sequences included spin echo (SE) and gradient echo (GRE), scanned in three planes (i.e. axial, coronal, and sagittal). Moreover, three surface coil types (i.e. head and neck, brain, and temporomandibular joint coils) and two image correction methods (i.e. surface coil intensity correction, SCIC, and phased array uniformity enhancement, PURE) were employed to evaluate their effectiveness on image uniformity. Image uniformity was assessed using the NEMA peak-deviation non-uniformity method. Results showed that temporomandibular joint coils elicited the least uniform image and that brain coils outperformed head and neck coils when metallic materials were present. Additionally, when metallic materials were present, spin echo outperformed gradient echo, especially for Co-Cr (particularly in the axial plane). Furthermore, both SCIC and PURE improved image uniformity compared to uncorrected images, and SCIC slightly surpassed PURE when metallic materials were present. Lastly, Co-Cr elicited the least uniform image, while other metallic materials generally showed similar patterns (i.e. no significant deviation from images without metallic materials). Overall, a quantitative understanding of the factors influencing MR image uniformity (e.g. coil type, imaging method, metal susceptibility, and post-hoc correction method) is advantageous for optimizing image quality, assists clinical interpretation, and may result in improved medical and dental care.
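
    For reference, one common form of the NEMA peak-deviation measure used above is the percent non-uniformity 100·(Smax − Smin)/(Smax + Smin) over a region of interest; the sketch below computes it for a synthetic image patch, and the exact ROI definition of the standard is not reproduced here.

        import numpy as np

        def peak_deviation_nonuniformity(roi):
            # Percent non-uniformity from the maximum and minimum signal in the ROI.
            s_max, s_min = float(roi.max()), float(roi.min())
            return 100.0 * (s_max - s_min) / (s_max + s_min)

        rng = np.random.default_rng(0)
        roi = 1000 + 50 * rng.standard_normal((64, 64))   # synthetic phantom signal
        print(peak_deviation_nonuniformity(roi))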

  17. An Immersed Boundary-Lattice Boltzmann Method for Simulating Particulate Flows

    NASA Astrophysics Data System (ADS)

    Zhang, Baili; Cheng, Ming; Lou, Jing

    2013-11-01

    A two-dimensional momentum exchange-based immersed boundary-lattice Boltzmann method developed by X.D. Niu et al. (2006) has been extended to three dimensions for solving fluid-particle interaction problems. This method combines the most desirable features of the lattice Boltzmann method and the immersed boundary method by using a regular Eulerian mesh for the flow domain and a Lagrangian mesh for the moving particles in the flow field. The no-slip boundary conditions for the fluid and the particles are enforced by adding a force density term into the lattice Boltzmann equation, and the forcing term is simply calculated from the momentum exchange of the boundary-particle density distribution functions, which are interpolated by Lagrangian polynomials from the underlying Eulerian mesh. This method preserves the advantages of the lattice Boltzmann method in tracking a group of particles and, at the same time, provides an alternative approach to treating solid-fluid boundary conditions. Numerical validations show that the present method is very accurate and efficient. The present method will be further developed to simulate more complex problems with particle deformation, particle-bubble, and particle-droplet interactions.

  18. Analysis of high aspect ratio jet flap wings of arbitrary geometry.

    NASA Technical Reports Server (NTRS)

    Lissaman, P. B. S.

    1973-01-01

    Paper presents a design technique for rapidly computing lift, induced drag, and spanwise loading of unswept jet flap wings of arbitrary thickness, chord, twist, blowing, and jet angle, including discontinuities. Linear theory is used, extending Spence's method for elliptically loaded jet flap wings. Curves for uniformly blown rectangular wings are presented for direct performance estimation. Arbitrary planforms require a simple computer program. Method of reducing wing to equivalent stretched, twisted, unblown planform for hand calculation is also given. Results correlate with limited existing data, and show lifting line theory is reasonable down to aspect ratios of 5.

  19. A modified right-hand-side method for handling large viscosity ratios in the two-fluid Stokes problem

    NASA Astrophysics Data System (ADS)

    Bui, Thi Thu Cuc; Frey, Pascal; Maury, Bertrand

    2008-06-01

    In this Note, we present a method to numerically solve the Stokes equations for incompressible flow between two immiscible fluids with very different viscosities. The finite element systems of equations are solved using Uzawa's method. The stiffness matrix conditioning problems related to the very large viscosity ratios are circumvented using a new iterative scheme. A numerical example is given to show the efficiency of this approach. To cite this article: T.T.C. Bui et al., C. R. Mecanique 336 (2008).

  20. Foveation: an alternative method to simultaneously preserve privacy and information in face images

    NASA Astrophysics Data System (ADS)

    Alonso, Víctor E.; Enríquez-Caldera, Rogerio; Sucar, Luis Enrique

    2017-03-01

    This paper presents a real-time foveation technique proposed as an alternative method for image obfuscation that simultaneously preserves privacy in face de-identification. The relevance of the proposed technique is discussed through a comparative study of the most common distortion methods for face images and an assessment of the performance and effectiveness of privacy protection. All the techniques presented here are evaluated by passing the resulting images through face recognition software. Data utility preservation was evaluated using gender and facial expression classification. Results quantifying the tradeoff between privacy protection and image information preservation at different obfuscation levels are presented. Comparative results using the facial expression subset of the FERET database show that the technique achieves a good tradeoff between privacy and awareness, with a recognition rate of 30% and a classification accuracy as high as 88% obtained from the common figures of merit using the privacy-awareness map.

  1. Quantitative Rainbow Schlieren Deflectometry as a Temperature Diagnostic for Spherical Flames

    NASA Technical Reports Server (NTRS)

    Feikema, Douglas A.

    2004-01-01

    Numerical analysis and experimental results are presented to define a method for quantitatively measuring the temperature distribution of a spherical diffusion flame using Rainbow Schlieren Deflectometry in microgravity. First, a numerical analysis is completed to show the method can suitably determine temperature in the presence of spatially varying species composition. Also, a numerical forward-backward inversion calculation is presented to illustrate the types of calculations and deflections to be encountered. Lastly, a normal gravity demonstration of temperature measurement in an axisymmetric laminar, diffusion flame using Rainbow Schlieren deflectometry is presented. The method employed in this paper illustrates the necessary steps for the preliminary design of a Schlieren system. The largest deflections for the normal gravity flame considered in this paper are 7.4 x 10(-4) radians which can be accurately measured with 2 meter focal length collimating and decollimating optics. The experimental uncertainty of deflection is less than 5 x 10(-5) radians.
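
    In a typical collimated schlieren arrangement the ray displacement at the filter plane is approximately d = f·ε, where f is the focal length of the decollimating optic and ε the deflection angle; with the values quoted above (f = 2 m, ε = 7.4 × 10⁻⁴ rad), this gives d ≈ 1.5 mm, which sets the scale of the rainbow filter needed. This thin-lens relation is quoted here only as a worked illustration of the preliminary design step, not as a result from the paper.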

  2. [Possibilities of the TruScreen for screening of precancer and cancer of the uterine cervix].

    PubMed

    Zlatkov, V

    2009-01-01

    The classic approach to the detection of pre-cancer and cancer of the uterine cervix includes cytological examination, followed by colposcopic assessment of the detected cytological abnormalities. Real-time devices use in-vivo techniques for the measurement, computerized analysis, and classification of different types of cervical tissue. The aim of the present review is to present the technical characteristics and to discuss the diagnostic possibilities of TruScreen, an automated opto-electronic system for cervical screening. Analysis of the diagnostic value of the method reported in the literature for different grades of intraepithelial lesions shows that it has higher sensitivity (67-70%) and lower specificity (81%) in comparison to the Pap test (45-69% sensitivity and 95% specificity). This makes the method suitable for independent primary screening, as well as for adding diagnostic assurance to the cytological method.

  3. Statistical evaluation of fatty acid profile and cholesterol content in fish (common carp) lipids obtained by different sample preparation procedures.

    PubMed

    Spiric, Aurelija; Trbovic, Dejana; Vranic, Danijela; Djinovic, Jasna; Petronijevic, Radivoj; Matekalo-Sverak, Vesna

    2010-07-05

    Studies performed on lipid extraction from animal and fish tissues do not provide information on its influence on the fatty acid composition of the extracted lipids or on the cholesterol content. Data presented in this paper indicate the impact of extraction procedures on the fatty acid profile of fish lipids extracted by the modified Soxhlet and ASE (accelerated solvent extraction) procedures. Cholesterol was also determined by the direct saponification method. Student's paired t-test, used for comparison of the total fat content in the carp population obtained by the two extraction methods, shows that the differences between the total fat content determined by ASE and by the modified Soxhlet method are not statistically significant. Values obtained by three different methods (direct saponification, ASE, and modified Soxhlet), used for determination of the cholesterol content in carp, were compared by one-way analysis of variance (ANOVA). The results show that the modified Soxhlet method gives results which differ significantly from those obtained by direct saponification and ASE, whereas the results obtained by direct saponification and ASE do not differ significantly from each other. The highest cholesterol contents (37.65 to 65.44 mg/100 g) in the analyzed fish muscle were obtained by the direct saponification method, as the less destructive one, followed by ASE (34.16 to 52.60 mg/100 g) and the modified Soxhlet extraction method (10.73 to 30.83 mg/100 g). The modified Soxhlet method for extraction of fish lipids gives higher values for n-6 fatty acids than the ASE method (t_paired = 3.22, t_c = 2.36), while there is no statistically significant difference in the n-3 content between the methods (t_paired = 1.31). The UNSFA/SFA ratio obtained using the modified Soxhlet method is also higher than the ratio obtained using the ASE method (t_paired = 4.88, t_c = 2.36). Results of Principal Component Analysis (PCA) showed that the highest positive impact on the second principal component (PC2) comes from C18:3 n-3 and C20:3 n-6, which are present in higher amounts in the samples treated by the modified Soxhlet extraction, while C22:5 n-3, C20:3 n-3, C22:1, C20:4, C16, and C18 negatively influence the score values of PC2, showing significantly increased levels in the samples treated by the ASE method. Hotelling's paired T-square test, applied to the first three principal components to confirm differences in individual fatty acid content obtained by the ASE and Soxhlet methods in carp muscle, showed a statistically significant difference between these two data sets (T² = 161.308, p < 0.001). Copyright 2010 Elsevier B.V. All rights reserved.

  4. Applications of Space-Filling-Curves to Cartesian Methods for CFD

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.

    2003-01-01

    The proposed paper presents a variety of novel uses of space-filling curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable to general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs, even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.
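
    A minimal sketch of the reordering idea, using a Morton (Z-order) key; the paper may use a different space-filling curve, and the 64×64 cell grid and eight partitions are illustrative choices only.

        import numpy as np

        def morton_key(ix, iy, bits=16):
            # Interleave the bits of 2-D integer cell coordinates (Z-order key).
            key = 0
            for b in range(bits):
                key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
            return key

        cells = [(ix, iy) for ix in range(64) for iy in range(64)]
        order = sorted(range(len(cells)), key=lambda c: morton_key(*cells[c]))
        # single-pass partitioning: contiguous chunks along the curve become subdomains
        parts = np.array_split(np.array(order), 8)
        print([len(p) for p in parts])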

  5. Photonic slab heterostructures based on opals

    NASA Astrophysics Data System (ADS)

    Palacios-Lidon, Elisa; Galisteo-Lopez, Juan F.; Juarez, Beatriz H.; Lopez, Cefe

    2004-09-01

    In this paper the fabrication of photonic slab heterostructures based on artificial opals is presented. The novel method combines high-quality thin-film growth of opals with silica infiltration by chemical vapor deposition through a multi-step process. By varying structural parameters, such as lattice constant, sample thickness, or refractive index, different heterostructures have been obtained. The optical study of these systems, carried out by reflectance and transmittance measurements, shows that the prepared samples are of high quality, as further confirmed by scanning electron microscopy micrographs. The proposed novel method for sample preparation allows a high degree of control over the structural parameters involved, offering the possibility of tuning their photonic behavior. Special attention in the optical response of these materials has been devoted to the study of planar defects embedded in opals, due to their importance in different photonic fields and future technological applications. Reflectance and transmission measurements show a sharp resonance due to localized states associated with the presence of planar defects. A detailed study of the defect mode position and its dependence on defect thickness and on the surrounding photonic crystal is presented, as well as evidence showing the scalability of the problem. Finally, it is also concluded that the proposed method is cheap and versatile, allowing the preparation of opal-based complex structures.

  6. Statistical Methods for Identifying Sequence Motifs Affecting Point Mutations

    PubMed Central

    Zhu, Yicheng; Neeman, Teresa; Yap, Von Bing; Huttley, Gavin A.

    2017-01-01

    Mutation processes differ between types of point mutation, genomic locations, cells, and biological species. For some point mutations, specific neighboring bases are known to be mechanistically influential. Beyond these cases, numerous questions remain unresolved, including: what are the sequence motifs that affect point mutations? How large are the motifs? Are they strand symmetric? And, do they vary between samples? We present new log-linear models that allow explicit examination of these questions, along with sequence logo style visualization to enable identifying specific motifs. We demonstrate the performance of these methods by analyzing mutation processes in human germline and malignant melanoma. We recapitulate the known CpG effect, and identify novel motifs, including a highly significant motif associated with A→G mutations. We show that major effects of neighbors on germline mutation lie within ±2 of the mutating base. Models are also presented for contrasting the entire mutation spectra (the distribution of the different point mutations). We show the spectra vary significantly between autosomes and X-chromosome, with a difference in T→C transition dominating. Analyses of malignant melanoma confirmed reported characteristic features of this cancer, including statistically significant strand asymmetry, and markedly different neighboring influences. The methods we present are made freely available as a Python library https://bitbucket.org/pycogent3/mutationmotif. PMID:27974498

  7. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    NASA Astrophysics Data System (ADS)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the iterative techniques prominently used for solving unconstrained optimization problems due to its simplicity, low memory storage, and good convergence analysis. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under exact and inexact line searches under given conditions. Theoretically, the proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used. Moreover, the NRM1 method performs better under the inexact line search than under the exact line search.
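
    To show the general shape of such a method (not the NRM1 formula itself, which is defined in the paper), the sketch below runs a nonlinear CG iteration with a hybrid PRP/FR beta and an Armijo backtracking (inexact) line search on the Rosenbrock test function.

        import numpy as np

        def hybrid_cg(f, grad, x0, tol=1e-6, max_iter=2000):
            # Nonlinear CG with beta = max(0, min(beta_PRP, beta_FR)) and Armijo backtracking.
            x = np.asarray(x0, dtype=float)
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                if g @ d >= 0:                 # safeguard: restart with steepest descent
                    d = -g
                alpha, fx, slope = 1.0, f(x), g @ d
                while f(x + alpha * d) > fx + 1e-4 * alpha * slope:
                    alpha *= 0.5               # backtrack until the Armijo condition holds
                x_new = x + alpha * d
                g_new = grad(x_new)
                beta_fr = (g_new @ g_new) / (g @ g)
                beta_prp = (g_new @ (g_new - g)) / (g @ g)
                d = -g_new + max(0.0, min(beta_prp, beta_fr)) * d
                x, g = x_new, g_new
            return x

        f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
        grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                                   200 * (x[1] - x[0] ** 2)])
        print(hybrid_cg(f, grad, [-1.2, 1.0]))   # approaches the minimizer (1, 1)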

  8. Reader Reaction On the generalized Kruskal-Wallis test for genetic association studies incorporating group uncertainty

    PubMed Central

    Wu, Baolin; Guan, Weihua

    2015-01-01

    Summary Acar and Sun (2013, Biometrics, 69, 427-435) presented a generalized Kruskal-Wallis (GKW) test for genetic association studies that incorporated the genotype uncertainty and showed its robust and competitive performance compared to existing methods. We present another interesting way to derive the GKW test via a rank linear model. PMID:25351417

  9. Reader reaction on the generalized Kruskal-Wallis test for genetic association studies incorporating group uncertainty.

    PubMed

    Wu, Baolin; Guan, Weihua

    2015-06-01

    Acar and Sun (2013, Biometrics 69, 427-435) presented a generalized Kruskal-Wallis (GKW) test for genetic association studies that incorporated the genotype uncertainty and showed its robust and competitive performance compared to existing methods. We present another interesting way to derive the GKW test via a rank linear model. © 2014, The International Biometric Society.

  10. Spectral simulations of an axisymmetric force-free pulsar magnetosphere

    NASA Astrophysics Data System (ADS)

    Cao, Gang; Zhang, Li; Sun, Sineng

    2016-02-01

    A pseudo-spectral method with an absorbing outer boundary is used to solve a set of time-dependent force-free equations. In this method, both electric and magnetic fields are expanded in terms of the vector spherical harmonic (VSH) functions in spherical geometry and the divergence-free state of the magnetic field is enforced analytically by a projection method. Our simulations show that the Deutsch vacuum solution and the Michel monopole solution can be reproduced well by our pseudo-spectral code. Further, the method is used to present a time-dependent simulation of the force-free pulsar magnetosphere for an aligned rotator. The simulations show that the current sheet in the equatorial plane can be resolved well and the spin-down luminosity obtained in the steady state is in good agreement with the value given by Spitkovsky.

  11. Dual-model automatic detection of nerve-fibres in corneal confocal microscopy images.

    PubMed

    Dabbah, M A; Graham, J; Petropoulos, I; Tavakoli, M; Malik, R A

    2010-01-01

    Corneal confocal microscopy (CCM) imaging is a non-invasive surrogate for detecting, quantifying, and monitoring diabetic peripheral neuropathy. This paper presents an automated method for detecting nerve fibres in CCM images using a dual-model detection algorithm and compares its performance to well-established texture and feature detection methods. The algorithm comprises two separate models, one for the background and another for the foreground (nerve fibres), which work interactively. Our evaluation shows a significant improvement (p ≈ 0) in both error rate and signal-to-noise ratio of this model over the competing methods. The automatic method is also evaluated against manual ground-truth analysis in assessing diabetic neuropathy on the basis of nerve-fibre length, and shows a strong correlation (r = 0.92). Both analyses significantly separate diabetic patients from control subjects (p ≈ 0).

  12. [Lateral chromatic aberrations correction for AOTF imaging spectrometer based on doublet prism].

    PubMed

    Zhao, Hui-Jie; Zhou, Peng-Wei; Zhang, Ying; Li, Chong-Chong

    2013-10-01

    A user-defined surface function method was proposed to model the acousto-optic interaction of an AOTF based on the wave-vector matching principle. Assessment experiments show that this model can achieve accurate ray tracing of the AOTF diffracted beam. In addition, an AOTF imaging spectrometer presents large residual lateral color when the traditional chromatic aberration correction method is adopted. In order to reduce lateral chromatic aberrations, a method based on a doublet prism is proposed. The optical material and angles of the prism are optimized automatically using global optimization with the help of the user-defined AOTF surface. Simulation results show that the proposed method provides great convenience for AOTF imaging spectrometers, reducing the lateral chromatic aberration to less than 0.0003 degrees, an improvement of one order of magnitude, with the spectral image shift effectively corrected.

  13. Automatic brain caudate nuclei segmentation and classification in diagnostic of Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Igual, Laura; Soliva, Joan Carles; Escalera, Sergio; Gimeno, Roger; Vilarroya, Oscar; Radeva, Petia

    2012-12-01

    We present a fully automatic diagnostic imaging test for Attention-Deficit/Hyperactivity Disorder diagnosis assistance based on previously found evidences of caudate nucleus volumetric abnormalities. The proposed method consists of different steps: a new automatic method for external and internal segmentation of caudate based on Machine Learning methodologies; the definition of a set of new volume relation features, 3D Dissociated Dipoles, used for caudate representation and classification. We separately validate the contributions using real data from a pediatric population and show precise internal caudate segmentation and discrimination power of the diagnostic test, showing significant performance improvements in comparison to other state-of-the-art methods. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Temporary Thermocouple Attachment for Thermal/Vacuum Testing at Non-Extreme Temperatures

    NASA Technical Reports Server (NTRS)

    Ungar, Eugene K.; Wright, Sarah E.

    2016-01-01

    Post-test examination and data analysis following a two-week-long vacuum test showed that numerous self-stick thermocouples had become detached from the test article. The thermocouples were reattached with thermally conductive epoxy and the test was repeated to obtain the required data. Because the thermocouple detachment resulted in significant expense and rework, it was decided to investigate the temporary attachment methods used around NASA and to perform a test to assess their efficacy. The present work describes the original test and the analysis that showed the thermocouples had become detached, the temporary thermocouple attachment methods assessed in the retest and in the dedicated attachment test, and a recommendation for attachment methods for future tests.

  15. Time-of-flight PET time calibration using data consistency

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan

    2018-05-01

    This paper presents new data-driven methods for the time-of-flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but that remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.

  16. Sparse-View Ultrasound Diffraction Tomography Using Compressed Sensing with Nonuniform FFT

    PubMed Central

    2014-01-01

    Accurate reconstruction of the object from sparse-view sampling data is an appealing issue for ultrasound diffraction tomography (UDT). In this paper, we present a reconstruction method based on the compressed sensing framework for sparse-view UDT. Due to the piecewise uniform characteristics of anatomical structures, total variation is introduced into the cost function to find a more faithful sparse representation of the object. The inverse problem of UDT is iteratively resolved by conjugate gradient with nonuniform fast Fourier transform. Simulation results show the effectiveness of the proposed method: the main characteristics of the object can be properly presented with only 16 views. Compared to interpolation and multiband methods, the proposed method can provide higher resolution and lower artifacts with the same number of views. The robustness to noise and the computational complexity are also discussed. PMID:24868241

  17. Influence of the Extractive Method on the Recovery of Phenolic Compounds in Different Parts of Hymenaea martiana Hayne

    PubMed Central

    Oliveira, Fernanda Granja da Silva; de Lima-Saraiva, Sarah Raquel Gomes; Oliveira, Ana Paula; Rabêlo, Suzana Vieira; Rolim, Larissa Araújo; Almeida, Jackson Roberto Guedes da Silva

    2016-01-01

    Background: Popularly known as "jatobá," Hymenaea martiana Hayne is a medicinal plant widely used in the Brazilian Northeast for the treatment of various diseases. Objective: The aim of this study was to evaluate the influence of different extractive methods on the production of phenolic compounds from different parts of H. martiana. Materials and Methods: The leaves, bark, fruits, and seeds were dried, pulverized, and submitted to maceration, ultrasound, and percolation extractive methods, which were evaluated for yield, visual aspects, qualitative phytochemical screening, phenolic compound content, and total flavonoids. Results: The highest yields were obtained from maceration of the leaves, which may be related to the contact time between the plant drug and the solvent. The visual aspects of the extracts presented some differences between the extractive methods. The phytochemical screening showed data consistent with other studies of the genus. Both the plant part and the different extractive methods significantly influenced the levels of phenolic compounds, and the highest content was found in maceration of the bark, even higher than the content found previously. Differences in total flavonoid levels were not significant. The highest concentration of total flavonoids was found in the ultrasound extraction of the bark, followed by maceration of the same material. According to the results, the bark of H. martiana presented the highest total flavonoid content. Conclusion: The results demonstrate that both the plant part and the different extractive methods significantly influenced various parameters of the extracts, demonstrating the importance of systematic comparative studies for the development of pharmaceuticals and cosmetics. Summary: The phytochemical screening showed data consistent with other studies of the genus Hymenaea. Both the plant part and the different extractive methods significantly influenced various parameters of the extracts, including the levels of phenolic compounds. The bark of H. martiana presented the highest total phenolic and flavonoid contents. PMID:27695267

  18. A spectrum fractal feature classification algorithm for agriculture crops with hyper spectrum image

    NASA Astrophysics Data System (ADS)

    Su, Junying

    2011-11-01

    A fractal dimension feature analysis method in the spectrum domain is proposed for agricultural crop classification with hyperspectral images. First, a fractal dimension calculation algorithm in the spectrum domain is presented, together with a fast fractal dimension calculation algorithm using the step measurement method. Second, the hyperspectral image classification algorithm and flowchart based on fractal dimension feature analysis in the spectrum domain are presented. Finally, experimental results are given for agricultural crop classification of the FCL1 hyperspectral image set using the proposed method and SAM (spectral angle mapper). The experimental results show that the proposed method obtains better classification results than traditional SAM feature analysis, as it makes fuller use of the spectral information in hyperspectral images to realize precision agricultural crop classification.
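
    The step (divider) measurement idea can be sketched as follows: measure the length of the spectral curve at several step sizes and read a fractal dimension from the slope of log-length versus log-step. The step sizes and test signals below are illustrative assumptions, not the paper's settings.

        import numpy as np

        def spectral_fractal_dimension(spectrum, steps=(1, 2, 4, 8, 16)):
            # Divider/step method: curve length L(k) ~ k**(1 - D), so D = 1 - slope
            # of the log(L) vs log(k) fit, measured along the band-index axis.
            s = np.asarray(spectrum, dtype=float)
            lengths = []
            for k in steps:
                pts = s[::k]
                lengths.append(np.sum(np.hypot(np.diff(pts), k)))
            slope, _ = np.polyfit(np.log(steps), np.log(lengths), 1)
            return 1.0 - slope

        rng = np.random.default_rng(1)
        rough = np.cumsum(rng.standard_normal(512))       # rough test "spectrum"
        smooth = np.sin(np.linspace(0, 4 * np.pi, 512))   # smooth test "spectrum"
        print(spectral_fractal_dimension(rough), spectral_fractal_dimension(smooth))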

  19. Determining the refractive index of particles using glare-point imaging technique

    NASA Astrophysics Data System (ADS)

    Meng, Rui; Ge, Baozhen; Lu, Qieni; Yu, Xiaoxue

    2018-04-01

    A method for measuring the refractive index of a particle from a glare-point image is presented. The spacing of the doublet image of a particle can be determined with high accuracy by using auto-correlation and Gaussian interpolation, and the refractive index is then obtained from the glare-point separation; a factor that may influence the accuracy of the glare-point separation is also explored. Experiments are carried out for three different kinds of particles, including polystyrene latex particles, glass beads, and water droplets, and the measurement accuracy is improved by the data fitting method. The research results show that the method presented in this paper is feasible and beneficial to applications such as spray and atmospheric composition measurements.
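
    The doublet-spacing measurement can be sketched as below: autocorrelate a 1-D intensity profile through the two glare points and locate the first off-centre correlation peak with three-point Gaussian interpolation. The conversion from separation to refractive index depends on the optical geometry and is not shown; the synthetic profile and the minimum-lag guard are illustrative assumptions.

        import numpy as np

        def subpixel_peak(y):
            # Three-point Gaussian interpolation around the discrete maximum of y.
            i = int(np.argmax(y))
            a, b, c = np.log(np.clip(y[i - 1:i + 2], 1e-12, None))
            return i + 0.5 * (a - c) / (a - 2 * b + c)

        def glare_point_separation(profile, min_lag=15):
            # Doublet spacing (in pixels) from the autocorrelation of the profile.
            p = profile - profile.mean()
            ac = np.correlate(p, p, mode="full")[len(p) - 1:]   # lags >= 0
            ac[:min_lag] = 0.0                                  # suppress the zero-lag peak
            return subpixel_peak(ac)

        x = np.arange(256)
        profile = np.exp(-(x - 100.0) ** 2 / 18) + 0.8 * np.exp(-(x - 130.0) ** 2 / 18)
        print(glare_point_separation(profile))                  # ~30 px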

  20. Method to estimate the electron temperature and neutral density in a plasma from spectroscopic measurements using argon atom and ion collisional-radiative models.

    PubMed

    Sciamma, Ella M; Bengtson, Roger D; Rowan, W L; Keesee, Amy; Lee, Charles A; Berisford, Dan; Lee, Kevin; Gentle, K W

    2008-10-01

    We present a method to infer the electron temperature in argon plasmas using a collisional-radiative model for argon ions and measurements of electron density to interpret absolutely calibrated spectroscopic measurements of argon ion (Ar II) line intensities. The neutral density, and hence the degree of ionization of this plasma, can then be estimated using argon atom (Ar I) line intensities and a collisional-radiative model for argon atoms. This method has been tested for plasmas generated on two different devices at the University of Texas at Austin: the helicon experiment and the helimak experiment. We present results that show good correlation with other measurements in the plasma.

  1. Research on social communication network evolution based on topology potential distribution

    NASA Astrophysics Data System (ADS)

    Zhao, Dongjie; Jiang, Jian; Li, Deyi; Zhang, Haisu; Chen, Guisheng

    2011-12-01

    To address the problem of social communication network evolution, topology potential is first introduced to measure the local influence among nodes in networks. Second, from the perspective of topology potential distribution, a method for describing network evolution based on the topology potential distribution is presented, which takes artificial intelligence with uncertainty as its basic theory and the local influence among nodes as its essential quantity. Then, a social communication network is constructed from the Enron email dataset, and the presented method is used to analyze the characteristics of the evolution of this network; some useful conclusions are obtained, implying that the method is effective and showing that the topology potential distribution can effectively describe sociological characteristics and detect local changes in a social communication network.
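
    A common form of the node topology potential is φ(v_i) = Σ_j m_j·exp(−(d_ij/σ)²), with d_ij a node distance (e.g. hop count), m_j a node mass, and σ an influence factor; the sketch below computes it on a toy four-node chain, and the specific mass and σ choices of the paper are not reproduced.

        import numpy as np

        def topology_potential(dist, sigma=1.0, mass=None):
            # phi_i = sum_j m_j * exp(-(d_ij / sigma)**2) over all nodes j.
            d = np.asarray(dist, dtype=float)
            m = np.ones(d.shape[0]) if mass is None else np.asarray(mass, dtype=float)
            return (m[None, :] * np.exp(-(d / sigma) ** 2)).sum(axis=1)

        hops = np.array([[0, 1, 2, 3],        # hop distances on a 4-node chain
                         [1, 0, 1, 2],
                         [2, 1, 0, 1],
                         [3, 2, 1, 0]])
        print(topology_potential(hops))       # interior nodes receive the larger potential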

  2. Hardness of H13 Tool Steel After Non-isothermal Tempering

    NASA Astrophysics Data System (ADS)

    Nelson, E.; Kohli, A.; Poirier, D. R.

    2018-04-01

    A direct method to calculate the tempering response of a tool steel (H13) that exhibits secondary hardening is presented. Based on the traditional method of presenting tempering response in terms of isothermal tempering, we show that the tempering response for a steel undergoing a non-isothermal tempering schedule can be predicted. Experiments comprised (1) isothermal tempering, (2) non-isothermal tempering pertaining to a relatively slow heating to process-temperature and (3) fast-heating cycles that are relevant to tempering by induction heating. After establishing the tempering response of the steel under simple isothermal conditions, the tempering response can be applied to non-isothermal tempering by using a numerical method to calculate the tempering parameter. Calculated results are verified by the experiments.
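
    One widely used way to make this concrete is the Hollomon-Jaffe tempering parameter HJP = T(C + log10 t), with T in kelvin, t in hours, and C ≈ 20 for many steels; a non-isothermal schedule can then be reduced to an equivalent hold at a reference temperature by accumulating time increments, as in the hedged sketch below. This is a generic textbook treatment, not the calculation procedure of the paper, and the constants and schedule are illustrative.

        import numpy as np

        def hollomon_jaffe(T_kelvin, t_hours, C=20.0):
            # Isothermal tempering parameter HJP = T * (C + log10(t)).
            return T_kelvin * (C + np.log10(t_hours))

        def nonisothermal_hjp(times_h, temps_K, C=20.0):
            # Reduce a time-temperature schedule to an equivalent isothermal hold at
            # the peak temperature by summing equivalent time increments.
            T_ref = float(np.max(temps_K))
            t_eq = 0.0
            for i in range(1, len(times_h)):
                dt = times_h[i] - times_h[i - 1]
                T = 0.5 * (temps_K[i] + temps_K[i - 1])
                t_eq += 10 ** ((T * (C + np.log10(dt)) - T_ref * C) / T_ref)
            return hollomon_jaffe(T_ref, t_eq, C)

        # illustrative slow heat-up from 300 K to 823 K over 1 h, then a 2 h hold
        times = np.linspace(1e-3, 3.0, 300)
        temps = np.where(times < 1.0, 300.0 + 523.0 * times, 823.0)
        print(nonisothermal_hjp(times, temps), hollomon_jaffe(823.0, 2.0))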

  3. Particle size distributions and the vertical distribution of suspended matter in the upwelling region off Oregon

    NASA Technical Reports Server (NTRS)

    Kitchen, J. C.

    1977-01-01

    Various methods of presenting and mathematically describing particle size distributions are explained and evaluated. The hyperbolic distribution is found to be the most practical, but the more complex characteristic vector analysis is the most sensitive to changes in the shape of the particle size distributions. A method for determining onshore-offshore flow patterns from the distribution of particulates is presented. A numerical model of the vertical structure of two size classes of particles was developed. The results show a close similarity to the observed distributions but overestimate the particle concentration by forty percent. This was attributed to ignoring grazing by zooplankton. Sensitivity analyses showed the size preference was most responsive to the maximum specific growth rates and nutrient half-saturation constants. The vertical structure was highly dependent on the eddy diffusivity, followed closely by the growth terms.

  4. A path planning method used in fluid jet polishing eliminating lightweight mirror imprinting effect

    NASA Astrophysics Data System (ADS)

    Li, Wenzong; Fan, Bin; Shi, Chunyan; Wang, Jia; Zhuo, Bin

    2014-08-01

    With the development of space technology, optical system design tends toward large-aperture lightweight mirrors with high dimension-to-thickness ratios. However, when the required surface figure of a lightweight mirror is below λ/10 PV, the surface clearly shows a wavy imprinting effect. The imprinting effect introduced by tool pressure has become a technological barrier in high-precision lightweight mirror manufacturing. Fluid jet polishing avoids external contact pressure. The machining tracks commonly used at present are grating paths, spiral paths, and pseudo-random paths. Near the edges of the imprinting error, the speed of adjacent path points changes too fast for the machine to respond quickly, which introduces new path errors and increases the polishing time because of superfluous path segments. This paper presents a new path planning method to eliminate the imprinting effect. Simulation results show that the improved grating path eliminates the imprinting effect better than the general path.

  5. Markov chain Monte Carlo techniques applied to parton distribution functions determination: Proof of concept

    NASA Astrophysics Data System (ADS)

    Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane

    2017-07-01

    We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded on statistical methods, and on Bayesian inference in particular, thus offering additional insight into the rich field of PDFs determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement—namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for Lattice QCD, turns out to be very interesting when applied to PDFs determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
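
    As a rough illustration of the Hybrid (Hamiltonian) Monte Carlo machinery referred to above, and not of the authors' PDF parametrization or χ² definition, the following sketch samples a generic log-density with a leapfrog integrator and a Metropolis accept/reject step; the target, step size, and trajectory length are placeholders.

```python
import numpy as np

def hmc_sample(log_prob, grad_log_prob, x0, n_samples=1000, eps=0.1, n_leapfrog=20):
    """Minimal Hybrid/Hamiltonian Monte Carlo sampler (illustrative only)."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)                  # momentum refresh
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * eps * grad_log_prob(x_new)         # leapfrog integration
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new
            p_new += eps * grad_log_prob(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_log_prob(x_new)
        # Metropolis accept/reject on the total "energy" H = -log p(x) + p^2/2
        h_old = -log_prob(x) + 0.5 * p @ p
        h_new = -log_prob(x_new) + 0.5 * p_new @ p_new
        if rng.random() < np.exp(h_old - h_new):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# toy target: a standard 2-D Gaussian
draws = hmc_sample(lambda x: -0.5 * x @ x, lambda x: -x, np.zeros(2))
print(draws.mean(axis=0), draws.std(axis=0))
```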

  6. Linear time-dependent reference intervals where there is measurement error in the time variable-a parametric approach.

    PubMed

    Gillard, Jonathan

    2015-12-01

    This article re-examines parametric methods for the calculation of time specific reference intervals where there is measurement error present in the time covariate. Previous published work has commonly been based on the standard ordinary least squares approach, weighted where appropriate. In fact, this is an incorrect method when there are measurement errors present, and in this article, we show that the use of this approach may, in certain cases, lead to referral patterns that may vary with different values of the covariate. Thus, it would not be the case that all patients are treated equally; some subjects would be more likely to be referred than others, hence violating the principle of equal treatment required by the International Federation for Clinical Chemistry. We show, by using measurement error models, that reference intervals are produced that satisfy the requirement for equal treatment for all subjects. © The Author(s) 2011.

  7. A temperature match based optimization method for daily load prediction considering DLC effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Z.

    This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.

  8. FTTH: the overview of existing technologies

    NASA Astrophysics Data System (ADS)

    Nowak, Dawid; Murphy, John

    2005-06-01

    The growing popularity of the Internet is the key driver behind the development of new access methods that would enable a customer to experience true broadband. Amongst various technologies, access methods based on optical fiber are getting more and more attention as they offer the ultimate solution for delivering different services to the customers' premises. Three different architectures have been proposed that facilitate the roll-out of Fiber-to-the-Home (FTTH) infrastructure. Point-to-point Ethernet networks are the most straightforward and already mature solution. Different flavors of Passive Optical Networks (PONs) with Time Division Multiplexing Access (TDMA) are getting more widespread as the necessary equipment is becoming available on the market. The third main contender is PONs with Wavelength Division Multiplexing Access (WDMA). Although still in their infancy, laboratory tests show that they have many advantages over present solutions. In this paper we show a brief comparison of these three access methods. In our analysis the architecture of each solution is presented. The applicability of each system is examined from different viewpoints and their advantages and disadvantages are highlighted.

  9. Decoding magnetoencephalographic rhythmic activity using spectrospatial information.

    PubMed

    Kauppi, Jukka-Pekka; Parkkonen, Lauri; Hari, Riitta; Hyvärinen, Aapo

    2013-12-01

    We propose a new data-driven decoding method called Spectral Linear Discriminant Analysis (Spectral LDA) for the analysis of magnetoencephalography (MEG). The method allows investigation of changes in rhythmic neural activity as a result of different stimuli and tasks. The introduced classification model only assumes that each "brain state" can be characterized as a combination of neural sources, each of which shows rhythmic activity at one or several frequency bands. Furthermore, the model allows the oscillation frequencies to be different for each such state. We present decoding results from 9 subjects in a four-category classification problem defined by an experiment involving randomly alternating epochs of auditory, visual and tactile stimuli interspersed with rest periods. The performance of Spectral LDA was very competitive compared with four alternative classifiers based on different assumptions concerning the organization of rhythmic brain activity. In addition, the spectral and spatial patterns extracted automatically on the basis of trained classifiers showed that Spectral LDA offers a novel and interesting way of analyzing spectrospatial oscillatory neural activity across the brain. All the presented classification methods and visualization tools are freely available as a Matlab toolbox. © 2013.

  10. Channel Temperature Determination for AlGaN/GaN HEMTs on SiC and Sapphire

    NASA Technical Reports Server (NTRS)

    Freeman, Jon C.; Mueller, Wolfgang

    2008-01-01

    Numerical simulation results (with emphasis on channel temperature) for a single gate AlGaN/GaN High Electron Mobility Transistor (HEMT) with either a sapphire or SiC substrate are presented. The static I-V characteristics, with concomitant channel temperatures (T(sub ch)) are calculated using the software package ATLAS, from Silvaco, Inc. An in-depth study of analytical (and previous numerical) methods for the determination of T(sub ch) in both single and multiple gate devices is also included. We develop a method for calculating T(sub ch) for the single gate device with the temperature dependence of the thermal conductivity of all material layers included. We also present a new method for determining the temperature on each gate in a multi-gate array. These models are compared with experimental results, and show good agreement. We demonstrate that one may obtain the channel temperature within an accuracy of +/-10 C in some cases. Comparisons between different approaches are given to show the limits, sensitivities, and needed approximations, for reasonable agreement with measurements.

  11. Determination of the content of alkyl ketene dimer in its latex by an ionic-liquid assisted headspace gas chromatography.

    PubMed

    Yan, Ning; Wan, Xiao-Fang; Chai, Xin-Sheng; Chen, Run-Quan; Chen, Chun-Xia

    2017-12-29

    This paper reports on an ionic-liquid-assisted headspace gas chromatographic (HS-GC) method for the determination of the alkyl ketene dimer (AKD) content in latex samples, in which the GC system was equipped with a thermal conductivity detector (TCD). The method is based on AKD hydrolysis conducted in a medium containing 1-butyl-3-methylimidazolium chloride (an ionic liquid) at 100 °C for 10 min in a closed headspace sample vial, followed by HS-GC measurement of the CO2 produced by the hydrolysis. The results showed that the present method has good measurement precision (RSD < 2.3%) and accuracy (recoveries from 96-105%), and the limit of quantitation (LOQ) is 0.9%. The present method is well suited for routine checking of the AKD content of latex samples in mill applications. The study also showed that the AKD content of the tested commercial latex samples was in the range of 3.5-12%. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Water stress assessment of cork oak leaves and maritime pine needles based on LIF spectra

    NASA Astrophysics Data System (ADS)

    Lavrov, A.; Utkin, A. B.; Marques da Silva, J.; Vilar, Rui; Santos, N. M.; Alves, B.

    2012-02-01

    The aim of the present work was to develop a method for the remote assessment of the impact of fire and drought stress on Mediterranean forest species such as the cork oak (Quercus suber) and maritime pine (Pinus pinaster). The proposed method is based on laser-induced fluorescence (LIF): chlorophyll fluorescence is remotely excited by frequency-doubled Nd:YAG laser radiation pulses and collected and analyzed using a telescope and a gated high-sensitivity spectrometer. The plant health criterion used is based on the I685/I740 ratio calculated from the fluorescence spectra. The method was benchmarked by comparing its results with those obtained by a conventional continuous-excitation fluorometric method and by gravimetric water-loss measurements. The results obtained with both methods show a strong correlation with each other and with the weight-loss measurements, showing that the proposed method is suitable for fire and drought impact assessment on these two species.

  13. The initial rise method extended to multiple trapping levels in thermoluminescent materials.

    PubMed

    Furetta, C; Guzmán, S; Ruiz, B; Cruz-Zaragoza, E

    2011-02-01

    The well-known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor material. However, when the glow curve is more complex, a broad peak with shoulders appears in the structure. The straightforward application of the Initial Rise Method is then not valid, because multiple trapping levels are involved and the thermoluminescent analysis becomes difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well-known Initial Rise Method (IR) to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices, because the shape of their thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. Copyright © 2010 Elsevier Ltd. All rights reserved.
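
    The initial-rise analysis itself rests on the fact that, on the low-temperature side of a glow peak, the intensity grows approximately as I(T) ∝ exp(-E/kT) regardless of the kinetics order, so the slope of ln I versus 1/T gives -E/k. A minimal sketch of that fit, with synthetic data rather than the paper's measurements, is:

```python
import numpy as np

K_BOLTZMANN_EV = 8.617e-5  # eV / K

def initial_rise_energy(T_kelvin, intensity):
    """Activation energy from the initial-rise region of a glow peak,
    using I(T) ~ exp(-E / (k T)): the slope of ln I versus 1/T is -E/k."""
    slope, _ = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(intensity), 1)
    return -slope * K_BOLTZMANN_EV      # energy in eV

# synthetic initial-rise data for a single trap with E = 1.0 eV
T = np.linspace(350.0, 400.0, 20)
I = 1e12 * np.exp(-1.0 / (K_BOLTZMANN_EV * T))
print(f"E = {initial_rise_energy(T, I):.2f} eV")
```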

  14. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models by using Bayesian methods. A finite mixture model represents a statistical distribution as a mixture of component distributions, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because of its asymptotic properties, which provide remarkable results. In addition, the Bayesian method shows a consistency property, which means that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion. Identifying the number of components is important, because an incorrect choice may lead to invalid results. The Bayesian method is then used to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
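
    A minimal sketch of the component-selection step described above, using the Bayesian Information Criterion, is given below. It uses scikit-learn's Gaussian mixture implementation and synthetic data purely for illustration; the paper's actual data and mixture specification are not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_components_by_bic(x, k_max=5):
    """Fit k-component Gaussian mixtures for k = 1..k_max and return the k
    with the lowest Bayesian Information Criterion, plus the full BIC table."""
    x = np.asarray(x).reshape(-1, 1)
    bics = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
            for k in range(1, k_max + 1)}
    return min(bics, key=bics.get), bics

# synthetic two-regime "returns" series
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 0.5, 500), rng.normal(2.0, 1.0, 500)])
best_k, bic_table = select_components_by_bic(data)
print(best_k, bic_table)
```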

  15. Local statistics adaptive entropy coding method for the improvement of H.26L VLC coding

    NASA Astrophysics Data System (ADS)

    Yoo, Kook-yeol; Kim, Jong D.; Choi, Byung-Sun; Lee, Yung Lyul

    2000-05-01

    In this paper, we propose an adaptive entropy coding method to improve the VLC coding efficiency of the H.26L TML-1 codec. First, we show that the VLC coding presented in TML-1 does not satisfy the sibling property of entropy coding. We then modify the coding method into a local-statistics-adaptive one that satisfies the property. The proposed method, based on local symbol statistics, dynamically changes the mapping relationship between symbols and bit patterns in the VLC table according to the sibling property. Note that the codewords in the VLC table of the TML-1 codec are not changed. Since the changed mapping relationship is also derived at the decoder side from the decoded symbols, the proposed VLC coding method does not require any overhead information. The simulation results show that the proposed method gives about 30% and 37% reductions in average bit rate for MB type and CBP information, respectively.
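
    The core idea, re-ranking the symbol-to-codeword mapping from already coded symbols so that no side information is needed, can be sketched as follows. The codeword table and alphabet here are illustrative placeholders, not the H.26L TML-1 tables.

```python
from collections import Counter

# illustrative fixed codeword table (shortest first); NOT the H.26L TML-1 table
CODEWORDS = ["1", "01", "001", "0001", "00001"]

def encode_adaptive(symbols, alphabet):
    """Adaptive VLC: before each symbol, re-rank the alphabet by local counts
    so the most frequent symbols map to the shortest codewords.  The decoder
    can rebuild the same ranking from already-decoded symbols, so no side
    information is needed."""
    counts = Counter({s: 0 for s in alphabet})
    bits = []
    for s in symbols:
        ranking = sorted(alphabet, key=lambda a: (-counts[a], alphabet.index(a)))
        bits.append(CODEWORDS[ranking.index(s)])
        counts[s] += 1                      # update statistics after coding
    return "".join(bits)

# 'B' quickly earns the shortest codeword once it dominates the local statistics
print(encode_adaptive("BBBAC", alphabet=["A", "B", "C", "D", "E"]))
```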

  16. Preparation of samples for leaf architecture studies, a method for mounting cleared leaves

    PubMed Central

    Vasco, Alejandra; Thadeo, Marcela; Conover, Margaret; Daly, Douglas C.

    2014-01-01

    • Premise of the study: Several recent waves of interest in leaf architecture have shown an expanding range of approaches and applications across a number of disciplines. Despite this increased interest, examination of existing archives of cleared and mounted leaves shows that current methods for mounting, in particular, yield unsatisfactory results and deterioration of samples over relatively short periods. Although techniques for clearing and staining leaves are numerous, published techniques for mounting leaves are scarce. • Methods and Results: Here we present a complete protocol and recommendations for clearing, staining, and imaging leaves, and, most importantly, a method to permanently mount cleared leaves. • Conclusions: The mounting protocol is faster than other methods, inexpensive, and straightforward; moreover, it yields clear and permanent samples that can easily be imaged, scanned, and stored. Specimens mounted with this method preserve well, with leaves that were mounted more than 35 years ago showing no signs of bubbling or discoloration. PMID:25225627

  17. A Limited-Memory BFGS Algorithm Based on a Trust-Region Quadratic Model for Large-Scale Nonlinear Equations.

    PubMed

    Li, Yong; Yuan, Gonglin; Wei, Zengxin

    2015-01-01

    In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method.

  18. A Study on Attention Guidance to Driver by Subliminal Visual Information

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroshi; Honda, Hirohiko

    This paper presents a new warning method for increasing drivers' sensitivity in recognizing hazardous factors in the driving environment. The method is based on a subliminal effect. The results of many experiments performed with a three-dimensional head-mounted display show that the response time for detecting a flashing mark tended to decrease when a subliminal mark was shown in advance. Priming effects are thus observed for subliminal visual information. This paper also proposes a scenario for implementing this method in real vehicles.

  19. Development of acceptance criteria for batches of silane primer for external tank thermal protection system bonding applications

    NASA Technical Reports Server (NTRS)

    Mikes, F.

    1984-01-01

    Silane primers for use as thermal protection on external tanks were subjected to various analytic techniques to determine the most effective testing method for silane lot evaluation. The analytic methods included high performance liquid chromatography, gas chromatography, thermogravimetry (TGA), and Fourier transform infrared spectroscopy (FTIR). It is suggested that FTIR be used as the method for silane lot evaluation. Chromatograms, TGA profiles, bar graphs showing IR absorbances, and FTIR spectra are presented.

  20. Concept for an off-line gain stabilisation method.

    PubMed

    Pommé, S; Sibbens, G

    2004-01-01

    Conceptual ideas are presented for an off-line gain stabilisation method for spectrometry, in particular for alpha-particle spectrometry at low count rate. The method involves list mode storage of individual energy and time stamp data pairs. The 'Stieltjes integral' of measured spectra with respect to a reference spectrum is proposed as an indicator for gain instability. 'Exponentially moving averages' of the latter show the gain shift as a function of time. With this information, the data are relocated stochastically on a point-by-point basis.
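
    A minimal sketch of the time-tracking step is given below: list-mode (time stamp, energy) pairs are folded into an exponentially moving average of a gain-shift indicator. The indicator used here, the relative deviation from a reference centroid, is a simple placeholder for the Stieltjes-integral indicator proposed in the paper.

```python
import numpy as np

def ema_gain_shift(timestamps, energies, ref_centroid, alpha=0.01):
    """Exponentially moving average of a simple gain-shift indicator over
    list-mode (time stamp, energy) pairs; returns (time, EMA) samples."""
    ema = 0.0
    trace = []
    for t, e in zip(timestamps, energies):
        indicator = (e - ref_centroid) / ref_centroid   # placeholder indicator
        ema = alpha * indicator + (1.0 - alpha) * ema
        trace.append((t, ema))
    return trace

# toy list-mode data with a slow 1 % upward gain drift over the measurement
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 1000.0, 5000))
e = 5300.0 * (1.0 + 0.01 * t / 1000.0) + rng.normal(0.0, 15.0, t.size)
print(ema_gain_shift(t, e, ref_centroid=5300.0)[-1])
```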

  1. Numerical solution of a coupled pair of elliptic equations from solid state electronics

    NASA Technical Reports Server (NTRS)

    Phillips, T. N.

    1983-01-01

    Iterative methods are considered for the solution of a coupled pair of second order elliptic partial differential equations which arise in the field of solid state electronics. A finite difference scheme is used which retains the conservative form of the differential equations. Numerical solutions are obtained in two ways, by multigrid and dynamic alternating direction implicit methods. Numerical results are presented which show the multigrid method to be an efficient way of solving this problem.

  2. A multiscale approach to accelerate pore-scale simulation of porous electrodes

    NASA Astrophysics Data System (ADS)

    Zheng, Weibo; Kim, Seung Hyun

    2017-04-01

    A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.

  3. Finite Volume Method for Pricing European Call Option with Regime-switching Volatility

    NASA Astrophysics Data System (ADS)

    Lista Tauryawati, Mey; Imron, Chairul; Putri, Endah RM

    2018-03-01

    In this paper, we present a finite volume method for pricing a European call option using the Black-Scholes equation with regime-switching volatility. In the first step, we formulate the Black-Scholes equations with regime-switching volatility. We then use a fitted finite volume method for the spatial discretization together with an implicit time-stepping technique. We show that the regime-switching scheme reduces to the non-switching Black-Scholes equation, both theoretically and in numerical simulations.

  4. Operational modal analysis applied to the concert harp

    NASA Astrophysics Data System (ADS)

    Chomette, B.; Le Carrou, J.-L.

    2015-05-01

    Operational modal analysis (OMA) methods are useful to extract modal parameters of operating systems. These methods seem to be particularly interesting to investigate the modal basis of string instruments during operation to avoid certain disadvantages due to conventional methods. However, the excitation in the case of string instruments is not optimal for OMA due to the presence of damped harmonic components and low noise in the disturbance signal. Therefore, the present study investigates the least-square complex exponential (LSCE) and the modified least-square complex exponential methods in the case of a string instrument to identify modal parameters of the instrument when it is played. The efficiency of the approach is experimentally demonstrated on a concert harp excited by some of its strings and the two methods are compared to a conventional modal analysis. The results show that OMA allows us to identify modes particularly present in the instrument's response with a good estimation especially if they are close to the excitation frequency with the modified LSCE method.

  5. Adaptation of an urban land surface model to a tropical suburban area: Offline evaluation, sensitivity analysis, and optimization of TEB/ISBA (SURFEX)

    NASA Astrophysics Data System (ADS)

    Harshan, Suraj

    The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance, radiative, and air temperature data observed during 11 months at a tropical suburban site in Singapore. Overall the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and the nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol; local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux. The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. The optimization and sensitivity experiments for the three periods (dry, wet, and mixed) show a noticeable difference in parameter sensitivity and parameter convergence, indicating inadequacies in the model formulation. The existence of a significant proportion of less sensitive parameters may indicate an over-parametrized model. The Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified using site-specific values for the thermal roughness length parametrization, shows improved performance for outgoing longwave radiation, overall surface temperature, storage heat flux, and sensible heat flux.

  6. The Mean Distance to the nth Neighbour in a Uniform Distribution of Random Points: An Application of Probability Theory

    ERIC Educational Resources Information Center

    Bhattacharyya, Pratip; Chakrabarti, Bikas K.

    2008-01-01

    We study different ways of determining the mean distance (r[subscript n]) between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…
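
    The abstract is truncated, but the standard heuristic it alludes to can be stated briefly (a generic argument, not necessarily the authors' exact derivation):

```latex
% With uniform density \rho in D dimensions, the expected number of points
% within radius r of the reference point is N(r) = \rho V_D r^D, where V_D is
% the volume of the unit D-ball.  Setting N(r_n) \approx n gives the scaling
% of the mean distance to the n-th neighbour:
\[
  r_n \;\approx\; \left(\frac{n}{\rho\, V_D}\right)^{1/D},
  \qquad
  V_D = \frac{\pi^{D/2}}{\Gamma\!\left(\tfrac{D}{2}+1\right)} .
\]
```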

  7. A Novel Method for Producing Light GMT Sheets by a Pneumatic Technique

    NASA Astrophysics Data System (ADS)

    Dai, H.-L.; Rao, Y.-N.

    2015-09-01

    A novel method for producing light glass-mat-reinforced thermoplastic (GMT) sheets by using a pneumatic technique is presented. The tensile and flexural properties of the produced light GMT sheets, with various glass fiber lengths and PP contents, were determined experimentally. The results of the experimental investigation show that the light GMT sheets are fully suitable for engineering applications.

  8. Thermodynamic method for generating random stress distributions on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
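
    The report's two-dimensional fault-plane construction is not reproduced here, but the key step of synthesizing a random field with a prescribed power spectral density via random Fourier phases can be sketched in one dimension as follows; the PSD shape and normalization are placeholders rather than the formula derived from the earthquake scaling relations.

```python
import numpy as np

def random_field_from_psd(n, dx, psd, seed=0):
    """Generate a 1-D zero-mean random field whose power spectral density
    follows psd(k), by assigning random phases to each Fourier mode
    (illustrative stand-in for the report's 2-D fault-plane construction)."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=dx)
    amp = np.sqrt(psd(k))
    amp[0] = 0.0                              # remove the mean (k = 0) mode
    phases = np.exp(2j * np.pi * rng.random(k.size))
    field = np.fft.irfft(amp * phases, n=n)
    return field / field.std()                # normalize to unit variance

# placeholder PSD ~ k^-2 above a small corner wavenumber
stress = random_field_from_psd(4096, 100.0, lambda k: 1.0 / (k**2 + 1e-6))
print(stress[:5])
```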

  9. Chemistry by Way of Density Functional Theory

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Partridge, Harry; Langohff, Stephen R.; Arnold, James O. (Technical Monitor)

    1996-01-01

    In this work we demonstrate that density functional theory (DFT) methods make an important contribution to understanding chemical systems and are an important additional method for the computational chemist. We report calibration calculations obtained with different functionals for the 55 G2 molecules to justify our selection of the B3LYP functional. We show that accurate geometries and vibrational frequencies obtained at the B3LYP level can be combined with traditional methods to simplify the calculation of accurate heats of formation. We illustrate the application of the B3LYP approach to a variety of chemical problems from the vibrational frequencies of polycyclic aromatic hydrocarbons to transition metal systems. We show that the B3LYP method typically performs better than the MP2 method at a significantly lower computational cost. Thus the B3LYP method allows us to extend our studies to much larger systems while maintaining a high degree of accuracy. We show that for transition metal systems, the B3LYP bond energies are typically of sufficient accuracy that they can be used to explain experimental trends and even differentiate between different experimental values. We show that for boron clusters the B3LYP energetics are not as good as for many of the other systems presented, but even in this case the B3LYP approach is able to help understand the experimental trends.

  10. Evaluating fMRI methods for assessing hemispheric language dominance in healthy subjects.

    PubMed

    Baciu, Monica; Juphard, Alexandra; Cousin, Emilie; Bas, Jean François Le

    2005-08-01

    We evaluated two methods for quantifying hemispheric language dominance in healthy subjects, using a rhyme detection task (deciding whether a pair of words rhyme) and a word fluency task (generating words starting with a given letter). One of the methods, called the "flip method" (FM), was based on the direct statistical comparison of the hemispheres' activity. The second, the classical lateralization indices method (LIM), was based on calculating lateralization indices from the number of activated pixels within each hemisphere. The main difference between the methods is the statistical assessment of the inter-hemispheric difference: while FM shows whether the difference between the hemispheres' activity is statistically significant, LIM shows only whether there is a difference between hemispheres. The robustness of LIM and FM was assessed by calculating correlation coefficients between the LIs obtained with each of these methods and manual lateralization indices (MLI) obtained with the Edinburgh inventory. Our results showed a significant correlation between the LIs provided by each method and the MLI, suggesting that both methods are robust for quantifying hemispheric dominance for language in healthy subjects. In the present study we also evaluated the effect of spatial normalization, smoothing, and "clustering" (NSC) on the intra-hemispheric location of activated regions and the inter-hemispheric asymmetry of the activation. Our results showed that NSC did not affect the hemispheric specialization but increased the value of the inter-hemispheric difference.
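
    For reference, the classical lateralization index that LIM relies on is computed from the number of suprathreshold voxels in each hemisphere; the sketch below shows that common definition (thresholding details vary between studies and are not taken from this paper).

```python
def lateralization_index(n_left, n_right):
    """Classical lateralization index from suprathreshold voxel counts:
    LI = (L - R) / (L + R); LI > 0 indicates left-hemisphere dominance."""
    total = n_left + n_right
    return (n_left - n_right) / total if total else 0.0

# e.g. 850 activated voxels in the left hemisphere, 320 in the right
print(f"LI = {lateralization_index(850, 320):+.2f}")
```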

  11. Harmonic reduction of Direct Torque Control of six-phase induction motor.

    PubMed

    Taheri, A

    2016-07-01

    In this paper, a new switching method in Direct Torque Control (DTC) of a six-phase induction machine for the reduction of current harmonics is introduced. Selecting a single suitable vector in each sampling period is the ordinary approach in the ST-DTC drive of a six-phase induction machine. The six-phase induction machine has 64 voltage vectors, which are further divided into four groups. In the proposed DTC method, the suitable voltage vectors are selected from two vector groups. By a suitable selection of two vectors in each sampling period, the harmonic amplitude is decreased further in comparison with the ST-DTC drive. The harmonic losses are reduced more, while the electromechanical energy decreases and the switching loss shows a small increase. Spectrum analysis of the phase current in the standard and the proposed switching-table DTC of the six-phase induction machine, with determination of the amplitude of each harmonic, is also presented in this paper. The proposed method requires a shorter sampling time than the ordinary method. Harmonic analyses of the current at low and high speed show the performance of the presented method. The simplicity of the proposed method and its implementation without any extra hardware are further advantages. The simulation and experimental results show the superiority of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    PubMed Central

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-01-01

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies. PMID:29695113

  13. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor.

    PubMed

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-04-24

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.

  14. Quantitative assessment of copper proteinates used as animal feed additives using ATR-FTIR spectroscopy and powder X-ray diffraction (PXRD) analysis.

    PubMed

    Cantwell, Caoimhe A; Byrne, Laurann A; Connolly, Cathal D; Hynes, Michael J; McArdle, Patrick; Murphy, Richard A

    2017-08-01

    The aim of the present work was to establish a reliable analytical method to determine the degree of complexation in commercial metal proteinates used as feed additives in the solid state. Two complementary techniques were developed. Firstly, a quantitative attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopic method investigated modifications in vibrational absorption bands of the ligand on complex formation. Secondly, a powder X-ray diffraction (PXRD) method to quantify the amount of crystalline material in the proteinate product was developed. These methods were developed in tandem and cross-validated with each other. Multivariate analysis (MVA) was used to develop validated calibration and prediction models. The FTIR and PXRD calibrations showed excellent linearity (R² > 0.99). The diagnostic model parameters showed that the FTIR and PXRD methods were robust, with a root mean square error of calibration (RMSEC) ≤ 3.39% and a root mean square error of prediction (RMSEP) ≤ 7.17%, respectively. Comparative statistics show excellent agreement between the MVA packages assessed and between the FTIR and PXRD methods. The methods can be used to determine the degree of complexation in complexes of both protein hydrolysates and pure amino acids.

  15. A high-order multi-zone cut-stencil method for numerical simulations of high-speed flows over complex geometries

    NASA Astrophysics Data System (ADS)

    Greene, Patrick T.; Eldredge, Jeff D.; Zhong, Xiaolin; Kim, John

    2016-07-01

    In this paper, we present a method for performing uniformly high-order direct numerical simulations of high-speed flows over arbitrary geometries. The method was developed with the goal of simulating and studying the effects of complex isolated roughness elements on the stability of hypersonic boundary layers. The simulations are carried out on Cartesian grids with the geometries imposed by a third-order cut-stencil method. A fifth-order hybrid weighted essentially non-oscillatory scheme was implemented to capture any steep gradients in the flow created by the geometries and a third-order Runge-Kutta method is used for time advancement. A multi-zone refinement method was also utilized to provide extra resolution at locations with expected complex physics. The combination results in a globally fourth-order scheme in space and third order in time. Results confirming the method's high order of convergence are shown. Two-dimensional and three-dimensional test cases are presented and show good agreement with previous results. A simulation of Mach 3 flow over the logo of the Ubuntu Linux distribution is shown to demonstrate the method's capabilities for handling complex geometries. Results for Mach 6 wall-bounded flow over a three-dimensional cylindrical roughness element are also presented. The results demonstrate that the method is a promising tool for the study of hypersonic roughness-induced transition.

  16. Improvement of the tissue-adhesive and sealing effect of fibrin sealant using polyglycolic acid felt.

    PubMed

    Shinya, Noriko; Oka, Shirou; Miyabashira, Sumika; Kaetsu, Hiroshi; Uchida, Takanori; Sueyoshi, Masuo; Takase, Kozo; Akuzawa, Masao; Miyamoto, Atsushi; Shigaki, Takamichi

    2009-01-01

    Although fibrin sealant (FS) has the advantage of high biocompatibility, its adhesive force and sealing effect have generally been considered inadequate. In the present study, a high adhesive force and sealing effect were obtained by first rubbing fibrinogen solution into the target tissue, attaching polyglycolic acid (PGA) felt to the treated area, and finally spraying it with FS. This method was compared with three conventional FS application methods and a method using fibrin glue-coated collagen fleece. The adhesive force resulting from the present method was 12 times higher than that for the sequential application method, 4.5 times higher than the spray method, 2.5 times higher than the rubbing and spray method, and 2.2 times higher than the use of fibrin glue-coated collagen fleece. The high adhesive force of FS with PGA felt seemed to be due to the high fibrin content of the fibrin gel (FG). Light and electron microscopic observations suggested that the formation of FG in closer contact with the muscle fibers was a factor contributing to this superior adhesive force. Comparison of the sealing effect of the present method with other methods using various biomaterials in combination with FS showed that the sealing effect of FS with PGA felt was 1.4 times higher than that of polyglactin 910, 1.8 times that of polytetrafluoroethylene, and 6.7 times that of oxidized regenerated cellulose.

  17. Local Discontinuous Galerkin Methods for Partial Differential Equations with Higher Order Derivatives

    NASA Technical Reports Server (NTRS)

    Yan, Jue; Shu, Chi-Wang; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    In this paper we review existing and develop new local discontinuous Galerkin methods for solving time dependent partial differential equations with higher order derivatives in one and multiple space dimensions. We review local discontinuous Galerkin methods for convection-diffusion equations involving second derivatives and for KdV-type equations involving third derivatives. We then develop new local discontinuous Galerkin methods for time dependent bi-harmonic type equations involving fourth derivatives, and for partial differential equations involving fifth derivatives. For these new methods we present correct interface numerical fluxes and prove L² stability for general nonlinear problems. Preliminary numerical examples are shown to illustrate these methods. Finally, we present new results on a post-processing technique, originally designed for methods with good negative-order error estimates, applied to the local discontinuous Galerkin methods for equations with higher derivatives. Numerical experiments show that this technique works as well for the new higher-derivative cases, effectively doubling the rate of convergence with negligible additional computational cost, for linear as well as some nonlinear problems, with a locally uniform mesh.

  18. Comparison of normalization methods for the analysis of metagenomic gene abundance data.

    PubMed

    Pereira, Mariana Buongermino; Wallroth, Mikael; Jonsson, Viktor; Kristiansson, Erik

    2018-04-20

    In shotgun metagenomics, microbial communities are studied through direct sequencing of DNA without any prior cultivation. By comparing gene abundances estimated from the generated sequencing reads, functional differences between the communities can be identified. However, gene abundance data is affected by high levels of systematic variability, which can greatly reduce the statistical power and introduce false positives. Normalization, which is the process where systematic variability is identified and removed, is therefore a vital part of the data analysis. A wide range of normalization methods for high-dimensional count data has been proposed, but their performance on the analysis of shotgun metagenomic data has not been evaluated. Here, we present a systematic evaluation of nine normalization methods for gene abundance data. The methods were evaluated through resampling of three comprehensive datasets, creating a realistic setting that preserved the unique characteristics of metagenomic data. Performance was measured in terms of the methods' ability to identify differentially abundant genes (DAGs), correctly calculate unbiased p-values, and control the false discovery rate (FDR). Our results showed that the choice of normalization method has a large impact on the end results. When the DAGs were asymmetrically present between the experimental conditions, many normalization methods had a reduced true positive rate (TPR) and a high false positive rate (FPR). The methods trimmed mean of M-values (TMM) and relative log expression (RLE) had the overall highest performance and are therefore recommended for the analysis of gene abundance data. For larger sample sizes, CSS also showed satisfactory performance. This study emphasizes the importance of selecting a suitable normalization method in the analysis of data from shotgun metagenomics. Our results also demonstrate that improper methods may result in unacceptably high levels of false positives, which in turn may lead to incorrect or obfuscated biological interpretation.
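
    As an illustration of one of the recommended approaches, the sketch below computes relative log expression (RLE, median-of-ratios) size factors for a genes x samples count matrix; the toy data and the zero-handling rule are assumptions for demonstration, not the exact procedure evaluated in the paper.

```python
import numpy as np

def rle_size_factors(counts):
    """Relative log expression (RLE, median-of-ratios) normalization factors.
    `counts` is a genes x samples array; genes with a zero in any sample are
    excluded from the reference, as in the usual DESeq-style procedure."""
    counts = np.asarray(counts, dtype=float)
    positive = counts[(counts > 0).all(axis=1)]
    log_geo_mean = np.log(positive).mean(axis=1)            # per-gene reference
    log_ratios = np.log(positive) - log_geo_mean[:, None]
    return np.exp(np.median(log_ratios, axis=0))            # one factor per sample

# toy data: sample 2 sequenced twice as deeply as sample 1
genes = np.array([[10, 20], [50, 100], [200, 400], [5, 10]])
print(rle_size_factors(genes))          # roughly [0.71, 1.41]
```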

  19. A contrast source method for nonlinear acoustic wave fields in media with spatially inhomogeneous attenuation.

    PubMed

    Demi, L; van Dongen, K W A; Verweij, M D

    2011-03-01

    Experimental data reveals that attenuation is an important phenomenon in medical ultrasound. Attenuation is particularly important for medical applications based on nonlinear acoustics, since higher harmonics experience higher attenuation than the fundamental. Here, a method is presented to accurately solve the wave equation for nonlinear acoustic media with spatially inhomogeneous attenuation. Losses are modeled by a spatially dependent compliance relaxation function, which is included in the Westervelt equation. Introduction of absorption in the form of a causal relaxation function automatically results in the appearance of dispersion. The appearance of inhomogeneities implies the presence of a spatially inhomogeneous contrast source in the presented full-wave method leading to inclusion of forward and backward scattering. The contrast source problem is solved iteratively using a Neumann scheme, similar to the iterative nonlinear contrast source (INCS) method. The presented method is directionally independent and capable of dealing with weakly to moderately nonlinear, large scale, three-dimensional wave fields occurring in diagnostic ultrasound. Convergence of the method has been investigated and results for homogeneous, lossy, linear media show full agreement with the exact results. Moreover, the performance of the method is demonstrated through simulations involving steered and unsteered beams in nonlinear media with spatially homogeneous and inhomogeneous attenuation. © 2011 Acoustical Society of America

  20. [Differentiation of Staphylococcus aureus isolates based on phenotypical characters].

    PubMed

    Miedzobrodzki, Jacek; Małachowa, Natalia; Markiewski, Tomasz; Białecka, Anna; Kasprowicz, Andrzej

    2008-06-30

    Typing of Staphylococcus aureus isolates is a necessary procedure for monitoring the transmission of S. aureus among carriers and in epidemiology. Evaluation of the degree of relatedness among isolates relies on epidemiological markers and is possible because of the clonal character of the S. aureus species. Effective typing reveals the pattern of transmission of infection in a selected area, enables identification of the reservoir of the microorganism, and may support effective eradication. A set of typing methods for use in analyses of epidemiological correlations and the identification of S. aureus isolates is presented. The following methods of typing are described: biotyping, serotyping, antibiogram, protein electrophoresis, cell protein profiles (proteome), immunoblotting, multilocus enzyme electrophoresis (MLEE), zymotyping, and standard species identification of S. aureus in the diagnostic laboratory. Phenotyping methods for S. aureus isolates used in the past and today in epidemiological investigations and in analyses of correlations among S. aureus isolates are presented in this review. The presented methods use morphological characteristics, physiological properties, and chemical structures of the bacteria as criteria for typing. The precision of these standard methods is not always satisfactory, as S. aureus strains with atypical biochemical characters have evolved recently. Therefore it is essential to introduce additional typing procedures using molecular biology methods, without neglecting phenotypic methods.

  1. Finite element method formulation in polar coordinates for transient heat conduction problems

    NASA Astrophysics Data System (ADS)

    Duda, Piotr

    2016-04-01

    The aim of this paper is the formulation of the finite element method in polar coordinates to solve transient heat conduction problems. It is hard to find in the literature a formulation of the finite element method (FEM) in polar or cylindrical coordinates for the solution of heat transfer problems. This document shows how to apply the most often used boundary conditions. The global equation system is solved by the Crank-Nicolson method. The proposed algorithm is verified in three numerical tests. In the first example, the obtained transient temperature distribution is compared with the temperature obtained from the presented analytical solution. In the second numerical example, a variable boundary condition is assumed. In the last numerical example a component with a shape different from cylindrical is used. All examples show that the introduction of the polar coordinate system gives better results than the Cartesian coordinate system. The finite element method formulation in polar coordinates is valuable since it provides higher accuracy of the calculations without compacting the mesh in cylindrical or similarly tubular components. The proposed method can be applied to circular elements such as boiler drums, outlet headers, and flux tubes. The algorithm can be useful for the solution of inverse problems, which do not allow for a high-density grid. The method can calculate the temperature distribution in bodies whose properties differ in the circumferential and radial directions. The presented algorithm can be developed for other coordinate systems. The examples demonstrate good accuracy and stability of the proposed method.
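
    The paper's two-dimensional polar finite element formulation is not reproduced here, but the Crank-Nicolson time stepping it uses can be illustrated on the one-dimensional radial heat conduction equation with a simple finite-difference spatial operator; the material data, grid, and boundary values below are placeholders.

```python
import numpy as np

def crank_nicolson_radial(T0, r, alpha, dt, n_steps, T_inner, T_outer):
    """Crank-Nicolson time stepping for 1-D radial heat conduction,
    dT/dt = alpha * (1/r) d/dr (r dT/dr), with fixed temperatures at both
    radii.  A finite-difference spatial operator is used here purely to
    illustrate the time scheme; the paper itself uses a 2-D polar FEM."""
    n, dr = r.size, r[1] - r[0]
    A = np.zeros((n, n))                          # discrete radial operator
    for i in range(1, n - 1):
        rm, rp = 0.5 * (r[i - 1] + r[i]), 0.5 * (r[i] + r[i + 1])
        A[i, i - 1] = alpha * rm / (r[i] * dr**2)
        A[i, i + 1] = alpha * rp / (r[i] * dr**2)
        A[i, i] = -(A[i, i - 1] + A[i, i + 1])
    I = np.eye(n)
    lhs, rhs_m = I - 0.5 * dt * A, I + 0.5 * dt * A
    lhs[0, :], lhs[-1, :] = 0.0, 0.0              # Dirichlet rows: T prescribed
    lhs[0, 0] = lhs[-1, -1] = 1.0
    T = np.array(T0, dtype=float)
    for _ in range(n_steps):
        rhs = rhs_m @ T
        rhs[0], rhs[-1] = T_inner, T_outer
        T = np.linalg.solve(lhs, rhs)
    return T

# placeholder case: thick cylindrical wall, inner radius 0.05 m, outer 0.25 m
r = np.linspace(0.05, 0.25, 41)
T = crank_nicolson_radial(np.full(41, 20.0), r, alpha=1.2e-5, dt=1.0,
                          n_steps=600, T_inner=20.0, T_outer=300.0)
print(T[::10])
```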

  2. Validated flow-injection method for rapid aluminium determination in anti-perspirants.

    PubMed

    López-Gonzálvez, A; Ruiz, M A; Barbas, C

    2008-09-29

    A flow-injection (FI) method for the rapid determination of aluminium in anti-perspirants has been developed. The method is based on the spectrophotometric detection at 535 nm of the complex formed between Al ions and the chromogenic reagent eriochrome cyanine R. Both the batch and FI methods were validated by checking the parameters included in the ISO-3543-1 regulation. Variables involved in the FI method were optimized by using appropriate statistical tools. The method does not exhibit interference from other substances present in anti-perspirants and it shows a high precision, with an R.S.D. value (n = 6) of 0.9%. Moreover, the accuracy of the method was evaluated by comparison with a back complexometric titration method, which is currently used for routine analysis in pharmaceutical laboratories. The Student's t-test showed that the results obtained by both methods were not significantly different at a significance level of 95%. A response time of 12 s and a sample analysis time, with triplicate injections, of 60 s were achieved. The analytical figures of merit make the method highly appropriate to substitute the time-consuming complexometric method for this kind of analysis.

  3. Suitability of Secondary PEEK Telescopic Crowns on Zirconia Primary Crowns: The Influence of Fabrication Method and Taper.

    PubMed

    Merk, Susanne; Wagner, Christina; Stock, Veronika; Eichberger, Marlis; Schmidlin, Patrick R; Roos, Malgorzata; Stawarczyk, Bogna

    2016-11-08

    This study investigates the retention load (RL) between ZrO₂ primary crowns and secondary polyetheretherketone (PEEK) crowns made by different fabrication methods with three different tapers. Standardized primary ZrO₂ crowns were fabricated with three different tapers: 0°, 1°, and 2° (n = 10/group). Ten secondary crowns each were fabricated (i) milled from breCam BioHPP blanks (PM); (ii) pressed from industrially fabricated PEEK pellets (PP) (BioHPP Pellet); or (iii) pressed from granular PEEK (PG) (BioHPP Granulat). One calibrated operator adjusted all crowns. In total, the RL of 90 secondary crowns was measured in pull-off tests at 50 mm/min, and each specimen was tested 20 times. Two- and one-way ANOVAs followed by Scheffé post-hoc tests were used for data analysis (p < 0.05). Among crowns with a 0° taper, the PP group showed significantly higher retention load values compared with the other groups. Among the 1° taper, the PM group presented significantly lower retention loads than the PP group. However, the pressing type had no impact on the results. Within the 2° taper, the fabrication method had no influence on the RL. Within the PM group, the 2° taper showed significantly higher retention load compared with the 1° taper. The 0° taper was in the same range as the 1° and 2° tapers. No impact of the taper on the retention value was observed between the PP groups. Within the PG groups, the 0° taper presented significantly lower RL than the 1° taper, whereas the 2° taper showed no differences. The fabrication method of the secondary PEEK crowns and the taper angles showed no consistent effect across all tested groups.

  4. Multichannel-Hadamard calibration of high-order adaptive optics systems.

    PubMed

    Guo, Youming; Rao, Changhui; Bao, Hua; Zhang, Ang; Zhang, Xuejun; Wei, Kai

    2014-06-02

    We present a novel technique for calibrating the interaction matrix of high-order adaptive optics systems, called the multichannel-Hadamard method. In this method, the deformable mirror actuators are first divided into a series of channels according to their coupling relationship, and the voltage-oriented Hadamard method is then applied to these channels. Taking the 595-element adaptive optics system as an example, the procedure is described in detail. The optimal channel division is discussed and tested by numerical simulation. The proposed method is also compared experimentally with the voltage-oriented Hadamard-only method and the multichannel-only method. Results show that the multichannel-Hadamard method can produce a significant improvement in interaction matrix measurement.
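
    The channel grouping that defines the multichannel-Hadamard variant is not reproduced here, but the underlying voltage-oriented Hadamard estimation of the interaction matrix can be sketched as follows; the toy measurement function, actuator count, and push-pull amplitude are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_interaction_matrix(apply_and_measure, n_act, amplitude=0.1):
    """Voltage-oriented Hadamard estimation of an AO interaction matrix.
    Each column of a Hadamard matrix H is applied as a push-pull voltage
    pattern; if S collects the measured slope vectors, S = D @ (a*H), and
    since the truncated rows satisfy H @ H.T = n*I, D = S @ H.T / (a*n)."""
    n = 1 << (n_act - 1).bit_length()          # next power of two >= n_act
    H = hadamard(n)[:n_act, :]                 # use the first n_act rows
    S = np.column_stack([apply_and_measure(amplitude * H[:, k]) for k in range(n)])
    return S @ H.T / (amplitude * n)

# toy "system": a known random interaction matrix plus measurement noise
rng = np.random.default_rng(3)
D_true = rng.normal(size=(200, 97))            # 200 slopes, 97 actuators
measure = lambda v: D_true @ v + rng.normal(scale=1e-3, size=200)
D_est = hadamard_interaction_matrix(measure, n_act=97)
print(np.max(np.abs(D_est - D_true)))          # should be close to the noise level
```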

  5. Application of multi-grid methods for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    This paper presents the application of a class of multi-grid methods to the solution of the Navier-Stokes equations for two-dimensional laminar flow problems. The method consists of combining the full approximation scheme-full multi-grid technique (FAS-FMG) with point-, line-, or plane-relaxation routines for solving the Navier-Stokes equations in primitive variables. The performance of the multi-grid methods is compared to that of several single-grid methods. The results show that much faster convergence can be obtained through the use of the multi-grid approach than through the various suggestions for improving single-grid methods. The importance of the choice of relaxation scheme for the multi-grid method is illustrated.

  6. Compressive sensing method for recognizing cat-eye effect targets.

    PubMed

    Li, Li; Li, Hui; Dang, Ersheng; Liu, Bo

    2013-10-01

    This paper proposes a cat-eye effect target recognition method with compressive sensing (CS) and presents a recognition method (sample processing before reconstruction based on compressed sensing, or SPCS) for image processing. In this method, the linear projections of original image sequences are applied to remove dynamic background distractions and extract cat-eye effect targets. Furthermore, the corresponding imaging mechanism for acquiring active and passive image sequences is put forward. This method uses fewer images to recognize cat-eye effect targets, reduces data storage, and translates the traditional target identification, based on original image processing, into measurement vectors processing. The experimental results show that the SPCS method is feasible and superior to the shape-frequency dual criteria method.

  7. Evaluation of microtensile and tensile bond strength tests determining effects of erbium, chromium: yttrium-scandium-gallium-garnet laser pulse frequency on resin-enamel bonding.

    PubMed

    Yildirim, T; Ayar, M K; Yesilyurt, C; Kilic, S

    2016-01-01

    The aim of the present study was to compare two different bond strength test methods (tensile and microtensile) in investigating the influence of erbium, chromium: yttrium-scandium-gallium-garnet (Er,Cr:YSGG) laser pulse frequency on resin-enamel bonding. One hundred and twenty-five bovine incisors were used in the present study. Two test methods were used: tensile bond strength (TBS; n = 20) and micro-TBS (μTBS; n = 5). These two groups were further split into three subgroups according to Er,Cr:YSGG laser frequency (20, 35, and 50 Hz). Following adhesive procedures, microhybrid composite was placed in a custom-made bonding jig for TBS testing and incrementally for μTBS testing. TBS and μTBS tests were carried out using a universal testing machine and a microtensile tester, respectively. Analysis of the TBS results showed that the means were not significantly different. For μTBS, the Laser-50 Hz group showed the highest bond strength (P < 0.05), and increasing frequency significantly increased bond strength (P < 0.05). Comparing the two tests, the μTBS results showed higher means and lower standard deviations. It was demonstrated that increasing the pulse frequency significantly improved the immediate μTBS, while the TBS showed no significant effect. It can therefore be concluded that the test method may play a significant role in determining optimum laser parameters for resin bonding.

  8. Fast animation of lightning using an adaptive mesh.

    PubMed

    Kim, Theodore; Lin, Ming C

    2007-01-01

    We present a fast method for simulating, animating, and rendering lightning using adaptive grids. The "dielectric breakdown model" is an elegant algorithm for electrical pattern formation that we extend to enable animation of lightning. The simulation can be slow, particularly in 3D, because it involves solving a large Poisson problem. Losasso et al. recently proposed an octree data structure for simulating water and smoke, and we show that this discretization can be applied to the problem of lightning simulation as well. However, implementing the incomplete Cholesky conjugate gradient (ICCG) solver for this problem can be daunting, so we provide an extensive discussion of implementation issues. ICCG solvers can usually be accelerated using "Eisenstat's trick," but the trick cannot be directly applied to the adaptive case. Fortunately, we show that an "almost incomplete Cholesky" factorization can be computed so that Eisenstat's trick can still be used. We then present a fast rendering method based on convolution that is competitive with Monte Carlo ray tracing but orders of magnitude faster, and we also show how to further improve the visual results using jittering.
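
    The record explains that the bottleneck is a large Poisson solve handled with an incomplete Cholesky conjugate gradient (ICCG) solver on an octree. The sketch below is only a stand-in: a uniform-grid 2D Poisson problem solved with preconditioned CG from SciPy, using a Jacobi preconditioner instead of incomplete Cholesky and a uniform grid instead of the paper's adaptive octree. Grid size and right-hand side are arbitrary.

    ```python
    # Sketch: the Laplacian-growth step of the dielectric breakdown model reduces to
    # a Poisson/Laplace solve. Here: uniform-grid 2D Poisson via preconditioned CG
    # (Jacobi preconditioner as a simple stand-in for incomplete Cholesky).
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 64                                   # interior grid points per side
    h = 1.0 / (n + 1)
    I = sp.identity(n)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(I, T) + sp.kron(T, I)) / h**2   # 2D Laplacian (Dirichlet BC)

    b = np.ones(n * n)                       # arbitrary right-hand side

    # Jacobi preconditioner: M^{-1} = diag(A)^{-1}
    d_inv = 1.0 / A.diagonal()
    M = spla.LinearOperator(A.shape, matvec=lambda v: d_inv * v)

    x, info = spla.cg(A.tocsr(), b, M=M, maxiter=500)
    print("converged" if info == 0 else f"cg returned info={info}",
          "| residual norm:", np.linalg.norm(b - A @ x))
    ```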

  9. Establishing the fundamentals for an elephant early warning and monitoring system.

    PubMed

    Zeppelzauer, Matthias; Stoeger, Angela S

    2015-09-04

    The decline of habitat for elephants due to expanding human activity is a serious conservation problem. This has continuously escalated the human-elephant conflict in Africa and Asia. Elephants make extensive use of powerful infrasonic calls (rumbles) that travel distances of up to several kilometers. This makes elephants well-suited for acoustic monitoring because it enables detecting elephants even if they are out of sight. When in sight, their distinct visual appearance makes them good candidates for visual monitoring. We provide an integrated overview of our interdisciplinary project that established the scientific fundamentals for a future early warning and monitoring system for humans who regularly experience serious conflict with elephants. We first draw the big picture of an early warning and monitoring system, then review the developed solutions for automatic acoustic and visual detection, discuss specific challenges and present open future work necessary to build a robust and reliable early warning and monitoring system that is able to operate in situ. We present a method for the automated detection of elephant rumbles that is robust to the diverse noise sources present in situ. We evaluated the method on an extensive set of audio data recorded under natural field conditions. Results show that the proposed method outperforms existing approaches and accurately detects elephant rumbles. Our visual detection method shows that tracking elephants in wildlife videos (of different sizes and postures) is feasible and particularly robust at near distances. From our project results we draw a number of conclusions that are discussed and summarized. We clearly identify the most critical challenges and necessary improvements of the proposed detection methods and conclude that our findings have the potential to form the basis for a future automated early warning system for elephants. We discuss challenges that need to be solved and summarize open topics in the context of a future early warning and monitoring system. We conclude that a long-term evaluation of the presented methods in situ using real-time prototypes is the most important next step toward transferring the developed methods into practical use.

  10. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.

  11. Stable lattice Boltzmann model for Maxwell equations in media

    NASA Astrophysics Data System (ADS)

    Hauser, A.; Verhey, J. L.

    2017-12-01

    The present work shows a method for stable simulations via the lattice Boltzmann (LB) model for electromagnetic (EM) waves transiting homogeneous media. LB models for such media were already presented in the literature, but they suffer from numerical instability when the media transitions are sharp. We use one of these models in the limit of pure vacuum derived from Liu and Yan [Appl. Math. Model. 38, 1710 (2014), 10.1016/j.apm.2013.09.009] and apply an extension that treats the effects of polarization and magnetization separately. We show simulations of simple examples in which EM waves travel into media to quantify error scaling, stability, accuracy, and time scaling. For conductive media, we use Strang splitting and check the accuracy of the simulations using the example of the skin effect. As for pure EM propagation, the error for the static limits, which are constructed with a current density added in a first-order scheme, can be less than 1%. The presented method is an easily implemented alternative for stabilizing simulations of EM waves propagating in media with spatially complex structure and arbitrary transitions.

  12. Meshless Lagrangian SPH method applied to isothermal lid-driven cavity flow at low-Re numbers

    NASA Astrophysics Data System (ADS)

    Fraga Filho, C. A. D.; Chacaltana, J. T. A.; Pinto, W. J. N.

    2018-01-01

    SPH is a relatively recent particle method that has been applied to the study of cavity flows, with few results available in the literature. The lid-driven cavity flow is a classic problem of fluid mechanics, extensively explored in the literature and of considerable complexity. The aim of this paper is to present a solution from the Lagrangian viewpoint for this problem. The discretization of the continuum domain is performed using Lagrangian particles. The physical laws of mass, momentum and energy conservation are represented by the Navier-Stokes equations. A serial numerical code, written in the Fortran programming language, has been used to perform the numerical simulations. SPH has been applied and compared with results from the literature (mesh methods and a meshless collocation method). The positions of the primary vortex centre and the non-dimensional velocity profiles passing through the geometric centre of the cavity have been analysed. The numerical Lagrangian results showed good agreement when compared to the results found in the literature, specifically for Re < 100. Suggestions for improvements in the SPH model presented are listed, in the search for better results for flows with higher Reynolds numbers.

  13. The Molecular Weight Distribution of Polymer Samples

    ERIC Educational Resources Information Center

    Horta, Arturo; Pastoriza, M. Alejandra

    2007-01-01

    Various methods for the determination of the molecular weight distribution (MWD) of different polymer samples are presented. The study shows that the molecular weight averages and distribution of a polymerization completely depend on the characteristics of the reaction itself.

  14. Renormalization of the Brazilian chiral nucleon-nucleon potential

    NASA Astrophysics Data System (ADS)

    Da Rocha, Carlos A.; Timóteo, Varese S.

    2013-03-01

    In this work we present a renormalization of the Brazilian nucleon-nucleon (NN) potential using a subtractive method. We show that correlated two-pion exchange is important for the isovector channels, mainly in the tensor and central potentials.

  15. Multi-Objective Optimization of an In situ Bioremediation Technology to Treat Perchlorate-Contaminated Groundwater

    EPA Science Inventory

    The presentation shows how a multi-objective optimization method is integrated into a transport simulator (MT3D) for estimating parameters and cost of in-situ bioremediation technology to treat perchlorate-contaminated groundwater.

  16. Toxicological evaluation in silico and in vivo of secondary metabolites of Cissampelos sympodialis in Mus musculus mice following inhalation.

    PubMed

    Alves, Mateus Feitosa; Ferreira, Larissa Adilis Maria Paiva; Gadelha, Francisco Allysson Assis Ferreira; Ferreira, Laércia Karla Diega Paiva; Felix, Mayara Barbalho; Scotti, Marcus Tullius; Scotti, Luciana; de Oliveira, Kardilândia Mendes; Dos Santos, Sócrates Golzio; Diniz, Margareth de Fátima Formiga Melo

    2017-12-04

    The ethanolic extract of the leaves of Cissampelos sympodialis has shown great pharmacological potential, with inflammatory and immunomodulatory activities; however, it also showed some toxicological effects. Therefore, this study aims to verify the toxicological potential of alkaloids of the genus Cissampelos through in silico methodologies, to develop an LC-MS/MS method to verify the presence of alkaloids in the infusion, and to evaluate the toxicity of the infusion of the leaves of C. sympodialis when inhaled by Swiss mice. In silico results showed that alkaloid 93 presented high toxicological potential, along with the products of its metabolism. LC-MS/MS results showed that the infusion of the leaves of this plant contained the alkaloids warifteine and methylwarifteine. Finally, in the in vivo toxicological analysis, the biochemistry, organ weights and histological analysis all indicated that the infusion of C. sympodialis leaves presents low toxicity.

  17. A Hair & a Fungus: Showing Kids the Size of a Microbe

    ERIC Educational Resources Information Center

    Richter, Dana L.

    2013-01-01

    A simple method is presented to show kids the size of a microbe--a fungus hypha--compared to a human hair. Common household items are used to make sterile medium on a stove or hotplate, which is dispensed in the cells of a weekly plastic pill box. Mold fungi can be easily and safely grown on the medium from the classroom environment. A microscope…

  18. Towards the "Informed Use" of Information and Communication Technology in Education: A Response to Adams' "Powerpoint, Habits of Mind, and Classroom Culture"

    ERIC Educational Resources Information Center

    Vallance, Michael; Towndrow, Phillip A.

    2007-01-01

    PowerPoint, the widely-used slide-show software package, is finding increasing currency in lecture halls and classrooms as the preferred method of communicating and presenting information. But, as Adams [Adams, C. (2006) "PowerPoint, habits of mind, and classroom culture." "Journal of Curriculum Studies," 38(4), 389-411] attempts to show, users…

  19. Infrared mapping resolves soft tissue preservation in 50 million year-old reptile skin.

    PubMed

    Edwards, N P; Barden, H E; van Dongen, B E; Manning, P L; Larson, P L; Bergmann, U; Sellers, W I; Wogelius, R A

    2011-11-07

    Non-destructive Fourier Transform InfraRed (FTIR) mapping of Eocene aged fossil reptile skin shows that biological control on the distribution of endogenous organic components within fossilized soft tissue can be resolved. Mapped organic functional units within this approximately 50 Myr old specimen from the Green River Formation (USA) include amide and sulphur compounds. These compounds are most probably derived from the original beta keratin present in the skin because fossil leaf- and other non-skin-derived organic matter from the same geological formation do not show intense amide or thiol absorption bands. Maps and spectra from the fossil are directly comparable to extant reptile skin. Furthermore, infrared results are corroborated by several additional quantitative methods including Synchrotron Rapid Scanning X-Ray Fluorescence (SRS-XRF) and Pyrolysis-Gas Chromatography/Mass Spectrometry (Py-GC/MS). All results combine to clearly show that the organic compound inventory of the fossil skin is different from the embedding sedimentary matrix and fossil plant material. A new taphonomic model involving ternary complexation between keratin-derived organic molecules, divalent trace metals and silicate surfaces is presented to explain the survival of the observed compounds. X-ray diffraction shows that suitable minerals for complex formation are present. Previously, this study would only have been possible with major destructive sampling. Non-destructive FTIR imaging methods are thus shown to be a valuable tool for understanding the taphonomy of high-fidelity preservation, and furthermore, may provide insight into the biochemistry of extinct organisms.

  20. Infrared mapping resolves soft tissue preservation in 50 million year-old reptile skin

    PubMed Central

    Edwards, N. P.; Barden, H. E.; van Dongen, B. E.; Manning, P. L.; Larson, P. L.; Bergmann, U.; Sellers, W. I.; Wogelius, R. A.

    2011-01-01

    Non-destructive Fourier Transform InfraRed (FTIR) mapping of Eocene aged fossil reptile skin shows that biological control on the distribution of endogenous organic components within fossilized soft tissue can be resolved. Mapped organic functional units within this approximately 50 Myr old specimen from the Green River Formation (USA) include amide and sulphur compounds. These compounds are most probably derived from the original beta keratin present in the skin because fossil leaf- and other non-skin-derived organic matter from the same geological formation do not show intense amide or thiol absorption bands. Maps and spectra from the fossil are directly comparable to extant reptile skin. Furthermore, infrared results are corroborated by several additional quantitative methods including Synchrotron Rapid Scanning X-Ray Fluorescence (SRS-XRF) and Pyrolysis-Gas Chromatography/Mass Spectrometry (Py-GC/MS). All results combine to clearly show that the organic compound inventory of the fossil skin is different from the embedding sedimentary matrix and fossil plant material. A new taphonomic model involving ternary complexation between keratin-derived organic molecules, divalent trace metals and silicate surfaces is presented to explain the survival of the observed compounds. X-ray diffraction shows that suitable minerals for complex formation are present. Previously, this study would only have been possible with major destructive sampling. Non-destructive FTIR imaging methods are thus shown to be a valuable tool for understanding the taphonomy of high-fidelity preservation, and furthermore, may provide insight into the biochemistry of extinct organisms. PMID:21429928

  1. Force wave transmission through the human locomotor system.

    PubMed

    Voloshin, A; Wosk, J; Brull, M

    1981-02-01

    A method to measure the capability of the human shock absorber system to attenuate input dynamic loading during gait is presented. The experiments were carried out with two groups: healthy subjects and subjects with various pathological conditions. The results of the experiments show a considerable difference in the capability of each group's shock absorbers to attenuate force transmitted through the locomotor system. Comparison shows that healthy subjects definitely possess a more efficient shock-absorbing capacity than do those subjects with joint disorders. The presented results show that degenerative changes in joints reduce their shock-absorbing capacity, which leads to overloading of the next shock absorber in the locomotor system. Thus, the development of osteoarthritis may be expected to result from overloading of a shock absorber's functional capacity.

  2. Differential expression analysis for RNAseq using Poisson mixed models.

    PubMed

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Predicting activities of daily living for cancer patients using an ontology-guided machine learning methodology.

    PubMed

    Min, Hua; Mobahi, Hedyeh; Irvin, Katherine; Avramovic, Sanja; Wojtusiak, Janusz

    2017-09-16

    Bio-ontologies are becoming increasingly important in knowledge representation and in the machine learning (ML) fields. This paper presents a ML approach that incorporates bio-ontologies and its application to the SEER-MHOS dataset to discover patterns of patient characteristics that impact the ability to perform activities of daily living (ADLs). Bio-ontologies are used to provide computable knowledge for ML methods to "understand" biomedical data. This retrospective study included 723 cancer patients from the SEER-MHOS dataset. Two ML methods were applied to create predictive models for ADL disabilities for the first year after a patient's cancer diagnosis. The first method is a standard rule learning algorithm; the second is that same algorithm additionally equipped with methods for reasoning with ontologies. The models showed that a patient's race, ethnicity, smoking preference, treatment plan and tumor characteristics including histology, staging, cancer site, and morphology were predictors for ADL performance levels one year after cancer diagnosis. The ontology-guided ML method was more accurate at predicting ADL performance levels (P < 0.1) than methods without ontologies. This study demonstrated that bio-ontologies can be harnessed to provide medical knowledge for ML algorithms. The presented method demonstrates that encoding specific types of hierarchical relationships to guide rule learning is possible, and can be extended to other types of semantic relationships present in biomedical ontologies. The ontology-guided ML method achieved better performance than the method without ontologies. The presented method can also be used to promote the effectiveness and efficiency of ML in healthcare, in which use of background knowledge and consistency with existing clinical expertise is critical.

  4. Automatic and Reproducible Positioning of Phase-Contrast MRI for the Quantification of Global Cerebral Blood Flow

    PubMed Central

    Liu, Peiying; Lu, Hanzhang; Filbey, Francesca M.; Pinkham, Amy E.; McAdams, Carrie J.; Adinoff, Bryon; Daliparthi, Vamsi; Cao, Yan

    2014-01-01

    Phase-Contrast MRI (PC-MRI) is a noninvasive technique to measure blood flow. In particular, global but highly quantitative cerebral blood flow (CBF) measurement using PC-MRI complements several other CBF mapping methods such as arterial spin labeling and dynamic susceptibility contrast MRI by providing a calibration factor. The ability to estimate blood supply in physiological units also lays a foundation for assessment of brain metabolic rate. However, a major obstacle before wider applications of this method is that the slice positioning of the scan, ideally placed perpendicular to the feeding arteries, requires considerable expertise and can present a burden to the operator. In the present work, we proposed that the majority of PC-MRI scans can be positioned using an automatic algorithm, leaving only a small fraction of arteries requiring manual positioning. We implemented and evaluated an algorithm for this purpose based on feature extraction of a survey angiogram, which is of minimal operator dependence. In a comparative test-retest study with 7 subjects, the blood flow measurement using this algorithm showed an inter-session coefficient of variation (CoV) of . The Bland-Altman method showed that the automatic method differs from the manual method by between and , for of the CBF measurements. This is comparable to the variance in CBF measurement using manually-positioned PC MRI alone. In a further application of this algorithm to 157 consecutive subjects from typical clinical cohorts, the algorithm provided successful positioning in 89.7% of the arteries. In 79.6% of the subjects, all four arteries could be planned using the algorithm. Chi-square tests of independence showed that the success rate was not dependent on the age or gender, but the patients showed a trend of lower success rate (p = 0.14) compared to healthy controls. In conclusion, this automatic positioning algorithm could improve the application of PC-MRI in CBF quantification. PMID:24787742

  5. Quantification and clustering of phenotypic screening data using time-series analysis for chemotherapy of schistosomiasis

    PubMed Central

    2012-01-01

    Background Neglected tropical diseases, especially those caused by helminths, constitute some of the most common infections of the world's poorest people. Development of techniques for automated, high-throughput drug screening against these diseases, especially in whole-organism settings, constitutes one of the great challenges of modern drug discovery. Method We present a method for enabling high-throughput phenotypic drug screening against diseases caused by helminths with a focus on schistosomiasis. The proposed method allows for a quantitative analysis of the systemic impact of a drug molecule on the pathogen as exhibited by the complex continuum of its phenotypic responses. This method consists of two key parts: first, biological image analysis is employed to automatically monitor and quantify shape-, appearance-, and motion-based phenotypes of the parasites. Next, we represent these phenotypes as time-series and show how to compare, cluster, and quantitatively reason about them using techniques of time-series analysis. Results We present results on a number of algorithmic issues pertinent to the time-series representation of phenotypes. These include results on appropriate representation of phenotypic time-series, analysis of different time-series similarity measures for comparing phenotypic responses over time, and techniques for clustering such responses by similarity. Finally, we show how these algorithmic techniques can be used for quantifying the complex continuum of phenotypic responses of parasites. An important corollary is the ability of our method to recognize and rigorously group parasites based on the variability of their phenotypic response to different drugs. Conclusions The methods and results presented in this paper enable automatic and quantitative scoring of high-throughput phenotypic screens focused on helminthic diseases. Furthermore, these methods allow us to analyze and stratify parasites based on their phenotypic response to drugs. Together, these advancements represent a significant breakthrough for the process of drug discovery against schistosomiasis in particular and can be extended to other helminthic diseases which together afflict a large part of humankind. PMID:22369037
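
    The record describes comparing and clustering phenotypic responses represented as time series. As a minimal, hedged sketch of that step (not the paper's pipeline or similarity measures), the example below hierarchically clusters synthetic equal-length series with plain Euclidean distance; all data and parameters are placeholders.

    ```python
    # Sketch: cluster phenotypic responses represented as time series.
    # Synthetic data stand in for parasite motility/appearance signals.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 1.0, 50)

    # Two synthetic response types plus noise.
    decaying = [np.exp(-5 * t) + 0.05 * rng.normal(size=t.size) for _ in range(10)]
    steady = [np.ones_like(t) + 0.05 * rng.normal(size=t.size) for _ in range(10)]
    series = np.vstack(decaying + steady)

    # Agglomerative clustering on pairwise Euclidean distances between series.
    dist = pdist(series, metric="euclidean")
    tree = linkage(dist, method="average")
    labels = fcluster(tree, t=2, criterion="maxclust")
    print(labels)   # the two response types separate into two clusters
    ```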

  6. Quantitative analysis of the patellofemoral motion pattern using semi-automatic processing of 4D CT data.

    PubMed

    Forsberg, Daniel; Lindblom, Maria; Quick, Petter; Gauffin, Håkan

    2016-09-01

    To present a semi-automatic method with minimal user interaction for quantitative analysis of the patellofemoral motion pattern. 4D CT data capturing the patellofemoral motion pattern of a continuous flexion and extension were collected for five patients prone to patellar luxation both pre- and post-surgically. For the proposed method, an observer would place landmarks in a single 3D volume, which then are automatically propagated to the other volumes in a time sequence. From the landmarks in each volume, the measures patellar displacement, patellar tilt and angle between femur and tibia were computed. Evaluation of the observer variability showed the proposed semi-automatic method to be favorable over a fully manual counterpart, with an observer variability of approximately 1.5° for the angle between femur and tibia, 1.5 mm for the patellar displacement, and 4.0°-5.0° for the patellar tilt. The proposed method showed that surgery reduced the patellar displacement and tilt at maximum extension by approximately 10-15 mm and 15°-20° for three patients, but with less evident differences for two of the patients. A semi-automatic method suitable for quantification of the patellofemoral motion pattern as captured by 4D CT data has been presented. Its observer variability is on par with that of other methods but with the distinct advantage of supporting continuous motions during the image acquisition.

  7. Robustifying blind image deblurring methods by simple filters

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Zeng, Xiangrong; Huangpeng, Qizi; Fan, Jun; Zhou, Jinglun; Feng, Jing

    2016-07-01

    State-of-the-art blind image deblurring (BID) methods are sensitive to noise, and most of them can deal with only small levels of Gaussian noise. In this paper, we use simple filters to build a robust BID framework that is able to robustify existing BID methods against high-level Gaussian noise and/or non-Gaussian noise. Experiments on images in the presence of Gaussian noise, impulse noise (salt-and-pepper noise and random-valued noise) and mixed Gaussian-impulse noise, and on a real-world blurry and noisy image, show that the proposed method estimates sharper kernels and better images faster than other methods.
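
    As a hedged illustration of the "simple filters" idea only (not the paper's full framework), the sketch below median-filters a noisy observation before any kernel estimation would be run, so that an existing BID method sees far less impulse noise. The image, noise level, and filter size are invented for the example.

    ```python
    # Sketch: robustifying front end for BID — median-filter impulse noise away
    # before kernel estimation. The BID step itself is left as a placeholder.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    image = rng.random((128, 128))                 # stand-in for a blurry observation

    # Add salt-and-pepper (impulse) noise to 5% of the pixels.
    noisy = image.copy()
    mask = rng.random(image.shape) < 0.05
    noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())

    # A 3x3 median removes most impulse outliers while roughly preserving
    # edges and the blur structure.
    filtered = ndimage.median_filter(noisy, size=3)

    print("noisy    MAE vs clean:", float(np.abs(noisy - image).mean()))
    print("filtered MAE vs clean:", float(np.abs(filtered - image).mean()))
    # A kernel-estimation / deblurring routine would then be run on `filtered`.
    ```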

  8. An Improved Text Localization Method for Natural Scene Images

    NASA Astrophysics Data System (ADS)

    Jiang, Mengdi; Cheng, Jianghua; Chen, Minghui; Ku, Xishu

    2018-01-01

    In order to extract text information effectively from natural scene images with complex backgrounds, multi-orientation perspective and multiple languages, we present a new method based on an improved Stroke Feature Transform (SWT). First, the Maximally Stable Extremal Region (MSER) method is used to detect candidate text regions. Second, the SWT algorithm is applied in the candidate regions, which improves the edge detection compared with the traditional SWT method. Finally, Frequency-tuned (FT) visual saliency is introduced to remove non-text candidate regions. The experimental results show that the method achieves good robustness for complex backgrounds with multi-orientation perspective, various characters and font sizes.

  9. Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems

    PubMed Central

    Saberi Nik, Hassan; Rebelo, Paulo

    2014-01-01

    We present a pseudospectral method application for solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on a technique of extending Gauss-Seidel type relaxation ideas to systems of nonlinear differential equations and on using Chebyshev pseudospectral methods to solve the resulting system on a sequence of multiple intervals. In this new application, the MSRM is used to solve well-known hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta based ode45 solver to show that the MSRM gives accurate results. PMID:25386624

  10. Fast Boundary Element Method for acoustics with the Sparse Cardinal Sine Decomposition

    NASA Astrophysics Data System (ADS)

    Alouges, François; Aussal, Matthieu; Parolin, Emile

    2017-07-01

    This paper presents the newly proposed Sparse Cardinal Sine Decomposition method, which allows fast convolution on unstructured grids. We focus on its use when coupled with finite element techniques to solve acoustic problems with the (compressed) Boundary Element Method. In addition, we also compare the computational performance of two equivalent Matlab® and Python implementations of the method. We show validation test cases in order to assess the precision of the approach. Finally, the performance of the method is illustrated by the computation of the acoustic target strength of a realistic submarine from the Benchmark Target Strength Simulation international workshop.

  11. Improved regulatory element prediction based on tissue-specific local epigenomic signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yupeng; Gorkin, David U.; Dickel, Diane E.

    Accurate enhancer identification is critical for understanding the spatiotemporal transcriptional regulation during development as well as the functional impact of disease-related noncoding genetic variants. Computational methods have been developed to predict the genomic locations of active enhancers based on histone modifications, but the accuracy and resolution of these methods remain limited. Here, we present an algorithm, regulatory element prediction based on tissue-specific local epigenetic marks (REPTILE), which integrates histone modification and whole-genome cytosine DNA methylation profiles to identify the precise location of enhancers. We tested the ability of REPTILE to identify enhancers previously validated in reporter assays. Compared with existing methods, REPTILE shows consistently superior performance across diverse cell and tissue types, and the enhancer locations are significantly more refined. We show that, by incorporating base-resolution methylation data, REPTILE greatly improves upon current methods for annotation of enhancers across a variety of cell and tissue types.

  12. A comparison of electronic heterodyne moire deflectometry and electronic heterodyne holographic interferometry for flow measurements

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Stricker, J.

    1985-01-01

    Electronic heterodyne moire deflectometry and electronic heterodyne holographic interferometry are compared as methods for the accurate measurement of refractive index and density change distributions of phase objects. Experimental results are presented to show that the two methods have comparable accuracy for measuring the first derivative of the interferometric fringe shift. The phase object for the measurements is a large crystal of KD*P, whose refractive index distribution can be changed accurately and repeatably for the comparison. Although the refractive index change causes only about one interferometric fringe shift over the entire crystal, the derivative shows considerable detail for the comparison. As electronic phase measurement methods, both methods are very accurate and are intrinsically compatible with computer controlled readout and data processing. Heterodyne moire is relatively inexpensive and has high variable sensitivity. Heterodyne holographic interferometry is better developed, and can be used with poor quality optical access to the experiment.

  13. A method of emotion contagion for crowd evacuation

    NASA Astrophysics Data System (ADS)

    Cao, Mengxiao; Zhang, Guijuan; Wang, Mengsi; Lu, Dianjie; Liu, Hong

    2017-10-01

    Current evacuation models do not consider the impact of emotion and personality on crowd evacuation. Thus, there is a large difference between evacuation results and the real-life behavior of the crowd. In order to generate more realistic crowd evacuation results, we present a method of emotion contagion for crowd evacuation. First, we combine the OCEAN (Openness, Extroversion, Agreeableness, Neuroticism, Conscientiousness) model and the SIS (Susceptible Infected Susceptible) model to construct the P-SIS (Personalized SIS) emotional contagion model. The P-SIS model effectively captures the diversity of individuals in the crowd. Second, we couple the P-SIS model with the social force model to simulate emotional contagion during crowd evacuation. Finally, a photo-realistic rendering method is employed to obtain the animation of the crowd evacuation. Experimental results show that our method can simulate crowd evacuation realistically and has guiding significance for crowd evacuation in emergency circumstances.
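
    The sketch below is a minimal, hedged illustration of an SIS-style contagion step with per-agent susceptibility modulated by one personality trait, loosely in the spirit of coupling a personality model with SIS as described above. The rates, contact radius, trait weighting, and agent positions are invented; the paper's P-SIS model and its social-force coupling are richer.

    ```python
    # Sketch: SIS-style emotional contagion with personality-dependent susceptibility.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    pos = rng.random((n, 2)) * 20.0            # agent positions in a 20 m x 20 m room
    neuroticism = rng.random(n)                # one OCEAN-like trait, scaled to [0, 1]
    panicked = np.zeros(n, dtype=bool)
    panicked[rng.choice(n, 5, replace=False)] = True   # initial panicked agents

    beta, gamma, radius = 0.3, 0.05, 2.0       # base infection/recovery rates, contact radius

    for step in range(50):
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        contact = (d < radius) & panicked[None, :]          # neighbors who are panicked
        n_contacts = contact.sum(axis=1)
        # Susceptibility grows with neuroticism; probability of catching panic this step.
        p_infect = 1.0 - (1.0 - beta * (0.5 + 0.5 * neuroticism)) ** n_contacts
        newly_panicked = (~panicked) & (rng.random(n) < p_infect)
        recovered = panicked & (rng.random(n) < gamma)       # SIS: panicked can calm down
        panicked = (panicked | newly_panicked) & ~recovered

    print("fraction panicked after 50 steps:", panicked.mean())
    ```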

  14. Effect of method of crystallization on the IV-III and IV-II polymorphic transitions of ammonium nitrate.

    PubMed

    Vargeese, Anuj A; Joshi, Satyawati S; Krishnamurthy, V N

    2009-01-15

    A study has been undertaken on the effect of the crystallization method on the IV<-->III transition of ammonium nitrate (AN). AN was crystallized in three different ways, viz. recrystallization, evaporative crystallization and melt crystallization. When the samples were crystallized from saturated aqueous solution, ideal crystals were formed, which behaved differently from the crystals formed by the other methods. DTA examination of the crystals showed that the crystals have different transition behaviour. The moisture uptake of the samples was found to be influenced by the mode of crystallization. The samples were further analyzed by powder X-ray diffraction (XRD) and scanning electron microscopy (SEM). The present study showed that parameters such as thermal history, number of previous transformations and moisture content have a negligible influence on the IV<-->III transition of AN compared to the method of crystallization.

  15. Development of gait segmentation methods for wearable foot pressure sensors.

    PubMed

    Crea, S; De Rossi, S M M; Donati, M; Reberšek, P; Novak, D; Vitiello, N; Lenzi, T; Podobnik, J; Munih, M; Carrozza, M C

    2012-01-01

    We present an automated segmentation method based on the analysis of plantar pressure signals recorded from two synchronized wireless foot insoles. Given the strict limits on computational power and power consumption typical of wearable electronic components, our aim is to investigate the capability of a Hidden Markov Model machine-learning method to detect gait phases with different levels of complexity in the processing of the wearable pressure sensor signals. Therefore, three different datasets are developed: raw voltage values, calibrated sensor signals, and a calibrated estimation of the total ground reaction force and the position of the plantar center of pressure. The method is tested on a pool of 5 healthy subjects through a leave-one-out cross validation. The results show high classification performance when using the estimated biomechanical variables, on average 96%. Calibrated signals and raw voltage values show higher delays and dispersions in phase transition detection, suggesting lower reliability for online applications.
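
    As a hedged sketch of the general idea (a Gaussian HMM decoding gait phases from a 1D feature), the example below uses the third-party hmmlearn package and a synthetic signal; it is not the paper's insole data, feature set, or trained model.

    ```python
    # Sketch: segmenting gait phases from a plantar-pressure-like feature with a
    # Gaussian HMM. Requires the `hmmlearn` package; all data are synthetic.
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(3)

    # Fake 1D feature (e.g., estimated total ground reaction force) cycling through
    # 4 gait phases, each with a different mean level.
    phase_means = [0.1, 0.8, 1.0, 0.3]
    true_states = np.tile(np.repeat(np.arange(4), 25), 20)          # 20 gait cycles
    signal = np.array([phase_means[s] for s in true_states])
    X = (signal + 0.05 * rng.normal(size=signal.size)).reshape(-1, 1)

    model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=100,
                            random_state=0)
    model.fit(X)
    decoded = model.predict(X)

    # HMM state labels are arbitrary; report the dominant decoded state per true phase.
    for phase in range(4):
        states, counts = np.unique(decoded[true_states == phase], return_counts=True)
        print(f"phase {phase}: dominant decoded state {states[counts.argmax()]}")
    ```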

  16. Automated branching pattern report generation for laparoscopic surgery assistance

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Matsuzaki, Tetsuro; Hayashi, Yuichiro; Kitasaka, Takayuki; Misawa, Kazunari; Mori, Kensaku

    2015-05-01

    This paper presents a method for generating branching pattern reports of abdominal blood vessels for laparoscopic gastrectomy. In gastrectomy, it is very important to understand branching structure of abdominal arteries and veins, which feed and drain specific abdominal organs including the stomach, the liver and the pancreas. In the real clinical stage, a surgeon creates a diagnostic report of the patient anatomy. This report summarizes the branching patterns of the blood vessels related to the stomach. The surgeon decides actual operative procedure. This paper shows an automated method to generate a branching pattern report for abdominal blood vessels based on automated anatomical labeling. The report contains 3D rendering showing important blood vessels and descriptions of branching patterns of each vessel. We have applied this method for fifty cases of 3D abdominal CT scans and confirmed the proposed method can automatically generate branching pattern reports of abdominal arteries.

  17. Comparison of Manual Refraction Versus Autorefraction in 60 Diabetic Retinopathy Patients

    PubMed Central

    Shirzadi, Keyvan; Shahraki, Kourosh; Yahaghi, Emad; Makateb, Ali; Khosravifard, Keivan

    2016-01-01

    Aim: The purpose of the study was to compare manual refraction and autorefraction in diabetic retinopathy patients. Material and Methods: The study was conducted at the Be'sat Army Hospital from 2013-2015. In the present study, differences between two common refractometry methods (manual refractometry and autorefractometry) in the diagnosis and follow-up of retinopathy in patients with diabetes are investigated. Results: Our results showed that there is a significant difference in the visual acuity scores of patients between manual and auto refractometry. Despite this, the spherical equivalent scores of the two refractometry methods did not show a statistically significant difference in these patients. Conclusion: Although manual refraction is comparable with autorefraction in evaluating spherical equivalent scores in diabetic patients with retinopathy, in the case of visual acuity the results from the two methods are not comparable. PMID:27703289

  18. Improved regulatory element prediction based on tissue-specific local epigenomic signatures

    DOE PAGES

    He, Yupeng; Gorkin, David U.; Dickel, Diane E.; ...

    2017-02-13

    Accurate enhancer identification is critical for understanding the spatiotemporal transcriptional regulation during development as well as the functional impact of disease-related noncoding genetic variants. Computational methods have been developed to predict the genomic locations of active enhancers based on histone modifications, but the accuracy and resolution of these methods remain limited. Here, we present an algorithm, regulatory element prediction based on tissue-specific local epigenetic marks (REPTILE), which integrates histone modification and whole-genome cytosine DNA methylation profiles to identify the precise location of enhancers. We tested the ability of REPTILE to identify enhancers previously validated in reporter assays. Compared with existing methods, REPTILE shows consistently superior performance across diverse cell and tissue types, and the enhancer locations are significantly more refined. We show that, by incorporating base-resolution methylation data, REPTILE greatly improves upon current methods for annotation of enhancers across a variety of cell and tissue types.

  19. Multi-chain Markov chain Monte Carlo methods for computationally expensive models

    NASA Astrophysics Data System (ADS)

    Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.

    2017-12-01

    Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs; i.e., for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
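
    The following is a minimal sketch of the multi-chain idea: several independent Metropolis chains sampling the same target, with a Gelman-Rubin R-hat diagnostic to judge when they have converged to a common distribution. The 1D target, step size, and chain count are illustrative stand-ins for the expensive forward-model posteriors discussed above.

    ```python
    # Sketch: multiple Metropolis chains plus a Gelman-Rubin convergence check.
    import numpy as np

    rng = np.random.default_rng(4)

    def log_target(x):                      # standard normal target for illustration
        return -0.5 * x * x

    def run_chain(x0, n_steps, step=1.0):
        xs = np.empty(n_steps)
        x, lp = x0, log_target(x0)
        for i in range(n_steps):
            prop = x + step * rng.normal()
            lp_prop = log_target(prop)
            if np.log(rng.random()) < lp_prop - lp:
                x, lp = prop, lp_prop
            xs[i] = x
        return xs

    n_chains, n_steps = 4, 5000
    starts = rng.normal(scale=5.0, size=n_chains)      # deliberately dispersed starts
    chains = np.array([run_chain(s, n_steps) for s in starts])[:, n_steps // 2:]  # drop burn-in

    # Gelman-Rubin potential scale reduction factor.
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()              # within-chain variance
    B = n * chain_means.var(ddof=1)                    # between-chain variance
    var_hat = (n - 1) / n * W + B / n
    print("R-hat:", float(np.sqrt(var_hat / W)))       # close to 1 indicates convergence
    ```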

  20. Sensitivity analysis and nonlinearity assessment of steam cracking furnace process

    NASA Astrophysics Data System (ADS)

    Rosli, M. N.; Sudibyo, Aziz, N.

    2017-11-01

    In this paper, sensitivity analysis and nonlinearity assessment of the steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables and to identify the interactions between parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model. It shows that four of the six input parameters are significant. After the screening is completed, a step test is performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.

  1. Automated image quality assessment for chest CT scans.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2018-02-01

    Medical image quality needs to be maintained at standards sufficient for effective clinical reading. Automated computer analytic methods may be applied to medical images for quality assessment. For chest CT scans in a lung cancer screening context, an automated quality assessment method is presented that characterizes image noise and image intensity calibration. This is achieved by image measurements in three automatically segmented homogeneous regions of the scan: external air, trachea lumen air, and descending aorta blood. Profiles of CT scanner behavior are also computed. The method has been evaluated on both phantom and real low-dose chest CT scans and results show that repeatable noise and calibration measures may be realized by automated computer algorithms. Noise and calibration profiles show relevant differences between different scanners and protocols. Automated image quality assessment may be useful for quality control for lung cancer screening and may enable performance improvements to automated computer analysis methods. © 2017 American Association of Physicists in Medicine.

  2. Tracking perturbations in Boolean networks with spectral methods

    NASA Astrophysics Data System (ADS)

    Kesseli, Juha; Rämö, Pauli; Yli-Harja, Olli

    2005-08-01

    In this paper we present a method for predicting the spread of perturbations in Boolean networks. The method is applicable to networks that have no regular topology. The prediction of perturbations can be performed easily by using a presented result that enables efficient computation of the required iterative formulas. This result is based on the abstract Fourier transform of the functions in the network. In this paper the method is applied to show the spread of perturbations in networks containing a distribution of functions found from biological data. The advances in the study of the spread of perturbations can be directly applied to quantify chaos in Boolean networks. Derrida plots over an arbitrary number of time steps can be computed, and thus distributions of functions can be compared with each other with respect to the amount of order they create in random networks.
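
    For readers unfamiliar with Derrida plots, the sketch below estimates one step of the Derrida map empirically for a random Boolean network: flip a few node states and measure the Hamming distance after one synchronous update. The network (random K=2 truth tables) and sizes are invented; the paper instead computes this spread analytically via its Fourier-transform-based result and for biologically derived function distributions.

    ```python
    # Sketch: empirical one-step Derrida map for a random Boolean network.
    import numpy as np

    rng = np.random.default_rng(5)
    N, K = 500, 2
    inputs = rng.integers(0, N, size=(N, K))                 # K regulators per node
    tables = rng.integers(0, 2, size=(N, 2 ** K))            # random truth tables

    def update(state):
        idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)
        return tables[np.arange(N), idx]

    def derrida_point(h0, trials=200):
        spread = 0.0
        for _ in range(trials):
            x = rng.integers(0, 2, size=N)
            y = x.copy()
            flip = rng.choice(N, size=h0, replace=False)      # perturb h0 nodes
            y[flip] ^= 1
            spread += np.mean(update(x) != update(y))
        return spread / trials

    for h0 in (1, 5, 25, 125):
        print(f"initial distance {h0/N:.3f} -> after one step {derrida_point(h0):.3f}")
    ```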

  3. An Interpolation Method for Obtaining Thermodynamic Properties Near Saturated Liquid and Saturated Vapor Lines

    NASA Technical Reports Server (NTRS)

    Nguyen, Huy H.; Martin, Michael A.

    2004-01-01

    The two most common approaches used to formulate thermodynamic properties of pure substances are fundamental (or characteristic) equations of state (Helmholtz and Gibbs functions) and a piecemeal approach that is described in Adebiyi and Russell (1992). This paper neither presents a different method to formulate thermodynamic properties of pure substances nor validates the aforementioned approaches. Rather its purpose is to present a method to generate property tables from existing property packages and a method to facilitate the accurate interpretation of fluid thermodynamic property data from those tables. There are two parts to this paper. The first part of the paper shows how efficient and usable property tables were generated, with the minimum number of data points, using an aerospace industry standard property package. The second part describes an innovative interpolation technique that has been developed to properly obtain thermodynamic properties near the saturated liquid and saturated vapor lines.

  4. Dynamic load testing on the bearing capacity of prestressed tubular concrete piles in soft ground

    NASA Astrophysics Data System (ADS)

    Yu, Chuang; Liu, Songyu

    2008-11-01

    Dynamic load testing (DLT) is a high-strain test method for assessing pile performance. The shaft capacity of a driven PTC (prestressed tubular concrete) pile in marine soft ground will vary with time after installation. The DLT method has been successfully transferred to the testing of prestressed pipe piles in the marine soft clay of the Lianyungang area in China. DLT is used to determine the ultimate bearing capacity of a single pile at different periods after pile installation. The ultimate bearing capacity of a single pile was found to increase by more than 70% during the intervening 3 months, which demonstrates the time effect on rigid pile bearing capacity in marine soft ground. Furthermore, the skin friction and axial force along the pile shaft are presented as well, which reveal the load transfer mechanism of the pipe pile in soft clay. This shows the economy and efficiency of the DLT method compared to the static load testing method.

  5. Preliminary study of ultrasonic structural quality control of Swiss-type cheese.

    PubMed

    Eskelinen, J J; Alavuotunki, A P; Haeggström, E; Alatossava, T

    2007-09-01

    There is demand for a new nondestructive cheese-structure analysis method for Swiss-type cheese. Such a method would provide the cheese-making industry the means to enhance process control and quality assurance. This paper presents a feasibility study on ultrasonic monitoring of the structural quality of Swiss cheese by using a single-transducer 2-MHz longitudinal mode pulse-echo setup. A volumetric ultrasonic image of a cheese sample featuring gas holes (cheese-eyes) and defects (cracks) in the scan area is presented. The image is compared with an optical reference image constructed from dissection images of the same sample. The results show that the ultrasonic method is capable of monitoring the gas-solid structure of the cheese during the ripening process. Moreover, the method can be used to detect and to characterize cheese-eyes and cracks in ripened cheese. Industrial application demands were taken into account when conducting the measurements.

  6. Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Zwick, David; Hackl, Jason; Balachandar, S.

    2017-11-01

    Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flow is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply it to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.

  7. Method for estimating spatially variable seepage loss and hydraulic conductivity in intermittent and ephemeral streams

    USGS Publications Warehouse

    Niswonger, R.G.; Prudic, David E.; Fogg, G.E.; Stonestrom, David A.; Buckland, E.M.

    2008-01-01

    A method is presented for estimating seepage loss and streambed hydraulic conductivity along intermittent and ephemeral streams using streamflow front velocities in initially dry channels. The method uses the kinematic wave equation for routing streamflow in channels coupled to Philip's equation for infiltration. The coupled model considers variations in seepage loss both across and along the channel. Water redistribution in the unsaturated zone is also represented in the model. Sensitivity of the streamflow front velocity to parameters used for calculating seepage loss and for routing streamflow shows that the streambed hydraulic conductivity has the greatest sensitivity for moderate to large seepage loss rates. Channel roughness, geometry, and slope are most important for low seepage loss rates; however, streambed hydraulic conductivity is still important for values greater than 0.008 m/d. Two example applications are presented to demonstrate the utility of the method.
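
    As a rough, hedged illustration of the physics involved (not the paper's coupled kinematic-wave/Philip routing model), the toy water balance below advances a wetting front down an initially dry channel while each wetted reach loses water to Philip-type infiltration, I(t) = S*sqrt(t) + K*t per unit area. All parameter values, reach sizes, and the one-reach-per-step front rule are invented for illustration.

    ```python
    # Sketch: decelerating wetting front in a dry channel with Philip infiltration.
    import numpy as np

    S = 0.05     # sorptivity, m / sqrt(h)
    K = 0.01     # saturated hydraulic conductivity of the streambed, m/h
    width = 2.0  # wetted channel width, m
    dx = 10.0    # reach length, m
    q_in = 40.0  # inflow at the upstream end, m^3/h

    dt = 0.05                      # h
    n_steps = 400
    wet_time = []                  # time since each reach was first wetted

    for step in range(n_steps):
        t = step * dt
        # Infiltration rate per unit area for each wetted reach: dI/dt = 0.5*S/sqrt(tau) + K.
        tau = np.maximum(t - np.array(wet_time), dt)
        loss = (0.5 * S / np.sqrt(tau) + K) * width * dx      # m^3/h per wetted reach
        surplus = q_in - loss.sum()
        if surplus > 0:            # remaining flow wets the next downstream reach
            wet_time.append(t)
        if step % 100 == 0:
            print(f"t = {t:5.2f} h, wetted length = {len(wet_time) * dx:6.1f} m")
    ```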

  8. Scientific Visualization to Study Flux Transfer Events at the Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Rastatter, Lutz; Kuznetsova, Maria M.; Sibeck, David G.; Berrios, David H.

    2011-01-01

    In this paper we present results of modeling of reconnection at the dayside magnetopause with subsequent development of flux transfer event signatures. The tools used include new methods that have been added to the suite of visualization methods that are used at the Community Coordinated Modeling Center (CCMC). Flux transfer events result from localized reconnection that connect magnetosheath magnetic field and plasma with magnetospheric fields and plasma and results in flux rope structures that span the dayside magnetopause. The onset of flux rope formation and the three-dimensional structure of flux ropes are studied as they have been modeled by high-resolution magnetohydrodynamic simulations of the dayside magnetosphere of the Earth. We show that flux transfer events are complex three-dimensional structures that require modern visualization and analysis techniques. Two suites of visualization methods are presented and we demonstrate the usefulness of those methods through the CCMC web site to the general science user.

  9. Estimating the R-curve from residual strength data

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1985-01-01

    A method is presented for estimating the crack-extension resistance curve (R-curve) from residual-strength (maximum load against original crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information can be used to estimate the failure loads of more complicated structures of the same material and thickness. The fundamentals of the R-curve concept are reviewed first. Then the analytical basis for the estimation method is presented. The estimation method has been verified in two ways. Data from the literature (involving several materials and different types of specimens) are used to show that the estimated R-curve is in good agreement with the measured R-curve. A recent predictive blind round-robin program offers a more crucial test. When the actual failure loads are disclosed, the predictions are found to be in good agreement.

  10. A deviation display method for visualising data in mobile gamma-ray spectrometry.

    PubMed

    Kock, Peder; Finck, Robert R; Nilsson, Jonas M C; Ostlund, Karl; Samuelsson, Christer

    2010-09-01

    A real-time visualisation method to be used in mobile gamma-spectrometric search operations with standard detector systems is presented. The new method, called the deviation display, uses a modified waterfall display to present relative changes in spectral data over energy and time. Using unshielded (137)Cs and (241)Am point sources and different natural background environments, the behaviour of the deviation displays is demonstrated and analysed for two standard detector types (NaI(Tl) and HPGe). The deviation display enhances positive significant changes while suppressing the natural background fluctuations. After an initialization time of about 10 min, this technique leads to a homogeneous display dominated by the background colour, where even small changes in spectral data are easy to discover. As this paper shows, the deviation display method works well for all tested gamma energies and natural background radiation levels, and with both tested detector systems.
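
    The sketch below illustrates the core arithmetic such a display could use: for each new spectrum, compute the relative deviation of every energy bin from a running background estimate and append it as a row of a waterfall array. The Poisson-noise spectra, the injected line, the smoothing factor, and the update rule are all invented for the example and are not the paper's algorithm or calibration.

    ```python
    # Sketch: per-bin deviation of each spectrum from a running background estimate.
    import numpy as np

    rng = np.random.default_rng(6)
    n_bins, n_spectra = 256, 120
    background_rate = np.full(n_bins, 20.0)          # mean counts per bin per second

    waterfall = np.zeros((n_spectra, n_bins))
    bg_estimate = None
    alpha = 0.1                                      # exponential-smoothing factor

    for i in range(n_spectra):
        spectrum = rng.poisson(background_rate).astype(float)
        if 60 <= i < 80:                             # a source passes by: extra line near bin 110
            spectrum[108:113] += rng.poisson(60.0, size=5)

        if bg_estimate is None:
            bg_estimate = spectrum.copy()
        # Relative deviation from the background estimate, in units of its Poisson sigma.
        waterfall[i] = (spectrum - bg_estimate) / np.sqrt(np.maximum(bg_estimate, 1.0))
        # Update the background only with spectra that look background-like.
        if waterfall[i].max() < 5.0:
            bg_estimate = (1 - alpha) * bg_estimate + alpha * spectrum

    print("largest positive deviation (sigma):", float(waterfall.max()))
    print("row of largest deviation:", int(waterfall.max(axis=1).argmax()))
    ```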

  11. An efficient method for hybrid density functional calculation with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional requires significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples and show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  12. Feedback control for unsteady flow and its application to the stochastic Burgers equation

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon; Temam, Roger; Moin, Parviz; Kim, John

    1993-01-01

    The study applies mathematical methods of control theory to the problem of control of fluid flow, with the long-range objective of developing effective methods for the control of turbulent flows. Model problems are employed, through the formalism and language of control theory, to show how to cast the problem of controlling turbulence as a problem in optimal control theory. Methods of calculus of variations, through the adjoint state and gradient algorithms, are used to present a suboptimal control and feedback procedure for stationary and time-dependent problems. Two types of controls are investigated: distributed and boundary controls. Several cases of both controls are numerically simulated to investigate the performance of the control algorithm. Most cases considered show significant reductions of the costs to be minimized. The dependence of the control algorithm on the time-discretization method is discussed.

  13. Multi-level Discourse Analysis in a Physics Teaching Methods Course from the Psychological Perspective of Activity Theory

    NASA Astrophysics Data System (ADS)

    Vieira, Rodrigo Drumond; Kelly, Gregory J.

    2014-11-01

    In this paper, we present and apply a multi-level method for discourse analysis in science classrooms. This method is based on the structure of human activity (activity, actions, and operations) and was applied to the study of a pre-service physics teaching methods course. We argue that such an approach, based on a cultural-psychological perspective, affords opportunities for analysts to perform a theoretically grounded, detailed analysis of discourse events. Along with the presentation of the analysis, we show and discuss how the articulation of the different levels offers interpretative criteria for analyzing instructional conversations. We synthesize the results into a model of a teacher's practice and discuss the implications and possibilities of this approach for the field of discourse analysis in science classrooms. Finally, we reflect on how the development of teachers' understanding of their activity structures can contribute to forms of progressive discourse in science education.

  14. A quantum retrograde canon: complete population inversion in n²-state systems

    NASA Astrophysics Data System (ADS)

    Padan, Alon; Suchowski, Haim

    2018-04-01

    We present a novel approach for analytically reducing a family of time-dependent multi-state quantum control problems to two-state systems. The presented method translates between SU(2)×SU(2)-related n²-state systems and two-state systems, such that the former undergo complete population inversion (CPI) if and only if the latter reach specific states. For even n, the method translates any two-state CPI scheme into a family of CPI schemes in n²-state systems. In particular, facilitating CPI in a four-state system via real, time-dependent nearest-neighbor couplings is reduced to facilitating CPI in a two-level system. Furthermore, we show that the method can be used for operator control, and we provide conditions for producing several universal gates for quantum computation as an example. In addition, we indicate a basis for utilizing the method in optimal control problems.
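
    The two-state building block can be checked with a minimal numerical example: a resonant π-pulse drives complete population inversion in a two-level system. The SU(2)×SU(2) reduction that maps n²-state CPI onto such two-state schemes is the subject of the paper and is not reproduced here; the parameter values below are arbitrary.

```python
# Minimal two-level complete-population-inversion (CPI) check: a resonant
# pi-pulse drives |0> -> |1>. Only the two-state part is sketched here.
import numpy as np
from scipy.linalg import expm

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
omega = 1.0                      # Rabi frequency (arbitrary units)
T = np.pi / omega                # pi-pulse duration => complete inversion
H = 0.5 * omega * sigma_x        # resonant drive in the rotating frame

U = expm(-1j * H * T)            # exact propagator for a constant Hamiltonian
psi = U @ np.array([1.0, 0.0], dtype=complex)   # start in |0>

print("population of |1>:", abs(psi[1]) ** 2)   # ~1.0 => CPI achieved
```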

  15. Automatic Centerline Extraction of Covered Roads by Surrounding Objects from High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

    This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. To achieve precise road extraction, the method comprises three stages: classification of the images with a maximum likelihood algorithm to categorize them into the classes of interest; modification of the classified images with connected-component and morphological operators to extract the pixels of desired objects while removing undesirable pixels of each class; and, finally, line extraction based on the RANSAC algorithm. To evaluate the performance of the proposed method, the generated results are compared with a ground-truth road map as a reference. Evaluation on representative test images shows completeness values ranging between 77% and 93%.
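
    A minimal sketch of the final stage only, assuming the classification and morphological steps have already produced a binary road mask: RANSAC is used to fit a line to the candidate road pixels. The iteration count and inlier tolerance below are illustrative.

```python
# Illustrative sketch of RANSAC line fitting on the (row, col) coordinates of
# candidate road pixels from a binary mask produced by the earlier stages.
import numpy as np

def ransac_line(points, n_iter=500, tol=2.0, rng=np.random.default_rng(0)):
    """Fit a 2D line to `points` (N x 2); return (point, direction, inlier mask)."""
    best_p = best_d = best_inliers = None
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # perpendicular distance of every point to the candidate line
        dist = np.abs((points[:, 0] - p1[0]) * d[1] - (points[:, 1] - p1[1]) * d[0])
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_p, best_d, best_inliers = p1, d, inliers
    return best_p, best_d, best_inliers

# usage: points = np.argwhere(mask); extract one centerline segment,
# remove its inliers, and repeat for the remaining road pixels.
```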

  16. Reduction of furnace temperature in ultra long carbon nanotube growth by plasmonic excitation of electron Fermi gas of catalytic nanocluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saeidi, Mohammadreza, E-mail: Saeidi.mr@gmail.com, E-mail: m.saeidi@shahed.ac.ir

    2016-06-15

    In this paper, a novel physical method is presented to reduce the furnace temperature and prevent loss of thermal energy in the growth of ultra long carbon nanotubes (CNTs) by catalytic chemical vapor deposition. This method is based on the plasmonic excitation of the electron Fermi gas of the catalytic nanocluster sitting at the tip end of the CNT by ultraviolet (UV) irradiation. The physical concepts of the method are explained in detail. The results of applying the presented method to an appropriate tip-growth mechanism of ultra long CNTs show that, in the presence of plasmonic excitation, the growth rate of the CNT is enhanced. The demonstration of temperature reduction and a simultaneous increase in CNT length by UV irradiation at the proper frequency is the most important and practical result of the paper. All results are interpreted and discussed.

  17. Solution of an optimal control lifting body entry problem by an improved method of perturbation functions

    NASA Technical Reports Server (NTRS)

    Garcia, F., Jr.

    1975-01-01

    This paper presents a solution to a complex three-degree-of-freedom lifting reentry problem, using the calculus of variations to minimize the integral of the sum of the aerodynamic loads and the heat-rate input to the vehicle. The entry problem considered does not have state and/or control constraints along the trajectory. The calculus-of-variations method applied to this problem gives rise to a set of necessary conditions that are used to formulate a two-point boundary value (TPBV) problem. This TPBV problem is then solved numerically by an improved method of perturbation functions (IMPF) using several starting co-state vectors. These vectors were chosen with successively larger norms to show how the envelope of convergence is significantly enlarged by this method, and cases are presented to illustrate this.
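
    The IMPF itself is not reproduced here, but the structure of the resulting TPBV problem can be illustrated with a basic single-shooting sketch on a toy scalar optimal-control problem (minimize the integral of x² + u² subject to x' = -x + u, x(0) = 1, free final state so λ(T) = 0); the unknown initial co-state is found by root finding. All names and parameters are illustrative.

```python
# Not the improved method of perturbation functions: just a minimal single-
# shooting sketch of the kind of TPBV problem that arises from the necessary
# conditions, on a toy scalar optimal-control problem.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

T = 2.0

def dynamics(t, y):
    x, lam = y
    u = -0.5 * lam                   # stationarity condition dH/du = 0
    return [-x + u, -2.0 * x + lam]  # state and co-state equations

def terminal_costate(lam0):
    """Residual of the terminal boundary condition lambda(T) = 0."""
    sol = solve_ivp(dynamics, (0.0, T), [1.0, lam0], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]

# shoot on the unknown initial co-state lambda(0)
lam0 = brentq(terminal_costate, -10.0, 10.0)
print("initial co-state:", lam0)
```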

  18. Light-Cone Effect of Radiation Fields in Cosmological Radiative Transfer Simulations

    NASA Astrophysics Data System (ADS)

    Ahn, Kyungjin

    2015-02-01

    We present a novel method to implement time-delayed propagation of radiation fields in cosmological radiative transfer simulations. Time-delayed propagation of radiation fields requires the construction of retarded-time fields by tracking the location and lifetime of radiation sources along the corresponding light-cones. Cosmological radiative transfer simulations have, until now, ignored this "light-cone effect" or implemented ray-tracing methods that are computationally demanding. We show that the radiative transfer calculation of the time-delayed fields can be achieved easily in numerical simulations when periodic boundary conditions are used, by calculating the time-discretized retarded-time Green's function with the Fast Fourier Transform (FFT) method and convolving it with the source distribution. We also present a direct application of this method to the long-range radiation field of Lyman-Werner band photons, which is important in the high-redshift astrophysics of the first stars.
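
    A minimal sketch of the FFT-based idea under periodic boundary conditions: the field at the present time is accumulated by convolving each past source slice with a shell whose radius equals the corresponding light-travel distance. The 1/(4πr²) shell weighting, grid size and units below are illustrative simplifications, not the paper's Green's function.

```python
# Sketch only: periodic FFT convolution of time-sliced sources with retarded
# shells. Grid size, signal speed and source fields are illustrative.
import numpy as np

N, L, c = 64, 100.0, 1.0             # grid cells, box size, signal speed (arbitrary units)
dx = L / N

def shell_kernel(r_in, r_out):
    """Periodic kernel selecting distances in [r_in, r_out) with 1/(4 pi r^2) falloff."""
    coord = np.fft.fftfreq(N) * N * dx        # periodic (minimum-image) coordinates
    X, Y, Z = np.meshgrid(coord, coord, coord, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2)
    return np.where((r >= r_in) & (r < r_out),
                    1.0 / (4 * np.pi * np.maximum(r, dx) ** 2), 0.0)

def retarded_field(sources, dt):
    """sources[i] is the emissivity field at lookback time i*dt; returns the field now."""
    field = np.zeros((N, N, N))
    for i, S in enumerate(sources):
        K = shell_kernel(c * i * dt, c * (i + 1) * dt)
        # periodic convolution via FFT: contribution from sources whose
        # light-travel time to the observer cell is ~ i*dt
        field += np.real(np.fft.ifftn(np.fft.fftn(S) * np.fft.fftn(K))) * dx**3
    return field
```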

  19. Design and control of the phase current of a brushless dc motor to eliminate cogging torque

    NASA Astrophysics Data System (ADS)

    Jang, G. H.; Lee, C. J.

    2006-04-01

    This paper presents a design and control method for the phase current to reduce the torque ripple of a brushless dc (BLDC) motor by eliminating cogging torque. The cogging torque is the main source of torque ripple, and consequently of speed error, and it is also the excitation source for the vibration and noise of a motor. This research proposes a modified current waveform composed of main and auxiliary currents. The former is the conventional current that generates the commutating torque. The latter generates a torque with the same magnitude and opposite sign as the corresponding cogging torque at a given position, in order to eliminate the cogging torque. A time-stepping finite element simulation that accounts for the pulse-width-modulation switching method has been performed to verify the effectiveness of the proposed method; it shows that the method reduces the torque ripple by 36%. A digital-signal-processor-based controller has also been developed to implement the proposed method, and it reduces the speed ripple significantly.
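
    The compensation principle can be sketched in a few lines (the FEM-based design of the current waveform is not reproduced): the auxiliary current at each rotor position is chosen so that the torque it produces cancels the tabulated cogging torque. The torque constant and cogging profile below are placeholders, not values from the paper.

```python
# Sketch of the anti-cogging compensation idea only: auxiliary current cancels
# the cogging torque at each rotor position. Values are illustrative placeholders.
import numpy as np

theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)   # rotor position [rad]
k_t = 0.05                                                  # torque constant [N*m/A], assumed
t_cog = 0.01 * np.sin(12 * theta)                           # assumed cogging profile (12 periods/rev)

def phase_current(i_main, pos_idx):
    """Commanded current = main (torque-producing) + auxiliary (anti-cogging)."""
    i_aux = -t_cog[pos_idx] / k_t          # torque of i_aux cancels the cogging torque
    return i_main + i_aux

# e.g. at rotor position index 100 with 2 A of commutating current:
print(phase_current(2.0, 100))
```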

  20. Segmentation of Vasculature from Fluorescently Labeled Endothelial Cells in Multi-Photon Microscopy Images.

    PubMed

    Bates, Russell; Irving, Benjamin; Markelc, Bostjan; Kaeppler, Jakob; Brown, Graham; Muschel, Ruth J; Brady, Sir Michael; Grau, Vicente; Schnabel, Julia A

    2017-08-09

    Vasculature is known to be of key biological significance, especially in the study of tumors. As such, considerable effort has been focused on the automated segmentation of vasculature in medical and pre-clinical images. The majority of vascular segmentation methods focus on blood-pool labeling; however, particularly in the study of tumors, it is of particular interest to be able to visualize both perfused and non-perfused vasculature. Imaging vasculature by highlighting the endothelium provides a way to separate the morphology of the vasculature from the potentially confounding factor of perfusion. Here we present a method for the segmentation of tumor vasculature in 3D fluorescence microscopy images using signals from the endothelial and surrounding cells. We show that our method can provide complete and semantically meaningful segmentations of complex vasculature using a supervoxel-Markov Random Field approach. We show that, in terms of extracting meaningful segmentations of the vasculature, our method out-performs both a state-of-the-art method specific to these data and more classical vasculature segmentation methods.
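
    A toy sketch of the supervoxel-MRF idea, not the authors' pipeline: each supervoxel receives a unary cost from its mean endothelial signal and a Potts smoothness penalty over an adjacency graph, and the energy is minimized here with simple iterated conditional modes. The features, graph and weights are illustrative assumptions.

```python
# Toy supervoxel-MRF sketch: unary costs from mean intensities plus a Potts
# pairwise term over an adjacency graph, minimized by iterated conditional modes.
import numpy as np

def segment_supervoxels(mean_intensity, adjacency, beta=0.5, n_iter=10):
    """Label each supervoxel 0 (background) or 1 (vessel)."""
    mean_intensity = np.asarray(mean_intensity, dtype=float)
    # unary costs per label: bright supervoxels are cheap to label as vessel
    unary = np.stack([mean_intensity, 1.0 - mean_intensity], axis=1)  # [cost_bg, cost_vessel]
    labels = np.argmin(unary, axis=1)
    for _ in range(n_iter):
        for i in range(len(labels)):
            costs = unary[i].copy()
            for j in adjacency[i]:          # Potts penalty for disagreeing neighbours
                costs += beta * (np.array([0, 1]) != labels[j])
            labels[i] = int(np.argmin(costs))
    return labels

# usage with 4 supervoxels on a chain graph:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(segment_supervoxels([0.1, 0.8, 0.7, 0.2], adj))
```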
