Sample records for curve analysis method

  1. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers

    PubMed Central

    Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat

    2008-01-01

    Background Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
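
    As context for the extensions above, the basic net-benefit computation that produces a decision curve is easy to reproduce directly from predicted probabilities and observed outcomes. The following Python sketch is illustrative only; the function and variable names and the simulated data are assumptions, not taken from the paper's software:

      import numpy as np

      def decision_curve(y_true, p_hat, thresholds):
          """Net benefit of a model, of treat-all, and of treat-none
          at each threshold probability."""
          n = len(y_true)
          prevalence = y_true.mean()
          rows = []
          for pt in thresholds:
              treat = p_hat >= pt                      # patients classified positive
              tp = np.sum(treat & (y_true == 1)) / n   # true positives per patient
              fp = np.sum(treat & (y_true == 0)) / n   # false positives per patient
              w = pt / (1 - pt)                        # harm-to-benefit weight
              nb_model = tp - fp * w
              nb_all = prevalence - (1 - prevalence) * w
              rows.append((pt, nb_model, nb_all, 0.0)) # treat-none has net benefit 0
          return np.array(rows)

      # Example with simulated outcomes and predictions
      rng = np.random.default_rng(0)
      y = rng.integers(0, 2, size=500)
      p = np.clip(y * 0.3 + rng.uniform(0, 0.7, size=500), 0.01, 0.99)
      print(decision_curve(y, p, thresholds=np.linspace(0.05, 0.5, 10))[:3])

    A model is useful at a given threshold when its net benefit exceeds both reference strategies (treat all and treat none).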

  2. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers.

    PubMed

    Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat

    2008-11-26

    Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.

  3. Evaluation of qPCR curve analysis methods for reliable biomarker discovery: bias, resolution, precision, and implications.

    PubMed

    Ruijter, Jan M; Pfaffl, Michael W; Zhao, Sheng; Spiess, Andrej N; Boggy, Gregory; Blom, Jochen; Rutledge, Robert G; Sisti, Davide; Lievens, Antoon; De Preter, Katleen; Derveaux, Stefaan; Hellemans, Jan; Vandesompele, Jo

    2013-01-01

    RNA transcripts such as mRNA or microRNA are frequently used as biomarkers to determine disease state or response to therapy. Reverse transcription (RT) in combination with quantitative PCR (qPCR) has become the method of choice to quantify small amounts of such RNA molecules. In parallel with the democratization of RT-qPCR and its increasing use in biomedical research or biomarker discovery, we witnessed a growth in the number of gene expression data analysis methods. Most of these methods are based on the principle that the position of the amplification curve with respect to the cycle-axis is a measure for the initial target quantity: the later the curve, the lower the target quantity. However, most methods differ in the mathematical algorithms used to determine this position, as well as in the way the efficiency of the PCR reaction (the fold increase of product per cycle) is determined and applied in the calculations. Moreover, there is dispute about whether the PCR efficiency is constant or continuously decreasing. Together, this has led to the development of different methods to analyze amplification curves. In published comparisons of these methods, available algorithms were typically applied in a restricted or outdated way, which does not do them justice. Therefore, we aimed to develop a framework for robust and unbiased assessment of curve analysis performance whereby various publicly available curve analysis methods were thoroughly compared using a previously published large clinical data set (Vermeulen et al., 2009) [11]. The original developers of these methods applied their algorithms and are co-authors of this study. We assessed the curve analysis methods' impact on transcriptional biomarker identification in terms of expression level, statistical significance, and patient-classification accuracy. The concentration series per gene, together with data sets from unpublished technical performance experiments, were analyzed in order to assess the algorithms' precision, bias, and resolution. While large differences exist between methods when considering the technical performance experiments, most methods perform relatively well on the biomarker data. The data and the analysis results per method are made available to serve as benchmark for further development and evaluation of qPCR curve analysis methods (http://qPCRDataMethods.hfrc.nl). Copyright © 2012 Elsevier Inc. All rights reserved.
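
    The principle described above (the later the curve, the lower the starting quantity, scaled by the per-cycle efficiency) can be illustrated with a minimal sketch. This is not any of the compared algorithms; the threshold value, efficiency and simulated curve are assumptions for illustration:

      import numpy as np

      def cq_by_threshold(fluorescence, threshold):
          """Fractional cycle at which a baseline-corrected amplification curve
          first crosses a fixed fluorescence threshold (linear interpolation
          between the two bracketing cycles)."""
          cycles = np.arange(1, len(fluorescence) + 1)
          above = np.where(fluorescence >= threshold)[0]
          if len(above) == 0:
              return np.nan
          i = above[0]
          if i == 0:
              return float(cycles[0])
          f0, f1 = fluorescence[i - 1], fluorescence[i]
          return cycles[i - 1] + (threshold - f0) / (f1 - f0)

      def initial_quantity(threshold, cq, efficiency):
          """Back-calculate the starting quantity N0 from
          N_threshold = N0 * E**Cq, i.e. N0 = N_threshold / E**Cq."""
          return threshold / efficiency ** cq

      # Simulated curve: exponential growth with efficiency 1.9, then plateau
      E, n0 = 1.9, 1e-6
      f = np.minimum(n0 * E ** np.arange(1, 41), 1.0)
      cq = cq_by_threshold(f, threshold=0.01)
      print(cq, initial_quantity(0.01, cq, E))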

  4. Evaluation of quantification methods for real-time PCR minor groove binding hybridization probe assays.

    PubMed

    Durtschi, Jacob D; Stevenson, Jeffery; Hymas, Weston; Voelkerding, Karl V

    2007-02-01

    Real-time PCR data analysis for quantification has been the subject of many studies aimed at the identification of new and improved quantification methods. Several analysis methods have been proposed as superior alternatives to the common variations of the threshold crossing method. Notably, sigmoidal and exponential curve fit methods have been proposed. However, these studies have primarily analyzed real-time PCR with intercalating dyes such as SYBR Green. Clinical real-time PCR assays, in contrast, often employ fluorescent probes whose real-time amplification fluorescence curves differ from those of intercalating dyes. In the current study, we compared four analysis methods related to recent literature: two versions of the threshold crossing method, a second derivative maximum method, and a sigmoidal curve fit method. These methods were applied to a clinically relevant real-time human herpes virus type 6 (HHV6) PCR assay that used a minor groove binding (MGB) Eclipse hybridization probe as well as an Epstein-Barr virus (EBV) PCR assay that used an MGB Pleiades hybridization probe. We found that the crossing threshold method yielded more precise results when analyzing the HHV6 assay, which was characterized by lower signal/noise and less developed amplification curve plateaus. In contrast, the EBV assay, characterized by greater signal/noise and amplification curves with plateau regions similar to those observed with intercalating dyes, gave results with statistically similar precision by all four analysis methods.
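
    Two of the compared approaches lend themselves to a compact illustration: fitting a sigmoid to the amplification curve and reading off the second-derivative maximum as the quantification cycle. The sketch below is a generic simplification (a four-parameter logistic with a numerically located curvature maximum), not the specific implementations evaluated in the study; the simulated probe curve is an assumption:

      import numpy as np
      from scipy.optimize import curve_fit

      def sigmoid(c, f0, fmax, c_half, k):
          """Four-parameter logistic model of an amplification curve."""
          return f0 + fmax / (1.0 + np.exp(-(c - c_half) / k))

      def second_derivative_maximum(cycles, fluor):
          """Fit a sigmoid and return the cycle of maximal curvature
          (second-derivative maximum), evaluated on a dense grid."""
          p0 = [fluor.min(), fluor.max() - fluor.min(), cycles.mean(), 1.0]
          popt, _ = curve_fit(sigmoid, cycles, fluor, p0=p0, maxfev=10000)
          grid = np.linspace(cycles.min(), cycles.max(), 2000)
          fitted = sigmoid(grid, *popt)
          d2 = np.gradient(np.gradient(fitted, grid), grid)
          return grid[np.argmax(d2)], popt

      # Simulated noisy probe curve
      cyc = np.arange(1, 41, dtype=float)
      rng = np.random.default_rng(1)
      obs = sigmoid(cyc, 0.05, 1.0, 24.0, 1.6) + rng.normal(0, 0.01, cyc.size)
      cq_sdm, params = second_derivative_maximum(cyc, obs)
      print(round(cq_sdm, 2))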

  5. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given threshold probability or over a range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model as there is no readily available summary measure for evaluating the predictive performance. The key deterrent to using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need of additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches, the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
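
    The core idea, integrating the net-benefit curve against a distribution of threshold probabilities instead of treating all thresholds as equally likely, can be sketched in a few lines. Here the weight function is a user-supplied, hypothetical density; the paper's method estimates this distribution from the data, which is not reproduced here:

      import numpy as np

      def _trapezoid(y, x):
          """Trapezoidal integration, kept explicit so the sketch does not
          depend on a particular NumPy version's integration helper."""
          y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
          return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

      def weighted_area_under_nb(thresholds, net_benefit, weight_density):
          """Weighted area under a net-benefit curve: net benefit integrated
          over the threshold range, weighted by a density over thresholds."""
          w = np.asarray(weight_density, dtype=float)
          w = w / _trapezoid(w, thresholds)            # normalise the density
          return _trapezoid(np.asarray(net_benefit) * w, thresholds)

      # Compare two toy net-benefit curves over thresholds 0.05-0.50, with a
      # hypothetical weight density concentrated near 0.15
      pt = np.linspace(0.05, 0.50, 50)
      density = pt ** 2 * (1 - pt) ** 8
      nb_a = 0.20 - 0.30 * pt
      nb_b = 0.18 - 0.22 * pt
      print(weighted_area_under_nb(pt, nb_a, density),
            weighted_area_under_nb(pt, nb_b, density))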

  6. Thermoluminescence glow curve analysis and CGCD method for erbium-doped CaZrO3 phosphor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiwari, Ratnesh, E-mail: 31rati@gmail.com; Chopra, Seema

    2016-05-06

    The manuscript reports the synthesis and thermoluminescence study of Er3+ (1 mol%)-doped CaZrO3 phosphor. The phosphors were prepared by a modified solid-state reaction method. The powder sample was characterized by thermoluminescence (TL) glow curve analysis; the TL glow curve was optimized at a concentration of 1 mol% for the UV-irradiated sample. The kinetic parameters were calculated by the computerized glow curve deconvolution (CGCD) technique. The trapping parameters give information on the dosimetry loss in the prepared phosphor and its usability in environmental and personal monitoring. CGCD is an advanced tool for the analysis of complicated TL glow curves.

  7. Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiao, Xiangmin; Einstein, Daniel R.; Dyedov, Volodymyr

    2010-03-24

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques including eigenvalue analysis, weighted least squares approximations, and numerical minimization, resulting in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods.

  8. LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  9. Methods of Technological Forecasting,

    DTIC Science & Technology

    1977-05-01

    Trend Extrapolation Progress Curve Analogy Trend Correlation Substitution Analysis or Substitution Growth Curves Envelope Curve Advances in the State of the Art Technological Mapping Contextual Mapping Matrix Input-Output Analysis Mathematical Models Simulation Models Dynamic Modelling. CHAPTER IV ... Generation Interaction between Needs and Possibilities Map of the Technological Future - Cross-Impact Matrix Discovery Matrix Morphological Analysis

  10. Integrated analysis on static/dynamic aeroelasticity of curved panels based on a modified local piston theory

    NASA Astrophysics Data System (ADS)

    Yang, Zhichun; Zhou, Jian; Gu, Yingsong

    2014-10-01

    A flow field modified local piston theory, which is applied to the integrated analysis of static/dynamic aeroelastic behaviors of curved panels, is proposed in this paper. The local flow field parameters used in the modification are obtained by a CFD technique, which has the advantage of simulating the steady flow field accurately. This flow field modified local piston theory for aerodynamic loading is applied to the analysis of static aeroelastic deformation and flutter stabilities of curved panels in hypersonic flow. In addition, comparisons are made between results obtained using the present method and the curvature modified method. The results show that when the curvature of the curved panel is relatively small, the static aeroelastic deformations and flutter stability boundaries obtained by these two methods differ little, while for curved panels with larger curvatures, the static aeroelastic deformation obtained by the present method is larger and the flutter stability boundary is smaller than those obtained by the curvature modified method, and the discrepancy increases as panel curvature increases. Therefore, from the standpoint of hypersonic flight vehicle safety, the existing curvature modified method is non-conservative compared to the proposed flow field modified method, and the proposed flow field modified local piston theory for curved panels enlarges the application range of piston theory.

  11. Accuracy of AFM force distance curves via direct solution of the Euler-Bernoulli equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eppell, Steven J., E-mail: steven.eppell@case.edu; Liu, Yehe; Zypman, Fredy R.

    2016-03-15

    In an effort to improve the accuracy of force-separation curves obtained from atomic force microscope data, we compare force-separation curves computed using two methods to solve the Euler-Bernoulli equation. A recently introduced method using a direct sequential forward solution, Causal Time-Domain Analysis, is compared against a previously introduced Tikhonov Regularization method. Using the direct solution as a benchmark, it is found that the regularization technique is unable to reproduce accurate curve shapes. Using L-curve analysis and adjusting the regularization parameter, λ, to match either the depth or the full width at half maximum of the force curves, the two techniques are contrasted. Matched depths result in full width at half maxima that are off by an average of 27% and matched full width at half maxima produce depths that are off by an average of 109%.

  12. Gene Scanning of an Internalin B Gene Fragment Using High-Resolution Melting Curve Analysis as a Tool for Rapid Typing of Listeria monocytogenes

    PubMed Central

    Pietzka, Ariane T.; Stöger, Anna; Huhulescu, Steliana; Allerberger, Franz; Ruppitsch, Werner

    2011-01-01

    The ability to accurately track Listeria monocytogenes strains involved in outbreaks is essential for control and prevention of listeriosis. Because current typing techniques are time-consuming, cost-intensive, technically demanding, and difficult to standardize, we developed a rapid and cost-effective method for typing of L. monocytogenes. In all, 172 clinical L. monocytogenes isolates and 20 isolates from culture collections were typed by high-resolution melting (HRM) curve analysis of a specific locus of the internalin B gene (inlB). All obtained HRM curve profiles were verified by sequence analysis. The 192 tested L. monocytogenes isolates yielded 15 specific HRM curve profiles. Sequence analysis revealed that these 15 HRM curve profiles correspond to 18 distinct inlB sequence types. The HRM curve profiles obtained correlated with the five phylogenetic groups I.1, I.2, II.1, II.2, and III. Thus, HRM curve analysis constitutes an inexpensive assay and represents an improvement in typing relative to classical serotyping or multiplex PCR typing protocols. This method provides a rapid and powerful screening tool for simultaneous preliminary typing of up to 384 samples in approximately 2 hours. PMID:21227395

  13. Clarifications regarding the use of model-fitting methods of kinetic analysis for determining the activation energy from a single non-isothermal curve.

    PubMed

    Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M

    2013-02-05

    This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Savata are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy. Therefore, it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Here, it is shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions. Thus, the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.

  14. A new methodology for free wake analysis using curved vortex elements

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Teske, Milton E.; Quackenbush, Todd R.

    1987-01-01

    A method using curved vortex elements was developed for helicopter rotor free wake calculations. The Basic Curve Vortex Element (BCVE) is derived from the approximate Biot-Savart integration for a parabolic arc filament. When used in conjunction with a scheme to fit the elements along a vortex filament contour, this method has a significant advantage in overall accuracy and efficiency when compared to the traditional straight-line element approach. A theoretical and numerical analysis shows that free wake flows involving close interactions between filaments should utilize curved vortex elements in order to guarantee a consistent level of accuracy. The curved element method was implemented into a forward flight free wake analysis, featuring an adaptive far wake model that utilizes free wake information to extend the vortex filaments beyond the free wake regions. The curved vortex element free wake, coupled with this far wake model, exhibited rapid convergence, even in regions where the free wake and far wake turns are interlaced. Sample calculations are presented for tip vortex motion at various advance ratios for single and multiple blade rotors. Cross-flow plots reveal that the overall downstream wake flow resembles a trailing vortex pair. A preliminary assessment shows that the rotor downwash field is insensitive to element size, even for relatively large curved elements.

  15. Decision curve analysis: a novel method for evaluating prediction models.

    PubMed

    Vickers, Andrew J; Elkin, Elena B

    2006-01-01

    Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. The authors sought a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. The authors describe decision curve analysis, a simple, novel method of evaluating predictive models. They start by assuming that the threshold probability of a disease or event at which a patient would opt for treatment is informative of how the patient weighs the relative harms of a false-positive and a false-negative prediction. This theoretical relationship is then used to derive the net benefit of the model across different threshold probabilities. Plotting net benefit against threshold probability yields the "decision curve." The authors apply the method to models for the prediction of seminal vesicle invasion in prostate cancer patients. Decision curve analysis identified the range of threshold probabilities in which a model was of value, the magnitude of benefit, and which of several models was optimal. Decision curve analysis is a suitable method for evaluating alternative diagnostic and prognostic strategies that has advantages over other commonly used measures and techniques.

  16. The behavioral economics of drug self-administration: A review and new analytical approach for within-session procedures

    PubMed Central

    Bentzley, Brandon S.; Fender, Kimberly M.; Aston-Jones, Gary

    2012-01-01

    Rationale Behavioral-economic demand curve analysis offers several useful measures of drug self-administration. Although generation of demand curves previously required multiple days, recent within-session procedures allow curve construction from a single 110-min cocaine self-administration session, making behavioral-economic analyses available to a broad range of self-administration experiments. However, a mathematical approach of curve fitting has not been reported for the within-session threshold procedure. Objectives We review demand curve analysis in drug self-administration experiments and provide a quantitative method for fitting curves to single-session data that incorporates relative stability of brain drug concentration. Methods Sprague-Dawley rats were trained to self-administer cocaine, and then tested with the threshold procedure in which the cocaine dose was sequentially decreased on a fixed ratio-1 schedule. Price points (responses/mg cocaine) outside of relatively stable brain cocaine concentrations were removed before curves were fit. Curve-fit accuracy was determined by the degree of correlation between graphical and calculated parameters for cocaine consumption at low price (Q0) and the price at which maximal responding occurred (Pmax). Results Removing price points that occurred at relatively unstable brain cocaine concentrations generated precise estimates of Q0 and resulted in Pmax values with significantly closer agreement with graphical Pmax than conventional methods. Conclusion The exponential demand equation can be fit to single-session data using the threshold procedure for cocaine self-administration. Removing data points that occur during relatively unstable brain cocaine concentrations resulted in more accurate estimates of demand curve slope than graphical methods, permitting a more comprehensive analysis of drug self-administration via a behavioral-economic framework. PMID:23086021
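
    For readers unfamiliar with the demand-curve machinery, a minimal sketch of fitting an exponential demand equation (the Hursh-Silberberg form, log10 Q = log10 Q0 + k*(exp(-alpha*Q0*C) - 1)) and locating Pmax numerically is shown below. The price points, simulated consumption and the fixed k are illustrative assumptions, not the study's data or exact procedure:

      import numpy as np
      from scipy.optimize import curve_fit

      def log_demand(price, q0, alpha, k=2.0):
          """Exponential demand equation (Hursh-Silberberg form):
          log10(Q) = log10(Q0) + k * (exp(-alpha * Q0 * price) - 1)."""
          return np.log10(q0) + k * (np.exp(-alpha * q0 * price) - 1.0)

      def fit_demand(price, consumption, k=2.0):
          """Fit Q0 and alpha, then locate Pmax (the price of peak
          expenditure) on a dense price grid."""
          popt, _ = curve_fit(lambda p, q0, a: log_demand(p, q0, a, k),
                              price, np.log10(consumption),
                              p0=[float(consumption.max()), 1e-3],
                              bounds=(1e-9, np.inf))
          q0, alpha = popt
          grid = np.linspace(price.min(), price.max(), 5000)
          expenditure = grid * 10 ** log_demand(grid, q0, alpha, k)
          return q0, alpha, grid[np.argmax(expenditure)]

      # Hypothetical price points (responses per mg) and consumption (mg)
      price = np.array([10, 30, 60, 120, 240, 480, 960], dtype=float)
      cons = 10 ** log_demand(price, q0=1.2, alpha=0.0008) * np.exp(
          np.random.default_rng(2).normal(0, 0.05, price.size))
      print(fit_demand(price, cons))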

  17. Dried blood spot analysis of creatinine with LC-MS/MS in addition to immunosuppressants analysis.

    PubMed

    Koster, Remco A; Greijdanus, Ben; Alffenaar, Jan-Willem C; Touw, Daan J

    2015-02-01

    In order to monitor creatinine levels or to adjust the dosage of renally excreted or nephrotoxic drugs, the analysis of creatinine in dried blood spots (DBS) could be a useful addition to DBS analysis. We developed a LC-MS/MS method for the analysis of creatinine in the same DBS extract that was used for the analysis of tacrolimus, sirolimus, everolimus, and cyclosporine A in transplant patients with the use of Whatman FTA DMPK-C cards. The method was validated using three different strategies: a seven-point calibration curve using the intercept of the calibration to correct for the natural presence of creatinine in reference samples, a one-point calibration curve at an extremely high concentration in order to diminish the contribution of the natural presence of creatinine, and the use of creatinine-[(2)H3] with an eight-point calibration curve. The validated range for creatinine was 120 to 480 μmol/L (seven-point calibration curve), 116 to 7000 μmol/L (1-point calibration curve), and 1.00 to 400.0 μmol/L for creatinine-[(2)H3] (eight-point calibration curve). The precision and accuracy results for all three validations showed a maximum CV of 14.0% and a maximum bias of -5.9%. Creatinine in DBS was found stable at ambient temperature and 32 °C for 1 week and at -20 °C for 29 weeks. Good correlations were observed between patient DBS samples and routine enzymatic plasma analysis and showed the capability of the DBS method to be used as an alternative for creatinine plasma measurement.
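
    The first validation strategy, using the calibration intercept to account for creatinine naturally present in the blank calibration matrix, amounts to a simple linear-regression correction. The sketch below is a generic illustration with hypothetical response values and concentrations, not the authors' validated procedure:

      import numpy as np

      def calibrate_with_endogenous_correction(added_conc, response):
          """Linear calibration prepared in a blank matrix that already
          contains the analyte: response = slope * (added + endogenous).
          The regression intercept equals slope * endogenous, so the natural
          background can be separated from the spiked amount."""
          slope, intercept = np.polyfit(added_conc, response, 1)
          endogenous = intercept / slope      # natural creatinine in the blank
          return slope, endogenous

      def quantify(response, slope):
          """Total concentration in an unknown sample; no background
          subtraction is applied because the sample's own creatinine is
          the quantity of interest."""
          return response / slope

      # Hypothetical 7-point curve: added creatinine (umol/L) vs. response
      added = np.array([0, 80, 160, 240, 320, 400, 480], dtype=float)
      resp = 0.004 * (added + 95) + np.random.default_rng(3).normal(0, 0.01, added.size)
      slope, endo = calibrate_with_endogenous_correction(added, resp)
      print(round(endo, 1), round(quantify(1.2, slope), 1))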

  18. Decision curve analysis revisited: overall net benefit, relationships to ROC curve analysis, and application to case-control studies

    PubMed Central

    2011-01-01

    Background Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and into a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. Methods We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. Results We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome, and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. Conclusions We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application. PMID:21696604

  19. Classification of Fowl Adenovirus Serotypes by Use of High-Resolution Melting-Curve Analysis of the Hexon Gene Region

    PubMed Central

    Steer, Penelope A.; Kirkpatrick, Naomi C.; O'Rourke, Denise; Noormohammadi, Amir H.

    2009-01-01

    Identification of fowl adenovirus (FAdV) serotypes is of importance in epidemiological studies of disease outbreaks and the adoption of vaccination strategies. In this study, real-time PCR and subsequent high-resolution melting (HRM)-curve analysis of three regions of the hexon gene were developed and assessed for their potential in differentiating 12 FAdV reference serotypes. The results were compared to previously described PCR and restriction enzyme analyses of the hexon gene. Both HRM-curve analysis of a 191-bp region of the hexon gene and restriction enzyme analysis failed to distinguish a number of serotypes used in this study. In addition, PCR of the region spanning nucleotides (nt) 144 to 1040 failed to amplify FAdV-5 in sufficient quantities for further analysis. However, HRM-curve analysis of the region spanning nt 301 to 890 proved a sensitive and specific method of differentiating all 12 serotypes. All melt curves were highly reproducible, and replicates of each serotype were correctly genotyped with a mean confidence value of more than 99% using normalized HRM curves. Sequencing analysis revealed that each profile was related to a unique sequence, with some sequences sharing greater than 94% identity. Melting-curve profiles were found to be related mainly to GC composition and distribution throughout the amplicons, regardless of sequence identity. The results presented in this study show that the closed-tube method of PCR and HRM-curve analysis provides an accurate, rapid, and robust genotyping technique for the identification of FAdV serotypes and can be used as a model for developing genotyping techniques for other pathogens. PMID:19036935

  20. Heuristic approach to capillary pressures averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coca, B.P.

    1980-10-01

    Several methods are available to average capillary pressure curves. Among these are the J-curve and regression equations of the wetting-fluid saturation on porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method seems to be theoretically sound because its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.

  21. Piecewise-homotopy analysis method (P-HAM) for first order nonlinear ODE

    NASA Astrophysics Data System (ADS)

    Chin, F. Y.; Lem, K. H.; Chong, F. S.

    2013-09-01

    In the homotopy analysis method (HAM), the value of the auxiliary parameter h is determined from the valid region of the h-curve, in which the horizontal segment of the h-curve decides the valid h-region. Any h-value taken from the valid region, provided that the order of deformation is large enough, will in principle yield an approximation series that converges to the exact solution. However, it turns out that an h-value chosen within this valid region does not always promise a good approximation at finite order. This paper suggests an improved method called Piecewise-HAM (P-HAM). Instead of a single h-value, this method uses many h-values. Each h-value comes from an individual h-curve, and each h-curve is plotted by fixing the time t at a different value. Each h-value is claimed to produce a good approximation only in a neighborhood centered at the corresponding t on which the h-curve is based. These segments of good approximations are then joined to form the approximation curve. In this way, the convergence region is enhanced further. The P-HAM is illustrated and supported by examples.

  22. Learning Factors Transfer Analysis: Using Learning Curve Analysis to Automatically Generate Domain Models

    ERIC Educational Resources Information Center

    Pavlik, Philip I. Jr.; Cen, Hao; Koedinger, Kenneth R.

    2009-01-01

    This paper describes a novel method to create a quantitative model of an educational content domain of related practice item-types using learning curves. By using a pairwise test to search for the relationships between learning curves for these item-types, we show how the test results in a set of pairwise transfer relationships that can be…

  23. Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image

    NASA Astrophysics Data System (ADS)

    Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti

    2016-06-01

    An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. Characteristics used in object recognition can be color, shape, pattern, texture and spatial information that represent objects in the digital image. Methods recently developed for image feature extraction include characteristic curve analysis (simple curves) and chain-code feature search. This study develops an algorithm for the analysis and recognition of curve type as the basis for object recognition in images, proposing the addition of complex curve characteristics with at most four branches to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detection images, the algorithm was able to analyze and recognize complex curve shapes well.

  24. Influence analysis in quantitative trait loci detection.

    PubMed

    Dou, Xiaoling; Kuriki, Satoshi; Maeno, Akiteru; Takada, Toyoyuki; Shiroishi, Toshihiko

    2014-07-01

    This paper presents systematic methods for the detection of influential individuals that affect the log odds (LOD) score curve. We derive general formulas of influence functions for profile likelihoods and introduce them into two standard quantitative trait locus detection methods-the interval mapping method and single marker analysis. Besides influence analysis on specific LOD scores, we also develop influence analysis methods on the shape of the LOD score curves. A simulation-based method is proposed to assess the significance of the influence of the individuals. These methods are shown useful in the influence analysis of a real dataset of an experimental population from an F2 mouse cross. By receiver operating characteristic analysis, we confirm that the proposed methods show better performance than existing diagnostics. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  25. A curved surface micro-moiré method and its application in evaluating curved surface residual stress

    NASA Astrophysics Data System (ADS)

    Zhang, Hongye; Wu, Chenlong; Liu, Zhanwei; Xie, Huimin

    2014-09-01

    The moiré method is typically applied to the measurement of deformations of a flat surface while, for a curved surface, this method is rarely used other than for projection moiré or moiré interferometry. Here, a novel colour charge-coupled device (CCD) micro-moiré method has been developed, based on which a curved surface micro-moiré (CSMM) method is proposed with a colour CCD and optical microscope (OM). In the CSMM method, no additional reference grating is needed as a Bayer colour filter array (CFA) installed on the OM in front of the colour CCD image sensor performs this role. Micro-moiré fringes with high contrast are directly observed with the OM through the Bayer CFA under the special condition of observing a curved specimen grating. The principle of the CSMM method based on a colour CCD micro-moiré method and its application range and error analysis are all described in detail. In an experiment, the curved surface residual stress near a welded seam on a stainless steel tube was investigated using the CSMM method.

  26. Isogeometric analysis of free-form Timoshenko curved beams including the nonlinear effects of large deformations

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Farhad; Hashemian, Ali; Moetakef-Imani, Behnam; Hadidimoud, Saied

    2018-03-01

    In the present paper, the isogeometric analysis (IGA) of free-form planar curved beams is formulated based on the nonlinear Timoshenko beam theory to investigate the large deformation of beams with variable curvature. Based on the isoparametric concept, the shape functions of the field variables (displacement and rotation) in a finite element analysis are considered to be the same as the non-uniform rational basis spline (NURBS) basis functions defining the geometry. The validity of the presented formulation is tested in five case studies covering a wide range of engineering curved structures, from straight and constant-curvature to variable-curvature beams. The nonlinear deformation results obtained by the presented method are compared to well-established benchmark examples and also compared to the results of linear and nonlinear finite element analyses. As the nonlinear load-deflection behavior of Timoshenko beams is the main topic of this article, the results strongly show the applicability of the IGA method to the large deformation analysis of free-form curved beams. Finally, it is interesting to note that, until very recently, the large deformation analysis of free-form Timoshenko curved beams had not been considered in IGA by researchers.

  27. Determination of uronic acids in isolated hemicelluloses from kenaf using diffuse reflectance infrared fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method.

    PubMed

    Batsoulis, A N; Nacos, M K; Pappas, C S; Tarantilis, P A; Mavromoustakos, T; Polissiou, M G

    2004-02-01

    Hemicellulose samples were isolated from kenaf (Hibiscus cannabinus L.). Hemicellulosic fractions usually contain a variable percentage of uronic acids. The uronic acid content (expressed in polygalacturonic acid) of the isolated hemicelluloses was determined by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and the curve-fitting deconvolution method. A linear relationship between uronic acids content and the sum of the peak areas at 1745, 1715, and 1600 cm(-1) was established with a high correlation coefficient (0.98). The deconvolution analysis using the curve-fitting method allowed the elimination of spectral interferences from other cell wall components. The above method was compared with an established spectrophotometric method and was found equivalent for accuracy and repeatability (t-test, F-test). This method is applicable in analysis of natural or synthetic mixtures and/or crude substances. The proposed method is simple, rapid, and nondestructive for the samples.

  28. Estimating the SCS runoff curve number in forest catchments of Korea

    NASA Astrophysics Data System (ADS)

    Choi, Hyung Tae; Kim, Jaehoon; Lim, Hong-geun

    2016-04-01

    Estimating flood runoff discharge is very important in the design of many hydraulic structures in streams, rivers and lakes, such as dams, bridges, culverts, and so on. Many researchers have therefore tried to develop better methods for estimating flood runoff discharge. The SCS runoff curve number is an empirical parameter determined by empirical analysis of runoff from small catchments and hillslope plots monitored by the USDA. The method is an efficient way of determining the approximate amount of runoff from a rainfall event in a particular area, and is very widely used all around the world. However, there are considerable differences between Korea and the USA in topography, geology and land use. Therefore, the adaptability of the SCS runoff curve number needs to be examined in order to raise the accuracy of runoff prediction with the curve number method. The purpose of this study is to find the SCS runoff curve number based on the analysis of observed data from several experimental forest catchments monitored by the National Institute of Forest Science (NIFOS), as a pilot study to modify the SCS runoff curve number for forest lands in Korea. Rainfall and runoff records observed in the Gwangneung coniferous and broad-leaved forests and the Sinwol, Hwasoon, Gongju and Gyeongsan catchments were selected to analyze the variability of flood runoff coefficients during the last 5 years. This study shows that runoff curve numbers of the experimental forest catchments range from 55 to 65. The SCS runoff curve number method is a widely used method for estimating design discharge for small ungauged watersheds. Therefore, this study can be technically helpful for estimating the discharge for forest watersheds in Korea with more accuracy.
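
    The standard SCS curve number relation that such catchment-calibrated CN values feed into is compact enough to state in code. This is the textbook formulation (in millimetres), not a NIFOS-specific calibration, and the example storm depth is hypothetical:

      def scs_runoff_depth(rainfall_mm, curve_number, ia_ratio=0.2):
          """Direct runoff depth Q (mm) from event rainfall P (mm) using the
          SCS curve number method:
              S  = 25400 / CN - 254       (potential maximum retention, mm)
              Ia = ia_ratio * S           (initial abstraction)
              Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
          """
          s = 25400.0 / curve_number - 254.0
          ia = ia_ratio * s
          if rainfall_mm <= ia:
              return 0.0
          return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

      # A 100 mm storm on forest catchments spanning the CN range reported here
      for cn in (55, 60, 65):
          print(cn, round(scs_runoff_depth(100.0, cn), 1))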

  29. Soil hydraulic properties estimate based on numerical analysis of disc infiltrometer three-dimensional infiltration curve

    NASA Astrophysics Data System (ADS)

    Latorre, Borja; Peña-Sancho, Carolina; Angulo-Jaramillo, Rafaël; Moret-Fernández, David

    2015-04-01

    Measurement of soil hydraulic properties is of paramount importance in fields such as agronomy, hydrology or soil science. Based on the analysis of the Haverkamp et al. (1994) model, the aim of this paper is to explain a technique to estimate the soil hydraulic properties (sorptivity, S, and hydraulic conductivity, K) from the full-time cumulative infiltration curves. The method (NSH) was validated by means of 12 synthetic infiltration curves generated with HYDRUS-3D from known soil hydraulic properties. The K values used to simulate the synthetic curves were compared to those estimated with the proposed method. A procedure to identify and remove the effect of the contact sand layer on the cumulative infiltration curve was also developed. A sensitivity analysis was performed using the water level measurement as uncertainty source. Finally, the procedure was evaluated using different infiltration times and data noise. Since a good correlation between the K used in HYDRUS-3D to model the infiltration curves and the K estimated by the NSH method was obtained (R2 = 0.98), it can be concluded that this technique is robust enough to estimate the soil hydraulic conductivity from complete infiltration curves. The numerical procedure to detect and remove the influence of the contact sand layer on the K and S estimates seemed to be robust and efficient. An effect of infiltration-curve noise on the K estimate was observed, with the uncertainty increasing with increasing noise. Finally, the results showed that infiltration time was an important factor in estimating K; lower values of K or smaller uncertainty needed longer infiltration times.
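
    A common way to extract S and K from a cumulative infiltration curve of this kind is to fit a short-time, two-term approximation and then apply a geometric correction for the three-dimensional disc source. The sketch below assumes the correction constants often used in this literature (gamma ~ 0.75, beta ~ 0.6) and synthetic data; it is a simplified stand-in, not the NSH procedure of the abstract:

      import numpy as np
      from scipy.optimize import curve_fit

      def two_term_infiltration(t, sorptivity, c):
          """Short-time, two-term form of 3-D cumulative infiltration:
          I(t) = S*sqrt(t) + C*t."""
          return sorptivity * np.sqrt(t) + c * t

      def estimate_s_and_k(t, infiltration, disc_radius, delta_theta,
                           gamma=0.75, beta=0.6):
          """Fit S and the slope C, then convert C to hydraulic conductivity K
          with a geometric correction for a disc source:
          K = (C - gamma*S^2/(r*dtheta)) * 3/(2-beta). The gamma and beta
          values are assumed constants, not fitted here."""
          (s, c), _ = curve_fit(two_term_infiltration, t, infiltration,
                                p0=[1e-3, 1e-5])
          k = (c - gamma * s ** 2 / (disc_radius * delta_theta)) * 3.0 / (2.0 - beta)
          return s, k

      # Synthetic noiseless curve: S = 1e-3 m/s^0.5, C = 4e-5 m/s, 20-minute test
      t = np.linspace(1, 1200, 200)
      i_obs = two_term_infiltration(t, 1e-3, 4e-5)
      print(estimate_s_and_k(t, i_obs, disc_radius=0.1, delta_theta=0.3))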

  30. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    PubMed

    Kholeif, S A

    2001-06-01

    A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocess to find first derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locate the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis are covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method in almost all factors levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves compatible with the equivalence point category of methods, such as Gran or Fortuin, are also compared with the new method.
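
    A simplified variant of the idea, locating the maximum of the first derivative and refining it with an inverse parabolic interpolation that has a closed-form vertex, is sketched below. It uses a three-point parabola on numerical derivatives rather than the four-point non-linear function fit described in the abstract, and the simulated titration data are illustrative:

      import numpy as np

      def endpoint_from_derivative(volume, ph):
          """Locate a titration end point as the maximum of d(pH)/dV, refined
          by fitting a parabola through the three derivative points around the
          maximum (inverse parabolic interpolation with an analytical vertex)."""
          v_mid = 0.5 * (volume[1:] + volume[:-1])     # midpoints of increments
          dpdv = np.diff(ph) / np.diff(volume)         # first-derivative estimates
          i = int(np.argmax(dpdv))
          i = min(max(i, 1), len(dpdv) - 2)            # keep a full 3-point window
          x0, x1, x2 = v_mid[i - 1], v_mid[i], v_mid[i + 1]
          y0, y1, y2 = dpdv[i - 1], dpdv[i], dpdv[i + 1]
          num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
          den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
          return x1 - 0.5 * num / den                  # vertex of the parabola

      # Simulated titration with an equivalence point near 25.1 mL
      v = np.arange(20.0, 30.0, 0.25)
      ph = 7.0 + 4.0 * np.tanh((v - 25.1) / 0.4)
      print(round(endpoint_from_derivative(v, ph), 3))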

  31. Functional principal component analysis of glomerular filtration rate curves after kidney transplant.

    PubMed

    Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo

    2017-01-01

    This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
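
    On a common, equally spaced visit grid, functional principal component analysis reduces to an eigen-decomposition of the sample covariance of the curves. The following sketch is a simplified grid-based version with simulated eGFR trajectories (all names and numbers are hypothetical); real transplant data are irregularly sampled and would need smoothing or sparse-FPCA machinery first:

      import numpy as np

      def fpca_on_grid(curves):
          """Simplified functional PCA for curves observed on a common,
          equally spaced time grid (rows = patients, columns = time points).
          Returns the mean curve, the principal component functions, the
          variance explained, and the per-patient component scores."""
          mean_curve = curves.mean(axis=0)
          centered = curves - mean_curve
          # SVD of the centered data matrix gives the eigenfunctions of the
          # sample covariance operator (up to the grid discretisation).
          u, s, vt = np.linalg.svd(centered, full_matrices=False)
          explained = s ** 2 / np.sum(s ** 2)
          scores = u * s                      # FPC scores for each patient
          return mean_curve, vt, explained, scores

      # Hypothetical eGFR trajectories for 100 recipients at 12 visits
      rng = np.random.default_rng(4)
      t = np.linspace(0, 1, 12)
      base = 50 + 10 * rng.normal(size=(100, 1))    # patient-specific level
      slope = -8 * rng.normal(size=(100, 1)) * t    # patient-specific decline
      gfr = base + slope + rng.normal(0, 2, size=(100, 12))
      mean_curve, pcs, explained, scores = fpca_on_grid(gfr)
      print(np.round(explained[:3], 2))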

  32. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    1996-01-01

    We first report on our current progress in the area of explicit methods for tangent curve computation. The basic idea of this method is to decompose the domain into a collection of triangles (or tetrahedra) and assume linear variation of the vector field over each cell. With this assumption, the equations which define a tangent curve become a system of linear, constant coefficient ODE's which can be solved explicitly. There are five different representations of the solution depending on the eigenvalues of the Jacobian. The analysis of these five cases is somewhat similar to the phase plane analysis often associated with critical point classification within the context of topological methods, but it is not exactly the same. There are some critical differences. Moving from one cell to the next as a tangent curve is tracked requires the computation of the exit point, which is an intersection of the solution of the constant coefficient ODE and the edge of a triangle. There are two possible approaches to this root computation problem. We can express the tangent curve in parametric form and substitute it into an implicit form for the edge, or we can express the edge in parametric form and substitute it into an implicit form of the tangent curve. Normally the solution of a system of ODE's is given in parametric form and so the first approach is the most accessible and straightforward. The second approach requires the 'implicitization' of these parametric curves. The implicitization of parametric curves can often be rather difficult, but in this case we have been successful and have been able to develop algorithms and subsequent computer programs for both approaches. We will give these details along with some comparisons in a forthcoming research paper on this topic.
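
    The explicit per-cell solution described above can be written down directly: inside a cell the field is v(x) = A x + b, so the tangent curve is obtained from a matrix exponential. The sketch below shows only this single-cell solution (using an augmented matrix so that a singular A is handled too); computing the exit point against the triangle edges would be layered on top. The example field is hypothetical:

      import numpy as np
      from scipy.linalg import expm

      def tangent_curve_linear(A, b, x0, times):
          """Explicit tangent curve of a linear vector field v(x) = A x + b
          (the assumption made inside each triangular cell): the ODE
          x'(t) = A x + b is solved via the matrix exponential of an
          augmented system, which also works when A is singular."""
          dim = len(x0)
          M = np.zeros((dim + 1, dim + 1))
          M[:dim, :dim] = A
          M[:dim, dim] = b
          aug = np.append(np.asarray(x0, dtype=float), 1.0)
          return np.array([(expm(M * t) @ aug)[:dim] for t in times])

      # A spiralling 2-D field: eigenvalues -0.1 +/- 1j give a stable focus
      A = np.array([[-0.1, -1.0], [1.0, -0.1]])
      b = np.array([0.2, 0.0])
      pts = tangent_curve_linear(A, b, x0=[1.0, 0.0], times=np.linspace(0, 5, 50))
      print(pts[-1])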

  33. Replace-approximation method for ambiguous solutions in factor analysis of ultrasonic hepatic perfusion

    NASA Astrophysics Data System (ADS)

    Zhang, Ji; Ding, Mingyue; Yuchi, Ming; Hou, Wenguang; Ye, Huashan; Qiu, Wu

    2010-03-01

    Factor analysis is an efficient technique for the analysis of dynamic structures in medical image sequences and recently has been used in contrast-enhanced ultrasound (CEUS) of hepatic perfusion. Time-intensity curves (TICs) extracted by factor analysis can provide much more diagnostic information for radiologists and improve the diagnostic rate of focal liver lesions (FLLs). However, one of the major drawbacks of factor analysis of dynamic structures (FADS) is the nonuniqueness of the result when only the non-negativity criterion is used. In this paper, we propose a new method of replace-approximation based on apex-seeking for ambiguous FADS solutions. Due to a partial overlap of different structures, factor curves are assumed to be approximately replaced by curves existing in the medical image sequences. Therefore, how to find optimal curves is the key point of the technique. No matter how many structures are assumed, our method always starts to seek apexes from a one-dimensional space onto which the original high-dimensional data are mapped. By finding two stable apexes in one-dimensional space, the method can ascertain the third one. The process can be continued until all structures are found. This technique was tested on two phantoms of blood perfusion and compared to two variants of the apex-seeking method. The results showed that the technique outperformed the two variants in the comparison of region-of-interest measurements from phantom data. It can be applied to the estimation of TICs derived from CEUS images and the separation of different physiological regions in hepatic perfusion.

  34. Development of a Probabilistic Tsunami Hazard Analysis in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toshiaki Sakai; Tomoyoshi Takeda; Hiroshi Soraoka

    2006-07-01

    As in seismic design, it is meaningful for tsunami assessment to evaluate phenomena beyond the design basis, because even once a design-basis tsunami height is set, the actual tsunami height may exceed it due to uncertainties regarding tsunami phenomena. Probabilistic tsunami risk assessment consists of estimating the tsunami hazard and the fragility of structures and then executing a system analysis. In this report, we apply a method for probabilistic tsunami hazard analysis (PTHA). We introduce a logic-tree approach to estimate tsunami hazard curves (relationships between tsunami height and probability of exceedance) and present an example for Japan. Examples of tsunami hazard curves are illustrated, and uncertainty in the tsunami hazard is displayed by 5-, 16-, 50-, 84- and 95-percentile and mean hazard curves. The result of the PTHA will be used for quantitative assessment of the tsunami risk to important facilities located in coastal areas. Tsunami hazard curves are reasonable input data for structure and system analysis. However, the evaluation method for estimating the fragility of structures and the procedure for system analysis are still being developed. (authors)
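
    The logic-tree aggregation step, turning weighted branch hazard curves into mean and fractile curves, can be sketched as follows. The branch curves, weights and heights are hypothetical; the fractile levels match the 5-95 percentile range quoted above:

      import numpy as np

      def fractile_hazard_curves(branch_curves, weights,
                                 fractiles=(0.05, 0.16, 0.5, 0.84, 0.95)):
          """Aggregate logic-tree branch hazard curves (rows = branches,
          columns = tsunami heights; values = annual exceedance probability)
          into the weighted mean curve and weighted fractile curves."""
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()
          mean_curve = w @ branch_curves
          fractile_curves = {}
          for f in fractiles:
              cols = []
              for j in range(branch_curves.shape[1]):
                  order = np.argsort(branch_curves[:, j])
                  cumw = np.cumsum(w[order])
                  cols.append(branch_curves[order, j][np.searchsorted(cumw, f)])
              fractile_curves[f] = np.array(cols)
          return mean_curve, fractile_curves

      # Three hypothetical branches over heights 1-10 m, weights 0.5/0.3/0.2
      heights = np.linspace(1, 10, 10)
      branches = np.array([np.exp(-heights / s) * 1e-2 for s in (1.5, 2.0, 3.0)])
      mean_c, fracs = fractile_hazard_curves(branches, [0.5, 0.3, 0.2])
      print(mean_c[0], fracs[0.5][0])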

  35. Limitation of the Cavitron technique by conifer pit aspiration.

    PubMed

    Beikircher, B; Ameglio, T; Cochard, H; Mayr, S

    2010-07-01

    The Cavitron technique facilitates time and material saving for vulnerability analysis. The use of rotors with small diameters leads to high water pressure gradients (ΔP) across samples, which may cause pit aspiration in conifers. In this study, the effect of pit aspiration on Cavitron measurements was analysed and a modified 'conifer method' was tested which avoids critical (i.e. pit aspiration inducing) ΔP. Four conifer species were used (Juniperus communis, Picea abies, Pinus sylvestris, and Larix decidua) for vulnerability analysis based on the standard Cavitron technique and the conifer method. In addition, ΔP thresholds for pit aspiration were determined and water extraction curves were constructed. Vulnerability curves obtained with the standard method showed generally a less negative P for the induction of embolism than curves of the conifer method. Differences were species-specific with the smallest effects in Juniperus. Larix showed the most pronounced shifts in P(50) (pressure at 50% loss of conductivity) between the standard (-1.5 MPa) and the conifer (-3.5 MPa) methods. Pit aspiration occurred at the lowest ΔP in Larix and at the highest in Juniperus. Accordingly, at a spinning velocity inducing P(50), ΔP caused only a 4% loss of conductivity induced by pit aspiration in Juniperus, but about 60% in Larix. Water extraction curves were similar to vulnerability curves indicating that spinning itself did not affect pits. Conifer pit aspiration can have major influences on Cavitron measurements and lead to an overestimation of vulnerability thresholds when a small rotor is used. Thus, the conifer method presented here enables correct vulnerability analysis by avoiding artificial conductivity losses.

  36. A simplified method in comparison with comprehensive interaction incremental dynamic analysis to assess seismic performance of jacket-type offshore platforms

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.

    2015-12-01

    The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOPs) in regimes ranging from near-elastic to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the summarized comprehensive interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome the challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. Then, an existing JTOP in the Persian Gulf is presented to illustrate the procedure, and finally a comparison is made between the simplified method and CI-IDA results. The simplified method is very informative and practical for current engineering purposes. It is able to predict seismic performance from elasticity to global dynamic instability with reasonable accuracy and little computational effort.

  37. Statistical assessment of the learning curves of health technologies.

    PubMed

    Ramsay, C R; Grant, A M; Wallace, S A; Garthwaite, P H; Monk, A F; Russell, I T

    2001-01-01

    (1) To describe systematically studies that directly assessed the learning curve effect of health technologies. (2) Systematically to identify 'novel' statistical techniques applied to learning curve data in other fields, such as psychology and manufacturing. (3) To test these statistical techniques in data sets from studies of varying designs to assess health technologies in which learning curve effects are known to exist. METHODS - STUDY SELECTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): For a study to be included, it had to include a formal analysis of the learning curve of a health technology using a graphical, tabular or statistical technique. METHODS - STUDY SELECTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): For a study to be included, it had to include a formal assessment of a learning curve using a statistical technique that had not been identified in the previous search. METHODS - DATA SOURCES: Six clinical and 16 non-clinical biomedical databases were searched. A limited amount of handsearching and scanning of reference lists was also undertaken. METHODS - DATA EXTRACTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): A number of study characteristics were abstracted from the papers such as study design, study size, number of operators and the statistical method used. METHODS - DATA EXTRACTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): The new statistical techniques identified were categorised into four subgroups of increasing complexity: exploratory data analysis; simple series data analysis; complex data structure analysis, generic techniques. METHODS - TESTING OF STATISTICAL METHODS: Some of the statistical methods identified in the systematic searches for single (simple) operator series data and for multiple (complex) operator series data were illustrated and explored using three data sets. The first was a case series of 190 consecutive laparoscopic fundoplication procedures performed by a single surgeon; the second was a case series of consecutive laparoscopic cholecystectomy procedures performed by ten surgeons; the third was randomised trial data derived from the laparoscopic procedure arm of a multicentre trial of groin hernia repair, supplemented by data from non-randomised operations performed during the trial. RESULTS - HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW: Of 4571 abstracts identified, 272 (6%) were later included in the study after review of the full paper. Some 51% of studies assessed a surgical minimal access technique and 95% were case series. The statistical method used most often (60%) was splitting the data into consecutive parts (such as halves or thirds), with only 14% attempting a more formal statistical analysis. The reporting of the studies was poor, with 31% giving no details of data collection methods. RESULTS - NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH: Of 9431 abstracts assessed, 115 (1%) were deemed appropriate for further investigation and, of these, 18 were included in the study. All of the methods for complex data sets were identified in the non-clinical literature. These were discriminant analysis, two-stage estimation of learning rates, generalised estimating equations, multilevel models, latent curve models, time series models and stochastic parameter models. In addition, eight new shapes of learning curves were identified. RESULTS - TESTING OF STATISTICAL METHODS: No one particular shape of learning curve performed significantly better than another. 
The performance of 'operation time' as a proxy for learning differed between the three procedures. Multilevel modelling using the laparoscopic cholecystectomy data demonstrated and measured surgeon-specific and confounding effects. The inclusion of non-randomised cases, despite the possible limitations of the method, enhanced the interpretation of learning effects. CONCLUSIONS - HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW: The statistical methods used for assessing learning effects in health technology assessment have been crude and the reporting of studies poor. CONCLUSIONS - NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH: A number of statistical methods for assessing learning effects were identified that had not hitherto been used in health technology assessment. There was a hierarchy of methods for the identification and measurement of learning, and the more sophisticated methods for both have had little if any use in health technology assessment. This demonstrated the value of considering fields outside clinical research when addressing methodological issues in health technology assessment. CONCLUSIONS - TESTING OF STATISTICAL METHODS: It has been demonstrated that the portfolio of techniques identified can enhance investigations of learning curve effects. (ABSTRACT TRUNCATED)

  18. 1998 UBV Light Curves of Eclipsing Binary AI Draconis and Absolute Parameters

    NASA Astrophysics Data System (ADS)

    Jassur, D. M. Z.; Khaledian, M. S.; Kermani, M. H.

New UBV photometry of the Algol-type eclipsing binary star AI Dra and the absolute physical parameters of this system have been presented. The light curve analysis, carried out by the method of differential corrections, indicates that both components are inside their Roche lobes. From combining the photometric solution with spectroscopic data obtained from velocity curve analysis, it has been found that the system consists of a main sequence primary and an evolved (subgiant) secondary.

  19. ARBAN-A new method for analysis of ergonomic effort.

    PubMed

    Holzmann, P

    1982-06-01

ARBAN is a method for the ergonomic analysis of work, including work situations which involve widely differing body postures and loads. The idea of the method is that all phases of the analysis process that imply specific knowledge of ergonomics are taken over by filming equipment and a computer routine. All tasks that must be carried out by the investigator in the process of analysis are designed so that they appear evident through the use of systematic common sense. The ARBAN analysis method contains four steps: 1. Recording of the workplace situation on video or film. 2. Coding the posture and load situation at a number of closely spaced 'frozen' situations. 3. Computerisation. 4. Evaluation of the results. The computer calculates figures for the total ergonomic stress on the whole body as well as on different parts of the body separately. They are presented as 'ergonomic stress/time curves', where the heavy load situations occur as peaks of the curve. The work cycle may also be divided into different tasks, where the stress and duration patterns can be compared. The integrals of the curves are calculated for single-figure comparison of different tasks as well as different work situations.

  20. Bayesian Analysis of Longitudinal Data Using Growth Curve Models

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.

    2007-01-01

    Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…

  1. Multivariate Curve Resolution Methods Illustrated Using Infrared Spectra of an Alcohol Dissolved in Carbon Tetrachloride

    ERIC Educational Resources Information Center

    Grung, Bjorn; Nodland, Egil; Forland, Geir Martin

    2007-01-01

    The analysis of the infrared spectra of an alcohol dissolved in carbon tetrachloride gives a better understanding of the various multivariate curve resolution methods. The resulting concentration profile is found to be very useful for calculating the degree of association and equilibrium constants of different compounds.

  2. Estimate of the soil water retention curve from the sorptivity and β parameter calculated from an upward infiltration experiment

    NASA Astrophysics Data System (ADS)

    Moret-Fernández, D.; Latorre, B.

    2017-01-01

The water retention curve (θ(h)), which defines the relationship between the volumetric water content (θ) and the matric potential (h), is of paramount importance to characterize the hydraulic behaviour of soils. Because current methods to estimate θ(h) are, in general, tedious and time consuming, alternative procedures to determine θ(h) are needed. Using an upward infiltration curve, the main objective of this work is to present a method to determine the parameters of the van Genuchten (1980) water retention curve (α and n) from the sorptivity (S) and the β parameter defined in the 1D infiltration equation proposed by Haverkamp et al. (1994). The first specific objective is to present an equation, based on the Haverkamp et al. (1994) analysis, which allows an upward infiltration process to be described. Second, assuming a known saturated hydraulic conductivity, Ks, calculated on a finite soil column by Darcy's law, a numerical procedure to calculate S and β by the inverse analysis of an exfiltration curve is presented. Finally, the α and n values are numerically calculated from Ks, S and β. To accomplish the first specific objective, cumulative upward infiltration curves simulated with HYDRUS-1D for sand, loam, silt and clay soils were compared to those calculated with the proposed equation, after applying the corresponding β and S calculated from the theoretical Ks, α and n. The same curves were used to: (i) study the influence of the exfiltration time on S and β estimations, (ii) evaluate the limits of the inverse analysis, and (iii) validate the feasibility of the method to estimate α and n. Next, the θ(h) parameters estimated with the numerical method on experimental soils were compared to those obtained with pressure cells. The results showed that the upward infiltration curve could be correctly described by the modified Haverkamp et al. (1994) equation. While S was only affected by early-time exfiltration data, the β parameter had a significant influence on the long-time exfiltration curve, and its estimation accuracy increased with time. The 1D infiltration model was only suitable for β < 1.7 (sand, loam and silt). After omitting the clay soil, an excellent relationship (R2 = 0.99, p < 0.005) was observed between the theoretical α and n values of the synthetic soils and those estimated from the inverse analysis. Consistent results, with a significant relationship (p < 0.001) between the n values estimated with the pressure cell and the upward infiltration analysis, were also obtained on the experimental soils.
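
    As background for the parameters α and n discussed above, the sketch below evaluates the van Genuchten (1980) retention curve θ(h); the residual and saturated water contents and the parameter values are illustrative assumptions, not taken from the study.

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """van Genuchten (1980) retention curve theta(h).

    h is the matric potential (negative in unsaturated soil, in units consistent
    with alpha); theta_r and theta_s are residual and saturated water contents.
    """
    m = 1.0 - 1.0 / n                                # Mualem constraint m = 1 - 1/n
    se = (1.0 + np.abs(alpha * h) ** n) ** (-m)      # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Illustrative parameters (not taken from the study): a loamy soil
h = -np.logspace(-1, 4, 200)                         # matric potentials, -0.1 to -1e4 cm
theta = van_genuchten(h, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56)
print(np.round(theta[:5], 3))
```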

  3. New well testing applications of the pressure derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onur, M.

    1989-01-01

This work presents new derivative type curves based on a new derivative group which is equal to the dimensionless pressure group divided by its logarithmic derivative with respect to the dimensionless time group. One major advantage of these type curves is that the type-curve match of field pressure/pressure-derivative data with the new derivative type curves is accomplished by moving the field data plot in only the horizontal direction. This type-curve match fixes time match-point values. The pressure change versus time data are then matched with the dimensionless pressure solution to determine match-point values. Well/reservoir parameters can then be estimated in the standard way. This two-step type-curve matching procedure increases the likelihood of obtaining a unique match. Moreover, the unique correspondence between the ordinate of the field data plot and the new derivative type curves should prove useful in determining whether given field data actually represent the well/reservoir model assumed by a selected type curve solution. It is also shown that the basic idea used in constructing the type curves can be used to ensure that proper semilog straight lines are chosen when analyzing pressure data by semilog methods. Analysis of both drawdown and buildup data is considered, and actual field cases are analyzed using the new derivative type curves and the semilog identification method. This work also presents new methods based on the pressure derivative to analyze buildup data obtained at a well (fractured or unfractured) produced to pseudosteady-state prior to shut-in. By using a method of analysis based on the pressure derivative, it is shown that a well's drainage area at the instant of shut-in and the flow capacity can be computed directly from buildup data even in cases where conventional semilog straight lines are not well-defined.

  4. Devising a method towards development of early warning tool for detection of malaria outbreak.

    PubMed

    Verma, Preeti; Sarkar, Soma; Singh, Poonam; Dhiman, Ramesh C

    2017-11-01

Uncertainty often arises in differentiating seasonal variation from outbreaks of malaria. The present study aimed to generalize the theoretical structure of the sine curve for detecting an outbreak so that a tool for early warning of malaria may be developed. A 'case/mean-ratio scale' system was devised for labelling outbreaks in two diverse districts of Assam and Rajasthan. A curve-based method of analysis using the properties of the sine curve was developed for determining outbreaks; it could be used as an early warning tool for Plasmodium falciparum malaria outbreaks. In the present method of analysis, the critical Cmax (peak value of the sine curve) of the seasonally adjusted curve for a P. falciparum malaria outbreak was 2.3 for Karbi Anglong and 2.2 for Jaisalmer districts. On the case/mean-ratio scale, an outbreak could be labelled as minor when the Cmax value of the malaria curve lay between the critical Cmax and 3.5, while values >3.5 may be labelled as major. In epidemic years, with a mean case/mean ratio of ≥1.00 and a root mean square (RMS) of the case/mean ratio of ≥1.504, outbreaks can be predicted 1-2 months in advance. The present study showed that for P. falciparum cases in the Karbi Anglong (Assam) and Jaisalmer (Rajasthan) districts, a rise in the Cmax value of the curve was always followed by a rise in the average, the RMS, or both, and hence could be used as an early warning signal. The present method provides better detection of outbreaks than the conventional method of mean plus two standard deviations (mean+2 SD). The identified tools are simple and may be adopted for malaria outbreak preparedness.
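
    For reference, the sketch below implements the conventional mean-plus-two-standard-deviations threshold that the study uses as a benchmark; the monthly case counts are hypothetical and the sine-curve Cmax method itself is not reproduced.

```python
import numpy as np

def conventional_outbreak_flags(cases_by_year_month):
    """Flag months exceeding the conventional mean + 2 SD threshold.

    cases_by_year_month: array of shape (n_years, 12) with monthly case counts.
    A month is flagged when its count exceeds the long-term mean plus two
    standard deviations for that calendar month.
    """
    cases = np.asarray(cases_by_year_month, dtype=float)
    monthly_mean = cases.mean(axis=0)
    monthly_sd = cases.std(axis=0, ddof=1)
    threshold = monthly_mean + 2.0 * monthly_sd
    return cases > threshold

# Hypothetical data: 5 years x 12 months of malaria case counts
rng = np.random.default_rng(0)
demo = rng.poisson(lam=[5, 5, 8, 12, 20, 35, 40, 30, 18, 10, 6, 5], size=(5, 12))
print(conventional_outbreak_flags(demo))
```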

  5. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis readily allow development of radar reflectivity factor (and by extension rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, a single PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.

  6. Comparative evaluation of different methods for calculation of cerebral blood flow (CBF) in nonanesthetized rabbits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angelini, G.; Lanza, E.; Rozza Dionigi, A.

    1983-05-01

The measurement of cerebral blood flow (CBF) by the extracranial detection of the radioactivity of ¹³³Xe injected into an internal carotid artery has proved to be of considerable value for the investigation of cerebral circulation in conscious rabbits. Methods are described for calculating CBF from the curves of clearance of ¹³³Xe, and include exponential analysis (two-component model), initial slope, and stochastic method. The different methods of curve analysis were compared in order to evaluate the fitness with the theoretical model. The initial slope and stochastic methods, compared with the biexponential model, underestimate the CBF by 35% and 46%, respectively. Furthermore, the validity of recording the clearance curve for 10 min was tested by comparing these CBF values with those obtained from the whole curve. CBF values calculated with the shortened procedure are overestimated by 17%. A correlation exists between the '10 min' CBF values and the CBF calculated from the whole curve; in spite of that, the values are not accurate for limited animal populations or for single animals. The extent of the two main compartments into which the CBF is divided was also measured. There is no correlation between CBF values and the extent of the relative compartment. This fact suggests that these two parameters correspond to different biological entities.

  7. Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, X.; Suganuma, Y.; Fujii, M.

    2017-12-01

Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of their underlying geological and environmental processes, it is desirable to identify individual components in order to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture model of a certain end-member model distribution to the measured remanent magnetization curve. In previous studies, the lognormal, skew generalized Gaussian and skewed Gaussian distributions have been used as end-member model distributions, applied to the gradient curves of remanent magnetization curves. However, gradient curves are sensitive to measurement noise, as the differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Though either smoothing or filtering can be applied to reduce the noise before differentiation, the extent to which they bias the component analysis is rarely addressed. In this study, we investigated a new model function that can be applied directly to the remanent magnetization curves and therefore avoids differentiation. The new model function can provide a more flexible shape than the lognormal distribution, which is a merit for modeling the coercivity distributions of complex magnetic components. We applied the unmixing method both to model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability and limitations. The analyses on model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components is over two. It is, therefore, recommended to verify the reliability of component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks are analyzed with the new model distribution. Given the same component number, the new model distribution can provide closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated so that users are freed from the labor of providing initial guesses for the parameters, which also helps to reduce the subjectivity of the component analysis.

  8. Runoff potentiality of a watershed through SCS and functional data analysis technique.

    PubMed

    Adham, M I; Shirazi, S M; Othman, F; Rahman, S; Yusop, Z; Ismail, Z

    2014-01-01

    Runoff potentiality of a watershed was assessed based on identifying curve number (CN), soil conservation service (SCS), and functional data analysis (FDA) techniques. Daily discrete rainfall data were collected from weather stations in the study area and analyzed through lowess method for smoothing curve. As runoff data represents a periodic pattern in each watershed, Fourier series was introduced to fit the smooth curve of eight watersheds. Seven terms of Fourier series were introduced for the watersheds 5 and 8, while 8 terms of Fourier series were used for the rest of the watersheds for the best fit of data. Bootstrapping smooth curve analysis reveals that watersheds 1, 2, 3, 6, 7, and 8 are with monthly mean runoffs of 29, 24, 22, 23, 26, and 27 mm, respectively, and these watersheds would likely contribute to surface runoff in the study area. The purpose of this study was to transform runoff data into a smooth curve for representing the surface runoff pattern and mean runoff of each watershed through statistical method. This study provides information of runoff potentiality of each watershed and also provides input data for hydrological modeling.
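
    The following is a minimal sketch of the kind of truncated Fourier series fit described above, applied to a single year of hypothetical monthly runoff; the number of harmonics and the data are illustrative, not taken from the study.

```python
import numpy as np

def fit_fourier_series(y, n_terms):
    """Least-squares fit of a truncated Fourier series to monthly runoff.

    y: runoff values at equally spaced months; n_terms: number of harmonics.
    Returns the fitted coefficients and the smoothed curve at the observation times.
    """
    t = np.arange(len(y)) / len(y)                    # time in fractions of a year
    cols = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        cols.append(np.cos(2 * np.pi * k * t))
        cols.append(np.sin(2 * np.pi * k * t))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef

# Hypothetical monthly mean runoff (mm) for one watershed
runoff = np.array([29., 24., 22., 23., 26., 27., 31., 35., 33., 30., 28., 27.])
coef, smooth = fit_fourier_series(runoff, n_terms=4)
print(np.round(smooth, 1))
```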

  9. Runoff Potentiality of a Watershed through SCS and Functional Data Analysis Technique

    PubMed Central

    Adham, M. I.; Shirazi, S. M.; Othman, F.; Rahman, S.; Yusop, Z.; Ismail, Z.

    2014-01-01

    Runoff potentiality of a watershed was assessed based on identifying curve number (CN), soil conservation service (SCS), and functional data analysis (FDA) techniques. Daily discrete rainfall data were collected from weather stations in the study area and analyzed through lowess method for smoothing curve. As runoff data represents a periodic pattern in each watershed, Fourier series was introduced to fit the smooth curve of eight watersheds. Seven terms of Fourier series were introduced for the watersheds 5 and 8, while 8 terms of Fourier series were used for the rest of the watersheds for the best fit of data. Bootstrapping smooth curve analysis reveals that watersheds 1, 2, 3, 6, 7, and 8 are with monthly mean runoffs of 29, 24, 22, 23, 26, and 27 mm, respectively, and these watersheds would likely contribute to surface runoff in the study area. The purpose of this study was to transform runoff data into a smooth curve for representing the surface runoff pattern and mean runoff of each watershed through statistical method. This study provides information of runoff potentiality of each watershed and also provides input data for hydrological modeling. PMID:25152911

  10. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    PubMed

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
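
    Below is a minimal sketch of the curve-fitting baseline for decomposing two overlapping peaks into Gaussian profiles; the paper's specific error-compensation step (feeding the residual back through repeated fits) is not reproduced, and the wavelengths, widths and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, c1, w1, a2, c2, w2, base):
    """Sum of two Gaussian peaks plus a constant baseline."""
    g1 = a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2)
    return g1 + g2 + base

# Synthetic overlapping doublet near 324-326 nm (illustrative, not LIBS data)
x = np.linspace(321.0, 327.0, 400)
truth = two_gaussians(x, 1.0, 324.7, 0.25, 0.6, 325.6, 0.30, 0.05)
rng = np.random.default_rng(1)
y = truth + rng.normal(0.0, 0.02, x.size)

p0 = [0.8, 324.5, 0.2, 0.5, 325.8, 0.2, 0.0]          # rough initial guesses
popt, _ = curve_fit(two_gaussians, x, y, p0=p0)
residual = y - two_gaussians(x, *popt)
print("fitted centres:", popt[1], popt[4], "residual RMS:", residual.std())
```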

  11. Sign Lines, Asymptotes, and Tangent Carriers--An Introduction to Curve Sketching.

    ERIC Educational Resources Information Center

    Spikell, Mark A.; Deane, William R.

    This paper discusses methods of sketching various types of algebraic functions from an analysis of the portions of the plane where the curve will be found and where it will not be found. The discussion is limited to rational functions. Methods and techniques presented are applicable to the secondary mathematics curriculum from algebra through…

  12. Modeling error distributions of growth curve models through Bayesian methods.

    PubMed

    Zhang, Zhiyong

    2016-06-01

Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  13. Prediction Analysis for Measles Epidemics

    NASA Astrophysics Data System (ADS)

    Sumi, Ayako; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi; Olsen, Lars Folke; Kobayashi, Nobumichi

    2003-12-01

    A newly devised procedure of prediction analysis, which is a linearized version of the nonlinear least squares method combined with the maximum entropy spectral analysis method, was proposed. This method was applied to time series data of measles case notification in several communities in the UK, USA and Denmark. The dominant spectral lines observed in each power spectral density (PSD) can be safely assigned as fundamental periods. The optimum least squares fitting (LSF) curve calculated using these fundamental periods can essentially reproduce the underlying variation of the measles data. An extension of the LSF curve can be used to predict measles case notification quantitatively. Some discussions including a predictability of chaotic time series are presented.

  14. Data preparation for functional data analysis of PM10 in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Shaadan, Norshahida; Jemain, Abdul Aziz; Deni, Sayang Mohd

    2014-07-01

The use of curves or functional data in study analyses is increasingly gaining momentum in various fields of research. The statistical method to analyze such data is known as functional data analysis (FDA). The first step in FDA is to convert the observed data points, which are repeatedly recorded over a period of time or space, into either a rough (raw) or smooth curve. In the case of the smooth curve, basis function expansion is one of the methods used for the data conversion. The data can be converted into a smooth curve either by using the regression smoothing or the roughness penalty smoothing approach. With the regression smoothing approach, the degree of the curve's smoothness is highly dependent on the number k of basis functions; meanwhile, for the roughness penalty approach, the smoothness depends on a roughness coefficient given by the parameter λ. Based on previous studies, researchers often used the rather time-consuming trial and error or cross validation method to estimate the appropriate number of basis functions. Thus, this paper proposes a statistical procedure to construct functional data or curves for hourly and daily recorded data. The Bayesian Information Criterion is used to determine the number of basis functions while the Generalized Cross Validation criterion is used to identify the parameter λ. The proposed procedure is then applied on a ten year (2001-2010) period of PM10 data from 30 air quality monitoring stations that are located in Peninsular Malaysia. It was found that the number of basis functions required for the construction of the PM10 daily curve in Peninsular Malaysia was between 14 and 20, with an average value of 17; the first percentile is 15 and the third percentile is 19. Meanwhile, the initial value of the roughness coefficient was between 10⁻⁵ and 10⁻⁷, and the mode was 10⁻⁶. An example of the functional descriptive analysis is also shown.
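
    As a rough illustration of roughness-penalty smoothing with GCV selection of λ, the sketch below uses a discrete second-difference penalty (a Whittaker-type smoother) rather than the basis-expansion implementation used in the study; the hourly PM10 values are synthetic.

```python
import numpy as np

def smooth_with_gcv(y, lambdas):
    """Roughness-penalty smoothing of a sampled curve with GCV selection of lambda.

    Solves min ||y - f||^2 + lambda * ||D2 f||^2, where D2 is the second-difference
    operator, and picks the lambda minimising the generalized cross-validation score.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    D2 = np.diff(np.eye(n), n=2, axis=0)                 # (n-2) x n second differences
    penalty = D2.T @ D2
    best = None
    for lam in lambdas:
        hat = np.linalg.inv(np.eye(n) + lam * penalty)   # hat matrix H(lambda)
        fhat = hat @ y
        df = np.trace(hat)                               # effective degrees of freedom
        gcv = n * np.sum((y - fhat) ** 2) / (n - df) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, fhat)
    return best[1], best[2]

# Hypothetical hourly PM10 readings for one day
rng = np.random.default_rng(2)
t = np.arange(24)
pm10 = 50 + 15 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, 24)
lam, smooth = smooth_with_gcv(pm10, lambdas=10.0 ** np.arange(-2, 5))
print("selected lambda:", lam)
```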

  15. Clinical MR-mammography: are computer-assisted methods superior to visual or manual measurements for curve type analysis? A systematic approach.

    PubMed

    Baltzer, Pascal Andreas Thomas; Freiberg, Christian; Beger, Sebastian; Vag, Tibor; Dietzel, Matthias; Herzog, Aimee B; Gajda, Mieczyslaw; Camara, Oumar; Kaiser, Werner A

    2009-09-01

Enhancement characteristics after administration of a contrast agent are regarded as a major criterion for differential diagnosis in magnetic resonance mammography (MRM). However, no consensus exists about the best measurement method to assess contrast enhancement kinetics. This systematic investigation was performed to compare visual estimation with manual region of interest (ROI) and computer-aided diagnosis (CAD) analysis for time curve measurements in MRM. A total of 329 patients undergoing surgery after MRM (1.5 T) were analyzed prospectively. Dynamic data were measured using visual estimation as well as ROI and CAD methods, and classified depending on initial signal increase and delayed enhancement. Pathology revealed 469 lesions (279 malignant, 190 benign). Kappa agreement between the methods ranged from 0.78 to 0.81. Diagnostic accuracies of 74.4% (visual), 75.7% (ROI), and 76.6% (CAD) were found, with no statistically significant differences. According to our results, curve type measurements are useful as a diagnostic criterion in breast lesions irrespective of the method used.

  16. Curve fitting air sample filter decay curves to estimate transuranic content.

    PubMed

    Hayes, Robert B; Chiou, Hung Cheng

    2004-01-01

    By testing industry standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to not look for specific radon progeny values but rather transuranic activity. By using a method that will provide reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can then be taken. By carrying out rigorous statistical analysis of the curve fits to over 65 samples having no transuranic activity taken over a 10-mo period, an optimization of the fitting function and quality tests for this purpose was attained.

  17. A quick on-line state of health estimation method for Li-ion battery with incremental capacity curves processed by Gaussian filter

    NASA Astrophysics Data System (ADS)

    Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri

    2018-01-01

This paper proposes an advanced state of health (SoH) estimation method for high energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used due to their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled under different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed method for SoH estimation. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be greatly reduced. This method shows great potential to be applied in practice, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
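
    A minimal sketch of the IC-curve step described above: compute dQ/dV from a charging curve and smooth it with a Gaussian filter. The synthetic charging curve and filter width are illustrative assumptions, not NMC cell data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def incremental_capacity(voltage, capacity, sigma=5):
    """Return a smoothed incremental capacity (dQ/dV) curve.

    voltage, capacity: samples from a constant-current charging curve, assumed
    monotonically increasing in voltage. sigma is the Gaussian width in samples.
    """
    dq_dv = np.gradient(capacity, voltage)            # raw IC curve
    return gaussian_filter1d(dq_dv, sigma=sigma)      # noise-suppressed IC curve

# Synthetic charging curve (illustrative only)
v = np.linspace(3.0, 4.2, 600)
q = 2.5 / (1 + np.exp(-(v - 3.7) / 0.05))             # capacity (Ah) vs. voltage
q_noisy = q + np.random.default_rng(3).normal(0, 0.002, v.size)
ic_smooth = incremental_capacity(v, q_noisy, sigma=8)
print("peak of IC curve at %.3f V" % v[np.argmax(ic_smooth)])
```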

  18. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    PubMed

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
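
    One concrete data-depth notion that can drive such a curve boxplot is the modified band depth of Lopez-Pintado and Romo; the sketch below implements its pairwise version, though the paper's exact depth definition may differ.

```python
import numpy as np

def modified_band_depth(curves):
    """Modified band depth (pairwise version) for an ensemble of 1D curves.

    curves: array of shape (n_curves, n_points). For each curve, computes the
    average, over all pairs of ensemble members, of the fraction of points at
    which the curve lies inside the band spanned by the pair. Higher = more central.
    """
    curves = np.asarray(curves, dtype=float)
    n = curves.shape[0]
    depth = np.zeros(n)
    for i in range(n):
        count = 0.0
        for j in range(n):
            for k in range(j + 1, n):
                lo = np.minimum(curves[j], curves[k])
                hi = np.maximum(curves[j], curves[k])
                count += np.mean((curves[i] >= lo) & (curves[i] <= hi))
        depth[i] = count / (n * (n - 1) / 2)
    return depth

# Small synthetic ensemble: the central curve should get the largest depth
t = np.linspace(0, 1, 50)
ensemble = np.array([np.sin(2 * np.pi * t) + shift for shift in (-0.4, -0.1, 0.0, 0.2, 0.5)])
print(np.round(modified_band_depth(ensemble), 3))
```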

  19. Measurement of M²-Curve for Asymmetric Beams by Self-Referencing Interferometer Wavefront Sensor.

    PubMed

    Du, Yongzhao

    2016-11-29

For asymmetric laser beams, the values of the beam quality factors Mx² and My² are inconsistent if one selects a different coordinate system or measures beam quality under different experimental conditions, even when analyzing the same beam. To overcome this non-uniqueness, a new beam quality characterization method named the M²-curve is developed. The M²-curve not only contains the beam quality factors Mx² and My² in the x-direction and y-direction, respectively, but also introduces a curve of Mxα² versus the rotation angle α of the coordinate axes. Moreover, we also present a real-time measurement method to demonstrate the beam propagation factor M²-curve with a modified self-referencing Mach-Zehnder interferometer-based wavefront sensor (henceforth SRI-WFS). The feasibility of the proposed method is demonstrated with theoretical analysis and experiments on multimode beams. The experimental results showed that the proposed measurement method is simple, fast, and a single-shot measurement procedure without movable parts.

  20. Measurement of M2-Curve for Asymmetric Beams by Self-Referencing Interferometer Wavefront Sensor

    PubMed Central

    Du, Yongzhao

    2016-01-01

For asymmetric laser beams, the values of the beam quality factors Mx² and My² are inconsistent if one selects a different coordinate system or measures beam quality under different experimental conditions, even when analyzing the same beam. To overcome this non-uniqueness, a new beam quality characterization method named the M²-curve is developed. The M²-curve not only contains the beam quality factors Mx² and My² in the x-direction and y-direction, respectively, but also introduces a curve of Mxα² versus the rotation angle α of the coordinate axes. Moreover, we also present a real-time measurement method to demonstrate the beam propagation factor M²-curve with a modified self-referencing Mach-Zehnder interferometer-based wavefront sensor (henceforth SRI-WFS). The feasibility of the proposed method is demonstrated with theoretical analysis and experiments on multimode beams. The experimental results showed that the proposed measurement method is simple, fast, and a single-shot measurement procedure without movable parts. PMID:27916845

  1. Interaction Analysis of Longevity Interventions Using Survival Curves.

    PubMed

    Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim

    2018-01-06

    A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans . We find that interactions are generally weak even when the standard analysis indicates otherwise.

  2. Interaction Analysis of Longevity Interventions Using Survival Curves

    PubMed Central

    Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G.; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim

    2018-01-01

    A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise. PMID:29316622

  3. A new method for detection and discrimination of Pepino mosaic virus isolates using high resolution melting analysis of the triple gene block 3.

    PubMed

    Hasiów-Jaroszewska, Beata; Komorowska, Beata

    2013-10-01

Existing diagnostic methods distinguish different Pepino mosaic virus (PepMV) genotypes, but they do not detect sequence variation in particular gene segments. The necrotic and non-necrotic isolates (pathotypes) of PepMV share 99% sequence similarity. These isolates differ from each other at one nucleotide site in the triple gene block 3. In this study, a combination of real-time reverse transcription polymerase chain reaction and high resolution melting curve analysis of triple gene block 3 was developed for simultaneous detection and differentiation of PepMV pathotypes. The triple gene block 3 region carrying a transition A → G was amplified using two primer pairs from twelve virus isolates, and was subjected to high resolution melting curve analysis. The results showed two distinct melting curve profiles related to each pathotype. The results also indicated that the high resolution melting method could readily differentiate between necrotic and non-necrotic PepMV pathotypes. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Analysis of Classes of Superlinear Semipositone Problems with Nonlinear Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Morris, Quinn A.

    We study positive radial solutions for classes of steady state reaction diffusion problems on the exterior of a ball with both Dirichlet and nonlinear boundary conditions. We consider p-Laplacian problems (p > 1) with reaction terms which are superlinear at infinity and semipositone. In the case p = 2, using variational methods, we establish the existence of a solution, and via detailed analysis of the Green's function, we prove the positivity of the solution. In the case p ≠ 2, we again use variational methods to establish the existence of a solution, but the positivity of the solution is achieved via sophisticated a priori estimates. In the case p ≠ 2, the Green's function analysis is no longer available. Our results significantly enhance the literature on superlinear semipositone problems. Finally, we provide algorithms for the numerical generation of exact bifurcation curves for one-dimensional problems. In the autonomous case, we extend and analyze a quadrature method, and using nonlinear solvers in Mathematica, generate bifurcation curves. In the nonautonomous case, we employ shooting methods in Mathematica to generate bifurcation curves.
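
    As an illustration of the quadrature method mentioned above, the sketch below traces a bifurcation curve for the classical autonomous case p = 2 with Dirichlet conditions, −u″ = λf(u), u(0) = u(1) = 0, via the time map √λ = √2 ∫₀^ρ ds/√(F(ρ) − F(s)); the nonlinearity f is an illustrative semipositone-type choice, not the one studied in this work.

```python
import numpy as np
from scipy.integrate import quad

def f(u):
    """Illustrative superlinear, semipositone nonlinearity: f(0) < 0."""
    return u ** 2 - 1.0

def F(u):
    """Antiderivative of f with F(0) = 0."""
    return u ** 3 / 3.0 - u

def lam_of_rho(rho):
    """Time-map quadrature: lambda as a function of the sup-norm rho.

    The substitution s = rho * (1 - t**2) removes the square-root singularity at
    s = rho, provided F(rho) > F(s) on [0, rho).
    """
    def integrand(t):
        s = rho * (1.0 - t ** 2)
        return 2.0 * rho * t / np.sqrt(F(rho) - F(s))
    # Start slightly above 0 to avoid a 0/0 evaluation; the truncation is negligible.
    val, _ = quad(integrand, 1e-12, 1.0)
    return 2.0 * val ** 2            # lambda = (sqrt(2) * integral)**2

# Trace part of the bifurcation curve (rho, lambda(rho)); for this f, rho must
# exceed sqrt(3) so that F(rho) > F(s) for all s in [0, rho).
for rho in (2.0, 3.0, 5.0, 8.0):
    print(rho, lam_of_rho(rho))
```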

  5. A Data-driven Study of RR Lyrae Near-IR Light Curves: Principal Component Analysis, Robust Fits, and Metallicity Estimates

    NASA Astrophysics Data System (ADS)

    Hajdu, Gergely; Dékány, István; Catelan, Márcio; Grebel, Eva K.; Jurcsik, Johanna

    2018-04-01

RR Lyrae variables are widely used tracers of Galactic halo structure and kinematics, but they can also serve to constrain the distribution of the old stellar population in the Galactic bulge. With the aim of improving their near-infrared photometric characterization, we investigate their near-infrared light curves, as well as the empirical relationships between their light curves and metallicities, using machine learning methods. We introduce a new, robust method for the estimation of the light-curve shapes, hence the average magnitudes of RR Lyrae variables in the K_S band, by utilizing the first few principal components (PCs) as basis vectors, obtained from the PC analysis of a training set of light curves. Furthermore, we use the amplitudes of these PCs to predict the light-curve shape of each star in the J-band, allowing us to precisely determine their average magnitudes (hence colors), even in cases where only one J measurement is available. Finally, we demonstrate that the K_S-band light-curve parameters of RR Lyrae variables, together with the period, allow the estimation of the metallicity of individual stars with an accuracy of ∼0.2–0.25 dex, providing valuable chemical information about old stellar populations bearing RR Lyrae variables. The methods presented here can be straightforwardly adopted for other classes of variable stars, bands, or for the estimation of other physical quantities.
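
    A minimal sketch of the PC decomposition idea: given light curves resampled onto a common phase grid, an SVD yields principal component shapes and per-star PC scores. The robust fitting to sparse, irregularly sampled K_S-band data used in the paper is not reproduced, and the sawtooth-like curves are synthetic.

```python
import numpy as np

def light_curve_pca(light_curves, n_components=3):
    """PCA of phase-folded light curves sampled on a common phase grid.

    light_curves: array (n_stars, n_phases) of magnitudes. Returns the mean
    curve, the principal component shapes, and the per-star scores so that
    each curve is approximately mean_curve + scores @ components.
    """
    lc = np.asarray(light_curves, dtype=float)
    mean_curve = lc.mean(axis=0)
    centered = lc - mean_curve
    # Rows of Vt are the principal component light-curve shapes
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:n_components]
    scores = centered @ components.T
    return mean_curve, components, scores

# Synthetic example: sawtooth-like curves with varying amplitude and phase shift
phase = np.linspace(0, 1, 100, endpoint=False)
rng = np.random.default_rng(4)
curves = np.array([a * ((phase + s) % 1.0)
                   for a, s in zip(rng.uniform(0.3, 1.0, 20), rng.uniform(0, 1, 20))])
mean_c, comps, scores = light_curve_pca(curves)
print(scores.shape)   # (20, 3): each star summarized by 3 PC scores
```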

  6. Lactase persistence genotyping on whole blood by loop-mediated isothermal amplification and melting curve analysis.

    PubMed

    Abildgaard, Anders; Tovbjerg, Sara K; Giltay, Axel; Detemmerman, Liselot; Nissen, Peter H

    2018-03-26

    The lactase persistence phenotype is controlled by a regulatory enhancer region upstream of the Lactase (LCT) gene. In northern Europe, specifically the -13910C > T variant has been associated with lactase persistence whereas other persistence variants, e.g. -13907C > G and -13915 T > G, have been identified in Africa and the Middle East. The aim of the present study was to compare a previously developed high resolution melting assay (HRM) with a novel method based on loop-mediated isothermal amplification and melting curve analysis (LAMP-MC) with both whole blood and DNA as input material. To evaluate the LAMP-MC method, we used 100 whole blood samples and 93 DNA samples in a two tiered study. First, we studied the ability of the LAMP-MC method to produce specific melting curves for several variants of the LCT enhancer region. Next, we performed a blinded comparison between the LAMP-MC method and our existing HRM method with clinical samples of unknown genotype. The LAMP-MC method produced specific melting curves for the variants at position -13909, -13910, -13913 whereas the -13907C > G and -13915 T > G variants produced indistinguishable melting profiles. The LAMP-MC assay is a simple method for lactase persistence genotyping and compares well with our existing HRM method. Copyright © 2018. Published by Elsevier B.V.

  7. Analysis of mixed model in gear transmission based on ADAMS

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2012-09-01

The traditional methods of mechanical gear drive simulation include the gear pair method and the solid-to-solid contact method. The former has higher solving efficiency but lower accuracy of results; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Currently, most research is focused on the description of geometric models and the definition of boundary conditions; however, none of it solves the problems fundamentally. To improve the simulation efficiency while ensuring results with high accuracy, a mixed model method, which uses gear tooth profiles in place of the solid gear to simulate gear movement, is presented under these circumstances. In the process of modeling, the solid models of the mechanism are first built in SolidWorks; then the point coordinates of the outline curves of the gear are collected using the SolidWorks API and fit curves are created in Adams based on those point coordinates; next, the positions of the fitted curves are adjusted according to the position of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information through the tooth profile curves, simulates the meshing process through tooth-profile curve-to-curve contact, and supplies mass and inertia data via the solid gear models. The simulation process combines the two models to complete the gear drive analysis. In order to verify the validity of the presented method, both theoretical derivation and numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to theoretical calculations. Consequently, the mixed model method has high application value for the study of the dynamics of gear mechanisms.

  8. Identification of Reliable Components in Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS): a Data-Driven Approach across Metabolic Processes.

    PubMed

    Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun

    2015-11-04

    There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using the clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between conventional and modified methods applied to proton nuclear magnetic resonance ((1)H-NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected with reliability. This strategy, named "cluster-aided MCR-ALS," will facilitate the attainment of more reliable results in the metabolomics datasets.
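
    For orientation, the sketch below shows the core MCR-ALS iteration (alternating non-negativity-constrained least squares between concentrations C and spectra S so that D ≈ C Sᵀ); the cluster-based reliability classification proposed in the paper is not reproduced, and the mixture data are synthetic rather than NMR spectra.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=100, seed=0):
    """Basic MCR-ALS: factorize D (samples x channels) as C @ S.T with C, S >= 0."""
    rng = np.random.default_rng(seed)
    n_samples, n_channels = D.shape
    C = rng.random((n_samples, n_components))
    for _ in range(n_iter):
        # Update spectra S for fixed C, then clip to enforce non-negativity
        S = np.linalg.lstsq(C, D, rcond=None)[0].T
        S = np.clip(S, 0.0, None)
        # Update concentrations C for fixed S
        C = np.linalg.lstsq(S, D.T, rcond=None)[0].T
        C = np.clip(C, 0.0, None)
    return C, S

# Synthetic two-component mixture spectra (illustrative only)
x = np.linspace(0, 10, 200)
spectra_true = np.vstack([np.exp(-0.5 * ((x - 3) / 0.4) ** 2),
                          np.exp(-0.5 * ((x - 6) / 0.6) ** 2)])
conc_true = np.array([[1.0, 0.2], [0.7, 0.5], [0.3, 0.9], [0.1, 1.2]])
D = conc_true @ spectra_true + np.random.default_rng(5).normal(0, 0.01, (4, 200))
C, S = mcr_als(D, n_components=2)
print("reconstruction RMS:", np.sqrt(np.mean((D - C @ S.T) ** 2)))
```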

  9. Determination of glucose-6-phosphate dehydrogenase cut-off values in a Tunisian population.

    PubMed

    Laouini, Naouel; Sahli, Chaima Abdelhafidh; Jouini, Latifa; Haloui, Sabrine; Fredj, Sondes Hadj; Daboubi, Rym; Siala, Hajer; Ouali, Faida; Becher, Meriam; Toumi, Nourelhouda; Bibi, Amina; Messsaoud, Taieb

    2017-07-26

Glucose-6-phosphate dehydrogenase (G6PD) deficiency is the commonest enzymopathy worldwide. The incidence depends essentially on the methods used for the assessment. In this respect, we attempted in this study to set cut-off values of G6PD activity to discriminate among normal, heterozygous, and deficient individuals using the World Health Organization (WHO) classification and receiver operating characteristic (ROC) curve analysis. Blood samples from 250 female and 302 male subjects were enrolled in this study. The G6PD activity was determined using a quantitative assay. The common G6PD mutations in Tunisia were determined using the amplification refractory mutation system (ARMS-PCR) method. The ROC curve was used to choose the best cut-off. Normal G6PD values were 7.69±2.37, 7.86±2.39, and 7.51±2.35 U/g Hb for the entire, male, and female groups, respectively. Cut-off values for the total, male, and female groups were determined using the WHO classification and ROC curve analysis. In the male population, both the cut-off established using ROC curve analysis (4.00 U/g Hb) and the 60% level (3.82 U/g Hb) are sensitive and specific, resulting in good discrimination between deficient and normal males. For the female group, the ROC cut-off (5.84 U/g Hb) seems better than the 60% level cut-off (3.88 U/g Hb) at discriminating between normal and heterozygous or homozygous women, with a higher Youden index. The establishment of normal values for a population is important for a better evaluation of the assay result. ROC curve analysis is an alternative method to determine the status of patients since it correlates DNA analysis and G6PD activity.
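
    A minimal sketch of choosing a cut-off by maximizing the Youden index over candidate thresholds; deficiency is treated as the positive class, predicted by low activity, and the activity values are hypothetical rather than the study's data.

```python
import numpy as np

def youden_cutoff(activity, deficient):
    """Best G6PD activity cut-off by maximizing the Youden index.

    activity: enzyme activity (U/g Hb); deficient: boolean, True for deficient
    subjects. Low activity indicates deficiency, so a sample is called
    "deficient" when activity <= cut-off.
    """
    activity = np.asarray(activity, dtype=float)
    deficient = np.asarray(deficient, dtype=bool)
    best_j, best_cut = -np.inf, None
    for cut in np.unique(activity):
        called_pos = activity <= cut
        sens = np.mean(called_pos[deficient])        # true positive rate
        spec = np.mean(~called_pos[~deficient])      # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical activities (U/g Hb): deficient vs. normal males
rng = np.random.default_rng(6)
act = np.concatenate([rng.normal(1.5, 0.8, 30).clip(0.1), rng.normal(7.9, 2.4, 270)])
lab = np.concatenate([np.ones(30, bool), np.zeros(270, bool)])
print(youden_cutoff(act, lab))
```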

  10. The Analysis of Seawater: A Laboratory-Centered Learning Project in General Chemistry.

    ERIC Educational Resources Information Center

    Selco, Jodye I.; Roberts, Julian L., Jr.; Wacks, Daniel B.

    2003-01-01

    Describes a sea-water analysis project that introduces qualitative and quantitative analysis methods and laboratory methods such as gravimetric analysis, potentiometric titration, ion-selective electrodes, and the use of calibration curves. Uses a problem-based cooperative teaching approach. (Contains 24 references.) (YDS)

  11. Relative loading on biplane wings

    NASA Technical Reports Server (NTRS)

    Diehl, Walter S

    1934-01-01

    Recent improvements in stress analysis methods have made it necessary to revise and to extend the loading curves to cover all conditions of flight. This report is concerned with a study of existing biplane data by combining the experimental and theoretical data to derive a series of curves from which the lift curves of the individual wings of a biplane may be obtained.

  12. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    Nielson, Gregory M.

    1997-01-01

This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.
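
    As a small illustration of the first aspect, tangent curve computation, the sketch below integrates a streamline of a 2D vector field with classical fourth-order Runge-Kutta steps; the analytic circulating field stands in for flow data on a curvilinear grid.

```python
import numpy as np

def velocity(p):
    """Illustrative 2D vector field: a simple circulating flow."""
    x, y = p
    return np.array([-y, x])

def tangent_curve(seed, step=0.05, n_steps=200):
    """Trace a tangent curve (streamline) from a seed point with RK4 steps."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = pts[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * step * k1)
        k3 = velocity(p + 0.5 * step * k2)
        k4 = velocity(p + step * k3)
        pts.append(p + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)

curve = tangent_curve(seed=(1.0, 0.0))
print(curve[:3])   # should stay close to the unit circle for this field
```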

  13. Numerical approach in defining milling force taking into account curved cutting-edge of applied mills

    NASA Astrophysics Data System (ADS)

    Bondarenko, I. R.

    2018-03-01

The paper tackles the task of applying a numerical approach to determine the cutting forces in carbon steel machining with a curved cutting-edge mill. To solve this task, the curved surface of the cutting edge was subjected to a step approximation, and the chip cross-section was split into discrete elements. The cutting force was then defined as the sum of the elementary forces arising in the cut of every element. Comparison of calculations from the proposed method with those from the Kienzle dependence showed sufficient accuracy, which makes it possible to apply the method in practice.

  14. Improving Accuracy and Temporal Resolution of Learning Curve Estimation for within- and across-Session Analysis

    PubMed Central

    Tabelow, Karsten; König, Reinhard; Polzehl, Jörg

    2016-01-01

    Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. Thereby, it is tacitly assumed that learning performance is constant within the moving windows, which, however, is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explored the dependency of these errors on window size, different statistical models, and learning phase. To reduce these errors in the analysis of single-subject data as well as on the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from an avoidance learning experiment with rodents, these methods revealed performance changes occurring at multiple time scales within and across training sessions which were otherwise obscured in the conventional analysis. Our work shows that the proper assessment of the behavioral dynamics of learning at high temporal resolution can shed new light on specific learning processes, and, thus, allows to refine existing learning concepts. It further disambiguates the interpretation of neurophysiological signal changes recorded during training in relation to learning. PMID:27303809

  15. Multimodal approach to seismic pavement testing

    USGS Publications Warehouse

    Ryden, N.; Park, C.B.; Ulriksen, P.; Miller, R.D.

    2004-01-01

A multimodal approach to nondestructive seismic pavement testing is described. The presented approach is based on multichannel analysis of all types of seismic waves propagating along the surface of the pavement. The multichannel data acquisition method is replaced by multichannel simulation with one receiver. This method uses only one accelerometer-receiver and a light hammer-source, to generate a synthetic receiver array. This data acquisition technique is made possible through careful triggering of the source and results in such simplification of the technique that it is made generally available. Multiple dispersion curves are automatically and objectively extracted using the multichannel analysis of surface waves processing scheme, which is described. Resulting dispersion curves in the high frequency range match with theoretical Lamb waves in a free plate. At lower frequencies there are several branches of dispersion curves corresponding to the lower layers of different stiffness in the pavement system. The observed behavior of multimodal dispersion curves is in agreement with theory, which has been validated through both numerical modeling and the transfer matrix method, by solving for complex wave numbers.

  16. Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method

    NASA Astrophysics Data System (ADS)

    Verachtert, R.; Lombaert, G.; Degrande, G.

    2018-03-01

    This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
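
    A minimal sketch of the circle-fitting step, assuming an algebraic (Kasa-type) least-squares fit to complex fk-spectrum samples near a mode; the angular sweep of the central angle used to extract dispersion and attenuation values is not reproduced, and the data are synthetic.

```python
import numpy as np

def fit_circle(z):
    """Algebraic (Kasa) circle fit to complex-valued samples z.

    Solves x^2 + y^2 = a*x + b*y + c in the least-squares sense and returns
    the circle centre (as a complex number) and radius.
    """
    x, y = z.real, z.imag
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    centre = complex(a / 2.0, b / 2.0)
    radius = np.sqrt(c + abs(centre) ** 2)
    return centre, radius

# Synthetic mode: points on an arc in the Nyquist plane plus noise
theta = np.linspace(0.2, 2.8, 40)
true_centre, true_r = 1.0 - 2.0j, 1.5
rng = np.random.default_rng(7)
z = true_centre + true_r * np.exp(1j * theta) \
    + 0.02 * (rng.normal(size=40) + 1j * rng.normal(size=40))
print(fit_circle(z))
```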

  17. Decision curve analysis revisited: overall net benefit, relationships to ROC curve analysis, and application to case-control studies.

    PubMed

    Rousson, Valentin; Zumbrunn, Thomas

    2011-06-22

    Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and into a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome, and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Eventually, we expose that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
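
    For concreteness, the sketch below computes the net benefit of a prediction model over a range of threshold probabilities, following the standard definition net benefit = TP/n − (FP/n)·pt/(1 − pt); the predicted risks and outcomes are simulated, and the case-control correction discussed in the abstract is not included.

```python
import numpy as np

def net_benefit(risk, outcome, thresholds):
    """Net benefit of treating patients whose predicted risk exceeds a threshold.

    risk: predicted probabilities; outcome: 1 if the event occurred, else 0.
    Returns one net benefit value per threshold probability pt.
    """
    risk = np.asarray(risk, dtype=float)
    outcome = np.asarray(outcome, dtype=int)
    n = len(outcome)
    nb = []
    for pt in thresholds:
        treated = risk >= pt
        tp = np.sum(treated & (outcome == 1))
        fp = np.sum(treated & (outcome == 0))
        nb.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(nb)

# Hypothetical predicted risks and outcomes
rng = np.random.default_rng(8)
risk = rng.uniform(0, 1, 500)
outcome = rng.binomial(1, risk)                  # a well-calibrated model
thresholds = np.arange(0.05, 0.95, 0.05)
print(np.round(net_benefit(risk, outcome, thresholds), 3))
```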

  18. Recession curve analysis for groundwater levels: case study in Latvia

    NASA Astrophysics Data System (ADS)

    Gailuma, A.; Vītola, I.; Abramenko, K.; Lauva, D.; Vircavs, V.; Veinbergs, A.; Dimanta, Z.

    2012-04-01

    Recession curve analysis is a powerful and effective analysis technique in many research areas related to hydrogeology where observations have to be made, such as water filtration and absorption of moisture, irrigation and drainage, planning of hydroelectric power production and chemical leaching (elution of chemical substances), as well as in other areas. The analysis of the recession curves of surface runoff hydrographs, performed to understand the after-effects of the interaction between precipitation and surface runoff, has proven itself in practice. The same method of recession curve analysis can be applied to observations of groundwater levels. Hydrographs for recession curve analysis were prepared manually for observation wells (MG2, BG2 and AG1) in agricultural monitoring sites in Latvia. In this study, data for declining periods were extracted from the available groundwater level monitoring data and split by month. The drop-down curves were moved together manually (by shifting the date) until the best match was found, thereby obtaining monthly drop-down curves representing each month separately. The monthly curves were then combined and joined manually to obtain drop-down curves characterizing the year for each well. During the recession curve analysis, rising segments were cut out of the initial curve, leaving only the declining parts, so that the curve more closely represents the groundwater flow and the impact of rain or drought periods is removed. The drop-down curve is thus the part of the hydrograph data in which discharge dominates, without the impact of precipitation. Using recession curve analysis theory, the ready-made tool "A Visual Basic Spreadsheet Macro for Recession Curve Analysis" was used for data selection and fitting of logarithmic functions (K. Posavec et al., Ground Water 44, no. 5: 764-767, 2006), and functions were also developed by manual processing of the data. For displaying the data, a mathematical model of data equalization was used to find the corresponding or closest logarithmic recession function for each graph. The obtained recession curves were similar but not identical. With full knowledge of the fluctuations of the groundwater level, it is possible to determine the filtration coefficient indirectly (without taking soil samples): a more rapid decline in the recession curve corresponds to better filtration conditions. This research could be very useful in construction planning, road construction, agriculture, etc. Acknowledgments: The authors gratefully acknowledge the funding from ESF Project "Establishment of interdisciplinary scientist group and modeling system for groundwater research" (Agreement No. 2009/0212/1DP/1.1.1.2.0/09/APIA/VIAA/060EF7)
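
    A minimal sketch of one common building block of recession curve analysis is given below: fitting the linear-reservoir (exponential) recession model Q(t) = Q0 exp(-t/k) to a single declining limb by log-linear regression. The cited spreadsheet macro and the manual curve-matching used in the study are not reproduced, and the data and names are synthetic and illustrative.

      import numpy as np

      def fit_exponential_recession(t, q):
          """Fit Q(t) = Q0 * exp(-t / k) by linear regression on log(Q).

          Returns (Q0, k); k is the recession constant in the same time units as t.
          """
          t = np.asarray(t, dtype=float)
          logq = np.log(np.asarray(q, dtype=float))
          slope, intercept = np.polyfit(t, logq, 1)
          return np.exp(intercept), -1.0 / slope

      # Hypothetical declining-limb observations (e.g. daily groundwater levels
      # above a reference datum during a rainless period).
      t = np.arange(0, 30)  # days
      q = 2.5 * np.exp(-t / 12.0) * (1 + 0.02 * np.random.default_rng(2).standard_normal(t.size))
      q0, k = fit_exponential_recession(t, q)
      print(round(q0, 3), round(k, 2))  # close to 2.5 and 12 days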

  19. Characterization of Type Ia Supernova Light Curves Using Principal Component Analysis of Sparse Functional Data

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.

    2018-04-01

    With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.

  20. Application of a Novel DCPD Adjustment Method for the J-R Curve Characterization: A study based on ORNL and ASTM Interlaboratory Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xiang; Sokolov, Mikhail A; Nanstad, Randy K

    Material fracture toughness in the fully ductile region can be described by a J-integral vs. crack growth resistance curve (J-R curve). As a conventional J-R curve measurement method, the elastic unloading compliance (EUC) method becomes impractical for elevated temperature testing due to relaxation of the material and friction induced back-up shape of the J-R curve. One alternative solution of J-R curve testing applies the Direct Current Potential Drop (DCPD) technique for measuring crack extension. However, besides crack growth, potential drop can also be influenced by plastic deformation, crack tip blunting, etc., and uncertainties exist in the current DCPD methodology especially in differentiating potential drop due to stable crack growth and due to material deformation. Thus, using DCPD for J-R curve determination remains a challenging task. In this study, a new adjustment procedure for applying DCPD to derive the J-R curve has been developed for conventional fracture toughness specimens, including compact tension, three-point bend, and disk-shaped compact specimens. Data analysis has been performed on Oak Ridge National Laboratory (ORNL) and American Society for Testing and Materials (ASTM) interlaboratory results covering different specimen thicknesses, test temperatures, and materials, to evaluate the applicability of the new DCPD adjustment procedure for J-R curve characterization. After applying the newly-developed procedure, direct comparison between the DCPD method and the normalization method on the same specimens indicated close agreement for the overall J-R curves, as well as the provisional values of fracture toughness near the onset of ductile crack extension, Jq, and of tearing modulus.

  1. Determination of volume-time curves for the right ventricle and its outflow tract for functional analyses.

    PubMed

    Gabbert, Dominik D; Entenmann, Andreas; Jerosch-Herold, Michael; Frettlöh, Felicitas; Hart, Christopher; Voges, Inga; Pham, Minh; Andrade, Ana; Pardun, Eileen; Wegner, P; Hansen, Traudel; Kramer, Hans-Heiner; Rickers, Carsten

    2013-12-01

    The determination of right ventricular volumes and function is of increasing interest for the postoperative care of patients with congenital heart defects. The presentation of volumetry data in terms of volume-time curves allows a comprehensive functional assessment. By using manual contour tracing, the generation of volume-time curves is exceedingly time-consuming. This study describes a fast and precise method for determining volume-time curves for the right ventricle and for the right ventricular outflow tract. The method applies contour detection and includes a feature for identifying the right ventricular outflow tract volume. The segregation of the outflow tract is performed by four-dimensional curved smooth boundary surfaces defined by prespecified anatomical landmarks. The comparison with manual contour tracing demonstrates that the method is accurate and improves the precision of the measurement. Compared to manual contour tracing the bias is <0.1% ± 4.1% (right ventricle) and -2.6% ± 20.0% (right ventricular outflow tract). The standard deviations of inter- and intraobserver variabilities for determining the volume of the right ventricular outflow tract are reduced to less than half the values of manual contour tracing. The time consumption per patient is reduced from 341 ± 80 min (right ventricle) and 56 ± 11 min (right ventricular outflow tract) using manual contour tracing to 46 ± 9 min for a combined analysis of right ventricle and right ventricular outflow tract. The analysis of volume-time curves for the right ventricle and its outflow tract discloses new evaluation methods in clinical routine and science. Copyright © 2013 Wiley Periodicals, Inc.

  2. Identification of Preferential Groundwater Flow Pathways from Local Tracer Breakthrough Curves

    NASA Astrophysics Data System (ADS)

    Kokkinaki, A.; Sleep, B. E.; Dearden, R.; Wealthall, G.

    2009-12-01

    Characterizing preferential groundwater flow paths in the subsurface is a key factor in the design of in situ remediation technologies. When applying reaction-based remediation methods, such as enhanced bioremediation, preferential flow paths result in fast solute migration and potentially ineffective delivery of reactants, thereby adversely affecting treatment efficiency. The presence of such subsurface conduits was observed at the SABRe (Source Area Bioremediation) research site. Non-uniform migration of contaminants and electron donor during the field trials of enhanced bioremediation supported this observation. To better determine the spatial flow field of the heterogeneous aquifer, a conservative tracer test was conducted. Breakthrough curves were obtained at a reference plane perpendicular to the principal groundwater flow direction. The resulting dataset was analyzed using three different methods: peak arrival times, analytical solution fitting and moment analysis. Interpretation using the peak arrival time method indicated areas of fast plume migration. However, some of the high velocities are supported by single data points, thus adding considerable uncertainty to the estimated velocity distribution. Observation of complete breakthrough curves indicated different types of solute breakthrough, corresponding to different transport mechanisms. Sharp peaks corresponded to high conductivity preferential flow pathways, whereas more dispersed breakthrough curves with long tails were characteristic of significant dispersive mixing and dilution. While analytical solutions adequately quantified flow characteristics for the first type of curves, they failed to do so for the second type, in which case they gave unrealistic results. Therefore, a temporal moment analysis was performed to obtain complete spatial distributions of mass recovery, velocity and dispersivity. Though the results of moment analysis qualitatively agreed with the results of previous methods, more realistic estimates of velocities were obtained and the presence of one major preferential flow pathway was confirmed. However, low mass recovery and deviations from the 10% scaling rule for dispersivities indicate that insufficient spatial and temporal monitoring, as well as interpolation and truncation errors, introduced uncertainty in the flow and transport parameters estimated by the method of moments. The results of the three analyses are valuable for enhancing the understanding of mass transport and remediation performance. Comparing the different interpretation methods showed that, as the amount of concentration data considered in the analysis increased, the derived velocity fields became smoother and the estimated local velocities and dispersivities became more realistic. In conclusion, moment analysis is a method that represents a smoothed average of the velocity across the entire breakthrough curve, whereas the peak arrival time, which may be a less well constrained estimate, represents the physical peak arrival and typically yields a higher velocity than the moment analysis. This is an important distinction when applying the results of the tracer test to field sites.
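
    The temporal moment analysis mentioned above can be sketched as follows: the zeroth moment, mean arrival time and temporal variance of a breakthrough curve are integrated numerically, and velocity and longitudinal dispersivity are then derived under the usual small-dispersion approximations. The distance, concentrations and names are hypothetical; this is not the authors' code.

      import numpy as np

      def _trapezoid(y, x):
          """Simple trapezoidal integration helper."""
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

      def temporal_moments(t, c, x):
          """Temporal moment analysis of a breakthrough curve c(t) observed at distance x.

          Returns (zeroth moment, mean arrival time, seepage velocity, longitudinal
          dispersivity) using the approximations tau ~ x / v and var_t ~ 2 D x / v^3,
          so that alpha = D / v ~ x * var_t / (2 * tau^2).
          """
          t = np.asarray(t, dtype=float)
          c = np.asarray(c, dtype=float)
          m0 = _trapezoid(c, t)
          tau = _trapezoid(t * c, t) / m0
          var_t = _trapezoid((t - tau) ** 2 * c, t) / m0
          v = x / tau
          alpha = x * var_t / (2.0 * tau ** 2)
          return m0, tau, v, alpha

      # Hypothetical breakthrough curve at x = 5 m (Gaussian-shaped pulse).
      t = np.linspace(0, 50, 501)                 # hours
      c = np.exp(-0.5 * ((t - 20.0) / 3.0) ** 2)  # mg/L
      print(temporal_moments(t, c, x=5.0))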

  3. A method for phenomenological analysis of ecological data.

    NASA Technical Reports Server (NTRS)

    Huang, H.-W.; Morowitz, H. J.

    1972-01-01

    The experimental meaning of the phenomenological differential equations for a competing population is reviewed. It is concluded that it is virtually impossible to construct the differential equations precise enough for studying stability. We consider instead a method of phenomenological analysis which can be applied to a set of population curves. We suggest an ecological index calculated from the population curves, which indicates a group property of the entire system. As a function of time, the index is presumably insensitive to Volterra type fluctuations. A marked increase of the index's value however indicates a marked change of the environmental conditions. It is not easy to deduce the group property from the population curves alone, because a change in population is in general due to the superposition of external disturbances and Volterra fluctuations.

  4. pROC: an open-source package for R and S+ to analyze and compare ROC curves.

    PubMed

    Robin, Xavier; Turck, Natacha; Hainard, Alexandre; Tiberti, Natalia; Lisacek, Frédérique; Sanchez, Jean-Charles; Müller, Markus

    2011-03-17

    Receiver operating characteristic (ROC) curves are useful tools to evaluate classifiers in biomedical and bioinformatics applications. However, conclusions are often reached through inconsistent use or insufficient statistical analysis. To support researchers in their ROC curves analysis we developed pROC, a package for R and S+ that contains a set of tools displaying, analyzing, smoothing and comparing ROC curves in a user-friendly, object-oriented and flexible interface. With data previously imported into the R or S+ environment, the pROC package builds ROC curves and includes functions for computing confidence intervals, statistical tests for comparing total or partial area under the curve or the operating points of different classifiers, and methods for smoothing ROC curves. Intermediary and final results are visualised in user-friendly interfaces. A case study based on published clinical and biomarker data shows how to perform a typical ROC analysis with pROC. pROC is a package for R and S+ specifically dedicated to ROC analysis. It proposes multiple statistical tests to compare ROC curves, and in particular partial areas under the curve, allowing proper ROC interpretation. pROC is available in two versions: in the R programming language or with a graphical user interface in the S+ statistical software. It is accessible at http://expasy.org/tools/pROC/ under the GNU General Public License. It is also distributed through the CRAN and CSAN public repositories, facilitating its installation.
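
    The following is not the pROC package (which is written for R and S+) but a minimal Python sketch of what an empirical ROC curve and its area under the curve amount to, for readers who want the computation spelled out; labels, scores and names are illustrative.

      import numpy as np

      def roc_curve(labels, scores):
          """Empirical ROC curve: false and true positive rates obtained by sweeping
          the decision threshold over the scores, sorted from high to low."""
          labels = np.asarray(labels, dtype=bool)
          order = np.argsort(-np.asarray(scores, dtype=float))
          tps = np.cumsum(labels[order])
          fps = np.cumsum(~labels[order])
          tpr = tps / labels.sum()
          fpr = fps / (~labels).sum()
          return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

      def auc(fpr, tpr):
          """Area under the ROC curve by the trapezoidal rule."""
          return float(np.sum(0.5 * (tpr[1:] + tpr[:-1]) * np.diff(fpr)))

      # Hypothetical biomarker: higher scores in cases than in controls.
      rng = np.random.default_rng(3)
      labels = np.r_[np.ones(100, bool), np.zeros(100, bool)]
      scores = np.r_[rng.normal(1.0, 1.0, 100), rng.normal(0.0, 1.0, 100)]
      fpr, tpr = roc_curve(labels, scores)
      print(round(auc(fpr, tpr), 3))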

  5. Arctic curves in path models from the tangent method

    NASA Astrophysics Data System (ADS)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

  6. [Application of melting curve to analyze genotype of Duffy blood group antigen Fy-a/b].

    PubMed

    Chen, Xue; Zhou, Chang-Hua; Hong, Ying; Gong, Tian-Xiang

    2012-12-01

    This study was aimed to establish the real-time multiple-PCR with melting curve analysis for Duffy blood group Fy-a/b genotyping. According to the sequence of mRNA coding for β-actin and Fy-a/b, the primers of β-actin and Fy-a/b were synthesized. The real-time multiple-PCR with melting curve analysis for Fy-a/b genotyping was established. The Fy-a/b genotyping of 198 blood donors in Chinese Chengdu area has been investigated by melting curve analysis and PCR-SSP. The results showed that the results of Fy-a/b genotype by melting curve analysis were consistent with PCR-SSP. In all of 198 donors in Chinese Chengdu, 178 were Fy(a) (+) (89.9%), 19 were Fy(a) (+) Fy(b) (+) (9.6%), and 1 was Fy(b) (+) (0.5%). The gene frequency of Fy(a) was 0.947, while that of Fy(b) was 0.053. It is concluded that the genotyping method of Duffy blood group with melting curve analysis is established, which can be used as a high-throughput screening tool for Duffy blood group genotyping; and the Fy(a) genotype is the major of Duffy blood group of donors in Chinese Chengdu area.

  7. [A method for the analysis of overlapped peaks in the high performance liquid chromatogram based on spectrum analysis].

    PubMed

    Liu, Bao; Fan, Xiaoming; Huo, Shengnan; Zhou, Lili; Wang, Jun; Zhang, Hui; Hu, Mei; Zhu, Jianhua

    2011-12-01

    A method was established to analyse overlapped chromatographic peaks based on the chromatographic-spectral data detected by a diode-array ultraviolet detector. In the method, the three-dimensional data were first de-noised and normalized; secondly, the differences and clustering of the spectra at different time points were calculated; then the purity of the whole chromatographic peak was analysed and the region was sought out in which the spectra at different time points were stable. The feature spectra were extracted from this spectrum-stable region as the basic foundation. The nonnegative least-squares method was chosen to separate the overlapped peaks and to obtain the flow curve based on each feature spectrum. The three-dimensional resolved chromatographic-spectral peak could then be obtained by matrix operations of the feature spectra with the flow curves. The results showed that this method could separate the overlapped peaks.
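
    A minimal sketch of the nonnegative least-squares step is given below: assuming feature spectra have already been extracted from spectrally stable regions, the contribution (flow/elution profile) of each component is recovered at every time point with scipy.optimize.nnls. The spectra, profiles and names are synthetic and illustrative.

      import numpy as np
      from scipy.optimize import nnls

      def resolve_overlapped_peaks(data, feature_spectra):
          """Recover per-component elution profiles from diode-array data.

          data            : (n_times, n_wavelengths) absorbance matrix
          feature_spectra : (n_wavelengths, n_components) spectra taken from
                            spectrally "pure" regions of the peak
          Returns an (n_times, n_components) matrix of nonnegative contributions.
          """
          profiles = np.zeros((data.shape[0], feature_spectra.shape[1]))
          for i, row in enumerate(data):
              profiles[i], _ = nnls(feature_spectra, row)
          return profiles

      # Two hypothetical overlapping peaks with distinct UV spectra.
      wl = np.linspace(200, 400, 101)
      s1 = np.exp(-0.5 * ((wl - 260) / 15) ** 2)
      s2 = np.exp(-0.5 * ((wl - 320) / 20) ** 2)
      t = np.linspace(0, 10, 200)
      c1 = np.exp(-0.5 * ((t - 4.0) / 0.6) ** 2)
      c2 = np.exp(-0.5 * ((t - 5.0) / 0.8) ** 2)
      D = np.outer(c1, s1) + np.outer(c2, s2)
      C = resolve_overlapped_peaks(D, np.column_stack([s1, s2]))
      print(np.allclose(C[:, 0], c1, atol=1e-4), np.allclose(C[:, 1], c2, atol=1e-4))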

  8. Converting HAZUS capacity curves to seismic hazard-compatible building fragility functions: effect of hysteretic models

    USGS Publications Warehouse

    Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem

    2008-01-01

    A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In the methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to the nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, here we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.

  9. New Insight into Combined Model and Revised Model for RTD Curves in a Multi-strand Tundish

    NASA Astrophysics Data System (ADS)

    Lei, Hong

    2015-12-01

    The analysis of the residence time distribution (RTD) curve is one of the important experimental techniques for optimizing tundish design. However, there are some issues with the RTD analysis models. Firstly, the combined (or mixed) model and the revised model give different analysis results for the same RTD curve. Secondly, different upper limits of the integral in the numerator for the mean residence time give different results for the same RTD curve. Thirdly, the negative dead volume fraction sometimes appears at the outer strand of the multi-strand tundish. In order to solve the above problems, it is necessary to have a deep insight into the RTD curve and to propose a reasonable method to analyze the RTD curve. The results show that (1) the revised model is not appropriate for treating the RTD curve; (2) the concept of the visual single-strand tundish and the combined model with the dimensionless time at the cut-off point can be applied to estimate the flow characteristics in the multi-strand tundish; and (3) the mean residence time at each exit is the key parameter for estimating the similarity of fluid flow among strands.
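
    As an illustration of the bookkeeping involved, the sketch below computes the mean residence time of a tracer RTD curve, truncated at a chosen multiple of the theoretical residence time, and a dead-volume fraction of the form 1 - t_mean/tau. This is a generic combined-model style calculation, not the specific cut-off criterion proposed in the paper, and all values and names are hypothetical.

      import numpy as np

      def rtd_summary(t, c, volume, flow_rate, theta_cut=2.0):
          """Mean residence time and dead-volume fraction from a tracer RTD curve.

          The concentration curve c(t) is truncated at theta_cut times the theoretical
          residence time tau = volume / flow_rate (a common convention in tundish RTD
          analysis); the dead volume fraction is estimated as 1 - t_mean / tau.
          """
          t = np.asarray(t, dtype=float)
          c = np.asarray(c, dtype=float)
          tau = volume / flow_rate
          keep = t <= theta_cut * tau
          t, c = t[keep], c[keep]
          area = np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t))
          t_mean = np.sum(0.5 * (t[1:] * c[1:] + t[:-1] * c[:-1]) * np.diff(t)) / area
          return t_mean, 1.0 - t_mean / tau

      # Hypothetical single-strand RTD curve for a 10 m^3 tundish at 0.02 m^3/s.
      t = np.linspace(0, 2000, 2001)        # s
      c = (t / 150.0) * np.exp(-t / 150.0)  # arbitrary RTD-like tracer response
      print(rtd_summary(t, c, volume=10.0, flow_rate=0.02))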

  10. MASW on the standard seismic prospective scale using full spread recording

    NASA Astrophysics Data System (ADS)

    Białas, Sebastian; Majdański, Mariusz; Trzeciak, Maciej; Gałczyński, Edward; Maksym, Andrzej

    2015-04-01

    The Multichannel Analysis of Surface Waves (MASW) is one of the seismic survey methods that use the dispersion curve of surface waves to describe the stiffness of the near surface. It is used mainly at the geotechnical engineering scale, with a total spread length between 5 and 450 m and a spread offset between 1 and 100 m; a hammer is the seismic source in these surveys. The standard procedure of a MASW survey is: data acquisition, dispersion analysis, and inversion of the extracted dispersion curve to obtain the closest theoretical curve. The final result includes shear-wave velocity (Vs) values at different depths along the surveyed lines. The main goal of this work is to expand this engineering method to a larger scale, with a standard prospecting spread length of 20 km, using 4.5 Hz vertical-component geophones. Standard vibroseis and explosive sources are used as the seismic source. Acquisition was conducted on the full spread during each single shot. The seismic data used for this analysis were acquired during the Braniewo 2014 project in northern Poland. The results achieved with the standard MASW procedure show that this method can also be used at a much larger scale; the only difference in methodology is that a much stronger seismic source is required.

  11. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2017-08-01

    This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. The methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of key process parameters including temperature, tension, pressure and velocity is calculated, and the single-parameter sensitivity curves are obtained. According to the analysis of the sensitivity curves, the stability and instability ranges of each parameter are identified. Finally, the optimization method of winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s].

  12. Proposed method for determining the thickness of glass in solar collector panels

    NASA Technical Reports Server (NTRS)

    Moore, D. M.

    1980-01-01

    An analytical method was developed for determining the minimum thickness for simply supported, rectangular glass plates subjected to uniform normal pressure environmental loads such as wind, earthquake, snow, and deadweight. The method consists of comparing an analytical prediction of the stress in the glass panel to a glass breakage stress determined from fracture mechanics considerations. Based on extensive analysis using the nonlinear finite element structural analysis program ARGUS, design curves for the structural analysis of simply supported rectangular plates were developed. These curves yield the center deflection, center stress and corner stress as a function of a dimensionless parameter describing the load intensity. A method of estimating the glass breakage stress as a function of a specified failure rate, degree of glass temper, design life, load duration time, and panel size is also presented.

  13. Conducting Meta-Analyses Based on p Values

    PubMed Central

    van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.

    2016-01-01

    Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466

  14. A dimension-wise analysis method for the structural-acoustic system with interval parameters

    NASA Astrophysics Data System (ADS)

    Xu, Menghui; Du, Jianke; Wang, Chong; Li, Yunlong

    2017-04-01

    The interval structural-acoustic analysis is mainly accomplished by interval and subinterval perturbation methods. Potential limitations of these intrusive methods include overestimation or the interval translation effect for the former and prohibitive computational cost for the latter. In this paper, a dimension-wise analysis method is thus proposed to overcome these potential limitations. In this method, a sectional curve of the system response surface along each input dimension is first extracted, and its minimal and maximal points are identified from its Legendre polynomial approximation. Two input vectors, i.e. the minimal and maximal input vectors, are then assembled dimension-wise from the minimal and maximal points of all sectional curves. Finally, the lower and upper bounds of the system response are computed by deterministic finite element analysis at the two input vectors. Two numerical examples are studied to demonstrate the effectiveness of the proposed method and show that, compared to the interval and subinterval perturbation methods, the proposed method achieves better accuracy without much compromise on efficiency, especially for nonlinear problems with large interval parameters.
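
    A sketch of the per-dimension step is given below: a sampled sectional curve is approximated by a Legendre series, and its minimal and maximal points are located from the stationary points of the fit together with the interval end points. The curve, polynomial degree and names are illustrative, not taken from the paper.

      import numpy as np
      from numpy.polynomial import Legendre

      def sectional_extrema(x_samples, y_samples, degree=6):
          """Locate the minimal and maximal points of a sectional curve.

          Fits a Legendre-series approximation to the sampled curve and checks the
          interval end points together with the real stationary points of the fit.
          Returns (x_at_min, x_at_max).
          """
          x_samples = np.asarray(x_samples, dtype=float)
          fit = Legendre.fit(x_samples, y_samples, degree)
          crit = fit.deriv().roots()
          crit = crit.real[np.isreal(crit)]
          crit = crit[(crit >= x_samples.min()) & (crit <= x_samples.max())]
          candidates = np.concatenate([[x_samples.min(), x_samples.max()], crit])
          values = fit(candidates)
          return candidates[np.argmin(values)], candidates[np.argmax(values)]

      # Hypothetical sectional curve of a response surface along one interval parameter.
      x = np.linspace(-1.0, 1.0, 21)
      y = np.cos(3.0 * x) + 0.2 * x
      print(sectional_extrema(x, y))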

  15. Methods for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry

    DOEpatents

    Chan, George C. Y. [Bloomington, IN; Hieftje, Gary M [Bloomington, IN

    2010-08-03

    A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset creating a calibrated first dataset curve. If the calibrated first dataset curve has a variability along the location within the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma related errors) for each sought-for analyte.

  16. Derivative based sensitivity analysis of gamma index

    PubMed Central

    Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T.

    2015-01-01

    Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare between measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare between any two dose distributions. It takes into effect both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as “pass.” Gamma analysis does not account for the gradient of the evaluated curve - it looks at only the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing 1 mm distance error and 1% dose error at each point. This was considered as the first of the two evaluated curves. By its nature, this curve is a smooth curve and would satisfy the pass criteria for all points in it. The second evaluated profile was generated as a sawtooth test profile (STTP) which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not a smooth one and would be obviously poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first and second order derivatives of the DDs (δD’, δD”) between these two curves were derived and used as the boundary values for evaluating the STTP against the RP. Even though the STTP passed the simple gamma pass criteria, it was found failing at many locations when the derivatives were used as the boundary values. The proposed derivative-based method can identify a noisy curve and can prove to be a useful tool for improving the sensitivity of the gamma index. PMID:26865761
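
    For reference, the baseline quantity that the proposed derivative-based check augments can be sketched as a standard 1D global gamma computation; the derivative bounds themselves are not reproduced here, and the error-function profiles below are only loosely modeled on the ones described in the abstract. All names are illustrative.

      import numpy as np
      from scipy.special import erf

      def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=1.0, dd_percent=1.0):
          """1D global gamma index of an evaluated profile against a reference profile.

          For each reference point the minimum generalised distance to the evaluated
          profile is taken; gamma <= 1 means the point passes the (DD, DTA) criteria.
          Dose difference is normalised to the maximum reference dose (global gamma).
          """
          dd = dd_percent / 100.0 * np.max(d_ref)
          gx = (np.asarray(x_eval)[None, :] - np.asarray(x_ref)[:, None]) / dta_mm
          gd = (np.asarray(d_eval)[None, :] - np.asarray(d_ref)[:, None]) / dd
          return np.sqrt(gx ** 2 + gd ** 2).min(axis=1)

      # Hypothetical penumbra-like profiles built from an error function.
      x = np.linspace(-10, 10, 201)                       # mm
      ref = 50.0 * (1.0 - erf(x / 3.0))                   # reference profile
      test = 50.0 * (1.0 - erf((x - 0.5) / 3.0)) * 1.005  # shifted, slightly scaled
      g = gamma_index_1d(x, ref, x, test, dta_mm=1.0, dd_percent=1.0)
      print(round(float((g <= 1.0).mean() * 100.0), 1), "% of points pass")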

  17. On measuring the scattering coefficient in a nondiffuse sound field

    NASA Astrophysics Data System (ADS)

    Kanev, N. G.

    2017-11-01

    The laws of sound decay in a cubic room, one wall of which is absorbing and the other scattering, are obtained. It is shown that under certain conditions, sound decay in a room occurs nonexponentially and the shape of the decay curve depends on the scattering coefficient of the walls. This makes it possible to suggest a method for measuring the scattering coefficient by analyzing the decay curve when the walls have sound-scattering materials and structures. Expressions are obtained for approximating the measured decay curve, and the boundaries of the method's applicability are determined.

  18. Method and system for real-time analysis of biosensor data

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel

    2014-08-19

    A method of biosensor-based detection of toxins includes the steps of providing a fluid to be analyzed having a plurality of photosynthetic organisms therein, wherein chemical, biological or radiological agents alter a nominal photosynthetic activity of the photosynthetic organisms. At a first time, a measured photosynthetic activity curve is obtained from the photosynthetic organisms. The measured curve is automatically compared to a reference photosynthetic activity curve to determine differences therebetween. The presence of the chemical, biological or radiological agents, or precursors thereof, is then identified, if present in the fluid, using the differences.

  19. Simultaneous detection of Fusarium culmorum and F. graminearum in plant material by duplex PCR with melting curve analysis.

    PubMed

    Brandfass, Christoph; Karlovsky, Petr

    2006-01-23

    Fusarium head blight (FHB) is a disease of cereal crops, which has a severe impact on wheat and barley production worldwide. Apart from reducing the yield and impairing grain quality, FHB leads to contamination of grain with toxic secondary metabolites (mycotoxins), which pose a health risk to humans and livestock. The Fusarium species primarily involved in FHB are F. graminearum and F. culmorum. A key prerequisite for a reduction in the incidence of FHB is an understanding of its epidemiology. We describe a duplex-PCR-based method for the simultaneous detection of F. culmorum and F. graminearum in plant material. Species-specific PCR products are identified by melting curve analysis performed in a real-time thermocycler in the presence of the fluorescent dye SYBR Green I. In contrast to multiplex real-time PCR assays, the method does not use doubly labeled hybridization probes. PCR with product differentiation by melting curve analysis offers a cost-effective means of qualitative analysis for the presence of F. culmorum and F. graminearum in plant material. This method is particularly suitable for epidemiological studies involving a large number of samples.

  20. Preconcentration for Improved Long-Term Monitoring of Contaminants in Groundwater: Sorbent Development

    DTIC Science & Technology

    2013-02-11

    calibration curves was ±5%. Ion chromatography (IC) was used for analysis of perchlorate and other ionic targets. Analysis was carried out on a... The methods utilize liquid or gas chromatography, techniques that do not lend themselves well to portable devices and methods. Portable methods are...

  1. Finding Planets in K2: A New Method of Cleaning the Data

    NASA Astrophysics Data System (ADS)

    Currie, Miles; Mullally, Fergal; Thompson, Susan E.

    2017-01-01

    We present a new method of removing systematic flux variations from K2 light curves by employing a pixel-level principal component analysis (PCA). This method decomposes the light curve into its principal components (eigenvectors), each with an associated eigenvalue, the value of which is correlated to how much influence the basis vector has on the shape of the light curve. This method assumes that the most influential basis vectors will correspond to the unwanted systematic variations in the light curve produced by K2’s constant motion. We correct the raw light curve by automatically fitting and removing the strongest principal components. The strongest principal components generally correspond to the flux variations that result from the motion of the star in the field of view. Our primary method of calculating the strongest principal components to correct for in the raw light curve estimates the noise by measuring the scatter in the light curve after using an algorithm for Savitzky-Golay detrending, which computes the combined photometric precision value (SG-CDPP value) used in classic Kepler. We calculate this value after correcting the raw light curve for each element in a list of cumulative sums of principal components so that we have as many noise estimate values as there are principal components. We then take the derivative of the list of SG-CDPP values and take the number of principal components that corresponds to the point at which the derivative effectively goes to zero. This is the optimal number of principal components to exclude from the refitting of the light curve. We find that a pixel-level PCA is sufficient for cleaning unwanted systematic and natural noise from K2’s light curves. We present preliminary results and a basic comparison to other methods of reducing the noise from the flux variations.
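
    A minimal sketch of one way to realize pixel-level PCA detrending is shown below: the principal components of the pixel time series are obtained by SVD, and the strongest components are fitted to the summed light curve by least squares and subtracted. The automated choice of the number of components via the derivative of SG-CDPP values described above is not reproduced, and the postage-stamp data and names are synthetic.

      import numpy as np

      def pca_detrend(pixel_flux, n_components=3):
          """Remove the strongest systematic trends from a summed light curve.

          pixel_flux : (n_cadences, n_pixels) array of pixel time series from the
                       target aperture. The principal components of the mean-subtracted
                       pixel time series serve as basis vectors for the systematics;
                       the first `n_components` are fitted to the raw light curve by
                       least squares and removed.
          """
          raw = pixel_flux.sum(axis=1)
          centred = pixel_flux - pixel_flux.mean(axis=0)
          u, s, vt = np.linalg.svd(centred, full_matrices=False)
          basis = u[:, :n_components]  # time-domain eigenvectors
          coeffs, *_ = np.linalg.lstsq(basis, raw - raw.mean(), rcond=None)
          return raw - basis @ coeffs

      # Hypothetical 1000-cadence, 20-pixel postage stamp with a shared sawtooth
      # systematic (roll motion) plus a transit-like dip in the true signal.
      rng = np.random.default_rng(4)
      n_cad, n_pix = 1000, 20
      roll = (np.arange(n_cad) % 50) / 50.0 - 0.5  # sawtooth systematic
      signal = np.ones(n_cad)
      signal[480:500] -= 0.01                      # 1% transit
      pix = (signal[:, None] + roll[:, None] * rng.random(n_pix)[None, :]
             + 0.001 * rng.standard_normal((n_cad, n_pix)))
      clean = pca_detrend(pix, n_components=2)
      print(clean.shape)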

  2. Analysis of Curved Target-Type Thrust Reversers

    DTIC Science & Technology

    1974-06-07

    methods for two-dimensional cases, the Levi-Civita method provides a variety of bucket shapes and enables one to round off the sharp corners of... surface. In the present work three methods are employed to investigate the deflection of inviscid, incompressible... curved surfaces: Levi-Civita's... shapes are shown in Fig. ... A special case for ... = 0.31416 and ... = 0.47124, ... is shown in Fig. 4. Evidently, Levi-Civita's...

  3. Diagnostics of Robust Growth Curve Modeling Using Student's "t" Distribution

    ERIC Educational Resources Information Center

    Tong, Xin; Zhang, Zhiyong

    2012-01-01

    Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions for both random effects and intraindividual measurement errors…

  4. Development of seismic fragility curves for low-rise masonry infilled reinforced concrete buildings by a coefficient-based method

    NASA Astrophysics Data System (ADS)

    Su, Ray Kai Leung; Lee, Chien-Liang

    2013-06-01

    This study presents a seismic fragility analysis and ultimate spectral displacement assessment of regular low-rise masonry infilled (MI) reinforced concrete (RC) buildings using a coefficient-based method. The coefficient-based method does not require a complicated finite element analysis; instead, it is a simplified procedure for assessing the spectral acceleration and displacement of buildings subjected to earthquakes. A regression analysis was first performed to obtain the best-fitting equations for the inter-story drift ratio (IDR) and period shift factor of low-rise MI RC buildings in response to the peak ground acceleration of earthquakes using published results obtained from shaking table tests. Both spectral acceleration- and spectral displacement-based fragility curves under various damage states (in terms of IDR) were then constructed using the coefficient-based method. Finally, the spectral displacements of low-rise MI RC buildings at the ultimate (or near-collapse) state obtained from this paper and the literature were compared. The simulation results indicate that the fragility curves obtained from this study and other previous work correspond well. Furthermore, most of the spectral displacements of low-rise MI RC buildings at the ultimate state from the literature fall within the bounded spectral displacements predicted by the coefficient-based method.
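
    As an illustration of the final product of such an analysis, the sketch below evaluates spectral-displacement-based fragility curves in the usual lognormal form P(DS >= ds | Sd) = Phi(ln(Sd / Sd_median) / beta). The medians and dispersions are placeholders for illustration only, not values from HAZUS or from this study.

      import math

      def fragility(sd, sd_median, beta):
          """Lognormal fragility curve: probability of reaching or exceeding a damage
          state at spectral displacement sd, given the median capacity sd_median and
          lognormal standard deviation beta."""
          return 0.5 * (1.0 + math.erf(math.log(sd / sd_median) / (beta * math.sqrt(2.0))))

      # Illustrative medians (cm) and dispersions for four damage states of a
      # hypothetical low-rise MI RC building (placeholder values).
      states = {"slight": (0.5, 0.7), "moderate": (1.0, 0.7),
                "extensive": (2.5, 0.8), "complete": (6.0, 0.9)}
      for name, (median, beta) in states.items():
          print(name, [round(fragility(sd, median, beta), 3) for sd in (0.5, 1.0, 2.0, 4.0)])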

  5. Method of analysis of asbestiform minerals by thermoluminescence

    DOEpatents

    Fisher, Gerald L.; Bradley, Edward W.

    1980-01-01

    A method for the qualitative and quantitative analysis of asbestiform minerals, including the steps of subjecting a sample to be analyzed to the thermoluminescent analysis, annealing the sample, subjecting the sample to ionizing radiation, and subjecting the sample to a second thermoluminescent analysis. Glow curves are derived from the two thermoluminescent analyses and their shapes then compared to established glow curves of known asbestiform minerals to identify the type of asbestiform in the sample. Also, during at least one of the analyses, the thermoluminescent response for each sample is integrated during a linear heating period of the analysis in order to derive the total thermoluminescence per milligram of sample. This total is a measure of the quantity of asbestiform in the sample and may also be used to identify the source of the sample.

  6. Analysis of hardening behavior of sheet metals by a new simple shear test method taking into account the Bauschinger effect

    NASA Astrophysics Data System (ADS)

    Bang, Sungsik; Rickhey, Felix; Kim, Minsoo; Lee, Hyungyil; Kim, Naksoo

    2013-12-01

    In this study we establish a process to predict hardening behavior considering the Bauschinger effect for zircaloy-4 sheets. When a metal is compressed after tension in forming, the yield strength decreases. For this reason, the Bauschinger effect should be considered in FE simulations of spring-back. We suggested a suitable specimen size and a method for determining the optimum tightening torque for simple shear tests. Shear stress-strain curves are obtained for five materials. We developed a method to convert the shear load-displacement curve to the effective stress-strain curve with FEA. We simulated the simple shear forward/reverse test using the combined isotropic/kinematic hardening model. We also investigated the change of the load-displacement curve by varying the hardening coefficients. We determined the hardening coefficients so that they follow the hardening behavior of zircaloy-4 in experiments.

  7. THE LIQUEFACTION RISK ANALYSIS OF CEMENT-TREATED SANDY GROUND CONSIDERING THE SPATIAL VARIABILITY OF SOIL STRENGTH

    NASA Astrophysics Data System (ADS)

    Kataoka, Norio; Kasama, Kiyonobu; Zen, Kouki; Chen, Guangqi

    This paper presents a probabilistic method for assessing the liquefaction risk of cement-treated ground, which is anti-liquefaction ground improved by cement-mixing. In this study, the liquefaction potential of cement-treated ground is analyzed statistically using Monte Carlo simulation based on nonlinear earthquake response analysis considering the spatial variability of soil properties. The seismic bearing capacity of partially liquefied ground is analyzed in order to estimate the damage costs induced by partial liquefaction. Finally, the annual liquefaction risk is calculated by multiplying the liquefaction potential with the damage costs. The results indicate that the proposed new method makes it possible to evaluate the probability of liquefaction and to estimate the damage costs using the hazard curve, the liquefaction-induced fragility curve, and the liquefaction risk curve.

  8. Bayesian hierarchical functional data analysis via contaminated informative priors.

    PubMed

    Scarpa, Bruno; Dunson, David B

    2009-09-01

    A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.

  9. A computer program (MACPUMP) for interactive aquifer-test analysis

    USGS Publications Warehouse

    Day-Lewis, F. D.; Person, M.A.; Konikow, Leonard F.

    1995-01-01

    This report introduces MACPUMP (Version 1.0), an aquifer-test-analysis package for use with Macintosh computers. The report outlines the input-data format, describes the solutions encoded in the program, explains the menu-items, and offers a tutorial illustrating the use of the program. The package reads list-directed aquifer-test data from a file, plots the data to the screen, generates and plots type curves for several different test conditions, and allows mouse-controlled curve matching. MACPUMP features pull-down menus, a simple text viewer for displaying data-files, and optional on-line help windows. This version includes the analytical solutions for nonleaky and leaky confined aquifers, using both type curves and straight-line methods, and for the analysis of single-well slug tests using type curves. An executable version of the code and sample input data sets are included on an accompanying floppy disk.

  10. Analysis of censored data.

    PubMed

    Lucijanic, Marko; Petrovecki, Mladen

    2012-01-01

    Analyzing events over time is often complicated by incomplete, or censored, observations. Special non-parametric statistical methods were developed to overcome difficulties in summarizing and comparing censored data. Life-table (actuarial) method and Kaplan-Meier method are described with an explanation of survival curves. For the didactic purpose authors prepared a workbook based on most widely used Kaplan-Meier method. It should help the reader understand how Kaplan-Meier method is conceptualized and how it can be used to obtain statistics and survival curves needed to completely describe a sample of patients. Log-rank test and hazard ratio are also discussed.
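
    The workbook logic described above can be condensed into a few lines: the sketch below implements the Kaplan-Meier product-limit estimator directly from (time, event) pairs. For real analyses a dedicated survival package is preferable; the patient data and names here are invented for illustration.

      import numpy as np

      def kaplan_meier(times, events):
          """Kaplan-Meier product-limit estimator.

          times  : follow-up time for each subject
          events : 1 if the event was observed, 0 if the observation is censored
          Returns (distinct event times, survival probability just after each).
          """
          times = np.asarray(times, dtype=float)
          events = np.asarray(events, dtype=int)
          surv, out_t, out_s = 1.0, [], []
          for t in np.unique(times[events == 1]):
              at_risk = np.sum(times >= t)
              died = np.sum((times == t) & (events == 1))
              surv *= 1.0 - died / at_risk
              out_t.append(t)
              out_s.append(surv)
          return np.array(out_t), np.array(out_s)

      # Small worked example: 10 patients, the last four censored.
      t = [6, 7, 10, 15, 19, 25, 6, 9, 11, 30]
      e = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
      print(*kaplan_meier(t, e))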

  11. Methods for threshold determination in multiplexed assays

    DOEpatents

    Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J

    2014-06-24

    Methods for determination of threshold values of signatures comprised in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established and a threshold for that signature is determined as a point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results together with a method for determination of a desired limit of detection of a signature in an assay are also described.

  12. B and V photometry and analysis of the eclipsing binary RZ CAS

    NASA Astrophysics Data System (ADS)

    Riazi, N.; Bagheri, M. R.; Faghihi, F.

    1994-01-01

    Photoelectric light curves of the eclipsing binary RZ Cas are presented for B and V filters. The light curves are analyzed for light and geometrical elements, starting with a previously suggested preliminary method. The approximate results thus obtained are then optimised through the Wilson-Devinney computer programs.

  13. Fractal Analysis of Rock Joint Profiles

    NASA Astrophysics Data System (ADS)

    Audy, Ondřej; Ficker, Tomáš

    2017-10-01

    Surface reliefs of rock joints are analyzed in geotechnics when shear strength of rocky slopes is estimated. The rock joint profiles actually are self-affine fractal curves and computations of their fractal dimensions require special methods. Many papers devoted to the fractal properties of these profiles were published in the past but only a few of those papers employed a convenient computational method that would have guaranteed a sound value of that dimension. As a consequence, anomalously low dimensions were presented. This contribution deals with two computational modifications that lead to sound fractal dimensions of the self-affine rock joint profiles. These are the modified box-counting method and the modified yard-stick method sometimes called the compass method. Both these methods are frequently applied to self-similar fractal curves but the self-affine profile curves due to their self-affine nature require modified computational procedures implemented in computer programs.
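
    For orientation, a plain box-counting estimate of a profile's fractal dimension is sketched below. As the paper stresses, plain box counting tends to misestimate the dimension of self-affine profiles, so this is only the baseline that the modified box-counting and compass methods improve upon; the profile and names are synthetic.

      import numpy as np

      def box_counting_dimension(x, y, sizes):
          """Basic box-counting estimate of the fractal dimension of a profile.

          The (x, y) points are binned into square boxes of each size; the dimension
          is the negative slope of log(box count) versus log(box size). Plain box
          counting is known to be unreliable for strongly self-affine profiles.
          """
          x = np.asarray(x, dtype=float)
          y = np.asarray(y, dtype=float)
          counts = []
          for s in sizes:
              ix = np.floor((x - x.min()) / s).astype(int)
              iy = np.floor((y - y.min()) / s).astype(int)
              counts.append(len(set(zip(ix.tolist(), iy.tolist()))))
          slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
          return -slope

      # Hypothetical rough joint profile: a sum of sine waves with decreasing scales.
      x = np.linspace(0.0, 100.0, 20001)  # mm
      y = sum(a * np.sin(2 * np.pi * x / lam)
              for a, lam in [(5, 40), (1.5, 9), (0.5, 2.1), (0.15, 0.55)])
      print(round(box_counting_dimension(x, y, sizes=[0.25, 0.5, 1.0, 2.0, 4.0]), 2))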

  14. Waveform fitting and geometry analysis for full-waveform lidar feature extraction

    NASA Astrophysics Data System (ADS)

    Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu

    2016-10-01

    This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
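
    The spline-plus-derivative step can be sketched as follows: a cubic smoothing spline is fitted to the received waveform and local peaks are taken where the first derivative of the spline crosses zero with a negative second derivative, after which features such as FWHM and amplitude could be read off around each peak. The two-return waveform below is synthetic and all names and parameters are illustrative.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      def waveform_peaks(t, w, smoothing=None):
          """Fit a cubic smoothing spline to a waveform and return peak times and
          amplitudes, detected where the first derivative crosses zero (downward)
          and the second derivative is negative."""
          spline = UnivariateSpline(t, w, k=3, s=smoothing)
          tt = np.linspace(t.min(), t.max(), 10 * t.size)
          d1 = spline.derivative(1)(tt)
          d2 = spline.derivative(2)(tt)
          idx = np.where((d1[:-1] > 0) & (d1[1:] <= 0) & (d2[:-1] < 0))[0]
          return tt[idx], spline(tt[idx])

      # Hypothetical two-return waveform (canopy + ground echoes) with noise.
      rng = np.random.default_rng(5)
      t = np.linspace(0, 60, 241)  # ns
      w = (1.0 * np.exp(-0.5 * ((t - 20) / 3) ** 2)
           + 0.6 * np.exp(-0.5 * ((t - 38) / 4) ** 2)
           + 0.02 * rng.standard_normal(t.size))
      peaks_t, peaks_a = waveform_peaks(t, w, smoothing=0.1)
      print(np.round(peaks_t, 1), np.round(peaks_a, 2))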

  15. Barcoding Melting Curve Analysis for Rapid, Sensitive, and Discriminating Authentication of Saffron (Crocus sativus L.) from Its Adulterants

    PubMed Central

    Cao, Liang; Yuan, Yuan; Chen, Min; Jin, Yan; Huang, Luqi

    2014-01-01

    Saffron (Crocus sativus L.) is one of the most important and expensive medicinal spice products in the world. Because of its high market value and premium price, saffron is often adulterated through the incorporation of other materials, such as Carthamus tinctorius L. and Calendula officinalis L. flowers, Hemerocallis L. petals, Daucus carota L. fleshy root, Curcuma longa L. rhizomes, Zea may L., and Nelumbo nucifera Gaertn. stigmas. To develop a straightforward, nonsequencing method for rapid, sensitive, and discriminating detection of these adulterants in traded saffron, we report here the application of a barcoding melting curve analysis method (Bar-MCA) that uses the universal chloroplast plant DNA barcoding region trnH-psbA to identify adulterants. When amplified at DNA concentrations and annealing temperatures optimized for the curve analysis, peaks were formed at specific locations for saffron (81.92°C) and the adulterants: D. carota (81.60°C), C. tinctorius (80.10°C), C. officinalis (79.92°C), Dendranthema morifolium (Ramat.) Tzvel. (79.62°C), N. nucifera (80.58°C), Hemerocallis fulva (L.) L. (84.78°C), and Z. mays (84.33°C). The constructed melting curves for saffron and its adulterants have significantly different peak locations or shapes. In conclusion, Bar-MCA could be a faster and more cost-effective method to authenticate saffron and detect its adulterants. PMID:25548775

  16. Barcoding melting curve analysis for rapid, sensitive, and discriminating authentication of saffron (Crocus sativus L.) from its adulterants.

    PubMed

    Jiang, Chao; Cao, Liang; Yuan, Yuan; Chen, Min; Jin, Yan; Huang, Luqi

    2014-01-01

    Saffron (Crocus sativus L.) is one of the most important and expensive medicinal spice products in the world. Because of its high market value and premium price, saffron is often adulterated through the incorporation of other materials, such as Carthamus tinctorius L. and Calendula officinalis L. flowers, Hemerocallis L. petals, Daucus carota L. fleshy root, Curcuma longa L. rhizomes, Zea may L., and Nelumbo nucifera Gaertn. stigmas. To develop a straightforward, nonsequencing method for rapid, sensitive, and discriminating detection of these adulterants in traded saffron, we report here the application of a barcoding melting curve analysis method (Bar-MCA) that uses the universal chloroplast plant DNA barcoding region trnH-psbA to identify adulterants. When amplified at DNA concentrations and annealing temperatures optimized for the curve analysis, peaks were formed at specific locations for saffron (81.92°C) and the adulterants: D. carota (81.60°C), C. tinctorius (80.10°C), C. officinalis (79.92°C), Dendranthema morifolium (Ramat.) Tzvel. (79.62°C), N. nucifera (80.58°C), Hemerocallis fulva (L.) L. (84.78°C), and Z. mays (84.33°C). The constructed melting curves for saffron and its adulterants have significantly different peak locations or shapes. In conclusion, Bar-MCA could be a faster and more cost-effective method to authenticate saffron and detect its adulterants.

  17. Sensitivity curves for searches for gravitational-wave backgrounds

    NASA Astrophysics Data System (ADS)

    Thrane, Eric; Romano, Joseph D.

    2013-12-01

    We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs “power-law integrated curves.” For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.

  18. The hyperbolic chemical bond: Fourier analysis of ground and first excited state potential energy curves of HX (X = H-Ne).

    PubMed

    Harrison, John A

    2008-09-04

    RHF/aug-cc-pVnZ, UHF/aug-cc-pVnZ, and QCISD/aug-cc-pVnZ, n = 2-5, potential energy curves of H2 X ¹Σg⁺ are analyzed by Fourier transform methods after transformation to a new coordinate system via an inverse hyperbolic cosine coordinate mapping. The Fourier frequency domain spectra are interpreted in terms of underlying mathematical behavior giving rise to distinctive features. There is a clear difference between the underlying mathematical nature of the potential energy curves calculated at the HF and full-CI levels. The method is particularly suited to the analysis of potential energy curves obtained at the highest levels of theory because the Fourier spectra are observed to be of a compact nature, with the envelope of the Fourier frequency coefficients decaying in magnitude in an exponential manner. The finite number of Fourier coefficients required to describe the CI curves allows for an optimum sampling strategy to be developed, corresponding to that required for exponential and geometric convergence. The underlying random numerical noise due to the finite convergence criterion is also a clearly identifiable feature in the Fourier spectrum. The methodology is applied to the analysis of MRCI potential energy curves for the ground and first excited states of HX (X = H-Ne). All potential energy curves exhibit structure in the Fourier spectrum consistent with the existence of resonances. The compact nature of the Fourier spectra following the inverse hyperbolic cosine coordinate mapping is highly suggestive that there is some advantage in viewing the chemical bond as having an underlying hyperbolic nature.

  19. Response analysis of curved bridge with unseating failure control system under near-fault ground motions

    NASA Astrophysics Data System (ADS)

    Zuo, Ye; Sun, Guangjun; Li, Hongjing

    2018-01-01

    Under the action of near-fault ground motions, curved bridges are prone to pounding, local damage of bridge components and even unseating. A multi-scale fine finite element model of a typical three-span curved bridge is established by considering the elastic-plastic behavior of piers and the pounding effect of adjacent girders. The nonlinear time-history method is used to study the seismic response of the curved bridge equipped with the unseating failure control system under the action of near-fault ground motion. An in-depth analysis is carried out to evaluate the control effect of the proposed unseating failure control system. The research results indicate that under the near-fault ground motion, the seismic response of the curved bridge is strong. The unseating failure control system performs effectively to reduce the pounding force of the adjacent girders and the probability of deck unseating.

  20. Flavor release measurement from gum model system.

    PubMed

    Ovejero-López, Isabel; Haahr, Anne-Mette; van den Berg, Frans; Bredie, Wender L P

    2004-12-29

    Flavor release from a mint-flavored chewing gum model system was measured by atmospheric pressure chemical ionization mass spectrometry (APCI-MS) and sensory time-intensity (TI). A data analysis method for handling the individual curves from both methods is presented. The APCI-MS data are ratio-scaled using the signal from acetone in the breath of subjects. Next, APCI-MS and sensory TI curves are smoothed by low-pass filtering. Principal component analysis of the individual curves is used to display graphically the product differentiation by APCI-MS or TI signals. It is shown that differences in gum composition can be measured by both instrumental and sensory techniques, providing comparable information. The peppermint oil level (0.5-2% w/w) in the gum influenced both the retronasal concentration and the perceived peppermint flavor. The sweeteners' (sorbitol or xylitol) effect is less apparent. Sensory adaptation and sensitivity differences of human perception versus APCI-MS detection might explain the divergence between the two dynamic measurement methods.

  1. The combined use of Green-Ampt model and Curve Number method as an empirical tool for loss estimation

    NASA Astrophysics Data System (ADS)

    Petroselli, A.; Grimaldi, S.; Romano, N.

    2012-12-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but its use is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed, incorporating the Green-Ampt (GA) infiltration model and aiming to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is here applied to a real case study and a sensitivity analysis concerning the remaining parameters is presented; results show that the CN4GA approach is an ideal candidate for rainfall excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.
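
    The calibration idea described above (use the SCS-CN loss volume to pin down the Green-Ampt conductivity, then let Green-Ampt distribute the losses in time) can be sketched as follows. This is a hedged illustration under simplified assumptions, not the authors' CN4GA code: the explicit Green-Ampt stepping, the suction and moisture-deficit values, and the toy storm are all assumed for the example.

```python
import numpy as np
from scipy.optimize import brentq

def green_ampt_infiltration(rain, dt, ks, psi=110.0, dtheta=0.3):
    """Cumulative Green-Ampt infiltration (mm) for a rainfall series (mm/h) with
    time step dt (h); infiltration capacity is f = ks * (1 + psi*dtheta/F)."""
    F, total = 1e-6, 0.0
    for r in rain:
        cap = ks * (1.0 + psi * dtheta / F)        # infiltration capacity (mm/h)
        f = min(r, cap)                            # actual infiltration rate (mm/h)
        F += f * dt
        total += f * dt
    return total

def calibrate_ks(rain, dt, scs_cn_loss, ks_lo=0.01, ks_hi=100.0):
    """Find ks so that total GA infiltration equals the SCS-CN loss volume."""
    g = lambda ks: green_ampt_infiltration(rain, dt, ks) - scs_cn_loss
    return brentq(g, ks_lo, ks_hi)

# Toy storm: twelve 5-minute steps of 30 mm/h; an SCS-CN loss of 15 mm is assumed
rain = np.full(12, 30.0)
ks_cal = calibrate_ks(rain, dt=5.0 / 60.0, scs_cn_loss=15.0)
```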

  2. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    NASA Astrophysics Data System (ADS)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and the Green-Ampt (GA) infiltration model, we have developed a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions so that the GA soil hydraulic parameters are expected to be insensitive toward the net hyetograph peak. To verify and evaluate this behaviour, a synthetic design hyetograph and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the parameter variability makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.

  3. An interactive graphics program to retrieve, display, compare, manipulate, curve fit, difference and cross plot wind tunnel data

    NASA Technical Reports Server (NTRS)

    Elliott, R. D.; Werner, N. M.; Baker, W. M.

    1975-01-01

    The Aerodynamic Data Analysis and Integration System (ADAIS), developed as a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc., was described. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity five times that of conventional manual methods of wind tunnel data analysis is routinely achieved using ADAIS. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least squares polynomial fit up to seventh order.

  4. Evaluation of Two Multiplex PCR-High-Resolution Melt Curve Analysis Methods for Differentiation of Campylobacter jejuni and Campylobacter coli Intraspecies.

    PubMed

    Banowary, Banya; Dang, Van Tuan; Sarker, Subir; Connolly, Joanne H; Chenu, Jeremy; Groves, Peter; Raidal, Shane; Ghorashi, Seyed Ali

    2018-03-01

    Campylobacter infection is a common cause of bacterial gastroenteritis in humans and remains a significant global public health issue. The capability of two multiplex PCR (mPCR)-high-resolution melt (HRM) curve analysis methods (i.e., mPCR1-HRM and mPCR2-HRM) to detect and differentiate 24 poultry isolates and three reference strains of Campylobacter jejuni and Campylobacter coli was investigated. Campylobacter jejuni and C. coli were successfully differentiated in both assays, but the differentiation power of mPCR2-HRM targeting the cadF gene was found superior to that of mPCR1-HRM targeting the gpsA gene or a hypothetical protein gene. However, higher intraspecies variation within C. coli and C. jejuni isolates was detected in mPCR1-HRM when compared with mPCR2-HRM. Both assays were rapid and required minimum interpretation skills for discrimination between and within Campylobacter species when using HRM curve analysis software.

  5. Investigating Convergence Patterns for Numerical Methods Using Data Analysis

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2013-01-01

    The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
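
    One common way to quantify such convergence patterns is to estimate the order of convergence directly from successive iteration errors. The sketch below, with an assumed Newton iteration as the test case, illustrates the standard ratio-of-logs estimate; it is an illustration of the general idea rather than the article's own exercises.

```python
import numpy as np

def convergence_order(errors):
    """Estimate the order p from successive errors e_n using
    p ~= log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
    e = np.asarray(errors, dtype=float)
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

# Newton's method for x^2 - 2 = 0 (root sqrt(2)); expect p -> 2 (quadratic convergence)
root, x, errs = np.sqrt(2.0), 3.0, []
for _ in range(5):
    x = x - (x * x - 2.0) / (2.0 * x)
    errs.append(abs(x - root))
print(convergence_order(errs))
```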

  6. Boundary element analysis of corrosion problems for pumps and pipes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyasaka, M.; Amaya, K.; Kishimoto, K.

    1995-12-31

    Three-dimensional (3D) and axi-symmetric boundary element methods (BEM) were developed to quantitatively estimate cathodic protection and macro-cell corrosion. For 3D analysis, a multiple-region method (MRM) was developed in addition to a single-region method (SRM). The validity and usefulness of the BEMs were demonstrated by comparing numerical results with experimental data from galvanic corrosion systems of a cylindrical model and a seawater pipe, and from a cathodic protection system of an actual seawater pump. It was shown that a highly accurate analysis could be performed for fluid machines handling seawater with complex 3D fields (e.g. seawater pump) by taking account of flow rate and time dependencies of the polarization curve. Compared to the 3D BEM, the axi-symmetric BEM permitted large reductions in numbers of elements and nodes, which greatly simplified analysis of axi-symmetric fields such as pipes. Computational accuracy and CPU time were compared between analyses using two approximation methods for polarization curves: a logarithmic-approximation method and a linear-approximation method.

  7. Gaia eclipsing binary and multiple systems. Supervised classification and self-organizing maps

    NASA Astrophysics Data System (ADS)

    Süveges, M.; Barblan, F.; Lecoeur-Taïbi, I.; Prša, A.; Holl, B.; Eyer, L.; Kochoska, A.; Mowlavi, N.; Rimoldini, L.

    2017-07-01

    Context. Large surveys producing tera- and petabyte-scale databases require machine-learning and knowledge discovery methods to deal with the overwhelming quantity of data and the difficulties of extracting concise, meaningful information with reliable assessment of its uncertainty. This study investigates the potential of a few machine-learning methods for the automated analysis of eclipsing binaries in the data of such surveys. Aims: We aim to aid the extraction of samples of eclipsing binaries from such databases and to provide basic information about the objects. We intend to estimate class labels according to two different, well-known classification systems, one based on the light curve morphology (EA/EB/EW classes) and the other based on the physical characteristics of the binary system (system morphology classes; detached through overcontact systems). Furthermore, we explore low-dimensional surfaces along which the light curves of eclipsing binaries are concentrated, and consider their use in the characterization of the binary systems and in the exploration of biases of the full unknown Gaia data with respect to the training sets. Methods: We have explored the performance of principal component analysis (PCA), linear discriminant analysis (LDA), Random Forest classification and self-organizing maps (SOM) for the above aims. We pre-processed the photometric time series by combining a double Gaussian profile fit and a constrained smoothing spline, in order to de-noise and interpolate the observed light curves. We achieved further denoising, and selected the most important variability elements from the light curves using PCA. Supervised classification was performed using Random Forest and LDA based on the PC decomposition, while SOM gives a continuous 2-dimensional manifold of the light curves arranged by a few important features. We estimated the uncertainty of the supervised methods due to the specific finite training set using ensembles of models constructed on randomized training sets. Results: We obtain excellent results (about 5% global error rate) with classification into light curve morphology classes on the Hipparcos data. The classification into system morphology classes using the Catalog and Atlas of Eclipsing binaries (CALEB) has a higher error rate (about 10.5%), most importantly due to the (sometimes strong) similarity of the photometric light curves originating from physically different systems. When trained on CALEB and then applied to Kepler-detected eclipsing binaries subsampled according to Gaia observing times, LDA and SOM provide tractable, easy-to-visualize subspaces of the full (functional) space of light curves that summarize the most important phenomenological elements of the individual light curves. The sequence of light curves ordered by their first linear discriminant coefficient is compared to results obtained using local linear embedding. The SOM method proves able to find a 2-dimensional embedded surface in the space of the light curves which separates the system morphology classes in its different regions, and also identifies a few other phenomena, such as the asymmetry of the light curves due to spots, eccentric systems, and systems with a single eclipse. Furthermore, when data from other surveys are projected to the same SOM surface, the resulting map yields a good overview of the general biases and distortions due to differences in time sampling or population.
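
    A highly simplified sketch of the supervised part of such a pipeline (PCA compression of phase-folded light curves followed by Random Forest classification) is given below. The toy light curves, class definitions and parameter choices are invented for illustration; the actual study also involves Gaussian-plus-spline fitting, LDA, SOM and Hipparcos/CALEB/Kepler data, none of which is reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
phase = np.linspace(0.0, 1.0, 100)

def toy_light_curve(kind):
    """Very crude stand-ins for detached (EA-like) vs overcontact (EW-like) curves."""
    if kind == 0:   # narrow, well-separated eclipses
        lc = (1.0 - 0.5 * np.exp(-((phase - 0.25) / 0.02) ** 2)
                  - 0.3 * np.exp(-((phase - 0.75) / 0.02) ** 2))
    else:           # continuous, sinusoid-like variation
        lc = 1.0 - 0.3 * (1.0 + np.cos(4.0 * np.pi * phase)) / 2.0
    return lc + rng.normal(0.0, 0.01, phase.size)

X = np.array([toy_light_curve(k % 2) for k in range(200)])
y = np.array([k % 2 for k in range(200)])

# PCA keeps a few dominant variability modes; Random Forest classifies on the scores
model = make_pipeline(PCA(n_components=5),
                      RandomForestClassifier(n_estimators=200, random_state=0))
print(cross_val_score(model, X, y, cv=5).mean())
```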

  8. Broadband turbulent spectra in gamma-ray burst light curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Putten, Maurice H. P. M.; Guidorzi, Cristiano; Frontera, Filippo, E-mail: mvp@sejong.ac.kr

    2014-05-10

    Broadband power density spectra offer a window to understanding turbulent behavior in the emission mechanism and, at the highest frequencies, in the putative inner engines powering long gamma-ray bursts (GRBs). We describe a chirp search method alongside Fourier analysis for signal detection in the Poisson noise-dominated, 2 kHz sampled, BeppoSAX light curves. An efficient numerical implementation is described in O(Nn log n) operations, where N is the number of chirp templates and n is the length of the light-curve time series, suited for embarrassingly parallel processing. For the detection of individual chirps over a 1 s duration, the method is one order of magnitude more sensitive in signal-to-noise ratio than Fourier analysis. The Fourier-chirp spectra of GRB 010408 and GRB 970816 show a continuation of the spectral slope with up to 1 kHz of turbulence identified in low-frequency Fourier analysis. The same continuation is observed in an average spectrum of 42 bright, long GRBs. An outlook on a similar analysis of upcoming gravitational wave data is included.

  9. Graphical evaluation of complexometric titration curves.

    PubMed

    Guinon, J L

    1985-04-01

    A graphical method, based on logarithmic concentration diagrams, for construction, without any calculations, of complexometric titration curves is examined. The titration curves obtained for different kinds of unidentate, bidentate and quadridentate ligands clearly show why only chelating ligands are usually used in titrimetric analysis. The method has also been applied to two practical cases where unidentate ligands are used: (a) the complexometric determination of mercury(II) with halides and (b) the determination of cyanide with silver, which involves both a complexation and a precipitation system; for this purpose construction of the diagrams for the HgCl(2)/HgCl(+)/Hg(2+) and Ag(CN)(2)(-)/AgCN/CN(-) systems is considered in detail.

  10. The feedback control research on straight and curved road with car-following model

    NASA Astrophysics Data System (ADS)

    Zheng, Yi-Ming; Cheng, Rong-Jun; Ge, Hong-Xia

    2017-07-01

    Taking account of a road consisting of a curved part and a straight part, an extended car-following model is proposed in this paper. A control signal including the velocity difference between the considered vehicle and the vehicle in front is taken into account. The control theory method is applied to the analysis of the stability condition for the model. Numerical simulations are implemented to prove that the stability of the traffic flow strengthens effectively with an increase of the radius of the curved road, and the control signal can suppress the traffic congestion. The results are in good agreement with the theoretical analysis.

  11. Applications of species accumulation curves in large-scale biological data analysis.

    PubMed

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2015-09-01

    The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
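
    The core of the classical Good-Toulmin estimator referenced above can be written down directly from the frequency-of-frequencies counts; the authors' contribution (rational-function approximations that stabilize the series for large extrapolations) is not reproduced in this hedged sketch, and the toy counts are invented.

```python
import numpy as np
from collections import Counter

def good_toulmin(counts, t):
    """Expected number of new species if sampling effort grows by a factor (1 + t),
    using the alternating Good-Toulmin series sum_j (-1)^(j+1) * t^j * n_j,
    where n_j is the number of species observed exactly j times.
    The raw series is only reliable for t <= 1."""
    freq_of_freq = Counter(counts)
    return sum(((-1) ** (j + 1)) * (t ** j) * n_j for j, n_j in freq_of_freq.items())

# Toy example: per-species observation counts from a pilot sample
species_counts = [1, 1, 1, 2, 2, 3, 5, 1, 1, 2]
print(good_toulmin(species_counts, t=1.0))  # expected new species if the sample is doubled
```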

  12. Applications of species accumulation curves in large-scale biological data analysis

    PubMed Central

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2016-01-01

    The species accumulation curve, or collector’s curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45–63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible. PMID:27252899

  13. FY17 Status Report on the Initial EPP Finite Element Analysis of Grade 91 Steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messner, M. C.; Sham, T. -L.

    This report describes a modification to the elastic-perfectly plastic (EPP) strain limits design method to account for cyclic softening in Gr. 91 steel. The report demonstrates that the unmodified EPP strain limits method described in the current ASME code case is not conservative for materials with substantial cyclic softening behavior like Gr. 91 steel. However, the EPP strain limits method can be modified to be conservative for softening materials by using softened isochronous stress-strain curves in place of the standard curves developed from unsoftened creep experiments. The report provides softened curves derived from inelastic material simulations and factors describing the transformation of unsoftened curves to a softened state. Furthermore, the report outlines a method for deriving these factors directly from creep/fatigue tests. If the material softening saturates, the proposed EPP strain limits method can be further simplified, providing a methodology based on temperature-dependent softening factors that could be implemented in an ASME code case allowing the use of the EPP strain limits method with Gr. 91. Finally, the report demonstrates the conservatism of the modified method when applied to inelastic simulation results and two bar experiments.

  14. On analyzing free-response data on location level

    NASA Astrophysics Data System (ADS)

    Bandos, Andriy I.; Obuchowski, Nancy A.

    2017-03-01

    Free-response ROC (FROC) data are typically collected when primary question of interest is focused on the proportions of the correct detection-localization of known targets and frequencies of false positive responses, which can be multiple per subject (image). These studies are particularly relevant for CAD and related applications. The fundamental tool of the location-level FROC analysis is the FROC curve. Although there are many methods of FROC analysis, as we describe in this work, some of the standard and popular approaches, while important, are not suitable for analyzing specifically the location-level FROC performance as summarized by the FROC curve. Analysis of the FROC curve, on the other hand, might not be straightforward. Recently we developed an approach for the location-level analysis of the FROC data using the well-known tools for clustered ROC analysis. In the current work, based on previously developed concepts, and using specific examples, we demonstrate the key reasons why specifically location-level FROC performance cannot be fully addressed by the common approaches as well as illustrate the proposed solution. Specifically, we consider the two most salient FROC approaches, namely JAFROC and the area under the exponentially transformed FROC curve (AFE) and show that clearly superior FROC curves can have lower values for these indices. We describe the specific features that make these approaches inconsistent with FROC curves. This work illustrates some caveats for using the common approaches for location-level FROC analysis and provides guidelines for the appropriate assessment or comparison of FROC systems.

  15. Constructing Pairing-Friendly Elliptic Curves under Embedding Degree 1 for Securing Critical Infrastructures.

    PubMed

    Wang, Maocai; Dai, Guangming; Choo, Kim-Kwang Raymond; Jayaraman, Prem Prakash; Ranjan, Rajiv

    2016-01-01

    Information confidentiality is an essential requirement for cyber security in critical infrastructure. Identity-based cryptography, an increasingly popular branch of cryptography, is widely used to protect the information confidentiality in the critical infrastructure sector due to the ability to directly compute the user's public key based on the user's identity. However, computational requirements complicate the practical application of Identity-based cryptography. In order to improve the efficiency of identity-based cryptography, this paper presents an effective method to construct pairing-friendly elliptic curves with low Hamming weight 4 under embedding degree 1. Based on the analysis of the Complex Multiplication (CM) method, the soundness of our method to calculate the characteristic of the finite field is proved. Then, three related algorithms to construct pairing-friendly elliptic curves are put forward. Ten elliptic curves with low Hamming weight 4 under 160 bits are presented to demonstrate the utility of our approach. Finally, the evaluation also indicates that it is more efficient to compute the Tate pairing with our curves than with those of Bertoni et al.

  16. Constructing Pairing-Friendly Elliptic Curves under Embedding Degree 1 for Securing Critical Infrastructures

    PubMed Central

    Dai, Guangming

    2016-01-01

    Information confidentiality is an essential requirement for cyber security in critical infrastructure. Identity-based cryptography, an increasingly popular branch of cryptography, is widely used to protect the information confidentiality in the critical infrastructure sector due to the ability to directly compute the user’s public key based on the user’s identity. However, computational requirements complicate the practical application of Identity-based cryptography. In order to improve the efficiency of identity-based cryptography, this paper presents an effective method to construct pairing-friendly elliptic curves with low Hamming weight 4 under embedding degree 1. Based on the analysis of the Complex Multiplication (CM) method, the soundness of our method to calculate the characteristic of the finite field is proved. Then, three related algorithms to construct pairing-friendly elliptic curves are put forward. Ten elliptic curves with low Hamming weight 4 under 160 bits are presented to demonstrate the utility of our approach. Finally, the evaluation also indicates that it is more efficient to compute the Tate pairing with our curves than with those of Bertoni et al. PMID:27564373

  17. Shape information from glucose curves: Functional data analysis compared with traditional summary measures

    PubMed Central

    2013-01-01

    Background Plasma glucose levels are important measures in medical care and research, and are often obtained from oral glucose tolerance tests (OGTT) with repeated measurements over 2–3 hours. It is common practice to use simple summary measures of OGTT curves. However, different OGTT curves can yield similar summary measures, and information of physiological or clinical interest may be lost. Our main aim was to extract information inherent in the shape of OGTT glucose curves, compare it with the information from simple summary measures, and explore the clinical usefulness of such information. Methods OGTTs with five glucose measurements over two hours were recorded for 974 healthy pregnant women in their first trimester. For each woman, the five measurements were transformed into smooth OGTT glucose curves by functional data analysis (FDA), a collection of statistical methods developed specifically to analyse curve data. The essential modes of temporal variation between OGTT glucose curves were extracted by functional principal component analysis. The resultant functional principal component (FPC) scores were compared with commonly used simple summary measures: fasting and two-hour (2-h) values, area under the curve (AUC) and simple shape index (2-h minus 90-min values, or 90-min minus 60-min values). Clinical usefulness of FDA was explored by regression analyses of glucose tolerance later in pregnancy. Results Over 99% of the variation between individually fitted curves was expressed in the first three FPCs, interpreted physiologically as “general level” (FPC1), “time to peak” (FPC2) and “oscillations” (FPC3). FPC1 scores correlated strongly with AUC (r=0.999), but less with the other simple summary measures (−0.42≤r≤0.79). FPC2 scores gave shape information not captured by simple summary measures (−0.12≤r≤0.40). FPC2 scores, but not FPC1 nor the simple summary measures, discriminated between women who did and did not develop gestational diabetes later in pregnancy. Conclusions FDA of OGTT glucose curves in early pregnancy extracted shape information that was not identified by commonly used simple summary measures. This information discriminated between women with and without gestational diabetes later in pregnancy. PMID:23327294
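
    A minimal sketch of the overall idea (smooth the five OGTT samples into curves, then extract principal modes of variation) is given below, approximating FPCA by ordinary PCA on densely discretized curves. The sample times, synthetic glucose values and the use of a cubic spline are assumptions for illustration, not the study's actual smoothing and FPCA procedure.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(1)
sample_times = np.array([0.0, 30.0, 60.0, 90.0, 120.0])   # minutes
grid = np.linspace(0.0, 120.0, 121)

# Synthetic OGTT measurements for 200 subjects (illustration only)
base = 4.8 + 2.5 * np.exp(-0.5 * ((sample_times - 40.0) / 30.0) ** 2)
obs = base + rng.normal(0.0, 0.3, size=(200, sample_times.size))

# Smooth each subject's five points into a curve, then stack into a matrix
curves = np.array([CubicSpline(sample_times, row)(grid) for row in obs])

# Functional PCA approximated by ordinary PCA on the discretized curves
mean_curve = curves.mean(axis=0)
centered = curves - mean_curve
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)
fpc_scores = centered @ Vt[:3].T          # FPC1-FPC3 scores per subject
print(explained[:3])
```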

  18. An adaptive-binning method for generating constant-uncertainty/constant-significance light curves with Fermi -LAT data

    DOE PAGES

    Lott, B.; Escande, L.; Larsson, S.; ...

    2012-07-19

    Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves with the data of the Fermi-Large Area Telescope (LAT). The adaptive-binning method enables more information to be encapsulated within the light curve than with the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any sources. Furthermore, this method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step. The reported mean flux and spectral index (assuming the spectrum is a power-law distribution) in the interval are calculated via the standard LAT analysis during a second step. The absence of major caveats associated with this method has been established with Monte-Carlo simulations. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.
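
    The essential adaptive-binning idea (extend each time bin until the estimated relative flux uncertainty drops below a target) can be illustrated on a toy counts series as below. This is a hedged sketch only: the real method works on LAT photon data with a likelihood analysis in a second step, whereas here a simple Poisson counting uncertainty stands in for the flux uncertainty.

```python
import numpy as np

def adaptive_bins(times, counts, target_rel_unc=0.15):
    """Grow each bin until the Poisson relative uncertainty sqrt(N)/N of the
    accumulated counts falls below target_rel_unc.
    Returns a list of (t_start, t_stop, total_counts)."""
    bins, i = [], 0
    while i < len(times):
        j, n = i, 0
        while j < len(times):
            n += counts[j]
            j += 1
            if n > 0 and np.sqrt(n) / n <= target_rel_unc:
                break
        bins.append((times[i], times[j - 1], n))
        i = j
    return bins

# Toy light curve: 1000 time stamps with variable Poisson counts
rng = np.random.default_rng(2)
t = np.arange(1000.0)
c = rng.poisson(lam=2.0 + 1.5 * np.sin(t / 50.0))
print(len(adaptive_bins(t, c)))     # fewer, wider bins where the source is faint
```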

  19. Meta-analysis of Diagnostic Accuracy and ROC Curves with Covariate Adjusted Semiparametric Mixtures.

    PubMed

    Doebler, Philipp; Holling, Heinz

    2015-12-01

    Many screening tests dichotomize a measurement to classify subjects. Typically a cut-off value is chosen in a way that allows identification of an acceptable number of cases relative to a reference procedure, but does not produce too many false positives at the same time. Thus for the same sample many pairs of sensitivities and false positive rates result as the cut-off is varied. The curve of these points is called the receiver operating characteristic (ROC) curve. One goal of diagnostic meta-analysis is to integrate ROC curves and arrive at a summary ROC (SROC) curve. Holling, Böhning, and Böhning (Psychometrika 77:106-126, 2012a) demonstrated that finite semiparametric mixtures can describe the heterogeneity in a sample of Lehmann ROC curves well; this approach leads to clusters of SROC curves of a particular shape. We extend this work with the help of the [Formula: see text] transformation, a flexible family of transformations for proportions. A collection of SROC curves is constructed that approximately contains the Lehmann family but in addition allows the modeling of shapes beyond the Lehmann ROC curves. We introduce two rationales for determining the shape from the data. Using the fact that each curve corresponds to a natural univariate measure of diagnostic accuracy, we show how covariate adjusted mixtures lead to a meta-regression on SROC curves. Three worked examples illustrate the method.

  20. Using newly developed multiplex polymerase chain reaction and melting curve analysis for detection and discrimination of β-lactamases in Escherichia coli isolates from intensive care patients.

    PubMed

    Chromá, Magdaléna; Hricová, Kristýna; Kolář, Milan; Sauer, Pavel; Koukalová, Dagmar

    2011-11-01

    A total of 78 bacterial strains with known β-lactamases were used to optimize a rapid detection system consisting of multiplex polymerase chain reaction and melting curve analysis to amplify and identify blaTEM, blaSHV, and blaCTX-M genes in a single reaction. Additionally, to evaluate the applicability of this method, 32 clinical isolates of Escherichia coli displaying an extended-spectrum β-lactamase phenotype from patients hospitalized at intensive care units were tested. Results were analyzed by the Rotor-Gene operating software and Rotor-Gene ScreenClust HRM Software. The individual melting curves differed by a temperature shift or curve shape, according to the presence of β-lactamase genes. With the use of this method and direct sequencing, blaCTX-M-15-like was identified as the most prevalent β-lactamase gene. In conclusion, this novel detection system seems to be a suitable tool for rapid detection of present β-lactamase genes and their characterization. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. CFD and Thermo Mechanical Analysis on Effect of Curved vs Step Surface in IC Engine Cylinder Head

    NASA Astrophysics Data System (ADS)

    Balaji, S.; Ganesh, N.; Kumarasamy, A.

    2017-05-01

    Current research in IC engines mainly focuses on various methods to achieve higher efficiency and high specific power. As a single design parameter, combustion chamber peak firing pressure has increased more than before. Apart from the structural aspects of withstanding these loads, the designer faces challenges in resolving the thermal aspects of the cylinder head. Methods to enhance the heat transfer without compromising load withstanding capability are being constantly explored. Conventional cylinder heads have a flat inner surface. In this paper we suggest a modification of the inner surface to enhance the heat transfer capability. To increase the heat transfer rate, the inner flame deck surface is configured as a curved or a stepped surface instead of flat. We have reported the effectiveness of the extent of curvature of the inner flame deck surface in a different technical paper. Here, we are making a direct comparison between the stepped and curved surfaces only. From this analysis it has been observed that the curved surface reduces the flame deck temperature considerably without compromising the structural strength factors compared to the stepped and flat surfaces.

  2. Application of the differential decay-curve method to γ-γ fast-timing lifetime measurements

    NASA Astrophysics Data System (ADS)

    Petkov, P.; Régis, J.-M.; Dewald, A.; Kisyov, S.

    2016-10-01

    A new procedure for the analysis of delayed-coincidence lifetime experiments focused on the Fast-timing case is proposed following the approach of the Differential decay-curve method. Examples of application of the procedure on experimental data reveal its reliability for lifetimes even in the sub-nanosecond range. The procedure is expected to improve both precision/reliability and treatment of systematic errors and scarce data as well as to provide an option for cross-check with the results obtained by means of other analyzing methods.

  3. Gaussian decomposition of high-resolution melt curve derivatives for measuring genome-editing efficiency

    PubMed Central

    Zaboikin, Michail; Freter, Carl

    2018-01-01

    We describe a method for measuring genome editing efficiency from in silico analysis of high-resolution melt curve data. The melt curve data derived from amplicons of genome-edited or unmodified target sites were processed to remove the background fluorescent signal emanating from free fluorophore and then corrected for temperature-dependent quenching of fluorescence of double-stranded DNA-bound fluorophore. Corrected data were normalized and numerically differentiated to obtain the first derivatives of the melt curves. These were then mathematically modeled as a sum or superposition of minimal number of Gaussian components. Using Gaussian parameters determined by modeling of melt curve derivatives of unedited samples, we were able to model melt curve derivatives from genetically altered target sites where the mutant population could be accommodated using an additional Gaussian component. From this, the proportion contributed by the mutant component in the target region amplicon could be accurately determined. Mutant component computations compared well with the mutant frequency determination from next generation sequencing data. The results were also consistent with our earlier studies that used difference curve areas from high-resolution melt curves for determining the efficiency of genome-editing reagents. The advantage of the described method is that it does not require calibration curves to estimate proportion of mutants in amplicons of genome-edited target sites. PMID:29300734
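
    The decomposition step can be illustrated by fitting the (negative) melt-curve derivative as a sum of two Gaussians and reading the mutant fraction from the component areas. The sketch below uses synthetic data and scipy's curve_fit; the background removal, quenching correction and normalization described in the record are not reproduced, and the peak temperatures and widths are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-0.5 * ((t - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - mu2) / s2) ** 2))

# Synthetic -dF/dT melt derivative: 70% wild-type (Tm 84.0) + 30% mutant (Tm 82.5)
temp = np.linspace(78.0, 90.0, 400)
rng = np.random.default_rng(3)
signal = two_gaussians(temp, 0.7, 84.0, 0.4, 0.3, 82.5, 0.5) + rng.normal(0.0, 0.005, temp.size)

p0 = [0.5, 84.0, 0.5, 0.2, 82.0, 0.5]                 # initial guesses (wild-type Tm assumed known)
popt, _ = curve_fit(two_gaussians, temp, signal, p0=p0)
area_wt = abs(popt[0] * popt[2])                       # Gaussian area is proportional to a * sigma
area_mut = abs(popt[3] * popt[5])
print("mutant fraction ~", area_mut / (area_wt + area_mut))
```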

  4. A retrospective analysis of compact fluorescent lamp experience curves and their correlations to deployment programs

    DOE PAGES

    Smith, Sarah Josephine; Wei, Max; Sohn, Michael D.

    2016-09-17

    Experience curves are useful for understanding technology development and can aid in the design and analysis of market transformation programs. Here, we employ a novel approach to create experience curves, to examine both global and North American compact fluorescent lamp (CFL) data for the years 1990–2007. We move away from the prevailing method of fitting a single, constant, exponential curve to data and instead search for break points where changes in the learning rate may have occurred. Our analysis suggests a learning rate of approximately 21% for the period of 1990–1997, and 51% and 79% in global and North American datasets, respectively, after 1998. We use price data for this analysis; therefore our learning rates encompass developments beyond typical “learning by doing”, including supply chain impacts such as market competition. We examine correlations between North American learning rates and the initiation of new programs, abrupt technological advances, and economic and political events, and find an increased learning rate associated with design advancements and federal standards programs. Our findings support the use of segmented experience curves for retrospective and prospective technology analysis, and may imply that investments in technology programs have contributed to an increase of the CFL learning rate.
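
    A minimal sketch of a segmented experience-curve fit is shown below: fit two log-log segments, pick the break point with the lowest total squared error, and convert each slope into a learning rate via 1 - 2^slope. The synthetic price series and the simple exhaustive break-point search are assumptions for illustration, not the authors' estimation procedure.

```python
import numpy as np

def learning_rate(slope):
    """Learning rate implied by a log2 price-vs-cumulative-production slope."""
    return 1.0 - 2.0 ** slope

def segmented_fit(cum_prod, price):
    """Fit two log-log segments, choosing the break point with the lowest total SSE.
    Returns (break_index, slope_before, slope_after)."""
    x, y = np.log2(cum_prod), np.log2(price)
    best = None
    for k in range(3, len(x) - 3):                      # require >= 3 points per segment
        sse, slopes = 0.0, []
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)
            sse += np.sum((np.polyval(coef, xs) - ys) ** 2)
            slopes.append(coef[0])
        if best is None or sse < best[0]:
            best = (sse, k, slopes[0], slopes[1])
    return best[1:]

# Synthetic data: 21% learning rate before the break, 50% after (continuous at the break)
cum = np.logspace(0, 4, 30)
slope1, slope2 = np.log2(1 - 0.21), np.log2(1 - 0.50)
price = np.where(cum < 100, cum ** slope1, (100 ** (slope1 - slope2)) * cum ** slope2)
k, s1, s2 = segmented_fit(cum, price)
print(learning_rate(s1), learning_rate(s2))
```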

  5. A retrospective analysis of compact fluorescent lamp experience curves and their correlations to deployment programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Sarah Josephine; Wei, Max; Sohn, Michael D.

    Experience curves are useful for understanding technology development and can aid in the design and analysis of market transformation programs. Here, we employ a novel approach to create experience curves, to examine both global and North American compact fluorescent lamp (CFL) data for the years 1990–2007. We move away from the prevailing method of fitting a single, constant, exponential curve to data and instead search for break points where changes in the learning rate may have occurred. Our analysis suggests a learning rate of approximately 21% for the period of 1990–1997, and 51% and 79% in global and North American datasets, respectively, after 1998. We use price data for this analysis; therefore our learning rates encompass developments beyond typical “learning by doing”, including supply chain impacts such as market competition. We examine correlations between North American learning rates and the initiation of new programs, abrupt technological advances, and economic and political events, and find an increased learning rate associated with design advancements and federal standards programs. Our findings support the use of segmented experience curves for retrospective and prospective technology analysis, and may imply that investments in technology programs have contributed to an increase of the CFL learning rate.

  6. Signatures of Steady Heating in Time Lag Analysis of Coronal Emission

    NASA Technical Reports Server (NTRS)

    Viall, Nicholeen M.; Klimchuk, James A.

    2016-01-01

    Among the multitude of methods used to investigate coronal heating, the time lag method of Viall & Klimchuk is becoming increasingly prevalent as an analysis technique that is complementary to those that are traditionally used. The time lag method cross correlates light curves at a given spatial location obtained in spectral bands that sample different temperature plasmas. It has been used most extensively with data from the Atmospheric Imaging Assembly on the Solar Dynamics Observatory. We have previously applied the time lag method to entire active regions and the surrounding quiet Sun and created maps of the results. We find that the majority of time lags are consistent with the cooling of coronal plasma that has been impulsively heated. Additionally, a significant fraction of the map area has a time lag of zero. This does not indicate a lack of variability. Rather, strong variability must be present, and it must occur in phase between the different channels. We have previously shown that these zero time lags are consistent with the transition region response to coronal nanoflares, although other explanations are possible. A common misconception is that the zero time lag indicates steady emission resulting from steady heating. Using simulated and observed light curves, we demonstrate here that highly correlated light curves at zero time lag are not compatible with equilibrium solutions. Such light curves can only be created by evolution.
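
    The core cross-correlation step of the time lag method can be sketched as below: shift one channel's light curve relative to the other and take the lag that maximizes the normalized correlation. The toy Gaussian light curves and the 12 s cadence are assumptions for illustration; the actual per-pixel analysis of AIA images is not reproduced here.

```python
import numpy as np

def time_lag(lc_a, lc_b, cadence, max_lag_steps=200):
    """Lag (in time units) of lc_b relative to lc_a that maximizes the normalized
    cross-correlation; positive lag means lc_b peaks after lc_a."""
    a = (lc_a - lc_a.mean()) / lc_a.std()
    b = (lc_b - lc_b.mean()) / lc_b.std()
    lags = np.arange(-max_lag_steps, max_lag_steps + 1)
    cc = []
    for k in lags:
        if k >= 0:
            cc.append(np.mean(a[:len(a) - k] * b[k:]))
        else:
            cc.append(np.mean(a[-k:] * b[:len(b) + k]))
    return lags[int(np.argmax(cc))] * cadence

# Toy cooling signature: the cooler channel peaks 500 s after the hotter one
t = np.arange(0.0, 6000.0, 12.0)                    # 12 s cadence
hot = np.exp(-0.5 * ((t - 2000.0) / 300.0) ** 2)
cool = np.exp(-0.5 * ((t - 2500.0) / 300.0) ** 2)
print(time_lag(hot, cool, cadence=12.0))            # ~ +500 s
```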

  7. Dynamic rating curve assessment for hydrometric stations and computation of the associated uncertainties: Quality and station management indicators

    NASA Astrophysics Data System (ADS)

    Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan

    2014-09-01

    A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curves, the quality of fit of the curve to these measurements and the constant changes in the river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A “dynamic” method has been developed to compute rating curves while calculating associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of gaugings and the increase of the uncertainty of discharge estimates with the age of the rating curve computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When and in what range of water flow rates should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 [km2]) is used as an example throughout the paper. Other stations are used to illustrate certain points.
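
    A single rating curve of the usual power-law form Q = a(h - h0)^b can be fitted to a set of gaugings as sketched below; the dynamic method described above goes further by recomputing such a curve for each gauging and modelling how its uncertainty grows with age, which is not reproduced here. The gauging values and starting parameters are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    """Stage-discharge relation Q = a * (h - h0)**b."""
    return a * np.clip(h - h0, 1e-6, None) ** b

# Gaugings: stage h (m) and measured discharge Q (m^3/s) -- synthetic values
h_obs = np.array([0.42, 0.55, 0.70, 0.88, 1.10, 1.35, 1.60, 1.95])
q_obs = np.array([1.1, 2.4, 4.6, 8.2, 14.5, 23.0, 34.0, 55.0])

popt, pcov = curve_fit(rating_curve, h_obs, q_obs, p0=[20.0, 0.2, 1.7])
q_pred = rating_curve(np.array([1.2]), *popt)          # discharge estimate at a 1.2 m stage
rel_unc = np.sqrt(np.diag(pcov)) / np.abs(popt)        # crude parameter uncertainty
```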

  8. Motion state analysis of space target based on optical cross section

    NASA Astrophysics Data System (ADS)

    Tian, Qichen; Li, Zhi; Xu, Can; Liu, Chenghao

    2017-10-01

    To address the problem that existing motion state analysis methods for space targets based on OCS are not tied to the real motion state, this paper proposes an OCS-based method for analyzing the motion state of space targets. A three-dimensional model of a real STSS satellite is first established, the satellite's surface is discretized into elements, and materials are assigned to each panel according to the actual configuration of the satellite. A motion scene is then set up in STK according to the orbit parameters of the STSS satellite, with the motion state set to a three-axis stabilized state and a slowly rotating unstable state, respectively. In each of these two states, the occlusion condition of the surface elements is first determined and the effective elements are selected. Then, the coordinates of the observation station and of the Sun in the satellite body coordinate system are input into the OCS calculation program, and the OCS variation curves of the STSS satellite in the three-axis stabilized state and the slowly rotating unstable state are obtained. Combining the satellite surface structure and the payload layout, the OCS variation curve of the three-axis stabilized satellite is analyzed, and it is concluded that the OCS curve fluctuates up and down when sunlight irradiates the payload area. Using spectral analysis, autocorrelation analysis and the cross-residual method, the rotation rate of the satellite in the slowly rotating unstable state is analyzed, and the rotation rate is successfully retrieved. Comparison of the three methods shows that the cross-residual method is the most accurate.

  9. Evaluation of predictive capacities of biomarkers based on research synthesis.

    PubMed

    Hattori, Satoshi; Zhou, Xiao-Hua

    2016-11-10

    The objective of diagnostic studies or prognostic studies is to evaluate and compare predictive capacities of biomarkers. Suppose we are interested in evaluation and comparison of predictive capacities of continuous biomarkers for a binary outcome based on research synthesis. In the analysis of each study, subjects are often classified into high-expression and low-expression groups according to a cut-off value, and statistical analysis is based on a 2 × 2 table defined by the response and the high expression or low expression of the biomarker. Because the cut-off is study specific, it is difficult to interpret a combined summary measure such as an odds ratio based on the standard meta-analysis techniques. The summary receiver operating characteristic curve is a useful method for meta-analysis of diagnostic studies in the presence of heterogeneity of cut-off values to examine discriminative capacities of biomarkers. We develop a method to estimate positive or negative predictive curves, which are an alternative to the receiver operating characteristic curve, based on information reported in published papers of each study. These predictive curves provide a useful graphical presentation of pairs of positive and negative predictive values and allow us to compare predictive capacities of biomarkers of different scales in the presence of heterogeneity in cut-off values among studies. Copyright © 2016 John Wiley & Sons, Ltd.

  10. S-curve networks and an approximate method for estimating degree distributions of complex networks

    NASA Astrophysics Data System (ADS)

    Guo, Jin-Li

    2010-12-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve). The growth trend of IPv4 addresses in China is forecast, providing reference values for optimizing the allocation of IPv4 address resources and for the development of IPv6. Based on the observed laws of IPv4 growth, namely bulk growth and a finite growth limit, the paper proposes a finite network model with bulk growth, which is said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes and is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. This method can overcome a shortcoming of the Barabási-Albert method commonly used in current network research.
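
    Fitting an S curve (logistic curve) to cumulative counts is straightforward with a nonlinear least-squares routine, as sketched below. The yearly values are synthetic stand-ins, not the Chinese IPv4 statistics used in the paper, and the parameterization K/(1 + exp(-r(t - t0))) is one common choice among several.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """S curve: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic yearly cumulative address counts (millions), illustration only
years = np.arange(2000, 2013, dtype=float)
counts = logistic(years, K=330.0, r=0.6, t0=2008.0) \
         + np.random.default_rng(4).normal(0, 3, years.size)

popt, _ = curve_fit(logistic, years, counts, p0=[300.0, 0.5, 2007.0])
forecast_2015 = logistic(2015.0, *popt)   # saturating forecast, unlike pure power-law growth
```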

  11. Determination of the human spine curve based on laser triangulation.

    PubMed

    Poredoš, Primož; Čelan, Dušan; Možina, Janez; Jezeršek, Matija

    2015-02-05

    The main objective of the present method was to automatically obtain a spatial curve of the thoracic and lumbar spine based on a 3D shape measurement of a human torso with developed scoliosis. Manual determination of the spine curve, which was based on palpation of the thoracic and lumbar spinous processes, was found to be an appropriate way to validate the method. Therefore a new, noninvasive, optical 3D method for human torso evaluation in medical practice is introduced. Twenty-four patients with confirmed clinical diagnosis of scoliosis were scanned using a specially developed 3D laser profilometer. The measuring principle of the system is based on laser triangulation with one-laser-plane illumination. The measurement took approximately 10 seconds for 700 mm of longitudinal translation along the back. The single point measurement accuracy was 0.1 mm. Computer analysis of the measured surface returned two 3D curves. The first curve was determined by manual marking (manual curve), and the second was determined by detecting surface curvature extremes (automatic curve). The manual and automatic curve comparison was given as the root mean square deviation (RMSD) for each patient. The intra-operator study involved assessing 20 successive measurements of the same person, and the inter-operator study involved assessing measurements from 8 operators. The results obtained for the 24 patients showed that the typical RMSD between the manual and automatic curve was 5.0 mm in the frontal plane and 1.0 mm in the sagittal plane, which is a good result compared with palpatory accuracy (9.8 mm). The intra-operator repeatability of the presented method in the frontal and sagittal planes was 0.45 mm and 0.06 mm, respectively. The inter-operator repeatability assessment shows that the presented method is invariant to the operator of the computer program. The main novelty of the presented paper is the development of a new, non-contact method that provides a quick, precise and non-invasive way to determine the spatial spine curve for patients with developed scoliosis and the validation of the presented method using the palpation of the spinous processes, where no harmful ionizing radiation is present.

  12. Evaluation of asymmetries of blood flow rate and of circulation time by intravenous radionuclide cerebral angiography in patients with ischemic completed stroke.

    PubMed

    Bartolini, A; Primavera, A; Gasparetto, B

    1984-12-01

    155 patients with ischemic completed stroke of varying severity and outcome have been evaluated by radionuclide cerebral angiography with analysis of regional time-activity curves. Two parameters have been evaluated: the area under the upslope of the curve (Aup), reflecting regional blood flow rate, and the moment of the whole curve, reflecting tracer circulation time (rABCT). Combination of these two methods ensured increased detection of perfusion asymmetries.

  13. WTAQ - A computer program for aquifer-test analysis of confined and unconfined aquifers

    USGS Publications Warehouse

    Barlow, P.M.; Moench, A.F.

    2004-01-01

    Computer program WTAQ was developed to implement a Laplace-transform analytical solution for axial-symmetric flow to a partially penetrating, finite-diameter well in a homogeneous and anisotropic unconfined (water-table) aquifer. The solution accounts for wellbore storage and skin effects at the pumped well, delayed response at an observation well, and delayed or instantaneous drainage from the unsaturated zone. For the particular case of zero drainage from the unsaturated zone, the solution simplifies to that of axial-symmetric flow in a confined aquifer. WTAQ calculates theoretical time-drawdown curves for the pumped well and observation wells and piezometers. The theoretical curves are used with measured time-drawdown data to estimate hydraulic parameters of confined or unconfined aquifers by graphical type-curve methods or by automatic parameter-estimation methods. Parameters that can be estimated are horizontal and vertical hydraulic conductivity, specific storage, and specific yield. A sample application illustrates use of WTAQ for estimating hydraulic parameters of a hypothetical, unconfined aquifer by type-curve methods. Copyright ASCE 2004.

  14. Advanced composites structural concepts and materials technologies for primary aircraft structures: Structural response and failure analysis

    NASA Technical Reports Server (NTRS)

    Dorris, William J.; Hairr, John W.; Huang, Jui-Tien; Ingram, J. Edward; Shah, Bharat M.

    1992-01-01

    Non-linear analysis methods were adapted and incorporated in a finite element based DIAL code. These methods are necessary to evaluate the global response of a stiffened structure under combined in-plane and out-of-plane loading. These methods include the Arc Length method and target point analysis procedure. A new interface material model was implemented that can model elastic-plastic behavior of the bond adhesive. Direct application of this method is in skin/stiffener interface failure assessment. Addition of the AML (angle minus longitudinal or load) failure procedure and Hashin's failure criteria provides added capability in the failure predictions. Interactive Stiffened Panel Analysis modules were developed as interactive pre- and post-processors. Each module provides the means of performing self-initiated finite element based analysis of primary structures such as a flat or curved stiffened panel; a corrugated flat sandwich panel; and a curved geodesic fuselage panel. This module brings finite element analysis into the design of composite structures without the requirement for the user to know much about the techniques and procedures needed to actually perform a finite element analysis from scratch. An interactive finite element code was developed to predict bolted joint strength considering material and geometrical non-linearity. The developed method conducts an ultimate strength failure analysis using a set of material degradation models.

  15. Modeling of a Robust Confidence Band for the Power Curve of a Wind Turbine.

    PubMed

    Hernandez, Wilmar; Méndez, Alfredo; Maldonado-Correa, Jorge L; Balleteros, Francisco

    2016-12-07

    Having an accurate model of the power curve of a wind turbine allows us to better monitor its operation and plan storage capacity. Since wind speed and direction are of a highly stochastic nature, the forecasting of the power generated by the wind turbine is of the same nature as well. In this paper, a method for obtaining a robust confidence band containing the power curve of a wind turbine under test conditions is presented. Here, the confidence band is bounded by two curves which are estimated using parametric statistical inference techniques. However, the observations that are used for carrying out the statistical analysis are obtained by using the binning method, and in each bin, the outliers are eliminated by using a censorship process based on robust statistical techniques. Then, the observations that are not outliers are divided into observation sets. Finally, both the power curve of the wind turbine and the two curves that define the robust confidence band are estimated using each of the previously mentioned observation sets.
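
    The binning-plus-censorship step can be illustrated with a simple median/MAD outlier rule per wind-speed bin, as in the sketch below. The bin width, the robust threshold, the empirical percentile band and the synthetic SCADA-like data are all assumptions for illustration; the paper's parametric inference for the band is not reproduced.

```python
import numpy as np

def robust_power_curve(wind, power, bin_width=0.5, k=3.0):
    """Bin wind speed, discard outliers beyond k robust standard deviations
    (median absolute deviation scaled by 1.4826) in each bin, then return
    bin centers, mean power and an empirical 2.5-97.5% band."""
    edges = np.arange(wind.min(), wind.max() + bin_width, bin_width)
    centers, mean_p, lo_p, hi_p = [], [], [], []
    for a, b in zip(edges[:-1], edges[1:]):
        p = power[(wind >= a) & (wind < b)]
        if p.size < 10:
            continue
        med = np.median(p)
        mad = 1.4826 * np.median(np.abs(p - med))
        keep = p[np.abs(p - med) <= k * mad] if mad > 0 else p
        centers.append(0.5 * (a + b))
        mean_p.append(keep.mean())
        lo_p.append(np.percentile(keep, 2.5))
        hi_p.append(np.percentile(keep, 97.5))
    return np.array(centers), np.array(mean_p), np.array(lo_p), np.array(hi_p)

# Synthetic SCADA-like data: cubic region plus rated power, with a few outliers
rng = np.random.default_rng(5)
v = rng.uniform(3.0, 20.0, 5000)
P = np.minimum(0.5 * v ** 3, 2000.0) + rng.normal(0, 30.0, v.size)
P[rng.choice(v.size, 50, replace=False)] = 0.0          # curtailment/outage outliers
vc, pm, plo, phi = robust_power_curve(v, P)
```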

  16. Modeling of a Robust Confidence Band for the Power Curve of a Wind Turbine

    PubMed Central

    Hernandez, Wilmar; Méndez, Alfredo; Maldonado-Correa, Jorge L.; Balleteros, Francisco

    2016-01-01

    Having an accurate model of the power curve of a wind turbine allows us to better monitor its operation and plan storage capacity. Since wind speed and direction are of a highly stochastic nature, the forecasting of the power generated by the wind turbine is of the same nature as well. In this paper, a method for obtaining a robust confidence band containing the power curve of a wind turbine under test conditions is presented. Here, the confidence band is bounded by two curves which are estimated using parametric statistical inference techniques. However, the observations that are used for carrying out the statistical analysis are obtained by using the binning method, and in each bin, the outliers are eliminated by using a censorship process based on robust statistical techniques. Then, the observations that are not outliers are divided into observation sets. Finally, both the power curve of the wind turbine and the two curves that define the robust confidence band are estimated using each of the previously mentioned observation sets. PMID:27941604

  17. EMG circuit design and AR analysis of EMG signs.

    PubMed

    Hardalaç, Firat; Canal, Rahmi

    2004-12-01

    In this study, an electromyogram (EMG) circuit was designed and tested on 27 people. Autoregressive (AR) analysis of EMG signals recorded over the ulnar nerve region of the right hand in the resting position was performed. The AR method, used especially for calculating the spectra of stationary signals, is suited to frequency analysis of signals whose frequency response shows sharp peaks and valleys. In this study, AR analysis of the EMG signals in the frequency-time domain yielded frequency spectrum curves (histogram curves). By evaluating the images of these histograms, the fibrillation potential widths of the muscle fibers in the ulnar nerve region of the study subjects were examined. According to the degeneration degrees of the motor nerves, nine people had myopathy, nine had neuropathy, and nine were normal.
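
    AR spectral analysis of a short signal can be sketched with the Yule-Walker equations, as below. The synthetic narrow-band signal, the model order and the unnormalized spectrum (proportional to the true PSD) are assumptions for illustration, not the study's EMG processing chain.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_yule_walker(x, order):
    """AR coefficients a_1..a_p and noise variance via the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)]) / len(x)
    a = solve_toeplitz(r[:-1], r[1:])            # solves Toeplitz(r0..r_{p-1}) a = (r1..rp)
    sigma2 = r[0] - np.dot(a, r[1:])
    return a, sigma2

def ar_spectrum(a, sigma2, freqs, fs):
    """AR spectrum (proportional to PSD): sigma^2 / |1 - sum_k a_k e^{-i 2 pi f k / fs}|^2."""
    k = np.arange(1, len(a) + 1)
    denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(freqs, k) / fs) @ a) ** 2
    return sigma2 / denom

# Synthetic narrow-band signal sampled at 1 kHz (stand-in for an EMG epoch)
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 120.0 * t) + 0.5 * np.random.default_rng(6).normal(size=t.size)
a, s2 = ar_yule_walker(x, order=8)
f = np.linspace(0.0, fs / 2.0, 512)
psd = ar_spectrum(a, s2, f, fs)                   # sharp peak expected near 120 Hz
```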

  18. Evaluation and statistical judgement of neural responses to sinusoidal stimulation in cases with superimposed drift and noise.

    PubMed

    Jastreboff, P W

    1979-06-01

    Time histograms of neural responses evoked by sinusoidal stimulation often contain a slow drift and an irregular noise which disturb Fourier analysis of these responses. Section 2 of this paper evaluates the extent to which a linear drift influences the Fourier analysis, and develops a combined Fourier and linear regression analysis for detecting and correcting such a linear drift. The usefulness of this correcting method is demonstrated for the time histograms of actual eye movements and Purkinje cell discharges evoked by sinusoidal rotation of rabbits in the horizontal plane. In Sect. 3, the analysis of variance is adopted for estimating the probability that the response curve extracted by Fourier analysis occurred randomly from noise. This method proved to be useful for avoiding false judgements as to whether the response curve was meaningful, particularly when the response was small relative to the contaminating noise.
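
    The combined Fourier and linear regression idea can be illustrated with a single least-squares fit of an offset, a linear drift, and the fundamental component at the stimulation frequency. This is a sketch of the general technique, not the paper's exact procedure; the data and frequency are assumed inputs.

    import numpy as np

    def fit_drift_and_fundamental(t, y, f0):
        """Least-squares fit of y(t) = a + b*t + c*cos(2*pi*f0*t) + d*sin(2*pi*f0*t).
        Returns the drift-corrected signal and the amplitude/phase of the fundamental."""
        X = np.column_stack([np.ones_like(t), t,
                             np.cos(2 * np.pi * f0 * t),
                             np.sin(2 * np.pi * f0 * t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        a, b, c, d = coef
        corrected = y - (a + b * t)                   # remove offset and linear drift
        amplitude = np.hypot(c, d)
        phase = np.arctan2(-d, c)                     # phase of the cosine component
        return corrected, amplitude, phase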

  19. Soil-site relationships of the upland oaks

    Treesearch

    Willard H. Carmean

    1971-01-01

    Site quality for upland oaks can be estimated directly by using site-index curves, or indirect estimations can be made by using soil-site prediction methods. Presently available harmonized site-index curves may not be suitable for all upland oak species, or may not be suitable throughout their range. New stem-analysis data show that different species of oak have...

  20. Mathematics of quantitative kinetic PCR and the application of standard curves.

    PubMed

    Rutledge, R G; Côté, C

    2003-08-15

    Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR are examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBR Green I fluorescence. Based upon this analysis, the application of a single, well-constructed standard curve could provide an estimated precision of ±6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which quantitative scale is determined by DNA mass at threshold.
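
    A minimal sketch of the standard-curve arithmetic described above: threshold cycle is regressed on log10 input copies, amplification efficiency follows from the slope, and unknowns are read off the fitted line. The copy numbers and Ct values below are hypothetical.

    import numpy as np

    # Hypothetical standard-curve data: known input copy numbers and observed
    # threshold cycles (Ct); values are illustrative only.
    copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
    ct = np.array([31.8, 28.4, 25.1, 21.7, 18.3])

    slope, intercept = np.polyfit(np.log10(copies), ct, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0          # ~1.0 means 100% efficiency

    def quantify(ct_unknown):
        """Estimate target copy number of an unknown from its threshold cycle."""
        return 10.0 ** ((ct_unknown - intercept) / slope)

    print(f"slope={slope:.3f}, efficiency={efficiency:.2%}, "
          f"unknown at Ct=23.0 -> {quantify(23.0):.3g} copies")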

  1. New methods for the geometrical analysis of tubular organs.

    PubMed

    Grélard, Florent; Baldacci, Fabien; Vialard, Anne; Domenger, Jean-Philippe

    2017-12-01

    This paper presents new methods to study the shape of tubular organs. Determining precise cross-sections is of major importance for performing geometrical measurements such as diameter, wall-thickness estimation or area measurement. Our first contribution is a robust method to estimate orthogonal planes based on the Voronoi Covariance Measure. Our method does not rely on a prior curve-skeleton computation, which means the orthogonal plane estimator can be used either on the skeleton or on the volume. Another important step towards tubular organ characterization is achieved through curve-skeletonization, as skeletons allow two tubular organs to be compared and enable virtual endoscopy. Our second contribution is dedicated to correcting common defects of the skeleton by new pruning and recentering methods. Finally, we propose a new method for curve-skeleton extraction. Various results are shown on different types of segmented tubular organs, such as neurons, airway trees and blood vessels. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Flood-frequency analyses, Manual of Hydrology: Part 3

    USGS Publications Warehouse

    Dalrymple, Tate

    1960-01-01

    This report describes the method used by the U.S. Geological Survey to determine the magnitude and frequency of momentary peak discharges at any place on a stream, whether a gaging-station record is available or not. The method is applicable to a region of any size, as a river basin or a State, so long as the region is hydrologically homogeneous. The analysis provides two curves. The first expresses the flood discharge-time relation, showing variation of peak discharge, expressed as a ratio to the mean annual flood, with recurrence interval. The second relates the mean annual flood to the size of drainage area alone, or to the size of drainage area and other significant basin characteristics. A frequency curve may be defined for any place in the region by use of these two curves. The procedure is: (a) measure the drainage area and other appropriate basin characteristics from maps; (b) from the second curve, select the mean annual flood corresponding to the proper drainage area factors; (c) from the first curve, select ratios of peak discharge to mean annual flood for selected recurrence intervals, as 2, 10, 25, and 50 years; and (d) multiply these ratios by the mean annual flood and plot the resulting discharges of known frequency to define the frequency curve. Two reports not previously given general circulation are included as sections of this report. These are 'Plotting Positions in Frequency Analysis' by W. B. Langbein, and 'Characteristics of Frequency Curves Based on a Theoretical 1,000-Year Record' by M. A. Benson.
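
    Steps (a)-(d) translate directly into code once the two regional curves are available. In the sketch below, both relations (the mean-annual-flood power law and the ratio table) are hypothetical placeholders, not values from the report.

    # Steps (a)-(d) of the regional flood-frequency procedure, with hypothetical
    # regional relations standing in for the report's curves.

    # Hypothetical ratio curve: ratio of peak discharge to mean annual flood
    # at selected recurrence intervals (years).
    RATIO_CURVE = {2: 1.0, 10: 1.6, 25: 2.0, 50: 2.3}

    def mean_annual_flood(drainage_area_sq_mi):
        """Hypothetical regional relation: mean annual flood (cfs) vs. drainage area."""
        return 80.0 * drainage_area_sq_mi ** 0.75

    def frequency_curve(drainage_area_sq_mi):
        """Return discharges of known frequency for an ungaged site."""
        q_mean = mean_annual_flood(drainage_area_sq_mi)                  # step (b)
        return {T: ratio * q_mean for T, ratio in RATIO_CURVE.items()}   # steps (c)-(d)

    print(frequency_curve(drainage_area_sq_mi=250.0))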

  3. Practical quality control tools for curves and surfaces

    NASA Technical Reports Server (NTRS)

    Small, Scott G.

    1992-01-01

    Curves (geometry) and surfaces created by Computer Aided Geometric Design systems in the engineering environment must satisfy two basic quality criteria: the geometric shape must have the desired engineering properties; and the objects must be parameterized in a way which does not cause computational difficulty for geometric processing and engineering analysis. Interactive techniques are described which are in use at Boeing to evaluate the quality of aircraft geometry prior to Computational Fluid Dynamic analysis, including newly developed methods for examining surface parameterization and its effects.

  4. The learning curve of robot-assisted laparoscopic fundoplication in children: a prospective evaluation and CUSUM analysis.

    PubMed

    Cundy, Thomas P; Rowland, Simon P; Gattas, Nicholas E; White, Alan D; Najmaldin, Azad S

    2015-06-01

    Fundoplication is a leading application of robotic surgery in children, yet the learning curve for this procedure (RF) remains ill-defined. This study aims to identify various learning curve transition points, using cumulative summation (CUSUM) analysis. A prospective database was examined to identify RF cases undertaken during 2006-2014. Time-based surgical process outcomes were evaluated, as well as clinical outcomes. A total of 57 RF cases were included. Statistically significant transitions beyond the learning phase were observed at cases 42, 34 and 37 for docking, console and total operating room times, respectively. A steep early learning phase for docking time was overcome after 12 cases. There were three Clavien-Dindo grade ≥ 3 complications, with two patients requiring redo fundoplication. We identified numerous well-defined learning curve trends to affirm that experience confers significant temporal improvements. Our findings highlight the value of the CUSUM method for learning curve evaluation. Copyright © 2014 John Wiley & Sons, Ltd.
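
    A basic CUSUM chart of operating times against a target illustrates how such transition points are read; the target value and case times below are hypothetical, and the published analysis uses more formal CUSUM tests than this sketch.

    import numpy as np

    def cusum(values, target):
        """Cumulative sum of deviations from a target; a falling slope indicates
        performance better than the target (i.e., beyond the learning phase)."""
        return np.cumsum(np.asarray(values, dtype=float) - target)

    # Hypothetical console times (minutes) for consecutive cases.
    times = [180, 175, 168, 172, 160, 150, 148, 140, 138, 135, 130, 128]
    chart = cusum(times, target=150.0)
    transition = int(np.argmax(chart)) + 1   # case after which the chart turns downward
    print(chart, "suggested transition at case", transition)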

  5. The dispersion analysis of drift velocity in the study of solar wind flows

    NASA Astrophysics Data System (ADS)

    Olyak, Maryna

    2013-09-01

    In this work I consider a method for the study of solar wind flows at distances from the Sun greater than 1 AU. The method is based on the analysis of the drift velocity dispersion obtained from simultaneous scintillation observations with two antennas. I considered dispersion dependences for different models of the solar wind and defined their specificity for each model. I have determined that the presence of several solar wind flows significantly affects the shape and the slope of the dispersion curve. The slope angle is greatest during the passage of a fast solar wind flow near the Earth; if a slow flow passes near the Earth, the slope of the dispersion curve decreases. This allows a more precise definition of the velocity and flow width compared to the traditional scintillation method. Using the comparison of experimental and theoretical dispersion curves, I calculated the velocity and width of solar wind flows and revealed the presence of significant velocity fluctuations which accounted for about 60% of the average velocity.

  6. Elliptic Curve Integral Points on y² = x³ + 3x − 14

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhong

    2018-03-01

    The positive integer points and integral points of elliptic curves are very important in number theory and arithmetic algebra, and they have a wide range of applications in cryptography and other fields. There are a number of results on the positive integer points of elliptic curves y² = x³ + ax + b, a, b ∈ Z. In 1987, D. Zagier posed the question of the integer points on y² = x³ − 27x + 62, which counted a great deal toward the study of the arithmetic properties of elliptic curves. In 2009, Zhu H L and Chen J H solved the problem of the integer points on y² = x³ − 27x + 62 by using algebraic number theory and the p-adic analysis method. In 2010, by using the elementary method, Wu H M obtained all the integral points of the elliptic curve y² = x³ − 27x − 62. In 2015, Li Y Z and Cui B J solved the problem of the integer points on y² = x³ − 21x − 90 by using the elementary method. In 2016, Guo J solved the problem of the integer points on y² = x³ + 27x + 62 by using the elementary method. In 2017, Guo J proved that y² = x³ − 21x + 90 has no integer points by using the elementary method. Up to now, there are no relevant conclusions on the integral points of the elliptic curve y² = x³ + 3x − 14, which is the subject of this paper. By using congruence and the Legendre symbol, it can be proved that the elliptic curve y² = x³ + 3x − 14 has only one integer point: (x, y) = (2, 0).
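
    The proof itself relies on congruences and the Legendre symbol; the snippet below is only a finite-range search that illustrates the stated result, not a substitute for the proof.

    from math import isqrt

    def integer_points(a, b, x_max):
        """Integer points (x, y) with y >= 0 on y^2 = x^3 + a*x + b for |x| <= x_max.
        A finite-range check only -- it illustrates the result, it does not prove it."""
        points = []
        for x in range(-x_max, x_max + 1):
            rhs = x ** 3 + a * x + b
            if rhs < 0:
                continue
            y = isqrt(rhs)
            if y * y == rhs:
                points.append((x, y))
        return points

    print(integer_points(a=3, b=-14, x_max=100000))   # expected: [(2, 0)] per the paper's result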

  7. Antigen-antibody biorecognition events as discriminated by noise analysis of force spectroscopy curves.

    PubMed

    Bizzarri, Anna Rita; Cannistraro, Salvatore

    2014-08-22

    Atomic force spectroscopy is able to extract kinetic and thermodynamic parameters of biomolecular complexes provided that the registered unbinding force curves can be reliably attributed to the rupture of the specific complex interactions. To this aim, a commonly used strategy is based on the analysis of the stretching features of polymeric linkers which are suitably introduced in the biomolecule-substrate immobilization procedure. Alternatively, we present a method to select force curves corresponding to specific biorecognition events, which relies on a careful analysis of the force fluctuations of the biomolecule-functionalized cantilever tip during its approach to the partner molecules immobilized on a substrate. In the low frequency region, a characteristic 1/f^α noise with α equal to one (flicker noise) is found to replace white noise in the cantilever fluctuation power spectrum when, and only when, a specific biorecognition process between the partners occurs. The method, which has been validated on a well-characterized antigen-antibody complex, represents a fast, yet reliable alternative to the use of linkers which may involve additional surface chemistry and reproducibility concerns.
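
    The selection criterion above amounts to asking whether the low-frequency power spectrum is flat (white noise, α near 0) or flicker-like (α near 1). A straightforward way to estimate α is a log-log fit to the periodogram, sketched below with an assumed frequency band (f_lo must be above zero).

    import numpy as np

    def spectral_exponent(x, fs, f_lo, f_hi):
        """Estimate alpha in a 1/f**alpha power spectrum by a log-log linear fit
        to the periodogram between f_lo and f_hi (low-frequency band, f_lo > 0)."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
        return -slope        # alpha ~ 0 for white noise, ~ 1 for flicker noise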

  8. [Mathematical model of micturition allowing a detailed analysis of free urine flowmetry].

    PubMed

    Valentini, F; Besson, G; Nelson, P

    1999-04-01

    A mathematical model of micturition allowing precise analysis of uroflowmetry curves (VBN method) is described together with some of its applications. The physiology of micturition and possible diagnostic hypotheses able to explain the shape of the uroflowmetry curve can be expressed by a series of differential equations. Integration of the system allows the validity of these hypotheses to be tested by simulation. A theoretical uroflowmetry is calculated in less than 1 second and analysis of a dysuric uroflowmetry takes about 5 minutes. The efficacy of the model is due to its rapidity and the precision of the comparisons between measured and predicted values. The method has been applied to almost one thousand curves. The uroflowmetries of normal subjects are restored without adjustment with a quadratic error of less than 1%, while those of dysuric patients require identification of one or two adaptive parameters characteristic of the underlying disease. These parameters remain constant during the same session, but vary with the disease and/or the treatment. This model could become a tool for noninvasive urodynamic studies.

  9. Analysis of the Magnetic Field Influence on the Rheological Properties of Healthy Persons Blood

    PubMed Central

    Nawrocka-Bogusz, Honorata

    2013-01-01

    The influence of magnetic field on whole blood rheological properties remains a poorly understood phenomenon. An in vitro analysis of the magnetic field influence on the rheological properties of healthy persons' blood is presented in this work. The study was performed on blood samples taken from 25 healthy nonsmoking persons and included comparative analysis of the results of both the standard rotary method (flow curve measurement) and the oscillatory method, known also as mechanical dynamic analysis, performed before and after exposure of the blood samples to a magnetic field. The principle of the oscillatory technique lies in determining the amplitude and phase of the oscillations of the studied sample subjected to the action of a harmonic force of controlled amplitude and frequency. The flow curve measurement involved determining the shear rate dependence of blood viscosity. The viscoelastic properties of the blood samples were analyzed in terms of complex blood viscosity. All the measurements were performed by means of the Contraves LS40 rheometer. The data obtained from the flow curve measurements, complemented by hematocrit and plasma viscosity measurements, have been analyzed using the rheological model of Quemada. No significant changes of the studied rheological parameters have been found. PMID:24078918

  10. Bayesian analysis of stage-fall-discharge rating curves and their uncertainties

    NASA Astrophysics Data System (ADS)

    Mansanarez, Valentin; Le Coz, Jérôme; Renard, Benjamin; Lang, Michel; Pierrefeu, Gilles; Le Boursicaud, Raphaël; Pobanz, Karine

    2016-04-01

    Stage-fall-discharge (SFD) rating curves are traditionally used to compute streamflow records at sites where the energy slope of the flow is variable due to variable backwater effects. Building on existing Bayesian approaches, we introduce an original hydraulics-based method for developing SFD rating curves used at twin gauge stations and estimating their uncertainties. Conventional power functions for channel and section controls are used, and the transition to a backwater-affected channel control is computed based on a continuity condition, solved either analytically or numerically. The difference between the reference levels at the two stations is estimated as another uncertain parameter of the SFD model. The method proposed in this presentation incorporates information from both hydraulic knowledge (equations of channel or section controls) and the information available in the stage-fall-discharge observations (gauging data). The obtained total uncertainty combines the parametric uncertainty and the remnant uncertainty related to the rating curve model. This method provides a direct estimation of the physical inputs of the rating curve (roughness, width, bed slope, distance between twin gauges, etc.). The performance of the new method is tested using an application case affected by the variable backwater of a run-of-the-river dam: the Rhône river at Valence, France. In particular, a sensitivity analysis to the prior information and to the gauging dataset is performed. At that site, the stage-fall-discharge domain is well documented with gaugings conducted over a range of backwater-affected and unaffected conditions. The performance of the new model was deemed to be satisfactory. Notably, the transition to uniform flow when the overall range of the auxiliary stage is gauged is correctly simulated. The resulting curves are in good agreement with the observations (gaugings) and their uncertainty envelopes are acceptable for computing streamflow records. Similar conclusions were drawn from the application to other similar sites.

  11. The BMPix and PEAK Tools: New Methods for Automated Laminae Recognition and Counting - Application to Glacial Varves From Antarctic Marine Sediment

    NASA Astrophysics Data System (ADS)

    Weber, M. E.; Reichelt, L.; Kuhn, G.; Thurow, J. W.; Ricken, W.

    2009-12-01

    We present software-based tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at ultrahigh (pixel) resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a Gaussian smoothed gray-scale curve. The zero-crossing algorithm counts every positive and negative halfway-passage of the gray-scale curve through a wide moving average. Hence, the record is separated into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the gray-scale curve into its frequency components before positive and negative passages are counted. We applied the new methods successfully to tree rings and to well-dated and already manually counted marine varves from Saanich Inlet before we applied the tools to the rather complex marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that the laminations from three Weddell Sea sites represent true varves that were deposited on sediment ridges over several millennia during the last glacial maximum (LGM). There are apparently two seasonal layers of terrigenous composition, a coarser-grained bright layer and a finer-grained dark layer. The new tools offer several advantages over previous tools. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Since PEAK associates counts with a specific depth, the thickness of each year or each season is also measured, which is an important prerequisite for later spectral analysis. Since all information required to conduct the analysis is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
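
    A minimal sketch of the zero-crossing idea described above: the gray-scale curve is compared against a wide moving average, and each crossing marks a bright or dark (seasonal) interval boundary. The window length is an assumption, and the PEAK tool's actual algorithms include further smoothing and parameters.

    import numpy as np

    def zero_crossing_count(gray, window=101):
        """Count passages of a gray-scale curve through its wide moving average,
        in the spirit of the zero-crossing algorithm: each positive and negative
        crossing marks the boundary of a bright or dark (seasonal) interval."""
        gray = np.asarray(gray, dtype=float)
        kernel = np.ones(window) / window
        baseline = np.convolve(gray, kernel, mode="same")    # wide moving average
        sign = np.sign(gray - baseline)
        sign[sign == 0] = 1
        crossings = np.count_nonzero(np.diff(sign))           # seasonal boundaries
        return crossings, crossings / 2.0                     # approximate couplets (years)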

  12. Thermal analysis and microstructural characterization of Mg-Al-Zn system alloys

    NASA Astrophysics Data System (ADS)

    Król, M.; Tański, T.; Sitek, W.

    2015-11-01

    The influence of Zn amount and solidification rate on the characteristic temperatures of the evolution of magnesium dendrites during solidification at different cooling rates (0.6-2.5°C) was examined by thermal derivative analysis (TDA). The dendrite coherency point (DCP) is presented with a novel approach based on the second derivative of the cooling curve. Solidification behavior was examined via a one-thermocouple thermal analysis method. Microstructural assessments were made by optical light microscopy, scanning electron microscopy and energy dispersive X-ray spectroscopy. These studies showed that the d²T/dt² vs. time curve methodology provides for analysis of the dendrite coherency point.
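
    Computing the d²T/dt² curve from a sampled cooling curve is a short numerical operation, sketched below; locating the dendrite coherency point on that curve still requires the interpretation described in the paper.

    import numpy as np

    def second_derivative_curve(time_s, temperature_c):
        """Numerically estimate dT/dt and d2T/dt2 from a sampled cooling curve; the
        dendrite coherency point is then read from features of the d2T/dt2 curve."""
        dT = np.gradient(temperature_c, time_s)     # dT/dt (cooling rate)
        d2T = np.gradient(dT, time_s)                # d2T/dt2
        return dT, d2T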

  13. Design, Fabrication and Test of Composite Curved Frames for Helicopter Fuselage Structure

    NASA Technical Reports Server (NTRS)

    Lowry, D. W.; Krebs, N. E.; Dobyns, A. L.

    1984-01-01

    Aspects of curved beam effects and their importance in designing composite frame structures are discussed. The curved beam effect induces radial flange loadings which in turn cause flange curling. This curling increases the axial flange stresses and induces transverse bending. These effects are more important in composite structures due to their general inability to redistribute stresses by general yielding, as occurs in metal structures. A detailed finite element analysis was conducted and used in the design of composite curved frame specimens. Five specimens were statically tested, and measured strains were compared with predictions. The curved frame effects must be accurately accounted for to avoid premature fracture; finite element methods can accurately predict most of the stresses, and no elastic relief from curved beam effects occurred in the composite frames tested. Finite element studies are presented for comparative curved beam effects on composite and metal frames.

  14. Removing Shape-Preserving Transformations in Square-Root Elastic (SRE) Framework for Shape Analysis of Curves

    PubMed Central

    Joshi, Shantanu H.; Klassen, Eric; Srivastava, Anuj; Jermyn, Ian

    2011-01-01

    This paper illustrates and extends an efficient framework, called the square-root-elastic (SRE) framework, for studying shapes of closed curves, that was first introduced in [2]. This framework combines the strengths of two important ideas - elastic shape metric and path-straightening methods - for finding geodesics in shape spaces of curves. The elastic metric allows for optimal matching of features between curves while path-straightening ensures that the algorithm results in geodesic paths. This paper extends this framework by removing two important shape preserving transformations: rotations and re-parameterizations, by forming quotient spaces and constructing geodesics on these quotient spaces. These ideas are demonstrated using experiments involving 2D and 3D curves. PMID:21738385
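
    The representation underlying the elastic metric is the square-root velocity function q(t) = c'(t)/sqrt(|c'(t)|). The sketch below shows only this representation and a crude L2 comparison; the path-straightening geodesics and the rotation/re-parameterization quotients that are the paper's contribution are not implemented.

    import numpy as np

    def srv_representation(curve):
        """Square-root velocity representation q(t) = c'(t) / sqrt(|c'(t)|) of a
        sampled curve (n_points x dim)."""
        c = np.asarray(curve, dtype=float)
        dc = np.gradient(c, axis=0)
        speed = np.linalg.norm(dc, axis=1)
        speed = np.where(speed < 1e-12, 1e-12, speed)
        return dc / np.sqrt(speed)[:, None]

    def srv_distance(curve_a, curve_b):
        """L2 distance between SRV representations (a crude pre-shape comparison)."""
        qa, qb = srv_representation(curve_a), srv_representation(curve_b)
        return np.sqrt(np.mean(np.sum((qa - qb) ** 2, axis=1)))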

  15. Probabilistic assessment method of the non-monotonic dose-responses-Part I: Methodological approach.

    PubMed

    Chevillotte, Grégoire; Bernard, Audrey; Varret, Clémence; Ballet, Pascal; Bodin, Laurent; Roudot, Alain-Claude

    2017-08-01

    More and more studies aim to characterize non-monotonic dose-response curves (NMDRCs). The greatest difficulty is assessing the statistical plausibility of NMDRCs from previously conducted dose-response studies. This difficulty is linked to the fact that these studies present (i) few doses tested, (ii) a low sample size per dose, and (iii) the absence of any raw data. In this study, we propose a new methodological approach to probabilistically characterize NMDRCs. The methodology is composed of three main steps: (i) sampling from summary data to cover all the possibilities that may be presented by the responses measured per dose and to obtain a new raw database, (ii) statistical analysis of each sampled dose-response curve to characterize the slopes and their signs, and (iii) characterization of these dose-response curves according to the variation of the sign of the slope. This method allows all types of dose-response curves to be characterized and can be applied both to continuous data and to discrete data. The aim of this study is to present the general principle of this probabilistic method for assessing non-monotonic dose-response curves, and to present some results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. MEM spectral analysis for predicting influenza epidemics in Japan.

    PubMed

    Sumi, Ayako; Kamo, Ken-ichi

    2012-03-01

    The prediction of influenza epidemics has long been the focus of attention in epidemiology and mathematical biology. In this study, we tested whether time series analysis was useful for predicting the incidence of influenza in Japan. The method of time series analysis we used consists of spectral analysis based on the maximum entropy method (MEM) in the frequency domain and the nonlinear least squares method in the time domain. Using this time series analysis, we analyzed the incidence data of influenza in Japan from January 1948 to December 1998; these data are unique in that they covered the periods of pandemics in Japan in 1957, 1968, and 1977. On the basis of the MEM spectral analysis, we identified the periodic modes explaining the underlying variations of the incidence data. The optimum least squares fitting (LSF) curve calculated with the periodic modes reproduced the underlying variation of the incidence data. An extension of the LSF curve could be used to predict the incidence of influenza quantitatively. Our study suggested that MEM spectral analysis would allow us to model temporal variations of influenza epidemics with multiple periodic modes much more effectively than by using the method of conventional time series analysis, which has been used previously to investigate the behavior of temporal variations in influenza data.

  17. Method development towards qualitative and semi-quantitative analysis of multiple pesticides from food surfaces and extracts by desorption electrospray ionization mass spectrometry as a preselective tool for food control.

    PubMed

    Gerbig, Stefanie; Stern, Gerold; Brunn, Hubertus E; Düring, Rolf-Alexander; Spengler, Bernhard; Schulz, Sabine

    2017-03-01

    Direct analysis of fruit and vegetable surfaces is an important tool for in situ detection of food contaminants such as pesticides. We tested three different ways to prepare samples for the qualitative desorption electrospray ionization mass spectrometry (DESI-MS) analysis of 32 pesticides found on nine authentic fruits collected from food control. Best recovery rates for topically applied pesticides (88%) were found by analyzing the surface of a glass slide which had been rubbed against the surface of the food. Pesticide concentration in all samples was at or below the maximum residue level allowed. In addition to the high sensitivity of the method for qualitative analysis, quantitative or, at least, semi-quantitative information is needed in food control. We developed a DESI-MS method for the simultaneous determination of linear calibration curves of multiple pesticides of the same chemical class using normalization to one internal standard (ISTD). The method was first optimized for food extracts and subsequently evaluated for the quantification of pesticides in three authentic food extracts. Next, pesticides and the ISTD were applied directly onto food surfaces, and the corresponding calibration curves were obtained. The determination of linear calibration curves was still feasible, as demonstrated for three different food surfaces. This proof-of-principle method was used to simultaneously quantify two pesticides on an authentic sample, showing that the method developed could serve as a fast and simple preselective tool for disclosure of pesticide regulation violations. Graphical Abstract Multiple pesticide residues were detected and quantified in-situ from an authentic set of food items and extracts in a proof of principle study.

  18. Learning curve evaluation using cumulative summation analysis-a clinical example of pediatric robot-assisted laparoscopic pyeloplasty.

    PubMed

    Cundy, Thomas P; Gattas, Nicholas E; White, Alan D; Najmaldin, Azad S

    2015-08-01

    The cumulative summation (CUSUM) method for learning curve analysis remains under-utilized in the surgical literature in general, and is described in only a small number of publications within the field of pediatric surgery. This study introduces the CUSUM analysis technique and applies it to evaluate the learning curve for pediatric robot-assisted laparoscopic pyeloplasty (RP). Clinical data were prospectively recorded for consecutive pediatric RP cases performed by a single surgeon. CUSUM charts and tests were generated for set-up time, docking time, console time, operating time, total operating room time, and postoperative complications. Conversions and avoidable operating room delay were separately evaluated with respect to case experience. Comparisons between case experience and time-based outcomes were assessed using the Student's t-test and ANOVA for bi-phasic and multi-phasic learning curves respectively. Comparison between case experience and complication frequency was assessed using the Kruskal-Wallis test. A total of 90 RP cases were evaluated. The learning curve transitioned beyond the learning phase at cases 10, 15, 42, 57, and 58 for set-up time, docking time, console time, operating time, and total operating room time respectively. All comparisons of mean operating times between the learning phase and subsequent phases were statistically significant (P values ranging from <0.001 to 0.01). No significant difference was observed between case experience and frequency of post-operative complications (P=0.125), although the CUSUM chart demonstrated a directional change in slope for the last 12 cases, in which there were high proportions of re-do cases and patients <6 months of age. The CUSUM method has a valuable role for learning curve evaluation and outcome quality monitoring. In applying this statistical technique to the largest reported single-surgeon series of pediatric RP, we demonstrate numerous distinctly shaped learning curves and well-defined learning phase transition points. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Coronal deformity correction in adolescent idiopathic scoliosis patients using the fulcrum-bending radiograph: a prospective comparative analysis of the proximal thoracic, main thoracic, and thoracolumbar/lumbar curves.

    PubMed

    Li, Jingfeng; Dumonski, Mark L; Samartzis, Dino; Hong, Joseph; He, Shisheng; Zhu, Xiaodong; Wang, Chuanfeng; Vaccaro, Alexander R; Albert, Todd J; Li, Ming

    2011-01-01

    The aim of this prospective, comparative radiographic analysis was to determine the role of the fulcrum-bending radiograph (FBR) in the assessment of the proximal thoracic (PT), main thoracic (MT), and thoracolumbar/lumbar (TL/L) curves in patients undergoing posterior spinal pedicle screw fixation and fusion for adolescent idiopathic scoliosis (AIS). The FBR demonstrated statistically better correction than other preoperative methods for the assessment of frontal plane correction of the MT curves. The fulcrum-bending correction index (FBCI) has been considered superior to the correction rate for comparing curve correction after posterior spinal fusion because it accounts for curve flexibility. However, its applicability to the assessment of the PT and TL/L curves in AIS patients remains speculative, and the relation between the FBR and the correction obtained by pedicle screw fixation is still unknown. Thirty-eight consecutive AIS patients who underwent pedicle screw fixation and posterior fusion were included in this study. The assessment of preoperative radiographs included standing posterior-anterior (PA), FBR, and supine side-bending radiographs, together with postoperative standing PA and lateral plain radiographs. The flexibility of the curve, as well as the FBCI, was calculated for all patients. Postoperatively, radiographs were assessed at immediate (i.e., 1 week), 3-month, 6-month, 12-month, and 2-year follow-up. Cobb angles were obtained for the PT, MT, and TL/L curves. The study consisted of 9 PT, 37 MT, and 12 TL/L curves, with a mean age of 15.1 years. The mean FBR flexibility of the PT, MT, and TL/L curves was 42.6, 61.1, and 66.2%, respectively. The mean operative correction rates in the PT, MT, and TL/L curves were 43.4, 69.3, and 73.9%, respectively, and the mean FBCI was 103.8, 117.0, and 114.8%, respectively. Fulcrum-bending flexibility was positively correlated with the operative correction rate in the PT, MT, and TL/L curves. Although the correction rate in the MT and TL/L curves was higher than in the PT curves, the FBCI in the PT, MT, and TL/L curves was not significantly different (p < 0.05). The FBR can be used to assist in the assessment of PT, MT, and TL/L curve corrections in AIS patients. When curve flexibility is taken into account by the FBR, the ability of pedicle screws to correct PT, MT, and TL/L curves is the same.

  20. Analysis of HIV Using a High Resolution Melting (HRM) Diversity Assay: Automation of HRM Data Analysis Enhances the Utility of the Assay for Analysis of HIV Incidence

    PubMed Central

    Cousins, Matthew M.; Swan, David; Magaret, Craig A.; Hoover, Donald R.; Eshleman, Susan H.

    2012-01-01

    Background HIV diversity may be a useful biomarker for discriminating between recent and non-recent HIV infection. The high resolution melting (HRM) diversity assay was developed to quantify HIV diversity in viral populations without sequencing. In this assay, HIV diversity is expressed as a single numeric HRM score that represents the width of a melting peak. HRM scores are highly associated with diversity measures obtained with next generation sequencing. In this report, a software package, the HRM Diversity Assay Analysis Tool (DivMelt), was developed to automate calculation of HRM scores from melting curve data. Methods DivMelt uses computational algorithms to calculate HRM scores by identifying the start (T1) and end (T2) melting temperatures for a DNA sample and subtracting them (T2–T1 = HRM score). DivMelt contains many user-supplied analysis parameters to allow analyses to be tailored to different contexts. DivMelt analysis options were optimized to discriminate between recent and non-recent HIV infection and to maximize HRM score reproducibility. HRM scores calculated using DivMelt were compared to HRM scores obtained using a manual method that is based on visual inspection of DNA melting curves. Results HRM scores generated with DivMelt agreed with manually generated HRM scores obtained from the same DNA melting data. Optimal parameters for discriminating between recent and non-recent HIV infection were identified. DivMelt provided greater discrimination between recent and non-recent HIV infection than the manual method. Conclusion DivMelt provides a rapid, accurate method of determining HRM scores from melting curve data, facilitating use of the HRM diversity assay for large-scale studies. PMID:23240016
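
    The core quantity, an HRM score of T2 − T1, can be illustrated with a simple threshold on the negative derivative of the melting curve. The threshold fraction below is an assumption, and DivMelt's actual start/end detection and user-supplied parameters differ.

    import numpy as np

    def hrm_score(temps, fluorescence, frac=0.05):
        """Simplified HRM score: T2 - T1, where T1 and T2 are the first and last
        temperatures at which the melt peak (-dF/dT) exceeds a fraction of its
        maximum. The threshold fraction is an assumption, not DivMelt's parameter."""
        temps = np.asarray(temps, dtype=float)
        f = np.asarray(fluorescence, dtype=float)
        melt_peak = -np.gradient(f, temps)               # -dF/dT
        above = np.where(melt_peak >= frac * melt_peak.max())[0]
        t1, t2 = temps[above[0]], temps[above[-1]]
        return t2 - t1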

  1. Defining the learning curve in laparoscopic paraesophageal hernia repair: a CUSUM analysis.

    PubMed

    Okrainec, Allan; Ferri, Lorenzo E; Feldman, Liane S; Fried, Gerald M

    2011-04-01

    There are numerous reports in the literature documenting high recurrence rates after laparoscopic paraesophageal hernia repair. The purpose of this study was to determine the learning curve for this procedure using the Cumulative Summation (CUSUM) technique. Forty-six consecutive patients with paraesophageal hernia were evaluated prospectively after laparoscopic paraesophageal hernia repair. Upper GI series was performed 3 months postoperatively to look for recurrence. Patients were stratified based on the surgeon's early (first 20 cases) and late experience (>20 cases). The CUSUM method was then used to further analyze the learning curve. Nine patients (21%) had anatomic recurrence. There was a trend toward a higher recurrence rate during the first 20 cases, although this did not achieve statistical significance (33% vs. 13%, p = 0.10). However, using a CUSUM analysis to plot the learning curve, we found that the recurrence rate diminishes after 18 cases and reaches an acceptable rate after 26 cases. Surgeon experience is an important predictor of recurrence after laparoscopic paraesophageal hernia repair. CUSUM analysis revealed there is a significant learning curve to become proficient at this procedure, with approximately 20 cases required before a consistent decrease in hernia recurrence rate is observed.

  2. Higher Flexibility and Better Immediate Spontaneous Correction May Not Gain Better Results for Nonstructural Thoracic Curve in Lenke 5C AIS Patients

    PubMed Central

    Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan

    2016-01-01

    Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during the follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behavior of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during the follow-up and the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with AIS who underwent selective TL/L fusion from 2006 to 2012 in a single institution. The follow-up averaged 36 months (range, 24–105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with a negative or no correction loss during the follow-up. Thoracic curves in group B deteriorated after spontaneous correction with a positive correction loss. Univariate and multivariate analyses were performed to identify the risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately with a spontaneous correction of 48.5%. At final follow-up it was 14° with a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in the 23 cases in group A, while the 22 cases in group B showed progression of the thoracic curve. In multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and a better immediate spontaneous correction rate of the thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during the follow-up, and close attention must be paid to these patients in case of decompensation. Level of Evidence: 4 PMID:27831989

  3. SU-E-J-122: The CBCT Dose Calculation Using a Patient Specific CBCT Number to Mass Density Conversion Curve Based On a Novel Image Registration and Organ Mapping Method in Head-And-Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, J; Lasio, G; Chen, S

    2015-06-15

    Purpose: To develop a CBCT HU correction method using a patient specific HU to mass density conversion curve based on a novel image registration and organ mapping method for head-and-neck radiation therapy. Methods: There are three steps to generate a patient specific CBCT HU to mass density conversion curve. First, we developed a novel robust image registration method based on sparseness analysis to register the planning CT (PCT) and the CBCT. Second, a novel organ mapping method was developed to transfer the organs at risk (OAR) contours from the PCT to the CBCT, and corresponding mean HU values of each OAR were measured in both the PCT and CBCT volumes. Third, a set of PCT and CBCT HU to mass density conversion curves were created based on the mean HU values of OARs and the corresponding mass density of the OAR in the PCT. Then, we compared our proposed conversion curve with the traditional Catphan phantom based CBCT HU to mass density calibration curve. Both curves were input into the treatment planning system (TPS) for dose calculation. Last, the PTV and OAR doses, DVH and dose distributions of CBCT plans are compared to the original treatment plan. Results: One head-and-neck case, which contained a pair of PCT and CBCT images, was used. The dose differences between the PCT and CBCT plans using the proposed method are −1.33% for the mean PTV, 0.06% for PTV D95%, and −0.56% for the left neck. The dose differences between plans of PCT and CBCT corrected using the CATPhan based method are −4.39% for mean PTV, 4.07% for PTV D95%, and −2.01% for the left neck. Conclusion: The proposed CBCT HU correction method achieves better agreement with the original treatment plan compared to the traditional CATPhan based calibration method.

  4. The Use of Statistically Based Rolling Supply Curves for Electricity Market Analysis: A Preliminary Look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkin, Thomas J; Larson, Andrew; Ruth, Mark F

    In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions of variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and dispatch-order driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from 16 dollars/MWh to 6 dollars/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of or providing calibrating insights to PCMs.
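
    A stripped-down version of the rolling supply curve idea: fit one price-versus-load curve per two-week block of hourly data, then re-evaluate prices after shifting net load by an assumed VRE addition. The cubic functional form and the repricing step are assumptions, not the report's exact regression specification.

    import numpy as np

    def rolling_supply_curves(load, price, hours_per_window=14 * 24, degree=3):
        """Fit one price-vs-load polynomial per rolling two-week block of hourly data
        (a stripped-down rolling supply curve), returning the fitted coefficients."""
        curves = []
        for start in range(0, len(load) - hours_per_window + 1, hours_per_window):
            sl = slice(start, start + hours_per_window)
            curves.append(np.polyfit(load[sl], price[sl], degree))
        return curves

    def reprice(load, curves, hours_per_window=14 * 24, vre_addition_mw=0.0):
        """Re-evaluate hourly prices after shifting net load by an assumed VRE addition."""
        n = len(curves) * hours_per_window
        prices = np.empty(n)
        for i, coef in enumerate(curves):
            sl = slice(i * hours_per_window, (i + 1) * hours_per_window)
            prices[sl] = np.polyval(coef, np.asarray(load[sl]) - vre_addition_mw)
        return prices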

  5. The Obsessive Compulsive Scale of the Child Behavior Checklist Predicts Obsessive-Compulsive Disorder: A Receiver Operating Characteristic Curve Analysis

    ERIC Educational Resources Information Center

    Hudziak, James J.; Althoff, Robert R.; Stanger, Catherine; van Beijsterveldt, C. E. M.; Nelson, Elliot C.; Hanna, Gregory L.; Boomsma, Dorret I.; Todd, Richard D.

    2006-01-01

    Background: The purpose of this study was to determine a score on the Obsessive Compulsive Scale (OCS) from the Child Behavior Checklist (CBCL) to screen for obsessive compulsive disorder (OCD) in children and to rigorously test the specificity and sensitivity of a single cutpoint. Methods: A receiver operating characteristic (ROC) curve analysis…
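
    A minimal sketch of a ROC analysis with a single cutpoint chosen by the Youden index; this illustrates the general technique, not the specific OCS/CBCL analysis or its data.

    import numpy as np

    def roc_curve(scores, labels):
        """Empirical ROC: sensitivity and 1-specificity at every observed cutpoint."""
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=int)
        thresholds = np.unique(scores)[::-1]
        tpr = np.array([(scores[labels == 1] >= t).mean() for t in thresholds])
        fpr = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
        return fpr, tpr, thresholds

    def auc_and_youden(scores, labels):
        """Area under the empirical ROC curve and the cutpoint maximizing J = Se + Sp - 1."""
        fpr, tpr, thresholds = roc_curve(scores, labels)
        best = int(np.argmax(tpr - fpr))                # Youden index
        order = np.argsort(fpr)
        x = np.concatenate(([0.0], fpr[order], [1.0]))
        y = np.concatenate(([0.0], tpr[order], [1.0]))
        auc = float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))   # trapezoidal area
        return auc, thresholds[best]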

  6. High resolution melting analysis is a more sensitive and effective alternative to gel-based platforms in analysis of SSR--an example in citrus.

    PubMed

    Distefano, Gaetano; Caruso, Marco; La Malfa, Stefano; Gentile, Alessandra; Wu, Shu-Biao

    2012-01-01

    High resolution melting curve analysis (HRM) has been used as an efficient, accurate and cost-effective tool to detect single nucleotide polymorphisms (SNPs) or insertions or deletions (INDELs). However, its efficiency, accuracy and applicability to discriminate microsatellite polymorphism have not been extensively assessed. The traditional protocols used for SSR genotyping include PCR amplification of the DNA fragment and the separation of the fragments on an electrophoresis-based platform. However, post-PCR handling processes are laborious and costly. Furthermore, SNPs present in the sequences flanking the repeat motif cannot be detected by polyacrylamide-gel-electrophoresis based methods. In the present study, we compared the discriminating power of HRM with the traditional electrophoresis-based methods and provided a panel of primers for HRM genotyping in Citrus. The results showed that sixteen SSR markers produced distinct polymorphic melting curves among the Citrus spp. investigated through HRM analysis. Among those, 10 showed more genotypes by HRM analysis than capillary electrophoresis owing to the presence of SNPs in the amplicons. For the SSR markers without SNPs present in the flanking region, HRM also gave distinct melting curves which detected the same genotypes as were shown in capillary electrophoresis (CE) analysis. Moreover, HRM analysis allowed the discrimination of most of the 15 citrus genotypes and the resulting genetic distance analysis clustered them into three main branches. In conclusion, it has been shown that HRM is not only an efficient and cost-effective alternative to electrophoresis-based methods for SSR markers, but also a method to uncover more polymorphisms contributed by SNPs present in SSRs. It was therefore suggested that the panel of SSR markers could be used in a variety of applications in citrus biodiversity and breeding programs using HRM analysis. Furthermore, we speculate that HRM analysis can be employed to analyse SSR markers in a wide range of applications in all other species.

  7. Methods for characterizing subsurface volatile contaminants using in-situ sensors

    DOEpatents

    Ho, Clifford K [Albuquerque, NM

    2006-02-21

    An inverse analysis method for characterizing diffusion of vapor from an underground source of volatile contaminant using data taken by an in-situ sensor. The method uses one-dimensional solutions to the diffusion equation in Cartesian, cylindrical, or spherical coordinates for isotropic and homogenous media. If the effective vapor diffusion coefficient is known, then the distance from the source to the in-situ sensor can be estimated by comparing the shape of the predicted time-dependent vapor concentration response curve to the measured response curve. Alternatively, if the source distance is known, then the effective vapor diffusion coefficient can be estimated using the same inverse analysis method. A triangulation technique can be used with multiple sensors to locate the source in two or three dimensions. The in-situ sensor can contain one or more chemiresistor elements housed in a waterproof enclosure with a gas permeable membrane.

  8. The Shock and Vibration Digest. Volume 16, Number 3

    DTIC Science & Technology

    1984-03-01

    Fluid-induced excitation and wind tunnel testing (V.R. Miller and L.L. Faulkner, Flight Dynamics Lab., Air Force...). Noise transmission through a fuselage wall is treated by the statistical energy analysis (SEA) method, with the fuselage structure represented as a series of curved panels; a statistical energy analysis (SEA) model is presented for the structural systems considered.

  9. Stability analysis of flexible wind turbine blades using finite element method

    NASA Technical Reports Server (NTRS)

    Kamoulakos, A.

    1982-01-01

    Static vibration and flutter analysis of a straight elastic axis blade was performed based on a finite element method solution. The total potential energy functional was formulated according to linear beam theory. The inertia and aerodynamic loads were formulated according to the blade absolute acceleration and absolute velocity vectors. In vibration analysis, the direction of motion of the blade during the first out-of-plane and first in-plane modes was examined; numerical results involve NASA/DOE Mod-0, McCauley propeller, north wind turbine and flat plate behavior. In flutter analysis, comparison cases were examined involving several references. Vibration analysis of a nonstraight elastic axis blade based on a finite element method solution was performed in a similar manner to the straight elastic axis blade, since it was recognized that a curved blade can be approximated by an assembly of a sufficient number of straight blade elements at different inclinations with respect to a common system of axes. Numerical results involve comparison between the behavior of a straight and a curved cantilever beam during the lowest two in-plane and out-of-plane modes.

  10. Concentration Regimes of Biopolymers Xanthan, Tara, and Clairana, Comparing Dynamic Light Scattering and Distribution of Relaxation Time

    PubMed Central

    Oliveira, Patrícia D.; Michel, Ricardo C.; McBride, Alan J. A.; Moreira, Angelita S.; Lomba, Rosana F. T.; Vendruscolo, Claire T.

    2013-01-01

    The aim of this work was to evaluate the utilization of analysis of the distribution of relaxation time (DRT) using a dynamic light back-scattering technique as an alternative method for the determination of the concentration regimes in aqueous solutions of biopolymers (xanthan, clairana and tara gums) by an analysis of the overlap (c*) and aggregation (c**) concentrations. The diffusion coefficients were obtained over a range of concentrations for each biopolymer using two methods. The first method analysed the behaviour of the diffusion coefficient as a function of the concentration of the gum solution; it is based on the analysis of the diffusion coefficient versus the concentration curve. Using the slope of the curves, it was possible to determine the c* and c** for xanthan and tara gum. However, it was not possible to determine the concentration regimes for clairana using this method. The second method was based on an analysis of the DRTs, which showed different numbers of relaxation modes. It was observed that the concentrations at which the number of modes changed corresponded to the c* and c**. Thus, the DRT technique provided an alternative method for the determination of the critical concentrations of biopolymers. PMID:23671627

  11. Parametric analysis of ATM solar array.

    NASA Technical Reports Server (NTRS)

    Singh, B. K.; Adkisson, W. B.

    1973-01-01

    The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.

  12. A Comparison of a Machine Learning Model with EuroSCORE II in Predicting Mortality after Elective Cardiac Surgery: A Decision Curve Analysis

    PubMed Central

    Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril

    2017-01-01

    Background The benefits of cardiac surgery are sometimes difficult to predict and the decision to operate on a given individual is complex. Machine Learning and Decision Curve Analysis (DCA) are recent methods developed to create and evaluate prediction models. Methods and findings We conducted a retrospective cohort study using a prospectively collected database from December 2005 to December 2012, from a cardiac surgical center at a University Hospital. The different models for predicting in-hospital mortality after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years old (standard deviation 14.4), and mean EuroSCORE II was 3.7 (4.8) %. The area under the ROC curve (95% CI) for the machine learning model (0.795 (0.755–0.834)) was significantly higher than for EuroSCORE II or the logistic regression model (respectively, 0.737 (0.691–0.783) and 0.742 (0.698–0.785), p < 0.0001). Decision Curve Analysis showed that the machine learning model, in this monocentric study, has a greater net benefit whatever the probability threshold. Conclusions According to ROC and DCA, the machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results confirm the use of machine learning methods in the field of medical prediction. PMID:28060903
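
    Since decision curve analysis recurs throughout these records, a minimal net-benefit computation is sketched below: NB(pt) = TP/n − (FP/n)·pt/(1 − pt), evaluated over a grid of threshold probabilities alongside the 'treat all' reference. This is the generic formula, not the authors' code or data.

    import numpy as np

    def net_benefit(pred_prob, outcome, thresholds):
        """Net benefit NB(pt) = TP/n - FP/n * pt/(1 - pt) for a grid of threshold
        probabilities pt, with the 'treat all' strategy as a reference."""
        pred_prob = np.asarray(pred_prob, dtype=float)
        outcome = np.asarray(outcome, dtype=int)
        n = outcome.size
        prevalence = outcome.mean()
        nb_model, nb_all = [], []
        for pt in thresholds:
            w = pt / (1.0 - pt)
            treated = pred_prob >= pt
            tp = np.count_nonzero(treated & (outcome == 1)) / n
            fp = np.count_nonzero(treated & (outcome == 0)) / n
            nb_model.append(tp - fp * w)
            nb_all.append(prevalence - (1.0 - prevalence) * w)
        return np.array(nb_model), np.array(nb_all)   # 'treat none' is zero everywhere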

  13. Comparison of two methods based on photoplethysmography for the diagnosis of peripheral arterial disease.

    PubMed

    Høyer, Christian; Nielsen, Nikolaj Schandorph; Jordansen, Malene Kragh Overvad; Zacho, Helle Damgaard

    2017-12-01

    To examine the interchangeability of two methods for distal pressure measurement based on photoplethysmography using a truncated or full display of the arterial inflow curve, respectively. Toe and ankle pressures were obtained from 69 patients suspected of peripheral arterial disease (PAD). Observer reproducibility of the curve readings was examined by blinded reassessment of the pressure curves in a randomly selected subgroup (60 limbs). There were no significant differences in mean pressures between the two methods (p for all > .455). The limits of agreement for the differences were -15.0 to 15.4 mmHg for right toe pressures, -16.3 to 16.2 mmHg for left toe pressures, -14.2 to 15.7 mmHg for right ankle pressures, and -18.3 to 17.7 mmHg for left ankle pressures. Correlation analysis revealed intraclass correlation coefficients ≥0.960 for all measuring sites. Cohen's Kappa showed excellent agreement in diagnostic classification, with κ = 0.930 for the diagnosis of PAD and perfect agreement in the diagnosis of critical limb ischemia (κ = 1.000). The analysis of intra-observer variation for curve reading showed limits of agreement of -3.9 to 4.0 for toe pressures and -7.6 to 7.7 for ankle pressures for the method involving truncated display, and -3.1 to 3.2 for toe pressures and -6.3 to 8.6 for ankle pressures for the method involving full display of the signal. The present study shows minimal differences in diagnostic classification, as well as in ankle and toe pressures, between the full display and the truncated display of the photoplethysmographic pulse signal. Furthermore, the inter-observer variation was low for both of the photoplethysmographic methods investigated.

  14. Curved-flow, rolling-flow, and oscillatory pure-yawing wind-tunnel test methods for determination of dynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Chambers, J. R.; Grafton, S. B.; Lutze, F. H.

    1981-01-01

    The test capabilities of the Stability Wind Tunnel of the Virginia Polytechnic Institute and State University are described, and calibrations for curved and rolling flow techniques are given. Oscillatory snaking tests to determine pure yawing derivatives are considered. Representative aerodynamic data obtained for a current fighter configuration using the curved and rolling flow techniques are presented. The application of dynamic derivatives obtained in such tests to the analysis of airplane motions in general, and to high angle of attack flight conditions in particular, is discussed.

  15. Continuous relaxation and retardation spectrum method for viscoelastic characterization of asphalt concrete

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.

    2012-08-01

    This paper presents a simple and practical approach to obtain the continuous relaxation and retardation spectra of asphalt concrete directly from the complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is directly obtained from the master curves, which are readily available from the standard characterization tests of the linearly viscoelastic behavior of asphalt concrete. The continuous spectrum method offers an efficient alternative to the numerical computation of discrete spectra and can easily be used for modeling viscoelastic behavior. In this research, asphalt concrete specimens were tested for linearly viscoelastic characterization. The linearly viscoelastic test data were used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform. The continuous spectra are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from the spectra matched very well with the approximate solutions. It is observed that the shape of the spectra is dependent on the master curve parameters. The continuous spectra thus obtained can easily be implemented in the material mix design process. Prony-series coefficients can be easily obtained from the continuous spectra and used in numerical analysis such as finite element analysis.

  16. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and the Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
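
    As a rough illustration of model selection by information criteria, the hedged sketch below fits least-squares cubic B-spline curves with an increasing number of control points to synthetic profile data and picks the AIC-optimal number. The uniform knot placement, noise level and the particular AIC/BIC formulas are illustrative assumptions; the paper's VC-dimension-based criterion is not reproduced here.

    ```python
    # Sketch: choose the number of B-spline control points by AIC/BIC (synthetic data).
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)
    y = np.sin(6 * np.pi * x) + rng.normal(0, 0.05, x.size)    # hypothetical noisy profile
    n, k = x.size, 3                                           # cubic splines

    best = None
    for n_knots in range(1, 25):
        t = np.linspace(0, 1, n_knots + 2)[1:-1]               # uniform interior knots
        spl = LSQUnivariateSpline(x, y, t, k=k)
        rss = spl.get_residual()                               # sum of squared residuals
        p = n_knots + k + 1                                    # number of control points
        aic = n * np.log(rss / n) + 2 * p
        bic = n * np.log(rss / n) + p * np.log(n)
        if best is None or aic < best[0]:
            best = (aic, bic, p)
    print("AIC-optimal number of control points:", best[2])
    ```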

  17. Analysis of Self-Associating Proteins by Singular Value Decomposition of Solution Scattering Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, Tim E.; Craig, Bruce A.; Kondrashkina, Elena

    2008-07-08

    We describe a method by which a single experiment can reveal both association model (pathway and constants) and low-resolution structures of a self-associating system. Small-angle scattering data are collected from solutions at a range of concentrations. These scattering data curves are mass-weighted linear combinations of the scattering from each oligomer. Singular value decomposition of the data yields a set of basis vectors from which the scattering curve for each oligomer is reconstructed using coefficients that depend on the association model. A search identifies the association pathway and constants that provide the best agreement between reconstructed and observed data. Using simulated data with realistic noise, our method finds the correct pathway and association constants. Depending on the simulation parameters, reconstructed curves for each oligomer differ from the ideal by 0.05-0.99% in median absolute relative deviation. The reconstructed scattering curves are fundamental to further analysis, including interatomic distance distribution calculation and low-resolution ab initio shape reconstruction of each oligomer in solution. This method can be applied to x-ray or neutron scattering data from small angles to moderate (or higher) resolution. Data can be taken under physiological conditions, or particular conditions (e.g., temperature) can be varied to extract fundamental association parameters (ΔH_ass, ΔS_ass).
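
    The core decomposition step can be sketched as follows; the toy two-species scattering curves, mass fractions and noise level are invented for illustration, and the search over association models is only indicated in a comment.

    ```python
    # Sketch: SVD of a concentration series of scattering curves (synthetic stand-in data).
    import numpy as np

    q = np.linspace(0.01, 0.3, 200)
    monomer = np.exp(-(q * 15) ** 2 / 3)              # toy Guinier-like curves
    dimer   = np.exp(-(q * 22) ** 2 / 3)
    fractions = np.array([[0.9, 0.1], [0.7, 0.3], [0.5, 0.5], [0.3, 0.7]])
    A = fractions @ np.vstack([monomer, dimer])       # mass-weighted linear combinations
    A = A + np.random.normal(0, 1e-3, A.shape)        # realistic noise

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    print("singular values:", s)                      # two dominant values -> two species
    # The significant basis vectors Vt[:2] can then be recombined, with coefficients
    # constrained by a trial association model, to reconstruct each oligomer's curve.
    ```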

  18. Analysis of the width-w non-adjacent form in conjunction with hyperelliptic curve cryptography and with lattices☆

    PubMed Central

    Krenn, Daniel

    2013-01-01

    In this work the number of occurrences of a fixed non-zero digit in the width-w non-adjacent forms of all elements of a lattice in some region (e.g. a ball) is analysed. As bases, expanding endomorphisms with eigenvalues of the same absolute value are allowed. Applications of the main result are on numeral systems with an algebraic integer as base. Those come from efficient scalar multiplication methods (Frobenius-and-add methods) in hyperelliptic curve cryptography, and the result is needed for analysing the running time of such algorithms. The counting result itself is an asymptotic formula, where its main term coincides with the full block length analysis. In its second order term a periodic fluctuation is exhibited. The proof follows Delange’s method. PMID:23805020
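
    For orientation, the sketch below computes the width-w non-adjacent form of an ordinary integer; the recoding rule is the standard one, but the paper's setting (lattice elements with an algebraic-integer base) is more general and is not reproduced here.

    ```python
    # Sketch: width-w non-adjacent form (NAF) of a rational integer.
    def width_w_naf(n, w):
        """Return the w-NAF digits of n, least significant digit first."""
        digits = []
        while n != 0:
            if n % 2 == 1:
                d = n % (1 << w)                  # n mod 2^w
                if d >= (1 << (w - 1)):           # choose the symmetric residue
                    d -= 1 << w
                n -= d
            else:
                d = 0
            digits.append(d)
            n //= 2
        return digits

    # Every nonzero digit is odd with |d| < 2^(w-1), and any w consecutive digits
    # contain at most one nonzero digit.
    naf = width_w_naf(239, 3)
    assert sum(d * 2 ** i for i, d in enumerate(naf)) == 239
    print(naf)    # [-1, 0, 0, 0, -1, 0, 0, 0, 1]
    ```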

  19. Analysis of the width-w non-adjacent form in conjunction with hyperelliptic curve cryptography and with lattices.

    PubMed

    Krenn, Daniel

    2013-06-17

    In this work the number of occurrences of a fixed non-zero digit in the width-w non-adjacent forms of all elements of a lattice in some region (e.g. a ball) is analysed. As bases, expanding endomorphisms with eigenvalues of the same absolute value are allowed. Applications of the main result are on numeral systems with an algebraic integer as base. Those come from efficient scalar multiplication methods (Frobenius-and-add methods) in hyperelliptic curve cryptography, and the result is needed for analysing the running time of such algorithms. The counting result itself is an asymptotic formula, where its main term coincides with the full block length analysis. In its second order term a periodic fluctuation is exhibited. The proof follows Delange's method.

  20. Large discrepancies in the excitation function data of the 68Zn(p, x)64Cu reaction: a possible explanation

    NASA Astrophysics Data System (ADS)

    Steyn, G. F.; Szelecsényi, F.; Kovács, Z.; van der Walt, T. N.; Dolley, S. G.; Vermeulen, C.

    2006-05-01

    The excitation function of the 68Zn(p, x)64Cu reaction was investigated in an attempt to clarify a serious discrepancy in the recently published data. New measurements based on both a weak γ-line of 1345.8 keV (0.47%) and the 511 keV annihilation radiation were performed after radiochemically separating the Cu from the Zn target matrix. In the case of the 511 keV measurements, the method of decay-curve analysis was employed, as the annihilation radiation is not specific to a particular radionuclide. The results from the two methods were found to be in excellent agreement. Simulations were also performed to test the method of 511 keV decay-curve analysis for the effects of possible intruder contaminants.
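
    A minimal sketch of the decay-curve idea is given below: the 511 keV count rate is fitted as a sum of exponentials with fixed, known half-lives so that the 64Cu component can be separated from another positron emitter. The contaminant half-life, activities and counting times are illustrative assumptions, not values from the paper.

    ```python
    # Sketch: decay-curve analysis of non-specific 511 keV annihilation counts.
    import numpy as np
    from scipy.optimize import curve_fit

    T_HALF_CU64 = 12.701            # h
    T_HALF_CONTAM = 3.33            # h, hypothetical contaminant (e.g. 61Cu)
    lam1, lam2 = np.log(2) / T_HALF_CU64, np.log(2) / T_HALF_CONTAM

    def model(t, a1, a2):
        return a1 * np.exp(-lam1 * t) + a2 * np.exp(-lam2 * t)

    t = np.linspace(0, 48, 25)                            # measurement times (h)
    counts = model(t, 5000.0, 800.0)
    counts = counts + np.random.normal(0, np.sqrt(counts))  # Poisson-like noise

    (a1, a2), _ = curve_fit(model, t, counts, p0=[4000, 500])
    print("64Cu component at start of counting (arb. units):", round(a1, 1))
    ```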

  1. Zirconium determination by cooling curve analysis during the pyroprocessing of used nuclear fuel

    NASA Astrophysics Data System (ADS)

    Westphal, B. R.; Price, J. C.; Bateman, K. J.; Marsden, K. C.

    2015-02-01

    An alternative to sampling and chemical analysis has been developed to monitor the concentration of zirconium in real time during the casting of uranium products from the pyroprocessing of used nuclear fuel. The method utilizes the solidification characteristics of the uranium products to determine zirconium levels based on standard cooling curve analyses and established binary phase diagram data. Numerous uranium products have been analyzed for their zirconium content and compared against measured zirconium data. From these data, the following equation was derived for the zirconium content of uranium products:

  2. Gaussian-Beam/Physical-Optics Design Of Beam Waveguide

    NASA Technical Reports Server (NTRS)

    Veruttipong, Watt; Chen, Jacqueline C.; Bathker, Dan A.

    1993-01-01

    In iterative method of designing wideband beam-waveguide feed for paraboloidal-reflector antenna, Gaussian-beam approximation alternated with more nearly exact physical-optics analysis of diffraction. Includes curved and straight reflectors guiding radiation from feed horn to subreflector. For iterative design calculations, curved mirrors mathematically modeled as thin lenses. Each distance Li is combined length of two straight-line segments intersecting at one of flat mirrors. Method useful for designing beam-waveguide reflectors or mirrors required to have diameters approximately less than 30 wavelengths at one or more intended operating frequencies.

  3. Computerized measurement and analysis of scoliosis: a more accurate representation of the shape of the curve.

    PubMed

    Jeffries, B F; Tarlton, M; De Smet, A A; Dwyer, S J; Brower, A C

    1980-02-01

    A computer program was created to identify and accept spatial data regarding the location of the thoracic and lumbar vertebral bodies on scoliosis films. With this information, the spine can be mathematically reconstructed and a scoliotic angle calculated. There was a 0.968 positive correlation between the computer and manual methods of measuring scoliosis. The computer method was more reproducible with a standard deviation of only 1.3 degrees. Computerized measurement of scoliosis also provides better evaluation of the true shape of the curve.

  4. Group Velocity Dispersion Curves from Wigner-Ville Distributions

    NASA Astrophysics Data System (ADS)

    Lloyd, Simon; Bokelmann, Goetz; Sucic, Victor

    2013-04-01

    With the widespread adoption of ambient noise tomography, and the increasing number of local earthquakes recorded worldwide due to dense seismic networks and many very dense temporary experiments, we consider it worthwhile to evaluate alternative methods to measure surface wave group velocity dispersion curves. Moreover, the increased computing power of even a simple desktop computer makes it feasible to routinely use methods other than the typically employed multiple filtering technique (MFT). To that end we perform tests with synthetic and observed seismograms using the Wigner-Ville distribution (WVD) frequency-time analysis, and compare dispersion curves measured with WVD and MFT with each other. Initial results suggest WVD to be at least as good as MFT at measuring dispersion, albeit at a greater computational expense. We therefore need to investigate if, and under which circumstances, WVD yields better dispersion curves than MFT, before considering routinely applying the method. As both MFT and WVD generally work well for teleseismic events and at longer periods, we explore how well the WVD method performs at shorter periods and for local events with smaller epicentral distances. Such dispersion information could potentially be beneficial for improving velocity structure resolution within the crust.

  5. Analysis of the variation of atmospheric electric field during solar events

    NASA Astrophysics Data System (ADS)

    Tacza, J.; Raulin, J. P.

    2016-12-01

    We present the capability of a new network of electric field mill sensors to monitor the atmospheric electric field at various locations in South America. The first task is to obtain a diurnal curve of atmospheric electric field variations under fair weather conditions, which we will consider as a reference curve. To accomplish this, we made daily, monthly, seasonal and annual averages. For all sensor locations, the results show significant similarities with the Carnegie curve. The Carnegie curve is the characteristic curve, in universal time, of the atmospheric electric field in fair weather, and it is thought to be related to the currents flowing in the global atmospheric electric circuit. Ultimately, we intend to study departures of the daily observations from the standard curve. These differences can be caused by solar, geophysical and atmospheric phenomena such as the solar activity cycle, solar flares and energetic charged particles, galactic cosmic rays, seismic activity and/or specific meteorological events. As an illustration we investigate solar effects on the atmospheric electric field observed at CASLEO (Lat. 31.798°S, Long. 69.295°W, Altitude: 2552 masl) by the method of superposed epoch analysis, between January 2010 and December 2015.

  6. A Novel Method to Reconstruct the Force Curve by Higher Harmonics of the First Two Flexural Modes in Frequency Modulation Atomic Force Microscope (FM-AFM).

    PubMed

    Zhang, Suoxin; Qian, Jianqiang; Li, Yingzi; Zhang, Yingxu; Wang, Zhenyu

    2018-06-04

    The atomic force microscope (AFM) is an ideal tool for measuring the physical and chemical properties of sample surfaces by reconstructing the force curve, which is of great significance to materials science, biology, and medical science. The frequency modulation atomic force microscope (FM-AFM) uses the frequency shift as the feedback signal, which gives it high force sensitivity and a true noncontact mode, and therefore great potential in the detection of biological samples. However, it is a challenge to establish the relationship between the cantilever properties observed in practice and the tip-sample interaction theoretically. Moreover, there is no existing method to reconstruct the force curve in FM-AFM that combines the higher harmonics and the higher flexural modes. This paper proposes a novel method by which a full force curve can be reconstructed from any order of higher harmonic of the first two flexural modes under any vibration amplitude in FM-AFM. Moreover, in the small amplitude regime, short-range forces are reconstructed more accurately by higher-harmonic analysis than by the fundamental harmonic using the Sader-Jarvis formula.

  7. Thermoluminescence glow curve deconvolution and trapping parameters determination of dysprosium doped magnesium borate glass

    NASA Astrophysics Data System (ADS)

    Salama, E.; Soliman, H. A.

    2018-07-01

    In this paper, thermoluminescence glow curves of gamma-irradiated magnesium borate glass doped with dysprosium were studied. The number of interfering peaks, and in turn the number of electron trap levels, is determined using the Repeated Initial Rise (RIR) method. At different heating rates (β), the glow curves were deconvoluted into two interfering peaks based on the results of the RIR method. Kinetic parameters such as the trap depth, kinetic order (b) and frequency factor (s) for each electron trap level are determined using the Peak Shape (PS) method. The obtained results indicate that the magnesium borate glass doped with dysprosium has two electron trap levels with average trap depths of 0.63 and 0.79 eV, respectively. These two traps follow second-order kinetics and are formed in the low-temperature region. The results of the glow curve analysis could be used to explain some observed properties, such as the high thermal fading and light sensitivity of this thermoluminescence material. In this work, systematic procedures to determine the kinetic parameters of any thermoluminescence material are successfully introduced.
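
    The initial-rise idea underlying the RIR method can be sketched as follows; the trap depth, frequency factor and temperature window are invented for illustration, and the repeated peak-cleaning cycles of the full RIR procedure are not shown.

    ```python
    # Sketch: initial-rise estimate of the trap depth E from the low-temperature
    # flank of a glow peak, where I(T) is approximately proportional to exp(-E/kT),
    # so the slope of ln(I) versus 1/T gives -E/k.
    import numpy as np

    K_BOLTZMANN = 8.617e-5                        # eV/K
    E_true, s = 0.79, 1e12                        # assumed trap depth (eV), frequency factor

    T = np.linspace(300, 380, 80)                 # initial-rise region only (K)
    I = s * np.exp(-E_true / (K_BOLTZMANN * T))   # initial-rise approximation

    slope, intercept = np.polyfit(1.0 / T, np.log(I), 1)
    print("estimated trap depth E = %.3f eV" % (-slope * K_BOLTZMANN))
    ```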

  8. CONFIRMATION OF HOT JUPITER KEPLER-41b VIA PHASE CURVE ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quintana, Elisa V.; Rowe, Jason F.; Caldwell, Douglas A.

    We present high precision photometry of Kepler-41, a giant planet in a 1.86 day orbit around a G6V star that was recently confirmed through radial velocity measurements. We have developed a new method to confirm giant planets solely from the photometric light curve, and we apply this method herein to Kepler-41 to establish the validity of this technique. We generate a full phase photometric model by including the primary and secondary transits, ellipsoidal variations, Doppler beaming, and reflected/emitted light from the planet. Third light contamination scenarios that can mimic a planetary transit signal are simulated by injecting a full range of dilution values into the model, and we re-fit each diluted light curve model to the light curve. The resulting constraints on the maximum occultation depth and stellar density, combined with stellar evolution models, rule out stellar blends and provide a measurement of the planet's mass, size, and temperature. We expect about two dozen Kepler giant planets can be confirmed via this method.

  9. Estimating sunspot number

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Reichmann, E. J.; Teuber, D. L.

    1984-01-01

    An empirical method is developed to predict certain parameters of future solar activity cycles. Sunspot cycle statistics are examined, and curve fitting and linear regression analysis techniques are utilized.

  10. Videodensitometric Methods for Cardiac Output Measurements

    NASA Astrophysics Data System (ADS)

    Mischi, Massimo; Kalker, Ton; Korsten, Erik

    2003-12-01

    Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation, with coefficients of determination larger than 0.95 and 0.99, respectively.

  11. Economic benefit evaluation for renewable energy transmitted by HVDC based on production simulation (PS) and analytic hierarchy process(AHP)

    NASA Astrophysics Data System (ADS)

    Zhang, Jinfang; Zheng, Kuan; Liu, Jun; Huang, Xinting

    2018-02-01

    In order to support renewable energy (RE) development in North and West China and keep RE accommodation at a reasonably high level, traditional HVDC operation curves need to change so as to follow the output characteristics of RE, which helps to reduce the curtailed electricity and the curtailment ratio of RE. In this paper, an economic benefit analysis method based on production simulation (PS) and the analytic hierarchy process (AHP) is proposed. PS is the basic tool used to analyse the chosen power system operation scenarios, and the AHP method gives a suitable comparison result among many candidate schemes. Based on four different transmission curve combinations, the related economic benefits have been evaluated by PS and AHP. The results and related indices show the efficiency of the suggested method, and it is validated that an HVDC operation curve that follows the RE output can reduce the RE curtailment level and improve economic operation.

  12. Analysis of the glow curve of SrB4O7:Dy compounds employing the GOT model

    NASA Astrophysics Data System (ADS)

    Ortega, F.; Molina, P.; Santiago, M.; Spano, F.; Lester, M.; Caselli, E.

    2006-02-01

    The glow curve of SrB4O7:Dy phosphors has been analysed with the general one trap model (GOT). To solve the differential equation describing the GOT model, a novel algorithm has been employed, which significantly reduces the deconvolution time with respect to the time required by usual integration algorithms, such as the Runge-Kutta method.

  13. Development and inter-laboratory validation of unlabeled probe melting curve analysis for detection of JAK2 V617F mutation in polycythemia vera.

    PubMed

    Wu, Zhiyuan; Yuan, Hong; Zhang, Xinju; Liu, Weiwei; Xu, Jinhua; Zhang, Wei; Guan, Ming

    2011-01-01

    JAK2 V617F, a somatic point mutation that leads to constitutive JAK2 phosphorylation and kinase activation, has been incorporated into the WHO classification and diagnostic criteria of myeloid neoplasms. Although various approaches such as restriction fragment length polymorphism, amplification refractory mutation system and real-time PCR have been developed for its detection, a generic rapid closed-tube method, which can be utilized on routine genetic testing instruments with stability and cost-efficiency, has not been described. Asymmetric PCR for detection of JAK2 V617F with a 3'-blocked unlabeled probe, saturating dye and subsequent melting curve analysis was performed on a Rotor-Gene® Q real-time cycler to establish the methodology. We compared this method to the existing amplification refractory mutation system and direct sequencing. Thereafter, the broad applicability of this unlabeled probe melting method was also validated on three diverse real-time systems (Roche LightCycler® 480, Applied Biosystems ABI® 7500 and Eppendorf Mastercycler® ep realplex) in two different laboratories. The unlabeled probe melting analysis could genotype the JAK2 V617F mutation explicitly, with a detection sensitivity of 3% mutation load. At a level of 5% mutation load, the intra- and inter-assay CVs of the probe-DNA heteroduplex (mutation/wild type) were 3.14%/3.55% and 1.72%/1.29%, respectively. The method could equally discriminate mutant from wild type samples on the other three real-time instruments. With its high detection sensitivity, unlabeled probe melting curve analysis is better suited to disclosing the JAK2 V617F mutation than conventional methodologies. Given the favorable inter- and intra-assay reproducibility, unlabeled probe melting analysis provides a generic mutation-detecting alternative for real-time instruments.
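
    A minimal sketch of the generic melting-curve step (negative-derivative peak picking) is shown below with synthetic fluorescence data; the transition temperatures, widths and peak-detection settings are illustrative assumptions, and the genotype-calling logic of the actual assay is not reproduced.

    ```python
    # Sketch: locate melting temperatures from -dF/dT of a synthetic melting curve.
    import numpy as np
    from scipy.signal import find_peaks

    T = np.linspace(60, 95, 700)                              # temperature (°C)
    def melt(Tm, width):                                      # one sigmoidal melt transition
        return 1.0 / (1.0 + np.exp((T - Tm) / width))
    F = 0.4 * melt(68.0, 0.8) + 0.6 * melt(84.0, 0.8)         # probe + amplicon transitions
    # Real fluorescence data would be smoothed (e.g. Savitzky-Golay) before differentiation.

    dF = -np.gradient(F, T)                                   # -dF/dT melting peaks
    peaks, _ = find_peaks(dF, height=0.05)
    print("melting temperatures (°C):", np.round(T[peaks], 1))  # ~68.0 and ~84.0
    ```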

  14. Use of lignin extracted from different plant sources as standards in the spectrophotometric acetyl bromide lignin method.

    PubMed

    Fukushima, Romualdo S; Kerley, Monty S

    2011-04-27

    A nongravimetric acetyl bromide lignin (ABL) method was evaluated to quantify lignin concentration in a variety of plant materials. The traditional approach to lignin quantification required extraction of lignin with acidic dioxane and its isolation from each plant sample to construct a standard curve via spectrophotometric analysis. Lignin concentration was then measured in pre-extracted plant cell walls. However, this presented a methodological complexity because extraction and isolation procedures are lengthy and tedious, particularly if there are many samples involved. This work was targeted to simplify lignin quantification. Our hypothesis was that any lignin, regardless of its botanical origin, could be used to construct a standard curve for the purpose of determining lignin concentration in a variety of plants. To test our hypothesis, lignins were isolated from a range of diverse plants and, along with three commercial lignins, standard curves were built and compared among them. Slopes and intercepts derived from these standard curves were close enough to allow utilization of a mean extinction coefficient in the regression equation to estimate lignin concentration in any plant, independent of its botanical origin. Lignin quantification by use of a common regression equation obviates the steps of lignin extraction, isolation, and standard curve construction, which substantially expedites the ABL method. Acetyl bromide lignin method is a fast, convenient analytical procedure that may routinely be used to quantify lignin.

  15. Guide to Using Onionskin Analysis Code (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Morzinski, Jerome Arthur

    2016-09-15

    This document is a guide to using R-code written for the purpose of analyzing onionskin experiments. We expect the user to be very familiar with statistical methods and the R programming language. For more details about onionskin experiments and the statistical methods mentioned in this document see Storlie, Fugate, et al. (2013). Engineers at LANL experiment with detonators and high explosives to assess performance. The experimental unit, called an onionskin, is a hemisphere consisting of a detonator and a booster pellet surrounded by explosive material. When the detonator explodes, a streak camera mounted above the pole of the hemisphere records when the shock wave arrives at the surface. The output from the camera is a two-dimensional image that is transformed into a curve that shows the arrival time as a function of polar angle. The statistical challenge is to characterize a baseline population of arrival time curves and to compare the baseline curves to curves from a new, so-called test series. The hope is that the new test series of curves is statistically similar to the baseline population.

  16. Use of laser ablation-inductively coupled plasma-time of flight-mass spectrometry to identify the elemental composition of vanilla and determine the geographic origin by discriminant function analysis.

    PubMed

    Hondrogiannis, Ellen M; Ehrlinger, Erin; Poplaski, Alyssa; Lisle, Meredith

    2013-11-27

    A total of 11 elements found in 25 vanilla samples from Uganda, Madagascar, Indonesia, and Papua New Guinea were measured by laser ablation-inductively coupled plasma-time-of-flight-mass spectrometry (LA-ICP-TOF-MS) for the purpose of collecting data that could be used to discriminate among the origins. Pellets were prepared from the samples, and elemental concentrations were obtained on the basis of external calibration curves created using five National Institute of Standards and Technology (NIST) standards and one Chinese standard with (13)C internal standardization. These curves were validated using NIST 1573a (tomato leaves) as a check standard. Discriminant analysis was used to successfully classify the vanilla samples by their origin. Our method illustrates the feasibility of using LA-ICP-TOF-MS with an external calibration curve for high-throughput screening analysis of spices.

  17. Extension of Ko Straight-Beam Displacement Theory to Deformed Shape Predictions of Slender Curved Structures

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Fleischer, Van Tran

    2011-01-01

    The Ko displacement theory originally developed for shape predictions of straight beams is extended to shape predictions of curved beams. The surface strains needed for shape predictions were analytically generated from finite-element nodal stress outputs. With the aid of finite-element displacement outputs, mathematical functional forms for curvature-effect correction terms are established and incorporated into straight-beam deflection equations for shape predictions of both cantilever and two-point supported curved beams. The newly established deflection equations for cantilever curved beams could provide quite accurate shape predictions for different cantilever curved beams, including the quarter-circle cantilever beam. Furthermore, the newly formulated deflection equations for two-point supported curved beams could provide accurate shape predictions for a range of two-point supported curved beams, including the full-circular ring. Accuracy of the newly developed curved-beam deflection equations is validated through shape prediction analysis of curved beams embedded in the windward shallow spherical shell of a generic crew exploration vehicle. A single-point collocation method for optimization of shape predictions is discussed in detail.

  18. Parameter setting for peak fitting method in XPS analysis of nitrogen in sewage sludge

    NASA Astrophysics Data System (ADS)

    Tang, Z. J.; Fang, P.; Huang, J. H.; Zhong, P. Y.

    2017-12-01

    Thermal decomposition is regarded as an important route for treating the increasing amounts of sewage sludge, but the high nitrogen content of sludge causes serious nitrogen-related problems, so determining the forms and content of nitrogen in sewage sludge becomes essential. In this study, XPSpeak 4.1 was used to investigate the functional forms of nitrogen in sewage sludge; a peak fitting method was adopted and the best-optimized parameters were determined. According to the results, the N 1s spectrum can be resolved into five peaks: pyridine-N (398.7±0.4 eV), pyrrole-N (400.5±0.3 eV), protein-N (400.4 eV), ammonium-N (401.1±0.3 eV) and nitrogen oxide-N (403.5±0.5 eV). Based on the experimental data obtained from elemental analysis and spectrophotometry, the optimum parameters of the curve fitting method were determined as: Tougaard background, FWHM of 1.2, and a 50% Lorentzian-Gaussian line shape. XPS can thus be used as a practical tool to analyse the nitrogen functional groups of sewage sludge, reflecting the real content of nitrogen in its different forms.
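
    A hedged sketch of this kind of N 1s curve fitting is given below: pseudo-Voigt (50% Lorentzian / 50% Gaussian) components at the binding energies listed above are fitted on a constant background, which stands in for the Tougaard background used in the study; the peak amplitudes and the synthetic spectrum are illustrative.

    ```python
    # Sketch: least-squares fit of four pseudo-Voigt components to a synthetic N 1s region.
    import numpy as np
    from scipy.optimize import curve_fit

    FWHM = 1.2
    def pseudo_voigt(x, x0, amp, eta=0.5, fwhm=FWHM):
        sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
        gauss = np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))
        lorentz = 1.0 / (1.0 + ((x - x0) / (fwhm / 2)) ** 2)
        return amp * (eta * lorentz + (1 - eta) * gauss)

    CENTERS = [398.7, 400.5, 401.1, 403.5]   # pyridine-, pyrrole/protein-, ammonium-, oxide-N
    def n1s_model(x, a1, a2, a3, a4, bg):
        return bg + sum(pseudo_voigt(x, c, a) for c, a in zip(CENTERS, (a1, a2, a3, a4)))

    be = np.linspace(396, 406, 500)                           # binding energy axis (eV)
    counts = n1s_model(be, 300, 800, 400, 120, 50) + np.random.normal(0, 10, be.size)

    popt, _ = curve_fit(n1s_model, be, counts, p0=[200, 500, 300, 100, 0])
    print("fitted component amplitudes:", np.round(popt[:4], 1))
    ```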

  19. A geometry-based approach to determining time-temperature superposition shifts in aging experiments

    DOE PAGES

    Maiti, Amitesh

    2015-12-21

    A powerful way to expand the time and frequency range of material properties is through a method called time-temperature superposition (TTS). Traditionally, TTS has been applied to the dynamical mechanical and flow properties of thermo-rheologically simple materials, where a well-defined master curve can be objectively and accurately obtained by appropriate shifts of curves at different temperatures. However, TTS analysis can also be useful in many other situations where there is scatter in the data and where the principle holds only approximately. In such cases, shifting curves can become a subjective exercise and can often lead to significant errors in the long-term prediction. This mandates the need for an objective method of determining TTS shifts. Here, we adopt a method based on minimizing the “arc length” of the master curve, which is designed to work in situations where there is overlapping data at successive temperatures. We examine the accuracy of the method as a function of increasing noise in the data, and explore the effectiveness of data smoothing prior to TTS shifting. In conclusion, we validate the method using existing experimental data on the creep strain of an aramid fiber and the powder coarsening of an energetic material.
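
    A minimal sketch of the arc-length criterion, under the assumption of two overlapping curves and a purely horizontal (log-time) shift, is given below; the synthetic response function, noise level and shift bounds are illustrative, not the paper's data.

    ```python
    # Sketch: choose the TTS shift that minimizes the arc length of the merged master curve.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    def response(logt):                                   # "true" master curve
        return 1.0 / (1.0 + np.exp(-(logt - 2.0)))

    logt_ref = np.linspace(0, 3, 40)                      # reference temperature
    logt_hot = np.linspace(0, 3, 40)                      # higher T, true shift = 1.5 decades
    y_ref = response(logt_ref) + rng.normal(0, 0.01, 40)
    y_hot = response(logt_hot + 1.5) + rng.normal(0, 0.01, 40)

    def arc_length(shift):
        x = np.concatenate([logt_ref, logt_hot + shift])
        y = np.concatenate([y_ref, y_hot])
        order = np.argsort(x)
        return np.sum(np.hypot(np.diff(x[order]), np.diff(y[order])))

    res = minimize_scalar(arc_length, bounds=(0.0, 3.0), method="bounded")
    print("estimated log(aT) shift:", round(res.x, 2))    # close to 1.5
    ```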

  20. Determination Plastic Properties of a Material by Spherical Indentation Base on the Representative Stress Approach

    NASA Astrophysics Data System (ADS)

    Budiarsa, I. N.; Gde Antara, I. N.; Dharma, Agus; Karnata, I. N.

    2018-04-01

    Under indentation, the material undergoes complex deformation. One of the most effective ways to analyse indentation has been the representative stress method. The concept, coupled with finite element (FE) modelling, has been used successfully in analysing sharp indenters. It is of great importance to extend this method to spherical indentation and the associated hardness systems. One particular case is the Rockwell B test, where the hardness is determined by two points on the P-h curve of a spherical indenter. In this case, an established link between material parameters and P-h curves can naturally lead to direct hardness estimation from the material parameters (e.g. yield stress (σy) and work hardening coefficient (n)). This could provide a useful tool for both research and industrial applications. Two methods for predicting the P-h curve in spherical indentation have been established: one uses a C1-C2 polynomial equation approach and the other a depth-based approach. Both approaches have been applied successfully. An effective method of representing the P-h curves using a normalized representative stress concept was established. The concept and methodology developed are used to predict hardness (HRB) values of materials through direct analysis and are validated with experimental data on selected samples of steel.

  1. Enrollment Projection within a Decision-Making Framework.

    ERIC Educational Resources Information Center

    Armstrong, David F.; Nunley, Charlene Wenckowski

    1981-01-01

    Two methods used to predict enrollment at Montgomery College in Maryland are compared and evaluated, and the administrative context in which they are used is considered. The two methods involve time series analysis (curve fitting) and indicator techniques (yield from components). (MSE)

  2. Foundation Analysis East Coast Air Combat Maneuvering Range Offshore Kitty Hawk, North Carolina.

    DTIC Science & Technology

    1976-09-01

    Table of contents (fragment): 1.0 Introduction; 1.1 Introduction; 1.2 Methods of Analysis; 1.3 Personnel Resumes. The Methods of Analysis section describes the computation of pipe pile capacity curves used to drive the piling to the desired penetration (Crest Engineering Inc., Tulsa, OK, September 1976).

  3. Analysis of several Boolean operation based trajectory generation strategies for automotive spray applications

    NASA Astrophysics Data System (ADS)

    Gao, Guoyou; Jiang, Chunsheng; Chen, Tao; Hui, Chun

    2018-05-01

    Industrial robots are widely used in various processes of surface manufacturing, such as thermal spraying. The established robot programming methods are highly time-consuming and not accurate enough to fulfil the demands of the actual market. There are many off-line programming methods developed to reduce the robot programming effort. This work introduces the principles of several Boolean-operation-based robot trajectory generation strategies on planar and curved surfaces. Since off-line programming software is widely used, facilitating the robot programming effort and improving the accuracy of the robot trajectory, the analysis in this work is based on secondary development of the off-line programming software Robot studio™. To meet the requirements of the automotive paint industry, this kind of software extension helps provide special functions according to user-defined operation parameters. The presented planning strategy generates the robot trajectory by moving an orthogonal surface according to the information of the coating surface; a series of intersection curves is then employed to generate the trajectory points. The simulation results show that the path curve created with this method is continuous and smooth, which meets the requirements of automotive spray applications.

  4. Qualitative and quantitative analysis of an additive element in metal oxide nanometer film using laser induced breakdown spectroscopy.

    PubMed

    Xiu, Junshan; Liu, Shiming; Sun, Meiling; Dong, Lili

    2018-01-20

    The photoelectric performance of metal-ion-doped TiO2 films can be improved by changing the compositions and concentrations of the additive elements. In this work, TiO2 films doped with different Sn concentrations were obtained by the hydrothermal method. Qualitative and quantitative analysis of the Sn element in the TiO2 films was achieved with laser induced breakdown spectroscopy (LIBS), with the calibration curves plotted accordingly. The photoelectric characteristics of the TiO2 films doped with different Sn contents were observed with UV-visible absorption spectra and J-V curves. All results showed that Sn doping red-shifts the optical absorption and improves the photoelectric properties of the TiO2 films. When the Sn doping concentration in the TiO2 films was 11.89 mmol/L, as calculated from the LIBS calibration curves, the current density of the film was the largest, indicating the best photoelectric performance. This indicates that LIBS is a feasible measurement method for qualitative and quantitative analysis of additive elements in metal oxide nanometer films.

  5. DSC, X-ray and FTIR studies of a gemfibrozil/dimethyl-β-cyclodextrin inclusion complex produced by co-grinding.

    PubMed

    Aigner, Z; Berkesi, O; Farkas, G; Szabó-Révész, P

    2012-01-05

    The steps of formation of an inclusion complex produced by the co-grinding of gemfibrozil and dimethyl-β-cyclodextrin were investigated by differential scanning calorimetry (DSC), X-ray powder diffractometry (XRPD) and Fourier transform infrared (FTIR) spectroscopy with curve-fitting analysis. The endothermic peak at 59.25°C reflecting the melting of gemfibrozil progressively disappeared from the DSC curves of the products on increase of the duration of co-grinding. The crystallinity of the samples also gradually decreased, and after 35 min of co-grinding the product was totally amorphous. Up to this co-grinding time, XRPD and FTIR investigations indicated a linear correlation between the cyclodextrin complexation and the co-grinding time. After co-grinding for 30 min, the ratio of complex formation did not increase. These studies demonstrated that co-grinding is a suitable method for the complexation of gemfibrozil with dimethyl-β-cyclodextrin. XRPD analysis revealed the amorphous state of the gemfibrozil-dimethyl-β-cyclodextrin product. FTIR spectroscopy with curve-fitting analysis may be useful as a semiquantitative analytical method for discriminating the molecular and amorphous states of gemfibrozil. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Decoding of the light changes in eclipsing Wolf-Rayet binaries. I. A non-classical approach to the solution of light curves

    NASA Astrophysics Data System (ADS)

    Perrier, C.; Breysacher, J.; Rauw, G.

    2009-09-01

    Aims: We present a technique to determine the orbital and physical parameters of eclipsing eccentric Wolf-Rayet + O-star binaries, where one eclipse is produced by the absorption of the O-star light by the stellar wind of the W-R star. Methods: Our method is based on the use of the empirical moments of the light curve that are integral transforms evaluated from the observed light curves. The optical depth along the line of sight and the limb darkening of the W-R star are modelled by simple mathematical functions, and we derive analytical expressions for the moments of the light curve as a function of the orbital parameters and the key parameters of the transparency and limb-darkening functions. These analytical expressions are then inverted in order to derive the values of the orbital inclination, the stellar radii, the fractional luminosities, and the parameters of the wind transparency and limb-darkening laws. Results: The method is applied to the SMC W-R eclipsing binary HD 5980, a remarkable object that underwent an LBV-like event in August 1994. The analysis refers to the pre-outburst observational data. A synthetic light curve based on the elements derived for the system allows a quality assessment of the results obtained.

  7. Maximum von Mises Stress in the Loading Environment of Mass Acceleration Curve

    NASA Technical Reports Server (NTRS)

    Glaser, Robert J.; Chen, Long Y.

    2006-01-01

    Method for calculating stress due to acceleration loading: 1) Part has been designed by FEA and hand calculation in one critical loading direction judged by the analyst; 2) Maximum stress can be due to loading in another direction; 3) Analysis procedure to be presented determines: a) The maximum Mises stress at any point; and b) The direction of maximum loading associated with the "stress". Concept of Mass Acceleration Curves (MAC): 1) Developed by JPL to perform preliminary structural sizing (i.e. Mariners, Voyager, Galileo, Pathfinder, MER,...MSL); 2) Acceleration of physical masses are bounded by a curve; 3) G-levels of vibro-acoustic and transient environments; 4) Convergent process before the couple loads cycle; and 5) Semi-empirical method to effectively bound the loads, not a simulation of the actual response.

  8. Inferring cardiac phase response curve in vivo

    NASA Astrophysics Data System (ADS)

    Pikovsky, Arkady; Kralemann, Bjoern; Fruehwirth, Matthias; Rosenblum, Michael; Kenner, Thomas; Schaefer, Jochen; Moser, Maximilian

    2014-03-01

    Characterizing the properties of biological oscillators with phase response curves (PRC) is one of the main theoretical tools in neuroscience, cardio-respiratory physiology, and chronobiology. We present a technique that allows the extraction of the PRC from a non-invasive observation of a system consisting of two interacting oscillators, in this case heartbeat and respiration, in its natural environment and under free-running conditions. We use this method to obtain the phase coupling functions describing cardio-respiratory interactions and the phase response curve of 17 healthy humans. We show at which phase the cardiac beat is susceptible to respiratory drive and extract the respiratory-related component of heart rate variability. This non-invasive method of bivariate data analysis for the determination of phase response curves of coupled oscillators may find application in other biological and physical systems.

  9. A Century of Enzyme Kinetic Analysis, 1913 to 2013

    PubMed Central

    Johnson, Kenneth A.

    2013-01-01

    This review traces the history and logical progression of methods for quantitative analysis of enzyme kinetics from the 1913 Michaelis and Menten paper to the application of modern computational methods today. Following a brief review of methods for fitting steady state kinetic data, modern methods are highlighted for fitting full progress curve kinetics based upon numerical integration of rate equations, including a re-analysis of the original Michaelis-Menten full time course kinetic data. Finally, several illustrations of modern transient state kinetic methods of analysis are shown which enable the elucidation of reactions occurring at the active sites of enzymes in order to relate structure and function. PMID:23850893
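
    As a small illustration of progress-curve fitting by numerical integration, the sketch below integrates the irreversible Michaelis-Menten rate equation and fits Vmax and Km to a simulated substrate time course; the rate law, initial concentration and noise level are assumptions for illustration, not data from the review.

    ```python
    # Sketch: fit Vmax and Km to a full progress curve by integrating d[S]/dt = -Vmax*S/(Km+S).
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    def progress(t, Vmax, Km, S0=100.0):
        sol = solve_ivp(lambda _t, s: -Vmax * s / (Km + s), (0, t.max()), [S0],
                        t_eval=t, rtol=1e-8)
        return sol.y[0]

    t = np.linspace(0, 60, 50)                                  # time (s)
    S_obs = progress(t, 5.0, 20.0) + np.random.normal(0, 0.5, t.size)

    (Vmax, Km), _ = curve_fit(progress, t, S_obs, p0=[3.0, 10.0],
                              bounds=(1e-6, np.inf))
    print("fitted Vmax = %.2f, Km = %.2f" % (Vmax, Km))
    ```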

  10. Importance of nasal clipping in screening investigations of flow volume curve.

    PubMed

    Yanev, I

    1992-01-01

    Comparative analysis of some basic lung indices obtained from a screening investigation of the flow volume curve by using two techniques, with a nose clip and without a nose clip, was made on a cohort of 86 workers in a factory shop for the production of bearings. We found no statistically significant differences between the indices obtained by the two techniques. Our study showed that the FVC and FEV1 obtained in workers without using nose clips were equal to or better than those obtained using nose clips in 60% of the workers. The reproducibility of the two methods was similar. The analysis of the data has shown that the flow volume curve investigation gives better results when performed without a nose clip, especially in industrial conditions.

  11. Deriving injury risk curves using survival analysis from biomechanical experiments.

    PubMed

    Yoganandan, Narayan; Banerjee, Anjishnu; Hsu, Fang-Chi; Bass, Cameron R; Voo, Liming; Pintar, Frank A; Gayzik, F Scott

    2016-10-03

    Injury risk curves from biomechanical experimental data analysis are used in automotive studies to improve crashworthiness and advance occupant safety. Metrics such as acceleration and deflection coupled with outcomes such as fractures and anatomical disruptions from impact tests are used in simple binary regression models. As an improvement, the International Standards Organization suggested a different approach. It was based on survival analysis. While probability curves for side-impact-induced thorax and abdominal injuries and frontal impact-induced foot-ankle-leg injuries are developed using this approach, deficiencies are apparent. The objective of this study is to present an improved, robust and generalizable methodology in an attempt to resolve these issues. It includes: (a) statistical identification of the most appropriate independent variable (metric) from a pool of candidate metrics, measured and/or derived during experimentation and analysis processes, based on the highest area under the receiver operating characteristic curve, (b) quantitative determination of the most optimal probability distribution based on the lowest Akaike information criterion, (c) supplementing the qualitative/visual inspection method for comparing the selected distribution with a non-parametric distribution with objective measures, (d) identification of overly influential observations using different methods, and (e) estimation of confidence intervals using techniques more appropriate to the underlying survival statistical model. These clear and quantified details can be easily implemented with commercial/open source packages. They can be used in retrospective analysis and prospective design of experiments, and in applications to different loading scenarios such as underbody blast events. The feasibility of the methodology is demonstrated using post mortem human subject experiments and 24 metrics associated with thoracic/abdominal injuries in side-impacts. Published by Elsevier Ltd.
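
    A hedged sketch of one ingredient of such an analysis, a parametric (here Weibull) survival fit with right-censored observations, is shown below; the synthetic injury metric, censoring scheme and the choice of the Weibull family are illustrative, and the full methodology (metric selection by ROC area, influence diagnostics, confidence intervals) is not reproduced.

    ```python
    # Sketch: maximum-likelihood Weibull injury risk curve with right-censored tests.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    rng = np.random.default_rng(2)
    x = weibull_min.rvs(c=3.0, scale=40.0, size=60, random_state=rng)   # injury metric values
    injured = rng.random(60) < 0.7                                      # non-injury = censored

    def neg_log_lik(params):
        shape, scale = np.exp(params)                   # keep parameters positive
        ll = weibull_min.logpdf(x[injured], c=shape, scale=scale).sum()
        ll += weibull_min.logsf(x[~injured], c=shape, scale=scale).sum()
        return -ll

    res = minimize(neg_log_lik, x0=np.log([1.0, np.median(x)]), method="Nelder-Mead")
    shape, scale = np.exp(res.x)
    print("risk of injury at metric = 40: %.2f" % weibull_min.cdf(40, c=shape, scale=scale))
    ```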

  12. An operational modal analysis method in frequency and spatial domain

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Zhang, Lingmi; Tamura, Yukio

    2005-12-01

    A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectral density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.

  13. Continuation Methods for Qualitative Analysis of Aircraft Dynamics

    NASA Technical Reports Server (NTRS)

    Cummings, Peter A.

    2004-01-01

    A class of numerical methods for constructing bifurcation curves for systems of coupled, non-linear ordinary differential equations is presented. Foundations are discussed, and several variations are outlined along with their respective capabilities. Appropriate background material from dynamical systems theory is presented.

  14. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    NASA Astrophysics Data System (ADS)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods, that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research, we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. Implementing space-filling curves in 3D city modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert's curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its clustering in 2D.
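
    To illustrate the basic idea of space-filling-curve keys for spatial clustering, the sketch below computes Morton (Z-order) keys for 3D points; this is a deliberately simpler stand-in for the 3D Hilbert mapping used in the paper, which preserves locality better but requires a longer implementation. The grid resolution and coordinates are invented.

    ```python
    # Sketch: map 3D grid coordinates to a 1D key by bit interleaving (Morton / Z-order).
    import numpy as np

    def morton_key_3d(ix, iy, iz, bits=10):
        """Interleave the bits of three integer grid coordinates into one key."""
        key = 0
        for b in range(bits):
            key |= ((ix >> b) & 1) << (3 * b)
            key |= ((iy >> b) & 1) << (3 * b + 1)
            key |= ((iz >> b) & 1) << (3 * b + 2)
        return key

    # Hypothetical building centroids (x, y, z) quantized to a 1024^3 grid
    pts = np.array([[10, 10, 2], [11, 10, 2], [900, 40, 5], [12, 11, 2]])
    keys = [morton_key_3d(*p) for p in pts]
    print(sorted(zip(keys, pts.tolist())))    # nearby buildings sort next to each other
    ```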

  15. Analysis of fast and slow responses in AC conductance curves for p-type SiC MOS capacitors

    NASA Astrophysics Data System (ADS)

    Karamoto, Yuki; Zhang, Xufang; Okamoto, Dai; Sometani, Mitsuru; Hatakeyama, Tetsuo; Harada, Shinsuke; Iwamuro, Noriyuki; Yano, Hiroshi

    2018-06-01

    We used a conductance method to investigate the interface characteristics of a SiO2/p-type 4H-SiC MOS structure fabricated by dry oxidation. It was found that the measured equivalent parallel conductance–frequency (Gp/ω–f) curves were not symmetric, showing that there existed both high- and low-frequency signals. We attributed high-frequency responses to fast interface states and low-frequency responses to near-interface oxide traps. To analyze the fast interface states, Nicollian’s standard conductance method was applied in the high-frequency range. By extracting the high-frequency responses from the measured Gp/ω–f curves, the characteristics of the low-frequency responses were reproduced by Cooper’s model, which considers the effect of near-interface traps on the Gp/ω–f curves. The corresponding density distribution of slow traps as a function of energy level was estimated.

  16. [Value of Immunohistochemical Methods in Detecting EML4-ALK Fusion Mutations: A Meta-analysis].

    PubMed

    Liu, Chang; Cai, Lu; Zhong, Diansheng; Wang, Jing

    2016-01-01

    The fusion between echinoderm microtubule-associated protein 4 (EML4) and anaplastic lymphoma kinase (ALK) rearrangement is present in approximately 5% of non-small cell lung cancer (NSCLC) patients. It has been regarded as another new target gene after epidermal growth factor receptor (EGFR) and K-ras. Reports have shown that the disease control rate can reach up to 80% in NSCLC patients with the EML4-ALK fusion gene after treatment with ALK inhibitors. Thus, exploring an accurate and rapid detection method is key to screening NSCLC patients with EML4-ALK expression. The aim of this study is to analyze the specificity and sensitivity of IHC in detecting EML4-ALK fusion mutations, to evaluate the accuracy and clinical value of this method, and then to provide a basis for individual molecular therapy of NSCLC patients. The PubMed database was used to search for all required documents; the retrieval deadline was February 25, 2015. The articles were then further screened according to the inclusion and exclusion criteria. Diagnostic test meta-analysis methods were used to analyze the sensitivity and specificity of the immunohistochemistry (IHC) method compared with the fluorescence in situ hybridization (FISH) method. Eleven studies comprising a total of 3,234 cases were included in the meta-analysis. The diagnostic odds ratio (DOR) was 1,135.00 (95%CI: 337.10-3,821.46); the area under the summary receiver operating characteristic (SROC) curve was 0.9923 (SE=0.0032), and Q* was 0.9644 (SE=0.0087). Immunohistochemical detection of the EML4-ALK fusion gene mutation with a specific antibody is feasible and has high sensitivity and specificity. IHC can be a simple and rapid way of screening for the EML4-ALK fusion gene mutation and has important clinical value.

  17. Bayesian analysis of stage-fall-discharge rating curves and their uncertainties

    NASA Astrophysics Data System (ADS)

    Mansanarez, V.; Le Coz, J.; Renard, B.; Lang, M.; Pierrefeu, G.; Vauchel, P.

    2016-09-01

    Stage-fall-discharge (SFD) rating curves are traditionally used to compute streamflow records at sites where the energy slope of the flow is variable due to variable backwater effects. We introduce a model with hydraulically interpretable parameters for estimating SFD rating curves and their uncertainties. Conventional power functions for channel and section controls are used. The transition to a backwater-affected channel control is computed based on a continuity condition, solved either analytically or numerically. The practical use of the method is demonstrated with two real twin-gauge stations, the Rhône River at Valence, France, and the Guthusbekken stream at station 0003.0033, Norway. Those stations are typical of a channel control and a section control, respectively, when backwater-unaffected conditions apply. The performance of the method is investigated through sensitivity analysis to prior information on controls and to observations (i.e., available gaugings) for the station of Valence. These analyses suggest that precisely identifying SFD rating curves requires adapted gauging strategy and/or informative priors. The Madeira River, one of the largest tributaries of the Amazon, provides a challenging case typical of large, flat, tropical river networks where bed roughness can also be variable in addition to slope. In this case, the difference in staff gauge reference levels must be estimated as another uncertain parameter of the SFD model. The proposed Bayesian method is a valuable alternative solution to the graphical and empirical techniques still proposed in hydrometry guidance and standards.
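
    A minimal sketch of the conventional building block referred to above, a single power-law control Q = a(h − b)^c fitted to gaugings, is given below with synthetic data; the full SFD model is Bayesian, uses two gauges and a fall term, none of which is reproduced here.

    ```python
    # Sketch: least-squares fit of a single power-law rating control to synthetic gaugings.
    import numpy as np
    from scipy.optimize import curve_fit

    def rating(h, a, b, c):
        # clip keeps the base positive while the offset b is being estimated
        return a * np.clip(h - b, 1e-9, None) ** c

    h = np.linspace(0.5, 4.0, 25)                                     # stage (m)
    Q = rating(h, 30.0, 0.2, 1.7) * np.exp(np.random.normal(0, 0.05, h.size))

    popt, _ = curve_fit(rating, h, Q, p0=[10.0, 0.0, 1.5])
    print("fitted a, b (cease-to-flow stage), c:", np.round(popt, 2))
    ```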

  18. Visual navigation using edge curve matching for pinpoint planetary landing

    NASA Astrophysics Data System (ADS)

    Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei

    2018-05-01

    Pinpoint landing is challenging for future Mars and asteroid exploration missions. Vision-based navigation schemes based on feature detection and matching are practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and utilize poor-performance measurements, which pose great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of the craters tracked from two sequential images are utilized to determine the relative attitude and position of the lander through a normalized method. Then, considering the error accumulation of relative navigation, a method is developed to integrate the crater-based relative navigation method with a crater-based absolute navigation method that identifies craters using a georeferenced database, for continuous estimation of absolute states. In addition, expressions for the relative state estimation bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance, which hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.

  19. Analysis of HIV using a high resolution melting (HRM) diversity assay: automation of HRM data analysis enhances the utility of the assay for analysis of HIV incidence.

    PubMed

    Cousins, Matthew M; Swan, David; Magaret, Craig A; Hoover, Donald R; Eshleman, Susan H

    2012-01-01

    HIV diversity may be a useful biomarker for discriminating between recent and non-recent HIV infection. The high resolution melting (HRM) diversity assay was developed to quantify HIV diversity in viral populations without sequencing. In this assay, HIV diversity is expressed as a single numeric HRM score that represents the width of a melting peak. HRM scores are highly associated with diversity measures obtained with next generation sequencing. In this report, a software package, the HRM Diversity Assay Analysis Tool (DivMelt), was developed to automate calculation of HRM scores from melting curve data. DivMelt uses computational algorithms to calculate HRM scores by identifying the start (T1) and end (T2) melting temperatures for a DNA sample and subtracting them (T2 - T1 =  HRM score). DivMelt contains many user-supplied analysis parameters to allow analyses to be tailored to different contexts. DivMelt analysis options were optimized to discriminate between recent and non-recent HIV infection and to maximize HRM score reproducibility. HRM scores calculated using DivMelt were compared to HRM scores obtained using a manual method that is based on visual inspection of DNA melting curves. HRM scores generated with DivMelt agreed with manually generated HRM scores obtained from the same DNA melting data. Optimal parameters for discriminating between recent and non-recent HIV infection were identified. DivMelt provided greater discrimination between recent and non-recent HIV infection than the manual method. DivMelt provides a rapid, accurate method of determining HRM scores from melting curve data, facilitating use of the HRM diversity assay for large-scale studies.

  20. Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity

    PubMed Central

    Wittenberg, Leah A.; Jonsson, Nina J.; Chan, RV Paul; Chiang, Michael F.

    2014-01-01

    Presence of plus disease in retinopathy of prematurity (ROP) is an important criterion for identifying treatment-requiring ROP. Plus disease is defined by a standard published photograph selected over 20 years ago by expert consensus. However, diagnosis of plus disease has been shown to be subjective and qualitative. Computer-based image analysis, using quantitative methods, has potential to improve the objectivity of plus disease diagnosis. The objective was to review the published literature involving computer-based image analysis for ROP diagnosis. The PubMed and Cochrane library databases were searched for the keywords “retinopathy of prematurity” AND “image analysis” AND/OR “plus disease.” Reference lists of retrieved articles were searched to identify additional relevant studies. All relevant English-language studies were reviewed. There are four main computer-based systems, ROPtool (AU ROC curve, plus tortuosity 0.95, plus dilation 0.87), RISA (AU ROC curve, arteriolar TI 0.71, venular diameter 0.82), Vessel Map (AU ROC curve, arteriolar dilation 0.75, venular dilation 0.96), and CAIAR (AU ROC curve, arteriole tortuosity 0.92, venular dilation 0.91), attempting to objectively analyze vessel tortuosity and dilation in plus disease in ROP. Some of them show promise for identification of plus disease using quantitative methods. This has potential to improve the diagnosis of plus disease, and may contribute to the management of ROP using both traditional binocular indirect ophthalmoscopy and image-based telemedicine approaches. PMID:21366159

  1. A Comparison of a Machine Learning Model with EuroSCORE II in Predicting Mortality after Elective Cardiac Surgery: A Decision Curve Analysis.

    PubMed

    Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril

    2017-01-01

    The benefits of cardiac surgery are sometimes difficult to predict, and the decision to operate on a given individual is complex. Machine learning and decision curve analysis (DCA) are recent methods developed to create and evaluate prediction models. We conducted a retrospective cohort study using a prospectively collected database (December 2005 to December 2012) from a cardiac surgical center at a university hospital. Models predicting in-hospital mortality after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC analysis and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years (standard deviation 14.4), and mean EuroSCORE II was 3.7 (4.8)%. The area under the ROC curve (95% CI) for the machine learning model (0.795 (0.755-0.834)) was significantly higher than that of EuroSCORE II or the logistic regression model (0.737 (0.691-0.783) and 0.742 (0.698-0.785), respectively; p < 0.0001). Decision curve analysis showed that, in this single-center study, the machine learning model had a greater net benefit across probability thresholds. According to both ROC analysis and DCA, the machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results support the use of machine learning methods in the field of medical prediction.
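
    For readers unfamiliar with how the DCA comparison works, the sketch below computes a model's net benefit curve from predicted probabilities and observed outcomes using the standard net benefit formula; it is not the authors' code, and the simulated outcomes and probabilities are hypothetical (only the ~6.3% event rate is taken from the abstract).

    ```python
    import numpy as np

    def net_benefit(y_true, y_prob, thresholds):
        """Net benefit at each threshold probability pt: NB(pt) = TP/n - FP/n * pt/(1 - pt)."""
        y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
        n = len(y_true)
        out = []
        for pt in thresholds:
            pred_pos = y_prob >= pt
            tp = np.sum(pred_pos & (y_true == 1))
            fp = np.sum(pred_pos & (y_true == 0))
            out.append(tp / n - (fp / n) * pt / (1.0 - pt))
        return np.array(out)

    # Hypothetical predictions for 1000 patients with a ~6.3% mortality rate.
    rng = np.random.default_rng(0)
    y = rng.binomial(1, 0.063, 1000)
    p_model = np.clip(0.05 + 0.4 * y + 0.1 * rng.standard_normal(1000), 0.001, 0.999)

    thresholds = np.arange(0.01, 0.50, 0.01)
    nb_model = net_benefit(y, p_model, thresholds)
    nb_treat_all = y.mean() - (1 - y.mean()) * thresholds / (1 - thresholds)
    # A model is clinically useful where its curve lies above both "treat all" and zero.
    ```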

  2. Buckling analysis for anisotropic laminated plates under combined inplane loads

    NASA Technical Reports Server (NTRS)

    Viswanathan, A. V.; Tamekuni, M.; Baker, L. L.

    1974-01-01

    The buckling analysis presented considers rectangular flat or curved general laminates subjected to combined inplane normal and shear loads. Linear theory is used in the analysis. All prebuckling deformations and any initial imperfections are ignored. The analysis method can be readily extended to longitudinally stiffened structures subjected to combined inplane normal and shear loads.

  3. On a framework for generating PoD curves assisted by numerical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan

    2015-03-31

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.

  4. On a framework for generating PoD curves assisted by numerical simulations

    NASA Astrophysics Data System (ADS)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar

    2015-03-01

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
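
    As a minimal illustration of how a PoD curve can be estimated from inspection data, the sketch below performs a conventional hit/miss maximum-likelihood fit of a log-normal PoD model and reports the a90 size; this is a generic textbook procedure, not the Bayesian, simulation-assisted framework of the paper, and all data are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Hypothetical hit/miss data: defect sizes (mm) and detection outcomes (1 = hit).
    sizes = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])
    hits = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

    def neg_log_likelihood(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                       # keep sigma positive
        p = norm.cdf((np.log(sizes) - mu) / sigma)      # PoD(a) for a log-normal model
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(hits * np.log(p) + (1 - hits) * np.log(1 - p))

    res = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    a90 = np.exp(mu + sigma * norm.ppf(0.90))           # size detected with 90% probability
    print(f"mu = {mu:.3f}, sigma = {sigma:.3f}, a90 = {a90:.2f} mm")
    ```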

  5. Soil Conservation Service Curve Number method: How to mend a wrong soil moisture accounting procedure?

    NASA Astrophysics Data System (ADS)

    Michel, Claude; Andréassian, Vazken; Perrin, Charles

    2005-02-01

    This paper unveils major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure. Our findings are based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation. It is shown that several flaws plague the original SCS-CN procedure, the most important one being a confusion between intrinsic parameter and initial condition. A change of parameterization and a more complete assessment of the initial condition lead to a renewed SCS-CN procedure, while keeping the acknowledged efficiency of the original method.
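
    For context, the sketch below implements the textbook event form of the SCS-CN equation, not the renewed continuous soil moisture accounting procedure proposed by the authors; the rainfall depth and curve number are illustrative.

    ```python
    def scs_cn_runoff(p_mm, cn, lambda_ia=0.2):
        """Direct runoff Q (mm) for an event rainfall P (mm) using the classic SCS-CN equation:
        S = 25400/CN - 254, Ia = lambda * S, Q = (P - Ia)**2 / (P - Ia + S) for P > Ia."""
        s = 25400.0 / cn - 254.0
        ia = lambda_ia * s
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    print(scs_cn_runoff(p_mm=60.0, cn=80))   # roughly 20 mm of direct runoff
    ```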

  6. Examining Classification Criteria: A Comparison of Three Cut Score Methods

    ERIC Educational Resources Information Center

    DiStefano, Christine; Morgan, Grant

    2011-01-01

    This study compared 3 different methods of creating cut scores for a screening instrument, T scores, receiver operating characteristic curve (ROC) analysis, and the Rasch rating scale method (RSM), for use with the Behavioral and Emotional Screening System (BESS) Teacher Rating Scale for Children and Adolescents (Kamphaus & Reynolds, 2007).…

  7. Note: Eddy current displacement sensors independent of target conductivity.

    PubMed

    Wang, Hongbo; Li, Wei; Feng, Zhihua

    2015-01-01

    Eddy current sensors (ECSs) are widely used for non-contact displacement measurement. In this note, the quantitative error of an ECS caused by target conductivity was analyzed using a complex image method. The response curves (L-x) of the ECS with different targets were similar and could be overlapped by shifting the curves in the x direction by √2δ/2. Both finite element analysis and experiments agree well with the theoretical analysis, which indicates that the measurement error of high-precision ECSs caused by target conductivity can be completely eliminated and that ECSs can measure different materials precisely without calibration.

  8. Fully automated spectrometric protocols for determination of antioxidant activity: advantages and disadvantages.

    PubMed

    Sochor, Jiri; Ryvolova, Marketa; Krystofova, Olga; Salas, Petr; Hubalek, Jaromir; Adam, Vojtech; Trnkova, Libuse; Havel, Ladislav; Beklova, Miroslava; Zehnalek, Josef; Provaznik, Ivo; Kizek, Rene

    2010-11-29

    The aim of this study was to describe the behaviour, kinetics, time courses and limitations of six different fully automated spectrometric methods: DPPH, TEAC, FRAP, DMPD, Free Radicals and Blue CrO5. Absorption curves were measured and absorbance maxima were found. All methods were calibrated using the standard compounds Trolox® and/or gallic acid. Calibration curves were determined (relative standard deviations were within the range of 1.5 to 2.5%). The obtained characteristics were compared and discussed. Moreover, the data obtained were used to optimize and automate all of the mentioned protocols. The automatic analyzer allowed us to analyse a larger set of samples simultaneously, to decrease the measurement time, to eliminate errors and to provide data of higher quality than manual analysis. The total time of analysis for one sample was decreased to 10 min for all six methods. In contrast, the total time of manual spectrometric determination was approximately 120 min. The obtained data showed good correlations between the studied methods (R = 0.97-0.99).
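
    A minimal sketch of the calibration step shared by these protocols, assuming a linear response to the Trolox standard; the concentrations, absorbance changes and function name are hypothetical and are not taken from the study.

    ```python
    import numpy as np

    # Hypothetical Trolox calibration points: concentration (uM) vs. absorbance change.
    conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
    abs_change = np.array([0.00, 0.11, 0.23, 0.45, 0.91])

    slope, intercept = np.polyfit(conc, abs_change, 1)      # linear calibration curve
    r = np.corrcoef(conc, abs_change)[0, 1]

    def trolox_equivalent(sample_abs_change):
        """Convert a sample's absorbance change to a Trolox-equivalent concentration (uM)."""
        return (sample_abs_change - intercept) / slope

    print(f"R = {r:.4f}; 0.30 AU corresponds to {trolox_equivalent(0.30):.0f} uM Trolox")
    ```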

  9. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    NASA Astrophysics Data System (ADS)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    To address the lack of an applicable analysis method when three-dimensional laser scanning technology is applied to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on point cloud normal vectors is proposed. Firstly, a kd-tree is used to establish the topological relation. Datum points are detected by tracking the point cloud normal vectors, which are determined from the normal vectors of local planes. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial points are calculated from the fitted curve, and the deformation information is analysed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain complete information about the monitored object quickly and comprehensively, and accurately reflects the deformation of the datum features.
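
    The normal vector estimation underlying the method can be sketched as a local principal component analysis of each point's neighbourhood, as below; the neighbourhood size and the synthetic planar patch are assumptions, and the datum tracking and B-spline fitting steps of the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals(points, k=12):
        """Estimate a unit normal for each point from the PCA of its k nearest neighbours."""
        tree = cKDTree(points)
        normals = np.empty_like(points)
        for i, p in enumerate(points):
            _, idx = tree.query(p, k=k)
            nbrs = points[idx] - points[idx].mean(axis=0)
            # The singular vector of the smallest singular value spans the local plane normal.
            _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
            normals[i] = vt[-1]
        return normals

    # Hypothetical usage on a noisy, nearly horizontal patch: normals should be near (0, 0, +-1).
    pts = np.random.default_rng(0).random((500, 3))
    pts[:, 2] *= 0.01
    print(estimate_normals(pts)[:3])
    ```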

  10. Model of Numerical Spatial Classification for Sustainable Agriculture in Badung Regency and Denpasar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Trigunasih, N. M.; Lanya, I.; Subadiyasa, N. N.; Hutauruk, J.

    2018-02-01

    Increasing population numbers and human activity to meet subsistence needs greatly affect the utilization of land resources. The land needed for these activities continues to grow while the availability of land is limited, so land use changes; the resulting problems are land degradation and the conversion of agricultural land to non-agricultural uses. The objectives of this research were: (1) to determine the parameters of a spatial numerical classification for sustainable food agriculture in Badung Regency and Denpasar City; (2) to project the food balance of Badung Regency and Denpasar City in 2020, 2030, 2040, and 2050; (3) to specify the function of the spatial numerical classification in building a zonation model of sustainable agricultural land in Badung Regency and Denpasar City; and (4) to determine an appropriate model for protecting sustainable agricultural land in Badung and Denpasar in both spatial and temporal terms. The quantitative methods included a survey, soil analysis, spatial database development, geoprocessing analysis (overlay and proximity analysis), interpolation of raster digital elevation model data, and visualization (cartography); the qualitative methods consisted of literature study and interviews. Eleven parameters were observed for Badung Regency and nine for Denpasar City. The numerical classification analysis used the standard deviation and mean of the population data, and the relationship between rice fields and the projected food balance was modelled. The number of numerical classification parameters differed between the rural area (Badung) and the urban area (Denpasar), with fewer parameters in the urban area. Based on the weighting and scoring of the numerical classification, five models were produced, divided into three zones (sustainable, buffer, and convertible) in Denpasar and Badung. The population curves of the parameter analysis were normal in Denpasar but abnormal in Badung; modelling was therefore carried out for the whole region in Denpasar and district by district in Badung. In Badung (rural), the modelled relationship between land and the projected food balance is expressed in terms of sustainable land area, whereas in Denpasar (urban) it is expressed through its linkage to green open space in the Denpasar spatial plan for 2011-2031.

  11. Interpretable functional principal component analysis.

    PubMed

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.

  12. Numerical equilibrium analysis for structured consumer resource models.

    PubMed

    de Roos, A M; Diekmann, O; Getto, P; Kirkilionis, M A

    2010-02-01

    In this paper, we present methods for a numerical equilibrium and stability analysis for models of a size-structured population competing for an unstructured resource. We concentrate on cases where two model parameters are free, and thus existence boundaries for equilibria and stability boundaries can be defined in the (two-parameter) plane. We numerically trace these implicitly defined curves using alternating tangent prediction and Newton correction. Evaluation of the maps defining the curves involves integration over individual size and individual survival probability (and their derivatives) as functions of individual age. Such ingredients are often defined as solutions of ODEs, i.e., in general only implicitly. In our case, the right-hand sides of these ODEs feature discontinuities that are caused by an abrupt change of behavior at the size where juveniles are assumed to turn adult. So, we combine the numerical solution of these ODEs with curve tracing methods. We have implemented the algorithms for "Daphnia consuming algae" models in C code. The results obtained by way of this implementation are shown in the form of graphs.
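
    The curve-tracing step (tangent prediction followed by Newton correction with an arclength constraint) can be illustrated on a toy implicitly defined curve, as below; this generic pseudo-arclength sketch is not the authors' C implementation, and the test function is hypothetical.

    ```python
    import numpy as np

    def trace_curve(F, J, x0, p0, step=0.05, n_steps=100):
        """Trace F(x, p) = 0 in the (x, p) plane by alternating tangent prediction
        and Newton correction, keeping a consistent orientation along the curve."""
        pts = [np.array([x0, p0])]
        tangent = np.array([0.0, 1.0])
        for _ in range(n_steps):
            y = pts[-1]
            g = J(*y)                                   # 1x2 Jacobian [dF/dx, dF/dp]
            t = np.array([-g[1], g[0]])
            t /= np.linalg.norm(t)
            if np.dot(t, tangent) < 0:
                t = -t
            tangent = t
            y_new = y + step * t                        # tangent predictor
            for _ in range(20):                         # Newton corrector
                g = J(*y_new)
                A = np.vstack([g, t])
                r = np.array([F(*y_new), np.dot(t, y_new - y) - step])
                if np.linalg.norm(r) < 1e-12:
                    break
                y_new = y_new - np.linalg.solve(A, r)
            pts.append(y_new)
        return np.array(pts)

    # Hypothetical test: trace the unit circle x**2 + p**2 - 1 = 0 starting at (1, 0).
    F = lambda x, p: x ** 2 + p ** 2 - 1.0
    J = lambda x, p: np.array([2.0 * x, 2.0 * p])
    print(trace_curve(F, J, 1.0, 0.0)[:3])
    ```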

  13. Evidence for Periodicity in 43 year-long Monitoring of NGC 5548

    NASA Astrophysics Data System (ADS)

    Bon, E.; Zucker, S.; Netzer, H.; Marziani, P.; Bon, N.; Jovanović, P.; Shapovalova, A. I.; Komossa, S.; Gaskell, C. M.; Popović, L. Č.; Britzen, S.; Chavushyan, V. H.; Burenkov, A. N.; Sergeev, S.; La Mura, G.; Valdés, J. R.; Stalevski, M.

    2016-08-01

    We present an analysis of 43 years (1972 to 2015) of spectroscopic observations of the Seyfert 1 galaxy NGC 5548. This includes 12 years of new unpublished observations (2003 to 2015). We compiled about 1600 Hβ spectra and analyzed the long-term spectral variations of the 5100 Å continuum and the Hβ line. Our analysis is based on standard procedures, including the Lomb-Scargle method, which is known to be of limited use for such heterogeneous data sets, and a new method developed specifically for this project that is more robust and reveals a ˜5700 day periodicity in the continuum light curve, the Hβ light curve, and the radial velocity curve of the red wing of the Hβ line. The data are consistent with orbital motion inside the broad emission line region of the source. We discuss several possible mechanisms that can explain this periodicity, including orbiting dusty and dust-free clouds, a binary black hole system, tidal disruption events, and the effect of an orbiting star periodically passing through an accretion disk.
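
    For orientation, the sketch below runs a standard Lomb-Scargle periodogram on a synthetic, unevenly sampled light curve containing a ~5700-day signal; it uses scipy.signal.lombscargle and invented data, not the NGC 5548 observations or the new, more robust method developed for the project.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    # Hypothetical unevenly sampled light curve with a ~5700-day period.
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 43 * 365.25, 1600))             # observation epochs (days)
    flux = np.sin(2 * np.pi * t / 5700.0) + 0.3 * rng.standard_normal(t.size)

    periods = np.linspace(500.0, 10000.0, 2000)                   # trial periods (days)
    ang_freq = 2 * np.pi / periods
    power = lombscargle(t, flux - flux.mean(), ang_freq, normalize=True)
    print(f"best period = {periods[np.argmax(power)]:.0f} days")
    ```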

  14. Differentiation of five enterohepatic Helicobacter species by nested PCR with high-resolution melting curve analysis.

    PubMed

    Wu, Miaoli; Rao, Dan; Zhu, Yujun; Wang, Jing; Yuan, Wen; Zhang, Yu; Huang, Ren; Guo, Pengju

    2017-04-01

    Enterohepatic Helicobacter species (EHS) are widespread in rodent species around the world. Several studies have demonstrated that infection with EHS can interfere with the outcomes of animal experiments in cancer research and significantly influence the study results. Therefore, it is essential to establish rapid detection and identification of EHS for biomedical research using laboratory rodents. Our study aimed to develop a rapid and sensitive method to detect and distinguish five enterohepatic Helicobacter species. Nested PCR followed by high-resolution melting curve analysis (HRM) was developed for the identification of H. bilis, H. rodentium, H. muridarum, H. typhlonius, and H. hepaticus. To validate the accuracy of the nested PCR-HRM analysis, quantitative real-time PCR methods for the five different enterohepatic Helicobacter species were developed. A total of 50 cecal samples were tested using both the nested PCR-HRM analysis and the qPCR method. The nested PCR-HRM method could distinguish the five enterohepatic Helicobacter species by their different melting temperatures. The melting curves were characterized by peaks at 78.7 ± 0.12°C for H. rodentium, 80.51 ± 0.09°C for H. bilis, 81.6 ± 0.1°C for H. typhlonius, 82.11 ± 0.18°C for H. muridarum, and 82.95 ± 0.09°C for H. hepaticus. The nested PCR-HRM assay is a simple, rapid, and cost-effective assay. This assay could be a useful tool for molecular epidemiological studies of enterohepatic Helicobacter infection and an attractive alternative for genotyping of enterohepatic Helicobacter species. © 2016 John Wiley & Sons Ltd.

  15. Ensemble Learning Method for Outlier Detection and its Application to Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Chen, Wesley

    2016-09-01

    Outlier detection is necessary for automated data analysis, with specific applications spanning almost every domain from financial markets to epidemiology to fraud detection. We introduce a novel mixture-of-experts outlier detection model, which uses a dynamically trained, weighted network of five distinct outlier detection methods. After dimensionality reduction, individual outlier detection methods score each data point for “outlierness” in this new feature space. Our model then uses dynamically trained parameters to weigh the scores of each method, allowing for a finalized outlier score. We find that the mixture-of-experts model performs, on average, better than any single expert model in identifying both artificially and manually picked outliers. This mixture model is applied to a data set of astronomical light curves, after dimensionality reduction via time series feature extraction. Our model was tested using three fields from the MACHO catalog and generated a list of anomalous candidates. We confirm that the outliers detected using this method belong to rare classes, like Novae, He-burning, and red giant stars; other outlier light curves identified have no available information associated with them. To elucidate their nature, we created a website containing the light-curve data and information about these objects. Users can attempt to classify the light curves, give conjectures about their identities, and sign up for follow-up messages about the progress made on identifying these objects. These user-submitted data can be used to further train our mixture-of-experts model. Our code is publicly available to all who are interested.
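
    A simplified sketch of the mixture-of-experts idea, assuming fixed rather than dynamically trained weights and using three off-the-shelf detectors from scikit-learn; the detectors, weights and synthetic feature matrix are assumptions, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import IsolationForest
    from sklearn.neighbors import LocalOutlierFactor
    from sklearn.svm import OneClassSVM

    def ensemble_outlier_scores(X, weights=(0.4, 0.3, 0.3)):
        """Combine several outlier detectors into one weighted score (higher = more anomalous)."""
        X2 = PCA(n_components=min(10, X.shape[1])).fit_transform(X)   # dimensionality reduction
        scores = [
            -IsolationForest(random_state=0).fit(X2).score_samples(X2),
            -LocalOutlierFactor().fit(X2).negative_outlier_factor_,
            -OneClassSVM(nu=0.05).fit(X2).score_samples(X2),
        ]
        # Rank-normalise each detector so the scores are comparable before weighting.
        ranks = [np.argsort(np.argsort(s)) / (len(s) - 1.0) for s in scores]
        return np.average(np.vstack(ranks), axis=0, weights=weights)

    # Hypothetical usage on feature vectors extracted from light curves.
    X = np.random.default_rng(1).normal(size=(300, 20))
    print(ensemble_outlier_scores(X)[:5])
    ```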

  16. Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors

    PubMed Central

    Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka

    2016-01-01

    In this paper, we propose a blood pressure calculation and measurement method that uses a fiber Bragg grating (FBG) sensor. There are several points on the surface of the human body at which the pulse can be measured, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by partial least squares (PLS) regression analysis of a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the calculated blood pressure when using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people. PMID:28036015

  17. Influence of Individual Differences on the Calculation Method for FBG-Type Blood Pressure Sensors.

    PubMed

    Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun; Kobayashi, Yuka

    2016-12-28

    In this paper, we propose a blood pressure calculation and measurement method that uses a fiber Bragg grating (FBG) sensor. There are several points on the surface of the human body at which the pulse can be measured, and when an FBG sensor is located at any of these points, the pulse wave signal can be measured. The measured waveform is similar to the acceleration pulse wave. The pulse wave signal changes depending on several factors, including whether or not the individual is healthy and/or elderly. The measured pulse wave signal can be used to calculate the blood pressure using a calibration curve, which is constructed by partial least squares (PLS) regression analysis of a reference blood pressure and the pulse wave signal. In this paper, we focus on the influence of individual differences on the blood pressure calculated from each calibration curve. In our study, the blood pressures calculated from the individual and overall calibration curves were compared, and our results show that the blood pressure calculated from the overall calibration curve had a lower measurement accuracy than that based on an individual calibration curve. We also found that the influence of individual differences on the calculated blood pressure when using the FBG sensor method was very low. Therefore, the FBG sensor method that we developed for measuring blood pressure was found to be suitable for use by many people.
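
    The calibration curve construction described above is essentially a PLS regression from pulse-wave segments to a reference blood pressure; a minimal sketch with synthetic data follows (the number of components and the simulated signals are assumptions, not the authors' data).

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical data: each row is one pulse-wave segment, each target a reference
    # systolic blood pressure (mmHg).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 100))                      # 200 segments x 100 samples
    y = 110.0 + 10.0 * X[:, :5].sum(axis=1) + rng.normal(scale=3.0, size=200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)          # the calibration curve
    y_pred = pls.predict(X_te).ravel()
    print(f"mean absolute error: {np.abs(y_pred - y_te).mean():.1f} mmHg")
    ```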

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Li, X; Liu, G

    Purpose: We investigate the dosimetric impact on pencil beam scanning (PBS) proton treatment plans generated with CT calibration curves from four different CT scanners and one averaged ‘global’ CT calibration curve. Methods: The four CT scanners are located at three different hospital locations within the same health system. CT density calibration curves were collected from these scanners using the same CT calibration phantom and acquisition parameters. Mass density to HU value tables were then commissioned in a commercial treatment planning system. Five disease sites were chosen for dosimetric comparison: brain, lung, head and neck, adrenal, and prostate. Three types of PBS plans were generated at each treatment site using SFUD, IMPT, and robustness-optimized IMPT (RO-IMPT) techniques. 3D dose differences were investigated using 3D gamma analysis. Results: The CT calibration curves for all four scanners display very similar shapes, but large HU differences were observed in both the high-HU and low-HU regions of the curves. Large dose differences were generally observed at the distal edges of the beams and were beam-angle dependent. Of the five treatment sites, lung plans exhibited the largest overall range uncertainties and prostate plans had the greatest dose discrepancy. There were no significant differences between the SFUD, IMPT, and RO-IMPT methods. 3D gamma analysis with 3%/3 mm criteria showed that all plans had greater than 95% passing rates. Two of the scanners with close HU values had negligible dose differences except for lung. Conclusion: Our study shows that there are dosimetric differences of more than 5% between different CT calibration curves. PBS treatment plans generated with SFUD, IMPT, and robustness-optimized IMPT have similar sensitivity to the CT density uncertainty. More patient data and tighter gamma criteria based on structure location and size will be used in further investigation.

  19. Pluto's Atmosphere, Then and Now

    NASA Astrophysics Data System (ADS)

    Elliot, J. L.; Buie, M.; Person, M. J.; Qu, S.

    2002-09-01

    The KAO light curve for the 1988 stellar occultation by Pluto exhibits a sharp drop just below half light, but above this level the light curve is consistent with that of an isothermal atmosphere (T = 105 +/- 8 K, with N2 as its major constituent). The sharp drop in the light curve has been interpreted as being caused by: (i) a haze layer, (ii) a large thermal gradient, or (iii) some combination of these two. Modeling Pluto's atmosphere with a haze layer yields a normal optical depth >= 0.145 (Elliot & Young 1992, AJ 103, 991). On the other hand, if Pluto's atmosphere is assumed to be clear, the occultation light curve can be inverted with a new method that avoids the large-body approximations. Inversion of the KAO light curve with this method yields an upper isothermal part, followed by a sharp thermal gradient that reaches a maximum magnitude of -3.9 +/- 0.6 K km-1 at the end of the inversion (r = 1206 +/- 10 km). Even though we do not yet understand the cause of the sharp drop, the KAO light curve can be used as a benchmark for examining subsequent Pluto occultation light curves to determine whether Pluto's atmospheric structure has changed since 1988. As an example, the Mamiña light curve for the 2002 July 20 Pluto occultation of P126A was compared with the KAO light curve by Buie et al. (this conference), who concluded that Pluto's atmospheric structure has changed significantly since 1988. Further analysis and additional light curves from this and subsequent occultations (e.g. 2002 August 21) will allow us to elucidate the nature of these changes. This work was supported, in part, by grants from NASA (NAG5-9008 and NAG5-10444) and NSF (AST-0073447).

  20. Application of the Spectral Element Method to Acoustic Radiation

    NASA Technical Reports Server (NTRS)

    Doyle, James F.; Rizzi, Stephen A. (Technical Monitor)

    2000-01-01

    This report summarizes research to develop a capability for analysis of interior noise in enclosed structures when acoustically excited by an external random source. Of particular interest was the application to the study of noise and vibration transmission in thin-walled structures as typified by aircraft fuselages. Three related topics are focused upon. The first concerns the development of a curved frame spectral element, the second shows how the spectral element method for wave propagation in folded plate structures is extended to problems involving curved segmented plates. These are of significance because by combining these curved spectral elements with previously presented flat spectral elements, the dynamic response of geometrically complex structures can be determined. The third topic shows how spectral elements, which incorporate the effect of fluid loading on the structure, are developed for analyzing acoustic radiation from dynamically loaded extended plates.

  1. Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit

    NASA Astrophysics Data System (ADS)

    Tan, Jianbin

    2018-02-01

    For the engineering design of large-scale grid-connected photovoltaic power stations, and for the many simulation and analysis systems being developed for them, it is necessary to draw the operating characteristic curves of photovoltaic array units by computer; a piecewise non-linear interpolation algorithm is proposed for this purpose. The calculation method takes the module performance parameters as its main design basis, from which five characteristic performance points of a PV module are obtained. Combined with the series and parallel connection of the PV array, the computer drawing of the performance curve of the PV array unit can then be realized. The resulting data can also be supplied to PV development software, improving the operation of PV array units in practical applications.
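
    As an illustration of drawing an array characteristic from module parameters, the following sketch uses a commonly cited simplified explicit I-V model as a stand-in for the paper's piecewise non-linear interpolation and scales the module curve to a series-parallel array; all datasheet values are hypothetical.

    ```python
    import numpy as np

    # Hypothetical module datasheet values (STC): open-circuit voltage, short-circuit
    # current, and the maximum-power point.
    VOC, ISC, VMP, IMP = 38.0, 9.2, 31.0, 8.7

    def module_iv(v):
        """Simplified explicit I-V model that passes through (0, ISC) and (VMP, IMP)
        and approximately through (VOC, 0)."""
        c2 = (VMP / VOC - 1.0) / np.log(1.0 - IMP / ISC)
        c1 = (1.0 - IMP / ISC) * np.exp(-VMP / (c2 * VOC))
        return ISC * (1.0 - c1 * (np.exp(v / (c2 * VOC)) - 1.0))

    def array_iv(v, n_series=20, n_parallel=5):
        """Scale the module curve to an array of modules in series and strings in parallel."""
        return n_parallel * module_iv(v / n_series)

    v = np.linspace(0.0, 20 * VOC, 400)
    i = array_iv(v)
    print(f"peak array power = {np.max(v * i) / 1000.0:.1f} kW")
    ```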

  2. Establishment of analysis method for methane detection by gas chromatography

    NASA Astrophysics Data System (ADS)

    Liu, Xinyuan; Yang, Jie; Ye, Tianyi; Han, Zeyu

    2018-02-01

    The study focused on the establishment of an analysis method for methane determination by gas chromatography. Methane was detected with a hydrogen flame ionization detector, and the quantitative relationship was determined by the working curve y = 2041.2x + 2187, with a correlation coefficient of 0.9979. A relative standard deviation of 2.60-6.33% and a recovery of 96.36-105.89% were obtained in parallel determinations of standard gas. This method is not well suited to the analysis of biogas, because the methane content of biogas exceeds the measurement range of the method.
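
    Using the reported working curve, converting a measured peak area back into a methane amount is a one-line inversion; the peak area in the example is invented and the units follow the original study.

    ```python
    # Working curve from the abstract: peak area y as a function of methane amount x.
    SLOPE, INTERCEPT = 2041.2, 2187.0

    def methane_from_peak_area(peak_area):
        """Invert y = 2041.2 x + 2187 to recover the methane amount x."""
        return (peak_area - INTERCEPT) / SLOPE

    print(methane_from_peak_area(25000.0))   # about 11.2, in the study's concentration units
    ```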

  3. Shape design sensitivity analysis using domain information

    NASA Technical Reports Server (NTRS)

    Seong, Hwal-Gyeong; Choi, Kyung K.

    1985-01-01

    A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.

  4. Assessment of genetic mutations in the XRCC2 coding region by high resolution melting curve analysis and the risk of differentiated thyroid carcinoma in Iran

    PubMed Central

    Fayaz, Shima; Fard-Esfahani, Pezhman; Fard-Esfahani, Armaghan; Mostafavi, Ehsan; Meshkani, Reza; Mirmiranpour, Hossein; Khaghani, Shahnaz

    2012-01-01

    Homologous recombination (HR) is the major pathway for repairing double strand breaks (DSBs) in eukaryotes and XRCC2 is an essential component of the HR repair machinery. To evaluate the potential role of mutations in gene repair by HR in individuals susceptible to differentiated thyroid carcinoma (DTC) we used high resolution melting (HRM) analysis, a recently introduced method for detecting mutations, to examine the entire XRCC2 coding region in an Iranian population. HRM analysis was used to screen for mutations in three XRCC2 coding regions in 50 patients and 50 controls. There was no variation in the HRM curves obtained from the analysis of exons 1 and 2 in the case and control groups. In exon 3, an Arg188His polymorphism (rs3218536) was detected as a new melting curve group (OR: 1.46; 95%CI: 0.432–4.969; p = 0.38) compared with the normal melting curve. We also found a new Ser150Arg polymorphism in exon 3 of the control group. These findings suggest that genetic variations in the XRCC2 coding region have no potential effects on susceptibility to DTC. However, further studies with larger populations are required to confirm this conclusion. PMID:22481871

  5. Delamination Analysis Of Composite Curved Bars

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Jackson, Raymond H.

    1990-01-01

    Classical anisotropic elasticity theory used to construct "multilayer" composite semicircular curved bar subjected to end forces and end moments. Radial location and intensity of open-mode delamination stress calculated and compared with results obtained from anisotropic continuum theory and from finite element method. Multilayer theory gave more accurate predictions of location and intensity of open-mode delamination stress. Currently being applied to predict open-mode delamination stress concentrations in horse-shoe-shaped composite test coupons.

  6. Elastic stability of laminated, flat and curved, long rectangular plates subjected to combined inplane loads

    NASA Technical Reports Server (NTRS)

    Viswanathan, A. V.; Tamekuni, M.; Baker, L. L.

    1974-01-01

    A method is presented to predict theoretical buckling loads of long, rectangular flat and curved laminated plates with arbitrary orientation of the orthotropic axes in each lamina. The plate is subjected to combined inplane normal and shear loads. Arbitrary boundary conditions may be stipulated along the longitudinal sides of the plate. In the absence of inplane shear loads and extensional-shear coupling, the analysis is also applicable to finite length plates. Numerical results are presented for curved laminated composite plates with various boundary conditions and loadings. These results indicate some of the complexities involved in the numerical solution of the analysis for general laminates. The results also show that the reduced bending stiffness approximation, when applied to buckling problems, could lead to considerable error in some cases and therefore must be used with caution.

  7. Comparative Analysis of the Volatile Components of Agrimonia eupatoria from Leaves and Roots by Gas Chromatography-Mass Spectrometry and Multivariate Curve Resolution

    PubMed Central

    Feng, Xiao-Liang; He, Yun-biao; Liang, Yi-Zeng; Wang, Yu-Lin; Huang, Lan-Fang; Xie, Jian-Wei

    2013-01-01

    Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction by water distillation, the volatile components of Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, which resolves the two-dimensional original data into mass spectra and chromatograms. Of the 87 separated constituents in the total ion chromatogram of the volatile components, 68 were identified and quantified, accounting for about 87.03% of the total content. The common peaks in the leaf were then extracted with the orthogonal projection resolution method. Among the components determined, 52 coexisted in the studied samples, although the relative content of each component differed to some extent. The results showed fair consistency in the GC-MS fingerprints. This was the first application of the orthogonal projection method to compare different plant parts of Agrimonia eupatoria, and it reduced both the burden and the subjectivity of the qualitative analysis. The obtained results proved the combined approach to be powerful for the analysis of complex Agrimonia eupatoria samples. The developed method can be used for further study and quality control of Agrimonia eupatoria. PMID:24286016

  8. Comparative Analysis of the Volatile Components of Agrimonia eupatoria from Leaves and Roots by Gas Chromatography-Mass Spectrometry and Multivariate Curve Resolution.

    PubMed

    Feng, Xiao-Liang; He, Yun-Biao; Liang, Yi-Zeng; Wang, Yu-Lin; Huang, Lan-Fang; Xie, Jian-Wei

    2013-01-01

    Gas chromatography-mass spectrometry and multivariate curve resolution were applied to the differential analysis of the volatile components in Agrimonia eupatoria specimens from different plant parts. After extraction by water distillation, the volatile components of Agrimonia eupatoria leaves and roots were detected by GC-MS. The qualitative and quantitative analysis of the volatile components in the main root of Agrimonia eupatoria was then completed with the help of subwindow factor analysis, which resolves the two-dimensional original data into mass spectra and chromatograms. Of the 87 separated constituents in the total ion chromatogram of the volatile components, 68 were identified and quantified, accounting for about 87.03% of the total content. The common peaks in the leaf were then extracted with the orthogonal projection resolution method. Among the components determined, 52 coexisted in the studied samples, although the relative content of each component differed to some extent. The results showed fair consistency in the GC-MS fingerprints. This was the first application of the orthogonal projection method to compare different plant parts of Agrimonia eupatoria, and it reduced both the burden and the subjectivity of the qualitative analysis. The obtained results proved the combined approach to be powerful for the analysis of complex Agrimonia eupatoria samples. The developed method can be used for further study and quality control of Agrimonia eupatoria.

  9. FORCinel Version 3.0: An Integrated Environment for Processing, Analysis and Simulation of First-Order Reversal Curve Diagrams

    NASA Astrophysics Data System (ADS)

    Lascu, I.; Harrison, R. J.

    2016-12-01

    First-order reversal curve (FORC) diagrams are a powerful method to characterise the hysteresis properties of magnetic grain ensembles. Methods of processing, analysis and simulation of FORC diagrams have developed rapidly over the past few years, dramatically expanding their utility within rock magnetic research. Here we announce the latest release of FORCinel (Version 3.0), which integrates many of these developments into a unified, user-friendly package running within Igor Pro (www.wavemetrics.com). FORCinel v. 3.0 can be downloaded from https://wserv4.esc.cam.ac.uk/nanopaleomag/. The release will be accompanied by a series of video tutorials outlining each of the new features, including: i) improved work flow, with unified smoothing approach; ii) increased processing speed using multiple processors; iii) control of output resolution, enabling large datasets (> 500 FORCs) to be smoothed in a matter of seconds; iv) load, process, analyse and average multiple FORC diagrams; v) load and process non-gridded data and data acquired on non-PMC systems; vi) improved method for exploring optimal smoothing parameters; vii) interactive and undoable data pre-treatments; viii) automated detection and removal of measurement outliers; ix) improved interactive method for the generation and optimisation of colour scales; x) full integration with FORCem [1], for supervised quantitative unmixing of FORC diagrams using principal component analysis (PCA); xi) full integration with FORCulator [2], for micromagnetic simulation of FORC diagrams; xii) simulation of TRM acquisition using the kinetic Monte Carlo algorithm of Shcherbakov [3]. References: [1] Lascu, I., Harrison, R.J., Li, Y., Muraszko, J.R., Channell, J.E.T., Piotrowski, A.M., Hodell, D.A., 2015. Magnetic unmixing of first-order reversal curve diagrams using principal component analysis. Geochemistry, Geophys. Geosystems 16, 2900-2915. [2] Harrison, R.J., Lascu, I., 2014. FORCulator: A micromagnetic tool for simulating first-order reversal curve diagrams. Geochemistry Geophys. Geosystems 15, 4671-4691. [3] Shcherbakov, V.P., Lamash, B.E., Sycheva, N.K., 1995. Monte-Carlo modelling of thermoremanence acquisition in interacting single-domain grains. Phys. Earth Planet. Inter. 87, 197-211.

  10. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.

    PubMed

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-09-21

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  11. Initial Validation of a Comprehensive Assessment Instrument for Bereavement-Related Grief Symptoms and Risk of Complications: The Indicator of Bereavement Adaptation—Cruse Scotland (IBACS)

    PubMed Central

    Schut, Henk; Stroebe, Margaret S.; Wilson, Stewart; Birrell, John

    2016-01-01

    Objective: This study assessed the validity of the Indicator of Bereavement Adaptation Cruse Scotland (IBACS). Designed for use in clinical and non-clinical settings, the IBACS measures severity of grief symptoms and risk of developing complications. Method: N = 196 (44 male, 152 female) help-seeking, bereaved Scottish adults participated at two timepoints: T1 (baseline) and T2 (after 18 months). Four validated assessment instruments were administered: CORE-R, ICG-R, IES-R, SCL-90-R. Discriminative ability was assessed using ROC curve analysis. Concurrent validity was tested through correlation analysis at T1. Predictive validity was assessed using correlation analyses and ROC curve analysis. Optimal IBACS cutoff values were obtained by calculating a maximal Youden index J in ROC curve analysis. Clinical implications were compared across instruments. Results: ROC curve analysis results (AUC = .84, p < .01, 95% CI between .77 and .90) indicated the IBACS is a good diagnostic instrument for assessing complicated grief. Positive correlations (p < .01, 2-tailed) with all four instruments at T1 demonstrated the IBACS' concurrent validity, strongest with complicated grief measures (r = .82). Predictive validity was shown to be fair in T2 ROC curve analysis results (n = 67, AUC = .78, 95% CI between .65 and .92; p < .01). Predictive validity was also supported by stable positive correlations between IBACS and other instruments at T2. Clinical indications were found not to differ across instruments. Conclusions: The IBACS offers effective grief symptom and risk assessment for use by non-clinicians. Indications are sufficient to support intake assessment for a stepped model of bereavement intervention. PMID:27741246
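
    The cutoff selection described above, maximising the Youden index J on the ROC curve, can be sketched as follows; the scores and outcomes are synthetic and the printed cutoff is purely illustrative, not an IBACS result.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # Hypothetical total symptom scores and a binary "complicated grief" outcome.
    rng = np.random.default_rng(0)
    scores = np.concatenate([rng.normal(20, 6, 150), rng.normal(32, 7, 46)])
    outcome = np.concatenate([np.zeros(150), np.ones(46)])

    fpr, tpr, thresholds = roc_curve(outcome, scores)
    j = tpr - fpr                                    # Youden index J at each threshold
    best = np.argmax(j)
    print(f"AUC = {roc_auc_score(outcome, scores):.2f}, "
          f"optimal cutoff = {thresholds[best]:.1f} (J = {j[best]:.2f})")
    ```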

  12. A semiparametric separation curve approach for comparing correlated ROC data from multiple markers

    PubMed Central

    Tang, Liansheng Larry; Zhou, Xiao-Hua

    2012-01-01

    In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360

  13. A new method to predict anatomical outcome after idiopathic macular hole surgery.

    PubMed

    Liu, Peipei; Sun, Yaoyao; Dong, Chongya; Song, Dan; Jiang, Yanrong; Liang, Jianhong; Yin, Hong; Li, Xiaoxin; Zhao, Mingwei

    2016-04-01

    To investigate whether a new macular hole closure index (MHCI) could predict the anatomic outcome of macular hole surgery. Vitrectomy with internal limiting membrane peeling, air-fluid exchange, and gas tamponade was performed in all patients. The postoperative anatomic status of the macular hole was defined by spectral-domain OCT. MHCI was calculated as (M+N)/BASE from the preoperative OCT scan, where M and N are the curve lengths of the detached photoreceptor arms and BASE is the length of the retinal pigment epithelial (RPE) layer detached from the photoreceptors. Postoperative anatomical outcomes were divided into three grades: A (bridge-like closure), B (good closure), and C (poor closure or no closure). Correlation analysis was performed between anatomical outcomes and MHCI. Receiver operating characteristic (ROC) curves were derived for MHCI, indicating good model discrimination; the ROC curves were also assessed by the area under the curve, and cut-offs were calculated. Other predictive parameters reported previously, including the MH minimum, the MH height, the macular hole index (MHI), the diameter hole index (DHI), and the tractional hole index (THI), were compared as well. MHCI correlated significantly with postoperative anatomical outcomes (r = 0.543, p = 0.000), but the other predictive parameters did not. The areas under the curves indicated that MHCI could be used as an effective predictor of anatomical outcome. Cut-off values of 0.7 and 1.0 were obtained for MHCI from the ROC curve analysis. MHCI demonstrated a better predictive effect than the other parameters in both the correlation and ROC analyses. MHCI could be an easily measured and accurate predictive index for postoperative anatomical outcomes.
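
    The index itself is a simple ratio of OCT measurements; a sketch follows, with hypothetical curve lengths (the 0.7 and 1.0 cutoffs are the ones quoted in the abstract).

    ```python
    def mhci(m_curve_len, n_curve_len, base_len):
        """Macular hole closure index (M + N) / BASE, where M and N are the curve lengths
        of the detached photoreceptor arms and BASE is the length of detached RPE."""
        return (m_curve_len + n_curve_len) / base_len

    # Hypothetical measurements in micrometres; compare against the reported 0.7 and 1.0 cutoffs.
    value = mhci(m_curve_len=420.0, n_curve_len=390.0, base_len=900.0)
    print(f"MHCI = {value:.2f}")
    ```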

  14. CYP2C19 progress curve analysis and mechanism-based inactivation by three methylenedioxyphenyl compounds.

    PubMed

    Salminen, Kaisa A; Meyer, Achim; Imming, Peter; Raunio, Hannu

    2011-12-01

    Several in vitro criteria were used to assess whether three methylenedioxyphenyl (MDP) compounds, the isoquinoline alkaloids bulbocapnine, canadine, and protopine, are mechanism-based inactivators of CYP2C19. The recently reported fluorometric CYP2C19 progress curve analysis approach was applied first to determine whether these alkaloids demonstrate time-dependent inhibition. In this experiment, bulbocapnine, canadine, and protopine displayed time dependence and saturation in their inactivation kinetics with K(I) and k(inact) values of 72.4 ± 14.7 μM and 0.38 ± 0.036 min(-1), 2.1 ± 0.63 μM and 0.18 ± 0.015 min(-1), and 7.1 ± 2.3 μM and 0.24 ± 0.021 min(-1), respectively. Additional studies were performed to determine whether other specific criteria for mechanism-based inactivation were fulfilled: NADPH dependence, irreversibility, and involvement of a catalytic step in the enzyme inactivation. CYP2C19 activity was not significantly restored by dialysis when it had been inactivated by the alkaloids in the presence of a NADPH-regenerating system, and a metabolic-intermediate complex-associated increase in absorbance at approximately 455 nm was observed. In conclusion, the CYP2C19 progress curve analysis method revealed time-dependent inhibition by these alkaloids, and additional experiments confirmed its quasi-irreversible nature. This study revealed that the CYP2C19 progress curve analysis method is useful for identifying novel mechanism-based inactivators and yields a wealth of information in one run. The alkaloids bulbocapnine, canadine, and protopine, present in herbal medicines, are new mechanism-based inactivators and the first MDP compounds exhibiting quasi-irreversible inactivation of CYP2C19.
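
    For orientation, once observed inactivation rate constants (kobs) have been extracted from progress curves, K(I) and k(inact) are commonly obtained by fitting the hyperbolic relation kobs = kinact[I]/(KI + [I]); the sketch below does this with invented data and is not the assay or fitting procedure used in the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical observed inactivation rate constants (1/min) at several inhibitor
    # concentrations (uM).
    conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
    kobs = np.array([0.036, 0.060, 0.090, 0.130, 0.150, 0.165])

    def hyperbolic(i, kinact, ki):
        """kobs = kinact * [I] / (KI + [I])"""
        return kinact * i / (ki + i)

    (kinact, ki), _ = curve_fit(hyperbolic, conc, kobs, p0=[0.2, 2.0])
    print(f"kinact = {kinact:.2f} 1/min, KI = {ki:.1f} uM")
    ```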

  15. Protecting Location Privacy for Outsourced Spatial Data in Cloud Storage

    PubMed Central

    Gui, Xiaolin; An, Jian; Zhao, Jianqiang; Zhang, Xuejun

    2014-01-01

    As cloud computing services and location-aware devices are fully developed, a large amount of spatial data needs to be outsourced to the cloud storage provider, so the research on privacy protection for outsourced spatial data gets increasing attention from academia and industry. As a kind of spatial transformation method, the Hilbert curve is widely used to protect the location privacy of spatial data. However, a sufficient security analysis of the standard Hilbert curve (SHC) has seldom been carried out. In this paper, we propose an index modification method for SHC (SHC∗) and a density-based space filling curve (DSC) to improve the security of SHC; they can partially violate the distance-preserving property of SHC, so as to achieve better security. We formally define the indistinguishability and attack model for measuring the privacy disclosure risk of spatial transformation methods. The evaluation results indicate that SHC∗ and DSC are more secure than SHC, and DSC achieves the best index generation performance. PMID:25097865

  16. Protecting location privacy for outsourced spatial data in cloud storage.

    PubMed

    Tian, Feng; Gui, Xiaolin; An, Jian; Yang, Pan; Zhao, Jianqiang; Zhang, Xuejun

    2014-01-01

    As cloud computing services and location-aware devices are fully developed, a large amount of spatial data needs to be outsourced to the cloud storage provider, so the research on privacy protection for outsourced spatial data gets increasing attention from academia and industry. As a kind of spatial transformation method, the Hilbert curve is widely used to protect the location privacy of spatial data. However, a sufficient security analysis of the standard Hilbert curve (SHC) has seldom been carried out. In this paper, we propose an index modification method for SHC (SHC(∗)) and a density-based space filling curve (DSC) to improve the security of SHC; they can partially violate the distance-preserving property of SHC, so as to achieve better security. We formally define the indistinguishability and attack model for measuring the privacy disclosure risk of spatial transformation methods. The evaluation results indicate that SHC(∗) and DSC are more secure than SHC, and DSC achieves the best index generation performance.
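
    For context, the standard Hilbert curve (SHC) transformation maps 2-D locations to 1-D indices, as in the bit-manipulation construction sketched below; the SHC* index modification and the density-based space filling curve (DSC) proposed in the paper are not reproduced here.

    ```python
    def xy_to_hilbert(x, y, order):
        """Map grid cell (x, y) on a 2**order x 2**order grid to its 1-D Hilbert-curve index,
        processing the most significant coordinate bits first."""
        d = 0
        s = 2 ** (order - 1)
        while s > 0:
            rx = 1 if x & s else 0
            ry = 1 if y & s else 0
            d += s * s * ((3 * rx) ^ ry)
            x &= s - 1                       # keep the coordinates local to the quadrant
            y &= s - 1
            if ry == 0:                      # rotate/reflect the quadrant into standard position
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            s //= 2
        return d

    # A few cells of an 8 x 8 grid and their 1-D indices along the curve.
    print([(x, y, xy_to_hilbert(x, y, 3)) for x, y in [(0, 0), (1, 0), (1, 1), (7, 7)]])
    ```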

  17. The average receiver operating characteristic curve in multireader multicase imaging studies

    PubMed Central

    Samuelson, F W

    2014-01-01

    Objective: In multireader, multicase (MRMC) receiver operating characteristic (ROC) studies for evaluating medical imaging systems, the area under the ROC curve (AUC) is often used as a summary metric. Owing to the limitations of AUC, plotting the average ROC curve to accompany the rigorous statistical inference on AUC is recommended. The objective of this article is to investigate methods for generating the average ROC curve from ROC curves of individual readers. Methods: We present both a non-parametric method and a parametric method for averaging ROC curves that produce a ROC curve, the area under which is equal to the average AUC of individual readers (a property we call area preserving). We use hypothetical examples, simulated data and a real-world imaging data set to illustrate these methods and their properties. Results: We show that our proposed methods are area preserving. We also show that the method of averaging the ROC parameters, either the conventional bi-normal parameters (a, b) or the proper bi-normal parameters (c, da), is generally not area preserving and may produce a ROC curve that is intuitively not an average of multiple curves. Conclusion: Our proposed methods are useful for making plots of average ROC curves in MRMC studies as a companion to the rigorous statistical inference on the AUC end point. The software implementing these methods is freely available from the authors. Advances in knowledge: Methods for generating the average ROC curve in MRMC ROC studies are formally investigated. The area-preserving criterion we defined is useful to evaluate such methods. PMID:24884728

  18. Comparative analysis of the apparent saturation hysteresis approach and the domain theory of hysteresis in respect of prediction of scanning curves and air entrapment

    NASA Astrophysics Data System (ADS)

    Beriozkin, A.; Mualem, Y.

    2018-05-01

    This study theoretically analyzes the concept of apparent saturation hysteresis, combined with the Scott et al. (1983) scaling approach, as suggested by Parker and Lenhard (1987), to account for the effect of air entrapment and release on the soil water hysteresis. We found that the theory of Parker and Lenhard (1987) is comprised of some mutually canceling mathematical operations, and when cleared of the superfluous intermediate calculations, their model reduces to the original Scott et al.'s (1983) scaling method, supplemented with the requirement of closure of scanning loops. Our analysis reveals that actually there is no effect of their technique of accounting for the entrapped air on the final prediction of the effective saturation (or water content) scanning curves. Our consideration indicates that the use of the Land (1968) formula for assessing the amount of entrapped air is in disaccord with the apparent saturation concept as introduced by Parker and Lenhard (1987). In this paper, a proper routine is suggested for predicting hysteretic scanning curves of any order, given the two measured main curves, in the complete hysteretic domain and some verification tests are carried out versus measured results. Accordingly, explicit closed-form formulae for direct prediction (with no need of intermediate calculation) of scanning curves up to the third order are derived to sustain our analysis.

  19. Quality assessment of SPR sensor chips; case study on L1 chips.

    PubMed

    Olaru, Andreea; Gheorghiu, Mihaela; David, Sorin; Polonschii, Cristina; Gheorghiu, Eugen

    2013-07-15

    Surface quality of Surface Plasmon Resonance (SPR) chips is a major limiting issue in most SPR analyses, even more so for supported lipid membrane experiments, where both the organization of the lipid matrix and the subsequent incorporation of the target molecule depend on the surface quality. A novel quantitative method to characterize the quality of SPR sensor chips is described for L1 chips subject to formation of lipid films and injection of membrane disrupting compounds, followed by appropriate regeneration procedures. The method consists of the analysis of the SPR reflectivity curves for several standard solutions (e.g. PBS, HEPES or deionized water). This analysis reveals the decline of the sensor surface as a function of the number of experimental cycles (each consisting of a biosensing assay and a regeneration step) and enables active control of surface regeneration for enhanced reproducibility. We demonstrate that quantitative evaluation of the changes in reflectivity curves (shape of the SPR dip) and of the slope of the calibration curve provides a rapid and effective procedure for surface quality assessment. Whereas the method was tested on L1 SPR sensor chips, we stress its amenability to assess the quality of other types of SPR chips as well. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Biological dosimetry of ionizing radiation: Evaluation of the dose with cytogenetic methodologies by the construction of calibration curves

    NASA Astrophysics Data System (ADS)

    Zafiropoulos, Demetre; Facco, E.; Sarchiapone, Lucia

    2016-09-01

    In case of a radiation accident, it is well known that in the absence of physical dosimetry biological dosimetry based on cytogenetic methods is a unique tool to estimate individual absorbed dose. Moreover, even when physical dosimetry indicates an overexposure, scoring chromosome aberrations (dicentrics and rings) in human peripheral blood lymphocytes (PBLs) at metaphase is presently the most widely used method to confirm dose assessment. The analysis of dicentrics and rings in PBLs after Giemsa staining of metaphase cells is considered the most valid assay for radiation injury. This work shows that applying the fluorescence in situ hybridization (FISH) technique, using telomeric/centromeric peptide nucleic acid (PNA) probes in metaphase chromosomes for radiation dosimetry, could become a fast scoring, reliable and precise method for biological dosimetry after accidental radiation exposures. In both in vitro methods described above, lymphocyte stimulation is needed, and this limits the application in radiation emergency medicine where speed is considered to be a high priority. Using premature chromosome condensation (PCC), irradiated human PBLs (non-stimulated) were fused with mitotic CHO cells, and the yield of excess PCC fragments in Giemsa stained cells was scored. To score dicentrics and rings under PCC conditions, the necessary centromere and telomere detection of the chromosomes was obtained using FISH and specific PNA probes. Of course, a prerequisite for dose assessment in all cases is a dose-effect calibration curve. This work illustrates the various methods used; dose response calibration curves, with 95% confidence limits used to estimate dose uncertainties, have been constructed for conventional metaphase analysis and FISH. We also compare the dose-response curve constructed after scoring of dicentrics and rings using PCC combined with FISH and PNA probes. Also reported are dose response curves showing scored dicentrics and rings per cell, combining PCC of lymphocytes and CHO cells with FISH using PNA probes after 10 h and 24 h after irradiation, and, finally, calibration data of excess PCC fragments (Giemsa) to be used if human blood is available immediately after irradiation or within 24 h.
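    Calibration curves of this kind are conventionally fitted with a linear-quadratic yield model, Y(D) = C + alpha·D + beta·D², for dicentrics plus rings per cell. The sketch below fits that model with scipy's curve_fit on hypothetical counts; the data, starting values and use of unweighted least squares are assumptions for illustration, not the authors' procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        def lq_model(dose, c, alpha, beta):
            # linear-quadratic yield of dicentrics + rings per cell
            return c + alpha * dose + beta * dose ** 2

        # hypothetical calibration points: dose (Gy) vs. aberrations per cell
        dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
        yield_per_cell = np.array([0.001, 0.01, 0.03, 0.09, 0.29, 0.58, 0.98])

        params, cov = curve_fit(lq_model, dose, yield_per_cell, p0=[0.001, 0.02, 0.05])
        errors = np.sqrt(np.diag(cov))          # standard errors of C, alpha, beta
        print(params, 1.96 * errors)            # rough 95% limits for the curve parameters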

  1. Quantitative methods for compensation of matrix effects and self-absorption in Laser Induced Breakdown Spectroscopy signals of solids

    NASA Astrophysics Data System (ADS)

    Takahashi, Tomoko; Thornton, Blair

    2017-12-01

    This paper reviews methods to compensate for matrix effects and self-absorption during quantitative analysis of compositions of solids measured using Laser Induced Breakdown Spectroscopy (LIBS) and their applications to in-situ analysis. Methods to reduce matrix and self-absorption effects on calibration curves are first introduced. The conditions where calibration curves are applicable to quantification of compositions of solid samples and their limitations are discussed. While calibration-free LIBS (CF-LIBS), which corrects matrix effects theoretically based on the Boltzmann distribution law and Saha equation, has been applied in a number of studies, requirements need to be satisfied for the calculation of chemical compositions to be valid. Also, peaks of all elements contained in the target need to be detected, which is a bottleneck for in-situ analysis of unknown materials. Multivariate analysis techniques are gaining momentum in LIBS analysis. Among the available techniques, principal component regression (PCR) analysis and partial least squares (PLS) regression analysis, which can extract related information to compositions from all spectral data, are widely established methods and have been applied to various fields including in-situ applications in air and for planetary explorations. Artificial neural networks (ANNs), where non-linear effects can be modelled, have also been investigated as a quantitative method and their applications are introduced. The ability to make quantitative estimates based on LIBS signals is seen as a key element for the technique to gain wider acceptance as an analytical method, especially in in-situ applications. In order to accelerate this process, it is recommended that the accuracy should be described using common figures of merit which express the overall normalised accuracy, such as the normalised root mean square errors (NRMSEs), when comparing the accuracy obtained from different setups and analytical methods.
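    The normalised root mean square error mentioned above can be computed directly from reference and predicted compositions; note that the normalisation convention (here, by the range of the reference values) differs between studies and is an assumption in this sketch.

        import numpy as np

        def nrmse(reference, predicted):
            """Normalised RMSE as a figure of merit for LIBS quantification accuracy."""
            reference = np.asarray(reference, dtype=float)
            predicted = np.asarray(predicted, dtype=float)
            rmse = np.sqrt(np.mean((predicted - reference) ** 2))
            return rmse / (reference.max() - reference.min())   # range-normalised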

  2. Size Effects in Impact Damage of Composite Sandwich Panels

    NASA Technical Reports Server (NTRS)

    Dobyns, Alan; Jackson, Wade

    2003-01-01

    Panel size has a large effect on the impact response and resultant damage level of honeycomb sandwich panels. It has been observed during impact testing that panels of the same design but different panel sizes will show large differences in damage when impacted with the same impact energy. To study this effect, a test program was conducted with instrumented impact testing of three different sizes of sandwich panels to obtain data on panel response and residual damage. In concert with the test program, a closed-form analysis method was developed that incorporates the effects of damage on the impact response. This analysis method will predict both the impact response and the residual damage of a simply-supported sandwich panel impacted at any position on the panel. The damage is incorporated by the use of an experimental load-indentation curve obtained for the face-sheet/honeycomb and indentor combination under study. This curve inherently includes the damage response and can be obtained quasi-statically from a rigidly-backed specimen or a specimen with any support conditions. Good correlation has been obtained between the test data and the analysis results for the maximum force and residual indentation. The predictions can be improved by using a dynamic indentation curve. Analyses have also been done using the MSC/DYTRAN finite element code.

  3. Fourier Descriptor Analysis and Unification of Voice Range Profile Contours: Method and Applications

    ERIC Educational Resources Information Center

    Pabon, Peter; Ternstrom, Sten; Lamarche, Anick

    2011-01-01

    Purpose: To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. Method: A morphologic modeling technique, which is based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the…

  4. Application of several methods for determining transfer functions and frequency response of aircraft from flight data

    NASA Technical Reports Server (NTRS)

    Eggleston, John M; Mathews, Charles W

    1954-01-01

    In the process of analyzing the longitudinal frequency-response characteristics of aircraft, information on some of the methods of analysis has been obtained by the Langley Aeronautical Laboratory of the National Advisory Committee for Aeronautics. In the investigation of these methods, the practical applications and limitations were stressed. In general, the methods considered may be classed as: (1) analysis of sinusoidal response, (2) analysis of transient response as to harmonic content through determination of the Fourier integral by manual or machine methods, and (3) analysis of the transient through the use of least-squares solutions of the coefficients of an assumed equation for either the transient time response or frequency response (sometimes referred to as curve-fitting methods). (author)

  5. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.

  6. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003
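    For densely observed curves on a common grid, the FPC decomposition that these curve estimates condition on can be sketched as an eigen-decomposition of the sample covariance matrix. The Python sketch below shows only that step; the refund approach additionally smooths the covariance surface, handles sparse designs and propagates decomposition uncertainty by bootstrap, none of which is reproduced here.

        import numpy as np

        def fpca(curves, n_components=3):
            """Basic functional PCA on curves stored as rows over a common grid."""
            mean_curve = curves.mean(axis=0)
            centred = curves - mean_curve
            cov = np.cov(centred, rowvar=False)          # sample covariance surface
            eigvals, eigvecs = np.linalg.eigh(cov)
            order = np.argsort(eigvals)[::-1][:n_components]
            phi = eigvecs[:, order]                      # estimated eigenfunctions
            scores = centred @ phi                       # FPC scores per subject
            return mean_curve, phi, scores, eigvals[order]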

  7. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    PubMed

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

    This paper explores the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be resolved into their components and the associated luminescence parameters to be evaluated.
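    A deconvolution of this kind typically writes the glow curve as a sum of first-order kinetics peaks and minimises the residuals, which is exactly the sort of objective a spreadsheet Solver can handle. The sketch below uses a Kitis-style first-order peak expression with scipy for the minimisation; the number of peaks and the starting values are assumptions, and Solver would minimise the same residuals in a worksheet.

        import numpy as np
        from scipy.optimize import curve_fit

        K_B = 8.617e-5   # Boltzmann constant in eV/K

        def first_order_peak(T, Im, E, Tm):
            """Single first-order kinetics glow peak: height Im, activation energy E (eV),
            peak temperature Tm (K)."""
            x = (E / (K_B * T)) * (T - Tm) / Tm
            return Im * np.exp(1.0 + x
                               - (T ** 2 / Tm ** 2) * np.exp(x) * (1.0 - 2.0 * K_B * T / E)
                               - 2.0 * K_B * Tm / E)

        def glow_curve(T, *params):
            # sum of independent peaks; params = Im1, E1, Tm1, Im2, E2, Tm2, ...
            total = np.zeros_like(T, dtype=float)
            for i in range(0, len(params), 3):
                total += first_order_peak(T, *params[i:i + 3])
            return total

        # usage (hypothetical two-peak fit; p0 fixes the number of peaks):
        # popt, _ = curve_fit(glow_curve, temperature, intensity,
        #                     p0=[1e4, 1.1, 450.0, 5e3, 1.3, 520.0])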

  8. Estuarial fingerprinting through multidimensional fluorescence and multivariate analysis.

    PubMed

    Hall, Gregory J; Clow, Kerin E; Kenny, Jonathan E

    2005-10-01

    As part of a strategy for preventing the introduction of aquatic nuisance species (ANS) to U.S. estuaries, ballast water exchange (BWE) regulations have been imposed. Enforcing these regulations requires a reliable method for determining the port of origin of water in the ballast tanks of ships entering U.S. waters. This study shows that a three-dimensional fluorescence fingerprinting technique, excitation emission matrix (EEM) spectroscopy, holds great promise as a ballast water analysis tool. In our technique, EEMs are analyzed by multivariate classification and curve resolution methods, such as N-way partial least squares Regression-discriminant analysis (NPLS-DA) and parallel factor analysis (PARAFAC). We demonstrate that classification techniques can be used to discriminate among sampling sites less than 10 miles apart, encompassing Boston Harbor and two tributaries in the Mystic River Watershed. To our knowledge, this work is the first to use multivariate analysis to classify water as to location of origin. Furthermore, it is shown that curve resolution can show seasonal features within the multidimensional fluorescence data sets, which correlate with difficulty in classification.

  9. Two alternative multiplex PCRs for the identification of the seven species of anglerfish (Lophius spp.) using an end-point or a melting curve analysis real-time protocol.

    PubMed

    Castigliego, Lorenzo; Armani, Andrea; Tinacci, Lara; Gianfaldoni, Daniela; Guidi, Alessandra

    2015-01-01

    Anglerfish (Lophius spp.) is consumed worldwide and is an important economic resource, though its seven species are often fraudulently interchanged due to their different commercial value, especially when sold in the form of fillets or pieces. Molecular analysis is the only possible means to verify traceability and counteract fraud. We developed two multiplex PCRs, one end-point and one real-time with melting curve post-amplification analysis, which can even be run with the simplest two-channel thermocyclers. The two methods were tested on seventy-five reference samples. Their specificity was checked in twenty more species of those most commonly available on the market and in other species of the Lophiidae family. Both methods, the choice of which depends on the equipment and budget of the lab, provide a rapid and easy-to-read response, improving both the simplicity and cost-effectiveness of existing methods for identifying Lophius species. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Application of the inverse analysis for determining the material properties of the woven fabrics for macroscopic approach

    NASA Astrophysics Data System (ADS)

    Oleksik, Mihaela; Oleksik, Valentin

    2013-05-01

    The current paper aims to provide a fast method for determining the material characteristics of composite materials used in airbag manufacturing. The inverse analysis method was used to determine the material data needed for other complex numerical simulations at the macroscopic level. Tensile tests were carried out on the composite material extracted along two directions - the direction of the weft and the direction of the warp - and numerical simulations were then performed (using the Ls-Dyna software). A second stage consisted of the numerical simulation through the finite element method and the experimental testing of the Bias test. The material characteristics of the composite fabric material were then obtained by applying a multicriterial analysis using the Ls-Opt software, in which a reduction of the mismatch between the numerically and experimentally obtained force-displacement curves was imposed for both directions (weft and warp), as well as a reduction of the mismatch between the strain-extension curves for two points in the Bias test.

  11. Applications of MASW Method with Different Offsets and Geophone Geometries in Buca District of Izmir City, TURKEY

    NASA Astrophysics Data System (ADS)

    Pamuk, Eren; Önsen, Funda; Turan, Seçil

    2014-05-01

    Shear-wave velocity is a critical parameter for evaluating the dynamic behaviour of soil in subsurface investigations. Multichannel Analysis of Surface Waves (MASW) is a popular method for obtaining shear-wave velocity in shallow-depth surveys. This method uses the dispersive properties of shear-waves for imaging the subsurface layers. In the MASW method, data are first acquired as multichannel field records (or shot gathers), and then dispersion curves are extracted. Finally, these dispersion curves are inverted to obtain one-dimensional (1D) Vs depth profiles. Reliable and accurate evaluation of shear-wave velocity depends on the dispersion curves; therefore, determination of the basic-mode dispersion curve is very important. In this study, MASW measurements were carried out with different types of spreads and various offsets to obtain better results in İzmir, Turkey. The spread types were selected as paired geophone group spread, increasing spread and constant interval spread. The data were collected on the Tinaztepe Campus of Dokuz Eylul University, Izmir (Buca). A 24-channel Geometrix Geode seismic instrument, 4.5 Hz low-frequency receivers (geophones) and a sledge hammer (8 kg) as an energy source were used in this study. The data were collected with forward shots. MASW measurements were applied to different profiles, each 24 m long. Geophone intervals were selected as 1 m in the constant interval spread, and offsets were selected as 1, 4, 8, 12 and 24 m in all spreads. In the first stage of this study, the measurements taken at these offsets were compared with each other for all spreads. The results show that higher-resolution dispersion curves were observed at 1 m, 2 m and 4 m offsets. At the other offsets (8, 12, 24 m), distinguishing between the basic-mode and higher-mode dispersion curves became difficult. In the second stage of this study, the dispersion curves obtained from the different spreads were compared across all spread types of the MASW survey.

  12. W-curve alignments for HIV-1 genomic comparisons.

    PubMed

    Cork, Douglas J; Lembark, Steven; Tovanabutra, Sodsai; Robb, Merlin L; Kim, Jerome H

    2010-06-01

    The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A treebuilding method is developed with the W-curve that utilizes a novel Cylindrical Coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time-short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison technique of aligning extremes of the curves to effectively phase-shift them past the HIV-1 gap problem, is presented. Besides yielding similar neighbor-joining phenogram topologies, most Mother and Infant C2-V5 sequences in the cohort pairs geometrically map closest to each other, indicating that W-curve heuristics overcame any gap problem.

  13. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how from the recorded raw image a depth map can be estimated. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated by using a method which is already known from the calibration of traditional cameras. For the calibration of the depth map two new model based methods, which make use of the projection concept of the camera are developed. These new methods are compared to a common curve fitting approach, which is based on Taylor-series-approximation. Both model based methods show significant advantages compared to the curve fitting method. They need less reference points for calibration than the curve fitting method and moreover, supply a function which is valid in excess of the range of calibration. In addition the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and is compared to the analytical evaluation.

  14. A New Test Method of Circuit Breaker Spring Telescopic Characteristics Based on Image Processing

    NASA Astrophysics Data System (ADS)

    Huang, Huimin; Wang, Feifeng; Lu, Yufeng; Xia, Xiaofei; Su, Yi

    2018-06-01

    This paper applies computer vision technology to the fatigue condition monitoring of springs, and a new telescopic characteristics test method based on image processing technology is proposed for the circuit breaker operating mechanism spring. A high-speed camera is used to capture image sequences of spring movement when the high-voltage circuit breaker operates. The image-matching method is then used to obtain the deformation-time and speed-time curves, from which the spring expansion and deformation parameters are extracted; these lay a foundation for subsequent spring force analysis and matching state evaluation. Simulation tests performed at the experimental site show that this image analysis method avoids the complex installation required by traditional mechanical sensors and enables online monitoring and status assessment of the circuit breaker spring.

  15. Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping

    2003-05-01

    In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and periinfarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually affected by poor signal-to-noise ratio image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality for better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluate the gray/white matter CBF ratio. As a result, the mean gray to white matter CBF ratio from the hemodynamic semi-quantitative analysis is 2.10 +/- 0.34. The ratio evaluated from perfusion MRI is comparable to that from the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.
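    The curve-fitting step can be sketched as a smoothing spline applied to a regional concentration-time curve, from which simplified semi-quantitative hemodynamic measures follow (area for a CBV-proportional value, first moment for MTT, their ratio for a CBF-proportional value via the central volume principle). This is only an illustrative sketch under those simplifying assumptions; the sub-band denoising stage and the authors' exact pipeline are not reproduced.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def fit_concentration_curve(t, signal, smoothing=1.0):
            """Smooth a noisy concentration-time curve and derive simple summaries."""
            spline = UnivariateSpline(t, signal, s=smoothing)
            c = spline(t)
            cbv = np.trapz(c, t)                          # proportional to CBV
            mtt = np.trapz(t * c, t) / np.trapz(c, t)     # first-moment estimate of MTT
            cbf = cbv / mtt                               # proportional to CBF
            return c, cbv, cbf, mtt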

  16. Low-Impact Development Design—Integrating Suitability Analysis and Site Planning For Reduction Of Post-Development Stormwater Quantity

    EPA Science Inventory

    A land-suitability analysis (LSA) was integrated with open-space conservation principles, based on watershed physiographic and soil characteristics, to derive a low-impact development (LID) residential plan for a three hectare site in Coshocton OH, USA. The curve number method wa...

  17. Analyzing DNA curvature and its impact on the ionic environment: application to molecular dynamics simulations of minicircles

    PubMed Central

    Pasi, Marco; Zakrzewska, Krystyna; Maddocks, John H.

    2017-01-01

    We propose a method for analyzing the magnitude and direction of curvature within nucleic acids, based on the curvilinear helical axis calculated by Curves+. The method is applied to analyzing curvature within minicircles constructed with varying degrees of over- or under-twisting. Using the molecular dynamics trajectories of three different minicircles, we are able to quantify how curvature varies locally both in space and in time. We also analyze how curvature influences the local environment of the minicircles, notably via increased heterogeneity in the ionic distributions surrounding the double helix. The approach we propose has been integrated into Curves+ and the utilities Canal (time trajectory analysis) and Canion (environmental analysis) and can be used to study a wide variety of static and dynamic structural data on nucleic acids. PMID:28180333

  18. Nozzle Free Jet Flows Within the Strong Curved Shock Regime

    NASA Technical Reports Server (NTRS)

    Shih, Tso-Shin

    1975-01-01

    A study based on inviscid analysis was conducted to examine the flow field produced from a convergent-divergent nozzle when a strong curved shock occurs. It was found that a certain constraint is imposed on the flow solution of the problem which is the unique feature of the flow within this flow regime, and provides the reason why the inverse method of calculation cannot be employed for these problems. An approximate method was developed to calculate the flow field, and results were obtained for two-dimensional flows. Analysis and calculations were performed for flows with axial symmetry. It is shown that under certain conditions, the vorticity generated at the jet boundary may become infinite and the viscous effect becomes important. Under other conditions, the asymptotic free jet height as well as the corresponding shock geometry were determined.

  19. Tendency for interlaboratory precision in the GMO analysis method based on real-time PCR.

    PubMed

    Kodama, Takashi; Kurosawa, Yasunori; Kitta, Kazumi; Naito, Shigehiro

    2010-01-01

    The Horwitz curve estimates interlaboratory precision as a function only of concentration, and is frequently used as a method performance criterion in food analysis with chemical methods. The quantitative biochemical methods based on real-time PCR require an analogous criterion to progressively promote method validation. We analyzed the tendency of precision using a simplex real-time PCR technique in 53 collaborative studies of seven genetically modified (GM) crops. Reproducibility standard deviation (SR) and repeatability standard deviation (Sr) of the genetically modified organism (GMO) amount (%) were more or less independent of GM crops (i.e., maize, soybean, cotton, oilseed rape, potato, sugar beet, and rice) and evaluation procedure steps. Some studies evaluated whole steps consisting of DNA extraction and PCR quantitation, whereas others focused only on the PCR quantitation step by using DNA extraction solutions. Therefore, SR and Sr for GMO amount (%) are functions only of concentration, similar to the Horwitz curve. We proposed SR = 0.1971·C^0.8685 and Sr = 0.1478·C^0.8424, where C is the GMO amount (%). We also proposed a method performance index in GMO quantitative methods that is analogous to the Horwitz Ratio.
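    Because the proposed precision functions are power laws in the GMO amount C (%), an observed reproducibility standard deviation can be judged against the predicted one in the same way a Horwitz ratio is formed; the performance-index definition below is an analogy drawn here for illustration, not a quotation from the paper.

        def predicted_precision(c_percent):
            """Predicted reproducibility (SR) and repeatability (Sr) standard deviations
            from the proposed power-law fits, with C the GMO amount in %."""
            sR = 0.1971 * c_percent ** 0.8685
            sr = 0.1478 * c_percent ** 0.8424
            return sR, sr

        def performance_index(observed_sR, c_percent):
            # HorRat-like index: observed SR divided by the predicted SR
            return observed_sR / predicted_precision(c_percent)[0]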

  20. IDF relationships using bivariate copula for storm events in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Ariff, N. M.; Jemain, A. A.; Ibrahim, K.; Wan Zin, W. Z.

    2012-11-01

    Intensity-duration-frequency (IDF) curves are used in many hydrologic designs for the purposes of water management and flood prevention. The IDF curves available in Malaysia are those obtained from the univariate analysis approach, which only considers the intensity of rainfalls at fixed time intervals. As several rainfall variables are correlated with each other, such as intensity and duration, this paper aims to derive IDF points for storm events in Peninsular Malaysia by means of bivariate frequency analysis. This is achieved through utilizing the relationship between storm intensities and durations using the copula method. Four types of copulas, namely the Ali-Mikhail-Haq (AMH), Frank, Gaussian and Farlie-Gumbel-Morgenstern (FGM) copulas, are considered because the correlation between storm intensity, I, and duration, D, is negative and these copulas are appropriate when the relationship between the variables is negative. The correlations are attained by means of Kendall's τ estimation. The analysis was performed on twenty rainfall stations with hourly data across Peninsular Malaysia. Using Akaike's Information Criterion (AIC) for testing goodness-of-fit, both the Frank and Gaussian copulas are found to be suitable to represent the relationship between I and D. The IDF points found by the copula method are compared to the IDF curves yielded based on the typical IDF empirical formula of the univariate approach. This study indicates that storm intensities obtained from both methods are in agreement with each other for any given storm duration and for various return periods.
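    For the Gaussian copula, which the study found suitable, the dependence parameter follows directly from Kendall's tau through rho = sin(pi·tau/2); the Frank copula instead needs a numerical inversion of its tau-theta relation. A minimal sketch, assuming paired arrays of storm intensity and duration:

        import numpy as np
        from scipy.stats import kendalltau

        def gaussian_copula_rho(intensity, duration):
            """Estimate the Gaussian copula correlation from Kendall's tau between
            storm intensity I and duration D (tau is negative for IDF data)."""
            tau, _ = kendalltau(intensity, duration)
            rho = np.sin(np.pi * tau / 2.0)    # exact tau-rho relation for the Gaussian copula
            return tau, rho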

  1. A comparison of solute-transport solution techniques and their effect on sensitivity analysis and inverse modeling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2001-01-01

    Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.

  2. [Vegetation index estimation by chlorophyll content of grassland based on spectral analysis].

    PubMed

    Xiao, Han; Chen, Xiu-Wan; Yang, Zhen-Yu; Li, Huai-Yu; Zhu, Han

    2014-11-01

    Comparing the methods of existing remote sensing research on the estimation of chlorophyll content, the present paper confirms that the vegetation index is one of the most practical and popular research methods. In recent years, grassland degradation has become an increasingly serious problem. This paper firstly analyzes the measured reflectance spectral curve and its first derivative curve in the grasslands of Songpan, Sichuan and Gongger, Inner Mongolia, conducts correlation analysis between these two spectral curves and chlorophyll content, and identifies the relationship between REP (red edge position) and grassland chlorophyll content, namely that the higher the chlorophyll content is, the higher the REIP (red-edge inflection point) value would be. Then, this paper constructs GCI (grassland chlorophyll index) and selects the most suitable band for retrieval. Finally, this paper calculates the GCI from a satellite hyperspectral image and verifies the calculation results against chlorophyll content data collected in the field during two experiments. The results show that for grassland chlorophyll content, GCI has stronger sensitivity than other chlorophyll indices and higher estimation accuracy. GCI is proposed here for the first time to estimate grassland chlorophyll content, and has wide application potential for the remote sensing retrieval of grassland chlorophyll content. In addition, the grassland chlorophyll content estimation method based on remote sensing retrieval in this paper provides new research ideas for the estimation of other vegetation biochemical parameters, the evaluation of vegetation growth status and the monitoring of grassland ecological environment change.

  3. CRITICAL CURVES AND CAUSTICS OF TRIPLE-LENS MODELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daněk, Kamil; Heyrovský, David, E-mail: kamil.danek@utf.mff.cuni.cz, E-mail: heyrovsky@utf.mff.cuni.cz

    2015-06-10

    Among the 25 planetary systems detected up to now by gravitational microlensing, there are two cases of a star with two planets, and two cases of a binary star with a planet. Other, yet undetected types of triple lenses include triple stars or stars with a planet with a moon. The analysis and interpretation of such events are hindered by the lack of understanding of essential characteristics of triple lenses, such as their critical curves and caustics. We present here analytical and numerical methods for mapping the critical-curve topology and caustic cusp number in the parameter space of n-point-mass lenses. We apply the methods to the analysis of four symmetric triple-lens models, and obtain altogether 9 different critical-curve topologies and 32 caustic structures. While these results include various generic types, they represent just a subset of all possible triple-lens critical curves and caustics. Using the analyzed models, we demonstrate interesting features of triple lenses that do not occur in two-point-mass lenses. We show an example of a lens that cannot be described by the Chang–Refsdal model in the wide limit. In the close limit we demonstrate unusual structures of primary and secondary caustic loops, and explain the conditions for their occurrence. In the planetary limit we find that the presence of a planet may lead to a whole sequence of additional caustic metamorphoses. We show that a pair of planets may change the structure of the primary caustic even when placed far from their resonant position at the Einstein radius.

  4. TH-EF-207A-04: A Dynamic Contrast Enhanced Cone Beam CT Technique for Evaluation of Renal Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z; Shi, J; Yang, Y

    Purpose: To develop a simple but robust method for the early detection and evaluation of renal functions using a dynamic contrast enhanced cone beam CT technique. Methods: Experiments were performed on an integrated imaging and radiation research platform developed by our lab. Animals (n=3) were anesthetized with a 20uL Ketamine/Xylazine cocktail, and then received a 200uL injection of the iodinated contrast agent Iopamidol via tail vein. Cone beam CT was acquired following contrast injection once per minute and up to 25 minutes. The cone beam CT was reconstructed with a dimension of 300×300×800 voxels of 130×130×130um voxel resolution. The middle kidney slices in the transverse and coronal planes were selected for image analysis. A double exponential function was used to fit the contrast enhanced signal intensity versus the time after contrast injection. Both pixel-based and region of interest (ROI)-based curve fitting were performed. Four parameters obtained from the curve fitting, namely the amplitude and flow constant for both contrast wash in and wash out phases, were investigated for further analysis. Results: Robust curve fitting was demonstrated for both pixel based (with R^2>0.8 for >85% of pixels within the kidney contour) and ROI based (R^2>0.9 for all regions) analysis. Three different functional regions: renal pelvis, medulla and cortex, were clearly differentiated in the functional parameter map in the pixel based analysis. ROI based analysis showed the half-lives T1/2 for the contrast wash in and wash out phases were 0.98±0.15 and 17.04±7.16, 0.63±0.07 and 17.88±4.51, and 1.48±0.40 and 10.79±3.88 minutes for the renal pelvis, medulla and cortex, respectively. Conclusion: A robust method based on dynamic contrast enhanced cone beam CT and double exponential curve fitting has been developed to analyze the renal functions for different functional regions. Future study will be performed to investigate the sensitivity of this technique in the detection of radiation induced kidney dysfunction.
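    A double-exponential enhancement model of the kind described can be fitted per pixel or per ROI with non-linear least squares; the particular parameterisation, the synthetic data and the parameter names below are assumptions for illustration. The wash-out half-life then follows as ln(2) divided by the fitted wash-out rate constant.

        import numpy as np
        from scipy.optimize import curve_fit

        def double_exp(t, a_in, k_in, a_out, k_out):
            # wash-in minus wash-out enhancement (one plausible parameterisation)
            return a_in * (1.0 - np.exp(-k_in * t)) - a_out * (1.0 - np.exp(-k_out * t))

        # hypothetical ROI-averaged enhancement sampled once per minute for 25 minutes
        rng = np.random.default_rng(0)
        t = np.arange(1, 26, dtype=float)
        signal = double_exp(t, 900.0, 0.7, 400.0, 0.04) + rng.normal(0.0, 10.0, t.size)

        popt, _ = curve_fit(double_exp, t, signal, p0=[800.0, 0.5, 300.0, 0.05], maxfev=5000)
        t_half_wash_out = np.log(2.0) / popt[3]   # wash-out half-life in minutes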

  5. The Lumbar Lordosis in Males and Females, Revisited

    PubMed Central

    Hay, Ori; Dar, Gali; Abbas, Janan; Stein, Dan; May, Hila; Masharawi, Youssef; Peled, Nathan; Hershkovitz, Israel

    2015-01-01

    Background Whether differences exist in male and female lumbar lordosis has been debated by researchers who are divided as to the nature of variations in the spinal curve, their origin, reasoning, and implications from a morphological, functional and evolutionary perspective. Evaluation of the spinal curvature is constructive in understanding the evolution of the spine, as well as its pathology, planning of surgical procedures, monitoring its progression and treatment of spinal deformities. The aim of the current study was to revisit the nature of lumbar curve in males and females. Methods Our new automated method uses CT imaging of the spine to measure lumbar curvature in males and females. The curves extracted from 158 individuals were based on the spinal canal, thus avoiding traditional pitfalls of using bone features for curve estimation. The model analysis was carried out on the entire curve, whereby both local and global descriptors were examined in a single framework. Six parameters were calculated: segment length, curve length, curvedness, lordosis peak location, lordosis cranial peak height, and lordosis caudal peak height. Principal Findings Compared to males, the female spine manifested a statistically significant greater curvature, a caudally located lordotic peak, and greater cranial peak height. As caudal peak height is similar for males and females, the illusion of deeper lordosis among females is due partially to the fact that the upper part of the female lumbar curve is positioned more dorsally (more backwardly inclined). Conclusions Males and females manifest different lumbar curve shape, yet similar amount of inward curving (lordosis). The morphological characteristics of the female spine were probably developed to reduce stress on the vertebral elements during pregnancy and nursing. PMID:26301782

  6. Thermoluminescence of nanocrystalline CaSO4: Dy for gamma dosimetry and calculation of trapping parameters using deconvolution method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandlik, Nandkumar, E-mail: ntmandlik@gmail.com; Patil, B. J.; Bhoraskar, V. N.

    2014-04-24

    Nanorods of CaSO4: Dy having diameter 20 nm and length 200 nm have been synthesized by the chemical coprecipitation method. These samples were irradiated with gamma radiation for the dose varying from 0.1 Gy to 50 kGy and their TL characteristics have been studied. TL dose response shows a linear behavior up to 5 kGy and further saturates with increase in the dose. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of TL glow curves. Trapping parameters for various peaks have been calculated by using CGCD program.

  7. Thermoluminescence of nanocrystalline CaSO4: Dy for gamma dosimetry and calculation of trapping parameters using deconvolution method

    NASA Astrophysics Data System (ADS)

    Mandlik, Nandkumar; Patil, B. J.; Bhoraskar, V. N.; Sahare, P. D.; Dhole, S. D.

    2014-04-01

    Nanorods of CaSO4: Dy having diameter 20 nm and length 200 nm have been synthesized by the chemical coprecipitation method. These samples were irradiated with gamma radiation for the dose varying from 0.1 Gy to 50 kGy and their TL characteristics have been studied. TL dose response shows a linear behavior up to 5 kGy and further saturates with increase in the dose. A Computerized Glow Curve Deconvolution (CGCD) program was used for the analysis of TL glow curves. Trapping parameters for various peaks have been calculated by using CGCD program.

  8. Field of view limitations in see-through HMD using geometric waveguides.

    PubMed

    DeHoog, Edward; Holmstedt, Jason; Aye, Tin

    2016-08-01

    Geometric waveguides are being integrated into head-mounted display (HMD) systems, where having see-through capability in a compact, lightweight form factor is required. We developed methods for determining the field of view (FOV) of such waveguide HMD systems and have analytically derived the FOV for waveguides using planar and curved geometries. By using real ray-tracing methods, we are able to show how the geometry and index of refraction of the waveguide, as well as the properties of the coupling optics, impact the FOV. Use of this analysis allows one to determine the maximum theoretical FOV of a planar or curved waveguide-based system.

  9. Comparing kinetic curves in liquid chromatography

    NASA Astrophysics Data System (ADS)

    Kurganov, A. A.; Kanat'eva, A. Yu.; Yakubenko, E. E.; Popova, T. P.; Shiryaeva, V. E.

    2017-01-01

    Five equations for kinetic curves which connect the number of theoretical plates N and time of analysis t0 for five different versions of optimization, depending on the parameters being varied (e.g., mobile phase flow rate, pressure drop, sorbent grain size), are obtained by means of mathematical modeling. It is found that a method based on the optimization of a sorbent grain size at fixed pressure is most suitable for the optimization of rapid separations. It is noted that the advantages of the method are limited by an area of relatively low efficiency, and the advantage of optimization is transferred to a method based on the optimization of both the sorbent grain size and the drop in pressure across a column in the area of high efficiency.

  10. A novel method of multiple nucleic acid detection: Real-time RT-PCR coupled with probe-melting curve analysis.

    PubMed

    Han, Yang; Hou, Shao-Yang; Ji, Shang-Zhi; Cheng, Juan; Zhang, Meng-Yue; He, Li-Juan; Ye, Xiang-Zhong; Li, Yi-Min; Zhang, Yi-Xuan

    2017-11-15

    A novel method, real-time reverse transcription PCR (real-time RT-PCR) coupled with probe-melting curve analysis, has been established to detect two kinds of samples within one fluorescence channel. Besides a conventional TaqMan probe, this method employs another specially designed melting-probe with a 5' terminus modification, labelled with the same fluorescent group. By using an asymmetric PCR method, the melting-probe can effectively detect an extra sample in the melting stage while having little influence on the amplification detection. Thus, this method allows both the amplification stage and the melting stage to be employed jointly for detecting samples in one reaction. A further demonstration, with simultaneous detection of human immunodeficiency virus (HIV) and hepatitis C virus (HCV) in one channel as a model system, is presented in this paper. The sensitivity of detection by real-time RT-PCR coupled with probe-melting analysis proved to be equal to that of conventional real-time RT-PCR. Because real-time RT-PCR coupled with probe-melting analysis can double the detection throughput within one fluorescence channel, it is expected to be a good solution to the problem of low throughput in current real-time PCR. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. A FEM-based method to determine the complex material properties of piezoelectric disks.

    PubMed

    Pérez, N; Carbonari, R C; Andrade, M A B; Buiochi, F; Adamowski, J C

    2014-08-01

    Numerical simulations allow modeling piezoelectric devices and ultrasonic transducers. However, the accuracy in the results is limited by the precise knowledge of the elastic, dielectric and piezoelectric properties of the piezoelectric material. To introduce the energy losses, these properties can be represented by complex numbers, where the real part of the model essentially determines the resonance frequencies and the imaginary part determines the amplitude of each resonant mode. In this work, a method based on the Finite Element Method (FEM) is modified to obtain the imaginary material properties of piezoelectric disks. The material properties are determined from the electrical impedance curve of the disk, which is measured by an impedance analyzer. The method consists in obtaining the material properties that minimize the error between experimental and numerical impedance curves over a wide range of frequencies. The proposed methodology starts with a sensitivity analysis of each parameter, determining the influence of each parameter over a set of resonant modes. Sensitivity results are used to implement a preliminary algorithm approaching the solution in order to avoid the search to be trapped into a local minimum. The method is applied to determine the material properties of a Pz27 disk sample from Ferroperm. The obtained properties are used to calculate the electrical impedance curve of the disk with a Finite Element algorithm, which is compared with the experimental electrical impedance curve. Additionally, the results were validated by comparing the numerical displacement profile with the displacements measured by a laser Doppler vibrometer. The comparison between the numerical and experimental results shows excellent agreement for both electrical impedance curve and for the displacement profile over the disk surface. The agreement between numerical and experimental displacement profiles shows that, although only the electrical impedance curve is considered in the adjustment procedure, the obtained material properties allow simulating the displacement amplitude accurately. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Shape information from glucose curves: functional data analysis compared with traditional summary measures.

    PubMed

    Frøslie, Kathrine Frey; Røislien, Jo; Qvigstad, Elisabeth; Godang, Kristin; Bollerslev, Jens; Voldner, Nanna; Henriksen, Tore; Veierød, Marit B

    2013-01-17

    Plasma glucose levels are important measures in medical care and research, and are often obtained from oral glucose tolerance tests (OGTT) with repeated measurements over 2-3 hours. It is common practice to use simple summary measures of OGTT curves. However, different OGTT curves can yield similar summary measures, and information of physiological or clinical interest may be lost. Our main aim was to extract information inherent in the shape of OGTT glucose curves, compare it with the information from simple summary measures, and explore the clinical usefulness of such information. OGTTs with five glucose measurements over two hours were recorded for 974 healthy pregnant women in their first trimester. For each woman, the five measurements were transformed into smooth OGTT glucose curves by functional data analysis (FDA), a collection of statistical methods developed specifically to analyse curve data. The essential modes of temporal variation between OGTT glucose curves were extracted by functional principal component analysis. The resultant functional principal component (FPC) scores were compared with commonly used simple summary measures: fasting and two-hour (2-h) values, area under the curve (AUC) and simple shape index (2-h minus 90-min values, or 90-min minus 60-min values). Clinical usefulness of FDA was explored by regression analyses of glucose tolerance later in pregnancy. Over 99% of the variation between individually fitted curves was expressed in the first three FPCs, interpreted physiologically as "general level" (FPC1), "time to peak" (FPC2) and "oscillations" (FPC3). FPC1 scores correlated strongly with AUC (r=0.999), but less with the other simple summary measures (-0.42≤r≤0.79). FPC2 scores gave shape information not captured by simple summary measures (-0.12≤r≤0.40). FPC2 scores, but not FPC1 nor the simple summary measures, discriminated between women who did and did not develop gestational diabetes later in pregnancy. FDA of OGTT glucose curves in early pregnancy extracted shape information that was not identified by commonly used simple summary measures. This information discriminated between women with and without gestational diabetes later in pregnancy.

  13. SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.

    PubMed

    Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga

    2013-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.

  14. Comparison and Analysis on Mechanical Property and Machinability about Polyetheretherketone and Carbon-Fibers Reinforced Polyetheretherketone

    PubMed Central

    Ji, Shijun; Sun, Changrui; Zhao, Ji; Liang, Fusheng

    2015-01-01

    The aim of this paper is to compare the mechanical property and machinability of Polyetheretherketone (PEEK) and 30 wt% carbon-fibers reinforced Polyetheretherketone (PEEK CF 30). The method of nano-indentation is used to investigate the microscopic mechanical property. The evolution of load with displacement, Young’s modulus curves and hardness curves are analyzed. The results illustrate that the load-displacement curves of PEEK present better uniformity, and the variation of Young’s modulus and hardness of PEEK both change smaller at the experimental depth. The machinability between PEEK and PEEK CF 30 are also compared by the method of single-point diamond turning (SPDT), and the peak-to-valley value (PV) and surface roughness (Ra) are obtained to evaluate machinability of the materials after machining. The machining results show that PEEK has smaller PV and Ra, which means PEEK has superior machinability. PMID:28793428

  15. Comparison and Analysis on Mechanical Property and Machinability about Polyetheretherketone and Carbon-Fibers Reinforced Polyetheretherketone.

    PubMed

    Ji, Shijun; Sun, Changrui; Zhao, Ji; Liang, Fusheng

    2015-07-07

    The aim of this paper is to compare the mechanical property and machinability of Polyetheretherketone (PEEK) and 30 wt% carbon-fibers reinforced Polyetheretherketone (PEEK CF 30). The method of nano-indentation is used to investigate the microscopic mechanical property. The evolution of load with displacement, Young's modulus curves and hardness curves are analyzed. The results illustrate that the load-displacement curves of PEEK present better uniformity, and the variation of Young's modulus and hardness of PEEK both change smaller at the experimental depth. The machinability between PEEK and PEEK CF 30 are also compared by the method of single-point diamond turning (SPDT), and the peak-to-valley value (PV) and surface roughness (Ra) are obtained to evaluate machinability of the materials after machining. The machining results show that PEEK has smaller PV and Ra, which means PEEK has superior machinability.

  16. Dispersion analysis and measurement of circular cylindrical wedge-like acoustic waveguides.

    PubMed

    Yu, Tai-Ho

    2015-09-01

    This study investigated the propagation of flexural waves along the outer edge of a circular cylindrical wedge, the phase velocities, and the corresponding mode displacements. Thus far, only approximate solutions have been derived because the corresponding boundary-value problems are complex. In this study, dispersion curves were determined using the bi-dimensional finite element method and derived through the separation of variables and the Hamilton principle. Modal displacement calculations clarified that the maximal deformations appeared at the outer edge of the wedge tip. Numerical examples indicated how distinct thin-film materials deposited on the outer surface of the circular cylindrical wedge influenced the dispersion curves. Additionally, dispersion curves were measured using a laser-induced guided wave, a knife-edge measurement scheme, and a two-dimensional fast Fourier transform method. Both the numerical and experimental results correlated closely, thus validating the numerical solution. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Analysis of Wien filter spectra from Hall thruster plumes.

    PubMed

    Huang, Wensheng; Shastry, Rohit

    2015-07-01

    A method for analyzing the Wien filter spectra obtained from the plumes of Hall thrusters is derived and presented. The new method extends upon prior work by deriving the integration equations for the current and species fractions. Wien filter spectra from the plume of the NASA-300M Hall thruster are analyzed with the presented method and the results are used to examine key trends. The new integration method is found to produce results slightly different from the traditional area-under-the-curve method. The use of different velocity distribution forms when performing curve-fits to the peaks in the spectra is compared. Additional comparison is made with the scenario where the current fractions are assumed to be proportional to the heights of peaks. The comparison suggests that the calculated current fractions are not sensitive to the choice of form as long as both the height and width of the peaks are accounted for. Conversely, forms that only account for the height of the peaks produce inaccurate results. Also presented are the equations for estimating the uncertainty associated with applying curve fits and charge-exchange corrections. These uncertainty equations can be used to plan the geometry of the experimental setup.
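
    A rough illustration of the peak-fitting step described above is sketched below. It is not the authors' derived integration equations: the Gaussian peak shapes, the two-species spectrum, and all numerical values are assumptions for the example, and it simply contrasts an area-based (height times width) species fraction with a height-only estimate.

      # Hedged sketch: fit Gaussian peaks to a synthetic Wien filter spectrum and
      # compare species fractions computed from peak areas (height x width) with
      # fractions computed from peak heights alone. Peak shapes and values are assumed.
      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(x, a, mu, sigma):
          return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

      def two_peaks(x, a1, mu1, s1, a2, mu2, s2):
          return gaussian(x, a1, mu1, s1) + gaussian(x, a2, mu2, s2)

      # Synthetic spectrum with two ion-species peaks (arbitrary units).
      x = np.linspace(0, 100, 2000)
      rng = np.random.default_rng(0)
      y = two_peaks(x, 1.0, 40, 3.0, 0.35, 60, 4.5) + rng.normal(0, 0.01, x.size)

      popt, _ = curve_fit(two_peaks, x, y, p0=[1, 38, 2, 0.3, 62, 4])
      a1, mu1, s1, a2, mu2, s2 = popt

      # Gaussian peak area = a * sigma * sqrt(2*pi); area-based vs height-only fractions.
      area1, area2 = a1 * s1 * np.sqrt(2 * np.pi), a2 * s2 * np.sqrt(2 * np.pi)
      print(f"fraction (area-based):   {area1 / (area1 + area2):.3f}")
      print(f"fraction (height-based): {a1 / (a1 + a2):.3f}")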

  18. In-plane stability analysis of non-uniform cross-sectioned curved beams

    NASA Astrophysics Data System (ADS)

    Öztürk, Hasan; Yeşilyurt, İsa; Sabuncu, Mustafa

    2006-09-01

    In this study, in-plane stability analysis of non-uniform cross-sectioned thin curved beams under uniformly distributed dynamic loads is investigated by using the Finite Element Method. The first and second unstable regions are examined for dynamic stability. In-plane vibration and in-plane buckling are also studied. Two different finite element models, representing variations of cross-section, are developed by using simple strain functions in the analysis. The results obtained from this study are compared with the results of other investigators in existing literature for the fundamental natural frequency and critical buckling load. The effects of opening angle, variations of cross-section, static and dynamic load parameters on the stability regions are shown in graphics.

  19. Effect of nitrogen plasma afterglow on the surface charge effect resulted during XPS surface analysis of amorphous carbon nitride thin films

    NASA Astrophysics Data System (ADS)

    Kayed, Kamal

    2018-06-01

    The aim of this paper is to investigate the relationship between the microstructure and the surface charging effect that arises during XPS surface analysis of amorphous carbon nitride thin films prepared by the laser ablation method. The results show that the charge effect coefficient (E) is not just a correction factor. We found that the changes in this coefficient value caused by the incorporation of nitrogen atoms into the carbon network are related to the spatial configurations of the sp2-bonded carbon atoms, the degree of order, and the sp2 cluster size. In addition, the results show that the curve of E vs. C(sp3)-N is characteristic of the microstructure. This means that the curve makes it easy to sort the samples according to their microstructure (hexagonal rings or chains).

  20. Temperature effect on the acid-base behaviour of Na-montmorillonite.

    PubMed

    Duc, Myriam; Carteret, Cédric; Thomas, Fabien; Gaboriaud, Fabien

    2008-11-15

    We report a study of the acid-base properties of Na-montmorillonite suspensions at temperatures from 25 degrees C to 80 degrees C, by continuous and batch potentiometric methods, combined with analysis of the dissolved and readsorbed species. The batch titration curves reveal that the dissolution processes of Na-montmorillonite and silica-rich secondary phases become increasingly predominant at acidic and basic pH, respectively, and increasingly so with temperature. The continuous titration curves are less affected by these side reactions. In the absence of a common intersection point, the thermodynamic analysis of the curves was based on the shift of the PZNPC with the ionic strength. This shift was not significantly altered by the temperature, in comparison with the dissociation product of water under the same conditions. Therefore we concluded that protonation-deprotonation of the dissociable sites at the edges of the clay platelets is not significantly temperature dependent.

  1. Modeling of light dynamic cone penetration test - Panda 3 ® in granular material by using 3D Discrete element method

    NASA Astrophysics Data System (ADS)

    Tran, Quoc Anh; Chevalier, Bastien; Benz, Miguel; Breul, Pierre; Gourvès, Roland

    2017-06-01

    The recent technological developments made on the light dynamic penetration test Panda 3 ® provide a dynamic load-penetration curve σp - sp for each impact. This curve is influenced by the mechanical and physical properties of the investigated granular media. In order to analyze and exploit the load-penetration curve, a numerical model of the penetration test using the 3D Discrete Element Method is proposed for reproducing tests in dynamic conditions in granular media. All impact parameters used in this model were first calibrated with respect to the mechanical and geometrical properties of the hammer and the rod. Good agreement is obtained between experimental results and those obtained from simulations in 2D or 3D. After creating a sample, the Panda 3 ® test is simulated, and the dynamic load-penetration curve occurring at the tip can be measured directly for each impact. Using the force and acceleration measured in the top part of the rod, it is possible to separate the incident and reflected waves and then calculate the tip's load-penetration curve. The load-penetration curve obtained is qualitatively similar to that obtained in experimental tests. In addition, the frequency content of the simulated signals also agrees well with that measured in reality when the tip resistance is qualitatively similar.
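
    As a hedged illustration of the wave-separation step mentioned above (force and acceleration measured near the rod head), the sketch below uses the standard one-dimensional decomposition F_down = (F + Zv)/2 and F_up = (F - Zv)/2 with rod impedance Z = ρcA; the rod properties and signals are synthetic placeholders, not Panda 3 ® data.

      # Minimal sketch of 1D wave separation in an instrumented rod; toy signals only.
      import numpy as np

      rho, E, A = 7800.0, 210e9, 1.8e-4      # assumed steel rod density, modulus, area
      c = np.sqrt(E / rho)                   # bar wave speed
      Z = rho * c * A                        # mechanical impedance of the rod

      t = np.linspace(0, 2e-3, 4000)
      F = 5e3 * np.exp(-((t - 4e-4) / 1e-4) ** 2)   # synthetic force at the rod head
      v = 0.8 * F / Z                               # synthetic particle velocity (toy data)

      F_down = 0.5 * (F + Z * v)   # incident (downward-travelling) force component
      F_up   = 0.5 * (F - Z * v)   # reflected (upward-travelling) force component
      print(f"peak incident force: {F_down.max():.0f} N, peak reflected force: {F_up.max():.0f} N")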

  2. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca; Haider, Masoom A.; Jaffray, David A.

    Purpose: A previously proposed method to reduce radiation dose to patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method to maintain the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. The arterial input function (AIF) with high temporal resolution can be generated with a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of time-concentration curves (tissue curves), first, a region-of-interest is segmented into squares composed of 3 × 3 pixels in size. Subsequently, the PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares for further improvement of their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. The kinetic analyses using the modified Tofts’ model and singular value decomposition method, then, were carried out for each of the down-sampling schemes between the intervals from 2 to 15 s. The results were compared with analyses done with the measured data in high temporal resolution (i.e., original scanning frequency) as the reference. Results: The patients’ AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy in the quantitative histogram parameters of volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low frequency scanning for DCE-CT study to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.
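
    The PCA filtering step can be sketched as follows on synthetic tissue curves (the gamma-variate-like curve shapes, noise level and curve count are assumptions for illustration, not the patient data); the abstract reports keeping five principal components, which is mirrored here.

      # Hedged sketch: denoise a matrix of time-concentration curves by truncated
      # PCA reconstruction, in the spirit of the filtering described above.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(1)
      n_curves, n_times = 900, 60
      t = np.linspace(0, 300, n_times)                       # seconds

      scales = rng.uniform(0.5, 2.0, size=(n_curves, 1))
      clean = scales * (t / 60.0) ** 2 * np.exp(-t / 60.0)   # synthetic noiseless curves
      noisy = clean + rng.normal(0, 0.05, size=clean.shape)

      pca = PCA(n_components=5)                 # keep five principal components
      denoised = pca.inverse_transform(pca.fit_transform(noisy))

      print("RMS error before filtering:", np.sqrt(np.mean((noisy - clean) ** 2)))
      print("RMS error after  filtering:", np.sqrt(np.mean((denoised - clean) ** 2)))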

  3. A century of enzyme kinetic analysis, 1913 to 2013.

    PubMed

    Johnson, Kenneth A

    2013-09-02

    This review traces the history and logical progression of methods for quantitative analysis of enzyme kinetics from the 1913 Michaelis and Menten paper to the application of modern computational methods today. Following a brief review of methods for fitting steady state kinetic data, modern methods are highlighted for fitting full progress curve kinetics based upon numerical integration of rate equations, including a re-analysis of the original Michaelis-Menten full time course kinetic data. Finally, several illustrations of modern transient state kinetic methods of analysis are shown which enable the elucidation of reactions occurring at the active sites of enzymes in order to relate structure and function. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
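
    As an illustration of progress-curve fitting by numerical integration of a rate equation, the sketch below fits Vmax and Km of the Michaelis-Menten rate law dS/dt = -Vmax·S/(Km + S) to a synthetic substrate-depletion curve; all numerical values are assumed, and this is not the re-analysis performed in the review.

      # Hedged sketch: fit a full progress curve by numerically integrating the
      # Michaelis-Menten rate equation dS/dt = -Vmax*S/(Km + S); synthetic data.
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import curve_fit

      def progress_curve(t, Vmax, Km, S0=1.0):
          sol = solve_ivp(lambda _t, S: -Vmax * S / (Km + S), (0, t[-1]), [S0],
                          t_eval=t, rtol=1e-8)
          return sol.y[0]

      t = np.linspace(0, 60, 50)                                       # minutes
      rng = np.random.default_rng(2)
      data = progress_curve(t, Vmax=0.05, Km=0.3) + rng.normal(0, 0.005, t.size)

      (Vmax_fit, Km_fit), _ = curve_fit(progress_curve, t, data, p0=[0.1, 0.5])
      print(f"Vmax = {Vmax_fit:.3f}, Km = {Km_fit:.3f}  (true values: 0.050, 0.300)")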

  4. Oscillatory patterns in the light curves of five long-term monitored type 1 active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Kovačević, Andjelka B.; Pérez-Hernández, Ernesto; Popović, Luka Č.; Shapovalova, Alla I.; Kollatschny, Wolfram; Ilić, Dragana

    2018-04-01

    New combined data of five well-known type 1 active galactic nuclei (AGNs) are probed with a novel hybrid method in a search for oscillatory behaviour. Additional analysis of artificial light curves obtained from the coupled oscillatory models confirms that the detected periods could have a physical origin. We find periodic variations in the long-term light curves of 3C 390.3, NGC 4151 and NGC 5548, and E1821 + 643, with correlation coefficients larger than 0.6. We show that the oscillatory patterns of two binary black hole candidates, NGC 5548 and E1821 + 643, correspond to qualitatively different dynamical regimes of chaos and stability, respectively. We demonstrate that the absence of oscillatory patterns in Arp 102B could be the result of a weak coupling between oscillatory mechanisms. This is the first good evidence that 3C 390.3 and Arp 102B, categorized as double-peaked Balmer line objects, have qualitatively different dynamics. Our analysis reveals novel oscillatory dynamical patterns in the light curves of these type 1 AGNs.

  5. STR melting curve analysis as a genetic screening tool for crime scene samples.

    PubMed

    Nguyen, Quang; McKinney, Jason; Johnson, Donald J; Roberts, Katherine A; Hardy, Winters R

    2012-07-01

    In this proof-of-concept study, high-resolution melt curve (HRMC) analysis was investigated as a postquantification screening tool to discriminate human CSF1PO and THO1 genotypes amplified with mini-STR primers in the presence of SYBR Green or LCGreen Plus dyes. A total of 12 CSF1PO and 11 HUMTHO1 genotypes were analyzed on the LightScanner HR96 and LS-32 systems and were correctly differentiated based upon their respective melt profiles. Short STR amplicon melt curves were affected by repeat number, and single-source and mixed DNA samples were additionally differentiated by the formation of heteroduplexes. Melting curves were shown to be unique and reproducible from DNA quantities ranging from 20 to 0.4 ng and distinguished identical from nonidentical genotypes from DNA derived from different biological fluids and compromised samples. Thus, a method is described which can assess both the quantity and the possible probative value of samples without full genotyping. 2012 American Academy of Forensic Sciences. Published 2012. This article is a U.S. Government work and is in the public domain in the U.S.A.

  6. Optimal spinneret layout in Von Koch curves of fractal theory based needleless electrospinning process

    NASA Astrophysics Data System (ADS)

    Yang, Wenxiu; Liu, Yanbo; Zhang, Ligai; Cao, Hong; Wang, Yang; Yao, Jinbo

    2016-06-01

    Needleless electrospinning technology is considered a better avenue for producing nanofibrous materials at large scale, and electric field intensity and its distribution play an important role in controlling nanofiber diameter and the quality of the nanofibrous web during electrospinning. In the current study, a novel needleless electrospinning method was proposed based on Von Koch curves of fractal configuration, and simulation and analysis of electric field intensity and distribution in the new electrospinning process were performed with the finite element analysis software Comsol Multiphysics 4.4, based on linear and nonlinear Von Koch fractal curves (hereafter called fractal models). The simulation and analysis indicated that the second-level fractal structure is the optimal linear electrospinning spinneret in terms of field intensity and uniformity. Further simulation and analysis showed that the circular type of fractal spinneret has better field intensity and distribution than the spiral type in the nonlinear fractal electrospinning technology. An electrospinning apparatus with the optimal Von Koch fractal spinneret was set up to verify the theoretical analysis results from the Comsol simulation, achieving more uniform electric field distribution and lower energy cost compared to current needle and needleless electrospinning technologies.

  7. Problems in using p-curve analysis and text-mining to detect rate of p-hacking and evidential value.

    PubMed

    Bishop, Dorothy V M; Thompson, Paul A

    2016-01-01

    Background. The p-curve is a plot of the distribution of p-values reported in a set of scientific studies. Comparisons between ranges of p-values have been used to evaluate fields of research in terms of the extent to which studies have genuine evidential value, and the extent to which they suffer from bias in the selection of variables and analyses for publication, p-hacking. Methods. p-hacking can take various forms. Here we used R code to simulate the use of ghost variables, where an experimenter gathers data on several dependent variables but reports only those with statistically significant effects. We also examined a text-mined dataset used by Head et al. (2015) and assessed its suitability for investigating p-hacking. Results. We show that when there is ghost p-hacking, the shape of the p-curve depends on whether dependent variables are intercorrelated. For uncorrelated variables, simulated p-hacked data do not give the "p-hacking bump" just below .05 that is regarded as evidence of p-hacking, though there is a negative skew when simulated variables are inter-correlated. The way p-curves vary according to features of underlying data poses problems when automated text mining is used to detect p-values in heterogeneous sets of published papers. Conclusions. The absence of a bump in the p-curve is not indicative of lack of p-hacking. Furthermore, while studies with evidential value will usually generate a right-skewed p-curve, we cannot treat a right-skewed p-curve as an indicator of the extent of evidential value, unless we have a model specific to the type of p-values entered into the analysis. We conclude that it is not feasible to use the p-curve to estimate the extent of p-hacking and evidential value unless there is considerable control over the type of data entered into the analysis. In particular, p-hacking with ghost variables is likely to be missed.
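
    The ghost-variable simulation described above (the authors used R code) can be paraphrased in Python roughly as follows; the group sizes, number of outcomes and number of simulated studies are arbitrary choices for illustration. With uncorrelated outcomes the reported p-values come out roughly uniform below .05, i.e. without the bump just below .05.

      # Minimal Python analogue of ghost-variable p-hacking under the null: each
      # simulated experimenter tests k uncorrelated outcomes and reports only the
      # smallest p-value when it falls below .05.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      n_per_group, k_outcomes, n_studies = 20, 5, 20000
      reported = []
      for _ in range(n_studies):
          a = rng.normal(size=(n_per_group, k_outcomes))   # group A, no true effect
          b = rng.normal(size=(n_per_group, k_outcomes))   # group B, no true effect
          p = stats.ttest_ind(a, b, axis=0).pvalue
          if p.min() < 0.05:                               # report the "best" outcome only
              reported.append(p.min())

      # p-curve: distribution of reported significant p-values in .01-wide bins.
      counts, edges = np.histogram(reported, bins=np.arange(0, 0.051, 0.01))
      for lo, hi, c in zip(edges[:-1], edges[1:], counts):
          print(f"{lo:.2f}-{hi:.2f}: {c}")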

  8. Seismic performance evaluation of RC frame-shear wall structures using nonlinear analysis methods

    NASA Astrophysics Data System (ADS)

    Shi, Jialiang; Wang, Qiuwei

    To further understand the seismic performance of reinforced concrete (RC) frame-shear wall structures, a 1/8-scale model structure was derived from a main factory structure with seven stories and seven bays. The model, with four stories and two bays, was pseudo-dynamically tested under six earthquake actions whose peak ground accelerations (PGA) varied from 50 gal to 400 gal. The damage process and failure patterns were investigated. Furthermore, nonlinear dynamic analysis (NDA) and the capacity spectrum method (CSM) were adopted to evaluate the seismic behavior of the model structure. The top displacement curve, story drift curve and distribution of hinges were obtained and discussed. It is shown that the model structure exhibited a beam-hinge failure mechanism. Both methods can be used to evaluate the seismic behavior of RC frame-shear wall structures well. Moreover, CSM can to some extent replace NDA for the seismic performance evaluation of RC structures.

  9. Estimating Time to Event From Longitudinal Categorical Data: An Analysis of Multiple Sclerosis Progression.

    PubMed

    Mandel, Micha; Gauthier, Susan A; Guttmann, Charles R G; Weiner, Howard L; Betensky, Rebecca A

    2007-12-01

    The expanded disability status scale (EDSS) is an ordinal score that measures progression in multiple sclerosis (MS). Progression is defined as reaching EDSS of a certain level (absolute progression) or an increase of one EDSS point (relative progression). Survival methods for time to progression are not adequate for such data since they do not exploit the EDSS level at the end of follow-up. Instead, we suggest a Markov transitional model applicable to repeated categorical or ordinal data. This approach enables derivation of covariate-specific survival curves, obtained after estimation of the regression coefficients and manipulations of the resulting transition matrix. Large sample theory and resampling methods are employed to derive pointwise confidence intervals, which perform well in simulation. Methods for generating survival curves for time to EDSS of a certain level, time to increase of EDSS of at least one point, and time to two consecutive visits with EDSS greater than three are described explicitly. The regression models described are easily implemented using standard software packages. Survival curves are obtained from the regression results using packages that support simple matrix calculation. We present and demonstrate our method on data collected at the Partners MS center in Boston, MA. We apply our approach to progression defined by time to two consecutive visits with EDSS greater than three, and calculate crude (without covariates) and covariate-specific curves.
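
    The step from an estimated transition matrix to a survival-type curve can be sketched as follows; the three-state matrix and the treatment of "progressed" as an absorbing state below are illustrative assumptions, not estimates from the MS data.

      # Hedged sketch: probability of not yet having reached the "progressed" state
      # by visit t, obtained by repeated multiplication with a Markov transition matrix.
      import numpy as np

      # States: 0 = low EDSS, 1 = moderate EDSS, 2 = progressed (treated as absorbing).
      P = np.array([[0.85, 0.12, 0.03],
                    [0.05, 0.80, 0.15],
                    [0.00, 0.00, 1.00]])

      dist = np.array([1.0, 0.0, 0.0])       # everyone starts in the low-EDSS state
      for visit in range(1, 11):
          dist = dist @ P                    # state distribution after `visit` transitions
          print(f"visit {visit:2d}: P(not yet progressed) = {1.0 - dist[2]:.3f}")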

  10. Evaluation of Strain-Life Fatigue Curve Estimation Methods and Their Application to a Direct-Quenched High-Strength Steel

    NASA Astrophysics Data System (ADS)

    Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.

    2018-03-01

    Methods to estimate the strain-life curve, divided into three categories (simple approximations, artificial neural network-based approaches, and continuum damage mechanics models), were examined, and their accuracy was assessed in the strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations is the easiest for practical use, with their applicability having already been verified for a broad range of materials.

  11. Learning curve for laparoscopic Heller myotomy and Dor fundoplication for achalasia

    PubMed Central

    Omura, Nobuo; Tsuboi, Kazuto; Hoshino, Masato; Yamamoto, Seryung; Akimoto, Shunsuke; Masuda, Takahiro; Kashiwagi, Hideyuki; Yanaga, Katsuhiko

    2017-01-01

    Purpose Although laparoscopic Heller myotomy and Dor fundoplication (LHD) is widely performed to address achalasia, little is known about the learning curve for this technique. We assessed the learning curve for performing LHD. Methods Of the 514 cases with LHD performed between August 1994 and March 2016, the surgical outcomes of 463 cases were evaluated after excluding 50 cases with reduced port surgery and one case with the simultaneous performance of laparoscopic distal partial gastrectomy. A receiver operating characteristic (ROC) curve analysis was used to identify the cut-off value for the number of cases needed to become proficient with LHD, which was defined as the completion of the learning curve. Results We defined the completion of the learning curve when the following 3 conditions were satisfied: 1) the operation time was less than 165 minutes, 2) there was no blood loss, and 3) there was no intraoperative complication. In order to establish the appropriate number of cases required to complete the learning curve, the cut-off value was evaluated by using a ROC curve (AUC 0.717, p < 0.001). Finally, we identified the cut-off value as 16 surgical cases (sensitivity 0.706, specificity 0.646). Conclusion The learning curve appears to be completed after performing 16 cases. PMID:28686640
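
    The ROC-based cut-off idea can be sketched as follows with synthetic data; the labels, the logistic proficiency model, and the Youden-index criterion below are assumptions for illustration, not the study's records or its exact cut-off rule.

      # Hedged sketch: choose a cut-off "number of cases performed" from an ROC
      # curve using the Youden index, on synthetic proficiency labels.
      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(4)
      case_number = np.arange(1, 101)                          # 1st ... 100th operation
      prob = 1 / (1 + np.exp(-(case_number - 20) / 8))         # later cases more often proficient
      proficient = rng.binomial(1, prob)

      fpr, tpr, thresholds = roc_curve(proficient, case_number)
      best = np.argmax(tpr - fpr)                              # Youden index J = sens + spec - 1
      print(f"AUC = {roc_auc_score(proficient, case_number):.3f}, "
            f"cut-off at case {thresholds[best]:.0f}, "
            f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")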

  12. New strategy to identify radicals in a time evolving EPR data set by multivariate curve resolution-alternating least squares.

    PubMed

    Fadel, Maya Abou; de Juan, Anna; Vezin, Hervé; Duponchel, Ludovic

    2016-12-01

    Electron paramagnetic resonance (EPR) spectroscopy is a powerful technique that is able to characterize radicals formed in kinetic reactions. However, spectral characterization of individual chemical species is often limited or even unmanageable due to the severe kinetic and spectral overlap among species in kinetic processes. Therefore, we applied, for the first time, the multivariate curve resolution-alternating least squares (MCR-ALS) method to time-evolving EPR data sets to model and characterize the different constituents in a kinetic reaction. Here we demonstrate the advantage of multivariate analysis in the investigation of radicals formed along the kinetic process of hydroxycoumarin in alkaline medium. Multiset analysis of several EPR-monitored kinetic experiments performed in different conditions revealed the individual paramagnetic centres as well as their kinetic profiles. The results obtained by the MCR-ALS method demonstrate its strong potential for the analysis of time-evolved EPR spectra. Copyright © 2016 Elsevier B.V. All rights reserved.
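
    A minimal MCR-ALS loop on a synthetic bilinear data set is sketched below (not the authors' implementation): the "spectra", kinetic profiles, and the simple clipping used to impose non-negativity are all assumptions chosen to keep the example short.

      # Hedged, minimal MCR-ALS illustration on synthetic bilinear data D = C @ S.
      import numpy as np

      rng = np.random.default_rng(5)
      t = np.linspace(0, 10, 80)[:, None]
      x = np.linspace(0, 1, 120)[None, :]

      C_true = np.hstack([np.exp(-0.4 * t), 1 - np.exp(-0.4 * t)])         # kinetic profiles
      S_true = np.vstack([np.exp(-((x - 0.3) / 0.05) ** 2),
                          np.exp(-((x - 0.7) / 0.05) ** 2)])               # pure "spectra"
      D = C_true @ S_true + rng.normal(0, 0.01, (80, 120))

      C = rng.uniform(size=(80, 2))                  # random initial concentration guess
      for _ in range(200):                           # alternating least squares
          S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
          C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)

      print(f"relative residual after ALS: {np.linalg.norm(D - C @ S) / np.linalg.norm(D):.4f}")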

  13. Theoretical study of the accuracy of the pulse method, frontal analysis, and frontal analysis by characteristic points for the determination of single component adsorption isotherms.

    PubMed

    Andrzejewska, Anna; Kaczmarski, Krzysztof; Guiochon, Georges

    2009-02-13

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N=500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.

  14. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.

  15. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120
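
    Building the linear density-absorbed dose calibration line itself is a one-line fit; the sketch below uses placeholder density and dose values (not the EBT3 measurements or the gradients quoted above) purely to show the fitting and conversion steps.

      # Hedged sketch: fit a linear density-absorbed dose calibration curve and use
      # it to convert a new density reading; all values are synthetic placeholders.
      import numpy as np

      dose = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])                  # delivered dose (mGy)
      density = np.array([1.250, 1.249, 1.247, 1.244, 1.238, 1.226])       # measured film density

      slope, intercept = np.polyfit(density, dose, 1)        # dose = slope * density + intercept
      print(f"calibration: dose = {slope:.1f} * density + {intercept:.1f}")
      print(f"density 1.240 -> {slope * 1.240 + intercept:.1f} mGy")       # apply the curve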

  16. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhong-Li, E-mail: zl.liu@163.com; Zhang, Xiu-Lu; Cai, Ling-Cang

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  17. Structural Technology Evaluation and Analysis Program (STEAP) Delivery Order 0042: Development of the Equivalent Overload Model, Demonstration of the Failure of Superposition, and Relaxation/Redistribution Measurement

    DTIC Science & Technology

    2011-09-01

    with the bilinear plasticity relation. We used the bilinear relation, which allowed a full range of hardening from isotropic to kinematic to be... Verification of the Weight Function Method for Single Corner Crack at a Hole in an Infinite ...determine the “Young’s Modulus,” or the slope of the linear region of the curve, the experimental data is curve fit with

  18. Investigations of Fully Homomorphic Encryption (IFHE)

    DTIC Science & Technology

    2015-05-01

    analysis via experiments using the curve secp256k1 used in the Bitcoin protocol. In particular we show that with as little as 200 signatures we are able...used in Bitcoin [30]. The implementation of the secp256k1 curve in OpenSSL is interesting as it uses the wNAF method for exponentiation, as opposed to... Bitcoin an obvious mitigation against the attack is to limit the number of times a private key is used within the Bitcoin protocol. Each wallet

  19. Aeroelastic analysis of a troposkien-type wind turbine blade

    NASA Technical Reports Server (NTRS)

    Nitzsche, F.

    1981-01-01

    The linear aeroelastic equations for one curved blade of a vertical axis wind turbine in state vector form are presented. The method is based on a simple integrating matrix scheme together with the transfer matrix idea. The method is proposed as a convenient way of solving the associated eigenvalue problem for general support conditions.

  20. Power Analysis for Complex Mediational Designs Using Monte Carlo Methods

    ERIC Educational Resources Information Center

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2010-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
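
    A generic Monte Carlo power analysis for a single-mediator model can be sketched as below; the path coefficients, sample size, and the joint-significance criterion for detecting mediation are assumptions for illustration and not necessarily the framework the authors describe.

      # Hedged sketch: Monte Carlo power estimate for detecting mediation X -> M -> Y,
      # using the joint significance of the a and b paths as the detection rule.
      import numpy as np
      import statsmodels.api as sm

      def one_replication(n, a=0.3, b=0.3, rng=None):
          x = rng.normal(size=n)
          m = a * x + rng.normal(size=n)                        # mediator
          y = b * m + rng.normal(size=n)                        # outcome
          p_a = sm.OLS(m, sm.add_constant(x)).fit().pvalues[1]
          p_b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().pvalues[2]
          return (p_a < 0.05) and (p_b < 0.05)

      rng = np.random.default_rng(6)
      power = np.mean([one_replication(100, rng=rng) for _ in range(2000)])
      print(f"estimated power to detect mediation at n=100: {power:.2f}")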

  1. An analytical and experimental investigation of the response of the curved, composite frame/skin specimens

    NASA Technical Reports Server (NTRS)

    Moas, Eduardo; Boitnott, Richard L.; Griffin, O. Hayden, Jr.

    1994-01-01

    Six-foot diameter, semicircular graphite/epoxy specimens representative of generic aircraft frames were loaded quasi-statically to determine their load response and failure mechanisms for large deflections that occur in airplane crashes. These frame/skin specimens consisted of a cylindrical skin section co-cured with a semicircular I-frame. The skin provided the necessary lateral stiffness to keep deformations in the plane of the frame in order to realistically represent deformations as they occur in actual fuselage structures. Various frame laminate stacking sequences and geometries were evaluated by statically loading the specimen until multiple failures occurred. Two analytical methods were compared for modeling the frame/skin specimens: a two-dimensional shell finite element analysis and a one-dimensional, closed-form, curved beam solution derived using an energy method. Flange effectivities were included in the beam analysis to account for the curling phenomenon that occurs in thin flanges of curved beams. Good correlation was obtained between experimental results and the analytical predictions of the linear response of the frames prior to the initial failure. The specimens were found to be useful for evaluating composite frame designs.

  2. On the stability analysis of sharply stratified shear flows

    NASA Astrophysics Data System (ADS)

    Churilov, Semyon

    2018-05-01

    When the stability of a sharply stratified shear flow is studied, the density profile is usually taken to be stepwise and a weak stratification between pycnoclines is neglected. As a consequence, in the instability domain of the flow two-sided neutral curves appear such that the waves corresponding to them are neutrally stable, whereas the neighboring waves on either side of the curve are unstable, in contrast with the classical result of Miles (J Fluid Mech 16:209-227, 1963) who proved that in stratified flows unstable oscillations can occur only on one side of the neutral curve. In the paper, the contradiction is resolved and changes in the flow stability pattern under the transition from a model stepwise density profile to a continuous one are analyzed. On this basis, a simple self-consistent algorithm is proposed for studying the stability of sharply stratified shear flows with a continuous density variation and an arbitrary monotonic velocity profile without inflection points. Because our calculations and the algorithm are both based on the method of stability analysis (Churilov J Fluid Mech 539:25-55, 2005; ibid, 617, 301-326, 2008), which differs essentially from those usually used, the paper starts with a brief review of the method and results obtained with it.

  3. TUNNEL LINING DESIGN METHOD BY FRAME STRUCTURE ANALYSIS USING GROUND REACTION CURVE

    NASA Astrophysics Data System (ADS)

    Sugimoto, Mitsutaka; Sramoon, Aphichat; Okazaki, Mari

    Both NATM and the shield tunnelling method can be applied to the Diluvial and Neogene deposits on which Japanese megacities are located. Since the lining design methods for the two tunnelling methods differ considerably, a unified concept for tunnel lining design is desirable. Therefore, in this research, a frame structure analysis model for tunnel lining design using the ground reaction curve was developed, which can take into account the earth pressure due to displacement of the excavated surface toward the active side, including the effect of ground self-stabilization, and the displacement of the excavated surface before lining installation. Based on the developed model, a parameter study was carried out taking the coefficient of subgrade reaction and the grouting rate as parameters, and the earth pressure acting on the lining measured at a site was compared with that calculated by the developed model and by the conventional model. As a result, it was confirmed that the developed model can represent the earth pressure acting on the lining, the lining displacement, and the lining sectional forces for ground conditions ranging from soft to stiff.

  4. Identification of the critical depth-of-cut through a 2D image of the cutting region resulting from taper cutting of brittle materials

    NASA Astrophysics Data System (ADS)

    Gu, Wen; Zhu, Zhiwei; Zhu, Wu-Le; Lu, Leyao; To, Suet; Xiao, Gaobo

    2018-05-01

    An automatic identification method for obtaining the critical depth-of-cut (DoC) of brittle materials with nanometric accuracy and sub-nanometric uncertainty is proposed in this paper. With this method, a two-dimensional (2D) microscopic image of the taper cutting region is captured and further processed by image analysis to extract the margin of generated micro-cracks in the imaging plane. Meanwhile, an analytical model is formulated to describe the theoretical curve of the projected cutting points on the imaging plane with respect to a specified DoC during the whole cutting process. By adopting differential evolution algorithm-based minimization, the critical DoC can be identified by minimizing the deviation between the extracted margin and the theoretical curve. The proposed method is demonstrated through both numerical simulation and experimental analysis. Compared with conventional 2D- and 3D-microscopic-image-based methods, determination of the critical DoC in this study uses the envelope profile rather than the onset point of the generated cracks, providing a more objective approach with smaller uncertainty.
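
    The final minimization step can be illustrated with scipy's differential evolution as below; the parabolic "theoretical margin" is only a placeholder for the paper's projected-curve model, and all numbers are assumed.

      # Hedged sketch: identify a depth-of-cut parameter by minimizing the deviation
      # between a theoretical curve and an extracted crack margin (synthetic data).
      import numpy as np
      from scipy.optimize import differential_evolution

      x = np.linspace(-1.0, 1.0, 200)                    # position along the taper cut

      def theoretical_margin(x, doc, width):
          return doc - (x / width) ** 2                  # placeholder model, not the paper's

      rng = np.random.default_rng(7)
      extracted = theoretical_margin(x, 0.45, 0.8) + rng.normal(0, 0.01, x.size)

      def deviation(params):
          doc, width = params
          return np.sum((theoretical_margin(x, doc, width) - extracted) ** 2)

      result = differential_evolution(deviation, bounds=[(0.0, 1.0), (0.1, 2.0)], seed=0)
      print(f"identified critical DoC ≈ {result.x[0]:.3f} (arbitrary units)")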

  5. Proposal for a standardised identification of the mono-exponential terminal phase for orally administered drugs.

    PubMed

    Scheerans, Christian; Derendorf, Hartmut; Kloft, Charlotte

    2008-04-01

    The area under the plasma concentration-time curve from time zero to infinity (AUC(0-inf)) is generally considered to be the most appropriate measure of total drug exposure for bioavailability/bioequivalence studies of orally administered drugs. However, the lack of a standardised method for identifying the mono-exponential terminal phase of the concentration-time curve causes variability for the estimated AUC(0-inf). The present investigation introduces a simple method, called the two times t(max) method (TTT method) to reliably identify the mono-exponential terminal phase in the case of oral administration. The new method was tested by Monte Carlo simulation in Excel and compared with the adjusted r squared algorithm (ARS algorithm) frequently used in pharmacokinetic software programs. Statistical diagnostics of three different scenarios, each with 10,000 hypothetical patients showed that the new method provided unbiased average AUC(0-inf) estimates for orally administered drugs with a monophasic concentration-time curve post maximum concentration. In addition, the TTT method generally provided more precise estimates for AUC(0-inf) compared with the ARS algorithm. It was concluded that the TTT method is a most reasonable tool to be used as a standardised method in pharmacokinetic analysis especially bioequivalence studies to reliably identify the mono-exponential terminal phase for orally administered drugs showing a monophasic concentration-time profile.
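
    As summarised above, the TTT rule treats all concentrations sampled after two times tmax as the mono-exponential terminal phase; a minimal sketch with synthetic concentration-time data (not the paper's simulated patients) follows.

      # Hedged sketch of the "two times tmax" (TTT) idea: a log-linear fit of the points
      # after 2*tmax gives lambda_z, and AUC(0-inf) = AUC(0-tlast) + Clast / lambda_z.
      import numpy as np

      t = np.array([0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, 24.0])                  # h
      c = np.array([1.2, 2.3, 3.1, 2.9, 2.3, 1.8, 1.1, 0.7, 0.3, 0.04])      # mg/L (synthetic)

      tmax = t[np.argmax(c)]
      terminal = t >= 2 * tmax                                               # TTT selection rule
      slope, intercept = np.polyfit(t[terminal], np.log(c[terminal]), 1)
      lambda_z = -slope                                                      # terminal rate constant

      auc_0_tlast = np.trapz(c, t)                                           # trapezoidal rule
      auc_0_inf = auc_0_tlast + c[-1] / lambda_z
      print(f"tmax = {tmax} h, lambda_z = {lambda_z:.3f} 1/h, AUC(0-inf) = {auc_0_inf:.2f} mg*h/L")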

  6. Modified Displacement Transfer Functions for Deformed Shape Predictions of Slender Curved Structures with Varying Curvatures

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Fleischer, Van Tran

    2014-01-01

    To eliminate the need to use finite-element modeling for structure shape predictions, a new method was invented. This method uses Displacement Transfer Functions to transform the measured surface strains into deflections for mapping out overall structural deformed shapes. The Displacement Transfer Functions are expressed in terms of rectilinearly distributed surface strains, and contain no material properties. This report applies the patented method to the shape predictions of non-symmetrically loaded slender curved structures with different curvatures up to a full circle. Because the measured surface strains are not available, finite-element analysis had to be used to analytically generate the surface strains. Previously formulated straight-beam Displacement Transfer Functions were modified by introducing the curvature-effect correction terms. Through single-point or dual-point collocations with finite-element-generated deflection curves, functional forms of the curvature-effect correction terms were empirically established. The resulting modified Displacement Transfer Functions can then provide quite accurate shape predictions. Also, the uniform straight-beam Displacement Transfer Function was applied to the shape predictions of a section-cut of a generic capsule (GC) outer curved sandwich wall. The resulting GC shape predictions are quite accurate in partial regions where the radius of curvature does not change sharply.

  7. Extracting transient Rayleigh wave and its application in detecting quality of highway roadbed

    USGS Publications Warehouse

    Liu, J.; Xia, J.; Luo, Y.; Li, X.; Xu, S.; ,

    2004-01-01

    This paper first explains the tau-p mapping method for extracting Rayleigh waves (LR waves) from field shot gathers, and a mathematical model relating physical parameters to the quality of high-grade roads. It then discusses an algorithm for computing dispersion curves from adjacent channels. Shear velocity and physical parameters are obtained by inversion of the dispersion curves. Using adjacent channels to calculate dispersion curves eliminates the averaging effect that arises when many channels are used, thereby improving the longitudinal and transverse resolution of LR waves and the precision of non-invasive detection, and also broadening the application fields of the method. By analysis of modeling results of detached computation of the ground roll and real examples of detecting the density and compressive strength of a high-grade roadbed, and by comparison of the shallow seismic image method with borehole cores, we concluded that: (1) the anomaly scale and configuration obtained from LR waves are largely the same as the results of the shallow seismic image method; (2) the average relative error of density obtained from LR-wave inversion is 1.6% compared with borehole coring; and (3) transient LR waves are feasible and effective for detecting the density and compressive strength of a high-grade roadbed.

  8. Analysis test of understanding of vectors with the three-parameter logistic model of item response theory and item response curves technique

    NASA Astrophysics Data System (ADS)

    Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan

    2016-12-01

    This study investigated the multiple-choice test of understanding of vectors (TUV) by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the PARSCALE program. The TUV ability is an ability parameter, here estimated assuming unidimensionality and local independence. Moreover, all distractors of the TUV were analyzed from item response curves (IRC) that represent simplified IRT. Data were gathered on 2392 science and engineering freshmen from three universities in Thailand. The results revealed IRT analysis to be useful in assessing the test since its item parameters are independent of the ability parameters. The IRT framework reveals item-level information, and indicates appropriate ability ranges for the test. Moreover, the IRC analysis can be used to assess the effectiveness of the test's distractors. Both IRT and IRC approaches reveal test characteristics beyond those revealed by the classical analysis methods of tests. Test developers can apply these methods to diagnose and evaluate the features of items at various ability levels of test takers.
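
    The three-parameter logistic item characteristic curve referred to above has the closed form P(θ) = c + (1 − c) / (1 + exp(−a(θ − b))), with discrimination a, difficulty b and guessing parameter c; a small sketch with illustrative (not TUV-estimated) item parameters follows.

      # 3PL item characteristic curve; the item parameters below are illustrative only.
      import numpy as np

      def p_correct(theta, a, b, c):
          return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

      for theta in np.linspace(-3, 3, 7):          # ability scale
          print(f"theta = {theta:+.1f}  P(correct) = {p_correct(theta, a=1.2, b=0.5, c=0.2):.2f}")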

  9. Circuit analysis method for thin-film solar cell modules

    NASA Technical Reports Server (NTRS)

    Burger, D. R.

    1985-01-01

    The design of a thin-film solar cell module depends on the probability of occurrence of pinhole shunt defects. Using known or assumed defect density data, dichotomous population statistics can be used to calculate the number of defects expected in a module. Probability theory is then used to assign the defective cells to individual strings in a selected series-parallel circuit design. Iterative numerical calculation is used to calculate I-V curves using cell test values or assumed defective-cell values as inputs. Good and shunted cell I-V curves are added to determine the module output power and I-V curve. Different levels of shunt resistance can be selected to model different defect levels.
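
    The dichotomous (binomial) defect statistics mentioned above reduce to a few lines; the cell counts and per-cell defect probability below are illustrative assumptions, not values from the design study.

      # Hedged sketch: expected shunted cells per module and the chance that a series
      # string contains at least one shunted cell, from binomial statistics.
      from math import comb

      cells_per_module, cells_per_string, p_defect = 120, 12, 0.02   # assumed values

      print(f"expected shunted cells per module: {cells_per_module * p_defect:.1f}")
      print(f"P(string has >= 1 shunt): {1 - (1 - p_defect) ** cells_per_string:.3f}")

      # Full binomial distribution of shunted cells within one string.
      for k in range(4):
          pk = comb(cells_per_string, k) * p_defect**k * (1 - p_defect)**(cells_per_string - k)
          print(f"P({k} shunted cells in a string) = {pk:.3f}")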

  10. Bearing fault diagnosis under unknown time-varying rotational speed conditions via multiple time-frequency curve extraction

    NASA Astrophysics Data System (ADS)

    Huang, Huan; Baddour, Natalie; Liang, Ming

    2018-02-01

    In practice, bearings often run under time-varying rotational speed conditions. Under such circumstances, the bearing vibrational signal is non-stationary, which renders ineffective the techniques used for bearing fault diagnosis under constant running conditions. One of the conventional methods of bearing fault diagnosis under time-varying speed conditions is resampling the non-stationary signal to a stationary signal via order tracking with the measured variable speed. With the resampled signal, the methods available for constant condition cases are thus applicable. However, the accuracy of the order tracking is often inadequate and the time-varying speed is sometimes not measurable. Thus, resampling-free methods are of interest for bearing fault diagnosis under time-varying rotational speed for use without tachometers. With the development of time-frequency analysis, the time-varying fault character manifests as curves in the time-frequency domain. By extracting the Instantaneous Fault Characteristic Frequency (IFCF) from the Time-Frequency Representation (TFR) and converting the IFCF, its harmonics, and the Instantaneous Shaft Rotational Frequency (ISRF) into straight lines, the bearing fault can be detected and diagnosed without resampling. However, so far, the extraction of the IFCF for bearing fault diagnosis is mostly based on the assumption that at each moment the IFCF has the highest amplitude in the TFR, which is not always true. Hence, a more reliable T-F curve extraction approach should be investigated. Moreover, if the T-F curves including the IFCF, its harmonics, and the ISRF can all be extracted from the TFR directly, no extra processing is needed for fault diagnosis. Therefore, this paper proposes an algorithm for multiple T-F curve extraction from the TFR based on a fast path optimization, which is more reliable for T-F curve extraction. Then, a new procedure for bearing fault diagnosis under unknown time-varying speed conditions is developed based on the proposed algorithm and a new fault diagnosis strategy. The average curve-to-curve ratios are utilized to describe the relationship between the extracted curves, and fault diagnosis can then be achieved by comparing the ratios to the fault characteristic coefficients. The effectiveness of the proposed method is validated by simulated and experimental signals.

  11. On the behavior of certain ink aging curves.

    PubMed

    Cantú, Antonio A

    2017-09-01

    This work treats writing inks, particularly ballpoint pen inks. It reviews those ink aging methods that are based on the analysis (measurement) of ink solvents (e.g., 2-phenoxyethanol, which is the most common among ballpoint pen inks). Each method involves measurements that are components of an ink aging parameter associated with the method. Only mass-independent parameters are considered. An ink solvent in an ink on an air-exposed substrate evaporates at a decreasing, never constant, rate as the ink ages. An ink aging parameter should reflect this behavior. That is, the graph of a parameter's experimentally determined values plotted against ink age (which yields the ink aging curve) should show this behavior. However, some experimentally determined aging curves contain outlying points that are below or above where they should be, or points corresponding to different ages that have the same ordinate (parameter value). Such curves, unfortunately, are useless, since they show an ink appearing older or younger than it should be at one or more points, or having the same apparent age at two or more points. This work explains that one cause of this unexpected behavior is that the parameter values were improperly determined, such as when a measurement is made of an ink solvent that is not completely extracted (removed) from an ink sample with a chosen extractor such as dry heat or a solvent. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. High performance liquid chromatographic assay for the quantitation of total glutathione in plasma

    NASA Technical Reports Server (NTRS)

    Abukhalaf, Imad K.; Silvestrov, Natalia A.; Menter, Julian M.; von Deutsch, Daniel A.; Bayorh, Mohamed A.; Socci, Robin R.; Ganafa, Agaba A.

    2002-01-01

    A simple and widely used homocysteine HPLC procedure was applied to the HPLC identification and quantitation of glutathione in plasma. The method, which uses SBDF as the derivatizing agent, requires only 50 microl of sample volume. A linear quantitative response curve was generated for glutathione over a concentration range of 0.3125-62.50 micromol/l. Linear regression analysis of the standard curve gave a correlation coefficient of 0.999. The limit of detection (LOD) and limit of quantitation (LOQ) were 5.0 and 15 pmol, respectively. Glutathione recovery using this method was nearly complete (above 96%). Intra-assay and inter-assay precision studies reflected a high level of reliability and reproducibility of the method. The applicability of the method for the quantitation of glutathione was demonstrated successfully using human and rat plasma samples.

  13. Meta-Analyses of Diagnostic Accuracy in Imaging Journals: Analysis of Pooling Techniques and Their Effect on Summary Estimates of Diagnostic Accuracy.

    PubMed

    McGrath, Trevor A; McInnes, Matthew D F; Korevaar, Daniël A; Bossuyt, Patrick M M

    2016-10-01

    Purpose To determine whether authors of systematic reviews of diagnostic accuracy studies published in imaging journals used recommended methods for meta-analysis, and to evaluate the effect of traditional methods on summary estimates of sensitivity and specificity. Materials and Methods Medline was searched for published systematic reviews that included meta-analysis of test accuracy data limited to imaging journals published from January 2005 to May 2015. Two reviewers independently extracted study data and classified methods for meta-analysis as traditional (univariate fixed- or random-effects pooling or summary receiver operating characteristic curve) or recommended (bivariate model or hierarchic summary receiver operating characteristic curve). Use of methods was analyzed for variation with time, geographical location, subspecialty, and journal. Results from reviews in which study authors used traditional univariate pooling methods were recalculated with a bivariate model. Results Three hundred reviews met the inclusion criteria, and in 118 (39%) of those, authors used recommended meta-analysis methods. No change in the method used was observed with time (r = 0.54, P = .09); however, there was geographic (χ(2) = 15.7, P = .001), subspecialty (χ(2) = 46.7, P < .001), and journal (χ(2) = 27.6, P < .001) heterogeneity. Fifty-one univariate random-effects meta-analyses were reanalyzed with the bivariate model; the average change in the summary estimate was -1.4% (P < .001) for sensitivity and -2.5% (P < .001) for specificity. The average change in width of the confidence interval was 7.7% (P < .001) for sensitivity and 9.9% (P ≤ .001) for specificity. Conclusion Recommended methods for meta-analysis of diagnostic accuracy in imaging journals are used in a minority of reviews; this has not changed significantly with time. Traditional (univariate) methods allow overestimation of diagnostic accuracy and provide narrower confidence intervals than do recommended (bivariate) methods. (©) RSNA, 2016 Online supplemental material is available for this article.

  14. Research on the middle-of-receiver-spread assumption of the MASW method

    USGS Publications Warehouse

    Luo, Y.; Xia, J.; Liu, J.; Xu, Y.; Liu, Q.

    2009-01-01

    The multichannel analysis of surface wave (MASW) method has been effectively used to determine near-surface shear- (S-) wave velocity. Estimating the S-wave velocity profile from Rayleigh-wave measurements is straightforward. A three-step process is required to obtain S-wave velocity profiles: acquisition of a multiple number of multichannel records along a linear survey line by use of the roll-along mode, extraction of dispersion curves of Rayleigh waves, and inversion of dispersion curves for an S-wave velocity profile for each shot gather. A pseudo-2D S-wave velocity section can be generated by aligning 1D S-wave velocity models. In this process, it is very important to understand where the inverted 1D S-wave velocity profile should be located: the midpoint of each spread (a middle-of-receiver-spread assumption) or somewhere between the source and the last receiver. In other words, the extracted dispersion curve is determined by the geophysical structure within the geophone spread or strongly affected by the source geophysical structure. In this paper, dispersion curves of synthetic datasets and a real-world example are calculated by fixing the receiver spread and changing the source location. Results demonstrate that the dispersion curves are mainly determined by structures within a receiver spread. ?? 2008 Elsevier Ltd. All rights reserved.

  15. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
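
    A minimal sketch of the decay-rate step of IRDM is given below on a synthetic band-limited impulse response; the conversion from decay rate DR (dB/s) to loss factor uses the standard reverberation-time relation η ≈ DR / (27.3·f), and the signal, band frequency and fitting window are assumptions, not the stiffened-panel data or the paper's automated procedure.

      # Hedged sketch: estimate a loss factor from the dB-envelope decay of a
      # synthetic band-limited impulse response (impulse response decay method).
      import numpy as np

      fs, f_band, eta_true = 8192, 500.0, 0.02
      t = np.arange(0, 1.0, 1 / fs)
      h = np.exp(-np.pi * f_band * eta_true * t) * np.sin(2 * np.pi * f_band * t)

      env_db = 20 * np.log10(np.abs(h) + 1e-12)
      # Fit the early decay using the local maxima of |h| (one point per half-cycle).
      peaks = [i for i in range(1, len(h) - 1)
               if abs(h[i]) > abs(h[i - 1]) and abs(h[i]) >= abs(h[i + 1]) and t[i] < 0.2]
      slope, _ = np.polyfit(t[peaks], env_db[peaks], 1)        # decay rate in dB/s (negative)

      eta_est = -slope / (27.3 * f_band)
      print(f"estimated loss factor: {eta_est:.4f} (true value used: {eta_true})")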

  16. Open-Mode Debonding Analysis of Curved Sandwich Panels Subjected to Heating and Cryogenic Cooling on Opposite Faces

    NASA Technical Reports Server (NTRS)

    Ko, William L.

    1999-01-01

    Increasing use of curved sandwich panels as aerospace structure components makes it vital to fully understand their thermostructural behavior and identify key factors affecting the open-mode debonding failure. Open-mode debonding analysis is performed on a family of curved honeycomb-core sandwich panels with different radii of curvature. The curved sandwich panels are either simply supported or clamped, and are subjected to uniform heating on the convex side and uniform cryogenic cooling on the concave side. The finite-element method was used to study the effects of panel curvature and boundary condition on the open-mode stress (radial tensile stress) and displacement fields in the curved sandwich panels. The critical stress point, where potential debonding failure could initiate, was found to be at the midspan (or outer span) of the inner bonding interface between the sandwich core and face sheet on the concave side, depending on the boundary condition and panel curvature. Open-mode stress increases with increasing panel curvature, reaching a maximum value at certain high curvature, and then decreases slightly as the panel curvature continues to increase and approach that of quarter circle. Changing the boundary condition from simply supported to clamped reduces the magnitudes of open-mode stresses and the associated sandwich core depth stretching.

  17. Hysteroscopic sterilization using a virtual reality simulator: assessment of learning curve.

    PubMed

    Janse, Juliënne A; Goedegebuure, Ruben S A; Veersema, Sebastiaan; Broekmans, Frank J M; Schreuder, Henk W R

    2013-01-01

    To assess the learning curve using a virtual reality simulator for hysteroscopic sterilization with the Essure method. Prospective multicenter study (Canadian Task Force classification II-2). University and teaching hospital in the Netherlands. Thirty novices (medical students) and five experts (gynecologists who had performed >150 Essure sterilization procedures). All participants performed nine repetitions of bilateral Essure placement on the simulator. Novices returned after 2 weeks and performed a second series of five repetitions to assess retention of skills. Structured observations on performance using the Global Rating Scale and parameters derived from the simulator provided measurements for analysis. The learning curve is represented by improvement per procedure. Two-way repeated-measures analysis of variance was used to analyze learning curves. Effect size (ES) was calculated to express the practical significance of the results (ES ≥ 0.50 indicates a large learning effect). For all parameters, significant improvements were found in novice performance within nine repetitions. Large learning effects were established for six of eight parameters (p < .001; ES, 0.50-0.96). Novices approached expert level within 9 to 14 repetitions. The learning curve established in this study endorses future implementation of the simulator in curricula on hysteroscopic skill acquisition for clinicians who are interested in learning this sterilization technique. Copyright © 2013 AAGL. Published by Elsevier Inc. All rights reserved.

  18. Can anthropometry measure gender discrimination? An analysis using WHO standards to assess the growth of Bangladeshi children.

    PubMed

    Moestue, Helen

    2009-08-01

    To examine the potential of anthropometry as a tool to measure gender discrimination, with particular attention to the WHO growth standards. Surveillance data collected from 1990 to 1999 were analysed. Height-for-age Z-scores were calculated using three norms: the WHO standards, the 1978 National Center for Health Statistics (NCHS) reference and the 1990 British growth reference (UK90). Bangladesh. Boys and girls aged 6-59 months (n 504 358). The three sets of growth curves provided conflicting pictures of the relative growth of girls and boys by age and over time. Conclusions on sex differences in growth depended also on the method used to analyse the curves, be it according to the shape or the relative position of the sex-specific curves. The shapes of the WHO-generated curves uniquely implied that Bangladeshi girls faltered faster or caught up slower than boys throughout their pre-school years, a finding consistent with the literature. In contrast, analysis of the relative position of the curves suggested that girls had higher WHO Z-scores than boys below 24 months of age. Further research is needed to help establish whether and how the WHO international standards can measure gender discrimination in practice, which continues to be a serious problem in many parts of the world.

  19. Estimating Transmissivity from the Water Level Fluctuations of a Sinusoidally Forced Well

    USGS Publications Warehouse

    Mehnert, E.; Valocchi, A.J.; Heidari, M.; Kapoor, S.G.; Kumar, P.

    1999-01-01

    The water levels in wells are known to fluctuate in response to earth tides and changes in atmospheric pressure. These water level fluctuations can be analyzed to estimate transmissivity (T). A new method to estimate transmissivity, which assumes that the atmospheric pressure varies in a sinusoidal fashion, is presented. Data analysis for this simplified method involves using a set of type curves and estimating the ratio of the amplitudes of the well response over the atmospheric pressure. Type curves for this new method were generated based on a model for ground water flow between the well and aquifer developed by Cooper et al. (1965). Data analysis with this method confirmed these published results: (1) the amplitude ratio is a function of transmissivity, the well radius, and the frequency of the sinusoidal oscillation; and (2) the amplitude ratio is a weak function of storativity. Compared to other methods, the developed method involves simpler, more intuitive data analysis and allows shorter data sets to be analyzed. The effect of noise on estimating the amplitude ratio was evaluated and found to be more significant at lower T. For aquifers with low T, noise was shown to mask the water level fluctuations induced by atmospheric pressure changes. In addition, reducing the length of the data series did not affect the estimate of T, but the variance of the estimate was higher for the shorter series of noisy data.

  20. Estimation of the water retention curve from the soil hydraulic conductivity and sorptivity in an upward infiltration process

    NASA Astrophysics Data System (ADS)

    Moret-Fernández, David; Angulo, Marta; Latorre, Borja; González-Cebollada, César; López, María Victoria

    2017-04-01

    Determination of the saturated hydraulic conductivity, Ks, and the α and n parameters of the van Genuchten (1980) water retention curve, θ(h), is fundamental to fully understand and predict soil water distribution. This work presents a new procedure to estimate the soil hydraulic properties from the inverse analysis of a single cumulative upward infiltration curve followed by an overpressure step at the end of the wetting process. Firstly, Ks is calculated from Darcy's law using the overpressure step. The soil sorptivity (S) is then estimated using the Haverkamp et al. (1994) equation. Next, a relationship between α and n, f(α,n), is calculated from the estimated S and Ks. The α and n values are finally obtained by the inverse analysis of the experimental data after applying the f(α,n) relationship to the HYDRUS-1D model. The method was validated on theoretical synthetic curves for three different soils (sand, loam and clay), and subsequently tested on experimental sieved soils (sand, loam, clay loam and clay) of known hydraulic properties. A robust relationship was observed between the theoretical α and n values (R2 > 0.99) of the different synthetic soils and those estimated from inverse analysis of the upward infiltration curve. Consistent results were also obtained for the experimental soils (R2 > 0.85). These results demonstrated that this technique allowed accurate estimates of the soil hydraulic properties for a wide range of textures, including clay soils.
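
    For reference, the van Genuchten (1980) retention function whose α and n parameters are being estimated can be written in a few lines; the parameter values in the example below are illustrative only and are not taken from the study.

```python
# Minimal sketch of the van Genuchten (1980) retention curve theta(h),
# the function whose alpha and n parameters are estimated by the inversion.
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content as a function of pressure head h (h < 0 when unsaturated)."""
    m = 1.0 - 1.0 / n
    h = np.asarray(h, dtype=float)
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    theta = theta_r + (theta_s - theta_r) * se
    return np.where(h >= 0.0, theta_s, theta)       # saturated at and above h = 0

# Example with loam-like parameters (illustrative values only)
print(van_genuchten_theta([-10.0, -100.0, -1000.0], 0.078, 0.43, 0.036, 1.56))
```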

  1. Analysis of tipping-curve measurements performed at the DSS-13 beam-waveguide antenna at 32.0 and 8.45 GigaHertz

    NASA Technical Reports Server (NTRS)

    Morabito, D. D.; Skjerve, L.

    1995-01-01

    This article reports on the analysis of the Ka-band Antenna Performance Experiment tipping-curve data acquired at the DSS-13 research and development beam-waveguide (BWG) antenna. By measuring the operating system temperatures as the antenna is moved from zenith to low-elevation angles and fitting a model to the data, one can obtain information on how well the overall temperature model behaves at zenith and approximate the contribution due to the atmosphere. The atmospheric contribution estimated from the data can be expressed in the form of (1) atmospheric noise temperatures that can provide weather statistics and be compared against those estimated from other methods and (2) the atmospheric loss factor used to refer efficiency measurements to zero atmosphere. This article reports on an analysis performed on a set of 68 8.4-GHz and 67 32-GHz tipping-curve data sets acquired between December 1993 and May 1995 and compares the results with those inferred from a surface model using input meteorological data and from water vapor radiometer (WVR) data. The general results are that, for a selected subset of tip curves, (1) the BWG tipping-curve atmospheric temperatures are in good agreement with those determined from WVR data (the average difference is 0.06 +/- 0.64 K at 32 GHz) and (2) the surface model average values are biased 3.6 K below those of the BWG and WVR at 32 GHz.
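
    A heavily simplified version of a tipping-curve reduction can be sketched as a regression of operating temperature against airmass A = 1/sin(elevation); the intercept collects the elevation-independent terms and the slope approximates the zenith atmospheric contribution. This thin-atmosphere linearisation and the numbers below are illustrative assumptions, not the article's full temperature model.

```python
# Illustrative tipping-curve fit (not the article's full temperature model):
# regress operating system temperature against airmass A = 1/sin(elevation)
# to separate an elevation-independent term from the atmospheric contribution.
import numpy as np
from scipy.optimize import curve_fit

def tip_model(airmass, t_fixed, t_atm_zenith):
    # Thin-atmosphere approximation: Top ~ Tfixed + Tatm_zenith * A
    return t_fixed + t_atm_zenith * airmass

elev_deg = np.array([90.0, 60.0, 40.0, 30.0, 20.0, 15.0, 10.0])   # hypothetical tip scan
top_K = np.array([30.1, 31.0, 32.4, 33.6, 36.2, 38.6, 43.0])      # measured Tsys in kelvin
airmass = 1.0 / np.sin(np.radians(elev_deg))

(t_fixed, t_atm), _ = curve_fit(tip_model, airmass, top_K)
print(f"zenith atmospheric contribution ~ {t_atm:.2f} K, fixed term ~ {t_fixed:.2f} K")
```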

  2. A Curve Fitting Approach Using ANN for Converting CT Number to Linear Attenuation Coefficient for CT-based PET Attenuation Correction

    NASA Astrophysics Data System (ADS)

    Lai, Chia-Lin; Lee, Jhih-Shian; Chen, Jyh-Cheng

    2015-02-01

    Energy-mapping, the conversion of linear attenuation coefficients (μ) calculated at the effective computed tomography (CT) energy to those corresponding to 511 keV, is an important step in CT-based attenuation correction (CTAC) for positron emission tomography (PET) quantification. The aim of this study was to implement the energy-mapping step by using the curve-fitting ability of an artificial neural network (ANN). Eleven digital phantoms simulated by Geant4 application for tomographic emission (GATE) and 12 physical phantoms composed of various volume concentrations of iodine contrast were used in this study to generate energy-mapping curves by acquiring average CT values and linear attenuation coefficients at 511 keV of these phantoms. The curves were built with the ANN toolbox in MATLAB. To evaluate the effectiveness of the proposed method, another two digital phantoms (liver and spine-bone) and three physical phantoms (volume concentrations of 3%, 10% and 20%) were used to compare the energy-mapping curves built by ANN and bilinear transformation, and a semi-quantitative analysis was performed by injecting 0.5 mCi of FDG into an SD rat for micro-PET scanning. The results showed that the percentage relative difference (PRD) values of the digital liver and spine-bone phantoms were 5.46% and 1.28% based on ANN, and 19.21% and 1.87% based on bilinear transformation. For the 3%, 10% and 20% physical phantoms, the PRD values of the ANN curve were 0.91%, 0.70% and 3.70%, and the PRD values of the bilinear transformation were 3.80%, 1.44% and 4.30%, respectively. Both digital and physical phantoms indicated that the ANN curve can achieve better performance than the bilinear transformation. The semi-quantitative analysis of rat PET images showed that the ANN curve can reduce the inaccuracy caused by the attenuation effect from 13.75% to 4.43% in brain tissue, and from 23.26% to 9.41% in heart tissue. On the other hand, the inaccuracy remained at 6.47% and 11.51% in brain and heart tissue when the bilinear transformation was used. Overall, it can be concluded that the bilinear transformation method resulted in considerable bias and the newly proposed calibration curve built by ANN could achieve better results with acceptable accuracy.
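
    The energy-mapping idea, fitting a flexible curve from CT numbers to 511 keV attenuation coefficients through phantom calibration points, can be sketched with any small regression network; the scikit-learn model and the calibration values below are stand-ins for the MATLAB ANN toolbox workflow described in the paper, not its actual data.

```python
# Hedged sketch of an ANN-based energy-mapping curve: fit CT numbers (HU)
# measured on calibration phantoms to linear attenuation coefficients at
# 511 keV. The calibration points below are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

ct_hu = np.array([-1000.0, -500.0, 0.0, 200.0, 500.0, 1000.0, 1500.0]).reshape(-1, 1)
mu_511 = np.array([0.000, 0.048, 0.096, 0.101, 0.110, 0.122, 0.135])   # cm^-1, illustrative

ann = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
ann.fit(ct_hu, mu_511)

# Apply the learned curve to new CT values (e.g. soft tissue and bone)
print(ann.predict(np.array([[40.0], [800.0]])))
```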

  3. Investigation on the solidification course of Al-Si alloys by using a numerical Newtonian thermal analysis method

    NASA Astrophysics Data System (ADS)

    Tang, Peng; Hu, Zhiliu; Zhao, Yanjun; Huang, Qingbao

    2017-12-01

    A numerical Newtonian thermal analysis (NTA) method was used for online monitoring of the solidification course of commercial Al-Si alloys. The solidification paths of the different molten Al-Si alloys were characterized by their fraction solid curves. The variation of the heat capacities of Al and Si was taken into account when determining the baseline used to evaluate the latent heat. In this experiment, pure Al and Al-1Si, Al-5Si, Al-9Si, Al-13Si and Al-18Si alloys were melted at 800 °C and cooled at room temperature. The cooling curves of these alloys were measured using K-type thermocouples. The liquidus temperatures of these alloys decreased with increasing Si content. A distinct arrest occurred at about 580 °C, closely related to the Al-Si eutectic reaction. The phase fractions of these alloys were supported by microstructure observations.
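
    A minimal sketch of the Newtonian thermal analysis idea: fit a Newtonian baseline (cooling rate linear in temperature) to the portions of the cooling curve outside the solidification interval, and integrate the excess cooling rate to obtain a cumulative fraction solid curve. The function below is a simplified illustration under these assumptions, not the authors' implementation.

```python
# Simplified Newtonian thermal analysis sketch: the fraction solid is taken
# as the cumulative area between the measured cooling rate dT/dt and a
# Newtonian "zero curve" fitted outside the solidification interval.
import numpy as np

def fraction_solid(t, T, t_start, t_end):
    """t, T: cooling-curve samples; t_start/t_end bound the solidification range."""
    dTdt = np.gradient(T, t)
    outside = (t < t_start) | (t > t_end)
    # Newtonian baseline: dT/dt proportional to temperature, fitted outside the arrest
    coeffs = np.polyfit(T[outside], dTdt[outside], 1)
    baseline = np.polyval(coeffs, T)
    excess = np.clip(dTdt - baseline, 0.0, None)      # latent-heat release only
    inside = (t >= t_start) & (t <= t_end)
    fs = np.zeros_like(t, dtype=float)
    cum = np.cumsum(excess[inside] * np.gradient(t[inside]))
    fs[inside] = cum / cum[-1]                         # normalise to 1 at t_end
    fs[t > t_end] = 1.0
    return fs
```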

  4. Review of Hull Structural Monitoring Systems for Navy Ships

    DTIC Science & Technology

    2013-05-01

    generally based on the same basic form of S-N curve, different correction methods are used by the various classification societies. ii. Methods for...Likewise there are a number of different methods employed for temperature compensation and these vary depending on the type of gauge, although typically...Analysis, Inc.[30] Figure 8. Examples of different methods of temperature compensation of fibre-optic strain sensors. It is noted in NATO

  5. Ambiguities and completeness of SAS data analysis: investigations of apoferritin by SAXS/SANS EID and SEC-SAXS methods

    NASA Astrophysics Data System (ADS)

    Zabelskii, D. V.; Vlasov, A. V.; Ryzhykau, Yu L.; Murugova, T. N.; Brennich, M.; Soloviov, D. V.; Ivankov, O. I.; Borshchevskiy, V. I.; Mishin, A. V.; Rogachev, A. V.; Round, A.; Dencher, N. A.; Büldt, G.; Gordeliy, V. I.; Kuklin, A. I.

    2018-03-01

    The method of small angle scattering (SAS) is widely used in biophysical research on proteins in aqueous solution. Obtaining low-resolution structures of proteins remains highly valuable despite advances in high-resolution methods such as X-ray diffraction and cryo-EM. SAS offers the unique possibility of obtaining structural information under conditions close to those of functional assays, i.e. in solution, without additives, in the mg/mL concentration range. The SAS method has a long history, but there are still many uncertainties related to data treatment. We compared 1D SAS profiles of apoferritin obtained by X-ray diffraction (XRD) and SAS methods. It is shown that SAS curves computed from the X-ray crystallographic structure of apoferritin differ more significantly than might be expected from the resolution of the SAS instrument. The extrapolation to infinite dilution (EID) method does not sufficiently exclude dimerization and oligomerization effects and therefore cannot guarantee the total absence of dimer contributions in the final SAS curve. In this study, we show that the EID SAXS, EID SANS and SEC-SAXS methods give complementary results, and that when they are used together they allow the most accurate results and the highest confidence to be obtained from SAS data analysis of proteins.

  6. First reference curves of waist and hip circumferences in an Asian population of youths: CASPIAN study.

    PubMed

    Kelishadi, Roya; Gouya, Mohammad Mehdi; Ardalan, Gelayol; Hosseini, Mohsen; Motaghian, Molouk; Delavari, Alireza; Majdzadeh, Reza; Heidarzadeh, Abtin; Mahmoud-Arabi, Minou Sadat; Riazi, Mohammad Mehdi

    2007-06-01

    The objective of the present study is to develop the first age- and gender-specific reference curves for waist and hip circumferences in an Asian population of youths. This cross-sectional population survey was conducted in 2003-04 on a nationally representative sample of 21111 school students living in urban (84.6%) and rural (15.4%) areas of 23 provinces in Iran. After anthropometric measurements, smoothed reference curves for waist and hip circumference (WC, HiC) and waist-to-hip ratio (WHR) were developed by the LMS method. In both genders, WC and HiC percentile values increased with age. For girls, the 50th and 95th percentile curves for WC had a sharp increase between 8 and 13 years and 11-15 years, respectively, and began to plateau after this age, whereas for boys, these curves had a persistent and less sharp increase with age, until the age of 18 years. The WHR curves of girls decreased with age until 15 years and began to plateau thereafter, whereas for boys the 25th to 95th curves had a plateau pattern. Comparison of the current reference curves with the British ones showed that in boys, the 5th and 50th percentile curves were similar in both studies, but the 95th percentile curve of our study was higher than the British curves. For girls, the 5th percentile curves of both studies were similar, but the 50th and 95th percentile curves of our study were higher than the British ones. These curves represent the first childhood WC, HiC and WHR reference curves obtained in Asia. These curves can provide baseline data for analysis of time trends, as well as for international comparisons.
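
    For readers unfamiliar with the LMS method, a smoothed centile curve is evaluated from the age-specific L (Box-Cox power), M (median) and S (coefficient of variation) values using Cole's formula; the sketch below shows that evaluation with hypothetical parameter values, not values from this survey.

```python
# Evaluating an LMS-smoothed reference: Cole's formula gives the measurement
# at a chosen percentile from the age-specific L, M and S values, and the
# inverse gives a z-score for an observed measurement. Values are illustrative.
from scipy.stats import norm

def lms_centile(L, M, S, percentile):
    """Measurement at the given percentile (L assumed non-zero here)."""
    z = norm.ppf(percentile / 100.0)
    return M * (1.0 + L * S * z) ** (1.0 / L)

def lms_zscore(x, L, M, S):
    """Z-score of measurement x against the LMS reference (L non-zero)."""
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical LMS values for waist circumference at one age
print(lms_centile(-0.2, 62.0, 0.11, 95))    # e.g. the 95th centile in cm
print(lms_zscore(70.0, -0.2, 62.0, 0.11))   # z-score of a 70 cm measurement
```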

  7. Manufacturing complexity analysis

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1977-01-01

    The analysis of the complexity of a typical system is presented. Starting with the subsystems of an example system, the step-by-step procedure for analysis of the complexity of an overall system is given. The learning curves for the various subsystems are determined as well as the concurrent numbers of relevant design parameters. Then trend curves are plotted for the learning curve slopes versus the various design-oriented parameters, e.g. number of parts versus slope of learning curve, or number of fasteners versus slope of learning curve, etc. Representative cuts are taken from each trend curve, and a figure-of-merit analysis is made for each of the subsystems. Based on these values, a characteristic curve is plotted which is indicative of the complexity of the particular subsystem. Each such characteristic curve is based on a universe of trend curve data taken from data points observed for the subsystem in question. Thus, a characteristic curve is developed for each of the subsystems in the overall system.
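
    The learning-curve slopes referred to above come from the classic log-linear model, unit effort = a·x^b for cumulative unit x; the short sketch below extracts the slope (conventionally quoted as the percentage cost ratio at each doubling of output) from hypothetical unit data.

```python
# Sketch of fitting a log-linear learning curve y = a * x**b to unit data,
# then expressing the learning-curve "slope" as the conventional percentage
# (cost ratio each time cumulative output doubles). Data are hypothetical.
import numpy as np

units = np.array([1, 2, 4, 8, 16, 32])                      # cumulative unit number
hours = np.array([1000., 810., 655., 530., 430., 350.])     # labour hours per unit

b, log_a = np.polyfit(np.log(units), np.log(hours), 1)
slope_percent = 100.0 * 2.0 ** b                             # e.g. ~81% curve
print(f"a = {np.exp(log_a):.0f} h, b = {b:.3f}, learning-curve slope = {slope_percent:.1f}%")
```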

  8. Applications of multi-frequency single beam sonar fisheries analysis methods for seep quantification and characterization

    NASA Astrophysics Data System (ADS)

    Price, V.; Weber, T.; Jerram, K.; Doucet, M.

    2016-12-01

    The analysis of multi-frequency, narrow-band single-beam acoustic data for fisheries applications has long been established, with methodology focusing on characterizing targets in the water column by utilizing complex algorithms and false-color time series data to create and compare frequency response curves for dissimilar biological groups. These methods were built on concepts developed for multi-frequency analysis of satellite imagery for terrestrial analysis and have been applied to a broad range of data types and applications. Single-beam systems operating at multiple frequencies are also used for the detection and identification of seeps in water column data. Here we incorporate the same analysis and visualization techniques used for fisheries applications to attempt to characterize and quantify seeps by creating and comparing frequency response curves and applying false coloration to shallow and deep multi-channel seep data. From this information, we can establish methods to differentiate bubble size in the echogram and differentiate seep composition. These techniques are also useful in differentiating plume content from biological noise (volume reverberation) created by euphausid layers and fish with gas-filled swim bladders. The combining of the multiple frequencies using false coloring and other image analysis techniques after applying established normalization and beam pattern correction algorithms is a novel approach to quantitatively describing seeps. Further, this information could be paired with geological models, backscatter, and bathymetry data to assess seep distribution.

  9. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    PubMed

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.
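
    The monotone spline estimator itself is not reproduced here, but the properties it targets can be illustrated with a simpler smooth ROC built from kernel-smoothed class-conditional score distributions, which is monotone and passes through (0,0) and (1,1) by construction; the synthetic scores below are assumptions for illustration.

```python
# Not the paper's monotone spline estimator, but a simple smooth ROC that is
# monotone and respects the (0,0)-(1,1) boundary constraints by construction:
# build it from kernel-smoothed class-conditional score distributions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
controls = rng.normal(0.0, 1.0, 200)        # synthetic non-diseased test results
cases = rng.normal(1.0, 1.2, 200)           # synthetic diseased test results

kde0, kde1 = gaussian_kde(controls), gaussian_kde(cases)
thresholds = np.linspace(-5, 7, 400)

def survival(kde, c_grid):
    """P[score > c] by numerical integration of the kernel density estimate."""
    pdf = kde(c_grid)
    cdf = np.cumsum(pdf) * (c_grid[1] - c_grid[0])
    return 1.0 - cdf / cdf[-1]

fpr = survival(kde0, thresholds)             # both are non-increasing in the threshold,
tpr = survival(kde1, thresholds)             # so the ROC curve (fpr, tpr) is monotone
auc = np.trapz(tpr[::-1], fpr[::-1])         # integrate with fpr increasing
print(f"smooth ROC AUC = {auc:.3f}")
```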

  10. Foot-ankle complex injury risk curves using calcaneus bone mineral density data.

    PubMed

    Yoganandan, Narayan; Chirvi, Sajal; Voo, Liming; DeVogel, Nicholas; Pintar, Frank A; Banerjee, Anjishnu

    2017-08-01

    Biomechanical data from post mortem human subject (PMHS) experiments are used to derive human injury probability curves and develop injury criteria. This process has been used in previous and current automotive crashworthiness studies, Federal safety standards, and dummy design and development. Human bone strength decreases as individuals reach old age. Injury risk curves using the primary predictor variable (e.g., force) should therefore account for such strength reduction when the test data are collected from PMHS specimens of different ages (age at the time of death). This demographic variable is meant to be a surrogate for fracture, often representing bone strength as other parameters have not been routinely gathered in previous experiments. However, bone mineral densities (BMD) can be gathered from tested specimens (presented in this manuscript). The objective of this study is to investigate different approaches of accounting for BMD in the development of human injury risk curves. Using simulated underbody blast (UBB) loading experiments conducted with the PMHS lower leg-foot-ankle complexes, a comparison is made between the two methods: treating BMD as a covariate and pre-scaling test data based on BMD. Twelve PMHS lower leg-foot-ankle specimens were subjected to UBB loads. Calcaneus BMD was obtained from quantitative computed tomography (QCT) images. Fracture forces were recorded using a load cell. They were treated as uncensored data in the survival analysis model which used the Weibull distribution in both methods. The width of the normalized confidence interval (NCIS) was obtained using the mean and ± 95% confidence limit curves. The mean peak forces of 3.9 kN and 8.6 kN were associated with the 5% and 50% probability of injury for the covariate method of deriving the risk curve for the reference age of 45 years. The mean forces of 5.4 kN and 9.2 kN were associated with the 5% and 50% probability of injury for the pre-scaled method. The NCIS magnitudes were greater in the covariate-based risk curves (0.52-1.00) than in the risk curves based on the pre-scaled method (0.24-0.66). The pre-scaling method resulted in a generally greater injury force and a tighter injury risk curve confidence interval. Although not directly applicable to the foot-ankle fractures, when compared with the use of spine BMD from QCT scans to pre-scale the force, the calcaneus BMD scaled data produced greater force at the same risk level in general. Pre-scaling the force data using BMD is an alternative, and likely more accurate, method than using a covariate to account for the age-related bone strength change in deriving risk curves from biomechanical experiments using PMHS. Because of the proximity of the calcaneus bone to the impacting load, it is suggested to use and determine the BMD of the foot-ankle bone in future UBB and other loading conditions to derive human injury probability curves for the foot-ankle complex. Copyright © 2017. Published by Elsevier Ltd.
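
    A stripped-down version of the uncensored Weibull step, ignoring the covariate and BMD pre-scaling machinery, fits a two-parameter Weibull to peak fracture forces and inverts it at 5% and 50% risk; the forces below are hypothetical placeholders, not the study data.

```python
# Sketch of the uncensored case only: fit a two-parameter Weibull to peak
# fracture forces and invert it for the forces at 5% and 50% injury risk.
# (The paper's covariate / BMD pre-scaling steps are not reproduced here.)
import numpy as np
from scipy.stats import weibull_min

forces_kN = np.array([5.1, 6.3, 7.2, 7.9, 8.4, 8.8, 9.5, 10.1, 10.9, 11.6, 12.4, 13.8])  # hypothetical

shape, loc, scale = weibull_min.fit(forces_kN, floc=0.0)   # fix the location at zero
for p in (0.05, 0.50):
    force = weibull_min.ppf(p, shape, loc=0.0, scale=scale)
    print(f"{p:.0%} injury risk at ~ {force:.1f} kN")
```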

  11. Extracting information from S-curves of language change

    PubMed Central

    Ghanbarnejad, Fakhteh; Gerlach, Martin; Miotto, José M.; Altmann, Eduardo G.

    2014-01-01

    It is well accepted that the adoption of innovations is described by S-curves (slow start, accelerating period and slow end). In this paper, we analyse how much information on the dynamics of innovation spreading can be obtained from a quantitative description of S-curves. We focus on the adoption of linguistic innovations for which detailed databases of written texts from the last 200 years allow for an unprecedented statistical precision. Combining data analysis with simulations of simple models (e.g. the Bass dynamics on complex networks), we identify signatures of endogenous and exogenous factors in the S-curves of adoption. We propose a measure to quantify the strength of these factors and three different methods to estimate it from S-curves. We obtain cases in which the exogenous factors are dominant (in the adoption of German orthographic reforms and of one irregular verb) and cases in which endogenous factors are dominant (in the adoption of conventions for romanization of Russian names and in the regularization of most studied verbs). These results show that the shape of the S-curve is not universal and contains information on the adoption mechanism. PMID:25339692
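
    A quantitative description of an S-curve usually starts from a logistic fit; the sketch below estimates the growth rate and midpoint of a made-up adoption series, one of the basic ingredients (though not the full endogenous/exogenous decomposition) used in the paper.

```python
# Minimal sketch: fit a logistic S-curve rho(t) = 1 / (1 + exp(-k (t - t0)))
# to yearly adoption fractions; k and t0 summarise the speed and midpoint of
# the change. The yearly fractions below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def s_curve(t, k, t0):
    return 1.0 / (1.0 + np.exp(-k * (t - t0)))

years = np.arange(1880, 1980, 10)
adoption = np.array([0.02, 0.05, 0.09, 0.20, 0.42, 0.60, 0.80, 0.91, 0.96, 0.98])

(k, t0), _ = curve_fit(s_curve, years, adoption, p0=(0.1, 1920))
print(f"growth rate k = {k:.3f} per year, midpoint t0 = {t0:.0f}")
```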

  12. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.

  13. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015

  14. A direct potential fitting RKR method: Semiclassical vs. quantal comparisons

    NASA Astrophysics Data System (ADS)

    Tellinghuisen, Joel

    2016-12-01

    Quantal and semiclassical (SC) eigenvalues are compared for three diatomic molecular potential curves: the X state of CO, the X state of Rb2, and the A state of I2. The comparisons show higher levels of agreement than generally recognized, when the SC calculations incorporate a quantum defect correction to the vibrational quantum number, in keeping with the Kaiser modification. One particular aspect of this is better agreement between quantal and SC estimates of the zero-point vibrational energy, supporting the need for the Y00 correction in this context. The pursuit of a direct-potential-fitting (DPF) RKR method is motivated by the notion that some of the limitations of RKR potentials may be innate, from their generation by an exact inversion of approximate quantities: the vibrational energy Gυ and rotational constant Bυ from least-squares analysis of spectroscopic data. In contrast, the DPF RKR method resembles the quantal DPF methods now increasingly used to analyze diatomic spectral data, but with the eigenvalues obtained from SC phase integrals. Application of this method to the analysis of 9500 assigned lines in the I2 A ← X spectrum fails to alter the quantal-SC disparities found for the A-state RKR curve from a previous analysis. On the other hand, the SC method can be much faster than the quantal method in exploratory work with different potential functions, where it is convenient to use finite-difference methods to evaluate the partial derivatives required in nonlinear fitting.

  15. Electrochemical Skin Conductance May Be Used to Screen for Diabetic Cardiac Autonomic Neuropathy in a Chinese Population with Diabetes

    PubMed Central

    He, Tianyi; Wang, Chuan; Zuo, Anju; Liu, Pan; Li, Wenjuan

    2017-01-01

    Aims. This study aimed to assess whether the electrochemical skin conductance (ESC) could be used to screen for diabetic cardiac autonomic neuropathy (DCAN) in a Chinese population with diabetes. Methods. We recruited 75 patients with type 2 diabetes mellitus (T2DM) and 45 controls without diabetes. DCAN was diagnosed by the cardiovascular autonomic reflex tests (CARTs) as gold standard. In all subjects ESCs of hands and feet were also detected by SUDOSCAN™ as a new screening method. The efficacy was assessed by receiver operating characteristic (ROC) curve analysis. Results. The ESCs of both hands and feet were significantly lower in T2DM patients with DCAN than those without DCAN (67.33 ± 15.37 versus 78.03 ± 13.73, P = 0.002, and 57.77 ± 20.99 versus 75.03 ± 11.41, P < 0.001). The ROC curve analysis showed the areas under the ROC curve were both 0.75 for ESCs of hands and feet in screening DCAN. And the optimal cut-off values of ESCs, sensitivities, and specificities were 76 μS, 76.7%, and 75.6% for hands and 75 μS, 80.0%, and 60.0% for feet, respectively. Conclusions. ESC measurement is a reliable and feasible method to screen DCAN in the Chinese population with diabetes before further diagnosis with CARTs. PMID:28280746

  16. LMS tables for waist circumference and waist–height ratio in Colombian adults: analysis of nationwide data 2010

    PubMed Central

    Ramírez-Vélez, R; Correa-Bautista, J E; Martínez-Torres, J; Méneses-Echavez, J F; González-Ruiz, K; González-Jiménez, E; Schmidt-RioValle, J; Lobelo, F

    2016-01-01

    Background/Objectives: Indices predictive of central obesity include waist circumference (WC) and waist-to-height ratio (WHtR). These data are lacking for Colombian adults. This study aims at establishing smoothed centile charts and LMS tables for WC and WHtR; appropriate cutoffs were selected using receiver-operating characteristic analysis based on data from the representative sample. Subjects/Methods: We used data from the cross-sectional, national representative nutrition survey (ENSIN, 2010). A total of 83 220 participants (aged 20–64) were enrolled. Weight, height, body mass index (BMI), WC and WHtR were measured and percentiles calculated using the LMS method (L: Box-Cox power curve; M: median curve; S: coefficient of variation curve). Receiver operating characteristic curve analyses were used to evaluate the optimal cutoff point of WC and WHtR for overweight and obesity based on WHO definitions. Results: Reference values for WC and WHtR are presented. Mean WC and WHtR increased with age for both genders. We found a strong positive correlation between WC and BMI (r=0.847, P<0.01) and WHtR and BMI (r=0.878, P<0.01). In obese men, the cutoff point value is 96.6 cm for the WC. In women, the cutoff point value is 91.0 cm for the WC. The receiver operating characteristic curve for WHtR was also obtained; the cutoff point value was 0.579 in men and 0.587 in women. A high sensitivity and specificity were obtained. Conclusions: This study presents the first reference values of WC and WHtR for Colombians aged 20–64. Through LMS tables for adults, we hope to provide quantitative tools to study obesity and its complications. PMID:27026425

  17. A global goodness-of-fit test for receiver operating characteristic curve analysis via the bootstrap method.

    PubMed

    Zou, Kelly H; Resnic, Frederic S; Talos, Ion-Florin; Goldberg-Zimring, Daniel; Bhagwat, Jui G; Haker, Steven J; Kikinis, Ron; Jolesz, Ferenc A; Ohno-Machado, Lucila

    2005-10-01

    Medical classification accuracy studies often yield continuous data based on predictive models for treatment outcomes. A popular method for evaluating the performance of diagnostic tests is the receiver operating characteristic (ROC) curve analysis. The main objective was to develop a global statistical hypothesis test for assessing the goodness-of-fit (GOF) for parametric ROC curves via the bootstrap. A simple log (or logit) and a more flexible Box-Cox normality transformations were applied to untransformed or transformed data from two clinical studies to predict complications following percutaneous coronary interventions (PCIs) and for image-guided neurosurgical resection results predicted by tumor volume, respectively. We compared a non-parametric with a parametric binormal estimate of the underlying ROC curve. To construct such a GOF test, we used the non-parametric and parametric areas under the curve (AUCs) as the metrics, with a resulting p value reported. In the interventional cardiology example, logit and Box-Cox transformations of the predictive probabilities led to satisfactory AUCs (AUC=0.888; p=0.78, and AUC=0.888; p=0.73, respectively), while in the brain tumor resection example, log and Box-Cox transformations of the tumor size also led to satisfactory AUCs (AUC=0.898; p=0.61, and AUC=0.899; p=0.42, respectively). In contrast, significant departures from GOF were observed without applying any transformation prior to assuming a binormal model (AUC=0.766; p=0.004, and AUC=0.831; p=0.03), respectively. In both studies the p values suggested that transformations were important to consider before applying any binormal model to estimate the AUC. Our analyses also demonstrated and confirmed the predictive values of different classifiers for determining the interventional complications following PCIs and resection outcomes in image-guided neurosurgery.
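
    One simple way to operationalise the comparison, kept here as an illustrative sketch rather than the paper's exact procedure, is to bootstrap the difference between the nonparametric (Mann-Whitney) AUC and the binormal AUC and check whether zero lies inside the resulting interval; the transformation step is omitted and the data are synthetic.

```python
# Hedged sketch of the idea behind the test: compare the nonparametric AUC
# with the binormal AUC and bootstrap their difference. (The log / Box-Cox
# transformation step described in the paper is omitted here.)
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x0 = rng.normal(0.0, 1.0, 150)            # synthetic scores, non-diseased
x1 = rng.normal(1.2, 1.5, 100)            # synthetic scores, diseased

def auc_nonparametric(a0, a1):
    # Mann-Whitney statistic divided by n0 * n1
    return np.mean(a1[:, None] > a0[None, :]) + 0.5 * np.mean(a1[:, None] == a0[None, :])

def auc_binormal(a0, a1):
    return norm.cdf((a1.mean() - a0.mean()) / np.hypot(a0.std(ddof=1), a1.std(ddof=1)))

observed = auc_nonparametric(x0, x1) - auc_binormal(x0, x1)
boot = []
for _ in range(2000):
    b0 = rng.choice(x0, size=x0.size, replace=True)
    b1 = rng.choice(x1, size=x1.size, replace=True)
    boot.append(auc_nonparametric(b0, b1) - auc_binormal(b0, b1))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC difference {observed:.4f}, 95% bootstrap CI [{lo:.4f}, {hi:.4f}]")
print("binormal fit questionable" if (lo > 0 or hi < 0) else "no evidence against binormal fit")
```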

  18. Presenting new exoplanet candidates for the CoRoT chromatic light curves

    NASA Astrophysics Data System (ADS)

    Boufleur, Rodrigo; Emilio, Marcelo; Andrade, Laerte; Janot-Pacheco, Eduardo; De La Reza, Ramiro

    2015-08-01

    One of the most promising topics of modern astronomy is the discovery and characterization of extrasolar planets, due to its importance for the comprehension of planetary formation and evolution. Missions like MOST (Microvariability and Oscillations of Stars Telescope) (Walker et al., 2003) and especially the satellites dedicated to the search for exoplanets, CoRoT (Convection, Rotation and planetary Transits) (Baglin et al., 1998) and Kepler (Borucki et al., 2003), produced a great amount of data and together account for hundreds of new discoveries. An important source of error in the search for planets with light curves obtained from space observatories is the displacements occurring in the data due to external causes. This artificial charge generation phenomenon associated with the data is mainly caused by the impact of high energy particles onto the CCD (Pinheiro da Silva et al. 2008), although other sources of error, not as well understood, also need to be taken into account. Thus, an effective analysis of the light curves depends heavily on the mechanisms employed to deal with these phenomena. To perform our research, we developed and applied a different method to correct the light curves, the CDAM (Corot Detrend Algorithm Modified), inspired by the work of Mislis et al. (2012). The periodograms were obtained using the BLS method (Kovács et al., 2002). After a semiautomatic pre-analysis associated with a visual inspection of the planetary transit signatures, we obtained dozens of exoplanet candidates in very good agreement with the literature and also new unpublished cases. We present the study results and characterization of the new cases for the chromatic channel public light curves of the CoRoT satellite.

  19. New methods for engineering site characterization using reflection and surface wave seismic survey

    NASA Astrophysics Data System (ADS)

    Chaiprakaikeow, Susit

    This study presents two new seismic testing methods for engineering application, a new shallow seismic reflection method and Time Filtered Analysis of Surface Waves (TFASW). Both methods are described in this dissertation. The new shallow seismic reflection method was developed to measure reflection at a single point using two to four receivers, assuming homogeneous, horizontal layering. It uses one or more shakers driven by a swept sine function as a source, and the cross-correlation technique to identify wave arrivals. The phase difference between the source forcing function and the ground motion due to the dynamic response of the shaker-ground interface was corrected by using a reference geophone. Attenuated high-frequency energy was also recovered using whitening in the frequency domain. The new shallow seismic reflection testing was performed at the crest of Porcupine Dam in Paradise, Utah. The testing used two horizontal Vibroseis sources and four receivers for spacings between 6 and 300 ft. Unfortunately, the results showed no clear evidence of the reflectors despite correction of the magnitude and phase of the signals. However, an improvement in the shape of the cross-correlations was noticed after the corrections. The results showed distinct primary lobes in the corrected cross-correlated signals up to 150 ft offset. More consistent maximum peaks were observed in the corrected waveforms. TFASW is a new surface (Rayleigh) wave method to determine the shear wave velocity profile at a site. It is a time domain method as opposed to the Spectral Analysis of Surface Waves (SASW) method, which is a frequency domain method. This method uses digital filtering to optimize the bandwidth used to determine the dispersion curve. Results from tests at three different sites in Utah indicated good agreement between the dispersion curves measured using the TFASW and SASW methods. The advantage of the TFASW method is that the dispersion curves have less scatter at long wavelengths as a result of the wider bandwidth used in those tests.

  20. A stage-normalized function for the synthesis of stage-discharge relations for the Colorado River in Grand Canyon, Arizona

    USGS Publications Warehouse

    Wiele, Stephen M.; Torizzo, Margaret

    2003-01-01

    A method was developed to construct stage-discharge rating curves for the Colorado River in Grand Canyon, Arizona, using two stage-discharge pairs and a stage-normalized rating curve. Stage-discharge rating curves formulated with the stage-normalized curve method are compared to (1) stage-discharge rating curves for six temporary stage gages and two streamflow-gaging stations developed by combining stage records with modeled unsteady flow; (2) stage-discharge rating curves developed from stage records and discharge measurements at three streamflow-gaging stations; and (3) stages surveyed at known discharges at the Northern Arizona Sand Bar Studies sites. The stage-normalized curve method shows good agreement with field data when the discharges used in the construction of the rating curves are at least 200 cubic meters per second apart. Predictions of stage using the stage-normalized curve method are also compared to predictions of stage from a steady-flow model.
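
    The report's stage-normalized construction is specific to the Grand Canyon data, but the generic building block, a power-law rating Q = C·(h − h0)^b fitted to a handful of stage-discharge pairs, can be sketched as follows with hypothetical values.

```python
# Generic power-law rating curve fit (not the report's stage-normalized
# construction): Q = C * (h - h0)**b fitted to hypothetical stage-discharge pairs.
import numpy as np
from scipy.optimize import curve_fit

stage_m = np.array([1.2, 1.8, 2.5, 3.4, 4.6])                  # stage, metres
discharge = np.array([150.0, 420.0, 900.0, 1800.0, 3400.0])    # discharge, m^3/s

def rating(h, C, h0, b):
    return C * (h - h0) ** b

(C, h0, b), _ = curve_fit(rating, stage_m, discharge,
                          p0=(250.0, 0.5, 1.8),
                          bounds=([1.0, 0.0, 1.0], [5000.0, 1.0, 3.0]))
print(f"Q = {C:.0f} (h - {h0:.2f})^{b:.2f};  Q at h = 3.0 m = {rating(3.0, C, h0, b):.0f} m^3/s")
```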

  1. Application of curve resolution algorithms in the study of drug photodegradation kinetics -- the example of moclobemide.

    PubMed

    Skibiński, Robert; Komsta, Łukasz

    2012-01-01

    The photodegradation of moclobemide was studied in methanolic media. Ultra-HPLC (UHPLC)/MS/MS analysis proved decomposition to 4-chlorobenzamide as a major degradation product and small amounts of Ro 16-3177 (4-chloro-N-[2-[(2-hydroxyethyl)amino] ethyl]benzamide) and 2-[(4-chlorobenzylidene)amino]-N-[2-ethoxyethenyl]ethenamine. The methanolic solution was investigated spectrophotometrically in the UV region, registering the spectra during 30 min of degradation. Using reference spectra and a multivariate chemometric method (multivariate curve resolution-alternating least squares), the spectra were resolved and concentration profiles were obtained. The obtained results were in good agreement with a quantitative approach, with UHPLC-diode array detection as the reference method.
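
    The core alternating least squares loop behind MCR-ALS can be written in a few lines of linear algebra; the sketch below uses only a non-negativity clip and a column normalisation, whereas practical implementations (and the one used in the study) add further constraints and convergence checks. The two-component synthetic data are invented for illustration.

```python
# Bare-bones MCR-ALS sketch: alternately solve D ~ C @ S.T for concentration
# profiles C and pure spectra S with a non-negativity clip. Real MCR-ALS
# implementations add closure/normalisation constraints not shown here.
import numpy as np

def mcr_als(D, S_init, n_iter=100):
    """D: (times x wavelengths) data matrix; S_init: (wavelengths x k) spectra guess."""
    S = S_init.copy()
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S.T), 0.0, None)     # concentrations, non-negative
        S = np.clip((np.linalg.pinv(C) @ D).T, 0.0, None)   # spectra, non-negative
        S /= np.linalg.norm(S, axis=0, keepdims=True) + 1e-12   # fix scale ambiguity
    return C, S

# Tiny synthetic example: two species exchanging during photodegradation
t = np.linspace(0, 30, 40)[:, None]
wl = np.linspace(0, 1, 60)[None, :]
spectra_true = np.vstack([np.exp(-((wl - 0.3) / 0.1) ** 2), np.exp(-((wl - 0.7) / 0.1) ** 2)])
conc_true = np.hstack([np.exp(-t / 10.0), 1.0 - np.exp(-t / 10.0)])
D = conc_true @ spectra_true
C, S = mcr_als(D, S_init=np.random.default_rng(0).random((60, 2)))
print(C.shape, S.shape)   # resolved concentration profiles and pure spectra
```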

  2. Nonlinear radiative heat transfer and Hall effects on a viscous fluid in a semi-porous curved channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbas, Z.; Naveed, M., E-mail: rana.m.naveed@gmail.com; Sajid, M.

    In this paper, effects of Hall currents and nonlinear radiative heat transfer in a viscous fluid passing through a semi-porous curved channel coiled in a circle of radius R are analyzed. A curvilinear coordinate system is used to develop the mathematical model of the considered problem in the form of partial differential equations. Similarity solutions of the governing boundary value problems are obtained numerically using the shooting method. The results are also validated with the well-known finite difference technique known as the Keller-Box method. The analysis of the influence of the pertinent parameters on the velocity and temperature distributions is presented through graphs and tables.

  3. Calculation of the aerodynamic loading of swept and unswept flexible wings of arbitrary stiffness

    NASA Technical Reports Server (NTRS)

    Diederich, Franklin W

    1950-01-01

    A method is presented for calculating the aerodynamic loading, the divergence speed, and certain stability derivatives of swept and unswept wings and tail surfaces of arbitrary stiffness. Provision is made for using either stiffness curves and root rotation constants or structural influence coefficients in the analysis. Computing forms, tables of numerical constants required in the analysis, and an illustrative example are included to facilitate calculations by means of the method.

  4. Accounting for sampling variability, injury under-reporting, and sensor error in concussion injury risk curves.

    PubMed

    Elliott, Michael R; Margulies, Susan S; Maltese, Matthew R; Arbogast, Kristy B

    2015-09-18

    There has been a recent dramatic increase in the use of sensors affixed to the heads or helmets of athletes to measure the biomechanics of head impacts that lead to concussion. The relationship between injury and linear or rotational head acceleration measured by such sensors can be quantified with an injury risk curve. The utility of the injury risk curve relies on the accuracy of both the clinical diagnosis and the biomechanical measure. The focus of our analysis was to demonstrate the influence of three sources of error on the shape and interpretation of concussion injury risk curves: sampling variability associated with a rare event, concussion under-reporting, and sensor measurement error. We utilized Bayesian statistical methods to generate synthetic data from previously published concussion injury risk curves developed using data from helmet-based sensors on collegiate football players and assessed the effect of the three sources of error on the risk relationship. Accounting for sampling variability adds uncertainty or width to the injury risk curve. Assuming a variety of rates of unreported concussions in the non-concussed group, we found that accounting for under-reporting lowers the rotational acceleration required for a given concussion risk. Lastly, after accounting for sensor error, we find strengthened relationships between rotational acceleration and injury risk, further lowering the magnitude of rotational acceleration needed for a given risk of concussion. As more accurate sensors are designed and more sensitive and specific clinical diagnostic tools are introduced, our analysis provides guidance for the future development of comprehensive concussion risk curves. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Evaluation of the influences of various force magnitudes and configurations on scoliotic curve correction using finite element analysis.

    PubMed

    Karimi, Mohammad Taghi; Ebrahimi, Mohammad Hossein; Mohammadi, Ali; McGarry, Anthony

    2017-03-01

    Scoliosis is a lateral curvature of the normally straight vertical line of the spine, and the curvature can range from moderate to severe. Different treatments can be used depending on the severity and the age of the subject, but the most common treatment for this condition is an orthosis. In orthosis design, the force arrangement can be varied from transverse loads to vertical loads or a combination of the two. However, it is not well established how orthoses control the scoliotic curve and how to achieve the maximum correction for a given force configuration and magnitude. Therefore, this study aimed to determine the effect of various load configurations and magnitudes on curve correction in a subject with degenerative scoliosis. A scoliotic subject participated in this study. The CT scan of the subject was used to produce a 3D model of the spine in Mimics software, and the finite element analysis and deformation of the scoliotic curve under seven different forces and in three different conditions were determined with ABAQUS software. The Cobb angle of the scoliotic curve decreased significantly when the forces were applied, with different corrections achieved in each condition depending on the applied forces. It can be concluded that the force configurations described in this study are effective in reducing the scoliotic curve. Although this is a case study, the approach can be applied to a large number of subjects to predict the correction of the scoliotic curve before orthotic treatment. Moreover, it is recommended that this method and its outputs be compared with clinical findings.

  6. Quantitative Ultrasound for Measuring Obstructive Severity in Children with Hydronephrosis.

    PubMed

    Cerrolaza, Juan J; Peters, Craig A; Martin, Aaron D; Myers, Emmarie; Safdar, Nabile; Linguraru, Marius George

    2016-04-01

    We define sonographic biomarkers for hydronephrotic renal units that can predict the necessity of diuretic nuclear renography. We selected a cohort of 50 consecutive patients with hydronephrosis of varying severity in whom 2-dimensional sonography and diuretic mercaptoacetyltriglycine renography had been performed. A total of 131 morphological parameters were computed using quantitative image analysis algorithms. Machine learning techniques were then applied to identify ultrasound based safety thresholds that agreed with the t½ for washout. A best fit model was then derived for each threshold level of t½ that would be clinically relevant at 20, 30 and 40 minutes. Receiver operating characteristic curve analysis was performed. Sensitivity, specificity and area under the receiver operating characteristic curve were determined. Improvement obtained by the quantitative imaging method compared to the Society for Fetal Urology grading system and the hydronephrosis index was statistically verified. For the 3 thresholds considered and at 100% sensitivity the specificities of the quantitative imaging method were 94%, 70% and 74%, respectively. Corresponding area under the receiver operating characteristic curve values were 0.98, 0.94 and 0.94, respectively. Improvement obtained by the quantitative imaging method over the Society for Fetal Urology grade and hydronephrosis index was statistically significant (p <0.05 in all cases). Quantitative imaging analysis of renal sonograms in children with hydronephrosis can identify thresholds of clinically significant washout times with 100% sensitivity to decrease the number of diuretic renograms in up to 62% of children. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  7. Rapid screening of rpoB and katG mutations in Mycobacterium tuberculosis isolates by high-resolution melting curve analysis.

    PubMed

    Haeili, M; Fooladi, A I; Bostanabad, S Z; Sarokhalil, D D; Siavoshi, F; Feizabadi, M M

    2014-01-01

    Early detection of multidrug-resistant tuberculosis (MDR-TB) is essential to prevent its transmission in the community and initiate an effective anti-TB treatment regimen. High-resolution melting curve (HRM) analysis was evaluated for rapid detection of resistance-conferring mutations in the rpoB and katG genes. We screened 95 Mycobacterium tuberculosis clinical isolates including 20 rifampin resistant (RIF-R), 21 isoniazid resistant (INH-R) and 54 fully susceptible (S) isolates determined by the proportion method of drug susceptibility testing. Nineteen M. tuberculosis isolates with known drug susceptibility genotypes were used as references for the assay validation. The nucleotide sequences of the target regions of the rpoB and katG genes were determined to investigate the frequency and type of mutations and to confirm HRM results. HRM analysis of a 129-bp fragment of rpoB allowed correct identification of 19 of the 20 phenotypically RIF-R and all RIF-S isolates. All INH-S isolates generated wild-type HRM curves, and 18 of the 21 INH-R isolates that harboured a mutation in the 109-bp fragment of katG exhibited mutant-type HRM curves. However, 1 RIF-R and 3 INH-R isolates were falsely identified as susceptible; sequencing confirmed that these isolates had no mutation in the target regions. The main mutations involved in RIF and INH resistance were found at codons rpoB531 (60% of RIF-R isolates) and katG315 (85.7% of INH-R isolates), respectively. HRM was found to be a reliable, rapid and low-cost method to characterise drug susceptibility of clinical TB isolates in resource-limited settings.

  8. The learning curve to achieve satisfactory completion rates in upper GI endoscopy: an analysis of a national training database.

    PubMed

    Ward, S T; Hancox, A; Mohammed, M A; Ismail, T; Griffiths, E A; Valori, R; Dunckley, P

    2017-06-01

    The aim of this study was to determine the number of OGDs (oesophago-gastro-duodenoscopies) trainees need to perform to acquire competency in terms of successful unassisted completion to the second part of the duodenum 95% of the time. OGD data were retrieved from the trainee e-portfolio developed by the Joint Advisory Group on GI Endoscopy (JAG) in the UK. All trainees were included unless they were known to have a baseline experience of >20 procedures or had submitted data for <20 procedures. The primary outcome measure was OGD completion, defined as passage of the endoscope to the second part of the duodenum without physical assistance. The number of OGDs required to achieve a 95% completion rate was calculated by the moving average method and learning curve cumulative summation (LC-Cusum) analysis. To determine which factors were independently associated with OGD completion, a mixed effects logistic regression model was constructed with OGD completion as the outcome variable. Data were analysed for 1255 trainees over 288 centres, representing 243 555 OGDs. By moving average method, trainees attained a 95% completion rate at 187 procedures. By LC-Cusum analysis, after 200 procedures, >90% trainees had attained a 95% completion rate. Total number of OGDs performed, trainee age and experience in lower GI endoscopy were factors independently associated with OGD completion. There are limited published data on the OGD learning curve. This is the largest study to date analysing the learning curve for competency acquisition. The JAG competency requirement for 200 procedures appears appropriate. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
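
    An LC-CUSUM-style monitor of the kind referred to here can be sketched as a running log-likelihood-ratio score that is held at zero from below and signals competence when it crosses a decision limit; the acceptable and unacceptable failure rates and the limit h below are illustrative choices, not JAG's settings.

```python
# Illustrative LC-CUSUM-style monitor (parameters are examples, not JAG's):
# the score rises with each unassisted completion and the trainee "signals"
# competence once the cumulative sum crosses the decision limit h.
import numpy as np

def lc_cusum(successes, p_inadequate=0.10, p_adequate=0.05, h=2.0):
    """successes: 1 = completed unassisted, 0 = failed/assisted, per procedure."""
    w_success = np.log((1 - p_adequate) / (1 - p_inadequate))   # small positive credit
    w_failure = np.log(p_adequate / p_inadequate)               # larger negative penalty
    s, trajectory = 0.0, []
    for ok in successes:
        s = max(0.0, s + (w_success if ok else w_failure))
        trajectory.append(s)
        if s >= h:
            return len(trajectory), trajectory   # procedure at which competence is signalled
    return None, trajectory

rng = np.random.default_rng(0)
outcomes = rng.random(250) < 0.96                # a trainee completing ~96% unassisted
n_signal, _ = lc_cusum(outcomes.astype(int))
print(f"competence signalled after {n_signal} procedures")
```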

  9. Testing the performance of pure spectrum resolution from Raman hyperspectral images of differently manufactured pharmaceutical tablets.

    PubMed

    Vajna, Balázs; Farkas, Attila; Pataki, Hajnalka; Zsigmond, Zsolt; Igricz, Tamás; Marosi, György

    2012-01-27

    Chemical imaging is a rapidly emerging analytical method in pharmaceutical technology. Due to the numerous chemometric solutions available, characterization of pharmaceutical samples with unknown components present has also become possible. This study compares the performance of current state-of-the-art curve resolution methods (multivariate curve resolution-alternating least squares, positive matrix factorization, simplex identification via split augmented Lagrangian and self-modelling mixture analysis) in the estimation of pure component spectra from Raman maps of differently manufactured pharmaceutical tablets. The batches of different technologies differ in the homogeneity level of the active ingredient, thus, the curve resolution methods are tested under different conditions. An empirical approach is shown to determine the number of components present in a sample. The chemometric algorithms are compared regarding the number of detected components, the quality of the resolved spectra and the accuracy of scores (spectral concentrations) compared to those calculated with classical least squares, using the true pure component (reference) spectra. It is demonstrated that using appropriate multivariate methods, Raman chemical imaging can be a useful tool in the non-invasive characterization of unknown (e.g. illegal or counterfeit) pharmaceutical products. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. A novel Gaussian process regression model for state-of-health estimation of lithium-ion battery using charging curve

    NASA Astrophysics Data System (ADS)

    Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai

    2018-04-01

    State-of-health (SOH) estimation is always a crucial issue for lithium-ion batteries. In order to provide an accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on the charging curve is proposed in this paper. Unlike other studies where SOH is commonly estimated from cycle number, in this work four specific parameters extracted from charging curves are used as inputs of the GPR model instead of cycle numbers. These parameters reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between the selected features and SOH. In addition, some adjustments are made to the proposed GPR model: the covariance function design and the similarity measurement of input variables are modified so as to improve SOH estimation accuracy and adapt to the case of multidimensional inputs. Several aging datasets from the NASA data repository are used to demonstrate the estimation performance of the proposed method. Results show that the proposed method has high SOH estimation accuracy. Furthermore, a battery with a dynamic discharging profile is used to verify the robustness and reliability of the method.
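
    A minimal sketch of this kind of model, using scikit-learn rather than the authors' modified GPR; the four charging-curve features and the SOH labels below are synthetic stand-ins.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

      # Hypothetical per-cycle features extracted from charging curves (e.g. constant-current
      # charge time, voltage at a fixed time, curve slope, constant-voltage capacity).
      rng = np.random.default_rng(0)
      X_train = rng.random((60, 4))                                            # 60 cycles, 4 features
      soh_train = 1.0 - 0.3 * X_train[:, 0] + 0.02 * rng.standard_normal(60)   # synthetic SOH labels

      # Anisotropic RBF kernel (one length-scale per feature) plus a noise term; the paper's
      # modified covariance function and similarity measure are not reproduced here.
      kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(4)) + WhiteKernel(1e-3)
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, soh_train)

      soh_mean, soh_std = gpr.predict(rng.random((5, 4)), return_std=True)
      print(soh_mean, soh_std)   # SOH estimates with predictive uncertainty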

  11. Out-of-plane dynamic stability analysis of curved beams subjected to uniformly distributed radial loading

    NASA Astrophysics Data System (ADS)

    Sabuncu, M.; Ozturk, H.; Cimen, S.

    2011-04-01

    In this study, an out-of-plane stability analysis of thin curved beams with tapered cross-sections under uniformly distributed radial loading is performed using the finite-element method. Dynamic stability is analysed following Bolotin's approach, and the first unstable regions are examined. Out-of-plane vibration and out-of-plane buckling analyses are also presented. In addition, the results obtained in this study are compared with the published results of other researchers for the fundamental frequency and critical lateral buckling load. The effects of subtended angle, variation of cross-section, and dynamic load parameter on the stability regions are shown graphically.

  12. On the reduction of occultation light curves. [stellar occultations by planets

    NASA Technical Reports Server (NTRS)

    Wasserman, L.; Veverka, J.

    1973-01-01

    The two basic methods of reducing occultation light curves - curve fitting and inversion - are reviewed and compared. It is shown that the curve fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.

  13. Recalcitrant vulnerability curves: methods of analysis and the concept of fibre bridges for enhanced cavitation resistance.

    PubMed

    Cai, Jing; Li, Shan; Zhang, Haixin; Zhang, Shuoxin; Tyree, Melvin T

    2014-01-01

    Vulnerability curves (VCs) generally can be fitted to the Weibull equation; however, a growing number of VCs appear to be recalcitrant, that is, they deviate from a single Weibull but seem to fit dual Weibull curves. We hypothesize that dual Weibull curves in Hippophae rhamnoides L. are due to different vessel diameter classes, inter-vessel hydraulic connections or vessels versus fibre tracheids. We used dye staining techniques, hydraulic measurements and quantitative anatomy measurements to test these hypotheses. The fibres contribute 1.3% of the total stem conductivity, which eliminates the hypothesis that fibre tracheids account for the second Weibull curve. Nevertheless, the staining pattern of vessels and fibre tracheids suggested that fibres might function as a hydraulic bridge between adjacent vessels. We also argue that fibre bridges are safer than vessel-to-vessel pits and put forward the concept as a new paradigm. Hence, we tentatively propose that the first Weibull curve may be accounted for by vessels connected to each other directly by pit fields, while the second Weibull curve is associated with vessels that are connected almost exclusively by fibre bridges. Further research is needed to test the concept of fibre bridge safety in species that have recalcitrant or normal Weibull curves. © 2013 John Wiley & Sons Ltd.
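
    A dual-Weibull fit of a recalcitrant vulnerability curve can be sketched as follows; the parameterisation PLC = 100(1 - exp(-(T/b)^c)) and all data values are assumptions for illustration, not the fits reported in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def weibull_plc(t, b, c):
          """Single-Weibull vulnerability curve: percent loss of conductivity vs. tension t."""
          return 100.0 * (1.0 - np.exp(-(t / b) ** c))

      def dual_weibull_plc(t, w, b1, c1, b2, c2):
          """Weighted sum of two Weibull curves, one per hypothesised vessel population."""
          return w * weibull_plc(t, b1, c1) + (1.0 - w) * weibull_plc(t, b2, c2)

      # synthetic "recalcitrant" data: two populations cavitating at different tensions
      t = np.linspace(0.5, 7.0, 30)
      plc = dual_weibull_plc(t, 0.6, 2.0, 4.0, 5.0, 8.0) + np.random.normal(0, 2, t.size)

      popt, _ = curve_fit(dual_weibull_plc, t, plc, p0=[0.5, 2.0, 3.0, 5.0, 5.0],
                          bounds=([0, 0.1, 0.5, 0.1, 0.5], [1, 10, 20, 10, 20]))
      print(popt)   # fitted weight and the two (b, c) Weibull parameter pairs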

  14. Analysis of the learning curve for peroral endoscopic myotomy for esophageal achalasia: Single-center, two-operator experience.

    PubMed

    Lv, Houning; Zhao, Ningning; Zheng, Zhongqing; Wang, Tao; Yang, Fang; Jiang, Xihui; Lin, Lin; Sun, Chao; Wang, Bangmao

    2017-05-01

    Peroral endoscopic myotomy (POEM) has emerged as an advanced technique for the treatment of achalasia, and defining its learning curve is mandatory. From August 2011 to June 2014, two operators in our institution (A and B) carried out POEM on 35 and 33 consecutive patients, respectively. Moving average and cumulative sum (CUSUM) methods were used to analyze the POEM learning curve for corrected operative time (cOT), defined as the operative time per centimeter of myotomy. Additionally, perioperative outcomes were compared among the distinct learning curve phases. Using the moving average method, cOT reached a plateau at the 29th case and at the 24th case for operators A and B, respectively. CUSUM analysis identified three phases: an initial learning period (Phase 1), an efficiency period (Phase 2) and a mastery period (Phase 3). The relatively smooth state in the CUSUM graph occurred at the 26th case and at the 24th case for operators A and B, respectively. Mean cOT values for the distinct phases were 8.32, 5.20 and 3.97 min for operator A, and 5.99, 3.06 and 3.75 min for operator B, respectively. Eckardt score and lower esophageal sphincter pressure significantly decreased during the 1-year follow-up period. Data were comparable regarding patient characteristics and perioperative outcomes. This single-center study demonstrated that expert endoscopists with experience in esophageal endoscopic submucosal dissection reached a plateau in learning POEM after approximately 25 cases. © 2016 Japan Gastroenterological Endoscopy Society.

  15. Task Design for Students' Work with Basic Theory in Analysis: The Cases of Multidimensional Differentiability and Curve Integrals

    ERIC Educational Resources Information Center

    Gravesen, Katrine Frovin; Grønbaek, Niels; Winsløw, Carl

    2017-01-01

    We investigate the challenges students face in the transition from calculus courses, focusing on methods related to the analysis of real valued functions given in closed form, to more advanced courses on analysis where focus is on theoretical structure, including proof. We do so based on task design aiming for a number of generic potentials for…

  16. Chemical Research--Radiochemistry Report for Month Ending April 17, 1943

    DOE R&D Accomplishments Database

    Franck, J. Division Director

    1952-01-01

    1. A continuation of the detailed analysis of beta and soft and hard gamma activity associated with all fission product elements in a nitrate bombardment is presented. The "cooling" time has been extended to 170 days. The data for the individual elements are presented in tables as counts/min and in figures as percentage of total beta, soft gamma, and hard gamma radiations. 2. Calculations and graphs have been made on the heat generated by the longer-lived fission products. The method of analysis is presented. 3. Two new short-lived Rh fission product activities have been found. They are probably the daughters of the two long-lived Ru activities (30d, 200d). Re-evaluation of data on element 43 leads to the conclusion that the longest-lived 43 activity in measurable yields is the 6.1h (formerly 6.6h). New parent-daughter relationships in the rare-earth activities are given. 4. Theoretical beta absorption curves have been made using the Fermi distribution function and linear absorption curves for small energy intervals. A Feather analysis of the absorption curve leads to the theoretical maximum energy.

  17. Weathering Patterns of Ignitable Liquids with the Advanced Distillation Curve Method

    PubMed Central

    Bruno, Thomas J; Allen, Samuel

    2013-01-01

    One can take advantage of the striking similarity of ignitable liquid vaporization (or weathering) patterns and the separation observed during distillation to predict the composition of residual compounds in fire debris. This is done with the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. Analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, Karl Fischer coulombic titrimetry, refractometry, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. We have applied this method on product streams such as finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oils streams (used automotive and transformer oils). In this paper, we present results on a variety of ignitable liquids that are not commodity fuels, chosen from the Ignitable Liquids Reference Collection (ILRC). These measurements are assembled into a preliminary database. From this selection, we discuss the significance and forensic application of the temperature data grid and the composition explicit data channel of the ADC. PMID:26401423

  18. Weathering Patterns of Ignitable Liquids with the Advanced Distillation Curve Method.

    PubMed

    Bruno, Thomas J; Allen, Samuel

    2013-01-01

    One can take advantage of the striking similarity of ignitable liquid vaporization (or weathering) patterns and the separation observed during distillation to predict the composition of residual compounds in fire debris. This is done with the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. Analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, Karl Fischer coulombic titrimetry, refractometry, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. We have applied this method on product streams such as finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oils streams (used automotive and transformer oils). In this paper, we present results on a variety of ignitable liquids that are not commodity fuels, chosen from the Ignitable Liquids Reference Collection (ILRC). These measurements are assembled into a preliminary database. From this selection, we discuss the significance and forensic application of the temperature data grid and the composition explicit data channel of the ADC.

  19. Detection of explosives on the surface of banknotes by Raman hyperspectral imaging and independent component analysis.

    PubMed

    Almeida, Mariana R; Correa, Deleon N; Zacca, Jorge J; Logrado, Lucio Paulo Lima; Poppi, Ronei J

    2015-02-20

    The aim of this study was to develop a methodology using Raman hyperspectral imaging and chemometric methods for identification of pre- and post-blast explosive residues on banknote surfaces. The explosives studied were of military, commercial and propellant uses. After the acquisition of the hyperspectral imaging, independent component analysis (ICA) was applied to extract the pure spectra and the distribution of the corresponding image constituents. The performance of the methodology was evaluated by the explained variance and the lack of fit of the models, by comparing the ICA recovered spectra with the reference spectra using correlation coefficients and by the presence of rotational ambiguity in the ICA solutions. The methodology was applied to forensic samples to solve an automated teller machine explosion case. Independent component analysis proved to be a suitable method of resolving curves, achieving equivalent performance with the multivariate curve resolution with alternating least squares (MCR-ALS) method. At low concentrations, MCR-ALS presents some limitations, as it did not provide the correct solution. The detection limit of the methodology presented in this study was 50 μg cm(-2). Copyright © 2014 Elsevier B.V. All rights reserved.
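
    The ICA step on an unfolded hyperspectral cube can be sketched with scikit-learn's FastICA; the cube below is random stand-in data, and the recovered spectra carry the usual sign and scale ambiguity mentioned above.

      import numpy as np
      from sklearn.decomposition import FastICA

      ny, nx, n_wn = 40, 40, 600
      cube = np.random.rand(ny, nx, n_wn)            # stand-in for a measured Raman map

      X = cube.reshape(ny * nx, n_wn)                # unfold: one spectrum per pixel row
      ica = FastICA(n_components=4, random_state=0, max_iter=1000)
      scores = ica.fit_transform(X)                  # per-pixel "spectral concentration" scores
      pure_spectra = ica.mixing_.T                   # estimated component spectra (sign/scale ambiguous)

      score_maps = scores.reshape(ny, nx, -1)        # refold scores to the image geometry
      print(pure_spectra.shape, score_maps.shape)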

  20. Vulnerability curves vs. vulnerability indicators: application of an indicator-based methodology for debris-flow hazards

    NASA Astrophysics Data System (ADS)

    Papathoma-Köhle, Maria

    2016-08-01

    The assessment of the physical vulnerability of elements at risk as part of the risk analysis is an essential aspect for the development of strategies and structural measures for risk reduction. Understanding, analysing and, if possible, quantifying physical vulnerability is a prerequisite for designing strategies and adopting tools for its reduction. The most common methods for assessing physical vulnerability are vulnerability matrices, vulnerability curves and vulnerability indicators; however, in most of the cases, these methods are used in a conflicting way rather than in combination. The article focuses on two of these methods: vulnerability curves and vulnerability indicators. Vulnerability curves express physical vulnerability as a function of the intensity of the process and the degree of loss, considering, in individual cases only, some structural characteristics of the affected buildings. However, a considerable number of studies argue that vulnerability assessment should focus on the identification of the variables that influence the vulnerability of an element at risk (vulnerability indicators). In this study, an indicator-based methodology (IBM) for mountain hazards including debris flow (Kappes et al., 2012) is applied to a case study for debris flows in South Tyrol, where in the past a vulnerability curve has been developed. The relatively "new" indicator-based method is being scrutinised and recommendations for its improvement are outlined. The comparison of the two methodological approaches and their results is challenging since both approaches deal with vulnerability in a different way. However, it is still possible to highlight their weaknesses and strengths, show clearly that both methodologies are necessary for the assessment of physical vulnerability and provide a preliminary "holistic methodological framework" for physical vulnerability assessment showing how the two approaches may be used in combination in the future.

  1. Gravity-darkening exponents in semi-detached binary systems from their photometric observations. II.

    NASA Astrophysics Data System (ADS)

    Djurašević, G.; Rovithis-Livaniou, H.; Rovithis, P.; Georgiades, N.; Erkapić, S.; Pavlović, R.

    2006-01-01

    This second part of our study concerning gravity-darkening presents the results for 8 semi-detached close binary systems. From the light-curve analysis of these systems the exponent of the gravity-darkening (GDE) for the Roche lobe filling components has been empirically derived. The method used for the light-curve analysis is based on Roche geometry, and enables simultaneous estimation of the systems' parameters and the gravity-darkening exponents. Our analysis is restricted to the black-body approximation which can influence in some degree the parameter estimation. The results of our analysis are: 1) For four of the systems, namely: TX UMa, β Per, AW Cam and TW Cas, there is a very good agreement between empirically estimated and theoretically predicted values for purely convective envelopes. 2) For the AI Dra system, the estimated value of gravity-darkening exponent is greater, and for UX Her, TW And and XZ Pup lesser than corresponding theoretical predictions, but for all mentioned systems the obtained values of the gravity-darkening exponent are quite close to the theoretically expected values. 3) Our analysis has proved generally that with the correction of the previously estimated mass ratios of the components within some of the analysed systems, the theoretical predictions of the gravity-darkening exponents for stars with convective envelopes are highly reliable. The anomalous values of the GDE found in some earlier studies of these systems can be considered as the consequence of the inappropriate method used to estimate the GDE. 4) The empirical estimations of GDE given in Paper I and in the present study indicate that in the light-curve analysis one can apply the recent theoretical predictions of GDE with high confidence for stars with both convective and radiative envelopes.

  2. A Novel Uncertainty Framework for Improving Discharge Data Quality Using Hydraulic Modelling.

    NASA Astrophysics Data System (ADS)

    Mansanarez, V.; Westerberg, I.; Lyon, S. W.; Lam, N.

    2017-12-01

    Flood risk assessments rely on accurate discharge data records. Establishing a reliable stage-discharge (SD) rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. We introduce an uncertainty framework using hydraulic modelling for developing SD rating curves and estimating their uncertainties. The proposed framework incorporates information from both the hydraulic configuration (bed slope, roughness, vegetation) and the information available in the stage-discharge observation data (gaugings). This method provides a direct estimation of the hydraulic configuration (slope, bed roughness and vegetation roughness). Discharge time series are estimated by propagating stage records through the posterior rating curve results. We applied this novel method to two Swedish hydrometric stations, accounting for uncertainties in the gaugings for the hydraulic model. Results from these applications were compared to discharge measurements and official discharge estimates. A sensitivity analysis was performed, focusing on high-flow uncertainty and the factors that could reduce it. In particular, we investigated which data uncertainties were most important and at what flow conditions the gaugings should preferably be taken.
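
    A heavily simplified stand-in for this idea, a classical power-law rating fitted to a few gaugings with parameter uncertainty propagated to a stage record, is sketched below; it is not the hydraulically-modelled Bayesian rating of the abstract, and all numbers are synthetic.

      import numpy as np
      from scipy.optimize import curve_fit

      def rating(h, a, b, h0):
          """Power-law stage-discharge rating curve Q = a * (h - h0)**b."""
          return a * np.clip(h - h0, 1e-6, None) ** b

      h_g = np.array([0.4, 0.7, 1.1, 1.6, 2.3])      # gauged stages (m), synthetic
      q_g = np.array([1.2, 4.0, 11.0, 26.0, 60.0])   # gauged discharges (m^3/s), synthetic
      popt, pcov = curve_fit(rating, h_g, q_g, p0=[10.0, 2.0, 0.1], maxfev=10000)

      # crude uncertainty propagation: sample rating parameters, push a stage record through
      draws = np.random.multivariate_normal(popt, pcov, size=500)
      stage = np.linspace(0.5, 3.0, 100)             # stand-in stage time series
      q_ens = np.array([rating(stage, *d) for d in draws])
      q_med, q_lo, q_hi = np.percentile(q_ens, [50, 5, 95], axis=0)
      print(q_med[-1], (q_lo[-1], q_hi[-1]))         # high-flow estimate with a 90% band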

  3. Pedagogical Implications in the Thermal Analysis of Uniform Annular Fins: Alternative Analytic Solutions by Series.

    ERIC Educational Resources Information Center

    Campo, Antonio; Rodriguez, Franklin

    1998-01-01

    Presents two alternative computational procedures for solving the modified Bessel equation of zero order: the Frobenius method, and the power series method coupled with a curve fit. Students in heat transfer courses can benefit from these alternative procedures; a course on ordinary differential equations is the only mathematical background that…
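
    For context, the regular series solution that both procedures target can be written out (a standard textbook derivation, not reproduced from the article):

      % modified Bessel equation of order zero and its regular Frobenius/power-series solution
      \[
        x^{2}y'' + x y' - x^{2}y = 0, \qquad y(x)=\sum_{k\ge 0} a_{k}x^{k}
        \;\Longrightarrow\; k^{2}a_{k}=a_{k-2},\quad a_{1}=0,
      \]
      \[
        y(x) = a_{0}\sum_{m=0}^{\infty}\frac{(x/2)^{2m}}{(m!)^{2}} = a_{0}\,I_{0}(x).
      \]
      % The second, singular solution K_0(x) involves a logarithmic term and is omitted here.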

  4. Preventing conflicts among bid curves used with transactive controllers in a market-based resource allocation system

    DOEpatents

    Fuller, Jason C.; Chassin, David P.; Pratt, Robert G.; Hauer, Matthew; Tuffner, Francis K.

    2017-03-07

    Disclosed herein are representative embodiments of methods, apparatus, and systems for distributing a resource (such as electricity) using a resource allocation system. One of the disclosed embodiments is a method for operating a transactive thermostatic controller configured to submit bids to a market-based resource allocation system. According to the exemplary method, a first bid curve is determined, the first bid curve indicating a first set of bid prices for corresponding temperatures and being associated with a cooling mode of operation for a heating and cooling system. A second bid curve is also determined, the second bid curve indicating a second set of bid prices for corresponding temperatures and being associated with a heating mode of operation for a heating and cooling system. In this embodiment, the first bid curve, the second bid curve, or both the first bid curve and the second bid curve are modified to prevent overlap of any portion of the first bid curve and the second bid curve.

  5. A novel model of magnetorheological damper with hysteresis division

    NASA Astrophysics Data System (ADS)

    Yu, Jianqiang; Dong, Xiaomin; Zhang, Zonglun

    2017-10-01

    Due to the complex nonlinearity of magnetorheological (MR) behavior, the modeling of MR dampers is a challenge, and a simple and effective MR damper model remains a work in progress. A novel model of the MR damper is proposed in this study using a force-velocity hysteresis division method. A typical hysteresis loop of an MR damper can be simply divided into two novel curves with this division idea: one is the backbone curve and the other is the branch curve. Exponential-family functions that capture the characteristics of the two curves simplify the model and improve the identification efficiency. To illustrate and validate the novel phenomenological model with the hysteresis division idea, a dual-end MR damper is designed and tested. Based on the experimental data, the characteristics of the novel curves are investigated. To simplify parameter identification and obtain reversibility, a maximum force part, a non-dimensional backbone part and a non-dimensional branch part are derived from the two curves. The maximum force part and the non-dimensional parts are combined according to a multiplicative rule. The maximum force part depends on the current and the maximum velocity. The non-dominated sorting genetic algorithm II (NSGA-II), based on a design of experiments (DOE), is employed to identify the parameters of the normalized shape functions. Comparative analysis is conducted based on the identification results. The analysis shows that the novel model, with few identification parameters, has higher accuracy and better predictive ability.

  6. A Metric for Reducing False Positives in the Computer-Aided Detection of Breast Cancer from Dynamic Contrast-Enhanced Magnetic Resonance Imaging Based Screening Examinations of High-Risk Women.

    PubMed

    Levman, Jacob E D; Gallego-Ortiz, Cristina; Warner, Ellen; Causer, Petrina; Martel, Anne L

    2016-02-01

    Magnetic resonance imaging (MRI)-enabled cancer screening has been shown to be a highly sensitive method for the early detection of breast cancer. Computer-aided detection systems have the potential to improve the screening process by standardizing radiologists to a high level of diagnostic accuracy. This retrospective study was approved by the institutional review board of Sunnybrook Health Sciences Centre. This study compares the performance of a proposed method for computer-aided detection (based on the second-order spatial derivative of the relative signal intensity) with the signal enhancement ratio (SER) on MRI-based breast screening examinations. Comparison is performed using receiver operating characteristic (ROC) curve analysis as well as free-response receiver operating characteristic (FROC) curve analysis. A modified computer-aided detection system combining the proposed approach with the SER method is also presented. The proposed method provides improvements in the rates of false positive markings over the SER method in the detection of breast cancer (as assessed by FROC analysis). The modified computer-aided detection system that incorporates both the proposed method and the SER method yields ROC results equal to that produced by SER while simultaneously providing improvements over the SER method in terms of false positives per noncancerous exam. The proposed method for identifying malignancies outperforms the SER method in terms of false positives on a challenging dataset containing many small lesions and may play a useful role in breast cancer screening by MRI as part of a computer-aided detection system.
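
    The SER comparator can be computed voxel-wise from a three-time-point DCE acquisition; the sketch below uses the commonly cited definition SER = (S_early - S_pre)/(S_late - S_pre), which is an assumption here, and it does not reproduce the paper's proposed second-derivative metric.

      import numpy as np

      def signal_enhancement_ratio(s_pre, s_early, s_late, eps=1e-6):
          """Voxel-wise SER: early enhancement relative to late enhancement (washout indicator)."""
          return (s_early - s_pre) / (s_late - s_pre + eps)

      # toy voxel intensities at pre-contrast, early and late post-contrast time points
      ser = signal_enhancement_ratio(np.array([100.0]), np.array([220.0]), np.array([180.0]))
      print(ser)   # SER > 1 is usually read as washout kinetics, i.e. more suspicious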

  7. Evaluation of peak-picking algorithms for protein mass spectrometry.

    PubMed

    Bauer, Chris; Cramer, Rainer; Schuchhardt, Johannes

    2011-01-01

    Peak picking is an early key step in MS data analysis. We compare three commonly used approaches to peak picking and discuss their merits by means of statistical analysis. Methods investigated encompass signal-to-noise ratio, continuous wavelet transform, and a correlation-based approach using a Gaussian template. Functionality of the three methods is illustrated and discussed in a practical context using a mass spectral data set created with MALDI-TOF technology. Sensitivity and specificity are investigated using a manually defined reference set of peaks. As an additional criterion, the robustness of the three methods is assessed by a perturbation analysis and illustrated using ROC curves.
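
    Two of the three picker families compared here, a signal-to-noise threshold and a continuous wavelet transform picker, can be sketched with SciPy on a synthetic spectrum; parameter values are illustrative only.

      import numpy as np
      from scipy.signal import find_peaks, find_peaks_cwt

      # synthetic MALDI-like spectrum: three Gaussian peaks on a noisy baseline
      mz = np.linspace(1000, 2000, 4000)
      spec = sum(a * np.exp(-0.5 * ((mz - c) / 1.5) ** 2) for a, c in [(50, 1200), (30, 1500), (80, 1750)])
      spec = spec + np.random.normal(0, 1.0, mz.size)

      # (1) signal-to-noise picker: peak height must exceed k times a robust noise estimate
      noise = 1.4826 * np.median(np.abs(spec - np.median(spec)))   # MAD-based noise level
      snr_peaks, _ = find_peaks(spec, height=5 * noise)

      # (2) continuous wavelet transform picker matching peak widths of ~2-20 samples
      cwt_peaks = find_peaks_cwt(spec, widths=np.arange(2, 20))

      print(mz[snr_peaks], mz[cwt_peaks])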

  8. Multilayer theory for delamination analysis of a composite curved bar subjected to end forces and end moments

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Jackson, Raymond H.

    1989-01-01

    A composite test specimen in the shape of a semicircular curved bar subjected to bending offers an excellent stress field for studying the open-mode delamination behavior of laminated composite materials. This is because the open-mode delamination nucleates at the midspan of the curved bar. The classical anisotropic elasticity theory was used to construct a 'multilayer' theory for the calculations of the stress and deformation fields induced in the multilayered composite semicircular curved bar subjected to end forces and end moments. The radial location and intensity of the open-mode delamination stress were calculated and were compared with the results obtained from the anisotropic continuum theory and from the finite element method. The multilayer theory gave more accurate predictions of the location and the intensity of the open-mode delamination stress than those calculated from the anisotropic continuum theory.

  9. Multilayer theory for delamination analysis of a composite curved bar subjected to end forces and end moments

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Jackson, Raymond H.

    1989-01-01

    A composite test specimen in the shape of a semicircular curved bar subjected to bending offers an excellent stress field for studying the open-mode delamination behavior of laminated composite materials. This is because the open-mode delamination nucleates at the midspan of the curved bar. The classical anisotropic elasticity theory was used to construct a multilayer theory for the calculations of the stress and deformation fields induced in the multilayered composite semicircular curved bar subjected to end forces and end moments. The radial location and intensity of the open-mode delamination stress were calculated and were compared with the results obtained from the anisotropic continuum theory and from the finite element method. The multilayer theory gave more accurate predictions of the location and the intensity of the open-mode delamination stress than those calculated from the anisotropic continuum theory.

  10. Bayesian Inference and Application of Robust Growth Curve Models Using Student's "t" Distribution

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Lai, Keke; Lu, Zhenqiu; Tong, Xin

    2013-01-01

    Despite the widespread popularity of growth curve analysis, few studies have investigated robust growth curve models. In this article, the "t" distribution is applied to model heavy-tailed data and contaminated normal data with outliers for growth curve analysis. The derived robust growth curve models are estimated through Bayesian…

  11. Estimated damage from the Cascadia Subduction Zone tsunami: A model comparison using fragility curves

    NASA Astrophysics Data System (ADS)

    Wiebe, D. M.; Cox, D. T.; Chen, Y.; Weber, B. A.; Chen, Y.

    2012-12-01

    Building damage from a hypothetical Cascadia Subduction Zone tsunami was estimated using two methods and applied at the community scale. The first method applies proposed guidelines for a new ASCE 7 standard to calculate the flow depth, flow velocity, and momentum flux from a known runup limit and an estimate of the total tsunami energy at the shoreline. This procedure is based on a potential energy budget, uses the energy grade line, and accounts for frictional losses. The second method utilized numerical model results from previous studies to determine maximum flow depth, velocity, and momentum flux throughout the inundation zone. The towns of Seaside and Cannon Beach, Oregon, were selected for analysis due to the availability of existing data from previously published works. Fragility curves, based on the hydrodynamic features of the tsunami flow (inundation depth, flow velocity, and momentum flux) and proposed design standards from ASCE 7, were used to estimate the probability of damage to structures located within the inundation zone. The analysis proceeded at the parcel level, using tax-lot data to identify construction type (wood, steel, and reinforced concrete) and age, which were used as performance measures when applying the fragility curves and design standards. The overall probability of damage to civil buildings was integrated for comparison between the two methods and also analyzed spatially for damage patterns, which could be controlled by local bathymetric features. The two methods were compared to assess the sensitivity of the results to uncertainty in the input hydrodynamic conditions and fragility curves, and the potential advantages of each method are discussed. Ongoing work includes coupling the building damage and vulnerability results to an economic input-output model. This model assesses trade between business sectors located inside and outside the inundation zone and is used to measure the impact on the regional economy. Results highlight business sectors and infrastructure critical to the economic recovery effort, which could be retrofitted or relocated to survive the event. The results of this study improve community understanding of the tsunami hazard for civil buildings.
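
    A typical fragility-curve evaluation of the kind described can be sketched as a lognormal curve per construction type; the median depths and dispersions below are hypothetical, not the values used in the study.

      import numpy as np
      from scipy.stats import norm

      def fragility(depth_m, median_m, beta):
          """Lognormal fragility curve: P(damage state exceeded | flow depth)."""
          return norm.cdf(np.log(depth_m / median_m) / beta)

      # illustrative parameters per construction type (hypothetical values)
      params = {"wood": (2.0, 0.6), "steel": (4.0, 0.6), "reinforced_concrete": (6.5, 0.7)}
      depths = np.array([1.0, 3.0, 5.0])     # modelled inundation depths at three parcels (m)
      for kind, (med, beta) in params.items():
          print(kind, fragility(depths, med, beta).round(2))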

  12. Application of the H/V and SPAC Method to Estimate a 3D Shear Wave Velocity Model, in the City of Coatzacoalcos, Veracruz.

    NASA Astrophysics Data System (ADS)

    Morales, L. E. A. P.; Aguirre, J.; Vazquez Rosas, R.; Suarez, G.; Contreras Ruiz-Esparza, M. G.; Farraz, I.

    2014-12-01

    Methods that use seismic noise or microtremors have become very useful tools worldwide due to their low cost, the relative simplicity of data collection, the fact that they are non-invasive (there is no need to alter or perforate the study site), and their relatively simple analysis procedures. Nevertheless, the geological structures estimated by these methods are assumed to be parallel, isotropic and homogeneous layers. Consequently, the precision of the estimated structure is lower than that from conventional seismic methods. In light of these facts, this study aimed at a new way to interpret the results obtained from seismic noise methods. Seven triangular SPAC (Aki, 1957) arrays were deployed in the city of Coatzacoalcos, Veracruz, varying in size from 10 to 100 meters. From the autocorrelation between the stations of each array, a Rayleigh wave phase velocity dispersion curve was calculated. This dispersion curve was used to obtain a parallel-layer S-wave velocity (VS) structure for the study site. Subsequently, the horizontal-to-vertical spectral ratio of microtremors, H/V (Nogoshi and Igarashi, 1971; Nakamura, 1989, 2000), was calculated for each vertex of the SPAC triangular arrays, and the fundamental frequency was estimated from the H/V spectrum at each vertex. Using the H/V spectral ratio curves interpreted as a proxy for the Rayleigh wave ellipticity curve, a VS structure was inverted for each vertex of the SPAC arrays. Lastly, the VS structures were combined into a 3D velocity model with an exploration depth of approximately 100 meters and velocities ranging from 206 m/s to 920 m/s. The 3D model revealed a thinning of the low velocity layers, in good agreement with the variation of the fundamental frequencies observed at each vertex. With this kind of analysis, a preliminary model can be obtained as a first approximation, so that more detailed studies can be conducted to assess the geological characterization of a specific site. The continuous development of methods that use microtremors creates many areas of interest in the field of seismic engineering, which is one reason these methods have gained increasing presence worldwide.
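
    The core SPAC inversion step, recovering a phase velocity from one azimuthally averaged coherency value via rho = J0(2*pi*f*r/c), can be sketched as follows; the array radius, frequency and coherency are toy values.

      import numpy as np
      from scipy.special import j0
      from scipy.optimize import brentq

      def spac_phase_velocity(freq_hz, rho_obs, radius_m, c_min=100.0, c_max=3000.0):
          """Invert one SPAC coefficient rho_obs = J0(2*pi*f*r/c) for phase velocity c (m/s).

          Assumes rho_obs lies on the first descending branch of J0, the usual working range."""
          misfit = lambda c: j0(2.0 * np.pi * freq_hz * radius_m / c) - rho_obs
          return brentq(misfit, c_min, c_max)

      # toy example: 10 m array radius, coherency of 0.6 observed at 8 Hz
      print(spac_phase_velocity(8.0, 0.6, 10.0))   # roughly 380 m/s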

  13. Survival analysis using inverse probability of treatment weighted methods based on the generalized propensity score.

    PubMed

    Sugihara, Masahiro

    2010-01-01

    In survival analysis, treatment effects are commonly evaluated based on survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one of the approaches utilized to adjust for confounding factors between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log-rank test based on the generalized propensity score for survival data. This makes it possible to compare the group differences of IPTW Kaplan-Meier estimators of survival curves using an IPTW log-rank test for multi-valued treatments. As causal treatment effects, the hazard ratio can be estimated using the IPTW approach. If the treatments correspond to ordered levels of a treatment, the proposed method can be easily extended to the analysis of treatment effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results of the proposed method suggested that pravastatin treatment reduces the risk of CHD and that compliance to pravastatin treatment is important for the prevention of CHD. (c) 2009 John Wiley & Sons, Ltd.
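
    The weighted Kaplan-Meier step can be sketched directly in NumPy; the weights here are random stand-ins for 1/Pr(observed treatment | covariates), i.e. the inverse generalized propensity score, which would normally be estimated by a separate regression model.

      import numpy as np

      def weighted_kaplan_meier(time, event, weights):
          """IPTW-weighted Kaplan-Meier estimator (event = 1 if the endpoint occurred)."""
          order = np.argsort(time)
          time, event, weights = time[order], event[order], weights[order]
          survival, s = [], 1.0
          for t in np.unique(time[event == 1]):
              at_risk = weights[time >= t].sum()               # weighted number still at risk
              d = weights[(time == t) & (event == 1)].sum()    # weighted events at time t
              s *= 1.0 - d / at_risk
              survival.append((t, s))
          return np.array(survival)

      rng = np.random.default_rng(1)
      time = rng.exponential(10.0, 200)
      event = (rng.random(200) < 0.7).astype(int)
      weights = 1.0 / rng.uniform(0.2, 0.8, 200)               # stand-in inverse propensities
      print(weighted_kaplan_meier(time, event, weights)[:5])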

  14. Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits.

    PubMed

    Gámez Serna, Citlalli; Ruichek, Yassine

    2017-06-14

    A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits without considering factors like road geometry. In this paper, we consider road curvature together with speed limits to automatically adjust the vehicle's speed to the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, 'curve analysis extraction' and 'speed limits database creation' are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high-curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between the current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits and consequently to increase safety and comfort for the passenger.
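
    The curve-speed idea can be sketched from path geometry alone: curvature from three consecutive GPS points and a speed cap from a comfortable lateral acceleration (the 2 m/s^2 value below is an assumption, not the paper's tuning).

      import numpy as np

      def curvature(p0, p1, p2):
          """Curvature (1/R) of the circle through three consecutive path points (x, y in metres)."""
          a = np.linalg.norm(p1 - p0); b = np.linalg.norm(p2 - p1); c = np.linalg.norm(p2 - p0)
          area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0]))
          return 4.0 * area / (a * b * c + 1e-12)

      def ideal_speed(kappa, speed_limit_mps, a_lat_max=2.0):
          """Cap the posted limit by the comfortable lateral acceleration a_lat_max (m/s^2)."""
          return min(speed_limit_mps, np.sqrt(a_lat_max / max(kappa, 1e-9)))

      pts = np.array([[0.0, 0.0], [10.0, 1.0], [20.0, 4.0]])   # toy path samples (m)
      k = curvature(*pts)
      print(k, ideal_speed(k, 90 / 3.6))   # ~0.02 1/m, curve speed of roughly 10 m/s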

  15. Using Floquet periodicity to easily calculate dispersion curves and wave structures of homogeneous waveguides

    NASA Astrophysics Data System (ADS)

    Hakoda, Christopher; Rose, Joseph; Shokouhi, Parisa; Lissenden, Clifford

    2018-04-01

    Dispersion curves are essential to any guided-wave-related project. The Semi-Analytical Finite Element (SAFE) method has become the conventional way to compute dispersion curves for homogeneous waveguides. However, only recently has a general SAFE formulation for commercial and open-source software become available, meaning that until now SAFE analyses have been variable and more time consuming than desirable. Likewise, the Floquet boundary conditions enable analysis of waveguides with periodicity and have been an integral part of the development of metamaterials. In fact, we have found the use of Floquet boundary conditions to be an extremely powerful tool for homogeneous waveguides, too. The nuances of using periodic boundary conditions for homogeneous waveguides that do not exhibit periodicity are discussed. Comparisons between this method and SAFE are made for selected homogeneous waveguide applications. The COMSOL Multiphysics software is used for the results shown, but any standard finite element software that can implement Floquet periodicity (user-defined or built-in) should suffice. Finally, we identify a number of complex waveguides for which dispersion curves can be found with relative ease by using the periodicity inherent to the Floquet boundary conditions.

  16. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with measurements from an experimental setup conforming to the AMCA fan performance testing standard.

  17. High-resolution melting analysis (HRM) for differentiation of four major Taeniidae species in dogs Taenia hydatigena, Taenia multiceps, Taenia ovis, and Echinococcus granulosus sensu stricto.

    PubMed

    Dehghani, Mansoureh; Mohammadi, Mohammad Ali; Rostami, Sima; Shamsaddini, Saeedeh; Mirbadie, Seyed Reza; Harandi, Majid Fasihi

    2016-07-01

    Tapeworms of the genus Taenia include several species of important parasites with considerable medical and veterinary significance. Accurate identification of these species in dogs is the prerequisite of any prevention and control program. Here, we have applied an efficient method for differentiating four major taeniid species in dogs, i.e., Taenia hydatigena, T. multiceps, T. ovis, and Echinococcus granulosus sensu stricto. High-resolution melting (HRM) analysis is a simpler, less expensive, and faster technique than conventional DNA-based assays and enables us to detect PCR amplicons in a closed system. Metacestode samples were collected from sheep at local abattoirs. All the isolates had already been identified by PCR-sequencing, and their sequence data were deposited in GenBank. Real-time PCR coupled with HRM analysis targeting the mitochondrial cox1 and ITS1 genes was used to differentiate taeniid species. Distinct melting curves were obtained from the ITS1 region, enabling accurate differentiation of the three Taenia species and E. granulosus in dogs. The HRM curves of the Taenia species and E. granulosus were clearly separated at Tm values of 85 to 87 °C. In addition, double-peak melting curves were produced in mixed infections. Cox1 melting curves were not decisive enough to distinguish the four taeniids. In this work, the efficiency of HRM analysis for differentiating four major taeniid species in dogs has been demonstrated using the ITS1 gene.
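
    The basic melt-curve reading behind HRM, taking Tm as the temperature where -dF/dT peaks, can be sketched on a synthetic sigmoid (the 86 °C value is arbitrary):

      import numpy as np

      def melting_temperature(temp_c, fluorescence):
          """Tm from a melt curve: temperature at the maximum of -dF/dT."""
          dfdt = -np.gradient(fluorescence, temp_c)
          return temp_c[np.argmax(dfdt)]

      T = np.linspace(75.0, 95.0, 400)                 # synthetic melt curve around Tm = 86 C
      F = 1.0 / (1.0 + np.exp((T - 86.0) / 0.4))
      print(melting_temperature(T, F))                 # ~86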

  18. Refined hierarchical kinematics quasi-3D Ritz models for free vibration analysis of doubly curved FGM shells and sandwich shells with FGM core

    NASA Astrophysics Data System (ADS)

    Fazzolari, Fiorenzo A.; Carrera, Erasmo

    2014-02-01

    In this paper, the Ritz minimum energy method, based on the use of the Principle of Virtual Displacements (PVD), is combined with refined Equivalent Single Layer (ESL) and Zig Zag (ZZ) shell models hierarchically generated by exploiting the use of Carrera's Unified Formulation (CUF), in order to engender the Hierarchical Trigonometric Ritz Formulation (HTRF). The HTRF is then employed to carry out the free vibration analysis of doubly curved shallow and deep functionally graded material (FGM) shells. The PVD is further used in conjunction with the Gauss theorem to derive the governing differential equations and related natural boundary conditions. Donnell-Mushtari's shallow shell-type equations are given as a particular case. Doubly curved FGM shells and doubly curved sandwich shells made up of isotropic face sheets and FGM core are investigated. The proposed shell models are widely assessed by comparison with the literature results. Two benchmarks are provided and the effects of significant parameters such as stacking sequence, boundary conditions, length-to-thickness ratio, radius-to-length ratio and volume fraction index on the circular frequency parameters and modal displacements are discussed.

  19. Application of multi-criteria decision analysis in prediction of groundwater resources potential: A case of Oke-Ana, Ilesa Area Southwestern, Nigeria

    NASA Astrophysics Data System (ADS)

    Akinlalu, A. A.; Adegbuyiro, A.; Adiat, K. A. N.; Akeredolu, B. E.; Lateef, W. Y.

    2017-06-01

    The groundwater potential of the Oke-Ana area, southwestern Nigeria, has been evaluated using an integration of the electrical resistivity method, remote sensing and geographic information systems. The effect of five hydrogeological indices, namely lineament density, drainage density, lithology, overburden thickness and aquifer layer resistivity, on groundwater occurrence was established. A multi-criteria decision analysis technique was employed to assign a weight to each index using the concept of the analytical hierarchy process. The assigned weights were normalized and a consistency ratio was established. In order to evaluate the groundwater potential of Oke-Ana, sixty-seven (67) vertical electrical sounding points were occupied. Ten curve types were delineated in the study area, varying from simple three-layer A- and H-type curves to the more complex four-, five- and six-layer AA, HA, KH, QH, AKH, HKH, KHA and KHKH curves. Four subsurface geo-electric sequences of top soil, weathered layer, partially weathered/fractured basement and fresh basement were delineated in the area. The analytical process assisted in classifying Oke-Ana into low, medium and high groundwater potential zones. Validation of the model from well information and two aborted boreholes suggests 70% agreement.
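
    The analytical hierarchy process step (principal-eigenvector weights plus a consistency ratio) can be sketched as follows; the pairwise comparison matrix shown is hypothetical and not the one used in the study.

      import numpy as np

      def ahp_weights(pairwise):
          """Principal-eigenvector weights and consistency ratio for an AHP pairwise matrix."""
          n = pairwise.shape[0]
          vals, vecs = np.linalg.eig(pairwise)
          k = np.argmax(vals.real)
          w = np.abs(vecs[:, k].real)
          w /= w.sum()
          ci = (vals.real[k] - n) / (n - 1)                 # consistency index
          ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]      # Saaty's random index
          return w, ci / ri                                 # weights, consistency ratio (<0.1 is acceptable)

      # hypothetical 5x5 comparison of lineament density, drainage density, lithology,
      # overburden thickness and aquifer resistivity
      A = np.array([[1,   2,   3,   2,   4],
                    [1/2, 1,   2,   1,   3],
                    [1/3, 1/2, 1,   1/2, 2],
                    [1/2, 1,   2,   1,   3],
                    [1/4, 1/3, 1/2, 1/3, 1]], dtype=float)
      w, cr = ahp_weights(A)
      print(w.round(3), round(cr, 3))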

  20. The initial rise method extended to multiple trapping levels in thermoluminescent materials.

    PubMed

    Furetta, C; Guzmán, S; Ruiz, B; Cruz-Zaragoza, E

    2011-02-01

    The well-known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in a phosphor material. However, when the glow curve is more complex, a wide peak and some shoulders appear in the structure, and the straightforward application of the Initial Rise Method is not valid because multiple trapping levels are involved, making the thermoluminescent analysis difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is to extend the well-known Initial Rise Method to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and oregano spices because the shape of their thermoluminescent glow curves suggests a trap distribution instead of a single trapping level. Copyright © 2010 Elsevier Ltd. All rights reserved.
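
    The single-trap basis of the method is a short calculation: on the low-temperature rising edge I(T) ~ exp(-E/kT), so the slope of ln(I) against 1/(kT) gives -E. A minimal sketch follows (synthetic data, single trap only, not the multi-level extension of the paper):

      import numpy as np

      K_BOLTZ = 8.617e-5   # Boltzmann constant, eV/K

      def initial_rise_energy(temp_k, intensity):
          """Activation energy E (eV) from the initial-rise region of a glow peak."""
          slope, _ = np.polyfit(1.0 / (K_BOLTZ * temp_k), np.log(intensity), 1)
          return -slope

      T = np.linspace(320.0, 360.0, 30)                # initial-rise region for a trap at 1.0 eV
      I = 1e12 * np.exp(-1.0 / (K_BOLTZ * T))
      print(initial_rise_energy(T, I))                 # ~1.0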

  1. Functional analysis and classification of phytoplankton based on data from an automated flow cytometer.

    PubMed

    Malkassian, Anthony; Nerini, David; van Dijk, Mark A; Thyssen, Melilotus; Mante, Claude; Gregori, Gerald

    2011-04-01

    Analytical flow cytometry (FCM) is well suited for the analysis of phytoplankton communities in fresh and sea waters. The measurement of light scatter and autofluorescence properties of particles by FCM provides optical fingerprints, which enables different phytoplankton groups to be separated. A submersible version of the CytoSense flow cytometer (the CytoSub) has been designed for in situ autonomous sampling and analysis, making it possible to monitor phytoplankton at a short temporal scale and obtain accurate information about its dynamics. For data analysis, a manual clustering is usually performed a posteriori: data are displayed on histograms and scatterplots, and group discrimination is made by drawing and combining regions (gating). The purpose of this study is to provide greater objectivity in the data analysis by applying a nonmanual and consistent method to automatically discriminate clusters of particles. In other words, we seek partitioning methods based on the optical fingerprints of each particle. As the CytoSense is able to record the full pulse shape for each variable, it quickly generates a large and complex dataset to analyze. The shape, length, and area of each curve were chosen as descriptors for the analysis. To test the developed method, numerical experiments were performed on simulated curves. Then, the method was applied and validated on phytoplankton culture data. Promising results have been obtained with a mixture of various species whose optical fingerprints overlapped considerably and could not be accurately separated using manual gating. Copyright © 2011 International Society for Advancement of Cytometry.

  2. A new method of evaluating tight gas sands pore structure from nuclear magnetic resonance (NMR) logs

    NASA Astrophysics Data System (ADS)

    Xiao, Liang; Mao, Zhi-qiang; Xie, Xiu-hong

    2016-04-01

    Tight gas sands typically display ultra-low porosity and permeability, high irreducible water saturation, low resistivity contrast, complicated pore structure and strong heterogeneity; these characteristics make conventional evaluation methods ineffective, so many effective gas-bearing formations are misclassified as dry zones or water-saturated layers and cannot be identified and exploited. To improve the evaluation of tight gas sands, the best approach is to quantitatively characterize the rock pore structure. Mercury injection capillary pressure (MICP) curves are advantageous in predicting formation pore structure; however, MICP measurements are limited by environmental and economic factors, so formation pore structure cannot be evaluated continuously along a well. Nuclear magnetic resonance (NMR) logs are considered promising for evaluating rock pore structure. Generally, the best way to continuously and quantitatively evaluate the pore structure of tight gas sands is to construct pseudo capillary pressure (Pc) curves from NMR logs. In this paper, based on laboratory results for 20 core samples drilled from tight gas sandstone reservoirs of the Sichuan basin and subjected to both MICP and NMR measurements, piecewise power-function relationships between the NMR transverse relaxation time T2 and the pore-throat radius Rc are established. A novel method for transforming the NMR reverse cumulative curve into a pseudo Pc curve is proposed, and the corresponding model is established based on formation classification. Using this model, formation pseudo Pc curves can be synthesized continuously. The pore-throat radius distribution and pore structure evaluation parameters, such as the average pore-throat radius (Rm), the threshold pressure (Pd) and the maximum pore-throat radius (Rmax), can also be precisely extracted. When the method is extended to field applications, several tight gas sandstone reservoirs are processed and the predicted results are compared with core-derived results; the good consistency between them illustrates the dependability of the proposed method. Compared with previous methods, the presented model is more theoretically grounded and more widely applicable. Based on the evaluated results, our target tight gas sands are well characterized and many potential gas-bearing layers are effectively identified.

  3. Learning Curve of the Application of Huang Three-Step Maneuver in a Laparoscopic Spleen-Preserving Splenic Hilar Lymphadenectomy for Advanced Gastric Cancer

    PubMed Central

    Huang, Ze-Ning; Huang, Chang-Ming; Zheng, Chao-Hui; Li, Ping; Xie, Jian-Wei; Wang, Jia-Bin; Lin, Jian-Xian; Lu, Jun; Chen, Qi-Yue; Cao, Long-long; Lin, Mi; Tu, Ru-Hong

    2016-01-01

    The aim of this study was to investigate the learning curve of the Huang 3-step maneuver, which was summarized and proposed by our center for the treatment of advanced upper gastric cancer (AUGC). From April 2012 to March 2013, 130 consecutive patients who underwent laparoscopic spleen-preserving splenic hilar lymphadenectomy (LSPL) performed by a single surgeon using the Huang 3-step maneuver were retrospectively analyzed. The learning curve was analyzed based on the moving average (MA) method and the cumulative sum (CUSUM) method. Surgical outcomes, short-term outcomes, and follow-up results before and after the learning curve were compared. A stepwise multivariate logistic regression was used to determine the factors that affect the operative time of the Huang 3-step maneuver. Based on the CUSUM, the learning curve for the Huang 3-step maneuver was divided into phase 1 (cases 1–40) and phase 2 (cases 41–130). The dissection time (DT) (P < 0.001), blood loss (BL) (P < 0.001), and number of vessels injured in phase 2 were significantly less than those in phase 1. There were no significant differences in the clinicopathological characteristics, short-term outcomes, or major postoperative complications between the learning curve phases. Univariate and multivariate analyses revealed that body mass index (BMI), short gastric vessels (SGVs), splenic hilar artery (SpA) type, and learning curve phase were significantly associated with DT. In the entire group, 124 patients were followed for a median time of 23.0 months (range, 3–30 months). There was no significant difference in survival curves between the phases. AUGC patients with a BMI less than 25 kg/m2, a small number of SGVs, and a concentrated type of SpA are ideal candidates for surgeons who are in phase 1 of the learning curve. PMID:27043698

  4. Dose Calibration of the ISS-RAD Fast Neutron Detector

    NASA Technical Reports Server (NTRS)

    Zeitlin, C.

    2015-01-01

    The ISS-RAD instrument has been fabricated by Southwest Research Institute and delivered to NASA for flight to the ISS in late 2015 or early 2016. ISS-RAD is essentially two instruments that share a common interface to ISS. The two instruments are the Charged Particle Detector (CPD), which is very similar to the MSL-RAD detector on Mars, and the Fast Neutron Detector (FND), which is a boron-loaded plastic scintillator with readout optimized for the 0.5 to 10 MeV energy range. As the FND is completely new, it has been necessary to develop methodology to allow it to be used to measure the neutron dose and dose equivalent. This talk will focus on the methods developed and their implementation using calibration data obtained in quasi-monoenergetic (QMN) neutron fields at the PTB facility in Braunschweig, Germany. The QMN data allow us to determine an approximate response function, from which we estimate dose and dose equivalent contributions per detected neutron as a function of the pulse height. We refer to these as the "pSv per count" curves for dose equivalent and the "pGy per count" curves for dose. The FND is required to provide a dose equivalent measurement with an accuracy of ±10% of the known value in a calibrated AmBe field. Four variants of the analysis method were developed, corresponding to two different approximations of the pSv per count curve, and two different implementations, one for real-time analysis onboard ISS and one for ground analysis. We will show that the preferred method, when applied in either real-time or ground analysis, yields good accuracy for the AmBe field. We find that the real-time algorithm is more susceptible to chance-coincidence background than is the algorithm used in ground analysis, so that the best estimates will come from the latter.
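
    The bookkeeping implied by a "pSv per count" curve is a weighted sum over the pulse-height histogram; the sketch below uses arbitrary bins and curve values, not the calibrated PTB curve.

      import numpy as np

      def dose_equivalent_pSv(pulse_heights, psv_per_count, bin_edges):
          """Total dose equivalent: counts per pulse-height bin times the per-count pSv value."""
          counts, _ = np.histogram(pulse_heights, bins=bin_edges)
          return float(np.dot(counts, psv_per_count))

      edges = np.linspace(0.0, 10.0, 11)               # pulse-height bins (arbitrary units)
      curve = np.linspace(5.0, 60.0, 10)               # hypothetical pSv-per-count values
      events = np.random.uniform(0.0, 10.0, 500)       # detected neutron pulse heights
      print(dose_equivalent_pSv(events, curve, edges))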

  5. Direct biomechanical modeling of trabecular bone using a nonlinear manifold-based volumetric representation

    NASA Astrophysics Data System (ADS)

    Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.

    2017-03-01

    Osteoporosis is associated with increased fracture risk. Recent advancement in the area of in vivo imaging allows segmentation of trabecular bone (TB) microstructures, which is a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of the trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network, and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. Also, the method generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied on the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold model over a cubical volume of interest (VOI), and its correlation with the YM computed using micro-CT based conventional finite-element analysis over the same VOI was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB, and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.

  6. Spatial resolution measurements by Radia diagnostic software with SEDENTEXCT image quality phantom in cone beam CT for dental use.

    PubMed

    Watanabe, Hiroshi; Nomura, Yoshikazu; Kuribayashi, Ami; Kurabayashi, Tohru

    2018-02-01

    We aimed to employ the Radia diagnostic software with the SEDENTEXCT (safety and efficacy of a new and emerging dental X-ray modality) image quality (IQ) phantom in CT, and to evaluate its validity. The SEDENTEXCT IQ phantom and Radia diagnostic software were employed. The phantom was scanned using one medical full-body CT and two dentomaxillofacial cone beam CTs. The obtained images were imported into the Radia software, and the spatial resolution outputs were evaluated. The oversampling method was employed using our original wire phantom as a reference. The resultant modulation transfer function (MTF) curves were compared. The null hypothesis was that MTF curves generated using both methods would be in agreement. One-way analysis of variance tests were applied to the f50 and f10 values from the MTF curves. The f10 values were subjectively confirmed by observing the line pair modules. The Radia software reported the MTF curves on the xy-plane of the CT scans, but could not return f50 and f10 values on the z-axis. The null hypothesis concerning the reported MTF curves on the xy-plane was rejected. There were significant differences between the results of the Radia software and our reference method, except for the f10 values of the CS9300. These findings were consistent with our line pair observations. We evaluated the validity of the Radia software with the SEDENTEXCT IQ phantom. The data were provided semi-automatically, albeit with problems, and differed statistically from our reference. We hope the manufacturer will overcome these limitations.

  7. Ultrafast current imaging by Bayesian inversion

    DOE Data Explorer

    Somnath, Suhas; Law, Kody J. H.; Morozovska, Anna; Maksymovych, Petro; Kim, Yunseok; Lu, Xiaoli; Alexe, Marin; Archibald, Richard K; Kalinin, Sergei V; Jesse, Stephen; Vasudevan, Rama K

    2016-01-01

    Spectroscopic measurements of current-voltage curves in scanning probe microscopy are the earliest and among the most common methods for characterizing local energy-dependent electronic properties, providing insight into superconductive, semiconductor, and memristive behaviors. However, the quasistatic nature of these measurements renders them extremely slow. Here, we demonstrate a fundamentally new approach for dynamic spectroscopic current imaging via full information capture and Bayesian inference analysis. This "general-mode I-V" method allows three orders of magnitude faster rates than presently possible. The technique is demonstrated by acquiring I-V curves in ferroelectric nanocapacitors, yielding >100,000 I-V curves in <20 minutes. This allows detection of switching currents in the nanoscale capacitors, as well as determination of dielectric constant. These experiments show the potential for the use of full information capture and Bayesian inference towards extracting physics from rapid I-V measurements, and can be used for transport measurements in both atomic force and scanning tunneling microscopy. The data were analyzed using pycroscopy, an open-source Python package available at https://github.com/pycroscopy/pycroscopy.

  8. Variable distributional characteristics of substrate utilization patterns in activated sludge plants in Kuwait.

    PubMed

    Al-Mutairi, N Z

    2009-02-01

    The objective of this study was to determine the magnitude of microbial functional potential and community structure among three different WWTPs using the Lorenz curve method and to find the effect of seasonal variation on patterns of substrate utilization. The Lorenz curve method was sensitive enough to detect short-term changes in microbial functional diversity between the Riqqa, Umm Al-Haiman and Al-Jahra activated sludge systems and showed seasonal variations of the utilized carbon sources. The Gini coefficient ranged from 0.21 to 0.8. Lorenz curves seemed particularly suitable to present microbial heterogeneity in terms of inequality and to highlight the relative contribution of low- and high-functional diversity for the three different types of mixed liquors. Correlation analysis of the experimental data showed that the complement of the Gini coefficient was strongly and positively correlated with the Shannon index (r(xy)=0.89), evenness (r(xy)=0.91), and AWCD (r(xy)=0.95) at the 95% level of significance (alpha=0.05).
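
    As an illustration of the Lorenz curve approach, the short Python sketch below computes a Lorenz curve and Gini coefficient from a vector of substrate-utilization readings; it is a minimal example under assumed data, not the authors' code, and the well absorbances are hypothetical.

      import numpy as np

      def lorenz_gini(values):
          """Return Lorenz-curve coordinates and the Gini coefficient
          for non-negative substrate-utilization values."""
          v = np.sort(np.asarray(values, dtype=float))          # ascending order
          cum = np.cumsum(v) / v.sum()                          # cumulative share of total activity
          x = np.arange(1, v.size + 1) / v.size                 # cumulative share of substrates
          # Trapezoidal area under the Lorenz curve; Gini = 1 - 2 * area
          area = np.trapz(np.concatenate(([0.0], cum)), np.concatenate(([0.0], x)))
          return x, cum, 1.0 - 2.0 * area

      # Hypothetical well absorbances from one community-level physiological profile
      absorbances = [0.05, 0.12, 0.30, 0.33, 0.41, 0.58, 0.77, 0.95]
      x, y, gini = lorenz_gini(absorbances)
      print(f"Gini coefficient: {gini:.2f}")   # 1 - Gini is the quantity correlated with the Shannon index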

  9. Dynamic cardiac PET imaging: extraction of time-activity curves using ICA and a generalized Gaussian distribution model.

    PubMed

    Mabrouk, Rostom; Dubeau, François; Bentabet, Layachi

    2013-01-01

    Kinetic modeling of metabolic and physiologic cardiac processes in small animals requires an input function (IF) and tissue time-activity curves (TACs). In this paper, we present a mathematical method based on independent component analysis (ICA) to extract the IF and the myocardium's TACs directly from dynamic positron emission tomography (PET) images. The method assumes a super-Gaussian distribution model for the blood activity, and a sub-Gaussian distribution model for the tissue activity. Our approach was applied on 22 PET measurement sets of small animals, which were obtained from the three most frequently used cardiac radiotracers, namely: desoxy-fluoro-glucose ((18)F-FDG), [(13)N]-ammonia, and [(11)C]-acetate. Our study was extended to PET human measurements obtained with the Rubidium-82 ((82)Rb) radiotracer. The resolved mathematical IF values compare favorably to those derived from curves extracted from regions of interest (ROI), suggesting that the procedure presents a reliable alternative to serial blood sampling for small-animal cardiac PET studies.
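
    A minimal Python sketch of the ICA idea, using scikit-learn's FastICA rather than the authors' implementation: the dynamic PET frames are treated as mixtures and unmixed into a blood-input-like and a tissue-like time course. The array shapes and signal models below are hypothetical.

      import numpy as np
      from sklearn.decomposition import FastICA

      # Hypothetical dynamic PET matrix of shape (n_frames, n_voxels);
      # each column is the time-activity of one voxel inside a cardiac ROI
      rng = np.random.default_rng(0)
      n_frames, n_voxels = 40, 500
      t = np.linspace(0.1, 20, n_frames)[:, None]
      blood = 10 * np.exp(-t)                # sharply peaked, super-Gaussian-like input function
      tissue = 1 - np.exp(-0.3 * t)          # slowly rising myocardial uptake
      mix = rng.uniform(0.2, 0.8, (1, n_voxels))
      dynamic_pet = blood @ mix + tissue @ (1 - mix) + rng.normal(0, 0.05, (n_frames, n_voxels))

      ica = FastICA(n_components=2, random_state=0)
      sources = ica.fit_transform(dynamic_pet)   # columns: candidate IF and tissue TAC (up to sign and scale)
      # In practice the components are identified by shape (the early-peaking one is the input function)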

  10. The standard centrifuge method accurately measures vulnerability curves of long-vesselled olive stems.

    PubMed

    Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon

    2015-01-01

    The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of measurement artifact. We tested this hypothesis in stems of olive (Olea europaea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artifacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  11. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    PubMed

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
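
    One common way to express the single-target single-hit response and fit it with SciPy is sketched below; the parameterization (background, saturation, slope) follows the abstract, but the dose and optical-density values are illustrative assumptions, not the authors' data.

      import numpy as np
      from scipy.optimize import curve_fit

      def single_hit(dose, background, saturation, slope):
          """Single-target single-hit response: optical density rises from background toward saturation."""
          return background + saturation * (1.0 - np.exp(-slope * dose))

      # Hypothetical calibration points (dose in cGy, measured optical density)
      dose = np.array([16, 32, 48, 64, 96, 128], dtype=float)
      od = np.array([0.21, 0.38, 0.52, 0.64, 0.83, 0.97])

      popt, pcov = curve_fit(single_hit, dose, od, p0=[0.1, 1.5, 0.01])
      background, saturation, slope = popt
      print(f"background={background:.3f}, saturation={saturation:.3f}, slope={slope:.4f}")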

  12. On Correlated-noise Analyses Applied to Exoplanet Light Curves

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Loredo, Thomas J.; Lust, Nate B.; Blecic, Jasmina; Stemm, Madison

    2017-01-01

    Time-correlated noise is a significant source of uncertainty when modeling exoplanet light-curve data. A correct assessment of correlated noise is fundamental to determine the true statistical significance of our findings. Here, we review three of the most widely used correlated-noise estimators in the exoplanet field, the time-averaging, residual-permutation, and wavelet-likelihood methods. We argue that the residual-permutation method is unsound in estimating the uncertainty of parameter estimates. We thus recommend refraining from this method altogether. We characterize the behavior of the time-averaging method's rms-versus-bin-size curves at bin sizes similar to the total observation duration, which may lead to underestimated uncertainties. For the wavelet-likelihood method, we note errors in the published equations and provide a list of corrections. We further assess the performance of these techniques by injecting and retrieving eclipse signals into synthetic and real Spitzer light curves, analyzing the results in terms of the relative-accuracy and coverage-fraction statistics. Both the time-averaging and wavelet-likelihood methods significantly improve the estimate of the eclipse depth over a white-noise analysis (a Markov-chain Monte Carlo exploration assuming uncorrelated noise). However, the corrections are not perfect: when retrieving the eclipse depth from Spitzer data sets, these methods covered the true (injected) depth within the 68% credible region in only ~45%-65% of the trials. Lastly, we present our open-source model-fitting tool, Multi-Core Markov-Chain Monte Carlo (MC3). This package uses Bayesian statistics to estimate the best-fitting values and the credible regions for the parameters for a (user-provided) model. MC3 is a Python/C code, available at https://github.com/pcubillos/MCcubed.
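
    The time-averaging check can be sketched in a few lines of Python: bin the light-curve residuals at increasing bin sizes and compare the binned rms with the white-noise expectation. This is a simplified illustration, not the MC3 implementation, and the residuals below are synthetic.

      import numpy as np

      def rms_vs_binsize(residuals, max_size=None):
          """rms of the binned residuals as a function of bin size; for white noise
          the curve falls as 1/sqrt(bin size), and excess rms signals correlated noise."""
          residuals = np.asarray(residuals, dtype=float)
          n = residuals.size
          sizes = np.arange(1, (max_size or n // 10) + 1)
          rms = []
          for m in sizes:
              nbins = n // m
              binned = residuals[:nbins * m].reshape(nbins, m).mean(axis=1)
              rms.append(np.sqrt(np.mean(binned ** 2)))
          return sizes, np.array(rms)

      # Synthetic residuals: white noise plus a slow sinusoidal drift (correlated component)
      rng = np.random.default_rng(1)
      t = np.linspace(0, 1, 2000)
      res = rng.normal(0, 1e-3, t.size) + 5e-4 * np.sin(2 * np.pi * 3 * t)
      sizes, rms = rms_vs_binsize(res)
      beta = rms / (rms[0] / np.sqrt(sizes))   # beta > 1 indicates time-correlated noise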

  13. Shiftwork and Diurnal Salivary Cortisol Patterns Among Police Officers

    PubMed Central

    Charles, Luenda E.; Fekedulegn, Desta; Burchfiel, Cecil M.; Hartley, Tara A.; Andrew, Michael E.; Violanti, John M.; Miller, Diane B.

    2016-01-01

    Objective To investigate associations between shiftwork and diurnal salivary cortisol among 319 police officers (77.7% men). Methods Information on shiftwork was obtained from the City of Buffalo, NY electronic payroll records. Saliva was collected using Salivettes at seven time points and analyzed for free cortisol concentrations (nmol/L) using a chemiluminescence immunoassay. Mean slopes and areas under the curve were compared across shift schedule using analysis of variance (ANOVA)/analysis of covariance (ANCOVA). Results Officers working primarily on the night shift had a significantly shallower slope. Mean slope (nmol/L/minute) of the cortisol curve varied significantly across shifts (day: −0.00332 ± 0.00017, afternoon: −0.00313 ± 0.00018, night: −0.00257 ± 0.0002); adjusted P = 0.023. Conclusions Our results suggest that night shiftwork is a work-place factor that may alter the response of the hypothalamic–pituitary–adrenal (HPA) axis to the circadian cues responsible for the pattern of the diurnal cortisol curve. PMID:27129020
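
    For reference, the diurnal slope and area under the curve used in such analyses reduce to a linear fit and a trapezoidal integral; the sampling times and cortisol values in this Python sketch are hypothetical, not study data.

      import numpy as np

      # Hypothetical saliva sampling times (minutes after waking) and cortisol concentrations (nmol/L)
      minutes = np.array([0, 15, 30, 45, 120, 480, 840], dtype=float)
      cortisol = np.array([12.0, 16.5, 18.0, 15.0, 10.0, 5.5, 3.0])

      slope = np.polyfit(minutes, cortisol, 1)[0]     # diurnal slope in nmol/L per minute
      auc_ground = np.trapz(cortisol, minutes)        # area under the curve with respect to ground
      print(f"slope = {slope:.5f} nmol/L/min, AUC_g = {auc_ground:.1f} nmol/L*min")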

  14. Transit timing analysis of the exoplanet TrES-5 b. Possible existence of the exoplanet TrES-5 c

    NASA Astrophysics Data System (ADS)

    Sokov, Eugene N.; Sokova, Iraida A.; Dyachenko, Vladimir V.; Rastegaev, Denis A.; Burdanov, Artem; Rusov, Sergey A.; Benni, Paul; Shadick, Stan; Hentunen, Veli-Pekka; Salisbury, Mark; Esseiva, Nicolas; Garlitz, Joe; Bretton, Marc; Ogmen, Yenal; Karavaev, Yuri; Ayiomamitis, Anthony; Mazurenko, Oleg; Alonso, David Molina; Velichko, Sergey F.

    2018-06-01

    In this work, we present transit timing variations detected for the exoplanet TrES-5b. To obtain the necessary amount of photometric data for this exoplanet, we organized an international campaign to search for exoplanets based on the Transit Timing Variation (TTV) method and, as a result, we collected 30 new light curves, 15 light curves from the Exoplanet Transit Database (ETD) and 8 light curves from the literature for the timing analysis of the exoplanet TrES-5b. We have detected timing variations with a semi-amplitude of A ≈ 0.0016 days and a period of P ≈ 99 days. We carried out N-body modeling based on the three-body problem. The detected perturbation of TrES-5b may be caused by a second exoplanet in the TrES-5 system. We have calculated the possible mass and resonance of the object: M ≈ 0.24MJup at a 1:2 resonance.

  15. Water impact analysis of space shuttle solid rocket motor by the finite element method

    NASA Technical Reports Server (NTRS)

    Buyukozturk, O.; Hibbitt, H. D.; Sorensen, E. P.

    1974-01-01

    Preliminary analysis showed that the doubly curved triangular shell elements were too stiff for these shell structures. The doubly curved quadrilateral shell elements were found to give much improved results. A total of six load cases were analyzed in this study. The load cases were either those resulting from a static test using reaction straps to simulate the drop conditions or under assumed hydrodynamic conditions resulting from a drop test. The latter hydrodynamic conditions were obtained through an empirical fit of available data. Results obtained from a linear analysis were found to be consistent with results obtained elsewhere with NASTRAN and BOSOR. The nonlinear analysis showed that the originally assumed loads would result in failure of the shell structures. The nonlinear analysis also showed that it was useful to apply internal pressure as a stabilizing influence on collapse. A final analysis with an updated estimate of load conditions resulted in linear behavior up to full load.

  16. Statistically generated weighted curve fit of residual functions for modal analysis of structures

    NASA Technical Reports Server (NTRS)

    Bookout, P. S.

    1995-01-01

    A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
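
    A minimal Python sketch of the weighted second-order polynomial fit: weights are derived from the variance among neighbouring data points so that ragged regions of the residual function are down-weighted. The frequency grid, residual values, and window width are illustrative assumptions, not the test data.

      import numpy as np

      # Hypothetical residual function: frequency points (Hz) and measured residual values
      rng = np.random.default_rng(2)
      freq = np.linspace(5, 50, 40)
      resid = 1.0e-4 + 2.0e-8 * freq ** 2 + rng.normal(0, 2e-6, freq.size)

      # Weight each point by the inverse of the local variance estimated from its neighbours
      local_var = np.array([np.var(resid[max(0, i - 2):i + 3]) for i in range(resid.size)])
      weights = 1.0 / np.maximum(local_var, local_var[local_var > 0].min())

      coeffs = np.polyfit(freq, resid, deg=2, w=np.sqrt(weights))   # weighted 2nd-order fit
      residual_flexibility = np.polyval(coeffs, 0.0)                # low-frequency asymptote of the fit
      print(coeffs, residual_flexibility)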

  17. Injury risk curves for the WorldSID 50th male dummy.

    PubMed

    Petitjean, Audrey; Trosseille, Xavier; Petit, Philippe; Irwin, Annette; Hassan, Joe; Praxl, Norbert

    2009-11-01

    The development of the WorldSID 50th percentile male dummy was initiated in 1997 by the International Organisation for Standardisation (ISO/SC12/TC22/WG5) with the objective of developing a more biofidelic side impact dummy and supporting the adoption of a harmonised dummy into regulations. More than 45 organizations from all around the world have contributed to this effort including governmental agencies, research institutes, car manufacturers and dummy manufacturers. The first production version of the WorldSID 50th male dummy was released in March 2004 and demonstrated an improved biofidelity over existing side impact dummies. Full scale vehicle tests covering a wide range of side impact test procedures were performed worldwide with the WorldSID dummy. However, the vehicle safety performance could not be assessed due to lack of injury risk curves for this dummy. The development of these curves was initiated in 2004 within the framework of ISO/SC12/TC22/WG6 (Injury criteria). In 2008, the ACEA Dummy Task Force (TFD) decided to contribute to this work and offered resources for a project manager to coordinate the effort of a group of volunteer biomechanical experts from international institutions (ISO, EEVC, VRTC/NHTSA, JARI, Transport Canada), car manufacturers (ACEA, Ford, General Motors, Honda, Toyota, Chrysler) and universities (Wayne State University, Ohio State University, Johns Hopkins University, Medical College of Wisconsin) to develop harmonized injury risk curves. An in-depth literature review was conducted. All the available PMHS datasets were identified, and the test configurations and the quality of the results were checked. Criteria were developed for inclusion or exclusion of PMHS tests in the development of the injury risk curves. Data were processed to account for differences in mass and age of the subjects. Finally, injury risk curves were developed using the following statistical techniques: the certainty method, the Mertz/Weber method, logistic regression, survival analysis and the Consistent Threshold Estimate. The paper presents the methods used to check and process the data, select the PMHS tests, and construct the injury risk curves. The PMHS dataset as well as the injury risk curves are provided.

  18. Evaluation of the learning curve for external cephalic version using cumulative sum analysis

    PubMed Central

    Kim, So Yun; Chang, Eun Hye; Kwak, Dong Wook; Ahn, Hyun Kyung; Ryu, Hyun Mi; Kim, Moon Young

    2017-01-01

    Objective We evaluated the learning curve for external cephalic version (ECV) using learning curve-cumulative sum (LC-CUSUM) analysis. Methods This was a retrospective study involving 290 consecutive cases between October 2013 and March 2017. We evaluated the learning curve for ECV in the nullipara and over-para-1 groups using LC-CUSUM analysis, on the assumption that 50% and 70% of ECV procedures succeed, by describing a trend line with a quadratic function with reliable R2 values. Results The overall success rate for ECV was 64.8% (188/290), while the success rates for the nullipara and over-para-1 groups were 56.2% (100/178) and 78.6% (88/112), respectively. The 'H' value, at which the actual failure rate does not differ from the acceptable failure rate, was −3.27 and −1.635 when considering ECV success rates of 50% and 70%, respectively. Consequently, in order to obtain a consistent 50% success rate, we would require 57 nullipara cases, and in order to obtain a consistent 70% success rate, we would require 130 nullipara cases. In contrast, 8 to 10 over-para-1 cases would be required for an expected success rate of 50% and 70% in the over-para-1 group. Conclusion Even a relatively inexperienced physician can experience success with multipara cases and, after accumulating experience, will manage nullipara cases. Further research is required for LC-CUSUM involving several practitioners instead of a single practitioner. This will lead to the gradual implementation of standard learning curve guidelines for ECV. PMID:28791265

  19. Quantitative Detection of Cracks in Steel Using Eddy Current Pulsed Thermography.

    PubMed

    Shi, Zhanqun; Xu, Xiaoyu; Ma, Jiaojiao; Zhen, Dong; Zhang, Hao

    2018-04-02

    Small cracks are common defects in steel and often lead to catastrophic accidents in industrial applications. Various nondestructive testing methods have been investigated for crack detection; however, most current methods focus on qualitative crack identification and image processing. In this study, eddy current pulsed thermography (ECPT) was applied for quantitative crack detection based on derivative analysis of temperature variation. The effects of the excitation parameters on the temperature variation were analyzed in the simulation study. The crack profile and position are identified in the thermal image based on the Canny edge detection algorithm. Then, one or more trajectories are determined through the crack profile in order to identify the crack boundary from its temperature distribution. The slope curve along the trajectory is obtained. Finally, quantitative analysis of the crack sizes was performed by analyzing the features of the slope curves. The experimental verification showed that the crack sizes could be quantitatively detected with errors of less than 1%. Therefore, the proposed ECPT method was demonstrated to be a feasible and effective nondestructive approach for quantitative crack detection.
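
    A simplified Python sketch of the image-processing steps (Canny edge detection on the thermal image, then a slope curve along a trajectory crossing the crack), using scikit-image rather than the authors' toolchain; the synthetic thermal image and the slope threshold are hypothetical.

      import numpy as np
      from skimage import feature

      # Synthetic thermal image (K) with a cooler band standing in for the crack region
      rng = np.random.default_rng(9)
      thermal = np.full((100, 100), 320.0)
      thermal[45:55, 20:80] -= 15.0
      thermal += rng.normal(0, 0.3, thermal.shape)

      # Crack profile and position from Canny edge detection on the normalized image
      norm = (thermal - thermal.min()) / (thermal.max() - thermal.min())
      edges = feature.canny(norm, sigma=2.0)           # boolean outline of the crack

      # Temperature and its derivative (slope curve) along one trajectory crossing the crack
      trajectory = thermal[:, 50]
      slope_curve = np.gradient(trajectory)
      crack_rows = np.where(np.abs(slope_curve) > 3.0)[0]
      crack_width_px = crack_rows.max() - crack_rows.min() if crack_rows.size else 0
      print(edges.sum(), crack_width_px)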

  20. An advanced approach for computer modeling and prototyping of the human tooth.

    PubMed

    Chang, Kuang-Hua; Magdum, Sheetalkumar; Khera, Satish C; Goel, Vijay K

    2003-05-01

    This paper presents a systematic and practical method for constructing accurate computer and physical models that can be employed for the study of human tooth mechanics. The proposed method starts with a histological section preparation of a human tooth. Through tracing outlines of the tooth on the sections, discrete points are obtained and are employed to construct B-spline curves that represent the exterior contours and dentino-enamel junction (DEJ) of the tooth using a least-squares curve fitting technique. The surface skinning technique is then employed to quilt the B-spline curves to create a smooth boundary and DEJ of the tooth using B-spline surfaces. These surfaces are respectively imported into SolidWorks via its application programming interface to create solid models. The solid models are then imported into Pro/MECHANICA Structure for finite element analysis (FEA). The major advantage of the proposed method is that it first generates smooth solid models, instead of finite element models in discretized form. As a result, a more advanced p-FEA can be employed for structural analysis, which usually provides superior results to traditional h-FEA. In addition, the solid model constructed is smooth and can be fabricated with various scales using the solid freeform fabrication technology. This method is especially useful in supporting bioengineering applications, where the shape of the object is usually complicated. A human maxillary second molar is presented to illustrate and demonstrate the proposed method. Note that both the solid and p-FEA models of the molar are presented. However, comparison between p- and h-FEA models is out of the scope of the paper.
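
    The least-squares B-spline fit of a traced contour can be sketched with SciPy's smoothing splines; the traced points below are synthetic and the smoothing factor is an illustrative choice, not the value used by the authors.

      import numpy as np
      from scipy import interpolate

      # Hypothetical points traced from one histological section of the tooth contour
      theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
      x = 5.0 * np.cos(theta) + 0.2 * np.cos(3 * theta)
      y = 6.0 * np.sin(theta) + 0.2 * np.sin(2 * theta)

      # Least-squares smoothing B-spline through the traced points (closed, periodic contour);
      # the smoothing factor s trades fidelity against smoothness
      tck, u = interpolate.splprep([x, y], s=0.5, per=True)
      u_fine = np.linspace(0, 1, 400)
      x_smooth, y_smooth = interpolate.splev(u_fine, tck)
      # Smooth contours from successive sections would then be skinned into a B-spline surface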

  1. Problems in using p-curve analysis and text-mining to detect rate of p-hacking and evidential value

    PubMed Central

    Thompson, Paul A.

    2016-01-01

    Background. The p-curve is a plot of the distribution of p-values reported in a set of scientific studies. Comparisons between ranges of p-values have been used to evaluate fields of research in terms of the extent to which studies have genuine evidential value, and the extent to which they suffer from bias in the selection of variables and analyses for publication, p-hacking. Methods. p-hacking can take various forms. Here we used R code to simulate the use of ghost variables, where an experimenter gathers data on several dependent variables but reports only those with statistically significant effects. We also examined a text-mined dataset used by Head et al. (2015) and assessed its suitability for investigating p-hacking. Results. We show that when there is ghost p-hacking, the shape of the p-curve depends on whether dependent variables are intercorrelated. For uncorrelated variables, simulated p-hacked data do not give the “p-hacking bump” just below .05 that is regarded as evidence of p-hacking, though there is a negative skew when simulated variables are inter-correlated. The way p-curves vary according to features of underlying data poses problems when automated text mining is used to detect p-values in heterogeneous sets of published papers. Conclusions. The absence of a bump in the p-curve is not indicative of lack of p-hacking. Furthermore, while studies with evidential value will usually generate a right-skewed p-curve, we cannot treat a right-skewed p-curve as an indicator of the extent of evidential value, unless we have a model specific to the type of p-values entered into the analysis. We conclude that it is not feasible to use the p-curve to estimate the extent of p-hacking and evidential value unless there is considerable control over the type of data entered into the analysis. In particular, p-hacking with ghost variables is likely to be missed. PMID:26925335
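
    A small Python analogue of the ghost-variable simulation (the study itself used R): each simulated study measures several dependent variables under the null and reports only the smallest p-value, and the resulting p-curve is binned below .05. The sample size, number of ghost variables, and correlation parameter are illustrative assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)

      def ghost_hacked_p(n_subjects=30, n_dvs=5, rho=0.0):
          """One null 'study' with several dependent variables; report only the smallest p-value."""
          cov = np.full((n_dvs, n_dvs), rho) + (1 - rho) * np.eye(n_dvs)
          group_a = rng.multivariate_normal(np.zeros(n_dvs), cov, n_subjects)
          group_b = rng.multivariate_normal(np.zeros(n_dvs), cov, n_subjects)
          return stats.ttest_ind(group_a, group_b, axis=0).pvalue.min()

      p_min = np.array([ghost_hacked_p(rho=0.0) for _ in range(5000)])
      significant = p_min[p_min < 0.05]
      counts, _ = np.histogram(significant, bins=np.linspace(0, 0.05, 6))
      print(counts)   # with uncorrelated DVs the p-curve is roughly flat: no bump just below .05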

  2. Temporal Drivers of Liking Based on Functional Data Analysis and Non-Additive Models for Multi-Attribute Time-Intensity Data of Fruit Chews.

    PubMed

    Kuesten, Carla; Bi, Jian

    2018-06-03

    Conventional drivers-of-liking analysis was extended with a time dimension into temporal drivers of liking (TDOL), based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both direct effects and interaction effects of attributes on consumer overall liking, include the Choquet integral with a fuzzy measure from multi-criteria decision-making, and linear regression based on variance decomposition. Dynamics of TDOL, i.e., the derivatives of the relative-importance functional curves, were also explored. The well-established R packages 'fda', 'kappalab' and 'relaimpo' were used in the paper for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.

  3. A New Approach for Obtaining Cosmological Constraints from Type Ia Supernovae using Approximate Bayesian Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jennings, Elise; Wolf, Rachel; Sako, Masao

    2016-11-09

    Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the `Tripp' and `Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of $\sim 1000$ SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying $\Omega_m$, $w_0$, $\alpha$, $\beta$ and a magnitude offset parameter, with no systematics we obtain $\Delta(w_0) = w_0^{\rm true} - w_0^{\rm best\,fit} = -0.036 \pm 0.109$ (a $\sim 11\%$ $1\sigma$ uncertainty) using the Tripp metric and $\Delta(w_0) = -0.055 \pm 0.068$ (a $\sim 7\%$ $1\sigma$ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain $\Delta(w_0) = -0.062 \pm 0.132$ (a $\sim 14\%$ $1\sigma$ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on $w_0$ with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
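
    To illustrate the ABC idea (not the superABC code itself), the sketch below draws parameters from a prior, forward-simulates data, and accepts draws whose simulated summaries lie within a tolerance of the observed ones. The toy forward model, distance metric, prior range, and tolerance are all hypothetical.

      import numpy as np

      rng = np.random.default_rng(4)

      def forward_model(w0, n=200):
          """Toy stand-in for a light-curve simulation: residuals whose mean shifts with w0."""
          return rng.normal(loc=0.5 * (w0 + 1.0), scale=0.15, size=n)

      observed = forward_model(w0=-1.0)                       # pretend this is the observed data set

      def metric(sim, obs):
          """Distance between simulated and observed summaries (here: mean and scatter)."""
          return np.hypot(sim.mean() - obs.mean(), sim.std() - obs.std())

      # ABC rejection sampling: no likelihood is ever evaluated
      accepted = [w0 for w0 in rng.uniform(-1.5, -0.5, 20000)
                  if metric(forward_model(w0), observed) < 0.02]
      posterior = np.array(accepted)
      print(posterior.mean(), posterior.std())                # approximate posterior for w0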

  4. Analysis of Phenolic Antioxidants in Navy Mobility Fuels by Gas Chromatography-Mass Spectrometry

    DTIC Science & Technology

    2013-06-19

    [Record excerpt: front-matter and figure-caption fragments only.] The report includes calibration curves for the phenolic antioxidant analysis (Appendix A), a chromatogram of an F-76 diesel fuel containing 24 ppm of the AO-37 additive package analyzed using a single-column GC-MS-SIM method, and a chromatogram of a low-sulfur diesel fuel containing 6.25 ppm of the AO-37 additive package analyzed using a dual-column Deans switch GC-MS-SIM method.

  5. Application of texture analysis method for classification of benign and malignant thyroid nodules in ultrasound images.

    PubMed

    Abbasian Ardakani, Ali; Gharbali, Akbar; Mohammadi, Afshin

    2015-01-01

    The aim of this study was to evaluate a computer aided diagnosis (CAD) system with texture analysis (TA) to improve radiologists' accuracy in identification of thyroid nodules as malignant or benign. A total of 70 cases (26 benign and 44 malignant) were analyzed in this study. We extracted up to 270 statistical texture features as a descriptor for each selected region of interest (ROI) in three normalization schemes (default, 3s and 1%-99%). These features were then reduced to the 10 best and most effective features by the lowest probability of classification error and average correlation coefficients (POE+ACC) and the Fisher coefficient (Fisher). These features were analyzed under standard and nonstandard states. For TA of the thyroid nodules, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA) were applied. A first nearest-neighbour (1-NN) classifier was applied to the features resulting from PCA and LDA. NDA features were classified by an artificial neural network (A-NN). Receiver operating characteristic (ROC) curve analysis was used for examining the performance of the TA methods. The best results were obtained with 1%-99% normalization, with features extracted by the POE+ACC algorithm and analyzed by NDA, with an area under the ROC curve (Az) of 0.9722, which corresponds to a sensitivity of 94.45%, specificity of 100%, and accuracy of 97.14%. Our results indicate that TA is a reliable method that can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
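
    A compact scikit-learn sketch of one branch of such a pipeline (feature reduction with PCA followed by a 1-NN classifier, scored by the area under the ROC curve); the feature matrix is synthetic, and the class-dependent shift, component count, and cross-validation scheme are illustrative assumptions, not the study's settings.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import roc_auc_score

      # Synthetic feature matrix: 70 ROIs x 10 selected texture features;
      # labels 0 = benign (26 cases) and 1 = malignant (44 cases), as in the study design
      rng = np.random.default_rng(5)
      y = np.array([0] * 26 + [1] * 44)
      X = rng.normal(size=(70, 10)) + 0.8 * y[:, None]    # class-dependent shift for illustration

      clf = make_pipeline(StandardScaler(), PCA(n_components=5), KNeighborsClassifier(n_neighbors=1))
      scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
      print("ROC AUC:", roc_auc_score(y, scores))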

  6. Pooled Analysis of Long-Term Survival Data From Phase II and Phase III Trials of Ipilimumab in Unresectable or Metastatic Melanoma

    PubMed Central

    Schadendorf, Dirk; Hodi, F. Stephen; Robert, Caroline; Weber, Jeffrey S.; Margolin, Kim; Hamid, Omid; Patt, Debra; Chen, Tai-Tsang; Berman, David M.; Wolchok, Jedd D.

    2015-01-01

    Purpose To provide a more precise estimate of long-term survival observed for ipilimumab-treated patients with advanced melanoma, we performed a pooled analysis of overall survival (OS) data from multiple studies. Methods The primary analysis pooled OS data for 1,861 patients from 10 prospective and two retrospective studies of ipilimumab, including two phase III trials. Patients were previously treated (n = 1,257) or treatment naive (n = 604), and the majority of patients received ipilimumab 3 mg/kg (n = 965) or 10 mg/kg (n = 706). We also conducted a secondary analysis of OS data (n = 4,846) with an additional 2,985 patients from an expanded access program. OS rates were estimated using the Kaplan-Meier method. Results Among 1,861 patients, median OS was 11.4 months (95% CI, 10.7 to 12.1 months), which included 254 patients with at least 3 years of survival follow-up. The survival curve began to plateau around year 3, with follow-up of up to 10 years. Three-year survival rates were 22%, 26%, and 20% for all patients, treatment-naive patients, and previously treated patients, respectively. Including data from the expanded access program, median OS was 9.5 months (95% CI, 9.0 to 10.0 months), with a plateau at 21% in the survival curve beginning around year 3. Conclusion To our knowledge, this is the largest analysis of OS to date for ipilimumab-treated patients with advanced melanoma. We observed a plateau in the survival curve, beginning at approximately 3 years, which was independent of prior therapy or ipilimumab dose. These data add to the evidence supporting the durability of long-term survival in ipilimumab-treated patients with advanced melanoma. PMID:25667295
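
    Kaplan-Meier estimation of a pooled survival curve can be sketched, for example, with the lifelines package in Python; the durations and censoring pattern below are simulated, not the trial data.

      import numpy as np
      from lifelines import KaplanMeierFitter

      # Simulated pooled overall-survival data: time in months, event = True for death, False for censored
      rng = np.random.default_rng(6)
      durations = rng.exponential(scale=14.0, size=300)
      events = rng.uniform(size=300) < 0.8                  # roughly 20% censored

      kmf = KaplanMeierFitter()
      kmf.fit(durations, event_observed=events, label="pooled OS")
      print(kmf.median_survival_time_)                      # analogue of the reported median OS
      print(kmf.predict(36.0))                              # analogue of the 3-year survival rate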

  7. The composition-explicit distillation curve technique: Relating chemical analysis and physical properties of complex fluids.

    PubMed

    Bruno, Thomas J; Ott, Lisa S; Lovestead, Tara M; Huber, Marcia L

    2010-04-16

    The analysis of complex fluids such as crude oils, fuels, vegetable oils and mixed waste streams poses significant challenges arising primarily from the multiplicity of components, the different properties of the components (polarity, polarizability, etc.) and matrix properties. We have recently introduced an analytical strategy that simplifies many of these analyses, and provides the added potential of linking compositional information with physical property information. This aspect can be used to facilitate equation of state development for the complex fluids. In addition to chemical characterization, the approach provides the ability to calculate thermodynamic properties for such complex heterogeneous streams. The technique is based on the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. The analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. By far, the most widely used analytical technique we have used with the ADC is gas chromatography. This has enabled us to study finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oils streams (used automotive and transformer oils). In this special issue of the Journal of Chromatography, specifically dedicated to extraction technologies, we describe the essential features of the advanced distillation curve metrology as an analytical strategy for complex fluids. Published by Elsevier B.V.

  8. Novel Method of Production Decline Analysis

    NASA Astrophysics Data System (ADS)

    Xie, Shan; Lan, Yifei; He, Lei; Jiao, Yang; Wu, Yong

    2018-02-01

    Arps decline curve analysis is the most commonly used method in oil and gas fields due to its minimal data requirements and ease of application. Prediction of production decline based on Arps analysis relies on a known decline type. However, when the coefficient indices are very close under different decline types, it is difficult to recognize the decline trend of the matched curves directly. Because of these difficulties, a new dynamic decline prediction model is introduced, based on the simulation results of multi-factor response experiments and using multiple linear regression of the influencing factors. First, based on a study of the factors affecting production decline, interaction experimental schemes are designed. Based on the simulated results, the annual decline rate is predicted by the decline model. Moreover, the new method is applied to the A gas field of the Ordos Basin as an example to illustrate its reliability. The results confirm that the new model can predict the decline tendency directly without needing to recognize the decline type, and it also achieves high accuracy. Finally, the new method improves the evaluation of gas well production decline in low-permeability gas reservoirs and provides technical support for further understanding of tight gas field development behavior.
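
    For context, a conventional Arps hyperbolic decline fit, against which a regression-based decline model would be compared, can be written in a few lines of SciPy; the rate data, initial guesses, and parameter bounds are hypothetical.

      import numpy as np
      from scipy.optimize import curve_fit

      def arps_hyperbolic(t, qi, di, b):
          """Arps hyperbolic rate decline; b near 0 approaches exponential, b = 1 is harmonic."""
          return qi / np.power(1.0 + b * di * t, 1.0 / b)

      # Hypothetical monthly production rates (t in months, q in 10^4 m^3/d)
      rng = np.random.default_rng(7)
      t = np.arange(0, 36, dtype=float)
      q_obs = arps_hyperbolic(t, qi=12.0, di=0.08, b=0.6) * (1 + rng.normal(0, 0.02, t.size))

      popt, _ = curve_fit(arps_hyperbolic, t, q_obs, p0=[10.0, 0.05, 0.5],
                          bounds=([0.0, 1e-4, 1e-3], [np.inf, 1.0, 2.0]))
      qi, di, b = popt
      annual_decline = 1.0 - arps_hyperbolic(12.0, qi, di, b) / qi   # first-year decline rate
      print(popt, annual_decline)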

  9. Comparison and statistical analysis of four write stability metrics in bulk CMOS static random access memory cells

    NASA Astrophysics Data System (ADS)

    Qiu, Hao; Mizutani, Tomoko; Saraya, Takuya; Hiramoto, Toshiro

    2015-04-01

    The four commonly used metrics for write stability were measured and compared on the same set of 2048 (2k) six-transistor (6T) static random access memory (SRAM) cells fabricated in a 65 nm bulk technology. The preferred metric should be effective for yield estimation and should help predict the edge of stability. Results have demonstrated that all metrics share the same worst SRAM cell. On the other hand, compared with the butterfly curve, which shows non-normality, and the write N-curve, where no cell state flip happens, the bit-line and word-line margins have good normality as well as almost perfect correlation. As a result, both the bit-line and word-line methods prove to be preferred write stability metrics.

  10. Could CCI or FBCI Fully Eliminate the Impact of Curve Flexibility When Evaluating the Surgery Outcome for Thoracic Curve Idiopathic Scoliosis Patient? A Retrospective Study

    PubMed Central

    Ni, Haijian; Zhu, Xiaodong; Li, Ming

    2015-01-01

    Purpose To clarify whether CCI or FBCI could fully eliminate the influence of curve flexibility on the coronal correction rate. Methods We reviewed the medical records of all thoracic curve AIS cases undergoing posterior spinal fusion with all-pedicle-screw systems from June 2011 to July 2013. Radiographical data were collected and calculated. Student t test, Pearson correlation analysis and linear regression analysis were used to analyze the data. Results Sixty patients were included in this study. The mean age was 14.7 y (10-18 y), with 10 males (17%) and 50 females (83%). The average Risser sign was 2.7. The mean thoracic Cobb angle before operation was 51.9°. The mean bending Cobb angle was 27.6° and the mean fulcrum bending Cobb angle was 17.4°. The mean Cobb angle at 2 weeks after surgery was 16.3°. The Pearson correlation coefficient r between CCI and BFR was -0.856 (P<0.001), and between FBCI and FFR was -0.728 (P<0.001). A modified FBCI (M-FBCI) = (CR-0.513)/BFR or a modified CCI (M-CCI) = (CR-0.279)/FFR generated by curve estimation had no significant correlation with FFR (r=-0.08, p=0.950) or with BFR (r=0.123, p=0.349). Conclusions Fulcrum-bending radiographs may better predict the outcome of AIS coronal correction than bending radiographs in thoracic curve AIS patients. Neither CCI nor FBCI can fully eliminate the impact of curve flexibility on the outcome of correction. A modified CCI or FBCI can better evaluate the corrective effects of different surgical techniques or instruments. PMID:25984945

  11. Information Management Systems in the Undergraduate Instrumental Analysis Laboratory.

    ERIC Educational Resources Information Center

    Merrer, Robert J.

    1985-01-01

    Discusses two applications of Laboratory Information Management Systems (LIMS) in the undergraduate laboratory. They are the coulometric titration of thiosulfate with electrogenerated triiodide ion and the atomic absorption determination of calcium using both analytical calibration curve and standard addition methods. (JN)

  12. Evaluation of PCR and high-resolution melt curve analysis for differentiation of Salmonella isolates.

    PubMed

    Saeidabadi, Mohammad Sadegh; Nili, Hassan; Dadras, Habibollah; Sharifiyazdi, Hassan; Connolly, Joanne; Valcanis, Mary; Raidal, Shane; Ghorashi, Seyed Ali

    2017-06-01

    Consumption of poultry products contaminated with Salmonella is one of the major causes of foodborne diseases worldwide and therefore detection and differentiation of Salmonella spp. in poultry is important. In this study, oligonucleotide primers were designed from hemD gene and a PCR followed by high-resolution melt (HRM) curve analysis was developed for rapid differentiation of Salmonella isolates. Amplicons of 228 bp were generated from 16 different Salmonella reference strains and from 65 clinical field isolates mainly from poultry farms. HRM curve analysis of the amplicons differentiated Salmonella isolates and analysis of the nucleotide sequence of the amplicons from selected isolates revealed that each melting curve profile was related to a unique DNA sequence. The relationship between reference strains and tested specimens was also evaluated using a mathematical model without visual interpretation of HRM curves. In addition, the potential of the PCR-HRM curve analysis was evaluated for genotyping of additional Salmonella isolates from different avian species. The findings indicate that PCR followed by HRM curve analysis provides a rapid and robust technique for genotyping of Salmonella isolates to determine the serovar/serotype.
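
    The core melt-curve computation can be illustrated in Python: the negative derivative of fluorescence with respect to temperature gives the melt peak, and its maximum locates the amplicon's melting temperature. The sigmoidal fluorescence model and Tm below are synthetic, not data from the study.

      import numpy as np

      # Synthetic HRM data: temperature (deg C) and normalized fluorescence for one amplicon
      temp = np.linspace(75, 95, 400)
      fluor = 1.0 / (1.0 + np.exp((temp - 86.4) / 0.4))   # sigmoidal melt transition

      melt_peak = -np.gradient(fluor, temp)               # -dF/dT melt curve
      tm = temp[np.argmax(melt_peak)]                     # melting temperature of the amplicon
      print(f"Tm = {tm:.1f} deg C")
      # Isolates can then be grouped by comparing their normalized melt curves, e.g. by Euclidean distance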

  13. Determination of thermoelastic material properties by differential heterodyne detection of impulsive stimulated thermal scattering

    PubMed Central

    Verstraeten, B.; Sermeus, J.; Salenbien, R.; Fivez, J.; Shkerdin, G.; Glorieux, C.

    2015-01-01

    The underlying working principle of detecting impulsive stimulated scattering signals in a differential configuration of heterodyne diffraction detection is unraveled by involving optical scattering theory. The feasibility of the method for the thermoelastic characterization of coating-substrate systems is demonstrated on the basis of simulated data containing typical levels of noise. Besides the classical analysis of the photoacoustic part of the signals, which involves fitting surface acoustic wave dispersion curves, the photothermal part of the signals is analyzed by introducing thermal wave dispersion curves to represent and interpret their grating wavelength dependence. The intrinsic possibilities and limitations of both inverse problems are quantified by making use of least and most squares analysis. PMID:26236643

  14. Plasma and magnetospheric research

    NASA Technical Reports Server (NTRS)

    Comfort, R. H.; Horwitz, J. L.

    1984-01-01

    Methods employed in the analysis of plasmas and the magnetosphere are examined. Computer programs which generate distribution functions are used in the analysis of charging phenomena and non-Maxwellian plasmas in terms of density and average energy. An analytical model for spin curve analysis is presented. A program for the analysis of the differential ion flux probe on the space shuttle mission is complete. Satellite data analysis for ion heating, plasma flows in the polar cap, polar wind flow, and density and temperature profiles for several plasmasphere transits are included.

  15. Meta-analysis of single-arm survival studies: a distribution-free approach for estimating summary survival curves with random effects.

    PubMed

    Combescure, Christophe; Foucher, Yohann; Jackson, Daniel

    2014-07-10

    In epidemiologic studies and clinical trials with time-dependent outcome (for instance death or disease progression), survival curves are used to describe the risk of the event over time. In meta-analyses of studies reporting a survival curve, the most informative finding is a summary survival curve. In this paper, we propose a method to obtain a distribution-free summary survival curve by expanding the product-limit estimator of survival for aggregated survival data. The extension of DerSimonian and Laird's methodology for multiple outcomes is applied to account for the between-study heterogeneity. Statistics I(2)  and H(2) are used to quantify the impact of the heterogeneity in the published survival curves. A statistical test for between-strata comparison is proposed, with the aim to explore study-level factors potentially associated with survival. The performance of the proposed approach is evaluated in a simulation study. Our approach is also applied to synthesize the survival of untreated patients with hepatocellular carcinoma from aggregate data of 27 studies and synthesize the graft survival of kidney transplant recipients from individual data from six hospitals. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Extracting information from S-curves of language change.

    PubMed

    Ghanbarnejad, Fakhteh; Gerlach, Martin; Miotto, José M; Altmann, Eduardo G

    2014-12-06

    It is well accepted that the adoption of innovations is described by S-curves (slow start, accelerating period and slow end). In this paper, we analyse how much information on the dynamics of innovation spreading can be obtained from a quantitative description of S-curves. We focus on the adoption of linguistic innovations, for which detailed databases of written texts from the last 200 years allow for an unprecedented statistical precision. Combining data analysis with simulations of simple models (e.g. the Bass dynamics on complex networks), we identify signatures of endogenous and exogenous factors in the S-curves of adoption. We propose a measure to quantify the strength of these factors and three different methods to estimate it from S-curves. We obtain cases in which the exogenous factors are dominant (in the adoption of German orthographic reforms and of one irregular verb) and cases in which endogenous factors are dominant (in the adoption of conventions for romanization of Russian names and in the regularization of most studied verbs). These results show that the shape of the S-curve is not universal and contains information on the adoption mechanism. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
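
    A minimal sketch of extracting S-curve parameters by fitting a logistic function to yearly adoption fractions; the corpus values, midpoint, and growth rate in this Python example are synthetic illustrations.

      import numpy as np
      from scipy.optimize import curve_fit

      def s_curve(t, t0, k):
          """Logistic adoption curve: fraction using the innovative form at time t."""
          return 1.0 / (1.0 + np.exp(-k * (t - t0)))

      # Hypothetical yearly adoption fractions of a new spelling, e.g. from an n-gram corpus
      rng = np.random.default_rng(8)
      years = np.arange(1900, 2000, dtype=float)
      frac = s_curve(years, 1950.0, 0.15) + rng.normal(0, 0.02, years.size)

      (t0, k), _ = curve_fit(s_curve, years, frac, p0=[1940.0, 0.1])
      print(f"midpoint year = {t0:.1f}, growth rate k = {k:.3f}")
      # Comparing fitted growth rates and residual structure across items helps separate
      # endogenous (imitation-driven) from exogenous (externally forced) adoption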

  17. Analysis of anthocyanins in commercial fruit juices by using nano-liquid chromatography-electrospray-mass spectrometry and high-performance liquid chromatography with UV-vis detector.

    PubMed

    Fanali, Chiara; Dugo, Laura; D'Orazio, Giovanni; Lirangi, Melania; Dachà, Marina; Dugo, Paola; Mondello, Luigi

    2011-01-01

    Nano-LC and conventional HPLC techniques were applied for the analysis of anthocyanins present in commercial fruit juices using a capillary column of 100 μm id and a 2.1 mm id narrow-bore C(18) column. Analytes were detected by UV-Vis at 518 nm and ESI-ion trap MS with HPLC and nano-LC, respectively. Commercial blueberry juice (14 anthocyanins detected) was used to optimize chromatographic separation of analytes and other analysis parameters. Qualitative identification of anthocyanins was performed by comparing the recorded mass spectral data with those of published papers. The use of the same mobile phase composition in both techniques revealed that the miniaturized method exhibited shorter analysis time and higher sensitivity than narrow-bore chromatography. Good intra-day and day-to-day precision of retention time was obtained in both methods with values of RSD less than 3.4 and 0.8% for nano-LC and HPLC, respectively. Quantitative analysis was performed by external standard curve calibration of cyanidin-3-O-glucoside standard. Calibration curves were linear in the concentration ranges studied, 0.1-50 and 6-50 μg/mL for HPLC-UV/Vis and nano-LC-MS, respectively. LOD and LOQ values were good for both methods. In addition to commercial blueberry juice, qualitative and quantitative analysis of other juices (e.g. raspberry, sweet cherry and pomegranate) was performed. The optimized nano-LC-MS method allowed an easy and selective identification and quantification of anthocyanins in commercial fruit juices; it offered good results, shorter analysis time and reduced mobile phase volume with respect to narrow-bore HPLC. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Non-hoop winding effect on bonding temperature of laser assisted tape winding process

    NASA Astrophysics Data System (ADS)

    Zaami, Amin; Baran, Ismet; Akkerman, Remko

    2018-05-01

    One of the advanced methods for the production of thermoplastic composites is laser-assisted tape winding (LATW). Predicting the temperature in the LATW process is very important since the temperature at the nip point (the bonding line across the tape width) plays a pivotal role in proper bonding and hence in the mechanical performance. Unlike hoop winding, where the nip-point line is straight, non-hoop winding produces a curved nip-point line. Hence, non-hoop winding results in a somewhat different power input through laser rays and reflections and consequently generates an unknown, complex temperature profile along the curved nip-point line. Investigating the temperature along the nip-point line is the point of interest in this study. In order to understand this effect, a numerical model is proposed to capture the effect of laser rays and their reflections on the nip-point temperature. To this end, a 3D optical model considering the objects in the LATW process is constructed. Then, the power distribution (absorption and reflection) from the optical analysis is used as an input (heat flux distribution) for the thermal analysis. The thermal analysis employs a fully implicit advection-diffusion model to calculate the temperature on the surfaces. The results are examined to demonstrate the effect of winding direction on the curved nip-point line (tape width), which has not been considered in the literature up to now. Furthermore, the results can be used for designing a better and more efficient setup in the LATW process.

  19. Evaluation of pollutant loads from stormwater BMPs to receiving water using load frequency curves with uncertainty analysis.

    PubMed

    Park, Daeryong; Roesner, Larry A

    2012-12-15

    This study examined pollutant loads released to receiving water from a typical urban watershed in the Los Angeles (LA) Basin of California by applying a best management practice (BMP) performance model that includes uncertainty. This BMP performance model uses the k-C model and incorporates uncertainty analysis and the first-order second-moment (FOSM) method to assess the effectiveness of BMPs for removing stormwater pollutants. Uncertainties were considered for the influent event mean concentration (EMC) and the aerial removal rate constant of the k-C model. The storage treatment overflow and runoff model (STORM) was used to simulate the flow volume from watershed, the bypass flow volume and the flow volume that passes through the BMP. Detention basins and total suspended solids (TSS) were chosen as representatives of stormwater BMP and pollutant, respectively. This paper applies load frequency curves (LFCs), which replace the exceedance percentage with an exceedance frequency as an alternative to load duration curves (LDCs), to evaluate the effectiveness of BMPs. An evaluation method based on uncertainty analysis is suggested because it applies a water quality standard exceedance based on frequency and magnitude. As a result, the incorporation of uncertainty in the estimates of pollutant loads can assist stormwater managers in determining the degree of total daily maximum load (TMDL) compliance that could be expected from a given BMP in a watershed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Stress analysis in curved composites due to thermal loading

    NASA Astrophysics Data System (ADS)

    Polk, Jared Cornelius

    Many structures in aircraft, cars, trucks, ships, machines, tools, bridges, and buildings consist of curved sections. These sections vary from straight line segments that have curvature at either one or both ends, segments with compound curvatures, segments with two mutually perpendicular curvatures or Gaussian curvatures, and segments with a simple curvature. With the advancements made in multi-purpose composites over the past 60 years, composites slowly but steadily have been appearing in these various vehicles, compound structures, and buildings. These composite sections provide added benefits over isotropic, polymeric, and ceramic materials by generally having a higher specific strength, higher specific stiffnesses, longer fatigue life, lower density, possibilities in reduction of life cycle and/or acquisition cost, and greater adaptability to intended function of structure via material composition and geometry. To be able to design and manufacture a safe composite laminate or structure, it is imperative that the stress distributions, their causes, and effects are thoroughly understood in order to successfully accomplish mission objectives and manufacture a safe and reliable composite. The objective of the thesis work is to expand upon the knowledge of simply curved composite structures by exploring and ascertaining all pertinent parameters, phenomena, and trends in stress variations in curved laminates due to thermal loading. The simply curved composites consist of composites with one radius of curvature throughout the span of the specimen about only one axis. Analytical beam theory, classical lamination theory, and finite element analysis were used to ascertain stress variations in a flat, isotropic beam. An analytical method was developed to ascertain the stress variations in an isotropic, simply curved beam under thermal loading that is under both free-free and fixed-fixed constraint conditions. To the author's best knowledge, this is the first solution to such a problem. It was ascertained and proven that the general, non-modified (original) version of classical lamination theory cannot be used for an analytical solution for a simply curved beam or any other structure that would require rotations of laminates out of their planes in space. Finite element analysis was used to ascertain stress variations in a simply curved beam. It was verified that these solutions reduce to the flat beam solutions as the radius of curvature of the beams tends to infinity. MATLAB was used to conduct the classical lamination theory numerical analysis. A MATLAB program was written to conduct the finite element analysis for the flat and curved beams, isotropic and composite. It does not require incompatibility techniques used in mechanics of isotropic materials for indeterminate structures that are equivalent to fixed-beam problems. Finally, it has the ability to enable the user to define and create unique elements not accessible in commercial software, and modify finite element procedures to take advantage of new paradigms.

  1. Linking Parameters Estimated with the Generalized Graded Unfolding Model: A Comparison of the Accuracy of Characteristic Curve Methods

    ERIC Educational Resources Information Center

    Anderson Koenig, Judith; Roberts, James S.

    2007-01-01

    Methods for linking item response theory (IRT) parameters are developed for attitude questionnaire responses calibrated with the generalized graded unfolding model (GGUM). One class of IRT linking methods derives the linking coefficients by comparing characteristic curves, and three of these methods---test characteristic curve (TCC), item…

  2. Application of derivative spectrophotometry under orthogonal polynomial at unequal intervals: determination of metronidazole and nystatin in their pharmaceutical mixture.

    PubMed

    Korany, Mohamed A; Abdine, Heba H; Ragab, Marwa A A; Aboras, Sara I

    2015-05-15

    This paper discusses a general method for the use of orthogonal polynomials for unequal intervals (OPUI) to eliminate interferences in two-component spectrophotometric analysis. In this paper, a new approach was developed by using the first-derivative (D1) curve instead of the absorbance curve as the input to the OPUI convolution for the determination of metronidazole (MTR) and nystatin (NYS) in their mixture. After applying derivative treatment to the absorption data, many maxima and minima appeared, giving a characteristic shape for each drug and allowing a different number of points to be selected for the OPUI method for each drug. This allows the specific and selective determination of each drug in the presence of the other and in the presence of any matrix interference. The method is particularly useful when the two absorption spectra have considerable overlap. The results obtained are encouraging and suggest that the method can be widely applied to similar problems. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. A comparison of solute-transport solution techniques based on inverse modelling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2000-01-01

    Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
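
    To make the role of numerical dispersion concrete, here is a minimal, hypothetical sketch (not the authors' code or grid) of an explicit upwind finite-difference solver for 1D advection-dispersion; coarsening the grid visibly widens the simulated breakthrough front.

```python
import numpy as np

def breakthrough(v=1.0, D=0.01, L=10.0, nx=200, t_end=5.0, x_obs=3.0):
    """Explicit upwind finite-difference solution of dC/dt = -v dC/dx + D d2C/dx2
    with a constant-concentration inlet; returns (times, C(x_obs, t))."""
    dx = L / nx
    dt = 0.4 * min(dx / v, dx ** 2 / (2.0 * D))    # stability-limited time step
    nt = int(t_end / dt)
    C = np.zeros(nx + 1)
    C[0] = 1.0                                     # continuous injection at x = 0
    i_obs = int(x_obs / dx)
    times, curve = [], []
    for n in range(nt):
        adv = -v * (C[1:-1] - C[:-2]) / dx                      # first-order upwind
        dif = D * (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx ** 2    # central diffusion
        C[1:-1] += dt * (adv + dif)
        C[-1] = C[-2]                              # zero-gradient outlet
        times.append((n + 1) * dt)
        curve.append(C[i_obs])
    return np.array(times), np.array(curve)

# Coarser grids add numerical dispersion (~ v*dx/2), widening the simulated front
for nx in (50, 200, 800):
    t, c = breakthrough(nx=nx)
    spread = np.interp(0.75, c, t) - np.interp(0.25, c, t)
    print(f"nx = {nx:4d}   25%-75% breakthrough spread = {spread:.2f} s")
```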

  4. Curved planar reformation and optimal path tracing (CROP) method for false positive reduction in computer-aided detection of pulmonary embolism in CTPA

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Guo, Yanhui; Wei, Jun; Chughtai, Aamer; Hadjiiski, Lubomir M.; Sundaram, Baskaran; Patel, Smita; Kuriakose, Jean W.; Kazerooni, Ella A.

    2013-03-01

    The curved planar reformation (CPR) method re-samples the vascular structures along the vessel centerline to generate longitudinal cross-section views. The CPR technique has been commonly used in coronary CTA workstations to facilitate radiologists' visual assessment of coronary diseases, but has not yet been used for pulmonary vessel analysis in CTPA due to the complicated tree structures and the vast network of pulmonary vasculature. In this study, a new curved planar reformation and optimal path tracing (CROP) method was developed to facilitate feature extraction and false positive (FP) reduction and improve our PE detection system. PE candidates are first identified in the segmented pulmonary vessels at prescreening. Based on Dijkstra's algorithm, the optimal path (OP) is traced from the pulmonary trunk bifurcation point to each PE candidate. The traced vessel is then straightened and a reformatted volume is generated using CPR. Eleven new features that characterize the intensity, gradient, and topology are extracted from the PE candidate in the CPR volume and combined with the previously developed 9 features to form a new feature space for FP classification. With IRB approval, CTPA scans of 59 PE cases were retrospectively collected from our patient files (UM set) and 69 PE cases from the PIOPED II data set with access permission. 595 and 800 PEs were manually marked by experienced radiologists as the reference standard for the UM and PIOPED sets, respectively. At a test sensitivity of 80%, the average FP rate was improved from 18.9 to 11.9 FPs/case with the new method for the PIOPED set when the UM set was used for training. The FP rate was improved from 22.6 to 14.2 FPs/case for the UM set when the PIOPED set was used for training. The improvement in the free response receiver operating characteristic (FROC) curves was statistically significant (p<0.05) by JAFROC analysis, indicating that the new features extracted with the CROP method are useful for FP reduction.
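
    The optimal-path step rests on Dijkstra's algorithm; a generic sketch using Python's heapq on a small weighted graph (the node names and weights are placeholders, not the vessel-tree data) is shown below.

```python
import heapq

def dijkstra(graph, source):
    """graph: dict node -> list of (neighbor, weight); returns shortest
    distances from source and the predecessor map for path tracing."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def trace_path(prev, target):
    """Backtrack from the target to the source using the predecessor map."""
    path = [target]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy graph standing in for centerline voxels weighted by inverse vesselness
g = {"trunk": [("a", 1.0), ("b", 4.0)],
     "a": [("b", 1.5), ("candidate", 5.0)],
     "b": [("candidate", 1.0)]}
dist, prev = dijkstra(g, "trunk")
print(trace_path(prev, "candidate"), dist["candidate"])
```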

  5. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated nonlinear multiparameter fitting program has been used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the chi-squared matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.
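
    A hedged sketch of the central idea, treating the nominal standard masses as extra fit parameters with their own weighted residuals, is given below using scipy.optimize.least_squares and a linear response model; the data, model and weights are illustrative assumptions, not the VA02A implementation described in the report.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical data: nominal standard masses (mg), measured XRF responses,
# and their standard deviations
m_nom = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
sig_m = 0.002 * m_nom                      # 0.2% gravimetric uncertainty
y = np.array([10.3, 20.1, 41.2, 60.8, 82.5, 101.9])
sig_y = 0.5 * np.ones_like(y)              # system measurement error

def residuals(p):
    a, b = p[:2]               # calibration-curve parameters (linear model)
    m = p[2:]                  # fitted "true" masses of the standards
    r_y = (y - (a + b * m)) / sig_y          # response residuals
    r_m = (m - m_nom) / sig_m                # mass residuals (prior knowledge)
    return np.concatenate([r_y, r_m])

p0 = np.concatenate([[0.0, 100.0], m_nom])
fit = least_squares(residuals, p0)
a, b = fit.x[:2]
print("intercept", round(a, 3), "slope", round(b, 3))
```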

  6. The RATIO method for time-resolved Laue crystallography

    PubMed Central

    Coppens, Philip; Pitak, Mateusz; Gembicky, Milan; Messerschmidt, Marc; Scheins, Stephan; Benedict, Jason; Adachi, Shin-ichi; Sato, Tokushi; Nozawa, Shunsuke; Ichiyanagi, Kohei; Chollet, Matthieu; Koshihara, Shin-ya

    2009-01-01

    A RATIO method for analysis of intensity changes in time-resolved pump–probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short term fluctuations in the synchrotron beam. PMID:19240334

  7. Dynamic rating curve assessment in hydrometric stations and calculation of the associated uncertainties : Quality and monitoring indicators

    NASA Astrophysics Data System (ADS)

    Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine

    2013-04-01

    Whether we talk about safety, energy production or regulation, water resources management is one of the main concerns of EDF (the French hydropower company). To meet these needs, EDF-DTG has operated a hydrometric network of more than 350 hydrometric stations since the fifties. The data collected allow real-time monitoring of rivers (hydro-meteorological forecasts at points of interest), as well as hydrological studies and the sizing of structures. Ensuring the quality of stream flow data is a priority. A rating curve is an indirect method of estimating the discharge in rivers based on water level measurements. The value of discharge obtained from the rating curve is not entirely accurate due to the constant changes of the river bed morphology, the precision of the gaugings (direct and punctual discharge measurements) and the quality of the tracing. As time goes on, the uncertainty of the discharge estimated from a rating curve "gets older" and increases, so the final level of uncertainty remains particularly difficult to assess. Moreover, the current EDF capacity to produce a rating curve is not suited to the frequency of change of the stage-discharge relationship. The current method does not take into consideration the variation of the flow conditions and the modifications of the river bed which occur due to natural processes such as erosion, sedimentation and seasonal vegetation growth. In order to get the most accurate stream flow data and to improve their reliability, this study develops an original "dynamic" method to compute rating curves based on the historical gaugings of a hydrometric station. A curve is computed for each new gauging and a model of uncertainty is adjusted for each of them. The model of uncertainty takes into account the inaccuracies in the measurement of the water height, the quality of the tracing, the uncertainty of the gaugings and the aging of the confidence intervals calculated with a variographic analysis. These rating curves make it possible to provide stream flow values that take into account the variability of flow conditions, together with a model of the uncertainties resulting from the aging of the rating curves. By taking into account the variability of the flow conditions and the life of the hydrometric station, this original dynamic method can answer important questions in the field of hydrometry such as "How many gaugings a year have to be made so as to produce stream flow data with an average uncertainty of X%?" and "When and in which range of water flow do we have to carry out those gaugings?". KEY WORDS: Uncertainty, Rating curve, Hydrometric station, Gauging, Variogram, Stream Flow
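
    As a minimal illustration of what fitting a single rating curve involves (not EDF's dynamic method itself), the sketch below fits the classical power law Q = a(h - h0)^b to a few hypothetical gaugings and reports approximate parameter uncertainties from the covariance of the fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, h0, b):
    """Classical stage-discharge power law Q = a * (h - h0)**b."""
    return a * np.clip(h - h0, 1e-6, None) ** b

# Hypothetical gaugings: stage h (m) and measured discharge Q (m3/s)
h = np.array([0.4, 0.6, 0.9, 1.3, 1.8, 2.5])
Q = np.array([1.1, 2.9, 7.4, 16.0, 31.0, 62.0])
sigma_Q = 0.07 * Q          # ~7% gauging uncertainty (illustrative)

popt, pcov = curve_fit(rating, h, Q, p0=[10.0, 0.1, 1.8],
                       sigma=sigma_Q, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(("a", "h0", "b"), popt, perr):
    print(f"{name} = {val:.2f} +/- {err:.2f}")
```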

  8. The index-flood and the GRADEX methods combination for flood frequency analysis.

    NASA Astrophysics Data System (ADS)

    Fuentes, Diana; Di Baldassarre, Giuliano; Quesada, Beatriz; Xu, Chong-Yu; Halldin, Sven; Beven, Keith

    2017-04-01

    Flood frequency analysis is used in many applications, including flood risk management, design of hydraulic structures, and urban planning. However, such analysis requires long series of observed discharge data, which are often not available in many basins around the world. In this study, we tested the usefulness of combining regional discharge and local precipitation data to estimate the event flood volume frequency curve for 63 catchments in Mexico, Central America and the Caribbean. This was achieved by combining two existing flood frequency analysis methods, the regional index-flood approach and the GRADEX method. For return periods of up to 10 years, a similar shape of the scaled flood frequency curve for catchments with similar flood behaviour was assumed, following the index-flood approach. For return periods larger than 10 years, the probability distributions of rainfall and discharge volumes were assumed to be asymptotically exponential-type functions with the same scale parameter, following the GRADEX method. Results showed that if the mean annual flood (MAF), used as the index flood, is known, the index-flood approach performed well for return periods of up to 10 years, resulting in a 25% mean relative error in prediction. For larger return periods the prediction capability decreased but could be improved by the use of the GRADEX method. As the MAF is unknown in ungauged basins and basins with short measurement records, we tested predicting the MAF using catchment climate-physical characteristics and discharge statistics, the latter when observations were available for only 8 years. Only the use of discharge statistics resulted in acceptable predictions.
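
    A rough sketch of the combination logic, under simplifying assumptions (a Gumbel reduced variate, an illustrative regional growth curve, and a pivot at the 10-year return period), is shown below; it is not the authors' calibrated procedure.

```python
import numpy as np

def gumbel_u(T):
    """Gumbel reduced variate for return period T (years)."""
    return -np.log(-np.log(1.0 - 1.0 / np.asarray(T, dtype=float)))

def flood_quantile(T, maf, growth, rain_gradex, T_pivot=10.0):
    """Index-flood curve up to T_pivot, then GRADEX-style extrapolation:
    quantiles grow parallel to the rainfall Gumbel line (slope = gradex)."""
    T = np.asarray(T, dtype=float)
    q_pivot = maf * growth(T_pivot)
    return np.where(T <= T_pivot,
                    maf * growth(T),
                    q_pivot + rain_gradex * (gumbel_u(T) - gumbel_u(T_pivot)))

# Hypothetical regional growth curve (dimensionless) and catchment values
def growth(T):
    return 1.0 + 0.45 * (gumbel_u(T) - gumbel_u(2.0))

maf = 120.0          # mean annual flood, m3/s (index flood)
rain_gradex = 55.0   # Gumbel scale parameter of rainfall volumes, in discharge units

for T in (2, 5, 10, 50, 100):
    q = flood_quantile(T, maf, growth=growth, rain_gradex=rain_gradex)
    print(T, round(float(q), 1))
```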

  9. Comparative Analysis of Growth and Photosynthetic Characteristics of (Populus simonii × P. nigra) × (P. nigra × P. simonii) Hybrid Clones of Different Ploidies

    PubMed Central

    Bian, Xiuyan; Liu, Mengran; Sun, Yanshuang; Jiang, Jing; Wang, Fuwei; Li, Shuchun; Cui, Yonghong; Liu, Guifeng; Yang, Chuanping

    2015-01-01

    To evaluate differences among poplar clones of various ploidies, 12 hybrid poplar clones (P. simonii × P. nigra) × (P. nigra × P. simonii) with different ploidies were used to study phenotypic variation in growth traits and photosynthetic characteristics. Analysis of variance showed remarkable differences for each of the investigated traits among these clones (P < 0.01). Coefficients of phenotypic variation (PCV) ranged from 2.38% to 56.71%, and repeatability ranged from 0.656 to 0.987. The Pn (photosynthetic rate)-photosynthetic photon flux density (PPFD) curves of the 12 clones were S-shaped, but the Pn-ambient CO2 (Ca) curves were shaped like an inverted "V". The stomatal conductance (Gs)-PPFD and transpiration rate (Tr)-PPFD curves had an upward tendency; however, with increasing PPFD, the intercellular CO2 concentration (Ci)-PPFD curves had a downward tendency in all of the clones. The Pn-PPFD and Pn-Ca curves followed the pattern of a quadratic equation. The average light saturation point and light compensation point of the triploid clones were the highest and lowest, respectively, among the three types of clones. For the Pn-Ca curves, diploid clones had a higher average CO2 saturation point and average CO2 compensation point compared with triploid and tetraploid clones. Correlation analyses indicated that all investigated traits were strongly correlated with each other. In future studies, molecular methods should be used to analyze poplar clones of different ploidies to improve our understanding of the growth and development mechanisms of polyploids. PMID:25867100

  10. Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys

    NASA Astrophysics Data System (ADS)

    Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.

    2016-08-01

    Highly reliable, fast and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed so as to give maximum sensitivity and minimum instrumental error (Mn 1 mg/100 ml-2 mg/100 ml, Fe 0.01 mg/100 ml-0.2 mg/100 ml and Cu 2 mg/100 ml-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe and Al-Cu master alloys (5%, 10%, 50% etc. master alloys). Moreover, the sampling practices suggested herein include a reasonable amount of analytical sample, which truly represents the whole lot of a particular master alloy. A successive dilution technique was utilized to meet the calibration curve range. Furthermore, the developed methods were also found suitable for the analysis of the said elements in ordinary aluminum alloys. However, it was observed that Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu greater than 0.01%.

  11. Development and Validation of a Method for Alcohol Analysis in Brain Tissue by Headspace Gas Chromatography with Flame Ionization Detector

    PubMed Central

    Chun, Hao-Jung; Poklis, Justin L.; Poklis, Alphonse; Wolf, Carl E.

    2016-01-01

    Ethanol is the most widely used and abused drug. While blood is the preferred specimen for analysis, tissue specimens such as brain serve as alternative specimens for alcohol analysis in post-mortem cases where blood is unavailable or contaminated. A method was developed using headspace gas chromatography with flame ionization detection (HS-GC-FID) for the detection and quantification of ethanol, acetone, isopropanol, methanol and n-propanol in brain tissue specimens. Unfixed volatile-free brain tissue specimens were obtained from the Department of Pathology at Virginia Commonwealth University. Calibrators and controls were prepared from 4-fold diluted homogenates of these brain tissue specimens, and were analyzed using t-butanol as the internal standard. The chromatographic separation was performed with a Restek BAC2 column. A linear calibration was generated for all analytes (mean r2 > 0.9992) with the limits of detection and quantification of 100–110 mg/kg. Matrix effect from the brain tissue was determined by comparing the slopes of matrix prepared calibration curves with those of aqueous calibration curves; no significant differences were observed for ethanol, acetone, isopropanol, methanol and n-propanol. The bias and the CVs for all volatile controls were ≤10%. The method was also evaluated for carryover, selectivity, interferences, bench-top stability and freeze-thaw stability. The HS-GC-FID method was determined to be reliable and robust for the analysis of ethanol, acetone, isopropanol, methanol and n-propanol concentrations in brain tissue, effectively expanding the specimen options for post-mortem alcohol analysis. PMID:27488829
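
    The matrix-effect check described above (comparing slopes of matrix-matched versus aqueous calibration curves) can be illustrated with a short sketch using hypothetical calibrator data:

```python
import numpy as np

def calibration_slope(conc, response):
    """Least-squares slope and intercept of a linear calibration curve."""
    slope, intercept = np.polyfit(conc, response, 1)
    return slope, intercept

# Hypothetical ethanol calibrators (mg/kg) and detector response ratios
conc = np.array([100, 250, 500, 1000, 2000, 4000], dtype=float)
aqueous = np.array([0.051, 0.128, 0.262, 0.511, 1.030, 2.060])
matrix = np.array([0.049, 0.125, 0.255, 0.498, 1.005, 2.010])   # brain homogenate

s_aq, _ = calibration_slope(conc, aqueous)
s_mx, _ = calibration_slope(conc, matrix)
matrix_effect = 100.0 * (s_mx - s_aq) / s_aq
print(f"slope ratio = {s_mx / s_aq:.3f}, matrix effect = {matrix_effect:+.1f}%")
```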

  12. Hierarchical Bayesian analysis to incorporate age uncertainty in growth curve analysis and estimates of age from length: Florida manatee (Trichechus manatus) carcasses

    USGS Publications Warehouse

    Schwarz, L.K.; Runge, M.C.

    2009-01-01

    Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then employed for Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.

  13. Effect of discrete track support by sleepers on rail corrugation at a curved track

    NASA Astrophysics Data System (ADS)

    Jin, X. S.; Wen, Z. F.

    2008-08-01

    The paper investigates the effect of discrete track support by sleepers on the initiation and development of rail corrugation at a curved track when a railway vehicle passes through, using a numerical method. The numerical method combines Kalker's rolling contact theory in its non-Hertzian form, a linear frictional work model and a dynamics model of a half railway vehicle coupled with the curved track. The half-vehicle has a two-axle bogie and double suspension systems, and is treated as a full dynamic rigid multi-body model. In the track model, an Euler beam is used to model the rail, and the discrete track support by sleepers moving backward with respect to the vehicle running direction is considered in order to simulate the effect of the discrete sleeper support on the wheels and rails in rolling contact as the vehicle moves along the track. The sleeper is treated as a rigid body and the ballast bed is replaced with equivalent mass bodies. The numerical analysis examines in detail the variations of wheel/rail normal loads, the creepages, and the rail wear volume along the curved track. Their variations are strongly influenced by the discrete track support. The numerical results show that the discrete track support causes fluctuations of the normal loads and creepages at a few frequencies. These frequencies comprise the passing frequency of the sleepers and the excited track resonant frequencies, which are higher than the sleeper passing frequency. Consequently, rail corrugation with several wavelengths initiates and develops. The results also show that the contact vibration between the curved rails and the four wheels of the same bogie has different frequencies. In this way, the different key frequencies that are excited play an important role in the initiation and development of curved rail corrugation. Therefore, the corrugations caused by the four wheels of the same bogie present different wavelengths. The paper shows and discusses the depths of the initial corrugations caused by the four wheels of the same bogie at the entering transition curve, the circular curve and the exit transition curve of the curved track, respectively.

  14. Label-free Chemical Imaging of Fungal Spore Walls by Raman Microscopy and Multivariate Curve Resolution Analysis

    PubMed Central

    Noothalapati, Hemanth; Sasaki, Takahiro; Kaino, Tomohiro; Kawamukai, Makoto; Ando, Masahiro; Hamaguchi, Hiro-o; Yamamoto, Tatsuyuki

    2016-01-01

    Fungal cell walls are medically important since they represent a drug target site for antifungal medication. So far there is no method to directly visualize structurally similar cell wall components such as α-glucan, β-glucan and mannan with high specificity, especially in a label-free manner. In this study, we have developed a Raman spectroscopy based molecular imaging method combined with multivariate curve resolution analysis to enable detection and visualization of multiple polysaccharide components simultaneously at the single cell level. Our results show that vegetative cell and ascus walls are made up of both α- and β-glucans, while the spore wall is exclusively made of α-glucan. Co-localization studies reveal that mannans are absent from the ascus wall and are distributed primarily in spores. Such a detailed picture is believed to further enhance our understanding of the dynamic spore wall architecture, eventually leading to advancements in drug discovery and development in the near future. PMID:27278218

  15. Principal Effects of Axial Load on Moment-Distribution Analysis of Rigid Structures

    NASA Technical Reports Server (NTRS)

    James, Benjamin Wylie

    1935-01-01

    This thesis presents the method of moment distribution modified to include the effect of axial load upon the bending moments. This modification makes it possible to analyze accurately complex structures, such as rigid fuselage trusses, that heretofore had to be analyzed by approximate formulas and empirical rules. The method is simple enough to be practicable even for complex structures, and it gives a means of analysis for continuous beams that is simpler than the extended three-moment equation now in common use. When the effect of axial load is included, it is found that the basic principles of moment distribution remain unchanged, the only difference being that the factors used, instead of being constants for a given member, become functions of the axial load. Formulas have been developed for these factors, and curves plotted so that their application requires no more work than moment distribution without axial load. Simple problems have been included to illustrate the use of the curves.

  16. Fracture mechanics evaluation of heavy welded structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprung, I.; Ericksson, C.W.; Zilberstein, V.A.

    1982-05-01

    This paper describes some applications of nondestructive examination (NDE) and engineering fracture mechanics to the evaluation of flaws in heavy welded structures. The paper discusses not only the widely recognized linear elastic fracture mechanics (LEFM) analysis, but also methods of elastic-plastic fracture mechanics (EPFM), such as COD, the J-integral, and the Failure Assessment Diagram. Examples are given to highlight the importance of interaction between the specialists providing input and the specialists performing the analysis. The paper points out that the critical parameters for as-welded structures, when calculated by these methods, are conservative since they are based on two pessimistic assumptions: that the magnitude of residual stress is always at the yield strength level, and that the residual stress always acts in the same direction as the applied (mechanical) stress. The suggestion is made that it would be prudent to use the COD or the FAD design curves for a conservative design. The appendix examines a J-design curve modified to include residual stresses.

  17. Authentication of virgin olive oil by a novel curve resolution approach combined with visible spectroscopy.

    PubMed

    Ferreiro-González, Marta; Barbero, Gerardo F; Álvarez, José A; Ruiz, Antonio; Palma, Miguel; Ayuso, Jesús

    2017-04-01

    Adulteration of olive oil is not only a major economic fraud but can also have major health implications for consumers. In this study, a combination of visible spectroscopy with a novel multivariate curve resolution method (CR), principal component analysis (PCA) and linear discriminant analysis (LDA) is proposed for the authentication of virgin olive oil (VOO) samples. VOOs are well-known products with the typical properties of a two-component system due to the two main groups of compounds that contribute to the visible spectra (chlorophylls and carotenoids). Application of the proposed CR method to VOO samples provided the two pure-component spectra for the aforementioned families of compounds. A correlation study of the real spectra and the resolved component spectra was carried out for different types of oil samples (n=118). LDA using the correlation coefficients as variables to discriminate samples allowed the authentication of 95% of virgin olive oil samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
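
    A schematic of the chemometric end of such a workflow, with PCA scores standing in for the paper's correlation-coefficient variables and fed to an LDA classifier, can be written with scikit-learn; the synthetic spectra below are placeholders for real visible spectra.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 150)          # wavelength axis (nm)

def band(center, width):
    """Broad Gaussian absorption band used to build placeholder spectra."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

def sample(n, a, b):
    """n noisy spectra from two bands (roughly carotenoid- and chlorophyll-like)."""
    return a * band(450, 30) + b * band(670, 20) \
        + 0.02 * rng.standard_normal((n, wl.size))

X = np.vstack([sample(60, 1.0, 0.6),     # class 0: "authentic"-like spectra
               sample(60, 0.6, 1.0)])    # class 1: "adulterated"-like spectra
y = np.array([0] * 60 + [1] * 60)

clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", round(float(scores.mean()), 3))
```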

  18. Poor interoperability of the Adams-Harbertson method for analysis of anthocyanins: comparison with AOAC pH differential method.

    PubMed

    Brooks, Larry M; Kuhlman, Benjamin J; McKesson, Doug W; McCloskey, Leo

    2013-01-01

    The poor interoperability of anthocyanin glycosides measurements by two pH differential methods is documented. Adams-Harbertson, which was proposed for commercial winemaking, was compared to AOAC Official Method 2005.02 for wine. California bottled wines (Pinot Noir, Merlot, and Cabernet Sauvignon) were assayed in a collaborative study (n=105), which found mean precision of Adams-Harbertson winery versus reference measurements to be 77 +/- 20%. Maximum error is expected to be 48% for Pinot Noir, 42% for Merlot, and 34% for Cabernet Sauvignon from reproducibility RSD. The range of measurements was actually 30 to 91% for Pinot Noir. An interoperability study (n=30) found that Adams-Harbertson produces measurements that are nominally 150% of the AOAC pH differential method. The large analytical chemistry differences are: the AOAC method uses the Beer-Lambert equation and measures absorbance at pH 1.0 and 4.5, as proposed a priori by Fuleki and Francis; whereas Adams-Harbertson uses a "universal" standard curve and measures absorbance ad hoc at pH 1.8 and 4.9 to reduce the effects of so-called co-pigmentation. Errors relative to AOAC are produced by the Adams-Harbertson standard curve over Beer-Lambert and pH 1.8 over pH 1.0. The study recommends using AOAC Official Method 2005.02 for analysis of wine anthocyanin glycosides.
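
    For reference, the core arithmetic of the AOAC-style pH differential calculation can be sketched as below; the molar absorptivity and molecular weight are the values commonly quoted for cyanidin-3-glucoside equivalents and should be checked against the official method before any real use.

```python
def monomeric_anthocyanins(a520_ph1, a700_ph1, a520_ph45, a700_ph45,
                           dilution_factor, mw=449.2, eps=26900.0, path_cm=1.0):
    """Monomeric anthocyanin pigment (mg/L, cyanidin-3-glucoside equivalents)
    from absorbances measured at pH 1.0 and pH 4.5 (AOAC-style calculation)."""
    a = (a520_ph1 - a700_ph1) - (a520_ph45 - a700_ph45)
    return a * mw * dilution_factor * 1000.0 / (eps * path_cm)

# Hypothetical readings for a diluted red wine sample
print(round(monomeric_anthocyanins(0.820, 0.015, 0.310, 0.012,
                                   dilution_factor=10), 1), "mg/L")
```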

  19. Age dependence of Olympic weightlifting ability.

    PubMed

    Meltzer, D E

    1994-08-01

    There is increasing interest among Masters athletes in standards for comparing performances of competitors of different ages. The goal of this study was to develop one such age-comparison method by examining the age dependence of ability in Olympic-style weightlifting. Previous research on the deterioration of muscular strength and power with increasing age offers only limited guidance toward this goal; therefore, analysis of performance data was required. The variation of weightlifting ability as a function of age was examined by two different methods. First, cross-sectional data corresponding to two separate populations of Masters weightlifters were analyzed in detail. Then, a longitudinal study of 64 U.S. male Masters weightlifters was carried out. The performance versus age curves resulting from the two methods were very similar, reflecting deterioration rates of approximately 1.0-1.5% per year. These curves were characterized by common features regarding the rate of decline of muscular power with increasing age, in apparent agreement with published data regarding Masters sprinters and jumpers. We tentatively conclude that Olympic weightlifting ability in trained subjects undergoes a nonlinear decline with age, in which the second derivative of the performance versus age curve repeatedly changes sign.

  20. Advantages of soft versus hard constraints in self-modeling curve resolution problems. Alternating least squares with penalty functions.

    PubMed

    Gemperline, Paul J; Cash, Eric

    2003-08-15

    A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared (NIR) spectroscopy. Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The introduction of incomplete or partial reference information into self-modeling curve resolution models is also described. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.
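
    A compact sketch of the soft-constraint idea, using least-squares penalty residuals for negative values inside an alternating least squares loop (rather than the authors' full algorithm with unimodality, equality and closure penalties), is shown below.

```python
import numpy as np
from scipy.optimize import least_squares

def soft_nonneg_als(D, n_comp, lam=10.0, n_iter=20, seed=0):
    """Resolve D (samples x wavelengths) into C (concentrations) and S (spectra)
    with soft nonnegativity: negative entries are penalised, not clipped."""
    rng = np.random.default_rng(seed)
    nr, nc = D.shape
    C = np.abs(rng.standard_normal((nr, n_comp)))
    S = np.abs(rng.standard_normal((nc, n_comp)))

    def solve_block(A, B, x0):
        # Minimise ||B - A X||^2 + lam * ||min(X, 0)||^2 over the block X
        def res(xflat):
            X = xflat.reshape(x0.shape)
            return np.concatenate([(B - A @ X).ravel(),
                                   np.sqrt(lam) * np.minimum(X, 0.0).ravel()])
        return least_squares(res, x0.ravel()).x.reshape(x0.shape)

    for _ in range(n_iter):
        C = solve_block(S, D.T, C.T).T      # update concentrations: D.T ~ S C.T
        S = solve_block(C, D, S.T).T        # update spectra:        D   ~ C S.T
    return C, S

# Two-component synthetic data set (first-order reaction profiles, Gaussian bands)
t = np.linspace(0, 1, 40)[:, None]
C_true = np.hstack([np.exp(-3 * t), 1 - np.exp(-3 * t)])
wl = np.linspace(0, 1, 60)
S_true = np.vstack([np.exp(-((wl - 0.3) / 0.1) ** 2),
                    np.exp(-((wl - 0.7) / 0.1) ** 2)])
D = C_true @ S_true + 0.01 * np.random.default_rng(1).standard_normal((40, 60))
C, S = soft_nonneg_als(D, n_comp=2)
print("reconstruction error:", round(float(np.linalg.norm(D - C @ S.T)), 3))
```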

  1. Bootstrap rolling window estimation approach to analysis of the Environment Kuznets Curve hypothesis: evidence from the USA.

    PubMed

    Aslan, Alper; Destek, Mehmet Akif; Okumus, Ilyas

    2018-01-01

    This study aims to examine the validity of the inverted U-shaped Environmental Kuznets Curve by investigating the relationship between economic growth and environmental pollution for the period from 1966 to 2013 in the USA. Previous studies are based on the assumption of parameter stability, i.e. that the obtained parameters do not change over the full sample. This study uses a bootstrap rolling window estimation method to detect possible changes in the causal relations and also to obtain the parameters for sub-sample periods. The results show that the parameter of economic growth has an increasing trend in the 1982-1996 sub-sample periods and a decreasing trend in the 1996-2013 sub-sample periods. Therefore, the existence of an inverted U-shaped Environmental Kuznets Curve is confirmed in the USA.

  2. Surface Location In Scene Content Analysis

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Tio, J. B. K.; McPherson, C. A.; Hwang, J. J.

    1981-12-01

    The purpose of this paper is to describe techniques and algorithms for the location in three dimensions of planar and curved object surfaces using a computer vision approach. Stereo imaging techniques are demonstrated for planar object surface location using automatic segmentation, vertex location and relational table matching. For curved surfaces, the location of corresponding points is very difficult. However, an example using a grid projection technique for the location of the surface of a curved cup is presented to illustrate a solution. This method consists of first obtaining the perspective transformation matrices from the images, then using these matrices to compute the three-dimensional locations of the grid points on the surface. These techniques may be used in object location for such applications as missile guidance, robotics, and medical diagnosis and treatment.
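
    A hedged sketch of the final step described (recovering 3D point locations once the perspective transformation matrices are known), using standard linear DLT-style triangulation from two views, is given below; the camera matrices and pixel coordinates are made up for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 perspective projection matrices; x1, x2: (u, v) pixel coords."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                 # de-homogenise

# Two made-up cameras: same intrinsics, second camera translated along x
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0, 1.0])      # homogeneous 3D grid point
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(np.round(triangulate(P1, P2, x1, x2), 3))   # ~ [0.2, -0.1, 4.0]
```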

  3. Novel methods for parameter-based analysis of myocardial tissue in MR images

    NASA Astrophysics Data System (ADS)

    Hennemuth, A.; Behrens, S.; Kuehnel, C.; Oeltze, S.; Konrad, O.; Peitgen, H.-O.

    2007-03-01

    The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters, which can be used to classify the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination of parameters related to the 17-segment-model proposed by the American Heart Association (AHA). As this simplification comes along with a considerable loss of information, our purpose is to provide methods for a more accurate analysis regarding topological and functional tissue features. In order to achieve this, we implemented registration methods for the motion correction of the perfusion sequence and the matching of the late enhancement information onto the perfusion image and vice versa. For the motion corrected perfusion sequence, vector images containing the voxel enhancement curves' semi-quantitative parameters are derived. The resulting vector images are combined with the late enhancement information and form the basis for the tissue examination. For the exploration of data we propose different modes: the inspection of the enhancement curves and parameter distribution in areas automatically segmented using the late enhancement information, the inspection of regions segmented in parameter space by user defined threshold intervals and the topological comparison of regions segmented with different settings. Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.
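
    To make the notion of semi-quantitative parameters of a voxel enhancement curve concrete, a small hypothetical sketch computing peak enhancement, time to peak, maximum upslope and area under the curve for one voxel is shown below.

```python
import numpy as np

def semi_quantitative(times, signal, n_baseline=3):
    """Simple semi-quantitative parameters of one voxel enhancement curve."""
    baseline = signal[:n_baseline].mean()
    enh = signal - baseline                                  # relative enhancement
    peak = float(enh.max())
    t_peak = float(times[int(enh.argmax())])
    upslope = float(np.max(np.diff(enh) / np.diff(times)))   # maximum wash-in slope
    auc = float(np.sum(0.5 * (enh[1:] + enh[:-1]) * np.diff(times)))  # trapezoid AUC
    return {"peak": peak, "time_to_peak": t_peak,
            "max_upslope": upslope, "auc": auc}

# Hypothetical voxel time-intensity curve (arbitrary units) sampled every 1.5 s
t = np.arange(0, 30, 1.5)
s = 100 + 60 * np.clip((t - 6) / 6, 0, 1) * np.exp(-np.clip(t - 6, 0, None) / 25)
print(semi_quantitative(t, s))
```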

  4. LCMS analysis of fingerprints, the amino acid profile of 20 donors.

    PubMed

    de Puit, Marcel; Ismail, Mahado; Xu, Xiaoma

    2014-03-01

    The analysis of amino acids present in fingerprints has been studied several times. In this paper, we report a method for the analysis of amino acids using a fluorenylmethyloxycarbonyl chloride derivatization for LC separation and MS detection. We obtained good results with regard to the calibration curves and the limits of detection and quantification for the target compounds. The extraction of the amino acids from the substrates used proved to be very efficient. Analysis of the derivatized amino acids enabled us to obtain full amino acid profiles for 20 donors. The inter-donor variability is, as expected, rather large, with serine as the most abundant constituent, and when examining the total profile of the amino acids per donor, a characteristic pattern can be observed. Some amino acids were not detected in some donors, or fell outside the range of the calibration curve, whereas others showed a surprisingly high amount of material in the deposition analyses. Further investigations will have to address the intra-donor variability of the amino acid profiles of the fingerprints. By developing the analytical method and applying it to the analysis of fingerprints, we were able to gain insight into the variability of the constituents of fingerprints between donors. © 2013 American Academy of Forensic Sciences.

  5. eulerAPE: Drawing Area-Proportional 3-Venn Diagrams Using Ellipses

    PubMed Central

    Micallef, Luana; Rodgers, Peter

    2014-01-01

    Venn diagrams with three curves are used extensively in various medical and scientific disciplines to visualize relationships between data sets and facilitate data analysis. The area of the regions formed by the overlapping curves is often directly proportional to the cardinality of the depicted set relation or any other related quantitative data. Drawing these diagrams manually is difficult and current automatic drawing methods do not always produce appropriate diagrams. Most methods depict the data sets as circles, as they perceptually pop out as complete distinct objects due to their smoothness and regularity. However, circles cannot draw accurate diagrams for most 3-set data and so the generated diagrams often have misleading region areas. Other methods use polygons to draw accurate diagrams. However, polygons are non-smooth and non-symmetric, so the curves are not easily distinguishable and the diagrams are difficult to comprehend. Ellipses are more flexible than circles and are similarly smooth, but none of the current automatic drawing methods use ellipses. We present eulerAPE as the first method and software that uses ellipses for automatically drawing accurate area-proportional Venn diagrams for 3-set data. We describe the drawing method adopted by eulerAPE and we discuss our evaluation of the effectiveness of eulerAPE and ellipses for drawing random 3-set data. We compare eulerAPE and various other methods that are currently available and we discuss differences between their generated diagrams in terms of accuracy and ease of understanding for real world data. PMID:25032825

  6. eulerAPE: drawing area-proportional 3-Venn diagrams using ellipses.

    PubMed

    Micallef, Luana; Rodgers, Peter

    2014-01-01

    Venn diagrams with three curves are used extensively in various medical and scientific disciplines to visualize relationships between data sets and facilitate data analysis. The area of the regions formed by the overlapping curves is often directly proportional to the cardinality of the depicted set relation or any other related quantitative data. Drawing these diagrams manually is difficult and current automatic drawing methods do not always produce appropriate diagrams. Most methods depict the data sets as circles, as they perceptually pop out as complete distinct objects due to their smoothness and regularity. However, circles cannot draw accurate diagrams for most 3-set data and so the generated diagrams often have misleading region areas. Other methods use polygons to draw accurate diagrams. However, polygons are non-smooth and non-symmetric, so the curves are not easily distinguishable and the diagrams are difficult to comprehend. Ellipses are more flexible than circles and are similarly smooth, but none of the current automatic drawing methods use ellipses. We present eulerAPE as the first method and software that uses ellipses for automatically drawing accurate area-proportional Venn diagrams for 3-set data. We describe the drawing method adopted by eulerAPE and we discuss our evaluation of the effectiveness of eulerAPE and ellipses for drawing random 3-set data. We compare eulerAPE and various other methods that are currently available and we discuss differences between their generated diagrams in terms of accuracy and ease of understanding for real world data.

  7. Analysis shear wave velocity structure obtained from surface wave methods in Bornova, Izmir

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pamuk, Eren, E-mail: eren.pamuk@deu.edu.tr; Akgün, Mustafa, E-mail: mustafa.akgun@deu.edu.tr; Özdağ, Özkan Cevdet, E-mail: cevdet.ozdag@deu.edu.tr

    2016-04-18

    The properties of the soil down to bedrock must be described accurately and reliably in order to reduce earthquake damage, because seismic waves change their amplitude and frequency content owing to the acoustic impedance difference between soil and bedrock. Firstly, shear wave velocity and depth information for the layers above bedrock are needed to detect this change. Shear wave velocity can be obtained from the inversion of Rayleigh wave dispersion curves obtained from surface wave methods (MASW - Multichannel Analysis of Surface Waves, ReMi - Refraction Microtremor, SPAC - Spatial Autocorrelation). While the investigation depth is limited in active-source studies, passive-source methods are used to reach greater depths that cannot be reached with active-source methods. The ReMi method is used to determine layer thickness and velocity down to about 100 m using seismic refraction measurement systems. With SPAC, the investigation can be carried out to the desired depth, depending on the array radius, and the method is easily applied in urban conditions that restrict the use of other seismic studies. Vs profiles, which are required to calculate deformations under static and dynamic loads, can be obtained with high resolution by combining Rayleigh wave dispersion curves obtained from active- and passive-source methods. In this study, surface wave data were collected using MASW, ReMi and SPAC measurements in the Bornova region of İzmir. The dispersion curves obtained from the surface wave methods were combined over a wide frequency band, and Vs-depth profiles were obtained by inversion. The reliability of the resulting soil profiles was checked by comparing the theoretical transfer function obtained from the soil parameters with the observed soil transfer function from the Nakamura technique, and by examining the fit between these functions. Vs values range between 200 and 830 m/s, and the engineering bedrock (Vs > 760 m/s) depth is approximately 150 m.

  8. Generation of a pseudo-2D shear-wave velocity section by inversion of a series of 1D dispersion curves

    USGS Publications Warehouse

    Luo, Y.; Xia, J.; Liu, J.; Xu, Y.; Liu, Q.

    2008-01-01

    Multichannel Analysis of Surface Waves utilizes a multichannel recording system to estimate near-surface shear (S)-wave velocities from high-frequency Rayleigh waves. A pseudo-2D S-wave velocity (vS) section is constructed by aligning 1D models at the midpoint of each receiver spread and using a spatial interpolation scheme. The horizontal resolution of the section is therefore most influenced by the receiver spread length and the source interval. The receiver spread length sets the theoretical lower limit and any vS structure with its lateral dimension smaller than this length will not be properly resolved in the final vS section. A source interval smaller than the spread length will not improve the horizontal resolution because spatial smearing has already been introduced by the receiver spread. In this paper, we first analyze the horizontal resolution of a pair of synthetic traces. Resolution analysis shows that (1) a pair of traces with a smaller receiver spacing achieves higher horizontal resolution of inverted S-wave velocities but results in a larger relative error; (2) the relative error of the phase velocity at a high frequency is smaller than at a low frequency; and (3) a relative error of the inverted S-wave velocity is affected by the signal-to-noise ratio of data. These results provide us with a guideline to balance the trade-off between receiver spacing (horizontal resolution) and accuracy of the inverted S-wave velocity. We then present a scheme to generate a pseudo-2D S-wave velocity section with high horizontal resolution using multichannel records by inverting high-frequency surface-wave dispersion curves calculated through cross-correlation combined with a phase-shift scanning method. This method chooses only a pair of consecutive traces within a shot gather to calculate a dispersion curve. We finally invert surface-wave dispersion curves of synthetic and real-world data. Inversion results of both synthetic and real-world data demonstrate that inverting high-frequency surface-wave dispersion curves - by a pair of traces through cross-correlation with phase-shift scanning method and with the damped least-square method and the singular-value decomposition technique - can feasibly achieve a reliable pseudo-2D S-wave velocity section with relatively high horizontal resolution. © 2008 Elsevier B.V. All rights reserved.
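
    The pairwise cross-correlation idea can be illustrated with a synthetic sketch: for two traces a distance dx apart, the cross-spectrum phase phi(f) gives a phase velocity c(f) = 2*pi*f*dx/phi(f). The dispersive wavefield below is generated in the frequency domain and is not field data.

```python
import numpy as np

def phase_velocity(trace1, trace2, dx, dt, fmin=5.0, fmax=30.0):
    """Phase-velocity dispersion curve from the cross-spectrum phase of two
    traces recorded dx apart (a pairwise analogue of phase-shift scanning)."""
    n = len(trace1)
    freqs = np.fft.rfftfreq(n, dt)
    cross = np.fft.rfft(trace1) * np.conj(np.fft.rfft(trace2))
    band = (freqs > fmin) & (freqs <= fmax)
    phi = np.unwrap(np.angle(cross[band]))      # inter-trace phase lag
    return freqs[band], 2.0 * np.pi * freqs[band] * dx / phi

# Synthetic dispersive pair: each frequency propagates at c(f) = 150 + 4*f m/s
dt, n = 0.002, 2048
freqs_all = np.fft.rfftfreq(n, dt)

def synth(offset_m):
    spec = np.zeros(freqs_all.size, dtype=complex)
    band = (freqs_all > 5.0) & (freqs_all <= 30.0)
    c = 150.0 + 4.0 * freqs_all[band]
    spec[band] = np.exp(-2j * np.pi * freqs_all[band] * offset_m / c)
    return np.fft.irfft(spec, n)

f, c = phase_velocity(synth(20.0), synth(25.0), dx=5.0, dt=dt)
for i in range(0, f.size, 20):
    print(f"{f[i]:5.1f} Hz  estimated {c[i]:6.1f} m/s   true {150 + 4 * f[i]:6.1f}")
```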

  9. Statistical analysis of radioimmunoassay. In comparison with bioassay (in Japanese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakano, R.

    1973-01-01

    Using RIA (radioimmunoassay) data, statistical procedures for dealing with two problems - the linearization of the dose-response curve and the calculation of relative potency - were described. There are three methods for linearization of the dose-response curve of RIA. In each method, the following quantities are plotted on the horizontal and vertical axes: dose x versus (B/T)^-1; c/(x + c) versus B/T (where c is the dose which makes B/T 50%); and log x versus logit B/T. Among them, the last method seems to be the most practical. The statistical procedures of bioassay were employed to calculate the relative potency of unknown samples compared to the standard samples from the dose-response curves of standard and unknown samples, using the regression coefficient. It is desirable that the relative potency is calculated by plotting more than 5 points on the standard curve and more than 2 points for the unknown samples. To examine the statistical limit of precision of measurement, the LH activity of gonadotropin in urine was measured, and the relative potency, precision coefficient and the upper and lower limits of relative potency at the 95% confidence limit were calculated. On the other hand, bioassay (by the ovarian ascorbic acid reduction method and the anterior lobe of prostate weighing method) was performed on the same samples, and the precision was compared with that of RIA. In these examinations, the upper and lower limits of the relative potency at the 95% confidence limit obtained by RIA were near each other, while in bioassay a considerable difference was observed between the upper and lower limits. The necessity of standardization and systematization of the statistical procedures for increasing the precision of RIA was pointed out. (JA)
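
    A hedged sketch of the "log x versus logit B/T" linearization and a parallel-line relative-potency calculation is given below with made-up counts; it illustrates only the arithmetic, not the original statistical procedure.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def fit_parallel(dose_std, bt_std, dose_unk, bt_unk):
    """Fit logit(B/T) = a + b*log(dose) to standard and unknown with a common
    slope (parallel-line assay) and return the relative potency of the unknown."""
    x_s, y_s = np.log(dose_std), logit(bt_std)
    x_u, y_u = np.log(dose_unk), logit(bt_unk)
    # Design matrix: [log dose, indicator(std), indicator(unk)] -> shared slope b
    X = np.block([[x_s[:, None], np.ones((x_s.size, 1)), np.zeros((x_s.size, 1))],
                  [x_u[:, None], np.zeros((x_u.size, 1)), np.ones((x_u.size, 1))]])
    y = np.concatenate([y_s, y_u])
    b, a_std, a_unk = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.exp((a_unk - a_std) / b))

# Made-up B/T values for a 5-point standard curve and a 3-point unknown
dose_std = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
bt_std = np.array([0.78, 0.66, 0.52, 0.37, 0.25])
dose_unk = np.array([1.0, 2.0, 4.0])
bt_unk = np.array([0.70, 0.57, 0.43])

print("relative potency ~", round(fit_parallel(dose_std, bt_std,
                                               dose_unk, bt_unk), 2))
```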

  10. Estimation and detection information trade-off for x-ray system optimization

    NASA Astrophysics Data System (ADS)

    Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali

    2016-05-01

    X-ray computed tomography (CT) systems perform complex imaging tasks to detect and estimate system parameters; for example, a baggage imaging system performs threat detection and generates reconstructions. This leads to a desire to optimize both the detection and estimation performance of a system, but most metrics focus on only one of these aspects. When making design choices there is a need for a concise metric which considers both detection and estimation information parameters, and then provides the user with the collection of possible optimal outcomes. In this paper a graphical analysis of the Estimation and Detection Information Trade-off (EDIT) will be explored. EDIT produces curves which allow a decision to be made for system optimization based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the estimation information and detection information space, where the user is free to pick their own method of calculating these measures. The user of EDIT can choose any desired figure of merit for detection information and estimation information, and the EDIT curves will then provide the collection of optimal outcomes. The paper first looks at two methods of creating EDIT curves. These curves can be calculated by evaluating a wide variety of systems and finding the optimal system by maximizing a figure of merit. EDIT curves can also be found as an upper bound of the information from a collection of systems. These two methods allow the user to choose a method of calculation which best fits the constraints of their actual system.

  11. A comparative study of different aspects of manipulating ratio spectra applied for ternary mixtures: Derivative spectrophotometry versus wavelet transform

    NASA Astrophysics Data System (ADS)

    Salem, Hesham; Lotfy, Hayam M.; Hassan, Nagiba Y.; El-Zeiny, Mohamed B.; Saleh, Sarah S.

    2015-01-01

    This work represents a comparative study of different aspects of manipulating ratio spectra, which are: double divisor ratio spectra derivative (DR-DD), area under curve of derivative ratio (DR-AUC) and its novel approach, namely area under the curve correction method (AUCCM) applied for overlapped spectra; successive derivative of ratio spectra (SDR) and continuous wavelet transform (CWT) methods. The proposed methods represent different aspects of manipulating ratio spectra of the ternary mixture of Ofloxacin (OFX), Prednisolone acetate (PA) and Tetryzoline HCl (TZH) combined in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitation and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision.

  12. A comparative study of different aspects of manipulating ratio spectra applied for ternary mixtures: derivative spectrophotometry versus wavelet transform.

    PubMed

    Salem, Hesham; Lotfy, Hayam M; Hassan, Nagiba Y; El-Zeiny, Mohamed B; Saleh, Sarah S

    2015-01-25

    This work represents a comparative study of different aspects of manipulating ratio spectra, which are: double divisor ratio spectra derivative (DR-DD), area under curve of derivative ratio (DR-AUC) and its novel approach, namely area under the curve correction method (AUCCM) applied for overlapped spectra; successive derivative of ratio spectra (SDR) and continuous wavelet transform (CWT) methods. The proposed methods represent different aspects of manipulating ratio spectra of the ternary mixture of Ofloxacin (OFX), Prednisolone acetate (PA) and Tetryzoline HCl (TZH) combined in eye drops in the presence of benzalkonium chloride as a preservative. The proposed methods were checked using laboratory-prepared mixtures and were successfully applied for the analysis of pharmaceutical formulation containing the cited drugs. The proposed methods were validated according to the ICH guidelines. A comparative study was conducted between those methods regarding simplicity, limitation and sensitivity. The obtained results were statistically compared with those obtained from the reported HPLC method, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Managing the uncertainties of the streamflow data produced by the French national hydrological services

    NASA Astrophysics Data System (ADS)

    Puechberty, Rachel; Bechon, Pierre-Marie; Le Coz, Jérôme; Renard, Benjamin

    2015-04-01

    The French national hydrological services (NHS) manage the production of streamflow time series throughout the national territory. The hydrological data are made available to end-users through different web applications and the national hydrological archive (Banque Hydro). Providing end-users with qualitative and quantitative information on the uncertainty of the hydrological data is key to allowing them to draw relevant conclusions and make appropriate decisions. Due to technical and organisational issues that are specific to the field of hydrometry, quantifying the uncertainty of hydrological measurements is still challenging and not yet standardized. The French NHS have made progress on building a consistent strategy to assess the uncertainty of their streamflow data. The strategy consists of addressing the uncertainties produced and propagated at each step of the data production with uncertainty analysis tools that are compatible with each other and compliant with international uncertainty guidance and standards. Beyond the necessary research and methodological developments, operational software tools and procedures are absolutely necessary for the data management and uncertainty analysis by field hydrologists. A first challenge is to assess, and if possible reduce, the uncertainty of streamgauging data, i.e. direct stage-discharge measurements. Interlaboratory experiments proved to be a very efficient way to empirically measure the uncertainty of a given streamgauging technique in given measurement conditions. The Q+ method (Le Coz et al., 2012) was developed to improve the uncertainty propagation method proposed in the ISO748 standard for velocity-area gaugings. Both empirical and computed (with Q+) uncertainty values can now be assigned in BAREME, which is the software used by the French NHS for managing streamgauging measurements. A second pivotal step is to quantify the uncertainty related to stage-discharge rating curves and their application to water level records to produce continuous discharge time series. The management of rating curves is also done using BAREME. The BaRatin method (Le Coz et al., 2014) was developed as a Bayesian approach to rating curve development and uncertainty analysis. Since BaRatin accounts for the individual uncertainties of gauging data used to build the rating curve, it was coupled with BAREME. The BaRatin method is still undergoing development and research, in particular to address non-univocal or time-varying stage-discharge relations, due to hysteresis, variable backwater, rating shifts, etc. A new interface including new options is under development. The next steps are now to propagate the uncertainties of water level records, through uncertain rating curves, up to discharge time series and derived variables (e.g. annual mean flow) and statistics (e.g. flood quantiles). Bayesian tools are already available for both tasks but further validation and development is necessary for their integration in the operational data workflow of the French NHS. References: Le Coz, J., Camenen, B., Peyrard, X., Dramais, G., 2012. Uncertainty in open-channel discharges measured with the velocity-area method. Flow Measurement and Instrumentation 26, 18-29. Le Coz, J., Renard, B., Bonnifait, L., Branger, F., Le Boursicaud, R., 2014. Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: a Bayesian approach, Journal of Hydrology, 509, 573-587.

  14. A polymerase chain reaction-coupled high-resolution melting curve analytical approach for the monitoring of monospecificity of avian Eimeria species.

    PubMed

    Kirkpatrick, Naomi C; Blacker, Hayley P; Woods, Wayne G; Gasser, Robin B; Noormohammadi, Amir H

    2009-02-01

    Coccidiosis is a significant disease of poultry caused by different species of Eimeria. Differentiation of Eimeria species is important for the quality control of live attenuated Eimeria vaccines derived from monospecific lines of Eimeria spp. In this study, high-resolution melting (HRM) curve analysis of the amplicons generated from the second internal transcribed spacer of nuclear ribosomal DNA (ITS-2) was used to distinguish between seven pathogenic Eimeria species of chickens, and the results were compared with those obtained with a previously described technique, capillary electrophoresis. Using a series of known monospecific lines of Eimeria species, HRM curve analysis was shown to distinguish between Eimeria acervulina, Eimeria brunetti, Eimeria maxima, Eimeria mitis, Eimeria necatrix, Eimeria praecox and Eimeria tenella. Computerized analysis of the HRM curves and capillary electrophoresis profiles could detect the dominant species in several specimens containing different ratios of E. necatrix and E. maxima and of E. tenella and E. acervulina. The HRM curve analysis identified all of the mixtures as "variation" relative to the reference species, and also identified the minor species in some mixtures. Computerized HRM curve analysis also detected impurities in 21 different possible combinations of the seven Eimeria species. PCR-based HRM curve analysis of the ITS-2 thus provides a powerful tool for the detection and identification of pure Eimeria species. It could also be used as a rapid quality-assurance tool in Eimeria vaccine production to confirm the purity of monospecific lines. HRM curve analysis is rapid and reliable and can be performed in a single test tube in less than 3 h.
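    HRM analysis of this kind generally normalizes the raw fluorescence between pre- and post-melt regions and then compares curves, or their negative derivatives, against reference species. The sketch below illustrates that generic workflow on synthetic sigmoid melt curves; it is not the instrument software used in the study, and the temperature windows and melting temperatures are assumptions.

```python
import numpy as np

def normalize_melt(temps, fluor, pre=(72.0, 74.0), post=(88.0, 90.0)):
    """Scale fluorescence to 100% before melting and 0% after (a common HRM step)."""
    f_pre  = fluor[(temps >= pre[0])  & (temps <= pre[1])].mean()
    f_post = fluor[(temps >= post[0]) & (temps <= post[1])].mean()
    return 100.0 * (fluor - f_post) / (f_pre - f_post)

def melt_temperature(temps, fluor):
    """Tm taken as the peak of the negative first derivative -dF/dT."""
    dfdt = -np.gradient(fluor, temps)
    return temps[np.argmax(dfdt)]

def curve_distance(norm_a, norm_b):
    """Mean absolute difference between two normalized melt curves."""
    return np.mean(np.abs(norm_a - norm_b))

# Synthetic example: two sigmoid melt curves with slightly different Tm values.
temps  = np.linspace(70.0, 92.0, 221)
ref    = 1.0 / (1.0 + np.exp((temps - 80.0) / 0.6))
sample = 1.0 / (1.0 + np.exp((temps - 80.4) / 0.6))

ref_n, sample_n = normalize_melt(temps, ref), normalize_melt(temps, sample)
print("Reference Tm:", melt_temperature(temps, ref_n))
print("Sample Tm   :", melt_temperature(temps, sample_n))
print("Curve distance:", round(curve_distance(ref_n, sample_n), 3))
```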

  15. Enhanced secondary analysis of survival data: reconstructing the data from published Kaplan-Meier survival curves.

    PubMed

    Guyot, Patricia; Ades, A E; Ouwens, Mario J N M; Welton, Nicky J

    2012-02-01

    The results of randomized controlled trials (RCTs) on time-to-event outcomes are usually reported as the median time to event and the Cox hazard ratio. These do not constitute the sufficient statistics required for meta-analysis or cost-effectiveness analysis, and their use in secondary analyses requires strong assumptions that may not have been adequately tested. In order to enhance the quality of secondary data analyses, we propose a method which derives from the published Kaplan-Meier survival curves a close approximation to the original individual patient time-to-event data from which they were generated. We develop an algorithm that maps from digitised curves back to KM data by finding numerical solutions to the inverted KM equations, using, where available, information on the number of events and the numbers at risk. The reproducibility and accuracy of survival probabilities, median survival times and hazard ratios based on reconstructed KM data were assessed by comparing published statistics (survival probabilities, medians and hazard ratios) with statistics based on repeated reconstructions by multiple observers. The validation exercise established that there was no material systematic error and that there was a high degree of reproducibility for all statistics. Accuracy was excellent for survival probabilities and medians; for hazard ratios, reasonable accuracy can only be obtained if at least the numbers at risk or the total number of events are reported. The algorithm is a reliable tool for meta-analysis and cost-effectiveness analyses of RCTs reporting time-to-event data. It is recommended that all RCTs report the numbers at risk and the total number of events alongside KM curves.
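    The core of the reconstruction is inverting the Kaplan-Meier product-limit step S_i = S_{i-1}(1 - d_i/n_i) from the digitised curve coordinates. The sketch below shows only that inversion, under the simplifying assumptions of no censoring and a single reported number at risk; the published algorithm additionally distributes censoring between reported numbers-at-risk, so this is an illustration rather than the authors' code, and the step coordinates are invented.

```python
def reconstruct_events(times, surv, n_at_risk0):
    """Greatly simplified KM inversion: assumes no censoring, so every drop in
    the digitised survival curve is attributed to events. The published
    algorithm also accounts for censoring; this sketch shows only the core step."""
    events, n, s_prev = [], n_at_risk0, 1.0
    for t, s in zip(times, surv):
        # KM step: S_i = S_{i-1} * (1 - d_i / n_i)  =>  d_i = n_i * (1 - S_i / S_{i-1})
        d = int(round(n * (1.0 - s / s_prev)))
        events.append((t, d, n))
        n -= d
        s_prev = s
    return events

# Digitised step coordinates (months, survival probability) -- illustrative values only.
times = [3.0, 7.5, 12.0, 18.0]
surv  = [0.90, 0.75, 0.60, 0.45]
for t, d, n in reconstruct_events(times, surv, n_at_risk0=100):
    print(f"t={t:5.1f}  events={d:3d}  at risk={n:3d}")
```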

  16. The Complex Relationship Between Heavy Storms and Floods: Implications for Stormwater Drainage Design and Management

    NASA Astrophysics Data System (ADS)

    Demissie, Y.; Mortuza, M. R.; Moges, E.; Yan, E.; Li, H. Y.

    2017-12-01

    Due to the lack of historical and future streamflow data for flood frequency analysis at or near most drainage sites, it is common practice to estimate the design flood (the maximum discharge or volume for a given return period) directly from storm frequency analysis and the resulting Intensity-Duration-Frequency (IDF) curves. Such analysis assumes a direct relationship between storms and floods, with, for example, the 10-year rainfall expected to produce the 10-year flood. In reality, however, a storm is just one factor among the many hydrological and meteorological factors that affect the peak flow and hydrograph. Consequently, a heavy storm does not necessarily lead to flooding, or to a flood event of the same frequency. This is evident from the observed difference in the seasonality of heavy storms and floods in most regions. In order to understand the site-specific cause-effect relationship between heavy storms and floods and to improve flood analysis for stormwater drainage design and management, we examined the contributions of the various factors that affect floods using statistical and information-theory methods. Based on the identified dominant cause-effect relationships, hydrologic and probability analyses were conducted to develop runoff IDF curves, taking into consideration snowmelt and rain-on-snow effects, the difference in storm and flood seasonality, soil moisture conditions, and catchment potential for flash and riverine flooding. The approach was demonstrated using data from military installations located in different parts of the United States. The accuracy of the flood frequency analysis and the resulting runoff IDF curves was evaluated against runoff IDF curves developed from streamflow measurements.
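    Flood frequency analysis of this kind typically fits an extreme-value distribution to annual maximum flows and reads off quantiles at the target return periods. The sketch below uses a Gumbel (EV1) fit via SciPy on hypothetical annual maxima; it is a generic illustration, not the statistical and information-theory workflow described in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical annual maximum peak flows (m3/s); illustrative values only.
annual_max = np.array([120, 95, 180, 140, 210, 160, 130, 250, 175, 145,
                       190, 110, 165, 205, 135, 150, 230, 125, 170, 185])

# Fit a Gumbel (EV1) distribution to the annual maxima.
loc, scale = stats.gumbel_r.fit(annual_max)

# Design flood for a T-year return period: quantile at non-exceedance 1 - 1/T.
for T in (2, 10, 25, 50, 100):
    q_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
    print(f"{T:3d}-year flood: {q_T:6.1f} m3/s")
```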

  17. Anatomical curve identification

    PubMed Central

    Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise

    2015-01-01

    Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of the breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods, but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943
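    P-spline smoothing, mentioned above as an integral part of the methods, combines a B-spline basis with a difference penalty on the coefficients (in the Eilers and Marx style). A minimal sketch, assuming SciPy >= 1.8 for BSpline.design_matrix and using synthetic data rather than anatomical profiles:

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_smooth(x, y, n_knots=20, degree=3, lam=1.0):
    """Penalized B-spline (P-spline) smoothing: B-spline basis regression with a
    second-order difference penalty on the coefficients."""
    # Build a padded knot vector over the data range.
    inner = np.linspace(x.min(), x.max(), n_knots)
    t = np.concatenate(([x.min()] * degree, inner, [x.max()] * degree))
    B = BSpline.design_matrix(x, t, degree).toarray()
    # Second-order difference penalty matrix D'D.
    n_coef = B.shape[1]
    D = np.diff(np.eye(n_coef), n=2, axis=0)
    # Penalized normal equations: (B'B + lam * D'D) a = B'y.
    a = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return B @ a

# Synthetic noisy profile standing in for a ridge/valley signal (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.15, x.size)
y_hat = pspline_smooth(x, y, lam=5.0)
print("residual std:", round(float(np.std(y - y_hat)), 3))
```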

  18. Classification of UXO by Principal Dipole Polarizability

    NASA Astrophysics Data System (ADS)

    Kappler, K. N.

    2010-12-01

    Data acquired by multiple-transmitter, multiple-receiver time-domain electromagnetic devices show great potential for determining geometric and compositional information about near-surface conductive targets. Here we present an analysis of data from one such system, the Berkeley Unexploded-ordnance Discriminator (BUD). BUD data are succinctly reduced by processing the multi-static data matrices to obtain magnetic dipole polarizability matrices for each time gate. When viewed over all time gates, the projections of the data onto the principal polarizability axes yield so-called polarizability curves. These curves are especially well suited to discriminating between subsurface conductivity anomalies corresponding to objects with rotational symmetry and those corresponding to irregularly shaped objects. The curves have previously been employed successfully as library elements in a pattern recognition scheme aimed at discriminating harmless scrap metal from dangerous intact unexploded ordnance (UXO). However, previous polarizability-curve matching methods have only been applied at field sites known a priori to be contaminated by a single type of ordnance, and furthermore the particular ordnance present in the subsurface was known to be large; thus signal amplitude was a key element in the discrimination process. The work presented here applies feature-based pattern classification techniques to BUD field data where more than 20 categories of object are present. Data soundings from a calibration grid at the Yuma, AZ proving ground are used in a cross-validation study to calibrate the pattern recognition method. The resulting method is then applied to a Blind Test Grid. Results indicate that, when lone UXO are present and the SNR is reasonably high, polarizability curve matching successfully discriminates UXO from scrap metal even when a broad range of objects is present.
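    Per time gate, the 3x3 polarizability matrix can be diagonalized and its eigenvalues collected across gates to form polarizability curves, which are then compared against library curves. The sketch below illustrates that reduction together with a simple RMS misfit on synthetic matrices; the actual BUD processing and classification scheme is more involved, and all values here are illustrative.

```python
import numpy as np

def polarizability_curves(polar_matrices):
    """Eigen-decompose each per-time-gate 3x3 polarizability matrix and return
    the principal polarizabilities, largest first, as curves over time gates."""
    curves = []
    for m in polar_matrices:
        m_sym = 0.5 * (m + m.T)              # enforce symmetry before decomposition
        eigvals = np.linalg.eigvalsh(m_sym)  # ascending eigenvalues
        curves.append(eigvals[::-1])         # primary polarizability first
    return np.array(curves)                  # shape (n_gates, 3)

def match_score(curves, library_curves):
    """Simple normalized RMS misfit between measured and library polarizability curves."""
    diff = curves - library_curves
    return np.sqrt(np.mean(diff ** 2)) / (np.abs(library_curves).mean() + 1e-12)

# Synthetic decaying polarizabilities for an axially symmetric target (illustrative).
gates = np.arange(1, 11)
library = np.column_stack([5.0 * np.exp(-0.3 * gates),
                           2.0 * np.exp(-0.4 * gates),
                           2.0 * np.exp(-0.4 * gates)])
rng = np.random.default_rng(1)
measured = np.array([np.diag(row) + rng.normal(0, 0.01, (3, 3)) for row in library])

score = match_score(polarizability_curves(measured), library)
print("misfit score:", round(float(score), 4))
```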

  19. Evaluation of methods for characterizing the melting curves of a high temperature cobalt-carbon fixed point to define and determine its melting temperature

    NASA Astrophysics Data System (ADS)

    Lowe, David; Machin, Graham

    2012-06-01

    The future mise en pratique for the realization of the kelvin will be founded on the melting temperatures of particular metal-carbon eutectic alloys as thermodynamic temperature references. However, at the moment there is no consensus on what should be taken as the melting temperature. An ideal melting or freezing curve would be a completely flat plateau at a specific temperature. Any departure from the ideal is due to shortcomings in the realization and should be accommodated within the uncertainty budget. However, for the proposed alloy-based fixed points, melting typically takes place over some hundreds of millikelvins. Including the entire melting range within the uncertainties would lead to an unnecessarily pessimistic view of the utility of these alloys as reference standards. Therefore, detailed analysis of the shape of the melting curve is needed to give a value associated with some identifiable aspect of the phase transition. A range of approaches is used or could be used: some are purely practical, determining the point of inflection (POI) of the melting curve; some attempt to extrapolate to the liquidus temperature just at the end of melting; and one method claims to give the liquidus temperature and an impurity correction based on the analytical Scheil model of solidification, which has not previously been applied to eutectic melting. The different methods were applied to cobalt-carbon melting curves obtained under conditions for which the Scheil model might be valid. In the light of the findings of this study, it is recommended that the POI continue to be used as a pragmatic measure of temperature, but where required a specified-limits approach should be used to define and determine the melting temperature.
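    A pragmatic way to locate the point of inflection is to lightly smooth the temperature-versus-time record and take the extremum of its first derivative within the melting range (where the second derivative changes sign). The sketch below does this on a synthetic plateau; it is not the analysis code used in the study, and the temperature values are illustrative rather than measured, assuming a Co-C eutectic temperature of roughly 1324 °C.

```python
import numpy as np

def point_of_inflection(time_s, temp_c, window=11):
    """Locate the point of inflection of a melting plateau as the minimum of the
    first derivative of a lightly smoothed temperature-versus-time record."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(temp_c, kernel, mode="same")   # simple moving average
    dTdt = np.gradient(smoothed, time_s)
    # During melting the heating rate is suppressed, so the POI is taken here as
    # the minimum of dT/dt away from the smoothed record's edges.
    i = np.argmin(dTdt[window:-window]) + window
    return time_s[i], smoothed[i]

# Synthetic melting curve: slow heating ramp with a flattened region near 1324 °C
# (illustrative of a Co-C eutectic melt; these are not measured data).
t = np.linspace(0, 600, 601)
temp = 1318 + 0.03 * t - 4.0 / (1.0 + np.exp(-(t - 300) / 40.0))
t_poi, temp_poi = point_of_inflection(t, temp)
print(f"POI at t = {t_poi:.0f} s, T = {temp_poi:.2f} C")
```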

  20. Ink dating using thermal desorption and gas chromatography/mass spectrometry: comparison of results obtained in two laboratories.

    PubMed

    Koenig, Agnès; Bügler, Jürgen; Kirsch, Dieter; Köhler, Fritz; Weyermann, Céline

    2015-01-01

    An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aims of this work were to implement this method in a new laboratory and to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to instrument maintenance. Moreover, the aging curves were influenced by ink composition as well as storage conditions (particularly when the samples were not stored under normal room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution to deal with ink sample inhomogeneity. © 2014 American Academy of Forensic Sciences.
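    A slope-based interpretation of the kind mentioned above can be illustrated by fitting a log-linear aging curve to the measured solvent quantity and using its slope to bound the age of a questioned entry. The sketch below uses entirely hypothetical peak-area values and a first-order drying assumption; it illustrates the general idea only and is not the model proposed in the paper.

```python
import numpy as np

# Hypothetical aging curve: relative amount of an ink solvent (e.g. normalized
# 2-phenoxyethanol peak area) sampled at increasing ink ages. Illustrative only.
age_days = np.array([1, 7, 14, 30, 60, 90, 180])
solvent  = np.array([1.00, 0.74, 0.58, 0.41, 0.27, 0.21, 0.12])

# Slope of the log-linear aging curve (first-order drying kinetics assumption).
slope, intercept = np.polyfit(age_days, np.log(solvent), 1)
print(f"aging slope: {slope:.4f} per day (half-life ~ {np.log(2) / -slope:.0f} days)")

# A questioned entry with an observed solvent level can then be compared
# against the fitted curve to bound its plausible age.
observed = 0.33
estimated_age = (np.log(observed) - intercept) / slope
print(f"estimated age for observed level {observed}: ~{estimated_age:.0f} days")
```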
