Sample records for curve number method

  1. Estimating the SCS runoff curve number in forest catchments of Korea

    NASA Astrophysics Data System (ADS)

    Choi, Hyung Tae; Kim, Jaehoon; Lim, Hong-geun

    2016-04-01

    Estimating flood runoff discharge is very important in the design of many hydraulic structures in streams, rivers and lakes, such as dams, bridges and culverts. Many researchers have therefore tried to develop better methods for estimating flood runoff discharge. The SCS runoff curve number is an empirical parameter determined from analysis of runoff from small catchments and hillslope plots monitored by the USDA. The method is an efficient way of determining the approximate amount of runoff from a rainfall event in a particular area, and is very widely used all around the world. However, conditions in Korea and the USA differ considerably in topography, geology and land use. Therefore, the adaptability of the SCS runoff curve number must be examined to raise the accuracy of runoff prediction with the method in Korea. The purpose of this study is to find the SCS runoff curve number based on the analysis of observed data from several experimental forest catchments monitored by the National Institute of Forest Science (NIFOS), as a pilot study to modify the SCS runoff curve number for forest lands in Korea. Rainfall and runoff records observed in the Gwangneung coniferous and broadleaf forests and the Sinwol, Hwasoon, Gongju and Gyeongsan catchments were selected to analyze the variability of flood runoff coefficients during the last 5 years. This study shows that runoff curve numbers of the experimental forest catchments range from 55 to 65. The SCS runoff curve number method is widely used for estimating design discharge for small ungauged watersheds; this study can therefore help estimate discharge for forest watersheds in Korea with greater accuracy.
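
    The SCS-CN relation behind these estimates is compact enough to sketch. Below is a minimal Python illustration of the standard runoff equation with the curve numbers reported above; the 100 mm storm depth and the 0.2 initial-abstraction ratio are conventional illustrative choices, not values from the study.

```python
def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth Q (mm) from storm rainfall P (mm) via the SCS-CN method.

    S is the potential maximum retention; Ia = ia_ratio * S is the initial
    abstraction (0.2 is the conventional default).
    """
    s = 25400.0 / cn - 254.0  # retention S in mm
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: a 100 mm storm over forest catchments with CN 55-65 (illustrative)
for cn in (55, 60, 65):
    print(cn, round(scs_cn_runoff(100.0, cn), 1), "mm")
```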

  2. Development of an improved method for determining advisory speeds on horizontal curves.

    DOT National Transportation Integrated Search

    2016-07-01

    Horizontal curves are an integral part of the highway alignment. However, a disproportionately high number of severe crashes occur on them. One method transportation agencies use to reduce the number of crashes at horizontal curves is the installati...

  3. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    USGS Publications Warehouse

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author

  4. Evolution of the SCS curve number method and its applications to continuous runoff simulation

    USDA-ARS's Scientific Manuscript database

    The Natural Resources Conservation Service (NRCS) [previously Soil Conservation Service (SCS)] developed the SCS runoff curve-number (CN) method for estimating direct runoff from storm rainfall. The NRCS uses the CN method for designing structures and for evaluating their effectiveness. Structural...

  5. Curve numbers for nine mountainous eastern United States watersheds: seasonal variation and forest cutting

    Treesearch

    Negussie H. Tedela; Steven C. McCutcheon; John L. Campbell; Wayne T. Swank; Mary Beth Adams; Todd C. Rasmussen

    2012-01-01

    Many engineers and hydrologists use the curve number method to estimate runoff from ungaged watersheds; however, the method does not explicitly account for the influence of season or forest cutting on runoff. This study of observed rainfall and runoff for small, forested watersheds that span the Appalachian Mountains of the eastern United States showed that curve...

  6. Curve Numbers for Nine Mountainous Eastern United States Watersheds: Seasonal Variation and Forest Cutting

    EPA Science Inventory

    Many engineers and hydrologists use the curve number method to estimate runoff from ungaged watersheds; however, the method does not explicitly account for the influence of season or forest cutting on runoff. This study of observed rainfall and runoff for small, forested watershe...

  7. Runoff Curve Numbers from Ten, Small Forested Watersheds in the Mountains of the Eastern United States

    EPA Science Inventory

    Engineers and hydrologists use the curve number method to estimate runoff from rainfall for different land use and soil conditions; however, large uncertainties occur for estimates from forested watersheds. This investigation evaluates the accuracy and consistency of the method u...

  8. Runoff curve numbers for 10 small forested watersheds in the mountains of the eastern United States

    Treesearch

    Negussie H. Tedela; Steven C. McCutcheon; Todd C. Rasmussen; Richard H. Hawkins; Wayne T. Swank; John L. Campbell; Mary Beth Adams; C. Rhett Jackson; Ernest W. Tollner

    2012-01-01

    Engineers and hydrologists use the curve number method to estimate runoff from rainfall for different land use and soil conditions; however, large uncertainties occur for estimates from forested watersheds. This investigation evaluates the accuracy and consistency of the method using rainfall-runoff series from 10 small forested-mountainous watersheds in the eastern...

  9. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The generated performance curves from both methods are compared with an experimental setup in accordance with the AMCA fan performance testing standard.

  10. Empirical solution of Green-Ampt equation using soil conservation service - curve number values

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Petroselli, A.; Romano, N.

    2012-09-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular, widely used rainfall-runoff model for quantifying the total stream-flow volume generated by storm rainfall, but its application is not appropriate for sub-daily resolutions. In order to overcome this drawback, the Green-Ampt (GA) infiltration equation is considered and an empirical solution is proposed and evaluated. The procedure, named CN4GA (Curve Number for Green-Ampt), calibrates the Green-Ampt model parameters so as to distribute in time the global information provided by the SCS-CN method. The proposed procedure is evaluated by analysing observed rainfall-runoff events; results show that CN4GA seems to agree better with the observed hydrographs than the classic SCS-CN method.
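
    The calibration step can be sketched briefly: the SCS-CN event loss (P minus Q) becomes the target that the Green-Ampt cumulative infiltration must reproduce, with the saturated hydraulic conductivity as the free parameter. The following is a minimal sketch under simplifying assumptions (a single lumped suction-deficit term, explicit time stepping, and illustrative storm and CN values); it is not the authors' implementation.

```python
import numpy as np

def green_ampt_total(rain_mm_h, dt_h, ks, psi_dtheta=50.0):
    """Cumulative Green-Ampt infiltration (mm) for a stepwise hyetograph.

    Capacity f = ks * (1 + psi_dtheta / F); actual intake is limited by the
    rainfall supply. psi_dtheta (suction head x moisture deficit, in mm) is
    an assumed lumped value.
    """
    F = 1e-6  # small seed avoids division by zero at the first step
    for r in rain_mm_h:
        f_cap = ks * (1.0 + psi_dtheta / F)
        F += min(f_cap, r) * dt_h
    return F

def calibrate_ks(rain_mm_h, dt_h, target_loss_mm, lo=0.01, hi=500.0):
    """Bisect on ks until GA total infiltration matches the SCS-CN loss."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if green_ampt_total(rain_mm_h, dt_h, mid) < target_loss_mm:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative event: 12 hours at 10 mm/h, CN = 80.
rain, dt = [10.0] * 12, 1.0
P = sum(rain) * dt
S = 25400.0 / 80.0 - 254.0
Ia = 0.2 * S
Q = (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0
ks = calibrate_ks(rain, dt, target_loss_mm=P - Q)  # loss = P - Q (Ia folded in)
print(f"calibrated Ks ~ {ks:.2f} mm/h")
```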

  11. Fuzzy Multi-Objective Vendor Selection Problem with Modified S-CURVE Membership Function

    NASA Astrophysics Data System (ADS)

    Díaz-Madroñero, Manuel; Peidro, David; Vasant, Pandian

    2010-06-01

    In this paper, the S-Curve membership function methodology is used in a vendor selection (VS) problem. An interactive method for solving multi-objective VS problems with fuzzy goals is developed. The proposed method attempts simultaneously to minimize the total order costs, the number of rejected items and the number of late delivered items with reference to several constraints such as meeting buyers' demand, vendors' capacity, vendors' quota flexibility, vendors' allocated budget, etc. We compare in an industrial case the performance of S-curve membership functions, representing uncertainty goals and constraints in VS problems, with linear membership functions.

  12. Urban stormwater capture curve using three-parameter mixed exponential probability density function and NRCS runoff curve number method.

    PubMed

    Kim, Sangdan; Han, Suhee

    2010-01-01

    Most of the literature on designing urban non-point-source management systems assumes that precipitation event depths follow the one-parameter exponential probability density function, to reduce the mathematical complexity of the derivation process. However, the way rainfall is expressed is the most important factor in analyzing stormwater; thus, a better mathematical expression, which represents the probability distribution of rainfall depths, is suggested in this study. Also, the rainfall-runoff calculation procedure required for deriving a stormwater-capture curve is revised using the U.S. Natural Resources Conservation Service (Washington, D.C.) (NRCS) runoff curve number method, to consider the nonlinearity of the rainfall-runoff relation and, at the same time, to obtain a more verifiable and representative design curve for urban drainage areas with complicated land-use characteristics, such as those in Korea. The result of developing the stormwater-capture curve from the rainfall data in Busan, Korea, confirms that the methodology suggested in this study provides a better solution than the pre-existing one.
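
    The combination the abstract describes lends itself to a short Monte Carlo sketch: sample event depths from a three-parameter mixed exponential, convert each to runoff with the NRCS curve number relation, and tabulate the fraction of runoff a given capture volume would retain. All parameter values below (mixture weight, means, CN, storage sizes) are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three-parameter mixed exponential for event rainfall depth (mm):
# f(x) = w/a * exp(-x/a) + (1-w)/b * exp(-x/b); w, a, b are illustrative.
w, a, b = 0.7, 8.0, 40.0
n = 100_000
depths = np.where(rng.random(n) < w,
                  rng.exponential(a, n), rng.exponential(b, n))

def cn_runoff(p, cn=85.0):
    """Vectorized NRCS curve number runoff (depths in mm)."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    return np.where(p > ia, (p - ia) ** 2 / (p - ia + s), 0.0)

runoff = cn_runoff(depths)
for storage in (2.0, 5.0, 10.0, 20.0):  # capture volume per event, mm
    captured = np.minimum(runoff, storage).sum() / runoff.sum()
    print(f"storage {storage:5.1f} mm -> capture ratio {captured:.2f}")
```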

  13. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and the Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
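
    For the information-criterion route, the procedure reduces to refitting least-squares B-splines with an increasing number of control points and scoring each fit. A minimal SciPy sketch follows; the Gaussian-error AIC/BIC forms and the synthetic profile are assumptions for illustration, and the paper's VC-dimension-based method is not reproduced here.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 400)
y = np.sin(6 * np.pi * x) * np.exp(-x) + rng.normal(0.0, 0.05, x.size)  # mock profile

k = 3  # cubic B-splines

def criteria(n_ctrl):
    """AIC/BIC of a least-squares B-spline with n_ctrl control points."""
    n_interior = n_ctrl - k - 1  # knots between the clamped ends
    t_int = np.linspace(x[0], x[-1], n_interior + 2)[1:-1]
    t = np.r_[[x[0]] * (k + 1), t_int, [x[-1]] * (k + 1)]
    spl = make_lsq_spline(x, y, t, k)
    rss = np.sum((y - spl(x)) ** 2)
    n = x.size
    aic = n * np.log(rss / n) + 2 * n_ctrl
    bic = n * np.log(rss / n) + np.log(n) * n_ctrl
    return aic, bic

for n_ctrl in range(5, 41, 5):
    aic, bic = criteria(n_ctrl)
    print(f"{n_ctrl:3d} control points: AIC={aic:8.1f}  BIC={bic:8.1f}")
```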

  14. Guidelines for application of learning/cost improvement curves

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1975-01-01

    The differences between the terms learning curve and improvement curve are noted, as well as the differences between the Wright system and the Crawford system. Learning curve computational techniques were reviewed along with a method to arrive at a composite learning curve for a system given detail curves either by the functional techniques classification or simply categorized by subsystem. Techniques are discussed for determination of the theoretical first unit (TFU) cost using several of the currently accepted methods. Sometimes TFU cost is referred to as simply number one cost. A tabular presentation of the various learning curve slope values is given. A discussion of the various trends in the application of learning/improvement curves and an outlook for the future are presented.
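
    The unit-cost arithmetic these guidelines cover is easy to illustrate. Below is a short sketch of the power-law cost-improvement model in its Crawford (unit-curve) form; in the Wright system the same slope applies to the cumulative average cost instead. The TFU value and the 85% slope are illustrative.

```python
import math

def unit_cost(n: int, tfu: float, slope: float) -> float:
    """Crawford-form unit cost: cost shrinks by factor `slope` per doubling.

    b = log2(slope), so unit n costs tfu * n**b.
    """
    b = math.log(slope, 2)
    return tfu * n ** b

tfu = 1000.0  # theoretical first unit (TFU) cost, illustrative
for n in (1, 2, 4, 8, 16):
    print(n, round(unit_cost(n, tfu, slope=0.85), 1))
# An 85% curve: each doubling of cumulative units cuts unit cost to 85%.
```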

  15. Curve number method response to historical climate variability and trends

    USDA-ARS's Scientific Manuscript database

    With the dependence on the curve number (CN) model by the engineering community, the question arises as to whether changes in climate may affect the performance of the CN algorithm which impacts estimates of runoff. A study was conducted to determine the effects of “climate period” (period of unifor...

  16. Curve numbers for long-term no-till corn and agricultural practices with high watershed infiltration

    USDA-ARS's Scientific Manuscript database

    The Curve Number (CN) method is an engineering and land management tool for estimating surface runoff from rainstorms. There are few watershed runoff records available during which a no-till crop was growing and hence there are few field-measured CN values. We investigated CN under continuous long-...

  17. Curved-line search algorithm for ab initio atomic structure relaxation

    NASA Astrophysics Data System (ADS)

    Chen, Zhanghui; Li, Jingbo; Li, Shushen; Wang, Lin-Wang

    2017-09-01

    Ab initio atomic relaxations often take large numbers of steps and long times to converge, especially when the initial atomic configurations are far from the local minimum or there are curved and narrow valleys in the multidimensional potentials. An atomic relaxation method based on on-the-fly force learning and a corresponding curved-line search algorithm is presented to accelerate this process. Results demonstrate the superior performance of this method for metal and magnetic clusters when compared with the conventional conjugate-gradient method.

  18. Single curved fiber sedimentation under gravity

    Treesearch

    Xiaoying Rong; Dewei Qi; Junyong Zhu

    2005-01-01

    Dynamics of single curved fiber sedimentation under gravity are simulated by using the lattice Boltzmann method. The results of migration and rotation of the curved fiber at different Reynolds numbers are reported. The results show that the rotation and migration processes are sensitive to the curvature of the fiber.

  19. Single curved fiber sedimentation under gravity

    Treesearch

    Xiaoying Rong; Dewei Qi; Guowei He; Jun Yong Zhu; Tim Scott

    2008-01-01

    Dynamics of single curved fiber sedimentation under gravity are simulated by using the lattice Boltzmann method. The results of migration and rotation of the curved fiber at different Reynolds numbers are reported. The results show that the rotation and migration processes are sensitive to the curvature of the fiber.

  20. Soil conservation service curve number: How to take into account spatial and temporal variability

    NASA Astrophysics Data System (ADS)

    Rianna, M.; Orlando, D.; Montesarchio, V.; Russo, F.; Napolitano, F.

    2012-09-01

    The most commonly used method to evaluate rainfall excess is the Soil Conservation Service (SCS) runoff curve number model. This method is based on the determination of the CN value, which is linked with a hydrological soil group, cover type, treatment, hydrologic condition and antecedent runoff condition. In the standard procedure, the rainfall over the entire basin during the five days preceding the event is used to determine the antecedent moisture condition (AMC), which in turn yields the correct curve number value. The value of the modified parameter is then kept constant throughout the whole event. The aim of this work is to evaluate the possibility of improving the curve number method. The various assumptions focus on modifying those related to rainfall and the determination of the AMC condition, and on their role in determining the value of the curve number parameter. In order to consider spatial variability, we assumed that the rainfall which influences the AMC and the CN value is not the rainfall over the entire basin, but the rainfall within each single cell into which the basin domain is discretized. Furthermore, in order to consider the temporal variability of rainfall, we assumed that the CN value of the single cell is not kept constant during the whole event, but instead varies throughout it according to the time interval used to define the AMC conditions.
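
    The AMC adjustment at the heart of this scheme is a threshold test on 5-day antecedent rainfall plus two conversion formulas, applied here per cell and per time interval. A minimal sketch follows; the rainfall thresholds and the CN(I)/CN(III) conversions are the conventional handbook values, not something derived in this abstract.

```python
def cn_for_amc(cn2: float, p5_mm: float, growing_season: bool = True) -> float:
    """Adjust a tabulated AMC-II curve number for antecedent 5-day rainfall.

    Thresholds (mm) are the conventional growing/dormant season limits;
    the CN(I)/CN(III) conversions are the standard handbook formulas.
    """
    dry, wet = (35.6, 53.3) if growing_season else (12.7, 27.9)
    if p5_mm < dry:    # AMC I (dry)
        return 4.2 * cn2 / (10.0 - 0.058 * cn2)
    if p5_mm > wet:    # AMC III (wet)
        return 23.0 * cn2 / (10.0 + 0.13 * cn2)
    return cn2         # AMC II (average)

# Per-cell use, as the abstract suggests: adjust each grid cell separately.
print(round(cn_for_amc(75.0, 60.0), 1))  # wet antecedent conditions -> higher CN
```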

  1. Finding Planets in K2: A New Method of Cleaning the Data

    NASA Astrophysics Data System (ADS)

    Currie, Miles; Mullally, Fergal; Thompson, Susan E.

    2017-01-01

    We present a new method of removing systematic flux variations from K2 light curves by employing a pixel-level principal component analysis (PCA). This method decomposes the light curves into their principal components (eigenvectors), each with an associated eigenvalue whose magnitude reflects how much influence the basis vector has on the shape of the light curve. This method assumes that the most influential basis vectors correspond to the unwanted systematic variations in the light curve produced by K2's constant motion. We correct the raw light curve by automatically fitting and removing the strongest principal components, which generally correspond to the flux variations that result from the motion of the star in the field of view. Our primary method for choosing how many principal components to remove estimates the noise by measuring the scatter in the light curve after Savitzky-Golay detrending, which yields the combined photometric precision value (SG-CDPP value) used in classic Kepler. We calculate this value after correcting the raw light curve for each element in a list of cumulative sums of principal components, so that we have as many noise estimates as there are principal components. We then take the derivative of the list of SG-CDPP values and select the number of principal components at which the derivative effectively goes to zero. This is the optimal number of principal components to exclude from the refitting of the light curve. We find that a pixel-level PCA is sufficient for cleaning unwanted systematic and natural noise from K2's light curves. We present preliminary results and a basic comparison to other methods of reducing the noise from the flux variations.
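
    The fit-and-remove step is a small amount of linear algebra. Below is a minimal NumPy sketch of pixel-level PCA detrending on synthetic data; the component count is fixed here rather than chosen by the SG-CDPP derivative criterion the abstract describes.

```python
import numpy as np

def pca_detrend(pixel_flux: np.ndarray, n_remove: int) -> np.ndarray:
    """Remove the strongest principal components from a summed light curve.

    pixel_flux: (n_cadences, n_pixels) array of per-pixel time series. The
    leading PCs are assumed to trace the spacecraft-motion systematics.
    """
    X = pixel_flux - pixel_flux.mean(axis=0)   # center each pixel series
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    basis = U[:, :n_remove]                    # strongest temporal eigenvectors
    lc = pixel_flux.sum(axis=1)
    lc_centered = lc - lc.mean()
    fit = basis @ (basis.T @ lc_centered)      # least-squares fit (orthonormal basis)
    return lc_centered - fit + lc.mean()

# Synthetic demo: a shared motion systematic plus per-pixel noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500)
systematic = np.sin(40 * t)
pix = 100 + systematic[:, None] * rng.random(20) + rng.normal(0, 0.1, (500, 20))
clean = pca_detrend(pix, n_remove=2)
```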

  2. Applications of species accumulation curves in large-scale biological data analysis.

    PubMed

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2015-09-01

    The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k -mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
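
    The underlying object is easy to compute empirically once the samples are in hand; the authors' contribution is extrapolating beyond them. A minimal sketch of the empirical accumulation curve itself is below (the Zipf-distributed mock community is an assumption for illustration; the Good-Toulmin rational-function extrapolation is not reproduced here).

```python
import numpy as np

def accumulation_curve(labels, n_points=20, n_reps=50, seed=0):
    """Empirical species accumulation: expected distinct classes vs. sample size.

    labels: one entry per observed individual (e.g. a species id per capture).
    Averages over random orderings; this is the empirical curve itself, not
    an extrapolation beyond the observed sampling effort.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    sizes = np.linspace(1, labels.size, n_points, dtype=int)
    curve = np.zeros(n_points)
    for _ in range(n_reps):
        perm = rng.permutation(labels)
        for i, m in enumerate(sizes):
            curve[i] += len(set(perm[:m]))
    return sizes, curve / n_reps

# Demo with a skewed community: a few common classes, many rare ones.
rng = np.random.default_rng(3)
pop = rng.zipf(1.8, 5000)
sizes, curve = accumulation_curve(pop)
print(list(zip(sizes[:5], curve[:5].round(1))))
```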

  3. Applications of species accumulation curves in large-scale biological data analysis

    PubMed Central

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2016-01-01

    The species accumulation curve, or collector’s curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45–63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible. PMID:27252899

  4. The combined use of Green-Ampt model and Curve Number method as an empirical tool for loss estimation

    NASA Astrophysics Data System (ADS)

    Petroselli, A.; Grimaldi, S.; Romano, N.

    2012-12-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but its use is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed; it includes the Green-Ampt (GA) infiltration model and aims to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is here applied to a real case study and a sensitivity analysis concerning the remaining parameters is presented; results show that the CN4GA approach is an ideal candidate for rainfall excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.

  5. Data preparation for functional data analysis of PM10 in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Shaadan, Norshahida; Jemain, Abdul Aziz; Deni, Sayang Mohd

    2014-07-01

    The use of curves or functional data in study analysis is increasingly gaining momentum in various fields of research. The statistical method to analyze such data is known as functional data analysis (FDA). The first step in FDA is to convert the observed data points, which are repeatedly recorded over a period of time or space, into either a rough (raw) or smooth curve. In the case of the smooth curve, basis function expansion is one of the methods used for the data conversion. The data can be converted into a smooth curve by using either the regression smoothing or the roughness penalty smoothing approach. With the regression smoothing approach, the degree of the curve's smoothness depends strongly on the number k of basis functions; meanwhile, for the roughness penalty approach, the smoothness depends on a roughness coefficient given by the parameter λ. Based on previous studies, researchers often used the rather time-consuming trial-and-error or cross-validation method to estimate the appropriate number of basis functions. Thus, this paper proposes a statistical procedure to construct functional data or curves for hourly and daily recorded data. The Bayesian Information Criterion is used to determine the number of basis functions, while the Generalized Cross Validation criterion is used to identify the parameter λ. The proposed procedure is then applied to a ten-year (2001-2010) period of PM10 data from 30 air quality monitoring stations located in Peninsular Malaysia. It was found that the number of basis functions required for the construction of the PM10 daily curve in Peninsular Malaysia was in the interval between 14 and 20 with an average value of 17; the first percentile is 15 and the third percentile is 19. Meanwhile, the initial value of the roughness coefficient was in the interval between 10^-5 and 10^-7 and the mode was 10^-6. An example of the functional descriptive analysis is also shown.

  6. Quantification and Qualification of Bacteria Trapped in Chewed Gum

    PubMed Central

    Wessel, Stefan W.; van der Mei, Henny C.; Morando, David; Slomp, Anje M.; van de Belt-Gritter, Betsy; Maitra, Amarnath; Busscher, Henk J.

    2015-01-01

    Chewing of gum contributes to the maintenance of oral health. Many oral diseases, including caries and periodontal disease, are caused by bacteria. However, it is unknown whether chewing of gum can remove bacteria from the oral cavity. Here, we hypothesize that chewing of gum can trap bacteria and remove them from the oral cavity. To test this hypothesis, we developed two methods to quantify numbers of bacteria trapped in chewed gum. In the first method, known numbers of bacteria were finger-chewed into gum and chewed gums were molded to standard dimensions, sonicated and plated to determine numbers of colony-forming-units incorporated, yielding calibration curves of colony-forming-units retrieved versus finger-chewed in. In a second method, calibration curves were created by finger-chewing known numbers of bacteria into gum and subsequently dissolving the gum in a mixture of chloroform and tris-ethylenediaminetetraacetic-acid (TE)-buffer. The TE-buffer was analyzed using quantitative Polymerase-Chain-Reaction (qPCR), yielding calibration curves of total numbers of bacteria versus finger-chewed in. Next, five volunteers were requested to chew gum up to 10 min after which numbers of colony-forming-units and total numbers of bacteria trapped in chewed gum were determined using the above methods. The qPCR method, involving both dead and live bacteria yielded higher numbers of retrieved bacteria than plating, involving only viable bacteria. Numbers of trapped bacteria were maximal during initial chewing after which a slow decrease over time up to 10 min was observed. Around 10^8 bacteria were detected per gum piece depending on the method and gum considered. The number of species trapped in chewed gum increased with chewing time. Trapped bacteria were clearly visualized in chewed gum using scanning-electron-microscopy. Summarizing, using novel methods to quantify and qualify oral bacteria trapped in chewed gum, the hypothesis is confirmed that chewing of gum can trap and remove bacteria from the oral cavity. PMID:25602256

  7. Elliptic Curve Integral Points on y^2 = x^3 + 3x - 14

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhong

    2018-03-01

    The positive integer points and integral points of elliptic curves are very important in number theory and arithmetic algebra, and they have a wide range of applications in cryptography and other fields. There are some results on the positive integer points of the elliptic curve y^2 = x^3 + ax + b, a, b ∈ Z. In 1987, D. Zagier posed the question of the integer points on y^2 = x^3 - 27x + 62, which counts a great deal in the study of the arithmetic properties of elliptic curves. In 2009, Zhu H L and Chen J H solved the problem of the integer points on y^2 = x^3 - 27x + 62 by using algebraic number theory and the p-adic analysis method. In 2010, using the elementary method, Wu H M obtained all the integral points of the elliptic curve y^2 = x^3 - 27x - 62. In 2015, Li Y Z and Cui B J solved the problem of the integer points on y^2 = x^3 - 21x - 90 by the elementary method. In 2016, Guo J solved the problem of the integer points on y^2 = x^3 + 27x + 62 by the elementary method. In 2017, Guo J proved by the elementary method that y^2 = x^3 - 21x + 90 has no integer points. Up to now, there have been no relevant conclusions on the integral points of the elliptic curve y^2 = x^3 + 3x - 14, which is the subject of this paper. By using congruences and the Legendre symbol, it can be proved that the elliptic curve y^2 = x^3 + 3x - 14 has only one integer point: (x, y) = (2, 0).
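
    The claimed result is easy to spot-check numerically over a finite window; the proof itself needs the congruence and Legendre-symbol argument to exclude all larger x, so the bounds below are arbitrary.

```python
# Exhaustive check of y^2 = x^3 + 3x - 14 over a finite window. For x >= 2
# the right-hand side is non-negative and strictly increasing, but only the
# paper's argument rules out integer points beyond any finite bound.
from math import isqrt

points = []
for x in range(-20, 10_000):
    rhs = x**3 + 3*x - 14
    if rhs < 0:
        continue
    y = isqrt(rhs)
    if y * y == rhs:
        points.extend([(x, y)] if y == 0 else [(x, y), (x, -y)])
print(points)  # [(2, 0)]
```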

  8. Thermoluminescence glow curve deconvolution and trapping parameters determination of dysprosium doped magnesium borate glass

    NASA Astrophysics Data System (ADS)

    Salama, E.; Soliman, H. A.

    2018-07-01

    In this paper, thermoluminescence glow curves of gamma-irradiated magnesium borate glass doped with dysprosium were studied. The number of interfering peaks, and in turn the number of electron trap levels, is determined using the Repeated Initial Rise (RIR) method. At different heating rates (β), the glow curves were deconvoluted into two interfering peaks based on the results of the RIR method. Kinetic parameters such as trap depth, kinetic order (b) and frequency factor (s) for each electron trap level are determined using the Peak Shape (PS) method. The obtained results indicate that magnesium borate glass doped with dysprosium has two electron trap levels with average depth energies of 0.63 and 0.79 eV, respectively. These two traps follow second-order kinetics and are formed in the low-temperature region. The results of the glow curve analysis could be used to explain some observed properties, such as the high thermal fading and light sensitivity of this thermoluminescence material. In this work, systematic procedures to determine the kinetic parameters of any thermoluminescence material are successfully introduced.
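
    The initial-rise idea that RIR builds on fits in a few lines: on the low-temperature tail of a peak, intensity follows I ∝ exp(-E/kT) regardless of kinetic order, so a line fit of ln I against 1/(kT) gives the trap depth. A minimal sketch with synthetic data follows; the RIR method repeats such fits after successively cleaning peaks, which is not shown.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def initial_rise_trap_depth(T, I):
    """Trap depth E (eV) from the low-temperature tail of a TL glow peak.

    On the initial rise, I(T) ~ C*exp(-E/(kT)) independent of kinetic order,
    so ln(I) vs 1/(kT) is a line of slope -E.
    """
    slope, _ = np.polyfit(1.0 / (K_B * np.asarray(T)), np.log(I), 1)
    return -slope

# Synthetic check: a tail generated with E = 0.63 eV is recovered.
T = np.linspace(330, 370, 25)
I = 1e12 * np.exp(-0.63 / (K_B * T))
print(round(initial_rise_trap_depth(T, I), 3))  # ~0.63
```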

  9. In situ method for estimating cell survival in a solid tumor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfieri, A.A.; Hahn, E.W.

    1978-09-01

    The response of the murine Meth-A fibrosarcoma to single and fractionated doses of x-irradiation, actinomycin D chemotherapy, and/or concomitant local tumor hyperthermia was assayed with the use of an in situ method for estimating cell kill within a solid tumor. The cell survival assay was based on a standard curve plotting the number of inoculated viable cells, with and without radiation-inactivated homologous tumor cells, versus the time required for i.m. tumors to grow to 1.0 cu cm. The time for post-treatment tumors to grow to 1.0 cu cm was cross-referenced to the standard curve, and the number of surviving cells contributing to tumor regrowth was estimated. The resulting surviving fraction curves closely resemble those obtained with in vitro systems.

  10. Curve number derivation for watersheds draining two headwater streams in lower coastal plain South Carolina, USA

    Treesearch

    Thomas H. Epps; Daniel R. Hitchcock; Anand D. Jayakaran; Drake R. Loflin; Thomas M. Williams; Devendra M. Amatya

    2013-01-01

    The objective of this study was to assess curve number (CN) values derived for two forested headwater catchments in the Lower Coastal Plain (LCP) of South Carolina using a three-year period of storm event rainfall and runoff data in comparison with results obtained from CN method calculations. Derived CNs from rainfall/runoff pairs ranged from 46 to 90 for the Upper...
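
    Deriving a CN from an observed rainfall/runoff pair, as done here, amounts to inverting the runoff equation for the retention S. A minimal sketch using Hawkins' closed-form inversion follows; the event pairs are illustrative, not the study's data.

```python
from math import sqrt

def derived_cn(p_mm: float, q_mm: float) -> float:
    """Back-calculate a curve number from an observed event (P, Q) pair.

    Inverts Q = (P - 0.2S)^2 / (P + 0.8S) for S (Hawkins' closed form),
    then CN = 25400 / (S + 254) with depths in mm.
    """
    s = 5.0 * (p_mm + 2.0 * q_mm - sqrt(4.0 * q_mm**2 + 5.0 * p_mm * q_mm))
    return 25400.0 / (s + 254.0)

# Illustrative event pairs (mm), not the study's data:
for p, q in [(80.0, 5.0), (80.0, 20.0), (120.0, 60.0)]:
    print(p, q, round(derived_cn(p, q), 1))
```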

  11. Turbine blade profile design method based on Bezier curves

    NASA Astrophysics Data System (ADS)

    Alexeev, R. A.; Tishchenko, V. A.; Gribin, V. G.; Gavrilov, I. Yu.

    2017-11-01

    In this paper, a technique for two-dimensional parametric blade profile design is presented. Bezier curves are used to create the profile geometry. The main feature of the proposed method is an adaptive approach to fitting curves to given geometric conditions. Calculation of the profile shape is performed by a multi-dimensional minimization method with a number of restrictions imposed on the blade geometry. The proposed method has been used to describe the parametric geometry of a known blade profile. The baseline geometry was then modified by varying some parameters of the blade. Numerical calculation of the obtained designs has been carried out. The results of the calculations show the efficiency of the chosen approach.

  12. Soil Conservation Service Curve Number method: How to mend a wrong soil moisture accounting procedure?

    NASA Astrophysics Data System (ADS)

    Michel, Claude; Andréassian, Vazken; Perrin, Charles

    2005-02-01

    This paper unveils major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure. Our findings are based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation. It is shown that several flaws plague the original SCS-CN procedure, the most important one being a confusion between intrinsic parameter and initial condition. A change of parameterization and a more complete assessment of the initial condition lead to a renewed SCS-CN procedure, while keeping the acknowledged efficiency of the original method.

  13. Rethinking non-inferiority: a practical trial design for optimising treatment duration.

    PubMed

    Quartagno, Matteo; Walker, A Sarah; Carpenter, James R; Phillips, Patrick PJ; Parmar, Mahesh KB

    2018-06-01

    Background Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results A total sample size of ~ 500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.

  14. An approach for mapping the number and distribution of Salmonella contamination on the poultry carcass.

    PubMed

    Oscar, T P

    2008-09-01

    Mapping the number and distribution of Salmonella on poultry carcasses will help guide better design of processing procedures to reduce or eliminate this human pathogen from poultry. A selective plating medium with multiple antibiotics (xylose-lysine agar medium [XL] containing N-(2-hydroxyethyl)piperazine-N'-(2-ethanesulfonic acid) and the antibiotics chloramphenicol, ampicillin, tetracycline, and streptomycin [XLH-CATS]) and a multiple-antibiotic-resistant strain (ATCC 700408) of Salmonella Typhimurium definitive phage type 104 (DT104) were used to develop an enumeration method for mapping the number and distribution of Salmonella Typhimurium DT104 on the carcasses of young chickens in the Cornish game hen class. The enumeration method was based on the concept that the time to detection by drop plating on XLH-CATS during incubation of whole chicken parts in buffered peptone water would be inversely related to the initial log number (N0) of Salmonella Typhimurium DT104 on the chicken part. The sampling plan for mapping involved dividing the chicken into 12 parts, which ranged in average size from 36 to 80 g. To develop the enumeration method, whole parts were spot inoculated with 0 to 6 log Salmonella Typhimurium DT104, incubated in 300 ml of buffered peptone water, and detected on XLH-CATS by drop plating. An inverse relationship between detection time on XLH-CATS and N0 was found (r = -0.984). The standard curve was similar for the individual chicken parts, and therefore a single standard curve for all 12 chicken parts was developed. The final standard curve, which contained a 95% prediction interval for providing stochastic results for N0, had high goodness of fit (r^2 = 0.968) and was N0 (log) = 7.78 +/- 0.61 - (0.995 x detection time). Ninety-five percent of N0 were within +/- 0.61 log of the standard curve. The enumeration method and sampling plan will be used in future studies to map changes in the number and distribution of Salmonella on carcasses of young chickens fed the DT104 strain used in standard curve development and subjected to different processing procedures.
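
    The fitted standard curve can be applied directly. A tiny sketch follows; the detection-time unit is assumed to be hours, which the abstract does not state.

```python
def salmonella_log_n0(detection_time_h: float) -> tuple[float, float, float]:
    """Initial log count from time-to-detection, per the fitted standard curve.

    N0 (log CFU) = 7.78 - 0.995 * DT, with a +/- 0.61 log 95% prediction
    interval (coefficients from the abstract; DT in hours is assumed).
    """
    n0 = 7.78 - 0.995 * detection_time_h
    return n0 - 0.61, n0, n0 + 0.61

print(salmonella_log_n0(5.0))  # ~ (2.2, 2.8, 3.4) log CFU
```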

  15. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    NASA Astrophysics Data System (ADS)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and the Green-Ampt (GA) infiltration model, we have developed a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model, so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events, showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions, so the GA soil hydraulic parameters are expected to be insensitive to the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetographs and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the limited parameter sensitivity makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.

  16. Transition to Turbulence in curved pipe

    NASA Astrophysics Data System (ADS)

    Hashemi, Amirreza; Loth, Francis

    2014-11-01

    Studies have shown that transitional turbulence in a curved pipe is delayed significantly compared with straight pipes. These analytical, numerical and experimental studies employed a helical geometry that is effectively infinitely long, such that the effects of the inlet and outlet can be neglected. The present study examined transition to turbulence in a finite curved pipe with a straight inlet/outlet: a 180-degree curved pipe with a constant radius of curvature and diameter (D). We employed large-scale direct numerical simulation (DNS) using the spectral element method, nek5000, to simulate the flow field within the curved pipe geometry at different curvature radii and Reynolds numbers to determine the point of transition to turbulence. Long extensions at the inlet (5D) and outlet (20D) were used to diminish the effect of the boundary conditions. Our numerical results for radii of curvature of 1.5D and 5D show transition to turbulence near Re = 3000. This is delayed compared with a straight pipe (Re = 2200) but still less than that observed for helical geometries (Reynolds numbers less than 5000). Our research aims to describe the critical Reynolds number for transition to turbulence for a finite curved pipe at various curvature radii.

  17. Aerodynamic calculational methods for curved-blade Darrieus VAWT WECS

    NASA Astrophysics Data System (ADS)

    Templin, R. J.

    1985-03-01

    Calculation of aerodynamic performance and load distributions for curved-blade wind turbines is discussed. Double multiple stream tube theory is considered, together with the uncertainties that remain in developing adequate methods. The lack of relevant airfoil data at high Reynolds numbers and high angles of attack, and doubts concerning the accuracy of models of dynamic stall, are underlined. Wind tunnel tests of blade airbrake configurations are summarized.

  18. Is larger scoliosis curve magnitude associated with increased perioperative health-care resource utilization?: a multicenter analysis of 325 adolescent idiopathic scoliosis curves.

    PubMed

    Miyanji, Firoz; Slobogean, Gerard P; Samdani, Amer F; Betz, Randal R; Reilly, Christopher W; Slobogean, Bronwyn L; Newton, Peter O

    2012-05-02

    The treatment of patients with large adolescent idiopathic scoliosis curves has been associated with increased surgical complexity. The purpose of this study was to determine whether surgical correction of larger adolescent idiopathic scoliosis curves increased the utilization of health-care resources and to identify potential predictors associated with increased perioperative health-care resource utilization. A nested cohort of patients with adolescent idiopathic scoliosis with Lenke type 1A and 1B curves was identified from a prospective longitudinal multicenter database. Four perioperative outcomes were selected as the primary health-care resource utilization outcomes of interest: operative time, number of vertebral levels instrumented, duration of hospitalization, and allogeneic blood transfusion. The effect of curve magnitude on these outcomes was assessed with use of univariate and multivariate regression. Three hundred and twenty-five patients with a mean age of 15 ± 2 years were included. The mean main thoracic curve was 54.4° ± 7.8°. Larger curves were associated with longer operative time (p = 0.03), a greater number of vertebral levels instrumented (p = 0.0005), and the need for blood transfusion (with every 10° increase associated with 1.5 times higher odds of receiving a transfusion). In addition to curve magnitude, surgical center, bone graft method, and upper and lower instrumented levels were strong predictors of operative time (R^2 = 0.76). The duration of hospitalization was influenced by the surgical center and intraoperative blood loss (R^2 < 0.4), whereas the number of levels instrumented was influenced by the curve magnitude, curve correction percentage, upper instrumented vertebra, and surgical center (R^2 = 0.64). Correction of larger curves was associated with increased utilization of perioperative health-care resources, specifically longer operative time, a greater number of vertebral levels instrumented, and higher odds of receiving a blood transfusion.

  19. Mixing the Green-Ampt model and Curve Number method as an empirical tool for rainfall excess estimation in small ungauged catchments.

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Petroselli, A.; Romano, N.

    2012-04-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model that is widely used to estimate direct runoff from small and ungauged basins. The SCS-CN method is a simple and valuable approach for estimating the total stream-flow volume generated by a storm rainfall, but it was developed to be used with daily rainfall data. To overcome this drawback, we propose to include the Green-Ampt (GA) infiltration model in a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), which aims to distribute in time the information provided by the SCS-CN method so as to provide estimates of sub-daily incremental rainfall excess. For a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model. The proposed procedure was evaluated by analyzing 100 rainfall-runoff events observed in four small catchments of varying size. CN4GA appears to be an encouraging tool for predicting net rainfall peak and duration values and has shown, at least for the test cases considered in this study, better agreement with observed hydrographs than the classic SCS-CN method.

  20. Comparison of geometric morphometric outline methods in the discrimination of age-related differences in feather shape

    PubMed Central

    Sheets, H David; Covino, Kristen M; Panasiewicz, Joanna M; Morris, Sara R

    2006-01-01

    Background Geometric morphometric methods of capturing information about curves or outlines of organismal structures may be used in conjunction with canonical variates analysis (CVA) to assign specimens to groups or populations based on their shapes. This methodological paper examines approaches to optimizing the classification of specimens based on their outlines. This study examines the performance of four approaches to the mathematical representation of outlines and two different approaches to curve measurement as applied to a collection of feather outlines. A new approach to the dimension reduction necessary to carry out a CVA on this type of outline data with modest sample sizes is also presented, and its performance is compared to two other approaches to dimension reduction. Results Two semi-landmark-based methods, bending energy alignment and perpendicular projection, are shown to produce roughly equal rates of classification, as do elliptical Fourier methods and the extended eigenshape method of outline measurement. Rates of classification were not highly dependent on the number of points used to represent a curve or the manner in which those points were acquired. The new approach to dimensionality reduction, which utilizes a variable number of principal component (PC) axes, produced higher cross-validation assignment rates than either the standard approach of using a fixed number of PC axes or a partial least squares method. Conclusion Classification of specimens based on feather shape was not highly dependent of the details of the method used to capture shape information. The choice of dimensionality reduction approach was more of a factor, and the cross validation rate of assignment may be optimized using the variable number of PC axes method presented herein. PMID:16978414
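
    The dimension-reduction choice the authors highlight can be prototyped quickly: score each candidate number of retained PC axes by cross-validated assignment and keep the best. A minimal scikit-learn sketch in that spirit follows (synthetic stand-in data; LDA here plays the role of the CVA classifier, and nothing below reproduces the paper's feather data or exact procedure).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
# Mock outline data: 60 specimens x 40 shape variables, two age groups.
X = rng.normal(size=(60, 40))
y = np.repeat([0, 1], 30)
X[y == 1, :3] += 0.8  # group difference confined to a few directions

# Variable number of PC axes: pick the count with the best CV assignment rate.
best = max(
    range(2, 21),
    key=lambda k: cross_val_score(
        make_pipeline(PCA(n_components=k), LinearDiscriminantAnalysis()),
        X, y, cv=5).mean(),
)
print("PC axes retained:", best)
```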

  1. Parasitic worms: how many really?

    PubMed

    Strona, Giovanni; Fattorini, Simone

    2014-04-01

    Accumulation curves are useful tools to estimate species diversity. Here we argue that they can also be used in the study of global parasite species richness. Although this basic idea is not completely new, our approach differs from previous ones as it treats each host species as an independent sample. We show that randomly resampling host-parasite records from the existing databases makes it possible to empirically model the relationship between the number of investigated host species and the corresponding number of parasite species retrieved from those hosts. This method was tested on 21 inclusive lists of parasitic worms occurring on vertebrate hosts. All of the obtained models conform well to a power law curve. These curves were then used to estimate global parasite species richness. Results obtained with the new method suggest that current predictions are likely to severely overestimate parasite diversity.

  2. An Improved Quantitative Real-Time PCR Assay for the Enumeration of Heterosigma akashiwo (Raphidophyceae) Cysts Using a DNA Debris Removal Method and a Cyst-Based Standard Curve.

    PubMed

    Kim, Joo-Hwan; Kim, Jin Ho; Wang, Pengbin; Park, Bum Soo; Han, Myung-Soo

    2016-01-01

    The identification and quantification of Heterosigma akashiwo cysts in sediments by light microscopy can be difficult due to the small size and morphology of the cysts, which are often indistinguishable from those of other types of algae. Quantitative real-time PCR (qPCR) based assays represent a potentially efficient method for quantifying the abundance of H. akashiwo cysts, although standard curves must be based on cyst DNA rather than on vegetative cell DNA due to differences in gene copy number and DNA extraction yield between these two cell types. Furthermore, qPCR on sediment samples can be complicated by the presence of extracellular DNA debris. To solve these problems, we constructed a cyst-based standard curve and developed a simple method for removing DNA debris from sediment samples. This cyst-based standard curve was compared with a standard curve based on vegetative cells, as vegetative cells may have twice the gene copy number of cysts. To remove DNA debris from the sediment, we developed a simple method involving dilution with distilled water and heating at 75°C. A total of 18 sediment samples were used to evaluate this method. Cyst abundance determined using the qPCR assay without DNA debris removal yielded results up to 51-fold greater than with direct counting. By contrast, a highly significant correlation was observed between cyst abundance determined by direct counting and the qPCR assay in conjunction with DNA debris removal (r^2 = 0.72, slope = 1.07, p < 0.001). Therefore, this improved qPCR method should be a powerful tool for the accurate quantification of H. akashiwo cysts in sediment samples.

  3. Numerical Simulation of Particle Motion in a Curved Channel

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Nie, Deming

    2018-01-01

    In this work the lattice Boltzmann method (LBM) is used to numerically study the motion of a circular particle in a curved channel at intermediate Reynolds numbers (Re). The effects of the Reynolds number and the initial particle position are taken into account. Numerical results include the streamlines, particle trajectories and final equilibrium positions. It has been found that the particle is likely to migrate to a similar equilibrium position irrespective of its initial position when Re is large.

  4. A systematic methodology for creep master curve construction using the stepped isostress method (SSM): a numerical assessment

    NASA Astrophysics Data System (ADS)

    Miranda Guedes, Rui

    2018-02-01

    Long-term creep of viscoelastic materials is experimentally inferred through accelerating techniques based on the time-temperature superposition principle (TTSP) or on the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves with short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. A greater reduction of the required number of test specimens to obtain the master curve is achieved by the SSM technique, since only one specimen is necessary. The classical approach, using creep tests, demands at least one specimen per stress level to produce a set of creep curves upon which TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations to reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other represents an adhesive based on epoxy resin.

  5. On the Power of Multivariate Latent Growth Curve Models to Detect Correlated Change

    ERIC Educational Resources Information Center

    Hertzog, Christopher; Lindenberger, Ulman; Ghisletta, Paolo; Oertzen, Timo von

    2006-01-01

    We evaluated the statistical power of single-indicator latent growth curve models (LGCMs) to detect correlated change between two variables (covariance of slopes) as a function of sample size, number of longitudinal measurement occasions, and reliability (measurement error variance). Power approximations following the method of Satorra and Saris…

  6. Learning curve for laparoscopic Heller myotomy and Dor fundoplication for achalasia

    PubMed Central

    Omura, Nobuo; Tsuboi, Kazuto; Hoshino, Masato; Yamamoto, Seryung; Akimoto, Shunsuke; Masuda, Takahiro; Kashiwagi, Hideyuki; Yanaga, Katsuhiko

    2017-01-01

    Purpose: Although laparoscopic Heller myotomy and Dor fundoplication (LHD) is widely performed to address achalasia, little is known about the learning curve for this technique. We assessed the learning curve for performing LHD. Methods: Of the 514 cases with LHD performed between August 1994 and March 2016, the surgical outcomes of 463 cases were evaluated, after excluding 50 cases with reduced port surgery and one case with simultaneous laparoscopic distal partial gastrectomy. A receiver operating characteristic (ROC) curve analysis was used to identify the cut-off value for the number of surgical experiences necessary to become proficient with LHD, which was defined as the completion of the learning curve. Results: We defined the completion of the learning curve when the following 3 conditions were satisfied: 1) the operation time was less than 165 minutes; 2) there was no blood loss; 3) there was no intraoperative complication. In order to establish the appropriate number of surgical experiences required to complete the learning curve, the cut-off value was evaluated by using a ROC curve (AUC 0.717, p < 0.001). Finally, we identified the cut-off value as 16 surgical cases (sensitivity 0.706, specificity 0.646). Conclusion: The learning curve seems to be completed after performing 16 cases. PMID:28686640

  7. A study of active learning methods for named entity recognition in clinical text.

    PubMed

    Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua

    2015-12-01

    Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from the clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge that contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three different categories including uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with passive learning, which uses random sampling. Learning curves that plot performance of the NER model against the estimated annotation cost (based on the number of sentences or words in the training set) were generated to evaluate the different active learning methods and passive learning, and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best method based on uncertainty sampling could save 66% of annotations in sentences, as compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. To achieve 0.80 in F-measure, in comparison to random sampling, the best uncertainty-based method saved 42% of annotations in words, but the best diversity-based method reduced annotation effort by only 7%. In the simulated setting, AL methods, particularly uncertainty-sampling based approaches, seemed to significantly save annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting.
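
    The uncertainty-sampling loop that dominates these results is simple to sketch. Below is a minimal least-confidence example on a synthetic classification stand-in (logistic regression on mock data, not the paper's CRF-based NER over i2b2 sentences).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
# Seed set with five examples of each class so the first fit is well-posed.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(y)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # 20 query rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    # Least-confidence sampling: query the pool instance whose most likely
    # class is least certain (for NER one would score whole sentences).
    query = pool[int(np.argmin(proba.max(axis=1)))]
    labeled.append(query)
    pool.remove(query)

print("labeled examples after querying:", len(labeled))
```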

  8. A Simple and Rapid Method for Standard Preparation of Gas Phase Extract of Cigarette Smoke

    PubMed Central

    Higashi, Tsunehito; Mai, Yosuke; Noya, Yoichi; Horinouchi, Takahiro; Terada, Koji; Hoshi, Akimasa; Nepal, Prabha; Harada, Takuya; Horiguchi, Mika; Hatate, Chizuru; Kuge, Yuji; Miwa, Soichi

    2014-01-01

    Cigarette smoke consists of a tar phase and a gas phase; the latter is toxicologically important because it can pass through the lung alveolar epithelium to enter the circulation. Here we attempt to establish a standard method for the preparation of a gas phase extract of cigarette smoke (CSE). CSE was prepared by continuously sucking cigarette smoke through a Cambridge filter to remove tar, followed by bubbling it into phosphate-buffered saline (PBS). The increase in dry weight of the filter was defined as the tar weight. Characteristically, concentrations of CSEs were represented as virtual tar concentrations, assuming that the tar on the filter was dissolved in the PBS. CSEs prepared from smaller numbers of cigarettes (original tar concentrations ≤15 mg/ml) showed similar concentration-response curves for cytotoxicity versus virtual tar concentrations, but with CSEs from larger numbers (tar ≥20 mg/ml), the curves were shifted rightward. Accordingly, with larger numbers of cigarettes, cytotoxic activity was also detected in the PBS of a second reservoir downstream of the first. CSEs prepared from various cigarette brands showed comparable concentration-response curves for cytotoxicity. Two types of CSEs prepared by continuous and puff smoking protocols were similar regarding concentration-response curves for cytotoxicity, the pharmacology of their cytotoxicity, and the concentrations of cytotoxic compounds. These data show that concentrations of CSEs expressed as virtual tar concentrations can serve as a reference value to normalize their cytotoxicity, irrespective of the number of combusted cigarettes, the cigarette brand, and the smoking protocol, provided original tar concentrations are ≤15 mg/ml. PMID:25229830

  9. Statistical tools for transgene copy number estimation based on real-time PCR.

    PubMed

    Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal

    2007-11-01

    As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR based transgene copy number estimation tends to be ambiguous and subjective, stemming from the lack of proper statistical analysis and data quality control needed to render a reliable estimation of copy number with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements into real-time PCR based transgene copy number determination. Three experimental designs and four data-quality-control integrated statistical models are presented. For the first method, external calibration curves are established for the transgene based on serially diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are then compared to derive the transgene copy number or zygosity estimation. Simple linear regression and two-group t-test procedures were combined to model the data from this design. For the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of the transgene was compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with that of the reference gene without a standard curve, based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data based on two different approaches of amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. These statistical methods make real-time PCR-based transgene copy number estimation more reliable and precise. Proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four different statistical methods are compared for their advantages and disadvantages. Moreover, the statistical methods can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
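
    The first design can be sketched in a few lines of Python (the dilution series, Ct values, and single-copy calibrator below are hypothetical):

        import numpy as np

        # Serial dilutions of a known template (copies) and their measured Ct values
        copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
        ct_std = np.array([30.1, 26.8, 23.4, 20.0, 16.7])

        # External calibration curve: Ct = slope * log10(copies) + intercept
        slope, intercept = np.polyfit(np.log10(copies), ct_std, 1)

        def estimate_copies(ct):
            # Invert the standard curve to estimate the initial copy number.
            return 10 ** ((ct - intercept) / slope)

        # Copy number of a putative event relative to a single-copy control event
        ratio = estimate_copies(24.9) / estimate_copies(26.5)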

  10. Testing the performance of pure spectrum resolution from Raman hyperspectral images of differently manufactured pharmaceutical tablets.

    PubMed

    Vajna, Balázs; Farkas, Attila; Pataki, Hajnalka; Zsigmond, Zsolt; Igricz, Tamás; Marosi, György

    2012-01-27

    Chemical imaging is a rapidly emerging analytical method in pharmaceutical technology. Due to the numerous chemometric solutions available, characterization of pharmaceutical samples with unknown components present has also become possible. This study compares the performance of current state-of-the-art curve resolution methods (multivariate curve resolution-alternating least squares, positive matrix factorization, simplex identification via split augmented Lagrangian and self-modelling mixture analysis) in the estimation of pure component spectra from Raman maps of differently manufactured pharmaceutical tablets. The batches of different technologies differ in the homogeneity level of the active ingredient, thus, the curve resolution methods are tested under different conditions. An empirical approach is shown to determine the number of components present in a sample. The chemometric algorithms are compared regarding the number of detected components, the quality of the resolved spectra and the accuracy of scores (spectral concentrations) compared to those calculated with classical least squares, using the true pure component (reference) spectra. It is demonstrated that using appropriate multivariate methods, Raman chemical imaging can be a useful tool in the non-invasive characterization of unknown (e.g. illegal or counterfeit) pharmaceutical products. Copyright © 2011 Elsevier B.V. All rights reserved.
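
    For orientation, a bare-bones MCR-ALS iteration with non-negativity enforced by clipping is sketched below; the published algorithms compared in the paper add initialization schemes, convergence tests, and further constraints that are omitted here:

        import numpy as np

        def mcr_als(D, S0, n_iter=200):
            # Minimal MCR-ALS: factor D (pixels x wavenumbers, one spectrum
            # per row of the unfolded Raman map) into C @ S.
            # S0: initial guess of pure spectra (n_components x wavenumbers).
            S = S0.copy()
            for _ in range(n_iter):
                # Least-squares update of scores, then spectra, clipped to >= 0
                C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)
                S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
                S /= np.linalg.norm(S, axis=1, keepdims=True)  # fix scale ambiguity
            return C, S   # scores (spectral concentrations) and resolved spectra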

  11. Studies of excited states of HeH by the multi-reference configuration-interaction method

    NASA Astrophysics Data System (ADS)

    Lee, Chun-Woo; Gim, Yeongrok

    2013-11-01

    The excited states of a HeH molecule for an n of up to 4 are studied using the multi-reference configuration-interaction method and Kaufmann's Rydberg basis functions. The advantages of using two different ways of locating Rydberg orbitals, either on the atomic nucleus or at the charge centre of molecules, are exploited by limiting their application to different ranges of R. Using this method, the difference between the experimental binding energies of the lower Rydberg states obtained by Ketterle and the ab initio results obtained by van Hemert and Peyerimhoff is reduced from a few hundreds of wave numbers to a few tens of wave numbers. A substantial improvement in the accuracy allows us to obtain quantum defect curves characterized by the correct behaviour. We obtain several Rydberg series that have more than one member, such as the ns series (n = 2, 3 and 4), npσ series (n = 3 and 4), npπ (n = 2, 3, 4) series and ndπ (n = 3, 4) series. These quantum defect curves are compared to the quantum defect curves obtained by the R-matrix or the multichannel quantum defect theory methods.

  12. On the formation of localized peaks and non-monotonic tailing of breakthrough curves

    NASA Astrophysics Data System (ADS)

    Siirila, Erica R.; Sanchez-Vila, Xavier; Fernàndez-Garcia, Daniel

    2014-05-01

    While breakthrough curve (BTC) analysis is a traditional tool in hydrogeology to obtain hydraulic parameters, in recent years emphasis has been placed on analyzing the shape of the receding portion of the curve. A number of field and laboratory observations have found a constant BTC slope in log-log space, and thus it has been hypothesized that a power law behavior is representative of real aquifers. Usually, monotonicity of the late-time BTC slope is just assumed, meaning that local peaks in the BTC are not considered, and that a local (in time) increase or decrease of BTC slope is also not considered. We contend that local peaks may exist but are sometimes not reported for a number of reasons. For example, when BTCs are obtained from actual measurements, sub-sampling may mask non-monotonicity, or small peaks may be reported as measurement errors and thus smoothed out or removed. When numerical analyses of synthetic aquifers are performed, the simulation method may yield artificially monotonous curves as a consequence of the methods used. For example, Eulerian methods may suffer from numerical dispersion, where curves tend to become over-smoothed while Lagrangian methods may suffer from artificial BTC oscillations stemming from the reconstruction of concentrations from a limited number of particles. A paradigm shift in terms of the BTC shape must also accompany two major advancements within the hydrogeology field: 1) the increase of high frequency data and progression of data collection techniques that diminish the problems of under-sampling BTCs and 2) advancements in supercomputing and numerical simulation allowing for higher resolution of flow and transport problems. As more information is incorporated into BTCs and/or they are obtained in more spatial locations, it is likely that classical definitions of BTC shapes will no longer be adequate descriptors for future treatment of contaminant transport problems. For example, the presence of localized peaks in BTCs (when, at what magnitude and duration) is imperative in accurately assessing environmental and human health risk, as discrepancies in the environmental concentration at a given time could potentially affect risk management decisions. In this work, the presence of multiple peaks in BTCs is assessed from high-resolution numerical simulations with particle tracking techniques and a kernel density estimator. Individual realizations of three-dimensional heterogeneous hydraulic conductivity fields with varying combinations of statistical anisotropy, geostatistical models, and local dispersivity are utilized to test for mechanisms of physical mass transfer. BTCs of non-reactive solutes are analyzed for the presence of local maxima, and for the corresponding slope of the receding limb of the curve as a function of travel distance and number of integral scales traveled, a question which has received little to no attention in the literature. This uniquely designed numerical experiment allows the discussion of BTC evolution in terms of not only the number of local peaks in the BTC, but also how knowledge of the number of local peaks in a BTC relates to pre-Fickian transport. We show that the number of local peaks and corresponding slopes strongly depend on statistical anisotropy and travel distance, but are less sensitive to the number of integral scales traveled. 
We also illustrate the sensitivity of BTC shapes to the geostatistical model used, how local peaks may potentially change the apparent overall slope of the curves, and the implications of these results for water quality management decisions.
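
    A minimal sketch of the particle-to-BTC step with a kernel density estimator (SciPy's Gaussian KDE with its default bandwidth stands in for the estimator actually used; bandwidth choice matters precisely because over-smoothing can erase the local peaks discussed above):

        import numpy as np
        from scipy.stats import gaussian_kde

        def btc_from_particles(arrival_times, t_grid, mass_per_particle=1.0):
            # Smooth particle arrival times into a continuous mass-flux curve,
            # avoiding the binning noise of raw particle counts.
            kde = gaussian_kde(arrival_times)          # bandwidth by Scott's rule
            return mass_per_particle * len(arrival_times) * kde(t_grid)

        t = np.logspace(-1, 3, 400)                    # log-spaced sampling times
        # btc = btc_from_particles(times, t); late-time slope then follows
        # from a log-log regression over the receding limb.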

  13. Monitoring crack extension in fracture toughness tests by ultrasonics

    NASA Technical Reports Server (NTRS)

    Klima, S. J.; Fisher, D. M.; Buzzard, R. J.

    1975-01-01

    An ultrasonic method was used to observe the onset of crack extension and to monitor continued crack growth in fracture toughness specimens during three point bend tests. A 20 MHz transducer was used with commercially available equipment to detect average crack extension less than 0.09 mm. The material tested was a 300-grade maraging steel in the annealed condition. A crack extension resistance curve was developed to demonstrate the usefulness of the ultrasonic method for minimizing the number of tests required to generate such curves.

  14. TU-EF-304-06: A Comparison of CT Number to Relative Linear Stopping Power Conversion Curves Used by Proton Therapy Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, P; Lowenstein, J; Kry, S

    Purpose: To compare the CT Number (CTN) to Relative Linear Stopping Power (RLSP) conversion curves used by 14 proton institutions in their dose calculations. Methods: The proton institutions' CTN to RLSP conversion curves were collected by the Imaging and Radiation Oncology Core (IROC) Houston QA Center during its on-site dosimetry review audits. The CTN values were converted to scaled CT Numbers. The scaling assigns a CTN of 0 to air and 1000 to water to allow intercomparison. The conversion curves were compared and the mean curve was calculated based on institutions' predicted RLSP values for air (CTN 0), lung (CTN 250), fat (CTN 950), water (CTN 1000), liver (CTN 1050), and bone (CTN 2000) points. Results: One institution's curve was found to have a unique curve shape between the scaled CTN of 1025 to 1225. This institution modified its curve based on the findings. Another institution had higher RLSP values than expected for both low and high CTNs. This institution recalibrated their two CT scanners and the new data placed their curve closer to the mean of all institutions. After corrections were made to several conversion curves, four institutions still fall outside 2 standard deviations at very low CTNs (100–200), and two institutions fall outside between CTN 850–900. The largest percent difference in RLSP values between institutions for the specific tissues reviewed was 22% for the lung point. Conclusion: The review and comparison of CTN to RLSP conversion curves allows IROC Houston to identify any outliers and make recommendations for improvement. Several institutions improved their clinical dose calculation accuracy as a result of this review. There is still area for improvement, particularly in the lung area of the curve. The IROC Houston QA Center is supported by NCI grant CA180803.
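
    The curve intercomparison can be sketched as follows (hypothetical inputs; the IROC procedure itself is not detailed in this abstract):

        import numpy as np

        def compare_curves(curves, grid):
            # curves: list of (ctn_array, rlsp_array) pairs, already scaled so
            # air = 0 and water = 1000 on the CTN axis.
            # Interpolate each institution's curve onto a common CTN grid and
            # flag points outside 2 standard deviations of the mean curve.
            R = np.array([np.interp(grid, c, r) for c, r in curves])
            mean, sd = R.mean(axis=0), R.std(axis=0, ddof=1)
            outliers = np.abs(R - mean) > 2 * sd       # per institution, per CTN
            return mean, sd, outliers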

  15. Learning curve in robotic rectal cancer surgery: current state of affairs.

    PubMed

    Jiménez-Rodríguez, Rosa M; Rubio-Dorado-Manzanares, Mercedes; Díaz-Pavón, José Manuel; Reyes-Díaz, M Luisa; Vazquez-Monchul, Jorge Manuel; Garcia-Cabrera, Ana M; Padillo, Javier; De la Portilla, Fernando

    2016-12-01

    Robotic-assisted rectal cancer surgery offers multiple advantages for surgeons, and it appears to yield the same short-term clinical outcomes as conventional laparoscopy. This surgical approach emerges as a technique aiming at overcoming the limitations posed by rectal cancer and other surgical fields of difficult access, in order to obtain better outcomes and a shorter learning curve. A systematic review of the literature on robot-assisted rectal surgery was carried out according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The search was conducted in October 2015 in PubMed, MEDLINE and the Cochrane Central Register of Controlled Trials, for articles published in the last 10 years and pertaining to the learning curve of robotic surgery for colorectal cancer. It used the following key words: "rectal cancer/learning curve/robotic-assisted laparoscopic surgery". A total of 34 references were identified, but only 9 full texts specifically addressed the analysis of the learning curve in robot-assisted rectal cancer surgery; 7 were case series and 2 were non-randomised case-comparison series. Eight papers used the cumulative sum (CUSUM) method, and only one author divided the series into two groups to compare both. The mean number of cases for phase I of the learning curve was calculated to be 29.7 patients; phase II corresponds to a mean number of 37.4 patients. The mean number of cases required for the surgeon to be classed as an expert in robotic surgery was calculated to be 39 patients. The advantages of robotics could have an impact on the learning curve for rectal cancer and lower the number of cases necessary for rectal resections.

  16. Application of GIS in Modeling Zilberchai Basin Runoff

    NASA Astrophysics Data System (ADS)

    Malekani, L.; Khaleghi, S.; Mahmoodi, M.

    2014-10-01

    Runoff is one of the most important hydrological variables, used in many civil works: planning for optimal use of reservoirs, river regulation, and flood warning. The runoff curve number (CN) is a key factor in determining runoff in the SCS (Soil Conservation Service) based hydrologic modeling method. The traditional SCS-CN method for calculating the composite curve number consumes a major portion of the hydrologic modeling time. Therefore, geographic information systems (GIS) are now being used in combination with the SCS-CN method. This work determines surface runoff with a GIS model applying the SCS-CN method, which requires parameters such as a land use map, hydrologic soil groups, rainfall data, a DEM, and the physiographic characteristics of the basin. The model is built by implementing well-known hydrologic tools in GIS, such as ArcHydro and ArcCN-Runoff, for modeling the runoff of the Zilberchai basin. The results show that the high area-weighted average curve number indicates that the permeability of the basin is low and the likelihood of flooding is therefore high, so fundamental works are essential to increase water infiltration in the Zilberchai basin and to avoid wasting surface water resources. Comparison of the computed and observed runoff values shows that GIS tools not only accelerate the runoff calculation but also increase the accuracy of the results. This paper clearly demonstrates that the integration of GIS with the SCS-CN method provides a powerful tool for estimating runoff volumes in large basins.
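
    The core composite step that the GIS tools automate is an area-weighted average of the curve numbers of the intersected land-use/soil polygons, e.g.:

        import numpy as np

        def composite_cn(areas, cns):
            # areas: area of each land-use/soil-group polygon (any consistent unit)
            # cns:   curve number assigned to each polygon from a lookup table
            areas, cns = np.asarray(areas, float), np.asarray(cns, float)
            return (areas * cns).sum() / areas.sum()

        # e.g. three polygons intersected from land-use and hydrologic-soil layers
        cn = composite_cn([12.4, 3.1, 7.8], [78, 61, 85])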

  17. Identification of Reliable Components in Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS): a Data-Driven Approach across Metabolic Processes.

    PubMed

    Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun

    2015-11-04

    There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using a clustering method for the classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between the conventional and modified methods applied to proton nuclear magnetic resonance ((1)H-NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, even clusters containing little information were detected reliably. This strategy, named "cluster-aided MCR-ALS," will facilitate the attainment of more reliable results from metabolomics datasets.
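
    A compact sketch of the reproducibility idea, with scikit-learn's NMF standing in for the MCR-ALS step (the correlation threshold and the 80% reappearance rule are assumptions of this sketch):

        import numpy as np
        from sklearn.decomposition import NMF

        def reliable_components(X, k, n_runs=20, r_thresh=0.95):
            # X: non-negative spectral data (samples x variables).
            # A component from the first run is called "reliable" when a highly
            # correlated counterpart (Pearson r >= r_thresh) reappears in most
            # other random restarts.
            runs = [NMF(n_components=k, init='random', random_state=s,
                        max_iter=500).fit(X).components_ for s in range(n_runs)]
            ref, hits = runs[0], np.zeros(k, int)
            for comp in runs[1:]:
                r = np.corrcoef(np.vstack([ref, comp]))[:k, k:]
                hits += (r.max(axis=1) >= r_thresh)
            return hits >= 0.8 * (n_runs - 1)   # boolean flag per component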

  18. Estimating sunspot number

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Reichmann, E. J.; Teuber, D. L.

    1984-01-01

    An empirical method is developed to predict certain parameters of future solar activity cycles. Sunspot cycle statistics are examined, and curve fitting and linear regression analysis techniques are utilized.
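
    A toy version of such curve fitting (the cycle-shape function and starting values below are assumptions for illustration, not the paper's model):

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical monthly sunspot numbers r at times t (years); an
        # asymmetric rise-and-decay shape is one simple choice of fit function.
        def cycle(t, a, t0, tau):
            s = np.clip(t - t0, 0, None)          # zero before cycle onset t0
            return a * s**3 * np.exp(-(s / tau)**2)

        # popt, _ = curve_fit(cycle, t, r, p0=[100.0, 0.0, 4.0])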

  19. Sulcal set optimization for cortical surface registration.

    PubMed

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for intersubject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
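
    A greedy approximation of the subset search conveys the idea (the paper minimizes over all combinations; the conditional variance of a multivariate Gaussian is used in both cases):

        import numpy as np

        def greedy_subset(Sigma, n_c):
            # Sigma: covariance matrix of sulcal registration errors (N x N),
            # estimated from the multivariate Gaussian error model.
            # Greedily pick n_c constraint curves minimizing the summed
            # conditional variance of the remaining curves' errors.
            N, chosen = Sigma.shape[0], []
            for _ in range(n_c):
                best, best_cost = None, np.inf
                for j in set(range(N)) - set(chosen):
                    c = chosen + [j]
                    u = [i for i in range(N) if i not in c]
                    S_cc, S_uc = Sigma[np.ix_(c, c)], Sigma[np.ix_(u, c)]
                    cond = Sigma[np.ix_(u, u)] - S_uc @ np.linalg.solve(S_cc, S_uc.T)
                    if np.trace(cond) < best_cost:
                        best, best_cost = j, np.trace(cond)
                chosen.append(best)
            return chosen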

  20. An efficient user-oriented method for calculating compressible flow in and about three-dimensional inlets. [panel method]

    NASA Technical Reports Server (NTRS)

    Hess, J. L.; Mack, D. P.; Stockman, N. O.

    1979-01-01

    A panel method is used to calculate incompressible flow about arbitrary three-dimensional inlets with or without centerbodies for four fundamental flow conditions: unit onset flows parallel to each of the coordinate axes plus static operation. The computing time is scarcely longer than for a single solution. A linear superposition of these solutions quite rigorously gives incompressible flow about the inlet for any angle of attack, angle of yaw, and mass flow rate. Compressibility is accounted for by applying a well-proven correction to the incompressible flow. Since the computing times for the combination and the compressibility correction are small, flows at a large number of inlet operating conditions are obtained rather cheaply. Geometric input is aided by an automatic generating program. A number of graphical output features are provided to aid the user, including surface streamline tracing and automatic generation of curves of constant pressure, Mach number, and flow inclination at selected inlet cross sections. The inlet method and use of the program are described. Illustrative results are presented.
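
    The superposition step might look like the following sketch (the weighting convention is an assumption of this illustration; the report defines the program's actual combination rules):

        import numpy as np

        def combine(sol_x, sol_y, sol_z, sol_static, alpha, beta, mdot_ratio):
            # sol_*: surface-velocity arrays from the panel method for unit
            # onset flow along each axis and for static operation; alpha/beta
            # are angle of attack and yaw in radians.
            wx = np.cos(alpha) * np.cos(beta)   # onset-flow component along x
            wy = np.sin(beta)                   # component along y (yaw)
            wz = np.sin(alpha) * np.cos(beta)   # component along z (incidence)
            return wx * sol_x + wy * sol_y + wz * sol_z + mdot_ratio * sol_static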

  1. Statistical assessment of the learning curves of health technologies.

    PubMed

    Ramsay, C R; Grant, A M; Wallace, S A; Garthwaite, P H; Monk, A F; Russell, I T

    2001-01-01

    (1) To describe systematically studies that directly assessed the learning curve effect of health technologies. (2) To identify systematically 'novel' statistical techniques applied to learning curve data in other fields, such as psychology and manufacturing. (3) To test these statistical techniques in data sets from studies of varying designs to assess health technologies in which learning curve effects are known to exist. METHODS - STUDY SELECTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): For a study to be included, it had to include a formal analysis of the learning curve of a health technology using a graphical, tabular or statistical technique. METHODS - STUDY SELECTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): For a study to be included, it had to include a formal assessment of a learning curve using a statistical technique that had not been identified in the previous search. METHODS - DATA SOURCES: Six clinical and 16 non-clinical biomedical databases were searched. A limited amount of handsearching and scanning of reference lists was also undertaken. METHODS - DATA EXTRACTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): A number of study characteristics were abstracted from the papers such as study design, study size, number of operators and the statistical method used. METHODS - DATA EXTRACTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): The new statistical techniques identified were categorised into four subgroups of increasing complexity: exploratory data analysis; simple series data analysis; complex data structure analysis; and generic techniques. METHODS - TESTING OF STATISTICAL METHODS: Some of the statistical methods identified in the systematic searches for single (simple) operator series data and for multiple (complex) operator series data were illustrated and explored using three data sets. The first was a case series of 190 consecutive laparoscopic fundoplication procedures performed by a single surgeon; the second was a case series of consecutive laparoscopic cholecystectomy procedures performed by ten surgeons; the third was randomised trial data derived from the laparoscopic procedure arm of a multicentre trial of groin hernia repair, supplemented by data from non-randomised operations performed during the trial. RESULTS - HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW: Of 4571 abstracts identified, 272 (6%) were later included in the study after review of the full paper. Some 51% of studies assessed a surgical minimal access technique and 95% were case series. The statistical method used most often (60%) was splitting the data into consecutive parts (such as halves or thirds), with only 14% attempting a more formal statistical analysis. The reporting of the studies was poor, with 31% giving no details of data collection methods. RESULTS - NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH: Of 9431 abstracts assessed, 115 (1%) were deemed appropriate for further investigation and, of these, 18 were included in the study. All of the methods for complex data sets were identified in the non-clinical literature. These were discriminant analysis, two-stage estimation of learning rates, generalised estimating equations, multilevel models, latent curve models, time series models and stochastic parameter models. In addition, eight new shapes of learning curves were identified. RESULTS - TESTING OF STATISTICAL METHODS: No one particular shape of learning curve performed significantly better than another.
The performance of 'operation time' as a proxy for learning differed between the three procedures. Multilevel modelling using the laparoscopic cholecystectomy data demonstrated and measured surgeon-specific and confounding effects. The inclusion of non-randomised cases, despite the possible limitations of the method, enhanced the interpretation of learning effects. CONCLUSIONS - HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW: The statistical methods used for assessing learning effects in health technology assessment have been crude and the reporting of studies poor. CONCLUSIONS - NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH: A number of statistical methods for assessing learning effects were identified that had not hitherto been used in health technology assessment. There was a hierarchy of methods for the identification and measurement of learning, and the more sophisticated methods for both have had little if any use in health technology assessment. This demonstrated the value of considering fields outside clinical research when addressing methodological issues in health technology assessment. CONCLUSIONS - TESTING OF STATISTICAL METHODS: It has been demonstrated that the portfolio of techniques identified can enhance investigations of learning curve effects. (ABSTRACT TRUNCATED)
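
    For instance, a random-intercept multilevel model of operative time, one of the 'complex data structure' techniques identified, can be specified with statsmodels (the column names below are hypothetical):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # df: one row per operation with columns 'log_time' (log operation time),
        # 'case_no' (position in each surgeon's series), 'surgeon' (operator id).
        # A random intercept per surgeon separates surgeon-specific effects
        # from the common learning trend, as in the multilevel analyses reviewed.
        # model = smf.mixedlm("log_time ~ np.log(case_no)", df,
        #                     groups=df["surgeon"]).fit()
        # print(model.summary())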

  2. Gaussian decomposition of high-resolution melt curve derivatives for measuring genome-editing efficiency

    PubMed Central

    Zaboikin, Michail; Freter, Carl

    2018-01-01

    We describe a method for measuring genome editing efficiency from in silico analysis of high-resolution melt curve data. The melt curve data derived from amplicons of genome-edited or unmodified target sites were processed to remove the background fluorescent signal emanating from free fluorophore and then corrected for temperature-dependent quenching of fluorescence of double-stranded DNA-bound fluorophore. Corrected data were normalized and numerically differentiated to obtain the first derivatives of the melt curves. These were then mathematically modeled as a sum, or superposition, of a minimal number of Gaussian components. Using Gaussian parameters determined by modeling of melt curve derivatives of unedited samples, we were able to model melt curve derivatives from genetically altered target sites, where the mutant population could be accommodated using an additional Gaussian component. From this, the proportion contributed by the mutant component in the target region amplicon could be accurately determined. Mutant component computations compared well with mutant frequency determinations from next-generation sequencing data. The results were also consistent with our earlier studies that used difference curve areas from high-resolution melt curves for determining the efficiency of genome-editing reagents. The advantage of the described method is that it does not require calibration curves to estimate the proportion of mutants in amplicons of genome-edited target sites. PMID:29300734
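
    A condensed sketch of the Gaussian-mixture fit (the peak positions and the two-component layout below are placeholders):

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussians(T, *params):
            # Sum of Gaussians; params = (a1, mu1, sig1, a2, mu2, sig2, ...).
            y = np.zeros_like(T)
            for a, mu, sig in zip(*[iter(params)] * 3):
                y += a * np.exp(-0.5 * ((T - mu) / sig) ** 2)
            return y

        # dFdT: negative first derivative of the normalized melt curve vs T.
        # Fit the unedited component(s) first, then refit edited samples with
        # one extra Gaussian; the mutant fraction is its area over total area.
        # popt, _ = curve_fit(gaussians, T, dFdT,
        #                     p0=[1.0, 80.0, 0.5, 0.4, 78.0, 0.5])
        # areas = [a * s * np.sqrt(2 * np.pi)
        #          for a, _, s in zip(*[iter(popt)] * 3)]
        # mutant_fraction = areas[1] / sum(areas)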

  3. Flow in curved ducts of varying cross-section

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, F.; Patel, V. C.

    1992-07-01

    Two numerical methods for solving the incompressible Navier-Stokes equations are compared with each other by applying them to calculate laminar and turbulent flows through curved ducts of regular cross-section. Detailed comparisons, between the computed solutions and experimental data, are carried out in order to validate the two methods and to identify their relative merits and disadvantages. Based on the conclusions of this comparative study a numerical method is developed for simulating viscous flows through curved ducts of varying cross-sections. The proposed method is capable of simulating the near-wall turbulence using fine computational meshes across the sublayer in conjunction with a two-layer k-epsilon model. Numerical solutions are obtained for: (1) a straight transition duct geometry, and (2) a hydroturbine draft-tube configuration at model scale Reynolds number for various inlet swirl intensities. The report also provides a detailed literature survey that summarizes all the experimental and computational work in the area of duct flows.

  4. Design of a rotary dielectric elastomer actuator using a topology optimization method based on pairs of curves

    NASA Astrophysics Data System (ADS)

    Wang, Nianfeng; Guo, Hao; Chen, Bicheng; Cui, Chaoyu; Zhang, Xianmin

    2018-05-01

    Dielectric elastomers (DE), known as electromechanical transducers, have been widely used in the field of sensors, generators, actuators and energy harvesting for decades. A large number of DE actuators, including bending actuators, linear actuators and rotational actuators, have been designed using experience-based design methods. This paper proposes a new method for the design of DE actuators by using a topology optimization method based on pairs of curves. First, theoretical modeling and optimization design are discussed, after which a rotary dielectric elastomer actuator is designed using this optimization method. Finally, experiments and comparisons between several DE actuators are made to verify the optimized result.

  5. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

    Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H2 15O or C15O2, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H2 15O PET image as a completely non-invasive approach. Our technique consisted of using a formula to express the input using a tissue curve with a rate constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences of the reproduced inputs expressed by the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as an IDIF. The method was tested in human subjects (n = 29) and was compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of multiple tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs were well reproduced against the measured ones. The difference between the calculated CBF values obtained using the two methods was small, around <8%, and the calculated CBF values showed a tight correlation (r = 0.97). The simulation showed that errors associated with the assumed parameters were <10%, and that the optimal number of tissue curves to be used was around 500. Our results demonstrate that an IDIF can be reconstructed directly from tissue curves obtained through H2 15O PET imaging. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.

  6. Mathematics of quantitative kinetic PCR and the application of standard curves.

    PubMed

    Rutledge, R G; Côté, C

    2003-08-15

    Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR is examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBR Green I fluorescence. Based upon this analysis, the application of a single, well-constructed standard curve could provide an estimated precision of +/-6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which quantitative scale is determined by DNA mass at threshold.

  7. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    PubMed

    Dung, Van Than; Tjahjowidodo, Tegoeh

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, demands have arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated by using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fit any type of curve, ranging from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
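
    A simplified coarse-knot stage can be sketched with SciPy (bisection is replaced here by worst-point knot insertion, and the non-linear knot optimization stage is omitted; x is assumed sorted ascending):

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        def fit_with_knot_refinement(x, y, tol, max_knots=100):
            # Start with one interior knot, then add a knot at the point of
            # largest residual until the fit meets the allowable error `tol`.
            knots = [float(np.quantile(x, 0.5))]
            while len(knots) < max_knots:
                spl = LSQUnivariateSpline(x, y, sorted(set(knots)))
                err = np.abs(spl(x) - y)
                if err.max() < tol:
                    return spl
                xm = float(x[err.argmax()])
                if not (x[0] < xm < x[-1]):        # knots must stay interior
                    break
                knots.append(xm)
            return LSQUnivariateSpline(x, y, sorted(set(knots)))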

  8. Rainfall Runoff Modelling for Cedar Creek using HEC-HMS model

    NASA Astrophysics Data System (ADS)

    Pathak, P.; Kalra, A.

    2015-12-01

    Rainfall-runoff modelling studies are carried out for the purpose of basin and river management. Different models have been effectively used to examine relationships between rainfall and runoff. The Cedar Creek Watershed, the largest tributary of the St. Josephs River, located in northeastern Indiana, was selected as the study area. The HEC-HMS model developed by the US Army Corps of Engineers was used for the hydrological modelling. The national elevation and national hydrography data were obtained from the United States Geological Survey National Map Viewer, and the SSURGO soil data were obtained from the United States Department of Agriculture. The watershed received hypothetical uniform rainfall for a duration of 13 hours. The Soil Conservation Service Curve Number and Unit Hydrograph methods were used for simulating surface runoff. The simulation provided hydrological details about the quantity and variability of runoff in the watershed. The runoff for different curve numbers was computed for the same basin and rainfall, and it was found that outflow peaked at an earlier time with a higher value for higher curve numbers than for smaller curve numbers. It was also noticed that the impact on outflow values nearly doubled with an increase of 10 in the curve number of each subbasin in the watershed. The results from the current analysis may aid water managers in effectively managing the water resources within the basin.
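
    The SCS-CN runoff depth underlying such simulations is Q = (P - Ia)^2 / (P - Ia + S) with S = 1000/CN - 10 (inches); in code:

        def scs_runoff(P, CN, ia_ratio=0.2):
            # SCS-CN direct runoff depth (inches) for storm depth P (inches).
            # S is the potential maximum retention; runoff starts once P
            # exceeds the initial abstraction Ia = ia_ratio * S.
            S = 1000.0 / CN - 10.0
            Ia = ia_ratio * S
            return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

        # The near-doubling effect noted above: for the same 3-inch storm,
        # scs_runoff(3.0, 70) is about 0.71 in while scs_runoff(3.0, 80) is 1.25 in.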

  9. Identifying Blocks Formed by Curved Fractures Using Exact Arithmetic

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Xia, L.; Yu, Q.; Zhang, X.

    2015-12-01

    Identifying blocks formed by fractures is important in rock engineering. Most studies assume the fractures to be perfectly planar, whereas curved fractures are rarely considered. However, large fractures observed in the field are often curved. This paper presents a new method for identifying rock blocks formed by both curved and planar fractures based on the element-block-assembling approach. The curved and planar fractures are represented as triangle meshes and planar discs, respectively. At the beginning of the identification method, the intersection segments between different triangle meshes are calculated and the intersected triangles are re-meshed to construct a piecewise linear complex (PLC). Then, the modeling domain is divided into tetrahedral subdomains under the constraint of the PLC and these subdomains are further decomposed into element blocks by extended planar fractures. Finally, the element blocks are combined and the subdomains are assembled to form complex blocks. The combination of two subdomains is skipped if and only if the common facet lies on a curved fracture. In this study, exact arithmetic is used to handle the computational errors, which may threaten the robustness of the block identification program when degenerate cases are encountered. Specifically, a real number is represented as the ratio between two integers, and basic arithmetic such as addition, subtraction, multiplication and division between different real numbers can be performed exactly if an arbitrary-precision integer package is used. In this way, the exact construction of blocks can be achieved without introducing computational errors. Several analytical examples are given in this paper and the results show the effectiveness of this method in handling arbitrarily shaped blocks. Moreover, there is no limitation on the number of blocks in a block system. The results also suggest that degenerate cases can be handled without affecting the robustness of the identification program.
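
    The exact-arithmetic idea can be demonstrated with Python's built-in rationals (an orientation predicate is shown for illustration; the paper's implementation uses its own arbitrary-precision integer package):

        from fractions import Fraction

        def orient3d(a, b, c, d):
            # Sign of the determinant deciding which side of plane (a, b, c)
            # point d lies on. With Fraction coordinates the sign is exact,
            # so degenerate (coplanar) cases are detected without epsilons.
            m = [[x - w for x, w in zip(p, d)] for p in (a, b, c)]
            det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                 - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                 + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
            return (det > 0) - (det < 0)   # 1, 0 (degenerate), or -1

        p = lambda *t: tuple(Fraction(v) for v in t)
        assert orient3d(p(0, 0, 0), p(1, 0, 0), p(0, 1, 0), p(0, 0, 1)) != 0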

  10. Long-term predictive capability of erosion models

    NASA Technical Reports Server (NTRS)

    Veerabhadra, P.; Buckley, D. H.

    1983-01-01

    A brief overview is presented of long-term cavitation and liquid impingement erosion and of the modeling methods proposed by different investigators, including the curve-fit approach. A table was prepared to highlight the number of variables necessary for each model in order to compute the erosion-versus-time curves. A power law relation based on the average erosion rate is suggested, which may solve several modeling problems.

  11. Thermoluminescence properties of gamma-irradiated nano-structure hydroxyapatite.

    PubMed

    Shafaei, M; Ziaie, F; Sardari, D; Larijani, M M

    2016-02-01

    The suitability of nano-structured hydroxyapatite (HAP) for use as a thermoluminescence dosimeter was investigated. HAP samples were synthesized using a hydrolysis method. The formation of nanoparticles was confirmed by X-ray diffraction, and the average particle size was estimated to be ~30 nm. The glow curve exhibited a peak centered at around 200 °C. The additive dose method was applied and showed that the thermoluminescence (TL) glow curves follow first-order kinetics, due to the non-shifting nature of Tm after different doses. The number of overlapping peaks and the related kinetic parameters were identified from Tm-Tstop analysis combined with computerized glow curve deconvolution methods. The dependence of the TL responses on radiation dose was studied, and a linear dose response up to 1000 Gy was observed for the samples. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Learning Curve of the Application of Huang Three-Step Maneuver in a Laparoscopic Spleen-Preserving Splenic Hilar Lymphadenectomy for Advanced Gastric Cancer

    PubMed Central

    Huang, Ze-Ning; Huang, Chang-Ming; Zheng, Chao-Hui; Li, Ping; Xie, Jian-Wei; Wang, Jia-Bin; Lin, Jian-Xian; Lu, Jun; Chen, Qi-Yue; Cao, Long-long; Lin, Mi; Tu, Ru-Hong

    2016-01-01

    To investigate the learning curve of the Huang 3-step maneuver, which was summarized and proposed by our center for the treatment of advanced upper gastric cancer, 130 consecutive patients who underwent a laparoscopic spleen-preserving splenic hilar lymphadenectomy (LSPL) between April 2012 and March 2013, performed by a single surgeon using the Huang 3-step maneuver, were retrospectively analyzed. The learning curve was analyzed based on the moving average (MA) method and the cumulative sum method (CUSUM). Surgical outcomes, short-term outcomes, and follow-up results before and after completion of the learning curve were compared. A stepwise multivariate logistic regression was used for a multivariable analysis to determine the factors that affect the operative time of the Huang 3-step maneuver. Based on the CUSUM, the learning curve for the Huang 3-step maneuver was divided into phase 1 (cases 1–40) and phase 2 (cases 41–130). The dissection time (DT) (P < 0.001), blood loss (BL) (P < 0.001), and number of vessels injured in phase 2 were significantly lower than in phase 1. There were no significant differences in the clinicopathological characteristics, short-term outcomes, or major postoperative complications between the learning curve phases. Univariate and multivariate analyses revealed that body mass index (BMI), short gastric vessels (SGVs), splenic hilar artery (SpA) type, and learning curve phase were significantly associated with DT. In the entire group, 124 patients were followed for a median time of 23.0 months (range, 3–30 months). There was no significant difference in the survival curve between phases. AUGC patients with a BMI less than 25 kg/m2, a small number of SGVs, and a concentrated type of SpA are ideal candidates for surgeons who are in phase 1 of the learning curve. PMID:27043698
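
    The CUSUM analysis used to find the phase boundary can be sketched as follows (a generic centered CUSUM; the paper's exact formulation may differ):

        import numpy as np

        def cusum(series):
            # CUSUM of a per-case performance measure (e.g. dissection time).
            # Plotted against case number, a change in slope marks the end of
            # one learning phase and the start of the next; the moving average
            # serves the same purpose with smoothing.
            x = np.asarray(series, float)
            return np.cumsum(x - x.mean())

        # s = cusum(dissection_times); the breakpoint (case 40 in this series)
        # appears where the CUSUM curve peaks and turns downward.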

  13. A curve fitting method for solving the flutter equation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Cooper, J. L.

    1972-01-01

    A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilizes the first and second derivatives of psi with respect to nu. The method was tested for two structures, one having six times the total mass of the other. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five per cent. For values of nu close to the critical root of the flutter equation, the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five, with an assumed value of nu relatively distant from the actual crossover.

  14. A relook at NEH-4 curve number data and antecedent moisture condition criteria

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Singh, Vijay P.

    2006-08-01

    This paper investigates the variation of the popular curve number (CN) values given in the National Engineering Handbook, Section 4 (NEH-4) of the Soil Conservation Service (SCS) with antecedent moisture condition (AMC) and soil type. Using the volumetric concept, involving soil, water, and air, a significant condensation of the NEH-4 tables is achieved. This leads to a procedure for determination of CN for gauged as well as ungauged watersheds. The rainfall-runoff events derived from daily data of four Indian watersheds exhibited a power relation between the potential maximum retention (or CN) and the 5-day antecedent rainfall amount. Incorporating this power relation, the SCS-CN method was modified. This modification also eliminates the problem of sudden jumps from one AMC level to another. The runoff values predicted using the modified method and the existing method utilizing the NEH-4 AMC criteria yielded similar results.
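
    The discrete AMC adjustment that the proposed power relation replaces uses the standard conversion formulas (e.g. as tabulated in the SCS literature); a minimal version:

        def cn_for_amc(cn2, amc):
            # Convert a tabulated AMC II curve number to AMC I or III. The
            # paper's modification replaces these discrete jumps with a
            # continuous power relation between retention and 5-day
            # antecedent rainfall.
            if amc == 1:
                return cn2 / (2.281 - 0.01281 * cn2)
            if amc == 3:
                return cn2 / (0.427 + 0.00573 * cn2)
            return cn2

        # e.g. cn_for_amc(75, 1) is about 57, cn_for_amc(75, 3) is about 88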

  15. Using Caspar Creek flow records to test peak flow estimation methods applicable to crossing design

    Treesearch

    Peter H. Cafferata; Leslie M. Reid

    2017-01-01

    Long-term flow records from sub-watersheds in the Caspar Creek Experimental Watersheds were used to test the accuracy of four methods commonly used to estimate peak flows in small forested watersheds: the Rational Method, the updated USGS Magnitude and Frequency Method, flow transference methods, and the NRCS curve number method. Comparison of measured and calculated...
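
    Of the tested methods, the Rational Method is the simplest; as a reference point (the coefficient and intensity values below are hypothetical):

        def rational_peak_cfs(C, i, A):
            # Rational Method peak flow: Q = C i A.
            # C -- dimensionless runoff coefficient
            # i -- rainfall intensity (in/hr) at the time of concentration
            # A -- drainage area (acres)
            # Q is in cfs; the unit conversion factor (~1.008) is taken as 1.
            return C * i * A

        # e.g. a 60-acre forested watershed: rational_peak_cfs(0.25, 1.5, 60.0)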

  16. Flexible and scalable methods for quantifying stochastic variability in the era of massive time-domain astronomical data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Brandon C.; Becker, Andrew C.; Sobolewska, Malgosia

    2014-06-10

    We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
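
    The PSD evaluation is a direct transcription of the CARMA spectral formula (the ascending coefficient ordering is an assumption of this sketch); the sum-of-Lorentzians form then follows from a partial-fraction expansion over the autoregressive roots:

        import numpy as np

        def carma_psd(freqs, ar_coefs, ma_coefs, sigma):
            # PSD(f) = sigma^2 * |sum_j beta_j (2 pi i f)^j|^2
            #                  / |sum_k alpha_k (2 pi i f)^k|^2
            # ar_coefs = (alpha_0 .. alpha_p), ma_coefs = (beta_0 .. beta_q),
            # both given in ascending order of the derivative they multiply.
            s = 2j * np.pi * np.asarray(freqs)
            num = np.abs(np.polyval(np.asarray(ma_coefs)[::-1], s)) ** 2
            den = np.abs(np.polyval(np.asarray(ar_coefs)[::-1], s)) ** 2
            return sigma ** 2 * num / den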

  17. Evaluation of qPCR curve analysis methods for reliable biomarker discovery: bias, resolution, precision, and implications.

    PubMed

    Ruijter, Jan M; Pfaffl, Michael W; Zhao, Sheng; Spiess, Andrej N; Boggy, Gregory; Blom, Jochen; Rutledge, Robert G; Sisti, Davide; Lievens, Antoon; De Preter, Katleen; Derveaux, Stefaan; Hellemans, Jan; Vandesompele, Jo

    2013-01-01

    RNA transcripts such as mRNA or microRNA are frequently used as biomarkers to determine disease state or response to therapy. Reverse transcription (RT) in combination with quantitative PCR (qPCR) has become the method of choice to quantify small amounts of such RNA molecules. In parallel with the democratization of RT-qPCR and its increasing use in biomedical research or biomarker discovery, we witnessed a growth in the number of gene expression data analysis methods. Most of these methods are based on the principle that the position of the amplification curve with respect to the cycle-axis is a measure for the initial target quantity: the later the curve, the lower the target quantity. However, most methods differ in the mathematical algorithms used to determine this position, as well as in the way the efficiency of the PCR reaction (the fold increase of product per cycle) is determined and applied in the calculations. Moreover, there is dispute about whether the PCR efficiency is constant or continuously decreasing. Together, this has led to the development of different methods to analyze amplification curves. In published comparisons of these methods, available algorithms were typically applied in a restricted or outdated way, which does not do them justice. Therefore, we aimed to develop a framework for robust and unbiased assessment of curve analysis performance whereby various publicly available curve analysis methods were thoroughly compared using a previously published large clinical data set (Vermeulen et al., 2009) [11]. The original developers of these methods applied their algorithms and are co-authors of this study. We assessed the curve analysis methods' impact on transcriptional biomarker identification in terms of expression level, statistical significance, and patient-classification accuracy. The concentration series per gene, together with data sets from unpublished technical performance experiments, were analyzed in order to assess the algorithms' precision, bias, and resolution. While large differences exist between methods when considering the technical performance experiments, most methods perform relatively well on the biomarker data. The data and the analysis results per method are made available to serve as benchmark for further development and evaluation of qPCR curve analysis methods (http://qPCRDataMethods.hfrc.nl). Copyright © 2012 Elsevier Inc. All rights reserved.
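
    As a generic example of the curve-position-plus-efficiency family of algorithms compared here (this sketch is not any specific published method):

        import numpy as np

        def efficiency_and_n0(fluor, window):
            # fluor:  baseline-corrected fluorescence per cycle
            # window: slice of cycles within the exponential phase
            # Model: F_c = F_0 * E^c, so log10(F) is linear in the cycle c
            # with slope log10(E).
            c = np.arange(len(fluor), dtype=float)[window]
            slope, intercept = np.polyfit(c, np.log10(fluor[window]), 1)
            E = 10 ** slope        # fold increase per cycle (between 1 and 2)
            F0 = 10 ** intercept   # starting fluorescence, proportional to target
            return E, F0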

  18. Evaluation of design flood frequency methods for Iowa streams : final report, June 2009.

    DOT National Transportation Integrated Search

    2009-06-01

    The objective of this project was to assess the predictive accuracy of flood frequency estimation for small Iowa streams based : on the Rational Method, the NRCS curve number approach, and the Iowa Runoff Chart. The evaluation was based on : comparis...

  19. Beyond the SCS curve number: A new stochastic spatial runoff approach

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.

    2015-12-01

    The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple runoff mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
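
    The threshold mechanism can be illustrated with a small Monte Carlo sketch (the exponential capacity distribution is one arbitrary choice of watershed-variable PDF, used here only for illustration):

        import numpy as np

        def runoff_distribution(rain_depth, cap_mean, n=100_000, seed=0):
            # Point storage capacities are drawn from a distribution; runoff
            # at a point is the excess of rainfall over its capacity. The
            # sample mean plays the role of one point on the runoff curve,
            # and the sample histogram approximates the runoff PDF.
            rng = np.random.default_rng(seed)
            capacity = rng.exponential(cap_mean, n)
            runoff = np.clip(rain_depth - capacity, 0.0, None)
            f_sat = (capacity < rain_depth).mean()   # runoff source-area fraction
            return runoff.mean(), f_sat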

  20. MILS in a general surgery unit: learning curve, indications, and limitations.

    PubMed

    Patriti, Alberto; Marano, Luigi; Casciola, Luciano

    2015-06-01

    Minimally invasive liver surgery (MILS) is becoming a widely diffused method, even in general surgery units. Organization, learning curve effects, and the environment are crucial issues to evaluate before starting a program of minimally invasive liver resections. Analysis of a consecutive series of 70 patients was used to define the advantages and limits of starting a MILS program in a general surgery unit. Using the cumulative sum method, 17 cases were calculated to be the number needed to complete the learning curve. Operative times [270 (60-480) vs. 180 (15-550) min; p = 0.01] and the rate of conversion (6/17 vs. 5/53; p = 0.018) decreased after this number of cases. More complex cases can be managed after a proper optimization of all steps of liver resection. Once the medical and nursing staff are highly confident with MILS, economic and strategic issues should be evaluated in order to establish a multidisciplinary hepatobiliary unit, independent from the general surgery unit, to manage more complex cases.

  1. Tools to identify linear combination of prognostic factors which maximizes area under receiver operator curve.

    PubMed

    Todor, Nicolae; Todor, Irina; Săplăcan, Gavril

    2014-01-01

    The linear combination of variables is an attractive method in many medical analyses targeting a score to classify patients. In the case of ROC curves, the most popular problem is to identify the linear combination which maximizes the area under the curve (AUC). This problem is completely solved when normality assumptions are met. Without the assumption of normality, search algorithms are avoided because it is accepted that AUC has to be evaluated n^d times, where n is the number of distinct observations and d is the number of variables. For d = 2, using particularities of the AUC formula, we described an algorithm which lowered the number of evaluations of AUC from n^2 to n(n-1) + 1. For d > 2, our proposed solution is an approximate method that considers equidistant points on the unit sphere in R^d at which AUC is evaluated. The algorithms were applied to data from our lab to predict the response to treatment from a set of molecular markers in cervical cancer patients. In order to evaluate the strength of our algorithms, a simulation was added. When normality cannot be assumed, the presented algorithms are feasible. For many variables, computation time increases but remains acceptable.
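
    A sketch of the d > 2 strategy, with random unit directions standing in for the paper's equidistant spherical grid and AUC computed via the Mann-Whitney statistic:

        import numpy as np

        def auc(scores, labels):
            # Mann-Whitney estimate of AUC for binary labels (1 = positive).
            pos, neg = scores[labels == 1], scores[labels == 0]
            diff = pos[:, None] - neg[None, :]
            return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

        def best_direction(X, labels, n_dirs=20_000, seed=0):
            # Scan random directions on the unit sphere in R^d and keep the
            # linear combination with the largest AUC of the projected scores.
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((n_dirs, X.shape[1]))
            W /= np.linalg.norm(W, axis=1, keepdims=True)
            aucs = [auc(X @ w, labels) for w in W]
            return W[int(np.argmax(aucs))], max(aucs)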

  2. Curve Number Application in Continuous Runoff Models: An Exercise in Futility?

    NASA Astrophysics Data System (ADS)

    Lamont, S. J.; Eli, R. N.

    2006-12-01

    The suitability of applying the NRCS (Natural Resources Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrological Simulation Program-FORTRAN (HSPF) was employed, using a simple theoretical watershed, in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and the limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to the CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. It was concluded that the use of the Curve Number as a surrogate for the selected subset of HSPF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.
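
    The back-calculation step described here has a standard closed form when the initial abstraction is fixed at Ia = 0.2S. The sketch below applies Hawkins' inversion of the runoff equation to paired storm rainfall and direct runoff depths; it is a generic illustration, not the HSPF-coupled procedure of the paper.

        import numpy as np

        def cn_from_event(P, Q):
            """Back-calculate a curve number from observed storm depths (inches),
            assuming Ia = 0.2*S, via the closed-form inversion of the CN equation."""
            P, Q = np.asarray(P, float), np.asarray(Q, float)
            S = 5.0 * (P + 2.0 * Q - np.sqrt(4.0 * Q ** 2 + 5.0 * P * Q))
            return 1000.0 / (10.0 + S)

        print(cn_from_event([3.0, 4.5], [0.8, 1.9]))  # illustrative P-Q pairs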

  3. Interpretation of magnetotelluric resistivity and phase soundings over horizontal layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patella, D.

    1976-02-01

    The present paper deals with a new inverse method for quantitatively interpreting magnetotelluric apparent resistivity and phase-lag sounding curves over horizontally stratified earth sections. The recurrent character of the general formula relating the wave impedance of an (n-1)-layered medium to that of an n-layered medium suggests the use of the method of reduction to a lower boundary plane, as originally termed by Koefoed in the case of dc resistivity soundings. The layering parameters are thus directly derived by a simple iterative procedure. The method is applicable for any number of layers, but only when both apparent resistivity and phase-lag sounding curves are jointly available. Moreover, no sophisticated algorithm is required: a simple desk electronic calculator, together with a sheet of two-layer apparent resistivity and phase-lag master curves, is sufficient to reproduce earth sections which, in the range of equivalence, are all consistent with field data.
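
    The recurrence exploited here is the standard one-dimensional impedance reduction for a layered earth. As a rough illustration (with assumed layer parameters; this is not Patella's graphical reduction procedure), the forward calculation of apparent resistivity and phase can be sketched as:

        import numpy as np

        MU0 = 4e-7 * np.pi

        def mt_forward(rho, h, freqs):
            """Apparent resistivity (ohm*m) and phase (deg) over a layered half-space.
            rho: layer resistivities, last entry is the basement; h: thicknesses (m)."""
            rho_a, phase = [], []
            for f in freqs:
                w = 2 * np.pi * f
                k = np.sqrt(1j * w * MU0 / np.asarray(rho))   # layer wavenumbers
                Z = np.sqrt(1j * w * MU0 * rho[-1])           # basement impedance
                for j in range(len(h) - 1, -1, -1):           # reduce upward, layer by layer
                    Zj = np.sqrt(1j * w * MU0 * rho[j])
                    t = np.tanh(k[j] * h[j])
                    Z = Zj * (Z + Zj * t) / (Zj + Z * t)
                rho_a.append(abs(Z) ** 2 / (w * MU0))
                phase.append(np.degrees(np.angle(Z)))
            return np.array(rho_a), np.array(phase)

        print(mt_forward([100.0, 10.0, 1000.0], [500.0, 1000.0], [0.01, 0.1, 1.0]))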

  4. Numerical Characterization of Piezoceramics Using Resonance Curves

    PubMed Central

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-01

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods. PMID:28787875

  6. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    NASA Astrophysics Data System (ADS)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
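
    The abstract states only that the occupation numbers are expressed as trigonometric functions of occupation angles; the sketch below assumes one common parameterization with the required property, where any unconstrained angle maps to an occupation in [0, 2], so a Newton-Raphson step in the angles can never produce unphysical occupations.

        import numpy as np

        # Assumed form (not stated explicitly in the abstract): n_i = 2*sin(theta_i)^2
        def occupations(theta):
            return 2.0 * np.sin(theta) ** 2

        theta = np.array([0.3, 1.2, np.pi / 2])
        n = occupations(theta)
        print(n, "total electrons:", n.sum())  # the total is fixed by a separate constraint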

  7. Evaluation of the Soil Conservation Service curve number methodology using data from agricultural plots

    NASA Astrophysics Data System (ADS)

    Lal, Mohan; Mishra, S. K.; Pandey, Ashish; Pandey, R. P.; Meena, P. K.; Chaudhary, Anubhav; Jha, Ranjit Kumar; Shreevastava, Ajit Kumar; Kumar, Yogendra

    2017-01-01

    The Soil Conservation Service curve number (SCS-CN) method, also known as the Natural Resources Conservation Service curve number (NRCS-CN) method, is popular for computing the volume of direct surface runoff for a given rainfall event. The performance of the SCS-CN method, which is based on large rainfall (P) and runoff (Q) datasets from United States watersheds, is evaluated here using a large dataset of natural storm events from 27 agricultural plots in India. On the whole, the CN estimates from the National Engineering Handbook (chapter 4) tables do not match those derived from the observed P and Q datasets. As a result, runoff prediction using the tabulated CNs was poor for the data of 22 (out of 24) plots. However, the match was slightly better for higher CN values, consistent with the general notion that the existing SCS-CN method performs better for high rainfall-runoff (high CN) events. Infiltration capacity (fc) was the main explanatory variable for runoff (or CN) production in the study plots, which exhibited the expected inverse relationship between CN and fc. The plot-data optimization yielded initial abstraction coefficient (λ) values from 0 to 0.659 for the ordered dataset and 0 to 0.208 for the natural dataset (with 0 as the most frequent value). Mean and median λ values were, respectively, 0.030 and 0 for the natural rainfall-runoff dataset and 0.108 and 0 for the ordered rainfall-runoff dataset. Runoff estimation was very sensitive to λ, and it improved consistently as λ changed from 0.2 to 0.03.
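
    The plot-data optimization of λ can be sketched as a two-parameter least-squares problem over the generalized CN equation Q = (P - λS)²/(P - λS + S) for P > λS. The code below is a hedged illustration; the starting values and bounds are assumptions, not the authors' settings.

        import numpy as np
        from scipy.optimize import minimize

        def fit_lambda_s(P, Q):
            """Least-squares fit of retention S and abstraction ratio lambda to events."""
            P, Q = np.asarray(P, float), np.asarray(Q, float)

            def sse(params):
                S, lam = params
                Ia = lam * S
                Qp = np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)
                return np.sum((Q - Qp) ** 2)

            res = minimize(sse, x0=[50.0, 0.2], bounds=[(1e-3, None), (0.0, 1.0)],
                           method="L-BFGS-B")
            return res.x  # (S, lambda)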

  8. Multimodal approach to seismic pavement testing

    USGS Publications Warehouse

    Ryden, N.; Park, C.B.; Ulriksen, P.; Miller, R.D.

    2004-01-01

    A multimodal approach to nondestructive seismic pavement testing is described. The presented approach is based on multichannel analysis of all types of seismic waves propagating along the surface of the pavement. The multichannel data acquisition method is replaced by multichannel simulation with one receiver. This method uses only one accelerometer receiver and a light hammer source to generate a synthetic receiver array. The data acquisition technique is made possible through careful triggering of the source, and it simplifies the procedure enough to make it generally accessible. Multiple dispersion curves are automatically and objectively extracted using the multichannel analysis of surface waves processing scheme, which is described. The resulting dispersion curves in the high-frequency range match theoretical Lamb waves in a free plate. At lower frequencies there are several branches of dispersion curves corresponding to the lower layers of different stiffness in the pavement system. The observed behavior of multimodal dispersion curves is in agreement with theory, which has been validated through both numerical modeling and the transfer matrix method, by solving for complex wave numbers. © ASCE, June 2004.

  9. Group Velocity Dispersion Curves from Wigner-Ville Distributions

    NASA Astrophysics Data System (ADS)

    Lloyd, Simon; Bokelmann, Goetz; Sucic, Victor

    2013-04-01

    With the widespread adoption of ambient noise tomography, and the increasing number of local earthquakes recorded worldwide due to dense seismic networks and many very dense temporary experiments, we consider it worthwhile to evaluate alternative methods for measuring surface wave group velocity dispersion curves. Moreover, the increased computing power of even a simple desktop computer makes it feasible to routinely use methods other than the typically employed multiple filtering technique (MFT). To that end we perform tests with synthetic and observed seismograms using the Wigner-Ville distribution (WVD) frequency-time analysis, and compare dispersion curves measured with WVD and MFT with each other. Initial results suggest WVD is at least as good as MFT at measuring dispersion, albeit at a greater computational expense. We therefore need to investigate if, and under which circumstances, WVD yields better dispersion curves than MFT before considering routinely applying the method. As both MFT and WVD generally work well for teleseismic events and at longer periods, we explore how well the WVD method performs at shorter periods and for local events with smaller epicentral distances. Such dispersion information could potentially be beneficial for improving the resolution of velocity structure within the crust.

  10. Calibration and validation of a general infiltration model

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space), and the exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method; its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe the infiltration rate with a time-varying curve number.

  11. Enhanced secondary analysis of survival data: reconstructing the data from published Kaplan-Meier survival curves.

    PubMed

    Guyot, Patricia; Ades, A E; Ouwens, Mario J N M; Welton, Nicky J

    2012-02-01

    The results of randomized controlled trials (RCTs) on time-to-event outcomes are usually reported as the median time to event and the Cox hazard ratio. These do not constitute the sufficient statistics required for meta-analysis or cost-effectiveness analysis, and their use in secondary analyses requires strong assumptions that may not have been adequately tested. In order to enhance the quality of secondary data analyses, we propose a method which derives from published Kaplan-Meier survival curves a close approximation to the original individual patient time-to-event data from which they were generated. We develop an algorithm that maps from digitised curves back to KM data by finding numerical solutions to the inverted KM equations, using, where available, information on the number of events and the numbers at risk. The reproducibility and accuracy of survival probabilities, median survival times, and hazard ratios based on reconstructed KM data were assessed by comparing published statistics (survival probabilities, medians, and hazard ratios) with statistics based on repeated reconstructions by multiple observers. The validation exercise established that there was no material systematic error and that there was a high degree of reproducibility for all statistics. Accuracy was excellent for survival probabilities and medians; for hazard ratios, reasonable accuracy can only be obtained if at least the numbers at risk or the total number of events are reported. The algorithm is a reliable tool for meta-analyses and cost-effectiveness analyses of RCTs reporting time-to-event data. It is recommended that all RCTs report information on numbers at risk and the total number of events alongside KM curves.
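
    The core identity that makes the reconstruction possible is the product-limit step S(t_i) = S(t_{i-1})(1 - d_i/n_i). A deliberately simplified sketch (ignoring the censoring bookkeeping that the published algorithm also performs) recovers interval event counts from digitized survival probabilities and published numbers at risk:

        import numpy as np

        def km_events(surv, n_risk):
            """Approximate events per interval from the KM identity.
            surv: survival probabilities at successive times, surv[0] = 1;
            n_risk: numbers at risk at the start of each interval."""
            d = []
            for i in range(1, len(surv)):
                ratio = surv[i] / surv[i - 1]
                d.append(round(n_risk[i - 1] * (1.0 - ratio)))
            return d

        print(km_events([1.0, 0.9, 0.75], [100, 88]))  # illustrative digitized values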

  12. Automating the Beyer & Schweiger (1969) method for determining hydraulic conductivity and porosity from grain-size distribution curves

    NASA Astrophysics Data System (ADS)

    Houben, Georg J.; Blümel, Martin

    2017-11-01

    Porosity is a fundamental parameter in hydrogeology. The empirical method of Beyer and Schweiger (1969) allows the calculation of hydraulic conductivity and both the total and effective porosity from granulometric data. However, due to its graphical nature with type curves, it is tedious to apply and prone to reading errors. In this work, the type curves were digitized and emulated by mathematical functions. The latter were implemented in a spreadsheet and a Visual Basic program, allowing fast, automated application of the method to any number of samples.

  13. Improving runoff risk estimates: Formulating runoff as a bivariate process using the SCS curve number method

    NASA Astrophysics Data System (ADS)

    Shaw, Stephen B.; Walter, M. Todd

    2009-03-01

    The Soil Conservation Service curve number (SCS-CN) method is widely used to predict storm runoff for hydraulic design purposes, such as sizing culverts and detention basins. As traditionally used, the probability of calculated runoff is equated to the probability of the causative rainfall event, an assumption that fails to account for the influence of variations in soil moisture on runoff generation. We propose a modification to the SCS-CN method that explicitly incorporates rainfall return periods and the frequency of different soil moisture states to quantify storm runoff risks. Soil moisture status is assumed to be correlated to stream base flow. Fundamentally, this approach treats runoff as the outcome of a bivariate process instead of dictating a 1:1 relationship between causative rainfall and resulting runoff volumes. Using data from the Fall Creek watershed in western New York and the headwaters of the French Broad River in the mountains of North Carolina, we show that our modified SCS-CN method improves frequency discharge predictions in medium-sized watersheds in the eastern United States in comparison to the traditional application of the method.
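
    The bivariate idea can be illustrated with a small Monte Carlo: sample storm depth and watershed retention from separate distributions, push each pair through the CN runoff equation, and read design quantiles from the resulting runoff distribution. Both marginal distributions below are illustrative placeholders, not the fitted distributions of the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def runoff_quantile(n=100_000, T=25):
            """T-year runoff from jointly sampled rainfall and retention (sketch)."""
            P = rng.gumbel(loc=2.0, scale=0.8, size=n)                # storm depth (in), assumed
            S = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)    # retention (in), assumed
            Ia = 0.2 * S
            Q = np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)
            return np.quantile(Q, 1.0 - 1.0 / T)  # empirical annual-event quantile

        print(runoff_quantile())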

  14. Hydrologic impacts of climate change and urbanization in Las Vegas Wash Watershed, Nevada

    EPA Science Inventory

    In this study, a cell-based model for the Las Vegas Wash (LVW) Watershed in Clark County, Nevada, was developed by combining the traditional hydrologic modeling methods (Thornthwaite’s water balance model and the Soil Conservation Service’s Curve Number method) with the pixel-base...

  15. Statistical Analyses for Probabilistic Assessments of the Reactor Pressure Vessel Structural Integrity: Building a Master Curve on an Extract of the 'Euro' Fracture Toughness Dataset, Controlling Statistical Uncertainty for Both Mono-Temperature and Multi-Temperature Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josse, Florent; Lefebvre, Yannick; Todeschini, Patrick

    2006-07-01

    Assessing the structural integrity of a nuclear Reactor Pressure Vessel (RPV) subjected to pressurized-thermal-shock (PTS) transients is extremely important to safety. In addition to conventional deterministic calculations to confirm RPV integrity, Electricite de France (EDF) carries out probabilistic analyses. Probabilistic analyses are interesting because some key variables, albeit conventionally taken at conservative values, can be modeled more accurately through statistical variability. One variable which significantly affects RPV structural integrity assessment is cleavage fracture initiation toughness. The reference fracture toughness method currently in use at EDF is the RCC-M and ASME Code lower-bound K_IC indexed to the parameter RT_NDT. However, in order to quantify the toughness scatter for probabilistic analyses, the master curve method is being analyzed at present. Furthermore, the master curve method is a direct means of evaluating fracture toughness based on K_JC data. In the framework of the master curve investigation undertaken by EDF, this article deals with the following two statistical items: building a master curve from an extract of a fracture toughness dataset (from the European project 'Unified Reference Fracture Toughness Design Curves for RPV Steels') and controlling statistical uncertainty for both mono-temperature and multi-temperature tests. Concerning the first point, master curve temperature dependence is empirical in nature. To determine the 'original' master curve, Wallin postulated that a unified description of fracture toughness temperature dependence for ferritic steels is possible, and used a large number of data corresponding to nuclear-grade pressure vessel steels and welds. Our working hypothesis is that some ferritic steels may behave in slightly different ways. Therefore we focused exclusively on the basic French reactor vessel metals of types A508 Class 3 and A533 Grade B Class 1, taking the sampling level and direction into account as well as the test specimen type. As for the second point, the emphasis is placed on the uncertainties in applying the master curve approach. For a toughness dataset based on different specimens of a single product, application of the master curve methodology requires the statistical estimation of one parameter: the reference temperature T0. Because of the limited number of specimens, estimation of this temperature is uncertain. The ASTM standard provides a rough evaluation of this statistical uncertainty through an approximate confidence interval. In this paper, a thorough study is carried out to build more meaningful confidence intervals (for both mono-temperature and multi-temperature tests). These results ensure better control over uncertainty and allow rigorous analysis of the impact of its influencing factors: the number of specimens and the temperatures at which they have been tested. (authors)
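
    For orientation, the master curve's median toughness follows the standard form K_med(T) = 30 + 70·exp[0.019(T - T0)] (MPa·m^0.5, temperatures in °C), so a mono-temperature point estimate of T0 can be sketched by direct inversion. This ignores the censoring and maximum-likelihood weighting of the full ASTM procedure; the numbers in the example are illustrative.

        import numpy as np

        def t0_from_median(K_med, T):
            """Invert K_med(T) = 30 + 70*exp(0.019*(T - T0)) for T0 (sketch only)."""
            return T - np.log((K_med - 30.0) / 70.0) / 0.019

        print(t0_from_median(100.0, -40.0))  # T0 for a median toughness of 100 at -40 C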

  16. SU-E-J-122: The CBCT Dose Calculation Using a Patient Specific CBCT Number to Mass Density Conversion Curve Based On a Novel Image Registration and Organ Mapping Method in Head-And-Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, J; Lasio, G; Chen, S

    2015-06-15

    Purpose: To develop a CBCT HU correction method using a patient-specific HU to mass density conversion curve, based on a novel image registration and organ mapping method, for head-and-neck radiation therapy. Methods: There are three steps to generate a patient-specific CBCT HU to mass density conversion curve. First, we developed a novel robust image registration method based on sparseness analysis to register the planning CT (PCT) and the CBCT. Second, a novel organ mapping method was developed to transfer the organs at risk (OAR) contours from the PCT to the CBCT, and the corresponding mean HU values of each OAR were measured in both the PCT and CBCT volumes. Third, a set of PCT and CBCT HU to mass density conversion curves were created based on the mean HU values of the OARs and the corresponding mass density of each OAR in the PCT. Then, we compared our proposed conversion curve with the traditional Catphan phantom based CBCT HU to mass density calibration curve. Both curves were input into the treatment planning system (TPS) for dose calculation. Lastly, the PTV and OAR doses, DVHs, and dose distributions of the CBCT plans were compared to the original treatment plan. Results: One head-and-neck case, which contained a pair of PCT and CBCT volumes, was used. The dose differences between the PCT and CBCT plans using the proposed method are -1.33% for the mean PTV, 0.06% for PTV D95%, and -0.56% for the left neck. The dose differences between the plans of the PCT and the CBCT corrected using the Catphan based method are -4.39% for the mean PTV, 4.07% for PTV D95%, and -2.01% for the left neck. Conclusion: The proposed CBCT HU correction method achieves better agreement with the original treatment plan than the traditional Catphan based calibration method.

  17. Extracting 3D Parametric Curves from 2D Images of Helical Objects.

    PubMed

    Willcocks, Chris G; Jackson, Philip T G; Nelson, Carl J; Obara, Boguslaw

    2017-09-01

    Helical objects occur in medicine, biology, cosmetics, nanotechnology, and engineering. Extracting a 3D parametric curve from a 2D image of a helical object has many practical applications, in particular being able to extract metrics such as tortuosity, frequency, and pitch. We present a method that is able to straighten the image object and derive a robust 3D helical curve from peaks in the object boundary. The algorithm has a small number of stable parameters that require little tuning, and the curve is validated against both synthetic and real-world data. The results show that the extracted 3D curve comes within close Hausdorff distance to the ground truth, and has near identical tortuosity for helical objects with a circular profile. Parameter insensitivity and robustness against high levels of image noise are demonstrated thoroughly and quantitatively.

  18. Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, X.; Suganuma, Y.; Fujii, M.

    2017-12-01

    Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of their underlying geological and environmental processes, it is desirable to identify individual components to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture model of a certain end-member model distribution to the measured remanent magnetization curve. The lognormal, skew generalized Gaussian, and skewed Gaussian distributions have been used as the end-member model distribution in previous studies, applied to the gradient curve of remanent magnetization curves. However, gradient curves are sensitive to measurement noise, as differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Though either smoothing or filtering can be applied to reduce the noise before differentiation, their effect in biasing the component analysis has been only vaguely addressed. In this study, we investigated a new model function that can be applied directly to the remanent magnetization curves and therefore avoids the differentiation. The new model function can provide a more flexible shape than the lognormal distribution, which is a merit for modeling the coercivity distribution of complex magnetic components. We applied the unmixing method to both model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability, and limitations. The analyses of model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components is more than two. It is therefore recommended to verify the reliability of a component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks are analyzed with the new model distribution. Given the same component number, the new model distribution can provide closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, so that users are freed from the labor of providing initial guesses for the parameters, which also helps reduce the subjectivity of component analysis.

  19. Variation of curve number with storm depth

    NASA Astrophysics Data System (ADS)

    Banasik, K.; Hejduk, L.

    2012-04-01

    The NRCS Curve Number (also known as SCS-CN) method is well known as a tool for predicting flood runoff depth from small ungauged catchments. The traditional way of determining CNs, based on soil characteristics, land use, and hydrological conditions, seems to have a tendency to overpredict floods in some cases. Over 30 years of rainfall-runoff data, collected in two small (A = 23.4 and 82.4 km2), lowland, agricultural catchments in central Poland (Banasik & Woodward 2010), were used to determine the runoff curve number and to check its tendency to change. The observed CN declines with increasing storm size, which according to the recent views of Hawkins (1993) can be classified as a standard response of a watershed. The analysis concluded that using the CN value determined according to the procedure described in the USDA-SCS Handbook, one receives a representative value for estimating storm runoff from high rainfall depths in the analyzed catchments. This has been confirmed by applying the "asymptotic approach" for estimating the watershed curve number from the rainfall-runoff data. Furthermore, the analysis indicated that the CN estimated from the mean retention parameter S of recorded events with rainfall depth higher than the initial abstraction also approaches the theoretical CN. The observed CN, ranging from 59.8 to 97.1 and from 52.3 to 95.5 in the smaller and the larger catchment, respectively, declines with increasing storm size, which has been classified as a standard response of a watershed. The investigation also demonstrated variability of the CN during the year, with much lower values during the vegetation season. Banasik K. & D.E. Woodward (2010). "Empirical determination of curve number for a small agricultural watershed in Poland". 2nd Joint Federal Interagency Conference, Las Vegas, NV, June 27 - July 1, 2010 (http://acwi.gov/sos/pubs/2ndJFIC/Contents/10E_Banasik_28_02_10.pdf). Hawkins R. H. (1993). "Asymptotic determination of curve numbers from data". Journal of Irrigation and Drainage Division, American Society of Civil Engineers, 119(2), pp. 334-345. ACKNOWLEDGMENTS: The investigation described in the paper is part of research project no. N N305 396238 funded by the PL-Ministry of Science and Higher Education. The support provided by this organization is gratefully acknowledged.
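
    The "asymptotic approach" cited here fits Hawkins' standard-response form CN(P) = CN∞ + (100 - CN∞)·exp(-kP) to event curve numbers. A hedged sketch, with illustrative data rather than the Polish catchment records:

        import numpy as np
        from scipy.optimize import curve_fit

        def cn_standard(P, cn_inf, k):
            """Standard response: observed CN declines with storm depth P toward CN_inf."""
            return cn_inf + (100.0 - cn_inf) * np.exp(-k * P)

        P = np.array([10, 20, 30, 50, 80, 120], float)   # storm depths (mm), illustrative
        CN = np.array([95, 88, 82, 75, 71, 70], float)   # event CNs, illustrative
        (cn_inf, k), _ = curve_fit(cn_standard, P, CN, p0=[70.0, 0.05])
        print(cn_inf, k)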

  20. Bubble number saturation curve and asymptotics of hypobaric and hyperbaric exposures.

    PubMed

    Wienke, B R

    1991-12-01

    Within bubble number limits of the varying permeability and reduced gradient bubble models, it is shown that a linear form of the saturation curve for hyperbaric exposures and a nearly constant decompression ratio for hypobaric exposures are simultaneously recovered from the phase volume constraint. Both limits are maintained within a single bubble number saturation curve. A bubble term, varying exponentially with inverse pressure, provides closure. Two constants describe the saturation curve, both linked to seed numbers. Limits of other decompression models are also discussed and contrasted for completeness. It is suggested that the bubble number saturation curve thus provides a consistent link between hypobaric and hyperbaric data, a link not established by earlier decompression models.

  1. Characterization of Type Ia Supernova Light Curves Using Principal Component Analysis of Sparse Functional Data

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.

    2018-04-01

    With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.

  2. Transforaminal Lumbar Interbody Fusion with Rigid Interspinous Process Fixation: A Learning Curve Analysis of a Surgeon Team's First 74 Cases.

    PubMed

    Doherty, Patrick; Welch, Arthur; Tharpe, Jason; Moore, Camille; Ferry, Chris

    2017-05-30

    Studies have shown that a significant learning curve may be associated with adopting minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) with bilateral pedicle screw fixation (BPSF). Accordingly, several hybrid TLIF techniques have been proposed as surrogates to the accepted BPSF technique, asserting that fewer or less disruptive fixations may decrease the learning curve while still maintaining the minimally disruptive benefits. TLIF with interspinous process fixation (ISPF) is one such surrogate procedure. However, despite the perceived ease of adaptability given the favorable proximity of the spinous processes, no evidence exists demonstrating whether or not the technique may possess its own inherent learning curve. The purpose of this study was to determine whether an intraoperative learning curve for one- and two-level TLIF + ISPF may exist for a single lead surgeon. Seventy-four consecutive patients who received one- or two-level TLIF with rigid ISPF by a single lead surgeon were retrospectively reviewed. It was the first TLIF + ISPF case series for the lead surgeon. Intraoperative blood loss (EBL), hospitalization length-of-stay (LOS), fluoroscopy time, and postoperative complications were collected. EBL, LOS, and fluoroscopy time were modeled as a function of case number using multiple linear regression methods. A change point was included in each model to allow the trajectory of the outcomes to change during the duration of the case series. These change points were determined using profile likelihood methods. Models were fit using the maximum likelihood estimates for the change points. Age, sex, body mass index (BMI), and the number of treated levels were included as covariates. EBL, LOS, and fluoroscopy time did not significantly differ by age, sex, or BMI (p ≥ 0.12). Only EBL differed significantly by the number of levels (p = 0.026). The case number was not a significant predictor of EBL, LOS, or fluoroscopy time (p ≥ 0.21). At the time of data collection (mean time from surgery: 13.3 months), six patients had undergone revision due to interbody migration. No ISPF device complications were observed. Study outcomes support the idea that TLIF + ISPF can be a readily adopted procedure without a significant intraoperative learning curve. However, the authors emphasize that further assessment of long-term healing outcomes is essential to fully characterize both the efficacy of and the learning curve for the TLIF + ISPF technique.
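
    The change-point regression described here can be sketched as a broken-stick fit with the breakpoint chosen by profiling the residual sum of squares. The code below is a simplified single-outcome illustration without the covariates (age, sex, BMI, levels) of the actual model; all names are illustrative.

        import numpy as np

        def fit_changepoint(case_no, y):
            """Profile search: for each candidate change point c, fit a two-slope
            linear model and keep the c with the smallest residual sum of squares."""
            best = (np.inf, None, None)
            for c in case_no[2:-2]:                 # interior candidates only
                X = np.column_stack([np.ones_like(case_no, float),
                                     case_no,
                                     np.clip(case_no - c, 0, None)])  # slope change after c
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                rss = np.sum((y - X @ beta) ** 2)
                if rss < best[0]:
                    best = (rss, c, beta)
            return best[1], best[2]  # change point and coefficients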

  3. Photonic devices on planar and curved substrates and methods for fabrication thereof

    DOEpatents

    Bartl, Michael H.; Barhoum, Moussa; Riassetto, David

    2016-08-02

    A versatile and rapid sol-gel technique for the fabrication of high-quality one-dimensional photonic bandgap materials. For example, silica/titania multi-layer materials may be fabricated by a sol-gel chemistry route combined with dip-coating onto a planar or curved substrate. A shock-cooling step immediately following the thin-film heat-treatment process is introduced. This step was found to be important in preventing film crack formation, especially in silica/titania alternating stack materials with a high number of layers. The versatility of this sol-gel method is demonstrated by the fabrication of various Bragg stack-type materials with fine-tuned optical properties, achieved by tailoring the number and sequence of alternating layers, the film thickness, and the effective refractive index of the deposited thin films. Measured optical properties show good agreement with theoretical simulations, confirming the high quality of these sol-gel fabricated optical materials.

  4. Anomalous I-V curve for mono-atomic carbon chains

    NASA Astrophysics Data System (ADS)

    Song, Bo; Sanvito, Stefano; Fang, Haiping

    2010-10-01

    The electronic transport properties of mono-atomic carbon chains were studied theoretically using a combination of density functional theory and the non-equilibrium Green's functions method. The I-V curves for the chains composed of an even number of atoms and attached to gold electrodes through sulfur exhibit two plateaus where the current becomes bias independent. In contrast, when the number of carbon atoms in the chain is odd, the electric current simply increases monotonically with bias. This peculiar behavior is attributed to dimerization of the chains, directly resulting from their one-dimensional nature. The finding is expected to be helpful in designing molecular devices, such as carbon-chain-based transistors and sensors, for nanoscale and biological applications.

  5. Long-term hydrological simulation based on the Soil Conservation Service curve number

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Singh, Vijay P.

    2004-05-01

    Presenting a critical review of daily flow simulation models based on the Soil Conservation Service curve number (SCS-CN), this paper introduces a more versatile model based on the modified SCS-CN method, which specializes into seven cases. The proposed model was applied to the Hemavati watershed (area = 600 km2) in India and was found to yield satisfactory results in both calibration and validation. The model conserved monthly and annual runoff volumes satisfactorily. A sensitivity analysis of the model parameters was performed, including the effect of variation in storm duration. Finally, to investigate the model components, all seven variants of the modified version were tested for their suitability.

  6. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated into an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provides a physically based method that gives realistic results for watersheds with VSA hydrology.
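
    The distribution step of the CN-VSA approach can be sketched simply: the runoff-contributing fraction of the watershed implied by the CN equation is assigned to the wettest cells in topographic-index order. This is a schematic of the idea, not the GWLF implementation.

        import numpy as np

        def distribute_runoff(ti, Af):
            """Mark the fraction Af of cells with the highest topographic index ti
            as saturated runoff source areas (fill from wettest downward)."""
            order = np.argsort(ti)[::-1]            # wettest (highest TI) first
            n_sat = int(round(Af * ti.size))
            saturated = np.zeros(ti.size, dtype=bool)
            saturated[order[:n_sat]] = True
            return saturated

        print(distribute_runoff(np.array([4.2, 7.9, 6.1, 9.3, 5.0]), Af=0.4))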

  7. A Dirichlet process model for classifying and forecasting epidemic curves.

    PubMed

    Nsoesie, Elaine O; Leman, Scotland C; Marathe, Madhav V

    2014-01-09

    A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997-2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods' performance was comparable. Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial.

  8. An experimental comparison of two adaptation strategies in an adaptive-walls wind-tunnel

    NASA Astrophysics Data System (ADS)

    Russo, G. P.; Zuppardi, G.; Basciani, M.

    1995-08-01

    In the present work an experimental comparison is made between two adaptation strategies: Judd's method and Everhart's method. A NACA 0012 airfoil has been tested at Mach numbers up to 0.4; models with chords up to 200 mm have been tested in a 200 mm × 200 mm test section. The two strategies, though based on different theoretical approaches, show fairly good agreement as far as the c_p distribution on the model, the lift and drag curves, and the residual interference are concerned, and agree, in terms of lift curve slope and drag coefficient at zero lift, with the McCroskey correlation.

  9. A synthetic method of solar spectrum based on LED

    NASA Astrophysics Data System (ADS)

    Wang, Ji-qiang; Su, Shi; Zhang, Guo-yu; Zhang, Jian

    2017-10-01

    A method for synthesizing the solar spectrum, based on the spectral characteristics of the solar spectrum and of LEDs and on the principle of arbitrary spectral synthesis, was studied using 14 kinds of LEDs with different central wavelengths. The LED and solar spectrum data were first selected with Origin software; the total number of LEDs for each central band was then calculated from the relationship between luminance and illuminance using least-squares curve fitting in Matlab. Finally, the spectral curve of the AM1.5 standard solar spectrum was obtained. The results met the technical requirements of solar spectral matching within ±20% and a solar constant >0.5.
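
    Because the number of LEDs in each channel cannot be negative, the least-squares step maps naturally to non-negative least squares. A hedged sketch of that step, with the Origin/Matlab workflow of the paper replaced by SciPy and all names illustrative:

        import numpy as np
        from scipy.optimize import nnls

        def led_counts(led_spectra, target):
            """Non-negative least squares for LED counts per channel.
            led_spectra: columns are single-LED spectra on a common wavelength grid;
            target: the AM1.5 reference spectrum on the same grid."""
            x, resid = nnls(led_spectra, target)
            return np.round(x).astype(int), resid   # integer counts and residual norm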

  10. Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem.

    PubMed

    Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing

    2015-01-01

    The Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established, and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial feasible solution very fast, and the GA is then used to improve it. Experimental software was developed, and a large number of experimental computations from Solomon's benchmark have been studied. The experimental results demonstrate the feasibility and effectiveness of the HOA.

  12. Monitoring the severe acute respiratory syndrome epidemic and assessing effectiveness of interventions in Hong Kong Special Administrative Region

    PubMed Central

    Chau, P; Yip, P

    2003-01-01

    Objective: To estimate the infection curve of severe acute respiratory syndrome (SARS) using the back projection method and to assess the effectiveness of interventions. Design: Statistical method. Data: The daily reported numbers of SARS cases and the interventions taken by the Hong Kong Special Administrative Region (HKSAR) up to 24 June 2003 are used. Method: A back projection technique is used to construct the infection curve of SARS in Hong Kong. The estimated epidemic curve is studied to identify the major events and to assess the effectiveness of interventions over the course of the epidemic. Results: The SARS infection curve in Hong Kong is constructed for the period 1 March 2003 to 24 June 2003. Some interventions seem to be effective while others apparently have little or no effect. Infections among medical and health workers are high. Conclusions: Quarantine of the close contacts of confirmed and suspected SARS cases seems to be the most effective intervention against the spread of SARS in the community. Thorough disinfection of the infected area against environmental hazards is helpful. Infections within hospitals can be reduced by better isolation measures and protective equipment. PMID:14573569

  13. The learning curve to achieve satisfactory completion rates in upper GI endoscopy: an analysis of a national training database.

    PubMed

    Ward, S T; Hancox, A; Mohammed, M A; Ismail, T; Griffiths, E A; Valori, R; Dunckley, P

    2017-06-01

    The aim of this study was to determine the number of OGDs (oesophago-gastro-duodenoscopies) trainees need to perform to acquire competency, in terms of successful unassisted completion to the second part of the duodenum 95% of the time. OGD data were retrieved from the trainee e-portfolio developed by the Joint Advisory Group on GI Endoscopy (JAG) in the UK. All trainees were included unless they were known to have a baseline experience of >20 procedures or had submitted data for <20 procedures. The primary outcome measure was OGD completion, defined as passage of the endoscope to the second part of the duodenum without physical assistance. The number of OGDs required to achieve a 95% completion rate was calculated by the moving average method and learning curve cumulative summation (LC-Cusum) analysis. To determine which factors were independently associated with OGD completion, a mixed effects logistic regression model was constructed with OGD completion as the outcome variable. Data were analysed for 1255 trainees over 288 centres, representing 243,555 OGDs. By the moving average method, trainees attained a 95% completion rate at 187 procedures. By LC-Cusum analysis, after 200 procedures, >90% of trainees had attained a 95% completion rate. The total number of OGDs performed, trainee age, and experience in lower GI endoscopy were factors independently associated with OGD completion. There are limited published data on the OGD learning curve. This is the largest study to date analysing the learning curve for competency acquisition. The JAG competency requirement for 200 procedures appears appropriate.
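
    The moving average method used for the completion-rate curve is a simple windowed mean over consecutive procedures. A sketch, with an assumed window size:

        import numpy as np

        def moving_completion_rate(completed, window=50):
            """Moving-average estimate of the unassisted completion rate, used to
            locate where a trainee first sustains 95%; window size is illustrative."""
            x = np.asarray(completed, float)             # 1 = completed to D2, 0 = not
            kernel = np.ones(window) / window
            return np.convolve(x, kernel, mode="valid")  # rate over each window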

  14. Comparing kinetic curves in liquid chromatography

    NASA Astrophysics Data System (ADS)

    Kurganov, A. A.; Kanat'eva, A. Yu.; Yakubenko, E. E.; Popova, T. P.; Shiryaeva, V. E.

    2017-01-01

    Five equations for kinetic curves, which connect the number of theoretical plates N and the time of analysis t0 for five different versions of optimization depending on the parameters being varied (e.g., mobile phase flow rate, pressure drop, sorbent grain size), are obtained by means of mathematical modeling. It is found that a method based on the optimization of the sorbent grain size at fixed pressure is most suitable for the optimization of rapid separations. It is noted that the advantages of this method are limited to an area of relatively low efficiency; in the area of high efficiency, the advantage passes to a method based on optimizing both the sorbent grain size and the pressure drop across the column.

  15. An independent Cepheid distance scale: Current status

    NASA Technical Reports Server (NTRS)

    Barnes, T. G., III

    1980-01-01

    An independent distance scale for Cepheid variables is discussed. The apparent magnitude and the visual surface brightness, inferred from an appropriate color index, are used to determine the angular diameter variation of the Cepheid. When combined with the linear displacement curve obtained from the integrated radial velocity curve, the distance and linear radius are determined. The attractiveness of the method is its complete independence of all other stellar distance scales, even though a number of practical difficulties currently exist in implementing the technique.
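
    The geometry of the method reduces to d = 2ΔR/Δθ: the linear displacement obtained by integrating the radial-velocity curve, divided by the angular-diameter variation inferred from the surface brightness. A schematic sketch, where the projection factor p and the unit conventions are assumptions of the illustration, not values from the paper:

        import numpy as np

        def bw_distance(theta, v_rad, t, p=1.31):
            """Surface-brightness (Baade-Wesselink type) distance sketch.
            theta: angular diameters (rad); v_rad: radial velocities (km/s);
            t: times (s); p: assumed projection factor."""
            dR = -p * np.cumsum(v_rad * np.gradient(t))   # linear radius change (km)
            dtheta = theta - theta[0]                     # angular-diameter change (rad)
            mask = np.abs(dtheta) > 1e-12                 # avoid division by zero
            d_km = 2.0 * dR[mask] / dtheta[mask]          # distance estimates (km)
            return np.median(d_km) / 3.086e13             # convert km to parsecs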

  16. Investigations of Fully Homomorphic Encryption (IFHE)

    DTIC Science & Technology

    2015-05-01

    analysis via experiments using the curve secp256k1 used in the Bitcoin protocol. In particular we show that with as little as 200 signatures we are able...used in Bitcoin [30]. The implementation of the secp256k1 curve in OpenSSL is interesting as it uses the wNAF method for exponentiation, as opposed to... Bitcoin an obvious mitigation against the attack is to limit the number of times a private key is used within the Bitcoin protocol. Each wallet

  17. Runoff potentiality of a watershed through SCS and functional data analysis technique.

    PubMed

    Adham, M I; Shirazi, S M; Othman, F; Rahman, S; Yusop, Z; Ismail, Z

    2014-01-01

    The runoff potentiality of a watershed was assessed by identifying the curve number (CN) through the Soil Conservation Service (SCS) method together with functional data analysis (FDA) techniques. Daily discrete rainfall data were collected from weather stations in the study area and analyzed through the lowess method for curve smoothing. As runoff data represent a periodic pattern in each watershed, a Fourier series was introduced to fit a smooth curve for the eight watersheds. Seven terms of the Fourier series were introduced for watersheds 5 and 8, while 8 terms were used for the rest of the watersheds for the best fit of the data. Bootstrapped smooth curve analysis reveals that watersheds 1, 2, 3, 6, 7, and 8 have monthly mean runoffs of 29, 24, 22, 23, 26, and 27 mm, respectively, and these watersheds would likely contribute to surface runoff in the study area. The purpose of this study was to transform runoff data into a smooth curve representing the surface runoff pattern and mean runoff of each watershed through statistical methods. This study provides information on the runoff potentiality of each watershed and also provides input data for hydrological modeling.
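
    The Fourier fitting step is an ordinary least-squares problem once the sine and cosine terms are laid out as regressors over a 12-month period. A sketch with the term count as a parameter (function names are illustrative):

        import numpy as np

        def fourier_fit(month, runoff, n_terms=7):
            """Least-squares Fourier fit of the monthly runoff pattern; the paper
            uses 7 or 8 terms depending on the watershed. Period is 12 months."""
            w = 2 * np.pi / 12.0
            cols = [np.ones_like(month, float)]
            for k in range(1, n_terms + 1):
                cols += [np.cos(k * w * month), np.sin(k * w * month)]
            X = np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(X, runoff, rcond=None)
            return coef, X @ coef     # coefficients and the smoothed curve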

  19. Review of Hull Structural Monitoring Systems for Navy Ships

    DTIC Science & Technology

    2013-05-01

    generally based on the same basic form of S-N curve, different correction methods are used by the various classification societies. ii. Methods for...Likewise there are a number of different methods employed for temperature compensation and these vary depending on the type of gauge, although typically...Analysis, Inc.[30] Figure 8. Examples of different methods of temperature compensation of fibre-optic strain sensors. It is noted in NATO

  20. Glimpses of stellar surfaces. II. Origins of the photometric modulations and timing variations of KOI-1452

    NASA Astrophysics Data System (ADS)

    Ioannidis, P.; Schmitt, J. H. M. M.

    2016-10-01

    The deviations of the mid-transit times of an exoplanet from a linear ephemeris are usually the result of gravitational interactions with other bodies in the system. However, these transit timing variations (TTVs) can also be introduced by the influence of star spots on the shape of the transit profile. Here we use the method of unsharp masking to investigate the photometric light curves of planets with ambiguous TTVs, comparing the features in their O-C diagram with the occurrence and in-transit positions of spot-crossing events. This method seems to be particularly useful for the examination of transit light curves with only small numbers of in-transit data points, i.e., the long-cadence light curves from the Kepler satellite. As a proof of concept we apply this method to the light curve and the estimated eclipse timing variations of the eclipsing binary KOI-1452, for which we demonstrate their non-gravitational nature. Furthermore, we use the method to study the rotation properties of the primary star of the system KOI-1452 and show that the spots responsible for the timing variations rotate with different periods than the most prominent periods of the system's light curve. We argue that the main contribution to the measured photometric variability of KOI-1452 originates in g-mode oscillations, which makes the primary star of the system a γ-Dor type variable candidate.
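
    Unsharp masking in this context means subtracting a smoothed copy of the light curve so that short-lived in-transit features (e.g., spot-crossing bumps) stand out. A minimal sketch with a running-median smoother; the window length is an assumption:

        import numpy as np

        def unsharp_mask(flux, window=25):
            """Residual light curve after removing a running-median smoothed version."""
            pad = window // 2
            padded = np.pad(flux, pad, mode="edge")
            smooth = np.array([np.median(padded[i:i + window])
                               for i in range(flux.size)])
            return flux - smooth   # spot-crossing features remain in the residual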

  1. A method to characterize average cervical spine ligament response based on raw data sets for implementation into injury biomechanics models.

    PubMed

    Mattucci, Stephen F E; Cronin, Duane S

    2015-01-01

    Experimental testing on cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments has not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piece-wise curve fit with first derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s⁻¹, 20 s⁻¹, and 150–250 s⁻¹, to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000 demonstrating excellent fit. This study produced a novel method in which a set of experimental ligament material property data exhibiting scatter was fit using a characteristic curve approach with a toe, linear, and traumatic region, as often observed in ligaments and tendons, and could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
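
    The idea of a piecewise fit with first-derivative continuity can be sketched as follows: a quadratic toe region joined to a linear region so that both value and slope match at the junction. The two-region form, the parameter names, and the synthetic data are simplifying assumptions for illustration; the paper's curves also include a traumatic region.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Toe region: quadratic through the origin with zero initial slope;
    # linear region: continuous value and first derivative at the junction d0.
    def toe_linear(d, a, d0):
        toe = a * d**2
        lin = a * d0**2 + 2 * a * d0 * (d - d0)   # matches value and slope at d0
        return np.where(d < d0, toe, lin)

    # Stand-in force-displacement data with scatter.
    rng = np.random.default_rng(1)
    d = np.linspace(0, 4, 60)
    f = toe_linear(d, 5.0, 1.5) + rng.normal(0, 1.0, d.size)

    (a_fit, d0_fit), _ = curve_fit(toe_linear, d, f, p0=[1.0, 1.0])
    print(f"toe coefficient a={a_fit:.2f}, toe-to-linear transition d0={d0_fit:.2f}")
    ```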

  2. Manufacturing complexity analysis

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1977-01-01

    The analysis of the complexity of a typical system is presented. Starting with the subsystems of an example system, the step-by-step procedure for analysis of the complexity of an overall system is given. The learning curves for the various subsystems are determined as well as the concurrent numbers of relevant design parameters. Then trend curves are plotted for the learning curve slopes versus the various design-oriented parameters, e.g. number of parts versus slope of learning curve, or number of fasteners versus slope of learning curve, etc. Representative cuts are taken from each trend curve, and a figure-of-merit analysis is made for each of the subsystems. Based on these values, a characteristic curve is plotted which is indicative of the complexity of the particular subsystem. Each such characteristic curve is based on a universe of trend curve data taken from data points observed for the subsystem in question. Thus, a characteristic curve is developed for each of the subsystems in the overall system.

  3. Interactive application of quadratic expansion of chi-square statistic to nonlinear curve fitting

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Everhart, Joel L.

    1987-01-01

    This report contains a detailed theoretical description of an all-purpose, interactive curve-fitting routine that is based on P. R. Bevington's description of the quadratic expansion of the Chi-Square statistic. The method is implemented in the associated interactive, graphics-based computer program. Taylor's expansion of Chi-Square is first introduced, and justifications for retaining only the first term are presented. From the expansion, a set of n simultaneous linear equations is derived, then solved by matrix algebra. A brief description of the code is presented along with the limited number of changes that are required to customize the program to a particular task. To evaluate the performance of the method and the goodness of nonlinear curve fitting, two typical engineering problems are examined and the graphical and tabular output of each is discussed. A complete listing of the entire package is included as an appendix.
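
    A minimal sketch of the underlying iteration, assuming a Gauss-Newton-style step: the quadratic expansion of Chi-Square yields a set of simultaneous linear equations in the parameter corrections, solved here by matrix algebra with a numerical Jacobian. The model function and data are illustrative, not from the report.

    ```python
    import numpy as np

    def fit_quadratic_expansion(model, x, y, sigma, beta, n_iter=20, eps=1e-6):
        """Iterate the linear system obtained from the Taylor expansion of
        Chi-Square retaining only the first term (a Bevington-style sketch)."""
        w = 1.0 / sigma**2
        for _ in range(n_iter):
            r = y - model(x, beta)
            # Numerical Jacobian of the model with respect to the parameters.
            J = np.empty((x.size, beta.size))
            for j in range(beta.size):
                b = beta.copy(); b[j] += eps
                J[:, j] = (model(x, b) - model(x, beta)) / eps
            # Solve the n simultaneous linear equations (J^T W J) delta = J^T W r.
            A = J.T @ (w[:, None] * J)
            g = J.T @ (w * r)
            beta = beta + np.linalg.solve(A, g)
        return beta

    # Example: exponential decay, a typical nonlinear fitting task.
    model = lambda x, b: b[0] * np.exp(-b[1] * x)
    x = np.linspace(0, 5, 40)
    rng = np.random.default_rng(2)
    y = model(x, np.array([3.0, 0.8])) + rng.normal(0, 0.05, x.size)
    print(fit_quadratic_expansion(model, x, y, np.full(x.size, 0.05),
                                  np.array([1.0, 0.3])))
    ```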

  4. Concentration Regimes of Biopolymers Xanthan, Tara, and Clairana, Comparing Dynamic Light Scattering and Distribution of Relaxation Time

    PubMed Central

    Oliveira, Patrícia D.; Michel, Ricardo C.; McBride, Alan J. A.; Moreira, Angelita S.; Lomba, Rosana F. T.; Vendruscolo, Claire T.

    2013-01-01

    The aim of this work was to evaluate analysis of the distribution of relaxation times (DRT), using a dynamic light back-scattering technique, as an alternative method for determining the concentration regimes in aqueous solutions of biopolymers (xanthan, clairana, and tara gums) through analysis of the overlap (c*) and aggregation (c**) concentrations. The diffusion coefficients were obtained over a range of concentrations for each biopolymer using two methods. The first method analysed the behaviour of the diffusion coefficient as a function of the concentration of the gum solution: using the slopes of the diffusion coefficient versus concentration curves, it was possible to determine c* and c** for xanthan and tara gum. However, it was not possible to determine the concentration regimes for clairana using this method. The second method was based on an analysis of the DRTs, which showed different numbers of relaxation modes. It was observed that the concentrations at which the number of modes changed corresponded to c* and c**. Thus, the DRT technique provides an alternative method for determining the critical concentrations of biopolymers. PMID:23671627

  5. Circuit analysis method for thin-film solar cell modules

    NASA Technical Reports Server (NTRS)

    Burger, D. R.

    1985-01-01

    The design of a thin-film solar cell module is dependent on the probability of occurrence of pinhole shunt defects. Using known or assumed defect density data, dichotomous population statistics can be used to calculate the number of defects expected in a module. Probability theory is then used to assign the defective cells to individual strings in a selected series-parallel circuit design. Iterative numerical calculation is used to calculate I-V curves using cell test values or assumed defective-cell values as inputs. Good and shunted cell I-V curves are added to determine the module output power and I-V curve. Different levels of shunt resistance can be selected to model different defect levels.
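
    A small sketch of the dichotomous-statistics step, under an assumed defect density and module geometry (all numbers are illustrative, not from the paper):

    ```python
    from scipy.stats import binom

    # Illustrative module: 10 parallel strings of 36 series cells, with an
    # assumed pinhole-shunt probability per cell.
    p_defect = 0.02
    cells_per_string, n_strings = 36, 10
    n_cells = cells_per_string * n_strings

    # Dichotomous (binomial) statistics for defects expected in a module.
    print("expected defective cells per module:", n_cells * p_defect)
    print("P(module has >= 5 defective cells):", 1 - binom.cdf(4, n_cells, p_defect))

    # Probability that a given series string contains at least one shunted cell,
    # and hence the expected number of degraded strings in the circuit design.
    p_string_hit = 1 - (1 - p_defect) ** cells_per_string
    print("P(string contains a shunt):", round(p_string_hit, 3))
    print("expected degraded strings:", round(n_strings * p_string_hit, 2))
    ```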

  6. Linear temporal and spatio-temporal stability analysis of a binary liquid film flowing down an inclined uniformly heated plate

    NASA Astrophysics Data System (ADS)

    Hu, Jun; Hadid, Hamda Ben; Henry, Daniel; Mojtabi, Abdelkader

    Temporal and spatio-temporal instabilities of binary liquid films flowing down an inclined uniformly heated plate with Soret effect are investigated by using the Chebyshev collocation method to solve the full system of linear stability equations. Seven dimensionless parameters, i.e. the Kapitza, Galileo, Prandtl, Lewis, Soret, Marangoni, and Biot numbers (Ka, G, Pr, L, S, Ma, Bi), are used to control the flow system. In the case of pure spanwise perturbations, thermocapillary S- and P-modes are obtained. It is found that the most dangerous modes are stationary for positive Soret numbers (S > 0) and oscillatory for negative Soret numbers; the mode obtained for S = 0 remains so for S > 0 and even merges with the long-wave S-mode. In the case of streamwise perturbations, a long-wave surface mode (H-mode) is also obtained. From the neutral curves, it is found that larger Soret numbers make the film flow more unstable, as do larger Marangoni numbers. The increase of these parameters leads to the merging of the long-wave H- and S-modes, making the situation long-wave unstable for any Galileo number. It also strongly influences the short-wave P-mode, which becomes the most critical for large enough Galileo numbers. Furthermore, from the boundary curves between absolute and convective instabilities (AI/CI) calculated for both the long-wave instability (S- and H-modes) and the short-wave instability (P-mode), it is shown that for small Galileo numbers the AI/CI boundary curves are determined by the long-wave instability, while for large Galileo numbers they are determined by the short-wave instability.

  7. Classification of burst and suppression in the neonatal electroencephalogram

    NASA Astrophysics Data System (ADS)

    Löfhede, J.; Löfgren, N.; Thordstein, M.; Flisberg, A.; Kjellmer, I.; Lindecrantz, K.

    2008-12-01

    Fisher's linear discriminant (FLD), a feed-forward artificial neural network (ANN) and a support vector machine (SVM) were compared with respect to their ability to distinguish bursts from suppressions in electroencephalograms (EEG) displaying a burst-suppression pattern. Five features extracted from the EEG were used as inputs. The study was based on EEG signals from six full-term infants who had suffered from perinatal asphyxia, and the methods have been trained with reference data classified by an experienced electroencephalographer. The results are summarized as the area under the curve (AUC), derived from receiver operating characteristic (ROC) curves for the three methods. Based on this, the SVM performs slightly better than the others. Testing the three methods with combinations of increasing numbers of the five features shows that the SVM handles the increasing amount of information better than the other methods.
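
    For illustration, the comparison can be reproduced in outline with scikit-learn on synthetic five-feature data (FLD via LinearDiscriminantAnalysis); the classifier settings and the data are assumptions, not those of the study.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Five features, as in the EEG study; the data here are synthetic stand-ins.
    X, y = make_classification(n_samples=600, n_features=5, n_informative=4,
                               n_redundant=1, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, clf in [("FLD", LinearDiscriminantAnalysis()),
                      ("ANN", MLPClassifier(hidden_layer_sizes=(10,),
                                            max_iter=2000, random_state=0)),
                      ("SVM", SVC(kernel="rbf"))]:
        clf.fit(Xtr, ytr)
        score = (clf.decision_function(Xte) if hasattr(clf, "decision_function")
                 else clf.predict_proba(Xte)[:, 1])
        print(name, "AUC =", round(roc_auc_score(yte, score), 3))
    ```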

  8. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
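
    A minimal sketch of IRDM on a single band, assuming Schroeder backward integration of the squared impulse response, a fit of the initial decay between -5 and -25 dB, and the standard relation eta = 2.2/(f*T60); the band, fit range, and synthetic response are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def loss_factor_irdm(h, fs, f_band):
        """Estimate the damping loss factor from a band-limited impulse
        response via Schroeder backward integration of the squared response."""
        e = np.cumsum((h**2)[::-1])[::-1]          # remaining energy at each time
        L = 10 * np.log10(e / e[0])                # decay curve in dB
        t = np.arange(h.size) / fs
        m = (L <= -5) & (L >= -25)                 # fit the initial decay slope
        slope, _ = np.polyfit(t[m], L[m], 1)       # decay rate in dB/s (negative)
        T60 = -60.0 / slope
        return 2.2 / (f_band * T60)                # eta = 2.2/(f*T60)

    # Synthetic check: a decaying tone whose known loss factor is recovered.
    fs, f0, eta_true = 8192, 500.0, 0.02
    t = np.arange(0, 2.0, 1 / fs)
    h = np.exp(-np.pi * f0 * eta_true * t) * np.sin(2 * np.pi * f0 * t)
    print("estimated eta:", round(loss_factor_irdm(h, fs, f0), 4))
    ```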

  9. The quantification of spermatozoa by real-time quantitative PCR, spectrophotometry, and spermatophore cap size.

    PubMed

    Doyle, Jacqueline M; McCormick, Cory R; DeWoody, J Andrew

    2011-01-01

    Many animals, such as crustaceans, insects, and salamanders, package their sperm into spermatophores, and the number of spermatozoa contained in a spermatophore is relevant to studies of sexual selection and sperm competition. We used two molecular methods, real-time quantitative polymerase chain reaction (RT-qPCR) and spectrophotometry, to estimate sperm numbers from spermatophores. First, we designed gene-specific primers that produced a single amplicon in four species of ambystomatid salamanders. A standard curve generated from cloned amplicons revealed a strong positive relationship between template DNA quantity and cycle threshold, suggesting that RT-qPCR could be used to quantify sperm in a given sample. We then extracted DNA from multiple Ambystoma maculatum spermatophores, performed RT-qPCR on each sample, and estimated template copy numbers (i.e. sperm number) using the standard curve. Second, we used spectrophotometry to determine the number of sperm per spermatophore by measuring DNA concentration relative to the genome size. We documented a significant positive relationship between the estimates of sperm number based on RT-qPCR and those based on spectrophotometry. When these molecular estimates were compared to spermatophore cap size, which in principle could predict the number of sperm contained in the spermatophore, we also found a significant positive relationship between sperm number and spermatophore cap size. This linear model allows estimates of sperm number strictly from cap size, an approach which could greatly simplify the estimation of sperm number in future studies. These methods may help explain variation in fertilization success where sperm competition is mediated by sperm quantity. © 2010 Blackwell Publishing Ltd.
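
    The standard-curve arithmetic can be sketched as follows; the dilution series, Ct values, and the unknown sample below are invented for illustration, not taken from the study.

    ```python
    import numpy as np

    # Stand-in standard-curve data: cycle thresholds measured for serial
    # dilutions of the cloned amplicon (copies per reaction are illustrative).
    copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
    ct     = np.array([30.1, 26.8, 23.4, 20.0, 16.7])

    # Linear standard curve: Ct = slope*log10(copies) + intercept.
    slope, intercept = np.polyfit(np.log10(copies), ct, 1)
    efficiency = 10 ** (-1 / slope) - 1            # ~1.0 means 100% per cycle
    print(f"slope={slope:.2f}, amplification efficiency={efficiency:.2%}")

    # Estimate sperm (template copy) number of an unknown sample from its Ct.
    ct_unknown = 21.9
    copies_unknown = 10 ** ((ct_unknown - intercept) / slope)
    print(f"estimated copies: {copies_unknown:.3g}")
    ```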

  10. Providing the physical basis of SCS curve number method and its proportionality relationship from Richards' equation

    NASA Astrophysics Data System (ADS)

    Hooshyar, M.; Wang, D.

    2016-12-01

    The empirical proportionality relationship, which indicates that the ratios of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in a shallow water table environment under the following boundary conditions: 1) the soil is saturated at the land surface; and 2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and hydrostatic soil moisture profile between the no-flux boundary and water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse textured soils.
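
    For reference, the standard SCS-CN rainfall-runoff relations discussed here can be written compactly; the 0.2 initial-abstraction ratio is the conventional default, and the storm below is an invented example.

    ```python
    def scs_runoff_mm(p_mm, cn, ia_ratio=0.2):
        """Direct runoff Q from event rainfall P via the standard SCS-CN
        relations: S = 25400/CN - 254 (mm), Ia = 0.2*S, and
        Q = (P-Ia)^2 / (P-Ia+S) for P > Ia. These relations embody the
        proportionality hypothesis F/S = Q/(P-Ia)."""
        s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
        ia = ia_ratio * s                 # initial abstraction
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    # Example: an 80 mm storm on a catchment with CN = 70 yields ~20 mm of runoff.
    print(round(scs_runoff_mm(80.0, 70), 1), "mm of direct runoff")
    ```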

  11. An analytical solution of Richards' equation providing the physical basis of SCS curve number method and its proportionality relationship

    NASA Astrophysics Data System (ADS)

    Hooshyar, Milad; Wang, Dingbao

    2016-08-01

    The empirical proportionality relationship, which indicates that the ratios of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in a shallow water table environment under the following boundary conditions: (1) the soil is saturated at the land surface; and (2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and hydrostatic soil moisture profile between the no-flux boundary and water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse textured soils.

  12. Interactive contour delineation and refinement in treatment planning of image‐guided radiation therapy

    PubMed Central

    Zhou, Wu

    2014-01-01

    The accurate contour delineation of the target and/or organs at risk (OAR) is essential in treatment planning for image‐guided radiation therapy (IGRT). Although many automatic contour delineation approaches have been proposed, few of them meet the requirements of clinical applications in terms of accuracy and efficiency. Moreover, clinicians would like to analyze the characteristics of regions of interest (ROI) and adjust contours manually during IGRT, so an interactive tool for contour delineation is necessary in such cases. In this work, a novel curve-fitting approach to interactive contour delineation is proposed, which allows users to quickly improve contours with a simple mouse click. Initially, a region containing the object of interest is selected in the image; the program then automatically selects important control points from the region boundary, and Hermite cubic curves are used to fit the control points. The resulting curve can be revised by moving its control points interactively. Several curve-fitting methods are presented for comparison. Finally, to improve the accuracy of contour delineation, a curve refinement based on the maximum gradient magnitude is proposed: all points on the curve are automatically moved towards the positions of maximum gradient magnitude. Experimental results show that Hermite cubic curves and the gradient-based curve refinement perform best on the proposed platform in terms of accuracy, robustness, and computation time. Experimental results on real medical images demonstrate the efficiency, accuracy, and robustness of the proposed process in clinical applications. PACS number: 87.53.Tf PMID:24423846
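
    A minimal sketch of Hermite cubic interpolation through control points, assuming Catmull-Rom-style tangents from central differences (the abstract does not specify the paper's tangent choice):

    ```python
    import numpy as np

    def hermite_curve(points, samples_per_segment=20):
        """Piecewise cubic Hermite interpolation through 2-D control points,
        with tangents from central differences (one-sided at the ends)."""
        p = np.asarray(points, dtype=float)
        m = np.empty_like(p)                 # tangent at each control point
        m[1:-1] = 0.5 * (p[2:] - p[:-2])
        m[0], m[-1] = p[1] - p[0], p[-1] - p[-2]
        out = []
        for i in range(len(p) - 1):
            t = np.linspace(0, 1, samples_per_segment)[:, None]
            h00 = 2*t**3 - 3*t**2 + 1        # Hermite basis functions
            h10 = t**3 - 2*t**2 + t
            h01 = -2*t**3 + 3*t**2
            h11 = t**3 - t**2
            out.append(h00*p[i] + h10*m[i] + h01*p[i+1] + h11*m[i+1])
        return np.vstack(out)

    contour = hermite_curve([(0, 0), (3, 4), (7, 5), (10, 1)])
    print(contour.shape)   # densely sampled curve ready for interactive editing
    ```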

  13. Meta-analysis of Diagnostic Accuracy and ROC Curves with Covariate Adjusted Semiparametric Mixtures.

    PubMed

    Doebler, Philipp; Holling, Heinz

    2015-12-01

    Many screening tests dichotomize a measurement to classify subjects. Typically a cut-off value is chosen in a way that allows identification of an acceptable number of cases relative to a reference procedure, but does not produce too many false positives at the same time. Thus for the same sample many pairs of sensitivities and false positive rates result as the cut-off is varied. The curve of these points is called the receiver operating characteristic (ROC) curve. One goal of diagnostic meta-analysis is to integrate ROC curves and arrive at a summary ROC (SROC) curve. Holling, Böhning, and Böhning (Psychometrika 77:106-126, 2012a) demonstrated that finite semiparametric mixtures can describe the heterogeneity in a sample of Lehmann ROC curves well; this approach leads to clusters of SROC curves of a particular shape. We extend this work with the help of the [Formula: see text] transformation, a flexible family of transformations for proportions. A collection of SROC curves is constructed that approximately contains the Lehmann family but in addition allows the modeling of shapes beyond the Lehmann ROC curves. We introduce two rationales for determining the shape from the data. Using the fact that each curve corresponds to a natural univariate measure of diagnostic accuracy, we show how covariate adjusted mixtures lead to a meta-regression on SROC curves. Three worked examples illustrate the method.

  14. Comment on “Beyond the SCS-CN method: A theoretical framework for spatially lumped rainfall-runoff response” by M.S. Bartlett, A.J. Parolari, J.J. McDonnell and A. Porporato

    USDA-ARS's Scientific Manuscript database

    Bartlett et al. [2016] performed a re-interpretation and modification of the space-time lumped USDA NRCS (formerly SCS) Curve Number (CN) method to extend its applicability to forested watersheds. We believe that the well-documented limitations of the CN method severely constrain the applicability ...

  15. On the meaning of the weighted alternative free-response operating characteristic figure of merit.

    PubMed

    Chakraborty, Dev P; Zhai, Xuetong

    2016-05-01

    The free-response receiver operating characteristic (FROC) method is being increasingly used to evaluate observer performance in search tasks. Data analysis requires definition of a figure of merit (FOM) quantifying performance. While a number of FOMs have been proposed, the recommended one, namely, the weighted alternative FROC (wAFROC) FOM, is not well understood. The aim of this work is to clarify the meaning of this FOM by relating it to the empirical area under a proposed wAFROC curve. The wAFROC FOM is defined in terms of a quasi-Wilcoxon statistic involving weights that code the clinical importance assigned to each lesion. A new wAFROC curve is proposed, the y-axis of which incorporates the weights, giving more credit for marking clinically important lesions, while the x-axis is identical to that of the AFROC curve. An expression is derived relating the area under the empirical wAFROC curve to the wAFROC FOM. Examples are presented with small numbers of cases showing how AFROC and wAFROC curves are affected by correct and incorrect decisions and how the corresponding FOMs credit or penalize these decisions. The wAFROC, AFROC, and inferred ROC FOMs were applied to three clinical data sets involving multiple reader FROC interpretations in different modalities. It is shown analytically that the area under the empirical wAFROC curve equals the wAFROC FOM. This theorem is the FROC analog of a well-known theorem developed in 1975 for ROC analysis, which gave meaning to a Wilcoxon statistic based ROC FOM. A similar equivalence applies between the area under the empirical AFROC curve and the AFROC FOM. The examples show explicitly that the wAFROC FOM gives equal importance to all diseased cases, regardless of the number of lesions, a desirable statistical property not shared by the AFROC FOM. Applications to the clinical data sets show that the wAFROC FOM yields results comparable to that using the AFROC FOM. The equivalence theorem gives meaning to the weighted AFROC FOM, namely, it is identical to the empirical area under the weighted AFROC curve.
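
    The 1975 ROC equivalence referenced here can be checked numerically for the unweighted case: the trapezoidal area under the empirical ROC curve equals the Wilcoxon statistic (ties credited 1/2). The ratings below are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x0 = rng.normal(0.0, 1.0, 50)     # ratings on non-diseased cases
    x1 = rng.normal(1.2, 1.0, 40)     # ratings on diseased cases

    # Wilcoxon statistic: fraction of (diseased, non-diseased) pairs ranked
    # correctly, with ties counted as 1/2.
    wilcoxon = np.mean((x1[:, None] > x0[None, :])
                       + 0.5 * (x1[:, None] == x0[None, :]))

    # Trapezoidal area under the empirical ROC curve for the same ratings.
    thresholds = np.r_[np.inf, np.sort(np.r_[x0, x1])[::-1], -np.inf]
    tpr = np.array([(x1 >= c).mean() for c in thresholds])
    fpr = np.array([(x0 >= c).mean() for c in thresholds])
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)

    print(round(wilcoxon, 6), round(auc, 6))   # identical for these ratings
    ```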

  16. Satellite-derived land covers for runoff estimation using SCS-CN method in Chen-You-Lan Watershed, Taiwan

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Yan; Lin, Chao-Yuan

    2017-04-01

    The Soil Conservation Service Curve Number (SCS-CN) method, which was originally developed by the USDA Natural Resources Conservation Service, is widely used to estimate direct runoff volume from rainfall. The runoff Curve Number (CN) parameter is based on the hydrologic soil group and land use factors. In Taiwan, the national land use maps were interpreted from aerial photos in 1995 and 2008. Rapid updating of post-disaster land use maps is limited by the high cost of production, so classification of satellite images is an alternative way to obtain a land use map. In this study, the Normalized Difference Vegetation Index (NDVI) in Chen-You-Lan Watershed was derived from dry- and wet-season Landsat imagery during 2003–2008. Land covers were interpreted from the mean value and standard deviation of NDVI and were categorized into 4 groups, i.e. forest, grassland, agriculture, and bare land. Then, the runoff volumes of typhoon events during 2005–2009 were estimated using the SCS-CN method and verified against the measured runoff data. The result showed that the model efficiency coefficient is 90.77%. Therefore, estimating runoff by using a land cover map classified from satellite images is practicable.
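
    A rough sketch of the classification step, assuming mean/std NDVI thresholds that are purely illustrative (the study derives its classes from the actual Landsat statistics):

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index from NIR/red reflectance."""
        return (nir - red) / (nir + red + 1e-9)

    # Stand-in reflectance stacks: (n_scenes, rows, cols) across seasons.
    rng = np.random.default_rng(4)
    nir = rng.uniform(0.05, 0.6, (12, 100, 100))
    red = rng.uniform(0.02, 0.4, (12, 100, 100))

    series = ndvi(nir, red)                      # per-scene NDVI
    mean_ndvi, std_ndvi = series.mean(axis=0), series.std(axis=0)

    # Illustrative thresholds (not from the paper) for the four cover groups:
    # stable high NDVI -> forest; variable high -> agriculture; moderate ->
    # grassland; low -> bare land.
    cover = np.select(
        [(mean_ndvi > 0.5) & (std_ndvi < 0.1),
         (mean_ndvi > 0.4),
         (mean_ndvi > 0.2)],
        ["forest", "agriculture", "grassland"], default="bare land")
    print(np.unique(cover, return_counts=True))
    ```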

  17. ArcCN-Runoff: An ArcGIS tool for generating curve number and runoff maps

    USGS Publications Warehouse

    Zhan, X.; Huang, M.-L.

    2004-01-01

    The development and the application of ArcCN-Runoff tool, an extension of ESRI® ArcGIS software, are reported. This tool can be applied to determine curve numbers and to calculate runoff or infiltration for a rainfall event in a watershed. Implementation of GIS techniques such as dissolving, intersecting, and a curve-number reference table improve efficiency. Technical processing time may be reduced from days, if not weeks, to hours for producing spatially varied curve number and runoff maps. An application example for a watershed in Lyon County and Osage County, Kansas, USA, is presented. © 2004 Elsevier Ltd. All rights reserved.

  18. A method for the estimation of dual transmissivities from slug tests

    NASA Astrophysics Data System (ADS)

    Wolny, Filip; Marciniak, Marek; Kaczmarek, Mariusz

    2018-03-01

    Aquifer homogeneity is usually assumed when interpreting the results of pumping and slug tests, although aquifers are essentially heterogeneous. The aim of this study is to present a method of determining the transmissivities of dual-permeability water-bearing formations based on slug tests such as the pressure-induced permeability test. A bi-exponential rate-of-rise curve is typically observed during many of these tests conducted in heterogeneous formations. The work involved analyzing curves deviating from the exponential rise recorded at the Belchatow Lignite Mine in central Poland, where a significant number of permeability tests have been conducted. In most cases, bi-exponential movement was observed in piezometers with a screen installed in layered sediments, each with a different hydraulic conductivity, or in fissured rock. The possibility to identify the flow properties of these geological formations was analyzed. For each piezometer installed in such formations, a set of two transmissivity values was calculated piecewise based on the interpretation algorithm of the pressure-induced permeability test—one value for the first (steeper) part of the obtained rate-of-rise curve, and a second value for the latter part of the curve. The results of transmissivity estimation for each piezometer are shown. The discussion presents the limitations of the interpretational method and suggests future modeling plans.
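
    A sketch of the bi-exponential fit on a normalized rate-of-rise curve with invented data; converting each fitted rate to a transmissivity requires the test's interpretation algorithm, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Bi-exponential rate-of-rise model: the steep early part and the slower
    # late part correspond to the two exponential rates.
    def biexp(t, A, a, B, b):
        return 1.0 - A * np.exp(-a * t) - B * np.exp(-b * t)

    t = np.linspace(0, 300, 120)                       # seconds
    rng = np.random.default_rng(5)
    h = biexp(t, 0.6, 0.05, 0.4, 0.005) + rng.normal(0, 0.005, t.size)

    p0 = [0.5, 0.1, 0.5, 0.001]
    (A, a, B, b), _ = curve_fit(biexp, t, h, p0=p0, maxfev=20000)
    print(f"fast rate {max(a, b):.4f} 1/s (steep part), "
          f"slow rate {min(a, b):.5f} 1/s (latter part)")
    # Each rate is then interpreted piecewise to give one transmissivity value
    # per part of the curve, as described in the abstract.
    ```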

  19. Asymptotic scalings of developing curved pipe flow

    NASA Astrophysics Data System (ADS)

    Ault, Jesse; Chen, Kevin; Stone, Howard

    2015-11-01

    Asymptotic velocity and pressure scalings are identified for the developing curved pipe flow problem in the limit of small pipe curvature and high Reynolds numbers. The continuity and Navier-Stokes equations in toroidal coordinates are linearized about Dean's analytical curved pipe flow solution (Dean 1927). Applying appropriate scaling arguments to the perturbation pressure and velocity components and taking the limits of small curvature and large Reynolds number yields a set of governing equations and boundary conditions for the perturbations, independent of any Reynolds number and pipe curvature dependence. Direct numerical simulations are used to confirm these scaling arguments. Fully developed straight pipe flow is simulated entering a curved pipe section for a range of Reynolds numbers and pipe-to-curvature radius ratios. The maximum values of the axial and secondary velocity perturbation components along with the maximum value of the pressure perturbation are plotted along the curved pipe section. The results collapse when the scaling arguments are applied. The numerically solved decay of the velocity perturbation is also used to determine the entrance/development lengths for the curved pipe flows, which are shown to scale linearly with the Reynolds number.

  20. Understanding the effect of watershed characteristic on the runoff using SCS curve number

    NASA Astrophysics Data System (ADS)

    Damayanti, Frieta; Schneider, Karl

    2015-04-01

    Runoff modeling is a key component of watershed management. The temporal course and amount of runoff are a complex function of a multitude of parameters such as climate, soil, topography, land use, and water management. Against the background of current rapid environmental change, due both to (i) man-made changes (e.g. urban development, land use change, water management) and to (ii) changes in natural systems (e.g. climate change), understanding and predicting the impacts of these changes upon runoff is very important and affects the wellbeing of many people living in a watershed. A main tool for such predictions is hydrologic models. Process-based models in particular are the method of choice to assess the impact of land use and climate change. However, many regions which experience large changes in their watersheds can be described as rather data poor, which limits the applicability of such models. This is particularly true for the Telomoyo Watershed (545 km2), located in the southern part of Central Java province. The average annual rainfall of the study area reaches 2971 mm. Irrigated paddy fields are the dominant land use (35%), followed by built-up areas and dryland agriculture. The only available soil map is the FAO digital soil map of the world, which provides rather general soil information. A field survey accompanied by laboratory analysis of 65 soil samples was carried out to provide more detailed soil texture information. The soil texture map is a key input for defining hydrological soil groups in the SCS method. In the frame of our study on 'Integrated Analysis on Flood Risk of Telomoyo Watershed in Response to the Climate and Land Use Change', funded by the German Academic Exchange Service (DAAD), we analyzed the sensitivity of the modeled runoff to the choice of method for estimating the CN values in the SCS-CN method. The goal of this study is to analyze the impact of different data sources on the curve numbers and the estimated runoff. CN values were estimated using the field measurements of soil texture for different combinations of land use and topography. To transfer the local soil texture measurements to the watershed domain, a statistical analysis using the frequency distribution of the measured soil textures was applied to derive the effective CN value for a given land use, topography, and soil texture combination. Since the curve numbers change as a function of parameter combinations, the effect of different methods of estimating the curve number upon the runoff is analyzed and compared to the straightforward method of using the data from the FAO soil map.

  1. Learning/cost-improvement curves

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1976-01-01

    Review guide is an aid to manager or engineer who must determine production costs for components, systems, or services. Methods are described by which manufacturers may use historical data, task characteristics, and current cost data to estimate unit prices as function of number of units to be produced.
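
    The classic log-linear learning-curve model that underlies such unit-price estimates can be sketched as follows; the 90% slope and first-unit cost are illustrative choices, not values from the guide.

    ```python
    import numpy as np

    def unit_cost(n, first_unit_cost, slope=0.90):
        """Classic log-linear learning curve: cost of unit n is
        C_n = C_1 * n**b with b = log(slope)/log(2), so unit cost falls to
        `slope` of its value each time cumulative output doubles."""
        b = np.log(slope) / np.log(2)
        return first_unit_cost * n ** b

    units = np.arange(1, 101)
    costs = unit_cost(units, first_unit_cost=1000.0, slope=0.90)
    print(f"unit 2: {costs[1]:.0f}, unit 4: {costs[3]:.0f}")  # each doubling -> 90%
    print(f"total for 100 units: {costs.sum():.0f}")
    ```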

  2. Research on the middle-of-receiver-spread assumption of the MASW method

    USGS Publications Warehouse

    Luo, Y.; Xia, J.; Liu, J.; Xu, Y.; Liu, Q.

    2009-01-01

    The multichannel analysis of surface wave (MASW) method has been effectively used to determine near-surface shear- (S-) wave velocity. Estimating the S-wave velocity profile from Rayleigh-wave measurements is straightforward. A three-step process is required to obtain S-wave velocity profiles: acquisition of a multiple number of multichannel records along a linear survey line by use of the roll-along mode, extraction of dispersion curves of Rayleigh waves, and inversion of dispersion curves for an S-wave velocity profile for each shot gather. A pseudo-2D S-wave velocity section can be generated by aligning 1D S-wave velocity models. In this process, it is very important to understand where the inverted 1D S-wave velocity profile should be located: the midpoint of each spread (a middle-of-receiver-spread assumption) or somewhere between the source and the last receiver. In other words, the extracted dispersion curve is determined by the geophysical structure within the geophone spread or strongly affected by the source geophysical structure. In this paper, dispersion curves of synthetic datasets and a real-world example are calculated by fixing the receiver spread and changing the source location. Results demonstrate that the dispersion curves are mainly determined by structures within a receiver spread. © 2008 Elsevier Ltd. All rights reserved.

  3. W-curve alignments for HIV-1 genomic comparisons.

    PubMed

    Cork, Douglas J; Lembark, Steven; Tovanabutra, Sodsai; Robb, Merlin L; Kim, Jerome H

    2010-06-01

    The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap-associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string-based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A tree-building method is developed with the W-curve that utilizes a novel cylindrical coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string-based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison technique of aligning extremes of the curves to effectively phase-shift them past the HIV-1 gap problem, is presented. Besides yielding similar neighbor-joining phenogram topologies, most Mother and Infant C2-V5 sequences in the cohort pairs geometrically map closest to each other, indicating that W-curve heuristics overcame any gap problem.

  4. A Global Optimization Method to Calculate Water Retention Curves

    NASA Astrophysics Data System (ADS)

    Maggi, S.; Caputo, M. C.; Turturro, A. C.

    2013-12-01

    Water retention curves (WRC) have a key role in the hydraulic characterization of soils and rocks. The behaviour of the medium is defined by relating the unsaturated water content to the matric potential. The experimental determination of WRCs requires an accurate and detailed measurement of the dependence of matric potential on water content, a time-consuming and error-prone process, in particular for rocky media. A complete experimental WRC needs at least a few tens of data points, distributed more or less uniformly from full saturation to oven dryness. Since each measurement requires waiting for steady-state conditions (from a few tens of minutes for soils up to several hours or days for rocks or clays), the whole process can take a few months. The experimental data are fitted to the most appropriate parametric model, such as the widely used models of Van Genuchten, Brooks and Corey, and Rossi-Nimmo, to obtain the analytic WRC. We present here a new method for the determination of the parameters that best fit the models to the available experimental data. The method is based on differential evolution, an evolutionary computation algorithm particularly useful for multidimensional real-valued global optimization problems. With this method it is possible to strongly reduce the number of measurements necessary to optimize the model parameters that accurately describe the WRC of the samples, decreasing the time needed to adequately characterize the medium. In the present work, we have applied our method to calculate the WRCs of sedimentary carbonatic rocks of marine origin, belonging to the 'Calcarenite di Gravina' Formation (Middle Pliocene - Early Pleistocene) and coming from two different quarry districts in Southern Italy. [Figure caption: WRC curves calculated using the Van Genuchten model by simulated annealing (dashed curve) and differential evolution (solid curve); the curves are calculated using 10 experimental data points randomly extracted from the full experimental dataset. Simulated annealing is not able to find the optimal solution with this reduced data set.]
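
    A minimal sketch of the approach, assuming the Van Genuchten model and SciPy's differential evolution; the parameter bounds and the synthetic data points are illustrative, not the paper's measurements.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def van_genuchten(h, theta_r, theta_s, alpha, n):
        """Van Genuchten water retention curve, theta as a function of suction h."""
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

    # Stand-in measurements: (suction in cm, water content) pairs.
    h_obs = np.array([1, 10, 30, 100, 300, 1000, 5000, 15000], dtype=float)
    rng = np.random.default_rng(6)
    theta_obs = van_genuchten(h_obs, 0.05, 0.40, 0.02, 1.6) \
        + rng.normal(0, 0.005, h_obs.size)

    def sse(params):
        return np.sum((van_genuchten(h_obs, *params) - theta_obs) ** 2)

    bounds = [(0.0, 0.2), (0.2, 0.6), (1e-4, 1.0), (1.05, 4.0)]
    result = differential_evolution(sse, bounds, seed=0)
    print("theta_r, theta_s, alpha, n =", np.round(result.x, 4))
    ```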

  5. A FEM-based method to determine the complex material properties of piezoelectric disks.

    PubMed

    Pérez, N; Carbonari, R C; Andrade, M A B; Buiochi, F; Adamowski, J C

    2014-08-01

    Numerical simulations allow modeling piezoelectric devices and ultrasonic transducers. However, the accuracy in the results is limited by the precise knowledge of the elastic, dielectric and piezoelectric properties of the piezoelectric material. To introduce the energy losses, these properties can be represented by complex numbers, where the real part of the model essentially determines the resonance frequencies and the imaginary part determines the amplitude of each resonant mode. In this work, a method based on the Finite Element Method (FEM) is modified to obtain the imaginary material properties of piezoelectric disks. The material properties are determined from the electrical impedance curve of the disk, which is measured by an impedance analyzer. The method consists in obtaining the material properties that minimize the error between experimental and numerical impedance curves over a wide range of frequencies. The proposed methodology starts with a sensitivity analysis of each parameter, determining the influence of each parameter over a set of resonant modes. Sensitivity results are used to implement a preliminary algorithm approaching the solution in order to avoid the search to be trapped into a local minimum. The method is applied to determine the material properties of a Pz27 disk sample from Ferroperm. The obtained properties are used to calculate the electrical impedance curve of the disk with a Finite Element algorithm, which is compared with the experimental electrical impedance curve. Additionally, the results were validated by comparing the numerical displacement profile with the displacements measured by a laser Doppler vibrometer. The comparison between the numerical and experimental results shows excellent agreement for both electrical impedance curve and for the displacement profile over the disk surface. The agreement between numerical and experimental displacement profiles shows that, although only the electrical impedance curve is considered in the adjustment procedure, the obtained material properties allow simulating the displacement amplitude accurately. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. A formulation of tissue- and water-equivalent materials using the stoichiometric analysis method for CT-number calibration in radiotherapy treatment planning.

    PubMed

    Yohannes, Indra; Kolditz, Daniel; Langner, Oliver; Kalender, Willi A

    2012-03-07

    Tissue- and water-equivalent materials (TEMs) are widely used in quality assurance and calibration procedures, both in radiodiagnostics and radiotherapy. In radiotherapy, particularly, the TEMs are often used for computed tomography (CT) number calibration in treatment planning systems. However, currently available TEMs may not be very accurate in the determination of the calibration curves due to their limitation in mimicking radiation characteristics of the corresponding real tissues in both low- and high-energy ranges. Therefore, we are proposing a new formulation of TEMs using a stoichiometric analysis method to obtain TEMs for the calibration purposes. We combined the stoichiometric calibration and the basic data method to compose base materials to develop TEMs matching standard real tissues from ICRU Report 44 and 46. First, the CT numbers of six materials with known elemental compositions were measured to get constants for the stoichiometric calibration. The results of the stoichiometric calibration were used together with the basic data method to formulate new TEMs. These new TEMs were scanned to validate their CT numbers. The electron density and the stopping power calibration curves were also generated. The absolute differences of the measured CT numbers of the new TEMs were less than 4 HU for the soft tissues and less than 22 HU for the bone compared to the ICRU real tissues. Furthermore, the calculated relative electron density and electron and proton stopping powers of the new TEMs differed by less than 2% from the corresponding ICRU real tissues. The new TEMs which were formulated using the proposed technique increase the simplicity of the calibration process and preserve the accuracy of the stoichiometric calibration simultaneously.

  7. A Dirichlet process model for classifying and forecasting epidemic curves

    PubMed Central

    2014-01-01

    Background A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. Methods The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997–2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). Results We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods’ performance was comparable. Conclusions Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial. PMID:24405642

  8. An appraisal of the learning curve in robotic general surgery.

    PubMed

    Pernar, Luise I M; Robertson, Faith C; Tavakkoli, Ali; Sheu, Eric G; Brooks, David C; Smink, Douglas S

    2017-11-01

    Robotic-assisted surgery is used with increasing frequency in general surgery for a variety of applications. In spite of this increase in usage, the learning curve is not yet defined. This study reviews the literature on the learning curve in robotic general surgery to inform adopters of the technology. PubMed and EMBASE searches yielded 3690 abstracts published between July 1986 and March 2016. The abstracts were evaluated based on the following inclusion criteria: written in English, reporting original work, focus on general surgery operations, and with explicit statistical methods. Twenty-six full-length articles were included in final analysis. The articles described the learning curves in colorectal (9 articles, 35%), foregut/bariatric (8, 31%), biliary (5, 19%), and solid organ (4, 15%) surgery. Eighteen of 26 (69%) articles report single-surgeon experiences. Time was used as a measure of the learning curve in all studies (100%); outcomes were examined in 10 (38%). In 12 studies (46%), the authors identified three phases of the learning curve. Numbers of cases needed to achieve plateau performance were wide-ranging but overlapping for different kinds of operations: 19-128 cases for colorectal, 8-95 for foregut/bariatric, 20-48 for biliary, and 10-80 for solid organ surgery. Although robotic surgery is increasingly utilized in general surgery, the literature provides few guidelines on the learning curve for adoption. In this heterogeneous sample of reviewed articles, the number of cases needed to achieve plateau performance varies by case type and the learning curve may have multiple phases as surgeons add more complex cases to their case mix with growing experience. Time is the most common determinant for the learning curve. The literature lacks a uniform assessment of outcomes and complications, which would arguably reflect expertise in a more meaningful way than time to perform the operation alone.

  9. MHD Convective Flow of Jeffrey Fluid Due to a Curved Stretching Surface with Homogeneous-Heterogeneous Reactions

    PubMed Central

    Imtiaz, Maria; Hayat, Tasawar; Alsaedi, Ahmed

    2016-01-01

    This paper looks at the flow of Jeffrey fluid due to a curved stretching sheet. The effects of homogeneous-heterogeneous reactions and of an applied magnetic field on an electrically conducting fluid are considered. Convective boundary conditions model the heat transfer analysis. A transformation method reduces the governing nonlinear partial differential equations to ordinary differential equations. Convergence of the obtained series solutions is explicitly discussed. The effects of the governing parameters on the velocity, temperature, and concentration profiles are analyzed by plotting graphs. Computations for the pressure, skin friction coefficient, and surface heat transfer rate are presented and examined. It is noted that fluid velocity and temperature are enhanced with increasing curvature parameter. Increasing values of the Biot number correspond to enhanced temperature and Nusselt number. PMID:27583457

  10. Calculating transient rates from surveys

    NASA Astrophysics Data System (ADS)

    Carbone, D.; van der Horst, A. J.; Wijers, R. A. M. J.; Rowlinson, A.

    2017-03-01

    We have developed a method to determine the transient surface density and transient rate for any given survey, using Monte Carlo simulations. This method allows us to determine the transient rate as a function of both the flux and the duration of the transients over the whole flux-duration plane, rather than at one or a few points as currently available methods do. It is applicable to any survey strategy that monitors the same part of the sky, regardless of the instrument or wavelength of the survey, or the target sources. We have simulated both top-hat and Fast Rise Exponential Decay light curves, highlighting how the shape of the light curve might affect the detectability of transients. Another application of this method is to estimate the number of transients of a given kind that are expected to be detected by a survey, provided that their rate is known.

  11. ARBAN-A new method for analysis of ergonomic effort.

    PubMed

    Holzmann, P

    1982-06-01

    ARBAN is a method for the ergonomic analysis of work, including work situations that involve widely differing body postures and loads. The idea of the method is that all phases of the analysis process that require specific knowledge of ergonomics are taken over by filming equipment and a computer routine. All tasks that must be carried out by the investigator are designed so that they can be performed using systematic common sense. The ARBAN analysis method contains four steps: 1. Recording of the workplace situation on video or film. 2. Coding of the posture and load situation at a number of closely spaced 'frozen' situations. 3. Computerisation. 4. Evaluation of the results. The computer calculates figures for the total ergonomic stress on the whole body as well as on different parts of the body separately. They are presented as 'ergonomic stress/time curves', where heavy-load situations appear as peaks of the curve. The work cycle may also be divided into different tasks, for which the stress and duration patterns can be compared. The integrals of the curves are calculated for single-figure comparison of different tasks as well as different work situations.

  12. A novel Gaussian process regression model for state-of-health estimation of lithium-ion battery using charging curve

    NASA Astrophysics Data System (ADS)

    Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai

    2018-04-01

    The state-of-health (SOH) estimation is always a crucial issue for lithium-ion batteries. In order to provide an accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on the charging curve is proposed in this paper. Unlike other studies, in which SOH is commonly estimated from cycle life, in this work four specific parameters extracted from charging curves are used as inputs to the GPR model instead of cycle numbers. These parameters can reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between the selected features and SOH. In addition, some adjustments are made in the proposed GPR model: the covariance function design and the similarity measurement of input variables are modified so as to improve the SOH estimation accuracy and adapt to the case of multidimensional input. Several aging datasets from the NASA data repository are used to demonstrate the estimation performance of the proposed method. Results show that the proposed method has high SOH estimation accuracy. Besides, a battery with a dynamic discharging profile is used to verify the robustness and reliability of this method.
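
    An outline of the regression step with scikit-learn, assuming an anisotropic RBF kernel as a simple stand-in for the paper's modified covariance design; the charging-curve features and SOH labels below are synthetic.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

    # Stand-in training data: four features extracted from each charging curve
    # (e.g., durations/slopes of charging phases) and the measured SOH label.
    rng = np.random.default_rng(7)
    X = rng.uniform(0, 1, (80, 4))                     # 4 charging-curve features
    soh = 1.0 - 0.3 * X @ np.array([0.4, 0.3, 0.2, 0.1]) \
        + rng.normal(0, 0.005, 80)                     # synthetic SOH labels

    # An anisotropic RBF kernel gives each feature its own length scale, one
    # simple way to handle multidimensional input.
    kernel = ConstantKernel() * RBF(length_scale=np.ones(4)) + WhiteKernel()
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, soh)

    x_new = rng.uniform(0, 1, (1, 4))
    mean, std = gpr.predict(x_new, return_std=True)
    print(f"predicted SOH: {mean[0]:.3f} +/- {std[0]:.3f}")
    ```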

  13. Molecular simulations of Hugoniots of detonation product mixtures at chemical equilibrium: Microscopic calculation of the Chapman-Jouguet state

    NASA Astrophysics Data System (ADS)

    Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard

    2007-08-01

    In this work, we used simultaneously the reaction ensemble Monte Carlo (ReMC) method and the adaptive Erpenbeck equation of state (AE-EOS) method to directly calculate the thermodynamic and chemical equilibria of mixtures of detonation products on the Hugoniot curve. The ReMC method [W. R. Smith and B. Triska, J. Chem. Phys. 100, 3019 (1994)] allows us to reach the chemical equilibrium of a reacting mixture, and the AE-EOS method [J. J. Erpenbeck, Phys. Rev. A 46, 6406 (1992)] constrains the system to satisfy the Hugoniot relation. Once the Hugoniot curve of the detonation product mixture is established, the Chapman-Jouguet (CJ) state of the explosive can be determined. An NPT simulation at P_CJ and T_CJ is then performed in order to calculate direct thermodynamic properties and the following derivative properties of the system using a fluctuation method: calorific capacities, sound velocity, and Grüneisen coefficient. As the chemical composition fluctuates, and the number of particles is not necessarily constant in this ensemble, a fluctuation formula has been developed to take into account the fluctuations of mole number and composition. This type of calculation has been applied to several usual energetic materials: nitromethane, tetranitromethane, hexanitroethane, PETN, and RDX.

  14. Molecular simulations of Hugoniots of detonation product mixtures at chemical equilibrium: microscopic calculation of the Chapman-Jouguet state.

    PubMed

    Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard

    2007-08-28

    In this work, we used simultaneously the reaction ensemble Monte Carlo (ReMC) method and the adaptive Erpenbeck equation of state (AE-EOS) method to directly calculate the thermodynamic and chemical equilibria of mixtures of detonation products on the Hugoniot curve. The ReMC method [W. R. Smith and B. Triska, J. Chem. Phys. 100, 3019 (1994)] allows us to reach the chemical equilibrium of a reacting mixture, and the AE-EOS method [J. J. Erpenbeck, Phys. Rev. A 46, 6406 (1992)] constrains the system to satisfy the Hugoniot relation. Once the Hugoniot curve of the detonation product mixture is established, the Chapman-Jouguet (CJ) state of the explosive can be determined. An NPT simulation at P(CJ) and T(CJ) is then performed in order to calculate direct thermodynamic properties and the following derivative properties of the system using a fluctuation method: calorific capacities, sound velocity, and Grüneisen coefficient. As the chemical composition fluctuates, and the number of particles is not necessarily constant in this ensemble, a fluctuation formula has been developed to take into account the fluctuations of mole number and composition. This type of calculation has been applied to several usual energetic materials: nitromethane, tetranitromethane, hexanitroethane, PETN, and RDX.

  15. Benefit and cost curves for typical pollination mutualisms.

    PubMed

    Morris, William F; Vázquez, Diego P; Chacoff, Natacha P

    2010-05-01

    Mutualisms provide benefits to interacting species, but they also involve costs. If costs come to exceed benefits as population density or the frequency of encounters between species increases, the interaction will no longer be mutualistic. Thus curves that represent benefits and costs as functions of interaction frequency are important tools for predicting when a mutualism will tip over into antagonism. Currently, most of what we know about benefit and cost curves in pollination mutualisms comes from highly specialized pollinating seed-consumer mutualisms, such as the yucca moth-yucca interaction. There, benefits to female reproduction saturate as the number of visits to a flower increases (because the amount of pollen needed to fertilize all the flower's ovules is finite), but costs continue to increase (because pollinator offspring consume developing seeds), leading to a peak in seed production at an intermediate number of visits. But for most plant-pollinator mutualisms, costs to the plant are more subtle than consumption of seeds, and how such costs scale with interaction frequency remains largely unknown. Here, we present reasonable benefit and cost curves that are appropriate for typical pollinator-plant interactions, and we show how they can result in a wide diversity of relationships between net benefit (benefit minus cost) and interaction frequency. We then use maximum-likelihood methods to fit net-benefit curves to measures of female reproductive success for three typical pollination mutualisms from two continents, and for each system we chose the most parsimonious model using information-criterion statistics. We discuss the implications of the shape of the net-benefit curve for the ecology and evolution of plant-pollinator mutualisms, as well as the challenges that lie ahead for disentangling the underlying benefit and cost curves for typical pollination mutualisms.
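
    A toy net-benefit model makes the peak at an intermediate interaction frequency concrete: with a saturating benefit and a linear cost (all parameters invented, not the paper's fitted values), the net benefit peaks where marginal benefit equals marginal cost.

    ```python
    import numpy as np

    # Saturating benefit minus linear cost, as a function of visits v:
    #   B(v) = B_max * (1 - exp(-k*v)),  C(v) = c * v
    B_max, k, c = 100.0, 0.5, 8.0
    v = np.arange(0, 21)
    net = B_max * (1 - np.exp(-k * v)) - c * v

    # Setting B'(v) = C'(v) gives the peak at v* = ln(B_max*k/c)/k.
    v_star = np.log(B_max * k / c) / k
    print(f"net benefit peaks near v = {v_star:.1f} visits "
          f"(max of sampled curve at v = {v[np.argmax(net)]})")
    ```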

  16. Recalcitrant vulnerability curves: methods of analysis and the concept of fibre bridges for enhanced cavitation resistance.

    PubMed

    Cai, Jing; Li, Shan; Zhang, Haixin; Zhang, Shuoxin; Tyree, Melvin T

    2014-01-01

    Vulnerability curves (VCs) generally can be fitted to the Weibull equation; however, a growing number of VCs appear to be recalcitrant, that is, they deviate from a single Weibull but seem to fit dual Weibull curves. We hypothesize that dual Weibull curves in Hippophae rhamnoides L. are due to different vessel diameter classes, inter-vessel hydraulic connections, or vessels versus fibre tracheids. We used dye staining techniques, hydraulic measurements, and quantitative anatomy measurements to test these hypotheses. The fibres contribute 1.3% of the total stem conductivity, which eliminates the hypothesis that fibre tracheids account for the second Weibull curve. Nevertheless, the staining pattern of vessels and fibre tracheids suggested that fibres might function as a hydraulic bridge between adjacent vessels. We also argue that fibre bridges are safer than vessel-to-vessel pits and put forward the concept as a new paradigm. Hence, we tentatively propose that the first Weibull curve may be accounted for by vessels connected to each other directly by pit fields, while the second Weibull curve is associated with vessels that are connected almost exclusively by fibre bridges. Further research is needed to test the concept of fibre bridge safety in species that have recalcitrant or normal Weibull curves. © 2013 John Wiley & Sons Ltd.

  17. The new concept of the monitoring and appraisal of bone union inflexibility of fractures treated by Dynastab DK external fixator.

    PubMed

    Lenz, Gerhard P; Stasiak, Andrzej; Deszczyński, Jarosław; Karpiński, Janusz; Stolarczyk, Artur; Ziółkowski, Marcin; Szczesny, Grzegorz

    2003-10-30

    Background. This work addresses heuristic techniques based on artificial intelligence, chiefly nonlinear multilayer neural networks, used to assess the treatment of bone fractures stabilized with Dynastab DK orthopaedic fixators. Material and methods. The authors used computer software based on multilayer neural networks to predict the bone union curve at early stages of therapy. The network was trained on fifty-six bone fractures treated with Dynastab DK stabilizers; the trained network was then used to examine seventeen long-bone shaft fractures for strength and to predict bone union. Results. The mechanical properties of the bone union in the fracture gap change nonlinearly as a function of time, with the largest changes observed during the fourth month of treatment. There is a strong correlation between measure number two and measure number six; measure two is the stricter of the two and in fact reflects flexion, while measure six reflects compression of the bone in the fracture gap. Conclusions. Deflection loads are therefore especially hazardous for healing bone. The very strong correlation between observed and predicted curves confirms the validity of the neural model.

  18. The Shock and Vibration Digest. Volume 16, Number 3

    DTIC Science & Technology

    1984-03-01

    [Extraction-damaged digest fragments; the recoverable content concerns fluid-induced excitation and wind tunnel testing (V.R. Miller and L.L. Faulkner, Flight Dynamics Lab., Air Force), statistical energy analysis (SEA) of sound transmission through a fuselage wall represented as a series of curved panels, and probabilistic fracture mechanics.]

  19. Cutting Force Prediction Based on Integration of Symmetric Fuzzy Number and Finite Element Method

    PubMed Central

    Wang, Zhanli; Hu, Yanjuan; Wang, Yao; Dong, Chao; Pang, Zaixiang

    2014-01-01

    In turning, cutting forces exhibit uncertainty caused by the disturbance of random factors. To determine the uncertain range of the cutting force, symmetric fuzzy numbers are integrated with the finite element method (FEM) for cutting force prediction. The method uses symmetric fuzzy numbers to establish a fuzzy function between the cutting force and three factors, and obtains the uncertainty interval of the cutting force by linear programming. In parallel, the variation of cutting force with time is simulated directly by thermal-mechanical coupled FEM, together with the nonuniform stress field and temperature distribution of the workpiece, tool, and chip under thermal-mechanical coupling. The experimental results show that the method is effective for uncertainty prediction of the cutting force. PMID:24790556
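
    The uncertainty interval of a linear fuzzy model can be recovered with an off-the-shelf linear-programming call. The sketch below is a toy version under stated assumptions: a hypothetical linear force model with symmetric coefficient intervals; the paper's actual fuzzy function and factor values are not reproduced.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical symmetric-fuzzy linear model: F = a0 + a1*v + a2*f + a3*ap,
        # where each coefficient ai lies in a symmetric interval [ci - si, ci + si].
        centers = np.array([50.0, -0.2, 800.0, 120.0])   # illustrative coefficient centers
        spreads = np.array([10.0, 0.05, 100.0, 20.0])    # illustrative symmetric spreads
        bounds = list(zip(centers - spreads, centers + spreads))

        x = np.array([1.0, 150.0, 0.2, 1.5])  # 1, cutting speed, feed, depth of cut

        # Minimize F = x . a over the coefficient box to get the lower bound;
        # negate the objective to get the upper bound.
        lo = linprog(c=x, bounds=bounds).fun
        hi = -linprog(c=-x, bounds=bounds).fun
        print(f"cutting force interval: [{lo:.1f}, {hi:.1f}] N")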

  20. Analysis of the width-w non-adjacent form in conjunction with hyperelliptic curve cryptography and with lattices

    PubMed Central

    Krenn, Daniel

    2013-01-01

    In this work the number of occurrences of a fixed non-zero digit in the width-w non-adjacent forms of all elements of a lattice in some region (e.g. a ball) is analysed. As bases, expanding endomorphisms with eigenvalues of the same absolute value are allowed. Applications of the main result are on numeral systems with an algebraic integer as base. Those come from efficient scalar multiplication methods (Frobenius-and-add methods) in hyperelliptic curve cryptography, and the result is needed for analysing the running time of such algorithms. The counting result itself is an asymptotic formula, where its main term coincides with the full block length analysis. In its second order term a periodic fluctuation is exhibited. The proof follows Delange’s method. PMID:23805020
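
    For readers unfamiliar with the object being counted, the sketch below computes the width-w non-adjacent form of an ordinary integer — the classical special case of the lattice/algebraic-integer setting analysed in the paper — and counts occurrences of a fixed digit.

        def width_w_naf(n, w):
            """Width-w non-adjacent form of a positive integer (least significant digit
            first). Each nonzero digit is odd with absolute value below 2**(w-1), and
            any w consecutive digits contain at most one nonzero entry."""
            digits = []
            while n > 0:
                if n % 2:
                    d = n % (1 << w)
                    if d >= (1 << (w - 1)):   # take the centered (signed) residue
                        d -= (1 << w)
                    n -= d
                else:
                    d = 0
                digits.append(d)
                n //= 2
            return digits

        # Count occurrences of a fixed nonzero digit, the quantity analysed in the paper.
        naf = width_w_naf(314159, 3)
        print(naf, "occurrences of digit 3:", naf.count(3))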

  1. Analysis of the width-w non-adjacent form in conjunction with hyperelliptic curve cryptography and with lattices.

    PubMed

    Krenn, Daniel

    2013-06-17

    In this work the number of occurrences of a fixed non-zero digit in the width-w non-adjacent forms of all elements of a lattice in some region (e.g. a ball) is analysed. As bases, expanding endomorphisms with eigenvalues of the same absolute value are allowed. Applications of the main result are on numeral systems with an algebraic integer as base. Those come from efficient scalar multiplication methods (Frobenius-and-add methods) in hyperelliptic curve cryptography, and the result is needed for analysing the running time of such algorithms. The counting result itself is an asymptotic formula, where its main term coincides with the full block length analysis. In its second order term a periodic fluctuation is exhibited. The proof follows Delange's method.

  2. Classification of Fowl Adenovirus Serotypes by Use of High-Resolution Melting-Curve Analysis of the Hexon Gene Region▿

    PubMed Central

    Steer, Penelope A.; Kirkpatrick, Naomi C.; O'Rourke, Denise; Noormohammadi, Amir H.

    2009-01-01

    Identification of fowl adenovirus (FAdV) serotypes is of importance in epidemiological studies of disease outbreaks and the adoption of vaccination strategies. In this study, real-time PCR and subsequent high-resolution melting (HRM)-curve analysis of three regions of the hexon gene were developed and assessed for their potential in differentiating 12 FAdV reference serotypes. The results were compared to previously described PCR and restriction enzyme analyses of the hexon gene. Both HRM-curve analysis of a 191-bp region of the hexon gene and restriction enzyme analysis failed to distinguish a number of serotypes used in this study. In addition, PCR of the region spanning nucleotides (nt) 144 to 1040 failed to amplify FAdV-5 in sufficient quantities for further analysis. However, HRM-curve analysis of the region spanning nt 301 to 890 proved a sensitive and specific method of differentiating all 12 serotypes. All melt curves were highly reproducible, and replicates of each serotype were correctly genotyped with a mean confidence value of more than 99% using normalized HRM curves. Sequencing analysis revealed that each profile was related to a unique sequence, with some sequences sharing greater than 94% identity. Melting-curve profiles were found to be related mainly to GC composition and distribution throughout the amplicons, regardless of sequence identity. The results presented in this study show that the closed-tube method of PCR and HRM-curve analysis provides an accurate, rapid, and robust genotyping technique for the identification of FAdV serotypes and can be used as a model for developing genotyping techniques for other pathogens. PMID:19036935

  3. Exploring the optimum step size for defocus curves.

    PubMed

    Wolffsohn, James S; Jinabhai, Amit N; Kingsnorth, Alec; Sheppard, Amy L; Naroo, Shehzad A; Shah, Sunil; Buckhurst, Phillip; Hall, Lee A; Young, Graeme

    2013-06-01

    To evaluate the effect of reducing the number of visual acuity measurements made in a defocus curve on the quality of data quantified. Midland Eye, Solihull, United Kingdom. Evaluation of a technique. Defocus curves were constructed by measuring visual acuity on a distance logMAR letter chart, randomizing the test letters between lens presentations. The lens powers evaluated ranged between +1.50 diopters (D) and -5.00 D in 0.50 D steps, which were also presented in a randomized order. Defocus curves were measured binocularly with the Tecnis diffractive, Rezoom refractive, Lentis rotationally asymmetric segmented (+3.00 D addition [add]), and Finevision trifocal multifocal intraocular lenses (IOLs) implanted bilaterally, and also for the diffractive IOL and refractive or rotationally asymmetric segmented (+3.00 D and +1.50 D adds) multifocal IOLs implanted contralaterally. Relative and absolute range of clear-focus metrics and area metrics were calculated for curves fitted using 0.50 D, 1.00 D, and 1.50 D steps and a near add-specific profile (ie, distance, half the near add, and the full near-add powers). A significant difference in simulated results was found in at least 1 of the relative or absolute range of clear-focus or area metrics for each of the multifocal designs examined when the defocus-curve step size was increased (P<.05). Faster methods of capturing defocus curves from multifocal IOL designs appear to distort the metric results and are therefore not valid. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  4. Diffuse interface modeling of three-phase contact line dynamics on curved boundaries: A lattice Boltzmann model for large density and viscosity ratios

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Bolster, Diogo

    2017-04-01

    We introduce a simple and efficient lattice Boltzmann method for immiscible multiphase flows, capable of handling large density and viscosity contrasts. The model is based on a diffuse-interface phase-field approach. Within this context we propose a new algorithm for specifying the three-phase contact angle on curved boundaries within the framework of structured Cartesian grids. The proposed method has superior computational accuracy compared with the common approach of approximating curved boundaries with staircases. We test the model by applying it to four benchmark problems: (i) wetting and dewetting of a droplet on a flat surface and (ii) on a cylindrical surface, (iii) multiphase flow past a circular cylinder at an intermediate Reynolds number, and (iv) a droplet falling on hydrophilic and superhydrophobic circular cylinders under differing conditions. Where available, our results show good agreement with analytical solutions and/or existing experimental data, highlighting strengths of this new approach.

  5. [Value of sepsis single-disease manage system in predicting mortality in patients with sepsis].

    PubMed

    Chen, J; Wang, L H; Ouyang, B; Chen, M Y; Wu, J F; Liu, Y J; Liu, Z M; Guan, X D

    2018-04-03

    Objective: To observe the effect of a sepsis single-disease management system on the improvement of sepsis treatment and its value in predicting mortality in patients with sepsis. Methods: A retrospective study was conducted. Patients with sepsis admitted to the Department of Surgical Intensive Care Unit of Sun Yat-Sen University First Affiliated Hospital from September 22, 2013 to May 5, 2015 were enrolled in this study. The sepsis single-disease management system (Rui Xin clinical data management system, China Data, China) was used to monitor 25 clinical quality parameters covering timeliness, normalization and outcome. Based on whether these quality parameters were completed or not, clinical practice was evaluated by the system. An unachieved quality parameter was defined as a suspicious parameter, and the suspicious parameters were used to predict mortality with receiver operating characteristic (ROC) curves. Results: A total of 1 220 patients with sepsis were enrolled, including 805 males and 415 females. The mean age was (59±17) years, and the acute physiology and chronic health evaluation (APACHE II) score was 19±8. The area under the ROC curve of the total number of suspicious parameters for predicting 28-day mortality was 0.70; when the number of suspicious parameters was more than 6, the sensitivity was 68.0% and the specificity was 61.0% for predicting 28-day mortality. In addition, the area under the ROC curve of the number of suspicious outcome parameters for predicting 28-day mortality was 0.89; when the number of suspicious outcome parameters was more than 1, the sensitivity was 88.0% and the specificity was 78.0% for predicting 28-day mortality. Moreover, the area under the ROC curve of the total number of suspicious parameters for predicting 90-day mortality was 0.73; when the total number of suspicious parameters was more than 7, the sensitivity was 60.0% and the specificity was 74.0% for predicting 90-day mortality. Finally, the area under the ROC curve of the number of suspicious outcome parameters for predicting 90-day mortality was 0.92; when the number of suspicious outcome parameters was more than 1, the sensitivity was 88.0% and the specificity was 81.0% for predicting 90-day mortality. Conclusion: This single-center study suggests that the sepsis single-disease management system can be used to monitor the completion of clinical practice by intensivists managing sepsis, and that the number of uncompleted quality parameters can be used to predict patient mortality.
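
    A minimal sketch of the reported analysis pattern — computing an AUC from a per-patient count of unachieved quality parameters and reading off sensitivity/specificity at a count threshold — is shown below on synthetic data, since the study's patient data are not public.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        # Illustrative data: per-patient count of unachieved ("suspicious") quality
        # parameters and 28-day mortality (1 = died); all values are made up.
        rng = np.random.default_rng(0)
        died = rng.integers(0, 2, 200)
        counts = rng.poisson(4 + 3 * died)   # nonsurvivors tend to have more misses

        auc = roc_auc_score(died, counts)
        fpr, tpr, thr = roc_curve(died, counts)

        # Sensitivity/specificity at the cut-off "more than 6 suspicious parameters".
        cut = counts > 6
        sens = np.mean(cut[died == 1])
        spec = np.mean(~cut[died == 0])
        print(f"AUC={auc:.2f}  sensitivity={sens:.2f}  specificity={spec:.2f}")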

  6. Analyser-based phase contrast image reconstruction using geometrical optics.

    PubMed

    Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A

    2007-07-21

    Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 microm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
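
    Fitting a rocking curve with a symmetric Pearson type VII profile is straightforward with non-linear least squares. The sketch below uses an invented rocking curve; the functional form shown, with m interpolating between Lorentzian (m = 1) and Gaussian (m → ∞) limits, is the standard Pearson VII parameterization and is assumed here to match the paper's usage.

        import numpy as np
        from scipy.optimize import curve_fit

        def pearson_vii(x, A, x0, w, m):
            # Symmetric Pearson type VII profile; w is the half width at half maximum
            # and m controls the Lorentzian-to-Gaussian character.
            return A * (1.0 + ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)

        # Illustrative analyser rocking-curve samples (angle in microrad), not beamline data.
        theta = np.linspace(-40, 40, 81)
        I = pearson_vii(theta, 1.0, 0.0, 10.0, 1.8) \
            + 0.01 * np.random.default_rng(1).normal(size=theta.size)

        popt, _ = curve_fit(pearson_vii, theta, I, p0=[1.0, 0.0, 8.0, 1.5])
        print("A, x0, HWHM, m =", popt)
        # The fitted slope dI/dtheta at the working point feeds the geometrical-optics
        # phase retrieval in place of a linear or Gaussian approximation.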

  7. Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits.

    PubMed

    Gámez Serna, Citlalli; Ruichek, Yassine

    2017-06-14

    A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits without considering factors like road geometry. In this paper, we consider road curvature with speed limits to automatically adjust vehicle's speed with the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, 'curve analysis extraction' and 'speed limits database creation' are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits and consequently increase safety and comfort for the passenger.
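
    The abstract does not give DSA's exact speed law; a common assumption, used in the sketch below, is to cap speed so that lateral acceleration v²·κ stays below a comfort limit and then clip at the legal speed limit. The function name and the 2 m/s² default are illustrative choices, not the authors'.

        import numpy as np

        def ideal_curve_speed(curvature, speed_limit, a_lat_max=2.0):
            """Speed keeping lateral acceleration below a_lat_max (m/s^2) on a curve,
            capped by the legal speed limit (all SI units); a_lat = v^2 * curvature."""
            curvature = np.maximum(np.abs(curvature), 1e-6)  # avoid divide-by-zero on straights
            v_curve = np.sqrt(a_lat_max / curvature)
            return np.minimum(v_curve, speed_limit)

        # Illustrative: a curve of radius 150 m inside a 90 km/h zone.
        v = ideal_curve_speed(1.0 / 150.0, speed_limit=90 / 3.6)
        print(f"advised speed: {v * 3.6:.0f} km/h")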

  8. Using Floquet periodicity to easily calculate dispersion curves and wave structures of homogeneous waveguides

    NASA Astrophysics Data System (ADS)

    Hakoda, Christopher; Rose, Joseph; Shokouhi, Parisa; Lissenden, Clifford

    2018-04-01

    Dispersion curves are essential to any guided-wave-related project. The Semi-Analytical Finite Element (SAFE) method has become the conventional way to compute dispersion curves for homogeneous waveguides. However, only recently has a general SAFE formulation for commercial and open-source software become available, meaning that until now SAFE analyses have been variable and more time consuming than desirable. Likewise, the Floquet boundary conditions enable analysis of waveguides with periodicity and have been an integral part of the development of metamaterials. In fact, we have found the use of Floquet boundary conditions to be an extremely powerful tool for homogeneous waveguides, too. The nuances of using periodic boundary conditions for homogeneous waveguides that do not exhibit periodicity are discussed. Comparisons between this method and SAFE are made for selected homogeneous waveguide applications. The COMSOL Multiphysics software is used for the results shown, but any standard finite element software that can implement Floquet periodicity (user-defined or built-in) should suffice. Finally, we identify a number of complex waveguides for which dispersion curves can be found with relative ease by using the periodicity inherent to the Floquet boundary conditions.

  9. Low-Impact Development Design—Integrating Suitability Analysis and Site Planning For Reduction Of Post-Development Stormwater Quantity

    EPA Science Inventory

    A land-suitability analysis (LSA) was integrated with open-space conservation principles, based on watershed physiographic and soil characteristics, to derive a low-impact development (LID) residential plan for a three hectare site in Coshocton OH, USA. The curve number method wa...

  10. Deploying the Win TR-20 computational engine as a web service

    USDA-ARS?s Scientific Manuscript database

    Despite its simplicity and limitations, the runoff curve number method remains a widely-used hydrologic modeling tool, and its use through the USDA Natural Resources Conservation Service (NRCS) computer application WinTR-20 is expected to continue for the foreseeable future. To facilitate timely up...

  11. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    PubMed

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
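
    Of the many CI constructions the paper compares, the percentile bootstrap is among the simplest to sketch; the code below illustrates it on a synthetic n = 20 sample and is shown for concreteness, not as a recommendation over the paper's better-performing methods.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def bootstrap_auc_ci(y, score, n_boot=2000, alpha=0.05, seed=0):
            """Percentile bootstrap CI for the AUC (one simple CI construction)."""
            rng = np.random.default_rng(seed)
            n = len(y)
            aucs = []
            while len(aucs) < n_boot:
                idx = rng.integers(0, n, n)
                if len(np.unique(y[idx])) < 2:   # resample must contain both classes
                    continue
                aucs.append(roc_auc_score(y[idx], score[idx]))
            return np.quantile(aucs, [alpha / 2, 1 - alpha / 2])

        # Small-sample illustration (n=20), the regime the paper focuses on.
        rng = np.random.default_rng(1)
        y = np.array([0] * 10 + [1] * 10)
        score = np.concatenate([rng.normal(0, 1, 10), rng.normal(1, 1, 10)])
        print("95% CI for AUC:", bootstrap_auc_ci(y, score))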

  12. An experimental investigation of heat transfer to reusable surface insulation tile array gaps in a turbulent boundary layer with pressure gradient. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Throckmorton, D. A.

    1975-01-01

    An experimental investigation was performed to determine the effect of pressure gradient on the heat transfer to space shuttle reusable surface insulation (RSI) tile array gaps under thick, turbulent boundary layer conditions. Heat transfer and pressure measurements were obtained on a curved array of full-scale simulated RSI tiles in a tunnel wall boundary layer at a nominal freestream Mach number of 10.3 and freestream unit Reynolds numbers of 1.6, 3.3, and 6.1 million per meter. Transverse pressure gradients were induced over the model surface by rotating the curved array with respect to the flow. Definition of the tunnel wall boundary layer flow was obtained by measurement of boundary layer pitot pressure profiles, and flat plate wall pressure and heat transfer. Flat plate wall heat transfer data were correlated, a method was derived for prediction of smooth, curved array heat transfer in the highly three-dimensional tunnel wall boundary layer flow, and the simulation of full-scale space shuttle vehicle pressure gradient levels was assessed.

  13. A simple procedure for synthesizing Charpy impact energy transition curves from limited test data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenfeld, M.J.

    1996-12-31

    The importance of Charpy V-notch testing of pipe has been well established in the pipeline industry. Until now, it has been necessary to perform a number of tests in order to develop the toughness transition curve. A method is described which makes possible forecasting the full-scale toughness transition from a single subsize test datum to an acceptable degree of accuracy. This is potentially useful where historical test results or material samples available for testing are limited in quantity. Worked examples illustrating the use of the relationships are given.
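
    The common way to synthesize a full Charpy transition curve is a hyperbolic-tangent fit; the sketch below shows that generic form on invented data. The paper's specific single-datum, subsize-to-full-scale correlations are not reproduced here.

        import numpy as np
        from scipy.optimize import curve_fit

        def charpy_tanh(T, A, B, T0, C):
            # Standard hyperbolic-tangent transition model: lower shelf A-B,
            # upper shelf A+B, mid-transition temperature T0, transition width C.
            return A + B * np.tanh((T - T0) / C)

        # Illustrative Charpy data (temperature in C, energy in J); not from the paper.
        T = np.array([-60, -40, -20, 0, 20, 40, 60], dtype=float)
        E = np.array([5, 8, 18, 35, 52, 58, 60], dtype=float)

        popt, _ = curve_fit(charpy_tanh, T, E, p0=[30, 28, 0, 20])
        print("lower shelf %.1f J, upper shelf %.1f J, T0=%.1f C" %
              (popt[0] - popt[1], popt[0] + popt[1], popt[2]))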

  14. Joint hypermobility in children with idiopathic scoliosis: SOSORT award 2011 winner

    PubMed Central

    2011-01-01

    Background Generalized joint hypermobility (JHM) refers to increased joint mobility with the simultaneous absence of any other systemic disease. JHM involves proprioception impairment, increased frequency of pain within joints and a tendency to injure soft tissues while performing physical activities. Children with idiopathic scoliosis (IS) often undergo intensive physiotherapy requiring good physical capacities. Further, some physiotherapy methods apply techniques that increase joint mobility and thus may be contraindicated. The aim of this paper was to assess JHM prevalence in children with idiopathic scoliosis and to analyze the relationship between JHM prevalence and the clinical and radiological parameters of scoliosis. The methods of assessment of generalized joint hypermobility were also described. Materials and methods This case-control study included 70 subjects with IS, aged 9-18 years (mean 13.2 ± 2.2), Cobb angle range 10°-53° (mean 24.3 ± 11.7), 34 presenting single curve thoracic scoliosis and 36 double curve thoracic and lumbar scoliosis. The control group included 58 children and adolescents aged 9-18 years (mean 12.6 ± 2.1) selected at random. The presence of JHM was determined using the Beighton scale complemented with the questionnaire by Hakim and Grahame. The relationship between JHM and the following variables was evaluated: curve severity, axial rotation of the apical vertebra, number of curvatures (single versus double), number of vertebrae within the curvature (long versus short curves), treatment type (physiotherapy versus bracing) and age. Statistical analysis was performed with Statistica 8.1 (StatSoft, USA). The Kolmogorov-Smirnov test, Mann-Whitney U test, chi-squared test, and Pearson and Spearman rank correlations were conducted. The value p = 0.05 was adopted as the level of significance. Results JHM was diagnosed in more than half of the subjects with idiopathic scoliosis (51.4%), whilst in the control group it was diagnosed in only 19% of cases (p = 0.00015). A significantly higher JHM prevalence was observed in both girls (p = 0.0054) and boys (p = 0.017) with IS in comparison with the corresponding controls. No significant relation was found between JHM prevalence and scoliosis angular value (p = 0.35), apical vertebra rotation (p = 0.86), the number of vertebrae within the curvature (p = 0.8), the type of applied treatment (p = 0.55) or the age of subjects (p = 0.79). JHM prevalence was found to be higher in children with single curve scoliosis than in children with double curve scoliosis (p = 0.03). Conclusions JHM occurs more frequently in children with IS than in healthy sex- and age-matched controls. No relation of JHM with radiological parameters, treatment type or age was found. When systematically detected in children with IS, JHM should be taken into account when physiotherapy is planned. PMID:21981906

  15. Application of derivative spectrophotometry under orthogonal polynomial at unequal intervals: determination of metronidazole and nystatin in their pharmaceutical mixture.

    PubMed

    Korany, Mohamed A; Abdine, Heba H; Ragab, Marwa A A; Aboras, Sara I

    2015-05-15

    This paper discusses a general method for the use of orthogonal polynomials for unequal intervals (OPUI) to eliminate interferences in two-component spectrophotometric analysis. A new approach was developed in which the first-derivative (D1) curve, rather than the absorbance curve, is convoluted using the OPUI method for the determination of metronidazole (MTR) and nystatin (NYS) in their mixture. After derivative treatment of the absorption data, many maxima and minima appear, giving a characteristic shape for each drug and allowing a different number of points to be selected for the OPUI method for each drug. This allows the specific and selective determination of each drug in the presence of the other and of any matrix interference. The method is particularly useful when the two absorption spectra overlap considerably. The results obtained are encouraging and suggest that the method can be widely applied to similar problems. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Learning curves for urological procedures: a systematic review.

    PubMed

    Abboudi, Hamid; Khan, Mohammed Shamim; Guru, Khurshid A; Froghi, Saied; de Win, Gunter; Van Poppel, Hendrik; Dasgupta, Prokar; Ahmed, Kamran

    2014-10-01

    To determine the number of cases a urological surgeon must complete to achieve proficiency for various urological procedures. The MEDLINE, EMBASE and PsycINFO databases were systematically searched for studies published up to December 2011. Studies pertaining to learning curves of urological procedures were included. Two reviewers independently identified potentially relevant articles. Procedure name, statistical analysis, procedure setting, number of participants, outcomes and learning curves were analysed. Forty-four studies described the learning curve for different urological procedures. The learning curve for open radical prostatectomy ranged from 250 to 1000 cases and for laparoscopic radical prostatectomy from 200 to 750 cases. The learning curve for robot-assisted laparoscopic prostatectomy (RALP) has been reported to be 40 procedures as a minimum number. Robot-assisted radical cystectomy has a documented learning curve of 16-30 cases, depending on which outcome variable is measured. Irrespective of previous laparoscopic experience, there is a significant reduction in operating time (P = 0.008), estimated blood loss (P = 0.008) and complication rates (P = 0.042) after 100 RALPs. The available literature can act as a guide to the learning curves of trainee urologists. Although the learning curve may vary among individual surgeons, a consensus should exist for the minimum number of cases to achieve proficiency. The complexities associated with defining procedural competence are vast. The majority of learning curve trials have focused on the latest surgical techniques and there is a paucity of data pertaining to basic urological procedures. © 2013 The Authors. BJU International © 2013 BJU International.

  17. Combining large number of weak biomarkers based on AUC.

    PubMed

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
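
    A toy version of searching for an AUC-maximizing linear combination of two weak markers is sketched below (a simple grid search over the weight angle); the paper's pairwise approach and evaluation settings are more elaborate, so this is illustrative only.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def best_pair_weight(x1, x2, y, n_grid=181):
            """Grid-search the angle of a 2-marker linear combination to maximize AUC."""
            best = (0.5, None)
            for ang in np.linspace(0, np.pi, n_grid):
                w = np.array([np.cos(ang), np.sin(ang)])
                auc = roc_auc_score(y, w[0] * x1 + w[1] * x2)
                auc = max(auc, 1 - auc)      # the sign of the combination is free
                if auc > best[0]:
                    best = (auc, w)
            return best

        rng = np.random.default_rng(2)
        y = np.repeat([0, 1], 100)
        x1 = rng.normal(0.3 * y, 1)          # two weak markers
        x2 = rng.normal(0.3 * y, 1)
        print("AUC x1 alone:", roc_auc_score(y, x1))
        print("AUC best pair:", best_pair_weight(x1, x2, y)[0])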

  18. Combining large number of weak biomarkers based on AUC

    PubMed Central

    Yan, Li; Tian, Lili; Liu, Song

    2018-01-01

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. PMID:26227901

  19. Waveform fitting and geometry analysis for full-waveform lidar feature extraction

    NASA Astrophysics Data System (ADS)

    Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu

    2016-10-01

    This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
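
    The core pipeline described — smoothing-spline fit, derivative-based peak detection, and width estimation — can be sketched in a few lines. The waveform below is synthetic, the smoothing factor is an arbitrary choice, and the FWHM estimate is deliberately crude.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        # Illustrative two-return waveform (time in ns, amplitude in counts); not real lidar.
        t = np.arange(0, 60, 1.0)
        y = (100 * np.exp(-((t - 20) / 4) ** 2) + 60 * np.exp(-((t - 38) / 5) ** 2)
             + np.random.default_rng(3).normal(0, 2, t.size))

        spl = UnivariateSpline(t, y, k=3, s=t.size * 4.0)   # cubic smoothing spline
        tt = np.linspace(t[0], t[-1], 2000)
        d1, d2 = spl.derivative(1)(tt), spl.derivative(2)(tt)

        # Local maxima: first derivative crosses zero from + to -, second derivative < 0.
        peaks = tt[1:][(d1[:-1] > 0) & (d1[1:] <= 0) & (d2[1:] < 0)]

        def fwhm(spl, tp, grid):
            half = spl(tp) / 2.0
            above = grid[spl(grid) >= half]
            return above.max() - above.min()   # crude single-peak width estimate

        print("peak times:", peaks, "number of returns:", len(peaks))
        print("FWHM of first return ~", fwhm(spl, peaks[0], tt[tt < 30]), "ns")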

  20. Distribution Methods and End-Uses For Hardwood Face Veneer and Plywood Manufactured In Michigan and Wisconsin In 1964

    Treesearch

    Lewis T. Hendrics

    1966-01-01

    A number of distribution methods are currently used to market a wide variety of products manufactured by the hardwood face veneer and plywood industry in Michigan and Wisconsin. Wall paneling, door skins, and kitchen cabinet stock are major products, but specialty lines such as curved and molded plywood components for furniture, shoe heels, and golf club heads are...

  1. Optimal study design with identical power: an application of power equivalence to latent growth curve models.

    PubMed

    von Oertzen, Timo; Brandmaier, Andreas M

    2013-06-01

    Structural equation models have become a broadly applied data-analytic framework. Among them, latent growth curve models have become a standard method in longitudinal research. However, researchers often rely solely on rules of thumb about statistical power in their study designs. The theory of power equivalence provides an analytical answer to the question of how design factors, for example, the number of observed indicators and the number of time points assessed in repeated measures, trade off against each other while holding the power for likelihood-ratio tests on the latent structure constant. In this article, we present applications of power-equivalent transformations on a model with data from a previously published study on cognitive aging, and highlight consequences of participant attrition on power. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  2. The Prevalence of Idiopathic Scoliosis in Eleven Year-Old Korean Adolescents: A 3 Year Epidemiological Study

    PubMed Central

    Lee, Jin-Young; Moon, Seong-Hwan; Kim, Han Jo; Suh, Bo-Kyung; Nam, Ji Hoon; Jung, Jae Kyun; Lee, Hwan-Mo

    2014-01-01

    Purpose School screening allows for early detection and early treatment of scoliosis, with the purpose of reducing the number of patients requiring surgical treatment. Children between 10 and 14 years old are considered as good candidates for school screening tests of scoliosis. The purpose of the present study was to assess the epidemiological findings of idiopathic scoliosis in 11-year-old Korean adolescents. Materials and Methods A total of 37856 11-year-old adolescents were screened for scoliosis. There were 17110 girls and 20746 boys. Adolescents who were abnormal by Moiré topography were subsequently assessed by standardized clinical and radiological examinations. A scoliotic curve was defined as 10° or more. Results The prevalence of scoliosis was 0.19% and most of the curves were small (10° to 19°). The ratio of boys to girls was 1:5.5 overall. Sixty adolescents (84.5%) exhibited single curvature. Thoracolumbar curves were the most common type of curve identified, followed by thoracic and lumbar curves. Conclusion The prevalence of idiopathic scoliosis among 11-year-old Korean adolescents was 0.19%. PMID:24719147

  3. Environmental stress cracking of polymers

    NASA Technical Reports Server (NTRS)

    Mahan, K. I.

    1980-01-01

    A two point bending method for use in studying the environmental stress cracking and crazing phenomena is described and demonstrated for a variety of polymer/solvent systems. Critical strain values obtained from these curves are reported for various polymer/solvent systems, including a considerable number of systems for which critical strain values have not been previously reported. Polymers studied using this technique include polycarbonate (PC), ABS, high impact styrene (HIS), polyphenylene oxide (PPO), and polymethyl methacrylate (PMMA). Critical strain values obtained using this method compared favorably with existing data. The major advantage of the technique is the ability to obtain time versus strain curves over a short period of time. The data obtained suggest that the transition in most of the polymer/solvent systems is more gradual than previously believed.

  4. Community assessment techniques and the implications for rarefaction and extrapolation with Hill numbers.

    PubMed

    Cox, Kieran D; Black, Morgan J; Filip, Natalia; Miller, Matthew R; Mohns, Kayla; Mortimor, James; Freitas, Thaise R; Greiter Loerzer, Raquel; Gerwing, Travis G; Juanes, Francis; Dudas, Sarah E

    2017-12-01

    Diversity estimates play a key role in ecological assessments. Species richness and abundance are commonly used to generate complex diversity indices that are dependent on the quality of these estimates. As such, there is a long-standing interest in the development of monitoring techniques, their ability to adequately assess species diversity, and the implications for generated indices. To determine the ability of substratum community assessment methods to capture species diversity, we evaluated four methods: photo quadrat, point intercept, random subsampling, and full quadrat assessments. Species density, abundance, richness, Shannon diversity, and Simpson diversity were then calculated for each method. We then conducted a method validation at a subset of locations to serve as an indication for how well each method captured the totality of the diversity present. Density, richness, Shannon diversity, and Simpson diversity estimates varied between methods, despite assessments occurring at the same locations, with photo quadrats detecting the lowest estimates and full quadrat assessments the highest. Abundance estimates were consistent among methods. Sample-based rarefaction and extrapolation curves indicated that differences between Hill numbers (richness, Shannon diversity, and Simpson diversity) were significant in the majority of cases, and coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. Method validation highlighted the inability of the tested methods to capture the totality of the diversity present, while further supporting the notion of extrapolating abundances. Our results highlight the need for consistency across research methods, the advantages of utilizing multiple diversity indices, and potential concerns and considerations when comparing data from multiple sources.
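
    Hill numbers themselves are a one-line computation; the sketch below evaluates orders q = 0, 1, 2 for two invented method-specific count vectors. The rarefaction/extrapolation machinery used in the study is not reproduced here.

        import numpy as np

        def hill_number(abundances, q):
            """Hill number of order q: q=0 richness, q=1 exp(Shannon), q=2 inverse Simpson."""
            p = np.asarray(abundances, dtype=float)
            p = p[p > 0] / p.sum()
            if q == 1:
                return np.exp(-np.sum(p * np.log(p)))   # limiting case of the formula
            return np.sum(p ** q) ** (1.0 / (1.0 - q))

        # Illustrative quadrat counts from two methods at the same site (made-up numbers).
        photo = [12, 8, 3, 1]           # photo quadrat misses rare, cryptic species
        full = [14, 9, 4, 2, 1, 1, 1]   # full quadrat detects more rare species
        for q in (0, 1, 2):
            print(f"q={q}: photo {hill_number(photo, q):.2f}  full {hill_number(full, q):.2f}")

    Because richness (q = 0) weights rare species most heavily, it is the order on which the two hypothetical methods diverge most, matching the pattern the study reports.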

  5. Simultaneous detection of Fusarium culmorum and F. graminearum in plant material by duplex PCR with melting curve analysis.

    PubMed

    Brandfass, Christoph; Karlovsky, Petr

    2006-01-23

    Fusarium head blight (FHB) is a disease of cereal crops, which has a severe impact on wheat and barley production worldwide. Apart from reducing the yield and impairing grain quality, FHB leads to contamination of grain with toxic secondary metabolites (mycotoxins), which pose a health risk to humans and livestock. The Fusarium species primarily involved in FHB are F. graminearum and F. culmorum. A key prerequisite for a reduction in the incidence of FHB is an understanding of its epidemiology. We describe a duplex-PCR-based method for the simultaneous detection of F. culmorum and F. graminearum in plant material. Species-specific PCR products are identified by melting curve analysis performed in a real-time thermocycler in the presence of the fluorescent dye SYBR Green I. In contrast to multiplex real-time PCR assays, the method does not use doubly labeled hybridization probes. PCR with product differentiation by melting curve analysis offers a cost-effective means of qualitative analysis for the presence of F. culmorum and F. graminearum in plant material. This method is particularly suitable for epidemiological studies involving a large number of samples.

  6. The hyperbolic chemical bond: Fourier analysis of ground and first excited state potential energy curves of HX (X = H-Ne).

    PubMed

    Harrison, John A

    2008-09-04

    RHF/aug-cc-pVnZ, UHF/aug-cc-pVnZ, and QCISD/aug-cc-pVnZ, n = 2-5, potential energy curves of H2 X 1Σg+ are analyzed by Fourier transform methods after transformation to a new coordinate system via an inverse hyperbolic cosine coordinate mapping. The Fourier frequency domain spectra are interpreted in terms of underlying mathematical behavior giving rise to distinctive features. There is a clear difference between the underlying mathematical nature of the potential energy curves calculated at the HF and full-CI levels. The method is particularly suited to the analysis of potential energy curves obtained at the highest levels of theory because the Fourier spectra are observed to be of a compact nature, with the envelope of the Fourier frequency coefficients decaying in magnitude in an exponential manner. The finite number of Fourier coefficients required to describe the CI curves allows for an optimum sampling strategy to be developed, corresponding to that required for exponential and geometric convergence. The underlying random numerical noise due to the finite convergence criterion is also a clearly identifiable feature in the Fourier spectrum. The methodology is applied to the analysis of MRCI potential energy curves for the ground and first excited states of HX (X = H-Ne). All potential energy curves exhibit structure in the Fourier spectrum consistent with the existence of resonances. The compact nature of the Fourier spectra following the inverse hyperbolic cosine coordinate mapping is highly suggestive that there is some advantage in viewing the chemical bond as having an underlying hyperbolic nature.
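
    A minimal numerical version of the mapping-then-transform idea, under stated assumptions — a Morse-type stand-in for an ab initio curve and arbitrary mapping constants, since the paper's exact mapping parameters are not given in the abstract:

        import numpy as np

        R = np.linspace(0.5, 12.0, 4097)                  # internuclear distance, bohr
        V = (1 - np.exp(-1.0 * (R - 1.4))) ** 2 - 1.0     # Morse-like stand-in potential

        # Inverse hyperbolic cosine mapping, then resample the curve uniformly in u.
        u = np.arccosh(R / 0.5 + 1.0)
        uu = np.linspace(u[0], u[-1], 4096)
        Vu = np.interp(uu, u, V)

        # Fourier spectrum; a compact, exponentially decaying coefficient envelope is
        # the signature the paper reports for full-CI curves.
        coef = np.abs(np.fft.rfft(Vu - Vu.mean()))
        print(coef[:10] / coef.max())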

  7. High-resolution Land Cover Datasets, Composite Curve Numbers, and Storm Water Retention in the Tampa Bay, FL region

    EPA Science Inventory

    Policy makers need to understand how land cover change alters storm water regimes, yet existing methods do not fully utilize newly available datasets to quantify storm water changes at a landscape-scale. Here, we use high-resolution, remotely-sensed land cover, imperviousness, an...

  8. The learning curve in robotic distal pancreatectomy.

    PubMed

    Napoli, Niccolò; Kauffmann, Emanuele F; Perrone, Vittorio Grazio; Miccoli, Mario; Brozzetti, Stefania; Boggi, Ugo

    2015-09-01

    No data are available on the learning curve in robotic distal pancreatectomy (RADP). The learning curve in RADP was assessed in 55 consecutive patients using the cumulative sum method, based on operative time. Data were extracted from a prospectively maintained database and analyzed retrospectively considering all events occurring within 90 days of surgery. No operation was converted to laparoscopic or open surgery and no patient died. Post-operative complications occurred in 34 patients (61.8%), being of Clavien-Dindo grade I-II in 32 patients (58.1%), including pancreatic fistula in 29 patients (52.7%). No grade C pancreatic fistula occurred. Four patients received blood transfusions (7.2%), three were readmitted (5.4%) and one required repeat surgery (1.8%). Based on the reduction of operative times (421.1 ± 20.5 vs 248.9 ± 9.3 min; p < 0.0001), completion of the learning curve was achieved after ten operations. Operative time for the first 10 operations followed a positive slope (0.47 + 1.78 × case number; R² = 0.97; p < 0.0001), while that of the following 45 procedures showed a negative slope (23.52 - 0.39 × case number; R² = 0.97; p < 0.0001). After completion of the learning curve, more patients had a malignant histology (0 vs 35.6%; p = 0.002), accounting for both higher lymph node yields (11.1 ± 12.2 vs 20.9 ± 18.5) (p = 0.04) and a lower rate of spleen preservation (90 vs 55.6%) (p = 0.04). RADP was safely feasible in selected patients and the learning curve was completed after ten operations. Improvement in clinical outcome was not demonstrated, probably because of the limited occurrence of outcome comparators.
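
    The cumulative sum construction used to locate the end of a learning curve is simple to reproduce; the operative times below are invented to show the characteristic rise-and-fall shape, not taken from the study.

        import numpy as np

        def cusum_learning_curve(times):
            """CUSUM of operative time: cumulative sum of deviations from the overall
            mean. A peak in the curve marks the transition from learning to proficiency."""
            times = np.asarray(times, dtype=float)
            cusum = np.cumsum(times - times.mean())
            return cusum, int(np.argmax(cusum)) + 1   # case number at the inflection

        # Illustrative operative times (min): long early cases, then a stable plateau.
        times = [420, 415, 400, 390, 370, 350, 330, 310, 290, 270] + [250] * 45
        curve, breakpoint = cusum_learning_curve(times)
        print("learning curve completed after case", breakpoint)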

  9. Classification of breast abnormalities using artificial neural network

    NASA Astrophysics Data System (ADS)

    Zaman, Nur Atiqah Kamarul; Rahman, Wan Eny Zarina Wan Abdul; Jumaat, Abdul Kadir; Yasiran, Siti Salmah

    2015-05-01

    Classification is the process of recognizing, differentiating and categorizing objects into groups. Breast abnormalities such as calcifications are tumor markers that indicate the presence of cancer in the breast. The aims of this research are to classify the types of breast abnormalities using an artificial neural network (ANN) classifier and to evaluate the accuracy performance using receiver operating characteristic (ROC) curves. The methods used in this research are an ANN for breast abnormality classification and the Canny edge detector as a feature extraction method. Previously, the ANN classifier provided only the numbers of benign and malignant cases, without information for specific cases; in this research, the type of abnormality for each image can be obtained. The existing MIAS MiniMammographic database classifies the mammogram images by three features only, namely the character of the background tissue, the class of abnormality and the radius of the abnormality. In this research three further features are added: the number of spots, and the area and shape of abnormalities. Lastly, the performance of the ANN classifier is evaluated using a ROC curve. It is found that the ANN has an accuracy of 97.9%, which is considered acceptable.

  10. Detection of Possible Quasi-periodic Oscillations in the Long-term Optical Light Curve of the BL Lac Object OJ 287

    NASA Astrophysics Data System (ADS)

    Bhatta, G.; Zola, S.; Stawarz, Ł.; Ostrowski, M.; Winiarski, M.; Ogłoza, W.; Dróżdż, M.; Siwak, M.; Liakos, A.; Kozieł-Wierzbowska, D.; Gazeas, K.; Debski, B.; Kundera, T.; Stachowski, G.; Paliya, V. S.

    2016-11-01

    The detection of periodicity in the broadband non-thermal emission of blazars has so far been proven to be elusive. However, there are a number of scenarios that could lead to quasi-periodic variations in blazar light curves. For example, an orbital or thermal/viscous period of accreting matter around central supermassive black holes could, in principle, be imprinted in the multi-wavelength emission of small-scale blazar jets, carrying such crucial information about plasma conditions within the jet launching regions. In this paper, we present the results of our time series analysis of the ~9.2 yr long, and exceptionally well-sampled, optical light curve of the BL Lac object OJ 287. The study primarily used the data from our own observations performed at the Mt. Suhora and Kraków Observatories in Poland, and at the Athens Observatory in Greece. Additionally, SMARTS observations were used to fill some of the gaps in the data. The Lomb-Scargle periodogram and the weighted wavelet Z-transform methods were employed to search for possible quasi-periodic oscillations in the resulting optical light curve of the source. Both methods consistently yielded a possible quasi-periodic signal around the periods of ~400 and ~800 days, the former with a significance (over the underlying colored noise) of ≥99%. A number of likely explanations for this are discussed, with preference given to a modulation of the jet production efficiency by highly magnetized accretion disks. This supports previous findings and the interpretation reported recently in the literature for OJ 287 and other blazar sources.
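
    A minimal Lomb-Scargle search of an unevenly sampled light curve looks like the sketch below. The data are synthetic, astropy's implementation is assumed, and the red-noise significance testing reported in the paper is omitted.

        import numpy as np
        from astropy.timeseries import LombScargle

        # Illustrative unevenly sampled light curve with a ~400-day modulation (synthetic).
        rng = np.random.default_rng(4)
        t = np.sort(rng.uniform(0, 9.2 * 365.25, 500))          # ~9.2 yr of observations
        mag = 14.5 + 0.3 * np.sin(2 * np.pi * t / 400.0) + rng.normal(0, 0.15, t.size)

        freq, power = LombScargle(t, mag).autopower(minimum_frequency=1 / 2000.0,
                                                    maximum_frequency=1 / 50.0)
        best_period = 1.0 / freq[np.argmax(power)]
        print(f"strongest period: {best_period:.0f} days")
        # Assessing significance against colored noise requires simulating the
        # underlying red-noise process, which this sketch does not do.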

  11. Derivation of flood frequency curves in poorly gauged Mediterranean catchments using a simple stochastic hydrological rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Aronica, G. T.; Candela, A.

    2007-12-01

    In this paper, a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one with a limited number of parameters; it requires practically no calibration, making it a robust tool for catchments that are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution, whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing the classical iso-frequency assumption between rainfall and peak flow to be relaxed. The procedure is tested on six practical case studies in which synthetic flood frequency curves (FFCs) were obtained from the model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with antecedent moisture conditions (AMC). The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods using a simple and parsimonious approach, limited data input and no calibration of the rainfall-runoff model.
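
    The heart of such a procedure is a per-event SCS-CN transformation inside a Monte Carlo loop. The sketch below substitutes a Gumbel rainfall generator and a three-state CN draw for the paper's regional TCEV model and probabilistic AMC treatment, and treats each simulated event as an annual maximum; all numbers are illustrative.

        import numpy as np

        def scs_cn_runoff(P, CN):
            """SCS-CN direct runoff depth (mm) for storm rainfall P (mm)."""
            S = 25400.0 / CN - 254.0          # potential maximum retention, mm
            Ia = 0.2 * S                      # initial abstraction
            return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

        rng = np.random.default_rng(5)
        n = 5000
        # Stand-ins for the regional TCEV rainfall and probabilistic AMC (assumptions):
        P = np.maximum(rng.gumbel(loc=60.0, scale=20.0, size=n), 0.0)
        CN = rng.choice([60.0, 75.0, 88.0], size=n, p=[0.3, 0.5, 0.2])

        Q = scs_cn_runoff(P, CN)
        Qsorted = np.sort(Q)[::-1]
        T = (n + 1) / np.arange(1, n + 1)     # Weibull plotting-position return period
        for target in (10, 50, 100):
            print(f"T={target} yr runoff depth ~ "
                  f"{np.interp(target, T[::-1], Qsorted[::-1]):.1f} mm")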

  12. RankExplorer: Visualization of Ranking Changes in Large Time Series Data.

    PubMed

    Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin

    2012-12-01

    For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.

  13. What's the point? Hole-ography in Poincaré AdS

    NASA Astrophysics Data System (ADS)

    Espíndola, Ricardo; Güijosa, Alberto; Landetta, Alberto; Pedraza, Juan F.

    2018-01-01

    In the context of the AdS/CFT correspondence, we study bulk reconstruction of the Poincaré wedge of AdS_3 via hole-ography, i.e., in terms of differential entropy of the dual CFT_2. Previous work had considered the reconstruction of closed or open spacelike curves in global AdS, and of infinitely extended spacelike curves in Poincaré AdS that are subject to a periodicity condition at infinity. Working first at constant time, we find that a closed curve in Poincaré is described in the CFT by a family of intervals that covers the spatial axis at least twice. We also show how to reconstruct open curves, points and distances, and obtain a CFT action whose extremization leads to bulk points. We then generalize all of these results to the case of curves that vary in time, and discover that generic curves have segments that cannot be reconstructed using the standard hole-ographic construction. This happens because, for the nonreconstructible segments, the tangent geodesics fail to be fully contained within the Poincaré wedge. We show that a previously discovered variant of the hole-ographic method allows us to overcome this challenge, by reorienting the geodesics touching the bulk curve to ensure that they all remain within the wedge. Our conclusion is that all spacelike curves in Poincaré AdS can be completely reconstructed with CFT data, and each curve has in fact an infinite number of representations within the CFT.

  14. IGBT Switching Characteristic Curve Embedded Half-Bridge MMC Modelling and Real Time Simulation Realization

    NASA Astrophysics Data System (ADS)

    Zhengang, Lu; Hongyang, Yu; Xi, Yang

    2017-05-01

    The Modular Multilevel Converter (MMC) is one of the most attractive topologies of recent years for medium- and high-voltage industrial applications, such as high-voltage dc transmission (HVDC) and medium-voltage variable-speed motor drives. The wide adoption of MMCs in industry is mainly due to their flexible expandability, transformer-less configuration, common dc bus, high reliability through redundancy, and so on. However, as the number of MMC submodules grows, testing the MMC controller costs more time and effort. Hardware-in-the-loop (HIL) testing based on a real-time simulator saves much of the time and money involved in MMC testing, and owing to its flexibility HIL has become increasingly popular in industry. The MMC modelling method remains an important issue for MMC HIL tests. Specifically, the VSC model should realistically reflect the nonlinear device switching characteristics, switching and conduction losses, tailing current, and diode reverse-recovery behaviour of a realistic converter. In this paper, an IGBT switching-characteristic-curve-embedded half-bridge MMC modelling method is proposed. The method is based on switching-curve lookup and sample-circuit calculation, and it is simple to implement. Based on the proposed method, an FPGA real-time simulation is carried out with a 200 ns sample time. The real-time simulation results show that the proposed method is correct.

  15. Thermodynamic properties of methane hydrate in quartz powder.

    PubMed

    Voronov, Vitaly P; Gorodetskii, Evgeny E; Safonov, Sergey S

    2007-10-04

    Using the experimental method of precision adiabatic calorimetry, the thermodynamic (equilibrium) properties of methane hydrate in quartz sand with a grain size of 90-100 microm have been studied in the temperature range of 260-290 K and at pressures up to 10 MPa. The equilibrium curves for the water-methane hydrate-gas and ice-methane hydrate-gas transitions, hydration number, latent heat of hydrate decomposition along the equilibrium three-phase curves, and the specific heat capacity of the hydrate have been obtained. It has been experimentally shown that the equilibrium three-phase curves of the methane hydrate in porous media are shifted to lower temperatures and higher pressures with respect to the equilibrium curves of the bulk hydrate. In these experiments, we have found that the specific heat capacity of the hydrate, within the accuracy of our measurements, coincides with the heat capacity of ice. The latent heat of the hydrate dissociation for the ice-hydrate-gas transition is equal to 143 +/- 10 J/g, whereas, for the transition from hydrate to water and gas, the latent heat is 415 +/- 15 J/g. The hydration number has been evaluated in the different hydrate conditions and has been found to be equal to n = 6.16 +/- 0.06. In addition, the influence of the water saturation of the porous media and its distribution over the porous space on the measured parameters has been experimentally studied.

  16. Estimating flow duration curve in the humid tropics: a disaggregation approach in Hawaiian catchments

    NASA Astrophysics Data System (ADS)

    Chris, Leong; Yoshiyuki, Yokoo

    2017-04-01

    Islands, which are concentrated in developing countries, often have poor hydrological research data, which contributes to stress on hydrological resources through unmonitored human influence and neglect. As island hydrology is a relatively young field, building-block research specifically targeting islands is needed to understand these stresses and influences. The flow duration curve (FDC) is a simple starting hydrological tool that can be used in initial studies of islands. This study disaggregates the FDC into three sections (top, middle, and bottom), and runoff in each section is estimated with simple hydrological models. The study is based on the Hawaiian Islands and works toward estimating runoff in ungauged island catchments in the humid tropics. Runoff in the top and middle sections is estimated using the Curve Number (CN) method and the Regime Curve (RC), respectively. The bottom section is presented as a separate study from this one. The results showed that for the majority of the catchments the RC can be used for estimations in the middle section of the FDC. They also showed that, in order for the CN method to make stable estimations, it had to be calibrated. This study identifies simple methodologies that can be useful for making runoff estimations in ungauged island catchments.
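    For reference, a minimal Python sketch of the standard SCS-CN rainfall-runoff conversion applied in the top section of the FDC; the metric form with the conventional initial abstraction Ia = 0.2S is assumed, and the CN and rainfall values below are placeholders rather than calibrated Hawaiian values:

      def scs_cn_runoff(p_mm: float, cn: float) -> float:
          """Direct runoff Q (mm) from event rainfall P (mm) by the SCS-CN method."""
          s = 25400.0 / cn - 254.0  # potential maximum retention S (mm)
          ia = 0.2 * s              # initial abstraction (conventional ratio)
          if p_mm <= ia:
              return 0.0
          return (p_mm - ia) ** 2 / (p_mm - ia + s)

      print(scs_cn_runoff(p_mm=80.0, cn=75.0))  # ~27 mm of direct runoff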

  17. Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits †

    PubMed Central

    Gámez Serna, Citlalli; Ruichek, Yassine

    2017-01-01

    A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits without considering factors like road geometry. In this paper, we consider road curvature together with speed limits to automatically adjust the vehicle’s speed to the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, ‘curve analysis extraction’ and ‘speed limits database creation’ are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high-curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between the current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits, and consequently to increase safety and comfort for the passenger. PMID:28613251
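    A minimal sketch of the curvature-to-speed idea, assuming the common comfort rule v = sqrt(a_lat_max / kappa) and Menger curvature computed from three consecutive GPS points projected to metres; the lateral-acceleration limit and the paper's exact curve-extraction algorithm are not reproduced here:

      import numpy as np

      def menger_curvature(p1, p2, p3):
          """Curvature (1/m) of the circle through three planar points."""
          a, b, c = (np.linalg.norm(p2 - p1), np.linalg.norm(p3 - p2),
                     np.linalg.norm(p3 - p1))
          cross = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                      - (p2[1] - p1[1]) * (p3[0] - p1[0]))  # twice the triangle area
          return 2.0 * cross / (a * b * c) if a * b * c > 0 else 0.0

      def ideal_curve_speed(kappa, a_lat_max=2.0, v_limit=25.0):
          """Speed (m/s) capped by lateral acceleration and by the speed limit."""
          return v_limit if kappa < 1e-6 else min(v_limit, np.sqrt(a_lat_max / kappa))

      pts = [np.array([0.0, 0.0]), np.array([30.0, 5.0]), np.array([55.0, 18.0])]
      k = menger_curvature(*pts)
      print(k, ideal_curve_speed(k))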

  18. Boundary element analysis of corrosion problems for pumps and pipes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyasaka, M.; Amaya, K.; Kishimoto, K.

    1995-12-31

    Three-dimensional (3D) and axi-symmetric boundary element methods (BEM) were developed to quantitatively estimate cathodic protection and macro-cell corrosion. For 3D analysis, a multiple-region method (MRM) was developed in addition to a single-region method (SRM). The validity and usefulness of the BEMs were demonstrated by comparing numerical results with experimental data from galvanic corrosion systems of a cylindrical model and a seawater pipe, and from a cathodic protection system of an actual seawater pump. It was shown that a highly accurate analysis could be performed for fluid machines handling seawater with complex 3D fields (e.g. seawater pump) by taking account of flow rate and time dependencies of the polarization curve. Compared to the 3D BEM, the axi-symmetric BEM permitted large reductions in numbers of elements and nodes, which greatly simplified analysis of axi-symmetric fields such as pipes. Computational accuracy and CPU time were compared between analyses using two approximation methods for polarization curves: a logarithmic-approximation method and a linear-approximation method.

  19. Comment on "Beyond the SCS-CN method: A theoretical framework for spatially lumped rainfall-runoff response" by M. S. Bartlett et al.

    NASA Astrophysics Data System (ADS)

    Ogden, Fred L.; Hawkins, Richard Pete; Walter, M. Todd; Goodrich, David C.

    2017-07-01

    Bartlett et al. (2016) performed a re-interpretation and modification of the space-time lumped USDA NRCS (formerly SCS) Curve Number (CN) method to extend its applicability to forested watersheds. We believe that the well-documented limitations of the CN method severely constrain the applicability of the modifications proposed by Bartlett et al. (2016). This forward-looking comment urges the research communities in hydrologic science and engineering to consider the CN method as a stepping stone that has outlived its usefulness in research. The CN method fills a narrow niche in certain settings as a parsimonious method having utility as an empirical equation to estimate runoff from a given amount of rainfall, which originated as a static functional form that fits rainfall-runoff data sets. Sixty-five years of use and multiple reinterpretations have not resulted in improved hydrological predictability using the method. We suggest that the research community should move forward by (1) identifying appropriate dynamic hydrological model formulations for different hydro-geographic settings, (2) specifying needed model capabilities for solving different classes of problems (e.g., flooding, erosion/sedimentation, nutrient transport, water management, etc.) in different hydro-geographic settings, and (3) expanding data collection and research programs to help ameliorate the so-called "overparameterization" problem in contemporary modeling. Many decades of advances in geo-spatial data and processing, computation, and understanding are being squandered on continued focus on the static CN regression method. It is time to truly "move beyond" the Curve Number method.

  20. Investigating the environmental Kuznets curve hypothesis: the role of tourism and ecological footprint.

    PubMed

    Ozturk, Ilhan; Al-Mulali, Usama; Saboori, Behnaz

    2016-01-01

    The main objective of this study is to examine the environmental Kuznets curve (EKC) hypothesis by utilizing the ecological footprint as the environment indicator and GDP from tourism as the economic indicator. To achieve this goal, an environmental degradation model is established for the period 1988-2008 for 144 countries. The results from the time series generalized method of moments (GMM) and the system panel GMM revealed that a negative relationship between the ecological footprint and its determinants (GDP growth from tourism, energy consumption, trade openness, and urbanization) is more prevalent in the upper middle- and high-income countries. Moreover, the EKC hypothesis is more evident in the upper middle- and high-income countries than in the other income groups. From the outcome of this research, a number of policy recommendations were provided for the investigated countries.

  1. Broadband turbulent spectra in gamma-ray burst light curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Putten, Maurice H. P. M.; Guidorzi, Cristiano; Frontera, Filippo, E-mail: mvp@sejong.ac.kr

    2014-05-10

    Broadband power density spectra offer a window to understanding turbulent behavior in the emission mechanism and, at the highest frequencies, in the putative inner engines powering long gamma-ray bursts (GRBs). We describe a chirp search method alongside Fourier analysis for signal detection in the Poisson noise-dominated, 2 kHz sampled BeppoSAX light curves. An efficient numerical implementation is described in O(N n log n) operations, where N is the number of chirp templates and n is the length of the light-curve time series, suited for embarrassingly parallel processing. For the detection of individual chirps over a 1 s duration, the method is one order of magnitude more sensitive in signal-to-noise ratio than Fourier analysis. The Fourier-chirp spectra of GRB 010408 and GRB 970816 show a continuation of the spectral slope with up to 1 kHz of turbulence identified in low-frequency Fourier analysis. The same continuation is observed in an average spectrum of 42 bright, long GRBs. An outlook on a similar analysis of upcoming gravitational wave data is included.
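    A minimal sketch of FFT-based template correlation in the spirit of the chirp search: the FFT of the light curve is computed once and reused across the N templates, giving the O(N n log n) scaling; the linear-chirp templates and Poisson light curve below are toy stand-ins, not the paper's template bank:

      import numpy as np

      def chirp_template(n, dt, f0, f1):
          """Unit-norm linear chirp sweeping f0 -> f1 over the template duration."""
          t = np.arange(n) * dt
          tmpl = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t ** 2))
          return tmpl / np.linalg.norm(tmpl)

      def chirp_search(lc, dt, templates):
          """Best correlation of each 1 s chirp template over all offsets, via FFT."""
          n = len(lc)
          F = np.fft.rfft(lc - lc.mean())  # one FFT of the data, reused per template
          best = []
          for f0, f1 in templates:
              tm = np.zeros(n)
              tm[: int(1.0 / dt)] = chirp_template(int(1.0 / dt), dt, f0, f1)
              corr = np.fft.irfft(F * np.conj(np.fft.rfft(tm)), n)  # circular x-corr
              best.append(corr.max())
          return np.array(best)

      dt = 5e-4  # 2 kHz sampling, as for the BeppoSAX light curves
      lc = np.random.poisson(50, 20000).astype(float)  # noise-only stand-in
      print(chirp_search(lc, dt, [(100.0, 400.0), (300.0, 900.0)]))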

  2. Flow of nanofluid by nonlinear stretching velocity

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Rashid, Madiha; Alsaedi, Ahmed; Ahmad, Bashir

    2018-03-01

    The main objective of this article is to model and analyze the nanofluid flow induced by a curved surface with nonlinear stretching velocity. The nanofluid comprises water and silver. The governing problem is solved by using the homotopy analysis method (HAM). The induced magnetic field is neglected under the low magnetic Reynolds number assumption. Convergent series solutions for the velocity and skin friction coefficient are successfully developed. Pressure in the boundary layer flow over a curved stretching surface cannot be ignored. It is found that the magnitude of the pressure distribution increases with the power-law index parameter. The magnitude of the pressure field reduces with increasing radius of curvature, while the opposite trend is observed for the velocity.

  3. Fast dynamic ventilation MRI of hyperpolarized 129Xe using spiral imaging

    PubMed Central

    Matin, Tahreema N.; Mcintyre, Anthony; Burns, Brian; Schulte, Rolf F.; Gleeson, Fergus V.; Bulte, Daniel

    2017-01-01

    Purpose: To develop and optimize a rapid dynamic hyperpolarized 129Xe ventilation (DXeV) MRI protocol and investigate the feasibility of capturing pulmonary signal-time curves in human lungs. Theory and Methods: Spiral k-space trajectories were designed with the number of interleaves N_int = 1, 2, 4, and 8, corresponding to voxel sizes of 8 mm, 5 mm, 4 mm, and 2.5 mm, respectively, for a field of view of 15 cm. DXeV images were acquired from a gas-flow phantom to investigate the ability of N_int = 1, 2, 4, and 8 to capture signal-time curves. A finite element model was constructed to investigate gas-flow dynamics, corroborating the experimental signal-time curves. DXeV imaging was also carried out in six subjects (three healthy and three chronic obstructive pulmonary disease subjects). Results: DXeV images and numerical modelling of signal-time curves permitted the quantification of temporal and spatial resolutions for different numbers of spiral interleaves. The two-interleaved spiral (N_int = 2) was found to be the most time-efficient to obtain DXeV images and signal-time curves of whole lungs with a temporal resolution of 624 ms for 13 slices. Signal-time curves were well matched in three healthy volunteers. The Spearman's correlations of chronic obstructive pulmonary disease subjects were statistically different from the three healthy subjects (P < 0.05). Conclusion: The N_int = 2 spiral demonstrates the successful acquisition of DXeV images and signal-time curves in healthy subjects and chronic obstructive pulmonary disease patients. Magn Reson Med 79:2597-2606, 2018. © 2017 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:28921655

  4. What is the most accurate lymph node staging method for perihilar cholangiocarcinoma? Comparison of UICC/AJCC pN stage, number of metastatic lymph nodes, lymph node ratio, and log odds of metastatic lymph nodes.

    PubMed

    Conci, S; Ruzzenente, A; Sandri, M; Bertuzzo, F; Campagnaro, T; Bagante, F; Capelli, P; D'Onofrio, M; Piccino, M; Dorna, A E; Pedrazzani, C; Iacono, C; Guglielmi, A

    2017-04-01

    We compared the prognostic performance of the International Union Against Cancer/American Joint Committee on Cancer (UICC/AJCC) 7th edition pN stage, number of metastatic LNs (MLNs), LN ratio (LNR), and log odds of MLNs (LODDS) in patients with perihilar cholangiocarcinoma (PCC) undergoing curative surgery in order to identify the best LN staging method. Ninety-nine patients who underwent surgery with curative intent for PCC in a single tertiary hepatobiliary referral center were included in the study. Two approaches were used to evaluate and compare the predictive power of the different LN staging methods: one based on the estimation of variable importance with prediction error rate and the other based on the calculation of the receiver operating characteristic (ROC) curve. LN dissection was performed in 92 (92.9%) patients; 49 were UICC/AJCC pN0 (49.5%), 33 pN1 (33.3%), and 10 pN2 (10.1%). The median number of LNs retrieved was 8. The prediction error rate ranged from 42.7% for LODDS to 47.1% for UICC/AJCC pN stage. Moreover, LODDS was the variable with the highest area under the ROC curve (AUC) for prediction of 3-year survival (AUC = 0.71), followed by LNR (AUC = 0.60), number of MLNs (AUC = 0.59), and UICC/AJCC pN stage (AUC = 0.54). The number of MLNs, LNR, and LODDS appear to better predict survival than the UICC/AJCC pN stage in patients undergoing curative surgery for PCC. Moreover, LODDS seems to be the most accurate and predictive LN staging method. Copyright © 2017 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
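    For concreteness, a minimal Python sketch of the two derived LN covariates compared above; the 0.5 continuity correction in the LODDS formula is the common convention and is assumed here:

      import math

      def ln_staging_metrics(mln: int, tln: int):
          """LN ratio and log odds from metastatic (MLN) and total (TLN) node counts."""
          lnr = mln / tln
          lodds = math.log((mln + 0.5) / (tln - mln + 0.5))  # 0.5 avoids log(0)
          return lnr, lodds

      print(ln_staging_metrics(mln=3, tln=8))  # LNR = 0.375, LODDS ~ -0.45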

  5. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.

    PubMed

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-09-21

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  6. Quantification of HER2/neu gene amplification by competitive PCR using fluorescent melting curve analysis.

    PubMed

    Lyon, E; Millson, A; Lowery, M C; Woods, R; Wittwer, C T

    2001-05-01

    Molecular detection methods for HER2/neu gene amplification include fluorescence in situ hybridization (FISH) and competitive PCR. We designed a quantitative PCR system utilizing fluorescent hybridization probes and a competitor that differed from the HER2/neu sequence by a single base change. Increasing twofold concentrations of competitor were coamplified with DNA from cell lines with various HER2/neu copy numbers at the HER2/neu locus. Competitor DNA was distinguished from the HER2/neu sequence by a fluorescent hybridization probe and melting curve analysis on a fluorescence-monitoring thermal cycler. The percentages of competitor to target peak areas on derivative fluorescence vs temperature curves were used to calculate copy number. Real-time monitoring of the PCR reaction showed comparable relative areas throughout the log phase and during the PCR plateau, indicating that only end-point detection is necessary. The dynamic range was over two logs (2000-250 000 competitor copies) with CVs < 20%. Three cell lines (MRC-5, T-47D, and SK-BR-3) were determined to have gene doses of 1, 3, and 11, respectively. Gene amplification was detected in 3 of 13 tumor samples and was correlated with conventional real-time PCR and FISH analysis. Use of relative peak areas allows gene copy numbers to be quantified against an internal competitive control in < 1 h.

  7. A semiparametric separation curve approach for comparing correlated ROC data from multiple markers

    PubMed Central

    Tang, Liansheng Larry; Zhou, Xiao-Hua

    2012-01-01

    In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360

  8. Anatomical curve identification

    PubMed Central

    Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise

    2015-01-01

    Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three-dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943

  9. Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions

    PubMed Central

    Joshi, Gaurav V.; Duan, Yuanyuan; Bona, Alvaro Della; Hill, Thomas J.; John, Kenneth St.; Griggs, Jason A.

    2013-01-01

    Objectives: To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Materials and Methods: Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n=30) with polished (15 μm) or air-abraded surfaces were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n=44) and 10 Hz (n=36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37°C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To prove the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of different variables, such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. Results: There was an increase in fracture toughness values with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). The values for the fracture toughness ranged from 0.75 to 1.1 MPa·m^1/2, reaching a plateau at different critical flaw sizes based on the loading method. Significance: Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution was dependent on the flaw distribution, and it fit well to an exclusive flaw model. PMID:24034441

  10. Determination of soil degradation from flooding for estimating ecosystem services in Slovakia

    NASA Astrophysics Data System (ADS)

    Hlavcova, Kamila; Szolgay, Jan; Karabova, Beata; Kohnova, Silvia

    2015-04-01

    Floods as natural hazards are related to soil health, land use and land management. They not only represent threats on their own, but can also be triggered, controlled and amplified by interactions with other soil threats and soil degradation processes. The many direct impacts of flooding on soil health include changes in soil texture and structure, changes in the soil's chemical properties, deterioration of soil aggregation and water-holding capacity, as well as soil erosion, mudflows, and deposition of sediment and debris. Flooding is initiated by a combination of predisposing and triggering factors, and apart from climate drivers it is related to the physiographic conditions of the land, the state of the soil, land use and land management. Due to the diversity and complexity of their potential interactions, diverse methodologies and approaches are needed for describing a particular type of event in a specific environment, especially at ungauged sites. In engineering studies, and also in many rainfall-runoff models, the SCS-CN method remains widely applied for soil- and land use-based estimation of direct runoff and flooding potential. The SCS-CN method is an empirical rainfall-runoff model developed by the USDA Natural Resources Conservation Service (formerly called the Soil Conservation Service or SCS). The runoff curve number (CN) is based on the hydrological soil characteristics, land use, land management and antecedent saturation conditions of the soil. Since the method and curve numbers were derived from an empirical analysis of rainfall-runoff events from small catchments and hillslope plots monitored by the USDA, using the method under the conditions of Slovakia raises uncertainty and can cause inaccurate results in determining direct runoff. The objective of the study presented (also within the framework of the EU-FP7 RECARE Project) was to develop the SCS-CN methodology for the flood conditions in Slovakia (and especially for the RECARE pilot site of Myjava), with an emphasis on the determination of soil degradation from flooding for estimating ecosystem services. The parameters of the SCS-CN methodology were regionalised empirically based on actual rainfall and discharge measurements. Since no appropriate methodology has been provided for the regionalisation of SCS-CN method parameters in Slovakia, such as runoff curve numbers and initial abstraction coefficients (λ), the work presented is important for the correct application of the SCS-CN method under our conditions.
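    A minimal sketch of the empirical regionalisation step, fitting the retention S and the initial abstraction coefficient λ to paired event rainfall-runoff depths by least squares; the P-Q pairs below are invented placeholders, not the Slovak measurements:

      import numpy as np
      from scipy.optimize import least_squares

      def cn_runoff(p, s, lam):
          """SCS-CN direct runoff (mm) with an adjustable initial abstraction ratio."""
          ia = lam * s
          return np.where(p > ia, (p - ia) ** 2 / (p - ia + s), 0.0)

      # Hypothetical event rainfall P and observed direct runoff Q (mm).
      P = np.array([20.0, 35.0, 50.0, 70.0, 90.0, 120.0])
      Q_obs = np.array([1.5, 6.0, 14.0, 27.0, 42.0, 66.0])

      fit = least_squares(lambda x: cn_runoff(P, x[0], x[1]) - Q_obs,
                          x0=[80.0, 0.2], bounds=([1.0, 0.0], [500.0, 0.5]))
      s_fit, lam_fit = fit.x
      print(f"S = {s_fit:.1f} mm, lambda = {lam_fit:.2f}, "
            f"CN = {25400.0 / (s_fit + 254.0):.1f}")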

  11. OBS Data Denoising Based on Compressed Sensing Using Fast Discrete Curvelet Transform

    NASA Astrophysics Data System (ADS)

    Nan, F.; Xu, Y.

    2017-12-01

    OBS (Ocean Bottom Seismometer) data denoising is an important step in OBS data processing and inversion, necessary to obtain clearer seismic phases for further velocity structure analysis. Traditional methods for OBS data denoising include the band-pass filter, the Wiener filter and deconvolution, etc. (Liu, 2015). Most of these filtering methods are based on the Fourier Transform (FT). Recently, multi-scale transform methods such as the wavelet transform (WT) and the curvelet transform (CvT) have been widely used for data denoising in various applications. The FT, WT and CvT can represent a signal sparsely and separate noise in the transform domain, and they suit different cases. Compared with the curvelet transform, the FT suffers from the Gibbs phenomenon and cannot handle point discontinuities well. The WT is well localized and multi-scale, but it has poor orientation selectivity and cannot handle discontinuities along curves well. The CvT is a multiscale directional transform that can represent curves with only a small number of coefficients. It provides an optimal sparse representation of objects with singularities along smooth curves, which is suitable for seismic data processing. Different seismic phases in OBS data appear as discontinuous curves in the time domain. Hence, we propose to analyze OBS data via the CvT and separate the noise in the CvT domain. In this paper, our sparsity-promoting inversion approach is constrained by an L1 condition, and we solve this L1 problem using modified iterative thresholding. Results show that the proposed method suppresses the noise well and gives sparse results in the curvelet domain. Figure 1 compares the curvelet denoising method with the wavelet method, using the same iterations and threshold, on a synthetic example: (a) original data; (b) noisy data; (c) data denoised using the CvT; (d) data denoised using the WT. The CvT eliminates the noise well and gives a better result than the WT. We further applied the CvT denoising method to OBS data processing. Figure 2a is a common receiver gather collected in the Bohai Sea, China. The whole profile is 120 km long with 987 shots. The horizontal axis is the shot number; the vertical axis is travel time reduced by 6 km/s. We used our method to process the data and obtained the denoised profile in Figure 2b. After denoising, most of the high-frequency noise was suppressed and the seismic phases were clearer.
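    A minimal sketch of sparsity-promoting denoising by iterative soft thresholding; because curvelet transforms need external packages such as CurveLab, a wavelet from PyWavelets stands in for the sparsifying transform here, so this shows the thresholding loop rather than the authors' curvelet-domain implementation:

      import numpy as np
      import pywt

      def ist_denoise(noisy, wavelet="db4", level=4, thresh=0.1, mu=0.5, n_iter=20):
          """Iterative soft thresholding in a sparsifying transform domain."""
          x = np.zeros_like(noisy)
          for _ in range(n_iter):
              r = x + mu * (noisy - x)  # damped data-fidelity step
              coeffs = pywt.wavedec(r, wavelet, level=level)
              # Soft-threshold the detail coefficients; keep the approximation.
              coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                      for c in coeffs[1:]]
              x = pywt.waverec(coeffs, wavelet)[: len(noisy)]
          return x

      t = np.linspace(0.0, 1.0, 1024)
      clean = np.sin(2 * np.pi * 5 * t) * (t > 0.3)  # toy signal with a discontinuity
      noisy = clean + 0.3 * np.random.randn(t.size)
      print(np.linalg.norm(noisy - clean), np.linalg.norm(ist_denoise(noisy) - clean))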

  12. Assessment of watershed regionalization for the land use change parameterization

    NASA Astrophysics Data System (ADS)

    Randusová, Beata; Kohnová, Silvia; Studvová, Zuzana; Marková, Romana; Nosko, Radovan

    2016-04-01

    The estimation of design discharges and water levels of extreme floods is one of the most important parts of the design process for a large number of engineering projects and studies. Floods and other natural hazards initiated by climate, soil, and land use changes are highly important in the 21st century, and flood risk and design flood estimation are particularly challenging. Methods of design flood estimation can be applied either locally or regionally. To obtain design values in cases where no recorded data exist, many countries have adopted procedures that fit the local conditions and requirements. One of these methods is the Soil Conservation Service - Curve Number (SCS-CN) method, which is often used in design flood estimation for ungauged sites. The SCS-CN method is an empirical rainfall-runoff model developed by the USDA Natural Resources Conservation Service (formerly called the Soil Conservation Service or SCS). The runoff curve number (CN) is based on the hydrological soil characteristics, land use, land management and antecedent saturation conditions of the soil. This study focuses on developing the SCS-CN methodology for changing land use conditions in Slovak basins (with the Myjava catchment as the pilot site), regionalizing the method with the current state of land use data and actual rainfall and discharge measurements from the selected river basins. The state of water erosion and sediment transport was also analyzed, along with a subsequent proposal of erosion control measures. The regionalized SCS-CN method was subsequently used to assess the effectiveness of these control measures in reducing runoff from the selected basin. For the determination of the sediment transport from the control measures to the Myjava basin, the SDR (Sediment Delivery Ratio) model was used.

  13. Estimation of error on the cross-correlation, phase and time lag between evenly sampled light curves

    NASA Astrophysics Data System (ADS)

    Misra, R.; Bora, A.; Dewangan, G.

    2018-04-01

    Temporal analysis of radiation from astrophysical sources like active galactic nuclei, X-ray binaries, and gamma-ray bursts provides information on the geometry and sizes of the emitting regions. Establishing that two light curves in different energy bands are correlated, and measuring the phase and time lag between them, is an important and frequently used temporal diagnostic. Generally the estimates are made by dividing the light curves into a large number of adjacent intervals to find the variance, or by using numerically expensive simulations. In this work we present alternative expressions for estimating the errors on the cross-correlation, phase, and time lag between two shorter light curves when they cannot be divided into segments. Thus the estimates presented here allow for analysis of light curves with a relatively small number of points, as well as for obtaining information on the longest time scales available. The expressions have been tested using 200 light curves simulated from both white and 1/f stochastic processes with measurement errors. We also present an application to the XMM-Newton light curves of the active galactic nucleus Akn 564. The example shows that the estimates presented here allow for analysis of light curves with a relatively small (∼1000) number of points.
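    The paper's analytic error expressions are not reproduced here, but a minimal sketch of the underlying quantities, the normalized cross-correlation function and the lag at its peak, for two evenly sampled light curves:

      import numpy as np

      def cross_correlation(x, y, dt, max_lag_bins=50):
          """Normalized cross-correlation vs lag; peak at lag > 0 means y lags x."""
          x = (x - x.mean()) / x.std()
          y = (y - y.mean()) / y.std()
          lags = np.arange(-max_lag_bins, max_lag_bins + 1)
          ccf = np.array([np.mean(x[max(0, -k):len(x) - max(0, k)]
                                  * y[max(0, k):len(y) - max(0, -k)]) for k in lags])
          return lags * dt, ccf

      rng = np.random.default_rng(1)
      s = rng.normal(size=1200)
      x = s[:1000] + 0.2 * rng.normal(size=1000)
      y = np.roll(s, 5)[:1000] + 0.2 * rng.normal(size=1000)  # y lags x by 5 bins
      lag, ccf = cross_correlation(x, y, dt=1.0)
      print(lag[np.argmax(ccf)])  # ~5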

  14. LCC: Light Curves Classifier

    NASA Astrophysics Data System (ADS)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can be also used for automatic tuning parameters of used methods (for example, number of hidden neurons or binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information of queried stars. It natively can connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and command line UI, the program can be used through a web interface. Users can create jobs for ”training” methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifier and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  15. The average receiver operating characteristic curve in multireader multicase imaging studies

    PubMed Central

    Samuelson, F W

    2014-01-01

    Objective: In multireader, multicase (MRMC) receiver operating characteristic (ROC) studies for evaluating medical imaging systems, the area under the ROC curve (AUC) is often used as a summary metric. Owing to the limitations of AUC, plotting the average ROC curve to accompany the rigorous statistical inference on AUC is recommended. The objective of this article is to investigate methods for generating the average ROC curve from the ROC curves of individual readers. Methods: We present both a non-parametric method and a parametric method for averaging ROC curves that produce a ROC curve, the area under which is equal to the average AUC of the individual readers (a property we call area preserving). We use hypothetical examples, simulated data and a real-world imaging data set to illustrate these methods and their properties. Results: We show that our proposed methods are area preserving. We also show that the method of averaging the ROC parameters, either the conventional bi-normal parameters (a, b) or the proper bi-normal parameters (c, d_a), is generally not area preserving and may produce a ROC curve that is intuitively not an average of multiple curves. Conclusion: Our proposed methods are useful for making plots of average ROC curves in MRMC studies as a companion to the rigorous statistical inference on the AUC end point. The software implementing these methods is freely available from the authors. Advances in knowledge: Methods for generating the average ROC curve in MRMC ROC studies are formally investigated. The area-preserving criterion we defined is useful for evaluating such methods. PMID:24884728
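    One simple area-preserving construction, not necessarily the authors' exact estimator, is vertical averaging: the readers' true-positive fractions are averaged at each point of a common false-positive grid, and linearity of integration then makes the AUC of the averaged curve equal the mean of the individual AUCs:

      import numpy as np

      def average_roc(fpf_grid, reader_curves):
          """Pointwise (vertical) average of reader ROC curves on a common FPF grid."""
          tpf = np.array([np.interp(fpf_grid, f, t) for f, t in reader_curves])
          return tpf.mean(axis=0)

      grid = np.linspace(0.0, 1.0, 101)
      # Two hypothetical reader ROC curves, each given as (FPF, TPF) samples.
      r1 = (np.array([0.0, 0.1, 0.3, 1.0]), np.array([0.0, 0.5, 0.8, 1.0]))
      r2 = (np.array([0.0, 0.2, 0.5, 1.0]), np.array([0.0, 0.4, 0.9, 1.0]))
      avg = average_roc(grid, [r1, r2])
      auc_avg = np.trapz(avg, grid)
      auc_each = [np.trapz(np.interp(grid, f, t), grid) for f, t in (r1, r2)]
      print(auc_avg, np.mean(auc_each))  # equal: the construction is area preserving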

  16. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    NASA Astrophysics Data System (ADS)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.

  17. Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles

    PubMed Central

    2016-01-01

    Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource-intensive method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
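    A minimal sketch of a greedy selection loop in the spirit of the faster approximations (not the paper's exact algorithm), assuming a ligand's ensemble score is its best (lowest) docking score over the member conformations and that the objective is the Mann-Whitney AUC:

      import numpy as np

      def auc(scores, labels):
          """Mann-Whitney AUC: chance an active outscores (scores lower than) an inactive."""
          act, inact = scores[labels == 1], scores[labels == 0]
          return np.mean(act[:, None] < inact[None, :])

      def greedy_ensemble(score_matrix, labels, max_size=4):
          """Greedily add the conformation whose inclusion most improves ensemble AUC."""
          chosen, best_auc = [], 0.0
          remaining = list(range(score_matrix.shape[0]))
          while remaining and len(chosen) < max_size:
              gains = [auc(np.min(score_matrix[chosen + [r]], axis=0), labels)
                       for r in remaining]
              if max(gains) <= best_auc:
                  break
              best_auc = max(gains)
              chosen.append(remaining.pop(int(np.argmax(gains))))
          return chosen, best_auc

      rng = np.random.default_rng(0)
      scores = rng.normal(size=(6, 200))  # 6 conformations x 200 ligands (toy data)
      labels = (rng.random(200) < 0.25).astype(int)
      scores[:, labels == 1] -= 0.8       # actives dock better (lower) on average
      print(greedy_ensemble(scores, labels))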

  18. The heat rate index indicator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lasasso, M.; Runyan, B.; Napoli, J.

    1995-06-01

    This paper describes a method of tracking unit performance through the use of a reference number called the Heat Rate Index Indicator. The ABB Power Plant Controls OTIS performance monitor is used to determine when steady load conditions exist and then to collect controllable and equipment loss data which significantly impact thermal efficiency. By comparing these loss parameters to those found during the previous heat balance, it is possible to develop a new adjusted heat rate curve. These impacts on heat rate are used to change the shape of the tested heat rate curve by the appropriate percentages over a specified load range. Mathcad is used to determine the Heat Rate Index by integrating the areas beneath the adjusted heat rate curve and a curve that represents the unit's ideal heat rate; the ratio of these areas is the Heat Rate Index. An index of 1.0 indicates that the unit is operating at ideal efficiency, while an index of less than 1.0 indicates that the unit is operating at less than ideal conditions. A one percent change in the Heat Rate Index is equivalent to a one percent change in heat rate. The new shape of the adjusted heat rate curve and the individual curves generated from the controllable and equipment loss parameters are useful for determining performance problems in specific load ranges.
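    A minimal sketch of the index computation, taking the Heat Rate Index as the ratio of the areas under the ideal and adjusted heat-rate curves over the load range; the curve shapes below are invented placeholders:

      import numpy as np

      load = np.linspace(100.0, 500.0, 9)   # MW
      ideal = 9000.0 + 1.5e5 / load         # ideal heat rate curve (Btu/kWh), placeholder
      adjusted = ideal * 1.02               # tested curve adjusted for current losses

      # Index of 1.0 = ideal efficiency; a 1% index change ~ a 1% heat-rate change.
      hri = np.trapz(ideal, load) / np.trapz(adjusted, load)
      print(f"Heat Rate Index = {hri:.3f}")  # ~0.980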

  19. Dielectric Cytometry with Three-Dimensional Cellular Modeling

    PubMed Central

    Katsumoto, Yoichi; Hayashi, Yoshihito; Oshige, Ikuya; Omori, Shinji; Kishii, Noriyuki; Yasuda, Akio; Asami, Koji

    2008-01-01

    We have developed what we believe is an efficient method to determine the electric parameters (the specific membrane capacitance Cm and the cytoplasm conductivity κi) of cells from their dielectric dispersion. First, a limited number of dispersion curves are numerically calculated for a three-dimensional cell model by changing Cm and κi, and their amplitudes Δɛ and relaxation times τ are determined by assuming a Cole-Cole function. Second, regression formulas are obtained from the values of Δɛ and τ and then used for the determination of Cm and κi from the experimental Δɛ and τ. This method was applied to the dielectric dispersion measured for rabbit erythrocytes (discocytes and echinocytes) and human erythrocytes (normocytes), and provided reasonable Cm and κi of the erythrocytes and excellent agreement between the theoretical and experimental dispersion curves. PMID:18567636

  20. Dielectric cytometry with three-dimensional cellular modeling.

    PubMed

    Katsumoto, Yoichi; Hayashi, Yoshihito; Oshige, Ikuya; Omori, Shinji; Kishii, Noriyuki; Yasuda, Akio; Asami, Koji

    2008-09-15

    We have developed what we believe is an efficient method to determine the electric parameters (the specific membrane capacitance C_m and the cytoplasm conductivity κ_i) of cells from their dielectric dispersion. First, a limited number of dispersion curves are numerically calculated for a three-dimensional cell model by changing C_m and κ_i, and their amplitudes Δε and relaxation times τ are determined by assuming a Cole-Cole function. Second, regression formulas are obtained from the values of Δε and τ and then used for the determination of C_m and κ_i from the experimental Δε and τ. This method was applied to the dielectric dispersion measured for rabbit erythrocytes (discocytes and echinocytes) and human erythrocytes (normocytes), and provided reasonable C_m and κ_i values for the erythrocytes and excellent agreement between the theoretical and experimental dispersion curves.
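    A minimal sketch of fitting a Cole-Cole dispersion to synthetic permittivity data; only the real part is fitted here, and the subsequent regression from Δε and τ to C_m and κ_i uses the authors' model-specific formulas, which are not reproduced:

      import numpy as np
      from scipy.optimize import curve_fit

      def cole_cole_real(f, d_eps, tau, eps_inf, alpha):
          """Real part of eps*(w) = eps_inf + d_eps / (1 + (j w tau)^(1 - alpha))."""
          w = 2 * np.pi * f
          return (eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))).real

      f = np.logspace(4, 8, 60)  # Hz
      true = cole_cole_real(f, 2000.0, 1e-6, 80.0, 0.1)
      noisy = true * (1 + 0.01 * np.random.randn(f.size))
      popt, _ = curve_fit(cole_cole_real, f, noisy, p0=[1500.0, 5e-7, 70.0, 0.05])
      print(popt)  # recovered d_eps, tau, eps_inf, alpha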

  1. Predicting the effects of magnesium oxide nanoparticles and temperature on the thermal conductivity of water using artificial neural network and experimental data

    NASA Astrophysics Data System (ADS)

    Afrand, Masoud; Hemmat Esfe, Mohammad; Abedini, Ehsan; Teimouri, Hamid

    2017-03-01

    The current paper first presents an empirical correlation, based on experimental results, for estimating the thermal conductivity enhancement of MgO-water nanofluid using a curve-fitting method. Then, artificial neural networks (ANNs) with various numbers of neurons were assessed, with temperature and MgO volume fraction as the input variables and thermal conductivity enhancement as the output variable, to select the most appropriate and optimized network. Results indicated that the network with 7 neurons had the minimum error. Eventually, the output of the artificial neural network was compared with the results of the proposed empirical correlation and those of the experiments. Comparisons revealed that ANN modeling was more accurate than the curve-fitting method in predicting the thermal conductivity enhancement of the nanofluid.
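    A minimal sketch of the network structure described (two inputs, a single hidden layer of 7 neurons, one output), using scikit-learn on invented training data rather than the measured nanofluid dataset:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      # Hypothetical samples: temperature (C), volume fraction (%) -> enhancement (%).
      X = np.column_stack([rng.uniform(25, 55, 200), rng.uniform(0.1, 2.0, 200)])
      y = 1.5 * X[:, 1] + 0.05 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 200)

      model = MLPRegressor(hidden_layer_sizes=(7,), activation="tanh",
                           max_iter=5000, random_state=0)
      model.fit(X, y)
      print(model.predict([[40.0, 1.0]]))  # predicted enhancement at 40 C, 1.0 vol%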

  2. Reduction of Topographic Effect for Curve Number Estimated from Remotely Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Yan; Lin, Chao-Yuan

    2016-04-01

    The Soil Conservation Service Curve Number (SCS-CN) method is commonly used in hydrology to estimate direct runoff volume. The CN is an empirical parameter corresponding to land use/land cover, hydrologic soil group and antecedent soil moisture condition. In large watersheds with complex topography, satellite remote sensing is the appropriate approach to acquiring land use change information. However, topographic effects are usually present in remotely sensed imagery and degrade land use classification. This research selected summer and winter scenes of Landsat-5 TM from 2008 to classify land use in the Chen-You-Lan Watershed, Taiwan. The b-correction, an empirical topographic correction method, was applied to the Landsat-5 TM data. Land use was categorized using K-means classification into four groups: forest, grassland, agriculture and river. Accuracy assessment of the image classification was performed against the national land use map. The results showed that after topographic correction, the overall classification accuracy increased from 68.0% to 74.5%. The average CN estimated from the remotely sensed imagery decreased from 48.69 to 45.35, whereas the average CN estimated from the national LULC map was 44.11. Therefore, the topographic correction method is recommended to normalize the topographic effect in satellite remote sensing data before estimating the CN.

  3. Model of Numerical Spatial Classification for Sustainable Agriculture in Badung Regency and Denpasar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Trigunasih, N. M.; Lanya, I.; Subadiyasa, N. N.; Hutauruk, J.

    2018-02-01

    The increasing number and activity of the population, meeting the needs of their lives, greatly affect the utilization of land resources. Land needed for the population's activities continues to grow, while the availability of land is limited; land use therefore changes, and the resulting problems are land degradation and conversion of agricultural land to non-agricultural uses. The objectives of this research are: (1) to determine the parameters of spatial numerical classification of sustainable food agriculture in Badung Regency and Denpasar City; (2) to project the food balance in Badung Regency and Denpasar City in 2020, 2030, 2040, and 2050; (3) to specify the function of spatial numerical classification in making a zonation model of sustainable agricultural land in Badung Regency and Denpasar City; and (4) to determine the appropriate model of the area to protect sustainable agricultural land on spatial and time scales in Badung Regency and Denpasar City. The quantitative methods used in this research include survey, soil analysis, spatial data development, geoprocessing analysis (spatial overlay and proximity analysis), interpolation of raster digital elevation model data, and visualization (cartography). Qualitative methods consisted of literature studies and interviews. A total of 11 parameters were observed for Badung Regency and 9 for Denpasar City. The numerical classification used weighting and scores to generate parameter distributions characterized by the standard deviation and mean of the population data, and the relationship between rice field projections and the food balance was modelled. The results showed that the number of numerical classification parameters differs between rural areas (Badung) and urban areas (Denpasar), with fewer parameters in the urban area than in the rural area. The numerical classification produced five models, divided into three zones (sustainable, buffer, and converted) in Denpasar and Badung. Parameter analysis of the population curves in Denpasar showed a normal curve, in contrast to Badung Regency, which showed an abnormal curve; modelling in Denpasar was therefore carried out over the whole region, while in Badung Regency it was done per district. The modelled relationship between land and the projected food balance is viewed in Badung in terms of sustainable land area, whereas in Denpasar it is viewed through the connection to green open space in the Denpasar spatial plan 2011-2031.

  4. Performance Prediction Relationships for AM2 Airfield Matting Developed from Full-Scale Accelerated Testing and Laboratory Experimentation

    DTIC Science & Technology

    2018-01-01

    ...work, the prevailing methods used to predict the performance of AM2 were based on the CBR design procedure for flexible pavements using a small number... suitable for design and evaluation frameworks currently used for airfield pavements and matting systems. ...methods used to develop the equivalency curves equated the mat-surfaced area to an equivalent thickness of flexible pavement using the CBR design...

  5. Implementation and Validation of an Impedance Eduction Technique

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.; Gerhold, Carl H.

    2011-01-01

    Implementation of a pressure gradient method of impedance eduction in two NASA Langley flow ducts is described. The Grazing Flow Impedance Tube only supports plane-wave sources, while the Curved Duct Test Rig supports sources that contain higher-order modes. Multiple exercises are used to validate this new impedance eduction method. First, synthesized data for a hard wall insert and a conventional liner mounted in the Grazing Flow Impedance Tube are used as input to the two impedance eduction methods, the pressure gradient method and a previously validated wall pressure method. Comparisons between the two results are excellent. Next, data measured in the Grazing Flow Impedance Tube are used as input to both methods. Results from the two methods compare quite favorably for sufficiently low Mach numbers but this comparison degrades at Mach 0.5, especially when the hard wall insert is used. Finally, data measured with a hard wall insert mounted in the Curved Duct Test Rig are used as input to the pressure gradient method. Significant deviation from the known solution is observed, which is believed to be largely due to 3-D effects in this flow duct. Potential solutions to this issue are currently being explored.

  6. Assessment of Shape Changes of Mistletoe Berries: A New Software Approach to Automatize the Parameterization of Path Curve Shaped Contours

    PubMed Central

    Derbidge, Renatus; Feiten, Linus; Conradt, Oliver; Heusser, Peter; Baumgartner, Stephan

    2013-01-01

    Photographs of mistletoe (Viscum album L.) berries taken by a permanently fixed camera during their development in autumn were subjected to an outline shape analysis by fitting path curves using a mathematical algorithm from projective geometry. During growth and maturation processes the shape of mistletoe berries can be described by a set of such path curves, making it possible to extract changes of shape using one parameter called Lambda. Lambda describes the outline shape of a path curve. Here we present methods and software to capture and measure these changes of form over time. The present paper describes the software used to automatize a number of tasks including contour recognition, optimization of fitting the contour via hill-climbing, derivation of the path curves, computation of Lambda and blinding the pictures for the operator. The validity of the program is demonstrated by results from three independent measurements showing circadian rhythm in mistletoe berries. The program is available as open source and will be applied in a project to analyze the chronobiology of shape in mistletoe berries and the buds of their host trees. PMID:23565255

  7. Thermoluminescence dosimetry properties of new Cu-doped CaF2 nanoparticles.

    PubMed

    Zahedifar, M; Sadeghi, E

    2013-12-01

    Nanoparticles of Cu-doped calcium fluoride were synthesised by using the hydrothermal method. The structure of the prepared nanomaterial was characterised by the X-ray diffraction (XRD) pattern and energy dispersive spectrometer. The particle size of 36 nm was calculated from the XRD data. Its shape and size were also observed by scanning electron microscope. Thermoluminescence (TL) and photoluminescence of the produced phosphor were also considered. The computerised glow curve deconvolution procedure was used to identify the number of glow peaks included in the TL glow curve of the CaF2:Cu nanoparticles. The TL glow curve contains two overlapping glow peaks at ∼413 and 451 K. The TL response of this phosphor was studied for different Cu concentrations and the maximum sensitivity was found at 1 mol% of Cu impurity. Other dosimetric characteristics of the synthesised nanophosphor are also presented and discussed.

  8. Measurement bias detection with Kronecker product restricted models for multivariate longitudinal data: an illustration with health-related quality of life data from thirteen measurement occasions

    PubMed Central

    Verdam, Mathilde G. E.; Oort, Frans J.

    2014-01-01

    Highlights: Application of the Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data. A method for the investigation of measurement bias with Kronecker product restricted models. Application of these methods to health-related quality of life data from bone metastasis patients, collected at 13 consecutive measurement occasions. The use of curves to facilitate substantive interpretation of apparent measurement bias. Assessment of change in common factor means, after accounting for apparent measurement bias. Longitudinal measurement invariance is usually investigated with a longitudinal factor model (LFM). However, with multiple measurement occasions, the number of parameters to be estimated increases with a multiple of the number of measurement occasions. To guard against too low ratios of numbers of subjects and numbers of parameters, we can use Kronecker product restrictions to model the multivariate longitudinal structure of the data. These restrictions can be imposed on all parameter matrices, including measurement invariance restrictions on factor loadings and intercepts. The resulting models are parsimonious and have attractive interpretation, but require different methods for the investigation of measurement bias. Specifically, additional parameter matrices are introduced to accommodate possible violations of measurement invariance. These additional matrices consist of measurement bias parameters that are either fixed at zero or free to be estimated. In cases of measurement bias, it is also possible to model the bias over time, e.g., with linear or non-linear curves. Measurement bias detection with Kronecker product restricted models will be illustrated with multivariate longitudinal data from 682 bone metastasis patients whose health-related quality of life (HRQL) was measured at 13 consecutive weeks. PMID:25295016

  9. Measurement bias detection with Kronecker product restricted models for multivariate longitudinal data: an illustration with health-related quality of life data from thirteen measurement occasions.

    PubMed

    Verdam, Mathilde G E; Oort, Frans J

    2014-01-01

    Highlights: Application of the Kronecker product to construct parsimonious structural equation models for multivariate longitudinal data. A method for the investigation of measurement bias with Kronecker product restricted models. Application of these methods to health-related quality of life data from bone metastasis patients, collected at 13 consecutive measurement occasions. The use of curves to facilitate substantive interpretation of apparent measurement bias. Assessment of change in common factor means, after accounting for apparent measurement bias. Longitudinal measurement invariance is usually investigated with a longitudinal factor model (LFM). However, with multiple measurement occasions, the number of parameters to be estimated increases with a multiple of the number of measurement occasions. To guard against too low ratios of numbers of subjects and numbers of parameters, we can use Kronecker product restrictions to model the multivariate longitudinal structure of the data. These restrictions can be imposed on all parameter matrices, including measurement invariance restrictions on factor loadings and intercepts. The resulting models are parsimonious and have attractive interpretation, but require different methods for the investigation of measurement bias. Specifically, additional parameter matrices are introduced to accommodate possible violations of measurement invariance. These additional matrices consist of measurement bias parameters that are either fixed at zero or free to be estimated. In cases of measurement bias, it is also possible to model the bias over time, e.g., with linear or non-linear curves. Measurement bias detection with Kronecker product restricted models will be illustrated with multivariate longitudinal data from 682 bone metastasis patients whose health-related quality of life (HRQL) was measured at 13 consecutive weeks.
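    A minimal sketch of the Kronecker-product restriction on the model-implied covariance of multivariate longitudinal data (toy sizes; the record's full structural equation model is not reproduced):

      import numpy as np

      # 3 measurement occasions x 2 observed variables (toy sizes).
      T = np.array([[1.0, 0.8, 0.6],
                    [0.8, 1.0, 0.8],
                    [0.6, 0.8, 1.0]])  # occasion (time) correlation structure
      S = np.array([[1.0, 0.5],
                    [0.5, 1.0]])       # variable (measurement) covariance structure

      # 6x6 covariance built from far fewer free parameters than an unstructured one.
      sigma = np.kron(T, S)
      print(sigma.shape)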

  10. A statistical method for estimating rates of soil development and ages of geologic deposits: A design for soil-chronosequence studies

    USGS Publications Warehouse

    Switzer, P.; Harden, J.W.; Mark, R.K.

    1988-01-01

    A statistical method for estimating rates of soil development in a given region based on calibration from a series of dated soils is used to estimate ages of soils in the same region that are not dated directly. The method is designed specifically to account for sampling procedures and uncertainties that are inherent in soil studies. Soil variation and measurement error, uncertainties in calibration dates and their relation to the age of the soil, and the limited number of dated soils are all considered. Maximum likelihood (ML) is employed to estimate a parametric linear calibration curve, relating soil development to time or age on suitably transformed scales. Soil variation on a geomorphic surface of a certain age is characterized by replicate sampling of soils on each surface; such variation is assumed to have a Gaussian distribution. The age of a geomorphic surface is described by older and younger bounds. This technique allows age uncertainty to be characterized by either a Gaussian distribution or by a triangular distribution using minimum, best-estimate, and maximum ages. The calibration curve is taken to be linear after suitable (in certain cases logarithmic) transformations, if required, of the soil parameter and age variables. Soil variability, measurement error, and departures from linearity are described in a combined fashion using Gaussian distributions with variances particular to each sampled geomorphic surface and the number of sample replicates. Uncertainty in age of a geomorphic surface used for calibration is described using three parameters by one of two methods. In the first method, upper and lower ages are specified together with a coverage probability; this specification is converted to a Gaussian distribution with the appropriate mean and variance. In the second method, "absolute" older and younger ages are specified together with a most probable age; this specification is converted to an asymmetric triangular distribution with mode at the most probable age. The statistical variability of the ML-estimated calibration curve is assessed by a Monte Carlo method in which simulated data sets repeatedly are drawn from the distributional specification; calibration parameters are reestimated for each such simulation in order to assess their statistical variability. Several examples are used for illustration. The age of undated soils in a related setting may be estimated from the soil data using the fitted calibration curve. A second simulation to assess age estimate variability is described and applied to the examples. © 1988 International Association for Mathematical Geology.
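
    As a toy illustration of the second age-uncertainty option described above (an asymmetric triangular distribution) combined with a Monte Carlo assessment of calibration variability, consider the following sketch; the ages, soil-index values and the simple log-linear calibration are all invented:

        import numpy as np

        rng = np.random.default_rng(0)
        # (younger bound, most probable age, older bound), hypothetical, in kyr
        age_bounds = [(5, 8, 12), (20, 30, 45), (60, 80, 110)]
        soil_index = np.array([0.4, 1.1, 1.9])  # hypothetical development index

        slopes = []
        for _ in range(1000):
            # draw an age for each surface from its triangular distribution
            ages = np.array([rng.triangular(lo, mode, hi)
                             for lo, mode, hi in age_bounds])
            # linear calibration on log-transformed age
            slope, _intercept = np.polyfit(np.log(ages), soil_index, 1)
            slopes.append(slope)

        print(f"slope = {np.mean(slopes):.3f} +/- {np.std(slopes):.3f}")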

  11. EXTRACTING PERIODIC TRANSIT SIGNALS FROM NOISY LIGHT CURVES USING FOURIER SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samsing, Johan

    We present a simple and powerful method for extracting transit signals associated with a known transiting planet from noisy light curves. Assuming the orbital period of the planet is known and the signal is periodic, we illustrate that systematic noise can be removed in Fourier space at all frequencies by only using data within a fixed time frame with a width equal to an integer number of orbital periods. This results in a reconstruction of the full transit signal, which on average is unbiased despite no prior knowledge of either the noise or the transit signal itself being used in the analysis. The method therefore has clear advantages over standard phase folding, which normally requires external input such as nearby stars or noise models for removing systematic components. In addition, we can extract the full orbital transit signal (360°) simultaneously, and Kepler-like data can be analyzed in just a few seconds. We illustrate the performance of our method by applying it to a dataset composed of light curves from Kepler with a fake injected signal emulating a planet with rings. For extracting periodic transit signals, our presented method is in general the optimal and least biased estimator and could therefore lead the way toward the first detections of, e.g., planet rings and exo-trojan asteroids.
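
    A minimal synthetic sketch of the core idea (restricting the series to an integer number of orbital periods, so that the periodic transit power falls only on harmonics of the orbital frequency) might look as follows; the period, sampling and noise level are invented for illustration:

        import numpy as np

        period, n_periods = 10.0, 30
        n = n_periods * 200                     # samples over exactly 30 periods
        t = np.linspace(0.0, n_periods * period, n, endpoint=False)
        flux = 1.0 + np.where((t % period) < 0.5, -1e-3, 0.0)  # toy transit dip
        flux += np.random.default_rng(5).normal(0.0, 5e-4, n)  # noise

        # With an integer number of periods, the periodic signal lives only at
        # bins k = m * n_periods; keep those and invert to reconstruct it.
        spec = np.fft.rfft(flux)
        keep = np.zeros_like(spec)
        keep[::n_periods] = spec[::n_periods]
        recon = np.fft.irfft(keep, n)
        profile = recon.reshape(n_periods, -1).mean(axis=0)    # folded transit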

  12. Quality assessment of SPR sensor chips; case study on L1 chips.

    PubMed

    Olaru, Andreea; Gheorghiu, Mihaela; David, Sorin; Polonschii, Cristina; Gheorghiu, Eugen

    2013-07-15

    Surface quality of Surface Plasmon Resonance (SPR) chips is a major limiting issue in most SPR analyses, even more so for supported lipid membrane experiments, where both the organization of the lipid matrix and the subsequent incorporation of the target molecule depend on the surface quality. A novel quantitative method to characterize the quality of SPR sensor chips is described for L1 chips subject to formation of lipid films and injection of membrane-disrupting compounds, followed by appropriate regeneration procedures. The method consists in the analysis of the SPR reflectivity curves for several standard solutions (e.g. PBS, HEPES or deionized water). This analysis reveals the decline of the sensor surface as a function of the number of experimental cycles (each consisting of a biosensing assay and a regeneration step) and enables active control of surface regeneration for enhanced reproducibility. We demonstrate that quantitative evaluation of the changes in reflectivity curves (the shape of the SPR dip) and of the slope of the calibration curve provides a rapid and effective procedure for surface quality assessment. Whereas the method was tested on L1 SPR sensor chips, we stress its amenability to assessing the quality of other types of SPR chips as well. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Development and Interlaboratory Validation of a Simple Screening Method for Genetically Modified Maize Using a ΔΔCq-Based Multiplex Real-Time PCR Assay.

    PubMed

    Noguchi, Akio; Nakamura, Kosuke; Sakata, Kozue; Sato-Fukuda, Nozomi; Ishigaki, Takumi; Mano, Junichi; Takabatake, Reona; Kitta, Kazumi; Teshima, Reiko; Kondo, Kazunari; Nishimaki-Mogami, Tomoko

    2016-04-19

    A number of genetically modified (GM) maize events have been developed and approved worldwide for commercial cultivation. A screening method is needed to monitor GM maize approved for commercialization in countries that mandate the labeling of foods containing a specified threshold level of GM crops. In Japan, a screening method has been implemented to monitor approved GM maize since 2001. However, the screening method currently used in Japan is time-consuming and requires generation of a calibration curve and an experimental conversion factor (Cf) value. We developed a simple screening method that avoids the need for a calibration curve and Cf value. In this method, ΔCq values between the target sequences and the endogenous gene are calculated using multiplex real-time PCR, and the ΔΔCq value between the analytical and control samples is used as the criterion for identifying analytical samples in which the GM organism content is below the threshold level for labeling of GM crops. An interlaboratory study indicated that the method is applicable independently of the PCR instrument model, at least for the two models used in this study.
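
    The ΔΔCq criterion itself is simple arithmetic; the sketch below uses invented Cq values and an invented decision threshold purely to illustrate the calculation (a higher Cq indicates less template, so a large positive ΔΔCq suggests GM content below the control level):

        def delta_delta_cq(cq_target_s, cq_endo_s, cq_target_c, cq_endo_c):
            """ddCq = dCq(analytical sample) - dCq(control sample)."""
            return (cq_target_s - cq_endo_s) - (cq_target_c - cq_endo_c)

        # Hypothetical quantification-cycle values and threshold:
        ddcq = delta_delta_cq(28.9, 24.0, 26.5, 24.2)
        threshold = 1.8  # hypothetical criterion, not the validated value
        print("GM content below labeling threshold" if ddcq > threshold
              else "GM content at or above labeling threshold")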

  14. Numerical Solution of the Flow of a Perfect Gas Over A Circular Cylinder at Infinite Mach Number

    NASA Technical Reports Server (NTRS)

    Hamaker, Frank M.

    1959-01-01

    A solution for the two-dimensional flow of an inviscid perfect gas over a circular cylinder at infinite Mach number is obtained by numerical methods of analysis. Nonisentropic conditions of curved shock waves and vorticity are included in the solution. The analysis is divided into two distinct regions: the subsonic region, which is analyzed by the relaxation method of Southwell, and the supersonic region, which is treated by the method of characteristics. Both these methods of analysis are inapplicable on the sonic line, which is therefore considered separately. The shapes of the sonic line and the shock wave are obtained by iteration techniques. The striking result of the solution is the strong curvature of the sonic line and of the other lines of constant Mach number. Because of this, the influence of the supersonic flow on the sonic line is negligible. On comparison with Newtonian flow methods, it is found that the approximate methods show a larger variation of surface pressure than is given by the present solution.

  15. A new analytical method for estimating lumped parameter constants of linear viscoelastic models from strain rate tests

    NASA Astrophysics Data System (ADS)

    Mattei, G.; Ahluwalia, A.

    2018-04-01

    We introduce a new function, the apparent elastic modulus strain-rate spectrum $E_{app}(\dot{\varepsilon})$, for the derivation of lumped parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates $\dot{\varepsilon}$. The $E_{app}(\dot{\varepsilon})$ function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant $\dot{\varepsilon}$ input. Material viscoelastic parameters can be rapidly derived by fitting experimental $E_{app}$ data obtained at different strain rates to the $E_{app}(\dot{\varepsilon})$ function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon dot method, which is based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.
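
    The single-curve fitting step can be sketched as follows; note that the saturating one-arm expression used here is a simplified stand-in rather than the paper's exact $E_{app}(\dot{\varepsilon})$ function, and the data are hypothetical:

        import numpy as np
        from scipy.optimize import curve_fit

        def e_app(rate, e_inf, e1, tau):
            # simplified one-relaxation-arm stand-in for E_app(rate)
            return e_inf + e1 * rate * tau / (1.0 + rate * tau)

        rates = np.array([1e-3, 1e-2, 1e-1, 1.0])    # 1/s, hypothetical
        e_meas = np.array([12.0, 14.5, 19.8, 21.9])  # kPa, hypothetical

        params, _cov = curve_fit(e_app, rates, e_meas, p0=(10.0, 12.0, 10.0))
        print(dict(zip(("E_inf", "E1", "tau"), params)))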

  16. GPU computing of compressible flow problems by a meshless method with space-filling curves

    NASA Astrophysics Data System (ADS)

    Ma, Z. H.; Wang, H.; Pu, S. H.

    2014-04-01

    A graphic processing unit (GPU) implementation of a meshless method for solving compressible flow problems is presented in this paper. A least-squares fit is used to discretize the spatial derivatives of the Euler equations, and an upwind scheme is applied to estimate the flux terms. The compute unified device architecture (CUDA) C programming model is employed to efficiently and flexibly port the meshless solver from CPU to GPU. Considering the data locality of randomly distributed points, space-filling curves are adopted to renumber the points in order to improve memory performance. Detailed evaluations are first carried out to assess the accuracy and conservation property of the underlying numerical method. The GPU-accelerated flow solver is then used to solve external steady flows over aerodynamic configurations. Representative results are validated through extensive comparisons with experimental, finite volume, or other available reference solutions. Performance analysis reveals that the running time of simulations is significantly reduced, with impressive (more than an order of magnitude) speedups achieved.
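
    The renumbering step can be illustrated with a Z-order (Morton) curve, one common space-filling curve; the quantization depth and point set below are arbitrary, and the paper's own choice of curve may differ:

        import numpy as np

        def morton_key(ix, iy, bits=16):
            """Interleave the bits of integer grid coordinates ix and iy."""
            key = 0
            for b in range(bits):
                key |= ((int(ix) >> b) & 1) << (2 * b)
                key |= ((int(iy) >> b) & 1) << (2 * b + 1)
            return key

        rng = np.random.default_rng(1)
        pts = rng.random((1000, 2))                      # scattered cloud points
        grid = (pts * ((1 << 16) - 1)).astype(np.uint32) # quantize to a grid
        order = np.argsort([morton_key(x, y) for x, y in grid])
        pts = pts[order]  # neighbours in space are now close in memory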

  17. Real space mapping of oxygen vacancy diffusion and electrochemical transformations by hysteretic current reversal curve measurements

    DOEpatents

    Kalinin, Sergei V.; Balke, Nina; Borisevich, Albina Y.; Jesse, Stephen; Maksymovych, Petro; Kim, Yunseok; Strelcov, Evgheni

    2014-06-10

    An excitation voltage biases an ionic conducting material sample over a nanoscale grid. The bias sweeps a modulated voltage with increasing maximal amplitudes. A current response is measured at grid locations. Current response reversal curves are mapped over maximal amplitudes of the bias cycles. Reversal curves are averaged over the grid for each bias cycle and mapped over maximal bias amplitudes for each bias cycle. Average reversal curve areas are mapped over maximal amplitudes of the bias cycles. Thresholds are determined for onset and ending of electrochemical activity. A predetermined number of bias sweeps may vary in frequency where each sweep has a constant number of cycles and reversal response curves may indicate ionic diffusion kinetics.

  18. Modeling of a Micro-Electronic Mechanical Systems (MEMS) Deformable Mirror for Simulation and Characterization

    DTIC Science & Technology

    2016-09-01

    [Front-matter residue: table of contents and list of figures, including Figure 10, "Experimental Optical Layout for the Boston DM Characterization", and Figure 11, "Side View Showing the Curved Surface on a DM".] …of different methods for deposition, patterning, and etching until the desired design of the device is achieved. While a large number of devices…

  19. TV-based conjugate gradient method and discrete L-curve for few-view CT reconstruction of X-ray in vivo data.

    PubMed

    Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin; van de Kamp, Thomas; dos Santos Rolo, Tomy; Xiao, Xianghui; Moosmann, Julian; Kashef, Jubin; Stotzka, Rainer

    2015-03-09

    High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.
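
    A minimal sketch of locating the corner of a discrete L-curve by a finite-difference curvature estimate is shown below; the residual and TV norms are synthetic stand-ins for values that would come from repeated reconstructions:

        import numpy as np

        lambdas = np.logspace(-4, 1, 21)   # candidate Lagrange parameters
        res_norm = 1.0 + lambdas           # hypothetical residual norms
        tv_norm = 1.0 + 1.0 / lambdas      # hypothetical TV norms

        # curvature of the (log residual, log TV) curve; its peak is the corner
        x, y = np.log10(res_norm), np.log10(tv_norm)
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
        best_lambda = lambdas[np.argmax(np.abs(kappa))]
        print(best_lambda)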

  20. TV-based conjugate gradient method and discrete L-curve for few-view CT reconstruction of X-ray in vivo data

    DOE PAGES

    Yang, Xiaoli; Hofmann, Ralf; Dapp, Robin; ...

    2015-01-01

    High-resolution, three-dimensional (3D) imaging of soft tissues requires the solution of two inverse problems: phase retrieval and the reconstruction of the 3D image from a tomographic stack of two-dimensional (2D) projections. The number of projections per stack should be small to accommodate fast tomography of rapid processes and to constrain X-ray radiation dose to optimal levels to either increase the duration of in vivo time-lapse series at a given goal for spatial resolution and/or the conservation of structure under X-ray irradiation. In pursuing the 3D reconstruction problem in the sense of compressive sampling theory, we propose to reduce the number of projections by applying an advanced algebraic technique subject to the minimisation of the total variation (TV) in the reconstructed slice. This problem is formulated in a Lagrangian multiplier fashion with the parameter value determined by appealing to a discrete L-curve in conjunction with a conjugate gradient method. The usefulness of this reconstruction modality is demonstrated for simulated and in vivo data, the latter acquired in parallel-beam imaging experiments using synchrotron radiation.

  1. A new concept for stainless steels ranking upon the resistance to cavitation erosion

    NASA Astrophysics Data System (ADS)

    Bordeasu, I.; Popoviciu, M. O.; Salcianu, L. C.; Ghera, C.; Micu, L. M.; Badarau, R.; Iosif, A.; Pirvulescu, L. D.; Podoleanu, C. E.

    2017-01-01

    At present, materials are ranked by their resistance to cavitation erosion using laboratory tests finalized with the characteristic curves of mean depth of erosion against time, MDE(t), and mean depth of erosion rate against time, MDER(t). In some previous papers, Bordeasu and co-workers gave procedures for establishing exponential equations representing these curves with minimum scatter of the experimentally obtained results. For a given material, both exponential equations MDE(t) and MDER(t) have the same values for the scale and shape parameters. For ranking materials it is sometimes important to establish a single figure of merit. Until now, three such numbers have been used in the Timisoara Polytechnic University Cavitation Laboratory: the stable value of the curve MDER(t), the resistance to cavitation erosion (Rcav ≡ 1/MDERstable), and the normalized cavitation resistance Rns, which is the ratio between vs = MDERstable for the analyzed material and vse = MDERse, the mean depth of erosion rate for the steel OH12NDL (Rns = vs/vse). OH12NDL is a material used for manufacturing the blades of numerous Kaplan turbines in Romania, for which both cavitation erosion laboratory tests and field measurements of cavitation erosion are available. In the present paper we recommend a new method for ranking materials by cavitation erosion resistance. This method uses the scale and shape parameters of the exponential equations that represent the characteristic cavitation erosion curves. So far the method has been applied only to stainless steels. The experimental results show that the scale parameter provides an excellent basis for ranking stainless steels. In the future this kind of ranking will also be tested for other materials, especially bronzes used for manufacturing ship propellers.
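
    As an illustration of extracting a scale parameter from an erosion curve, the sketch below fits one plausible saturating-exponential form of MDE(t); the exact equation used by the authors may differ, and the data are invented:

        import numpy as np
        from scipy.optimize import curve_fit

        def mde(t, scale, shape):
            # saturating exponential with scale (depth) and shape (rate) parameters
            return scale * (1.0 - np.exp(-shape * t))

        t = np.array([15, 30, 60, 90, 120, 165], dtype=float)  # minutes, hypothetical
        depth = np.array([8, 15, 26, 33, 37, 40], dtype=float) # micrometres, hypothetical

        (scale, shape), _cov = curve_fit(mde, t, depth, p0=(40.0, 0.02))
        print(f"scale = {scale:.1f} um (ranking figure), shape = {shape:.4f} 1/min")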

  2. A method of plane geometry primitive presentation

    NASA Astrophysics Data System (ADS)

    Jiao, Anbo; Luo, Haibo; Chang, Zheng; Hui, Bin

    2014-11-01

    Point features and line features are basic elements in object feature sets, and they play an important role in object matching and recognition. On one hand, point features are sensitive to noise; on the other hand, there are usually a huge number of point features in an image, which makes matching complex. Line features include straight line segments and curves. One difficulty in straight line segment matching is the uncertainty of endpoint location; another is the fracture of straight line segments, or short segments that must be joined to form a long straight line segment. For curves, in addition to the above problems, there is the further difficulty of quantitatively describing the shape difference between curves. Because of these problems with point and line features, the robustness and accuracy of target description are affected; in this case, a method of plane geometry primitive presentation is proposed to describe the significant structure of an object. Firstly, two types of primitives are constructed: intersecting line primitives and blob primitives. Secondly, a line segment detector (LSD) is applied to detect line segments, and then intersecting line primitives are extracted. Finally, the robustness and accuracy of the plane geometry primitive presentation method are studied. This method has a good ability to obtain structural information of the object, even if there is rotation or scale change of the object in the image. Experimental results verify the robustness and accuracy of this method.

  3. Choice of no-slip curved boundary condition for lattice Boltzmann simulations of high-Reynolds-number flows.

    PubMed

    Sanjeevi, Sathish K P; Zarghami, Ahad; Padding, Johan T

    2018-04-01

    Various curved no-slip boundary conditions available in literature improve the accuracy of lattice Boltzmann simulations compared to the traditional staircase approximation of curved geometries. Usually, the required unknown distribution functions emerging from the solid nodes are computed based on the known distribution functions using interpolation or extrapolation schemes. On using such curved boundary schemes, there will be mass loss or gain at each time step during the simulations, especially apparent at high Reynolds numbers, which is called mass leakage. Such an issue becomes severe in periodic flows, where the mass leakage accumulation would affect the computed flow fields over time. In this paper, we examine mass leakage of the most well-known curved boundary treatments for high-Reynolds-number flows. Apart from the existing schemes, we also test different forced mass conservation schemes and a constant density scheme. The capability of each scheme is investigated and, finally, recommendations for choosing a proper boundary condition scheme are given for stable and accurate simulations.

  4. Choice of no-slip curved boundary condition for lattice Boltzmann simulations of high-Reynolds-number flows

    NASA Astrophysics Data System (ADS)

    Sanjeevi, Sathish K. P.; Zarghami, Ahad; Padding, Johan T.

    2018-04-01

    Various curved no-slip boundary conditions available in literature improve the accuracy of lattice Boltzmann simulations compared to the traditional staircase approximation of curved geometries. Usually, the required unknown distribution functions emerging from the solid nodes are computed based on the known distribution functions using interpolation or extrapolation schemes. On using such curved boundary schemes, there will be mass loss or gain at each time step during the simulations, especially apparent at high Reynolds numbers, which is called mass leakage. Such an issue becomes severe in periodic flows, where the mass leakage accumulation would affect the computed flow fields over time. In this paper, we examine mass leakage of the most well-known curved boundary treatments for high-Reynolds-number flows. Apart from the existing schemes, we also test different forced mass conservation schemes and a constant density scheme. The capability of each scheme is investigated and, finally, recommendations for choosing a proper boundary condition scheme are given for stable and accurate simulations.

  5. Observations from varying the lift and drag inputs to a noise prediction method for supersonic helical tip speed propellers

    NASA Technical Reports Server (NTRS)

    Dittmar, J. H.

    1984-01-01

    Previous comparisons between calculated and measured supersonic helical tip speed propeller noise show them to have different trends of peak blade passing tone versus helical tip Mach number. It was postulated that improvements in this comparison could be made first by including the drag force terms in the prediction and then by reducing the blade lift terms at the tip to allow the drag forces to dominate the noise prediction. Propeller hub-to-tip lift distributions were varied, but they did not yield sufficient change in the predicted lift noise to improve the comparison. This result indicates that some basic changes in the theory may be needed. In addition, the noise predicted by the drag forces did not exhibit the same curve shape as the measured data. So even if the drag force terms were to dominate, the trends with helical tip Mach number for theory and experiment would still not be the same. The effect of the blade shock wave pressure rise was approximated by increasing the drag coefficient at the blade tip. Predictions using this shock wave approximation did have a curve shape similar to the measured data. This result indicates that the shock pressure rise probably controls the noise at supersonic tip speeds and that the linear prediction method can give the proper noise trend with Mach number.

  6. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.

  7. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120

  8. Response of mouse epidermal cells to single doses of heavy-particles

    NASA Technical Reports Server (NTRS)

    Leith, J. T.; Schilling, W. A.; Welch, G. P.

    1972-01-01

    The survival of mouse epidermal cells exposed to heavy particles has been studied in vivo by the Withers clone technique. Experiments with accelerated helium, lithium and carbon ions were performed. The survival curve for the helium ion irradiations used a modified Bragg curve method with a maximum tissue penetration of 465 microns, and indicated that the dose needed to reduce the original cell number to 1 surviving cell per square centimeter was 1525 rads, with a D_0 of 95 rads. The LET at the basal cell layer was 28.6 keV per micron. Preliminary experiments with lithium and carbon used treatment doses of 1250 rads, with LETs at the surface of the skin of 56 and 193 keV per micron, respectively. Penetration depths in skin were 350 and 530 microns for the carbon and lithium ions, whose Bragg curves were unmodified. Results indicate a maximum RBE for skin of about 2 using the skin cloning technique. An attempt has been made to relate the epidermal cell survival curve to mortality of the whole animal for helium ions.

  9. Theoretical study on the dispersion curves of Lamb waves in piezoelectric-semiconductor sandwich plates GaAs-FGPM-AlAs: Legendre polynomial series expansion

    NASA Astrophysics Data System (ADS)

    Othmani, Cherif; Takali, Farid; Njeh, Anouar

    2017-06-01

    In this paper, the propagation of Lamb waves in the GaAs-FGPM-AlAs sandwich plate is studied. Based on the orthogonal functions, a Legendre polynomial series expansion is applied along the thickness direction to obtain the Lamb dispersion curves. The convergence and accuracy of this polynomial method are discussed. In addition, the influences of the volume fraction p and thickness h_FGPM of the FGPM middle layer on the Lamb dispersion curves are developed. The numerical results also show differences between the characteristics of the Lamb dispersion curves in the sandwich plate for various gradient coefficients of the FGPM middle layer. In fact, if the volume fraction p increases, the phase velocity increases and the number of modes decreases in a given frequency range. All the developments performed in this paper were implemented in Matlab software. The corresponding results presented in this work may have important applications in several industrial areas and in developing novel acoustic devices such as sensors, electromechanical transducers, actuators and filters.

  10. Fetal head detection and measurement in ultrasound images by a direct inverse randomized Hough transform

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Tan, Jinglu; Floyd, Randall C.

    2005-04-01

    Object detection in ultrasound fetal images is a challenging task due to the relatively low resolution and low signal-to-noise ratio. A direct inverse randomized Hough transform (DIRHT) is developed for filtering and detecting incomplete curves in images with strong noise. The DIRHT combines the advantages of both the inverse and the randomized Hough transforms. In the reverse image, curves are highlighted while a large number of unrelated pixels are removed, demonstrating a "curve-pass filtering" effect. Curves are detected by iteratively applying the DIRHT to the filtered image. The DIRHT was applied to head detection and measurement of the biparietal diameter (BPD) and head circumference (HC). No user input or geometric properties of the head were required for the detection. The detection and measurement took 2 seconds for each image on a PC. The inter-run variations and the differences between the automatic measurements and sonographers' manual measurements were small compared with published inter-observer variations. The results demonstrated that the automatic measurements were consistent and accurate. This method provides a valuable tool for fetal examinations.

  11. STR melting curve analysis as a genetic screening tool for crime scene samples.

    PubMed

    Nguyen, Quang; McKinney, Jason; Johnson, Donald J; Roberts, Katherine A; Hardy, Winters R

    2012-07-01

    In this proof-of-concept study, high-resolution melt curve (HRMC) analysis was investigated as a postquantification screening tool to discriminate human CSF1PO and THO1 genotypes amplified with mini-STR primers in the presence of SYBR Green or LCGreen Plus dyes. A total of 12 CSF1PO and 11 HUMTHO1 genotypes were analyzed on the LightScanner HR96 and LS-32 systems and were correctly differentiated based upon their respective melt profiles. Short STR amplicon melt curves were affected by repeat number, and single-source and mixed DNA samples were additionally differentiated by the formation of heteroduplexes. Melting curves were shown to be unique and reproducible from DNA quantities ranging from 20 to 0.4 ng and distinguished identical from nonidentical genotypes from DNA derived from different biological fluids and compromised samples. Thus, a method is described which can assess both the quantity and the possible probative value of samples without full genotyping. © 2012 American Academy of Forensic Sciences. Published 2012. This article is a U.S. Government work and is in the public domain in the U.S.A.

  12. A six-parameter Iwan model and its application

    NASA Astrophysics Data System (ADS)

    Li, Yikun; Hao, Zhiming

    2016-02-01

    The Iwan model is a practical tool to describe the constitutive behaviors of joints. In this paper, a six-parameter Iwan model based on a truncated power-law distribution with two Dirac delta functions is proposed, which gives a more comprehensive description of joints than the previous Iwan models. Its analytical expressions, including the backbone curve, unloading curves and energy dissipation, are deduced. Parameter identification procedures and the discretization method are also provided. A model application based on Segalman et al.'s experimental work with bolted joints is carried out. Simulation effects of different numbers of Jenkins elements are discussed. The results indicate that the six-parameter Iwan model can be used to accurately reproduce the experimental phenomena of joints.

  13. Understanding the Scalability of Bayesian Network Inference Using Clique Tree Growth Curves

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.

    2010-01-01

    One of the main approaches to performing computation in Bayesian networks (BNs) is clique tree clustering and propagation. The clique tree approach consists of propagation in a clique tree compiled from a Bayesian network, and while it was introduced in the 1980s, there is still a lack of understanding of how clique tree computation time depends on variations in BN size and structure. In this article, we improve this understanding by developing an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, and (ii) the expected number of moral edges in their moral graphs. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for the total size of each set. For the special case of bipartite BNs, there are two sets and two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, where random bipartite BNs generated using the BPART algorithm are studied, we systematically increase the out-degree of the root nodes in bipartite Bayesian networks, by increasing the number of leaf nodes. Surprisingly, root clique growth is well-approximated by Gompertz growth curves, an S-shaped family of curves that has previously been used to describe growth processes in biology, medicine, and neuroscience. We believe that this research improves the understanding of the scaling behavior of clique tree clustering for a certain class of Bayesian networks; presents an aid for trade-off studies of clique tree clustering using growth curves; and ultimately provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms.
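
    Fitting a Gompertz curve to growth data is straightforward with standard tools; the sketch below uses synthetic clique-size data, since the article's BPART-generated measurements are not reproduced here:

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(x, a, b, c):
            # a: asymptote, b: displacement, c: growth rate
            return a * np.exp(-b * np.exp(-c * x))

        leaves = np.arange(1.0, 21.0)  # number of leaf nodes, hypothetical
        rng = np.random.default_rng(4)
        size = gompertz(leaves, 1e6, 8.0, 0.4) * (1 + 0.05 * rng.normal(size=20))

        (a, b, c), _cov = curve_fit(gompertz, leaves, size, p0=(1e6, 5.0, 0.5))
        print(f"asymptotic root clique size ~ {a:.3g}")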

  14. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhong-Li, E-mail: zl.liu@163.com; Zhang, Xiu-Lu; Cai, Ling-Cang

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  15. Microstructure based simulations for prediction of flow curves and selection of process parameters for inter-critical annealing in DP steel

    NASA Astrophysics Data System (ADS)

    Deepu, M. J.; Farivar, H.; Prahl, U.; Phanikumar, G.

    2017-04-01

    Dual phase steels are versatile advanced high strength steels that are used for sheet metal applications in the automotive industry. They also have potential for application in bulk components like gears. The inter-critical annealing of dual phase steels is one of the crucial steps that determine the mechanical properties of the material. Selection of the process parameters for inter-critical annealing, in particular the inter-critical annealing temperature and time, is important as it plays a major role in determining the volume fractions of ferrite and martensite, which in turn determine the mechanical properties. Selecting these process parameters to obtain a particular required mechanical property requires a large number of experimental trials. Simulation of microstructure evolution and virtual compression/tensile testing can help in reducing the number of such experimental trials. In the present work, phase field modeling implemented in the commercial software Micress® is used to predict the microstructure evolution during inter-critical annealing. Virtual compression tests are performed on the simulated microstructure using the finite element method, implemented in commercial software, to obtain the effective flow curve of the macroscopic material. The flow curves obtained by simulation are experimentally validated against physical simulation in Gleeble® and compared with those obtained using the linear rule of mixtures. The methodology could be used to determine the inter-critical annealing process parameters required for achieving a particular flow curve.

  16. A Review and Comparison of Methods for Recreating Individual Patient Data from Published Kaplan-Meier Survival Curves for Economic Evaluations: A Simulation Study

    PubMed Central

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    Background In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. Methods A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. Results All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. Conclusions The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method. PMID:25803659
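
    For concreteness, the simplest of the four reviewed approaches (the least squares method) can be sketched as fitting a parametric survival function to points digitised from a published Kaplan-Meier curve and then computing mean survival; the digitised points below are hypothetical:

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import gamma

        def weibull_surv(t, lam, k):
            """Weibull survival function S(t) = exp(-(t/lam)^k)."""
            return np.exp(-((t / lam) ** k))

        t = np.array([3.0, 6.0, 12.0, 18.0, 24.0, 36.0])      # months
        s = np.array([0.90, 0.82, 0.65, 0.51, 0.40, 0.22])    # digitised S(t)

        (lam, k), _cov = curve_fit(weibull_surv, t, s, p0=(20.0, 1.0))
        mean_survival = lam * gamma(1.0 + 1.0 / k)            # Weibull mean
        print(f"mean survival ~ {mean_survival:.1f} months")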

  17. Renormalized asymptotic enumeration of Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Borinsky, Michael

    2017-10-01

    A method to obtain all-order asymptotic results for the coefficients of perturbative expansions in zero-dimensional quantum field theory is described. The focus is on the enumeration of the number of skeleton or primitive diagrams of a certain QFT and its asymptotics. The procedure heavily applies techniques from singularity analysis. To utilize singularity analysis, a representation of the zero-dimensional path integral as a generalized hyperelliptic curve is deduced. As applications, the full asymptotic expansions of the numbers of disconnected, connected, 1PI and skeleton Feynman diagrams in various theories are given.

  18. Prediction of seismic collapse risk of steel moment frame mid-rise structures by meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Jough, Fooad Karimi Ghaleh; Şensoy, Serhan

    2016-12-01

    Different performance levels may be obtained for sidesway collapse evaluation of steel moment frames depending on the evaluation procedure used to handle uncertainties. In this article, the process of representing modelling uncertainties, record-to-record (RTR) variations and cognitive uncertainties for moment resisting steel frames of various heights is discussed in detail. RTR uncertainty is treated through incremental dynamic analysis (IDA); modelling uncertainties are considered through backbone curves and hysteresis loops of components; and cognitive uncertainty is represented at three levels of material quality. IDA is used to evaluate RTR uncertainty based on strong ground motion records selected by the k-means algorithm, which is favoured over Monte Carlo selection due to its time-saving appeal. Analytical equations of the response surface method are obtained from the IDA results by the Cuckoo algorithm, which predicts the mean and standard deviation of the collapse fragility curve. The Takagi-Sugeno-Kang model is used to represent material quality based on the response surface coefficients. Finally, collapse fragility curves incorporating the various sources of uncertainty mentioned are derived from a large number of material quality values and meta variables inferred by the Takagi-Sugeno-Kang fuzzy model based on response surface method coefficients. It is concluded that a better risk management strategy in countries where material quality control is weak is to account for cognitive uncertainties in fragility curves and in the mean annual frequency.

  19. Methods to assess an exercise intervention trial based on 3-level functional data.

    PubMed

    Li, Haocheng; Kozey Keadle, Sarah; Staudenmayer, John; Assaad, Houssein; Huang, Jianhua Z; Carroll, Raymond J

    2015-10-01

    Motivated by data recording the effects of an exercise intervention on subjects' physical activity over time, we develop a model to assess the effects of a treatment when the data are functional with 3 levels (subjects, weeks and days in our application) and possibly incomplete. We develop a model with 3-level mean structure effects, all stratified by treatment and subject random effects, including a general subject effect and nested effects for the 3 levels. The mean and random structures are specified as smooth curves measured at various time points. The association structure of the 3-level data is induced through the random curves, which are summarized using a few important principal components. We use penalized splines to model the mean curves and the principal component curves, and cast the proposed model into a mixed effects model framework for model fitting, prediction and inference. We develop an algorithm to fit the model iteratively with the Expectation/Conditional Maximization Either (ECME) version of the EM algorithm and eigenvalue decompositions. Selection of the number of principal components and handling incomplete data issues are incorporated into the algorithm. The performance of the Wald-type hypothesis test is also discussed. The method is applied to the physical activity data and evaluated empirically by a simulation study. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers

    PubMed Central

    Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat

    2008-01-01

    Background Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
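
    The net benefit that underlies a decision curve can be computed directly from predicted probabilities, as the extensions above assume; the sketch below uses simulated outcomes and predictions rather than the paper's data:

        import numpy as np

        def net_benefit(y_true, p_pred, pt):
            """Net benefit at threshold probability pt: TP/n - FP/n * pt/(1-pt)."""
            treat = p_pred >= pt
            n = len(y_true)
            tp = np.sum(treat & (y_true == 1))
            fp = np.sum(treat & (y_true == 0))
            return tp / n - fp / n * pt / (1.0 - pt)

        rng = np.random.default_rng(2)
        y = rng.integers(0, 2, 500)                           # simulated outcomes
        p = np.clip(y * 0.3 + rng.random(500) * 0.7, 0, 1)    # noisy predictions

        thresholds = np.linspace(0.05, 0.5, 10)
        curve = [net_benefit(y, p, pt) for pt in thresholds]  # the decision curve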

  1. A review and comparison of methods for recreating individual patient data from published Kaplan-Meier survival curves for economic evaluations: a simulation study.

    PubMed

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method.

  2. Using ROC Curves to Choose Minimally Important Change Thresholds when Sensitivity and Specificity Are Valued Equally: The Forgotten Lesson of Pythagoras. Theoretical Considerations and an Example Application of Change in Health Status

    PubMed Central

    Froud, Robert; Abel, Gary

    2014-01-01

    Background Receiver Operator Characteristic (ROC) curves are being used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Notwithstanding methodologists agreeing that these should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives We aimed to compare the different approaches used with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of differences on conclusions in responder analyses. Methods Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach that is based on the addition of the sums of squares of 1-sensitivity and 1-specificity. Results There can be divergence in the thresholds chosen by different estimators. The cut-point selected by different estimators is dependent on the relationship between the cut-points in ROC space and the different contours described by the estimators. In particular, asymmetry and the number of possible cut-points affects threshold selection. Conclusion Choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the cut-point chosen by the sum of squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
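
    The article's preferred estimator, choosing the cut-point that minimises the sum of squares of 1-sensitivity and 1-specificity (i.e., the point closest to the top-left corner of ROC space), can be sketched as follows with hypothetical scores and labels:

        import numpy as np

        def closest_to_corner(scores, labels, cutpoints):
            """Return the cut-point minimising (1-sens)^2 + (1-spec)^2."""
            best, best_d2 = None, np.inf
            for c in cutpoints:
                pred = scores >= c
                sens = np.mean(pred[labels == 1])
                spec = np.mean(~pred[labels == 0])
                d2 = (1 - sens) ** 2 + (1 - spec) ** 2
                if d2 < best_d2:
                    best, best_d2 = c, d2
            return best

        rng = np.random.default_rng(3)
        labels = rng.integers(0, 2, 200)                      # hypothetical outcomes
        scores = labels * 1.0 + rng.normal(0.0, 1.0, 200)     # hypothetical scores
        print(closest_to_corner(scores, labels, np.unique(scores)))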

  3. A simple and robust method for artifacts correction on X-ray microtomography images

    NASA Astrophysics Data System (ADS)

    Timofey, Sizonenko; Marina, Karsanina; Dina, Gilyazetdinova; Irina, Bayuk; Kirill, Gerke

    2017-04-01

    X-ray microtomography images of rock material often exhibit several kinds of distortion due to different causes such as X-ray attenuation, beam hardening, and irregularity in the distribution of liquid/solid phases. Further kinds of distortion can arise from subsequent image processing and from stitching of images from different measurements. Beam hardening is a well-known and well-studied distortion which is relatively easy to describe, fit and correct using a number of equations. However, this is not the case for other grey-scale intensity distortions. Shading caused by the irregular distribution of liquid phases, incorrect scanner operation or parameter choice, as well as numerous artefacts from mathematical reconstruction from projections, including stitching from separate scans, cannot be described using a single mathematical model. To correct grey-scale intensities on large 3D images we developed a software package. The traditional method for removing beam hardening [1] was modified in order to find the centre of distortion. The main contribution of this work is the development of a method for arbitrary image correction. This method is based on fitting the distortion with Bezier curves using the image histogram. The distortion along the image is represented by a number of Bezier curves and one base line that characterizes the natural distribution of grey values along the image. All of these curves are set manually by the operator. We have tested our approaches on different X-ray microtomography images of porous media. Arbitrary correction removes all principal distortions. After correction the images were binarized, with a pore network subsequently extracted. An equal distribution of pore-network elements along the image was the criterion used to verify the proposed technique for correcting grey-scale intensities. [1] Iassonov, P. and Tuller, M., 2010. Application of segmentation for correction of intensity bias in X-ray computed tomography images. Vadose Zone Journal, 9(1), pp. 187-191.
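
    Evaluating the operator-set Bezier curves can be done with de Casteljau's algorithm; the control points below are hypothetical placeholders for values an operator would choose from the image histogram:

        import numpy as np

        def bezier(ctrl, ts):
            """Evaluate a Bezier curve at parameters ts by de Casteljau."""
            ctrl = np.asarray(ctrl, dtype=float)
            out = []
            for t in np.atleast_1d(ts):
                pts = ctrl.copy()
                while len(pts) > 1:  # repeated linear interpolation
                    pts = (1.0 - t) * pts[:-1] + t * pts[1:]
                out.append(pts[0])
            return np.array(out)

        # (position along image, grey-value correction factor), hypothetical
        ctrl = [(0.0, 1.00), (0.3, 0.85), (0.7, 0.95), (1.0, 1.00)]
        profile = bezier(ctrl, np.linspace(0.0, 1.0, 101))  # distortion profile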

  4. Field curvature correction method for ultrashort throw ratio projection optics design using an odd polynomial mirror surface.

    PubMed

    Zhuang, Zhenfeng; Chen, Yanting; Yu, Feihong; Sun, Xiaowei

    2014-08-01

    This paper presents a field curvature correction method for designing an ultrashort throw ratio (TR) projection lens for an imaging system. The projection lens is composed of several refractive optical elements and an odd polynomial mirror surface. A curved image is formed, in a direction away from the odd polynomial mirror surface, by the refractive optical elements from the image formed on the digital micromirror device (DMD) panel, and the curved image formed is its virtual image. The odd polynomial mirror surface then enlarges the curved image, and a plane image is formed on the screen. Based on the relationship between the chief ray from the exit pupil of each field of view (FOV) and the corresponding prescribed position on the screen, the initial profile of the freeform mirror surface is calculated using segments of hyperbolas according to the law of reflection. For further optimization, a high-order odd polynomial surface is used to express the freeform mirror surface through a least-squares fitting method. As an example, an ultrashort TR projection lens that realizes projection onto a large 50 in. screen at a distance of only 510 mm is presented. The optical performance of the designed projection lens is analyzed by the ray-tracing method. Results show that the designed ultrashort TR projection lens achieves a modulation transfer function of over 60% at 0.5 cycles/mm for all optimization fields, with an f-number of 2.0, a 126° full FOV, <1% distortion, and a TR of 0.46. Moreover, comparing the proposed projection lens' optical specifications to those of traditional projection lenses, aspheric mirror projection lenses, and conventional short TR projection lenses indicates that this projection lens has the advantages of an ultrashort TR, a low f-number, a wide full FOV, and small distortion.

  5. Dealing with non-unique and non-monotonic response in particle sizing instruments

    NASA Astrophysics Data System (ADS)

    Rosenberg, Phil

    2017-04-01

    A number of instruments used as de facto standards for measuring particle size distributions are actually incapable of uniquely determining the size of an individual particle, due to non-unique or non-monotonic response functions. Optical particle counters have a non-monotonic response due to oscillations in the Mie response curves, especially for large aerosol particles and small cloud droplets. Scanning mobility particle sizers respond identically to two particles where the ratio of particle size to particle charge is approximately the same. Images of two differently sized cloud or precipitation particles taken by an optical array probe can have similar dimensions or shadowed areas depending upon where they are in the imaging plane. A number of methods exist to deal with these issues, including assuming that positive and negative errors cancel, smoothing response curves, integrating regions in measurement space before conversion to size space, and matrix inversion. Matrix inversion (also called kernel inversion) has the advantage that it determines the size distribution which best matches the observations, given specific information about the instrument (a matrix which specifies the probability that a particle of a given size will be measured in a given instrument size bin). In this way it maximises use of the information in the measurements. However, this technique can be confused by poor counting statistics, which can cause erroneous results and negative concentrations; also, an effective method for propagating uncertainties is yet to be published or routinely implemented. Here we present a new alternative which overcomes these issues. We use Bayesian methods to determine the probability that a given size distribution is correct given a set of instrument data, and then we use Markov chain Monte Carlo methods to sample this many-dimensional probability distribution function to determine the expectation and (co)variances, hence providing a best guess and an uncertainty for the size distribution which includes contributions from the non-unique response curve and counting statistics, and which can propagate calibration uncertainties.

  6. Initial Abstraction and Curve Numbers in a Semiarid Watershed in Southeastern Arizona

    EPA Science Inventory

    The Soil Conservation Service curve number estimates of direct runoff from rainfall for semiarid catchments can be inaccurate. Investigation of the Walnut Gulch Experimental Watershed (Southeastern Arizona) and its 10 nested catchments determined that the inaccuracy is due to an ...

  7. 50 CFR 665.799 - Area restrictions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (fm) (91.5-m) curve at Jarvis, Howland, and Baker Islands, and Kingman Reef; as depicted on National Ocean Survey Chart Numbers 83116 and 83153; (2) Landward of the 50-fm (91.5-m) curve around Rose Atoll, as depicted on National Ocean Survey Chart Number 83484. ...

  8. 50 CFR 665.799 - Area restrictions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (fm) (91.5-m) curve at Jarvis, Howland, and Baker Islands, and Kingman Reef; as depicted on National Ocean Survey Chart Numbers 83116 and 83153; (2) Landward of the 50-fm (91.5-m) curve around Rose Atoll, as depicted on National Ocean Survey Chart Number 83484. ...

  9. Learning curve evaluation using cumulative summation analysis-a clinical example of pediatric robot-assisted laparoscopic pyeloplasty.

    PubMed

    Cundy, Thomas P; Gattas, Nicholas E; White, Alan D; Najmaldin, Azad S

    2015-08-01

    The cumulative summation (CUSUM) method for learning curve analysis remains under-utilized in the surgical literature in general, and is described in only a small number of publications within the field of pediatric surgery. This study introduces the CUSUM analysis technique and applies it to evaluate the learning curve for pediatric robot-assisted laparoscopic pyeloplasty (RP). Clinical data were prospectively recorded for consecutive pediatric RP cases performed by a single surgeon. CUSUM charts and tests were generated for set-up time, docking time, console time, operating time, total operating room time, and postoperative complications. Conversions and avoidable operating room delay were separately evaluated with respect to case experience. Comparisons between case experience and time-based outcomes were assessed using the Student's t-test and ANOVA for bi-phasic and multi-phasic learning curves, respectively. Comparison between case experience and complication frequency was assessed using the Kruskal-Wallis test. A total of 90 RP cases were evaluated. The learning curve transitioned beyond the learning phase at cases 10, 15, 42, 57, and 58 for set-up time, docking time, console time, operating time, and total operating room time, respectively. All comparisons of mean operating times between the learning phase and subsequent phases were statistically significant (P < 0.001 to 0.01). No significant difference was observed between case experience and frequency of postoperative complications (P = 0.125), although the CUSUM chart demonstrated a directional change in slope for the last 12 cases, in which there were high proportions of re-do cases and patients <6 months of age. The CUSUM method has a valuable role in learning curve evaluation and outcome quality monitoring. In applying this statistical technique to the largest reported single-surgeon series of pediatric RP, we demonstrate numerous distinctly shaped learning curves and well-defined learning-phase transition points. Copyright © 2015 Elsevier Inc. All rights reserved.
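
    A CUSUM chart of the kind described plots the running sum of deviations of each case's outcome from a reference value; a sustained change in slope marks the learning-phase transition. A minimal sketch (operating times and target are invented for illustration):

      import numpy as np

      op_times = np.array([190, 185, 200, 170, 160, 150, 155, 140, 138, 135])  # min, hypothetical
      target = 160.0                       # reference value, e.g. series mean or a benchmark

      cusum = np.cumsum(op_times - target)
      for case, c in enumerate(cusum, start=1):
          print(f"case {case:2d}: CUSUM = {c:+6.0f} min")
      # A change from a rising to a falling CUSUM marks the end of the learning phase.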

  10. CRITICAL CURVES AND CAUSTICS OF TRIPLE-LENS MODELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daněk, Kamil; Heyrovský, David, E-mail: kamil.danek@utf.mff.cuni.cz, E-mail: heyrovsky@utf.mff.cuni.cz

    2015-06-10

    Among the 25 planetary systems detected up to now by gravitational microlensing, there are two cases of a star with two planets, and two cases of a binary star with a planet. Other, yet undetected types of triple lenses include triple stars or stars with a planet with a moon. The analysis and interpretation of such events is hindered by the lack of understanding of essential characteristics of triple lenses, such as their critical curves and caustics. We present here analytical and numerical methods for mapping the critical-curve topology and caustic cusp number in the parameter space of n-point-mass lenses. We apply the methods to the analysis of four symmetric triple-lens models, and obtain altogether 9 different critical-curve topologies and 32 caustic structures. While these results include various generic types, they represent just a subset of all possible triple-lens critical curves and caustics. Using the analyzed models, we demonstrate interesting features of triple lenses that do not occur in two-point-mass lenses. We show an example of a lens that cannot be described by the Chang–Refsdal model in the wide limit. In the close limit we demonstrate unusual structures of primary and secondary caustic loops, and explain the conditions for their occurrence. In the planetary limit we find that the presence of a planet may lead to a whole sequence of additional caustic metamorphoses. We show that a pair of planets may change the structure of the primary caustic even when placed far from their resonant position at the Einstein radius.

  11. Similarity curve in solidification process of latent-heat energy storage unit with longitudinally straight fins; Part 1, Effect of Stefan number on applicability of similarity rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaino, Koji

    1994-09-01

    Similarity curves for different Biot numbers are known to become indistinguishable with decreasing Stefan number; in other words, the similarity rule becomes more applicable for smaller Stefan number. In such a finned-tube-type storage unit as treated in this study, it has been found that the effect of Stefan number on the similarity curve varies with the number of fins. Sensible heat liberated during the solidification process has been calculated individually in a phase-change material with a heat-transfer tube and fins, and represented as a function of the frozen fraction for two specified values of Biot number, 0.1 and 1000, under specified conditions of Stefan number and the number of fins. The latent-heat contribution to heat flow out of the storage unit has been examined in comparison with the sensible-heat contribution. The latent- and sensible-heat contributions are almost inversely related. This inverse relationship reduces the effect of the Stefan number on the applicability of the similarity rule.

  12. Holomorphic curves in surfaces of general type.

    PubMed Central

    Lu, S S; Yau, S T

    1990-01-01

    This note answers some questions on holomorphic curves and their distribution in an algebraic surface of positive index. More specifically, we exploit the existence of natural negatively curved "pseudo-Finsler" metrics on a surface S of general type whose Chern numbers satisfy c₁² > 2c₂ to show that a holomorphic map of a Riemann surface to S whose image is not in any rational or elliptic curve must satisfy a distance decreasing property with respect to these metrics. We show as a consequence that such a map extends over isolated punctures. So assuming that the Riemann surface is obtained from a compact one of genus q by removing a finite number of points, then the map is actually algebraic and defines a compact holomorphic curve in S. Furthermore, the degree of the curve with respect to a fixed polarization is shown to be bounded above by a multiple of q - 1 irrespective of the map. PMID:11607050

  13. Poster — Thur Eve — 69: Computational Study of DVH-guided Cancer Treatment Planning Optimization Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghomi, Pooyan Shirvani; Zinchenko, Yuriy

    2014-08-15

    Purpose: To compare methods to incorporate the Dose Volume Histogram (DVH) curves into the treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one tumor were involved in the treatment planning. The OARs and tumor were discretized into a total of 50,221 voxels. The number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches to replicate the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds a promise of substantial computational speed up.

  14. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    PubMed

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.

  15. Estimation of the Number of Compartments Associated With the Apparent Diffusion Coefficient in MRI: The Theoretical and Experimental Investigations.

    PubMed

    Ashoor, Mansour; Khorshidi, Abdollah

    2016-03-01

    The goal of the present study was to estimate the number of compartments and the mean apparent diffusion coefficient (ADC) value with the use of the DWI signal curve. A useful new mathematic model that includes internal correlation among subcompartments with a distinct number of compartments was proposed. The DWI signal was simulated to estimate the approximate association between the number of subcompartments and the molecular density, with density corresponding to the ratio of the ADC values of the compartments, as determined using the Monte Carlo method. Various factors, such as energy depletion, temperature, intracellular water accumulation, changes in the tortuosity of the extracellular diffusion paths, and changes in cell membrane permeability, have all been implicated as factors contributing to changes in the ADC of water (ADCw); therefore, one may consider them as pseudocompartments in the new model proposed in this study. The lower the coefficient is, the lower the contribution of the compartment to the net signal will be. The results of the simulation indicate that when the number of compartments increases, the signal will become significantly lower, because the gradient factor (i.e., the b value) will increase. In other words, the signal curve is approximately linear at all b values when the number of compartments in which the tissues have been severely damaged is low; however, when the number of compartments is high, the curve will become constant at high b values, and the perfusion parameters will prevail on the diffusion parameters at low b values. Therefore, normal tissues will be investigated when the number of compartments and the ADC values are high and the b values are low, whereas damaged tissues will be evaluated when the number of compartments and the ADC values are low and the b values are high. The present study investigates damaged tissues at high b values for which the effect of eddy currents will also be compensated. These b values will probably be used in functional MRI.
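
    The multicompartment picture sketched above implies a signal of the form S(b) = S0 Σᵢ fᵢ exp(-b·ADCᵢ). The sketch below illustrates how additional slow compartments flatten the curve at high b values; the volume fractions and ADC values are hypothetical.

      import numpy as np

      def dwi_signal(b, fractions, adcs, s0=1.0):
          """Multi-exponential DWI signal: S(b) = S0 * sum_i f_i * exp(-b * ADC_i)."""
          b = np.asarray(b, dtype=float)
          return s0 * sum(f * np.exp(-b * d) for f, d in zip(fractions, adcs))

      b_values = np.array([0, 250, 500, 1000, 2000, 3000])          # s/mm^2
      one_comp = dwi_signal(b_values, [1.0], [1.0e-3])
      three_comp = dwi_signal(b_values, [0.5, 0.3, 0.2], [2.0e-3, 0.8e-3, 0.2e-3])
      print(one_comp)    # near log-linear decay in b
      print(three_comp)  # slowest compartment dominates, curve levels off at high b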

  16. Inferring heat recirculation and albedo for exoplanetary atmospheres: Comparing optical phase curves and secondary eclipse data

    NASA Astrophysics Data System (ADS)

    von Paris, P.; Gratier, P.; Bordé, P.; Selsis, F.

    2016-03-01

    Context. Basic atmospheric properties, such as albedo and heat redistribution between day- and nightsides, have been inferred for a number of planets using observations of secondary eclipses and thermal phase curves. Optical phase curves have not yet been used to constrain these atmospheric properties consistently. Aims: We model previously published phase curves of CoRoT-1b, TrES-2b, and HAT-P-7b, and infer albedos and recirculation efficiencies. These are then compared to previous estimates based on secondary eclipse data. Methods: We use a physically consistent model to construct optical phase curves. This model takes Lambertian reflection, thermal emission, ellipsoidal variations, and Doppler boosting, into account. Results: CoRoT-1b shows a non-negligible scattering albedo (0.11 < AS < 0.3 at 95% confidence) as well as small day-night temperature contrasts, which are indicative of moderate to high re-distribution of energy between dayside and nightside. These values are contrary to previous secondary eclipse and phase curve analyses. In the case of HAT-P-7b, model results suggest a relatively high scattering albedo (AS ≈ 0.3). This confirms previous phase curve analysis; however, it is in slight contradiction to values inferred from secondary eclipse data. For TrES-2b, both approaches yield very similar estimates of albedo and heat recirculation. Discrepancies between recirculation and albedo values as inferred from secondary eclipse and optical phase curve analyses might be interpreted as a hint that optical and IR observations probe different atmospheric layers, hence temperatures.

  17. How are flood risk estimates affected by the choice of return-periods?

    NASA Astrophysics Data System (ADS)

    Ward, P. J.; de Moel, H.; Aerts, J. C. J. H.

    2011-12-01

    Flood management is more and more adopting a risk based approach, whereby flood risk is the product of the probability and consequences of flooding. One of the most common approaches in flood risk assessment is to estimate the damage that would occur for floods of several exceedance probabilities (or return periods), to plot these on an exceedance probability-loss curve (risk curve) and to estimate risk as the area under the curve. However, there is little insight into how the selection of the return-periods (which ones and how many) used to calculate risk actually affects the final risk calculation. To gain such insights, we developed and validated an inundation model capable of rapidly simulating inundation extent and depth, and dynamically coupled this to an existing damage model. The method was applied to a section of the River Meuse in the southeast of the Netherlands. Firstly, we estimated risk based on a risk curve using yearly return periods from 2 to 10 000 yr (€ 34 million p.a.). We found that the overall risk is greatly affected by the number of return periods used to construct the risk curve, with over-estimations of annual risk between 33% and 100% when only three return periods are used. In addition, binary assumptions on dike failure can have a large effect (a factor two difference) on risk estimates. Also, the minimum and maximum return period considered in the curve affects the risk estimate considerably. The results suggest that more research is needed to develop relatively simple inundation models that can be used to produce large numbers of inundation maps, complementary to more complex 2-D-3-D hydrodynamic models. It also suggests that research into flood risk could benefit by paying more attention to the damage caused by relatively high probability floods.
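
    Annual risk as "the area under the risk curve" is the integral of damage over exceedance probability; the sketch below shows how a trapezoidal estimate shifts when only a few return periods are used (the damage figures are hypothetical, not the study's data).

      import numpy as np

      def annual_risk(return_periods, damages):
          """Expected annual damage: area under the exceedance-probability/loss curve."""
          p = 1.0 / np.asarray(return_periods, dtype=float)  # exceedance probabilities
          d = np.asarray(damages, dtype=float)
          order = np.argsort(p)                              # integrate over increasing p
          return np.trapz(d[order], p[order])

      rp_dense = [2, 5, 10, 25, 50, 100, 250, 500, 1000, 10000]
      dmg_dense = [0, 5, 20, 60, 120, 200, 320, 400, 480, 600]   # million EUR
      print(annual_risk(rp_dense, dmg_dense))                    # many return periods
      print(annual_risk([10, 100, 1000], [20, 200, 480]))        # only three: biased estimate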

  18. Spectral Measurement of Watershed Coefficients in the Southern Great Plains

    NASA Technical Reports Server (NTRS)

    Blanchard, B. J. (Principal Investigator); Bausch, W.

    1978-01-01

    The author has identified the following significant results. It was apparent that the spectral calibration of runoff curve numbers cannot be achieved on watersheds where significant areas of timber lie within the drainage area. The absorption of light by wet soil conditions restricts differentiation of watersheds with regard to watershed runoff curve numbers. It appeared that the predominant factor influencing the classification of watershed runoff curve numbers was the difference in soil color and its associated reflectance when dry. In regions where vegetation grows throughout the year, where wet surface conditions prevail, or where watersheds are timbered, there is little hope of classifying runoff potential with visible light alone.

  19. Finding Exoplanets Using Point Spread Function Photometry on Kepler Data

    NASA Astrophysics Data System (ADS)

    Amaro, Rachael Christina; Scolnic, Daniel; Montet, Ben

    2018-01-01

    The Kepler Mission has been able to identify over 5,000 exoplanet candidates using mostly aperture photometry. Despite the impressive number of discoveries, a large portion of Kepler’s data set is neglected due to limitations using aperture photometry on faint sources in crowded fields. We present an alternate method that overcomes those restrictions — Point Spread Function (PSF) photometry. This powerful tool, which is already used in supernova astronomy, was used for the first time on Kepler Full Frame Images, rather than just looking at the standard light curves. We present light curves for stars in our data set and demonstrate that PSF photometry can at least get down to the same photometric precision as aperture photometry. As a check for the robustness of this method, we change small variables (stamp size, interpolation amount, and noise correction) and show that the PSF light curves maintain the same repeatability across all combinations for one of our models. We also present our progress in the next steps of this project, including the creation of a PSF model from the data itself and applying the model across the entire data set at once.
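
    In PSF photometry the flux comes from fitting a model of the point spread function to the pixel stamp rather than summing counts in an aperture. A minimal sketch with a Gaussian PSF as a stand-in (the real Kepler PSF is not Gaussian, so this only illustrates the machinery):

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian_psf(coords, flux, x0, y0, sigma, background):
          x, y = coords
          norm = flux / (2 * np.pi * sigma**2)
          return (norm * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
                  + background).ravel()

      # Synthetic 11x11 pixel stamp of a faint star plus noise.
      y, x = np.mgrid[0:11, 0:11]
      rng = np.random.default_rng(1)
      truth = gaussian_psf((x, y), flux=500.0, x0=5.2, y0=4.8, sigma=1.3, background=10.0)
      stamp = truth + rng.normal(scale=2.0, size=truth.size)

      popt, pcov = curve_fit(gaussian_psf, (x, y), stamp, p0=(400.0, 5.0, 5.0, 1.0, 0.0))
      print(f"fitted flux = {popt[0]:.1f} +/- {np.sqrt(pcov[0, 0]):.1f}")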

  20. Curve numbers for no-till: field data versus standard tables

    USDA-ARS?s Scientific Manuscript database

    The Curve Number procedure developed by Soil Conservation Service (Now Natural Resources Conservation Service) in the mid-1950s for estimating direct runoff from rainstorms has not been extensively tested in cropping systems under no-till. Analysis of CNs using the frequency matching and asymptotic ...

  1. A stage-normalized function for the synthesis of stage-discharge relations for the Colorado River in Grand Canyon, Arizona

    USGS Publications Warehouse

    Wiele, Stephen M.; Torizzo, Margaret

    2003-01-01

    A method was developed to construct stage-discharge rating curves for the Colorado River in Grand Canyon, Arizona, using two stage-discharge pairs and a stage-normalized rating curve. Stage-discharge rating curves formulated with the stage-normalized curve method are compared to (1) stage-discharge rating curves for six temporary stage gages and two streamflow-gaging stations developed by combining stage records with modeled unsteady flow; (2) stage-discharge rating curves developed from stage records and discharge measurements at three streamflow-gaging stations; and (3) stages surveyed at known discharges at the Northern Arizona Sand Bar Studies sites. The stage-normalized curve method shows good agreement with field data when the discharges used in the construction of the rating curves are at least 200 cubic meters per second apart. Predictions of stage using the stage-normalized curve method are also compared to predictions of stage from a steady-flow model.

  2. Evaluating the need for surface treatments to reduce crash frequency on horizontal curves.

    DOT National Transportation Integrated Search

    2013-10-01

    The application of high-friction surface treatments at appropriate horizontal curve locations throughout the : state has the potential to improve driver performance and reduce the number of crashes experienced at : horizontal curves. These treatments...

  3. Velocimetry modalities for secondary flows in a curved artery test section

    NASA Astrophysics Data System (ADS)

    Bulusu, Kartik V.; Elkins, Christopher J.; Banko, Andrew J.; Plesniak, Michael W.; Eaton, John K.

    2014-11-01

    Secondary flow structures arise due to curvature-related centrifugal forces and pressure imbalances. These flow structures influence wall shear stress and alter blood particle residence times. Magnetic resonance velocimetry (MRV) and particle image velocimetry (PIV) techniques were implemented independently, under the same physiological inflow conditions (Womersley number = 4.2). A 180-degree curved artery test section with curvature ratio (1/7) was used as an idealized geometry for curved arteries. Newtonian blood analog fluids were used for both MRV and PIV experiments. The MRV technique offers the advantage of three-dimensional velocity field acquisition without requiring optical access or flow markers. Phase-averaged, two-dimensional PIV data at certain cross-sectional planes and inflow phases were compared to phase-averaged MRV data to facilitate the characterization of large-scale, Dean-type vortices. Coherent-structure detection methods, including a novel wavelet decomposition-based approach to characterizing these flow structures, were applied to both the PIV and MRV data. The overarching goal of this study is the detection of characteristic, three-dimensional shapes of secondary flow structures using MRV techniques, with guidance obtained from high-fidelity, 2D PIV measurements. This material is based in part upon work supported by the National Science Foundation under Grant Number CBET-0828903, and GW Center for Biomimetics and Bioinspired Engineering (COBRE).

  4. Pressure gradient effects on heat transfer to reusable surface insulation tile-array gaps

    NASA Technical Reports Server (NTRS)

    Throckmorton, D. A.

    1975-01-01

    An experimental investigation was performed to determine the effect of pressure gradient on the heat transfer within space shuttle reusable surface insulation (RSI) tile-array gaps under thick, turbulent boundary-layer conditions. Heat-transfer and pressure measurements were obtained on a curved array of full-scale simulated RSI tiles in a tunnel-wall boundary layer at a nominal free-stream Mach number and free-stream Reynolds numbers. Transverse pressure gradients of varying degree were induced over the model surface by rotating the curved array with respect to the flow. Definition of the tunnel-wall boundary-layer flow was obtained by measurement of boundary-layer pitot pressure profiles, wall pressure, and heat transfer. Flat-plate heat-transfer data were correlated and a method was derived for prediction of heat transfer to a smooth curved surface in the highly three-dimensional tunnel-wall boundary-layer flow. Pressure on the floor of the RSI tile-array gap followed the trends of the external surface pressure. Heat transfer to the surface immediately downstream of a transverse gap is higher than that for a smooth surface at the same location. Heating to the wall of a transverse gap, and immediately downstream of it, at its intersection with a longitudinal gap is significantly greater than that for the simple transverse gap.

  5. On the reduction of occultation light curves. [stellar occultations by planets

    NASA Technical Reports Server (NTRS)

    Wasserman, L.; Veverka, J.

    1973-01-01

    The two basic methods of reducing occultation light curves - curve fitting and inversion - are reviewed and compared. It is shown that the curve fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.

  6. Applying Active Learning to Assertion Classification of Concepts in Clinical Text

    PubMed Central

    Chen, Yukun; Mani, Subramani; Xu, Hua

    2012-01-01

    Supervised machine learning methods for clinical natural language processing (NLP) research require a large number of annotated samples, which are very expensive to build because of the involvement of physicians. Active learning, an approach that actively samples from a large pool, provides an alternative solution. Its major goal in classification is to reduce the annotation effort while maintaining the quality of the predictive model. However, few studies have investigated its uses in clinical NLP. This paper reports an application of active learning to a clinical text classification task: to determine the assertion status of clinical concepts. The annotated corpus for the assertion classification task in the 2010 i2b2/VA Clinical NLP Challenge was used in this study. We implemented several existing and newly developed active learning algorithms and assessed their uses. The outcome is reported in the global ALC score, based on the Area under the average Learning Curve of the AUC (Area Under the Curve) score. Results showed that when the same number of annotated samples was used, active learning strategies could generate better classification models (best ALC – 0.7715) than the passive learning method (random sampling) (ALC – 0.7411). Moreover, to achieve the same classification performance, active learning strategies required fewer samples than the random sampling method. For example, to achieve an AUC of 0.79, the random sampling method used 32 samples, while our best active learning algorithm required only 12 samples, a reduction of 62.5% in manual annotation effort. PMID:22127105
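
    A minimal pool-based uncertainty-sampling loop illustrating the active learning idea described; the data, classifier, and query rule are generic stand-ins, not the authors' algorithms.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression

      X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
      # Seed set: a few labeled samples from each class.
      labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
      pool = [i for i in range(len(X)) if i not in labeled]

      for _ in range(20):
          clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
          proba = clf.predict_proba(X[pool])[:, 1]
          # Uncertainty sampling: query the instance closest to the decision boundary.
          query = pool[int(np.argmin(np.abs(proba - 0.5)))]
          labeled.append(query)            # "annotate" the queried sample
          pool.remove(query)

      print(f"{len(labeled)} labeled samples after 20 active learning queries")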

  7. Viscoelastic flow in rotating curved pipes

    NASA Astrophysics Data System (ADS)

    Chen, Yitung; Chen, Huajun; Zhang, Jinsuo; Zhang, Benzhao

    2006-08-01

    Fully developed viscoelastic flows in rotating curved pipes with circular cross section are investigated theoretically and numerically employing the Oldroyd-B fluid model. Based on Dean's approximation, a perturbation solution up to the secondary order is obtained. The governing equations are also solved numerically by the finite volume method. The theoretical and numerical solutions agree with each other very well. The results indicate that the rotation, as well as the curvature and elasticity, plays an important role in affecting the friction factor, the secondary flow pattern and intensity. The co-rotation enhances effects of curvature and elasticity on the secondary flow. For the counter-rotation, there is a critical rotational number RΩ', which can make the effect of rotation counteract the effect of curvature and elasticity. Complicated flow behaviors are found at this value. For the relative creeping flow, RΩ' can be estimated according to the expression RΩ'=-4Weδ. Effects of curvature and elasticity at different rotational numbers on both relative creeping flow and inertial flow are also analyzed and discussed.

  8. THE INFLUENCE OF RADIOPHOSPHORUS THERAPEUTICS ON THE PERIPHERAL BLOOD IN THE CASE OF POLYCYTHEMIA AND EARLY IDENTIFICATION OF BLOOD-PICTURE ALTERATIONS (in German)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graul, E.H.; Damminger, K.

    1961-10-01

    The alterations of the peripheral blood picture of 17 patients who were treated with radiophosphorus (³²P) for polycythemia are described. Within the first 24 hours after the intravenous injection of 5 mc of ³²P, the cell numbers in the capillary blood diminish. The effect is most obvious for the thrombocyte number. By electronic counting and measuring, this radiation effect on the blood cells can be represented by a curve, which is obtained in the short time of 10 sec with a precision better than 1%. The alteration of the distribution curve of the erythrocytes is striking; it seems to indicate an elimination of microcytic forms from the peripheral blood and, thereby, a normalization. The importance of the method for use in the event of a catastrophe, since it allows detection of a radiation exposure of less than 10 r, is pointed out. (auth)

  9. Identification of trace additives in polymer materials by attenuated total reflection Fourier transform infrared mapping coupled with multivariate curve resolution

    NASA Astrophysics Data System (ADS)

    Li, Qian; Tang, Yongjiao; Yan, Zhiwei; Zhang, Pudun

    2017-06-01

    Although multivariate curve resolution (MCR) has been applied to the analysis of Fourier transform infrared (FTIR) imaging, it is still problematic to determine the number of components. The methods reported to date tend to miss components of low concentration. In this paper a new idea was proposed to resolve this problem. First, the MCR calculation was repeated while increasing the number of components sequentially; then each retrieved pure spectrum of the resulting MCR component was directly compared with a real-world pixel spectrum of locally high concentration in the corresponding MCR map. A component was affirmed only if the characteristic bands of the MCR component were included in its pixel spectrum. This idea was applied to attenuated total reflection (ATR)/FTIR mapping for identifying trace additives in blind polymer materials, and satisfactory results were acquired. The successful demonstration of this novel approach opens up new possibilities for analyzing additives in polymer materials.
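
    The component-count check described can be mimicked with any MCR-type factorization. The sketch below uses non-negative matrix factorization as a stand-in and compares each retrieved spectrum against the pixel spectrum at its concentration maximum; the unfolded data matrix here is random, purely to show the mechanics.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      D = rng.random((400, 600))          # (n_pixels, n_wavenumbers), hypothetical map

      for n_comp in range(2, 6):          # increase the component count sequentially
          model = NMF(n_components=n_comp, init="nndsvd", max_iter=500)
          C = model.fit_transform(D)      # concentration maps (n_pixels, n_comp)
          S = model.components_           # retrieved pure spectra (n_comp, n_wavenumbers)
          for k in range(n_comp):
              pixel = D[np.argmax(C[:, k])]           # pixel of locally high concentration
              r = np.corrcoef(S[k], pixel)[0, 1]      # do its bands appear in that pixel?
              print(f"{n_comp} components, #{k}: corr with pixel spectrum = {r:.2f}")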

  10. Testing for nonrandom shape similarity between sister cells using automated shape comparison

    NASA Astrophysics Data System (ADS)

    Guo, Monica; Marshall, Wallace F.

    2009-02-01

    Several reports in the biological literature have indicated that when a living cell divides, the two daughter cells have a tendency to be mirror images of each other in terms of their overall cell shape. This phenomenon would be consistent with inheritance of spatial organization from mother cell to daughters. However the published data rely on a small number of examples that were visually chosen, raising potential concerns about inadvertent selection bias. We propose to revisit this issue using automated quantitative shape comparison methods which would have no contribution from the observer and which would allow statistical testing of similarity in large numbers of cells. In this report we describe a first order approach to the problem using rigid curve matching. Using test images, we compare a pointwise correspondence based distance metric with a chamfer matching strategy and find that the latter provides better correspondence and smaller distances between aligned curves, especially when we allow nonrigid deformation of the outlines in addition to rotation.
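
    Chamfer matching scores an alignment by summing, over one outline's points, the distance to the nearest point of the other outline, typically precomputed with a distance transform. A minimal sketch with two hypothetical cell outlines:

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      def chamfer_distance(outline_a, outline_b, shape=(128, 128)):
          """Mean distance from each pixel of outline A to the nearest pixel of outline B."""
          img_b = np.ones(shape, dtype=bool)
          img_b[outline_b[:, 0], outline_b[:, 1]] = False   # zeros mark curve B pixels
          dist_to_b = distance_transform_edt(img_b)         # distance to nearest B pixel
          return dist_to_b[outline_a[:, 0], outline_a[:, 1]].mean()

      theta = np.linspace(0, 2 * np.pi, 200)
      a = np.column_stack((64 + 30 * np.sin(theta), 64 + 25 * np.cos(theta))).astype(int)
      b = np.column_stack((64 + 28 * np.sin(theta), 66 + 27 * np.cos(theta))).astype(int)
      print(f"chamfer distance = {chamfer_distance(a, b):.2f} px")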

  11. Curve Number and Peakflow Responses Following the Cerro Grande Fire on a Small Watershed.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springer, E. P.; Hawkins, Richard H.

    The Curve Number (CN) method is routinely used to estimate runoff and peakflows following forest fires, but there has been essentially no literature on the estimated value and temporal variation of CNs following wildland fires. In May 2000, the Cerro Grande Fire burned the headwaters of the major watersheds that cross Los Alamos National Laboratory, and a stream gauging network presented an opportunity to assess CNs following the fire. Analysis of rainfall-runoff events indicated that the pre-fire watershed response was complacent or limited watershed area contributed to runoff. The post-fire response indicated that the complacent behavior continued so the watershed response was not dramatically changed. Peakflows did increase by 2 orders of magnitude following the fire, and this was hypothesized to be a function of increase in runoff volume and changes in watershed network allowing more efficient delivery of runoff. More observations and analyses following fires are needed to support definition of CNs for post-fire response and mitigation efforts.

  12. A Laboratory Simulation of Urban Runoff and the Potential for Hydrograph Prediction with Curve Numbers

    USDA-ARS?s Scientific Manuscript database

    Urban drainages are mosaics of pervious and impervious surfaces, and prediction of runoff hydrology with a lumped modeling approach using the NRCS curve number may be appropriate. However, the prognostic capability of such a lumped approach is complicated by routing and connectivity amongst infiltra...

  13. Uncertainty Analysis Principles and Methods

    DTIC Science & Technology

    2007-09-01

    ... error source. The Data Processor converts binary coded numbers to values, performs D/A curve fitting and applies any correction factors that may be... describes the stages or modules involved in the measurement process. We now need to identify all relevant error sources and develop the mathematical...

  14. Stormwater runoff in watersheds: a system for predicting impacts of development and climate change

    Treesearch

    Ann Blair; Denise Sanger; Susan Lovelace

    2016-01-01

    The Stormwater Runoff Modeling System (SWARM) enhances understanding of impacts of land-use and climate change on stormwater runoff in watersheds. We developed this single-event system based on the US Department of Agriculture, Natural Resources Conservation Service curve number and unit hydrograph methods. We tested SWARM using US Geological Survey discharge and rain data...

  15. Task Design for Students' Work with Basic Theory in Analysis: The Cases of Multidimensional Differentiability and Curve Integrals

    ERIC Educational Resources Information Center

    Gravesen, Katrine Frovin; Grønbaek, Niels; Winsløw, Carl

    2017-01-01

    We investigate the challenges students face in the transition from calculus courses, focusing on methods related to the analysis of real valued functions given in closed form, to more advanced courses on analysis where focus is on theoretical structure, including proof. We do so based on task design aiming for a number of generic potentials for…

  16. Optimal joint detection and estimation that maximizes ROC-type curves

    PubMed Central

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.

    2017-01-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544

  17. Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.

    PubMed

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K

    2016-09-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.

  18. Preventing conflicts among bid curves used with transactive controllers in a market-based resource allocation system

    DOEpatents

    Fuller, Jason C.; Chassin, David P.; Pratt, Robert G.; Hauer, Matthew; Tuffner, Francis K.

    2017-03-07

    Disclosed herein are representative embodiments of methods, apparatus, and systems for distributing a resource (such as electricity) using a resource allocation system. One of the disclosed embodiments is a method for operating a transactive thermostatic controller configured to submit bids to a market-based resource allocation system. According to the exemplary method, a first bid curve is determined, the first bid curve indicating a first set of bid prices for corresponding temperatures and being associated with a cooling mode of operation for a heating and cooling system. A second bid curve is also determined, the second bid curve indicating a second set of bid prices for corresponding temperatures and being associated with a heating mode of operation for a heating and cooling system. In this embodiment, the first bid curve, the second bid curve, or both the first bid curve and the second bid curve are modified to prevent overlap of any portion of the first bid curve and the second bid curve.
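
    One simple way to satisfy the non-overlap requirement is to clamp one curve away from the other wherever the two would cross. The sketch below is only an illustration of the constraint on hypothetical bid curves, not the patented logic.

      import numpy as np

      temps = np.linspace(18.0, 26.0, 9)              # degrees C
      cooling_bids = 0.02 * (temps - 18.0) + 0.05     # price rises as it gets hotter
      heating_bids = 0.02 * (26.0 - temps) + 0.04     # price rises as it gets colder

      # Enforce a strict gap so no temperature yields a heating bid >= cooling bid.
      gap = 0.005
      overlap = heating_bids >= cooling_bids - gap
      heating_bids[overlap] = cooling_bids[overlap] - gap
      print(bool(np.all(heating_bids < cooling_bids)))  # True: curves no longer overlap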

  19. Seniority Number in Valence Bond Theory.

    PubMed

    Chen, Zhenhua; Zhou, Chen; Wu, Wei

    2015-09-08

    In this work, a hierarchy of valence bond (VB) methods based on the concept of seniority number, defined as the number of singly occupied orbitals in a determinant or an orbital configuration, is proposed and applied to the studies of the potential energy curves (PECs) of H8, N2, and C2 molecules. It is found that the seniority-based VB expansion converges more rapidly toward the full configuration interaction (FCI) or complete active space self-consistent field (CASSCF) limit and produces more accurate PECs with smaller nonparallelity errors than its molecular orbital (MO) theory-based analogue. Test results reveal that the nonorthogonal orbital-based VB theory provides a reverse but more efficient way to truncate the complete active Hilbert space by seniority numbers.
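
    The seniority number itself is simple to evaluate: count the singly occupied orbitals in a determinant. A minimal sketch using occupation-number vectors:

      def seniority(occupations):
          """Seniority = number of singly occupied spatial orbitals in a determinant."""
          return sum(1 for n in occupations if n == 1)

      print(seniority([2, 2, 1, 1, 0, 0]))  # 2: two unpaired electrons
      print(seniority([2, 2, 2, 0, 0, 0]))  # 0: closed shell (seniority-zero sector)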

  20. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers.

    PubMed

    Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat

    2008-11-26

    Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Simulation studies showed that repeated 10-fold cross-validation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.
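
    For reference, a decision curve plots net benefit against threshold probability, with net benefit computed directly from predicted probabilities as true positives minus weighted false positives per patient. A minimal sketch with invented outcomes and predictions:

      import numpy as np

      def net_benefit(y_true, y_prob, pt):
          """Net benefit = TP/n - FP/n * pt/(1 - pt) at threshold probability pt."""
          treat = y_prob >= pt
          n = len(y_true)
          tp = np.sum(treat & (y_true == 1))
          fp = np.sum(treat & (y_true == 0))
          return tp / n - fp / n * pt / (1 - pt)

      rng = np.random.default_rng(0)
      y = rng.integers(0, 2, size=500)                    # hypothetical outcomes
      p = np.clip(0.3 * y + 0.7 * rng.random(500), 0, 1)  # hypothetical predictions
      for pt in (0.1, 0.2, 0.3, 0.4):
          print(f"pt = {pt:.1f}: net benefit = {net_benefit(y, p, pt):+.3f}")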

  1. Hough transform method for track finding in center drift chamber

    NASA Astrophysics Data System (ADS)

    Azmi, K. A. Mohammad Kamal; Wan Abdullah, W. A. T.; Ibrahim, Zainol Abidin

    2016-01-01

    The Hough transform is a global tracking method that is expected to offer a faster approach for tracking the circular pattern of an electron moving in the Center Drift Chamber (CDC): each hit point is transformed into a circular curve in parameter space. This paper presents the implementation of the Hough transform method for the reconstruction of tracks in the CDC, using hits generated by random numbers in a C language program. Results from the implementation of this method show a clear peak at the circle-parameter values (xc, yc, rc) corresponding to the circular track of charged particles in the region of the CDC.
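
    For a circle of known radius, each hit votes for every candidate centre at that radius from it; the accumulator maximum then gives the circle parameters. A minimal sketch of that voting scheme (Python here for brevity; the original implementation was in C):

      import numpy as np

      def hough_circle(hits, radius, grid=64, extent=1.0):
          """Vote in (xc, yc) space for a fixed radius; the peak is the best centre."""
          acc = np.zeros((grid, grid))
          phis = np.linspace(0, 2 * np.pi, 360, endpoint=False)
          for x, y in hits:
              # Every centre at distance `radius` from this hit receives one vote.
              xc = ((x - radius * np.cos(phis)) / extent * grid / 2 + grid / 2).astype(int)
              yc = ((y - radius * np.sin(phis)) / extent * grid / 2 + grid / 2).astype(int)
              ok = (xc >= 0) & (xc < grid) & (yc >= 0) & (yc < grid)
              np.add.at(acc, (xc[ok], yc[ok]), 1)
          return np.unravel_index(acc.argmax(), acc.shape)

      # Hypothetical hits on a circle of radius 0.3 centred at (0.1, -0.2).
      t = np.linspace(0, 2 * np.pi, 40)
      hits = np.column_stack((0.1 + 0.3 * np.cos(t), -0.2 + 0.3 * np.sin(t)))
      print(hough_circle(hits, radius=0.3))    # peak cell near the true centre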

  2. Biomechanics of North Atlantic Right Whale Bone: Mandibular Fracture as a Fatal Endpoint for Blunt Vessel-Whale Collision Modeling

    DTIC Science & Technology

    2007-09-01

    Calibration curves for CT number (Hounsfield units) vs. mineral density (g/cc)... Figure 3.4. Calibration curves for CT number (Hounsfield units) vs. apparent density (g/cc)... named Hounsfield units (HU) after Sir Godfrey Hounsfield. The CT number is K([μ - μw]/μw), where K is a magnifying constant, which depends on the make of CT

  3. A systematic evaluation of contemporary impurity correction methods in ITS-90 aluminium fixed point cells

    NASA Astrophysics Data System (ADS)

    da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham

    2017-06-01

    The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) have been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique have been obtained from three separate laboratories. In addition a series of high quality, long duration freezing curves have been obtained for each cell, using three different high quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves were then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.
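
    In the SIE approach, each assayed impurity contributes a temperature correction equal to its concentration multiplied by that element's liquidus slope in the host metal. A minimal sketch; the slopes and mole fractions below are invented, since real values come from thermodynamic data or doping experiments.

      # Sum of Individual Estimates (SIE): dT = sum_i c_i * m_i, with c_i the impurity
      # mole fraction (from GDMS) and m_i the liquidus slope in aluminium.
      liquidus_slopes = {"Si": -940.0, "Fe": -500.0, "Cu": -350.0}  # K per mole fraction, hypothetical
      assay = {"Si": 2.0e-7, "Fe": 1.5e-7, "Cu": 0.8e-7}            # mole fractions, hypothetical

      dT = sum(assay[el] * liquidus_slopes[el] for el in assay)
      print(f"SIE correction to the freezing temperature: {dT * 1e3:+.3f} mK")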

  4. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
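
    The scaling idea can be written compactly: the zero-absorption time-resolved reflectance is reweighted by a Beer-Lambert factor built from the fraction of each time bin that the average photon path spends in each layer. A minimal sketch under those stated assumptions (all inputs hypothetical):

      import numpy as np

      def scale_reflectance(r0, times, mu_a, layer_fractions, n=1.4):
          """Weighted Beer-Lambert scaling of a zero-absorption MC reflectance curve.

          r0              : reflectance per time bin from the mu_a = 0 simulation
          times           : time-bin centres [s]
          mu_a            : absorption coefficient of each layer [1/cm]
          layer_fractions : (n_bins, n_layers) fraction of path time in each layer
          """
          c = 3e10 / n                                 # light speed in tissue [cm/s]
          path = c * np.asarray(times)                 # total path length per bin [cm]
          mu_eff = layer_fractions @ np.asarray(mu_a)  # time-resolved weighted mu_a
          return r0 * np.exp(-mu_eff * path)

      times = np.linspace(50e-12, 2e-9, 40)
      r0 = np.exp(-times / 5e-10)                      # stand-in zero-absorption curve
      fractions = np.column_stack((np.linspace(0.8, 0.4, 40), np.linspace(0.2, 0.6, 40)))
      print(scale_reflectance(r0, times, mu_a=[0.1, 0.3], layer_fractions=fractions)[:5])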

  5. Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.

    PubMed

    Rad, Kamiar Rahnama; Paninski, Liam

    2010-01-01

    Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural error bars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
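
    A minimal stand-in for the idea using scikit-learn's GP regression on binned spike counts; the paper's method handles point-process likelihoods and adaptive smoothness, so this sketch only shows the basic GP-smoothing step on simulated place-field data.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      # Hypothetical data: positions visited by the animal and spike counts per visit.
      pos = rng.uniform(0, 1, size=(300, 2))
      true_rate = 5 * np.exp(-np.sum((pos - 0.5) ** 2, axis=1) / 0.02)  # a "place field"
      counts = rng.poisson(true_rate)

      gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1.0),
                                    normalize_y=True).fit(pos, counts)

      # Evaluate the smoothed rate map, with error bars, on a 25x25 grid.
      g = np.linspace(0, 1, 25)
      grid = np.array([(gx, gy) for gx in g for gy in g])
      rate, std = gp.predict(grid, return_std=True)
      print(rate.reshape(25, 25).max(), std.mean())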

  6. Azimuthal magnetorotational instability with super-rotation

    NASA Astrophysics Data System (ADS)

    Rüdiger, G.; Schultz, M.; Gellert, M.; Stefani, F.

    2018-02-01

    It is demonstrated that the azimuthal magnetorotational instability (AMRI) also works with radially increasing rotation rates contrary to the standard magnetorotational instability for axial fields which requires negative shear. The stability against non-axisymmetric perturbations of a conducting Taylor-Couette flow with positive shear under the influence of a toroidal magnetic field is considered if the background field between the cylinders is current free. For small magnetic Prandtl number Pm the curves of neutral stability converge in the (Hartmann number, Reynolds number) plane, approximating the stability curve obtained in the inductionless limit Pm = 0. The numerical solutions indicate the existence of a lower limit of the shear rate. For large Pm the curves scale with the magnetic Reynolds number of the outer cylinder, but the flow is always stable for magnetic Prandtl number unity, as is typical for double-diffusive instabilities. We are particularly interested to know the minimum Hartmann number for neutral stability. For models with a resting or almost resting inner cylinder and with perfectly conducting cylinder material, the minimum Hartmann number occurs for a radius ratio of r_in/r_out = 0.9. The corresponding critical Reynolds numbers are smaller than 10⁴.

  7. A curved surface micro-moiré method and its application in evaluating curved surface residual stress

    NASA Astrophysics Data System (ADS)

    Zhang, Hongye; Wu, Chenlong; Liu, Zhanwei; Xie, Huimin

    2014-09-01

    The moiré method is typically applied to the measurement of deformations of a flat surface while, for a curved surface, this method is rarely used other than for projection moiré or moiré interferometry. Here, a novel colour charge-coupled device (CCD) micro-moiré method has been developed, based on which a curved surface micro-moiré (CSMM) method is proposed with a colour CCD and optical microscope (OM). In the CSMM method, no additional reference grating is needed as a Bayer colour filter array (CFA) installed on the OM in front of the colour CCD image sensor performs this role. Micro-moiré fringes with high contrast are directly observed with the OM through the Bayer CFA under the special condition of observing a curved specimen grating. The principle of the CSMM method based on a colour CCD micro-moiré method and its application range and error analysis are all described in detail. In an experiment, the curved surface residual stress near a welded seam on a stainless steel tube was investigated using the CSMM method.

  8. The use of kernel density estimators in breakthrough curve reconstruction and advantages in risk analysis

    NASA Astrophysics Data System (ADS)

    Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.

    2014-12-01

    Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given a relatively small number of particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10²-10⁸) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10². With the KDE, several orders of magnitude fewer np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest-error curve is obtained through the ANA method, especially for smaller EDs. The percent error of the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥ 10⁵, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower-error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration is reliant on an accurate representation of the BTC, especially when data is scarce.
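
    A minimal sketch of KDE-based BTC reconstruction from particle arrival times, next to a raw histogram; scipy's Gaussian KDE chooses its own bandwidth, whereas the paper's optimal h would replace it.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      arrivals = rng.lognormal(mean=1.0, sigma=0.4, size=100)   # np = 10^2 particles

      t = np.linspace(0.1, 10, 200)
      kde_btc = gaussian_kde(arrivals)(t)     # smooth concentration history
      hist, _ = np.histogram(arrivals, bins=20, range=(0.1, 10), density=True)

      print("KDE minimum value  :", kde_btc.min())            # smooth, strictly positive
      print("empty histogram bins:", int(np.sum(hist == 0)))  # raw PT curve is ragged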

  9. The standard centrifuge method accurately measures vulnerability curves of long-vesselled olive stems.

    PubMed

    Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon

    2015-01-01

    The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of measurement artifact. We tested this hypothesis in stems of olive (Olea europaea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artefacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  10. Activity measurements of 55Fe by two different methods

    NASA Astrophysics Data System (ADS)

    da Cruz, Paulo A. L.; Iwahara, Akira; da Silva, Carlos J.; Poledna, Roberto; Loureiro, Jamir S.; da Silva, Monica A. L.; Ruzzarin, Anelise

    2018-03-01

    A calibrated germanium detector and the CIEMAT/NIST liquid scintillation method were used in the standardization of a solution of 55Fe from a BIPM key comparison. Commercial cocktails were used in source preparation for activity measurements with the CIEMAT/NIST method. Measurements were performed in a liquid scintillation counter. In the germanium counting method, standard point sources were prepared to obtain the atomic number versus efficiency curve of the detector, in order to obtain the efficiency for the 5.9 keV KX-ray of 55Fe by interpolation. The activity concentrations obtained were 508.17 ± 3.56 and 509.95 ± 16.20 kBq/g for the CIEMAT/NIST and germanium methods, respectively.

  11. Learning curve for laparoscopic Heller myotomy and Dor fundoplication for achalasia.

    PubMed

    Yano, Fumiaki; Omura, Nobuo; Tsuboi, Kazuto; Hoshino, Masato; Yamamoto, Seryung; Akimoto, Shunsuke; Masuda, Takahiro; Kashiwagi, Hideyuki; Yanaga, Katsuhiko

    2017-01-01

    Although laparoscopic Heller myotomy and Dor fundoplication (LHD) is widely performed to address achalasia, little is known about the learning curve for this technique. We assessed the learning curve for performing LHD. Of the 514 cases with LHD performed between August 1994 and March 2016, the surgical outcomes of 463 cases were evaluated after excluding 50 cases with reduced port surgery and one case with the simultaneous performance of laparoscopic distal partial gastrectomy. A receiver operating characteristic (ROC) curve analysis was used to identify the cut-off value for the number of surgical experiences necessary to become proficient with LHD, which was defined as the completion of the learning curve. We defined the completion of the learning curve when the following 3 conditions were satisfied. 1) The operation time was less than 165 minutes. 2) There was no blood loss. 3) There was no intraoperative complication. In order to establish the appropriate number of surgical experiences required to complete the learning curve, the cut-off value was evaluated by using a ROC curve (AUC 0.717, p < 0.001). Finally, we identified the cut-off value as 16 surgical cases (sensitivity 0.706, specificity 0.646). Learning curve seems to complete after performing 16 cases.

  12. Application of remote sensing and geographical information system for generation of runoff curve number

    NASA Astrophysics Data System (ADS)

    Meshram, S. Gajbhiye; Sharma, S. K.; Tignath, S.

    2017-07-01

    A watershed is an ideal unit for planning and management of land and water resources (Gajbhiye et al., IEEE international conference on advances in technology and engineering (ICATE), Bombay, vol 1, issue 9, pp 23-25, 2013a; Gajbhiye et al., Appl Water Sci 4(1):51-61, 2014a; Gajbhiye et al., J Geol Soc India 84(2):192-196, 2014b). This study aims to generate the curve number using remote sensing and a geographical information system (GIS), and to assess the effect of slope on curve number values. The study was carried out in the Kanhaiya Nala watershed located in the Satna district of Madhya Pradesh. Soil, land-use/land-cover and slope maps were generated in a GIS environment. The CN parameter values corresponding to the various soil, land cover, and land management conditions were selected from the Natural Resources Conservation Service (NRCS) standard table. The curve number (CN) is an index developed by the NRCS to represent the potential for storm-water runoff within a drainage area; the CN for a drainage basin is estimated using a combination of land use, soil, and antecedent soil moisture condition (AMC). In the present study the effect of slope on CN values was determined. The results showed that the unadjusted CN values are higher than the CN values adjusted for slope. Remote sensing and GIS are very reliable techniques for preparing most of the input data required by the SCS curve number model.
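
    The abstract does not state which slope-adjustment relation the authors applied. As one commonly cited option, the sketch below uses the Sharpley-Williams (1990) adjustment, with the AMC-III curve number derived from the AMC-II value; treat the formula choice as an assumption for illustration.

```python
# Hedged sketch of a slope-adjusted curve number (Sharpley & Williams, 1990).
import math

def cn3_from_cn2(cn2: float) -> float:
    """AMC-III curve number estimated from the AMC-II value."""
    return cn2 * math.exp(0.00673 * (100.0 - cn2))

def cn_slope_adjusted(cn2: float, slope: float) -> float:
    """Slope-adjusted AMC-II curve number; slope is in m/m."""
    cn3 = cn3_from_cn2(cn2)
    return (cn3 - cn2) / 3.0 * (1.0 - 2.0 * math.exp(-13.86 * slope)) + cn2

print(cn_slope_adjusted(75.0, 0.03))  # CN 75 on a 3% slope -> ~73.5
```

    Note that at this gentle slope the adjusted CN falls below the unadjusted value, the same direction reported in the study; the exact behavior depends on the relation and slope range actually used.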

  13. Experimental Investigation of Eccentricity Ratio, Friction, and Oil Flow of Short Journal Bearings

    NASA Technical Reports Server (NTRS)

    Dubois, G B; Ocvirk, F W

    1952-01-01

    An experimental investigation was conducted to obtain performance data on bearings of length-diameter ratios of 1, 1/2, and 1/4 for comparison with theoretical curves. A 1.375-inch-diameter bearing was tested at speeds up to 6000 rpm and with unit loads from 0 to 900 pounds per square inch. Experimental data for eccentricity ratio and friction followed single lines when plotted against a theoretically derived capacity number, which is equal to Sommerfeld number times the square of the length-diameter ratio. The form of the capacity number indicates that under certain conditions the eccentricity ratio is theoretically independent of bearing diameter. A method of plotting oil flow data as a single line is shown. Methods are also discussed for approximating a maximum bearing temperature and evaluating the effect of deflection or misalignment on the eccentricity ratio at the ends of the bearings.

  14. Studies on transonic Double Circular Arc (DCA) profiles of axial flow compressors: calculations of profile design

    NASA Astrophysics Data System (ADS)

    Rugun, Y.; Zhaoyan, Q.

    1986-05-01

    This paper describes concepts and methods for the design of high-Mach-number airfoils for axial-flow compressors. Correlation equations are provided for the main parameters, such as airfoil and cascade geometry, stream parameters, and wake characteristic parameters of the compressor. To obtain the total-pressure-loss coefficients of the cascade with a simplified calculation method, several curves and charts are provided by the authors. Test results and calculated values are compared, and the two are in good agreement.

  15. ON THE ROTATION SPEED OF THE MILKY WAY DETERMINED FROM H i EMISSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reid, M. J.; Dame, T. M.

    2016-12-01

    The circular rotation speed of the Milky Way at the solar radius, Θ_0, has been estimated to be 220 km s⁻¹ by fitting the maximum velocity of H i emission as a function of Galactic longitude. This result is in tension with a recent estimate of Θ_0 = 240 km s⁻¹, based on Very Long Baseline Interferometry (VLBI) parallaxes and proper motions from the BeSSeL and VERA surveys for large numbers of high-mass star-forming regions across the Milky Way. We find that the rotation curve best fitted to the VLBI data is slightly curved, and that this curvature results in a biased estimate of Θ_0 from the H i data when a flat rotation curve is assumed. This relieves the tension between the methods and favors Θ_0 = 240 km s⁻¹.

  16. The Hurwitz Enumeration Problem of Branched Covers and Hodge Integrals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Yun S.

    We use algebraic methods to compute the simple Hurwitz numbers for arbitrary source and target Riemann surfaces. For an elliptic curve target, we reproduce the results previously obtained by string theorists. Motivated by the Gromov-Witten potentials, we find a general generating function for the simple Hurwitz numbers in terms of the representation theory of the symmetric group S_n. We also find a generating function for Hodge integrals on the moduli space M̄_{g,2} of Riemann surfaces with two marked points, similar to that found by Faber and Pandharipande for the case of one marked point.

  17. Evaluation of the influences of various force magnitudes and configurations on scoliotic curve correction using finite element analysis.

    PubMed

    Karimi, Mohammad Taghi; Ebrahimi, Mohammad Hossein; Mohammadi, Ali; McGarry, Anthony

    2017-03-01

    Scoliosis is a lateral curvature of the normally straight vertical line of the spine, and the curvature can be moderate to severe. Different treatments can be used depending on severity and the age of the subject, but the most common treatment for this disease is an orthosis. In orthosis design the force arrangement can vary, from transverse loads to vertical loads or a combination of the two, but it is not well established how orthoses control a scoliotic curve or how to achieve maximum correction for a given force configuration and magnitude. Therefore, this study aimed to determine the effect of various load configurations and magnitudes on curve correction in a subject with degenerative scoliosis. A scoliotic subject participated in this study. A CT scan of the subject was used to produce a 3D model of the spine in Mimics software, and the finite element analysis and deformation of the scoliotic curve under seven different forces in three different conditions were determined with ABAQUS software. The Cobb angle of the scoliotic curve decreased significantly when forces were applied, with different corrections achieved in each condition depending on the forces. It can be concluded that the force configurations examined in this study are effective in reducing the scoliotic curve. Although this is a case study, the approach can be applied to a large number of subjects to predict the correction of a scoliotic curve before orthotic treatment. It is also recommended that this method and its outputs be compared with clinical findings.

  18. Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor

    NASA Astrophysics Data System (ADS)

    Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.

    2018-01-01

    Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest: they are non-electrical and have a closed optical channel not subject to contamination. The main problem with this type of sensor is the non-linearity of its positional response curve, due to the hyperbolic variation of the magnetic field intensity as the magnetic source mounted on the controlled object moves relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method has two stages: (1) definition of the calibration function; (2) measurement and linearization of the positional response curve (including its temperature stabilization). The algorithm significantly reduces the number of points in the calibration function, which is essential for calibrating the temperature dependence, by allowing points that deviate from a uniformly spaced grid. Subsequent interpolation of the deviating points and a piecewise linear-plane approximation of the calibration function reduce the microcontroller storage required for the calibration function and the time needed to process the measurement results. The paper also presents experimental results from testing real samples of fiber-optic displacement sensors.
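
    The two-stage idea can be sketched as follows: store a sparse, non-uniform calibration table, then invert a measured optical response to displacement by piecewise-linear interpolation. The hyperbolic toy response and all names below are hypothetical, not the authors' firmware.

```python
# Hedged sketch of calibration-table linearization with piecewise-linear interpolation.
import numpy as np

# Stage 1: calibration - sparse displacement grid and the measured sensor response.
x_cal = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])  # displacement, mm
v_cal = 1.0 / (1.0 + x_cal)                        # toy hyperbolic response

# Stage 2: measurement - invert response -> displacement (np.interp needs ascending x).
order = np.argsort(v_cal)
def displacement_from_response(v_meas: float) -> float:
    return float(np.interp(v_meas, v_cal[order], x_cal[order]))

print(displacement_from_response(0.4))  # ~1.6 mm for the toy curve
```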

  19. Atmospheric Correction of Satellite Imagery Using Modtran 3.5 Code

    NASA Technical Reports Server (NTRS)

    Gonzales, Fabian O.; Velez-Reyes, Miguel

    1997-01-01

    When performing satellite remote sensing of the Earth in the solar spectrum, atmospheric scattering and absorption provide the sensors with corrupted information about the target's radiance characteristics. We are faced with the problem of reconstructing the signal reflected from the target from the data sensed by the remote sensing instrument. This article presents a method for simulating the radiance characteristic curves of satellite images using the MODTRAN 3.5 band model (BM) code to solve the radiative transfer equation (RTE), and proposes a method for implementing an adaptive system for automated atmospheric correction. The simulation procedure is carried out as follows: (1) for each satellite digital image, a radiance characteristic curve is obtained by performing a digital number (DN) to radiance conversion; (2) using MODTRAN 3.5, a simulation of the image's characteristic curves is generated; (3) the output of the code is processed to generate radiance characteristic curves for the simulated cases. The simulation algorithm was used to simulate Landsat Thematic Mapper (TM) images for two types of location: the ocean surface and a forest surface. The simulation procedure was validated by computing the error between the empirical and simulated radiance curves. While results in the visible region of the spectrum were not very accurate, those for the infrared region were encouraging; this information can be used for correction of the atmospheric effects. For the simulation over the ocean, the lowest error produced in this region was of the order of 10^-5, up to 14 times smaller than the errors in the visible region. For the same spectral region in the forest case, the lowest error produced was of the order of 10^-4, up to 41 times smaller than the errors in the visible region.
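
    Step (1), the DN-to-radiance conversion, follows the standard linear rescaling used for Landsat TM data. The gain and offset values in this sketch are placeholders, not the calibration constants actually used in the study.

```python
# Hedged sketch of the standard linear DN-to-radiance rescaling for Landsat TM.
import numpy as np

def dn_to_radiance(dn, lmin, lmax, qcal_min=0.0, qcal_max=255.0):
    """Convert quantized digital numbers to spectral radiance (W m^-2 sr^-1 um^-1)."""
    dn = np.asarray(dn, dtype=float)
    return (lmax - lmin) / (qcal_max - qcal_min) * (dn - qcal_min) + lmin

# Placeholder band limits for illustration only.
band3 = dn_to_radiance([10, 120, 250], lmin=-1.17, lmax=204.30)
print(band3)
```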

  20. Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions.

    PubMed

    Joshi, Gaurav V; Duan, Yuanyuan; Della Bona, Alvaro; Hill, Thomas J; St John, Kenneth; Griggs, Jason A

    2013-11-01

    To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n=30) with polished (15 μm) or air-abraded surfaces were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n=44) and 10 Hz (n=36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37°C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To prove the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of different variables, such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. There was an increase in fracture toughness values with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). The values for the fracture toughness ranged from 0.75 to 1.1 MPa·m^(1/2), reaching a plateau at different critical flaw sizes based on loading method. Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution was dependent on the flaw distribution, and it fit well to an exclusive flaw model. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  1. A New Method to Compare Statistical Tree Growth Curves: The PL-GMANOVA Model and Its Application with Dendrochronological Data

    PubMed Central

    Ricker, Martin; Peña Ramírez, Víctor M.; von Rosen, Dietrich

    2014-01-01

    Growth curves are monotonically increasing functions that repeatedly measure the same subjects over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A·T+E, where for and for , A = initial relative growth to be estimated, , and E is an error term for each tree and time point. Furthermore, Ei[–b·r] = , , with TPR being the turning point radius in a sigmoid curve, and at is an estimated calibrating time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning point radii and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth. One site (at the Popocatépetl volcano) stood out, with a value 3.9 times that of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time. PMID:25402427

  2. Predicting Peak Flows following Forest Fires

    NASA Astrophysics Data System (ADS)

    Elliot, William J.; Miller, Mary Ellen; Dobre, Mariana

    2016-04-01

    Following forest fires, peak flows in perennial and ephemeral streams often increase by a factor of 10 or more. This increase in peak flow rate may overwhelm existing downstream structures, such as road culverts, causing serious damage to road fills at stream crossings. In order to predict peak flow rates following wildfires, we have applied two different tools: one based on the USDA Natural Resources Conservation Service curve number (CN) method, and the other applying the Water Erosion Prediction Project (WEPP) model to the watershed. In our presentation, we describe the science behind the two methods and present the main variables of each model. We then provide an example comparing the two methods on a fire-prone watershed upstream of the City of Flagstaff, Arizona, USA, where a fire spread model was applied for current fuel loads and for likely fuel loads following a fuel-reduction treatment. When applying the curve number method, determining the time to peak flow can be problematic for low-severity fires because the runoff flow paths are both surface and shallow lateral flow. The WEPP watershed version incorporates shallow lateral flow into stream channels; however, the version of the WEPP model used for this study did not have channel-routing capabilities, but rather relied on regression relationships to estimate peak flows from individual hillslope-polygon peak runoff rates. We found that the two methods gave similar results if applied correctly, with the WEPP predictions somewhat greater than the CN predictions. Later releases of the WEPP model have incorporated alternative methods for routing peak flows that need to be evaluated.
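
    For reference, the core runoff relation of the curve number method in its standard US-unit form; this is a generic sketch for illustration, not the authors' implementation.

```python
# Standard SCS-CN runoff relation: Q = (P - Ia)^2 / (P - Ia + S), S = 1000/CN - 10.
def scs_runoff(p_inches: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff Q (inches) from storm rainfall P (inches) and a curve number."""
    s = 1000.0 / cn - 10.0   # potential maximum retention, inches
    ia = ia_ratio * s        # initial abstraction (0.2*S is the classic choice)
    if p_inches <= ia:
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

# Example: a 3-inch storm on a burned watershed (CN 90) vs. unburned (CN 70).
print(scs_runoff(3.0, 90), scs_runoff(3.0, 70))  # ~1.98 in vs. ~0.71 in
```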

  3. H2+, HeH and H2: Approximating potential curves, calculating rovibrational states

    NASA Astrophysics Data System (ADS)

    Olivares-Pilón, Horacio; Turbiner, Alexander V.

    2018-06-01

    Analytic consideration of the Born-Oppenheimer (BO) potential curves for diatomic molecules is proposed: an accurate analytic interpolation for a potential curve, consistent with its rovibrational spectra, is found. It is shown that in the BO approximation, for the four lowest electronic states 1sσg, 2pσu, 2pπu and 3dπg of H2+, the ground state X2Σ+ of HeH and the two lowest states 1Σg+ and 3Σu+ of H2, the potential curves can be analytically interpolated over the full range of internuclear distances R to no fewer than 4-6 significant digits. The approximation, based on matching a Laurent-type expansion at small R with a combination of the multipole expansion and a one-instanton-type contribution at large R, is given by a two-point Padé approximant. The position of the minimum, when it exists, is predicted within 1% or better. For the molecular ion H2+, the spectra of vibrational, rotational and rovibrational states (ν, L) associated with the 1sσg, 2pσu, 2pπu and 3dπg potential curves are calculated in the Lagrange mesh method. In general, they coincide with spectra found via numerical solution of the Schrödinger equation (when available) to six significant digits. It is shown that the 1sσg curve contains 19 vibrational states (ν, 0), while the 2pσu curve contains a single one (0, 0) and the 2pπu state contains 12 vibrational states (ν, 0). In general, the 1sσg electronic curve contains 420 rovibrational states, which increases to 423 beyond the BO approximation. For the state 2pσu the total number of rovibrational states (all with ν = 0) is equal to 3, within or beyond the Born-Oppenheimer approximation. For the state 2pπu within the Born-Oppenheimer approximation, the total number of rovibrational bound states is 284. The state 3dπg is repulsive; no rovibrational state is found. The Lagrange mesh formalism confirms the statement that the ground-state potential curve of the heteronuclear molecule HeH does not support rovibrational states. An accurate analytical expression for the potential curves of the hydrogen molecule H2 in the states 1Σg+ and 3Σu+ is presented. The ground state 1Σg+ contains 15 vibrational states (ν, 0), ν = 0-14; in general, this state supports 301 rovibrational states. The potential curve of the state 3Σu+ has a shallow minimum: it does not support any rovibrational state and is repulsive.

  4. Visual Tracking Using 3D Data and Region-Based Active Contours

    DTIC Science & Technology

    2016-09-28

    adaptive control strategies which explicitly take uncertainty into account. Filtering methods ranging from the classical Kalman filters valid for...linear systems to the much more general particle filters also fit into this framework in a very natural manner. In particular, the particle filtering ...the number of samples required for accurate filtering increases with the dimension of the system noise. In our approach, we approximate curve

  5. Rainfall-runoff in the Albuquerque, New Mexico, area: Measurements, analyses and comparisons

    USGS Publications Warehouse

    Anderson, C.E.; Ward, T.J.; Kelly, T.; ,

    2005-01-01

    Albuquerque, New Mexico, has experienced significant growth over the last 20 years, like many other cities in the Southwestern United States. While the US population grew by 37% between the 1970 and 2000 censuses, the growth for Albuquerque was 83%. More people mean more development and increased problems of managing runoff from urbanizing watersheds. The U.S. Geological Survey (USGS), in cooperation with the Albuquerque Metropolitan Arroyo Flood Control Authority (AMAFCA) and the City of Albuquerque, has maintained a rainfall-runoff data collection program since 1976. The data from measured precipitation events can be used to verify hydrologic modeling. In this presentation, data from a representative gaged watershed are analyzed and discussed to set the overall framework for the rainfall-runoff process in the Albuquerque area. Of particular interest are the basic relationships between rainfall and watershed runoff response, and an analysis of curve numbers as an indicator of runoff function. In urbanized areas, four land treatment types (natural, irrigated lawns, compacted soil, and impervious) are used to define surface infiltration conditions. Rainfall and runoff gage data are used to compare curve number (CN) and initial abstraction/uniform infiltration (IA/INF) techniques in an Albuquerque watershed. The IA/INF method appears to produce superior results to the CN method for the measured rainfall events.

  6. First archaeointensity catalogue and intensity secular variation curve for Iberia spanning the last 3000 years

    NASA Astrophysics Data System (ADS)

    Molina-Cardín, Alberto; Campuzano, Saioa A.; Rivero, Mercedes; Osete, María Luisa; Gómez-Paccard, Miriam; Pérez-Fuentes, José Carlos; Pavón-Carrasco, F. Javier; Chauvin, Annick; Palencia-Ortas, Alicia

    2017-04-01

    In this work we present the first archaeomagnetic intensity database for the Iberian Peninsula covering the last three millennia. In addition to previously published archaeointensities (about 100 data points), we present twenty new high-quality archaeointensities. The new data have been obtained following the Thellier and Thellier method, including pTRM checks, and have been corrected for the effect of the anisotropy of thermoremanent magnetization on archaeointensity estimates. Importantly, about 50% of the new data correspond to the first millennium BC, a period for which it was not previously possible to develop an intensity palaeosecular variation curve due to the lack of high-quality archaeointensity data. The quality of the data included in the Iberian dataset has been evaluated following different palaeomagnetic criteria, such as the number of specimens analysed, the laboratory protocol applied and the kind of material analysed. Finally, we present the first intensity palaeosecular variation curve for the Iberian Peninsula, centred at Madrid, for the last 3000 years. To obtain the most reliable secular variation curve, it has been generated using only selected high-quality data from the catalogue.

  7. A Curve Fitting Approach Using ANN for Converting CT Number to Linear Attenuation Coefficient for CT-based PET Attenuation Correction

    NASA Astrophysics Data System (ADS)

    Lai, Chia-Lin; Lee, Jhih-Shian; Chen, Jyh-Cheng

    2015-02-01

    Energy-mapping, the conversion of linear attenuation coefficients (μ) calculated at the effective computed tomography (CT) energy to those corresponding to 511 keV, is an important step in CT-based attenuation correction (CTAC) for positron emission tomography (PET) quantification. The aim of this study was to implement the energy-mapping step using the curve-fitting ability of an artificial neural network (ANN). Eleven digital phantoms simulated by the Geant4 application for tomographic emission (GATE) and 12 physical phantoms composed of various volume concentrations of iodine contrast were used to generate energy-mapping curves, by acquiring the average CT values and the linear attenuation coefficients at 511 keV of these phantoms. The curves were built with the ANN toolbox in MATLAB. To evaluate the effectiveness of the proposed method, another two digital phantoms (liver and spine-bone) and three physical phantoms (volume concentrations of 3%, 10% and 20%) were used to compare the energy-mapping curves built by the ANN and by bilinear transformation, and a semi-quantitative analysis was performed by injecting 0.5 mCi FDG into an SD rat for micro-PET scanning. The results showed that the percentage relative difference (PRD) values of the digital liver and spine-bone phantoms are 5.46% and 1.28% based on the ANN, and 19.21% and 1.87% based on bilinear transformation. For the 3%, 10% and 20% physical phantoms, the PRD values of the ANN curve are 0.91%, 0.70% and 3.70%, and the PRD values of bilinear transformation are 3.80%, 1.44% and 4.30%, respectively. Both digital and physical phantoms indicated that the ANN curve achieves better performance than bilinear transformation. The semi-quantitative analysis of rat PET images showed that the ANN curve can reduce the inaccuracy caused by the attenuation effect from 13.75% to 4.43% in brain tissue, and from 23.26% to 9.41% in heart tissue. On the other hand, the inaccuracy remained 6.47% and 11.51% in brain and heart tissue when the bilinear transformation was used. Overall, it can be concluded that the bilinear transformation method resulted in considerable bias and the newly proposed calibration curve built by the ANN achieves better results with acceptable accuracy.
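
    The study used the MATLAB ANN toolbox; as a hedged sketch of the same idea in Python, a small network can be fit to synthetic calibration pairs (mean CT number to μ at 511 keV). The phantom values below are illustrative, not the paper's data.

```python
# Hedged sketch of ANN-based energy mapping with scikit-learn.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic calibration data: mean CT number (HU) vs. mu at 511 keV (1/cm).
hu = np.array([-1000, -100, 0, 60, 400, 1000, 2000], dtype=float).reshape(-1, 1)
mu_511 = np.array([0.000, 0.086, 0.096, 0.101, 0.120, 0.150, 0.200])  # illustrative

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=20000, random_state=0)
net.fit(hu / 1000.0, mu_511)                     # crude input scaling

print(net.predict(np.array([[50.0]]) / 1000.0))  # mu estimate for soft tissue (~50 HU)
```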

  8. Optimal methods for measuring eligibility for liver transplant in hepatocellular carcinoma patients undergoing transarterial chemoembolization.

    PubMed

    Kim, Hyung-Don; Shim, Ju Hyun; Kim, Gi-Ae; Shin, Yong Moon; Yu, Eunsil; Lee, Sung-Gyu; Lee, Danbi; Kim, Kang Mo; Lim, Young-Suk; Lee, Han Chu; Chung, Young-Hwa; Lee, Yung Sang

    2015-05-01

    We investigated the optimal radiologic method for measuring hepatocellular carcinoma (HCC) treated by transarterial chemoembolization (TACE) in order to assess suitability for liver transplantation (LT). A total of 271 HCC patients undergoing TACE prior to LT were classified according to both the Milan and up-to-seven criteria after TACE, using either the enhancement or the size method on computed tomography images. Cumulative incidence function curves with competing-risks regression were used in the post-LT time-to-recurrence analysis. The predictive accuracy for recurrence was compared using area under the time-dependent receiver operating characteristic curve (AUC) estimation. Of the 271 patients, 246 (90.8%) and 164 (60.5%) fell within the Milan criteria, and 252 (93.0%) and 210 (77.5%) fell within the up-to-seven criteria, when assessed by the enhancement and size methods, respectively. Competing-risks regression analyses adjusting for covariates indicated that meeting the criteria by the enhancement and by the size methods was independently related to post-LT time-to-recurrence in the Milan or up-to-seven model. Higher AUC values were observed with the size method only in the up-to-seven model (p<0.05). Mean differences in the sum of tumor diameters and the number of tumors between pathologic and radiologic findings were significantly smaller with the enhancement method (p<0.05). Cumulative incidence curves showed similar recurrence results between patients with and without prior TACE within the criteria based on either method, except for within up-to-seven by the enhancement method (p=0.017). The enhancement method is a reliable tool for assessing the control or downstaging of HCC within Milan after TACE, although the size method may be preferable when applying the up-to-seven criterion. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.

  9. Estimation of the reproduction number of dengue fever from spatial epidemic data.

    PubMed

    Chowell, G; Diaz-Dueñas, P; Miller, J C; Alcazar-Velazco, A; Hyman, J M; Fenimore, P W; Castillo-Chavez, C

    2007-08-01

    Dengue, a vector-borne disease, thrives in tropical and subtropical regions worldwide. A retrospective analysis of the 2002 dengue epidemic in Colima located on the Mexican central Pacific coast is carried out. We estimate the reproduction number from spatial epidemic data at the level of municipalities using two different methods: (1) Using a standard dengue epidemic model and assuming pure exponential initial epidemic growth and (2) Fitting a more realistic epidemic model to the initial phase of the dengue epidemic curve. Using Method I, we estimate an overall mean reproduction number of 3.09 (95% CI: 2.34,3.84) as well as local reproduction numbers whose values range from 1.24 (1.15,1.33) to 4.22 (2.90,5.54). Using Method II, the overall mean reproduction number is estimated to be 2.0 (1.75,2.23) and local reproduction numbers ranging from 0.49 (0.0,1.0) to 3.30 (1.63,4.97). Method I systematically overestimates the reproduction number relative to the refined Method II, and hence it would overestimate the intensity of interventions required for containment. Moreover, optimal intervention with defined resources demands different levels of locally tailored mitigation. Local epidemic peaks occur between the 24th and 35th week of the year, and correlate positively with the final local epidemic sizes (rho=0.92, P-value<0.001). Moreover, final local epidemic sizes are found to be linearly related to the local population size (P-value<0.001). This observation supports a roughly constant number of female mosquitoes per person across urban and rural regions.
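
    The Method I idea (fitting exponential growth to the initial phase of the epidemic curve) can be sketched as below. The weekly case counts and the linear approximation R ≈ 1 + r·Tg (Tg = mean generation interval) are illustrative assumptions, not the paper's exact estimator or data.

```python
# Hedged sketch: estimate R from the early exponential growth of an epidemic curve.
import numpy as np

weeks = np.arange(8)                              # early epidemic weeks
cases = np.array([2, 3, 5, 8, 13, 20, 33, 52])    # synthetic weekly incidence

r = np.polyfit(weeks, np.log(cases), 1)[0]        # per-week exponential growth rate
Tg = 3.0                                          # assumed generation interval, weeks
print(f"r = {r:.3f}/week, R ~ {1 + r * Tg:.2f}")  # crude moment approximation
```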

  10. A novel quantitative reverse-transcription PCR (qRT-PCR) for the enumeration of total bacteria, using meat micro-flora as a model.

    PubMed

    Dolan, Anthony; Burgess, Catherine M; Barry, Thomas B; Fanning, Seamus; Duffy, Geraldine

    2009-04-01

    A sensitive quantitative reverse-transcription PCR (qRT-PCR) method was developed for the enumeration of total bacteria, using two sets of primers to separately target the ribonuclease P (RNase P) RNA transcripts of gram-positive and gram-negative bacteria. Standard curves were generated using SYBR Green I kits for the LightCycler 2.0 instrument (Roche Diagnostics) to allow quantification of mixed microflora in liquid media. RNA standards were extracted from known cell equivalents and subsequently converted to cDNA for the construction of standard curves. The number of mixed bacteria in culture was determined by qRT-PCR, and the results correlated (r² = 0.88, RSD = 0.466) with the total viable count over the range from approximately log10 3 to log10 7 CFU ml⁻¹. The rapid nature of this assay (8 h) and its potential as an alternative to the standard plate count method for predicting total viable counts and shelf life are discussed.
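
    How such a standard curve is typically used for enumeration: regress crossing-point (Ct) values against log10 cell equivalents, then invert the fit for unknowns. The values below are synthetic and the assay details (SYBR Green I, LightCycler) are not reproduced.

```python
# Hedged sketch of qPCR standard-curve quantification.
import numpy as np

log_cfu = np.array([3.0, 4.0, 5.0, 6.0, 7.0])   # log10 CFU/ml standards
ct = np.array([30.1, 26.8, 23.4, 20.0, 16.7])   # synthetic crossing points

slope, intercept = np.polyfit(log_cfu, ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0         # ~1.0 means 100% amplification/cycle

def quantify(ct_unknown: float) -> float:
    """Estimate log10 CFU/ml of an unknown sample from its Ct value."""
    return (ct_unknown - intercept) / slope

print(f"slope {slope:.2f}, efficiency {efficiency:.1%}, "
      f"unknown ~ 10^{quantify(22.0):.2f} CFU/ml")
```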

  11. Gold Nanoparticle-Aptamer-Based LSPR Sensing of Ochratoxin A at a Widened Detection Range by Double Calibration Curve Method

    NASA Astrophysics Data System (ADS)

    Liu, Boshi; Huang, Renliang; Yu, Yanjun; Su, Rongxin; Qi, Wei; He, Zhimin

    2018-04-01

    Ochratoxin A (OTA) is a mycotoxin generated by the metabolism of Aspergillus and Penicillium, and is extremely toxic to humans, livestock, and poultry. However, traditional assays for the detection of OTA are expensive and complicated. Besides the OTA aptamer, OTA itself at high concentration can also adsorb onto the surface of gold nanoparticles (AuNPs) and further inhibit their salt-induced aggregation. We herein report a new OTA assay applying the localized surface plasmon resonance effect of AuNPs and their aggregates. Because the result obtained from a single linear calibration curve is not reliable, we developed a “double calibration curve” method to address this issue and widen the OTA detection range. A number of other analytes were also examined, and the structural properties of analytes that bind to the AuNPs were further discussed. We found that various considerations must be taken into account in the detection of these analytes when applying AuNP aggregation-based methods, owing to their different binding strengths.

  12. Stability analysis of flexible wind turbine blades using finite element method

    NASA Technical Reports Server (NTRS)

    Kamoulakos, A.

    1982-01-01

    Static vibration and flutter analysis of a straight-elastic-axis blade was performed based on a finite element method solution. The total potential energy functional was formulated according to linear beam theory. The inertia and aerodynamic loads were formulated according to the blade absolute acceleration and absolute velocity vectors. In the vibration analysis, the direction of motion of the blade during the first out-of-plane and first in-plane modes was examined; numerical results cover the NASA/DOE Mod-0, a McCauley propeller, the north wind turbine, and flat-plate behavior. In the flutter analysis, comparison cases were examined involving several references. Vibration analysis of a non-straight-elastic-axis blade based on a finite element method solution was performed in a similar manner to the straight-axis blade, since a curved blade can be approximated by an assembly of a sufficient number of straight blade elements at different inclinations with respect to a common system of axes. Numerical results include a comparison between the behavior of a straight and a curved cantilever beam during the lowest two in-plane and out-of-plane modes.

  13. Measurement of drill grinding parameters using laser sensor

    NASA Astrophysics Data System (ADS)

    Yanping, Peng; Kumehara, Hiroyuki; Wei, Zhang; Nomura, Takashi

    2005-12-01

    Accurate measurement of the grinding and geometry parameters of a drill point is essential to its design and reconditioning. In recent years, a number of non-contact coordinate-measuring apparatuses using CCD cameras or laser sensors have been developed, but much work remains for further improvement. This paper reports another kind of laser coordinate meter. As an example of its application, the method for geometry inspection of the drill flank surface is detailed. Data measured by laser scanning of the flank surface around selected points, taken as several two-dimensional curves, are analyzed with a mathematical procedure. If one of these curves turns out to be a straight line, it must be a generatrix of the grinding cone; thus, the grinding parameters are determined by a set of three generatrices. The measurement method and data-processing procedure are then proposed, and their validity is assessed by measuring a sample with given parameters. The measured point geometry agrees well with the known values. In comparison with other methods in the published literature, this one is simpler in computation and more accurate in results.

  14. PET-CT Animal Model for Surveillance of Embedded Metal Fragments

    DTIC Science & Technology

    2012-12-15

    Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were calculated, with the significance level set at p < .05. Histopathology was assessed by a blinded pathologist. The AUC was 0.938.

  15. Determination of the Isotopic Enrichment of 13C- and 2H-Labeled Tracers of Glucose Using High-Resolution Mass Spectrometry: Application to Dual- and Triple-Tracer Studies.

    PubMed

    Trötzmüller, Martin; Triebl, Alexander; Ajsic, Amra; Hartler, Jürgen; Köfeler, Harald; Regittnig, Werner

    2017-11-21

    Multiple-tracer approaches for investigating glucose metabolism in humans usually involve the administration of stable and radioactive glucose tracers and the subsequent determination of tracer enrichments in sampled blood. When using conventional, low-resolution mass spectrometry (LRMS), the number of spectral interferences rises rapidly with the number of stable tracers employed. Thus, in LRMS, both computational effort and statistical uncertainties associated with the correction for spectral interferences limit the number of stable tracers that can be simultaneously employed (usually two). Here we show that these limitations can be overcome by applying high-resolution mass spectrometry (HRMS). The HRMS method presented is based on the use of an Orbitrap mass spectrometer operated at a mass resolution of 100,000 to allow electrospray-generated ions of the deprotonated glucose molecules to be monitored at their exact masses. The tracer enrichment determination in blood plasma is demonstrated for several triple combinations of 13C- and 2H-labeled glucose tracers (e.g., [1-2H1]-, [6,6-2H2]-, [1,6-13C2]glucose). For each combination it is shown that ions arising from 2H-labeled tracers are completely differentiated from those arising from 13C-labeled tracers, thereby allowing the enrichment of a tracer to be simply calculated from the observed ion intensities using a standard curve with curve parameters unaffected by the presence of other tracers. For each tracer, the HRMS method exhibits low limits of detection and good repeatability in the tested 0.1-15.0% enrichment range. Additionally, due to short sample preparation and analysis times, the method is well-suited for high-throughput determination of multiple glucose tracer enrichments in plasma samples.

  16. Towards a universal master curve in magnetorheology

    NASA Astrophysics Data System (ADS)

    Ruiz-López, José Antonio; Hidalgo-Alvarez, Roque; de Vicente, Juan

    2017-05-01

    We demonstrate that inverse ferrofluids behave as model magnetorheological fluids. A universal master curve is proposed, using a reduced Mason number, under the frame of a structural viscosity model where the magnetic field strength dependence is solely contained in the Mason number and the particle concentration is solely contained in the critical Mason number (i.e. the yield stress). A linear dependence of the critical Mason number with the particle concentration is observed that is in good agreement with a mean (average) magnetization approximation, particle level dynamic simulations and micromechanical models available in the literature.

  17. Dislocation model of nucleation and development of slip bands and their effect on service life of structural materials subject to cyclic loading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shetulov, D. I.; Andreev, V. V., E-mail: vyach.andreev@mail.ru; Myasnikov, A. M.

    Most failures of machine parts are of a fatigue character. Under cyclic loading, a surface layer in which hardening-softening processes rapidly occur forms almost immediately. The interaction of plastic-deformation traces with each other and with other structural elements, such as grains, produces a characteristic microstructure on the surface of machine parts subject to cyclic loading. The pattern of slip-band accumulation and the slip-band shape (narrow, wide, twisting, or broken) depend on the conditions (factors) under which the cyclic loading occurs. The fatigue-resistance index, expressed in terms of the slope of the left portion of the fatigue curve linearized in logarithmic coordinates, also depends on the set of relevant factors. The dependence of surface damageability on the fatigue-resistance index makes it possible to predict the fatigue curve from a description of the factors acting on a part or structure. The position of the inflection point of the curve in the high-cycle fatigue region (the endurance limit and the corresponding number of loading cycles, i.e., the ordinate and abscissa of the inflection point) also depends on this set of factors. Combined with the previously obtained slope of the left portion of the curve in the high-cycle fatigue region, this makes it possible to construct an a priori fatigue curve, reducing the scope of the fatigue tests required and, hence, the high expense associated with their long duration and high cost. With the developed prediction method, the scope of tests may be reduced to a minimum of one or two samples at the predicted level of the endurance limit.

  18. Characteristic features of a high-energy x-ray spectra estimation method based on the Waggener iterative perturbation principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iwasaki, Akira; Kubota, Mamoru; Hirota, Junichi

    2006-11-15

    We have redeveloped a high-energy x-ray spectra estimation method reported by Iwasaki et al. [A. Iwasaki, H. Matsutani, M. Kubota, A. Fujimori, K. Suzaki, and Y. Abe, Radiat. Phys. Chem. 67, 81-91 (2003)]. The method is based on the iterative perturbation principle to minimize differences between measured and calculated transmission curves, originally proposed by Waggener et al. [R. G. Waggener, M. M. Blough, J. A. Terry, D. Chen, N. E. Lee, S. Zhang, and W. D. McDavid, Med. Phys. 26, 1269-1278 (1999)]. The method can estimate spectra applicable for media at least from water to lead using only about ten energy bins. Estimating spectra of 4-15 MV x-ray beams from a linear accelerator, we describe characteristic features of the method with regard to parameters including the prespectrum, number of transmission measurements, number of energy bins, energy bin widths, and artifactual bipeaked spectrum production.

  19. On the compressible Taylor–Couette problem

    NASA Astrophysics Data System (ADS)

    Manela, A.; Frankel, I.

    We consider the linear temporal stability of a Couette flow of a Maxwell gas within the gap between a rotating inner cylinder and a concentric stationary outer cylinder, both maintained at the same temperature. The neutral curve is obtained for arbitrary Mach (Ma) and arbitrarily small Knudsen (Kn) numbers by use of a continuum model, and is verified via comparison to direct simulation Monte Carlo results. At subsonic rotation speeds we find, for the radial ratios considered here, that the neutral curve nearly coincides with the constant-Reynolds-number curve pertaining to the critical value for the onset of instability in the corresponding incompressible-flow problem. With increasing Mach number, transition is deferred to larger Reynolds numbers. It is remarkable that for a fixed Reynolds number, instability is always eventually suppressed beyond some supersonic rotation speed. To clarify this we examine the variation of the reference Couette flow with increasing Ma and analyse the narrow-gap limit of the compressible Taylor-Couette problem. The results of these analyses suggest that, as in the incompressible problem, the onset of instability at supersonic speeds is still essentially determined through the balance of inertial and viscous-dissipative effects. Suppression of instability is brought about by increased rates of dissipation associated with the elevated bulk-fluid temperatures occurring at supersonic speeds. A useful approximation is obtained for the neutral curve throughout the entire range of Mach numbers by an adaptation of the familiar incompressible stability criteria, with the critical Reynolds (or Taylor) numbers now based on average fluid properties. The narrow-gap analysis further indicates that the resulting approximate neutral curve obtained in the (Ma, Kn) plane consists of two branches: (i) a subsonic part corresponding to a constant ratio Ma/Kn (i.e. a constant critical Reynolds number), and (ii) a supersonic branch which at large Ma values corresponds to a constant product Ma·Kn. Finally, our analysis helps to resolve some conflicting views in the literature regarding apparently destabilizing compressibility effects.

  20. Linking Parameters Estimated with the Generalized Graded Unfolding Model: A Comparison of the Accuracy of Characteristic Curve Methods

    ERIC Educational Resources Information Center

    Anderson Koenig, Judith; Roberts, James S.

    2007-01-01

    Methods for linking item response theory (IRT) parameters are developed for attitude questionnaire responses calibrated with the generalized graded unfolding model (GGUM). One class of IRT linking methods derives the linking coefficients by comparing characteristic curves, and three of these methods---test characteristic curve (TCC), item…

  1. Relating oxygen partial pressure, saturation and content: the haemoglobin-oxygen dissociation curve.

    PubMed

    Collins, Julie-Ann; Rudenski, Aram; Gibson, John; Howard, Luke; O'Driscoll, Ronan

    2015-09-01

    The delivery of oxygen by arterial blood to the tissues of the body has a number of critical determinants, including blood oxygen concentration (content), saturation (SO2) and partial pressure, haemoglobin concentration, and cardiac output, including its distribution. The haemoglobin-oxygen dissociation curve, a graphical representation of the relationship between oxygen saturation and oxygen partial pressure, helps us to understand some of the principles underpinning this process. Historically this curve was derived from very limited data based on blood samples from small numbers of healthy subjects which were manipulated in vitro, and was ultimately described by equations such as those published by Severinghaus in 1979. In a study of 3524 clinical specimens, we found that this equation estimated the SO2 in blood from patients with normal pH and SO2 >70% with remarkable accuracy and, to our knowledge, this is the first large-scale validation of this equation using clinical samples. Oxygen saturation by pulse oximetry (SpO2) is nowadays the standard clinical method for assessing arterial oxygen saturation, providing a convenient, pain-free means of continuously assessing oxygenation, provided the interpreting clinician is aware of important limitations. The use of pulse oximetry reduces the need for arterial blood gas analysis (SaO2), as many patients who are not at risk of hypercapnic respiratory failure or metabolic acidosis and have acceptable SpO2 do not necessarily require blood gas analysis. While arterial sampling remains the gold-standard method of assessing ventilation and oxygenation, in those patients in whom blood gas analysis is indicated, arterialised capillary samples also have a valuable role in patient care. The clinical role of venous blood gases, however, remains less well defined.
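
    The Severinghaus relation referred to above is commonly quoted in the following form; the sketch uses that published form for illustration (PO2 in mmHg) and is not the paper's validation code.

```python
# The Severinghaus (1979) haemoglobin-oxygen dissociation relation, commonly
# quoted as SO2 = 100 / (23400 / (P^3 + 150*P) + 1), with P = PO2 in mmHg.
def severinghaus_so2(po2_mmhg: float) -> float:
    """Estimated haemoglobin O2 saturation (%) for a given oxygen partial pressure."""
    p = po2_mmhg
    return 100.0 / (23400.0 / (p ** 3 + 150.0 * p) + 1.0)

for po2 in (27, 40, 60, 100):   # ~P50, mixed venous, hypoxaemia threshold, normal arterial
    print(po2, f"{severinghaus_so2(po2):.1f}%")
```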

  2. A Novel Database to Rank and Display Archeomagnetic Intensity Data

    NASA Astrophysics Data System (ADS)

    Donadini, F.; Korhonen, K.; Riisager, P.; Pesonen, L. J.; Kahma, K.

    2005-12-01

    To understand the content and the causes of the changes in the Earth's magnetic field beyond the observatory records, one has to rely on archeomagnetic and lake-sediment paleomagnetic data. The regional archeointensity curves are often of different quality and temporally variable, which hampers the global analysis of the data in terms of dipole vs. non-dipole field. We have developed a novel archeointensity database application utilizing MySQL, PHP (PHP Hypertext Preprocessor), and the Generic Mapping Tools (GMT) for ranking and displaying geomagnetic intensity data from the last 12000 years. Our application has the advantage that no specific software is required to query the database and view the results. Querying the database is performed using any Web browser; a fill-out form is used to enter the site location and a minimum ranking value to select the data points to be displayed. The form also offers the possibility of plotting the data as an archeointensity curve with error bars, and as a Virtual Axial Dipole Moment (VADM) or ancient field value (Ba) curve calculated using the CALS7K model (Continuous Archaeomagnetic and Lake Sediment geomagnetic model) of Korte and Constable (2005). The results of a query are displayed on a Web page containing a table summarizing the query parameters, a table showing the archeointensity values satisfying the query parameters, and a plot of VADM or Ba as a function of sample age. The database consists of eight related tables. The main one, INTENSITIES, stores the 3704 archeointensity measurements collected from 159 publications as VADM (and VDM when available) and Ba values, including their standard deviations and sampling locations; it also contains the number of samples and specimens measured from each site. The REFS table stores the references to a particular study. The names, latitudes, and longitudes of the regions where the samples were collected are stored in the SITES table. The MATERIALS, METHODS, SPECIMEN_TYPES and DATING_METHODS tables store information about the sample materials, intensity determination methods, specimen types and age determination methods. The SIGMA_COUNT table is used indirectly for ranking data according to the number of samples measured and their standard deviations. Each intensity measurement is assigned a score (0-2) depending on the number of specimens measured and their standard deviations, the intensity determination method, the type of specimens measured, and the materials. The ranking of each data point is calculated as the sum of the four scores and varies between 0 and 8. Additionally, users can select the parameters that will be included in the ranking.
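
    A sketch of the ranking scheme described above: four criteria scored 0-2 each and summed to a 0-8 rank. The individual scoring thresholds below are hypothetical; the abstract states which criteria are scored but not the cut points.

```python
# Hedged sketch of the 0-8 quality ranking for one archeointensity measurement.
def rank_measurement(n_specimens: int, rel_std: float,
                     method_score: int, specimen_score: int, material_score: int) -> int:
    """Return a 0-8 quality rank as the sum of four 0-2 criterion scores."""
    if n_specimens >= 5 and rel_std <= 0.10:      # hypothetical cut points
        stats_score = 2
    elif n_specimens >= 3 and rel_std <= 0.20:
        stats_score = 1
    else:
        stats_score = 0
    # Method, specimen-type, and material scores (0-2 each) would come from
    # lookup tables in the real database; here they are passed in directly.
    return stats_score + method_score + specimen_score + material_score

print(rank_measurement(6, 0.08, 2, 1, 2))  # -> 7
```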

  3. Virus Neutralisation: New Insights from Kinetic Neutralisation Curves

    PubMed Central

    Magnus, Carsten

    2013-01-01

    Antibodies binding to the surface of virions can lead to virus neutralisation. Different theories have been proposed to determine the number of antibodies that must bind to a virion for neutralisation. Early models are based on chemical binding kinetics; applying these models leads to very low estimates of the number of antibodies needed for neutralisation. In contrast, according to the more conceptual approach of stoichiometries in virology, a much higher number of antibodies is required for virus neutralisation. Here, we combine chemical binding kinetics with (virological) stoichiometries to better explain virus neutralisation by antibody binding. This framework is in agreement with published data on the neutralisation of the human immunodeficiency virus. Knowing antibody reaction constants, our model allows us to estimate stoichiometrical parameters from kinetic neutralisation curves. In addition, we can identify important parameters that will make further analysis of kinetic neutralisation curves more valuable in the context of estimating stoichiometries. Our model gives a more subtle explanation of kinetic neutralisation curves in terms of single-hit and multi-hit kinetics. PMID:23468602

  4. Study of Dynamic Characteristics of Aeroelastic Systems Utilizing Randomdec Signatures

    NASA Technical Reports Server (NTRS)

    Chang, C. S.

    1975-01-01

    The feasibility of utilizing the random decrement method, in conjunction with a signature analysis procedure, to determine the dynamic characteristics of an aeroelastic system for the purpose of on-line prediction of the potential onset of flutter was examined. Digital computer programs were developed to simulate sampled response signals of a two-mode aeroelastic system. Simulated response data were used to test the random decrement method, and a special curve-fit approach was developed for analyzing the resulting signatures. A number of numerical experiments were conducted on the combined processes. The method is capable of determining frequency and damping values accurately from randomdec signatures of carefully selected lengths.

  5. Anesthesiologists' learning curves for bedside qualitative ultrasound assessment of gastric content: a cohort study.

    PubMed

    Arzola, Cristian; Carvalho, Jose C A; Cubillos, Javier; Ye, Xiang Y; Perlas, Anahi

    2013-08-01

    Focused assessment of the gastric antrum by ultrasound is a feasible tool to evaluate the quality of the stomach content. We aimed to determine the amount of training an anesthesiologist would need to achieve competence in the bedside ultrasound technique for qualitative assessment of gastric content. Six anesthesiologists underwent a teaching intervention followed by a formative assessment; then learning curves were constructed. Participants received didactic teaching (reading material, picture library, and lecture) and an interactive hands-on workshop on live models directed by an expert sonographer. The participants were instructed on how to perform a systematic qualitative assessment to diagnose one of three distinct categories of gastric content (empty, clear fluid, solid) in healthy volunteers. Individual learning curves were constructed using the cumulative sum method, and competence was defined as a 90% success rate in a series of ultrasound examinations. A predictive model was further developed based on the entire cohort performance to determine the number of cases required to achieve a 95% success rate. Each anesthesiologist performed 30 ultrasound examinations (a total of 180 assessments), and three of the six participants achieved competence. The average number of cases required to achieve 90% and 95% success rates was estimated to be 24 and 33, respectively. With appropriate training and supervision, it is estimated that anesthesiologists will achieve a 95% success rate in bedside qualitative ultrasound assessment after performing approximately 33 examinations.
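
    A minimal cumulative-sum (CUSUM) learning-curve sketch in the spirit of the study: each examination contributes (outcome − p0), where p0 is the acceptable failure rate, so the curve trends downward once a trainee's failure rate falls below p0. The failure sequence and p0 = 0.10 (for a 90% success target) are synthetic.

```python
# Hedged CUSUM learning-curve sketch with synthetic exam outcomes (1 = failure).
import numpy as np

failures = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0,   # early attempts: frequent misses
                     0, 1, 0, 0, 0, 0, 0, 0, 0, 0,   # later attempts: rare misses
                     0, 0, 0, 1, 0, 0, 0, 0, 0, 0])
p0 = 0.10
cusum = np.cumsum(failures - p0)
print(np.round(cusum, 1))   # a sustained downward trend suggests competence
```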

  6. Analysis of the Casson and Carreau-Yasuda non-Newtonian blood models in steady and oscillatory flows using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Boyd, Joshua; Buick, James M.; Green, Simon

    2007-09-01

    The lattice Boltzmann method is modified to allow the simulation of non-Newtonian shear-dependent viscosity models. Casson and Carreau-Yasuda non-Newtonian blood viscosity models are implemented and are used to compare two-dimensional Newtonian and non-Newtonian flows in the context of simple steady flow and oscillatory flow in straight and curved pipe geometries. It is found that compared to analogous Newtonian flows, both the Casson and Carreau-Yasuda flows exhibit significant differences in the steady flow situation. In the straight pipe oscillatory flows, both models exhibit differences in velocity and shear, with the largest differences occurring at low Reynolds and Womersley numbers. Larger differences occur for the Casson model. In the curved pipe Carreau-Yasuda model, moderate differences are observed in the velocities in the central regions of the geometries, and the largest shear rate differences are observed near the geometry walls. These differences may be important for the study of atherosclerotic progression.
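
    The Carreau-Yasuda viscosity law named above has the standard form μ(γ̇) = μ∞ + (μ0 − μ∞)[1 + (λγ̇)^a]^((n−1)/a). The parameter values in this sketch are typical literature values for blood, given for illustration only and not necessarily those used in the simulations.

```python
# Hedged sketch of the Carreau-Yasuda shear-dependent viscosity model for blood.
def carreau_yasuda(shear_rate: float,
                   mu0: float = 0.16, mu_inf: float = 0.0035,
                   lam: float = 8.2, a: float = 0.64, n: float = 0.2128) -> float:
    """Apparent viscosity (Pa·s) at a given shear rate (1/s)."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** a) ** ((n - 1.0) / a)

for gd in (0.1, 1.0, 10.0, 100.0):     # shear thinning: viscosity falls with shear rate
    print(gd, f"{carreau_yasuda(gd):.5f}")
```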

  7. Hough transform method for track finding in center drift chamber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azmi, K. A. Mohammad Kamal, E-mail: khasmidatul@siswa.um.edu.my; Wan Abdullah, W. A. T., E-mail: wat@um.edu.my; Ibrahim, Zainol Abidin

    The Hough transform is a global tracking method expected to provide a faster approach for tracking the circular pattern of an electron moving in a Center Drift Chamber (CDC), by transforming each hit point into a circular curve in parameter space. This paper presents an implementation of the Hough transform method for the reconstruction of CDC tracks that were generated by random numbers in a C-language program. The results show a peak in the circle-parameter values (xc, yc, rc) that indicates the parameters needed for the circular track of a charged particle in the CDC region.
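
    A minimal circle Hough transform sketch (the paper's implementation is in C; this illustration uses Python): every hit votes for the (xc, yc, rc) triples it could lie on, and the accumulator peak gives the track circle. The grid resolution and synthetic hits are illustrative only.

```python
# Hedged sketch of a circle Hough transform over a 3D (xc, yc, rc) accumulator.
import numpy as np

# Synthetic hits on an arc of a circle centered at (5, 4) with radius 3.
hits = [(np.cos(t) * 3 + 5, np.sin(t) * 3 + 4) for t in np.linspace(0, 4, 12)]
xc_grid = np.arange(0, 10, 0.5)
yc_grid = np.arange(0, 10, 0.5)
r_grid = np.arange(1, 6, 0.5)

acc = np.zeros((len(xc_grid), len(yc_grid), len(r_grid)), dtype=int)
for x, y in hits:
    for i, xc in enumerate(xc_grid):
        for j, yc in enumerate(yc_grid):
            r = np.hypot(x - xc, y - yc)           # radius implied by this center
            k = int(round((r - r_grid[0]) / 0.5))  # radius bin
            if 0 <= k < len(r_grid):
                acc[i, j, k] += 1

i, j, k = np.unravel_index(np.argmax(acc), acc.shape)
print(f"peak at xc={xc_grid[i]}, yc={yc_grid[j]}, rc={r_grid[k]}")  # ~ (5, 4, 3)
```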

  8. Application of active distributed temperature sensing and fiber optics as sensors to determine the unsaturated hydraulic conductivity curve

    NASA Astrophysics Data System (ADS)

    Zubelzu, Sergio; Rodriguez-Sinobas, Leonor; Sobrino, Fernando

    2017-04-01

    The development of methodologies for characterizing soil water content through distributed temperature sensing (DTS) with fiber-optic cable has allowed water movement in soils to be modelled with high temporal and spatial accuracy. One advantage of using fiber optics as a sensor, compared with traditional point water probes, is the possibility of measuring the variable continuously along the cable, every 0.125 m (up to a cable length of 1500 m) and every second. Traditionally, applications based on fiber optics as a soil water sensor apply the actively heated fiber-optic (AHFO) technique to follow the evolution of soil water content during and after irrigation events, or for hydrologic characterization. This paper, however, reports an original experience of using AHFO as a sensor to characterize the soil hydraulic conductivity curve under unsaturated conditions. The non-linear relation between hydraulic conductivity and soil water content, with a steep slope in the range close to saturation, makes AHFO a most suitable sensor, owing to its ability to measure the variable at small time and length intervals. It is thus possible to obtain a large number of accurate data points with which to estimate the hydraulic conductivity curve from the general water-flow equation by numerical methods. The results are promising and show the feasibility of this technique for estimating the hydraulic conductivity curve of unsaturated soils.

  9. Spatial and dose–response analysis of fibrotic lung changes after stereotactic body radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinogradskiy, Yevegeniy; Diot, Quentin; Kavanagh, Brian

    2013-08-15

    Purpose: Stereotactic body radiation therapy (SBRT) is becoming the standard of care for early-stage nonoperable lung cancers. Accurate dose–response modeling is challenging for SBRT because of the decreased number of clinical toxicity events. As a surrogate for a clinical toxicity endpoint, studies have proposed to use radiographic changes in follow-up computed tomography (CT) scans to evaluate lung SBRT normal tissue effects. The purpose of the current study was to use local fibrotic lung regions to spatially and dosimetrically evaluate lung changes in patients who underwent SBRT. Methods: Forty-seven SBRT patients treated at our institution from 2003 to 2009 were used for the current study. Our patient cohort had a total of 148 follow-up CT scans ranging from 3 to 48 months post-therapy. Post-treatment scans were binned into intervals of 3, 6, 12, 18, 24, 30, and 36 months after the completion of treatment. Deformable image registration was used to align the follow-up CT scans with the pretreatment CT and dose distribution. Areas of visible fibrotic changes were contoured. The centroid of each gross tumor volume (GTV) and contoured fibrosis volume was calculated, and the fibrosis volume location and movement (magnitude and direction) relative to the GTV and 30 Gy isodose centroid were analyzed. To perform a dose–response analysis, each voxel in the fibrosis volume was sorted into 10 Gy dose bins and the average CT number value for each dose bin was calculated. Dose–response curves were generated by plotting the CT number as a function of dose bin and time post-therapy. Results: Both fibrosis and GTV centroids were concentrated in the upper third of the lung. The average radial movement of fibrosis centroids relative to the GTV centroids was 2.6 cm, with movement greater than 5 cm occurring in 11% of patients. Evaluating the dose–response curves revealed an overall trend of increasing CT number as a function of dose. The authors observed a CT number plateau at doses ranging from 30 to 50 Gy for the 3, 6, and 12 month post-therapy time points. There was no evident plateau for the dose–response curves generated using data from the 18, 24, 30, and 36 month post-therapy time points. Conclusions: Regions of local fibrotic lung changes in patients who underwent SBRT were evaluated spatially and dosimetrically. The authors found that the average fibrosis movement was 2.6 cm, with movement greater than 5 cm possible. Evaluating the dose–response curves revealed an overall trend of increasing CT number as a function of dose. Furthermore, the dose–response data also suggest that one possible explanation of the CT number plateau effect may be the time post-therapy of the acquired data. Understanding normal tissue dose–response is important for reducing toxicity after SBRT, especially in cases where larger tumors are treated. The methods presented in the current work build on prior quantitative studies and further enhance the understanding of normal lung dose–response after SBRT.
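
    The voxel-binning step of the dose–response analysis is straightforward to sketch. The following Python fragment sorts voxels into 10 Gy dose bins and averages the CT number per bin; the voxel arrays are synthetic stand-ins, not study data.

        import numpy as np

        def dose_response_curve(dose, ct_number, bin_width=10.0):
            """Average CT number of fibrosis-volume voxels per dose bin (Gy)."""
            bin_idx = np.floor(dose / bin_width).astype(int)
            centers, means = [], []
            for b in np.unique(bin_idx):
                centers.append((b + 0.5) * bin_width)
                means.append(ct_number[bin_idx == b].mean())
            return np.array(centers), np.array(means)

        rng = np.random.default_rng(0)
        dose = rng.uniform(0, 60, 5000)                      # Gy, hypothetical voxels
        hu = -700 + 8 * dose + rng.normal(0, 40, dose.size)  # toy CT-number trend
        print(dose_response_curve(dose, hu))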

  10. Exploring Algorithms for Stellar Light Curves With TESS

    NASA Astrophysics Data System (ADS)

    Buzasi, Derek

    2018-01-01

    The Kepler and K2 missions have produced tens of thousands of stellar light curves, which have been used to measure rotation periods, characterize photometric activity levels, and explore phenomena such as differential rotation. The quasi-periodic nature of rotational light curves, combined with the potential presence of additional periodicities not due to rotation, complicates the analysis of these time series and makes characterization of uncertainties difficult. A variety of algorithms have been used for the extraction of rotational signals, including autocorrelation functions, discrete Fourier transforms, Lomb-Scargle periodograms, wavelet transforms, and the Hilbert-Huang transform. In addition, in the case of K2, a number of different pipelines have been used to produce initial detrended light curves from the raw image frames. In the near future, TESS photometry, particularly that deriving from the full-frame images (FFIs), will dramatically expand the number of such light curves, but details of the pipeline to be used to produce photometry from the FFIs remain under development. K2 data offer us an opportunity to explore the utility of different reduction and analysis tool combinations applied to these astrophysically important tasks. In this work, we apply a wide range of algorithms to light curves produced by a number of popular K2 pipeline products to better understand the advantages and limitations of each approach and provide guidance for the most reliable and most efficient analysis of TESS stellar data.
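
    Of the period-search tools named above, the Lomb-Scargle periodogram is the simplest to sketch for unevenly sampled photometry. A minimal example follows, assuming astropy is available; the light curve is synthetic.

        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0.0, 27.0, 800))   # days, a TESS-like baseline
        period_true = 3.7
        flux = (1.0 + 0.01 * np.sin(2 * np.pi * t / period_true)
                + rng.normal(0, 0.002, t.size))    # spot-like signal plus noise

        frequency, power = LombScargle(t, flux).autopower(maximum_frequency=5.0)
        print("best period: %.3f d" % (1.0 / frequency[np.argmax(power)]))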

  11. Accurate Solution of Multi-Region Continuum Biomolecule Electrostatic Problems Using the Linearized Poisson-Boltzmann Equation with Curved Boundary Elements

    PubMed Central

    Altman, Michael D.; Bardhan, Jaydeep P.; White, Jacob K.; Tidor, Bruce

    2009-01-01

    We present a boundary-element method (BEM) implementation for accurately solving problems in biomolecular electrostatics using the linearized Poisson–Boltzmann equation. Motivating this implementation is the desire to create a solver capable of precisely describing the geometries and topologies prevalent in continuum models of biological molecules. This implementation is enabled by the synthesis of four technologies developed or implemented specifically for this work. First, molecular and accessible surfaces used to describe dielectric and ion-exclusion boundaries were discretized with curved boundary elements that faithfully reproduce molecular geometries. Second, we avoided explicitly forming the dense BEM matrices and instead solved the linear systems with a preconditioned iterative method (GMRES), using a matrix compression algorithm (FFTSVD) to accelerate matrix-vector multiplication. Third, robust numerical integration methods were employed to accurately evaluate singular and near-singular integrals over the curved boundary elements. Finally, we present a general boundary-integral approach capable of modeling an arbitrary number of embedded homogeneous dielectric regions with differing dielectric constants, possible salt treatment, and point charges. A comparison of the presented BEM implementation and standard finite-difference techniques demonstrates that for certain classes of electrostatic calculations, such as determining absolute electrostatic solvation and rigid-binding free energies, the improved convergence properties of the BEM approach can have a significant impact on computed energetics. We also demonstrate that the improved accuracy offered by the curved-element BEM is important when more sophisticated techniques, such as non-rigid-binding models, are used to compute the relative electrostatic effects of molecular modifications. In addition, we show that electrostatic calculations requiring multiple solves using the same molecular geometry, such as charge optimization or component analysis, can be computed to high accuracy using the presented BEM approach, in compute times comparable to traditional finite-difference methods. PMID:18567005
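
    The "never form the dense matrix" strategy described above can be sketched with SciPy's matrix-free GMRES: the solver only needs a function that applies the operator. The smooth-kernel operator below is a stand-in for a compressed BEM matrix-vector product, not the FFTSVD algorithm itself.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        n = 500
        x_pts = np.linspace(0.0, 1.0, n)
        kernel = 1.0 / (1.0 + 50.0 * (x_pts[:, None] - x_pts[None, :]) ** 2)

        def matvec(v):
            # Placeholder for a fast (compressed) apply of the BEM operator
            return 2.0 * v + kernel @ v / n

        A = LinearOperator((n, n), matvec=matvec)
        b = np.ones(n)
        x, info = gmres(A, b)
        print(info, np.linalg.norm(matvec(x) - b))  # info == 0 means converged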

  12. Illustration of year-to-year variation in wheat spectral profile crop growth curves. [Kansas, Oklahoma, North Dakota and South Dakota

    NASA Technical Reports Server (NTRS)

    Gonzalez, P.; Jones, C. (Principal Investigator)

    1980-01-01

    Data previously compiled on the year-to-year variability of spectral profile crop growth parameters for spring and winter wheat in Kansas, Oklahoma, and the Dakotas were used with a profile model to develop graphs illustrating spectral profile crop growth curves for a number of years and a number of spring and winter wheat segments. These curves show the apparent variability in spectral profiles for wheat from one year to another within the same segment and from one segment to another within the same year.

  13. Information geometric methods for complexity

    NASA Astrophysics Data System (ADS)

    Felice, Domenico; Cafaro, Carlo; Mancini, Stefano

    2018-03-01

    Research on the use of information geometry (IG) in modern physics has witnessed significant advances recently. In this review article, we report on the utilization of IG methods to define measures of complexity in both classical and, whenever available, quantum physical settings. A paradigmatic example of a dramatic change in complexity is given by phase transitions (PTs). Hence, we review both global and local aspects of PTs described in terms of the scalar curvature of the parameter manifold and the components of the metric tensor, respectively. We also report on the behavior of geodesic paths on the parameter manifold used to gain insight into the dynamics of PTs. Going further, we survey measures of complexity arising in the geometric framework. In particular, we quantify complexity of networks in terms of the Riemannian volume of the parameter space of a statistical manifold associated with a given network. We are also concerned with complexity measures that account for the interactions of a given number of parts of a system that cannot be described in terms of a smaller number of parts of the system. Finally, we investigate complexity measures of entropic motion on curved statistical manifolds that arise from a probabilistic description of physical systems in the presence of limited information. The Kullback-Leibler divergence, the distance to an exponential family, and volumes of curved parameter manifolds are examples of essential IG notions exploited in our discussion of complexity. We conclude by discussing strengths, limits, and possible future applications of IG methods to the physics of complexity.

  14. Applying active learning to supervised word sense disambiguation in MEDLINE

    PubMed Central

    Chen, Yukun; Cao, Hongxin; Mei, Qiaozhu; Zheng, Kai; Xu, Hua

    2013-01-01

    Objectives This study aimed to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples while keeping or improving the quality of disambiguation models. Methods We developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty-sampling-based active learning algorithms were implemented with the SVM classifiers and were compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model was generated. The area under the learning curve (ALC) was used as the primary metric for evaluation. Results Our experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. Conclusions This study demonstrated that integrating active learning strategies with supervised WSD methods could effectively reduce annotation cost and improve the disambiguation models. PMID:23364851
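
    A minimal uncertainty-sampling loop of the kind evaluated above can be written in a few lines with scikit-learn. The data here are synthetic (the MSH WSD features are not reproduced), and the ALC is computed as the mean accuracy over the learning curve.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=600, n_features=20, random_state=0)
        X_pool, y_pool, X_test, y_test = X[:500], y[:500], X[500:], y[500:]

        rng = np.random.default_rng(0)
        labeled = list(rng.choice(len(X_pool), 10, replace=False))
        accs = []
        for _ in range(20):
            clf = SVC(kernel="linear").fit(X_pool[labeled], y_pool[labeled])
            accs.append(clf.score(X_test, y_test))
            unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
            margins = np.abs(clf.decision_function(X_pool[unlabeled]))
            labeled.append(unlabeled[np.argmin(margins)])  # query the most uncertain
        print("ALC (mean accuracy over the curve) ~ %.3f" % np.mean(accs))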

  15. Sibling Curves and Complex Roots 2: Looking Ahead

    ERIC Educational Resources Information Center

    Harding, Ansie; Engelbrecht, Johann

    2007-01-01

    This paper, the second of a two-part article, expands on an idea that appeared in the literature in the 1950s to show that by restricting the domain to those complex numbers that map onto real numbers, representations of functions other than the ones in the real plane are obtained. In other words, the well-known curves in the real plane only depict…

  16. Relative Frequency Distribution of D125 C Values for Spore Isolates from the Mariner-Mars 1969 Spacecraft

    PubMed Central

    Bond, W. W.; Favero, M. S.; Petersen, N. J.; Marshall, J. H.

    1971-01-01

    Bacterial spore crops were prepared from 103 randomly selected aerobic mesophilic isolates collected during a spore assay of Mariner-Mars 1969 spacecraft conducted by the Jet Propulsion Laboratory. D125 c values, which were determined by the fractional-replicate-unit-negative-most-probable number assay method using a forced air oven, ranged from less than 5 min to a maximum of 58 min. Subsequent identification of the 103 isolates indicated that there was no relationship between species and dry-heat resistance. A theoretical dry-heat survival curve of the “population” was nonlinear. The slope of this curve was determined almost exclusively by the more resistant organisms, although they represented only a small portion of the “population.” PMID:16349904

  17. Effect of tree-ring detrending method on apparent growth trends of black and white spruce in interior Alaska

    NASA Astrophysics Data System (ADS)

    Sullivan, Patrick F.; Pattison, Robert R.; Brownlee, Annalis H.; Cahoon, Sean M. P.; Hollingsworth, Teresa N.

    2016-11-01

    Boreal forests are critical sinks in the global carbon cycle. However, recent studies have revealed increasing frequency and extent of wildfires, decreasing landscape greenness, increasing tree mortality and declining growth of black and white spruce in boreal North America. We measured ring widths from a large set of increment cores collected across a vast area of interior Alaska and examined implications of data processing decisions for apparent trends in black and white spruce growth. We found that choice of detrending method had important implications for apparent long-term growth trends and the strength of climate-growth correlations. Trends varied from strong increases in growth since the Industrial Revolution, when ring widths were detrended using single-curve regional curve standardization (RCS), to strong decreases in growth, when ring widths were normalized by fitting a horizontal line to each ring width series. All methods revealed a pronounced growth peak for black and white spruce centered near 1940. Most detrending methods showed a decline from the peak, leaving recent growth of both species near the long-term mean. Climate-growth analyses revealed negative correlations with growing season temperature and positive correlations with August precipitation for both species. Multiple-curve RCS detrending produced the strongest and/or greatest number of significant climate-growth correlations. Results provide important historical context for recent growth of black and white spruce. Growth of both species might decline with future warming, if not mitigated by increasing precipitation. However, widespread drought-induced mortality is probably not imminent, given that recent growth was near the long-term mean.
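
    Two of the simpler detrending choices compared above are easy to sketch: the ring-width index is the raw width divided by a fitted "expected growth" curve. RCS itself requires age-aligned averaging over many series and is not reproduced here; the widths below are hypothetical.

        import numpy as np

        def detrend(rw, method="horizontal"):
            """Ring-width index = raw width / fitted curve (horizontal or linear)."""
            t = np.arange(rw.size)
            if method == "horizontal":
                fit = np.full(rw.size, rw.mean())
            else:  # "linear"
                fit = np.polyval(np.polyfit(t, rw, 1), t)
            return rw / fit

        rw = np.array([1.2, 1.1, 0.9, 1.0, 0.8, 0.7, 0.75, 0.6])  # mm, hypothetical
        print(detrend(rw))
        print(detrend(rw, "linear"))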

  18. A comparison of bootstrap methods and an adjusted bootstrap approach for estimating the prediction error in microarray classification.

    PubMed

    Jiang, Wenyu; Simon, Richard

    2007-12-20

    This paper first provides a critical review of some existing methods for estimating the prediction error in classifying microarray data, where the number of genes greatly exceeds the number of specimens. Special attention is given to the bootstrap-related methods. When the sample size n is small, we find that all the reviewed methods suffer from either substantial bias or variability. We introduce a repeated leave-one-out bootstrap (RLOOB) method that predicts for each specimen in the sample using bootstrap learning sets of size ln. We then propose an adjusted bootstrap (ABS) method that fits a learning curve to the RLOOB estimates calculated with different bootstrap learning set sizes. The ABS method is robust across the situations we investigate and provides a slightly conservative estimate for the prediction error. Even with small samples, it does not suffer from the large upward bias of the leave-one-out bootstrap and the 0.632+ bootstrap, nor from the large variability of leave-one-out cross-validation in microarray applications. Copyright (c) 2007 John Wiley & Sons, Ltd.
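
    The curve-fitting idea behind the ABS method can be sketched as follows: fit a learning curve to error estimates obtained at several learning-set sizes and read off the error at the full sample size. The inverse-power-law form and the numbers below are illustrative assumptions, not the paper's.

        import numpy as np
        from scipy.optimize import curve_fit

        def learning_curve(n, a, b, c):
            # Error decays toward the asymptote a as the learning set grows
            return a + b * n ** (-c)

        sizes = np.array([20, 30, 40, 50, 60], float)    # learning-set sizes
        err = np.array([0.31, 0.27, 0.25, 0.24, 0.235])  # hypothetical RLOOB estimates
        popt, _ = curve_fit(learning_curve, sizes, err, p0=[0.2, 1.0, 0.5], maxfev=10000)
        print("predicted error at n=80: %.3f" % learning_curve(80.0, *popt))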

  19. Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method

    NASA Astrophysics Data System (ADS)

    Verachtert, R.; Lombaert, G.; Degrande, G.

    2018-03-01

    This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
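
    The geometric core of the method, fitting a circle to points of the Nyquist plot, can be sketched with an algebraic (Kasa) least-squares fit; this is one standard circle fit, not necessarily the authors' estimator.

        import numpy as np

        def kasa_circle_fit(x, y):
            """Solve x^2 + y^2 = 2a*x + 2b*y + c in least squares; the center is
            (a, b) and the radius is sqrt(c + a^2 + b^2)."""
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            a, b, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
            return a, b, np.sqrt(c + a * a + b * b)

        # Noisy arc, as one Rayleigh mode traces in the Nyquist plane
        rng = np.random.default_rng(2)
        phi = np.linspace(0.2, 2.8, 60)
        x = 1.0 + 3.0 * np.cos(phi) + rng.normal(0, 0.05, phi.size)
        y = -2.0 + 3.0 * np.sin(phi) + rng.normal(0, 0.05, phi.size)
        print(kasa_circle_fit(x, y))  # approximately (1, -2, 3)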

  20. Estimating the R-curve from residual strength data

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1985-01-01

    A method is presented for estimating the crack-extension resistance curve (R-curve) from residual-strength (maximum load against original crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information can be used to estimate the failure loads of more complicated structures of the same material and thickness. The fundamentals of the R-curve concept are reviewed first. Then the analytical basis for the estimation method is presented. The estimation method has been verified in two ways. Data from the literature (involving several materials and different types of specimens) are used to show that the estimated R-curve is in good agreement with the measured R-curve. A recent predictive blind round-robin program offers a more crucial test. When the actual failure loads are disclosed, the predictions are found to be in good agreement.

  1. Curve Estimation of Number of People Killed in Traffic Accidents in Turkey

    NASA Astrophysics Data System (ADS)

    Berkhan Akalin, Kadir; Karacasu, Murat; Altin, Arzu Yavuz; Ergül, Bariş

    2016-10-01

    Events involving one or more vehicles in motion on the highway that result in death, injury, or loss are called traffic accidents. As a result of increasing population and traffic density, traffic accidents continue to increase, and this leads both to human losses and to harm to the economy; it also leads to social problems. Millions of people die in traffic accidents year after year, and a great majority of these accidents occur in developing countries. One of the most important tasks of transportation engineers is to reduce traffic accidents by creating a specific system. For that reason, statistical information about the traffic accidents of past years should be organized by experienced analysts. Factors affecting traffic accidents can be analyzed in various ways. In this study, the number of people killed in traffic accidents in Turkey is modelled. Fatalities were modelled with the curve-fitting method using the dataset of people killed in traffic accidents in Turkey between 1990 and 2014, and various models were used to predict the number of fatalities in future years. A linear model was found to be the most suitable for the estimates.
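
    The curve-fitting step can be sketched in a few lines: fit candidate polynomial models to annual fatality counts, compare residual error, and extrapolate. The counts below are invented for illustration; the study used Turkey's 1990-2014 data.

        import numpy as np

        years = np.arange(1990, 2000)
        deaths = np.array([9500, 9300, 9100, 9200, 8900,
                           8700, 8600, 8300, 8200, 8000])  # hypothetical counts
        t = years - years[0]  # shift the origin to avoid ill-conditioning

        for deg, name in [(1, "linear"), (2, "quadratic")]:
            coef = np.polyfit(t, deaths, deg)
            rss = np.sum((np.polyval(coef, t) - deaths) ** 2)
            pred = np.polyval(coef, 2005 - years[0])
            print("%-9s RSS=%8.0f  2005 prediction: %6.0f" % (name, rss, pred))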

  2. Inverse Diffusion Curves Using Shape Optimization.

    PubMed

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  3. Courant number and unsteady flow computation

    USGS Publications Warehouse

    Lai, Chintu; ,

    1993-01-01

    The Courant number C, the key to unsteady flow computation, is a ratio of physical wave velocity, λ, to computational signal-transmission velocity, τ, i.e., C = λ/τ. In this way, it uniquely relates a physical quantity to a mathematical quantity. Because most unsteady open-channel flows are describable by a set of n characteristic equations along n characteristic paths, each represented by velocity λi, i = 1, 2, ..., n, there exist as many as n components for the numerator of C. To develop a numerical model, a numerical integration must be made on each characteristic curve from an earlier point to a later point on the curve. Different numerical methods are available in unsteady flow computation due to the different paths along which the numerical integration is actually performed. For the denominator of C, the τ defined as τ = τ0 = Δx/Δt has been customarily used; thus, the Courant number has the familiar form of Cτ = λ/τ0. This form will be referred to as the "common Courant number" in this paper. The commonly used numerical criteria on Cτ for stability, neutral stability, and instability are imprecise or not universal in the sense that τ0 does not always reflect the true maximum computational data-transmission speed of the scheme at hand, i.e., Cτ is no indication of the Courant constraint. In view of this, a new Courant number, called the "natural Courant number", Cn, that truly reflects the Courant constraint, has been defined. However, considering the numerous advantages inherent in the traditional Cτ, a useful and meaningful composite Courant number, denoted by Cτ*, has been formulated from Cτ. It is hoped that the new aspects of the Courant number discussed herein afford the hydraulician a broader perspective, consistent criteria, and unified guidelines with which to model various unsteady flows.
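
    In its common form the check is one line, C = λΔt/Δx, with C <= 1 as the usual constraint for explicit schemes. A minimal sketch:

        # Common Courant number C = lambda / (dx/dt) and the usual C <= 1 check
        def courant(wave_speed, dx, dt):
            return wave_speed * dt / dx

        dx, dt = 100.0, 10.0              # grid spacing (m) and time step (s)
        for lam in (2.0, 8.0, 12.0):      # characteristic celerities (m/s)
            C = courant(lam, dx, dt)
            print("lambda=%5.1f m/s  C=%.2f  %s"
                  % (lam, C, "ok" if C <= 1.0 else "violates constraint"))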

  4. Effects of structural parameters on fluid flow and mixing performance in a curved microchannel with gaps and baffles

    NASA Astrophysics Data System (ADS)

    Li, Jian; Xia, Guodong; Li, Yifan; Tian, Xinping

    2013-07-01

    We provide three-dimensional numerical simulations of mixing performance in a newly proposed micromixer with different structural parameters. Equal numbers of gaps and baffles are arranged along the curved channel within a certain distance. The effects of their structural parameters on mixing efficiency are presented, including the position and feature size of the gaps and baffles and the curvature radius of the curved channel. The high mixing efficiency of the curved channel with gaps and baffles can be attributed to the interaction of the increased contact area for the premixed liquids, the jet and throttling effects over every gap-and-baffle unit, and the development of multidirectional vortices along the curved channel. The mixing index is sensitive to the width of the gaps and baffles in some Reynolds number ranges, but is not sensitive to the curvature radius of the curved channel. The dependence of the pressure drop on the Reynolds number is also investigated, in order to keep an appropriate balance with the mixing performance.

  5. Validity and extension of the SCS-CN method for computing infiltration and rainfall-excess rates

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Singh, Vijay P.

    2004-12-01

    A criterion is developed for determining the validity of the Soil Conservation Service curve number (SCS-CN) method. According to this criterion, the existing SCS-CN method is found to be applicable when the potential maximum retention, S, is less than or equal to twice the total rainfall amount. The criterion is tested using published data of two watersheds. Separating the steady infiltration from capillary infiltration, the method is extended for predicting infiltration and rainfall-excess rates. The extended SCS-CN method is tested using 55 sets of laboratory infiltration data on soils varying from Plainfield sand to Yolo light clay, and the computed and observed infiltration and rainfall-excess rates are found to be in good agreement.
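
    The standard SCS-CN relations are compact enough to state in code. The sketch below uses the textbook formulation in millimetres (S = 25400/CN - 254, Ia = 0.2S) together with the paper's validity criterion S <= 2P; the inputs are illustrative.

        def scs_cn_runoff(P_mm, CN, lam=0.2):
            """Classic SCS-CN runoff (mm): Q = (P - Ia)^2 / (P - Ia + S) for P > Ia."""
            S = 25400.0 / CN - 254.0      # potential maximum retention (mm)
            Ia = lam * S                  # initial abstraction
            Q = (P_mm - Ia) ** 2 / (P_mm - Ia + S) if P_mm > Ia else 0.0
            return Q, S

        for P, CN in [(50.0, 85), (20.0, 70)]:
            Q, S = scs_cn_runoff(P, CN)
            print("P=%4.0f mm CN=%d  S=%5.1f mm  Q=%5.1f mm  valid (S <= 2P): %s"
                  % (P, CN, S, Q, S <= 2.0 * P))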

  6. Modeling Surface Growth of Escherichia coli on Agar Plates

    PubMed Central

    Fujikawa, Hiroshi; Morozumi, Satoshi

    2005-01-01

    Surface growth of Escherichia coli cells on a membrane filter placed on a nutrient agar plate under various conditions was studied with a mathematical model. The surface growth of bacterial cells showed a sigmoidal curve with time on a semilogarithmic plot. To describe it, a new logistic model that we presented earlier (H. Fujikawa et al., Food Microbiol. 21:501-509, 2004) was modified. Growth curves at various constant temperatures (10 to 34°C) were successfully described with the modified model (model III). Model III gave better predictions of the rate constant of growth and the lag period than a modified Gompertz model and the Baranyi model. Using the parameter values of model III at the constant temperatures, surface growth at various temperatures was successfully predicted. Surface growth curves at various initial cell numbers were also sigmoidal and converged to the same maximum cell numbers at the stationary phase. Surface growth curves at various nutrient levels were also sigmoidal. The maximum cell number and the rate of growth were lower as the nutrient level decreased. The surface growth curve was the same as that in a liquid, except for the large curvature at the deceleration period. These curves were also well described with model III. The pattern of increase in the ATP content of cells grown on a surface was sigmoidal, similar to that for cell growth. We discovered several characteristics of the surface growth of bacterial cells under various growth conditions and examined the applicability of our model to describe these growth curves. PMID:16332768
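
    A plain logistic curve fitted to log counts illustrates the sigmoidal shape being modelled; the authors' model III is a modified logistic with additional terms and is not reproduced here. The counts below are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, n_max, r, t_mid):
            # Sigmoid in log10 cell number, as plotted on a semilogarithmic scale
            return n_max / (1.0 + np.exp(-r * (t - t_mid)))

        t = np.array([0, 4, 8, 12, 16, 20, 24, 30, 36, 48], float)           # h
        logN = np.array([3.0, 3.1, 3.6, 4.8, 6.2, 7.4, 8.3, 8.8, 8.9, 9.0])  # log10 CFU
        popt, _ = curve_fit(logistic, t, logN, p0=[9.0, 0.3, 12.0])
        print("Nmax=%.1f log10 CFU, rate=%.2f /h, midpoint=%.1f h" % tuple(popt))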

  7. [Difference of three standard curves of real-time reverse-transcriptase PCR in viable Vibrio parahaemolyticus quantification].

    PubMed

    Jin, Mengtong; Sun, Wenshuo; Li, Qin; Sun, Xiaohong; Pan, Yingjie; Zhao, Yong

    2014-04-04

    We evaluated the differences among three standard curves for quantifying viable Vibrio parahaemolyticus in samples by real-time reverse-transcriptase PCR (real-time RT-PCR). Standard curve A was established from 10-fold dilutions of cDNA reverse-transcribed from RNA synthesized in vitro. Standard curves B and C were established from 10-fold dilutions of cDNA synthesized from RNA isolated from V. parahaemolyticus in pure culture (10^8 CFU/mL) and in shrimp samples (10^6 CFU/g), respectively (standard curves A and C are proposed for the first time). The three standard curves were each used to quantify V. parahaemolyticus in six samples (two pure-culture V. parahaemolyticus samples, two artificially contaminated cooked Litopenaeus vannamei samples, and two artificially contaminated Litopenaeus vannamei samples), and the quantitative results were compared with plate-counting results and the differences analysed. All three standard curves showed a strong linear relationship between the fractional cycle number and V. parahaemolyticus concentration (R2 > 0.99). The quantitative results of real-time RT-PCR were significantly (p < 0.05) lower than the plate-counting results. The relative errors compared with plate counting ranked standard curve A (30.0%) > standard curve C (18.8%) > standard curve B (6.9%). The average differences between standard curve A and standard curves B and C were -2.25 and -0.75 Lg CFU/mL, respectively, with mean relative errors of 48.2% and 15.9%; the average difference between standard curves B and C ranged from 1.47 to 1.53 Lg CFU/mL, with average relative errors of 19.0%-23.8%. Standard curve B can be applied in real-time RT-PCR to quantify the number of viable microorganisms in samples.
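
    The standard-curve step itself is a linear regression of threshold cycle against log10 concentration of the serial dilutions; unknowns are then read off the line. The Ct values below are invented for illustration.

        import numpy as np

        log_conc = np.array([8, 7, 6, 5, 4, 3], float)       # log10 CFU/mL dilutions
        ct = np.array([12.1, 15.4, 18.8, 22.3, 25.7, 29.2])  # hypothetical Ct values

        slope, intercept = np.polyfit(ct, log_conc, 1)       # quantification line
        eff = 10.0 ** (-1.0 / np.polyfit(log_conc, ct, 1)[0]) - 1.0
        print("sample with Ct=20.0 -> %.2f log10 CFU/mL" % (slope * 20.0 + intercept))
        print("amplification efficiency ~ %.0f%%" % (100.0 * eff))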

  8. HYSOGs250m, global gridded hydrologic soil groups for curve-number-based runoff modeling.

    PubMed

    Ross, C Wade; Prihodko, Lara; Anchang, Julius; Kumar, Sanath; Ji, Wenjie; Hanan, Niall P

    2018-05-15

    Hydrologic soil groups (HSGs) are a fundamental component of the USDA curve-number (CN) method for estimating rainfall runoff, yet these data are not readily available in a format or spatial resolution suitable for regional- and global-scale modeling applications. We developed a globally consistent, gridded dataset defining HSGs from soil texture, bedrock depth, and groundwater. The resulting data product, HYSOGs250m, represents runoff potential at 250 m spatial resolution. Our analysis indicates that the global distribution of soil is dominated by moderately high runoff potential, followed by moderately low, high, and low runoff potential. Low runoff potential, sandy soils are found primarily in parts of the Sahara and Arabian Deserts. High runoff potential soils occur predominantly within tropical and sub-tropical regions. No clear pattern could be discerned for moderately low runoff potential soils, as they occur in arid and humid environments and at both high and low elevations. Potential applications of these data include CN-based runoff modeling, flood risk assessment, and use as a covariate for biogeographical analysis of vegetation distributions.

  9. Can a Resident's Publication Record Predict Fellowship Publications?

    PubMed Central

    Prasad, Vinay; Rho, Jason; Selvaraj, Senthil; Cheung, Mike; Vandross, Andrae; Ho, Nancy

    2014-01-01

    Background Internal medicine fellowship programs have an incentive to select fellows who will ultimately publish. Whether an applicant's publication record predicts long-term publishing remains unknown. Methods Using records of fellowship-bound internal medicine residents, we analyzed whether publications at the time of fellowship application predict publications more than 3 years (2 years into fellowship) and up to 7 years after the fellowship match. We calculated the sensitivity, specificity, positive and negative predictive values, and likelihood ratios for every cutoff number of application publications, and plotted a receiver operating characteristic curve of this test. Results Of 307 fellowship-bound residents, 126 (41%) published at least one article 3 to 7 years after matching, and 181 (59%) did not publish in this time period. The area under the receiver operating characteristic curve is 0.59. No cutoff value for application publications possessed adequate test characteristics. Conclusion The number of publications an applicant has at the time of fellowship application is a poor predictor of who publishes in the long term. These findings do not validate the practice of using application publications as a tool for selecting fellows. PMID:24658088
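
    The evaluation amounts to treating the application publication count as a score and computing an ROC curve over all cutoffs. A scikit-learn sketch with synthetic counts (a deliberately weak signal, not the study's data):

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(3)
        published = rng.binomial(1, 0.41, 307)           # publishes in years 3-7?
        n_pubs = rng.poisson(1.0 + 0.4 * published)      # publications at application

        print("AUC = %.2f" % roc_auc_score(published, n_pubs))
        fpr, tpr, thresholds = roc_curve(published, n_pubs)
        for f, t, c in zip(fpr, tpr, thresholds):
            print("cutoff >= %s: sensitivity=%.2f specificity=%.2f" % (c, t, 1 - f))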

  10. Melting curves and entropy of fusion of body-centered cubic tungsten under pressure

    NASA Astrophysics Data System (ADS)

    Liu, Chun-Mei; Chen, Xiang-Rong; Xu, Chao; Cai, Ling-Cang; Jing, Fu-Qian

    2012-07-01

    The melting curves and entropy of fusion of body-centered cubic (bcc) tungsten (W) under pressure are investigated via molecular dynamics (MD) simulations with an extended Finnis-Sinclair (EFS) potential. The zero-pressure melting point obtained agrees with experiment better than other theoretical results from MD simulations with the embedded-atom-method (EAM), Finnis-Sinclair (FS), and modified EAM potentials, and from ab initio MD simulations. Our radial distribution function and running coordination number analyses indicate that, apart from the expected increase in disorder, the main change on going from solid to liquid is a slight decrease in coordination number. Our entropy of fusion of W during melting, ΔS, at zero pressure, 7.619 J/(mol·K), is in good agreement with the experimental and other theoretical data. We found that, with increasing pressure, the entropy of fusion ΔS first decreases rapidly and then oscillates with pressure; when the pressure is higher than 100 GPa, the entropy of fusion ΔS is about 6.575 ± 0.086 J/(mol·K), showing a weaker pressure effect.

  11. Structural propensities and entropy effects in peptide helix-coil transitions

    NASA Astrophysics Data System (ADS)

    Chemmama, Ilan E.; Pelea, Adam Colt; Bhandari, Yuba R.; Chapagain, Prem P.; Gerstman, Bernard S.

    2012-09-01

    The helix-coil transition in peptides is a critical structural transition leading to functioning proteins. Peptide chains have a large number of possible configurations that must be accounted for in statistical mechanical investigations. Using hydrogen bond and local helix propensity interaction terms, we develop a method for obtaining and incorporating the degeneracy factor that allows the exact calculation of the partition function for a peptide as a function of chain length. The partition function is used in calculations for engineered peptide chains of various lengths that allow comparison with a variety of different types of experimentally measured quantities, such as fraction of helicity as a function of both temperature and chain length, heat capacity, and denaturation studies. When experimental sensitivity in helicity measurements is properly accounted for in the calculations, the calculated curves fit well with the experimental curves. We determine values of interaction energies for comparison with known biochemical interactions, as well as quantify the difference in the number of configurations available to an amino acid in a random coil configuration compared to a helical configuration.
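
    As a related illustration (the classic Zimm-Bragg transfer matrix, not the authors' degeneracy-factor construction), an exact partition function and mean helicity for a chain are a few lines of linear algebra:

        import numpy as np

        def zb_partition(N, s, sigma):
            """Zimm-Bragg partition function: weight s for helix after helix,
            sigma*s for helix after coil, 1 for coil."""
            M = np.array([[s, 1.0], [sigma * s, 1.0]])
            v = np.array([sigma * s, 1.0])  # first residue: helix costs sigma*s
            return v @ np.linalg.matrix_power(M, N - 1) @ np.ones(2)

        def helicity(N, s, sigma, ds=1e-6):
            """Mean helical fraction theta = (s/N) d(ln Z)/ds, by central difference."""
            lnZ = lambda u: np.log(zb_partition(N, u, sigma))
            return s * (lnZ(s + ds) - lnZ(s - ds)) / (2.0 * ds) / N

        for s in (0.8, 1.0, 1.2):
            print("s=%.1f  helicity=%.2f" % (s, helicity(20, s, 1e-3)))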

  12. Sensing Cell-Culture Assays with Low-Cost Circuitry.

    PubMed

    Pérez, Pablo; Huertas, Gloria; Maldonado-Jacobi, Andrés; Martín, María; Serrano, Juan A; Olmo, Alberto; Daza, Paula; Yúfera, Alberto

    2018-06-11

    An alternative approach for cell-culture end-point protocols is proposed herein. This new technique is suitable for real-time remote sensing. It is based on Electrical Cell-substrate Impedance Spectroscopy (ECIS) and employs the Oscillation-Based Test (OBT) method. Simple and straightforward circuit blocks form the basis of the proposed measurement system. Oscillation parameters - frequency and amplitude - constitute the outcome, directly correlated with the culture status. A user can remotely track the evolution of cell cultures in real time over the complete experiment through a web tool continuously displaying the acquired data. Experiments carried out with commercial electrodes and a well-established cell line (AA8) are described, obtaining the cell number in real time from growth assays. The electrodes have been electrically characterized along the design flow in order to predict the system performance and the sensitivity curves. Curves for 1-week cell growth are reported. The obtained experimental results validate the proposed OBT for cell-culture characterization. Furthermore, the proposed electrode model provides a good approximation for the cell number and the time evolution of the studied cultures.

  13. Hierarchical Bayesian analysis to incorporate age uncertainty in growth curve analysis and estimates of age from length: Florida manatee (Trichechus manatus) carcasses

    USGS Publications Warehouse

    Schwarz, L.K.; Runge, M.C.

    2009-01-01

    Age estimation of individuals is often an integral part of species management research, and a number of age-estimation techniques are commonly employed. Often, the error in these techniques is not quantified or accounted for in other analyses, particularly in growth curve models used to describe physiological responses to environment and human impacts. Also, noninvasive, quick, and inexpensive methods to estimate age are needed. This research aims to provide two Bayesian methods to (i) incorporate age uncertainty into an age-length Schnute growth model and (ii) produce a method from the growth model to estimate age from length. The methods are then employed for Florida manatee (Trichechus manatus) carcasses. After quantifying the uncertainty in the aging technique (counts of ear bone growth layers), we fit age-length data to the Schnute growth model separately by sex and season. Independent prior information about population age structure and the results of the Schnute model are then combined to estimate age from length. Results describing the age-length relationship agree with our understanding of manatee biology. The new methods allow us to estimate age, with quantified uncertainty, for 98% of collected carcasses: 36% from ear bones, 62% from length.

  14. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    NASA Astrophysics Data System (ADS)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
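
    The paper's central caution, that unrelated red-noise light curves often cross-correlate strongly, is easy to reproduce. The sketch below generates independent power-law PSD series by Fourier phase randomization (in the spirit of Timmer and Koenig) and records the peak cross-correlation of unrelated pairs:

        import numpy as np

        def powerlaw_series(n, beta, rng):
            """Red-noise series with PSD ~ f^(-beta), normalized to unit variance."""
            freqs = np.fft.rfftfreq(n, d=1.0)
            amp = np.zeros_like(freqs)
            amp[1:] = freqs[1:] ** (-beta / 2.0)
            spec = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
            x = np.fft.irfft(spec, n)
            return (x - x.mean()) / x.std()

        rng = np.random.default_rng(4)
        peaks = [np.abs(np.correlate(powerlaw_series(256, 2.0, rng),
                                     powerlaw_series(256, 2.0, rng),
                                     mode="full") / 256).max()
                 for _ in range(200)]
        print("median peak |CCF| of unrelated pairs: %.2f" % np.median(peaks))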

  15. Simulator evaluation of manually flown curved instrument approaches. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sager, D.

    1973-01-01

    Pilot performance in flying horizontally curved instrument approaches was analyzed by having nine test subjects fly curved approaches in a fixed-base simulator. Approaches were flown without an autopilot and without a flight director. Evaluations were based on deviation measurements made at a number of points along the curved approach path and on subject questionnaires. Results indicate that pilots can fly curved approaches, though less accurately than straight-in approaches; that a moderate wind does not effect curve flying performance; and that there is no performance difference between 60 deg. and 90 deg. turns. A tradeoff of curve path parameters and a paper analysis of wind compensation were also made.

  16. Revised and annotated checklist of aquatic and semi-aquatic Heteroptera of Hungary with comments on biodiversity patterns

    PubMed Central

    Boda, Pál; Bozóki, Tamás; Vásárhelyi, Tamás; Bakonyi, Gábor; Várbíró, Gábor

    2015-01-01

    A basic knowledge of regional faunas is necessary to follow the changes in macroinvertebrate communities caused by environmental influences and climatic trends in the future. We collected all the available data on water bugs in Hungary using an inventory method, built a UTM-grid-based database, and calculated Jackknife richness estimates and species accumulation curves. Fauna compositions were compared among Central European states. As a result, an updated and annotated checklist for Hungary is provided, containing 58 species in 21 genera and 12 families. A total of 66.8% of the UTM 10 × 10 km squares in Hungary possess faunistic data for water bugs. The species number in grid cells ranged from 0 to 42, and the diversity patterns showed heterogeneity. The estimated species number of 58 is equal to the actual number of species known from the country. The asymptotic shape of the species accumulation curve predicts that additional sampling efforts will not increase the number of species currently known from Hungary. These results suggest that the number of species in the country was estimated correctly and that the species accumulation curve levels off at an asymptotic value; a considerable increase in species richness is therefore not expected in the future, although, with species composition changing, the chance of species turnover does exist. Overall, 36.7% of the European water bug species were found in Hungary. The differences in faunal composition between Hungary and its surrounding countries were caused by rare or unique species, whereas 33 species are common to the faunas of the eight countries. Species richness shows a correlation with latitude, and similar species compositions were observed in countries along the same latitude. The species list and the UTM-based database are now up-to-date for Hungary, and they will provide a basis for future studies of distributional and biodiversity patterns, biogeography, relative abundance and frequency of occurrence important in community ecology, and the determination of conservation status. PMID:25987880

  17. Cascaded lattice Boltzmann method with improved forcing scheme for large-density-ratio multiphase flow at high Reynolds and Weber numbers.

    PubMed

    Lycett-Brown, Daniel; Luo, Kai H

    2016-11-01

    A recently developed forcing scheme has allowed the pseudopotential multiphase lattice Boltzmann method to correctly reproduce coexistence curves, while expanding its range to lower surface tensions and arbitrarily high density ratios [Lycett-Brown and Luo, Phys. Rev. E 91, 023305 (2015)]. Here, a third-order Chapman-Enskog analysis is used to extend this result from the single-relaxation-time collision operator to a multiple-relaxation-time cascaded collision operator, whose additional relaxation rates allow a significant increase in stability. Numerical results confirm that the proposed scheme enables almost independent control of density ratio, surface tension, interface width, viscosity, and the additional relaxation rates of the cascaded collision operator. This allows simulation of large-density-ratio flows at simultaneously high Reynolds and Weber numbers, which is demonstrated through binary collisions of water droplets in air (with density ratio up to 1000, Reynolds number 6200, and Weber number 440). This model represents a significant improvement in multiphase flow simulation by the pseudopotential lattice Boltzmann method, in which real-world parameters are finally achievable.

  18. Predicting Numbers of Problems in Development of Software

    NASA Technical Reports Server (NTRS)

    Simonds, Charles H.

    2005-01-01

    A method has been formulated to enable prediction of the amount of work that remains to be performed in developing flight software for a spacecraft. The basic concept embodied in the method is that of using an idealized curve (specifically, the Weibull function) to interpolate from (1) the numbers of problems discovered thus far to (2) a goal of discovering no new problems after launch (or six months into the future for software already in use in orbit). The steps of the method can be summarized as follows: 1. Take raw data in the form of problem reports (PRs), including the dates on which they are generated. 2. Remove, from the data collection, PRs that are subsequently withdrawn or to which no response is required. 3. Count the numbers of PRs created in 1-week periods and the running total number of PRs each week. 4. Perform the interpolation by making a least-squares fit of the Weibull function to (a) the cumulative distribution of PRs gathered thus far and (b) the goal of no more PRs after the currently anticipated launch date. The interpolation and the anticipated launch date are subject to iterative re-estimation.
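
    The least-squares fitting step described above can be sketched by fitting a Weibull-type cumulative curve to weekly cumulative problem-report counts, with the launch-date goal appended as an extra data point. The counts and launch week below are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_cum(t, A, b, c):
            # Cumulative PRs approach the asymptote A as discovery tails off
            return A * (1.0 - np.exp(-(t / b) ** c))

        weeks = np.arange(1, 31, dtype=float)
        rng = np.random.default_rng(5)
        cum_prs = np.round(weibull_cum(weeks, 120.0, 12.0, 1.8)
                           + rng.normal(0, 3, weeks.size))   # hypothetical counts

        launch_week = 60.0   # goal: essentially no new PRs by launch
        t_fit = np.append(weeks, launch_week)
        y_fit = np.append(cum_prs, cum_prs.max())
        popt, _ = curve_fit(weibull_cum, t_fit, y_fit, p0=[cum_prs.max(), 10.0, 1.5])
        print("asymptote A=%.0f PRs; ~%.0f problems still to find"
              % (popt[0], popt[0] - cum_prs[-1]))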

  19. Unsteady fluid flow in a slightly curved annular pipe: The impact of the annulus on the flow physics

    NASA Astrophysics Data System (ADS)

    Messaris, Gerasimos A. T.; Karahalios, George T.

    2017-02-01

    The motivation of the present study is threefold: mainly, the etiological explanation of the Womersley number based on physical reasoning; next, the extension of a previous work [Messaris, Hadjinicolaou, and Karahalios, "Unsteady fluid flow in a slightly curved pipe: A comparative study of a matched asymptotic expansions solution with a single analytical solution," Phys. Fluids 28, 081901 (2016)] to annular pipe flow; and finally, the discussion of the effect of the additional stresses generated by a catheter in an artery and exerted on the arterial wall during an in vivo catheterization. As is known, the square of the Womersley number may be interpreted as an oscillatory Reynolds number, equal to the ratio of the inertial to the viscous forces. The adoption of a modified Womersley number in terms of the annular gap width therefore seems more appropriate to the description of the annular flow than an ordinary Womersley number defined in terms of the pipe radius. On this ground, the non-dimensional equations of motion are approximately solved by two analytical methods: a matched asymptotic expansions method and a single analytical solution. In the first method, which is valid for very large values of the Womersley number, the flow region consists of the main core and the two boundary layers formed at the inner and outer boundaries. In the second, the fluid is considered as one region and the Womersley number can vary from finite values, such as those that fit the blood flow in the aorta and the main arteries, to infinity. The single solution predicts increasing circumferential and decreasing axial stresses with increasing catheter radius at a prescribed value of the Womersley parameter, in agreement with analogous results from other theoretical and numerical solutions. It also predicts the formation of pinches on the secondary flow streamlines and a third boundary layer, additional to those formed at the boundary walls. Finally, we show that the insertion of a catheter in an artery may trigger disastrous side effects: it may cause unexpected damage to a predisposed but still dormant location of the arterial wall, due to the high additional radial pressure that induces an excessive distension of the artery.

  20. Theory of viscous transonic flow over airfoils at high Reynolds number

    NASA Technical Reports Server (NTRS)

    Melnik, R. E.; Chow, R.; Mead, H. R.

    1977-01-01

    This paper considers viscous flows with unseparated turbulent boundary layers over two-dimensional airfoils at transonic speeds. Conventional theoretical methods are based on boundary layer formulations which do not account for the effect of the curved wake and static pressure variations across the boundary layer in the trailing edge region. In this investigation an extended viscous theory is developed that accounts for both effects. The theory is based on a rational analysis of the strong turbulent interaction at airfoil trailing edges. The method of matched asymptotic expansions is employed to develop formal series solutions of the full Reynolds equations in the limit of Reynolds numbers tending to infinity. Procedures are developed for combining the local trailing edge solution with numerical methods for solving the full potential flow and boundary layer equations. Theoretical results indicate that conventional boundary layer methods account for only about 50% of the viscous effect on lift, the remaining contribution arising from wake curvature and normal pressure gradient effects.

  1. Estimation of Flow Duration Curve for Ungauged Catchments using Adaptive Neuro-Fuzzy Inference System and Map Correlation Method: A Case Study from Turkey

    NASA Astrophysics Data System (ADS)

    Kentel, E.; Dogulu, N.

    2015-12-01

    In Turkey the experience and data required for setting up a hydrological model are limited and very often not available. Moreover, there are many ungauged catchments with planned projects aimed at the utilization of water resources, including development of the existing hydropower potential. This makes runoff prediction at poorly gauged and ungauged locations, where small hydropower plants, reservoirs, etc. are planned, an increasingly significant challenge and concern in the country. Flow duration curves have many practical applications in hydrology and integrated water resources management. Estimation of the flow duration curve (FDC) at ungauged locations is essential, particularly for hydropower feasibility studies and the selection of installed capacities. In this study, we test and compare the performances of two methods for estimating FDCs in the Western Black Sea catchment, Turkey: (i) FDCs based on Map Correlation Method (MCM) flow estimates. MCM is a recently proposed method (Archfield and Vogel, 2010) that uses geospatial information to estimate flow; flow measurements of stream gauging stations near the ungauged location are its only data requirement, which makes MCM very attractive for flow estimation in Turkey. (ii) The Adaptive Neuro-Fuzzy Inference System (ANFIS), a data-driven method used to relate the FDC to a number of variables representing catchment and climate characteristics; its ease of implementation makes it very useful for practical purposes. Both methods use easily collectable data and are computationally efficient. Comparison of the results is based on two different measures: the root mean squared error (RMSE) and the Nash-Sutcliffe Efficiency (NSE) value. Ref: Archfield, S. A., and R. M. Vogel (2010), Map correlation method: Selection of a reference streamgage to estimate daily streamflow at ungaged catchments, Water Resour. Res., 46, W10513, doi:10.1029/2009WR008481.

  2. Improvements in Spectrum's fit to program data tool.

    PubMed

    Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John

    2017-04-01

    The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.

  3. a Point Cloud Classification Approach Based on Vertical Structures of Ground Objects

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Hu, Q.; Hu, W.

    2018-04-01

    This paper proposes a novel method for point cloud classification using the vertical structural characteristics of ground objects. Since urbanization is developing rapidly nowadays, urban ground objects also change frequently. Conventional photogrammetric methods cannot satisfy the requirements of updating ground object information efficiently, so LiDAR (Light Detection and Ranging) technology is employed to accomplish this task. LiDAR data, namely point cloud data, provide detailed three-dimensional coordinates of ground objects, but this kind of data is discrete and unorganized. To accomplish ground object classification with a point cloud, we first construct horizontal grids and vertical layers to organize the point cloud data, and then calculate vertical characteristics, including density and measures of dispersion, to form a characteristic curve for each grid cell. With the help of PCA processing and the K-means algorithm, we analyze the similarities and differences of the characteristic curves. Curves that have similar features are classified into the same class, and the points corresponding to those curves are classified accordingly. The whole process is simple but effective, and this approach does not need the assistance of other data sources. In this study, point cloud data are classified into three classes: vegetation, buildings, and roads. When the horizontal grid spacing and vertical layer spacing are 3 m and 1 m respectively, the vertical characteristic is density, and the number of dimensions after PCA processing is 11, the overall precision of the classification is about 86.31%. The result can help us quickly understand the distribution of various ground objects.

  4. Kinematic classification of non-interacting spiral galaxies

    NASA Astrophysics Data System (ADS)

    Wiegert, Theresa; English, Jayanne

    2014-01-01

    Using neutral hydrogen (HI) rotation curves of 79 galaxies, culled from the literature as well as measured from HI data, we present a method for classifying disk galaxies by their kinematics. In order to investigate fundamental kinematic properties we concentrate on non-interacting spiral galaxies. We employ a simple parameterized form for the rotation curve in order to derive three parameters: the maximum rotational velocity, the turnover radius and a measure of the slope of the rotation curve beyond the turnover radius. Our approach uses the statistical Hierarchical Clustering method to guide our division of the resultant 3D distribution of galaxies into five classes. Comparing the kinematic classes in this preliminary classification scheme to a number of galaxy properties, we find that our class containing galaxies with the largest rotational velocities has a mean morphological type of Sb/Sbc, while the other classes tend to later types. Other trends also generally agree with those described by previous researchers. In particular, we confirm correlations between increasing maximum rotational velocity and the following observed properties: increasing brightness in B-band, increasing size of the optical disk (D25) and increasing star formation rate (as derived using radio continuum data). Our analysis also suggests that lower velocities are associated with a higher ratio of the HI mass over the dynamical mass. Additionally, three galaxies exhibit a drop in rotational velocity amplitude of ≳20% after the turnover radius. However, recent investigations suggest that they have interacted with minor companions, which is a common cause of declining rotation curves.
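
    To make the three-parameter idea concrete, the Python sketch below fits a simple parameterized rotation curve to synthetic (radius, velocity) data and then groups the fitted parameters by hierarchical clustering. The arctan-plus-slope functional form is a common stand-in, not necessarily the authors' exact parameterization, and the data are invented.

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(2)

      def rotcurve(r, v_max, r_t, s_out):
          # Arctan core reaching v_max, plus a linear slope beyond the turnover.
          return v_max * (2 / np.pi) * np.arctan(r / r_t) + s_out * r

      params = []
      for _ in range(79):                       # one synthetic galaxy per iteration
          r = np.linspace(0.5, 20, 25)
          v = rotcurve(r, rng.uniform(80, 300), rng.uniform(1, 4), rng.uniform(-2, 2))
          p, _ = curve_fit(rotcurve, r, v + rng.normal(0, 5, r.size), p0=[150, 2, 0])
          params.append(p)

      P = np.array(params)
      X = (P - P.mean(axis=0)) / P.std(axis=0)  # standardize the 3D parameter space
      classes = fcluster(linkage(X, method="ward"), t=5, criterion="maxclust")
      print(np.bincount(classes)[1:])           # galaxies per kinematic class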

  5. Laparoscopic Heller Myotomy with Anterior Fundoplication Improves Frequency and Severity of Symptoms of Achalasia, Regardless of Preoperative Severity Determined by Esophagography.

    PubMed

    Rosemurgy, Alexander; Downs, Darrell; Luberice, Kenneth; Rodriguez, Christian; Swaid, Forat; Patel, Krishen; Toomey, Paul; Ross, Sharona

    2018-02-01

    This study was undertaken to determine whether postoperative outcomes after laparoscopic Heller myotomy with anterior fundoplication could be predicted by preoperative findings on esophagography. Preoperative barium esophagograms of 135 patients undergoing laparoscopic Heller myotomy with anterior fundoplication were reviewed. The number of esophageal curves, esophageal width, and angulation of the gastroesophageal junction (GEJ) were determined; correlations between these parameters and symptoms were assessed using linear regression analysis. The number of esophageal curves correlated with the preoperative frequency of dysphagia, vomiting, chest pain, regurgitation, and heartburn. The width of the esophagus negatively correlated with the preoperative frequency of regurgitation. The angulation of the GEJ did not correlate with preoperative symptoms. Laparoscopic Heller myotomy with anterior fundoplication significantly reduced the frequency and severity of all symptoms, regardless of the number of esophageal curves, esophageal width, or angulation of the GEJ. Laparoscopic Heller myotomy with anterior fundoplication provides dramatic palliation for achalasia. More esophageal curves on preoperative esophagography correlate well with the frequency of a broad range of preoperative symptoms, including the frequency of dysphagia and regurgitation. Patients experience dramatically improved frequency and severity of symptoms after laparoscopic Heller myotomy with anterior fundoplication for achalasia regardless of the number of esophageal curves, esophageal width, or the angulation of the GEJ. Findings on barium esophagogram, in evaluating achalasia, should not deter the application of laparoscopic Heller myotomy with anterior fundoplication.

  6. The Astrophysics of Visible-light Orbital Phase Curves in the Space Age

    NASA Astrophysics Data System (ADS)

    Shporer, Avi

    2017-07-01

    The field of visible-light continuous time series photometry is now in its golden age, manifested by the continuum of past (CoRoT, Kepler), present (K2), and future (TESS, PLATO) space-based surveys delivering high-precision data with a long baseline for a large number of stars. The availability of high-quality data has enabled astrophysical studies not possible before, including, for example, detailed asteroseismic investigations and the study of the exoplanet census, including small planets. It has also made it possible to study the minute photometric variability that follows the orbital motion in stellar binaries and star-planet systems, which is the subject of this review. We focus on systems with a main sequence primary and a low-mass secondary, from a small star to a massive planet. The orbital modulations are induced by a combination of gravitational and atmospheric processes, including the beaming effect, tidal ellipsoidal distortion, reflected light, and thermal emission. Therefore, the phase curve shape contains information about the companion's mass and atmospheric characteristics, making phase curves a useful astrophysical tool. For example, phase curves can be used to detect and measure the mass of short-period low-mass companions orbiting hot fast-rotating stars, out of reach of other detection methods. Another interesting application of phase curves is using the orbital phase modulations to look for non-transiting systems, which comprise the majority of stellar binary and star-planet systems. We discuss the science done with phase curves, the first results obtained so far, and the current difficulties and open questions related to this young and evolving subfield.
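
    The low-order harmonic structure of such phase curves can be sketched directly. The Python snippet below, in the spirit of three-harmonic (BEER-like) decompositions, combines beaming, ellipsoidal, and reflection/thermal terms; the amplitudes and sign conventions are illustrative assumptions, and real applications fit them to data.

      import numpy as np

      def phase_curve(phi, a_beam, a_ellip, a_refl):
          # Relative flux vs. orbital phase phi (phi = 0 at inferior conjunction):
          # beaming ~ sin(2*pi*phi); ellipsoidal ~ -cos(4*pi*phi), two minima per
          # orbit; reflection/thermal ~ -cos(2*pi*phi), peaking at superior conjunction.
          return (1.0
                  + a_beam * np.sin(2 * np.pi * phi)
                  - a_ellip * np.cos(4 * np.pi * phi)
                  - a_refl * np.cos(2 * np.pi * phi))

      phi = np.linspace(0.0, 1.0, 1000)
      flux = phase_curve(phi, a_beam=2e-6, a_ellip=1e-5, a_refl=5e-5)   # ppm-level
      print(flux.min(), flux.max())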

  7. [Value of anti-Müllerian hormone in predicting pregnant outcomes of polycystic ovary syndrome patients undergone assisted reproductive technology].

    PubMed

    Li, Y; Tan, J Q; Mai, Z Y; Yang, D Z

    2018-01-25

    Objective: To explore the value of anti-Müllerian hormone (AMH) in predicting pregnancy outcomes of polycystic ovary syndrome (PCOS) patients undergoing assisted reproductive technology. Methods: The study recruited 1 697 patients who underwent their first in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI) cycle in Sun Yat-sen Memorial Hospital from January 2014 to December 2015. The patients were divided into two groups by age, <35 years (n=758) and ≥35 years (n=939), and the baseline data and pregnancy outcomes of controlled ovarian hyperstimulation were compared. The Spearman correlation method was used to analyze the relations between AMH and clinical outcomes. Logistic regression and partial correlation analysis were used to identify the main factors determining pregnancy outcomes while controlling for confounding factors. The receiver operating characteristic (ROC) curve was used to evaluate the predictive sensitivity and specificity of AMH. Results: In PCOS patients younger than 35 years, AMH was correlated with the number of antral follicles (r=0.388) and retrieved oocytes (r=0.235). When the effects of total dosage and starting dosage of gonadotropin were controlled, AMH remained significantly associated with the number of retrieved oocytes (P<0.05). AMH had no predictive value for clinical pregnancy in PCOS patients younger than 35 years (area under ROC curve=0.481, P=0.768). In PCOS patients ≥35 years old, AMH was correlated with the number of antral follicles (r=0.450), retrieved oocytes (r=0.399), available embryos (r=0.336) and high-quality embryos (r=0.235). When the effects of total dosage and starting dosage of gonadotropin were controlled, these correlations remained significant (all P<0.05). AMH had no predictive value for clinical pregnancy in PCOS patients ≥35 years old (area under ROC curve=0.535, P=0.560). However, the clinical pregnancy rate of the PCOS group ≥35 years old was slightly higher than that of the control group (P=0.062). Conclusions: AMH has no predictive value for the pregnancy outcome of PCOS patients. The pregnancy rate of PCOS patients ≥35 years old is slightly higher than that of the younger group, possibly because these PCOS patients have better ovarian reserve.

  8. GPS/DR Error Estimation for Autonomous Vehicle Localization.

    PubMed

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-08-21

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.

  9. GPS/DR Error Estimation for Autonomous Vehicle Localization

    PubMed Central

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-01-01

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level. PMID:26307997

  10. Sediment rating curve & Co. - a contest of prediction methods

    NASA Astrophysics Data System (ADS)

    Francke, T.; Zimmermann, A.

    2012-04-01

    In spite of recent technological progress in sediment monitoring, the calculation of suspended sediment yield (SSY) often still relies on intermittent measurements because of the use of historic records, instrument failure in continuous recording, or financial constraints. Therefore, available measurements are usually inter- and even extrapolated using the sediment rating curve approach, which uses continuously available discharge data to predict sediment concentrations. Extending this idea by further aspects, like the inclusion of other predictors (e.g. rainfall, discharge characteristics, etc.) or the consideration of prediction uncertainty, has led to a variety of new methods. Now, with approaches such as Fuzzy Logic, Artificial Neural Networks, tree-based regression, GLMs, etc., the user is left to decide which method to apply. Trying multiple approaches is usually not an option, as considerable effort and expertise may be needed for their application. To establish a helpful guideline for selecting the most appropriate method for SSY computation, we initiated a study to compare and rank the available methods. Depending on problem attributes like hydrological and sediment regime, number of samples, sampling scheme, and availability of ancillary predictors, the performance of the different methods is compared. Our expertise allowed us to "register" Random Forests, Quantile Regression Forests and GLMs for the contest. To include many different methods and ensure their sophisticated use, we invite scientists who are willing to benchmark their favourite method(s) with us. The more diverse the participating methods are, the more exciting the contest will be.
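
    For reference, the baseline that the newer methods extend is the classic rating curve, a power law C = a·Q^b usually fitted in log-log space. A minimal Python sketch on synthetic data (the coefficients are arbitrary assumptions):

      import numpy as np

      rng = np.random.default_rng(3)

      # Paired samples: discharge Q (m^3/s) and sediment concentration C (mg/L).
      Q = np.exp(rng.uniform(np.log(0.5), np.log(50.0), 200))
      C = 12.0 * Q**1.4 * np.exp(rng.normal(0, 0.3, Q.size))   # synthetic truth

      # Fit log C = log a + b log Q by ordinary least squares.
      b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
      print(np.exp(log_a), b)   # naive back-transform is biased low for loads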

  11. The Sigmoid Curve as a Metaphor for Growth and Change

    ERIC Educational Resources Information Center

    Hipkins, Rosemary; Cowie, Bronwen

    2016-01-01

    This paper introduces the sigmoid or s-curve as a metaphor for describing the dynamics of change. We first encountered the s-curve as a description of a possible growth trajectory whereby populations become established, begin to flourish, and the numbers increase rapidly until they reach some limit. At this point, the growth rate slows rapidly then…

  12. Unsteady laminar flow with convective heat transfer through a rotating curved square duct with small curvature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mondal, Rabindra Nath, E-mail: rnmondal71@yahoo.com; Shaha, Poly Rani; Roy, Titob

    Unsteady laminar flow with convective heat transfer through a curved square duct rotating at a constant angular velocity about the center of curvature is investigated numerically by using a spectral method, covering a wide range of the Taylor number −300≤Tr≤1000 for the Dean number Dn = 1000. A temperature difference is applied across the vertical sidewalls for the Grashof number Gr = 100, where the outer wall is heated and the inner wall cooled, the top and bottom walls being adiabatic. Flow characteristics are investigated with the effects of the rotational parameter, Tr, and the pressure-driven parameter, Dn, for the constant curvature 0.001. Time evolution calculations as well as their phase spaces show that the unsteady flow passes through various instabilities in the scenario 'multi-periodic → chaotic → steady-state → periodic → multi-periodic → chaotic' if Tr is increased in the positive direction. For negative rotation, however, time evolution calculations show that the flow follows the scenario 'multi-periodic → periodic → steady-state' if Tr is increased in the negative direction. Typical contours of secondary flow patterns and temperature profiles are obtained at several values of Tr, and it is found that the unsteady flow consists of two- to six-vortex solutions if duct rotation is involved. External heating is shown to generate a significant temperature gradient at the outer wall of the duct. This study also shows that there is a strong interaction between the heating-induced buoyancy force and the centrifugal-Coriolis instability in the curved channel that stimulates fluid mixing and consequently enhances heat transfer in the fluid.

  13. Pore-wall roughness as a fractal surface and theoretical simulation of mercury intrusion/retraction in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsakiroglou, C.D.; Payatakes, A.C.

    The mercury intrusion/retraction curves of many types of porous materials (e.g., sandstones) have sections of finite slope in the region of high and very high pressure. This feature is attributed to the existence of microroughness on the pore walls. In the present work pore-wall roughness features are added to a three-dimensional primary network of chambers-and-throats using ideas of fractal geometry. The roughness of the throats is modeled with a finite number of self-similar triangular prisms of progressively smaller sizes. The roughness of the chambers is modeled in a similar way using right circular cones instead of prisms. Three parameters suffice for the complete characterization of the model of fractal roughness, namely, the number of features per unit length, the common angle of sharpness, and the number of layers (which is taken to be the same for throats and chambers). Analytical relations that give the surface area, pore volume, and mercury saturation of the pore network as functions of the fractal roughness parameters are developed for monolayer and multilayer arrangements. The chamber-and-throat network with fractal pore-wall roughness is used to develop an extended version of the computer-aided simulator of mercury porosimetry that has been reported in previous publications. This new simulator is used to investigate the effects of the roughness features on the form of mercury intrusion/retraction curves. It turns out that the fractal model of the pore-wall roughness gives an adequate representation of real porous media, and capillary pressure curves which are similar to the experimental ones for many typical porous materials such as sandstones. The method is demonstrated with the analysis of a Greek sandstone.

  14. Viscous, resistive MHD stability computed by spectral techniques

    NASA Technical Reports Server (NTRS)

    Dahlburg, R. B.; Zang, T. A.; Montgomery, D.; Hussaini, M. Y.

    1983-01-01

    Expansions in Chebyshev polynomials are used to study the linear stability of one-dimensional magnetohydrodynamic (MHD) quasi-equilibria in the presence of finite resistivity and viscosity. The method is modeled on the one used by Orszag in accurate computation of solutions of the Orr-Sommerfeld equation. Two Reynolds-like numbers involving Alfven speeds, length scales, kinematic viscosity, and magnetic diffusivity govern the stability boundaries, which are determined by the geometric mean of the two Reynolds-like numbers. Marginal stability curves, growth rates versus Reynolds-like numbers, and growth rates versus parallel wave numbers are exhibited. A numerical result which appears general is that instability was found to be associated with inflection points in the current profile, though no general analytical proof has emerged. It is possible that nonlinear subcritical three-dimensional instabilities may exist, similar to those in Poiseuille and Couette flow.

  15. Speed of the bacterial flagellar motor near zero load depends on the number of stator units.

    PubMed

    Nord, Ashley L; Sowa, Yoshiyuki; Steel, Bradley C; Lo, Chien-Jung; Berry, Richard M

    2017-10-31

    The bacterial flagellar motor (BFM) rotates hundreds of times per second to propel bacteria driven by an electrochemical ion gradient. The motor consists of a rotor 50 nm in diameter surrounded by up to 11 ion-conducting stator units, which exchange between motors and a membrane-bound pool. Measurements of the torque-speed relationship guide the development of models of the motor mechanism. In contrast to previous reports that speed near zero torque is independent of the number of stator units, we observe multiple speeds that we attribute to different numbers of units near zero torque in both Na+- and H+-driven motors. We measure the full torque-speed relationship of one and two H+ units in Escherichia coli by selecting the number of H+ units and controlling the number of Na+ units in hybrid motors. These experiments confirm that speed near zero torque in H+-driven motors increases with the stator number. We also measured 75 torque-speed curves for Na+-driven chimeric motors at different ion-motive force and stator number. Torque and speed were proportional to ion-motive force and number of stator units at all loads, allowing all 77 measured torque-speed curves to be collapsed onto a single curve by simple rescaling. Published under the PNAS license.

  16. Speed of the bacterial flagellar motor near zero load depends on the number of stator units

    PubMed Central

    Nord, Ashley L.; Sowa, Yoshiyuki; Steel, Bradley C.; Lo, Chien-Jung; Berry, Richard M.

    2017-01-01

    The bacterial flagellar motor (BFM) rotates hundreds of times per second to propel bacteria driven by an electrochemical ion gradient. The motor consists of a rotor 50 nm in diameter surrounded by up to 11 ion-conducting stator units, which exchange between motors and a membrane-bound pool. Measurements of the torque–speed relationship guide the development of models of the motor mechanism. In contrast to previous reports that speed near zero torque is independent of the number of stator units, we observe multiple speeds that we attribute to different numbers of units near zero torque in both Na+- and H+-driven motors. We measure the full torque–speed relationship of one and two H+ units in Escherichia coli by selecting the number of H+ units and controlling the number of Na+ units in hybrid motors. These experiments confirm that speed near zero torque in H+-driven motors increases with the stator number. We also measured 75 torque–speed curves for Na+-driven chimeric motors at different ion-motive force and stator number. Torque and speed were proportional to ion-motive force and number of stator units at all loads, allowing all 77 measured torque–speed curves to be collapsed onto a single curve by simple rescaling. PMID:29078322

  17. MODELING GALACTIC EXTINCTION WITH DUST AND 'REAL' POLYCYCLIC AROMATIC HYDROCARBONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mulas, Giacomo; Casu, Silvia; Cecchi-Pestellini, Cesare

    We investigate the remarkable apparent variety of galactic extinction curves by modeling extinction profiles with core-mantle grains and a collection of single polycyclic aromatic hydrocarbons. Our aim is to translate a synthetic description of dust into physically well-grounded building blocks through the analysis of a statistically relevant sample of different extinction curves. All different flavors of observed extinction curves, ranging from the average galactic extinction curve to virtually 'bumpless' profiles, can be described by the present model. We prove that a mixture of a relatively small number (54 species in 4 charge states each) of polycyclic aromatic hydrocarbons can reproduce the features of the extinction curve in the ultraviolet, dismissing an old objection to the contribution of polycyclic aromatic hydrocarbons to the interstellar extinction curve. Despite the large number of free parameters (at most the 54 × 4 column densities of each species in each ionization state included in the molecular ensemble plus the 9 parameters defining the physical properties of classical particles), we can strongly constrain some physically relevant properties such as the total number of C atoms in all species and the mean charge of the mixture. Such properties are found to be largely independent of the adopted dust model, whose variation provides effects that are orthogonal to those brought about by the molecular component. Finally, the fitting procedure, together with some physical sense, suggests (but does not require) the presence of an additional component of chemically different very small carbonaceous grains.

  18. Comparison of power curve monitoring methods

    NASA Astrophysics Data System (ADS)

    Cambron, Philippe; Masson, Christian; Tahan, Antoine; Torres, David; Pelletier, Francis

    2017-11-01

    Performance monitoring is an important aspect of operating wind farms. This can be done through the power curve monitoring (PCM) of wind turbines (WT). In past years, important work has been conducted on PCM. Various methodologies have been proposed, each one with interesting results. However, it is difficult to compare these methods because they have been developed using their respective data sets. The objective of the present work is to compare some of the proposed PCM methods using common data sets. The metric used to compare the PCM methods is the time needed to detect a change in the power curve. Two power curve models are covered to establish the effect the model type has on the monitoring outcomes. Each model was tested with two control charts. Other methodologies and metrics proposed in the literature for power curve monitoring, such as areas under the power curve and the use of statistical copulas, have also been covered. Results demonstrate that model-based PCM methods are more reliable at detecting a performance change than other methodologies and that the effectiveness of the control chart depends on the type of shift observed.
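
    A model-based PCM loop of the kind compared here can be sketched in a few lines: build a reference power curve, then run a control chart on residuals from new data. The Python sketch below uses the method of bins and an EWMA chart on synthetic SCADA-like data; the turbine model, bin width, noise level, and smoothing constant are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(4)

      def power(v):                                   # assumed 2 MW turbine model
          return 2000.0 / (1.0 + np.exp(-0.9 * (v - 9.0)))

      # Reference period: bin SCADA data into 0.5 m/s bins ("method of bins").
      v = rng.uniform(3, 15, 5000)
      p = power(v) + rng.normal(0, 40, v.size)
      edges = np.arange(3.0, 15.5, 0.5)
      ref = np.array([p[np.digitize(v, edges) == i].mean()
                      for i in range(1, edges.size)])

      # Monitoring period with a simulated 5% performance loss.
      v2 = rng.uniform(3, 15, 500)
      p2 = 0.95 * power(v2) + rng.normal(0, 40, v2.size)
      resid = p2 - ref[np.digitize(v2, edges) - 1]

      lam, ewma = 0.1, 0.0
      limit = 3 * 40 * np.sqrt(lam / (2 - lam))       # 40 = assumed residual sd
      for r in resid:                                 # EWMA chart on residuals
          ewma = lam * r + (1 - lam) * ewma
      print(ewma, -limit)   # a persistent loss drives ewma below the lower limit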

  19. Secondary flow in a curved artery model with Newtonian and non-Newtonian blood-analog fluids

    NASA Astrophysics Data System (ADS)

    Najjari, Mohammad Reza; Plesniak, Michael W.

    2016-11-01

    Steady and pulsatile flows of Newtonian and non-Newtonian fluids through a 180°-curved pipe were investigated using particle image velocimetry (PIV). The experiment was inspired by physiological pulsatile flow through large curved arteries, with a carotid artery flow rate imposed. Sodium iodide (NaI) and sodium thiocyanate (NaSCN) were added to the working fluids to match the refractive index (RI) of the test section to eliminate optical distortion. Rheological measurements revealed that adding NaI or NaSCN changes the viscoelastic properties of non-Newtonian solutions and reduces their shear-thinning property. Measured centerline velocity profiles in the upstream straight pipe agreed well with an analytical solution. In the pulsatile case, secondary flow structures, i.e. deformed-Dean, Dean, Wall and Lyne vortices, were observed in various cross sections along the curved pipe. Vortical structures at each cross section were detected using the d2 vortex identification method. Circulation analysis was performed on each vortex separately during the systolic deceleration phase, and showed that vortices split and rejoin. Secondary flow structures in steady flows were found to be morphologically similar to those in pulsatile flows for sufficiently high Dean number. supported by the George Washington University Center for Biomimetics and Bioinspired Engineering.

  20. Modeling two strains of disease via aggregate-level infectivity curves.

    PubMed

    Romanescu, Razvan; Deardon, Rob

    2016-04-01

    Well-formulated models of disease spread, and efficient methods to fit them to observed data, are powerful tools for aiding the surveillance and control of infectious diseases. Our project considers the problem of the simultaneous spread of two related strains of disease in a context where spatial location is the key driver of disease spread. We start our modeling work with the individual-level models (ILMs) of disease transmission, and extend these models to accommodate the competing spread of the pathogens in a two-tier hierarchical population (whose levels we refer to as 'farm' and 'animal'). The postulated interference mechanism between the two strains is a period of cross-immunity following infection. We also present a framework for speeding up the computationally intensive process of fitting the ILM to data, typically done using Markov chain Monte Carlo (MCMC) in a Bayesian framework, by turning the inference into a two-stage process. First, we approximate the number of animals infected on a farm over time by infectivity curves. These curves are fit to data sampled from farms using maximum likelihood estimation; then, conditional on the fitted curves, Bayesian MCMC inference proceeds for the remaining parameters. Finally, we use posterior predictive distributions of salient epidemic summary statistics to assess the fitted model.
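
    The first stage of the two-stage inference can be sketched as a plain maximum likelihood fit of an infectivity curve to sampled counts. The Python sketch below assumes a logistic curve shape and Poisson sampling purely for illustration; the paper's actual curve family and likelihood may differ.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(5)

      # Animals found infected on one farm at successive sampling times.
      t = np.arange(0.0, 30.0, 3.0)
      counts = rng.poisson(200.0 / (1.0 + np.exp(-0.4 * (t - 12.0))))

      def nll(p):                                 # stage one: Poisson MLE of the curve
          k, r, tm = p
          lam = np.maximum(k / (1.0 + np.exp(-r * (t - tm))), 1e-9)
          return np.sum(lam - counts * np.log(lam))

      stage1 = minimize(nll, x0=[150.0, 0.3, 10.0], method="Nelder-Mead")
      print(stage1.x)   # stage two would run MCMC for the remaining ILM
                        # parameters, conditional on the fitted curves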

  1. New configuration factors for curved surfaces

    NASA Astrophysics Data System (ADS)

    Cabeza-Lainez, Jose M.; Pulido-Arcas, Jesus A.

    2013-03-01

    Curved surfaces have not been thoroughly considered in radiative transfer analysis, mainly due to the difficulties arising in the integration process and perhaps because of the researchers' lack of spatial vision. It is a fact, especially in architectural lighting, that when concave geometries appear inside a curved space they are mostly avoided. In this way, a vast repertoire of significant forms is neglected and energy waste is evident. Starting from the properties of volumes enclosed by the minimum number of surfaces, the authors formulate, with little calculus, new simple laws, which enable them to discover a set of configuration factors for caps and various segments of the sphere. The procedure is subsequently extended to previously unimagined surfaces such as the paraboloid, the ellipsoid or the cone. Appropriate combination of the said forms with right truncated cones produces several complex volumes, often used in architectural and engineering creations, whose radiative performance could not be accurately predicted for decades. To complete the research, a new method for determining interreflections in curved volumes is also presented. Radiative transfer simulation benefits from these findings, as the simplicity of the results has led the authors to create innovative software, more efficient for design and evaluation and applicable to emerging fields like LED lighting.

  2. Prognostication of Learning Curve on Surgical Management of Vasculobiliary Injuries after Cholecystectomy

    PubMed Central

    Dar, Faisal Saud; Zia, Haseeb; Rafique, Muhammad Salman; Khan, Nusrat Yar; Salih, Mohammad; Hassan Shah, Najmul

    2016-01-01

    Background. Concomitant vascular injury might adversely impact outcomes after iatrogenic bile duct injury (IBDI). Whether a new HPB center should embark upon repair of complex biliary injuries with associated vascular injuries during learning curve is unknown. The objective of this study was to determine outcome of surgical management of IBDI with and without vascular injuries in a new HPB center during its learning curve. Methods. We retrospectively reviewed patients who underwent surgical management of IBDI at our center. A total of 39 patients were included. Patients without (Group 1) and with vascular injuries (Group 2) were compared. Outcome was defined as 90-day morbidity and mortality. Results. Median age was 39 (20–80) years. There were 10 (25.6%) vascular injuries. E2 injuries were associated significantly with high frequency of vascular injuries (66% versus 15.1%) (P = 0.01). Right hepatectomy was performed in three patients. Out of these, two had a right hepatic duct stricture and one patient had combined right arterial and portal venous injury. The number of patients who developed postoperative complications was not significantly different between the two groups (11.1% versus 23.4%) (P = 0.6). Conclusion. Learning curve is not a negative prognostic variable in the surgical management of iatrogenic vasculobiliary injuries after cholecystectomy. PMID:27525124

  3. A Method for Optimal Allocation between Instream and Offstream Uses in the Maipo River in Central Chile

    NASA Astrophysics Data System (ADS)

    Génova, P. P.; Olivares, M. A.

    2016-12-01

    Minimum instream flows (MIF) have been established in Chile with the aim of protecting aquatic ecosystems. In practice, since current water law only allocates water rights to offstream water uses, the MIF becomes the only instrument for instream water allocation. However, MIF do not necessarily maintain an adequate flow for instream uses. Moreover, an efficient allocation of water for instream uses requires the quantification of the benefits obtained from those uses, so that tradeoffs between instream and offstream water uses are properly considered. A model of optimal allocation between instream and offstream uses is developed. The proposed method combines two pieces of information. On one hand, benefits of instream use are represented by qualitative recreational benefit curves as a function of instream flow. On the other hand, the opportunity cost given by the lost benefits of offstream uses is employed to develop a supply curve for instream flows. We applied this method to the case of the Maipo River, where the main water uses are recreation, hydropower production and drinking water. Based on available information, we obtained the qualitative benefits of various recreational activities as a function of flow attributes. Then we developed flow-attribute curves as a function of instream flow for a representative number of sections in the river. As a result, we obtained the qualitative recreational benefit curve for each section. The marginal cost curve for instream flows was developed from the benefit functions of hydropower production interfering with recreation in the Maipo River. The purpose of this supply curve is to find a range of instream flow that provides a better quality condition for the recreation experience at a lower opportunity cost. Results indicate that offstream uses adversely influence recreational activities in the Maipo River in certain months of the year, significantly decreasing the quality of these instream uses. As expected, the impact depends on the magnitude of the diverted flows, and therefore these impacts can be reduced by restricting the amount of water extracted from the river. Accordingly, it is possible to define the optimum amount of water to be allocated to each use for each month such that instream flows are appropriate for recreation and the loss of hydropower production benefits is lowest.

  4. Autorotation

    NASA Astrophysics Data System (ADS)

    Bohr, Jakob; Markvorsen, Steen

    2016-02-01

    A continuous autorotation vector field along a framed space curve is defined, which describes the rotational progression of the frame. We obtain an exact integral for the length of the autorotation vector. This invokes the infinitesimal rotation vector of the frame progression and the unit vector field for the corresponding autorotation vector field. For closed curves we define an autorotation number whose integer value depends on the starting point of the curve. Upon curve deformations, the autorotation number is either constant, or can make a jump of (multiples of) plus-minus two, which corresponds to a change in rotation of multiples of 4π. The autorotation number is therefore not topologically conserved under all transformations. We discuss this within the context of generalised inflection points and of frame revisit points. The results may be applicable to physical systems such as polymers, proteins, and DNA. Finally, turbulence is discussed in the light of autorotation, as is the Philippine wine dance, the Dirac belt trick, and the 4π cycle of the flying snake. This paper is dedicated to Ian K Robinson on the occasion of Ian receiving the Gregori Aminoff Prize 2015.

  5. Edge detection and mathematic fitting for corneal surface with Matlab software.

    PubMed

    Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na

    2017-01-01

    To select the optimal edge detection methods to identify the corneal surface, and to compare three fitting curve equations with Matlab software. Fifteen subjects were recruited. The corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Then two manual identifying methods (ginput and getpts) were applied to identify the edge coordinates respectively. The differences among these methods were compared. A binomial curve (y = Ax^2 + Bx + C), a polynomial curve [p(x) = p1x^n + p2x^(n-1) + ... + pnx + pn+1] and a conic section (Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0) were used for curve fitting of the corneal surface respectively. The relative merits of the three fitting curves were analyzed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section were compared with a paired t-test. All five edge detection algorithms produced continuous coordinates indicating the edge of the corneal surface. The ordinates from manual identification were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could yield the tilted symmetry axis, eccentricity, circle center, etc. There were no significant differences between the 'e' values from corneal topography and the conic section (t=0.9143, P=0.3760>0.05). It is feasible to simulate the corneal surface with a mathematical curve with Matlab software. Edge detection has better repeatability and higher efficiency. The manual identifying approach is an indispensable complement for detection. Polynomial and conic section are both alternative methods for corneal curve fitting. The conic curve was the optimal choice based on its specific geometrical properties.
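
    The conic-section fit reduces to linear least squares once the scale of the coefficients is fixed. The sketch below (in Python rather than Matlab, on synthetic edge coordinates) fixes F = -1 and solves for the remaining coefficients; the data and normalization choice are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic edge coordinates of an anterior corneal surface (pixels).
      theta = np.linspace(0.2, np.pi - 0.2, 200)
      x = 300.0 * np.cos(theta) + rng.normal(0, 0.5, theta.size)
      y = 180.0 * np.sin(theta) + rng.normal(0, 0.5, theta.size)

      # General conic A x^2 + B xy + C y^2 + D x + E y + F = 0; fixing F = -1
      # removes the scale ambiguity and leaves a linear least-squares problem.
      M = np.column_stack([x**2, x * y, y**2, x, y])
      (A, B, C, D, E), *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
      print(A, B, C, D, E)   # axis tilt, eccentricity, and center follow
                             # from the standard conic invariants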

  6. Application of a Novel DCPD Adjustment Method for the J-R Curve Characterization: A study based on ORNL and ASTM Interlaboratory Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xiang; Sokolov, Mikhail A; Nanstad, Randy K

    Material fracture toughness in the fully ductile region can be described by a J-integral vs. crack growth resistance curve (J-R curve). As a conventional J-R curve measurement method, the elastic unloading compliance (EUC) method becomes impractical for elevated temperature testing due to relaxation of the material and friction-induced back-up shape of the J-R curve. One alternative solution of J-R curve testing applies the Direct Current Potential Drop (DCPD) technique for measuring crack extension. However, besides crack growth, potential drop can also be influenced by plastic deformation, crack tip blunting, etc., and uncertainties exist in the current DCPD methodology, especially in differentiating potential drop due to stable crack growth from that due to material deformation. Thus, using DCPD for J-R curve determination remains a challenging task. In this study, a new adjustment procedure for applying DCPD to derive the J-R curve has been developed for conventional fracture toughness specimens, including compact tension, three-point bend, and disk-shaped compact specimens. Data analysis has been performed on Oak Ridge National Laboratory (ORNL) and American Society for Testing and Materials (ASTM) interlaboratory results covering different specimen thicknesses, test temperatures, and materials, to evaluate the applicability of the new DCPD adjustment procedure for J-R curve characterization. After applying the newly-developed procedure, direct comparison between the DCPD method and the normalization method on the same specimens indicated close agreement for the overall J-R curves, as well as for the provisional values of fracture toughness near the onset of ductile crack extension, Jq, and of tearing modulus.

  7. A Novel Videography Method for Generating Crack-Extension Resistance Curves in Small Bone Samples

    PubMed Central

    Katsamenis, Orestis L.; Jenkins, Thomas; Quinci, Federico; Michopoulou, Sofia; Sinclair, Ian; Thurner, Philipp J.

    2013-01-01

    Assessment of bone quality is an emerging solution for quantifying the effects of bone pathology or treatment. Perhaps one of the most important parameters characterising bone quality is the toughness behaviour of bone. Particularly, fracture toughness, is becoming a popular means for evaluating bone quality. The method is moving from a single value approach that models bone as a linear-elastic material (using the stress intensity factor, K) towards full crack extension resistance curves (R-curves) using a non-linear model (the strain energy release rate in J-R curves). However, for explanted human bone or small animal bones, there are difficulties in measuring crack-extension resistance curves due to size constraints at the millimetre and sub-millimetre scale. This research proposes a novel “whitening front tracking” method that uses videography to generate full fracture resistance curves in small bone samples where crack propagation cannot typically be observed. Here we present this method on sharp edge notched samples (<1 mm×1 mm×Length) prepared from four human femora tested in three-point bending. Each sample was loaded in a mechanical tester with the crack propagation recorded using videography and analysed using an algorithm to track the whitening (damage) zone. Using the “whitening front tracking” method, full R-curves and J-R curves could be generated for these samples. The curves for this antiplane longitudinal orientation were similar to those found in the literature, being between the published longitudinal and transverse orientations. The proposed technique shows the ability to generate full “crack” extension resistance curves by tracking the whitening front propagation to overcome the small size limitations and the single value approach. PMID:23405186

  8. Nonlinear Analysis of Cavitating Propellers in Nonuniform Flow

    DTIC Science & Technology

    1992-10-16

    Helmholtz more than a century ago [4]. The method was later extended to treat curved bodies at zero cavitation number by Levi-Civita [4]. The theory was...122, 1895. [63] M.P. Tulin. Steady two-dimensional cavity flows about slender bodies. Technical Report 834, DTMB, May 1953. [64] M.P. Tulin...iterative solution for two-dimensional flows is remarkably fast and that the accuracy of the first iteration solution is sufficient for a wide range of

  9. Choice of boundary condition for lattice-Boltzmann simulation of moderate-Reynolds-number flow in complex domains.

    PubMed

    Nash, Rupert W; Carver, Hywel B; Bernabeu, Miguel O; Hetherington, James; Groen, Derek; Krüger, Timm; Coveney, Peter V

    2014-02-01

    Modeling blood flow in larger vessels using lattice-Boltzmann methods comes with a challenging set of constraints: a complex geometry with walls and inlets and outlets at arbitrary orientations with respect to the lattice, intermediate Reynolds (Re) number, and unsteady flow. Simple bounce-back is one of the most commonly used, simplest, and most computationally efficient boundary conditions, but many others have been proposed. We implement three other methods applicable to complex geometries [Guo, Zheng, and Shi, Phys. Fluids 14, 2007 (2002); Bouzidi, Firdaouss, and Lallemand, Phys. Fluids 13, 3452 (2001); Junk and Yang, Phys. Rev. E 72, 066701 (2005)] in our open-source application hemelb. We use these to simulate Poiseuille and Womersley flows in a cylindrical pipe with an arbitrary orientation at physiologically relevant Reynolds (1-300) and Womersley (4-12) numbers, and steady flow in a curved pipe at relevant Dean numbers (100-200), and compare the accuracy to analytical solutions. We find that both the Bouzidi-Firdaouss-Lallemand (BFL) and Guo-Zheng-Shi (GZS) methods give second-order convergence in space, while simple bounce-back degrades to first order. The BFL method appears to perform better than GZS in unsteady flows and is significantly less computationally expensive. The Junk-Yang method shows poor stability at larger Re and so cannot be recommended here. The choice of collision operator (lattice Bhatnagar-Gross-Krook vs multiple relaxation time) and velocity set (D3Q15 vs D3Q19 vs D3Q27) does not significantly affect the accuracy in the problems studied.

  10. Reclaimed mineland curve number response to temporal distribution of rainfall

    USGS Publications Warehouse

    Warner, R.C.; Agouridis, C.T.; Vingralek, P.T.; Fogle, A.W.

    2010-01-01

    The curve number (CN) method is a common technique to estimate runoff volume, and it is widely used in coal mining operations such as those in the Appalachian region of Kentucky. However, very little CN data are available for watersheds disturbed by surface mining and then reclaimed using traditional techniques. Furthermore, as the CN method does not readily account for variations in infiltration rates due to varying rainfall distributions, the selection of a single CN value to encompass all temporal rainfall distributions could lead engineers to substantially under- or over-size the water detention structures used in mining operations or in other land uses such as development. Using rainfall and runoff data from a surface coal mine located in the Cumberland Plateau of eastern Kentucky, CNs were computed for conventionally reclaimed lands. The effect of temporal rainfall distributions on CNs was also examined by classifying storms as intense, steady, multi-interval intense, or multi-interval steady. Results indicate that CNs for such reclaimed lands ranged from 62 to 94 with a mean value of 85. Temporal rainfall distributions were also shown to significantly affect CN values, with intense storms having significantly higher CNs than multi-interval storms. These results indicate that a period of recovery is present between rainfall bursts of a multi-interval storm that allows depressional storage and infiltration rates to rebound. © 2010 American Water Resources Association.
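
    For a single storm event, the CN follows from inverting the SCS runoff equation Q = (P - 0.2S)^2 / (P + 0.8S) with S = 1000/CN - 10 (depths in inches). The Python sketch below applies the standard closed-form inversion to hypothetical rainfall-runoff pairs; the event values are invented, not the study's data.

      import numpy as np

      def curve_number(P, Q):
          # Invert Q = (P - 0.2*S)**2 / (P + 0.8*S) for the retention S (inches)
          # using the standard closed form, then CN = 1000 / (S + 10).
          S = 5.0 * (P + 2.0 * Q - np.sqrt(4.0 * Q**2 + 5.0 * P * Q))
          return 1000.0 / (S + 10.0)

      # Hypothetical storm events: rainfall P and runoff Q in inches.
      P = np.array([1.2, 2.5, 3.1, 4.0, 1.8])
      Q = np.array([0.15, 0.60, 0.95, 1.40, 0.35])
      print(curve_number(P, Q))   # one event CN per storm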

  11. Exhaustive geographic search with mobile robots along space-filling curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spires, S.V.; Goldsmith, S.Y.

    1998-03-01

    Swarms of mobile robots can be tasked with searching a geographic region for targets of interest, such as buried land mines. The authors assume that the individual robots are equipped with sensors tuned to the targets of interest, that these sensors have limited range, and that the robots can communicate with one another to enable cooperation. How can a swarm of cooperating sensate robots efficiently search a given geographic region for targets in the absence of a priori information about the targets' locations? Many of the obvious approaches are inefficient or lack robustness. One efficient approach is to have the robots traverse a space-filling curve. For many geographic search applications, this method is energy-frugal, highly robust, and provides guaranteed coverage in a finite time that decreases as the reciprocal of the number of robots sharing the search task. Furthermore, it minimizes the amount of robot-to-robot communication needed for the robots to organize their movements. This report presents some preliminary results from applying the Hilbert space-filling curve to geographic search by mobile robots.
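
    The traversal itself is easy to generate. The Python sketch below uses the classic iterative rotation/reflection construction to map a distance along the Hilbert curve to grid coordinates, giving one robot's visiting order over a region; the grid order and region size are illustrative choices.

      def hilbert_point(order, d):
          # Convert distance d along a Hilbert curve of the given order into
          # (x, y) on a 2**order x 2**order grid (classic "d2xy" walk).
          x = y = 0
          s = 1
          while s < (1 << order):
              rx = 1 & (d // 2)
              ry = 1 & (d ^ rx)
              if ry == 0:                      # rotate/reflect the quadrant
                  if rx == 1:
                      x, y = s - 1 - x, s - 1 - y
                  x, y = y, x
              x += s * rx
              y += s * ry
              d //= 4
              s *= 2
          return x, y

      # Waypoints for one robot: visit every cell of a 16 x 16 region in curve order.
      path = [hilbert_point(4, d) for d in range(16 * 16)]
      print(path[:5])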

  12. Comparative studies of copy number variation detection methods for next-generation sequencing technologies.

    PubMed

    Duan, Junbo; Zhang, Ji-Gang; Deng, Hong-Wen; Wang, Yu-Ping

    2013-01-01

    Copy number variation (CNV) has played an important role in studies of susceptibility or resistance to complex diseases. Traditional methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution of genomic regions. Following the emergence of next-generation sequencing (NGS) technologies, CNV detection methods based on short-read data have recently been developed. However, due to the relatively young age of these procedures, their performance is not fully understood. To help investigators choose suitable methods to detect CNVs, comparative studies are needed. We compared six publicly available CNV detection methods: CNV-seq, FREEC, readDepth, CNVnator, SegSeq and event-wise testing (EWT). They are evaluated both on simulated and real data with different experiment settings. The receiver operating characteristic (ROC) curve is employed to demonstrate detection performance in terms of sensitivity and specificity, box plots are employed to compare performance in terms of breakpoint and copy number estimation, a Venn diagram is employed to show the consistency among the methods, and the F-score is employed to show the overlapping quality of detected CNVs. The computational demands are also studied. The results of our work provide a comprehensive evaluation of the performance of the selected CNV detection methods, which will help biological investigators choose the best possible method.
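
    The ROC comparison used here can be reproduced with a few lines of array code. The Python sketch below builds an ROC curve and its area from per-region scores of a hypothetical caller against simulated truth labels; the score model is an assumption for illustration.

      import numpy as np

      rng = np.random.default_rng(7)

      # Benchmark labels (1 = simulated CNV region) and one score per region
      # from a hypothetical detection method.
      truth = rng.integers(0, 2, 1000)
      scores = rng.normal(truth.astype(float), 1.0)   # scores higher on true CNVs

      order = np.argsort(-scores)                     # sweep threshold high -> low
      tpr = np.cumsum(truth[order]) / truth.sum()
      fpr = np.cumsum(1 - truth[order]) / (1 - truth).sum()
      print(np.trapz(tpr, fpr))                       # area under the ROC curve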

  13. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    PubMed

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a result with lower residual. In quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and it can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
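
    One reading of the residual-feedback idea can be sketched as follows: fit a sum of two Gaussian profiles, then repeatedly refit against a residual-augmented target until the residual settles. The Python sketch below uses synthetic Cu-Fe-like lines near 324 nm; the Gaussian line shape, the feedback rule, and all constants are illustrative assumptions, not the paper's exact procedure.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(8)

      def two_gauss(x, a1, c1, w1, a2, c2, w2):
          return (a1 * np.exp(-((x - c1) / w1) ** 2)
                  + a2 * np.exp(-((x - c2) / w2) ** 2))

      # Synthetic overlapping lines near 324 nm, with noise.
      x = np.linspace(321.0, 327.0, 400)
      y = two_gauss(x, 1.0, 323.7, 0.4, 0.7, 324.4, 0.5) + rng.normal(0, 0.02, x.size)

      p, _ = curve_fit(two_gauss, x, y, p0=[0.8, 323.5, 0.5, 0.5, 324.5, 0.5])
      for _ in range(5):                     # feed the residual back and refit
          resid = y - two_gauss(x, *p)
          p, _ = curve_fit(two_gauss, x, y + resid, p0=p)
      print(p)                               # decomposed peak parameters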

  14. Coherent-Anomaly Method in Self-Avoiding Walk Problems

    NASA Astrophysics Data System (ADS)

    Hu, Xiao; Suzuki, Masuo

    Self-avoiding walk (SAW), being a nonequilibrium cooperative phenomenon, is investigated with a finite-order-restricted-walk (finite-ORW or FORW) coherent-anomaly method (CAM). The coefficient β1(r) in the asymptotic form Cn(r) ≃ β1(r)·λ1(r)^n for the total number Cn(r) of r-ORW's with respect to the step number n is investigated for the first time. An asymptotic form for SAW's is thus obtained from the series of FORW approximants, Cn(r) ≃ b·r^g·[μ(1 + a/r)]^n, as the envelope curve Cn ≃ b·(ae/g)^g·μ^n·n^g. Numerical results are given by Cn ≃ 1.424·n^0.2788·4.1507^n and Cn ≃ 1.179·n^0.1587·10.005^n for the plane triangular lattice and f.c.c. lattice, respectively. The good agreement of the total numbers estimated from these simple formulae with exact enumerations for finite-step SAW's implies that the essential nature of SAW (a non-Markov process) can be understood from FORW (a Markov process) in the CAM framework.

  15. Ab initio study of the diatomic fluorides FeF, CoF, NiF, and CuF.

    PubMed

    Koukounas, Constantine; Mavridis, Aristides

    2008-11-06

    The late-3d transition-metal diatomic fluorides MF = FeF, CoF, NiF, and CuF have been studied using variational multireference (MRCI) and coupled-cluster [RCCSD(T)] methods, combined with large to very large basis sets. We examined a total of 35 ^(2S+1)|Λ| states, constructing as well 29 full potential energy curves through the MRCI method. All examined states are ionic, diabatically correlating to M+ + F-(1S). Notwithstanding the "eccentric" character of the 3d transition metals and the difficulty of describing them accurately with all-electron ab initio methods, our results are, in general, in very good agreement with available experimental numbers.

  16. A sediment graph model based on SCS-CN method

    NASA Astrophysics Data System (ADS)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    Summary: This paper proposes new conceptual sediment graph models based on the coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km^2) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
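
    The coupling can be illustrated schematically: SCS-CN-derived rainfall excess is convolved with a Nash instantaneous unit hydrograph, and a power law links the resulting flow to sediment flux. The Python sketch below is a toy version with invented parameters, not the calibrated Nagwan model.

      import numpy as np
      from scipy.special import gamma

      def nash_iuh(t, n, k):
          # Nash instantaneous unit hydrograph: cascade of n linear reservoirs.
          return (t / k) ** (n - 1) * np.exp(-t / k) / (k * gamma(n))

      dt = 0.5                                     # hours
      t = np.arange(0.0, 48.0, dt)
      excess = np.zeros_like(t)
      excess[4:10] = 2.0                           # SCS-CN rainfall excess, mm/h

      flow = np.convolve(excess, nash_iuh(t, n=3.0, k=4.0))[: t.size] * dt
      sediment = 0.05 * flow ** 1.2                # power-law flow-to-sediment link
      print(sediment.max())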

  17. Effect of radial magnetic field on peristaltic transport of Jeffrey fluid in curved channel with heat / mass transfer

    NASA Astrophysics Data System (ADS)

    Abdulhadi, Ahmed M.; Ahmed, Tamara S.

    2018-05-01

    In this paper we deal with the effect of a radial magnetic field on the peristaltic transport of a Jeffrey fluid through a two-dimensional curved channel. The effect of a slip condition on velocity and of a no-slip condition on temperature and concentration is examined. Heat and mass transfer are considered under the influence of various parameters. The flow is investigated under the long-wavelength and low-Reynolds-number approximations. The distributions of temperature and concentration are discussed for the various parameters governing the flow, with the simultaneous effects of the Brinkman number, Soret number and Schmidt number.

  18. A new method for separating the climatic and biological trend components from tree ring series, with implications for paleoclimate reconstructions

    NASA Astrophysics Data System (ADS)

    Bouldin, J.

    2010-12-01

    In the reconstruction of past climates from tree rings over multi-decadal to multi-centennial periods, one longstanding problem is the confounding of the natural biological growth trend of the tree with any existing long-term trends in the climate. No existing analytical method is capable of resolving these two change components, so it remains unclear how accurate existing ring series standardizations are and, by implication, the climate reconstructions based upon them. For example, dendrochronological series at the ITRDB are typically standardized by detrending, at each site, each individual tree core using a relatively stiff deterministic function such as a negative exponential curve or smoothing spline. Another approach, referred to as RCS (Regional Curve Standardization), attempts to solve some problems of individual series detrending by constructing a single growth curve from the aggregated cambial ages of the rings of the cores at a site (or collection of sites). This curve is presumed to represent the "ideal" or expected growth of the trees from which it is derived. Although an improvement in some respects, this method is degraded in direct proportion to the lack of a mixture of tree sizes or ages throughout the span of the chronology. I present a new method of removing the biological curve from tree ring series, such that temporal changes better represent the environmental variation captured by the tree rings. The method institutes several new approaches, such as the correction for the estimated number of missed rings near the pith, and the use of tree size and ring area relationships instead of the traditional tree ages and ring widths. The most important innovation is a careful extraction of the existing information on the relationship between tree size (basal area) and ring area that exists within each single year of the chronology. This information is, by definition, not contaminated by temporal climatic changes, and so when removed, leaves the climatically caused and random error components of the chronology. A sophisticated algorithm, based on pair-wise ring comparisons in which tree size is standardized both within and between years, forms the basis of the method. Evaluations of the method are underway with both simulated and actual (ITRDB) data, to evaluate the potentials and drawbacks of the method relative to existing methods. The ITRDB test data consist of a set of about 50 primarily high-elevation sites from across western North America. Most of these sites show a pronounced 20th Century warming relative to earlier centuries, in accordance with current understanding, albeit at a non-global scale. A relative minority show cooling, occasionally strongly. Current and future work emphasizes evaluation of the method with varying, simulated data, and more thorough empirical evaluations of the method in situations where the type and intensity of the primary environmentally limiting factor varies (e.g., temperature- versus soil-moisture-limited sites).
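
    For contrast with the proposed method, the RCS baseline described above is easy to sketch: average ring measurements by cambial age across trees, then divide each series by that regional curve. The Python sketch below does this on synthetic ring-width series; it illustrates the classic approach the new method aims to improve upon, with all data invented.

      import numpy as np

      rng = np.random.default_rng(9)

      # Synthetic trees: ring width = biological curve (by cambial age) x noise.
      ages = np.arange(1, 201)
      biology = 2.0 * np.exp(-ages / 60.0) + 0.3
      lengths = rng.integers(80, 201, size=40)
      rings = [biology[:n] * np.exp(rng.normal(0, 0.2, n)) for n in lengths]

      # RCS: mean ring width at each cambial age, pooled across all trees.
      maxage = lengths.max()
      total, count = np.zeros(maxage), np.zeros(maxage)
      for r in rings:
          total[: r.size] += r
          count[: r.size] += 1
      regional_curve = total / count

      indices = [r / regional_curve[: r.size] for r in rings]   # detrended series
      print(indices[0][:5])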

  19. On determining the most appropriate test cut-off value: the case of tests with continuous results

    PubMed Central

    Habibzadeh, Parham; Yadollahie, Mahboobeh

    2016-01-01

    There are several criteria for determining the most appropriate cut-off value in a diagnostic test with continuous results, most of them based on receiver operating characteristic (ROC) analysis. The most common criteria are the point on the ROC curve where the sensitivity and specificity of the test are equal; the point on the curve with minimum distance from the upper-left corner of the unit square; and the point where Youden's index is maximum. There are also methods mainly based on Bayesian decision analysis. Herein, we show that a proposed method that maximizes the weighted number needed to misdiagnose, an index of diagnostic test effectiveness we previously proposed, is the most appropriate technique compared with the aforementioned ones. For determination of the cut-off value, we need to know the pretest probability of the disease of interest as well as the costs incurred by misdiagnosis. This means that even for a given diagnostic test, the cut-off value is not universal and should be determined for each region and for each disease condition. PMID:27812299
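
    The three common ROC-based criteria are straightforward to compute from a labeled sample. The Python sketch below evaluates them on synthetic test results; the distributions and threshold grid are illustrative assumptions (the weighted number-needed-to-misdiagnose criterion would additionally need the pretest probability and misdiagnosis costs).

      import numpy as np

      rng = np.random.default_rng(10)

      diseased = rng.normal(2.0, 1.0, 300)   # test results in subjects with disease
      healthy = rng.normal(0.0, 1.0, 700)    # test results in subjects without

      cuts = np.linspace(-3, 5, 400)
      sens = np.array([(diseased >= c).mean() for c in cuts])
      spec = np.array([(healthy < c).mean() for c in cuts])

      print(cuts[np.argmax(sens + spec - 1)])                    # max Youden's J
      print(cuts[np.argmin((1 - sens) ** 2 + (1 - spec) ** 2)])  # closest to (0, 1)
      print(cuts[np.argmin(np.abs(sens - spec))])                # sens = spec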

  20. Characterization of water retention curves for a series of cultivated histosols

    Treesearch

    Dennis W. Hallema; Yann Périard; Jonathan A. Lafond; Silvio J. Gumiere; Jean Caron

    2015-01-01

    Water retention curves are essential for the parameterization of soil water models such as HYDRUS. Although hydraulic parameters are known for a large number of mineral and natural organic soils, our knowledge on the hydraulic behavior of cultivated Histosols is rather limited. The objective of this study was to derive characteristic water retention curves for a large...
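
    The truncated abstract does not name a retention model, but the van Genuchten form is the standard HYDRUS parameterization, so a hedged sketch may help; all parameter values below are illustrative, not fitted Histosol values:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water retention curve theta(h) in the van Genuchten form used by
    HYDRUS; h is suction head (positive, same length unit as 1/alpha)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Illustrative (not measured) parameters for a cultivated organic soil
h = np.logspace(0, 4, 5)          # suction from 1 to 10^4 cm
print(van_genuchten(h, theta_r=0.15, theta_s=0.80, alpha=0.02, n=1.4))
```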

  1. Regional analysis of annual precipitation maxima in Montana

    USGS Publications Warehouse

    Parrett, Charles

    1997-01-01

    Dimensionless precipitation-frequency curves for estimating precipitation depths having large recurrence intervals were developed for 2-, 6-, and 24-hour storm durations for three homogeneous regions in Montana. Within each homogeneous region, at-site annual precipitation maxima were made dimensionless by dividing by the at-site mean and grouped so that a single frequency curve would be applicable for each duration. L-moment statistics were used to help define the homogeneous regions and to develop the dimensionless precipitation-frequency curves. Data from 459 precipitation stations were used after application of statistical tests to ensure that the data were not serially correlated and were stationary over the general period of data collection (1900-92). The data were found to have a small, but significant, degree of interstation correlation. The generalized extreme-value (GEV) distribution was used to construct dimensionless frequency curves of annual precipitation maxima for each duration within each region. Each dimensionless frequency curve was considered to be reliable for recurrence intervals up to the effective record length. Because of significant, though small, interstation correlation in all regions for all durations, and because the selected regions exhibited some heterogeneity, the effective record length was considered to be less than the total number of station-years of data. The effective record length for each duration in each region was estimated using a graphical method and found to range from 500 years for 6-hour duration data in Region 2 to 5,100 years for 24-hour duration data in Region 3.
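
    A minimal sketch of how such a dimensionless frequency curve is used: fit a regional GEV to the mean-scaled maxima, then read off the depth ratio at a chosen recurrence interval. The parameter values below are invented for illustration, not the report's fitted values:

```python
from scipy.stats import genextreme

# Illustrative regional GEV parameters for at-site maxima scaled by the
# at-site mean (made-up values, not the report's fits); scipy's shape
# parameter c is the negative of the usual kappa convention.
shape, loc, scale = -0.10, 0.85, 0.30

def dimensionless_depth(T_years):
    """Precipitation depth ratio (depth / at-site mean) for a given
    recurrence interval, from the regional GEV frequency curve."""
    p_nonexceed = 1.0 - 1.0 / T_years
    return genextreme.ppf(p_nonexceed, shape, loc=loc, scale=scale)

for T in (2, 10, 100, 500):
    # Multiply by the at-site mean to recover a depth at any station
    print(T, round(dimensionless_depth(T), 2))
```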

  2. Collapsed heteroclinic snaking near a heteroclinic chain in dragged meniscus problems.

    PubMed

    Tseluiko, D; Galvagno, M; Thiele, U

    2014-04-01

    A liquid film is studied that is deposited onto a flat plate that is inclined at a constant angle to the horizontal and is extracted from a liquid bath at a constant speed. We analyse steady-state solutions of a long-wave evolution equation for the film thickness. Using centre manifold theory, we first obtain an asymptotic expansion of solutions in the bath region. The presence of an additional temperature gradient along the plate that induces a Marangoni shear stress significantly changes these expansions and leads to the presence of logarithmic terms that are absent otherwise. Next, we numerically obtain steady solutions and analyse their behaviour as the plate velocity is changed. We observe that the bifurcation curve exhibits collapsed (or exponential) heteroclinic snaking when the plate inclination angle is above a certain critical value. Otherwise, the bifurcation curve is monotonic. The steady profiles along these curves are characterised by a foot-like structure that is formed close to the meniscus and is preceded by a thin precursor film further up the plate. The length of the foot increases along the bifurcation curve. Finally, we prove with a Shilnikov-type method that the snaking behaviour of the bifurcation curves is caused by the existence of an infinite number of heteroclinic orbits close to a heteroclinic chain that connects in an appropriate three-dimensional phase space the fixed point corresponding to the precursor film with the fixed point corresponding to the foot and then with the fixed point corresponding to the bath.

  3. Learning Curve and Clinical Outcomes of Performing Surgery with the InterTan Intramedullary Nail in Treating Femoral Intertrochanteric Fractures

    PubMed Central

    2017-01-01

    Purpose. The purpose of this study is to evaluate the learning curve of performing surgery with the InterTan intramedullary nail in treating femoral intertrochanteric fractures, to provide valuable information and experience for surgeons who decide to learn a new procedure. Methods. We retrospectively analyzed data from 53 patients who underwent surgery using an InterTan intramedullary nail at our hospital between July 2012 and September 2015. Negative exponential curve-fit regression analysis was used to evaluate the learning curve. According to the 90% learning milestone, patients were divided into two groups, and the outcomes were compared. Results. The mean operative time was 69.28 (95% CI 64.57 to 74.00) minutes; with the accumulation of surgical experience, the operation time gradually decreased. 90% of the potential improvement was expected after 18 cases. In terms of operative time, intraoperative blood loss, hospital stay, and Harris hip score, significant differences were found between the two groups (p = 0.009, p < 0.001, p = 0.030, and p = 0.002, respectively). Partial weight bearing time, fracture union time, tip apex distance, and the number of blood transfusions and complications were similar between the two groups (p > 0.5). Conclusion. This study demonstrated that the learning curve of performing surgery with the InterTan intramedullary nail is acceptable and that 90% of the expert's proficiency level is achieved at around 18 cases. PMID:28503572
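
    The negative exponential curve fit and the 90% milestone can be reproduced in a few lines. In the sketch below (simulated data; assumed model form T(n) = T_plateau + delta*exp(-n/tau)), 90% of the potential improvement corresponds to exp(-n/tau) = 0.1, i.e. n = tau*ln(10):

```python
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, t_plateau, delta, tau):
    """Negative exponential model: operative time for the n-th case."""
    return t_plateau + delta * np.exp(-n / tau)

# Simulated case numbers and operative times (minutes), for illustration
rng = np.random.default_rng(0)
n = np.arange(1, 54)
times = learning_curve(n, 60.0, 40.0, 8.0) + rng.normal(0, 5, n.size)

popt, _ = curve_fit(learning_curve, n, times, p0=(60, 40, 10))
t_plateau, delta, tau = popt

# 90% of the potential improvement is reached when exp(-n/tau) = 0.1
n90 = tau * np.log(10.0)
print(f"plateau {t_plateau:.1f} min, 90% milestone at case {n90:.0f}")
```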

  4. Surface wave phase velocities from 2-D surface wave tomography studies in the Anatolian plate

    NASA Astrophysics Data System (ADS)

    Arif Kutlu, Yusuf; Erduran, Murat; Çakır, Özcan; Vinnik, Lev; Kosarev, Grigoriy; Oreshin, Sergey

    2014-05-01

    We study the Rayleigh and Love surface wave fundamental mode propagation beneath the Anatolian plate. To examine the inter-station phase velocities a two-station method is used along with the Multiple Filter Technique (MFT) in the Computer Programs in Seismology (Herrmann and Ammon, 2004). The near-station waveform is deconvolved from the far-station waveform removing the propagation effects between the source and the station. This method requires that the near and far stations are aligned with the epicentre on a great circle path. The azimuthal difference of the earthquake to the two-stations and the azimuthal difference between the earthquake and the station are restricted to be smaller than 5o. We selected 3378 teleseismic events (Mw >= 5.7) recorded by 394 broadband local stations with high signal-to-noise ratio within the years 1999-2013. Corrected for the instrument response suitable seismogram pairs are analyzed with the two-station method yielding a collection of phase velocity curves in various period ranges (mainly in the range 25-185 sec). Diffraction from lateral heterogeneities, multipathing, interference of Rayleigh and Love waves can alter the dispersion measurements. In order to obtain quality measurements, we select only smooth portions of the phase velocity curves, remove outliers and average over many measurements. We discard these average phase velocity curves suspected of suffering from phase wrapping errors by comparing them with a reference Earth model (IASP91 by Kennett and Engdahl, 1991). The outlined analysis procedure yields 3035 Rayleigh and 1637 Love individual phase velocity curves. To obtain Rayleigh and Love wave travel times for a given region we performed 2-D tomographic inversion for which the Fast Marching Surface Tomography (FMST) code developed by N. Rawlinson at the Australian National University was utilized. This software package is based on the multistage fast marching method by Rawlinson and Sambridge (2004a, 2004b). The azimuthal coverage of the respective two-station paths is proper to analyze the observed dispersion curves in terms of both azimuthal and radial anisotropy beneath the study region. This research is supported by Joint Research Project of the Scientific and Research Council of Turkey (TUBİTAK- Grant number 111Y190) and the Russian Federation for Basic Research (RFBR).

  5. SU-G-206-17: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Rutel, I; Yang, K

    2016-06-15

    Purpose: Computed tomography (CT) exam rooms are shielded more quickly and accurately compared to manual calculations using RadShield, a semi-automated diagnostic shielding software package. Last year, we presented RadShield’s approach to shielding radiographic and fluoroscopic rooms calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also by overlaying vendor provided isodose curves onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield’s isodose curve method employs radial scanning to extract data point sets to fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield’s semi-automated shielding recommendations were compared against a board certified medical physicist’s design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist’s manual calculation and RadShield’s semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist’s selection of calculation points for recommending lead thickness was roughly the same as those found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.
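
    The kerma fit named in the abstract, K(r) = a·r^b, reduces to a linear least-squares problem in log-log space. A minimal sketch with synthetic radial-scan data (not RadShield's actual implementation):

```python
import numpy as np

def fit_kerma_power_law(r, k):
    """Fit K(r) = a * r**b to (distance, kerma) points extracted by
    radially scanning an isodose-curve overlay; a log-log linear
    least-squares fit gives b as the slope and ln(a) as the intercept."""
    slope, intercept = np.polyfit(np.log(r), np.log(k), 1)
    return np.exp(intercept), slope

# Toy radial scan: kerma falling off slightly faster than inverse square
r = np.array([1.0, 1.5, 2.0, 3.0, 4.0])      # metres
k = 12.0 * r**-2.2                           # synthetic kerma values
a, b = fit_kerma_power_law(r, k)
print(f"K(r) = {a:.1f} * r^{b:.2f}")         # recovers a=12.0, b=-2.20
```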

  6. SU-F-P-53: RadShield: Semi-Automated Shielding Design for CT Using NCRP 147 and Isodose Curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeLorenzo, M; Rutel, I; Wu, D

    Purpose: Computed tomography (CT) exam rooms are shielded more quickly and accurately compared to manual calculations using RadShield, a semi-automated diagnostic shielding software package. Last year, we presented RadShield’s approach to shielding radiographic and fluoroscopic rooms calculating air kerma rate and barrier thickness at many points on the floor plan and reporting the maximum values for each barrier. RadShield has now been expanded to include CT shielding design using not only NCRP 147 methodology but also by overlaying vendor provided isodose curves onto the floor plan. Methods: The floor plan image is imported onto the RadShield workspace to serve as a template for drawing barriers, occupied regions and CT locations. SubGUIs are used to set design goals, occupancy factors, workload, and overlay isodose curve files. CTDI and DLP methods are solved following NCRP 147. RadShield’s isodose curve method employs radial scanning to extract data point sets to fit kerma to a generalized power law equation of the form K(r) = ar^b. RadShield’s semi-automated shielding recommendations were compared against a board certified medical physicist’s design using dose length product (DLP) and isodose curves. Results: The percentage error found between the physicist’s manual calculation and RadShield’s semi-automated calculation of lead barrier thickness was 3.42% and 21.17% for the DLP and isodose curve methods, respectively. The medical physicist’s selection of calculation points for recommending lead thickness was roughly the same as those found by RadShield for the DLP method but differed greatly using the isodose method. Conclusion: RadShield improves accuracy in calculating air-kerma rate and barrier thickness over manual calculations using isodose curves. Isodose curves were less intuitive and more prone to error for the physicist than inverse square methods. RadShield can now perform shielding design calculations for general scattering bodies for which isodose curves are provided.

  7. Analysis of a diverse assemblage of diazotrophic bacteria from Spartina alterniflora using DGGE and clone library screening.

    PubMed

    Lovell, Charles R; Decker, Peter V; Bagwell, Christopher E; Thompson, Shelly; Matsui, George Y

    2008-05-01

    Methods to assess the diversity of the diazotroph assemblage in the rhizosphere of the salt marsh cordgrass, Spartina alterniflora, were examined. The effectiveness of nifH PCR-denaturing gradient gel electrophoresis (DGGE) was compared to that of nifH clone library analysis. Seventeen DGGE gel bands were sequenced and yielded 58 nonidentical nifH sequences from a total of 67 sequences determined. A clone library constructed using the GC-clamp nifH primers that were employed in the PCR-DGGE (designated the GC-Library) yielded 83 nonidentical sequences from a total of 257 nifH sequences. A second library constructed using an alternate set of nifH primers (N-Library) yielded 83 nonidentical sequences from a total of 138 nifH sequences. Rarefaction curves for the libraries did not reach saturation, although the GC-Library curve was substantially dampened and appeared to be closer to saturation than the N-Library curve. Phylogenetic analyses showed that DGGE gel band sequencing recovered nifH sequences that were frequently sampled in the GC-Library, as well as sequences that were infrequently sampled, and provided a species composition assessment that was robust, efficient, and relatively inexpensive to obtain. Further, the DGGE method permits a large number of samples to be examined for differences in banding patterns, after which bands of interest can be sampled for sequence determination.

  8. Determination of NEHRP Site Class of Seismic Recording Stations in the Northwest Himalayas and Its Adjoining Area Using HVSR Method

    NASA Astrophysics Data System (ADS)

    Harinarayan, N. H.; Kumar, Abhishek

    2018-01-01

    Local site characteristics play an important role in controlling the damage pattern during earthquakes (EQs). These site characteristics may vary from simple to complex and can be estimated by various field tests. In addition, the extended Nakamura's method, which uses the horizontal-to-vertical spectral ratio (HVSR) of available EQ records, can also be used for site class (SC) determination. In this study, SCs for 90 recording stations maintained by the Program for Excellence in Strong Motion Studies (PESMOS), located in the northwestern Himalayas and the adjoining areas, are determined using the extended Nakamura's technique. The average HVSR curves obtained at the majority of the recording stations are found to match the existing literature. The predominant frequency (f_peak) from the average HVSR curve at each recording station is then used for the determination of SC. The original SC given by PESMOS is based purely on geology and not on a comprehensive soil investigation. In this study, the SC based on the average HVSR curves is found to match the SC given by PESMOS for a majority of recording stations. However, for a considerable number of recording stations, a mismatch is also found, which is consistent with the existing literature. In addition, an SC based on the National Earthquake Hazard Reduction Program (NEHRP) scheme is proposed based on f_peak for all 90 recording stations.
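
    A minimal sketch of an HVSR computation for one three-component record; combining the horizontals by their geometric mean is one common convention and an assumption here, not necessarily the exact recipe of the study:

```python
import numpy as np

def hvsr(north, east, vertical, fs):
    """Horizontal-to-vertical spectral ratio for one three-component
    earthquake record, using the geometric mean of the horizontal
    amplitude spectra (a common, but assumed, convention)."""
    freqs = np.fft.rfftfreq(len(vertical), d=1.0 / fs)
    n_spec = np.abs(np.fft.rfft(north))
    e_spec = np.abs(np.fft.rfft(east))
    v_spec = np.abs(np.fft.rfft(vertical))
    ratio = np.sqrt(n_spec * e_spec) / v_spec
    return freqs, ratio

# f_peak, read off the station-averaged curve, maps to an NEHRP site class
fs = 100.0
t = np.arange(0, 40, 1 / fs)
rng = np.random.default_rng(1)
rec = [rng.normal(size=t.size) for _ in range(3)]   # stand-in waveforms
freqs, ratio = hvsr(*rec, fs)
print(freqs[np.argmax(ratio[1:]) + 1])   # crude f_peak (skips DC bin)
```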

  9. Thermoluminescence kinetic features of Lithium Iodide (LiI) single crystal grown by vertical Bridgman technique

    NASA Astrophysics Data System (ADS)

    Daniel, D. Joseph; Kim, H. J.; Kim, Sunghwan; Khan, Sajid

    2017-08-01

    A single crystal of pure lithium iodide (LiI) has been grown from the melt using the vertical Bridgman technique. Thermoluminescence (TL) measurements were carried out at 1 K/s following X-ray irradiation. The TL glow curve consists of a dominant peak (peak maximum T_m) at 393 K and a weaker low-temperature peak at 343 K. The order of kinetics (b), activation energy (E), and frequency factor (S) for the prominent TL glow peak observed around 393 K in LiI crystals are reported for the first time. Peak shape analysis of the glow peak indicates the kinetics to be of first order. The value of E is calculated using various standard methods such as the initial rise (IR), whole glow peak (WGP), peak shape (PS), computerized glow curve deconvolution (CGCD) and variable heating rate (VHR) methods. An average value of 1.06 eV is obtained in this case. In order to validate the obtained parameters, a numerically integrated TL glow curve has been generated using the experimentally determined kinetic parameters. The effective atomic number (Zeff) of this material was determined and found to be 52. X-ray induced emission spectra of the pure LiI single crystal were studied at room temperature, and the sample exhibits a sharp emission at 457 nm and a broad emission at 650 nm.
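
    For a first-order peak of the kind reported, the Randall-Wilkins expression can regenerate the glow curve from (E, S, beta), as the abstract's validation step does. In this sketch the frequency factor is a placeholder chosen so the simulated peak lands near the reported 393 K; it is not the paper's fitted value:

```python
import numpy as np

def first_order_glow(T, E, s, beta, n0=1.0):
    """Randall-Wilkins first-order TL glow curve, integrated numerically.
    E in eV, s (frequency factor) in 1/s, beta = heating rate in K/s."""
    k = 8.617e-5                                 # Boltzmann constant, eV/K
    boltz = np.exp(-E / (k * T))
    # cumulative integral of exp(-E/kT') dT' from T[0] up to each T
    dT = np.diff(T, prepend=T[0])
    integral = np.cumsum(boltz * dT)
    return n0 * s * boltz * np.exp(-(s / beta) * integral)

# E = 1.06 eV as reported; s = 3e12 1/s is a placeholder, tuned only so
# the simulated peak maximum falls close to 393 K at beta = 1 K/s.
T = np.linspace(300, 450, 1501)
I = first_order_glow(T, E=1.06, s=3e12, beta=1.0)
print(f"peak maximum near {T[np.argmax(I)]:.0f} K")
```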

  10. In vivo proton dosimetry using a MOSFET detector in an anthropomorphic phantom with tissue inhomogeneity

    PubMed Central

    Hotta, Kenji; Matsubara, Kana; Nishioka, Shie; Matsuura, Taeko; Kawashima, Mitsuhiko

    2012-01-01

    When in vivo proton dosimetry is performed with a metal‐oxide semiconductor field‐effect transistor (MOSFET) detector, the response of the detector depends strongly on the linear energy transfer. The present study reports a practical method to correct the MOSFET response for linear energy transfer dependence by using a simplified Monte Carlo dose calculation method (SMC). A depth‐output curve for a mono‐energetic proton beam in polyethylene was measured with the MOSFET detector. This curve was used to calculate MOSFET output distributions with the SMC (SMC_MOSFET). The SMC_MOSFET output value at an arbitrary point was compared with the value obtained by the conventional SMC_PPIC, which calculates proton dose distributions by using the depth‐dose curve determined by a parallel‐plate ionization chamber (PPIC). The ratio of the two values was used to calculate the correction factor of the MOSFET response at an arbitrary point. The dose obtained by the MOSFET detector was determined from the product of the correction factor and the MOSFET raw dose. When in vivo proton dosimetry was performed with the MOSFET detector in an anthropomorphic phantom, the corrected MOSFET doses agreed with the SMC_PPIC results within the measurement error. To our knowledge, this is the first report of successful in vivo proton dosimetry with a MOSFET detector. PACS number: 87.56.‐v PMID:22402385

  11. Assessment of a Learning Strategy among Spine Surgeons

    PubMed Central

    Gotfryd, Alberto Ofenhejm; Teixeira, William Jacobsen; Martins, Delio Eulálio; Milano, Jeronimo; Iutaka, Alexandre Sadao

    2017-01-01

    Study Design: Pilot test, observational study. Objective: To evaluate objectively the knowledge transfer provided by theoretical and practical activities during AOSpine courses for spine surgeons. Methods: During two AOSpine principles courses, 62 participants underwent a precourse assessment, which consisted of questions about their professional experience, preferences regarding adolescent idiopathic scoliosis (AIS) classification, and classifying the curves of two AIS clinical cases by means of the Lenke classification. Two learning strategies were used during the course. A postcourse questionnaire was applied to reclassify the same deformity cases. Differences in the correct answers of clinical cases between pre- and postcourse were analyzed, revealing the number of participants whose accuracy in classification improved after the course. Results: Analysis showed a decrease in the number of participants with wrong answers in both cases after the course. In the first case, statistically significant differences were observed in both curve pattern (83.3%, p = 0.005) and lumbar spine modifier (46.6%, p = 0.049). No statistically significant improvement was seen in the sagittal thoracic modifier (33.3%, p = 0.309). In the second case, statistical improvement was obtained in curve pattern (27.4%, p = 0.018). No statistically significant improvement was seen regarding lumbar spine modifier (9.8%, p = 0.121) and sagittal thoracic modifier (12.9%, p = 0.081). Conclusion: This pilot test showed objectively that the learning strategies used during AOSpine courses improved the participants' knowledge. Teaching strategies must be continually improved to ensure an optimal level of knowledge transfer. PMID:28451507

  12. Quantification of Plasmid Copy Number with Single Colour Droplet Digital PCR.

    PubMed

    Plotka, Magdalena; Wozniak, Mateusz; Kaczorowski, Tadeusz

    2017-01-01

    Bacteria can be considered biological nanofactories that manufacture a cornucopia of bioproducts, most notably recombinant proteins. As such, they must be perfectly matched with appropriate plasmid vectors to ensure successful overexpression of target genes. Among the many parameters that correlate positively with protein productivity, plasmid copy number plays a pivotal role. Therefore, the development of new and more accurate methods to assess this critical parameter will help optimize the expression of plasmid-encoded genes. In this study, we present a simple and highly accurate method for quantifying plasmid copy number using EvaGreen single-colour droplet digital PCR (ddPCR). We demonstrate the effectiveness of this method by examining the copy number of the pBR322 vector in Escherichia coli DH5α cells. The results were successfully validated by real-time PCR. However, we observed a strong dependency of the measured plasmid copy number on the method chosen for isolating the total DNA. Application of silica-membrane-based columns for DNA purification or DNA isolation by bead-beating, a mechanical cell disruption method, led to estimates of an average of 20.5 or 7.3 plasmid copies per chromosome, respectively. We found that recovery of chromosomal DNA from purification columns was less efficient than that of plasmid DNA (46.5 ± 1.9% and 87.4 ± 5.5%, respectively), which may explain the observed differences in plasmid copy number. Beyond the copy number variations dependent on the DNA isolation method, we found droplet digital PCR to be a very convenient method for measuring bacterial plasmid content. Careful determination of plasmid copy number is essential for better understanding and optimizing recombinant protein production. Droplet digital PCR is a very precise method that performs thousands of individual PCR reactions in a single tube, does not depend on running standard curves, and is a straightforward and reliable way to quantify plasmid copy number. We therefore believe that the ddPCR assay designed in this study will be widely used for plasmid copy number calculation in the future.
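
    The core ddPCR arithmetic is a Poisson correction of the fraction of negative droplets. A hedged sketch (droplet counts invented; assumes the plasmid and chromosome targets were partitioned from the same dilution of the same extract):

```python
import numpy as np

def ddpcr_lambda(positive, total):
    """Mean target copies per droplet from the Poisson correction of the
    fraction of negative droplets: lambda = -ln(negatives / total)."""
    return -np.log((total - positive) / total)

def plasmid_copy_number(pos_plasmid, pos_chrom, total):
    """Plasmid copies per chromosome as the ratio of the two Poisson
    means, assuming both targets came from the same dilution of the
    same total-DNA extract (the counts used below are made up)."""
    return ddpcr_lambda(pos_plasmid, total) / ddpcr_lambda(pos_chrom, total)

print(round(plasmid_copy_number(pos_plasmid=9000, pos_chrom=700,
                                total=20000), 1))   # ~16.8 copies/chromosome
```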

  13. Quantification of Plasmid Copy Number with Single Colour Droplet Digital PCR

    PubMed Central

    Plotka, Magdalena; Wozniak, Mateusz; Kaczorowski, Tadeusz

    2017-01-01

    Bacteria can be considered biological nanofactories that manufacture a cornucopia of bioproducts, most notably recombinant proteins. As such, they must be perfectly matched with appropriate plasmid vectors to ensure successful overexpression of target genes. Among the many parameters that correlate positively with protein productivity, plasmid copy number plays a pivotal role. Therefore, the development of new and more accurate methods to assess this critical parameter will help optimize the expression of plasmid-encoded genes. In this study, we present a simple and highly accurate method for quantifying plasmid copy number using EvaGreen single-colour droplet digital PCR (ddPCR). We demonstrate the effectiveness of this method by examining the copy number of the pBR322 vector in Escherichia coli DH5α cells. The results were successfully validated by real-time PCR. However, we observed a strong dependency of the measured plasmid copy number on the method chosen for isolating the total DNA. Application of silica-membrane-based columns for DNA purification or DNA isolation by bead-beating, a mechanical cell disruption method, led to estimates of an average of 20.5 or 7.3 plasmid copies per chromosome, respectively. We found that recovery of chromosomal DNA from purification columns was less efficient than that of plasmid DNA (46.5 ± 1.9% and 87.4 ± 5.5%, respectively), which may explain the observed differences in plasmid copy number. Beyond the copy number variations dependent on the DNA isolation method, we found droplet digital PCR to be a very convenient method for measuring bacterial plasmid content. Careful determination of plasmid copy number is essential for better understanding and optimizing recombinant protein production. Droplet digital PCR is a very precise method that performs thousands of individual PCR reactions in a single tube, does not depend on running standard curves, and is a straightforward and reliable way to quantify plasmid copy number. We therefore believe that the ddPCR assay designed in this study will be widely used for plasmid copy number calculation in the future. PMID:28085908

  14. STELLAR MAGNETIC CYCLES IN THE SOLAR-LIKE STARS KEPLER-17 AND KEPLER-63

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estrela, Raissa; Valio, Adriana, E-mail: rlf.estrela@gmail.com, E-mail: avalio@craam.mackenzie.br

    2016-11-01

    The stellar magnetic field plays a crucial role in the star's internal mechanisms, as well as in its interactions with its environment. The study of starspots provides information about the stellar magnetic field and can characterize its cycle. Moreover, the analysis of solar-type stars is also useful to shed light on the origin of the solar magnetic field. The objective of this work is to characterize the magnetic activity of stars. Here, we studied two solar-type stars, Kepler-17 and Kepler-63, using two methods to estimate the magnetic cycle length. The first characterizes the spots (radius, intensity, and location) by fitting the small variations in the light curve of a star caused by the occultation of a spot during a planetary transit. This approach yields the number of spots present on the stellar surface and the flux deficit subtracted from the star by their presence during each transit. The second method estimates the activity from the excess in the residuals of the transit light curves. This excess is obtained by subtracting a spotless model transit from the light curve and then integrating all the residuals during the transit. The presence of long-term periodicity is estimated in both time series. With the first method, we obtained P_cycle = 1.12 ± 0.16 year (Kepler-17) and P_cycle = 1.27 ± 0.16 year (Kepler-63); for the second approach the values are 1.35 ± 0.27 year and 1.27 ± 0.12 year, respectively. The results of both methods agree with each other and confirm their robustness.

  15. [Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].

    PubMed

    Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie

    At present, there are no accurate and quantitative methods for determining cardiac mechanical synchrony, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses whole-heart ultrasound image sequences and segments the left and right atria and left and right ventricles in each frame. After segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities across the image sequence are thereby obtained. The area change curves of the four cavities are then extracted, yielding the synchronization information of the four cavities. Because of the low SNR of ultrasound images, the boundaries of the cardiac cavities are vague, so the extraction of cardiac contours remains a challenging problem. Therefore, ASM model information is added to the traditional level set method to guide the curve evolution process. According to the experimental results, the improved method increases the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly from the area changes. The synchronization of the four cavities of the heart is estimated based on the area changes and the volume changes.

  16. A method for determining the column curve from tests of columns with equal restraints against rotation on the ends

    NASA Technical Reports Server (NTRS)

    Lundquist, Eugene E; Rossman, Carl A; Houbolt, John C

    1943-01-01

    The results are presented of a theoretical study for the determination of the column curve from tests of column specimens having ends equally restrained against rotation. The theory of this problem is studied and a curve is shown relating the fixity coefficient c to the critical load, the length of the column, and the magnitude of the elastic restraint. A method of using this curve for the determination of the column curve for columns with pin ends from tests of columns with elastically restrained ends is presented. The results of the method as applied to a series of tests on thin-strip columns of stainless steel are also given.
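
    The relation the report exploits is the classical Euler one: the fixity coefficient c scales the pin-ended buckling load, which is equivalent to evaluating the pin-ended column curve at an effective length. In the usual notation (an assumption here, since the report's exact symbols are not quoted):

```latex
% Euler critical load for a column of length L with end-fixity
% coefficient c (c = 1 for pin ends, c = 4 for fully fixed ends):
\[
  P_{cr} = \frac{c\,\pi^2 E I}{L^2},
  \qquad\text{so}\qquad
  L_e = \frac{L}{\sqrt{c}} ,
\]
% i.e. a test on an elastically restrained column of length L maps onto
% the pin-ended column curve at the equivalent length L_e.
```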

  17. A direct potential fitting RKR method: Semiclassical vs. quantal comparisons

    NASA Astrophysics Data System (ADS)

    Tellinghuisen, Joel

    2016-12-01

    Quantal and semiclassical (SC) eigenvalues are compared for three diatomic molecular potential curves: the X state of CO, the X state of Rb2, and the A state of I2. The comparisons show higher levels of agreement than generally recognized, when the SC calculations incorporate a quantum defect correction to the vibrational quantum number, in keeping with the Kaiser modification. One particular aspect of this is better agreement between quantal and SC estimates of the zero-point vibrational energy, supporting the need for the Y_00 correction in this context. The pursuit of a direct-potential-fitting (DPF) RKR method is motivated by the notion that some of the limitations of RKR potentials may be innate, from their generation by an exact inversion of approximate quantities: the vibrational energy G_v and rotational constant B_v from least-squares analysis of spectroscopic data. In contrast, the DPF RKR method resembles the quantal DPF methods now increasingly used to analyze diatomic spectral data, but with the eigenvalues obtained from SC phase integrals. Application of this method to the analysis of 9500 assigned lines in the I2 A ← X spectrum fails to alter the quantal-SC disparities found for the A-state RKR curve from a previous analysis. On the other hand, the SC method can be much faster than the quantal method in exploratory work with different potential functions, where it is convenient to use finite-difference methods to evaluate the partial derivatives required in nonlinear fitting.

  18. Dose‐finding methods for Phase I clinical trials using pharmacokinetics in small populations

    PubMed Central

    Zohar, Sarah; Lentz, Frederike; Alberti, Corinne; Friede, Tim; Stallard, Nigel; Comets, Emmanuelle

    2017-01-01

    The aim of phase I clinical trials is to obtain reliable information on safety, tolerability, pharmacokinetics (PK), and mechanism of action of drugs with the objective of determining the maximum tolerated dose (MTD). In most phase I studies, dose‐finding and PK analysis are done separately and no attempt is made to combine them during dose allocation. In cases such as rare diseases, paediatrics, and studies in a biomarker‐defined subgroup of a defined population, the available population size will limit the number of possible clinical trials that can be conducted. Combining dose‐finding and PK analyses to allow better estimation of the dose‐toxicity curve should then be considered. In this work, we propose, study, and compare methods to incorporate PK measures in the dose allocation process during a phase I clinical trial. These methods do this in different ways, including using PK observations as a covariate, as the dependent variable or in a hierarchical model. We conducted a large simulation study that showed that adding PK measurements as a covariate only does not improve the efficiency of dose‐finding trials either in terms of the number of observed dose limiting toxicities or the probability of correct dose selection. However, incorporating PK measures does allow better estimation of the dose‐toxicity curve while maintaining the performance in terms of MTD selection compared to dose‐finding designs that do not incorporate PK information. In conclusion, using PK information in the dose allocation process enriches the knowledge of the dose‐toxicity relationship, facilitating better dose recommendation for subsequent trials. PMID:28321893

  19. Use of the cumulative sum method (CUSUM) to assess the learning curves of ultrasound-guided continuous femoral nerve block.

    PubMed

    Kollmann-Camaiora, A; Brogly, N; Alsina, E; Gilsanz, F

    2017-10-01

    Although ultrasound is a basic competence for anaesthesia residents (AR), there are few data available on the learning process. This prospective observational study aims to assess the learning process of ultrasound-guided continuous femoral nerve block and to determine the number of procedures that a resident would need to perform in order to reach proficiency using the cumulative sum (CUSUM) method. We recruited 19 AR without previous experience. Learning curves were constructed using the CUSUM method for ultrasound-guided continuous femoral nerve block considering 2 success criteria: a decrease in pain score of >2 on a [0-10] scale after 15 minutes, and the time required to perform the block. We analysed data from 17 AR for a total of 237 ultrasound-guided continuous femoral nerve blocks. 8/17 AR became proficient for pain relief; however, all the AR who did more than 12 blocks (8/8) became proficient. As for performance time, 5/17 AR achieved the objective of 12 minutes; however, all the AR who did more than 20 blocks (4/4) achieved it. The number of procedures needed to achieve proficiency seems to be 12; however, it takes more procedures to reduce performance time. The CUSUM methodology could be useful in training programs to allow early interventions in case of repeated failures, and to develop a competence-based curriculum. Copyright © 2017 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.
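
    A minimal CUSUM sketch of the kind used for such learning curves: each failure moves the score up by (1 - p0) and each success moves it down by p0, so sustained competence drives the curve downward. The failure sequence and acceptable rate below are invented:

```python
def cusum(failures, acceptable_rate):
    """Classic CUSUM learning-curve score: for each consecutive procedure
    add (1 - p0) on failure and subtract p0 on success, so the curve
    drifts down while performance is better than the target rate p0."""
    s, curve = 0.0, []
    for failed in failures:
        s += (1.0 if failed else 0.0) - acceptable_rate
        curve.append(s)
    return curve

# One resident's first 15 blocks (True = failed to drop pain score by >2);
# this sequence is invented for illustration.
blocks = [True, True, False, True, False, False, True,
          False, False, False, False, True, False, False, False]
print([round(v, 2) for v in cusum(blocks, acceptable_rate=0.20)])
```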

  20. [Comparison among various software for LMS growth curve fitting methods].

    PubMed

    Han, Lin; Wu, Wenhong; Wei, Qiuxia

    2015-03-01

    To explore methods for fitting lambda-median-sigma (LMS; skewness-median-coefficient of variation) growth curves in different software packages, and to identify the best statistical tool for grass-roots child and adolescent health workers. Regular physical examination data of head circumference for normal infants aged 3, 6, 9 and 12 months in Baotou City were analyzed. The statistical packages SAS, R, STATA and SPSS were used to fit the LMS growth curve, and the results were evaluated on ease of use, learning effort, user interface, presentation of results, and software updating and maintenance. All packages produced the same fitting results, and each had its own advantages and disadvantages. Taking all the evaluation aspects into consideration, R excelled the others in LMS growth curve fitting, and thus has the advantage for grass-roots child and adolescent health workers.

  1. Accuracy of AFM force distance curves via direct solution of the Euler-Bernoulli equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eppell, Steven J., E-mail: steven.eppell@case.edu; Liu, Yehe; Zypman, Fredy R.

    2016-03-15

    In an effort to improve the accuracy of force-separation curves obtained from atomic force microscope data, we compare force-separation curves computed using two methods to solve the Euler-Bernoulli equation. A recently introduced method using a direct sequential forward solution, Causal Time-Domain Analysis, is compared against a previously introduced Tikhonov Regularization method. Using the direct solution as a benchmark, it is found that the regularization technique is unable to reproduce accurate curve shapes. Using L-curve analysis and adjusting the regularization parameter, λ, to match either the depth or the full width at half maximum of the force curves, the two techniques are contrasted. Matched depths result in full width at half maxima that are off by an average of 27% and matched full width at half maxima produce depths that are off by an average of 109%.
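
    For readers unfamiliar with the regularization side of the comparison, a generic Tikhonov solve with a λ sweep (tracing out the L-curve) looks like the sketch below; the kernel and data are toys, not the Euler-Bernoulli operator of the paper:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularized least squares: minimize ||Ax - b||^2 +
    lam^2 * ||x||^2, solved via the augmented normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Toy ill-conditioned deconvolution, standing in for recovering a force
# curve from deflection data (not the paper's actual kernel)
rng = np.random.default_rng(2)
x_true = np.exp(-0.5 * ((np.arange(50) - 25) / 4.0) ** 2)
A = np.array([[np.exp(-abs(i - j) / 3.0) for j in range(50)]
              for i in range(50)])
b = A @ x_true + rng.normal(0, 0.01, 50)

for lam in (1e-4, 1e-2, 1.0):   # sweeping lam traces out the L-curve
    x = tikhonov_solve(A, b, lam)
    print(lam, round(float(np.linalg.norm(A @ x - b)), 4),
          round(float(np.linalg.norm(x)), 2))   # residual vs solution norm
```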

  2. Comparison of Growing Rod Instrumentation Versus Serial Cast Treatment for Early-Onset Scoliosis.

    PubMed

    Johnston, Charles E; McClung, Anna M; Thompson, George H; Poe-Kochert, Connie; Sanders, James O

    2013-09-01

    A comparison of 2 methods of early-onset scoliosis treatment using radiographic measures and complication rates. To determine whether a delaying tactic (serial casting) has comparable efficacy to a surgical method (insertion of growing rod instrumentation [GRI]) in the initial phase of early-onset deformity management. Serial casts are used in experienced centers to delay operative management of curves of surgical magnitude (greater than 50°) in children up to age 6 years. A total of 27 casted patients from 3 institutions were matched with 27 patients from a multicenter database according to age (within 6 months of each other), curve magnitude (within 10° of each other), and diagnosis. Outcomes were compared according to major curve magnitude, spine length (T1-S1), duration and number of treatment encounters, and complications. There was no difference in age (5.5 years) or initial curve magnitude (65°) between groups, which reflects the accuracy of the matching process. Six pairs of patients had neuromuscular diagnoses, 11 had idiopathic deformities, and 10 had syndromic scoliosis. Growing rod instrumentation patients had smaller curves (45.9° vs. 64.9°; p = .002) at follow-up, but there was no difference in absolute spine length (GRI = 32.0 cm; cast = 30.6 cm; p = .26), even though GRI patients had been under treatment for a longer duration (4.5 vs. 2.4 years; p < .0001) and had undergone a mean of 5.5 lengthenings compared with 4.0 casts. Growing rod instrumentation patients had a 44% complication rate, compared with 1 cast complication. Of 27 casted patients, 15 eventually had operative treatment after a mean delay of 1.7 years after casting. Cast treatment is a valuable delaying tactic for younger children with early-onset scoliosis. Spine deformity is adequately controlled, spine length is not compromised, and surgical complications associated with early GRI treatment are avoided. Copyright © 2013 Scoliosis Research Society. Published by Elsevier Inc. All rights reserved.

  3. A new method to compare statistical tree growth curves: the PL-GMANOVA model and its application with dendrochronological data.

    PubMed

    Ricker, Martin; Peña Ramírez, Víctor M; von Rosen, Dietrich

    2014-01-01

    Growth curves are monotonically increasing functions that measure repeatedly the same subjects over time. The classical growth curve model in the statistical literature is the Generalized Multivariate Analysis of Variance (GMANOVA) model. In order to model the tree trunk radius (r) over time (t) of trees on different sites, GMANOVA is combined here with the adapted PL regression model Q = A·T + E, where for b ≠ 0: Q = Ei[−b·r] − Ei[−b·r1], and for b = 0: Q = ln[r/r1]; A is the initial relative growth to be estimated; T = t − t1; and E is an error term for each tree and time point. Furthermore, Ei[−b·r] = ∫(exp[−b·r]/r)dr and b = −1/TPR, with TPR being the turning-point radius of a sigmoid curve, and (t1, r1) an estimated calibrating time-radius point. Advantages of the approach are that growth rates can be compared among growth curves with different turning point radiuses and different starting points, hidden outliers are easily detectable, the method is statistically robust, and heteroscedasticity of the residuals among time points is allowed. The model was implemented with dendrochronological data of 235 Pinus montezumae trees on ten Mexican volcano sites to calculate comparison intervals for the estimated initial relative growth A. One site (at the Popocatépetl volcano) stood out, with A being 3.9 times the value of the site with the slowest-growing trees. Calculating variance components for the initial relative growth, 34% of the growth variation was found among sites, 31% among trees, and 35% over time. Without the Popocatépetl site, the numbers changed to 7%, 42%, and 51%. Further explanation of differences in growth would need to focus on factors that vary within sites and over time.
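
    The Q transform is directly computable with SciPy's exponential integral, since d/dr Ei[−b·r] = exp(−b·r)/r matches the definition quoted above. A small sketch (radii and TPR invented):

```python
import numpy as np
from scipy.special import expi

def q_transform(r, r1, b):
    """The paper's Q variable: Q = Ei[-b r] - Ei[-b r1] for b != 0, and
    Q = ln(r / r1) in the limit b = 0.  Here b = -1/TPR, with TPR the
    turning-point radius of the sigmoid growth curve."""
    if b == 0.0:
        return np.log(r / r1)
    return expi(-b * r) - expi(-b * r1)

# Trunk radii (cm) for one tree; r1 is the calibrating radius at t1.
r = np.array([5.0, 8.0, 12.0, 17.0, 23.0])
tpr = 15.0                      # illustrative turning-point radius
print(np.round(q_transform(r, r1=5.0, b=-1.0 / tpr), 3))
```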

  4. Satellite altimetry based rating curves throughout the entire Amazon basin

    NASA Astrophysics Data System (ADS)

    Paris, A.; Calmant, S.; Paiva, R. C.; Collischonn, W.; Silva, J. S.; Bonnet, M.; Seyler, F.

    2013-05-01

    The Amazon basin is the largest hydrological basin in the world. In recent years, the basin has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. Yet the amount of data available is poor, over both time and space scales, owing to factors such as the basin's size and difficulty of access. One of the major obstacles is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation over the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gauge data, run from 1998 to 2010. The stage dataset consists of ~800 altimetry series at ENVISAT and JASON-2 virtual stations, spanning 2002 to 2010. In the present work we present the benefits of using stochastic methods instead of deterministic ones to determine a dataset of rating curve parameters that is consistent throughout the entire Amazon basin. The rating curve parameters were computed using a parameter optimization technique based on a Markov Chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best parameters for the rating curve, but also their posterior probability distribution, allowing the determination of a credibility interval for the rating curve. The rating curve determination also includes the error in the discharge estimates from the MGB-IPH model. These MGB-IPH errors come either from errors in the discharge derived from the gauge readings or from errors in the satellite rainfall estimates. The present experiment shows that the stochastic approach is more efficient than the deterministic one. By using user-defined prior credible intervals for the parameters, the method provides a best rating curve estimate free of unlikely parameter values, and all sites achieved convergence before reaching the maximum number of model evaluations. Results were assessed through the Nash-Sutcliffe efficiency coefficient, applied both to discharges and to the logarithm of discharges. Most of the virtual stations had good or very good results, with Ens values ranging from 0.7 to 0.98. However, worse results were found at a few virtual stations, revealing the need to investigate segmenting the rating curve, depending on the stage or on the rising or recession limb, as well as possible errors in the altimetry series.
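
    A hedged sketch of the stochastic approach: a Metropolis sampler over the parameters of a standard power-law rating curve Q = a(h − h0)^b with flat priors on user-set credible intervals. The likelihood and data below are simplified stand-ins, not the authors' full error model:

```python
import numpy as np

def log_post(theta, h, q_obs, sigma):
    """Log-posterior for Q = a*(h - h0)**b with flat priors on user-set
    intervals (a generic sketch; the study's likelihood also carries
    the MGB-IPH discharge errors)."""
    a, b, h0 = theta
    if not (0 < a < 1e4 and 0.5 < b < 4 and -5 < h0 < np.min(h)):
        return -np.inf
    q_mod = a * (h - h0) ** b
    return -0.5 * np.sum(((q_obs - q_mod) / sigma) ** 2)

def metropolis(h, q_obs, sigma, n_iter=20000, step=(5.0, 0.05, 0.05)):
    rng = np.random.default_rng(3)
    theta = np.array([100.0, 1.7, 0.0])          # starting point
    samples, lp = [], log_post(theta, h, q_obs, sigma)
    for _ in range(n_iter):
        prop = theta + rng.normal(0, step)
        lp_prop = log_post(prop, h, q_obs, sigma)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples[n_iter // 2:])       # drop burn-in

# Synthetic altimetry stages (m) and model discharges (m^3/s)
h = np.linspace(1.0, 9.0, 40)
rng = np.random.default_rng(4)
q = 120.0 * (h - 0.3) ** 1.8 * (1 + rng.normal(0, 0.05, h.size))
post = metropolis(h, q, sigma=0.05 * q)
print(np.percentile(post, [2.5, 50, 97.5], axis=0).round(2))  # credibility intervals
```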

  5. A direct approach to estimating the number of potential fatalities from an eruption: Application to the Central Volcanic Complex of Tenerife Island

    NASA Astrophysics Data System (ADS)

    Marrero, J. M.; García, A.; Llinares, A.; Rodriguez-Losada, J. A.; Ortiz, R.

    2012-03-01

    One of the critical issues in managing volcanic crises is making the decision to evacuate a densely-populated region. In order to take a decision of such importance it is essential to estimate the cost in lives for each of the expected eruptive scenarios. One of the tools that assist in estimating the number of potential fatalities for such decision-making is the calculation of the FN-curves. In this case the FN-curve is a graphical representation that relates the frequency of the different hazards to be expected for a particular volcano or volcanic area, and the number of potential fatalities expected for each event if the zone of impact is not evacuated. In this study we propose a method for assessing the impact that a possible eruption from the Tenerife Central Volcanic Complex (CVC) would have on the population at risk. Factors taken into account include the spatial probability of the eruptive scenarios (susceptibility) and the temporal probability of the magnitudes of the eruptive scenarios. For each point or cell of the susceptibility map with greater probability, a series of probability-scaled hazard maps is constructed for the whole range of magnitudes expected. The number of potential fatalities is obtained from the intersection of the hazard maps with the spatial map of population distribution. The results show that the Emergency Plan for Tenerife must provide for the evacuation of more than 100,000 persons.
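
    Constructing an FN-curve from a scenario table is a one-pass computation: for each fatality level N, accumulate the annual frequency of all scenarios causing N or more fatalities. The scenario numbers below are invented, not the Tenerife results:

```python
import numpy as np

def fn_curve(frequencies, fatalities):
    """Build an FN-curve: for each fatality level N, the cumulative
    annual frequency F of all scenarios causing N or more fatalities."""
    order = np.argsort(fatalities)
    n = np.asarray(fatalities, float)[order]
    f = np.asarray(frequencies, float)[order]
    F = np.cumsum(f[::-1])[::-1]   # frequency of events with >= N fatalities
    return n, F

# Invented eruptive scenarios: (annual frequency, unevacuated fatalities)
freq = [1e-3, 5e-4, 1e-4, 2e-5]
fat = [100, 2000, 30000, 120000]
for n, F in zip(*fn_curve(freq, fat)):
    print(f"N >= {n:>8.0f}: F = {F:.1e} /yr")
```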

  6. The computational complexity of elliptic curve integer sub-decomposition (ISD) method

    NASA Astrophysics Data System (ADS)

    Ajeena, Ruma Kareem K.; Kamarulhaili, Hailiza

    2014-07-01

    The idea of the GLV method of Gallant, Lambert and Vanstone (Crypto 2001) is considered a foundation stone for building a new procedure to compute the elliptic curve scalar multiplication. This procedure, integer sub-decomposition (ISD), computes any multiple kP of an elliptic curve point P of large prime order n using two low-degree endomorphisms ψ1 and ψ2 of the elliptic curve E over a prime field Fp. The sub-decomposition of the values k1 and k2 that are not bounded by ±C√n gives new integers k11, k12, k21 and k22 which are bounded by ±C√n and can be computed by solving the closest vector problem in a lattice. The percentage of successful computations of the scalar multiplication increases with the ISD method, which improves the computational efficiency in comparison with the general method for computing scalar multiplication on elliptic curves over prime fields. This paper presents the mechanism of the ISD method and sheds light mainly on the computational complexity of the ISD approach, determined by computing the cost of its operations. These operations include elliptic curve operations and finite field operations.

  7. A quick on-line state of health estimation method for Li-ion battery with incremental capacity curves processed by Gaussian filter

    NASA Astrophysics Data System (ADS)

    Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri

    2018-01-01

    This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used because of their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled under different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed method. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be greatly reduced. The method shows great potential for practical application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
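
    A minimal sketch of the IC-curve pipeline: differentiate the static charging curve to get dQ/dV, then apply a 1-D Gaussian filter before locating the features of interest. The synthetic curve and filter width below are illustrative only:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def incremental_capacity(voltage, capacity, sigma=5):
    """Incremental capacity dQ/dV from a charging curve, with a Gaussian
    filter suppressing the differentiation noise so the features of
    interest (FOI peaks) can be located reliably."""
    dqdv = np.gradient(capacity, voltage)
    return gaussian_filter1d(dqdv, sigma=sigma)

# Synthetic NMC-like charge curve: capacity (Ah-ish) vs voltage, noisy
v = np.linspace(3.0, 4.2, 600)
q = (50 * (np.tanh((v - 3.7) / 0.08) + 1)
     + np.random.default_rng(5).normal(0, 0.3, v.size))
ic = incremental_capacity(v, q)
print(f"FOI position: {v[np.argmax(ic)]:.3f} V")   # shifts as the cell ages
```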

  8. Examination of B. subtilis var. niger Spore Killing by Dry Heat Methods

    NASA Technical Reports Server (NTRS)

    Kempf, Michael J.; Kirschner, Larry E.

    2004-01-01

    Dry heat microbial reduction is the only NASA-approved sterilization method for reducing the microbial bioburden on space-flight hardware prior to launch. Reduction of the microbial bioburden on spacecraft is necessary to meet planetary protection requirements specific to the mission. Microbial bioburden reduction also occurs if a spacecraft enters a planetary atmosphere (e.g., Mars) and is heated by frictional forces. Temperatures reached during atmospheric entry events (>200 C) are sufficient to damage or destroy flight hardware and also kill microbial spores that reside on the in-bound spacecraft. The goal of this research is to determine the survival rates of bacterial spores when they are subjected to conditions similar to those the spacecraft would encounter (i.e., temperature, pressure, etc.). B. subtilis var. niger spore coupons were exposed to a range of temperatures from 125 C to 200 C in a vacuum oven (at <1 Torr). After the exposures, the spores were removed by sonication, dilutions were made, and the spores were plated using the pour plate method with tryptic soy agar. After 3 days of incubation at 32 C, the number of colony-forming units was counted. Lethality rate constants and D-values were calculated at each temperature. The calculated D-values were: 27 minutes (at 125 C), 13 minutes (at 135 C), and <0.1 minutes (at 150 C). The 125 C and 135 C survivor curves appeared as concave-downward curves. The 150 C survivor curve appeared as a straight line. Due to the prolonged ramp-up time to the exposure conditions, spore killing during the ramp-up resulted in insufficient data to draw curves for exposures at 160 C, 175 C, and 200 C. Exploratory experiments using novel techniques, with short ramp times, for performing high-temperature exposures were also examined. Several of these techniques, such as vacuum furnaces, thermal spore exposure vessels, and laser heating of the coupons, will be discussed.
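
    D-values follow from the straight-line portion of the survivor curve as D = −1/slope of log10(N) versus time. The counts below are invented, but chosen to be consistent with the reported ~27 min at 125 C:

```python
import numpy as np

def d_value(times_min, survivors):
    """D-value (minutes for a 1-log reduction) from the straight-line
    portion of a survivor curve: D = -1 / slope of log10(N) vs t."""
    slope, _ = np.polyfit(times_min, np.log10(survivors), 1)
    return -1.0 / slope

# Invented 125 C survivor counts showing roughly first-order kill
t = np.array([0, 15, 30, 45, 60])          # exposure time, minutes
n = np.array([1e6, 2.8e5, 7.8e4, 2.2e4, 6.1e3])
print(f"D125 ~ {d_value(t, n):.0f} min")   # ~27 min, as in the abstract
```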

  9. The melting curve of Ni to 1 Mbar

    NASA Astrophysics Data System (ADS)

    Lord, Oliver T.; Wood, Ian G.; Dobson, David P.; Vočadlo, Lidunka; Wang, Weiwei; Thomson, Andrew R.; Wann, Elizabeth T. H.; Morard, Guillaume; Mezouar, Mohamed; Walter, Michael J.

    2014-12-01

    The melting curve of Ni has been determined to 125 GPa using laser-heated diamond anvil cell (LH-DAC) experiments in which two melting criteria were used: firstly, the appearance of liquid diffuse scattering (LDS) during in situ X-ray diffraction (XRD) and secondly, plateaux in temperature vs. laser power functions in both in situ and off-line experiments. Our new melting curve, defined by a Simon-Glatzel fit to the data of the form T_M (K) = 1726 × [P_M/(18.78 ± 10.20) + 1]^(1/(2.42 ± 0.66)), is in good agreement with the majority of the theoretical studies on Ni melting and matches closely the available shock wave melting data. It is however dramatically steeper than the previous off-line LH-DAC studies in which determination of melting was based on the visual observation of motion aided by the laser speckle method. We estimate the melting point (T_M) of Ni at the inner-core boundary (ICB) pressure of 330 GPa to be T_M = 5800 ± 700 K (2σ), within error of the value for Fe of T_M = 6230 ± 500 K determined in a recent in situ LH-DAC study by similar methods to those employed here. This similarity suggests that the alloying of 5-10 wt.% Ni with the Fe-rich core alloy is unlikely to have any significant effect on the temperature of the ICB, though this is dependent on the details of the topology of the Fe-Ni binary phase diagram at core pressures. Our melting temperature for Ni at 330 GPa is ∼2500 K higher than that found in previous experimental studies employing the laser speckle method. We find that those earlier melting curves coincide with the onset of rapid sub-solidus recrystallization, suggesting that visual observations of motion may have misinterpreted dynamic recrystallization as convective motion of a melt. This finding has significant implications for our understanding of the high-pressure melting behaviour of a number of other transition metals.
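
    The quoted Simon-Glatzel fit can be evaluated directly; using the central parameter values reproduces the ~5800 K estimate at the 330 GPa ICB pressure (uncertainty propagation omitted in this sketch):

```python
def simon_glatzel(p_gpa, a=18.78, c=2.42, t_ref=1726.0):
    """Simon-Glatzel melting curve T_M = t_ref * (P/a + 1)**(1/c), using
    the central fitted values quoted in the abstract."""
    return t_ref * (p_gpa / a + 1.0) ** (1.0 / c)

print(round(simon_glatzel(125.0)))   # edge of the measured range, ~4000 K
print(round(simon_glatzel(330.0)))   # ICB pressure: ~5800 K as quoted
```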

  10. Quantitative Ultrasound for Measuring Obstructive Severity in Children with Hydronephrosis.

    PubMed

    Cerrolaza, Juan J; Peters, Craig A; Martin, Aaron D; Myers, Emmarie; Safdar, Nabile; Linguraru, Marius George

    2016-04-01

    We define sonographic biomarkers for hydronephrotic renal units that can predict the necessity of diuretic nuclear renography. We selected a cohort of 50 consecutive patients with hydronephrosis of varying severity in whom 2-dimensional sonography and diuretic mercaptoacetyltriglycine renography had been performed. A total of 131 morphological parameters were computed using quantitative image analysis algorithms. Machine learning techniques were then applied to identify ultrasound based safety thresholds that agreed with the t½ for washout. A best fit model was then derived for each threshold level of t½ that would be clinically relevant at 20, 30 and 40 minutes. Receiver operating characteristic curve analysis was performed. Sensitivity, specificity and area under the receiver operating characteristic curve were determined. Improvement obtained by the quantitative imaging method compared to the Society for Fetal Urology grading system and the hydronephrosis index was statistically verified. For the 3 thresholds considered and at 100% sensitivity the specificities of the quantitative imaging method were 94%, 70% and 74%, respectively. Corresponding area under the receiver operating characteristic curve values were 0.98, 0.94 and 0.94, respectively. Improvement obtained by the quantitative imaging method over the Society for Fetal Urology grade and hydronephrosis index was statistically significant (p <0.05 in all cases). Quantitative imaging analysis of renal sonograms in children with hydronephrosis can identify thresholds of clinically significant washout times with 100% sensitivity to decrease the number of diuretic renograms in up to 62% of children. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  11. Multiple Optical Traps with a Single-Beam Optical Tweezer Utilizing Surface Micromachined Planar Curved Grating

    NASA Astrophysics Data System (ADS)

    Kuo, Ju-Nan; Chen, Kuan-Yu

    2010-11-01

    In this paper, we present a single-beam optical tweezer integrated with a planar curved diffraction grating for microbead manipulation. Various curvatures of the surface micromachined planar curved grating are systematically investigated. The planar curved grating was fabricated using multiuser micro-electro-mechanical-system (MEMS) processes (MUMPs). The angular separation and the number of diffracted orders were determined. Experimental results indicate that the diffraction patterns and curvature of the planar curved grating are closely related. As the curvature of the planar curved grating increases, the vertical diffraction angle increases, resulting in the strip patterns of the planar curved grating. A single-beam optical tweezer integrated with a planar curved diffraction grating was developed. We demonstrate a technique for creating multiple optical traps from a single laser beam using the developed planar curved grating. The strip patterns of the planar curved grating that resulted from diffraction were used to trap one row of polystyrene beads.

  12. Laparoscopic varicocelectomy: virtual reality training and learning curve.

    PubMed

    Wang, Zheng; Ni, Yuhua; Zhang, Yinan; Jin, Xunbo; Xia, Qinghua; Wang, Hanbo

    2014-01-01

    To explore the role that virtual reality training might play in the learning curve of laparoscopic varicocelectomy. A total of 1326 laparoscopic varicocelectomy cases performed by 16 participants from July 2005 to June 2012 were retrospectively analyzed. The participants were divided into 2 groups: group A was trained with laparoscopic trainer boxes; group B was trained with a virtual reality training course preoperatively. The operation time curves were drafted, and the learning, improving, and platform stages were identified and statistically confirmed. The operation time and number of cases in the learning and improving stages of both groups were compared. Testicular artery sparing failure and postoperative hydrocele rates were statistically analyzed to confirm the learning curve. The learning curve of laparoscopic varicocelectomy was 15 cases, and after 14 more cases it reached the platform stage. The number of cases in the learning stages of both groups showed no statistical difference (P=.49), but the operation time of group B in the learning stage was less than that of group A (P<.00001). The number of cases of group B in the improving stage was significantly less than that of group A (P=.005), but the operation time of both groups in the improving stage showed no difference (P=.30). The difference in testicular artery sparing failure rates among these 3 stages was significant (P<.0001), while the postoperative hydrocele rate showed no statistical difference (P=.60). Virtual reality training shortened the operation time in the learning stage and accelerated the trainees' progress through the improving stage, but did not shorten the learning curve as expected.

  13. Generalised Joint Hypermobility in Caucasian Girls with Idiopathic Scoliosis: Relation with Age, Curve Size, and Curve Pattern

    PubMed Central

    2014-01-01

    The aim of the study was to assess the prevalence of generalised joint hypermobility (GJH) in 155 girls with idiopathic scoliosis (IS) (age 9–18 years, mean 13.8 ± 2.3). The control group included 201 healthy girls. The presence of GJH was assessed with Beighton (B) test. GJH was diagnosed in 23.2% of IS girls and in 13.4% of controls (P = 0.02). The prevalence of GJH was significantly (P = 0.01) lower in IS girls aged 16–18 years in comparison with younger individuals. There was no difference regarding GJH occurrence between girls with mild (11–24°), moderate (25–40°), and severe scoliosis (>40°) (P = 0.78), between girls with single thoracic, single lumbar, and double curve scoliosis (P = 0.59), and between girls with thoracic scoliosis length ≤7 and >7 vertebrae (P = 0.25). No correlation between the number of points in B and the Cobb angle (P = 0.93), as well as between the number of points in B and the number of the vertebrae within thoracic scoliosis (P = 0.63), was noticed. GJH appeared more often in IS girls than in healthy controls. Its prevalence decreased with age. No relation between GJH prevalence and curve size, curve pattern, or scoliosis length was found. PMID:24550704

  14. Evaluation of SCS-CN method using a fully distributed physically based coupled surface-subsurface flow model

    NASA Astrophysics Data System (ADS)

    Shokri, Ali

    2017-04-01

    The hydrological cycle contains a wide range of linked surface and subsurface flow processes. In spite of natural connections between surface water and groundwater, historically, these processes have been studied separately. The current trend in hydrological distributed physically based model development is to combine distributed surface water models with distributed subsurface flow models. This combination results in a better estimation of the temporal and spatial variability of the interaction between surface and subsurface flow. On the other hand, simple lumped models such as the Soil Conservation Service Curve Number (SCS-CN) are still quite common because of their simplicity. In spite of the popularity of the SCS-CN method, there have always been concerns about its ambiguity in explaining the physical mechanism of rainfall-runoff processes. The aim of this study is to minimize this ambiguity by establishing a method to find an equivalent of the SCS-CN solution in the DrainFlow model, a fully distributed physically based coupled surface-subsurface flow model. In this paper, two hypothetical v-catchment tests are designed and the direct runoff from a storm event is calculated by both the SCS-CN and DrainFlow models. To find a comparable runoff prediction from SCS-CN and DrainFlow, the variance between the runoff predictions of the two models is minimized by changing the curve number (CN) and initial abstraction (Ia) values. The results of this study yield a set of lumped model parameters (CN and Ia) for each catchment that is comparable to a set of physically based parameters including hydraulic conductivity, Manning roughness coefficient, ground surface slope, and specific storage. Since the lack of physical interpretation of CN and Ia is often argued to be a weakness of the SCS-CN method, the method proposed in this paper gives a physical explanation to CN and Ia.
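
    For orientation, a minimal sketch of the SCS-CN runoff relation in metric units is given below; the DrainFlow comparison and the variance-minimizing parameter search are not reproduced here.

        def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
            """Direct runoff depth Q (mm) for storm rainfall p_mm under curve number cn."""
            s = 25400.0 / cn - 254.0          # potential maximum retention S (mm)
            ia = ia_ratio * s                 # initial abstraction Ia
            if p_mm <= ia:
                return 0.0
            return (p_mm - ia) ** 2 / (p_mm - ia + s)

        print(scs_cn_runoff(p_mm=60.0, cn=75.0))   # ~14.5 mm of direct runoff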

  15. Evaluation of Time-Varying Hydrology within the Training Range Environmental Evaluation and Characterization System (TREECS TM)

    DTIC Science & Technology

    2014-08-01

    ...daily) hydrology; UI = user interface of a model; USGS = U.S. Geological Survey; USLE = Universal Soil Loss Equation, used to compute soil erosion rate for...; SCS curve number runoff method, inches or m; It = daily infiltration rate for day t, m/day; K = soil erodibility factor in the USLE and MUSLE; L = length... and soil erosion (using the Universal Soil Loss Equation, or USLE) as a reference even when time-varying hydrology is selected for use. The UI also...

  16. Simulation and study of small numbers of random events

    NASA Technical Reports Server (NTRS)

    Shelton, R. D.

    1986-01-01

    Random events were simulated by computer and subjected to various statistical methods to extract important parameters. Various forms of curve fitting were explored, such as least squares, least distance from a line, and maximum likelihood. Problems considered were dead time, exponential decay, and spectrum extraction from cosmic ray data using binned data and data from individual events. Computer programs, mostly of an iterative nature, were developed to do these simulations and extractions and are partially listed as appendices. The mathematical basis for the computer programs is given.
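
    One of the extractions mentioned, maximum-likelihood estimation for exponential decay, reduces to a one-line estimator; the sketch below shows how its scatter grows as the number of events shrinks.

        import numpy as np

        rng = np.random.default_rng(11)
        for n_events in (10, 100, 10000):
            t = rng.exponential(scale=2.0, size=n_events)   # true decay rate lambda = 0.5
            lam_hat = 1.0 / t.mean()                        # MLE for the rate
            print(f"n={n_events:6d}  lambda_hat={lam_hat:.3f}")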

  17. Discrete Film Cooling in a Rocket with Curved Walls

    DTIC Science & Technology

    2009-12-01

    insight to be gained by observing the process of effusion cooling in its most basic elements. In rocket applications, the first desired condition is... ηspan. Convergence was determined by doubling the number of cells, mostly in the region near the hole, until less than a 1% change was observed in the... method was required to determine the absolute start time for the transient process. To find the time error, start again with (TS − Ti)/(Taw − Ti) = 1 − exp(...)

  18. Emergence of the bcc Phase and Phase Transition in Be through Phonon Quasiparticle Calculations

    NASA Astrophysics Data System (ADS)

    Zhang, D. B., Sr.; Wentzcovitch, R. M.

    2016-12-01

    Beryllium (Be) is an important material with applications in a number of areas ranging from aerospace components to X-ray equipment. Yet a precise understanding of the phase diagram of Be remains elusive. We have investigated the phase stability of Be using a recently developed hybrid free energy computation method that accounts for anharmonic effects by invoking phonon quasiparticle properties. We find that the hcp to bcc transition occurs near the melting curve at 0

  19. Craniofacial Reconstruction Using Rational Cubic Ball Curves

    PubMed Central

    Majeed, Abdul; Mt Piah, Abd Rahni; Gobithaasan, R. U.; Yahya, Zainor Ridzuan

    2015-01-01

    This paper proposes the reconstruction of craniofacial fractures using rational cubic Ball curves. Ball curves are chosen for their computational efficiency relative to Bezier curves. The main steps are conversion of Digital Imaging and Communications in Medicine (Dicom) images to binary images, boundary extraction and corner point detection, Ball curve fitting with a genetic algorithm, and conversion of the final solution back to Dicom format. The last section illustrates a real case of craniofacial reconstruction using the proposed method, which clearly indicates its applicability. A Graphical User Interface (GUI) has also been developed for practical application. PMID:25880632

  1. Optimizing the parameters of heat transmission in a small heat exchanger with spiral tapes cut as triangles and Aluminum oxide nanofluid using central composite design method

    NASA Astrophysics Data System (ADS)

    Ghasemi, Nahid; Aghayari, Reza; Maddah, Heydar

    2018-02-01

    The present study aims at optimizing the heat transmission parameters such as Nusselt number and friction factor in a small double pipe heat exchanger equipped with rotating spiral tapes cut as triangles and filled with aluminum oxide nanofluid. The effects of Reynolds number, twist ratio (y/w), rotating twisted tape and concentration (w%) on the Nusselt number and friction factor are also investigated. The central composite design and the response surface methodology are used for evaluating the responses necessary for optimization. According to the optimal curves, the most optimized value obtained for Nusselt number and friction factor was 146.6675 and 0.06020, respectively. Finally, an appropriate correlation is also provided to achieve the optimal model of the minimum cost. Optimization results showed that the cost has decreased in the best case.

  2. Integrated analysis on static/dynamic aeroelasticity of curved panels based on a modified local piston theory

    NASA Astrophysics Data System (ADS)

    Yang, Zhichun; Zhou, Jian; Gu, Yingsong

    2014-10-01

    A flow field modified local piston theory, applied to the integrated analysis of static/dynamic aeroelastic behavior of curved panels, is proposed in this paper. The local flow field parameters used in the modification are obtained by CFD techniques, which have the advantage of simulating the steady flow field accurately. This flow field modified local piston theory for aerodynamic loading is applied to the analysis of static aeroelastic deformation and flutter stability of curved panels in hypersonic flow. In addition, comparisons are made between results obtained using the present method and the curvature modified method. When the curvature of the panel is relatively small, the static aeroelastic deformations and flutter stability boundaries obtained by the two methods differ little, while for curved panels with larger curvatures, the static aeroelastic deformation obtained by the present method is larger and the flutter stability boundary is smaller than those obtained by the curvature modified method, and the discrepancy increases with increasing panel curvature. Therefore, the existing curvature modified method is non-conservative compared to the proposed flow field modified method from the standpoint of hypersonic flight vehicle safety, and the proposed flow field modified local piston theory for curved panels enlarges the application range of piston theory.

  3. Synthesis, characteristics and thermoluminescent dosimetry features of γ-irradiated Ce doped CaF2 nanophosphor.

    PubMed

    Zahedifar, M; Sadeghi, E; Mozdianfard, M R; Habibi, E

    2013-08-01

    Nanoparticles of cerium doped calcium fluoride (CaF2:Ce) were synthesized for the first time using the hydrothermal method. The formation of nanostructures was confirmed by X-ray diffraction (XRD) patterns, indicating a cubic lattice structure for the particles produced. Their shape and size were observed by scanning electron microscopy (SEM). Thermoluminescence characteristics were studied by irradiating the samples with gamma rays from a 60Co source. The optimum thermal treatment of 400 °C for 30 min was found for the produced nanoparticles. The Tm-Tstop and computerized glow curve deconvolution (CGCD) methods, used to determine the number of component glow peaks and kinetic parameters, indicated seven overlapping glow peaks on the TL glow curve at approximately 394, 411, 425, 445, 556, 594 and 632 K. A linear dose response up to 2000 Gy was observed for the prepared nanoparticles. Maximum TL sensitivity was found at 0.4 mol% of Ce impurity. Other TL dosimetry features, including reusability and fading, are also presented and discussed.
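
    A CGCD fit of the kind described sums analytic glow peaks. The sketch below implements a generic first-order (Randall-Wilkins) peak with placeholder parameters, not the paper's fitted values.

        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        K_B = 8.617e-5          # Boltzmann constant, eV/K

        def glow_peak(T, n0, E, s, beta=1.0):
            """First-order TL intensity on temperature grid T (K), heating rate beta (K/s)."""
            boltz = np.exp(-E / (K_B * T))
            integral = cumulative_trapezoid(boltz, T, initial=0.0)
            return n0 * s * boltz * np.exp(-(s / beta) * integral)

        T = np.linspace(300.0, 700.0, 2000)
        # A CGCD model curve is just a sum of such peaks:
        curve = glow_peak(T, 1e5, 1.0, 1e10) + glow_peak(T, 5e4, 1.2, 1e10)
        print(f"peak of summed glow curve at {T[np.argmax(curve)]:.0f} K")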

  4. Basic Research of Intrinsic Tamper Indication Markings Defined by Pulsed Laser Irradiation (Quad Chart).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, Neville R.

    Objective: We will research how short (ns) and ultrashort (fs) laser pulses interact with the surfaces of various materials to create complex color layers and morphological patterns. Method: We are investigating the site-specific formation of microcolor features. The research also includes a fundamental study of the physics underlying periodic ripple formation during femtosecond laser irradiation. Status of effort: Laser induced color markings were demonstrated on an increased number of materials (including metal thin films) and investigated for optical properties and microstructure. Technology that allows for marking curved surfaces (and large areas) has been implemented. We have used electro-magnetic solvers to model light-solid interactions leading to periodic surface ripple patterns. This includes identifying the roles of surface plasmon polaritons. Goals/Milestones: Research corrosion resistance of oxide color markings (salt spray, fog, polarization tests); through modeling, investigate effects of multi-source scattering and interference on ripple patterns; investigate microspectrophotometry for mapping color; and investigate new methods for laser color marking curved surfaces and large areas.

  5. The structure of binding curves and practical identifiability of equilibrium ligand-binding parameters

    PubMed Central

    Middendorf, Thomas R.

    2017-01-01

    A critical but often overlooked question in the study of ligands binding to proteins is whether the parameters obtained from analyzing binding data are practically identifiable (PI), i.e., whether the estimates obtained from fitting models to noisy data are accurate and unique. Here we report a general approach to assess and understand binding parameter identifiability, which provides a toolkit to assist experimentalists in the design of binding studies and in the analysis of binding data. The partial fraction (PF) expansion technique is used to decompose binding curves for proteins with n ligand-binding sites exactly and uniquely into n components, each of which has the form of a one-site binding curve. The association constants of the PF component curves, being the roots of an n-th order polynomial, may be real or complex. We demonstrate a fundamental connection between binding parameter identifiability and the nature of these one-site association constants: all binding parameters are identifiable if the constants are all real and distinct; otherwise, at least some of the parameters are not identifiable. The theory is used to construct identifiability maps from which the practical identifiability of binding parameters for any two-, three-, or four-site binding curve can be assessed. Instructions for extending the method to generate identifiability maps for proteins with more than four binding sites are also given. Further analysis of the identifiability maps leads to the simple rule that the maximum number of structurally identifiable binding parameters (shown in the previous paper to be equal to n) will also be PI only if the binding curve line shape contains n resolved components. PMID:27993951
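
    For the two-site case, the partial fraction decomposition described above reduces to a root-finding check: writing the binding polynomial denominator as (1 + a1 x)(1 + a2 x), the one-site constants a1, a2 are the roots of t^2 - b1 t + b2 = 0, where b1, b2 are the overall (Adair) association constants. A minimal sketch with illustrative constants:

        import numpy as np

        def pf_constants(b1, b2):
            """One-site PF association constants for a two-site binding curve."""
            # (1 + a1*x)(1 + a2*x) = 1 + (a1 + a2)*x + a1*a2*x^2, so by Vieta's
            # formulas a1, a2 solve t^2 - b1*t + b2 = 0.
            return np.roots([1.0, -b1, b2])

        for b1, b2 in [(5.0, 4.0), (2.0, 3.0)]:
            a = pf_constants(b1, b2)
            ok = np.all(np.isreal(a)) and not np.isclose(a[0], a[1])
            print(b1, b2, a, "identifiable" if ok else "not identifiable")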

  6. Determination of the human spine curve based on laser triangulation.

    PubMed

    Poredoš, Primož; Čelan, Dušan; Možina, Janez; Jezeršek, Matija

    2015-02-05

    The main objective of the present method was to automatically obtain a spatial curve of the thoracic and lumbar spine based on a 3D shape measurement of a human torso with developed scoliosis. Manual determination of the spine curve, based on palpation of the thoracic and lumbar spinous processes, was found to be an appropriate way to validate the method; a new, noninvasive, optical 3D method for human torso evaluation in medical practice is thus introduced. Twenty-four patients with a confirmed clinical diagnosis of scoliosis were scanned using a specially developed 3D laser profilometer. The measuring principle of the system is based on laser triangulation with one-laser-plane illumination. The measurement took approximately 10 seconds over 700 mm of longitudinal translation along the back. The single point measurement accuracy was 0.1 mm. Computer analysis of the measured surface returned two 3D curves. The first curve was determined by manual marking (manual curve), and the second by detecting surface curvature extremes (automatic curve). The manual and automatic curve comparison is given as the root mean square deviation (RMSD) for each patient. The intra-operator study involved assessing 20 successive measurements of the same person, and the inter-operator study involved assessing measurements from 8 operators. The results obtained for the 24 patients showed that the typical RMSD between the manual and automatic curves was 5.0 mm in the frontal plane and 1.0 mm in the sagittal plane, which is a good result compared with palpatory accuracy (9.8 mm). The intra-operator repeatability of the presented method in the frontal and sagittal planes was 0.45 mm and 0.06 mm, respectively. The inter-operator assessment shows that the method is invariant to the operator of the computer program. The main novelty of the paper is a new, non-contact method that provides a quick, precise and non-invasive way to determine the spatial spine curve of patients with developed scoliosis, validated against palpation of the spinous processes, with no harmful ionizing radiation involved.

  7. Clarifications regarding the use of model-fitting methods of kinetic analysis for determining the activation energy from a single non-isothermal curve.

    PubMed

    Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M

    2013-02-05

    This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Šatava are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy. Therefore it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. It is thus shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions, and the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.

  8. Fitting the curve in Excel®: Systematic curve fitting of laboratory and remotely sensed planetary spectra

    NASA Astrophysics Data System (ADS)

    McCraig, Michael A.; Osinski, Gordon R.; Cloutis, Edward A.; Flemming, Roberta L.; Izawa, Matthew R. M.; Reddy, Vishnu; Fieber-Beyer, Sherry K.; Pompilio, Loredana; van der Meer, Freek; Berger, Jeffrey A.; Bramble, Michael S.; Applin, Daniel M.

    2017-03-01

    Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to working with spectra may find inadequate help or documentation in the scientific literature or in the software packages available for curve fitting. This problem also extends to the parameterization of spectra and the dissemination of derived metrics. Often, when derived metrics are reported, such as band centres, the discussion of exactly how the metrics were derived, or whether any systematic curve fitting was performed, is not included. Herein we provide both recommendations and methods for curve fitting and explanations of the terms and methods used. Techniques to curve fit spectral data of various types are demonstrated using simple-to-understand mathematics and equations written to be used in Microsoft Excel® software, free of macros, in a cut-and-paste fashion that allows one to curve fit spectra in a reasonably user-friendly manner. The procedures use empirical curve fitting, include visualizations, and ameliorate many of the unknowns one may encounter when using black-box commercial software. The provided framework is a comprehensive record of the curve-fitting parameters used and the derived metrics, and is intended as an example of a format for disseminating curve-fit data.

  9. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given or range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model as there is no readily available summary measure for evaluating the predictive performance. The key deterrent for using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need of additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches, the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
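
    A hedged sketch of the weighted summary measure follows. The paper estimates the threshold distribution from the data, whereas a Beta density is assumed here purely for illustration.

        import numpy as np
        from scipy.stats import beta as beta_dist

        def net_benefit(risk, y, pt):
            """Net benefit of treating patients whose predicted risk exceeds pt."""
            n = len(y)
            pos = risk >= pt
            tp = np.sum(pos & (y == 1))
            fp = np.sum(pos & (y == 0))
            return tp / n - fp / n * pt / (1.0 - pt)

        rng = np.random.default_rng(1)
        y = rng.integers(0, 2, 500)
        risk = np.clip(0.5 * y + rng.normal(0.25, 0.2, 500), 0.01, 0.99)

        pts = np.linspace(0.05, 0.5, 50)                # threshold range of interest
        nb = np.array([net_benefit(risk, y, pt) for pt in pts])
        w = beta_dist.pdf(pts, 2, 5)                    # assumed threshold density
        weighted_area = np.trapz(nb * w, pts) / np.trapz(w, pts)
        print(f"weighted area under net benefit curve: {weighted_area:.3f}")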

  10. Selected Aspects of Cryogenic Tank Fatigue Calculations for Offshore Application

    NASA Astrophysics Data System (ADS)

    Skrzypacz, J.; Jaszak, P.

    2018-02-01

    The paper presents the fatigue life calculation of a cryogenic tank intended for carrier ship applications. An independent tank of type C was taken into consideration. The calculation took into account a vast range of the load spectrum resulting from the ship's accelerations. The stress at the most critical point of the tank was determined by means of the finite element method. The computation methods and codes used in the design of the LNG tank are presented. The number of fatigue cycles was determined by means of an S-N curve, and the cumulative linear damage theory was used to determine the life factor.
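
    The S-N plus cumulative-damage step can be sketched as below with a Basquin-type curve N(S) = (S/A)^(1/b); the constants are placeholders, not the tank's design curve.

        def cycles_to_failure(s_mpa, a=900.0, b=-0.1):
            """Basquin-type S-N curve: cycles to failure at stress amplitude s_mpa."""
            return (s_mpa / a) ** (1.0 / b)

        def miner_damage(load_spectrum):
            """Palmgren-Miner sum over (stress amplitude MPa, applied cycles) pairs."""
            return sum(n / cycles_to_failure(s) for s, n in load_spectrum)

        spectrum = [(200.0, 1e5), (150.0, 1e6), (100.0, 1e7)]
        d = miner_damage(spectrum)
        print(f"cumulative damage D = {d:.3f} ({'acceptable' if d < 1 else 'failure'})")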

  11. Group solution for unsteady free-convection flow from a vertical moving plate subjected to constant heat flux

    NASA Astrophysics Data System (ADS)

    Kassem, M.

    2006-03-01

    The problem of heat and mass transfer in an unsteady free-convection flow over a continuously moving vertical sheet in an ambient fluid is investigated for constant heat flux using the group theoretical method. The nonlinear coupled partial differential equations governing the flow and the boundary conditions are transformed to a system of ordinary differential equations with appropriate boundary conditions. The resulting ordinary differential equations are solved numerically using the shooting method. The effect of the Prandtl number on the velocity and temperature of the boundary layer is plotted as curves. A comparison with previous work is presented.

  12. Evaluation of quantification methods for real-time PCR minor groove binding hybridization probe assays.

    PubMed

    Durtschi, Jacob D; Stevenson, Jeffery; Hymas, Weston; Voelkerding, Karl V

    2007-02-01

    Real-time PCR data analysis for quantification has been the subject of many studies aimed at the identification of new and improved quantification methods. Several analysis methods have been proposed as superior alternatives to the common variations of the threshold crossing method. Notably, sigmoidal and exponential curve fit methods have been proposed. However, these studies have primarily analyzed real-time PCR with intercalating dyes such as SYBR Green. Clinical real-time PCR assays, in contrast, often employ fluorescent probes whose real-time amplification fluorescence curves differ from those of intercalating dyes. In the current study, we compared four analysis methods related to recent literature: two versions of the threshold crossing method, a second derivative maximum method, and a sigmoidal curve fit method. These methods were applied to a clinically relevant real-time human herpes virus type 6 (HHV6) PCR assay that used a minor groove binding (MGB) Eclipse hybridization probe as well as an Epstein-Barr virus (EBV) PCR assay that used an MGB Pleiades hybridization probe. We found that the crossing threshold method yielded more precise results when analyzing the HHV6 assay, which was characterized by lower signal/noise and less developed amplification curve plateaus. In contrast, the EBV assay, characterized by greater signal/noise and amplification curves with plateau regions similar to those observed with intercalating dyes, gave results with statistically similar precision by all four analysis methods.
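
    The contrast between the two families of methods can be sketched on one synthetic amplification curve: a fixed-threshold crossing versus a sigmoidal fit whose inflection cycle serves as the quantification point.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(c, f0, a, c0, k):
            """Baseline f0 plus a logistic of amplitude a, inflection c0, slope scale k."""
            return f0 + a / (1.0 + np.exp(-(c - c0) / k))

        cycles = np.arange(1, 41, dtype=float)
        rng = np.random.default_rng(2)
        fluor = sigmoid(cycles, 0.05, 1.0, 24.0, 1.8) + rng.normal(0.0, 0.01, cycles.size)

        # Method 1: threshold crossing, with linear interpolation between cycles.
        threshold = 0.2
        i = int(np.argmax(fluor >= threshold))        # first cycle above threshold
        ct = cycles[i - 1] + (threshold - fluor[i - 1]) / (fluor[i] - fluor[i - 1])

        # Method 2: sigmoidal curve fit.
        popt, _ = curve_fit(sigmoid, cycles, fluor, p0=[0.0, 1.0, 20.0, 2.0])
        print(f"threshold Ct = {ct:.2f}, sigmoid inflection c0 = {popt[2]:.2f}")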

  13. Simplified curve fits for the thermodynamic properties of equilibrium air

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.

    1987-01-01

    New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25 000 K and densities from 10^-7 to 10^3 amagats.

  14. Heuristic approach to capillary pressures averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coca, B.P.

    1980-10-01

    Several methods are available to average capillary pressure curves. Among these are the J-curve and regression equations of the wetting-fluid saturation on porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method seems theoretically sound because its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.
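
    The J-curve idea is compact enough to sketch: each capillary pressure curve is scaled by sqrt(k/phi) so that samples collapse toward one dimensionless function. The interfacial tension value below is assumed for illustration.

        import numpy as np

        def leverett_j(pc_pa, k_m2, phi, sigma=0.03, cos_theta=1.0):
            """Dimensionless Leverett J-function; sigma is interfacial tension in N/m."""
            return pc_pa * np.sqrt(k_m2 / phi) / (sigma * cos_theta)

        pc = np.array([5e3, 1e4, 2e4])        # capillary pressures, Pa
        print(leverett_j(pc, k_m2=1e-13, phi=0.2))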

  15. Ensuring the consistency of Flow Duration Curve reconstructions: the 'quantile solidarity' approach

    NASA Astrophysics Data System (ADS)

    Poncelet, Carine; Andreassian, Vazken; Oudin, Ludovic

    2015-04-01

    Flow Duration Curves (FDCs) are a hydrologic tool describing the distribution of streamflows at a catchment outlet. FDCs are commonly used for calibrating hydrological models, managing water quality and classifying catchments, among other applications. For gauged catchments, empirical FDCs can be computed from streamflow records. For ungauged catchments, on the other hand, FDCs cannot be obtained from streamflow records and must therefore be obtained in another manner, for example through reconstructions. Regression-based reconstructions estimate each quantile separately from catchment attributes (climatic or physical features). This category of methods has the advantage of being informative about the processes and non-parametric. However, the large number of parameters required can cause unwanted artifacts, typically reconstructions that do not always produce increasing quantiles. In this paper we propose a new approach named Quantile Solidarity (QS), which is applied under strict proxy-basin test conditions (Klemeš, 1986) to a set of 600 French catchments. Half of the catchments are treated as gauged and used to calibrate the regression and compute its residuals. The QS approach consists of a three-step regionalization scheme, which first links quantile values to physical descriptors, then reduces the number of regression parameters and finally exploits the spatial correlation of the residuals. The innovation is the use of the parameters' continuity across quantiles to dramatically reduce the number of parameters. The second half of the catchments is used as an independent validation set, over which we show that the QS approach ensures strictly increasing FDC reconstructions in ungauged conditions. Reference: V. Klemeš (1986) Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31:1, 13-24.
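
    For reference, an empirical FDC and the monotonicity property that QS guarantees by construction can be sketched as follows.

        import numpy as np

        def flow_duration_curve(q):
            """Return (exceedance probability, sorted flows) from a streamflow record."""
            q_sorted = np.sort(q)[::-1]
            p_exceed = np.arange(1, len(q) + 1) / (len(q) + 1.0)   # Weibull plotting position
            return p_exceed, q_sorted

        q = np.random.default_rng(3).lognormal(1.0, 0.8, 3650)     # ~10 years of daily flow
        p, fdc = flow_duration_curve(q)
        assert np.all(np.diff(fdc) <= 0), "FDC quantiles must be non-increasing"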

  16. Interactive Terascale Particle Visualization

    NASA Technical Reports Server (NTRS)

    Ellsworth, David; Green, Bryan; Moran, Patrick

    2004-01-01

    This paper describes the methods used to produce an interactive visualization of a 2 TB computational fluid dynamics (CFD) data set using particle tracing (streaklines). We use the method introduced by Bruckschen et al. [2001] that pre-computes a large number of particles, stores them on disk using a space-filling curve ordering that minimizes seeks, and then retrieves and displays the particles according to the user's command. We describe how the particle computation can be performed using a PC cluster, how the algorithm can be adapted to work with a multi-block curvilinear mesh, and how the out-of-core visualization can be scaled to 296 billion particles while still achieving interactive performance on PC hardware. Compared to the earlier work, our data set size and total number of particles are an order of magnitude larger. We also describe a new compression technique that allows the lossless compression of the particles by 41% and speeds particle retrieval by about 30%.
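
    A common choice of space-filling curve for this kind of seek-minimizing disk layout is the Morton (Z-order) curve, sketched below by bit interleaving; the original work's exact curve and layout may differ.

        def part1by2(x: int) -> int:
            """Spread the low 10 bits of x so there are two zero bits between each."""
            x &= 0x3FF
            x = (x | x << 16) & 0xFF0000FF
            x = (x | x << 8) & 0x0300F00F
            x = (x | x << 4) & 0x030C30C3
            x = (x | x << 2) & 0x09249249
            return x

        def morton3(i: int, j: int, k: int) -> int:
            """Interleave three 10-bit grid indices into one 30-bit Morton code."""
            return part1by2(i) | part1by2(j) << 1 | part1by2(k) << 2

        cells = [(2, 3, 1), (0, 0, 0), (5, 1, 4)]
        print(sorted(cells, key=lambda c: morton3(*c)))   # disk order minimizing seeks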

  17. A lithology identification method for continental shale oil reservoir based on BP neural network

    NASA Astrophysics Data System (ADS)

    Han, Luo; Fuqiang, Lai; Zheng, Dong; Weixu, Xia

    2018-06-01

    The Dongying Depression and Jiyang Depression of the Bohai Bay Basin consist of continental sedimentary facies deposited in a variable sedimentary environment, and the shale layer system has a variety of lithologies and strong heterogeneity, so it is difficult to identify the lithologies accurately with traditional lithology identification methods. A back propagation (BP) neural network was used to predict the lithology of continental shale oil reservoirs. Based on rock slice identification, X-ray diffraction bulk rock mineral analysis, scanning electron microscope analysis, and well logging data, the lithology was divided with carbonate, clay and felsic as end-member minerals. According to the core-electrical relationship, frequency histograms were then used to calculate the logging response range of each lithology. Lithology-sensitive curves selected from 23 logging curves (GR, AC, CNL, DEN, etc.) were chosen as the input variables. Finally, a BP neural network training model was established to predict the lithology. The lithology in the study area can be divided into four types: mudstone, lime mudstone, lime oil-mudstone, and lime argillaceous oil-shale. The logging responses of the lithologies were complicated, characterized by low values of four indicators and medium values of two indicators. By comparing different numbers of hidden nodes and training iterations, we found that 15 hidden nodes and 1000 training iterations yielded the best training results, and the optimal neural network training model was established on this basis. The lithology prediction results of the BP neural network for well XX-1 showed an accuracy rate of over 80%, indicating that the method is suitable for lithology identification in continental shale stratigraphy. The study provides a basis for reservoir quality and oil-bearing evaluation of continental shale reservoirs and is of great significance for shale oil and gas exploration.
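
    A hedged sketch of the workflow (logging curves in, lithology class out) follows, using scikit-learn's MLP as a stand-in for the BP network and synthetic data in place of the well logs; the layer size and iteration count mirror the abstract's settings.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(4)
        X = rng.normal(size=(400, 4))                 # columns: GR, AC, CNL, DEN
        y = rng.integers(0, 4, 400)                   # 4 lithology classes (see above)

        clf = make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=(15,), max_iter=1000, random_state=0),
        )
        clf.fit(X, y)
        print(f"training accuracy: {clf.score(X, y):.2f}")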

  18. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    PubMed

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
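
    The core ranking device can be sketched with a simple band depth: a curve is deep if many pairs of ensemble members enclose it pointwise, and the deepest curve plays the role of the boxplot median.

        import numpy as np

        def band_depth(curves):
            """curves: (n_curves, n_points) array; returns one depth value per curve."""
            n = curves.shape[0]
            depth = np.zeros(n)
            for i in range(n):
                count = 0
                for j in range(n):
                    for k in range(j + 1, n):
                        lo = np.minimum(curves[j], curves[k])
                        hi = np.maximum(curves[j], curves[k])
                        if np.all((lo <= curves[i]) & (curves[i] <= hi)):
                            count += 1
                depth[i] = count / (n * (n - 1) / 2)
            return depth

        ens = np.cumsum(np.random.default_rng(5).normal(size=(20, 50)), axis=1)
        print("median curve index:", int(np.argmax(band_depth(ens))))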

  19. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
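
    A minimal sketch of the fit-and-select step follows: candidate densities are fitted by maximum likelihood and compared by AIC. The candidate set and data here are illustrative, not the Klamath River measurements.

        import numpy as np
        from scipy import stats

        depths = np.random.default_rng(6).gamma(shape=4.0, scale=0.3, size=200)  # m

        candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "norm": stats.norm}
        for name, dist in candidates.items():
            params = dist.fit(depths)                       # maximum-likelihood fit
            loglik = np.sum(dist.logpdf(depths, *params))
            aic = 2 * len(params) - 2 * loglik              # lower is better
            print(f"{name:8s} AIC = {aic:.1f}")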

  20. Surface family with a common involute asymptotic curve

    NASA Astrophysics Data System (ADS)

    Bayram, Ergin; Bilici, Mustafa

    2016-03-01

    We construct a surface family possessing the involute of a given curve as an asymptotic curve. We express necessary and sufficient conditions on the given curve for its involute to be an asymptotic curve on the constructed surfaces. We also present corresponding results for ruled surfaces. Finally, we illustrate the method with some examples, e.g. circles and helices as given curves.

  1. What can we Learn from the Rising Light Curves of Radioactively Powered Supernovae?

    NASA Astrophysics Data System (ADS)

    Piro, Anthony L.; Nakar, Ehud

    2013-05-01

    The light curve of the explosion of a star with a radius ≲ 10-100 R⊙ is powered mostly by radioactive decay. Observationally, such events are dominated by hydrogen-deficient progenitors and classified as Type I supernovae (SNe I), i.e., white dwarf thermonuclear explosions (Type Ia), and core collapses of hydrogen-stripped massive stars (Type Ib/c). Current transient surveys are finding SNe I in increasing numbers and at earlier times, allowing their early emission to be studied in unprecedented detail. Motivated by these developments, we summarize the physics that produces their rising light curves and discuss ways in which observations can be utilized to study these exploding stars. The early radioactive-powered light curves probe the shallowest deposits of 56Ni. If the amount of 56Ni mixing in the outermost layers of the star can be deduced, then it places important constraints on the progenitor and properties of the explosive burning. In practice, we find that it is difficult to determine the level of mixing because it is hard to disentangle whether the explosion occurred recently and one is seeing radioactive heating near the surface or whether the explosion began in the past and the radioactive heating is deeper in the ejecta. In the latter case, there is a "dark phase" between the moment of explosion and the first observed light emitted once the shallowest layers of 56Ni are exposed. Because of this, simply extrapolating a light curve from radioactive heating back in time is not a reliable method for estimating the explosion time. The best solution is to directly identify the moment of explosion, either through observing shock breakout (in X-ray/UV) or the cooling of the shock-heated surface (in UV/optical), so that the depth being probed by the rising light curve is known. However, since this is typically not available, we identify and discuss a number of other diagnostics that are helpful for deciphering how recently an explosion occurred. As an example, we apply these arguments to the recent SN Ic PTF 10vgv. We demonstrate that just a single measurement of the photospheric velocity and temperature during the rise places interesting constraints on its explosion time, radius, and level of 56Ni mixing.

  2. What Mathematical Competencies Are Needed for Success in College.

    ERIC Educational Resources Information Center

    Garofalo, Joe

    1990-01-01

    Identifies requisite math skills for a microeconomics course, offering samples of supply curves, demand curves, equilibrium prices, elasticity, and complex graph problems. Recommends developmental mathematics competencies, including problem solving, reasoning, connections, communication, number and operation sense, algebra, relationships,…

  3. Choice of boundary condition for lattice-Boltzmann simulation of moderate-Reynolds-number flow in complex domains

    NASA Astrophysics Data System (ADS)

    Nash, Rupert W.; Carver, Hywel B.; Bernabeu, Miguel O.; Hetherington, James; Groen, Derek; Krüger, Timm; Coveney, Peter V.

    2014-02-01

    Modeling blood flow in larger vessels using lattice-Boltzmann methods comes with a challenging set of constraints: a complex geometry with walls and inlets and outlets at arbitrary orientations with respect to the lattice, intermediate Reynolds (Re) number, and unsteady flow. Simple bounce-back is one of the most commonly used, simplest, and most computationally efficient boundary conditions, but many others have been proposed. We implement three other methods applicable to complex geometries [Guo, Zheng, and Shi, Phys. Fluids 14, 2007 (2002), 10.1063/1.1471914; Bouzidi, Firdaouss, and Lallemand, Phys. Fluids 13, 3452 (2001), 10.1063/1.1399290; Junk and Yang, Phys. Rev. E 72, 066701 (2005), 10.1103/PhysRevE.72.066701] in our open-source application hemelb. We use these to simulate Poiseuille and Womersley flows in a cylindrical pipe with an arbitrary orientation at physiologically relevant Reynolds numbers (1-300) and Womersley numbers (4-12), and steady flow in a curved pipe at relevant Dean numbers (100-200), and compare the accuracy to analytical solutions. We find that both the Bouzidi-Firdaouss-Lallemand (BFL) and Guo-Zheng-Shi (GZS) methods give second-order convergence in space while simple bounce-back degrades to first order. The BFL method appears to perform better than GZS in unsteady flows and is significantly less computationally expensive. The Junk-Yang method shows poor stability at larger Re numbers and so cannot be recommended here. The choice of collision operator (lattice Bhatnagar-Gross-Krook vs multiple relaxation time) and velocity set (D3Q15 vs D3Q19 vs D3Q27) does not significantly affect the accuracy in the problems studied.

  4. Real-time PCR for rapidly detecting aniline-degrading bacteria in activated sludge.

    PubMed

    Kayashima, Takakazu; Suzuki, Hisako; Maeda, Toshinari; Ogawa, Hiroaki I

    2013-05-01

    We developed a detection method that uses quantitative real-time PCR (qPCR) and the TaqMan system to easily and rapidly assess the population of aniline-degrading bacteria in activated sludge prior to conducting a biodegradability test on a chemical compound. A primer and probe set for qPCR was designed by a multiple alignment of conserved amino acid sequences encoding the large (α) subunit of aniline dioxygenase. PCR amplification tests showed that the designed primer and probe set targeted aniline-degrading strains such as Acidovorax sp., Gordonia sp., Rhodococcus sp., and Pseudomonas putida, thereby suggesting that the developed method can detect a wide variety of aniline-degrading bacteria. There was a strong correlation between the relative copy number of the α-aniline dioxygenase gene in activated sludge obtained with the developed qPCR method and the number of aniline-degrading bacteria measured by the Most Probable Number method, which is the conventional method, and a good correlation with the lag time of the BOD curve for aniline degradation produced by the biodegradability test in activated sludge samples collected from eight different wastewater treatment plants in Japan. The developed method will be valuable for the rapid and accurate evaluation of the activity of inocula prior to conducting a ready biodegradability test.

  5. GF-7 Imaging Simulation and DSM Accuracy Estimate

    NASA Astrophysics Data System (ADS)

    Yue, Q.; Tang, X.; Gao, X.

    2017-05-01

    The GF-7 satellite is a two-line-array stereo imaging satellite for surveying and mapping to be launched in 2018. Its resolution is about 0.8 m at the subsatellite point, corresponding to a swath width of 20 km, and the viewing angles of its forward and backward cameras are 5 and 26 degrees. This paper proposes an imaging simulation method for GF-7 stereo images. WorldView-2 stereo images were used as the basic data for the simulation. That is, we did not use a DSM and DOM as basic data (an "ortho-to-stereo" method) but used a "stereo-to-stereo" method, which better reflects the geometric and radiometric differences between looking angles. The drawback is that geometric error is introduced by two factors: differences in looking angle between the basic and simulated images, and inaccurate or missing ground reference data. We generated a DSM from the WorldView-2 stereo images. The WorldView-2 DSM was used not only as the reference DSM for estimating the accuracy of the DSM generated from the simulated GF-7 stereo images, but also as "ground truth" for establishing the relationship between WorldView-2 and simulated image points. Static MTF was simulated on the instantaneous focal plane "image" by filtering. SNR was simulated in the electronic sense: the digital value of each WorldView-2 image point was converted to radiance and used as the radiance seen by the simulated GF-7 camera. This radiance is converted to an electron number n according to the physical parameters of the GF-7 camera, the noise electron number n1 is a random number between -√n and √n, and the overall electron number obtained by the TDI CCD is converted to the digital value of the simulated GF-7 image. Sinusoidal curves with different amplitudes, frequencies and initial phases were used as attitude curves. Geometric installation errors of the CCD tiles were also simulated, considering rotation and translation factors. An accuracy estimate was made for the DSM generated from the simulated images.
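
    The radiometric noise step described above is easy to sketch; the conversion gain below is a placeholder, not a GF-7 camera parameter.

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate_dn(radiance, gain=500.0):
            """Radiance -> electrons -> uniform noise in [-sqrt(n), sqrt(n)] -> digital value."""
            n = radiance * gain                           # signal electrons per pixel
            n1 = rng.uniform(-np.sqrt(n), np.sqrt(n))     # noise electrons as described
            return (n + n1) / gain                        # back to digital value scale

        print(simulate_dn(np.array([10.0, 50.0, 200.0])))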

  6. Integrating expert opinion with modelling for quantitative multi-hazard risk assessment in the Eastern Italian Alps

    NASA Astrophysics Data System (ADS)

    Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.

    2016-11-01

    Extreme rainfall events are the main triggers of hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data related to past events and causal factors, and the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, prone to debris flows and floods, in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and numbers of people, physical vulnerability assessment, generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability, resulting in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show a good performance when compared to the historical damage reports.

  7. Applying a Hypoxia-Incorporating TCP Model to Experimental Data on Rat Sarcoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggieri, Ruggero, E-mail: ruggieri.ruggero@gmail.com; Stavreva, Nadejda; Naccarato, Stefania

    2012-08-01

    Purpose: To verify whether a tumor control probability (TCP) model which mechanistically incorporates acute and chronic hypoxia is able to describe animal in vivo dose-response data, exhibiting tumor reoxygenation. Methods and Materials: The investigated TCP model accounts for tumor repopulation, reoxygenation of chronic hypoxia, and fluctuating oxygenation of acute hypoxia. Using the maximum likelihood method, the model is fitted to Fischer-Moulder data on Wag/Rij rats, inoculated with rat rhabdomyosarcoma BA1112, and irradiated in vivo using different fractionation schemes. This data set is chosen because two of the experimental dose-response curves exhibit an inverse dose behavior, which is interpreted as due to reoxygenation. The tested TCP model is complex, and therefore, in vivo cell survival data on the same BA1112 cell line from Reinhold were added to Fischer-Moulder data and fitted simultaneously with a corresponding cell survival function. Results: The obtained fit to the combined Fischer-Moulder-Reinhold data was statistically acceptable. The best-fit values of the model parameters for which information exists were in the range of published values. The cell survival curves of well-oxygenated and hypoxic cells, computed using the best-fit values of the radiosensitivities and the initial number of clonogens, were in good agreement with the corresponding in vitro and in situ experiments of Reinhold. The best-fit values of most of the hypoxia-related parameters were used to recompute the TCP for non-small cell lung cancer patients as a function of the number of fractions, TCP(n). Conclusions: The investigated TCP model adequately describes animal in vivo data exhibiting tumor reoxygenation. The TCP(n) curve computed for non-small cell lung cancer patients with the best-fit values of most of the hypoxia-related parameters confirms previously obtained abrupt reduction in TCP for n < 10, thus warning against the adoption of severely hypofractionated schedules.

  8. The “curved lead pathway” method to enable a single lead to reach any two intracranial targets

    NASA Astrophysics Data System (ADS)

    Ding, Chen-Yu; Yu, Liang-Hong; Lin, Yuan-Xiang; Chen, Fan; Lin, Zhang-Ya; Kang, De-Zhi

    2017-01-01

    Deep brain stimulation is an effective way to treat movement disorders and a powerful research tool for exploring brain functions. This report proposes a “curved lead pathway” method of lead implantation such that a single lead can reach any two intracranial targets in sequence. A new type of stereotaxic system for implanting a curved lead in the human/primate brain was designed, the auxiliary device needed to use this method in rats/mice was fabricated and verified in the rat, and an Excel algorithm for automatically calculating the necessary parameters was implemented. This “curved lead pathway” method of lead implantation may complement the current method, make lead implantation for multiple targets more convenient, and expand the experimental techniques of brain function research.

  9. Measuring Model Rocket Engine Thrust Curves

    ERIC Educational Resources Information Center

    Penn, Kim; Slaton, William V.

    2010-01-01

    This paper describes a method and setup to quickly and easily measure a model rocket engine's thrust curve using a computer data logger and force probe. Horst describes using Vernier's LabPro and force probe to measure the rocket engine's thrust curve; however, the method of attaching the rocket to the force probe is not discussed. We show how a…

  10. The Kepler Catalog of Stellar Flares

    NASA Astrophysics Data System (ADS)

    Davenport, James R. A.

    2016-09-01

    A homogeneous search for stellar flares has been performed using every available Kepler light curve. An iterative light curve de-trending approach was used to filter out both astrophysical and systematic variability to detect flares. The flare recovery completeness has also been computed throughout each light curve using artificial flare injection tests, and the tools for this work have been made publicly available. The final sample contains 851,168 candidate flare events recovered above the 68% completeness threshold, which were detected from 4041 stars, or 1.9% of the stars in the Kepler database. The average flare energy detected is ~10^35 erg. The net fraction of flare stars increases with g - i color, or decreasing stellar mass. For stars in this sample with previously measured rotation periods, the total relative flare luminosity is compared to the Rossby number. A tentative detection of flare activity saturation for low-mass stars with rapid rotation below a Rossby number of ~0.03 is found. A power-law decay in flare activity with Rossby number is found with a slope of -1, shallower than typical measurements for X-ray activity decay with Rossby number.
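
    The reported rotation-activity behavior amounts to a saturated power law, sketched below with an arbitrary normalization.

        import numpy as np

        def relative_flare_luminosity(ro, ro_sat=0.03, l_sat=1.0, slope=-1.0):
            """Saturated below ro_sat, power-law decay with the given slope above it."""
            ro = np.asarray(ro, dtype=float)
            return np.where(ro < ro_sat, l_sat, l_sat * (ro / ro_sat) ** slope)

        print(relative_flare_luminosity([0.01, 0.03, 0.3, 3.0]))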

  11. Asymptotic theory of neutral stability of the Couette flow of a vibrationally excited gas

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yu. N.; Ershov, I. V.

    2017-01-01

    An asymptotic theory of the neutral stability curve for a supersonic plane Couette flow of a vibrationally excited gas is developed. The initial mathematical model consists of equations of two-temperature viscous gas dynamics, which are used to derive a spectral problem for a linear system of eighth-order ordinary differential equations within the framework of the classical linear stability theory. Unified transformations of the system for all shear flows are performed in accordance with the classical Lin scheme. The problem is reduced to an algebraic secular equation with separation into the "inviscid" and "viscous" parts, which is solved numerically. It is shown that the thus-calculated neutral stability curves agree well with the previously obtained results of the direct numerical solution of the original spectral problem. In particular, the critical Reynolds number increases with excitation enhancement, and the neutral stability curve is shifted toward the domain of higher wave numbers. This is also confirmed by means of solving an asymptotic equation for the critical Reynolds number at the Mach number M ≤ 4.

  12. Analyzing survival curves at a fixed point in time for paired and clustered right-censored data

    PubMed Central

    Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De

    2018-01-01

    In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus the comparison on fixed time points that have clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on transformations of the Kaplan-Meier estimator of the survival function. However, comparing survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data requires modifying their approach. In this paper, we extend the statistics to accommodate possible within-pair and within-cluster correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
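
    For the two-independent-sample baseline of Klein et al. (2007), comparing S1(t0) and S2(t0) reduces to a z-test on transformed Kaplan-Meier estimates with Greenwood variances. A sketch of that baseline using the complementary log-log transformation follows; the paired and clustered extensions discussed above further adjust the variance for within-pair or within-cluster correlation, which this sketch deliberately omits.

        import numpy as np

        def km_at(time, event, t0):
            """Kaplan-Meier S(t0) and Greenwood variance of log S(t0).

            Assumes untied event times; event = 1 if observed, 0 if censored."""
            order = np.argsort(time)
            time, event = time[order], event[order]
            n = len(time)
            s, var = 1.0, 0.0
            for i, (t, d) in enumerate(zip(time, event)):
                if t > t0:
                    break
                at_risk = n - i
                if d:
                    s *= 1.0 - 1.0 / at_risk
                    var += 1.0 / (at_risk * (at_risk - 1.0))
            return s, var

        def cloglog_z(time1, event1, time2, event2, t0):
            """Z statistic for H0: S1(t0) = S2(t0), compared to N(0, 1)."""
            s1, v1 = km_at(time1, event1, t0)
            s2, v2 = km_at(time2, event2, t0)
            x1, x2 = np.log(-np.log(s1)), np.log(-np.log(s2))
            se = np.sqrt(v1 / np.log(s1) ** 2 + v2 / np.log(s2) ** 2)
            return (x1 - x2) / se

        # Simulated right-censored samples (administrative censoring at t = 12)
        rng = np.random.default_rng(1)
        t1 = rng.exponential(10.0, 80); e1 = (t1 < 12).astype(int); t1 = np.minimum(t1, 12.0)
        t2 = rng.exponential(14.0, 80); e2 = (t2 < 12).astype(int); t2 = np.minimum(t2, 12.0)
        print("z =", cloglog_z(t1, e1, t2, e2, t0=6.0))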

  13. Piecewise polynomial representations of genomic tracks.

    PubMed

    Tarabichi, Maxime; Detours, Vincent; Konopka, Tomasz

    2012-01-01

    Genomic data from micro-array and sequencing projects consist of associations of measured values to chromosomal coordinates. These associations can be thought of as functions in one dimension and can thus be stored, analyzed, and interpreted as piecewise-polynomial curves. We present a general framework for building piecewise polynomial representations of genome-scale signals and illustrate some of its applications via examples. We show that piecewise constant segmentation, a typical step in copy-number analyses, can be carried out within this framework for both array and (DNA) sequencing data offering advantages over existing methods in each case. Higher-order polynomial curves can be used, for example, to detect trends and/or discontinuities in transcription levels from RNA-seq data. We give a concrete application of piecewise linear functions to diagnose and quantify alignment quality at exon borders (splice sites). Our software (source and object code) for building piecewise polynomial models is available at http://sourceforge.net/projects/locsmoc/.
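
    The following is not the authors' package (linked above) but a minimal sketch of the underlying idea: store a track as breakpoints plus per-segment polynomial coefficients and evaluate it as a piecewise function. The toy coverage signal and the breakpoints are invented.

        import numpy as np

        # Toy coverage signal along a chromosome with one copy-number step
        rng = np.random.default_rng(2)
        pos = np.arange(1000)
        signal = np.where(pos < 400, 2.0, 3.0) + rng.normal(0.0, 0.2, pos.size)

        # Piecewise representation: one polynomial per segment; degree 0
        # (piecewise constant) is the typical choice for copy-number analysis
        breaks = [0, 400, 1000]                      # assumed known segment borders
        pieces = [np.polyfit(pos[a:b], signal[a:b], deg=0)
                  for a, b in zip(breaks, breaks[1:])]

        def evaluate(breaks, pieces, x):
            """Evaluate the piecewise-polynomial track at positions x."""
            idx = np.clip(np.searchsorted(breaks, x, side="right") - 1,
                          0, len(pieces) - 1)
            return np.array([np.polyval(pieces[i], xi) for i, xi in zip(idx, x)])

        print(evaluate(breaks, pieces, np.array([100, 700])))   # ~2.0 and ~3.0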

  14. TL dating of pottery fragments from four archaeological sites in Taquari Valley, Brazil

    NASA Astrophysics Data System (ADS)

    Cano, Nilo F.; Machado, Neli T. G.; Gennari, Roseli F.; Rocca, Rene R.; Munita, Casimiro S.; Watanabe, Shigueo

    2012-12-01

    Sixty-three pottery fragments from four archaeological sites, numbered RST110, RST101, RST114 and RST107, in the Taquari Valley, in the vicinity of the city of Lajeado, Rio Grande do Sul state, southern Brazil, have been dated by the thermoluminescence method. Some fragments from RST110 and RST101 are as old as 1400-1200 years, whereas those from RST114 and RST107 are younger than 800 years. This result indicates that RST101 and RST110 were peopled earlier than RST114 and RST107. The most recent dates found are 302, 295 and 146 years, which are plausible, since the first German immigrants who arrived in this region encountered Tupi-Guarani Indians still living there. One interesting result is that the glow curves of quartz grains from RST110, RST101 and RST114 differ from those of RST107 quartz grains.

  15. Signal Detection Theory Applied to Helicopter Transmission Diagnostic Thresholds

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Keller, Jonathan A.; Wade, Daniel R.

    2008-01-01

    Helicopter Health Usage Monitoring Systems (HUMS) have the potential to provide data supporting an increase in the service life of dynamic mechanical components in a helicopter transmission. The data collected can demonstrate that a HUMS condition indicator responds to a specific component fault with appropriate alert limits and minimal false alarms. Defining thresholds for specific faults requires a tradeoff between the sensitivity of the condition indicator (CI) limit for indicating damage and the number of false alarms. A method using Receiver Operating Characteristic (ROC) curves to assess CI performance was demonstrated using CI data collected from accelerometers installed on several UH60 Black Hawk and AH64 Apache helicopters and on an AH64 helicopter component test stand. Results of the analysis indicate that ROC curves can be used to reliably assess the performance of commercial HUMS condition indicators in detecting damaged gears and bearings in a helicopter transmission.
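
    An ROC curve for a condition indicator is traced by sweeping the alert threshold and recording the detection and false-alarm rates at each setting. A minimal sketch with synthetic healthy and damaged CI values (the distributions are assumptions, not HUMS data):

        import numpy as np

        rng = np.random.default_rng(3)
        ci_healthy = rng.normal(1.0, 0.3, 500)   # CI values, undamaged components
        ci_damaged = rng.normal(2.0, 0.5, 60)    # CI values, seeded-fault tests

        # Each threshold yields one (false alarm rate, detection rate) point
        thresholds = np.linspace(0.0, 4.0, 200)
        fpr = np.array([(ci_healthy > th).mean() for th in thresholds])
        tpr = np.array([(ci_damaged > th).mean() for th in thresholds])

        # Area under the ROC curve summarizes the damage-sensitivity /
        # false-alarm tradeoff; reverse so the false alarm rate ascends
        fp, tp = fpr[::-1], tpr[::-1]
        auc = np.sum(0.5 * (tp[1:] + tp[:-1]) * np.diff(fp))
        print(f"AUC = {auc:.3f}")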

  16. MAX UnMix: A web application for unmixing magnetic coercivity distributions

    NASA Astrophysics Data System (ADS)

    Maxbauer, Daniel P.; Feinberg, Joshua M.; Fox, David L.

    2016-10-01

    It is common in the fields of rock and environmental magnetism to unmix magnetic mineral components using statistical methods that decompose various types of magnetization curves (e.g., acquisition, demagnetization, or backfield). A number of programs have been developed over the past decade and are frequently used by the rock magnetic community; however, many of these programs are outdated or have obstacles that inhibit their usability. MAX UnMix is a web application (available online at http://www.irm.umn.edu/maxunmix), built using the shiny package for R, that can be used for unmixing coercivity distributions derived from magnetization curves. Here, we describe in detail the statistical model underpinning the MAX UnMix web application and discuss the program's functionality. MAX UnMix improves on previous unmixing programs in that it is designed to be user friendly, runs as an independent website, and is platform independent.
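
    The published application fits skew-normal components; as a simplified stand-in for that kind of model, the sketch below unmixes a synthetic coercivity distribution into two symmetric Gaussian components. All values are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_components(logB, a1, m1, s1, a2, m2, s2):
            """Sum of two Gaussian components on the log10(field) axis."""
            g = lambda a, m, s: a * np.exp(-0.5 * ((logB - m) / s) ** 2)
            return g(a1, m1, s1) + g(a2, m2, s2)

        # Synthetic coercivity distribution: a soft and a hard component
        logB = np.linspace(0.5, 3.0, 120)            # log10 of field in mT
        rng = np.random.default_rng(4)
        data = two_components(logB, 1.0, 1.3, 0.25, 0.6, 2.2, 0.30)
        data = data + rng.normal(0.0, 0.01, logB.size)

        p0 = [1.0, 1.2, 0.3, 0.5, 2.0, 0.3]          # rough initial guesses
        popt, _ = curve_fit(two_components, logB, data, p0=p0)
        print("component means (log10 mT):", popt[1], popt[4])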

  18. Implementation Learning and Forgetting Curve to Scheduling in Garment Industry

    NASA Astrophysics Data System (ADS)

    Muhamad Badri, Huda; Deros, Baba Md; Syahri, M.; Saleh, Chairul; Fitria, Aninda

    2016-02-01

    The learning curve shows the relationship between time and the cumulative number of units produced, giving a mathematical description of the performance of workers carrying out repetitive tasks. The problem addressed in this study is the difference in worker performance before and after the break, which affects the company's production scheduling. The study was conducted in the garment industry with the aim of predicting the company's production schedule using the learning curve and the forgetting curve. By applying both curves, this paper characterizes worker performance: maximum output in the 3 productive hours before the break is 15 units, with a learning-curve percentage of 93.24%, while maximum output in the 3 productive hours after the break is 11 units, with a forgetting-curve percentage of 92.96%. The resulting 26 units per working day is then used as the basis for production scheduling.
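
    A minimal sketch of Wright's classic learning-curve model, T_n = T_1 * n^(log2 p), with the percentages reported above. The first-unit times and the simple treatment of forgetting are assumptions chosen so the toy reproduces the reported 15, 11, and 26 units; they are not the authors' scheduling procedure.

        import math

        def unit_time(t1, n, p):
            """Wright's model: time for the n-th unit, T_n = T1 * n**log2(p)."""
            return t1 * n ** math.log2(p)

        def units_in(minutes, t1, p):
            """Units finished within a block of productive time."""
            done, elapsed = 0, 0.0
            while elapsed + unit_time(t1, done + 1, p) <= minutes:
                elapsed += unit_time(t1, done + 1, p)
                done += 1
            return done

        T1_BEFORE = 14.0          # assumed first-unit time (minutes)
        T1_AFTER = 14.0 * 1.35    # assumed proficiency loss over the break

        before = units_in(180.0, T1_BEFORE, 0.9324)   # 3 h at the learning rate
        after = units_in(180.0, T1_AFTER, 0.9296)     # 3 h at the forgetting rate
        print(before, after, before + after)          # 15, 11, 26 units per day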

  19. A Fourier method for the analysis of exponential decay curves.

    PubMed

    Provencher, S W

    1976-01-01

    A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.
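
    The key idea, going back to the Gardner transform, is that on a logarithmic time axis a sum of exponentials becomes a convolution with a fixed kernel, so division in Fourier space, tamed by a Gaussian taper window, recovers the spectrum of time constants. A noise-free sketch, far simpler than the full program described above:

        import numpy as np

        N = 4096
        dx = 16.0 / N
        x = -8.0 + dx * np.arange(N)     # log-time grid, x = ln t; x[N//2] == 0
        t = np.exp(x)

        # Synthetic two-component decay with time constants 1 and 10
        f = np.exp(-t / 1.0) + np.exp(-t / 10.0)

        # On the log axis, g(x) = e^x f(e^x) is the convolution of the
        # log-lifetime spectrum with the fixed kernel k(x) = exp(x - e^x)
        g = t * f
        k = np.exp(x - np.exp(x))

        # Deconvolve by FFT division; the Gaussian taper suppresses the
        # high frequencies where 1/K would amplify noise
        nu = np.fft.fftfreq(N, d=dx)
        taper = np.exp(-(nu / 0.4) ** 2)   # width: a tunable convergence parameter
        H = np.fft.ifft(np.fft.fft(g) * taper / np.fft.fft(np.fft.ifftshift(k)))
        h = H.real / dx

        # Spectrum peaks sit at x = ln(tau); pick out the local maxima
        is_peak = (h > np.roll(h, 1)) & (h > np.roll(h, -1)) & (h > 0.05 * h.max())
        print("recovered ln(tau):", x[is_peak], "true:", np.log([1.0, 10.0]))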

  20. Level set methods for detonation shock dynamics using high-order finite elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobrev, V. A.; Grogan, F. C.; Kolev, T. V.

    Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.
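
    Not the paper's high-order discontinuous Galerkin solver, but a first-order upwind sketch of the same level set idea: advect a signed distance function and track the interface as its zero contour. The grid, velocity field, and step count are arbitrary demo choices.

        import numpy as np

        # 2-D grid and an initial signed distance function for a circular front
        n = 128
        dx = 1.0 / n
        x = dx * np.arange(n)
        X, Y = np.meshgrid(x, x, indexing="ij")
        phi = np.sqrt((X - 0.3) ** 2 + (Y - 0.5) ** 2) - 0.2

        u, v = 1.0, 0.0     # uniform velocity field, assumed for the demo
        dt = 0.5 * dx       # CFL-limited time step
        for _ in range(100):
            # First-order upwind differences (periodic boundaries via np.roll):
            # backward difference where the velocity component is positive
            dpx = (phi - np.roll(phi, 1, axis=0)) / dx if u > 0 else \
                  (np.roll(phi, -1, axis=0) - phi) / dx
            dpy = (phi - np.roll(phi, 1, axis=1)) / dx if v > 0 else \
                  (np.roll(phi, -1, axis=1) - phi) / dx
            phi -= dt * (u * dpx + v * dpy)

        # The zero contour has moved 100*dt*u ~ 0.39 in +x; reinitializing phi
        # to a signed distance function (as the paper does) restores |grad phi| = 1
        print("front center x ~", X[np.abs(phi) < dx].mean())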
