Sample records for curve based additive

  1. Fabricating small-scale, curved, polymeric structures with convex and concave menisci through interfacial free energy equilibrium.

    PubMed

    Cheng, Chao-Min; Matsuura, Koji; Wang, I-Jan; Kuroda, Yuka; LeDuc, Philip R; Naruse, Keiji

    2009-11-21

    Polymeric curved structures are widely used in imaging systems, including optical fibers and microfluidic channels. Here, we demonstrate that small-scale, poly(dimethylsiloxane) (PDMS)-based, curved structures can be fabricated by controlling interfacial free energy equilibrium. The resultant structures have a smooth, symmetric, curved surface, and may be convex or concave in form depending on the surface tension balance. Their curvatures are controlled by the surface characteristics (i.e., hydrophobicity and hydrophilicity) of the molds and the semi-liquid PDMS. In addition, these structures are shown to be biocompatible for cell culture. Our system provides a simple, efficient and economical method for generating integratable optical components without costly fabrication facilities.

  2. Temporal Drivers of Liking Based on Functional Data Analysis and Non-Additive Models for Multi-Attribute Time-Intensity Data of Fruit Chews.

    PubMed

    Kuesten, Carla; Bi, Jian

    2018-06-03

    Conventional drivers-of-liking analysis was extended with a time dimension into temporal drivers of liking (TDOL), based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both the direct effects and the interaction effects of attributes on consumer overall liking, include the Choquet integral with a fuzzy measure from multi-criteria decision-making, and linear regression based on variance decomposition. The dynamics of TDOL, i.e., the derivatives of the relative-importance functional curves, were also explored. The well-established R packages 'fda', 'kappalab' and 'relaimpo' were used in developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.
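
The Choquet integral at the heart of the non-additive models can be sketched in a few lines. This is a generic Python illustration rather than the 'kappalab' implementation, and the attribute names, scores, and fuzzy-measure values are hypothetical:

```python
def choquet(values, mu):
    """Discrete Choquet integral of attribute scores `values` (dict)
    with respect to a fuzzy measure `mu` (dict: frozenset -> weight)."""
    # Sort attributes by score, descending.
    attrs = sorted(values, key=values.get, reverse=True)
    scores = [values[a] for a in attrs] + [0.0]
    total = 0.0
    for i in range(len(attrs)):
        # Coalition of the (i+1) highest-scoring attributes.
        coalition = frozenset(attrs[: i + 1])
        total += (scores[i] - scores[i + 1]) * mu[coalition]
    return total

# Hypothetical additive measure over two sensory attributes.
mu = {frozenset({"sweet"}): 0.3,
      frozenset({"chewy"}): 0.7,
      frozenset({"sweet", "chewy"}): 1.0}
liking = choquet({"sweet": 0.5, "chewy": 0.2}, mu)  # 0.3*0.5 + 0.7*0.2 = 0.29
```

For an additive measure, as here, the integral reduces to a weighted sum; interaction between attributes is expressed by making the measure non-additive.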

  3. Bootstrap-based procedures for inference in nonparametric receiver-operating characteristic curve regression analysis.

    PubMed

    Rodríguez-Álvarez, María Xosé; Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Tahoces, Pablo G

    2018-03-01

    Prior to using a diagnostic test in a routine clinical setting, the rigorous evaluation of its diagnostic accuracy is essential. The receiver-operating characteristic curve is the measure of accuracy most widely used for continuous diagnostic tests. However, the possible impact of extra information about the patient (or even the environment) on diagnostic accuracy also needs to be assessed. In this paper, we focus on an estimator for the covariate-specific receiver-operating characteristic curve based on direct regression modelling and nonparametric smoothing techniques. This approach defines the class of generalised additive models for the receiver-operating characteristic curve. The main aim of the paper is to offer new inferential procedures for testing the effect of covariates on the conditional receiver-operating characteristic curve within the above-mentioned class. Specifically, two different bootstrap-based tests are suggested to check (a) the possible effect of continuous covariates on the receiver-operating characteristic curve and (b) the presence of factor-by-curve interaction terms. The validity of the proposed bootstrap-based procedures is supported by simulations. To facilitate the application of these new procedures in practice, an R-package, known as npROCRegression, is provided and briefly described. Finally, data derived from a computer-aided diagnostic system for the automatic detection of tumour masses in breast cancer is analysed.
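
The flavour of nonparametric ROC inference can be conveyed with a toy sketch: an empirical AUC with a percentile-bootstrap confidence interval. This is a generic illustration, not the covariate-specific tests of the npROCRegression package, and the marker values are made up:

```python
import random

def auc(healthy, diseased):
    """Empirical AUC: probability a diseased marker exceeds a healthy one."""
    wins = sum((d > h) + 0.5 * (d == h) for d in diseased for h in healthy)
    return wins / (len(healthy) * len(diseased))

def bootstrap_ci(healthy, diseased, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap interval for the AUC, resampling each group."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        h = [rng.choice(healthy) for _ in healthy]
        d = [rng.choice(diseased) for _ in diseased]
        stats.append(auc(h, d))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

healthy = [0.8, 1.1, 1.3, 0.9, 1.0, 1.2]   # hypothetical marker values
diseased = [1.5, 1.9, 1.4, 2.1, 1.7, 1.2]
point = auc(healthy, diseased)
low, high = bootstrap_ci(healthy, diseased)
```

The paper's bootstrap tests follow the same resampling logic, but applied to smooth covariate-specific ROC estimates rather than a single AUC.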

  4. Sensitivity curves for searches for gravitational-wave backgrounds

    NASA Astrophysics Data System (ADS)

    Thrane, Eric; Romano, Joseph D.

    2013-12-01

    We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs “power-law integrated curves.” For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.
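
The construction of a power-law integrated curve can be sketched numerically: for each spectral index, scale the power law so that its broadband SNR equals a threshold, then take the envelope. The flat effective noise level, threshold, and band below are illustrative stand-ins, not detector values:

```python
import numpy as np

def pi_curve(freqs, omega_eff, T, rho, betas, f_ref=25.0):
    """Power-law integrated (PI) sensitivity curve: envelope over power-law
    spectra whose broadband SNR equals the detection threshold rho."""
    curves = []
    for beta in betas:
        shape = (freqs / f_ref) ** beta
        # Broadband SNR^2 = 2*T * Omega_beta^2 * integral((shape/omega_eff)^2, df);
        # trapezoid rule, then choose Omega_beta so that SNR == rho.
        integrand = (shape / omega_eff) ** 2
        integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(freqs))
        curves.append(rho / np.sqrt(2.0 * T * integral) * shape)
    return np.max(curves, axis=0)  # pointwise envelope over the power laws

freqs = np.linspace(10.0, 110.0, 1001)
omega_eff = np.full_like(freqs, 1e-9)   # illustrative flat effective noise
envelope = pi_curve(freqs, omega_eff, T=1.0, rho=1.0, betas=[-2, 0, 2])
flat = pi_curve(freqs, omega_eff, T=1.0, rho=1.0, betas=[0])
```

Any power-law background lying fully above the envelope would be detectable at the chosen threshold, which is what makes the single summary curve useful.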

  5. The Use of Statistically Based Rolling Supply Curves for Electricity Market Analysis: A Preliminary Look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkin, Thomas J; Larson, Andrew; Ruth, Mark F

    In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions in variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and the dispatch-order-driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from 16 dollars/MWh to 6 dollars/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of, or providing calibrating insights to, PCMs.
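
The core mechanic, fitting one price-versus-load curve per short window and rolling it through the year, can be sketched as follows. The exponential functional form and the synthetic data are assumptions for illustration, not the report's specification:

```python
import math, random

def fit_supply_curve(loads, prices):
    """Least-squares fit of log(price) = a + b*load, i.e. price = exp(a + b*load)."""
    n = len(loads)
    logp = [math.log(p) for p in prices]
    mx, my = sum(loads) / n, sum(logp) / n
    b = sum((x - mx) * (y - my) for x, y in zip(loads, logp)) / \
        sum((x - mx) ** 2 for x in loads)
    return my - b * mx, b           # intercept a, slope b

def rolling_supply_curves(loads, prices, window):
    """One fitted (a, b) pair per consecutive window of hours."""
    return [fit_supply_curve(loads[i:i + window], prices[i:i + window])
            for i in range(0, len(loads) - window + 1, window)]

# Synthetic two weeks of hours: price follows exp(1 + 0.05*load) plus noise.
rng = random.Random(0)
loads = [20 + 10 * rng.random() for _ in range(336)]
prices = [math.exp(1 + 0.05 * L) * (1 + 0.01 * rng.gauss(0, 1)) for L in loads]
curves = rolling_supply_curves(loads, prices, window=336)
```

Shifting the fitted curve along the load axis by the retired or added capacity, then re-evaluating it at historical loads, gives the price impact estimate described in the abstract.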

  6. OGLE-2016-BLG-0168 Binary Microlensing Event: Prediction and Confirmation of the Microlens Parallax Effect from Space-based Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, I.-G.; Yee, J. C.; Jung, Y. K.

    The microlens parallax is a crucial observable for conclusively identifying the nature of lens systems in microlensing events containing or composed of faint (even dark) astronomical objects such as planets, neutron stars, brown dwarfs, and black holes. With the commencement of a new era of microlensing in collaboration with space-based observations, the microlens parallax can be routinely measured. In addition, space-based observations can provide opportunities to verify the microlens parallax measured from ground-only observations and to find a unique solution to the lensing light-curve analysis. Furthermore, since most space-based observations cannot cover the full light curves of lensing events, it is also necessary to verify the reliability of the information extracted from fragmentary space-based light curves. We conduct a test based on the microlensing event OGLE-2016-BLG-0168, created by a binary lens system consisting of almost equal mass M-dwarf stars, to demonstrate that it is possible to verify the microlens parallax and to resolve degeneracies using the space-based light curve even though the observations are fragmentary. Since space-based observatories will frequently produce fragmentary light curves due to their short observing windows, the methodology of this test will be useful for next-generation microlensing experiments that combine space-based and ground-based collaboration.

  7. OGLE-2016-BLG-0168 Binary Microlensing Event: Prediction and Confirmation of the Microlens Parallax Effect from Space-based Observations

    NASA Astrophysics Data System (ADS)

    Shin, I.-G.; Udalski, A.; Yee, J. C.; Calchi Novati, S.; Han, C.; Skowron, J.; Mróz, P.; Soszyński, I.; Poleski, R.; Szymański, M. K.; Kozłowski, S.; Pietrukowicz, P.; Ulaczyk, K.; Pawlak, M.; OGLE Collaboration; Albrow, M. D.; Gould, A.; Chung, S.-J.; Hwang, K.-H.; Jung, Y. K.; Ryu, Y.-H.; Zhu, W.; Cha, S.-M.; Kim, D.-J.; Kim, H.-W.; Kim, S.-L.; Lee, C.-U.; Lee, Y.; Park, B.-G.; Pogge, R. W.; KMTNet Group; Beichman, C.; Bryden, G.; Carey, S.; Gaudi, B. S.; Henderson, C. B.; Shvartzvald, Y.; Spitzer Team

    2017-11-01

    The microlens parallax is a crucial observable for conclusively identifying the nature of lens systems in microlensing events containing or composed of faint (even dark) astronomical objects such as planets, neutron stars, brown dwarfs, and black holes. With the commencement of a new era of microlensing in collaboration with space-based observations, the microlens parallax can be routinely measured. In addition, space-based observations can provide opportunities to verify the microlens parallax measured from ground-only observations and to find a unique solution to the lensing light-curve analysis. Furthermore, since most space-based observations cannot cover the full light curves of lensing events, it is also necessary to verify the reliability of the information extracted from fragmentary space-based light curves. We conduct a test based on the microlensing event OGLE-2016-BLG-0168, created by a binary lens system consisting of almost equal mass M-dwarf stars, to demonstrate that it is possible to verify the microlens parallax and to resolve degeneracies using the space-based light curve even though the observations are fragmentary. Since space-based observatories will frequently produce fragmentary light curves due to their short observing windows, the methodology of this test will be useful for next-generation microlensing experiments that combine space-based and ground-based collaboration.

  8. Development of technique for air coating and nickel and copper metalization of solar cells

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Solar cells were made with a variety of base-metal screen printing inks applied over a silicon nitride AR coating and then copper electroplated. Fritted and fritless nickel and fritless tin-base printing inks were evaluated. Conversion efficiencies as high as 9% were observed with fritted nickel ink contacts; however, curve shapes were generally poor, reflecting high series resistance. Problems encountered in addition to high series resistance included loss of adhesion of the nickel contacts during plating, and poor adhesion, oxidation and inferior curve shapes with the tin-base contacts.

  9. A curved surface micro-moiré method and its application in evaluating curved surface residual stress

    NASA Astrophysics Data System (ADS)

    Zhang, Hongye; Wu, Chenlong; Liu, Zhanwei; Xie, Huimin

    2014-09-01

    The moiré method is typically applied to the measurement of deformations of a flat surface while, for a curved surface, this method is rarely used other than for projection moiré or moiré interferometry. Here, a novel colour charge-coupled device (CCD) micro-moiré method has been developed, based on which a curved surface micro-moiré (CSMM) method is proposed with a colour CCD and optical microscope (OM). In the CSMM method, no additional reference grating is needed as a Bayer colour filter array (CFA) installed on the OM in front of the colour CCD image sensor performs this role. Micro-moiré fringes with high contrast are directly observed with the OM through the Bayer CFA under the special condition of observing a curved specimen grating. The principle of the CSMM method based on a colour CCD micro-moiré method and its application range and error analysis are all described in detail. In an experiment, the curved surface residual stress near a welded seam on a stainless steel tube was investigated using the CSMM method.

  10. Recognition of Protein-coding Genes Based on Z-curve Algorithms

    PubMed Central

    Guo, Feng-Biao; Lin, Yan; Chen, Ling-Ling

    2014-01-01

    Recognition of protein-coding genes, a classical bioinformatics issue, is an essential step in annotating newly sequenced genomes. The Z-curve algorithm, one of the most effective methods for this issue, has been successfully applied in annotating or re-annotating many genomes, including those of bacteria, archaea and viruses. Two Z-curve-based ab initio gene-finding programs have been developed: ZCURVE (for bacteria and archaea) and ZCURVE_V (for viruses and phages). ZCURVE_C (for 57 bacteria) and Zfisher (for any bacterium) are web servers for the re-annotation of bacterial and archaeal genomes. The above four tools can be used for genome annotation or re-annotation, either independently or combined with other gene-finding programs. In addition to recognizing protein-coding genes and exons, Z-curve algorithms are also effective in recognizing promoters and translation start sites. Here, we summarize the applications of Z-curve algorithms in gene finding and genome annotation. PMID:24822027
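
The Z-curve transform itself is a simple cumulative mapping of a DNA sequence into three components; the gene-finding tools build classifiers on features derived from curves like these. A minimal sketch:

```python
def z_curve(seq):
    """Map a DNA sequence to its Z-curve: three cumulative components
    (purine/pyrimidine, amino/keto, weak/strong hydrogen bonding)."""
    x = y = z = 0
    xs, ys, zs = [], [], []
    for base in seq.upper():
        x += 1 if base in "AG" else -1   # purines (A,G) vs pyrimidines (C,T)
        y += 1 if base in "AC" else -1   # amino (A,C) vs keto (G,T)
        z += 1 if base in "AT" else -1   # weak (A,T) vs strong (G,C) pairing
        xs.append(x); ys.append(y); zs.append(z)
    return xs, ys, zs

xs, ys, zs = z_curve("ACGT")  # a balanced sequence returns to the origin
```

Protein-coding regions show characteristic asymmetries in these three components, which is what the ZCURVE family of programs exploits.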

  11. Risk of adverse outcomes among infants of immigrant women according to birth-weight curves tailored to maternal world region of origin.

    PubMed

    Urquia, Marcelo L; Berger, Howard; Ray, Joel G

    2015-01-06

    Infants of immigrant women in Western nations generally have lower birth weights than infants of native-born women. Whether this difference is physiologic or pathological is unclear. We determined whether the use of birth-weight curves tailored to maternal world region of origin would discriminate adverse neonatal and obstetric outcomes more accurately than a single birth-weight curve based on infants of Canadian-born women. We performed a retrospective cohort study of in-hospital singleton live births (328,387 to immigrant women, 761,260 to nonimmigrant women) in Ontario between 2002 and 2012 using population health services data linked to the national immigration database. We classified infants as small for gestational age (<10th percentile) or large for gestational age (≥90th percentile) using both Canadian and world region-specific birth-weight curves and compared associations with adverse neonatal and obstetric outcomes. Compared with world region-specific birth-weight curves, the Canadian curve classified 20,431 (6.2%) additional newborns of immigrant women as small for gestational age, of whom 15,467 (75.7%) were of East or South Asian descent. The odds of neonatal death were lower among small-for-gestational-age infants of immigrant women than among those of nonimmigrant women based on the Canadian birth-weight curve (adjusted odds ratio [OR] 0.83, 95% confidence interval [CI] 0.72-0.95), but higher when small for gestational age was defined by the world region-specific curves (adjusted OR 1.24, 95% CI 1.08-1.42). Conversely, the odds of some adverse outcomes were lower among large-for-gestational-age infants of immigrant women than among those of nonimmigrant women based on world region-specific birth-weight curves, but were similar based on the Canadian curve.
World region-specific birth-weight curves seemed to be more appropriate than a single Canadian population-based curve for assessing the risk of adverse neonatal and obstetric outcomes among small- and large-for-gestational-age infants born to immigrant women, especially those from the East and South Asian regions. © 2015 Canadian Medical Association or its licensors.
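
The classification step of such a study reduces to comparing each birth weight against the 10th and 90th percentiles of the applicable curve at the infant's gestational age. A toy sketch with hypothetical cutoff values, not the study's actual curves:

```python
def classify(weight_g, gest_week, curves):
    """Label an infant SGA/AGA/LGA against percentile curves
    (dict: gestational week -> (10th percentile, 90th percentile), grams)."""
    p10, p90 = curves[gest_week]
    if weight_g < p10:
        return "SGA"   # small for gestational age (<10th percentile)
    if weight_g >= p90:
        return "LGA"   # large for gestational age (>=90th percentile)
    return "AGA"       # appropriate for gestational age

# Hypothetical cutoffs for two reference populations at 40 weeks.
canadian = {40: (2900, 4000)}
south_asian = {40: (2600, 3700)}
label_can = classify(2750, 40, canadian)     # SGA on the single national curve
label_sa = classify(2750, 40, south_asian)   # AGA on the tailored curve
```

The same infant can thus change category depending on the reference curve, which is exactly the reclassification effect the study quantifies.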

  12. Constraining the Physical Properties of Meteor Stream Particles by Light Curve Shapes Using the Virtual Meteor Observatory

    NASA Technical Reports Server (NTRS)

    Koschny, D.; Gritsevich, M.; Barentsen, G.

    2011-01-01

    Different authors have produced models for the physical properties of meteoroids based on the shape of a meteor's light curve, typically from short observing campaigns. We here analyze the height profiles and light curves of approximately 200 double-station meteors from the Leonids and Perseids using data from the Virtual Meteor Observatory, to demonstrate that with this web-based meteor database it is possible to analyze very large datasets from different authors in a consistent way. We compute the average heights of the begin point, the point of maximum luminosity, and the end point for Perseids and Leonids. We also compute the skew of the light curve, usually called the F-parameter. The results compare well with other authors' data. We display the average light curve in a novel way to assess the light curve shape in addition to using the F-parameter. While the Perseids show a peaked light curve, the average Leonid light curve has a flatter peak. This indicates that the particle distribution of Leonid meteors can be described by a Gaussian distribution, while the Perseids can be described with a power law. The skew for Leonids is smaller than for Perseids, indicating that the Leonids are more fragile than the Perseids.
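
The F-parameter can be computed directly from a height profile and light curve: it is the fraction of the height span traversed before maximum luminosity, so a symmetric curve gives F = 0.5. A minimal sketch with illustrative values:

```python
def f_parameter(heights, magnitudes):
    """Skew of a meteor light curve: F = (H_beg - H_max) / (H_beg - H_end),
    where H_max is the height of maximum luminosity (minimum magnitude).
    F = 0.5 for a symmetric curve; F < 0.5 means an early peak."""
    h_beg, h_end = heights[0], heights[-1]
    h_max = heights[min(range(len(magnitudes)), key=magnitudes.__getitem__)]
    return (h_beg - h_max) / (h_beg - h_end)

# Symmetric triangular light curve peaking mid-trajectory (illustrative).
heights = [110, 105, 100, 95, 90]        # km, decreasing along the path
magnitudes = [4.0, 2.0, 0.0, 2.0, 4.0]   # brightest (minimum mag) at 100 km
F = f_parameter(heights, magnitudes)     # 0.5 for this symmetric curve
```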

  13. A Type D Non-Vacuum Spacetime with Causality Violating Curves, and Its Physical Interpretation

    NASA Astrophysics Data System (ADS)

    Ahmed, Faizuddin

    2017-12-01

    We present a topologically trivial, non-vacuum solution of Einstein's field equations in four dimensions, which is regular everywhere. The metric admits circular closed timelike curves, which appear beyond the null curve, and these timelike curves are stable under linear perturbations. Additionally, the spacetime admits null geodesic curves, which are not closed, and the metric is of type D in the Petrov classification scheme. The stress-energy tensor of the anisotropic fluid satisfies the various energy conditions and a generalization of the equation-of-state parameter of a perfect fluid, p = ωρ. The metric admits a twisting, shear-free, nonexpanding timelike geodesic congruence. Finally, the physical interpretation of this solution, based on the study of the geodesic deviation equation, is presented.

  14. Determination of NEHRP Site Class of Seismic Recording Stations in the Northwest Himalayas and Its Adjoining Area Using HVSR Method

    NASA Astrophysics Data System (ADS)

    Harinarayan, N. H.; Kumar, Abhishek

    2018-01-01

    Local site characteristics play an important role in controlling the damage pattern during earthquakes (EQs). These site characteristics may vary from simple to complex and can be estimated by various field tests. In addition, the extended Nakamura's method, which uses the horizontal-to-vertical spectral ratio (HVSR) computed from available EQ records, can also be used for site class (SC) determination. In this study, SCs for 90 recording stations maintained by the Program for Excellence in Strong Motion Studies (PESMOS), located in the northwestern Himalayas and the adjoining areas, are determined using the extended Nakamura's technique. The average HVSR curves obtained at the majority of the recording stations are found to match the existing literature. The predominant frequency (f_peak) from the average HVSR curve at each recording station is then used for the determination of SC. The original SC given by PESMOS is based purely on geology, not on a comprehensive soil investigation exercise. In this study, the SC based on the average HVSR curves is found to match the SC given by PESMOS for a majority of recording stations. However, for a considerable number of recording stations a mismatch is also found, which is consistent with the existing literature. In addition, an SC based on the National Earthquake Hazard Reduction Program (NEHRP) scheme is proposed based on f_peak for all 90 recording stations.
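
Once smoothed amplitude spectra of the three components are available, the HVSR and its predominant frequency follow directly. The spectra below are synthetic placeholders, not PESMOS records:

```python
import math

def hvsr_peak(freqs, ns, ew, v):
    """HVSR = sqrt(NS*EW)/V per frequency bin; return the ratio curve and
    the predominant frequency f_peak (argmax of the ratio)."""
    ratio = [math.sqrt(n * e) / z for n, e, z in zip(ns, ew, v)]
    f_peak = freqs[max(range(len(ratio)), key=ratio.__getitem__)]
    return ratio, f_peak

freqs = [0.5, 1.0, 2.0, 4.0, 8.0]   # Hz
ns = [1.0, 1.2, 5.0, 1.1, 0.9]      # horizontal spectra amplified near 2 Hz
ew = [1.0, 1.1, 4.0, 1.0, 0.8]
v = [1.0, 1.0, 1.0, 1.0, 1.0]       # flat vertical spectrum
ratio, f_peak = hvsr_peak(freqs, ns, ew, v)   # f_peak = 2.0 Hz
```

In practice f_peak is then mapped to a site class via an empirical correlation; that mapping is study-specific and is not reproduced here.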

  15. Growth standard charts for monitoring bodyweight in dogs of different sizes

    PubMed Central

    Salt, Carina; Morris, Penelope J.; Wilson, Derek; Lund, Elizabeth M.; Cole, Tim J.; Butterwick, Richard F.

    2017-01-01

    Limited information is available on what constitutes optimal growth in dogs. The primary aim of this study was to develop evidence-based growth standards for dogs, using retrospective analysis of bodyweight and age data from >6 million young dogs attending a large corporate network of primary care veterinary hospitals across the USA. Electronic medical records were used to generate bodyweight data from immature client-owned dogs that were healthy and had remained in ideal body condition throughout the first 3 years of life. Growth centile curves were constructed using Generalised Additive Models for Location, Shape and Scale. Curves were displayed graphically as centile charts covering the age range 12 weeks to 2 years. Over 100 growth charts were modelled, specific to different combinations of breed, sex and neuter status. Neutering before 37 weeks was associated with a slight upward shift in growth trajectory, whilst neutering after 37 weeks was associated with a slight downward shift in growth trajectory. However, these shifts were small in comparison to inter-individual variability amongst dogs, suggesting that separate curves for neutered dogs were not needed. Five bodyweight categories were created to cover breeds up to 40 kg, using both visual assessment and hierarchical cluster analysis of breed-specific growth curves. For 20/24 of the individual breed centile curves, agreement with curves for the corresponding bodyweight categories was good. For the remaining 4 breed curves, occasional deviation across centile lines was observed, but overall agreement was acceptable. This suggested that growth could be described using size categories rather than requiring curves for specific breeds. In the current study, a series of evidence-based growth standards have been developed to facilitate charting of bodyweight in healthy dogs. Additional studies are required to validate these standards and create a clinical tool for growth monitoring in pet dogs. PMID:28873413

  16. Growth standard charts for monitoring bodyweight in dogs of different sizes.

    PubMed

    Salt, Carina; Morris, Penelope J; German, Alexander J; Wilson, Derek; Lund, Elizabeth M; Cole, Tim J; Butterwick, Richard F

    2017-01-01

    Limited information is available on what constitutes optimal growth in dogs. The primary aim of this study was to develop evidence-based growth standards for dogs, using retrospective analysis of bodyweight and age data from >6 million young dogs attending a large corporate network of primary care veterinary hospitals across the USA. Electronic medical records were used to generate bodyweight data from immature client-owned dogs that were healthy and had remained in ideal body condition throughout the first 3 years of life. Growth centile curves were constructed using Generalised Additive Models for Location, Shape and Scale. Curves were displayed graphically as centile charts covering the age range 12 weeks to 2 years. Over 100 growth charts were modelled, specific to different combinations of breed, sex and neuter status. Neutering before 37 weeks was associated with a slight upward shift in growth trajectory, whilst neutering after 37 weeks was associated with a slight downward shift in growth trajectory. However, these shifts were small in comparison to inter-individual variability amongst dogs, suggesting that separate curves for neutered dogs were not needed. Five bodyweight categories were created to cover breeds up to 40 kg, using both visual assessment and hierarchical cluster analysis of breed-specific growth curves. For 20/24 of the individual breed centile curves, agreement with curves for the corresponding bodyweight categories was good. For the remaining 4 breed curves, occasional deviation across centile lines was observed, but overall agreement was acceptable. This suggested that growth could be described using size categories rather than requiring curves for specific breeds. In the current study, a series of evidence-based growth standards have been developed to facilitate charting of bodyweight in healthy dogs. Additional studies are required to validate these standards and create a clinical tool for growth monitoring in pet dogs.

  17. Kinematic Methods of Designing Free Form Shells

    NASA Astrophysics Data System (ADS)

    Korotkiy, V. A.; Khmarova, L. I.

    2017-11-01

    The geometrical shell model is formed on the basis of the set requirements expressed through surface parameters. The shell is modelled using the kinematic method, according to which the shell is formed as a continuous one-parameter set of curves. The authors offer a kinematic method based on the use of second-order curves with a variable eccentricity as the form-making element. Additional guiding ruled surfaces are used to control the form of the designed surface. The authors developed a software application that plots a second-order curve specified by an arbitrary set of five coplanar points and tangents.
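
Plotting a second-order curve through five coplanar points amounts to solving a homogeneous 5x6 linear system for the conic coefficients (A, B, C, D, E, F) in Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0. A sketch via the SVD null space, not the authors' application:

```python
import numpy as np

def conic_through(points):
    """Coefficients (A, B, C, D, E, F) of the conic through five points."""
    M = np.array([[x * x, x * y, y * y, x, y, 1.0] for x, y in points])
    # Null space of M: right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(M)
    return vt[-1]

# Five points on the unit circle; expect x^2 + y^2 - 1 = 0 (up to scale).
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0),
       (np.sqrt(0.5), np.sqrt(0.5))]
coef = conic_through(pts)
```

Five points with no three collinear determine the conic uniquely up to scale, which is why a single null vector suffices.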

  18. Assessing neural activity related to decision-making through flexible odds ratio curves and their derivatives.

    PubMed

    Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Pardo-Vazquez, Jose L; Leboran, Victor; Molenberghs, Geert; Faes, Christel; Acuña, Carlos

    2011-06-30

    It is well established that neural activity is stochastically modulated over time. Therefore, direct comparisons across experimental conditions and determination of change points or maximum firing rates are not straightforward. This study sought to compare temporal firing probability curves that may vary across groups defined by different experimental conditions. Odds-ratio (OR) curves were used as a measure of comparison, and the main goal was to provide a global test to detect significant differences of such curves through the study of their derivatives. An algorithm is proposed that enables ORs based on generalized additive models, including factor-by-curve-type interactions to be flexibly estimated. Bootstrap methods were used to draw inferences from the derivatives curves, and binning techniques were applied to speed up computation in the estimation and testing processes. A simulation study was conducted to assess the validity of these bootstrap-based tests. This methodology was applied to study premotor ventral cortex neural activity associated with decision-making. The proposed statistical procedures proved very useful in revealing the neural activity correlates of decision-making in a visual discrimination task. Copyright © 2011 John Wiley & Sons, Ltd.

  19. CYP2C19 progress curve analysis and mechanism-based inactivation by three methylenedioxyphenyl compounds.

    PubMed

    Salminen, Kaisa A; Meyer, Achim; Imming, Peter; Raunio, Hannu

    2011-12-01

    Several in vitro criteria were used to assess whether three methylenedioxyphenyl (MDP) compounds, the isoquinoline alkaloids bulbocapnine, canadine, and protopine, are mechanism-based inactivators of CYP2C19. The recently reported fluorometric CYP2C19 progress curve analysis approach was applied first to determine whether these alkaloids demonstrate time-dependent inhibition. In this experiment, bulbocapnine, canadine, and protopine displayed time dependence and saturation in their inactivation kinetics with K(I) and k(inact) values of 72.4 ± 14.7 μM and 0.38 ± 0.036 min(-1), 2.1 ± 0.63 μM and 0.18 ± 0.015 min(-1), and 7.1 ± 2.3 μM and 0.24 ± 0.021 min(-1), respectively. Additional studies were performed to determine whether other specific criteria for mechanism-based inactivation were fulfilled: NADPH dependence, irreversibility, and involvement of a catalytic step in the enzyme inactivation. CYP2C19 activity was not significantly restored by dialysis when it had been inactivated by the alkaloids in the presence of a NADPH-regenerating system, and a metabolic-intermediate complex-associated increase in absorbance at approximately 455 nm was observed. In conclusion, the CYP2C19 progress curve analysis method revealed time-dependent inhibition by these alkaloids, and additional experiments confirmed its quasi-irreversible nature. This study revealed that the CYP2C19 progress curve analysis method is useful for identifying novel mechanism-based inactivators and yields a wealth of information in one run. The alkaloids bulbocapnine, canadine, and protopine, present in herbal medicines, are new mechanism-based inactivators and the first MDP compounds exhibiting quasi-irreversible inactivation of CYP2C19.
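
The reported (K_I, k_inact) pairs define the standard saturation model for time-dependent inactivation, k_obs = k_inact * [I] / (K_I + [I]). A sketch using the mean values from the abstract (uncertainties omitted):

```python
def k_obs(conc_uM, k_inact, K_I):
    """Observed first-order inactivation rate (min^-1) at inhibitor
    concentration conc_uM, from the standard saturation model."""
    return k_inact * conc_uM / (K_I + conc_uM)

# (K_I in uM, k_inact in min^-1), mean values reported in the abstract.
alkaloids = {
    "bulbocapnine": (72.4, 0.38),
    "canadine": (2.1, 0.18),
    "protopine": (7.1, 0.24),
}
# At [I] = K_I the observed rate is half-maximal: k_obs = k_inact / 2.
half_rates = {name: k_obs(K_I, k_inact, K_I)
              for name, (K_I, k_inact) in alkaloids.items()}
```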

  20. Learning curves in health professions education.

    PubMed

    Pusic, Martin V; Boutis, Kathy; Hatala, Rose; Cook, David A

    2015-08-01

    Learning curves, which graphically show the relationship between learning effort and achievement, are common in published education research but are not often used in day-to-day educational activities. The purpose of this article is to describe the generation and analysis of learning curves and their applicability to health professions education. The authors argue that the time is right for a closer look at using learning curves, given their desirable properties, to inform both self-directed instruction by individuals and education management by instructors. A typical learning curve is made up of a measure of learning (y-axis), a measure of effort (x-axis), and a mathematical linking function. At the individual level, learning curves make manifest a single person's progress towards competence, including his/her rate of learning, the inflection point where learning becomes more effortful, and the remaining distance to mastery attainment. At the group level, overlaid learning curves show the full variation of a group of learners' paths through a given learning domain. Specifically, they make overt the difference between time-based and competency-based approaches to instruction. Additionally, instructors can use learning curve information to more accurately target educational resources to those who most require them. The learning curve approach requires a fine-grained collection of data that will not be possible in all educational settings; however, the increased use of an assessment paradigm that explicitly includes effort and its link to individual achievement could result in increased learner engagement and more effective instructional design.
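
A common choice for the mathematical linking function is the power law of practice, score = a * trials^b, which is linear on log-log axes and can be fitted by ordinary least squares; the learner data below are synthetic:

```python
import math

def fit_power_law(trials, scores):
    """Fit score = a * trials**b by least squares in log-log space."""
    lx = [math.log(t) for t in trials]
    ly = [math.log(s) for s in scores]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic learner: error rate falls as 40 * trials**-0.5.
trials = [1, 2, 4, 8, 16, 32]
errors = [40 * t ** -0.5 for t in trials]
a, b = fit_power_law(trials, errors)   # recovers a ~ 40, b ~ -0.5
```

The fitted exponent b is one way to summarise an individual's rate of learning, and the curve's extrapolation gives the remaining distance to a mastery threshold.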

  1. Interaction Analysis of Longevity Interventions Using Survival Curves.

    PubMed

    Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim

    2018-01-06

    A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise.
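
One simple independence null of this kind, illustrative and not necessarily the authors' formulation, is time rescaling: if each intervention alone stretches control lifespan by a factor c_i (estimated here from medians), the combined curve is predicted as S_0(t / (c_1 * c_2)). A sketch on exponential toy curves:

```python
import math

def predict_combined(s0, s1, s2, t, grid):
    """Time-rescaling independence prediction: S_pred(t) = S0(t / (c1*c2)),
    where c_i is intervention i's median lifespan relative to control."""
    def median(s):
        # First grid time at which survival has fallen to 0.5 or below.
        return next(x for x in grid if s(x) <= 0.5)
    c1 = median(s1) / median(s0)
    c2 = median(s2) / median(s0)
    return s0(t / (c1 * c2))

grid = [i * 0.001 for i in range(1, 20000)]
s0 = lambda t: math.exp(-t)       # control survival curve
s1 = lambda t: math.exp(-t / 2)   # intervention 1: doubles lifespan
s2 = lambda t: math.exp(-t / 3)   # intervention 2: triples lifespan
pred = predict_combined(s0, s1, s2, t=6.0, grid=grid)  # close to exp(-1)
```

Interaction would then be quantified as the deviation of the observed combined curve from this prediction across the whole time axis, not just at the mean.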

  2. Interaction Analysis of Longevity Interventions Using Survival Curves

    PubMed Central

    Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G.; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim

    2018-01-01

    A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise. PMID:29316622
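
    The additive and multiplicative null models contrasted in the abstract are simple enough to state in code. A minimal sketch with hypothetical mean lifespans (not data from the paper):

```python
def additive_prediction(control, a, b):
    # Additive null: absolute lifespan extensions (e.g. days) sum.
    return control + (a - control) + (b - control)

def multiplicative_prediction(control, a, b):
    # Multiplicative null: fold-changes in mean lifespan multiply.
    return control * (a / control) * (b / control)

# Hypothetical mean lifespans (days): control, intervention A, B alone
control, a, b = 20.0, 26.0, 30.0
add_pred = additive_prediction(control, a, b)         # 36.0 days
mult_pred = multiplicative_prediction(control, a, b)  # ~39.0 days
```

    The paper's point is that neither prediction is fundamentally justified, and that the comparison should instead be made on whole survival curves rather than on means alone.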

  3. Electric Transport Traction Power Supply System With Distributed Energy Sources

    NASA Astrophysics Data System (ADS)

    Abramov, E. Y.; Schurov, N. I.; Rozhkova, M. V.

    2016-04-01

    The paper states the problem of traction substation (TSS) leveling of daily-load curve for urban electric transport. The circuit of traction power supply system (TPSS) with distributed autonomous energy source (AES) based on photovoltaic (PV) and energy storage (ES) units is submitted here. The distribution algorithm of power flow for the daily traction load curve leveling is also introduced in this paper. In addition, it illustrates the implemented experiment model of power supply system.

  4. The viscoelastic behavior of a composite in a thermal environment

    NASA Technical Reports Server (NTRS)

    Morris, D. H.; Brinson, H. F.; Griffith, W. I.; Yeow, Y. T.

    1979-01-01

    A proposed method for the accelerated prediction of modulus and lifetimes for time-dependent polymer matrix composite laminates is presented. The method, based on the time-temperature superposition principle and lamination theory, is described in detail. Unidirectional reciprocal-of-compliance master curves and the needed shift functions are presented and discussed. Master curves for arbitrarily oriented unidirectional laminates are predicted and compared with experimental results obtained from master curves generated from 15-minute tests and with 25-hour tests. Good agreement is shown. Predicted 30 deg and 60 deg unidirectional strength master curves are presented and compared to results of creep rupture tests. Reasonable agreement is demonstrated. In addition, creep rupture results for a (90/±60/90)_2s laminate are presented.

  5. Visual navigation using edge curve matching for pinpoint planetary landing

    NASA Astrophysics Data System (ADS)

    Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei

    2018-05-01

    Pinpoint landing is challenging for future Mars and asteroid exploration missions. Vision-based navigation schemes based on feature detection and matching are practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and utilize poor-performance measurements, which pose great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of the craters tracked across two sequential images are utilized to determine the relative attitude and position of the lander through a normalized method. Then, considering the error accumulation of relative navigation, a method is developed that integrates the crater-based relative navigation method with a crater-based absolute navigation method, which identifies craters using a georeferenced database for continuous estimation of absolute states. In addition, expressions for the relative state estimate bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance, which hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.

  6. An extension of the receiver operating characteristic curve and AUC-optimal classification.

    PubMed

    Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto

    2012-10-01

    While most proposed methods for solving classification problems focus on minimization of the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, which is the partial area under the ROC curve. For example, in medical screening, a high true-positive rate to the fixed lower false-positive rate is preferable and thus the partial AUC corresponding to lower false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions for partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets in the UCI repository.
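
    The AUC's rank interpretation (the probability that a randomly chosen positive case outscores a randomly chosen negative one, which is what RankBoost-style methods optimize) can be computed directly. A small illustrative sketch, not the authors' boosting algorithm:

```python
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney rank statistic: probability that a
    random positive outscores a random negative (ties count half)."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Hypothetical classifier scores: positives vs negatives
auc = empirical_auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2])  # 8/9, about 0.889
```

    The partial AUC discussed in the abstract restricts the same area to a band of low false-positive rates, which this pairwise form does not capture directly.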

  7. Comparison of Paired ROC Curves through a Two-Stage Test.

    PubMed

    Yu, Wenbao; Park, Eunsik; Chang, Yuan-Chin Ivan

    2015-01-01

    The area under the receiver operating characteristic (ROC) curve (AUC) is a popularly used index when comparing two ROC curves. Statistical tests based on it for analyzing the difference have been well developed. However, this index is less informative when two ROC curves cross and have similar AUCs. In order to detect differences between ROC curves in such situations, a two-stage nonparametric test that uses a shifted area under the ROC curve (sAUC), along with AUCs, is proposed for paired designs. The new procedure is shown, numerically, to be effective in terms of power under a wide range of scenarios; additionally, it outperforms two conventional ROC-type tests, especially when two ROC curves cross each other and have similar AUCs. A larger sAUC implies a larger partial AUC in the range of low false-positive rates in this case. Because high specificity is important in many classification tasks, such as medical diagnosis, this is an appealing characteristic. The test also implicitly analyzes the equality of two commonly used binormal ROC curves at every operating point. We also apply the proposed method to synthesized data and two real examples to illustrate its usefulness in practice.

  8. Statistical aspects of modeling the labor curve.

    PubMed

    Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M

    2015-06-01

    In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Modeling the impact of spatial relationships on horizontal curve safety.

    PubMed

    Findley, Daniel J; Hummer, Joseph E; Rasdorf, William; Zegeer, Charles V; Fowler, Tyler J

    2012-03-01

    The curved segments of roadways are more hazardous because of the additional centripetal forces exerted on a vehicle, driver expectations, and other factors. The safety of a curve depends on various factors, most notably geometric factors, but the location of a curve in relation to other curves is also thought to influence its safety because of a driver's expectation to encounter additional curves. The link between an individual curve's geometric characteristics and its safety performance has been established, but spatial considerations are typically not included in a safety analysis. The spatial considerations included in this research consisted of four components: distance to adjacent curves, direction of turn of the adjacent curves, and radius and length of the adjacent curves. The primary objective of this paper is to quantify the spatial relationship between adjacent horizontal curves and horizontal curve safety using a crash modification factor. Doing so enables a safety professional to estimate safety more accurately, to allocate funding to reduce or prevent future collisions, and to design new roadway sections more efficiently to minimize crash risk where there will be a series of curves along a route. The most important finding from this research is the statistical significance of spatial considerations for the prediction of horizontal curve safety. The distances to adjacent curves were found to be a reliable predictor of observed collisions. This research recommends a model which utilizes spatial considerations for horizontal curve safety prediction in addition to current Highway Safety Manual prediction capabilities using individual curve geometric features. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. A simplified model for glass formation

    NASA Technical Reports Server (NTRS)

    Uhlmann, D. R.; Onorato, P. I. K.; Scherer, G. W.

    1979-01-01

    A simplified model of glass formation based on the formal theory of transformation kinetics is presented, which describes the critical cooling rates implied by the occurrence of glassy or partly crystalline bodies. In addition, an approach based on the nose of the time-temperature-transformation (TTT) curve as an extremum in temperature and time has provided a relatively simple relation between the activation energy for viscous flow in the undercooled region and the temperature of the nose of the TTT curve. Using this relation together with the simplified model, it now seems possible to predict cooling rates using only the liquidus temperature, glass transition temperature, and heat of fusion.
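
    The nose-based relation described above reduces to a simple estimate: the critical cooling rate is roughly the undercooling from the liquidus to the TTT-curve nose divided by the time at the nose. A sketch with hypothetical values (not the paper's data):

```python
def critical_cooling_rate(t_liquidus, t_nose, time_nose):
    """Nose-method estimate: the melt must traverse the undercooling
    from the liquidus to the TTT-curve nose within the nose time to
    avoid detectable crystallization (K and s give K/s)."""
    return (t_liquidus - t_nose) / time_nose

# Hypothetical glass-former: liquidus 1300 K, TTT nose at 1000 K after 2 s
rc = critical_cooling_rate(1300.0, 1000.0, 2.0)  # 150.0 K/s
```

    The simplified model in the abstract refines this by tying the nose position to the liquidus temperature, glass transition temperature, and heat of fusion.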

  11. Class-specific Error Bounds for Ensemble Classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prenger, R; Lemmond, T; Varshney, K

    2009-10-06

    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.

  12. An extended CFD model to predict the pumping curve in low pressure plasma etch chamber

    NASA Astrophysics Data System (ADS)

    Zhou, Ning; Wu, Yuanhao; Han, Wenbin; Pan, Shaowu

    2014-12-01

    A continuum-based CFD model is extended with a slip-wall approximation and a rarefaction effect on viscosity, in an attempt to predict the pumping flow characteristics in low-pressure plasma etch chambers. The flow regime inside the chamber ranges from slip flow (Kn ≈ 0.01) up to free molecular flow (Kn = 10). The momentum accommodation coefficient and the parameters for the Kn-modified viscosity are first calibrated against one measured pumping curve. The validity of the calibrated CFD model is then demonstrated by comparison with additional pumping curves measured in chambers of different geometry configurations. A more detailed comparison against a DSMC model for flow conductance over slits with contraction and expansion sections is also discussed.

  13. A type N radiation field solution with Λ <0 in a curved space-time and closed time-like curves

    NASA Astrophysics Data System (ADS)

    Ahmed, Faizuddin

    2018-05-01

    An anti-de Sitter background four-dimensional type N solution of Einstein's field equations is presented. The matter-energy content, a pure radiation field, satisfies the null energy condition (NEC), and the metric is free from curvature divergence. In addition, the metric admits a non-expanding, non-twisting and shear-free geodesic null congruence which is not covariantly constant. The space-time admits closed time-like curves which appear after a certain instant of time in a causally well-behaved manner. Finally, the physical interpretation of the solution, based on the study of the geodesic deviation equation, is analyzed.

  14. Using Evolved Fuzzy Neural Networks for Injury Detection from Isokinetic Curves

    NASA Astrophysics Data System (ADS)

    Couchet, Jorge; Font, José María; Manrique, Daniel

    In this paper we propose an evolutionary fuzzy neural network system for extracting knowledge from a set of time series containing medical information. The series represent isokinetic curves obtained from a group of patients exercising the knee joint on an isokinetic dynamometer. The system has two parts: i) it analyses the time series input in order to generate a simplified model of an isokinetic curve; ii) it applies a grammar-guided genetic program to obtain a knowledge base represented by a fuzzy neural network. Once the knowledge base has been generated, the system is able to perform knee injury detection. The results suggest that evolved fuzzy neural networks perform better than non-evolutionary approaches and have a high accuracy rate during both the training and testing phases. Additionally, they are robust, as the system is able to self-adapt to changes in the problem without human intervention.

  15. New Risk Curves for NHTSA's Brain Injury Criterion (BrIC): Derivations and Assessments.

    PubMed

    Laituri, Tony R; Henry, Scott; Pline, Kevin; Li, Guosong; Frankstein, Michael; Weerappuli, Para

    2016-11-01

    The National Highway Traffic Safety Administration (NHTSA) recently published a Request for Comments regarding a potential upgrade to the US New Car Assessment Program (US NCAP), a star-rating program pertaining to vehicle crashworthiness. Therein, NHTSA (a) cited two metrics for assessing head risk: Head Injury Criterion (HIC15) and Brain Injury Criterion (BrIC), and (b) proposed to conduct risk assessment via its risk curves for those metrics, but did not prescribe a specific method for applying them. Recent studies, however, have indicated that the NHTSA risk curves for BrIC significantly overstate field-based head injury rates. Therefore, in the present three-part study, a new set of BrIC-based risk curves was derived, an overarching head risk equation involving risk curves for both BrIC and HIC15 was assessed, and some additional candidate-predictor-variable assessments were conducted. Part 1 pertained to the derivation. Specifically, data were pooled from various sources: Navy volunteers, amateur boxers, professional football players, simple-fall subjects, and racecar drivers. In total, there were 4,501 cases, with brain injury reported in 63. Injury outcomes were approximated on the Abbreviated Injury Scale (AIS). The statistical analysis was conducted using ordinal logistic regression (OLR), such that the various levels of brain injury were cast as a function of BrIC. The resulting risk curves, with Goodman-Kruskal Gamma = 0.83, were significantly different from those of NHTSA. Part 2 pertained to the assessment relative to field data. Two perspectives were considered: "aggregate" (ΔV=0-56 km/h) and "point" (high-speed, regulatory focus). For the aggregate perspective, the new risk curves for BrIC were applied in field models pertaining to belted, mid-size, adult drivers in 11-1 o'clock, full-engagement frontal crashes in the National Automotive Sampling System (NASS, 1993-2014 calendar years). For the point perspective, BrIC data from tests were used. The assessments were conducted for minor, moderate, and serious injury levels for both Newer Vehicles (airbag-fitted) and Older Vehicles (not airbag-fitted). Curve-based injury rates and NASS-based injury rates were compared via average percent difference (AvgPctDiff). The new risk curves demonstrated significantly better fidelity than those from NHTSA. For example, for the aggregate perspective (n=12 assessments), the results were as follows: AvgPctDiff (present risk curves) = +67 versus AvgPctDiff (NHTSA risk curves) = +9378. Part 2 also contained a more comprehensive assessment. Specifically, BrIC-based risk curves were used to estimate brain-related injury probabilities, HIC15-based risk curves from NHTSA were used to estimate bone/other injury probabilities, and the maximum of the two resulting probabilities was used to represent the attendant head-injury probabilities. (Those HIC15-based risk curves yielded AvgPctDiff=+85 for that application.) Subject to the resulting 21 assessments, similar results were observed: AvgPctDiff (present risk curves) = +42 versus AvgPctDiff (NHTSA risk curves) = +5783. Therefore, based on the results from Part 2, if the existing BrIC metric is to be applied by NHTSA in vehicle assessment, we recommend that the corresponding risk curves derived in the present study be considered. Part 3 pertained to the assessment of various other candidate brain-injury metrics. Specifically, Parts 1 and 2 were revisited for HIC15, translational acceleration (TA), rotational acceleration (RA), rotational velocity (RV), and a different rotational brain injury criterion from NHTSA (BRIC). The rank-ordered results for the 21 assessments for each metric were as follows: RA, HIC15, BRIC, TA, BrIC, and RV. Therefore, of the six studied sets of OLR-based risk curves, the set for rotational acceleration demonstrated the best performance relative to NASS.

  16. The Phase Curve Survey of the Irregular Saturnian Satellites: A Possible Method of Physical Classification

    NASA Technical Reports Server (NTRS)

    Bauer, James M.; Grav, Tommy; Buratti, Bonnie J.; Hicks, Michael D.

    2006-01-01

    During its 2005 January opposition, the saturnian system could be viewed at an unusually low phase angle. We surveyed a subset of Saturn's irregular satellites to obtain their true opposition magnitudes, or nearly so, down to phase angle values of 0.01 deg. Combining our data taken at the Palomar 200-inch and Cerro Tololo Inter-American Observatory's 4-m Blanco telescope with those in the literature, we present the first phase curves for nearly half the irregular satellites originally reported by Gladman et al. [2001. Nature 412, 163-166], including Paaliaq (SXX), Siarnaq (SXXIX), Tarvos (SXXI), Ijiraq (SXXII), Albiorix (SXVI), and additionally Phoebe's narrowest angle brightness measured to date. We find centaur-like steepness in the phase curves or opposition surges in most cases with the notable exception of three, Albiorix and Tarvos, which are suspected to be of similar origin based on dynamical arguments, and Siarnaq.

  17. Additive Mixing and Conformal Coating of Noniridescent Structural Colors with Robust Mechanical Properties Fabricated by Atomization Deposition.

    PubMed

    Li, Qingsong; Zhang, Yafeng; Shi, Lei; Qiu, Huihui; Zhang, Suming; Qi, Ning; Hu, Jianchen; Yuan, Wei; Zhang, Xiaohua; Zhang, Ke-Qin

    2018-04-24

    Artificial structural colors based on short-range-ordered amorphous photonic structures (APSs) have attracted great scientific and industrial interest in recent years. However, the previously reported methods of self-assembling colloidal nanoparticles lack fine control of the APS coating and fixation on substrates and poorly realize three-dimensional (3D) conformal coatings for objects with irregular or highly curved surfaces. In this paper, atomization deposition of silica colloidal nanoparticles with poly(vinyl alcohol) as the additive is proposed to solve the above problems. By finely controlling the thicknesses of APS coatings, additive mixing of noniridescent structural colors is easily realized. Based on the intrinsic omnidirectional feature of atomization, a one-step 3D homogeneous conformal coating is also readily realized on various irregular or highly curved surfaces, including papers, resins, metal plates, ceramics, and flexible silk fabrics. The vivid coatings on silk fabrics by atomization deposition possess robust mechanical properties, which are confirmed by rubbing and laundering tests, showing great potential in developing an environmentally friendly coloring technique in the textile industry.

  18. Integrated analysis on static/dynamic aeroelasticity of curved panels based on a modified local piston theory

    NASA Astrophysics Data System (ADS)

    Yang, Zhichun; Zhou, Jian; Gu, Yingsong

    2014-10-01

    A flow-field-modified local piston theory, applied to the integrated analysis of the static/dynamic aeroelastic behavior of curved panels, is proposed in this paper. The local flow field parameters used in the modification are obtained by a CFD technique, which has the advantage of simulating the steady flow field accurately. This flow-field-modified local piston theory for aerodynamic loading is applied to the analysis of static aeroelastic deformation and flutter stability of curved panels in hypersonic flow. In addition, comparisons are made between results obtained using the present method and the curvature-modified method. When the curvature of the panel is relatively small, the static aeroelastic deformations and flutter stability boundaries obtained by the two methods differ little, while for curved panels with larger curvatures, the static aeroelastic deformation obtained by the present method is larger and the flutter stability boundary smaller than those obtained by the curvature-modified method, and the discrepancy increases with increasing panel curvature. Therefore, the existing curvature-modified method is non-conservative compared to the proposed flow-field-modified method from the standpoint of hypersonic flight vehicle safety, and the proposed flow-field-modified local piston theory for curved panels enlarges the application range of piston theory.

  19. Can hydraulic-modelled rating curves reduce uncertainty in high flow data?

    NASA Astrophysics Data System (ADS)

    Westerberg, Ida; Lam, Norris; Lyon, Steve W.

    2017-04-01

    Flood risk assessments rely on accurate discharge data records. Establishing a reliable rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions. This means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. In this study we compared the uncertainty in discharge data that resulted from these two rating curve modelling approaches. We applied both methods to a Swedish catchment, accounting for uncertainties in the stage-discharge gauging and water-surface slope data for the hydraulic model and in the stage-discharge gauging data and rating-curve parameters for the traditional method. We focused our analyses on high-flow uncertainty and the factors that could reduce this uncertainty. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken. First results show that the hydraulically-modelled rating curves were more sensitive to uncertainties in the calibration measurements of discharge than of water-surface slope. The uncertainty of the hydraulically-modelled rating curves was lowest within the range of the three calibration stage-discharge gaugings (i.e. between median and two-times median flow), whereas uncertainties were higher outside of this range. For instance, at the highest observed stage of the 24-year stage record, the 90% uncertainty band was -15% to +40% of the official rating curve. Additional gaugings at high flows (i.e. four to five times median flow) would likely substantially reduce those uncertainties.
These first results show the potential of the hydraulically-modelled curves, particularly where the calibration gaugings are of high quality and cover a wide range of flow conditions.
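
    The traditional rating curve referred to above is commonly the power law Q = a(h - h0)^b fitted to stage-discharge gaugings. A minimal sketch on synthetic gaugings (h0 is assumed known here; the study's actual uncertainty analysis is far more involved):

```python
import numpy as np

def fit_rating_curve(stage, discharge, h0=0.0):
    """Fit Q = a * (h - h0)**b by linear regression in log space.
    The cease-to-flow stage h0 is taken as known; in practice it is
    often estimated as a third parameter."""
    x = np.log(np.asarray(stage) - h0)
    y = np.log(np.asarray(discharge))
    b, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), b

# Synthetic gaugings drawn from a known curve Q = 2.5 * h**1.6
stage = np.array([0.4, 0.8, 1.2, 2.0, 3.1])
discharge = 2.5 * stage ** 1.6
a, b = fit_rating_curve(stage, discharge)   # recovers a = 2.5, b = 1.6
```

    Extrapolating such a fit beyond the gauged range is exactly where the abstract reports the largest uncertainties.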

  20. Analyses of some exoplanets' transits and transit timing variations

    NASA Astrophysics Data System (ADS)

    Püsküllü, Çağlar; Soydugan, Faruk

    2017-02-01

    We present solutions of the transit light curves and transit timing variation (TTV) analyses of the exoplanets HAT-P-5b, HAT-P-9b and HAT-P-25b. Transit light curves were collected at Çanakkale Onsekiz Mart University Observatory and TUBITAK National Observatory. The models were produced by the WINFITTER program, and stellar, planetary and orbital properties were obtained and discussed. We give new transit times and generate TTVs from them by appending additional data from the Exoplanet Transit Database (ETD). Significant signals in the TTVs were also investigated.

  1. The first eclipsing binary catalogue from the MOA-II data base

    NASA Astrophysics Data System (ADS)

    Li, M. C. A.; Rattenbury, N. J.; Bond, I. A.; Sumi, T.; Bennett, D. P.; Koshimoto, N.; Abe, F.; Asakura, Y.; Barry, R.; Bhattacharya, A.; Donachie, M.; Evans, P.; Freeman, M.; Fukui, A.; Hirao, Y.; Itow, Y.; Ling, C. H.; Masuda, K.; Matsubara, Y.; Muraki, Y.; Nagakane, M.; Ohnishi, K.; Saito, To.; Sharan, A.; Sullivan, D. J.; Suzuki, D.; Tristram, P. J.; Yonehara, A.

    2017-09-01

    We present the first catalogue of eclipsing binaries in two MOA (Microlensing Observations in Astrophysics) fields towards the Galactic bulge, in which over 8000 candidates, mostly contact and semidetached binaries of periods <1 d, were identified. In this paper, the light curves of a small number of interesting candidates, including eccentric binaries, binaries with noteworthy phase modulations and eclipsing RS Canum Venaticorum type stars, are shown as examples. In addition, we identified three triple object candidates by detecting the light-travel-time effect in their eclipse time variation curves.

  2. Hydration Characteristics of Low-Heat Cement Substituted by Fly Ash and Limestone Powder.

    PubMed

    Kim, Si-Jun; Yang, Keun-Hyeok; Moon, Gyu-Don

    2015-09-01

    This study proposed a new binder as an alternative to conventional cement to reduce the heat of hydration in mass concrete elements. As the main cementitious material, low-heat cement (LHC) was considered, and fly ash (FA), modified FA (MFA, ground by vibrator mill), and limestone powder (LP) were used as partial replacements of LHC. The addition of FA delayed the induction period of the hydration heat curve, and the maximum heat flow value (q_max) increased compared with the LHC-based binder. As the proportion and fineness of the FA increased, the induction period of the hydration heat curve was extended and q_max increased. The production of Ca(OH)₂ was independent of the addition of FA or MFA up to an age of 7 days, beyond which the amount of Ca(OH)₂ gradually decreased owing to their pozzolanic reaction. In the case of LP used as a supplementary cementitious material, the induction period of the hydration heat curve was reduced by comparison with the LHC-based binder, and monocarboaluminate was observed as a hydration product. The average pore size measured at an age of 28 days was smaller for LHC with FA or MFA than for 100% LHC.

  3. Utilization of curve offsets in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Haseltalab, Vahid; Yaman, Ulas; Dolen, Melik

    2018-05-01

    Curve offsets are utilized in different fields of engineering and science. Additive manufacturing, which has lately become an explicit requirement in the manufacturing industry, utilizes curve offsets widely. One use of offsetting is scaling, which is required if there is shrinkage after fabrication or if the surface quality of the resulting part is unacceptable, making some post-processing indispensable. The major application of curve offsets in additive manufacturing processes, however, is generating head trajectories. In a point-wise AM process, a correct tool-path in each layer can substantially reduce costs and increase the surface quality of the fabricated parts. In this study, different curve offset generation algorithms are analyzed through test cases to show their capabilities and disadvantages, and improvements on their drawbacks are suggested.
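
    As a toy version of the head-trajectory use case, the sketch below (not one of the algorithms analyzed in the paper) offsets a simple counter-clockwise polygon by mitering each vertex along the bisector of its two outward edge normals. It deliberately ignores self-intersections, sharp-corner blowup, and contour merging/splitting, which are exactly the drawbacks real offset algorithms must handle:

```python
import math

def offset_polygon(vertices, d):
    """Naive miter offset of a simple CCW polygon: move each vertex so
    that every edge shifts outward by distance d (inward for d < 0)."""
    def unit_normal(a, b):
        ex, ey = b[0] - a[0], b[1] - a[1]
        length = math.hypot(ex, ey)
        return ey / length, -ex / length   # outward for CCW winding
    n = len(vertices)
    result = []
    for i in range(n):
        n1 = unit_normal(vertices[i - 1], vertices[i])
        n2 = unit_normal(vertices[i], vertices[(i + 1) % n])
        mx, my = n1[0] + n2[0], n1[1] + n2[1]
        m2 = mx * mx + my * my
        # (n1 + n2) * 2d / |n1 + n2|^2 moves the vertex so that each
        # incident edge is displaced by exactly d along its normal
        result.append((vertices[i][0] + 2.0 * d * mx / m2,
                       vertices[i][1] + 2.0 * d * my / m2))
    return result

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
outset = offset_polygon(square, 0.5)   # unit square grown by 0.5 per side
```

    Repeating the call with progressively more negative d yields the nested inward contours a point-wise AM head trajectory would follow.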

  4. Compression of contour data through exploiting curve-to-curve dependence

    NASA Technical Reports Server (NTRS)

    Yalabik, N.; Cooper, D. B.

    1975-01-01

    An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. An existing approach to along-curve compression based on cubic spline approximation is taken and extended by investigating the additional compressibility achievable through exploiting curve-to-curve structure. One of the models under investigation is reported on.
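
    Along-curve compression with smoothing cubic splines, the starting point this work extends, can be sketched with SciPy. This is our illustration, not the 1975 implementation; the synthetic contour and smoothing factor are made up.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic closed contour: 400 points (800 stored coordinates).
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
rng = np.random.default_rng(0)
r = 1.0 + 0.2 * np.cos(3.0 * t) + rng.normal(0.0, 0.005, t.size)
x, y = r * np.cos(t), r * np.sin(t)

# Smoothing cubic spline: the factor s trades fidelity for fewer
# knots, i.e. fewer coefficients to store than raw samples.
tck, u = splprep([x, y], s=0.05)
knots, coeffs, degree = tck
n_coeffs = sum(c.size for c in coeffs)

# Reconstruct at the fitted parameter values and measure error.
xr, yr = splev(u, tck)
rmse = np.sqrt(np.mean((xr - x) ** 2 + (yr - y) ** 2))
print(n_coeffs, rmse)  # far fewer than 800 numbers, small error
```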

  5. Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity

    PubMed Central

    Wittenberg, Leah A.; Jonsson, Nina J.; Chan, RV Paul; Chiang, Michael F.

    2014-01-01

    Presence of plus disease in retinopathy of prematurity (ROP) is an important criterion for identifying treatment-requiring ROP. Plus disease is defined by a standard published photograph selected over 20 years ago by expert consensus. However, diagnosis of plus disease has been shown to be subjective and qualitative. Computer-based image analysis, using quantitative methods, has potential to improve the objectivity of plus disease diagnosis. The objective was to review the published literature involving computer-based image analysis for ROP diagnosis. The PubMed and Cochrane library databases were searched for the keywords “retinopathy of prematurity” AND “image analysis” AND/OR “plus disease.” Reference lists of retrieved articles were searched to identify additional relevant studies. All relevant English-language studies were reviewed. There are four main computer-based systems, ROPtool (AU ROC curve, plus tortuosity 0.95, plus dilation 0.87), RISA (AU ROC curve, arteriolar TI 0.71, venular diameter 0.82), Vessel Map (AU ROC curve, arteriolar dilation 0.75, venular dilation 0.96), and CAIAR (AU ROC curve, arteriole tortuosity 0.92, venular dilation 0.91), attempting to objectively analyze vessel tortuosity and dilation in plus disease in ROP. Some of them show promise for identification of plus disease using quantitative methods. This has potential to improve the diagnosis of plus disease, and may contribute to the management of ROP using both traditional binocular indirect ophthalmoscopy and image-based telemedicine approaches. PMID:21366159

  6. Excess junction current of silicon solar cells

    NASA Technical Reports Server (NTRS)

    Wang, E. Y.; Legge, R. N.; Christidis, N.

    1973-01-01

    The current-voltage characteristics of n+-p silicon solar cells with 0.1, 1.0, 2.0, and 10 ohm-cm p-type base materials have been examined in detail. In addition to the usual I-V measurements, we have studied the temperature dependence of the slope of the I-V curve at the origin by the lock-in technique. The excess junction current coefficient (Iq) deduced from the slope at the origin depends on the square root of the intrinsic carrier concentration. The Iq obtained from I-V curve fitting over the entire forward bias region at various temperatures shows the same temperature dependence. This result, in addition to the presence of an aging effect, suggests that the surface channel effect is the dominant cause of the excess junction current.
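
    A two-exponential junction model, an ideal diffusion term plus an excess-current term with an ideality factor, gives a slope at the origin that is linear in Iq, which is why the lock-in slope measurement isolates the excess current. A sketch with illustrative parameter values (not the paper's data):

```python
import numpy as np

K_B_OVER_Q = 8.617e-5  # Boltzmann constant / electron charge, V/K

def junction_current(v, i0, iq, a, temp):
    """Two-exponential diode model (illustrative): ideal diffusion
    term plus an excess-current term with ideality factor a."""
    vt = K_B_OVER_Q * temp
    return i0 * np.expm1(v / vt) + iq * np.expm1(v / (a * vt))

def slope_at_origin(i0, iq, a, temp):
    # dI/dV at V = 0 is I0/Vt + Iq/(a*Vt); when Iq dominates,
    # the slope at the origin tracks the excess current.
    vt = K_B_OVER_Q * temp
    return i0 / vt + iq / (a * vt)

print(slope_at_origin(1e-12, 1e-8, 2.0, 300.0))
```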

  7. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE PAGES

    An, Ke; Yuan, Lang; Dial, Laura; ...

    2017-09-11

    Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during fabrication. Limited data are currently available for iterating through process conditions and design and, in particular, for validating numerical models to accelerate process certification. In this work, residual stresses in a curved thin-walled structure, made of the Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction, without measuring the stress-free lattices, along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite-element-based numerical model. The simulated and measured stresses were found to be in good quantitative agreement. The validated simplified simulation methodology will allow residual stresses in more complex structures to be assessed and manufacturing cycle time to be significantly reduced.

  8. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Ke; Yuan, Lang; Dial, Laura

    Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during fabrication. Limited data are currently available for iterating through process conditions and design and, in particular, for validating numerical models to accelerate process certification. In this work, residual stresses in a curved thin-walled structure, made of the Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction, without measuring the stress-free lattices, along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite-element-based numerical model. The simulated and measured stresses were found to be in good quantitative agreement. The validated simplified simulation methodology will allow residual stresses in more complex structures to be assessed and manufacturing cycle time to be significantly reduced.

  9. Using GAMM to examine inter-individual heterogeneity in thermal performance curves for Natrix natrix indicates bet hedging strategy by mothers.

    PubMed

    Vickers, Mathew J; Aubret, Fabien; Coulon, Aurélie

    2017-01-01

    The thermal performance curve (TPC) illustrates the dependence on body (and therefore environmental) temperature of many fitness-related aspects of ectotherm ecology and biology, including foraging, growth, predator avoidance, and reproduction. The typical thermal performance curve model is linear in its parameters despite the well-known, strong non-linearity of the response of performance to temperature. In addition, it is usual to consider a single model based on few individuals as descriptive of a species-level response to temperature. To overcome these issues, we used generalized additive mixed modeling (GAMM) to estimate thermal performance curves for 73 individual hatchling Natrix natrix grass snakes from seven clutches, taking advantage of the structure of GAMM to demonstrate that almost 16% of the deviance in thermal performance curves is attributable to inter-individual variation, while only 1.3% is attributable to variation amongst clutches. GAMM allows precise estimation of curve characteristics, which we used to test hypotheses on tradeoffs thought to constrain the thermal performance curve: hotter is better, the specialist-generalist tradeoff, and resource allocation/acquisition. We observed a negative relationship between maximum performance and performance breadth, indicating a specialist-generalist tradeoff, and a positive relationship between thermal optimum and maximum performance, suggesting "hotter is better". There was a significant difference among matrilines in the relationship between area under the curve and maximum performance, a relationship that is an indicator of evenness in acquisition or allocation of resources. As we used unfed hatchlings, the observed matriline effect indicates divergent breeding strategies among mothers: some mothers provisioned eggs unequally, resulting in some offspring performing better than others, while other mothers provisioned the eggs more evenly, resulting in even performance throughout the clutch. This observation is reminiscent of bet-hedging strategies, and implies the possibility for intra-clutch variability in the TPCs to buffer N. natrix against unpredictable environmental variability. Copyright © 2016 Elsevier Ltd. All rights reserved.
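
    The curve characteristics the authors extract (thermal optimum, maximum performance, breadth) can be illustrated with a parametric stand-in. The GAMM smooths themselves are not reproduced here; the Gaussian form, temperature range, and individual-level noise below are all our assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_tpc(temp, p_max, t_opt, breadth):
    """Symmetric Gaussian stand-in for a thermal performance
    curve; the paper fits nonparametric GAMM smooths, so this
    parametric form only illustrates extracting the curve
    characteristics (Pmax, Topt, breadth)."""
    return p_max * np.exp(-0.5 * ((temp - t_opt) / breadth) ** 2)

rng = np.random.default_rng(1)
temps = np.linspace(18.0, 36.0, 10)

# Synthetic "hatchlings": individuals vary around shared means,
# mimicking the inter-individual heterogeneity in the study.
fits = []
for _ in range(20):
    true = (1.0 + 0.1 * rng.normal(), 29.0 + rng.normal(), 4.0 + 0.3 * rng.normal())
    perf = gaussian_tpc(temps, *true) + rng.normal(0.0, 0.02, temps.size)
    popt, _ = curve_fit(gaussian_tpc, temps, perf, p0=(1.0, 29.0, 4.0))
    fits.append(popt)

p_max_hat, t_opt_hat, breadth_hat = np.array(fits).T
# In the study, a specialist-generalist tradeoff appears as a
# negative correlation between maximum performance and breadth;
# these synthetic individuals are uncorrelated by construction.
print(np.corrcoef(p_max_hat, breadth_hat)[0, 1])
```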

  10. Fission yield calculation using toy model based on Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jubaidah, E-mail: jubaidah@student.itb.ac.id; Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221; Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id

    2015-09-30

    The toy model is a new approximation for predicting fission yield distributions. It assumes the nucleus to be an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments; these two fission fragments are called the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (Rc), the means of the left and right curves (μL, μR), and the deviations of the left and right curves (σL, σR). The fission yield distribution is analysed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90

  11. Fission yield calculation using toy model based on Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Jubaidah, Kurniadi, Rizal

    2015-09-01

    The toy model is a new approximation for predicting fission yield distributions. It assumes the nucleus to be an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nucleus properties. In this research, the toy nucleons are only influenced by a central force. A heavy toy nucleus induced by a toy nucleon will split into two fragments; these two fission fragments are called the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (Rc), the means of the left and right curves (μL, μR), and the deviations of the left and right curves (σL, σR). The fission yield distribution is analysed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
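
    The two-Gaussian picture of the toy model can be sampled directly by Monte Carlo. The peak locations and widths below are illustrative (roughly typical of U-235 thermal fission), not the paper's fitted values:

```python
import numpy as np

def sample_fission_yields(n, mu_l, mu_r, sigma_l, sigma_r, rng):
    """Draw fragment masses from two intersecting Gaussians (the
    toy model's picture of asymmetric fission): each event picks
    the light or heavy branch with equal probability."""
    heavy = rng.random(n) < 0.5
    return np.where(
        heavy,
        rng.normal(mu_r, sigma_r, n),
        rng.normal(mu_l, sigma_l, n),
    )

rng = np.random.default_rng(42)
# Illustrative parameters: light peak near A = 95, heavy peak
# near A = 140, with equal widths.
masses = sample_fission_yields(100_000, 95.0, 140.0, 5.0, 5.0, rng)
hist, edges = np.histogram(masses, bins=60)
print(masses.mean())  # midway between the two peaks, ~117.5
```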

  12. Blazhko Effect

    NASA Technical Reports Server (NTRS)

    Teays, Terry

    1996-01-01

    The cause of the Blazhko effect, the long-term modulation of the light and radial velocity curves of some RR Lyr stars, is still not understood. The observational characteristics of the Blazhko effect are discussed. Some preliminary results are presented from two recent campaigns to observe RR Lyr, using the International Ultraviolet Explorer along with ground-based spectroscopy and photometry, throughout a pulsation cycle, at a variety of Blazhko phases. A set of ultraviolet light curves has been generated from low dispersion IUE spectra. In addition, the (visual) light curves from IUE's Fine Error Sensor are analyzed using the Fourier decomposition technique. The values of the parameters Psi(sub 21) and R(sub 21) at different Blazhko phases of RR Lyr span the range of values found for non-Blazhko variables of similar period.
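
    Fourier decomposition of a light curve, from which quantities such as the amplitude ratio R21 = A2/A1 and the phase differences are formed, reduces to a linear least-squares fit. The sketch below is our implementation on a synthetic signal; note the paper's Psi21 follows a specific sine-series phase convention not reproduced exactly here.

```python
import numpy as np

def fourier_decompose(time, mag, period, order=4):
    """Least-squares Fourier fit m(t) = A0 + sum_k [a_k cos(k w t)
    + b_k sin(k w t)]; returns amplitudes A_k and phases phi_k,
    from which ratios like R21 = A2/A1 and phase differences like
    phi2 - 2*phi1 can be formed."""
    w = 2.0 * np.pi / period
    cols = [np.ones_like(time)]
    for k in range(1, order + 1):
        cols += [np.cos(k * w * time), np.sin(k * w * time)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), mag, rcond=None)
    a, b = coef[1::2], coef[2::2]
    return np.hypot(a, b), np.arctan2(b, a)

# Synthetic RR Lyrae-like curve with known harmonic content.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 300))
period = 0.57
w = 2.0 * np.pi / period
mag = 1.0 * np.sin(w * t) + 0.4 * np.sin(2.0 * w * t + 0.8)
amp, phase = fourier_decompose(t, mag, period)
print(amp[1] / amp[0])  # R21 = 0.4, recovering the input
```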

  13. Training, Simulation, the Learning Curve, and How to Reduce Complications in Urology.

    PubMed

    Brunckhorst, Oliver; Volpe, Alessandro; van der Poel, Henk; Mottrie, Alexander; Ahmed, Kamran

    2016-04-01

    Urology is very much at the forefront of minimally invasive surgery. These procedures pose additional learning challenges and have a steep initial learning curve. Training and assessment methods in surgical specialties such as urology are known to lack clear structure and often rely on the differing operative flow experienced by individuals and institutions. This article aims to assess current urology training modalities, to identify the role of simulation within urology, to define and identify the learning curves for various urologic procedures, and to discuss ways to decrease complications in the context of training. A narrative review of the literature was conducted through December 2015 using the PubMed/Medline, Embase, and Cochrane Library databases. Evidence of the validity of training methods in urology includes observation of a procedure, mentorship and fellowship, e-learning, and simulation-based training. Learning curves for various urologic procedures have been recommended based on the available literature. The importance of structured training pathways is highlighted, with integration of modular training to ensure patient safety. Valid training pathways are available in urology. The aim in urology training should be to combine all of the available evidence to produce procedure-specific curricula that utilise the vast array of training methods available to ensure that we continue to improve patient outcomes and reduce complications. The current evidence for different training methods available in urology, including simulation-based training, was reviewed, and the learning curves for various urologic procedures were critically analysed. Based on the evidence, future pathways for urology curricula have been suggested to ensure that patient safety is improved. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  14. Decision curve analysis assessing the clinical benefit of NMP22 in the detection of bladder cancer: secondary analysis of a prospective trial.

    PubMed

    Barbieri, Christopher E; Cha, Eugene K; Chromecki, Thomas F; Dunning, Allison; Lotan, Yair; Svatek, Robert S; Scherr, Douglas S; Karakiewicz, Pierre I; Sun, Maxine; Mazumdar, Madhu; Shariat, Shahrokh F

    2012-03-01

    • To employ decision curve analysis to determine the impact of nuclear matrix protein 22 (NMP22) on clinical decision making in the detection of bladder cancer using data from a prospective trial.
    • The study included 1303 patients at risk for bladder cancer who underwent cystoscopy, urine cytology and measurement of urinary NMP22 levels.
    • We constructed several prediction models to estimate risk of bladder cancer. The base model was generated using patient characteristics (age, gender, race, smoking and haematuria); cytology and NMP22 were added to the base model to determine effects on predictive accuracy.
    • Clinical net benefit was calculated by summing the benefits and subtracting the harms and weighting these by the threshold probability at which a patient or clinician would opt for cystoscopy.
    • In all, 72 patients were found to have bladder cancer (5.5%). In univariate analyses, NMP22 was the strongest predictor of bladder cancer presence (predictive accuracy 71.3%), followed by age (67.5%) and cytology (64.3%).
    • In multivariable prediction models, NMP22 improved the predictive accuracy of the base model by 8.2% (area under the curve 70.2-78.4%) and of the base model plus cytology by 4.2% (area under the curve 75.9-80.1%).
    • Decision curve analysis revealed that adding NMP22 to other models increased clinical benefit, particularly at higher threshold probabilities.
    • NMP22 is a strong, independent predictor of bladder cancer.
    • Addition of NMP22 improves the accuracy of standard predictors by a statistically and clinically significant margin.
    • Decision curve analysis suggests that integration of NMP22 into clinical decision making helps avoid unnecessary cystoscopies, with minimal increased risk of missing a cancer. © 2011 THE AUTHORS. BJU INTERNATIONAL © 2011 BJU INTERNATIONAL.
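
    The net-benefit quantity behind a decision curve is simple to compute: the true-positive rate minus the false-positive rate weighted by the odds of the threshold probability. Below is a generic sketch on synthetic risks at roughly the trial's prevalence, not NMP22 data.

```python
import numpy as np

def net_benefit(y_true, risk, thresholds):
    """Decision-curve net benefit at each threshold probability
    pt: (TP - FP * pt/(1-pt)) / n, the quantity plotted in a
    decision curve analysis."""
    y = np.asarray(y_true, bool)
    risk = np.asarray(risk, float)
    n = y.size
    out = []
    for pt in thresholds:
        act = risk >= pt          # patients who would get cystoscopy
        tp = np.sum(act & y)
        fp = np.sum(act & ~y)
        out.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(out)

# Tiny illustration with made-up risk scores at ~5% prevalence,
# similar to the trial's 5.5% cancer rate.
rng = np.random.default_rng(3)
y = rng.random(1000) < 0.05
risk = np.clip(0.05 + 0.3 * y + rng.normal(0.0, 0.05, 1000), 0.0, 1.0)
nb = net_benefit(y, risk, thresholds=[0.05, 0.10, 0.20])
print(nb)
```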

  15. Radiation and annealing response of WWER 440 beltline welding seams

    NASA Astrophysics Data System (ADS)

    Viehrig, Hans-Werner; Houska, Mario; Altstadt, Eberhard

    2015-01-01

    The focus of this paper is on the irradiation response and the effect of thermal annealing in weld materials extracted from decommissioned WWER 440 reactor pressure vessels of the nuclear power plant Greifswald. The characterisation is based on the measurement of the hardness, the yield stress, the Master Curve reference temperature, T0, and the Charpy-V transition temperature through the thickness of multi-layer beltline welding seams in the irradiated and the thermally annealed condition. Additionally, the weld bead structure was characterised by light microscopic studies. We observed a large variation in the through-thickness T0 values in the irradiated as well as in the thermally annealed condition. The T0 values measured with the T-S-oriented Charpy size SE(B) specimens cut from different thickness locations of the multi-layer welding seams strongly depend on the intrinsic weld bead structure along the crack tip. The Master Curve, T0, and Charpy-V, TT47J, based ductile-to-brittle transition temperature progressions through the thickness of the multi-layer welding seam do not correspond to the forecast according to the Russian code. In general, the fracture toughness values at cleavage failure, KJc, measured on SE(B) specimens from the irradiated and large-scale thermally annealed beltline welding seams follow the Master Curve description, but more than the expected number lie outside the curves for 2% and 98% fracture probability. In this case the test standard ASTM E1921 indicates the investigated multi-layer weld metal as not uniform. The multi-modal Master Curve based approach describes the temperature dependence of the specimen size adjusted KJc-1T values well. Thermal annealing at 475 °C for 152 h results in the expected decrease of the hardness and tensile strength and the shift of the Master Curve and Charpy-V based ductile-to-brittle transition temperatures to lower values.
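
    The Master Curve referred to throughout is the ASTM E1921 median toughness curve for 1T specimens, which is indexed entirely by T0. A minimal sketch of how a downward T0 shift (such as the one annealing produces) raises the median toughness at a fixed test temperature; the temperatures and shift below are chosen for illustration:

```python
import numpy as np

def kjc_median(temp_c, t0_c):
    """ASTM E1921 Master Curve median fracture toughness for 1T
    specimens, K_Jc(med) = 30 + 70*exp[0.019*(T - T0)] in
    MPa*sqrt(m); by construction K_Jc(med) = 100 at T = T0."""
    return 30.0 + 70.0 * np.exp(0.019 * (np.asarray(temp_c, float) - t0_c))

# Annealing shifts T0 down; at a fixed test temperature the
# median toughness rises accordingly (a -50 K shift here).
print(kjc_median(-50.0, t0_c=-50.0))   # 100.0 at T = T0
print(kjc_median(-50.0, t0_c=-100.0))  # higher after the shift
```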

  16. Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions

    NASA Astrophysics Data System (ADS)

    De Risi, Raffaele; Goda, Katsuichiro

    2017-08-01

    Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is, the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.
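
    The conventional (empirical) hazard curve described here is just the event rate multiplied by the simulated fraction of events exceeding each intensity level. A sketch with synthetic inundation depths; the lognormal intensities and the 0.1/yr event rate are our stand-ins, and the Bayesian robust fit is not reproduced.

```python
import numpy as np

def empirical_hazard_curve(intensities, rate_per_year, im_grid):
    """Empirical hazard curve from event-based simulations: mean
    annual rate of exceedance = event rate * fraction of simulated
    events whose intensity exceeds each level."""
    im = np.sort(np.asarray(intensities, float))
    frac_exceed = 1.0 - np.searchsorted(im, im_grid, side="right") / im.size
    return rate_per_year * frac_exceed

# Synthetic inundation depths (lognormal, for illustration) from
# tsunamigenic events occurring at 0.1 per year on average.
rng = np.random.default_rng(7)
depths = rng.lognormal(mean=0.0, sigma=0.8, size=5000)
grid = np.array([0.5, 1.0, 2.0, 4.0])
hc = empirical_hazard_curve(depths, 0.1, grid)
print(hc)  # annual exceedance rates, decreasing with depth
```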

  17. A new CFD based non-invasive method for functional diagnosis of coronary stenosis.

    PubMed

    Xie, Xinzhou; Zheng, Minwen; Wen, Didi; Li, Yabing; Xie, Songyun

    2018-03-22

    Accurate functional diagnosis of coronary stenosis is vital for decision making in coronary revascularization. With recent advances in computational fluid dynamics (CFD), fractional flow reserve (FFR) can be derived non-invasively from coronary computed tomography angiography images (FFRCT) for functional measurement of stenosis. However, the accuracy of FFRCT is limited owing to the approximate modeling of maximal hyperemia conditions. To overcome this problem, a new CFD-based non-invasive method is proposed. Instead of modeling the maximal hyperemia condition, a series of boundary conditions is specified and the simulated results are combined to provide a pressure-flow curve for a stenosis. Functional diagnosis of the stenosis is then assessed based on parameters derived from the obtained pressure-flow curve. The proposed method is applied to both idealized and patient-specific models, and validated with invasive FFR in six patients. Results show that additional hemodynamic information about the flow resistances of a stenosis is provided, which cannot be directly obtained from anatomy. Parameters derived from the simulated pressure-flow curve show linear and significant correlations with invasive FFR (r > 0.95, P < 0.05). The proposed method can assess flow resistances through the pressure-flow-curve-derived parameters without modeling the maximal hyperemia condition, which is a new promising approach for non-invasive functional assessment of coronary stenosis.

  18. Sample Skewness as a Statistical Measurement of Neuronal Tuning Sharpness

    PubMed Central

    Samonds, Jason M.; Potetz, Brian R.; Lee, Tai Sing

    2014-01-01

    We propose using the statistical measurement of the sample skewness of the distribution of mean firing rates of a tuning curve to quantify sharpness of tuning. For some features, like binocular disparity, tuning curves are best described by relatively complex and sometimes diverse functions, making it difficult to quantify sharpness with a single function and parameter. Skewness provides a robust nonparametric measure of tuning curve sharpness that is invariant with respect to the mean and variance of the tuning curve and is straightforward to apply to a wide range of tuning, including simple orientation tuning curves and complex object tuning curves that often cannot even be described parametrically. Because skewness does not depend on a specific model or function of tuning, it is especially appealing to cases of sharpening where recurrent interactions among neurons produce sharper tuning curves that deviate in a complex manner from the feedforward function of tuning. Since tuning curves for all neurons are not typically well described by a single parametric function, this model independence additionally allows skewness to be applied to all recorded neurons, maximizing the statistical power of a set of data. We also compare skewness with other nonparametric measures of tuning curve sharpness and selectivity. Compared to these other nonparametric measures tested, skewness is best used for capturing the sharpness of multimodal tuning curves defined by narrow peaks (maxima) and broad valleys (minima). Finally, we provide a more formal definition of sharpness using a shape-based information gain measure and derive and show that skewness is correlated with this definition. PMID:24555451
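
    The measure itself is just the sample skewness of the tuning curve's mean responses across the tested stimuli. A minimal illustration with two toy tuning curves (the shapes are ours, not recorded data):

```python
import numpy as np
from scipy.stats import skew

# Two toy tuning curves over the same stimulus axis: a narrow
# peak over a broad baseline yields a right-skewed distribution
# of responses (sharp tuning), a broad hump does not.
x = np.linspace(-2.0, 2.0, 41)
sharp = np.exp(-0.5 * (x / 0.2) ** 2)   # narrow peak
broad = np.exp(-0.5 * (x / 1.5) ** 2)   # broad hump

# Sample skewness of the distribution of mean responses, as
# proposed in the paper: no parametric tuning model is needed.
print(skew(sharp), skew(broad))  # sharp >> broad
```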

  19. Virus Neutralisation: New Insights from Kinetic Neutralisation Curves

    PubMed Central

    Magnus, Carsten

    2013-01-01

    Antibodies binding to the surface of virions can lead to virus neutralisation. Different theories have been proposed to determine the number of antibodies that must bind to a virion for neutralisation. Early models are based on chemical binding kinetics; applying these models leads to very low estimates of the number of antibodies needed for neutralisation. In contrast, according to the more conceptual approach of stoichiometries in virology, a much higher number of antibodies is required for virus neutralisation. Here, we combine chemical binding kinetics with (virological) stoichiometries to better explain virus neutralisation by antibody binding. This framework is in agreement with published data on the neutralisation of the human immunodeficiency virus. Knowing antibody reaction constants, our model allows us to estimate stoichiometrical parameters from kinetic neutralisation curves. In addition, we can identify important parameters that will make further analysis of kinetic neutralisation curves more valuable in the context of estimating stoichiometries. Our model gives a more subtle explanation of kinetic neutralisation curves in terms of single-hit and multi-hit kinetics. PMID:23468602
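
    The combination described here, binding kinetics feeding a stoichiometric threshold, can be sketched as follows. All rate constants, site counts, and the threshold below are arbitrary illustrative numbers, not estimates for HIV; the point is only the qualitative shape difference between single-hit and multi-hit curves.

```python
import numpy as np
from scipy.stats import binom

def fraction_neutralised(t, k_on, conc, n_sites, n_required):
    """Pseudo-first-order binding kinetics give the per-site
    occupancy theta(t); a virion is neutralised once at least
    n_required of its n_sites are bound (multi-hit model;
    n_required = 1 recovers single-hit kinetics)."""
    theta = 1.0 - np.exp(-k_on * conc * t)
    return binom.sf(n_required - 1, n_sites, theta)

t = np.linspace(0.0, 60.0, 7)  # minutes
single_hit = fraction_neutralised(t, k_on=1e5, conc=1e-7, n_sites=450, n_required=1)
multi_hit = fraction_neutralised(t, k_on=1e5, conc=1e-7, n_sites=450, n_required=200)
# Multi-hit kinetics produce a pronounced shoulder (lag) before
# neutralisation takes off; single-hit curves rise immediately.
print(single_hit)
print(multi_hit)
```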

  20. Dealing with Non-stationarity in Intensity-Frequency-Duration Curve

    NASA Astrophysics Data System (ADS)

    Rengaraju, S.; Rajendran, V.; C T, D.

    2017-12-01

    Extremes like floods and droughts are becoming more frequent and more severe in recent times, which is generally attributed to climate change. One of the main concerns is whether present infrastructures like dams, storm water drainage networks, etc., which were designed under the so-called 'stationary' assumption, are capable of withstanding the expected severe extremes. The stationary assumption considers that extremes do not change with respect to time. However, recent studies have shown that climate change has altered climate extremes both temporally and spatially. Traditionally, the observed non-stationarity in extreme precipitation is incorporated into the extreme value distributions in terms of changing parameters. Nevertheless, this raises the question of which parameter needs to change, i.e. location, scale, or shape, since any one or more of these parameters may vary at a given location. Hence, this study aims to detect the changing parameters to reduce the complexity involved in the development of non-stationary IDF curves and to provide the uncertainty bound of the estimated return level using a Bayesian Differential Evolutionary Monte Carlo (DE-MC) algorithm. Firstly, the extreme precipitation series is extracted using Peaks Over Threshold. Then, the time-varying parameter(s) is (are) detected for the extracted series using Generalized Additive Models for Location, Scale and Shape (GAMLSS). The IDF curve is then constructed using the Generalized Pareto Distribution, incorporating non-stationarity only if the parameter(s) is (are) changing with respect to time; otherwise the IDF curve follows the stationary assumption. Finally, the posterior probability intervals of the estimated return level are computed through the Bayesian DE-MC approach and the non-stationary IDF curve is compared with the stationary one. The results of this study emphasize that the time-varying parameters also change spatially and that IDF curves should incorporate non-stationarity only if there is a change in the parameters, even though there may be significant change in the extreme rainfall series. Our results evoke the importance of updating infrastructure design strategies for the changing climate by adopting non-stationary IDF curves.
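
    For the stationary baseline, the peaks-over-threshold plus Generalized Pareto construction behind a return-level estimate looks like the sketch below. The gamma-distributed "rainfall" and threshold choice are ours; the GAMLSS detection step and the non-stationary, Bayesian parts of the method are not reproduced.

```python
import numpy as np
from scipy.stats import genpareto

# Peaks over threshold: fit a stationary GPD to exceedances and
# convert to a T-year return level.
rng = np.random.default_rng(11)
daily_rain = rng.gamma(shape=0.4, scale=8.0, size=365 * 30)  # 30 synthetic years
threshold = np.quantile(daily_rain, 0.95)
exceed = daily_rain[daily_rain > threshold] - threshold

# Fit shape and scale with the location fixed at zero.
shape, _, scale = genpareto.fit(exceed, floc=0.0)
rate = exceed.size / 30.0           # mean exceedances per year
T = 100.0                           # return period in years
return_level = threshold + genpareto.ppf(1.0 - 1.0 / (rate * T), shape, scale=scale)
print(return_level)
```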

  1. Marginalizing Instrument Systematics in HST WFC3 Transit Light Curves

    NASA Astrophysics Data System (ADS)

    Wakeford, H. R.; Sing, D. K.; Evans, T.; Deming, D.; Mandell, A.

    2016-03-01

    Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) infrared observations at 1.1-1.7 μm probe primarily the H2O absorption band at 1.4 μm, and have provided low-resolution transmission spectra for a wide range of exoplanets. We present the application of marginalization based on Gibson to analyze exoplanet transit light curves obtained from HST WFC3 to better determine important transit parameters such as Rp/R*, which are important for accurate detections of H2O. We approximate the evidence, often referred to as the marginal likelihood, for a grid of systematic models using the Akaike Information Criterion. We then calculate the evidence-based weight assigned to each systematic model and use the information from all tested models to calculate the final marginalized transit parameters for both the band-integrated and spectroscopic light curves to construct the transmission spectrum. We find that a majority of the highest weight models contain a correction for a linear trend in time as well as corrections related to HST orbital phase. We additionally test the dependence on the shift in spectral wavelength position over the course of the observations and find that spectroscopic wavelength shifts {δ }λ (λ ) best describe the associated systematic in the spectroscopic light curves for most targets while fast scan rate observations of bright targets require an additional level of processing to produce a robust transmission spectrum. The use of marginalization allows for transparent interpretation and understanding of the instrument and the impact of each systematic evaluated statistically for each data set, expanding the ability to make true and comprehensive comparisons between exoplanet atmospheres.
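
    The weighting scheme amounts to exp(-ΔAIC/2) weights applied to each systematic model's fitted transit parameters. The sketch below uses hypothetical numbers; the model grid, values, and the error-propagation detail are ours, simplified from the approach the abstract describes.

```python
import numpy as np

def marginalise(aic, params, param_err):
    """Evidence-based model averaging using the AIC as an
    approximation to the log marginal likelihood: weight each
    systematic model by exp(-AIC/2), normalise, and combine the
    per-model transit parameters and their uncertainties."""
    aic = np.asarray(aic, float)
    params = np.asarray(params, float)
    param_err = np.asarray(param_err, float)
    w = np.exp(-0.5 * (aic - aic.min()))   # subtract min for stability
    w /= w.sum()
    p_marg = np.sum(w * params)
    # Propagate both per-model errors and model-to-model scatter.
    var = np.sum(w * (param_err ** 2 + (params - p_marg) ** 2))
    return p_marg, np.sqrt(var), w

# Hypothetical grid of four systematic models for one light
# curve: Rp/R* estimates, their errors, and each fit's AIC.
rp = np.array([0.1205, 0.1198, 0.1212, 0.1201])
err = np.array([0.0004, 0.0005, 0.0006, 0.0004])
aic = np.array([1012.3, 1010.1, 1018.7, 1011.0])
p, e, w = marginalise(aic, rp, err)
print(p, e, w.round(3))
```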

  2. Learning Curve Analysis and Surgical Outcomes of Single-port Laparoscopic Myomectomy.

    PubMed

    Lee, Hee Jun; Kim, Ju Yeong; Kim, Seul Ki; Lee, Jung Ryeol; Suh, Chang Suk; Kim, Seok Hyun

    2015-01-01

    To identify learning curves for single-port laparoscopic myomectomy (SPLM) and evaluate surgical outcomes according to the sequence of operation. A retrospective study. A university-based hospital (Canadian Task Force classification II-2). The medical records from 205 patients who had undergone SPLM from October 2009 to May 2013 were reviewed. Because the myomectomy time was significantly affected by the size and number of myomas removed by SPLM, cases in which 2 or more of the myomas removed were >7 cm in diameter were excluded. Furthermore, cases involving additional operations performed simultaneously (e.g., ovarian or hysteroscopic surgery) were also excluded. A total of 161 cases of SPLM were included. None. We assessed the SPLM learning curve via a graph based on operation time versus sequence of cases. Patients were chronologically arranged according to their surgery dates and were then placed into 1 of 4 groups according to their operation sequence. SPLM was completed successfully in 160 of 161 cases (99.4%). One case was converted to multiport surgery. Basal characteristics of the patients between the 4 groups did not differ. The median operation times for the 4 groups were 112.0, 92.8, 83.7, and 90.0 minutes, respectively. Operation time decreased significantly in the second, third, and fourth groups compared with that in the first group (p < .001). Proficiency, which is the point at which the slope of the learning curve became less steep, was evident after about 45 operations. Results from the current study suggested that proficiency for SPLM was achieved after about 45 operations. Additionally, operation time decreased with experience without an increase in complication rate. Copyright © 2015 AAGL. Published by Elsevier Inc. All rights reserved.

  3. Marginalizing Instrument Systematics in HST WFC3 Transit Light Curves

    NASA Technical Reports Server (NTRS)

    Wakeford, H. R.; Sing, D.K.; Deming, D.; Mandell, A.

    2016-01-01

Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) infrared observations at 1.1-1.7 microns probe primarily the H2O absorption band at 1.4 microns, and have provided low-resolution transmission spectra for a wide range of exoplanets. We present the application of marginalization based on the method of Gibson to analyze exoplanet transit light curves obtained from HST WFC3 to better determine important transit parameters such as the planet-to-star radius ratio Rp/R*, which are important for accurate detections of H2O. We approximate the evidence, often referred to as the marginal likelihood, for a grid of systematic models using the Akaike Information Criterion. We then calculate the evidence-based weight assigned to each systematic model and use the information from all tested models to calculate the final marginalized transit parameters for both the band-integrated and spectroscopic light curves to construct the transmission spectrum. We find that a majority of the highest weight models contain a correction for a linear trend in time as well as corrections related to HST orbital phase. We additionally test the dependence on the shift in spectral wavelength position over the course of the observations and find that spectroscopic wavelength shifts (δλ) best describe the associated systematic in the spectroscopic light curves for most targets, while fast scan rate observations of bright targets require an additional level of processing to produce a robust transmission spectrum. The use of marginalization allows for transparent interpretation and understanding of the instrument and the impact of each systematic, evaluated statistically for each data set, expanding the ability to make true and comprehensive comparisons between exoplanet atmospheres.
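The AIC-based marginalization described above can be sketched as follows. The parameter values, uncertainties, and AIC numbers are hypothetical, and the combination rule (evidence-weighted mean with within-model plus between-model variance) follows the general marginalization recipe rather than the authors' exact code:

```python
import math

def aic_weights(aics):
    """Evidence-based weight of each systematic model: w_q ∝ exp(-ΔAIC_q / 2)."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def marginalize(params, sigmas, aics):
    """Marginalize a transit parameter over a grid of systematic models:
    weighted mean, with total variance = within-model variance + between-model scatter."""
    w = aic_weights(aics)
    mean = sum(wi * p for wi, p in zip(w, params))
    var = sum(wi * (s ** 2 + (p - mean) ** 2) for wi, s, p in zip(w, sigmas, params))
    return mean, math.sqrt(var)

# Hypothetical Rp/R* estimates from three competing systematic models
depth, err = marginalize(params=[0.1200, 0.1215, 0.1190],
                         sigmas=[0.0008, 0.0010, 0.0009],
                         aics=[100.0, 104.0, 101.5])
```

Note that the marginalized uncertainty exceeds the best single model's error bar whenever the models disagree, which is the point of marginalizing instead of picking one systematic model.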

  4. Quantifying the Uncertainty in Discharge Data Using Hydraulic Knowledge and Uncertain Gaugings

    NASA Astrophysics Data System (ADS)

    Renard, B.; Le Coz, J.; Bonnifait, L.; Branger, F.; Le Boursicaud, R.; Horner, I.; Mansanarez, V.; Lang, M.

    2014-12-01

    River discharge is a crucial variable for Hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we present a Bayesian approach to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.).
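A minimal sketch of step (3), uncertainty propagation, assuming the standard power-law rating curve Q = a(h − b)^c; the posterior parameter draws here come from hypothetical Gaussians rather than an actual Bayesian fit:

```python
import random

def discharge(h, a, b, c):
    """Power-law rating curve Q = a * (h - b)^c (the standard hydraulic-control form)."""
    return a * max(h - b, 0.0) ** c

def propagate(stages, param_draws):
    """Propagate rating-curve parameter uncertainty (e.g. posterior draws from a
    Bayesian fit) to a discharge series: return a 95% interval per stage value."""
    bands = []
    for h in stages:
        qs = sorted(discharge(h, a, b, c) for a, b, c in param_draws)
        bands.append((qs[int(0.025 * len(qs))], qs[int(0.975 * len(qs))]))
    return bands

random.seed(0)
# Hypothetical posterior draws centred on a = 35, b = 0.2 m, c = 1.6
draws = [(random.gauss(35, 2), random.gauss(0.2, 0.02), random.gauss(1.6, 0.05))
         for _ in range(2000)]
bands = propagate([0.5, 1.0, 2.0], draws)
```

In a full treatment the draws would come from the Bayesian inference of step (2), and stage-measurement uncertainty would be added on top of the parameter uncertainty shown here.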

  5. Evaluation of the field relevance of several injury risk functions.

    PubMed

    Prasad, Priya; Mertz, Harold J; Dalmotas, Danius J; Augenstein, Jeffrey S; Diggs, Kennerly

    2010-11-01

    An evaluation of the four injury risk curves proposed in the NHTSA NCAP for estimating the risk of AIS>= 3 injuries to the head, neck, chest and AIS>=2 injury to the Knee-Thigh-Hip (KTH) complex has been conducted. The predicted injury risk to the four body regions based on driver dummy responses in over 300 frontal NCAP tests were compared against those to drivers involved in real-world crashes of similar severity as represented in the NASS. The results of the study show that the predicted injury risks to the head and chest were slightly below those in NASS, and the predicted risk for the knee-thigh-hip complex was substantially below that observed in the NASS. The predicted risk for the neck by the Nij curve was greater than the observed risk in NASS by an order of magnitude due to the Nij risk curve predicting a non-zero risk when Nij = 0. An alternative and published Nte risk curve produced a risk estimate consistent with the NASS estimate of neck injury. Similarly, an alternative and published chest injury risk curve produced a risk estimate that was within the bounds of the NASS estimates. No published risk curve for femur compressive load could be found that would give risk estimates consistent with the range of the NASS estimates. Additional work on developing a femur compressive load risk curve is recommended.

  6. The Chaotic Light Curves of Accreting Black Holes

    NASA Technical Reports Server (NTRS)

    Kazanas, Demosthenes

    2007-01-01

We present model light curves for accreting Black Hole Candidates (BHC) based on a recently developed model of these sources. According to this model, the observed light curves and aperiodic variability of BHC are due to a series of soft photon injections at random (Poisson) intervals and the stochastic nature of the Comptonization process in converting these soft photons to the observed high energy radiation. The additional assumption of our model is that the Comptonization process takes place in an extended but non-uniform hot plasma corona surrounding the compact object. We compute the corresponding Power Spectral Densities (PSD), autocorrelation functions, time skewness of the light curves and time lags between the light curves of the sources at different photon energies and compare our results to observation. Our model reproduces the observed light curves well, in that it provides good fits to their overall morphology (as manifest by the autocorrelation and time skewness) and also to their PSDs and time lags, by producing most of the variability power at time scales of a few seconds or longer, while at the same time allowing for shots of a few msec in duration, in accordance with observation. We suggest that refinement of this type of model along with spectral and phase lag information can be used to probe the structure of this class of high energy sources.
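The shot-injection picture can be illustrated with a toy simulation; the uniform corona, single exponential response, and all parameter values below are invented for illustration and are much simpler than the extended non-uniform corona the model actually assumes:

```python
import math, random

def shot_light_curve(n=4096, dt=0.01, rate=2.0, tau=0.5, seed=1):
    """Superpose exponentially decaying shots injected at Poisson-random times:
    a toy version of the soft-photon-injection picture (rate in shots/s, tau in s)."""
    random.seed(seed)
    lc = [0.0] * n
    t = random.expovariate(rate)
    while t < n * dt:
        i0 = int(t / dt)
        for i in range(i0, min(i0 + int(8 * tau / dt), n)):
            lc[i] += math.exp(-(i * dt - t) / tau)
        t += random.expovariate(rate)
    return lc

def autocorr(lc, lag):
    """Normalized autocorrelation of the light curve at a given lag (in bins)."""
    m = sum(lc) / len(lc)
    num = sum((lc[i] - m) * (lc[i + lag] - m) for i in range(len(lc) - lag))
    den = sum((x - m) ** 2 for x in lc)
    return num / den

lc = shot_light_curve()
```

The autocorrelation of such a series decays on the shot time scale tau, which is the kind of morphology diagnostic (alongside the PSD and time skewness) the abstract compares to observation.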

  7. Development of structural vulnerability curve associated with high magnitude torrent occurrences in Switzerland

    NASA Astrophysics Data System (ADS)

    Wing-Yuen Chow, Candace; Bründl, Michael; Keiler, Margreth

    2017-04-01

In mountain regions, economic losses due to severe hazard processes have increased significantly in the past decades, in spite of notable investments in hazard management. Assessing the vulnerability of built structures to high magnitude torrent events is a part of consequence analysis, where hazard intensity is related to the degree of loss sustained. While vulnerability curves have been developed for different countries, the presented work contributes new data from Swiss-based case studies that address a known gap associated with the consequences of high magnitude events. Data for this stage of the investigation document the degree of loss sustained by affected structures and were provided by local authorities dealing with natural hazards (e.g. Amt für Wald des Kantons Bern (KAWA) and cantonal insurance providers). Information used for the empirical quantification of vulnerability to torrent processes is derived from detailed post-event documentation and the loss database, and is verified with field visits. Building the initial database supports data sharing and the systematic inclusion of additional case studies as they become available. The collection of this new data is fundamental to the development of a local vulnerability curve based on observed sediment deposition heights, a proxy for hazard intensity. The result will then be compared to curves derived from Austrian and Italian datasets.

  8. Candidates of eclipsing multiples based on extraneous eclipses on binary light curves: KIC 7622486, KIC 7668648, KIC 7670485 and KIC 8938628

    NASA Astrophysics Data System (ADS)

    Zhang, Jia; Qian, Sheng-Bang; He, Jian-Duo

    2017-02-01

Four candidates of eclipsing multiples, based on new extraneous eclipses found on Kepler binary light curves, are presented and studied. KIC 7622486 is a double eclipsing binary candidate with orbital periods of 2.2799960 d and 40.246503 d. The two binary systems do not eclipse each other in the line of sight, but there is mutual gravitational influence between them, which leads to the small but definite eccentricity of 0.0035(0.0022) associated with the short 2.2799960 d period orbit. KIC 7668648 is a hierarchical quadruple system candidate, with two sets of solid 203 ± 5 d period extraneous eclipses and another independent set of extraneous eclipses. A clear and credible extraneous eclipse is found on the binary light curve of KIC 7670485, which makes it a triple system candidate. Two sets of extraneous eclipses with periods of about 390 d and 220 d are found on the KIC 8938628 binary light curves, which not only confirm the previous conclusion of a 388.5 ± 0.3 d period triple system, but also indicate new additional objects that make KIC 8938628 a hierarchical quadruple system candidate. The results from these four candidates will contribute to the field of eclipsing multiples.

  9. Design and Optimization of a Light-Emitting Diode Projection Micro-Stereolithography Three-Dimensional Manufacturing System

    DTIC Science & Technology

    2012-12-11

…ment, and difficulties creating high aspect ratio features. In addition, conventional mask-based lithography cannot create curved surfaces in the… There are three types of digital mask technologies: (1) liquid crystal display (LCD); (2) digital micromirror device (DMD); and (3) liquid crystal on silicon (LCoS). LCD is the…

  10. Computer-based learning of spelling skills in children with and without dyslexia.

    PubMed

    Kast, Monika; Baschera, Gian-Marco; Gross, Markus; Jäncke, Lutz; Meyer, Martin

    2011-12-01

    Our spelling training software recodes words into multisensory representations comprising visual and auditory codes. These codes represent information about letters and syllables of a word. An enhanced version, developed for this study, contains an additional phonological code and an improved word selection controller relying on a phoneme-based student model. We investigated the spelling behavior of children by means of learning curves based on log-file data of the previous and the enhanced software version. First, we compared the learning progress of children with dyslexia working either with the previous software (n = 28) or the adapted version (n = 37). Second, we investigated the spelling behavior of children with dyslexia (n = 37) and matched children without dyslexia (n = 25). To gain deeper insight into which factors are relevant for acquiring spelling skills, we analyzed the influence of cognitive abilities, such as attention functions and verbal memory skills, on the learning behavior. All investigations of the learning process are based on learning curve analyses of the collected log-file data. The results evidenced that those children with dyslexia benefit significantly from the additional phonological cue and the corresponding phoneme-based student model. Actually, children with dyslexia improve their spelling skills to the same extent as children without dyslexia and were able to memorize phoneme to grapheme correspondence when given the correct support and adequate training. In addition, children with low attention functions benefit from the structured learning environment. Generally, our data showed that memory sources are supportive cognitive functions for acquiring spelling skills and for using the information cues of a multi-modal learning environment.

  11. On the Analysis and Construction of the Butterfly Curve Using "Mathematica"[R

    ERIC Educational Resources Information Center

    Geum, Y. H.; Kim, Y. I.

    2008-01-01

The butterfly curve was introduced by Temple H. Fay in 1989 and defined by the polar equation r = e^(cos θ) − 2 cos(4θ) + sin^5(θ/12). In this article, we develop the mathematical model of the butterfly curve and analyse its geometric properties. In addition, we draw the butterfly curve and…
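For reference, the polar equation can be evaluated directly; the sampling density below is an arbitrary choice:

```python
import math

def butterfly(theta):
    """Fay's butterfly curve: r = e^cos(theta) - 2*cos(4*theta) + sin^5(theta/12)."""
    return math.exp(math.cos(theta)) - 2 * math.cos(4 * theta) + math.sin(theta / 12) ** 5

# Sample one full period (the sin^5(theta/12) term repeats every 24*pi) and
# convert to Cartesian coordinates for plotting
thetas = [24 * math.pi * k / 2000 for k in range(2000)]
points = [(butterfly(t) * math.cos(t), butterfly(t) * math.sin(t)) for t in thetas]
```

At θ = 0 the radius is e − 2, and the full figure closes only after θ runs through 24π because of the θ/12 term.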

  12. Do reading additions improve reading in pre-presbyopes with low vision?

    PubMed

    Alabdulkader, Balsam; Leat, Susan

    2012-09-01

    This study compared three different methods of determining a reading addition and the possible improvement on reading performance in children and young adults with low vision. Twenty-eight participants with low vision, aged 8 to 32 years, took part in the study. Reading additions were determined with (a) a modified Nott dynamic retinoscopy, (b) a subjective method, and (c) an age-based formula. Reading performance was assessed with MNREAD-style reading charts at 12.5 cm, with and without each reading addition in random order. Outcome measures were reading speed, critical print size, MNREAD threshold, and the area under the reading speed curve. For the whole group, there was no significant improvement in reading performance with any of the additions. When participants with normal accommodation at 12.5 cm were excluded, the area under the reading speed curve was significantly greater with all reading additions compared with no addition (p = 0.031, 0.028, and 0.028, respectively). Also, the reading acuity threshold was significantly better with all reading additions compared with no addition (p = 0.014, 0.030, and 0.036, respectively). Distance and near visual acuity, age, and contrast sensitivity did not predict improvement with a reading addition. All, but one, of the participants who showed a significant improvement in reading with an addition had reduced accommodation. A reading addition may improve reading performance for young people with low vision and should be considered as part of a low vision assessment, particularly when accommodation is reduced.

  13. An Experimental Study on the Impact of Different-frequency Elastic Waves on Water Retention Curve

    NASA Astrophysics Data System (ADS)

    Deng, J. H.; Dai, J. Y.; Lee, J. W.; Lo, W. C.

    2017-12-01

Over the past few decades, theoretical and experimental studies on the connection between elastic wave attributes and the physical properties of a fluid-bearing porous medium have attracted the attention of many scholars in the fields of porous medium flow and hydrogeology. It has previously been determined that the transmission of elastic waves in a porous medium containing two immiscible fluids affects the water retention curve, but whether this effect depends on the frequency of the elastic waves, and whether the effect on the soil is temporary or permanent, had not been established. This research is based on a sand box test in which the soil is divided into three layers (a lower, middle, and upper layer). We discuss the impacts on the water retention curve during the drying process under sound waves (elastic waves) at three frequencies (150 Hz, 300 Hz, and 450 Hz), and then discuss the change in the water retention curve before and after exposure. In addition, how sound waves affect the water retention curve at different depths is also observed. According to the experimental results, sound waves can cause soil either to expand or to contract. When the soil is induced to expand by sound waves, it can contract naturally and return to its condition before the influence of the sound waves. On the contrary, when the soil is induced to contract, it is unable to return to its initial condition. These results suggest that sound waves causing soil to expand have a temporary impact, while those causing soil to contract have a permanent impact. In addition, our experimental results show how sound waves affect the water retention curve at different depths: the degree of soil expansion and contraction caused by the sound waves differs at various soil depths. Nevertheless, whether the soil expands or contracts depends only on the frequency of the sound waves. Key words: Elastic waves, Water retention curve, Sand box test.

  14. WE-H-BRA-08: A Monte Carlo Cell Nucleus Model for Assessing Cell Survival Probability Based On Particle Track Structure Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, B; Georgia Institute of Technology, Atlanta, GA; Wang, C

Purpose: To correlate the damage produced by particles of different types and qualities to cell survival on the basis of nanodosimetric analysis and advanced DNA structures in the cell nucleus. Methods: A Monte Carlo code was developed to simulate subnuclear DNA chromatin fibers (CFs) of 30 nm utilizing a mean-free-path approach common to radiation transport. The cell nucleus was modeled as a spherical region containing 6000 chromatin-dense domains (CDs) of 400 nm diameter, with additional CFs modeled in a sparser interchromatin region. The Geant4-DNA code was utilized to produce a particle track database representing various particles at different energies and dose quantities. These tracks were used to stochastically position the DNA structures based on their mean free path to interaction with CFs. Excitation and ionization events intersecting CFs were analyzed using the DBSCAN clustering algorithm to assess the likelihood of producing DSBs. Simulated DSBs were then assessed based on their proximity to one another for a probability of inducing cell death. Results: Variations in energy deposition to chromatin fibers match expectations based on differences in particle track structure. The quality of damage to CFs based on different particle types indicates more severe damage by high-LET radiation than by low-LET radiation of identical particles. In addition, the model indicates more severe damage by protons than by alpha particles of the same LET, which is consistent with differences in their track structure. Cell survival curves have been produced showing the L-Q behavior of sparsely ionizing radiation. Conclusion: Initial results indicate the feasibility of producing cell survival curves based on the Monte Carlo cell nucleus method. Accurate correlation between simulated DNA damage and cell survival on the basis of nanodosimetric analysis can provide insight into the biological responses to various radiation types. Current efforts are directed at producing cell survival curves for high-LET radiation.

  15. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
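The iterated expectation and variance combination described above can be sketched at a single grid point; the conditional estimates below are hypothetical stand-ins for what the bootstrap over FPC decompositions would produce:

```python
from statistics import mean, pvariance

def combine_over_decompositions(cond_means, cond_vars):
    """Total pointwise variance via the iterated-variance formula:
    Var(est) = E[Var(est | decomposition)] + Var(E[est | decomposition])."""
    within = mean(cond_vars)         # average model-based (conditional) variance
    between = pvariance(cond_means)  # scatter of estimates across decompositions
    return mean(cond_means), within + between

# Hypothetical conditional estimates at one grid point from 5 bootstrap decompositions
est, var = combine_over_decompositions(
    cond_means=[1.02, 0.98, 1.05, 1.00, 0.97],
    cond_vars=[0.040, 0.050, 0.045, 0.040, 0.050])
```

The second term is exactly the decomposition-based variability that standard conditional intervals omit, which is why the corrected bands are wider.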

  17. Buckling Behavior of Long Anisotropic Plates Subjected to Fully Restrained Thermal Expansion

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.

    2001-01-01

    An approach for synthesizing buckling results and behavior for thin balanced and unbalanced symmetric laminates that are subjected to uniform heating or cooling and fully restrained against thermal expansion or contraction is presented. This approach uses a nondimensional analysis for infinitely long, flexurally anisotropic plates that are subjected to combined mechanical loads and is based on useful nondimensional parameters. In addition, stiffness-weighted laminate thermal-expansion parameters are derived that are used to determine critical temperatures in terms of physically intuitive mechanical buckling coefficients, and the effects of membrane orthotropy and membrane anisotropy are included. Many results are presented for some common laminates that are intended to facilitate a structural designer's transition to the use of the generic buckling design curves that are presented in the paper. Several generic buckling design curves are presented that provide physical insight into the buckling response in addition to providing useful design data. Examples are presented that demonstrate the use of the generic design curves. The analysis approach and generic results indicate the effects and characteristics of laminate thermal expansion, membrane orthotropy and anisotropy, and flexural orthotropy and anisotropy in a very general and unifying manner.

  18. Computer-aided diagnosis of prostate cancer using a deep convolutional neural network from multiparametric MRI.

    PubMed

    Song, Yang; Zhang, Yu-Dong; Yan, Xu; Liu, Hui; Zhou, Minxiong; Hu, Bingwen; Yang, Guang

    2018-04-16

Deep learning is the most promising methodology for automatic computer-aided diagnosis of prostate cancer (PCa) with multiparametric MRI (mp-MRI). To develop an automatic approach based on a deep convolutional neural network (DCNN) to classify PCa and noncancerous tissues (NC) with mp-MRI. Retrospective. In all, 195 patients with localized PCa were collected from the PROSTATEx database. In total, 159/17/19 patients with 444/48/55 observations (215/23/23 PCas and 229/25/32 NCs) were randomly selected for training/validation/testing, respectively. T2-weighted, diffusion-weighted, and apparent diffusion coefficient images. A radiologist manually labeled the regions of interest of PCas and NCs and estimated the Prostate Imaging Reporting and Data System (PI-RADS) score for each region. Inspired by VGG-Net, we designed a patch-based DCNN model to distinguish between PCa and NCs based on a combination of mp-MRI data. Additionally, an enhanced prediction method was used to improve the prediction accuracy. The performance of DCNN prediction was tested using a receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Moreover, the predicted result was compared with the PI-RADS score to evaluate its clinical value using decision curve analysis. A two-sided Wilcoxon signed-rank test was used, with statistical significance set at 0.05. The DCNN produced excellent diagnostic performance in distinguishing between PCa and NC for the testing dataset, with an AUC of 0.944 (95% confidence interval: 0.876-0.994), sensitivity of 87.0%, specificity of 90.6%, PPV of 87.0%, and NPV of 90.6%. The decision curve analysis revealed that the joint model of PI-RADS and DCNN provided additional net benefit compared with the DCNN model and the PI-RADS scheme alone.
The proposed DCNN-based model with enhanced prediction yielded high performance in statistical analysis, suggesting that DCNN could be used in computer-aided diagnosis (CAD) for PCa classification. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
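The decision curve analysis mentioned above rests on the net-benefit quantity, which can be computed directly; the labels and predicted probabilities below are hypothetical:

```python
def net_benefit(y_true, y_prob, threshold):
    """Net benefit at a probability threshold, the quantity plotted in decision
    curve analysis: TP/n - (FP/n) * pt / (1 - pt)."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)

# Hypothetical labels and model probabilities, evaluated over a threshold range
y = [1, 1, 1, 0, 0, 0, 0, 0]
p = [0.9, 0.8, 0.4, 0.3, 0.2, 0.1, 0.6, 0.1]
curve = [(t, net_benefit(y, p, t)) for t in (0.1, 0.2, 0.3, 0.4, 0.5)]
```

Comparing such curves for the DCNN, PI-RADS, and a joint model across thresholds is how the "additional net benefit" claim in the abstract is assessed.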

  19. Heat-curved girders : deflections and camber loss during and subsequent to bridge construction.

    DOT National Transportation Integrated Search

    1982-01-01

    The present AASHTO bridge design specifications require that additional camber be built into steel girders that are to be heat curved. The additional camber is provided to allow for subsequent losses due to the dissipation of residual stresses impose...

  20. Automated Epileptiform Spike Detection via Affinity Propagation-Based Template Matching

    PubMed Central

    Thomas, John; Jin, Jing; Dauwels, Justin; Cash, Sydney S.; Westover, M. Brandon

    2018-01-01

    Interictal epileptiform spikes are the key diagnostic biomarkers for epilepsy. The clinical gold standard of spike detection is visual inspection performed by neurologists. This is a tedious, time-consuming, and expert-centered process. The development of automated spike detection systems is necessary in order to provide a faster and more reliable diagnosis of epilepsy. In this paper, we propose an efficient template matching spike detector based on a combination of spike and background waveform templates. We generate a template library by clustering a collection of spikes and background waveforms extracted from a database of 50 patients with epilepsy. We benchmark the performance of five clustering techniques based on the receiver operating characteristic (ROC) curves. In addition, background templates are integrated with existing spike templates to improve the overall performance. The affinity propagation-based template matching system with a combination of spike and background templates is shown to outperform the other four conventional methods with the highest area-under-curve (AUC) of 0.953. PMID:29060543
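A minimal template-matching scorer and rank-based AUC can be sketched as follows, assuming simple normalized cross-correlation as the matching score; the paper's pipeline additionally clusters waveforms into a template library and integrates background templates:

```python
def xcorr_score(segment, template):
    """Normalized cross-correlation between an EEG segment and a template."""
    ms = sum(segment) / len(segment)
    mt = sum(template) / len(template)
    num = sum((s - ms) * (t - mt) for s, t in zip(segment, template))
    den = (sum((s - ms) ** 2 for s in segment) *
           sum((t - mt) ** 2 for t in template)) ** 0.5
    return num / den if den else 0.0

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation."""
    wins = sum((sp > sn) + 0.5 * (sp == sn)
               for sp in scores_pos for sn in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

Scoring spike and background segments against the template library and feeding the scores to `auc` gives the benchmark quantity (AUC) used to compare the five clustering techniques.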

  1. Application of standard addition for the determination of carboxypeptidase activity in Actinomucor elegans bran koji.

    PubMed

    Fu, J; Li, L; Yang, X Q; Zhu, M J

    2011-01-01

Leucine carboxypeptidase (EC 3.4.16) activity in Actinomucor elegans bran koji was investigated via absorbance at 507 nm after staining with Cd-ninhydrin solution, using three calibration curves: curve A, prepared from a set of standard leucine solutions of known concentration; curve B, prepared from three sets of known-concentration standard leucine solutions with the addition of inactive crude enzyme extract at three concentrations; and curve C, prepared from three sets of known-concentration standard leucine solutions with the addition of active crude enzyme extract at three concentrations. The results indicated that a pure amino acid standard curve is not a suitable way to determine carboxypeptidase activity in a complicated mixture, and it probably leads to overestimated carboxypeptidase activity. Adding crude extract to the pure amino acid standard curve gave results significantly different from the pure-standard-curve method (p < 0.05). There was no significant difference in measured enzyme activity (p > 0.05) between the addition of active crude extract and the addition of inactive crude extract, when the proper dilution factor was used. It was concluded that the addition of crude enzyme extract to the calibration is needed to eliminate the interference of free amino acids and related compounds present in the crude enzyme extract.
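The matrix-matching argument can be illustrated with a linear calibration fit; the absorbance values below are invented to show the intercept offset that free amino acids in the crude extract would produce:

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

# Hypothetical absorbance readings at 507 nm for the two calibration strategies
conc = [0.0, 0.5, 1.0, 1.5, 2.0]           # mM standard leucine
a_pure = [0.00, 0.11, 0.22, 0.33, 0.44]    # pure-standard curve
a_matrix = [0.08, 0.19, 0.30, 0.41, 0.52]  # with inactive crude extract added
s1, b1 = linfit(conc, a_pure)
s2, b2 = linfit(conc, a_matrix)
```

In this toy case the slopes agree but the matrix-matched curve has a positive intercept; reading unknowns off the pure-standard curve would therefore overestimate the liberated leucine, and hence the enzyme activity, exactly as the abstract concludes.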

  2. Aerodynamic Analysis of Body-Strake Configurations

    DTIC Science & Technology

    2006-06-01

…combination by Macha. Adjustment for finite body was done based on the empirical correlation for circular bodies by Jorgensen. Additional adjustment… these cases were based on the results obtained with plain cylinders. Schematic from Macha. Fig. 2 also compares Macha's results for circular… It is apparent that Macha included in his charts plain cylinder data at s/r = 1.0 and that his curves consider these data. In Section 5.5 of Ref. 6 he…

  3. Wind turbine power production and annual energy production depend on atmospheric stability and turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St. Martin, Clara M.; Lundquist, Julie K.; Clifton, Andrew

Using detailed upwind and nacelle-based measurements from a General Electric (GE) 1.5sle model with a 77 m rotor diameter, we calculate power curves and annual energy production (AEP) and explore their sensitivity to different atmospheric parameters to provide guidelines for the use of stability and turbulence filters in segregating power curves. The wind measurements upwind of the turbine include anemometers mounted on a 135 m meteorological tower as well as profiles from a lidar. We calculate power curves for different regimes based on turbulence parameters such as turbulence intensity (TI) as well as atmospheric stability parameters such as the bulk Richardson number (R_B). We also calculate AEP with and without these atmospheric filters and highlight differences between the results of these calculations. The power curves for different TI regimes reveal that increased TI undermines power production at wind speeds near rated, but TI increases power production at lower wind speeds at this site, the US Department of Energy (DOE) National Wind Technology Center (NWTC). Similarly, power curves for different R_B regimes reveal that periods of stable conditions produce more power at wind speeds near rated and periods of unstable conditions produce more power at lower wind speeds. AEP results suggest that calculations without filtering for these atmospheric regimes may overestimate the AEP. Because of statistically significant differences between power curves and AEP calculated with these turbulence and stability filters for this turbine at this site, we suggest implementing an additional step in analyzing power performance data to incorporate effects of atmospheric stability and turbulence across the rotor disk.
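The power-curve segregation by TI can be sketched with a method-of-bins calculation; the TI cutoff, bin width, and sample data below are illustrative assumptions, not the study's values:

```python
from statistics import mean

def binned_power_curve(wind, power, ti, ti_cut=0.12, bin_width=1.0):
    """Method-of-bins power curves segregated by turbulence intensity:
    returns {regime: {wind-speed bin center: mean power}}."""
    curves = {"low TI": {}, "high TI": {}}
    acc = {}
    for u, p, t in zip(wind, power, ti):
        regime = "low TI" if t < ti_cut else "high TI"
        b = bin_width * (int(u / bin_width) + 0.5)  # bin center
        acc.setdefault((regime, b), []).append(p)
    for (regime, b), ps in acc.items():
        curves[regime][b] = mean(ps)
    return curves

def aep(curve, hours=8760, wind_freq=None):
    """AEP from a binned power curve and the relative frequency of each wind bin."""
    return hours * sum(curve[b] * wind_freq[b] for b in curve if b in wind_freq)

# Illustrative 10-minute samples: (wind speed m/s, power kW, TI)
curves = binned_power_curve(wind=[4.2, 4.4, 4.6, 9.8, 9.9],
                            power=[100.0, 110.0, 120.0, 1480.0, 1500.0],
                            ti=[0.10, 0.10, 0.20, 0.08, 0.08])
```

Computing `aep` separately for each regime, weighted by that regime's wind-speed distribution, is the filtered calculation the abstract compares against the unfiltered one.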

  4. Wind turbine power production and annual energy production depend on atmospheric stability and turbulence

    DOE PAGES

    St. Martin, Clara M.; Lundquist, Julie K.; Clifton, Andrew; ...

    2016-11-01

    Using detailed upwind and nacelle-based measurements from a General Electric (GE) 1.5sle model with a 77 m rotor diameter, we calculate power curves and annual energy production (AEP) and explore their sensitivity to different atmospheric parameters to provide guidelines for the use of stability and turbulence filters in segregating power curves. The wind measurements upwind of the turbine include anemometers mounted on a 135 m meteorological tower as well as profiles from a lidar. We calculate power curves for different regimes based on turbulence parameters such as turbulence intensity (TI) as well as atmospheric stability parameters such as the bulk Richardson number (R B). We also calculate AEP with and without these atmospheric filters and highlight differences between the results of these calculations. The power curves for different TI regimes reveal that increased TI undermines power production at wind speeds near rated, but TI increases power production at lower wind speeds at this site, the US Department of Energy (DOE) National Wind Technology Center (NWTC). Similarly, power curves for different R B regimes reveal that periods of stable conditions produce more power at wind speeds near rated and periods of unstable conditions produce more power at lower wind speeds. AEP results suggest that calculations without filtering for these atmospheric regimes may overestimate the AEP. Because of statistically significant differences between power curves and AEP calculated with these turbulence and stability filters for this turbine at this site, we suggest implementing an additional step in analyzing power performance data to incorporate effects of atmospheric stability and turbulence across the rotor disk.

  5. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
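    For a single (non-cross-validated) AUC, an influence-curve variance estimate reduces to the familiar DeLong placement-value calculation. The sketch below illustrates that idea only; it is not the authors' cvAUC implementation, which additionally handles the cross-validation folds:

    ```python
    def auc_with_ic_variance(pos_scores, neg_scores):
        """AUC with a DeLong-style (influence-curve) variance estimate,
        built from per-observation 'placement' values."""
        m, n = len(pos_scores), len(neg_scores)
        # placement of each positive score among the negatives (ties count 1/2)
        v10 = [(sum(1 for y in neg_scores if y < x)
                + 0.5 * sum(1 for y in neg_scores if y == x)) / n
               for x in pos_scores]
        # placement of each negative score among the positives
        v01 = [(sum(1 for x in pos_scores if x > y)
                + 0.5 * sum(1 for x in pos_scores if x == y)) / m
               for y in neg_scores]
        auc = sum(v10) / m

        def sample_var(vals):
            mu = sum(vals) / len(vals)
            return sum((v - mu) ** 2 for v in vals) / (len(vals) - 1)

        return auc, sample_var(v10) / m + sample_var(v01) / n

    auc, var = auc_with_ic_variance([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
    ```

    Because every term is a per-observation statistic rather than a resample, the cost is one pass over the data instead of hundreds of bootstrap refits.
    
    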

  6. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737

  7. Motion of a Point Mass in a Rotating Disc: A Quantitative Analysis of the Coriolis and Centrifugal Force

    NASA Astrophysics Data System (ADS)

    Haddout, Soufiane

    2016-06-01

    In Newtonian mechanics, the use of non-inertial reference frames generalizes Newton's laws to any reference frame. While this approach simplifies some problems, it often offers little physical insight into the motion, in particular into the effects of the Coriolis force. The fictitious Coriolis force can be used by an observer in the rotating frame to explain why objects follow curved paths. In this paper, a mathematical solution based on differential equations in a non-inertial reference frame is used to study different types of motion in a rotating system. In addition, experimental data measured on a turntable device, using a video camera in a mechanics laboratory, are compared with the mathematical solution for the parabolically curved case, with the non-linear least-squares problems solved using the Levenberg-Marquardt and Gauss-Newton algorithms.
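    The rotating-frame equations of motion underlying such an analysis, a = ω²r (centrifugal) − 2ω×v (Coriolis), can be integrated numerically. This is a minimal semi-implicit Euler sketch with illustrative parameters, not the paper's least-squares fitting code:

    ```python
    def rotating_frame_path(omega, r0, v0, dt=1e-4, steps=20000):
        """Integrate a force-free particle as seen in a frame rotating at
        angular rate omega (rad/s): a = omega^2 * r  (centrifugal)
        plus the Coriolis term -2 omega x v."""
        x, y = r0
        vx, vy = v0
        for _ in range(steps):
            ax = omega**2 * x + 2.0 * omega * vy   # centrifugal + Coriolis, x
            ay = omega**2 * y - 2.0 * omega * vx   # centrifugal + Coriolis, y
            vx += ax * dt
            vy += ay * dt
            x += vx * dt                           # position uses updated velocity
            y += vy * dt
        return x, y

    # particle launched from the turntable center with unit velocity; T = 2 s
    x, y = rotating_frame_path(omega=0.5, r0=(0.0, 0.0), v0=(1.0, 0.0))
    ```

    For this initial condition the analytic answer is (t·cos ωt, −t·sin ωt): a straight inertial path appears as a curved one in the rotating frame, which the integration reproduces.
    
    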

  8. Distinct Lipoprotein Curves in Normal Weight, Overweight, and Obese Children and Adolescents.

    PubMed

    Interator, Hagar; Lebenthal, Yael; Hoshen, Moshe; Safra, Inbar; Balicer, Ran; Leshno, Moshe; Shamir, Raanan

    2017-12-01

    Pediatric lipoprotein curves are based on population-based samples. As obesity may alter lipoprotein levels, cutoffs not adjusted for body mass index (BMI) are potentially inappropriate. We aimed to develop distinct serum lipid curves based on sex- and BMI-percentiles for children and adolescents. Cross-sectional analysis included all healthy children and adolescents (age range 2-17 years) with available serum lipid concentrations (n = 152,820 of approximately 1.2 million children and adolescents per study year). These children and adolescents were categorized according to sex- and age-stratified BMI-percentiles: 100,375 normal weight (5th-85th percentile), 26,028 overweight (85th-95th percentile) and 26,417 obese (≥95th percentile) individuals. Excluded were individuals with hyperlipidemia, gastrointestinal disease, thyroid disease, or use of lipid-lowering medications. Lambda-Mu-Sigma (LMS) smoothed percentile lipid curves were computed. Obese children had a lipid profile pattern throughout childhood and adolescence similar to that of normal weight subjects but with a significant upward shift in total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), non-high-density lipoprotein cholesterol (non-HDL-C), and triglycerides (TGs) and a downward shift in high-density lipoprotein cholesterol (HDL-C). Obese boys had 13 mg/dL higher TC levels (P < 0.001), 11 mg/dL higher LDL-C levels, 15 mg/dL higher non-HDL-C levels, and 5 mg/dL lower HDL-C levels (P < 0.001). Obese girls had 6 mg/dL higher TC levels, 7 mg/dL higher LDL-C levels, 11 mg/dL higher non-HDL-C levels, and 6 mg/dL lower HDL-C levels (P < 0.001). Across a large, nationally representative cohort of children and adolescents, lipoprotein levels were found to vary in relation to weight status.
On the basis of these findings, it is suggested that when evaluating the lipid profile in the pediatric population, in addition to sex-based curves, clinical decision making may require consideration of BMI-stratified curves.
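    The Lambda-Mu-Sigma method used above summarizes each age's lipid distribution with a skewness (L), median (M), and coefficient-of-variation (S) parameter. A minimal sketch of the standard LMS conversions follows; the parameter values in the example are illustrative, not the study's fitted values:

    ```python
    def lms_centile(L, M, S, z):
        """Measurement value at z-score z for Lambda-Mu-Sigma parameters
        (Box-Cox power L, median M, coefficient of variation S; L != 0)."""
        return M * (1.0 + L * S * z) ** (1.0 / L)

    def lms_zscore(L, M, S, x):
        """Inverse transform: z-score of a measured value x."""
        return ((x / M) ** L - 1.0) / (L * S)

    # illustrative parameters: the 90th-percentile (z ~ 1.2816) value
    x90 = lms_centile(-0.5, 160.0, 0.2, 1.2816)
    ```

    A smoothed percentile curve is then just lms_centile evaluated along age-varying L, M, S with z held at the desired percentile.
    
    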

  9. Robust, open-source removal of systematics in Kepler data

    NASA Astrophysics Data System (ADS)

    Aigrain, S.; Parviainen, H.; Roberts, S.; Reece, S.; Evans, T.

    2017-10-01

    We present ARC2 (Astrophysically Robust Correction 2), an open-source Python-based systematics-correction pipeline for Kepler prime mission long-cadence light curves. The ARC2 pipeline identifies and corrects any isolated discontinuities in the light curves and then removes trends common to many light curves. These trends are modelled using the publicly available co-trending basis vectors, within an (approximate) Bayesian framework with 'shrinkage' priors to minimize the risk of overfitting and the injection of any additional noise into the corrected light curves, while keeping any astrophysical signals intact. We show that the ARC2 pipeline's performance matches that of the standard Kepler PDC-MAP data products using standard noise metrics, and demonstrate its ability to preserve astrophysical signals using injection tests with simulated stellar rotation and planetary transit signals. Although it is not identical, the ARC2 pipeline can thus be used as an open-source alternative to PDC-MAP, whenever the ability to model the impact of the systematics removal process on other kinds of signal is important.
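    The 'shrinkage' idea behind this kind of trend removal can be illustrated with a small ridge-regression sketch against two co-trending basis vectors. This is not the ARC2 code (which uses a full approximate-Bayesian treatment and the actual Kepler basis vectors); it only shows the penalized least-squares core:

    ```python
    def ridge_detrend(flux, basis, lam=1.0):
        """Remove two common trends from a light curve by penalized least
        squares: solve (B^T B + lam*I) w = B^T f for the 2x2 case, then
        subtract the fitted trend. lam > 0 'shrinks' the weights toward
        zero, reducing the risk of overfitting real astrophysical signal."""
        b1, b2 = basis
        a11 = sum(u * u for u in b1) + lam
        a22 = sum(v * v for v in b2) + lam
        a12 = sum(u * v for u, v in zip(b1, b2))
        r1 = sum(u * f for u, f in zip(b1, flux))
        r2 = sum(v * f for v, f in zip(b2, flux))
        det = a11 * a22 - a12 * a12
        w1 = (r1 * a22 - r2 * a12) / det   # Cramer's rule for the 2x2 solve
        w2 = (r2 * a11 - r1 * a12) / det
        return [f - w1 * u - w2 * v for f, u, v in zip(flux, b1, b2)]

    # toy light curve that is exactly twice the first basis vector
    corrected = ridge_detrend([2.0, 0.0, 2.0, 0.0],
                              ([1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, -1.0]))
    ```

    With lam = 0 the trend would be removed exactly; the penalty deliberately leaves a small residual, which is the shrinkage trade-off the abstract describes.
    
    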

  10. Theoretical study on the dispersion curves of Lamb waves in piezoelectric-semiconductor sandwich plates GaAs-FGPM-AlAs: Legendre polynomial series expansion

    NASA Astrophysics Data System (ADS)

    Othmani, Cherif; Takali, Farid; Njeh, Anouar

    2017-06-01

    In this paper, the propagation of Lamb waves in the GaAs-FGPM-AlAs sandwich plate is studied. Based on an orthogonal function, a Legendre polynomial series expansion is applied along the thickness direction to obtain the Lamb dispersion curves. The convergence and accuracy of this polynomial method are discussed. In addition, the influences of the volume fraction p and the thickness hFGPM of the FGPM middle layer on the Lamb dispersion curves are investigated. The numerical results also show differences between the characteristics of the Lamb dispersion curves in the sandwich plate for various gradient coefficients of the FGPM middle layer. In fact, as the volume fraction p increases, the phase velocity increases and the number of modes decreases in a given frequency range. All the developments performed in this paper were implemented in Matlab software. The corresponding results presented in this work may have important applications in several industry areas and in developing novel acoustic devices such as sensors, electromechanical transducers, actuators and filters.

  11. Activation energy of the low-load NaCl transition from nanoindentation loading curves.

    PubMed

    Kaupp, Gerd

    2014-01-01

    Access to activation energies E(a) of phase transitions is opened by novel analyses of temperature-dependent nanoindentation loading curves. The approach is based on kinks in linearized loading curves, with additional support from the coincidence of the kink with a change in the electrical conductivity of silicon loading curves. Physical properties of the B1 and B2 NaCl phases and further phases are discussed. The normalized low-load transition energy of NaCl (Wtrans/µN) increases with temperature and slightly decreases with load. A semi-logarithmic plot of this quantity versus T yields the activation energy E(a)/µN, from which the transition work can be calculated for all temperatures and pressures of interest. Arrhenius-type activation energy (kJ/mol) is unavailable for indentation phase transitions. The per-load normalization of E(a) proves insensitive to creep on load, which rules out normalization to depth or volume over large temperature ranges. Such a phase-transition E(a)/µN is a previously unreported material property and will be of practical importance for the compatibility of composite materials under impact and further shearing interactions at elevated temperatures. © 2014 Wiley Periodicals, Inc.

  12. Accurate evaluation for the biofilm-activated sludge reactor using graphical techniques

    NASA Astrophysics Data System (ADS)

    Fouad, Moharram; Bhargava, Renu

    2018-05-01

    A complete graphical solution is obtained for the completely mixed biofilm-activated sludge reactor (hybrid reactor). The solution consists of a series of curves deduced from the principal equations of the hybrid system after converting them to dimensionless form. The curves estimate the basic parameters of the hybrid system, such as suspended biomass concentration, sludge residence time, wasted mass of sludge, and food-to-biomass ratio. All of these parameters can be expressed as functions of hydraulic retention time, influent substrate concentration, substrate concentration in the bulk, stagnant liquid layer thickness, and the minimum substrate concentration which can maintain biofilm growth, in addition to the basic kinetics of the activated sludge process, with all variables expressed in dimensionless form. Compared to other solutions of such systems, these curves are simple, easy to use, and provide an accurate tool for analyzing such systems based on fundamental principles. Further, these curves may be used as a quick tool to assess the effect of a change in one variable on the other parameters and on the whole system.

  13. STR melting curve analysis as a genetic screening tool for crime scene samples.

    PubMed

    Nguyen, Quang; McKinney, Jason; Johnson, Donald J; Roberts, Katherine A; Hardy, Winters R

    2012-07-01

    In this proof-of-concept study, high-resolution melt curve (HRMC) analysis was investigated as a postquantification screening tool to discriminate human CSF1PO and THO1 genotypes amplified with mini-STR primers in the presence of SYBR Green or LCGreen Plus dyes. A total of 12 CSF1PO and 11 HUMTHO1 genotypes were analyzed on the LightScanner HR96 and LS-32 systems and were correctly differentiated based upon their respective melt profiles. Short STR amplicon melt curves were affected by repeat number, and single-source and mixed DNA samples were additionally differentiated by the formation of heteroduplexes. Melting curves were shown to be unique and reproducible from DNA quantities ranging from 20 to 0.4 ng and distinguished identical from nonidentical genotypes from DNA derived from different biological fluids and compromised samples. Thus, a method is described which can assess both the quantity and the possible probative value of samples without full genotyping. 2012 American Academy of Forensic Sciences. Published 2012. This article is a U.S. Government work and is in the public domain in the U.S.A.

  14. PRROC: computing and visualizing precision-recall and receiver operating characteristic curves in R.

    PubMed

    Grau, Jan; Grosse, Ivo; Keilwagen, Jens

    2015-08-01

    Precision-recall (PR) and receiver operating characteristic (ROC) curves are valuable measures of classifier performance. Here, we present the R package PRROC, which allows for computing and visualizing both PR and ROC curves. In contrast to available R packages, PRROC allows for computing PR and ROC curves and areas under these curves for soft-labeled data, using a continuous interpolation between the points of PR curves. In addition, PRROC provides a generic plot function for generating publication-quality graphics of PR and ROC curves. © The Author 2015. Published by Oxford University Press.
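    The core ROC computation that such packages perform can be sketched in a few lines. This Python version is only a language-neutral illustration of threshold sweeping and trapezoidal area; it does not reproduce PRROC's continuous PR-curve interpolation or its soft-label support:

    ```python
    def roc_points(scores, labels):
        """ROC curve as (FPR, TPR) points, sweeping the decision threshold
        from high to low over the scores (ties handled only crudely here)."""
        pairs = sorted(zip(scores, labels), reverse=True)
        pos = sum(labels)
        neg = len(labels) - pos
        tp = fp = 0
        pts = [(0.0, 0.0)]
        for _, label in pairs:
            if label == 1:
                tp += 1     # true positive gained at this threshold
            else:
                fp += 1     # false positive gained at this threshold
            pts.append((fp / neg, tp / pos))
        return pts

    def auc_trapezoid(pts):
        """Area under a piecewise-linear curve given as ordered (x, y) points."""
        return sum((x2 - x1) * (y1 + y2) / 2.0
                   for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    ```

    Linear interpolation is exact for ROC curves; PR curves require the nonlinear interpolation PRROC implements, which is precisely why the package exists.
    
    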

  15. The effect of Tricresyl-Phosphate (TCP) as an additive on wear of Iron (Fe)

    NASA Technical Reports Server (NTRS)

    Ghose, Hiren M.; Ferrante, John; Honecy, Frank C.

    1987-01-01

    The effect of tricresyl phosphate (TCP) as an antiwear additive in the lubricant trimethylol propane triheptanoate (TMPTH) was investigated. The objective was to examine step-loading wear by use of surface analysis, wetting, and chemical bonding changes in the lubricant. The investigation consisted of step-loading wear studies with a pin-on-disk tribometer, wetting studies relating wear to contact-angle and surface-tension measurements of various liquid systems, chromatographic analysis of chemical bonding changes between the lubricant and TCP, and wear scar analysis by Auger emission spectroscopy (AES) to determine the reaction between TCP and the metal surface. The step-loading curve for the base fluid alone shows a rapid increase of wear rate with load. The step-loading curve for the base fluid in the presence of 4.25 percent by volume TCP under dry air purge shows a great reduction of wear rate at all loads studied. It was also found that the addition of 4.25 percent by volume TCP plus 0.33 percent by volume water to the base lubricant under N2 purge likewise greatly reduces the wear rate at all loads studied. AES surface analysis reveals a phosphate-type wear-resistant film, which greatly increases load-bearing capacity, formed on the iron disk. Preliminary chromatographic studies suggest that this film forms either because of ester oxidation or TCP degradation. Wetting studies show a direct correlation between the spreading coefficient and the wear rate.

  16. Evaluation of PCR and high-resolution melt curve analysis for differentiation of Salmonella isolates.

    PubMed

    Saeidabadi, Mohammad Sadegh; Nili, Hassan; Dadras, Habibollah; Sharifiyazdi, Hassan; Connolly, Joanne; Valcanis, Mary; Raidal, Shane; Ghorashi, Seyed Ali

    2017-06-01

    Consumption of poultry products contaminated with Salmonella is one of the major causes of foodborne diseases worldwide, and therefore detection and differentiation of Salmonella spp. in poultry is important. In this study, oligonucleotide primers were designed from the hemD gene, and a PCR followed by high-resolution melt (HRM) curve analysis was developed for rapid differentiation of Salmonella isolates. Amplicons of 228 bp were generated from 16 different Salmonella reference strains and from 65 clinical field isolates, mainly from poultry farms. HRM curve analysis of the amplicons differentiated Salmonella isolates, and analysis of the nucleotide sequence of the amplicons from selected isolates revealed that each melting curve profile was related to a unique DNA sequence. The relationship between reference strains and tested specimens was also evaluated using a mathematical model without visual interpretation of HRM curves. In addition, the potential of the PCR-HRM curve analysis was evaluated for genotyping of additional Salmonella isolates from different avian species. The findings indicate that PCR followed by HRM curve analysis provides a rapid and robust technique for genotyping of Salmonella isolates to determine the serovar/serotype.
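    A generic numerical treatment of melt curves (hypothetical here; the study's mathematical model is not public) has two steps: form the negative-derivative melt profile, then assign each sample to the nearest reference genotype profile:

    ```python
    def melt_profile(fluorescence, temps):
        """Negative first derivative -dF/dT, the usual melt-peak
        representation of a raw fluorescence-versus-temperature curve."""
        return [-(f2 - f1) / (t2 - t1)
                for f1, f2, t1, t2 in zip(fluorescence, fluorescence[1:],
                                          temps, temps[1:])]

    def nearest_reference(profile, references):
        """Assign a sample profile to the reference genotype whose melt
        profile is closest in Euclidean distance."""
        def dist(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        return min(references, key=lambda name: dist(profile, references[name]))
    ```

    In practice raw curves are normalized and temperature-shifted before comparison; that preprocessing is omitted from this sketch.
    
    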

  17. Influence of pavement condition on horizontal curve safety.

    PubMed

    Buddhavarapu, Prasad; Banerjee, Ambarish; Prozzi, Jorge A

    2013-03-01

    Crash statistics suggest that horizontal curves are the most vulnerable sites for crash occurrence. These crashes are often severe and many involve at least some level of injury due to the nature of the collisions. Ensuring the desired pavement surface condition is one potentially effective strategy to reduce the occurrence of severe accidents on horizontal curves. This study sought to develop crash injury severity models by integrating crash and pavement surface condition databases. It focuses on developing a causal relationship between pavement condition indices and severity level of crashes occurring on two-lane horizontal curves in Texas. In addition, it examines the suitability of the existing Skid Index for safety maintenance of two-lane curves. Significant correlation is evident between pavement condition and crash injury severity on two-lane undivided horizontal curves in Texas. Probability of a crash becoming fatal is appreciably sensitive to certain pavement indices. Data suggested that road facilities providing a smoother and more comfortable ride are vulnerable to severe crashes on horizontal curves. In addition, the study found that longitudinal skid measurement barely correlates with injury severity of crashes occurring on curved portions. The study recommends exploring the option of incorporating lateral friction measurement into Pavement Management System (PMS) databases specifically at curved road segments. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated by using a method which is already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve fitting approach, which is based on Taylor-series approximation. Both model-based methods show significant advantages compared to the curve fitting method: they need fewer reference points for calibration and, moreover, supply a function which is valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  19. A comparison of four different lens mappers.

    PubMed

    Larrue, Denis; Legeard, Morgane

    2014-11-01

    Recently, a number of lens mappers have become available for measuring the detailed optical properties of progressive addition lenses (PALs). The goal of this study was to compare the results obtained from several different lens mappers for a range of different lenses. The optical power maps of six lenses (two single-vision lenses, a parallel-sided slide, a flat prism, and two progressive lenses) were measured using four different lens mappers: the Dual Lens Mapper, the Nimo TR4005, the Rotlex Class Plus, and the Visionix VM2500. The repeatability of the instruments was also evaluated. All lens mappers gave very repeatable measurements; however, measurements among the lens mappers varied considerably. Differences appeared to be above the tolerance at the optical center for measurements of single-vision lenses, and these differences increased in the periphery, up to 1.00 diopter. Similar differences were observed for the PALs, further increased by prism and base-curve effects, with differences greater than 1 diopter in the periphery. The measurements made on the prism and on lenses with different base curves suggest that base curve, thickness, and prismatic effect can all contribute to the differences among instruments. Measurements of a given lens taken with different lens mappers can vary substantially. Particular caution should be exercised when interpreting power maps for PALs taken with different instruments.

  20. From Experiment to Theory: What Can We Learn from Growth Curves?

    PubMed

    Kareva, Irina; Karev, Georgy

    2018-01-01

    Finding an appropriate functional form to describe population growth based on key properties of a described system allows making justified predictions about future population development. This information can be of vital importance in all areas of research, ranging from cell growth to global demography. Here, we use this connection between theory and observation to pose the following question: what can we infer about intrinsic properties of a population (i.e., degree of heterogeneity, or dependence on external resources) based on which growth function best fits its growth dynamics? We investigate several nonstandard classes of multi-phase growth curves that capture different stages of population growth; these models include hyperbolic-exponential, exponential-linear, and exponential-linear-saturation growth patterns. The constructed models account explicitly for the process of natural selection within inhomogeneous populations. Based on the underlying hypotheses for each of the models, we identify whether a population best fit by a particular curve is more likely to be homogeneous or heterogeneous, to grow in a density-dependent or frequency-dependent manner, and whether it depends on external resources during any or all stages of its development. We apply these predictions to cancer cell growth and demographic data obtained from the literature. Our theory, if confirmed, can provide an additional biomarker and a predictive tool to complement experimental research.
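    A multi-phase growth function of the kind surveyed above can be written as a piecewise model and compared to data by sum of squared errors. This is an illustrative exponential-to-linear sketch, not the paper's selection-based model classes:

    ```python
    import math

    def exp_linear(t, a, r, t_switch):
        """Exponential growth a*exp(r*t) that switches to its tangent line
        (linear growth) at t_switch, continuous in value and slope."""
        if t < t_switch:
            return a * math.exp(r * t)
        n_switch = a * math.exp(r * t_switch)
        return n_switch + r * n_switch * (t - t_switch)

    def sse(model, params, data):
        """Sum of squared errors of a candidate growth model on (t, N)
        pairs, the quantity minimized when selecting among growth curves."""
        return sum((model(t, *params) - n) ** 2 for t, n in data)
    ```

    Fitting each candidate class to the same data and comparing SSE (or a penalized criterion) is what "which growth function best fits" amounts to operationally.
    
    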

  1. Developing Novel Reservoir Rule Curves Using Seasonal Inflow Projections

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-yi; Tung, Ching-pin

    2015-04-01

    Due to significant seasonal rainfall variations, reservoirs and their flexible operational rules are indispensable to Taiwan. Furthermore, with the intensifying impacts of climate change on extreme climate, the frequency of droughts in Taiwan has been increasing in recent years. Drought is a creeping phenomenon; its slow onset makes it difficult to detect at an early stage and delays the best decisions on allocating water. For these reasons, novel reservoir rule curves using projected seasonal streamflow are proposed in this study, which can potentially reduce the adverse effects of drought. This study is dedicated to establishing new rule curves that consider both current available storage and anticipated monthly inflows with a lead time of two months to reduce the risk of water shortage. The monthly inflows are projected based on the seasonal climate forecasts from the Central Weather Bureau (CWB); a weather generation model is used to produce daily weather data for the hydrological component of the GWLF model. To incorporate future monthly inflow projections into rule curves, this study designs a decision flow index that is a linear combination of current available storage and inflow projections with a lead time of two months. By optimizing the linear combination coefficients of the decision flow index, the shape of the rule curves, and the percentage of water supplied in each zone, the rule curves that best decrease water shortage risk and impacts can be developed. The Shimen Reservoir in northern Taiwan is used as a case study to demonstrate the proposed method. Existing rule curves (M5 curves) of the Shimen Reservoir are compared with two cases of new rule curves, based on hindcast simulations and on historic seasonal forecasts. The results show that the new rule curves decrease the total water shortage ratio and, in addition, allocate shortage to preceding months so that extreme shortage events are avoided. Even though some uncertainties in the historic forecasts would result in unnecessary reductions of water supply, the new curves still perform better than the M5 curves during droughts.
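    The decision flow index described above can be sketched as a weighted sum feeding a rationing rule; the weights and zone thresholds below are illustrative placeholders for the optimized values in the study:

    ```python
    def decision_flow_index(storage, inflow_proj, weights=(1.0, 0.5, 0.25)):
        """Linear combination of current available storage and the projected
        inflows of the next two months (weights are illustrative, not the
        study's optimized coefficients)."""
        w0, w1, w2 = weights
        return w0 * storage + w1 * inflow_proj[0] + w2 * inflow_proj[1]

    def supply_fraction(index, lower, upper):
        """Map the index onto rationing zones: full supply in the upper
        zone, progressively discounted supply below (fractions illustrative)."""
        if index >= upper:
            return 1.0      # normal operation zone
        if index >= lower:
            return 0.85     # mild rationing zone
        return 0.7          # severe rationing zone
    ```

    Because the index falls as soon as projected inflows drop, rationing starts earlier and shortage is spread over preceding months instead of concentrating in one extreme event.
    
    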

  2. CyberShake: Running Seismic Hazard Workflows on Distributed HPC Resources

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Graves, R. W.; Gill, D.; Olsen, K. B.; Milner, K. R.; Yu, J.; Jordan, T. H.

    2013-12-01

    As part of its program of earthquake system science research, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by simulating a tensor-valued wavefield of Strain Green Tensors (SGTs), and then using seismic reciprocity to calculate synthetic seismograms for about 415,000 events per site of interest. These seismograms are processed to compute ground motion intensity measures, which are then combined with probabilities from an earthquake rupture forecast to produce a site-specific hazard curve. Seismic hazard curves for hundreds of sites in a region can be used to calculate a seismic hazard map, representing the seismic hazard for a region. We present a recently completed PSHA study in which we calculated four CyberShake seismic hazard maps for the Southern California area to compare how CyberShake hazard results are affected by different SGT computational codes (AWP-ODC and AWP-RWG) and different community velocity models (Community Velocity Model - SCEC (CVM-S4) v11.11 and Community Velocity Model - Harvard (CVM-H) v11.9). We present our approach to running workflow applications on distributed HPC resources, including systems without support for remote job submission. We show how our approach extends the benefits of scientific workflows, such as job and data management, to large-scale applications on Track 1 and Leadership class open-science HPC resources. We used our distributed workflow approach to perform CyberShake Study 13.4 on two new NSF open-science HPC computing resources, Blue Waters and Stampede, executing over 470 million tasks to calculate physics-based hazard curves for 286 locations in the Southern California region.
For each location, we calculated seismic hazard curves with two different community velocity models and two different SGT codes, resulting in over 1100 hazard curves. We will report on the performance of this CyberShake study, four times larger than previous studies. Additionally, we will examine the challenges we face applying these workflow techniques to additional open-science HPC systems and discuss whether our workflow solutions continue to provide value to our large-scale PSHA calculations.
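    The per-site hazard-curve combination step described above can be sketched with the standard PSHA formula, treating earthquake occurrence as Poissonian; the rupture rate and step-function exceedance model below are toy values, not CyberShake data:

    ```python
    import math

    def hazard_curve(ruptures, im_levels):
        """Annual exceedance probability at each intensity-measure level:
        sum rupture_rate * P(IM > x | rupture) over all ruptures, then
        convert the total rate to a 1-year Poisson exceedance probability."""
        out = []
        for x in im_levels:
            total_rate = sum(rate * p_exceed(x) for rate, p_exceed in ruptures)
            out.append(1.0 - math.exp(-total_rate))
        return out

    # toy forecast: one rupture, annual rate 0.01, whose ground motion always
    # exceeds intensities below 1.0 and never exceeds higher ones
    h = hazard_curve([(0.01, lambda im: 1.0 if im < 1.0 else 0.0)], [0.5, 2.0])
    ```

    In CyberShake the P(IM > x | rupture) terms come from the reciprocity-based synthetic seismograms rather than an empirical ground motion model, but the combination step is this same sum over the rupture forecast.
    
    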

  3. Function follows form: combining nanoimprint and inkjet printing

    NASA Astrophysics Data System (ADS)

    Muehlberger, M.; Haslinger, M. J.; Kurzmann, J.; Ikeda, M.; Fuchsbauer, A.; Faury, T.; Koepplmayr, T.; Ausserhuber, H.; Kastner, J.; Woegerer, C.; Fechtig, D.

    2017-06-01

    We are investigating the possibilities and the technical requirements for nanopatterning on arbitrarily curved surfaces. This is done considering the opportunities offered by additive manufacturing. One of the key elements is the necessity to deposit material in well-defined areas of various complex 3D objects. In order to achieve this, we are developing a robot-based inkjet printing process. We report on our progress in this respect and also on our efforts to perform nanoimprinting on curved, possibly 3D-printed objects using materials that can be deposited by inkjet printing. In the framework of this article, we provide an overview of our current status, the challenges, and an outlook.

  4. Resonant-tunneling oscillators and multipliers for submillimeter receivers

    NASA Technical Reports Server (NTRS)

    Sollner, T. C. L. Gerhard

    1988-01-01

    Resonant tunneling through double-barrier heterostructures has attracted increasing interest recently, largely because of the fast charge transport it provides. In addition, the negative differential resistance regions that exist in the current-voltage (I-V) curve (peak-to-valley ratios of 3.5:1 at room temperature, and nearly 10:1 at 77 K, were measured) suggest that high-speed devices based on the character of the I-V curve should be possible. For example, the negative differential resistance region is capable of providing the gain necessary for high-frequency oscillations. In the laboratory attempts were made to increase the frequency and power of these oscillators and to demonstrate several different high-frequency devices.

  5. Neural tuning characteristics of auditory primary afferents in the chicken embryo.

    PubMed

    Jones, S M; Jones, T A

    1995-02-01

    Primary afferent activity was recorded from the cochlear ganglion in chicken embryos (Gallus domesticus) at 19 days of incubation (E19). The ganglion was accessed via the recessus scala tympani and impaled with glass micropipettes. Frequency tuning curves were obtained using a computerized threshold tracking procedure. Tuning curves were evaluated to determine characteristic frequencies (CFs), CF thresholds, slopes of low and high frequency flanks, and tip sharpness (Q10dB). The majority of tuning curves exhibited the typical 'V' shape described for older birds and, on average, appeared relatively mature based on mean values for CF thresholds (59.6 +/- 20.3 dB SPL) and tip sharpness (Q10dB = 5.2 +/- 3). The mean slopes of the low (61.9 +/- 37 dB/octave) and high (64.6 +/- 33 dB/octave) frequency flanks, although comparable, were somewhat lower than those reported for 21-day-old chickens. Approximately 14% of the tuning curves displayed an unusual 'saw-tooth' pattern. CFs ranged from 188 to 1623 Hz. The highest CF was well below those reported for post-hatch birds. In addition, a broader range of Q10dB values (1.2 to 16.9) may be related to a greater variability in embryonic tuning curves. Overall, these data suggest that an impressive functional maturity exists in the embryo at E19. The most significant sign of immaturity was the limited expression of high frequencies. It is argued that the limited high CF may be due in part to the developing middle ear transfer function and/or to a functionally immature cochlear base.

  6. Neural tuning characteristics of auditory primary afferents in the chicken embryo

    NASA Technical Reports Server (NTRS)

    Jones, S. M.; Jones, T. A.

    1995-01-01

    Primary afferent activity was recorded from the cochlear ganglion in chicken embryos (Gallus domesticus) at 19 days of incubation (E19). The ganglion was accessed via the recessus scala tympani and impaled with glass micropipettes. Frequency tuning curves were obtained using a computerized threshold tracking procedure. Tuning curves were evaluated to determine characteristic frequencies (CFs), CF thresholds, slopes of low and high frequency flanks, and tip sharpness (Q10dB). The majority of tuning curves exhibited the typical 'V' shape described for older birds and, on average, appeared relatively mature based on mean values for CF thresholds (59.6 +/- 20.3 dB SPL) and tip sharpness (Q10dB = 5.2 +/- 3). The mean slopes of the low (61.9 +/- 37 dB/octave) and high (64.6 +/- 33 dB/octave) frequency flanks, although comparable, were somewhat less than those reported for 21-day-old chickens. Approximately 14% of the tuning curves displayed an unusual 'saw-tooth' pattern. CFs ranged from 188 to 1623 Hz. The highest CF was well below those reported for post-hatch birds. In addition, a broader range of Q10dB values (1.2 to 16.9) may be related to a greater variability in embryonic tuning curves. Overall, these data suggest that an impressive functional maturity exists in the embryo at E19. The most significant sign of immaturity was the limited expression of high frequencies. It is argued that the limited high CF may be due in part to the developing middle ear transfer function and/or to a functionally immature cochlear base.

  7. Dynamic Response and Optimal Design of Curved Metallic Sandwich Panels under Blast Loading

    PubMed Central

    Yang, Shu; Han, Shou-Hong; Lu, Zhen-Hua

    2014-01-01

    It is important to understand the effect of curvature on the blast response of curved structures so as to seek the optimal configurations of such structures with improved blast resistance. In this study, the dynamic response and protective performance of a type of curved metallic sandwich panel subjected to air blast loading were examined using LS-DYNA. The numerical methods were validated using experimental data in the literature. The curved panel consisted of an aluminum alloy outer face and a rolled homogeneous armour (RHA) steel inner face in addition to a closed-cell aluminum foam core. The results showed that the configuration of a “soft” outer face and a “hard” inner face worked well for the curved sandwich panel against air blast loading in terms of maximum deflection (MaxD) and energy absorption. The panel curvature was found to have a monotonic effect on the specific energy absorption (SEA) and a nonmonotonic effect on the MaxD of the panel. Based on artificial neural network (ANN) metamodels, multiobjective optimization designs of the panel were carried out. The optimization results revealed the trade-off relationships between the blast-resistant and the lightweight objectives and showed the usefulness of the Pareto front in such design circumstances. PMID:25126606

  8. Dynamic response and optimal design of curved metallic sandwich panels under blast loading.

    PubMed

    Qi, Chang; Yang, Shu; Yang, Li-Jun; Han, Shou-Hong; Lu, Zhen-Hua

    2014-01-01

    It is important to understand the effect of curvature on the blast response of curved structures so as to seek the optimal configurations of such structures with improved blast resistance. In this study, the dynamic response and protective performance of a type of curved metallic sandwich panel subjected to air blast loading were examined using LS-DYNA. The numerical methods were validated using experimental data in the literature. The curved panel consisted of an aluminum alloy outer face and a rolled homogeneous armour (RHA) steel inner face in addition to a closed-cell aluminum foam core. The results showed that the configuration of a "soft" outer face and a "hard" inner face worked well for the curved sandwich panel against air blast loading in terms of maximum deflection (MaxD) and energy absorption. The panel curvature was found to have a monotonic effect on the specific energy absorption (SEA) and a nonmonotonic effect on the MaxD of the panel. Based on artificial neural network (ANN) metamodels, multiobjective optimization designs of the panel were carried out. The optimization results revealed the trade-off relationships between the blast-resistant and the lightweight objectives and showed the usefulness of the Pareto front in such design circumstances.

  9. Sharply curved turn around duct flow predictions using spectral partitioning of the turbulent kinetic energy and a pressure modified wall law

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1986-01-01

    Computational predictions of turbulent flow in sharply curved 180 degree turn around ducts are presented. The CNS2D computer code is used to solve the equations of motion for two-dimensional incompressible flows transformed to a nonorthogonal body-fitted coordinate system. This procedure incorporates the pressure-velocity correction algorithm SIMPLE-C to iteratively solve a discretized form of the transformed equations. A multiple scale turbulence model based on simplified spectral partitioning is employed to obtain closure. Flow field predictions utilizing the multiple scale model are compared to features predicted by the traditional single scale k-epsilon model. Tuning parameter sensitivities of the multiple scale model applied to turn around duct flows are also determined. In addition, a wall function approach based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients is tested. Turn around duct flow characteristics utilizing this modified wall law are presented and compared to results based on a standard wall treatment.
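    The wall-function approach builds on the incompressible log-law of the wall. The abstract does not specify the pressure-modified form, so the sketch below shows only the standard baseline that such a modification would adjust:

```python
import math

KAPPA, B_LOG = 0.41, 5.0  # von Karman constant and log-law intercept

def u_plus_log_law(y_plus):
    """Standard incompressible log-law of the wall, u+ = ln(y+)/kappa + B,
    valid in the log layer (roughly y+ > 30). The study's modified wall
    law adjusts this baseline for strong adverse pressure gradients."""
    return math.log(y_plus) / KAPPA + B_LOG
```

    A wall-function treatment evaluates this relation at the first grid point off the wall instead of resolving the viscous sublayer.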

  10. Ground-Based Telescope Parametric Cost Model

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Rowell, Ginger Holmes

    2004-01-01

    A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
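    Single-variable, diameter-based cost models of this kind are typically power laws, cost ≈ a·D^b, fit by least squares in log-log space. A generic sketch with hypothetical data (the model's actual coefficients are not given in the abstract):

```python
import math

def fit_power_law(diameters_m, costs):
    """Fit cost = a * D^b by ordinary least squares in log-log space,
    the usual form of single-variable telescope cost models. Inputs
    would be historical telescope diameters and costs."""
    xs = [math.log(d) for d in diameters_m]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b  # cost ~ a * D^b
```

    A downward shift in the fitted exponent or prefactor over time is one way to quantify the claim that mirror technology advances have flattened the historical cost curve.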

  11. Corresponding-states laws for protein solutions.

    PubMed

    Katsonis, Panagiotis; Brandon, Simon; Vekilov, Peter G

    2006-09-07

    The solvent around protein molecules in solutions is structured and this structuring introduces a repulsion in the intermolecular interaction potential at intermediate separations. We use Monte Carlo simulations with isotropic, pair-additive systems interacting with such potentials. We test if the liquid-liquid and liquid-solid phase lines in model protein solutions can be predicted from universal curves and a pair of experimentally determined parameters, as done for atomic and colloid materials using several laws of corresponding states. As predictors, we test three properties at the critical point for liquid-liquid separation: temperature, as in the original van der Waals law, the second virial coefficient, and a modified second virial coefficient, all paired with the critical volume fraction. We find that the van der Waals law is best obeyed and appears more general than its original formulation: A single universal curve describes all tested nonconformal isotropic pair-additive systems. Published experimental data for the liquid-liquid equilibrium for several proteins at various conditions follow a single van der Waals curve. For the solid-liquid equilibrium, we find that no single system property serves as its predictor. We go beyond corresponding-states correlations and put forth semiempirical laws, which allow prediction of the critical temperature and volume fraction solely based on the range of attraction of the intermolecular interaction potential.
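    The second virial coefficient used as a predictor above follows from the pair potential via B2 = -2π ∫ (e^{-u(r)/kT} - 1) r² dr. A sketch in reduced units, using the Lennard-Jones potential as a stand-in for the paper's structured-solvent protein potential:

```python
import math

def b2_reduced(t_reduced, r_max=20.0, n=20000):
    """Reduced second virial coefficient B2* = -3 * integral of
    (exp(-u(r)/kT) - 1) r^2 dr for the Lennard-Jones potential in
    reduced units (sigma = epsilon = 1), via the trapezoidal rule."""
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        u = 4.0 * (r ** -12 - r ** -6)        # Lennard-Jones pair potential
        f = math.exp(-u / t_reduced) - 1.0    # Mayer f-function
        total += (0.5 if i == n else 1.0) * f * r * r * dr
    return -3.0 * total
```

    Below the Boyle temperature B2* is negative (net attraction), above it positive; a more negative B2* at the critical point signals a shorter-ranged, stickier potential, which is what the corresponding-states correlations exploit.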

  12. Buckling Behavior of Long Anisotropic Plates Subjected to Elastically Restrained Thermal Expansion

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.

    2002-01-01

    An approach for synthesizing buckling results for, and behavior of, thin balanced and unbalanced symmetric laminates that are subjected to uniform heating or cooling and elastically restrained against thermal expansion or contraction is presented. This approach uses a nondimensional analysis for infinitely long, flexurally anisotropic plates that are subjected to combined mechanical loads and is based on useful nondimensional parameters. In addition, stiffness-weighted laminate thermal-expansion parameters and compliance coefficients are derived that are used to determine critical temperatures in terms of physically intuitive mechanical-buckling coefficients. The effects of membrane orthotropy and membrane anisotropy are included in the general formulation. Many results are presented for some common laminates that are intended to facilitate a structural designer's transition to the use of generic buckling design curves. Several curves that illustrate the fundamental parameters used in the analysis are presented, for nine contemporary material systems, that provide physical insight into the buckling response in addition to providing useful design data. Examples are presented that demonstrate the use of generic design curves. The analysis approach and generic results indicate the effects and characteristics of elastically restrained laminate thermal expansion or contraction, membrane orthotropy and anisotropy, and flexural orthotropy and anisotropy in a very general and unifying manner.

  13. A User Authentication Scheme Based on Elliptic Curves Cryptography for Wireless Ad Hoc Networks

    PubMed Central

    Chen, Huifang; Ge, Linlin; Xie, Lei

    2015-01-01

    The feature of non-infrastructure support in a wireless ad hoc network (WANET) makes it suffer from various attacks. Moreover, user authentication is the first safety barrier in a network. A mutual trust is achieved by a protocol which enables communicating parties to authenticate each other at the same time and to exchange session keys. For the resource-constrained WANET, an efficient and lightweight user authentication scheme is necessary. In this paper, we propose a user authentication scheme based on the self-certified public key system and elliptic curves cryptography for a WANET. Using the proposed scheme, an efficient two-way user authentication and secure session key agreement can be achieved. Security analysis shows that our proposed scheme is resilient to common known attacks. In addition, the performance analysis shows that our proposed scheme performs similarly to or better than some existing user authentication schemes. PMID:26184224

  14. A User Authentication Scheme Based on Elliptic Curves Cryptography for Wireless Ad Hoc Networks.

    PubMed

    Chen, Huifang; Ge, Linlin; Xie, Lei

    2015-07-14

    The feature of non-infrastructure support in a wireless ad hoc network (WANET) makes it suffer from various attacks. Moreover, user authentication is the first safety barrier in a network. A mutual trust is achieved by a protocol which enables communicating parties to authenticate each other at the same time and to exchange session keys. For the resource-constrained WANET, an efficient and lightweight user authentication scheme is necessary. In this paper, we propose a user authentication scheme based on the self-certified public key system and elliptic curves cryptography for a WANET. Using the proposed scheme, an efficient two-way user authentication and secure session key agreement can be achieved. Security analysis shows that our proposed scheme is resilient to common known attacks. In addition, the performance analysis shows that our proposed scheme performs similarly to or better than some existing user authentication schemes.
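    The session-key agreement such schemes rely on reduces to scalar multiplication on an elliptic curve. A toy Diffie-Hellman-style sketch over a textbook curve (y² = x³ + 2x + 2 over F_17, base point of order 19); real schemes use standardized curves and add the authentication layer described above:

```python
# Toy elliptic-curve Diffie-Hellman; for illustration only.
P, A, B_COEF = 17, 2, 2   # curve y^2 = x^3 + A*x + B_COEF over F_17
G = (5, 1)                # base point of prime order 19

def inv_mod(a, p):
    return pow(a, p - 2, p)  # Fermat inverse, p prime

def ec_add(p1, p2):
    """Add two curve points; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * inv_mod(2 * y1, P) % P
    else:
        lam = (y2 - y1) * inv_mod((x2 - x1) % P, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

# Each party combines its own secret scalar with the other's public point;
# both arrive at the same shared point (session key material).
alice_pub, bob_pub = ec_mul(5, G), ec_mul(7, G)
shared_a, shared_b = ec_mul(5, bob_pub), ec_mul(7, alice_pub)
```

    The shared points agree because 5·(7·G) = 7·(5·G); the authentication scheme's job is to bind such public points to identities so the exchange resists impersonation.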

  15. Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs

    PubMed Central

    Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.

    2009-01-01

    Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962

  16. Ratio of sequential chromatograms for quantitative analysis and peak deconvolution: Application to standard addition method and process monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.

    1990-08-01

    This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been baseline corrected. This ratio chromatogram provides a robust means for the identification and the quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and is used to calculate a ratio value equal to the ratio of concentrations of the analyte in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated by using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.
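    The ratio-value calculation described above can be sketched directly: divide baseline-corrected chromatograms point by point over the pure-elution region, then average with signal-dependent weights. The quadratic weighting below is an assumed simplification of the paper's variance weighting:

```python
import numpy as np

def ratio_value(chrom1, chrom2, region):
    """Variance-weighted ratio of two baseline-corrected chromatograms
    over a region of pure analyte elution; estimates the concentration
    ratio between sequential injections. Weighting by squared signal
    (inverse variance under constant additive noise) is an assumption."""
    a = np.asarray(chrom1, dtype=float)[region]
    b = np.asarray(chrom2, dtype=float)[region]
    w = a ** 2                       # downweight low signal-to-noise points
    return float(np.sum(w * (b / a)) / np.sum(w))
```

    For a standard addition experiment, the ratio value from a spiked versus unspiked injection yields the original concentration without any peak-shape model.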

  17. SU-E-T-75: A Simple Technique for Proton Beam Range Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgdorf, B; Kassaee, A; Garver, E

    2015-06-15

    Purpose: To develop a measurement-based technique to verify the range of proton beams for quality assurance (QA). Methods: We developed a simple technique to verify the proton beam range with in-house fabricated devices. Two separate devices were fabricated: a clear acrylic rectangular cuboid and a solid polyvinyl chloride (PVC) step wedge. For efficiency in our clinic, we used the rectangular cuboid for double scattering (DS) beams and the step wedge for pencil beam scanning (PBS) beams. These devices were added to our QA phantom to measure dose points along the distal fall-off region (between 80% and 20%) in addition to dose at mid-SOBP (spread-out Bragg peak) using a two-dimensional parallel plate chamber array (MatriXX™, IBA Dosimetry, Schwarzenbruck, Germany). This method relies on the fact that the slope of the distal fall-off is linear and does not vary with small changes in energy. Using a multi-layer ionization chamber (Zebra™, IBA Dosimetry), percent depth dose (PDD) curves were measured for our standard daily QA beams. The range (energy) for each beam was then varied (i.e., ±2 mm and ±5 mm) and additional PDD curves were measured. The distal fall-off of all PDD curves was fit to a linear equation. The distal fall-off measured dose for a particular beam was used in our linear equation to determine the beam range. Results: The linear fits of the fall-off region for the PDD curves, when varying the range by a few millimeters for a specific QA beam, yielded identical slopes. The range calculated from measured point dose(s) in the fall-off region using this slope agreed within ±1 mm of the expected beam range. Conclusion: We developed a simple technique for accurately verifying the beam range for proton therapy QA programs.
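    The arithmetic behind the technique reduces to fitting a line to the distal fall-off of a reference PDD and inverting it at a measured point dose. A sketch with hypothetical depth-dose numbers:

```python
import numpy as np

def range_from_falloff(depths_cm, doses_pct, measured_dose_pct):
    """Fit dose = m*depth + c to reference PDD points in the linear
    distal fall-off (~80% to ~20%), then invert at a measured point
    dose to recover the corresponding depth, i.e. the range check.
    The premise (from the abstract): the fall-off slope is unchanged
    by small energy shifts, so one fit serves nearby ranges."""
    m, c = np.polyfit(depths_cm, doses_pct, 1)
    return (measured_dose_pct - c) / m
```

    Comparing the recovered depth against the expected range flags shifts larger than the QA tolerance (±1 mm in the study).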

  18. Three 3D graphical representations of DNA primary sequences based on the classifications of DNA bases and their applications.

    PubMed

    Xie, Guosen; Mo, Zhongxi

    2011-01-21

    In this article, we introduce three 3D graphical representations of DNA primary sequences, which we call RY-curve, MK-curve and SW-curve, based on three classifications of the DNA bases. The advantages of our representations are that (i) these 3D curves are strictly non-degenerate and there is no loss of information when transferring a DNA sequence to its mathematical representation and (ii) the coordinates of every node on these 3D curves have clear biological implication. Two applications of these 3D curves are presented: (a) a simple formula is derived to calculate the content of the four bases (A, G, C and T) from the coordinates of nodes on the curves; and (b) a 12-component characteristic vector is constructed to compare similarity among DNA sequences from different species based on the geometrical centers of the 3D curves. As examples, we examine similarity among the coding sequences of the first exon of beta-globin gene from eleven species and validate similarity of cDNA sequences of beta-globin gene from eight species. Copyright © 2010 Elsevier Ltd. All rights reserved.
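    A 3D base-classification walk of the kind described can be sketched as follows. The step vectors here are illustrative choices, not the paper's exact coordinates, but they preserve the two advertised properties: non-degeneracy (z advances every step) and recoverability of base content from node coordinates:

```python
# RY-style 3D DNA walk: purines (A, G) vs. pyrimidines (C, T) set the
# x step, amino (A, C) vs. keto (G, T) set the y step, and z advances
# by one per base, so no two distinct sequences share a curve.
STEPS = {'A': (1, 1, 1), 'G': (1, -1, 1),
         'C': (-1, 1, 1), 'T': (-1, -1, 1)}

def ry_curve(seq):
    """Return the list of 3D nodes for a DNA sequence."""
    x = y = z = 0
    nodes = [(0, 0, 0)]
    for base in seq.upper():
        dx, dy, dz = STEPS[base]
        x, y, z = x + dx, y + dy, z + dz
        nodes.append((x, y, z))
    return nodes

def base_counts_from_end(node):
    """Illustrate recovering base content from coordinates: with these
    steps, x = (#A + #G) - (#C + #T) and z = sequence length, so the
    purine and pyrimidine counts follow from the final node alone."""
    x, _, z = node
    return (z + x) // 2, (z - x) // 2  # (#purines, #pyrimidines)
```

    For example, ry_curve("AACGT") ends at node (1, 1, 5): three purines minus two pyrimidines give x = 1. Geometrical centers of such node sets can then feed a similarity vector as in the abstract.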

  19. Comparison of BRDF-Predicted and Observed Light Curves of GEO Satellites

    NASA Astrophysics Data System (ADS)

    Ceniceros, A.; Dao, P.; Gaylor, D.; Rast, R.; Anderson, J.; Pinon, E., III

    Although the amount of light received by sensors on the ground from Resident Space Objects (RSOs) in geostationary orbit (GEO) is small, information can still be extracted in the form of light curves (temporal brightness or apparent magnitude). Previous research has shown promising results in determining RSO characteristics such as shape, size, reflectivity, and attitude by processing simulated light curve data with various estimation algorithms. These simulated light curves have been produced using one of several existing analytic Bidirectional Reflectance Distribution Function (BRDF) models. These BRDF models have generally come from researchers in computer graphics and machine vision and have not been shown to be realistic for telescope observations of RSOs in GEO. While BRDFs have been used for SSA analysis and characterization, there is a lack of research on the validation of BRDFs with regard to real data. In this paper, we compared telescope data provided by the Air Force Research Laboratory (AFRL) with predicted light curves from the Ashikhmin-Premoze BRDF and two additional popular illumination models, Ashikhmin-Shirley and Cook-Torrance. We computed predicted light curves based on two line mean elements (TLEs), shape model, attitude profile, observing ground station location, observation time and BRDF. The predicted light curves were then compared with AFRL telescope data. The selected BRDFs provided accurate apparent magnitude trends and behavior, but uncertainties due to lack of attitude information and deficiencies in our satellite model prevented us from obtaining a better match to the real data. The current findings present a foundation for ample future research.

  20. The standard centrifuge method accurately measures vulnerability curves of long-vesselled olive stems.

    PubMed

    Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon

    2015-01-01

    The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of measurement artifact. We tested this hypothesis in stems of olive (Olea europaea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artifacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  1. Guidelines for using the Delphi Technique to develop habitat suitability index curves

    USGS Publications Warehouse

    Crance, Johnie H.

    1987-01-01

    Habitat Suitability Index (SI) curves are one method of presenting species habitat suitability criteria. The curves are often used with the Habitat Evaluation Procedures (HEP) and are necessary components of the Instream Flow Incremental Methodology (IFIM) (Armour et al. 1984). Bovee (1986) described three categories of SI curves or habitat suitability criteria based on the procedures and data used to develop the criteria. Category I curves are based on professional judgment, with little or no empirical data. Both Category II (utilization criteria) and Category III (preference criteria) curves have as their source data collected at locations where target species are observed or collected. Having Category II and Category III curves for all species of concern would be ideal. In reality, no SI curves are available for many species, and SI curves that require intensive field sampling often cannot be developed under prevailing constraints on time and costs. One alternative under these circumstances is the development and interim use of SI curves based on expert opinion. The Delphi technique (Pill 1971; Delbecq et al. 1975; Linstone and Turoff 1975) is one method used for combining the knowledge and opinions of a group of experts. The purpose of this report is to describe how the Delphi technique may be used to develop expert-opinion-based SI curves.

  2. The effects of cations and anions on hydrogen chemisorption at Pt

    NASA Technical Reports Server (NTRS)

    Huang, J. C.; Ogrady, W. E.; Yeager, E.

    1977-01-01

    Experimental evidence based on linear sweep voltammetry is presented to substantiate the view that ionic adsorption substantially shifts electrode potentials in addition to the relative heights of the hydrogen adsorption peaks. HClO4 and HF are chosen as better reference electrolytes for anion studies. The voltammetry curves for 0.1M HF and 0.1M HClO4 as well as the effect of adding successively increasing amounts of H2SO4 to these electrolytes are discussed. The measurements are also extended to alkaline solutions. Mechanisms whereby the addition of various cations and anions to electrolytes such as HF and HClO4 can induce changes in the structure of the hydrogen adsorption region in the voltammetry curves are identified: (1) blocking of sites by anion adsorption and coupling of hydrogen adsorption and anion desorption, (2) modification in the hydrogen adsorption energies for sites adjacent to adsorbed anions, (3) changes in the potential distribution across the interface, and (4) surface restructuring.

  3. Precipitation frequency analysis based on regional climate simulations in Central Alberta

    NASA Astrophysics Data System (ADS)

    Kuo, Chun-Chao; Gan, Thian Yew; Hanrahan, Janel L.

    2014-03-01

    A Regional Climate Model (RCM), MM5 (the Fifth Generation Pennsylvania State University/National Center for Atmospheric Research mesoscale model), is used to simulate summer precipitation in Central Alberta. MM5 was set up with a one-way, three-domain nested framework, with domain resolutions of 27, 9, and 3 km, respectively, and forced with ERA-Interim reanalysis data of ECMWF (European Centre for Medium-Range Weather Forecasts). The objective is to develop high-resolution, grid-based Intensity-Duration-Frequency (IDF) curves based on the simulated annual maximums of precipitation (AMP) data for durations ranging from 15 min to 24 h. The performance of MM5 was assessed in terms of simulated rainfall intensity, precipitable water, and 2-m air temperature. Next, the grid-based IDF curves derived from MM5 were compared to IDF curves derived from six RCMs of the North American Regional Climate Change Assessment Program (NARCCAP) set up with 50-km grids, driven with NCEP-DOE (National Centers for Environmental Prediction-Department of Energy) Reanalysis II data, and to regional IDF curves derived from observed rain gauge data (RG-IDF). The results indicate that 6-h simulated precipitable water and 2-m temperature agree well with the ERA-Interim reanalysis data. However, compared to RG-IDF curves, IDF curves based on simulated precipitation data of MM5 are overestimated, especially the curves for the 2-year return period. In contrast, IDF curves developed from NARCCAP data suffer from underestimation and differ more from RG-IDF curves than the MM5 IDF curves do. The overestimation of the MM5 IDF curves was corrected by a quantile-based bias correction method. By dynamically downscaling the ERA-Interim data and applying bias correction, it is possible to develop IDF curves useful for regions with limited or no rain gauge data. This estimation process can be further extended to predict future grid-based IDF curves subject to possible climate change impacts based on climate change projections of GCMs (general circulation models) of the IPCC (Intergovernmental Panel on Climate Change).
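    The quantile-based bias correction mentioned above is commonly implemented as empirical quantile mapping: pass each simulated value through the simulated CDF, then through the inverse observed CDF. A generic sketch, not the study's exact implementation:

```python
import numpy as np

def quantile_map(simulated, observed, values):
    """Empirical quantile mapping for bias correction. 'simulated' and
    'observed' are calibration samples (e.g. annual maximum precipitation
    from the model and from rain gauges); 'values' are new simulated
    values to correct."""
    sim_sorted = np.sort(np.asarray(simulated, dtype=float))
    obs_sorted = np.sort(np.asarray(observed, dtype=float))
    sim_probs = (np.arange(sim_sorted.size) + 0.5) / sim_sorted.size
    obs_probs = (np.arange(obs_sorted.size) + 0.5) / obs_sorted.size
    quantiles = np.interp(values, sim_sorted, sim_probs)  # value -> quantile
    return np.interp(quantiles, obs_probs, obs_sorted)    # quantile -> value
```

    If the model systematically doubles observed intensities, a simulated value of 100 maps back to roughly 50, which is why the correction pulls the overestimated MM5 IDF curves toward the gauge-based ones.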

  4. Additional boundary conditions and surface exciton dispersion relations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rimbey, P.R.

    1977-01-15

    The surface-exciton dispersion curves in ZnO are derived from the surface impedances developed by Fuchs and Kliewer (FK) and Rimbey and Mahan (RM) including retardation. There exists a distinctive splitting between the two dispersions, the FK additional boundary conditions having longitudinal character, the RM additional boundary conditions being transverse. Surface-mode attenuation due to spatial dispersion is more pronounced in the RM formalism, although inclusion of a phenomenological damping parameter does not alter either dispersion curve. (AIP)

  5. Efficient Implementation of the Pairing on Mobilephones Using BREW

    NASA Astrophysics Data System (ADS)

    Yoshitomi, Motoi; Takagi, Tsuyoshi; Kiyomoto, Shinsaku; Tanaka, Toshiaki

    Pairing based cryptosystems can accomplish novel security applications such as ID-based cryptosystems, which have not been constructed efficiently without the pairing. The processing speed of pairing based cryptosystems is relatively slow compared with other conventional public key cryptosystems. However, several efficient algorithms for computing the pairing have been proposed, namely the Duursma-Lee algorithm and its variant, the ηT pairing. In this paper, we present an efficient implementation of the pairing over some mobilephones. Moreover, we compare the processing speed of the pairing with that of other standard public key cryptosystems, i.e., the RSA cryptosystem and the elliptic curve cryptosystem. Indeed, the processing speed of our implementation on ARM9 processors on BREW achieves under 100 milliseconds using the supersingular curve over F(3^97). In addition, the pairing is more efficient than the other public key cryptosystems, and it is fast enough to run on BREW mobilephones, making it practical to implement security applications such as short signatures, ID-based cryptosystems, or broadcast encryption using the pairing on BREW mobilephones.

  6. Estimating the Area Under ROC Curve When the Fitted Binormal Curves Demonstrate Improper Shape.

    PubMed

    Bandos, Andriy I; Guo, Ben; Gur, David

    2017-02-01

    The "binormal" model is the most frequently used tool for parametric receiver operating characteristic (ROC) analysis. Binormal ROC curves can have "improper" (non-concave) shapes that are unrealistic in many practical applications, and several tools (e.g., PROPROC) have been developed to address this problem. However, due to the general robustness of binormal ROCs, the improperness of the fitted curves might carry little consequence for inferences about global summary indices, such as the area under the ROC curve (AUC). In this work, we investigate the effect of severe improperness of fitted binormal ROC curves on the reliability of AUC estimates when the data arise from an actually proper curve. We designed theoretically proper ROC scenarios that induce a severely improper shape of the fitted binormal curves in the presence of well-distributed empirical ROC points. The binormal curves were fitted using the maximum likelihood approach. Using simulations, we estimated the frequency of severely improper fitted curves, the bias of the estimated AUC, and the coverage of 95% confidence intervals (CIs). In Appendix S1, we provide additional information on percentiles of the distribution of AUC estimates and on bias when estimating partial AUCs. We also compared the results to a reference standard provided by empirical estimates obtained from continuous data. We observed up to 96% severely improper curves, depending on the scenario in question. The bias in the binormal AUC estimates was very small and the coverage of the CIs was close to nominal, whereas the estimates of partial AUC were biased upward in the high-specificity range and downward in the low-specificity range. Compared to a non-parametric approach, the binormal model led to slightly more variable AUC estimates, but at the same time to CIs with more appropriate coverage. The improper shape of the fitted binormal curve, by itself, i.e., in the presence of a sufficient number of well-distributed points, does not imply unreliable AUC-based inferences. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
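
    The AUC summary discussed above has a convenient closed form under the standard binormal parameterization, TPF = Φ(a + b·Φ⁻¹(FPF)): the area under the curve is AUC = Φ(a/√(1+b²)), where Φ is the standard normal CDF. A minimal sketch of that formula (the maximum likelihood fitting itself is omitted; this is not the authors' code):

    ```python
    from math import erf, sqrt

    def norm_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def binormal_auc(a, b):
        """AUC of a binormal ROC curve with intercept a and slope b,
        i.e. TPF = Phi(a + b * Phi^-1(FPF))."""
        return norm_cdf(a / sqrt(1.0 + b * b))

    # a = 0 gives the chance diagonal (AUC = 0.5) regardless of b; a fitted
    # slope b != 1 always produces an improper (non-concave) region somewhere,
    # yet the AUC summary can remain well behaved, as the study reports.
    print(binormal_auc(0.0, 1.0))   # 0.5
    print(binormal_auc(1.5, 0.5))   # a high-performance curve with b != 1
    ```

    Note that the formula depends on a and b only through a/√(1+b²), which is one reason global AUC inferences are robust to improper fitted shapes.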

  7. New Bouncing Curved Arrow Technique for the Depiction of Organic Mechanisms

    ERIC Educational Resources Information Center

    Straumanis, Andrei R.; Ruder, Suzanne M.

    2009-01-01

    Many students fail to develop a conceptual understanding of organic chemistry. Evidence suggests this failure goes hand-in-hand with a failure to grasp the techniques, meaning, and usefulness of curved arrow notation. Use of curved arrow notation to illustrate electrophilic addition appears to be a critical juncture in student understanding.…

  8. Nonlinear bulging factor based on R-curve data

    NASA Technical Reports Server (NTRS)

    Jeong, David Y.; Tong, Pin

    1994-01-01

    In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.

  9. An Enhanced Biometric Based Authentication with Key-Agreement Protocol for Multi-Server Architecture Based on Elliptic Curve Cryptography.

    PubMed

    Reddy, Alavalapati Goutham; Das, Ashok Kumar; Odelu, Vanga; Yoo, Kee-Young

    2016-01-01

    Biometric-based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and associated constraints. Lu et al. recently proposed a robust biometric-based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. The careful investigation in this paper shows that Lu et al.'s protocol does not provide user anonymity or perfect forward secrecy, and is susceptible to server and user impersonation attacks, man-in-the-middle attacks and clock synchronization problems. In addition, this paper proposes an enhanced biometric-based authentication with key-agreement protocol for multi-server architectures based on elliptic curve cryptography using smartcards. We prove that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and the performance analysis demonstrate that the proposed protocol is robust and efficient compared to Lu et al.'s protocol and existing similar protocols.
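
    Protocols of this kind build on elliptic curve group operations. As a hedged illustration only, the sketch below implements point addition and double-and-add scalar multiplication over a toy textbook curve (y² = x³ + 2x + 2 over GF(17), generator (5, 1) of order 19) — nothing like the cryptographically sized curves such a protocol would actually use:

    ```python
    P_MOD = 17
    A, B = 2, 2   # toy curve y^2 = x^3 + A*x + B over GF(17); B enters only
                  # through curve membership, not the addition law below

    def inv_mod(x):
        """Modular inverse via Fermat's little theorem (P_MOD is prime)."""
        return pow(x, P_MOD - 2, P_MOD)

    def ec_add(P, Q):
        """Add two curve points; None represents the point at infinity."""
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % P_MOD == 0:
            return None                      # P + (-P) = infinity
        if P == Q:
            s = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P_MOD
        else:
            s = (y2 - y1) * inv_mod(x2 - x1) % P_MOD
        x3 = (s * s - x1 - x2) % P_MOD
        y3 = (s * (x1 - x3) - y1) % P_MOD
        return (x3, y3)

    def ec_mul(k, P):
        """Double-and-add scalar multiplication, the workhorse of ECC key agreement."""
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P)
            P = ec_add(P, P)
            k >>= 1
        return R

    # Diffie-Hellman-style agreement: both parties derive the same shared point
    G = (5, 1)
    shared1 = ec_mul(6, ec_mul(7, G))
    shared2 = ec_mul(7, ec_mul(6, G))
    ```

    The commutativity of scalar multiplication (k1·(k2·G) = k2·(k1·G)) is exactly what the key-agreement phase of such protocols relies on.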

  10. Diatomic predissociation line widths

    NASA Technical Reports Server (NTRS)

    Child, M. S.

    1973-01-01

    Predissociation by rotation and curve crossing in diatomic molecules is discussed. The pattern of predissociation line widths is seen as providing a highly sensitive yardstick for the determination of unknown potential curves. In addition, the computation of such a pattern for given potential curves is considered a matter of routine, unless the predissociation happens to occur from an adiabatic potential curve. Analytic formulas are used to provide physical insight into the details of the predissociation pattern, to the extent that a direct inversion procedure is developed for determination of the repulsive potential curves for Type 1 predissociations.

  11. [Individual learning curve for radical robot-assisted prostatectomy based on the example of three professionals working in one clinic].

    PubMed

    Rasner, P I; Pushkar', D Iu; Kolontarev, K B; Kotenkov, D V

    2014-01-01

    The appearance of a new surgical technique always requires evaluation of its effectiveness and ease of acquisition. A comparative study of the results of the first three series of consecutive robot-assisted radical prostatectomies (RARP), performed concurrently by three surgeons, was conducted. Each series consisted of 40 procedures and was divided into 4 groups of 10 operations for analysis. Comparison of the data revealed statistically significant improvement of intra- and postoperative performance within each series as the number of operations performed increased, and in each subsequent group compared with the preceding one. We recommend planning for conversion at the first operation. In our study, previous laparoscopic experience did not provide any significant advantage in the acquisition of the robot-assisted technique. To characterize the individual learning curve, we recommend using the number of operations that the surgeon observed live and/or in which he participated as an assistant before beginning his own surgical activity, as well as the indicator "technical defect". In addition to the term "individual learning curve", we propose to introduce the terms "surgeon's individual training phase" and "clinic's learning curve".

  12. Analysis of diffusion in curved surfaces and its application to tubular membranes

    PubMed Central

    Klaus, Colin James Stockdale; Raghunathan, Krishnan; DiBenedetto, Emmanuele; Kenworthy, Anne K.

    2016-01-01

    Diffusion of particles in curved surfaces is inherently complex compared with diffusion in a flat membrane, owing to the nonplanarity of the surface. The consequence of such nonplanar geometry on diffusion is poorly understood but is highly relevant in the case of cell membranes, which often adopt complex geometries. To address this question, we developed a new finite element approach to model diffusion on curved membrane surfaces based on solutions to Fick’s law of diffusion and used this to study the effects of geometry on the entry of surface-bound particles into tubules by diffusion. We show that variations in tubule radius and length can distinctly alter diffusion gradients in tubules over biologically relevant timescales. In addition, we show that tubular structures tend to retain concentration gradients for a longer time compared with a comparable flat surface. These findings indicate that sorting of particles along the surfaces of tubules can arise simply as a geometric consequence of the curvature without any specific contribution from the membrane environment. Our studies provide a framework for modeling diffusion in curved surfaces and suggest that biological regulation can emerge purely from membrane geometry. PMID:27733625
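
    The abstract's finite element model is not reproduced here, but the underlying physics is Fick's law. As a minimal sketch (an explicit finite-difference scheme, not the authors' FEM), axial diffusion along a tubule reduces to the 1D diffusion equation with no-flux ends; the flux form below conserves total concentration, which is the property that makes gradient retention measurable:

    ```python
    import numpy as np

    def diffuse_1d(c0, D=1.0, dx=1.0, dt=0.2, steps=100):
        """Explicit finite-difference diffusion with no-flux (reflecting) ends.

        Written in flux form so that total mass is conserved exactly.
        Stability requires dt <= dx**2 / (2 * D).
        """
        c = np.asarray(c0, dtype=float).copy()
        for _ in range(steps):
            J = -D * np.diff(c) / dx          # Fickian flux between adjacent cells
            dc = np.zeros_like(c)
            dc[0] = -J[0] / dx                # zero flux through the left end
            dc[-1] = J[-1] / dx               # zero flux through the right end
            dc[1:-1] = -(J[1:] - J[:-1]) / dx
            c += dt * dc
        return c

    # Step profile: particles initially confined to one half of the tubule axis.
    c0 = np.concatenate([np.ones(50), np.zeros(50)])
    c = diffuse_1d(c0)
    # The concentration gradient relaxes while total mass stays constant.
    ```

    A full treatment of curvature effects requires the surface Laplacian on the actual geometry, as in the paper; this 1D reduction only illustrates the gradient-relaxation behavior being compared between tubular and flat domains.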

  13. Solidification and Microstructure of Ni-Containing Al-Si-Cu Alloy

    NASA Astrophysics Data System (ADS)

    Fang, Li; Ren, Luyang; Geng, Xinyu; Hu, Henry; Nie, Xueyuan; Tjong, Jimi

    2018-01-01

    A 2 wt.% nickel (Ni) addition was introduced into a conventional cast aluminum alloy, A380. The influence of the transition alloying element nickel on the solidification behavior of cast aluminum alloy A380 was investigated via thermal analyses based on temperature measurements recorded as cooling curves. The corresponding first and second derivatives of the cooling curves were derived to reveal the details of phase changes during solidification. The nucleation of the primary α-Al phase and eutectic phases was analyzed. Microstructure analyses by scanning electron microscopy (SEM) with energy dispersive X-ray spectroscopy (EDS) indicate that different types and amounts of eutectic phases are present in the two tested alloys. The introduction of Ni forms complex Ni-containing intermetallic phases with Cu and Al.
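
    The thermal-analysis step described above — locating phase changes from cooling-curve derivatives — can be sketched on synthetic data. The curve shape and arrest position below are invented for illustration, not measurements from the alloy study:

    ```python
    import numpy as np

    t = np.linspace(0.0, 100.0, 1001)   # time, s (hypothetical)
    # Newtonian cooling plus a latent-heat arrest near t = 40 s (synthetic)
    T = 20.0 + 700.0 * np.exp(-t / 80.0) + 40.0 * np.exp(-((t - 40.0) / 4.0) ** 2)

    dT = np.gradient(T, t)     # first derivative: cooling rate
    d2T = np.gradient(dT, t)   # second derivative: inflections bracketing the arrest

    # During a thermal arrest the cooling rate rises sharply as latent heat is
    # released; the maximum of the first derivative localizes the arrest.
    arrest_time = t[np.argmax(dT)]   # lands inside the synthetic arrest window
    ```

    On real cooling curves the derivatives are noisy and are typically smoothed before peak-picking; the principle (arrests appear as excursions in dT/dt, with onsets and ends marked by d²T/dt²) is the same.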

  14. Bayesian modeling and inference for diagnostic accuracy and probability of disease based on multiple diagnostic biomarkers with and without a perfect reference standard.

    PubMed

    Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A

    2016-03-15

    The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the problem of combining multiple biomarkers into a single diagnostic criterion with the goal of improving diagnostic accuracy beyond that of any individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on the observed biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is classified as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for the cROC, namely, the combined AUC (cAUC), is used to compare a predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed for the case when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
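
    The cAUC comparison above can be illustrated with the nonparametric (Mann-Whitney) estimate of the AUC. In this sketch a simple sum score stands in for the predictive probability of disease (any monotone function of it yields the same ROC curve), and the toy numbers are invented, not data from the study:

    ```python
    import numpy as np

    def auc(diseased, healthy):
        """Mann-Whitney AUC: P(diseased score > healthy score), ties count half."""
        d = np.asarray(diseased, dtype=float)[:, None]
        h = np.asarray(healthy, dtype=float)[None, :]
        return float((d > h).mean() + 0.5 * (d == h).mean())

    # Two hypothetical biomarkers measured on the same subjects
    b1_d, b1_h = np.array([2.0, 3.0]), np.array([1.0, 2.0])
    b2_d, b2_h = np.array([3.0, 2.0]), np.array([2.0, 1.0])

    auc1 = auc(b1_d, b1_h)                 # single biomarker: one tied pair
    cauc = auc(b1_d + b2_d, b1_h + b2_h)   # combined score separates the groups
    ```

    Here the combined score achieves a higher AUC than either biomarker alone, which is the qualitative effect the cAUC metric is designed to quantify.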

  15. Graphene based tunable fractal Hilbert curve array broadband radar absorbing screen for radar cross section reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Xianjun, E-mail: xianjun.huang@manchester.ac.uk; College of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073; Hu, Zhirun

    2014-11-15

    This paper proposes a new type of graphene-based tunable radar absorbing screen. The absorbing screen consists of a Hilbert curve metal strip array and a chemical vapour deposition (CVD) graphene sheet. The graphene-based screen is not only tunable as the chemical potential of the graphene changes, but also has broadband effective absorption. The absorption bandwidth extends from 8.9 GHz to 18.1 GHz, i.e., a relative bandwidth of more than 68%, at a chemical potential of 0 eV, which is significantly wider than if the graphene sheet had not been employed. As the chemical potential varies from 0 to 0.4 eV, the central frequency of the screen can be tuned from 13.5 GHz to 19.0 GHz. In the proposed structure, the Hilbert curve metal strip array was designed to provide multiple narrow-band resonances, whereas the graphene sheet directly underneath the metal strip array provides tunability and the required average surface resistance, significantly extending the screen's operating bandwidth through broadband impedance matching and absorption. In addition, the thickness of the screen has been optimized to approach the minimum thickness limit for a nonmagnetic absorber. The working principle of this absorbing screen is studied in detail, and performance under various incident angles is presented. This work extends the application of graphene to tunable microwave radar cross section (RCS) reduction.
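
    The Hilbert curve geometry at the heart of the strip array can be generated with the classic distance-to-coordinate conversion. This sketches only the space-filling layout, not the electromagnetic design of the screen:

    ```python
    def d2xy(n, d):
        """Convert distance d along a Hilbert curve to (x, y) on an n x n grid
        (n a power of two), using the standard bit-manipulation construction."""
        x = y = 0
        t = d
        s = 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:               # rotate the quadrant as needed
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    # Successive points are always grid neighbours: the curve fills the square
    # while keeping adjacent segments close, the property that packs long
    # resonant strip lengths into a compact footprint.
    path = [d2xy(8, d) for d in range(8 * 8)]
    ```

    The locality property (each unit of curve length advances exactly one grid cell) is what makes Hilbert layouts attractive for miniaturized multi-resonant structures.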

  16. How are flood risk estimates affected by the choice of return-periods?

    NASA Astrophysics Data System (ADS)

    Ward, P. J.; de Moel, H.; Aerts, J. C. J. H.

    2011-12-01

    Flood management is increasingly adopting a risk-based approach, whereby flood risk is the product of the probability and consequences of flooding. One of the most common approaches in flood risk assessment is to estimate the damage that would occur for floods of several exceedance probabilities (or return periods), to plot these on an exceedance probability-loss curve (risk curve), and to estimate risk as the area under the curve. However, there is little insight into how the selection of the return periods (which ones and how many) used to calculate risk actually affects the final risk calculation. To gain such insights, we developed and validated an inundation model capable of rapidly simulating inundation extent and depth, and dynamically coupled this to an existing damage model. The method was applied to a section of the River Meuse in the southeast of the Netherlands. First, we estimated risk based on a risk curve using yearly return periods from 2 to 10 000 yr (€ 34 million p.a.). We found that the overall risk is greatly affected by the number of return periods used to construct the risk curve, with over-estimations of annual risk between 33% and 100% when only three return periods are used. In addition, binary assumptions on dike failure can have a large effect (a factor of two difference) on risk estimates. The minimum and maximum return periods considered in the curve also affect the risk estimate considerably. The results suggest that more research is needed to develop relatively simple inundation models that can be used to produce large numbers of inundation maps, complementary to more complex 2-D/3-D hydrodynamic models. It also suggests that research into flood risk could benefit from paying more attention to the damage caused by relatively high-probability floods.
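
    The "area under the risk curve" computation above is a trapezoidal integral of damage over exceedance probability. A minimal sketch with invented damage figures (not the Meuse study's numbers) shows how dropping return periods changes the estimate:

    ```python
    import numpy as np

    def annual_risk(return_periods, damages):
        """Expected annual damage: area under the exceedance probability-loss curve."""
        p = 1.0 / np.asarray(return_periods, dtype=float)   # exceedance probabilities
        d = np.asarray(damages, dtype=float)
        order = np.argsort(p)                               # integrate over ascending p
        p, d = p[order], d[order]
        return float(np.sum(np.diff(p) * (d[1:] + d[:-1]) / 2.0))  # trapezoidal rule

    # Hypothetical damage estimates (million EUR) for a set of return periods (yr)
    full = annual_risk([2, 10, 100, 1000], [0, 100, 500, 900])
    few = annual_risk([2, 100, 1000], [0, 500, 900])
    # Dropping the 10-yr point inflates the area between the remaining points,
    # echoing the over-estimation the study reports for sparse risk curves.
    ```

    In this toy example the three-point curve over-estimates risk because the trapezoid between the 2-yr and 100-yr points overshoots the (convex) underlying curve, which is the geometric mechanism behind the sensitivity the authors quantify.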

  17. A FEM-based method to determine the complex material properties of piezoelectric disks.

    PubMed

    Pérez, N; Carbonari, R C; Andrade, M A B; Buiochi, F; Adamowski, J C

    2014-08-01

    Numerical simulations allow the modeling of piezoelectric devices and ultrasonic transducers. However, the accuracy of the results is limited by precise knowledge of the elastic, dielectric and piezoelectric properties of the piezoelectric material. To introduce energy losses, these properties can be represented by complex numbers, where the real part of the model essentially determines the resonance frequencies and the imaginary part determines the amplitude of each resonant mode. In this work, a method based on the Finite Element Method (FEM) is modified to obtain the imaginary material properties of piezoelectric disks. The material properties are determined from the electrical impedance curve of the disk, which is measured by an impedance analyzer. The method consists of obtaining the material properties that minimize the error between the experimental and numerical impedance curves over a wide range of frequencies. The proposed methodology starts with a sensitivity analysis of each parameter, determining the influence of each parameter over a set of resonant modes. The sensitivity results are used to implement a preliminary algorithm that approaches the solution, in order to avoid the search becoming trapped in a local minimum. The method is applied to determine the material properties of a Pz27 disk sample from Ferroperm. The obtained properties are used to calculate the electrical impedance curve of the disk with a Finite Element algorithm, which is compared with the experimental electrical impedance curve. Additionally, the results were validated by comparing the numerical displacement profile with the displacements measured by a laser Doppler vibrometer. The comparison between the numerical and experimental results shows excellent agreement for both the electrical impedance curve and the displacement profile over the disk surface.
The agreement between numerical and experimental displacement profiles shows that, although only the electrical impedance curve is considered in the adjustment procedure, the obtained material properties allow simulating the displacement amplitude accurately. Copyright © 2014 Elsevier B.V. All rights reserved.
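
    The core of the adjustment procedure above is a curve-matching minimization. A toy analogue is sketched below, with a series RLC impedance magnitude standing in for the FEM impedance model and a coarse grid search standing in for the authors' sensitivity-guided algorithm; all component values are invented:

    ```python
    import numpy as np

    C = 1e-6                          # fixed capacitance, F (assumed known)
    f = np.linspace(1e3, 5e3, 200)    # frequency sweep, Hz

    def zmag(f, R, L):
        """|Z| of a series RLC branch: a stand-in for the FEM impedance model."""
        w = 2.0 * np.pi * f
        return np.sqrt(R ** 2 + (w * L - 1.0 / (w * C)) ** 2)

    # Synthetic "measured" impedance curve generated from known parameters
    R_true, L_true = 50.0, 2e-3
    z_meas = zmag(f, R_true, L_true)

    # Minimize the squared error between measured and modelled curves over a
    # (deliberately tiny) parameter grid; the true pair lies on the grid
    err, R_fit, L_fit = min(
        (float(np.sum((zmag(f, R, L) - z_meas) ** 2)), R, L)
        for R in (10.0, 50.0, 90.0)
        for L in (1e-3, 2e-3, 3e-3)
    )
    ```

    In the paper the same error functional is minimized over the complex material constants of the disk, with a sensitivity analysis deciding which resonant modes constrain which parameters; the grid search here only illustrates the fitting objective.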

  18. Application of the Advanced Distillation Curve Method to the Comparison of Diesel Fuel Oxygenates: 2,5,7,10-Tetraoxaundecane (TOU), 2,4,7,9-Tetraoxadecane (TOD), and Ethanol/Fatty Acid Methyl Ester (FAME) Mixtures.

    PubMed

    Burger, Jessica L; Lovestead, Tara M; LaFollette, Mark; Bruno, Thomas J

    2017-08-17

    Although they are amongst the most efficient engine types, compression-ignition engines have difficulty achieving acceptable levels of particulate emissions and NOx formation. Indeed, catalytic after-treatment of diesel exhaust has become common, and current efforts to reformulate diesel fuels have concentrated on the incorporation of oxygenates into the fuel. One of the best ways to characterize changes to a fuel upon the addition of oxygenates is to examine the volatility of the fuel mixture. In this paper, we present the volatility, as measured by the advanced distillation curve method, of a prototype diesel fuel with novel diesel fuel oxygenates: 2,5,7,10-tetraoxaundecane (TOU), 2,4,7,9-tetraoxadecane (TOD), and ethanol/fatty acid methyl ester (FAME) mixtures. We present results for the initial boiling behavior and the distillation curve temperatures, and we track the oxygenates throughout the distillations. These diesel fuel blends have several interesting thermodynamic properties that have not been seen in our previous oxygenate studies. Ethanol reduces the temperatures observed early in the distillation (near ethanol's boiling temperature). After these early distillation points (once the ethanol has distilled out), B100 has the greatest impact on the remaining distillation curve and shifts the curve to higher temperatures than is seen for diesel fuel/ethanol blends. In fact, for the 15% B100 mixture, most of the distillation curve reaches temperatures higher than those seen for diesel fuel alone. In addition, blends with TOU and TOD also exhibited uncommon characteristics. These additives are unusual because they distill over most of the distillation curve (up to 70%). The effects can be seen both in histograms of oxygenate concentration in the distillate cuts and in the distillation curves.
Our purpose for studying these oxygenate blends is consistent with our vision for replacing fit-for-purpose properties with fundamental properties to enable the development of equations of state that can describe the thermodynamic properties of complex mixtures, with specific attention paid to additives.

  19. The Effects of Autocorrelation on the Curve-of-Factors Growth Model

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Beretvas, S. Natasha; Pituch, Keenan A.

    2011-01-01

    This simulation study examined the performance of the curve-of-factors model (COFM) when autocorrelation and growth processes were present in the first-level factor structure. In addition to the standard curve-of factors growth model, 2 new models were examined: one COFM that included a first-order autoregressive autocorrelation parameter, and a…

  20. Quantifying the uncertainty in discharge data using hydraulic knowledge and uncertain gaugings: a Bayesian method named BaRatin

    NASA Astrophysics Data System (ADS)

    Le Coz, Jérôme; Renard, Benjamin; Bonnifait, Laurent; Branger, Flora; Le Boursicaud, Raphaël; Horner, Ivan; Mansanarez, Valentin; Lang, Michel; Vigneau, Sylvain

    2015-04-01

    River discharge is a crucial variable for hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, the discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we describe a Bayesian approach (cf. Le Coz et al., 2014) to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. The rating curve uncertainties combine the parametric uncertainties and the remnant uncertainties that reflect the limited accuracy of the mathematical model used to simulate the physical stage-discharge relation. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, or sudden changes in the geometry of the section).
An operational version of the BaRatin software and its graphical interface are made available free of charge on request to the authors. J. Le Coz, B. Renard, L. Bonnifait, F. Branger, R. Le Boursicaud (2014). Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: a Bayesian approach, Journal of Hydrology, 509, 573-587.
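
    The rating curve equation identified at step (1) is typically a power law from hydraulic control analysis, Q = a (h - b)^c. A minimal deterministic sketch (log-log least squares on synthetic gaugings with the offset b known and zero; not the Bayesian BaRatin inference, which additionally carries gauging and remnant uncertainties):

    ```python
    import numpy as np

    # Synthetic gaugings: stage h (m above the control datum) and discharge Q (m^3/s)
    h = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
    Q = 5.0 * h ** 1.6                    # generated from a = 5, c = 1.6 (invented)

    # Fit log Q = log a + c log h by least squares
    c_fit, log_a = np.polyfit(np.log(h), np.log(Q), 1)
    a_fit = np.exp(log_a)

    # With noise-free gaugings the hydraulic parameters are recovered exactly;
    # real gaugings carry individual uncertainties, which is where the Bayesian
    # treatment earns its keep.
    ```

    In BaRatin the exponent c is constrained by a prior reflecting the type of hydraulic control (e.g. channel versus weir control), rather than left free as in this sketch.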

  1. STACCATO: a novel solution to supernova photometric classification with biased training sets

    NASA Astrophysics Data System (ADS)

    Revsbech, E. A.; Trotta, R.; van Dyk, D. A.

    2018-01-01

    We present a new solution to the problem of classifying Type Ia supernovae from their light curves alone, given a spectroscopically confirmed but biased training set, circumventing the need to obtain an observationally expensive unbiased training set. We use Gaussian processes (GPs) to model the light curves of the supernovae (SNe), and demonstrate that the choice of covariance function has only a small influence on the GPs' ability to accurately classify SNe. We extend and improve the approach of Richards et al. - a diffusion map combined with a random forest classifier - to deal specifically with the case of biased training sets. We propose a novel method called Synthetically Augmented Light Curve Classification (STACCATO) that synthetically augments a biased training set by generating additional training data from the fitted GPs. Key to the success of the method is the partitioning of the observations into subgroups based on their propensity score of being included in the training set. Using simulated light curve data, we show that STACCATO increases performance, as measured by the area under the receiver operating characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977 obtained using the 'gold standard' of an unbiased training set and significantly improving on the previous best result of 0.88. STACCATO also increases the true positive rate for SNIa classification by up to a factor of 50 for high-redshift/low-brightness SNe.

  2. Static and fatigue interlaminar tensile characterization of laminated composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koudela, K.L.; Strait, L.H.; Caiazzo, A.A.

    1997-12-31

    Spool and curved-beam specimens were evaluated to determine the viability of using either one or both of these configurations to characterize the static and fatigue interlaminar tensile behavior of carbon/epoxy laminates. Unidirectional curved-beam and quasi-isotropic spool specimens were fabricated, nondestructively inspected, and statically tested to failure. Tension-tension fatigue tests were conducted at 10 Hz and an R-ratio (σ_min/σ_max) equal to 0.1 for each specimen configuration. The interlaminar tensile strength of the spool specimen was 12% larger than the strength obtained using curved-beam specimens. In addition, data scatter associated with spool specimens was significantly less than the scatter associated with curved-beam specimens. The difference in data scatter was attributed to the influence of the fabrication process on the quality of the laminates tested. The fatigue limit at 10^7 cycles for both specimen types was shown to be at least 40% of the average interlaminar tensile strength. Based on the results of this study, it was concluded that either the spool or the curved-beam specimens can be used to characterize the interlaminar tensile static and fatigue behavior of carbon/epoxy laminates. However, to obtain the most representative results, the test specimen configuration should be selected so that the specimen fabrication process closely simulates the actual component fabrication process.

  3. Consistency assessment of rating curve data in various locations using Bidirectional Reach (BReach)

    NASA Astrophysics Data System (ADS)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Coxon, Gemma; Freer, Jim; Verhoest, Niko E. C.

    2017-10-01

    When estimating discharges through rating curves, temporal data consistency is a critical issue. In this research, consistency in stage-discharge data is investigated using a methodology called Bidirectional Reach (BReach), which is based on a definition of consistency commonly used in operational hydrology. A period is considered consistent if no consecutive and systematic deviations from the current situation occur that exceed observational uncertainty. The capability of a rating curve model to describe a subset of the (chronologically sorted) data is therefore assessed at each observation by indicating the outermost data points for which the rating curve model behaves satisfactorily. These points are called the maximum left or right reach, depending on the direction of the investigation. This temporal reach should not be confused with a spatial reach (a stretch of a river). Changes in these reaches throughout the data series indicate possible changes in data consistency which, if not resolved, could introduce additional errors and biases. In this research, various measurement stations in the UK, New Zealand and Belgium were selected based on their significant historical ratings information and their specific characteristics related to data consistency. For each country, regional information is used as fully as possible to estimate observational uncertainty. Based on this uncertainty, a BReach analysis is performed and, subsequently, the results are validated against available knowledge about the history and behavior of the site. For all investigated cases, the methodology provides results that are consistent with this knowledge of historical changes and thus facilitates a reliable assessment of (in)consistent periods in stage-discharge measurements.
This assessment is not only useful for the analysis and determination of discharge time series, but also to enhance applications based on these data (e.g., by informing hydrological and hydraulic model evaluation design about consistent time periods to analyze).

  4. The Eclipsing Binary On-Line Atlas (EBOLA)

    NASA Astrophysics Data System (ADS)

    Bradstreet, D. H.; Steelman, D. P.; Sanders, S. J.; Hargis, J. R.

    2004-05-01

    In conjunction with the upcoming release of Binary Maker 3.0, an extensive on-line database of eclipsing binaries is being made available. The purposes of the atlas are to: (1) allow quick and easy access to information on published eclipsing binaries; (2) amass a consistent database of light and radial velocity curve solutions to aid in solving new systems; (3) provide querying capabilities on all of the parameters of the systems so that informative research can be quickly accomplished on a multitude of published results; (4) aid observers in establishing new observing programs based upon stars needing new light and/or radial velocity curves; (5) encourage workers to submit their published results so that others may have easy access to their work; and (6) provide a vast but easily accessible storehouse of information on eclipsing binaries to accelerate the process of understanding analysis techniques and current work in the field. The database will eventually consist of all published eclipsing binaries with light curve solutions. The following information and data will be supplied whenever available for each binary: original light curves in all bandpasses, original radial velocity observations, light curve parameters, RA and Dec, V-magnitudes, spectral types, color indices, periods, binary type, a 3D representation of the system near quadrature, plots of the original light curves and synthetic models, plots of the radial velocity observations with theoretical models, and Binary Maker 3.0 data files (parameter, light curve, radial velocity). The pertinent references for each star are also given, with hyperlinks directly to the papers via the NASA Abstract website for downloading, if available. In addition, the Atlas has extensive searching options so that workers can search for binaries with specific characteristics. The website has more than 150 systems already uploaded. The URL for the site is http://ebola.eastern.edu/.

  5. Technical refinement and learning curve for attenuating neurapraxia during robotic-assisted radical prostatectomy to improve sexual function.

    PubMed

    Alemozaffar, Mehrdad; Duclos, Antoine; Hevelone, Nathanael D; Lipsitz, Stuart R; Borza, Tudor; Yu, Hua-Yin; Kowalczyk, Keith J; Hu, Jim C

    2012-06-01

    While surgeon learning curves for radical prostatectomy have been characterized by less blood loss, shorter operative times, and fewer positive margins, there is a dearth of studies characterizing learning curves for improving sexual function. Additionally, while learning curve studies often define volume thresholds for improvement, few demonstrate specific technical modifications that allow reproducibility of improved outcomes. Our objective was to demonstrate and quantify the learning curve for improving sexual function outcomes based on technical refinements that reduce neurovascular bundle displacement during nerve-sparing robot-assisted radical prostatectomy (RARP). We performed a retrospective study of 400 consecutive RARPs, categorized into groups of 50, performed after elimination of continuous surgeon/assistant neurovascular bundle countertraction. Our approach to RARP has been described previously. A single-console robotic system was used for all cases. Expanded Prostate Cancer Index Composite sexual function was measured within 1 yr of RARP. Linear regression was performed to determine factors influencing the recovery of sexual function. Greater surgeon experience was associated with better 5-mo sexual function (p = 0.007) and a trend toward better 12-mo sexual function (p = 0.061), with improvement plateauing after 250-300 cases. Additionally, younger patient age (both p < 0.02) and better preoperative sexual function (p < 0.001) were associated with better 5- and 12-mo sexual function. Moreover, trainee robotic console time during nerve sparing was associated with worse 12-mo sexual function (p = 0.021), while unilateral nerve sparing/non-nerve sparing was associated with worse 5-mo sexual function (p = 0.009). Limitations include the retrospective single-surgeon design. With greater surgeon experience, attenuating lateral displacement of the neurovascular bundle and the resultant neurapraxia improves postoperative sexual function.
However, to maximize outcomes, appropriate patient selection must be exercised when allowing trainee nerve-sparing involvement. Copyright © 2012 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  6. On the reduction of occultation light curves. [stellar occultations by planets

    NASA Technical Reports Server (NTRS)

    Wasserman, L.; Veverka, J.

    1973-01-01

    The two basic methods of reducing occultation light curves - curve fitting and inversion - are reviewed and compared. It is shown that the curve fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.

  7. Clinical prognostic rules for severe acute respiratory syndrome in low- and high-resource settings.

    PubMed

    Cowling, Benjamin J; Muller, Matthew P; Wong, Irene O L; Ho, Lai-Ming; Lo, Su-Vui; Tsang, Thomas; Lam, Tai Hing; Louie, Marie; Leung, Gabriel M

    2006-07-24

    An accurate prognostic model for patients with severe acute respiratory syndrome (SARS) could provide a practical clinical decision aid. We developed and validated prognostic rules for both high- and low-resource settings based on data available at the time of admission. We analyzed data on all 1755 and 291 patients with SARS in Hong Kong (derivation cohort) and Toronto (validation cohort), respectively, using a multivariable logistic scoring method with internal and external validation. Scores were assigned on the basis of patient history in a basic model, and a full model additionally incorporated radiological and laboratory results. The main outcome measure was death. Predictors for mortality in the basic model included older age, male sex, and the presence of comorbid conditions. Additional predictors in the full model included haziness or infiltrates on chest radiography, less than 95% oxygen saturation on room air, high lactate dehydrogenase level, and high neutrophil and low platelet counts. The basic model had an area under the receiver operating characteristic (ROC) curve of 0.860 in the derivation cohort, which was maintained on external validation with an area under the ROC curve of 0.882. The full model improved discrimination with areas under the ROC curve of 0.877 and 0.892 in the derivation and validation cohorts, respectively. The model performs well and could be useful in assessing prognosis for patients who are infected with re-emergent SARS.
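    The discrimination statistics reported above are areas under the receiver operating characteristic (ROC) curve. As a reminder of what that number measures, here is a minimal sketch (toy data, not the study's cohort) using the rank-based identity AUC = P(score of a random positive > score of a random negative):

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity.

    AUC = P(score of a random positive > score of a random negative),
    counting ties as 1/2.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: a higher risk score should track mortality (label 1).
scores = [3, 7, 9, 2, 8, 5]
labels = [0, 1, 1, 0, 1, 0]
print(roc_auc(scores, labels))  # 1.0: these scores separate the classes perfectly
```

    An AUC of 0.86-0.89, as in the abstract, means a randomly chosen non-survivor outranks a randomly chosen survivor about 86-89% of the time.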

  8. Influence of kinetics on the determination of the surface reactivity of oxide suspensions by acid-base titration.

    PubMed

    Duc, M; Adekola, F; Lefèvre, G; Fédoroff, M

    2006-11-01

    The effect of acid-base titration protocol and speed on pH measurement and surface charge calculation was studied on suspensions of gamma-alumina, hematite, goethite, and silica, whose size and porosity have been well characterized. The titration protocol has an important effect on surface charge calculation as well as on acid-base constants obtained by fitting of the titration curves. Variations of pH versus time after addition of acid or base to the suspension were interpreted as diffusion processes. Resulting apparent diffusion coefficients depend on the nature of the oxide and on its porosity.

  9. [Colorimetric characterization of LCD based on wavelength partition spectral model].

    PubMed

    Liu, Hao-Xue; Cui, Gui-Hua; Huang, Min; Wu, Bing; Xu, Yan-Fang; Luo, Ming

    2013-10-01

    To establish a colorimetric characterization model of LCDs, an experiment with EIZO CG19, IBM 19, DELL 19 and HP 19 LCDs was designed and carried out to test the interaction between RGB channels and the spectral additivity property of LCDs. The RGB digital values of single channels and channel pairs were set and the corresponding tristimulus values were measured; the results were then plotted and analyzed to test the independence of the RGB channels. The results showed that the interaction between channels was reasonably weak and that the spectral additivity property held well. We also found that the relation between radiance and digital value varied across wavelengths, i.e., it is a function of wavelength. A new calculation method based on a piecewise spectral model, in which the relation between radiance and digital value is fitted by a cubic polynomial in each wavelength band using measured spectral radiance curves, was proposed and tested. The spectral radiance curves of the RGB primaries at any digital values can be obtained from only a few measurements and the fitted cubic polynomials, and any displayed color can then be reproduced through the spectral additivity of the primaries at the given digital values. The algorithm of this method is discussed in detail in this paper. The computations showed that the proposed method is simple and greatly reduces the number of measurements needed while maintaining very high computational precision. This method can be used as a colorimetric characterization model.
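    The piecewise spectral model boils down to two steps: a cubic polynomial in the digital value for each wavelength band and channel, and spectral additivity to combine the primaries. A schematic sketch (band count and coefficients are invented for illustration, not measured display data):

```python
def radiance(d, coeffs):
    """Evaluate a per-wavelength-band cubic model of channel radiance versus
    normalized digital value d (0..1). Coefficients are illustrative only."""
    c0, c1, c2, c3 = coeffs
    return c0 + c1 * d + c2 * d * d + c3 * d ** 3

def mixed_radiance(dr, dg, db, bands_rgb):
    """Spectral additivity: the displayed color's spectrum is the sum of the
    three primaries' per-band radiances at their digital values."""
    return [radiance(dr, r) + radiance(dg, g) + radiance(db, b)
            for r, g, b in bands_rgb]

# Two illustrative wavelength bands; per band, (c0..c3) for R, G, B channels:
bands = [((0, 0.1, 0.2, 0.7), (0, 0.05, 0.1, 0.2), (0, 0.02, 0.03, 0.05)),
         ((0, 0.02, 0.05, 0.1), (0, 0.2, 0.3, 0.5), (0, 0.1, 0.2, 0.4))]
print([round(v, 4) for v in mixed_radiance(1.0, 1.0, 1.0, bands)])  # [1.45, 1.87]
```

    In the actual method the cubic coefficients would be fitted per band from a small set of measured spectral radiance curves.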

  10. The Lattice Dynamics of Colloidal Crystals.

    NASA Astrophysics Data System (ADS)

    Hurd, Alan James

    Colloidal crystals are ordered arrays of highly charged microspheres in water that exhibit spectacular optical diffraction effects by virtue of a large lattice parameter. The microspheres perform Brownian motion that is influenced by the interparticle and fluid forces. The purpose of this study was to understand the nature of the collective motions in colloidal crystals in terms of classical lattice dynamics. In the theoretical analysis, the particle displacements due to Brownian motion were formally decomposed into phonon-like lattice disturbances analogous to the phonons in atomic and molecular solids except that they are heavily damped. The analysis was based on a harmonic solid model with special attention paid to the hydrodynamic interaction between particles. A hydrodynamic model using the Oseen interaction was worked out for a three-dimensional lattice but it failed in two important respects: it overestimated the friction factor for long wavelength modes and did not predict a previously observed propagating transverse mode. Both of these failures were corrected by a hydrodynamic model based on periodic solutions to the Stokes equation. In addition, the effects of fluid inertia and constraining walls were considered. Intensity autocorrelation spectroscopy was used to probe the lattice dynamics by measuring the phonon dispersion curves. A thin-film cell was used to reduce multiple scattering to acceptable levels. An experiment to measure wall effects on Brownian motion was necessary to determine the decrease in diffusion rate inherent in the thin-film geometry. The wall effects were found to agree with macroscopic hydrodynamics. An additional experiment measured the elastic anisotropy of the crystal lattice from the thermal diffuse scattering. The theoretical dispersion curves were found to agree well with the measured curves.

  11. Statistical Analyses for Probabilistic Assessments of the Reactor Pressure Vessel Structural Integrity: Building a Master Curve on an Extract of the 'Euro' Fracture Toughness Dataset, Controlling Statistical Uncertainty for Both Mono-Temperature and multi-temperature tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josse, Florent; Lefebvre, Yannick; Todeschini, Patrick

    2006-07-01

    Assessing the structural integrity of a nuclear Reactor Pressure Vessel (RPV) subjected to pressurized-thermal-shock (PTS) transients is extremely important to safety. In addition to conventional deterministic calculations to confirm RPV integrity, Electricite de France (EDF) carries out probabilistic analyses. Probabilistic analyses are interesting because some key variables, albeit conventionally taken at conservative values, can be modeled more accurately through statistical variability. One variable which significantly affects RPV structural integrity assessment is cleavage fracture initiation toughness. The reference fracture toughness method currently in use at EDF is the RCCM and ASME Code lower-bound K{sub IC} based on the indexing parameter RT{sub NDT}. However, in order to quantify the toughness scatter for probabilistic analyses, the master curve method is being analyzed at present. Furthermore, the master curve method is a direct means of evaluating fracture toughness based on K{sub JC} data. In the framework of the master curve investigation undertaken by EDF, this article deals with the following two statistical items: building a master curve from an extract of a fracture toughness dataset (from the European project 'Unified Reference Fracture Toughness Design curves for RPV Steels') and controlling statistical uncertainty for both mono-temperature and multi-temperature tests. Concerning the first point, master curve temperature dependence is empirical in nature. To determine the 'original' master curve, Wallin postulated that a unified description of fracture toughness temperature dependence for ferritic steels is possible, and used a large number of data corresponding to nuclear-grade pressure vessel steels and welds. Our working hypothesis is that some ferritic steels may behave in slightly different ways. 
Therefore we focused exclusively on the base French reactor vessel metals of types A508 Class 3 and A533 Grade B Class 1, taking the sampling level and direction into account as well as the test specimen type. As for the second point, the emphasis is placed on the uncertainties in applying the master curve approach. For a toughness dataset based on different specimens of a single product, application of the master curve methodology requires the statistical estimation of one parameter: the reference temperature T{sub 0}. Because of the limited number of specimens, estimation of this temperature is uncertain. The ASTM standard provides a rough evaluation of this statistical uncertainty through an approximate confidence interval. In this paper, a thorough study is carried out to build more meaningful confidence intervals (for both mono-temperature and multi-temperature tests). These results ensure better control over uncertainty, and allow rigorous analysis of the impact of its influencing factors: the number of specimens and the temperatures at which they have been tested. (authors)
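    The master curve temperature dependence discussed above has the standard Wallin/ASTM E1921 form for the median toughness of 1T specimens, K_Jc(med) = 30 + 70·exp[0.019(T − T0)] MPa·√m. A quick sketch evaluating this curve and inverting it for T0 (this illustrates the curve shape only, not the paper's confidence-interval construction):

```python
import math

def kjc_median(temp_c, t0_c):
    """Master curve median fracture toughness (MPa*sqrt(m)) in the standard
    Wallin/ASTM E1921 form: 30 + 70*exp(0.019*(T - T0)), temperatures in C."""
    return 30.0 + 70.0 * math.exp(0.019 * (temp_c - t0_c))

def t0_from_median(temp_c, kjc_med):
    """Invert the master curve for the reference temperature T0, given a
    median toughness measured at a single test temperature."""
    return temp_c - math.log((kjc_med - 30.0) / 70.0) / 0.019

t0 = -80.0
print(round(kjc_median(-80.0, t0), 1))  # 100.0: by construction, K_Jc(med) = 100 at T = T0
print(round(t0_from_median(-30.0, kjc_median(-30.0, t0)), 1))  # recovers -80.0
```

    Estimating T0 from a finite, scattered K_Jc sample is exactly where the statistical uncertainty studied in the paper enters.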

  12. DFT and ab initio study of the unimolecular decomposition of the lowest singlet and triplet states of nitromethane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manaa, M.R.; Fried, L.E.

    1998-11-26

    The fully optimized potential energy curves for the unimolecular decomposition of the lowest singlet and triplet states of nitromethane through the C-NO{sub 2} bond dissociation pathway are calculated using various DFT and high-level ab initio electronic structure methods. The authors perform gradient-corrected density functional theory (DFT) and multiconfiguration self-consistent field (MCSCF) calculations to conclusively demonstrate that the triplet state of nitromethane is bound. The adiabatic curve of this state exhibits a 33 kcal/mol energy barrier as determined at the MCSCF level. DFT methods locate this barrier at a shorter C-N bond distance with 12--16 kcal/mol lower energy than does MCSCF. In addition to MCSCF and DFT, quadratic configuration interaction with single and double substitutions (QCISD) calculations are also performed for the singlet curve. The potential energy profiles of this state predicted by DFT methods based on Becke's 1988 exchange functional differ by as much as 17 kcal/mol from the predictions of MCSCF and QCISD in the vicinity of the equilibrium structure. The computational methods predict bond dissociation energies 5--9 kcal/mol lower than the experimental value. DFT techniques based on Becke's 3-parameter exchange functional show the best overall agreement with the higher level methods.

  13. Next-Generation Intensity-Duration-Frequency Curves for Hydrologic Design in Snow-Dominated Environments

    NASA Astrophysics Data System (ADS)

    Yan, Hongxiang; Sun, Ning; Wigmosta, Mark; Skaggs, Richard; Hou, Zhangshuan; Leung, Ruby

    2018-02-01

    There is a renewed focus on the design of infrastructure resilient to extreme hydrometeorological events. While precipitation-based intensity-duration-frequency (IDF) curves are commonly used as part of infrastructure design, a large percentage of peak runoff events in snow-dominated regions are caused by snowmelt, particularly during rain-on-snow (ROS) events. In these regions, precipitation-based IDF curves may lead to substantial overestimation/underestimation of design basis events and subsequent overdesign/underdesign of infrastructure. To overcome this deficiency, we proposed next-generation IDF (NG-IDF) curves, which characterize the actual water reaching the land surface. We compared NG-IDF curves to standard precipitation-based IDF curves for estimates of extreme events at 376 Snowpack Telemetry (SNOTEL) stations across the western United States that each had at least 30 years of high-quality records. We found standard precipitation-based IDF curves at 45% of the stations were subject to underdesign, many with significant underestimation of 100-year extreme events, for which the precipitation-based IDF curves can underestimate water potentially available for runoff by as much as 125% due to snowmelt and ROS events. The regions with the greatest potential for underdesign were in the Pacific Northwest, the Sierra Nevada Mountains, and the Middle and Southern Rockies. We also found the potential for overdesign at 20% of the stations, primarily in the Middle Rockies and Arizona mountains. These results demonstrate the need to consider snow processes in the development of IDF curves, and they suggest use of the more robust NG-IDF curves for hydrologic design in snow-dominated environments.
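    A common building block behind any IDF-type curve is a frequency analysis of annual maxima. A minimal sketch using a method-of-moments Gumbel fit (a generic textbook choice with invented data, not necessarily the NG-IDF procedure):

```python
import math

def gumbel_return_level(annual_maxima, return_period_yr):
    """Return level for a given return period from a Gumbel fit to annual
    maxima by the method of moments. A generic IDF building block."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi          # scale parameter
    mu = mean - 0.5772156649 * beta              # location (Euler-Mascheroni const.)
    p = 1 - 1 / return_period_yr                 # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Toy annual-maximum daily water inputs (mm), invented for illustration:
maxima = [42, 55, 61, 48, 70, 52, 66, 58, 45, 63]
print(round(gumbel_return_level(maxima, 100), 1))  # the 100-year event estimate
```

    The NG-IDF idea is to run this kind of analysis on water actually reaching the land surface (rain plus snowmelt) rather than on precipitation alone.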

  14. V1 orientation plasticity is explained by broadly tuned feedforward inputs and intracortical sharpening.

    PubMed

    Teich, Andrew F; Qian, Ning

    2010-03-01

    Orientation adaptation and perceptual learning change orientation tuning curves of V1 cells. Adaptation shifts tuning curve peaks away from the adapted orientation, reduces tuning curve slopes near the adapted orientation, and increases the responses on the far flank of tuning curves. Learning an orientation discrimination task increases tuning curve slopes near the trained orientation. These changes have been explained previously in a recurrent model (RM) of orientation selectivity. However, the RM generates only complex cells when they are well tuned, so that there is currently no model of orientation plasticity for simple cells. In addition, some feedforward models, such as the modified feedforward model (MFM), also contain recurrent cortical excitation, and it is unknown whether they can explain plasticity. Here, we compare plasticity in the MFM, which simulates simple cells, and a recent modification of the RM (MRM), which displays a continuum of simple-to-complex characteristics. Both pre- and postsynaptic-based modifications of the recurrent and feedforward connections in the models are investigated. The MRM can account for all the learning- and adaptation-induced plasticity, for both simple and complex cells, while the MFM cannot. The key features from the MRM required for explaining plasticity are broadly tuned feedforward inputs and sharpening by a Mexican hat intracortical interaction profile. The mere presence of recurrent cortical interactions in feedforward models like the MFM is insufficient; such models have more rigid tuning curves. We predict that the plastic properties must be absent for cells whose orientation tuning arises from a feedforward mechanism.

  15. Total serum IgE level influences oral food challenge tests for IgE-mediated food allergies.

    PubMed

    Horimukai, K; Hayashi, K; Tsumura, Y; Nomura, I; Narita, M; Ohya, Y; Saito, H; Matsumoto, K

    2015-03-01

    Probability curves predicting oral food challenge test (OFC) results based on specific IgE levels are widely used to prevent serious allergic reactions. Although several confounding factors are known to affect probability curves, the main factors that affect OFC outcomes are currently unclear. We hypothesized that an increased total IgE level would reduce allergic reactivity. Medical records of 337 and 266 patients who underwent OFCs for 3.5 g boiled hen's egg white and 3.1 ml raw cow's milk, respectively, were examined retrospectively. We subdivided the patients into three groups based on total IgE levels and age by percentile (<25th, 25-75th, and >75th percentiles), and logistic regression analyses were performed on each group. Patients with higher total IgE levels were significantly less responsive. In addition, age did not significantly affect the OFC results. Therefore, total IgE levels should be taken into account when predicting OFC results based on food-specific IgE levels. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
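    Probability curves of this kind are typically logistic in the log-transformed IgE levels. A sketch showing how a total-IgE covariate with a negative coefficient lowers predicted reactivity at the same specific IgE (all coefficients here are hypothetical placeholders, not the study's fitted values):

```python
import math

def reaction_probability(log_sige, log_tige, b0=-2.0, b1=1.2, b2=-0.6):
    """Logistic probability curve for a positive oral food challenge (OFC)
    from log specific IgE and log total IgE. Coefficients are hypothetical
    placeholders, not fitted values from the study."""
    z = b0 + b1 * log_sige + b2 * log_tige
    return 1.0 / (1.0 + math.exp(-z))

# Same specific IgE, higher total IgE -> lower predicted reactivity
# (the direction the study reports, given the negative b2):
print(reaction_probability(2.0, 1.0) > reaction_probability(2.0, 3.0))  # True
```

    Subdividing patients by total IgE percentile, as the authors do, is equivalent to fitting separate curves of this form per stratum.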

  16. Newer developments on self-modeling curve resolution implementing equality and unimodality constraints.

    PubMed

    Beyramysoltan, Samira; Abdollahi, Hamid; Rajkó, Róbert

    2014-05-27

    Analytical self-modeling curve resolution (SMCR) methods resolve data sets to a range of feasible solutions using only non-negative constraints. The Lawton-Sylvestre method was the first direct method to analyze a two-component system. It was generalized as a Borgen plot for determining the feasible regions in three-component systems. It seems that a geometrical view is required for considering curve resolution methods, because the complicated (only algebraic) conceptions caused a stop in the general study of Borgen's work for 20 years. Rajkó and István revised and elucidated the principles of existing theory in SMCR methods and subsequently introduced computational geometry tools for developing an algorithm to draw Borgen plots in three-component systems. These developments are theoretical inventions and the formulations are not always able to be given in close form or regularized formalism, especially for geometric descriptions, that is why several algorithms should have been developed and provided for even the theoretical deductions and determinations. In this study, analytical SMCR methods are revised and described using simple concepts. The details of a drawing algorithm for a developmental type of Borgen plot are given. Additionally, for the first time in the literature, equality and unimodality constraints are successfully implemented in the Lawton-Sylvestre method. To this end, a new state-of-the-art procedure is proposed to impose equality constraint in Borgen plots. Two- and three-component HPLC-DAD data set were simulated and analyzed by the new analytical curve resolution methods with and without additional constraints. Detailed descriptions and explanations are given based on the obtained abstract spaces. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Preventing conflicts among bid curves used with transactive controllers in a market-based resource allocation system

    DOEpatents

    Fuller, Jason C.; Chassin, David P.; Pratt, Robert G.; Hauer, Matthew; Tuffner, Francis K.

    2017-03-07

    Disclosed herein are representative embodiments of methods, apparatus, and systems for distributing a resource (such as electricity) using a resource allocation system. One of the disclosed embodiments is a method for operating a transactive thermostatic controller configured to submit bids to a market-based resource allocation system. According to the exemplary method, a first bid curve is determined, the first bid curve indicating a first set of bid prices for corresponding temperatures and being associated with a cooling mode of operation for a heating and cooling system. A second bid curve is also determined, the second bid curve indicating a second set of bid prices for corresponding temperatures and being associated with a heating mode of operation for a heating and cooling system. In this embodiment, the first bid curve, the second bid curve, or both the first bid curve and the second bid curve are modified to prevent overlap of any portion of the first bid curve and the second bid curve.
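    The separation requirement can be illustrated with piecewise-linear bid curves stored as (temperature, price) points. The shift-by-margin resolution below is a hypothetical strategy sketching the stated goal (no overlap between the heating and cooling curves), not the patent's claimed method:

```python
def prevent_overlap(heating, cooling, margin=0.5):
    """Ensure heating and cooling bid curves (lists of (temperature, price)
    points) do not overlap in temperature: if they do, shift the heating
    curve down in temperature by the overlap plus a margin. A hypothetical
    resolution strategy, not the patented mechanism."""
    h_max = max(t for t, _ in heating)
    c_min = min(t for t, _ in cooling)
    if h_max + margin <= c_min:
        return heating, cooling  # already separated, nothing to do
    shift = h_max + margin - c_min
    heating = [(t - shift, p) for t, p in heating]
    return heating, cooling

heating = [(18.0, 0.10), (21.0, 0.02)]   # bid more when it is colder
cooling = [(20.0, 0.02), (26.0, 0.12)]   # bid more when it is hotter
h2, c2 = prevent_overlap(heating, cooling)
print(max(t for t, _ in h2) <= min(t for t, _ in c2))  # True: curves no longer overlap
```

    Without such a rule, overlapping curves could let the thermostat bid to heat and cool at the same temperature, destabilizing the market clearing.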

  18. Simulator evaluation of manually flown curved instrument approaches. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sager, D.

    1973-01-01

    Pilot performance in flying horizontally curved instrument approaches was analyzed by having nine test subjects fly curved approaches in a fixed-base simulator. Approaches were flown without an autopilot and without a flight director. Evaluations were based on deviation measurements made at a number of points along the curved approach path and on subject questionnaires. Results indicate that pilots can fly curved approaches, though less accurately than straight-in approaches; that a moderate wind does not affect curve flying performance; and that there is no performance difference between 60 deg. and 90 deg. turns. A tradeoff of curve path parameters and a paper analysis of wind compensation were also made.

  19. Customized versus population-based growth curves: prediction of low body fat percent at term corrected gestational age following preterm birth.

    PubMed

    Law, Tameeka L; Katikaneni, Lakshmi D; Taylor, Sarah N; Korte, Jeffrey E; Ebeling, Myla D; Wagner, Carol L; Newman, Roger B

    2012-07-01

    Compare customized versus population-based growth curves for identification of small-for-gestational-age (SGA) and body fat percent (BF%) among preterm infants. Prospective cohort study of 204 preterm infants classified as SGA or appropriate-for-gestational-age (AGA) by population-based and customized growth curves. BF% was determined by air-displacement plethysmography. Differences between groups were compared using bivariable and multivariable linear and logistic regression analyses. Customized curves reclassified 30% of the preterm infants as SGA. SGA infants identified by customized method only had significantly lower BF% (13.8 ± 6.0) than the AGA (16.2 ± 6.3, p = 0.02) infants and similar to the SGA infants classified by both methods (14.6 ± 6.7, p = 0.51). Customized growth curves were a significant predictor of BF% (p = 0.02), whereas population-based growth curves were not a significant independent predictor of BF% (p = 0.50) at term corrected gestational age. Customized growth potential improves the differentiation of SGA infants and low BF% compared with a standard population-based growth curve among a cohort of preterm infants.

  20. Incremental triangulation by way of edge swapping and local optimization

    NASA Technical Reports Server (NTRS)

    Wiltberger, N. Lyn

    1994-01-01

    This document is intended to serve as an installation, usage, and basic theory guide for the two dimensional triangulation software 'HARLEY' written for the Silicon Graphics IRIS workstation. This code consists of an incremental triangulation algorithm based on point insertion and local edge swapping. Using this basic strategy, several types of triangulations can be produced depending on user selected options. For example, local edge swapping criteria can be chosen which minimizes the maximum interior angle (a MinMax triangulation) or which maximizes the minimum interior angle (a MaxMin or Delaunay triangulation). It should be noted that the MinMax triangulation is generally only locally optimal (not globally optimal) in this measure. The MaxMin triangulation, however, is both locally and globally optimal. In addition, Steiner triangulations can be constructed by inserting new sites at triangle circumcenters followed by edge swapping based on the MaxMin criteria. Incremental insertion of sites also provides flexibility in choosing cell refinement criteria. A dynamic heap structure has been implemented in the code so that once a refinement measure is specified (i.e., maximum aspect ratio or some measure of a solution gradient for solution-adaptive grid generation) the cell with the largest value of this measure is continually removed from the top of the heap and refined. The heap refinement strategy allows the user to specify either the number of cells desired or refine the mesh until all cell refinement measures satisfy a user specified tolerance level. Since the dynamic heap structure is constantly updated, the algorithm always refines the particular cell in the mesh with the largest refinement criteria value. 
The code allows the user to: triangulate a cloud of prespecified points (sites), triangulate a set of prespecified interior points constrained by prespecified boundary curve(s), Steiner triangulate the interior/exterior of prespecified boundary curve(s), refine existing triangulations based on solution error measures, and partition meshes based on the Cuthill-McKee, spectral, and coordinate bisection strategies.
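    The MaxMin (Delaunay) swap criterion described above reduces to the standard incircle test: the shared edge of two adjacent triangles is swapped whenever the fourth vertex falls inside the circumcircle of the other three. A self-contained sketch of that predicate:

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle
    (a, b, c), with a, b, c given in counterclockwise order -- the standard
    Delaunay (MaxMin) edge-swap test, computed as a 3x3 determinant."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
         - (bx * bx + by * by) * (ax * cy - cx * ay)
         + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

# Unit right triangle: circumcircle centered at (0.5, 0.5), radius ~0.707.
print(in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))   # True: inside
print(in_circumcircle((0, 0), (1, 0), (0, 1), (2.0, 2.0)))   # False: outside
```

    Repeatedly applying this test to every interior edge after each point insertion is what drives such an incremental algorithm toward the Delaunay triangulation.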

  1. Upper bound of abutment scour in laboratory and field data

    USGS Publications Warehouse

    Benedict, Stephen

    2016-01-01

    The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used those data to develop envelope curves that define the upper bound of abutment scour. To expand on this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment scour data from other sources and evaluate upper bound patterns with this larger data set. To facilitate this analysis, 446 laboratory and 331 field measurements of abutment scour were compiled into a digital database. This extensive database was used to evaluate the South Carolina abutment scour envelope curves and to develop additional envelope curves that reflected the upper bound of abutment scour depth for the laboratory and field data. The envelope curves provide simple but useful supplementary tools for assessing the potential maximum abutment scour depth in the field setting.
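    An envelope curve is, in essence, an upper bound traced over scatter data. A minimal construction (bin the predictor variable and keep the per-bin maximum) conveys the idea, though it is not the USGS fitting procedure:

```python
def envelope_curve(xs, ys, bin_width):
    """Upper-bound envelope of scatter data: bin the predictor (e.g. a flow
    or geometry variable) and keep the maximum observed scour depth in each
    bin. A generic construction for illustration only."""
    bins = {}
    for x, y in zip(xs, ys):
        b = int(x // bin_width)
        bins[b] = max(bins.get(b, float("-inf")), y)
    return sorted((b * bin_width, y) for b, y in bins.items())

# Toy scour observations (predictor, scour depth), invented for illustration:
xs = [0.2, 0.7, 1.1, 1.4, 2.3, 2.8]
ys = [0.5, 1.2, 0.9, 1.8, 1.1, 2.0]
print(envelope_curve(xs, ys, 1.0))  # [(0.0, 1.2), (1.0, 1.8), (2.0, 2.0)]
```

    Every observation lies on or below the resulting curve, which is what makes envelopes useful as conservative screening tools for maximum scour depth.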

  2. Ligand and membrane-binding behavior of the phosphatidylinositol transfer proteins PITPα and PITPβ.

    PubMed

    Baptist, Matilda; Panagabko, Candace; Cockcroft, Shamshad; Atkinson, Jeffrey

    2016-12-01

    Phosphatidylinositol transfer proteins (PITPs) are believed to be lipid transfer proteins because of their ability to transfer either phosphatidylinositol (PI) or phosphatidylcholine (PC) between membrane compartments, in vitro. However, the detailed mechanism of this transfer process is not fully established. To further understand the transfer mechanism of PITPs we examined the interaction of PITPs with membranes using dual polarization interferometry (DPI), which measures protein binding affinity on a flat immobilized lipid surface. In addition, a fluorescence resonance energy transfer (FRET)-based assay was also employed to monitor how quickly PITPs transfer their ligands to lipid vesicles. DPI analysis revealed that PITPβ had a higher affinity to membranes compared with PITPα. Furthermore, the FRET-based transfer assay revealed that PITPβ has a higher ligand transfer rate compared with PITPα. However, both PITPα and PITPβ demonstrated a preference for highly curved membrane surfaces during ligand transfer. In other words, ligand transfer rate was higher when the accepting vesicles were highly curved.

  3. Influence of bulk turbulence and entrance boundary layer thickness on the curved duct flow field

    NASA Technical Reports Server (NTRS)

    Crawford, R. A.

    1988-01-01

    The influence of bulk turbulence and boundary layer thickness on the secondary flow development in a square, 90 degree turning duct was investigated. A three-dimensional laser velocimetry system was utilized to measure the mean and fluctuating components of velocity at six cross-planes in the duct. The results from this investigation, with entrance boundary layer thickness of 20 percent, were compared with the thin boundary layer results documented in NASA CR-174811. The axial velocity profiles, cross-flow velocities, and turbulence intensities were compared and evaluated with regard to the influence of bulk turbulence intensity and boundary layer thickness, and the influence was significant. The results of this investigation expand the 90 degree curved duct experimental data base to higher turbulence levels and thicker entrance boundary layers. The experimental results provide a challenging benchmark data base for computational fluid dynamics code development and validation. The variation of inlet bulk turbulence intensity provides additional information to aid in turbulence model evaluation.

  4. Tracking of cell nuclei for assessment of in vitro uptake kinetics in ultrasound-mediated drug delivery using fibered confocal fluorescence microscopy.

    PubMed

    Derieppe, Marc; de Senneville, Baudouin Denis; Kuijf, Hugo; Moonen, Chrit; Bos, Clemens

    2014-10-01

    Previously, we demonstrated the feasibility to monitor ultrasound-mediated uptake of a cell-impermeable model drug in real time with fibered confocal fluorescence microscopy. Here, we present a complete post-processing methodology, which corrects for cell displacements, to improve the accuracy of pharmacokinetic parameter estimation. Nucleus detection was performed based on the radial symmetry transform algorithm. Cell tracking used an iterative closest point approach. Pharmacokinetic parameters were calculated by fitting a two-compartment model to the time-intensity curves of individual cells. Cells were tracked successfully, improving time-intensity curve accuracy and pharmacokinetic parameter estimation. With tracking, 93% of the 370 nuclei showed a fluorescence signal variation that was well-described by a two-compartment model. In addition, parameter distributions were narrower, thus increasing precision. Dedicated image analysis was implemented and enabled studying ultrasound-mediated model drug uptake kinetics in hundreds of cells per experiment, using fiber-based confocal fluorescence microscopy.
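    Per-cell uptake fitting of this kind often reduces, when efflux is negligible, to a mono-exponential approach to a plateau. The sketch below recovers the uptake rate by linearization on synthetic data (an assumed simplification for illustration, not necessarily the paper's exact two-compartment formulation):

```python
import math

def uptake_model(t, i_max, k):
    """Mono-exponential approach to plateau: a common reduction of a
    two-compartment uptake model when efflux is negligible (an assumption
    made here for illustration)."""
    return i_max * (1.0 - math.exp(-k * t))

def fit_rate(times, intensities, i_max):
    """Estimate the uptake rate k by linearizing ln(1 - I/i_max) = -k*t
    and least-squares through the origin."""
    num = den = 0.0
    for t, i in zip(times, intensities):
        y = -math.log(max(1e-12, 1.0 - i / i_max))
        num += t * y
        den += t * t
    return num / den

# Synthetic per-cell time-intensity curve with known rate k = 0.8 per minute:
times = [0.5, 1.0, 2.0, 4.0, 8.0]
data = [uptake_model(t, 100.0, 0.8) for t in times]
print(round(fit_rate(times, data, 100.0), 3))  # 0.8: recovers the true rate
```

    In the study, the same kind of per-cell fit is only meaningful after tracking, since untracked cells mix signal from different positions into one curve.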

  5. Simplified curve fits for the thermodynamic properties of equilibrium air

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.

    1987-01-01

    New, improved curve fits for the thermodynamic properties of equilibrium air have been developed. The curve fits are for pressure, speed of sound, temperature, entropy, enthalpy, density, and internal energy. These curve fits can be readily incorporated into new or existing computational fluid dynamics codes if real gas effects are desired. The curve fits are constructed from Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits. These improvements are due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25 000 K and densities from 10^-7 to 10^3 amagats.
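    Grabau-type transition functions join two fitted branches through a smooth exponential switch, which is what makes the piecewise surfaces continuous. A schematic one-dimensional illustration (the branch functions and steepness below are invented, not the report's fitted polynomials):

```python
import math

def grabau_blend(x, f_low, f_high, x0, steepness):
    """Schematic Grabau-type transition: blend two branch functions with a
    logistic-style exponential switch centered at x0, giving a smooth
    piecewise fit. Form and coefficients are illustrative only."""
    w = 1.0 / (1.0 + math.exp(-steepness * (x - x0)))
    return (1.0 - w) * f_low(x) + w * f_high(x)

low = lambda x: 1.0 + 0.5 * x          # branch valid below the transition
high = lambda x: 2.0 + 0.1 * x         # branch valid above the transition
print(round(grabau_blend(-10.0, low, high, 0.0, 2.0), 3))  # ~ low(-10) = -4.0
print(round(grabau_blend(10.0, low, high, 0.0, 2.0), 3))   # ~ high(10) = 3.0
```

    Far from the transition the blend reproduces one branch; near it, the switch provides the continuity of value (and, with smooth branches, of derivatives) that simple piecewise polynomials lack.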

  6. 76 FR 1145 - Alabama Power Company; Notice of Application for Amendment of License and Soliciting Comments...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-07

    ... drought-based temporary variance of the Martin Project rule curve and minimum flow releases at the Yates... requesting a drought- based temporary variance to the Martin Project rule curve. The rule curve variance...

  7. An assessment of mode-coupling and falling-friction mechanisms in railway curve squeal through a simplified approach

    NASA Astrophysics Data System (ADS)

    Ding, Bo; Squicciarini, Giacomo; Thompson, David; Corradi, Roberto

    2018-06-01

Curve squeal is one of the most annoying types of noise caused by the railway system. It usually occurs when a train or tram is running around tight curves. Although this phenomenon has been studied for many years, the generation mechanism is still the subject of controversy and not fully understood. A negative slope in the friction curve under full sliding has long been considered the main cause of curve squeal, but more recently mode coupling has been demonstrated to be another possible explanation. Mode coupling relies on the inclusion of both the lateral and vertical dynamics at the contact, and an exchange of energy occurs between the normal and the axial directions. The purpose of this paper is to assess the role of the mode-coupling and falling-friction mechanisms in curve squeal through the use of a simple approach based on practical parameter values representative of an actual situation. A tramway wheel is adopted to study the effect of the adhesion coefficient, the lateral contact position, the contact angle and the damping ratio. Cases corresponding to both inner and outer wheels in the curve are considered and it is shown that there are situations in which both wheels can squeal due to mode coupling. Additionally, a negative slope is introduced in the friction curve while keeping the vertical dynamics active in order to analyse both mechanisms together. It is shown that, in the presence of mode coupling, the squealing frequency can differ from the natural frequency of either of the coupled wheel modes. Moreover, a phase difference between wheel vibration in the vertical and lateral directions is observed as a characteristic of mode coupling. For both of these features, a qualitative comparison with field measurements shows the same behaviour.

  8. An Enhanced Biometric Based Authentication with Key-Agreement Protocol for Multi-Server Architecture Based on Elliptic Curve Cryptography

    PubMed Central

    Reddy, Alavalapati Goutham; Das, Ashok Kumar; Odelu, Vanga; Yoo, Kee-Young

    2016-01-01

Biometric based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and associated constraints. Lu et al. recently proposed a robust biometric based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. The careful investigation of this paper proves that Lu et al.’s protocol does not provide user anonymity or perfect forward secrecy and is susceptible to server and user impersonation attacks, man-in-the-middle attacks and clock synchronization problems. In addition, this paper proposes an enhanced biometric based authentication with key-agreement protocol for multi-server architecture based on elliptic curve cryptography using smartcards. We proved that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and performance analysis demonstrate that the proposed protocol is robust and efficient compared to Lu et al.’s protocol and existing similar protocols. PMID:27163786

  9. Optimization of equivalent uniform dose using the L-curve criterion.

    PubMed

    Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R

    2007-10-07

    Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.

  10. Experimental investigation of CNT effect on curved beam strength and interlaminar fracture toughness of CFRP laminates

    NASA Astrophysics Data System (ADS)

    Arca, M. A.; Coker, D.

    2014-06-01

The high mechanical properties and light weight of composite materials, together with advances in manufacturing processes, have increased the use of composite materials in the aerospace and wind energy industries as primary load-carrying structures in complex shapes. However, use of composite materials in complex geometries such as L-shaped laminates creates weakness at the radius, which causes delamination. Carbon nanotubes (CNTs) are preferred as a toughening material in composite matrices due to their high mechanical properties and aspect ratios. However, the effect of CNTs on curved beam strength (CBS) has not been comprehensively investigated in the literature. The objective of this study is to investigate the effect of CNT addition on Mode I and Mode II fracture toughness and CBS. The L-shaped beams are fabric carbon/epoxy composite laminates manufactured by the hand layup technique. Curved beam composite laminates were subjected to four point bending loading according to ASTM D6415/D6415M-06a. Double cantilever beam (DCB) tests and end notch flexure (ENF) tests were conducted to determine mode-I and mode-II fracture toughness, respectively. Preliminary results show that 3% CNT addition to the resin increased the mode-I fracture toughness by 25% and the mode-II fracture toughness by 10% compared to base laminates. In contrast, no effect on curved beam strength was found.

  11. Unsupervised classification of cirrhotic livers using MRI data

    NASA Astrophysics Data System (ADS)

    Lee, Gobert; Kanematsu, Masayuki; Kato, Hiroki; Kondo, Hiroshi; Zhou, Xiangrong; Hara, Takeshi; Fujita, Hiroshi; Hoshi, Hiroaki

    2008-03-01

Cirrhosis of the liver is a chronic disease. It is characterized by the presence of widespread nodules and fibrosis in the liver, which results in characteristic texture patterns. Computerized analysis of hepatic texture patterns is usually based on regions-of-interest (ROIs). However, not all ROIs are typical representatives of the disease stage of the liver from which the ROIs originated. This leads to uncertainties in the ROI labels (diseased or non-diseased). On the other hand, supervised classifiers are commonly used in determining the assignment rule. This presents a problem as the training of a supervised classifier requires the correct labels of the ROIs. The main purpose of this paper is to investigate the use of an unsupervised classifier, k-means clustering, in classifying ROI-based data. In addition, a procedure for generating a receiver operating characteristic (ROC) curve depicting the classification performance of k-means clustering is also reported. Hepatic MRI images of 44 patients (16 cirrhotic; 28 non-cirrhotic) are used in this study. The MRI data are derived from gadolinium-enhanced equilibrium phase images. For each patient, 10 ROIs selected by an experienced radiologist and 7 texture features measured on each ROI are included in the MRI data. Results of the k-means classifier are depicted using an ROC curve. The area under the curve (AUC) has a value of 0.704. This is slightly lower than but comparable to those of the LDA and ANN classifiers, which have values of 0.781 and 0.801, respectively. Methods for constructing an ROC curve in relation to k-means clustering have not been previously reported in the literature.
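
    The ROC-from-clustering procedure can be sketched as follows. The two-feature synthetic data, the deterministic initialization, and the centroid-distance margin used as a score are all hypothetical stand-ins for the paper's 7 texture features and its actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ROI feature vectors (two texture features; hypothetical stand-in data)
X = np.vstack([rng.normal(2.0, 0.7, size=(50, 2)),   # "cirrhotic" ROIs
               rng.normal(0.0, 0.7, size=(50, 2))])  # "non-cirrhotic" ROIs
y = np.array([1] * 50 + [0] * 50)  # reference labels, unused during fitting

# Minimal 2-means clustering with a simple deterministic initialization
centers = np.array([X.min(axis=0), X.max(axis=0)])
for _ in range(25):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    centers = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])
d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)

# Score each ROI by its centroid-distance margin, then sweep a threshold
# over the score to trace out an ROC curve
score = d[:, 0] - d[:, 1]
if score[y == 1].mean() < score[y == 0].mean():
    score = -score  # orient so that a larger score means "diseased"
order = np.argsort(-score)
tpr = np.concatenate([[0.0], np.cumsum(y[order] == 1) / 50.0])
fpr = np.concatenate([[0.0], np.cumsum(y[order] == 0) / 50.0])
auc = float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0))
```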

  12. Computerized breast parenchymal analysis on DCE-MRI

    NASA Astrophysics Data System (ADS)

    Li, Hui; Giger, Maryellen L.; Yuan, Yading; Jansen, Sanaz A.; Lan, Li; Bhooshan, Neha; Newstead, Gillian M.

    2009-02-01

Breast density has been shown to be associated with the risk of developing breast cancer, and MRI has been recommended for screening high-risk women; however, it is still unknown how breast parenchymal enhancement on DCE-MRI is associated with breast density and breast cancer risk. Ninety-two DCE-MRI exams of asymptomatic women with normal MR findings were included in this study. The 3D breast volume was automatically segmented using a volume-growing based algorithm. The extracted breast volume was classified into fibroglandular and fatty regions based on the discriminant analysis method. The parenchymal kinetic curves within the breast fibroglandular region were extracted and categorized by use of fuzzy c-means clustering, and various parenchymal kinetic characteristics were extracted from the most enhancing voxels. Correlation analysis between the computer-extracted percent dense measures and radiologist-noted BIRADS density ratings yielded a correlation coefficient of 0.76 (p<0.0001). From kinetic analyses, 70% (64/92) of most enhancing curves showed a persistent curve type and reached peak parenchymal intensity at the last postcontrast time point, with 89% (82/92) of most enhancing curves reaching peak intensity at either the 4th or 5th post-contrast time point. Women with dense breasts (BIRADS 3 and 4) were found to have more parenchymal enhancement at their peak time point (Ep), with an average Ep of 116.5%, while women with fatty breasts (BIRADS 1 and 2) demonstrated an average Ep of 62.0%. In conclusion, breast parenchymal enhancement may be associated with breast density and may be potentially useful as an additional characteristic for assessing breast cancer risk.

  13. Analytical Expressions for the Mixed-Order Kinetics Parameters of TL Glow Peaks Based on the two Heating Rates Method.

    PubMed

    Maghrabi, Mufeed; Al-Abdullah, Tariq; Khattari, Ziad

    2018-03-24

The two heating rates method (originally developed for first-order glow peaks) was used for the first time to evaluate the activation energy (E) from glow peaks obeying mixed-order (MO) kinetics. The derived expression for E has an insignificant additional term (on the scale of a few meV) when compared with the first-order case. Hence, the original expression for E using the two heating rates method can be used with excellent accuracy in the case of MO glow peaks. In addition, we derived a simple analytical expression for the MO parameter. The present procedure has the advantage that the MO parameter can now be evaluated using an analytical expression instead of the graphical representation between the geometrical factor and the MO parameter as given by the existing peak shape methods. The applicability of the derived expressions for real samples was demonstrated for the glow curve of a Li2B4O7:Mn single crystal. The obtained parameters compare very well with those obtained by glow curve fitting and with the available published data.
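
    The original two-heating-rates expression referred to above is the standard first-order form, which per this abstract also holds for MO peaks to within a few meV. A minimal sketch (the peak temperatures and heating rates below are hypothetical, not the paper's measurements):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy_two_heating_rates(beta1, tm1, beta2, tm2):
    """First-order two-heating-rates estimate of the activation energy E (eV)
    from peak temperatures Tm1, Tm2 (K) measured at heating rates
    beta1, beta2 (K/s)."""
    return (K_B * tm1 * tm2 / (tm1 - tm2)) * math.log(
        (beta1 / beta2) * (tm2 / tm1) ** 2
    )

# Hypothetical glow-peak readings for illustration only
E = activation_energy_two_heating_rates(beta1=2.0, tm1=487.0, beta2=0.5, tm2=470.0)
```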

  14. Next-Generation Intensity-Duration-Frequency Curves for Hydrologic Design in Snow-Dominated Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Hongxiang; Sun, Ning; Wigmosta, Mark

There is a renewed focus on the design of infrastructure resilient to extreme hydrometeorological events. While precipitation-based intensity-duration-frequency (IDF) curves are commonly used as part of infrastructure design, a large percentage of peak runoff events in snow-dominated regions are caused by snowmelt, particularly during rain-on-snow (ROS) events. In these regions, precipitation-based IDF curves may lead to substantial over-/under-estimation of design basis events and subsequent over-/under-design of infrastructure. To overcome this deficiency, we proposed next-generation IDF (NG-IDF) curves, which characterize the actual water reaching the land surface. We compared NG-IDF curves to standard precipitation-based IDF curves for estimates of extreme events at 376 Snowpack Telemetry (SNOTEL) stations across the western United States that each had at least 30 years of high-quality records. We found standard precipitation-based IDF curves at 45% of the stations were subject to under-design, many with significant under-estimation of 100-year extreme events, for which the precipitation-based IDF curves can underestimate water potentially available for runoff by as much as 125% due to snowmelt and ROS events. The regions with the greatest potential for under-design were in the Pacific Northwest, the Sierra Nevada Mountains, and the Middle and Southern Rockies. We also found the potential for over-design at 20% of the stations, primarily in the Middle Rockies and Arizona mountains. These results demonstrate the need to consider snow processes in the development of IDF curves, and they suggest use of the more robust NG-IDF curves for hydrologic design in snow-dominated environments.
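
    The statistical core of any IDF-type curve, fitting an extreme-value distribution to annual maxima and reading off return-period quantiles, can be sketched as follows. The Gumbel model, the method-of-moments fit, and the synthetic 30-year record are illustrative assumptions, not the authors' NG-IDF methodology.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical annual maxima of daily water available for runoff (mm):
# precipitation plus snowmelt reaching the land surface, 30-year record
annual_max = rng.gumbel(loc=40.0, scale=12.0, size=30)

# Method-of-moments Gumbel fit
scale = np.sqrt(6.0) * annual_max.std(ddof=1) / np.pi
loc = annual_max.mean() - 0.5772 * scale

def design_depth(return_period_years):
    """Depth (mm) with annual exceedance probability 1/T under the fit."""
    p_non_exceed = 1.0 - 1.0 / return_period_years
    return loc - scale * np.log(-np.log(p_non_exceed))

d100 = design_depth(100.0)  # 100-year design event under the fitted model
```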

  15. A framework for the use of single-chemical transcriptomics data in predicting the hazards associated with complex mixtures of polycyclic aromatic hydrocarbons.

    PubMed

    Labib, Sarah; Williams, Andrew; Kuo, Byron; Yauk, Carole L; White, Paul A; Halappanavar, Sabina

    2017-07-01

The assumption of additivity applied in the risk assessment of environmental mixtures containing carcinogenic polycyclic aromatic hydrocarbons (PAHs) was investigated using transcriptomics. Muta™Mouse mice were gavaged for 28 days with three doses of eight individual PAHs, two defined mixtures of PAHs, or coal tar, an environmentally ubiquitous complex mixture of PAHs. Microarrays were used to identify differentially expressed genes (DEGs) in lung tissue collected 3 days post-exposure. Cancer-related pathways perturbed by the individual or mixtures of PAHs were identified, and dose-response modeling of the DEGs was conducted to calculate gene/pathway benchmark doses (BMDs). Individual PAH-induced pathway perturbations (the median gene expression changes for all genes in a pathway relative to controls) and pathway BMDs were applied to models of additivity [i.e., concentration addition (CA), generalized concentration addition (GCA), and independent action (IA)] to generate predicted pathway-specific dose-response curves for each PAH mixture. The predicted and observed pathway dose-response curves were compared to assess the sensitivity of different additivity models. Transcriptomics-based additivity calculation showed that IA accurately predicted the pathway perturbations induced by all mixtures of PAHs. CA did not support the additivity assumption for the defined mixtures; however, GCA improved the CA predictions. Moreover, pathway BMDs derived for coal tar were comparable to BMDs derived from previously published coal tar-induced mouse lung tumor incidence data. These results suggest that in the absence of tumor incidence data, individual chemical-induced transcriptomics changes associated with cancer can be used to investigate the assumption of additivity and to predict the carcinogenic potential of a mixture.

  16. Manufacturing complexity analysis

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1977-01-01

    The analysis of the complexity of a typical system is presented. Starting with the subsystems of an example system, the step-by-step procedure for analysis of the complexity of an overall system is given. The learning curves for the various subsystems are determined as well as the concurrent numbers of relevant design parameters. Then trend curves are plotted for the learning curve slopes versus the various design-oriented parameters, e.g. number of parts versus slope of learning curve, or number of fasteners versus slope of learning curve, etc. Representative cuts are taken from each trend curve, and a figure-of-merit analysis is made for each of the subsystems. Based on these values, a characteristic curve is plotted which is indicative of the complexity of the particular subsystem. Each such characteristic curve is based on a universe of trend curve data taken from data points observed for the subsystem in question. Thus, a characteristic curve is developed for each of the subsystems in the overall system.
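
    The learning-curve slopes discussed above follow the standard log-linear model, in which unit cost falls to a fixed percentage of its previous value each time cumulative output doubles. A minimal sketch (the 85% slope and first-unit cost are hypothetical):

```python
import math

def learning_curve_cost(first_unit_cost, unit_number, slope_percent):
    """Crawford-style unit cost: an S% learning curve means cost falls to
    S% of its previous value each time cumulative output doubles."""
    b = math.log(slope_percent / 100.0) / math.log(2.0)  # curve exponent
    return first_unit_cost * unit_number ** b

c1 = learning_curve_cost(1000.0, 1, 85.0)  # first unit
c2 = learning_curve_cost(1000.0, 2, 85.0)  # one doubling: 85% of c1
c4 = learning_curve_cost(1000.0, 4, 85.0)  # two doublings: 85% of c2
```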

  17. Annually resolved atmospheric radiocarbon records reconstructed from tree-rings

    NASA Astrophysics Data System (ADS)

    Wacker, Lukas; Bleicher, Niels; Büntgen, Ulf; Friedrich, Michael; Friedrich, Ronny; Diego Galván, Juan; Hajdas, Irka; Jull, Anthony John; Kromer, Bernd; Miyake, Fusa; Nievergelt, Daniel; Reinig, Frederick; Sookdeo, Adam; Synal, Hans-Arno; Tegel, Willy; Wesphal, Torsten

    2017-04-01

    The IntCal13 calibration curve is mainly based on data measured by decay counting with a resolution of 10 years. Thus high frequency changes like the 11-year solar cycles or cosmic ray events [1] are not visible, or at least not to their full extent. New accelerator mass spectrometry (AMS) systems today are capable of measuring at least as precisely as decay counters [2], with the advantage of using 1000 times less material. The low amount of material required enables more efficient sample preparation. Thus, an annually resolved re-measurement of the tree-ring based calibration curve can now be envisioned. We will demonstrate with several examples the multitude of benefits resulting from annually resolved radiocarbon records from tree-rings. They will not only allow for more precise radiocarbon dating but also contain valuable new astrophysical information. The examples shown will additionally indicate that it can be critical to compare AMS measurements with a calibration curve that is mainly based on decay counting. We often see small offsets between the two measurement techniques, while the reason is yet unknown. [1] Miyake F, Nagaya K, Masuda K, Nakamura T. 2012. A signature of cosmic-ray increase in AD 774-775 from tree rings in Japan. Nature 486(7402):240-2. [2] Wacker L, Bonani G, Friedrich M, Hajdas I, Kromer B, Nemec M, Ruff M, Suter M, Synal H-A, Vockenhuber C. 2010. MICADAS: Routine and high-precision radiocarbon dating. Radiocarbon 52(2):252-62.

  18. Estimation of chloroform inhalation dose by other routes based on the relationship of area under the blood concentration-time curve (AUC)-inhalation dose to chloroform distribution in the blood of rats.

    PubMed

    Take, Makoto; Takeuchi, Tetsuya; Haresaku, Mitsuru; Matsumoto, Michiharu; Nagano, Kasuke; Yamamoto, Seigo; Takamura-Enya, Takeji; Fukushima, Shoji

    2014-01-01

    The present study investigated the time-course changes of concentration of chloroform (CHCl3) in the blood during and after exposure of male rats to CHCl3 by inhalation. Increasing the dose of CHCl3 in the inhalation exposed groups caused a commensurate increase in the concentration of CHCl3 in the blood and the area under the blood concentration-time curve (AUC). There was good correlation (r = 0.988) between the inhalation dose and the AUC/kg body weight. Based on the AUC/kg body weight-inhalation dose curve and the AUC/kg body weight after oral administration, inhalation equivalent doses of orally administered CHCl3 were calculated. Calculation of inhalation equivalent doses allows the body burden due to CHCl3 by inhalation exposure and oral exposure to be directly compared. This type of comparison facilitates risk assessment in humans exposed to CHCl3 by different routes. Our results indicate that when calculating inhalation equivalent doses of CHCl3, it is critical to include the AUC from the exposure period in addition to the AUC after the end of the exposure period. Thus, studies which measure the concentration of volatile organic compounds in the blood during the inhalation exposure period are crucial. The data reported here makes an important contribution to the physiologically based pharmacokinetic (PBPK) database of CHCl3 in rodents.
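
    The AUC computation underlying the dose comparison can be sketched with the trapezoidal rule, split at the end of the exposure period as the abstract recommends. The concentration-time values below are invented for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical blood CHCl3 concentrations (ug/mL) sampled during and
# after a 6-h inhalation exposure; times in hours
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 6.5, 7.0, 8.0, 10.0])
c = np.array([0.0, 1.2, 1.8, 2.1, 2.2, 1.5, 1.0, 0.45, 0.1])

def trapz(y, x):
    """Trapezoidal-rule area under y(x)."""
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))

during = t <= 6.0
auc_during = trapz(c[during], t[during])  # AUC over the exposure period
auc_total = trapz(c, t)                   # AUC over the full record
auc_after = auc_total - auc_during        # AUC after exposure ends
```

    With these invented numbers the during-exposure portion dominates the total AUC, which is the abstract's point: omitting it would substantially understate the body burden.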

  19. The curved 14C vs. δ13C relationship in dissolved inorganic carbon: A useful tool for groundwater age- and geochemical interpretations

    USGS Publications Warehouse

    Han, Liang-Feng; Plummer, Niel; Aggarwal, Pradeep

    2014-01-01

Determination of the 14C content of dissolved inorganic carbon (DIC) is useful for dating of groundwater. However, in addition to radioactive decay, the 14C content in DIC (14CDIC) can be affected by many geochemical and physical processes and numerous models have been proposed to refine radiocarbon ages of DIC in groundwater systems. Changes in the δ13C content of DIC (δ13CDIC) often can be used to deduce the processes that affect the carbon isotopic composition of DIC and the 14C value during the chemical evolution of groundwater. This paper shows that a curved relationship of 14CDIC vs. δ13CDIC will be observed for groundwater systems if (1) the change in δ13C value in DIC is caused by a first-order or pseudo-first-order process, e.g. isotopic exchange between DIC and solid carbonate, (2) the reaction/process progresses with the ageing of the groundwater, i.e. with decay of 14C in DIC, and (3) the magnitude of the rate of change in δ13C of DIC is comparable with that of 14C decay. In this paper, we use a lumped parameter method to derive a model based on the curved relationship between 14CDIC and δ13CDIC. The derived model, if used for isotopic exchange between DIC and solid carbonate, is identical to that derived by Gonfiantini and Zuppi (2003). The curved relationship of 14CDIC vs. δ13CDIC can be applied to interpret the age of the DIC in groundwater. Results of age calculations using the method discussed in this paper are compared with those obtained by using other methods that calculate the age of DIC based on adjusted initial radiocarbon values for individual samples. This paper shows that in addition to groundwater age interpretation, the lumped parameter method presented here also provides a useful tool for geochemical interpretations, e.g. estimation of apparent rates of geochemical reactions and revealing the complexity of the geochemical environment.

  20. Estimating iron and aluminum content of acid mine discharge from a north-central Pennsylvania coal field by use of acidity titration curves

    USGS Publications Warehouse

    Ott, A.N.

    1986-01-01

    Determination of acidity provides a value that denotes the quantitative capacity of the sample water to neutralize a strong base to a particular pH. However, much additional information can be obtained from this determination if a titration curve is constructed from recorded data of titrant increments and their corresponding pH values. The curve can be used to identify buffer capabilities, the acidity with respect to any pH value within the curve limit, and, in the case of acid mine drainage from north-central Pennsylvania, the identification and estimation of the concentration of dissolved ferrous iron, ferric iron, and aluminum. Through use of titration curves, a relationship was observed for the acid mine drainage between: (1) the titratable acidity (as milligrams per liter calcium carbonate) to pH 4.0 and the concentration of dissolved ferric iron; and (2) the titratable acidity (as milligrams per liter calcium carbonate) from pH 4.0 to 5.0 and the concentration of dissolved aluminum. The presence of dissolved ferrous iron can be detected by the buffering effect exhibited in the area between pH 5.5 to 7.5. The concentration of ferrous iron is estimated by difference between the concentrations of ferric iron in an oxidized and unoxidized sample. Interferences in any of the titrations from manganese, magnesium, and aluminate, appear to be negligible within the pH range of interest.

  1. Optical Monitoring of NGC4151 During 110 Years

    NASA Astrophysics Data System (ADS)

    Oknyanskij, V. L.; Metlova, N. V.; Huseynov, N. A.; Guo, Di-Fu; Lyuty, V. M.

We present the historical light curve of NGC 4151 for 1906-2016. The light curve (Oknyanskij and Lyuty, 2007) is primarily based on our published photoelectric data (1968-2007, about 1040 nightly mean measurements (Oknyanskij and Lyuty, 2007)) and photographic estimates (mostly Odessa and Moscow plates taken in 1906-1982 (Oknyanskij, 1978, 1983), about 350 measurements). Additionally, we include all data obtained prior to 1968 (de Vaucouleurs and de Vaucouleurs, 1968; Barnes 1968; Sandage, 1967; Wisniewski and Kleinmann, 1968; Fitch et al., 1967; in total, 19 photoelectric observations from 1958-1967, which we reduced to the same diaphragm aperture as that used in our measurements) as well as photographic data (Pacholczyk et al., 1983) (Harvard and Steward observatories' patrol plates taken in 1910-1968, about 210 measurements). The light curve includes our old and new photometric data obtained during the last years at SAI, ShAO and Weihai Observatory as well as other published data (Roberts and Rumstey, 2012; Schnulle et al., 2015). All these data were reduced to a uniform photometric system. Applying Fourier analysis (the CLEAN algorithm) to the 110-year light curve, we found a periodic component of ~16 years. About 40 years ago, approximately the same "period" was first revealed from Odessa's photometric data (Oknyanskij, 1977; 1978). This "period" seen in the light curve was later found independently in the spectral variability and interpreted as evidence of a supermassive binary black hole (Bon et al., 2012). We interpret these cycles as reflecting some accretion dynamical timescale.

  2. Buckling Behavior of Long Anisotropic Plates Subjected to Fully Restrained Thermal Expansion

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.

    2003-01-01

    An approach for synthesizing buckling results and behavior for thin, balanced and unbalanced symmetric laminates that are subjected to uniform heating or cooling and which are fully-restrained against thermal expansion or contraction is presented. This approach uses a nondimensional analysis for infinitely long, flexurally anisotropic plates that are subjected to combined mechanical loads and is based on useful nondimensional parameters. In addition, stiffness-weighted laminate thermal-expansion parameters are derived and used to determine critical temperature changes in terms of physically intuitive mechanical buckling coefficients. The effects of membrane orthotropy and anisotropy are included. Many results are presented for some common laminates that are intended to facilitate a structural designer's transition to the use of the generic buckling design curves that are presented in the paper. Several generic buckling design curves are presented that provide physical insight into buckling response and provide useful design data. Examples are presented that demonstrate the use of generic design curves. The analysis approach and generic results indicate the effects and characteristics of laminate thermal expansion, membrane orthotropy and anisotropy, and flexural orthotropy and anisotropy in a very general, unifying manner.

  3. Practical sliced configuration spaces for curved planar pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sacks, E.

    1999-01-01

In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.

  4. Grouping like catchments: A novel means to compare 40+ watersheds in the Northeastern U.S.

    NASA Astrophysics Data System (ADS)

    Shaw, S. B.; Walter, M.; Marjerison, R. D.

    2008-12-01

    One difficulty in understanding the effect of multi-scale patterns in watersheds comes from finding a concise way to identify and compare features across many basins. Comparing raw data (i.e. discharge time series) requires one to account for highly variable climate drivers while extracting meaningful metrics from the data series. Comparing model parameters imposes model assumptions that may obscure fundamental differences, potentially making it an exercise in comparing calibration factors. As a possible middle ground, we have found that the probability of a given basin-wide runoff response can be predicted by combining rainfall frequency with 1. a curve establishing a relationship between basin storage and base flow and 2. the baseflow flow-duration curve. In addition to providing a means to predict runoff, these curves succinctly present empirical runoff-response information, allowing ready graphical comparison of multiple watersheds. From 40+ watersheds throughout the Northeastern U.S., we demonstrate the potential to group watersheds and identify critical hydrologic features, providing particular insight into the influence of land use patterns as well as basin scale.
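
    The baseflow flow-duration curve mentioned above (item 2) can be sketched as follows: sort the flows and assign each an exceedance probability. The synthetic lognormal flow series and Weibull plotting positions are illustrative assumptions, not the authors' dataset or exact procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical daily baseflow series (m^3/s) for one watershed
flows = rng.lognormal(mean=1.0, sigma=0.8, size=3650)

def flow_duration_curve(q):
    """Return (exceedance probability, flows sorted high to low): the
    fraction of time each flow value is equaled or exceeded."""
    q_sorted = np.sort(q)[::-1]
    ranks = np.arange(1, len(q) + 1)
    exceed_prob = ranks / (len(q) + 1.0)  # Weibull plotting position
    return exceed_prob, q_sorted

p, q = flow_duration_curve(flows)
q50 = q[np.searchsorted(p, 0.5)]  # flow equaled or exceeded half the time
```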

  5. Antigen-antibody biorecognition events as discriminated by noise analysis of force spectroscopy curves.

    PubMed

    Bizzarri, Anna Rita; Cannistraro, Salvatore

    2014-08-22

Atomic force spectroscopy is able to extract kinetic and thermodynamic parameters of biomolecular complexes provided that the registered unbinding force curves can be reliably attributed to the rupture of the specific complex interactions. To this aim, a commonly used strategy is based on the analysis of the stretching features of polymeric linkers which are suitably introduced in the biomolecule-substrate immobilization procedure. Alternatively, we present a method to select force curves corresponding to specific biorecognition events, which relies on a careful analysis of the force fluctuations of the biomolecule-functionalized cantilever tip during its approach to the partner molecules immobilized on a substrate. In the low frequency region, a characteristic 1/f^α noise with α equal to one (flicker noise) is found to replace white noise in the cantilever fluctuation power spectrum when, and only when, a specific biorecognition process between the partners occurs. The method, which has been validated on a well-characterized antigen-antibody complex, represents a fast, yet reliable alternative to the use of linkers, which may involve additional surface chemistry and reproducibility concerns.
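
    The low-frequency 1/f^α discrimination can be sketched by estimating α from a log-log fit to the periodogram. Below, white noise and a random walk serve as α ≈ 0 and α ≈ 2 references bracketing the flicker (α = 1) regime; this is an illustrative sketch, not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def spectral_slope(signal, dt=1.0):
    """Estimate alpha in a 1/f^alpha power spectrum via a log-log
    least-squares fit over the low-frequency part of the periodogram."""
    x = signal - signal.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), d=dt)
    n_fit = len(f) // 16  # stay within the low-frequency scaling regime
    slope, _ = np.polyfit(np.log(f[1:n_fit]), np.log(psd[1:n_fit]), 1)
    return -slope

white = rng.normal(size=4096)  # expect alpha near 0
walk = np.cumsum(white)        # integrated noise: expect alpha near 2
a_white = spectral_slope(white)
a_walk = spectral_slope(walk)
```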

  6. Transparent Helium in Stripped Envelope Supernovae

    NASA Astrophysics Data System (ADS)

    Piro, Anthony L.; Morozova, Viktoriya S.

    2014-09-01

    Using simple arguments based on photometric light curves and velocity evolution, we propose that some stripped envelope supernovae (SNe) show signs that a significant fraction of their helium is effectively transparent. The main pieces of evidence are the relatively low velocities with little velocity evolution, as are expected deep inside an exploding star, along with temperatures that are too low to ionize helium. This means that the helium should not contribute to the shaping of the main SN light curve, and thus the total helium mass may be difficult to measure from simple light curve modeling. Conversely, such modeling may be more useful for constraining the mass of the carbon/oxygen core of the SN progenitor. Other stripped envelope SNe show higher velocities and larger velocity gradients, which require an additional opacity source (perhaps the mixing of heavier elements or radioactive nickel) to prevent the helium from being transparent. We discuss ways in which similar analysis can provide insights into the differences and similarities between SNe Ib and Ic, which will lead to a better understanding of their respective formation mechanisms.

  7. An agent-based computational model for tuberculosis spreading on age-structured populations

    NASA Astrophysics Data System (ADS)

    Graciani Rodrigues, C. C.; Espíndola, Aquino L.; Penna, T. J. P.

    2015-06-01

    In this work we present an agent-based computational model to study the spreading of the tuberculosis (TB) disease on age-structured populations. The proposed model is a merger of two previous models: an agent-based computational model for the spreading of tuberculosis and a bit-string model for biological aging. Combining TB with population aging reproduces the coexistence of health states seen in real populations. In addition, the universal exponential behavior of mortality curves is preserved. Finally, the population distribution as a function of age shows the prevalence of TB mostly in elders for high-efficacy treatments.

  8. Triangulation-based edge measurement using polyview optics

    NASA Astrophysics Data System (ADS)

    Li, Yinan; Kästner, Markus; Reithmeier, Eduard

    2018-04-01

    Laser triangulation sensors, as non-contact measurement devices, are widely used in industry and research for profile measurements and quantitative inspections. Some technical applications, e.g. edge measurements, usually require a configuration with a single sensor and a translation stage, or a configuration with multiple sensors, so that they can cover a measurement range beyond the scope of a single sensor. However, the cost of both configurations is high due to the additional rotational axis or additional sensor. This paper presents a measurement system for highly curved surfaces based on a single-sensor configuration. Utilizing self-designed polyview optics and a calibration process, the proposed measurement system allows an over-180° FOV (field of view) with high measurement accuracy as well as the advantage of low cost. The capability of this measurement system is discussed in detail based on experimental data.

  9. A Dirichlet process model for classifying and forecasting epidemic curves.

    PubMed

    Nsoesie, Elaine O; Leman, Scotland C; Marathe, Madhav V

    2014-01-09

    A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997-2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. 
Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods' performance was comparable. Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial.

  10. Estimation of median growth curves for children up to two years old based on a biresponse local linear estimator

    NASA Astrophysics Data System (ADS)

    Chamidah, Nur; Rifada, Marisa

    2016-03-01

    There is a significant correlation between the weight and height of children. Therefore, simultaneous model estimation is better than a partial single-response approach. In this study we investigate the pattern of sex differences in the growth curves of children from birth up to two years of age in Surabaya, Indonesia, based on a biresponse model. The data were collected from a longitudinal representative sample of the Surabaya population of healthy children and consist of two response variables, i.e., weight (kg) and height (cm), while the predictor variable is age (months). Based on the generalized cross-validation criterion, the biresponse model using a local linear estimator gives optimal bandwidths of 1.41 and 1.56 for the boys' and girls' growth curves, with determination coefficients (R2) of 99.99% and 99.98%, respectively. Both curves satisfy the goodness-of-fit criterion, i.e., the determination coefficient tends to one. There is also a difference in growth pattern between boys and girls: the median growth curve for boys is higher than that for girls.
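A local linear estimator of the kind used for these growth curves fits a weighted straight line around each evaluation point; the intercept of that local fit is the curve estimate. A minimal single-response sketch with a Gaussian kernel, a hand-picked bandwidth, and hypothetical data (the study selects bandwidths by generalized cross-validation and fits both responses jointly):

```python
import math

def local_linear(x, y, x0, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel of bandwidth h."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    # weighted least-squares fit of a line centred at x0; its intercept
    # is the curve estimate at x0
    sw = sum(w)
    swx = sum(wi * (xi - x0) for wi, xi in zip(w, x))
    swx2 = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxy = sum(wi * (xi - x0) * yi for wi, xi, yi in zip(w, x, y))
    det = sw * swx2 - swx ** 2
    return (swx2 * swy - swx * swxy) / det

ages = [0, 3, 6, 9, 12, 18, 24]                  # months (hypothetical)
weights = [3.3, 5.7, 7.4, 8.6, 9.5, 10.8, 12.0]  # kg (hypothetical)
est = local_linear(ages, weights, 12, 3.0)
assert abs(est - 9.5) < 1.0   # close to the observed value at 12 months
```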

  11. Reference curves for the Australian/Canadian Hand Osteoarthritis Index in the middle-aged Dutch population.

    PubMed

    Kroon, Féline P B; Ramiro, Sofia; Royston, Patrick; Le Cessie, Saskia; Rosendaal, Frits R; Kloppenburg, Margreet

    2017-05-01

    The aim was to establish reference curves of the Australian/Canadian Hand Osteoarthritis Index (AUSCAN), a widely used questionnaire assessing hand complaints. Analyses were performed in a population-based sample, The Netherlands Epidemiology of Obesity study (n = 6671, aged 45-65 years). Factors associated with AUSCAN scores were analysed with ordered logistic regression, because AUSCAN data were zero inflated, dividing AUSCAN into three categories (0 vs 1-5 vs >5). Age- and sex-specific reference curves for the AUSCAN (range 0-60; higher is worse) were developed using quantile regression in conjunction with fractional polynomials. Observed scores in relevant subgroups were compared with the reference curves. The median age was 56 [interquartile range (IQR): 50-61] years; 56% were women and 12% had hand OA according to ACR criteria. AUSCAN scores were low (median 1; IQR: 0-4). Reference curves were higher for women, and increased moderately with age: 95th percentiles for AUSCAN in men and women were, respectively, 5.0 and 12.3 points for a 45-year-old, and 15.2 and 33.6 points for a 65-year-old individual. Additional associated factors included hand OA, inflammatory rheumatic diseases, FM, socio-economic status and BMI. Median AUSCAN pain subscale scores of women with hand OA lay between the 75th and 90th centiles of the general population. AUSCAN scores in the middle-aged Dutch population were low overall, and higher in women than in men. AUSCAN reference curves could serve as a benchmark in research and clinical practice settings. However, the AUSCAN does not measure hand complaints specific to hand OA.

  12. Visible Wavelength Exoplanet Phase Curves from Global Albedo Maps

    NASA Astrophysics Data System (ADS)

    Webber, Matthew; Cahoy, Kerri Lynn

    2015-01-01

    To investigate the effect of three-dimensional global albedo maps, we use an albedo model that calculates albedo spectra for each point across a grid in longitude and latitude on the planetary disk, uses the appropriate angles for the source-observer geometry at each location, and then weights and sums these spectra using Chebyshev-Gauss integration. This structure permits detailed 3D modeling of an illuminated planetary disk and computes disk-integrated phase curves. Different pressure-temperature profiles are used for each location based on geometry and dynamics. We directly couple high-density pressure maps from global dynamic radiative-transfer models to compute global cloud maps. Cloud formation is determined from the correlation of the species condensation curves with the temperature-pressure profiles. We use the detailed cloud patterns, of spatially varying composition and temperature, to determine the observable albedo spectra and phase curves for the exoplanets Kepler-7b and HD189733b. These albedo spectra are used to compute planet-star flux ratios using PHOENIX stellar models, exoplanet orbital parameters, and telescope transmission functions. Insights from the Earthshine spectrum and solid-surface albedo functions (e.g. water, ice, snow, rocks) are used with our planetary grid to determine the phase curves and flux ratios of non-uniform Earth and super-Earth-like exoplanets with various rotation rates and stellar types. Predictions can be tailored to the visible and near-infrared (NIR) spectral windows of the Kepler space telescope, Hubble Space Telescope, and future observatories (e.g. WFIRST, JWST, Exo-C, Exo-S). Additionally, we constrain the effect of exoplanet urban light on the shape of the night-side phase curve for Earths and super-Earths.
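The Chebyshev-Gauss integration named above has a particularly simple form: cosine-spaced nodes and equal weights for integrals with weight 1/sqrt(1 - x^2) on [-1, 1]. A minimal one-dimensional sketch (the model applies such a rule over the two-dimensional planetary disk):

```python
import math

def chebyshev_gauss(f, n):
    """n-point Chebyshev-Gauss rule for the integral of f(x)/sqrt(1 - x^2) over [-1, 1]."""
    nodes = [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    return math.pi / n * sum(f(x) for x in nodes)

# Exact value of the weighted integral of x^2 is pi/2; the rule reproduces it.
approx = chebyshev_gauss(lambda x: x * x, 8)
assert abs(approx - math.pi / 2) < 1e-12
```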

  13. Magnetic MOF microreactors for recyclable size-selective biocatalysis

    PubMed Central

    Huo, Jia; Aguilera-Sigalat, Jordi; El-Hankari, Samir

    2015-01-01

    In this contribution we report a synthetic strategy for the encapsulation of functional biomolecules within MOF-based microcapsules. We employ an agarose hydrogel droplet Pickering-stabilised by UiO-66 and magnetite nanoparticles as a template around which to deposit a hierarchically structured ZIF-8 shell. The resulting microcapsules are robust, highly microporous and readily attracted to a magnet, where the hydrogel core provides a facile means to encapsulate enzymes for recyclable size-selective biocatalysis. PMID:28717454

  14. Fetal growth in muskoxen determined by transabdominal ultrasonography.

    PubMed Central

    Pharr, J W; Rowell, J E; Flood, P F

    1994-01-01

    A 5 MHz commercial sector scanner was used to monitor 13 muskox pregnancies and establish normal fetal growth curves. Examinations were carried out between 40 and 197 days of gestation, and pregnancy could be detected throughout the period. Early pregnancies were found by scanning lateral to the udder, but as pregnancy progressed the fetus was found closer to the dam's umbilicus. Measurements of cranial and abdominal diameters taken at about two-week intervals in seven uncomplicated pregnancies in four cows were used to construct fetal growth curves. These can be reliably used in the reproductive management of muskoxen. In addition, a series of regressions based on measurements of the fetuses of muskoxen killed in the Arctic is provided. These allow cranial and abdominal diameters to be related to fetal weight and crown-rump length. PMID:7954117

  15. Soft ferromagnetic properties of Ni44Fe6Mn32Al18 partially doped with Co

    NASA Astrophysics Data System (ADS)

    Notonegoro, Hamdan Akbar; Kurniawan, Budhy; Kurniawan, Candra; Manaf, Azwar

    2017-01-01

    The search for suitable magnetocaloric materials near room temperature has made ferromagnetic (FM) (Ni-Mn)-based Heusler alloys receive considerable attention as potential candidates for magnetic refrigeration. These compounds are associated with the shape-memory effect, magnetic superelasticity, and other magneto-functional properties. The compounds were prepared by vacuum arc melting (VAM) under an argon atmosphere, and the sintering and annealing processes were carried out in evacuated quartz tubes. A small coercivity at σ = 0 appears in the hysteresis curve, whereas the magnetization of the sample at various temperatures does not reach saturation. The Curie temperature Tc of the sample was found to be 358 K. Nevertheless, this value is dubious because at T = 300 K the curves had already dropped sharply. Additional measurements are needed for comparison to verify this value.

  16. A comparative study of electric load curve changes in an urban low-voltage substation in Spain during the economic crisis (2008-2013).

    PubMed

    Lara-Santillán, Pedro M; Mendoza-Villena, Montserrat; Fernández-Jiménez, L Alfredo; Mañana-Canteli, Mario

    2014-01-01

    This paper presents a comparative study of the electricity consumption (EC) in an urban low-voltage substation before and during the economic crisis (2008-2013). This low-voltage substation supplies electric power to nearly 400 users. The EC was measured over an 11-year period (2002-2012) with a sampling time of 1 minute. The study consists of detecting the changes produced in the load curves of this substation over time due to changes in consumer behaviour. The EC was compared using representative curves per time period (pre-crisis and crisis). These representative curves were obtained through a computational process based on a search for days with curves similar to that of a given (base) date. This similarity was assessed by proximity on the calendar, day of the week, daylight time, and outdoor temperature. The final selection parameter was the error between the nearest-neighbour curves and the base-date curve. The obtained representative curves were linearized to determine changes in their structure (maximum and minimum consumption values, duration of the daily time slots, etc.). The results primarily indicate an increase in EC in the night slot during the summer months in the crisis period.

  17. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    NASA Astrophysics Data System (ADS)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. 
We suggest that communities should first be sampled thoroughly using appropriate taxon sampling curves before explaining differences in diversity.
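The non-parametric incidence-based estimators recommended above have simple closed forms, e.g. the first-order jackknife Jack1 = S_obs + Q1(m - 1)/m and Chao2 = S_obs + Q1^2/(2 Q2), where Q1 and Q2 count species found in exactly one or two samples. A sketch with hypothetical plot data (ICE is omitted for brevity):

```python
def jack1_chao2(samples):
    """First-order jackknife and Chao2 richness from per-sample species sets."""
    m = len(samples)
    counts = {}
    for sample in samples:
        for sp in sample:
            counts[sp] = counts.get(sp, 0) + 1
    s_obs = len(counts)
    q1 = sum(1 for c in counts.values() if c == 1)  # species in exactly 1 sample
    q2 = sum(1 for c in counts.values() if c == 2)  # species in exactly 2 samples
    jack1 = s_obs + q1 * (m - 1) / m
    # bias-corrected Chao2 form when no duplicates are present
    chao2 = s_obs + (q1 * q1) / (2 * q2) if q2 else s_obs + q1 * (q1 - 1) / 2
    return s_obs, jack1, chao2

plots = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"a", "c", "e"}]
s_obs, jack1, chao2 = jack1_chao2(plots)
assert s_obs == 5 and jack1 == 7.25 and chao2 == 8.0
```

Tracking these estimates as plots accumulate gives exactly the estimator curves whose stabilization signalled adequate sampling in the study.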

  18. Quality Quandaries: Predicting a Population of Curves

    DOE PAGES

    Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip

    2017-12-19

    We present a random-effects spline regression model that provides an integrated approach for analyzing functional data, i.e., curves, when the shape of the curves is not parametrically specified. An analysis using this model is presented that makes inferences about a population of curves as well as features of the curves.

  20. Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image

    NASA Astrophysics Data System (ADS)

    Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti

    2016-06-01

    An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. The characteristics used for object recognition in an image can be color, shape, pattern, texture and spatial information that represent objects in the digital image. A method recently developed for image feature extraction analyzes the characteristics of simple curves using a chain-code search of the object. This study develops an algorithm for the analysis and recognition of curve type as the basis for object recognition in images, proposing the addition of complex-curve characteristics with at most four branches for the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detection images, the algorithm was able to analyze and recognize complex curve shapes well.

  1. Interpretation of OAO-2 ultraviolet light curves of beta Doradus

    NASA Technical Reports Server (NTRS)

    Hutchinson, J. L.; Lillie, C. F.; Hill, S. J.

    1975-01-01

    Middle-ultraviolet light curves of beta Doradus, obtained by OAO-2, are presented along with other evidence indicating that the small additional bumps observed on the rising branches of these curves have their origin in shock-wave phenomena in the upper atmosphere of this classical Cepheid. A simple piston-driven spherical hydrodynamic model of the atmosphere is developed to explain the bumps, and the calculations are compared with observations. The model is found to be consistent with the shapes of the light curves as well as with measurements of the H-alpha radial velocities.

  2. Developing Turbulent Flow in Strongly Curved Passages of Square and Circular Cross-Section

    DTIC Science & Technology

    1984-03-01

    laser-velocimetry study known to us for developing turbulent flow in curved pipes, Enayet et al. investigated the motion in a 90° bend with Rc... flows are very similar, being De = Re (D/Rc)^(1/2) = 6.8 × 10^4 in Rowe's bend and 2.6 × 10^4 in the bend of Enayet et al., the difference in the maximum... a curved duct of square cross section. In addition to the data taken at three longitudinal stations in the curved pipe (θ = 30°, 60° and 90°), Enayet

  3. Quantifying stream nutrient uptake from ambient to saturation with instantaneous tracer additions

    NASA Astrophysics Data System (ADS)

    Covino, T. P.; McGlynn, B. L.; McNamara, R.

    2009-12-01

    Stream nutrient tracer additions and spiraling metrics are frequently used to quantify stream ecosystem behavior. However, standard approaches limit our understanding of aquatic biogeochemistry. Specifically, the relationship between in-stream nutrient concentration and stream nutrient spiraling has not been characterized. The standard constant rate (steady-state) approach to stream spiraling parameter estimation, either through elevating nutrient concentration or adding isotopically labeled tracers (e.g. 15N), provides little information regarding the stream kinetic curve that represents the uptake-concentration relationship analogous to the Michaelis-Menten curve. These standard approaches provide single or a few data points and often focus on estimating ambient uptake under the conditions at the time of the experiment. Here we outline and demonstrate a new method using instantaneous nutrient additions and dynamic analyses of breakthrough curve (BTC) data to characterize the full relationship between spiraling metrics and nutrient concentration. We compare the results from these dynamic analyses to BTC-integrated, and standard steady-state approaches. Our results indicate good agreement between these three approaches but we highlight the advantages of our dynamic method. Specifically, our new dynamic method provides a cost-effective and efficient approach to: 1) characterize full concentration-spiraling metric curves; 2) estimate ambient spiraling metrics; 3) estimate Michaelis-Menten parameters maximum uptake (Umax) and the half-saturation constant (Km) from developed uptake-concentration kinetic curves, and; 4) measure dynamic nutrient spiraling in larger rivers where steady-state approaches are impractical.
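The uptake-concentration kinetic curve referred to above is the Michaelis-Menten relation U = Umax·C/(Km + C); once Umax and Km are estimated from the breakthrough-curve data, uptake at any concentration follows directly. A sketch with hypothetical parameter values:

```python
def uptake(conc, u_max, km):
    """Michaelis-Menten areal uptake at nutrient concentration conc."""
    return u_max * conc / (km + conc)

# Hypothetical parameters: u_max = 200 (e.g. ug m^-2 min^-1), km = 50 (ug L^-1).
# At conc == km the uptake is half of u_max, by definition of Km.
assert abs(uptake(50.0, 200.0, 50.0) - 100.0) < 1e-12
# Uptake saturates toward u_max at high concentration.
assert uptake(5000.0, 200.0, 50.0) > 195.0
```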

  4. An improved method to determine neuromuscular properties using force laws - From single muscle to applications in human movements.

    PubMed

    Siebert, T; Sust, M; Thaller, S; Tilp, M; Wagner, H

    2007-04-01

    We evaluate an improved method for individually determining neuromuscular properties in vivo. The method is based on Hill's equation used as a force law combined with Newton's equation of motion. To ensure the range of validity of Hill's equation, we first perform detailed investigations on in vitro single muscles. The force-velocity relation determined with the model coincides well with results obtained by standard methods (r=.99) above 20% of the isometric force. In addition, the model-predicted force curves during work-loop contractions agree very well with measurements (mean difference: 2-3%). Subsequently, we deduce theoretically under which conditions it is possible to combine several muscles of the human body into model muscles. This leads to a model equation for human leg extension movements containing parameters for the muscle properties and for the activation. To numerically determine these invariant neuromuscular properties, we devise an experimental method based on concentric and isometric leg extensions. With this method we determine individual muscle parameters from experiments such that the simulated curves agree well with experiments (r=.99). A reliability test with 12 participants revealed correlations of r=.72-.91 for the neuromuscular parameters (p<.01). Predictions of similar movements under different conditions show mean errors of about 5%. In addition, we present applications in sports practice and theory.
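Hill's equation used as the force law has the hyperbolic form (F + a)(v + b) = (F0 + a)b, which gives shortening velocity directly from force. A minimal sketch with hypothetical parameter values (the study estimates such parameters per individual):

```python
def hill_velocity(force, f0, a, b):
    """Shortening velocity from Hill's force law (F + a)(v + b) = (F0 + a) * b."""
    return b * (f0 - force) / (force + a)

# Hypothetical parameters: isometric force f0 = 1000 N, a = 250 N, b = 1 m/s.
assert hill_velocity(1000.0, 1000.0, 250.0, 1.0) == 0.0   # isometric limit
v_max = hill_velocity(0.0, 1000.0, 250.0, 1.0)            # unloaded shortening
assert abs(v_max - 4.0) < 1e-12                           # equals b * f0 / a
```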

  5. NEXT Performance Curve Analysis and Validation

    NASA Technical Reports Server (NTRS)

    Saripalli, Pratik; Cardiff, Eric; Englander, Jacob

    2016-01-01

    Performance curves of the NEXT thruster are highly important in determining the thruster's ability to meet mission-specific goals. New performance curves are proposed and examined here. The Evolutionary Mission Trajectory Generator (EMTG) is used to verify variations in mission solutions based on both the available thruster curves and the newly generated curves. Furthermore, variations in BOL and EOL curves are also examined. The mission design results shown here validate the use of EMTG and the new performance curves.

  6. MICA: Multiple interval-based curve alignment

    NASA Astrophysics Data System (ADS)

    Mann, Martin; Kahle, Hans-Peter; Beck, Matthias; Bender, Bela Johannes; Spiecker, Heinrich; Backofen, Rolf

    2018-01-01

    MICA enables the automatic synchronization of discrete data curves. To this end, characteristic points of the curves' shapes are identified. These landmarks are used within a heuristic curve registration approach to align profile pairs by mapping similar characteristics onto each other. In combination with a progressive alignment scheme, this enables the computation of multiple curve alignments. Multiple curve alignments are needed to derive meaningful representative consensus data of measured time or data series. MICA was already successfully applied to generate representative profiles of tree growth data based on intra-annual wood density profiles or cell formation data. The MICA package provides a command-line and graphical user interface. The R interface enables the direct embedding of multiple curve alignment computation into larger analysis pipelines. Source code, binaries and documentation are freely available at https://github.com/BackofenLab/MICA

  7. Controlling Surface Plasmons Through Covariant Transformation of the Spin-Dependent Geometric Phase Between Curved Metamaterials

    NASA Astrophysics Data System (ADS)

    Zhong, Fan; Li, Jensen; Liu, Hui; Zhu, Shining

    2018-06-01

    General relativity uses curved space-time to describe accelerating frames. The movement of particles in different curved space-times can be regarded as equivalent physical processes based on the covariant transformation between different frames. In this Letter, we use one-dimensional curved metamaterials to mimic accelerating particles in curved space-times. The different curved shapes of structures are used to mimic different accelerating frames. The different geometric phases along the structure are used to mimic different movements in the frame. Using the covariant principle of general relativity, we can obtain equivalent nanostructures based on space-time transformations, such as the Lorentz transformation and conformal transformation. In this way, many covariant structures can be found that produce the same surface plasmon fields when excited by spin photons. A new kind of accelerating beam, the Rindler beam, is obtained based on the Rindler metric in gravity. Very large effective indices can be obtained in such systems based on geometric-phase gradient. This general covariant design method can be extended to many other optical media.

  8. Comparison of random regression test-day models for Polish Black and White cattle.

    PubMed

    Strabel, T; Szyda, J; Ptak, E; Jamrozik, J

    2005-10-01

    Test-day milk yields of first-lactation Black and White cows were used to select the model for routine genetic evaluation of dairy cattle in Poland. The population of Polish Black and White cows is characterized by small herd size, low level of production, and relatively early peak of lactation. Several random regression models for first-lactation milk yield were initially compared using the "percentage of squared bias" criterion and the correlations between true and predicted breeding values. Models with random herd-test-date effects, fixed age-season and herd-year curves, and random additive genetic and permanent environmental curves (Legendre polynomials of different orders were used for all regressions) were chosen for further studies. Additional comparisons included analyses of the residuals and shapes of variance curves in days in milk. The low production level and early peak of lactation of the breed required the use of Legendre polynomials of order 5 to describe age-season lactation curves. For the other curves, Legendre polynomials of order 3 satisfactorily described daily milk yield variation. Fitting third-order polynomials for the permanent environmental effect made it possible to adequately account for heterogeneous residual variance at different stages of lactation.
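The Legendre polynomial regressions used in such random regression test-day models are typically evaluated with the Bonnet recurrence after standardizing days in milk to [-1, 1]. A minimal sketch; the 5-305 d standardization range is a common convention assumed here, not stated in the abstract:

```python
def legendre_basis(t, order):
    """Legendre polynomials P_0..P_order at t in [-1, 1] via the Bonnet recursion."""
    p = [1.0, t]
    for n in range(1, order):
        p.append(((2 * n + 1) * t * p[n] - n * p[n - 1]) / (n + 1))
    return p[: order + 1]

# Standardize days in milk to [-1, 1] before evaluation (5-305 d assumed).
dim = 155
t = 2 * (dim - 5) / (305 - 5) - 1
basis = legendre_basis(t, 3)       # order-3 curve: 4 regression covariables
assert basis[0] == 1.0 and basis[1] == t
assert abs(basis[2] - (1.5 * t * t - 0.5)) < 1e-12
```

The additive genetic and permanent environmental curves are then linear combinations of these covariables with animal-specific random regression coefficients.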

  9. p-Curve and p-Hacking in Observational Research.

    PubMed

    Bruns, Stephan B; Ioannidis, John P A

    2016-01-01

    The p-curve, the distribution of statistically significant p-values of published studies, has been used to make inferences on the proportion of true effects and on the presence of p-hacking in the published literature. We analyze the p-curve for observational research in the presence of p-hacking. We show by means of simulations that even with minimal omitted-variable bias (e.g., unaccounted confounding) p-curves based on true effects and p-curves based on null-effects with p-hacking cannot be reliably distinguished. We also demonstrate this problem using as practical example the evaluation of the effect of malaria prevalence on economic growth between 1960 and 1996. These findings call recent studies into question that use the p-curve to infer that most published research findings are based on true effects in the medical literature and in a wide range of disciplines. p-values in observational research may need to be empirically calibrated to be interpretable with respect to the commonly used significance threshold of 0.05. Violations of randomization in experimental studies may also result in situations where the use of p-curves is similarly unreliable.
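The mechanism by which p-hacking distorts the p-curve can be illustrated with a toy simulation: under the null, keeping the smallest p-value over several model specifications inflates the share of "significant" results far above the nominal 5%. A crude sketch, not the paper's simulation design:

```python
import math
import random

def p_value(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return math.erfc(abs(z) / math.sqrt(2))

def hacked_p(n_specs, rng):
    """Smallest p over n_specs null tests: a crude model of specification search."""
    return min(p_value(rng.gauss(0.0, 1.0)) for _ in range(n_specs))

rng = random.Random(42)
share = sum(hacked_p(5, rng) < 0.05 for _ in range(2000)) / 2000
# Without hacking ~5% of null tests are 'significant'; trying 5 specifications
# raises the expected share to 1 - 0.95**5, roughly 23%.
assert share > 0.10
```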

  10. Development and Characterization of a Rate-Dependent Three-Dimensional Macroscopic Plasticity Model Suitable for Use in Composite Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2015-01-01

    Several key capabilities have been identified by the aerospace community as lacking in the material models for composite materials currently available within commercial transient dynamic finite element codes such as LS-DYNA. Some of the specific desired features that have been identified include the incorporation of both plasticity and damage within the material model, the capability of using the material model to analyze the response of both three-dimensional solid elements and two-dimensional shell elements, and the ability to simulate the response of composites with a variety of architectures, including laminates, weaves and braids. In addition, a need has been expressed for a material model that utilizes tabulated experimentally based input to define the evolution of plasticity and damage, as opposed to utilizing discrete input parameters (such as modulus and strength) and analytical functions based on curve fitting. To begin to address these needs, an orthotropic macroscopic plasticity-based model suitable for implementation within LS-DYNA has been developed. Specifically, the Tsai-Wu composite failure model has been generalized and extended to a strain-hardening-based orthotropic plasticity model with a non-associative flow rule. The coefficients in the yield function are determined based on tabulated stress-strain curves in the various normal and shear directions, along with selected off-axis curves. Incorporating rate dependence into the yield function is achieved by using a series of tabulated input curves, each at a different constant strain rate. The non-associative flow rule is used to compute the evolution of the effective plastic strain. Systematic procedures have been developed to determine the values of the various coefficients in the yield function and the flow rule based on the tabulated input data. 
An algorithm based on the radial return method has been developed to facilitate the numerical implementation of the material model. This paper presents in detail the development of the orthotropic plasticity model and the procedures used to obtain the required material parameters. Methods by which actual testing and selective numerical testing can be combined to yield the appropriate input data for the model are described. A specific laminated polymer matrix composite is examined to demonstrate the application of the model.
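
    The tabulated-input idea can be sketched briefly: given stress-strain tables at several constant strain rates, the flow stress at an arbitrary plastic strain and strain rate is found by interpolating within each table in strain and across tables in log strain rate. This is a generic illustration of tabulated input, not the LS-DYNA material model itself; the two-rate table below is hypothetical.

```python
import math
from bisect import bisect_right

def interp(xs, ys, x):
    """Piecewise-linear interpolation with clamping at the table ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_right(xs, x) - 1
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

def yield_stress(curves, eps_p, rate):
    """Tabulated, rate-dependent flow stress: interpolate each
    stress-strain table at the current plastic strain, then
    interpolate across tables in log10(strain rate)."""
    rates = sorted(curves)
    sigmas = [interp(*curves[r], x=eps_p) for r in rates]
    return interp([math.log10(r) for r in rates], sigmas, math.log10(rate))

# Hypothetical two-rate table: (plastic strains, stresses in MPa).
curves = {
    1e-4: ([0.0, 0.01, 0.05], [100.0, 120.0, 140.0]),
    1e+0: ([0.0, 0.01, 0.05], [130.0, 155.0, 180.0]),
}
print(yield_stress(curves, 0.01, 1e-2))  # 137.5, halfway in log rate
```

    Because the curves are tabulated, no analytic hardening law is assumed; the model simply reproduces whatever the experiments (or off-axis tests) provided.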

  11. Macular Diagnostic Ability in OCT for Assessing Glaucoma in High Myopia.

    PubMed

    Hung, Kuo-Chi; Wu, Pei-Chang; Poon, Yi-Chieh; Chang, Hsueh-Wen; Lai, Ing-Chou; Tsai, Jen-Chia; Lin, Pei-Wen; Teng, Mei-Ching

    2016-02-01

    To compare the diagnostic abilities of spectral-domain optical coherence tomography (SD-OCT; Spectralis OCT) and time-domain OCT (TD-OCT; Stratus OCT). Changes in macular parameters in highly myopic eyes of glaucoma patients and highly myopic eyes of glaucoma suspects were evaluated and compared. We collected data from 72 highly myopic eyes (spherical equivalent, ≤-6.0D). Forty-one eyes had perimetric glaucoma and 31 eyes were suspected to have glaucoma (control group). All eyes underwent SD-OCT and TD-OCT imaging. Area under the receiver operating characteristic (AUROC) curve and sensitivity were examined on macular volume and thickness parameters at a fixed specificity and compared between groups. The highest TD-OCT AUROC curves were found using outer inferior sector macular thickness (AUROC curve, 0.911) and volume (AUROC curve, 0.909). The highest SD-OCT AUROC curves were found using outer inferior region thickness (AUROC curve, 0.836) and volume (AUROC curve, 0.834). The difference between the two imaging modalities was not statistically significant (thickness, p = 0.141; volume, p = 0.138). The sensitivity of TD-OCT macular outer inferior average thickness was highest and was 88.2%, with a specificity of 80.4%. The sensitivity of TD-OCT average volume measurements in this same region was 76.5%, with a specificity of 91.3%. The SD-OCT average thickness measurements also had the highest sensitivity in this region, which was 78.6%, with a specificity of 82.1%. The SD-OCT volume measurements had a sensitivity of 67.9%, with a specificity of 92.3%. Both SD-OCT and TD-OCT measurements of outer inferior macular thickness and volume can differentiate between eyes of glaucoma patients and glaucoma suspects with high myopia. These independent predictors all had good sensitivity. Based on our results, SD-OCT and TD-OCT have similar diagnostic abilities. 
These parameters may provide useful additional data in highly myopic eyes to complement standard glaucoma diagnosis tools.
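
    The AUROC statistic used throughout this record has a simple rank-based form: it is the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch, with hypothetical thickness values rather than the study's data:

```python
def auroc(pos, neg):
    """Rank-based AUROC: the probability that a randomly chosen
    positive case outranks a randomly chosen control (ties count
    half). Equivalent to the trapezoidal area under the ROC curve."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy macular thickness values (µm). Glaucomatous eyes are thinner,
# so score by negative thickness to keep "higher score = disease".
glaucoma = [-210, -215, -220, -240]
suspects = [-250, -255, -260, -235, -265]
print(auroc(glaucoma, suspects))  # 0.95
```

    Sensitivity at a fixed specificity, as reported above, is then read off the same ranking by choosing the threshold that first achieves the target specificity on the control group.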

  12. Experimental constraints on melting temperatures in the MgO-SiO2 system at lower mantle pressures

    NASA Astrophysics Data System (ADS)

    Baron, Marzena A.; Lord, Oliver T.; Myhill, Robert; Thomson, Andrew R.; Wang, Weiwei; Trønnes, Reidar G.; Walter, Michael J.

    2017-08-01

    Eutectic melting curves in the system MgO-SiO2 have been experimentally determined at lower mantle pressures using laser-heated diamond anvil cell (LH-DAC) techniques. We investigated eutectic melting of bridgmanite plus periclase in the MgO-MgSiO3 binary, and melting of bridgmanite plus stishovite in the MgSiO3-SiO2 binary, as analogues for natural peridotite and basalt, respectively. The melting curve of model basalt occurs at lower temperatures, has a shallower dT/dP slope and slightly less curvature than the model peridotitic melting curve. Overall, melting temperatures detected in this study are in good agreement with previous experiments and ab initio simulations at ∼25 GPa (Liebske and Frost, 2012; de Koker et al., 2013). However, at higher pressures the measured eutectic melting curves are systematically lower in temperature than curves extrapolated on the basis of thermodynamic modelling of low-pressure experimental data, and those calculated from atomistic simulations. We find that our data are inconsistent with previously computed melting temperatures and melt thermodynamic properties of the SiO2 endmember, and indicate a maximum in short-range ordering in MgO-SiO2 melts close to Mg2SiO4 composition. The curvature of the model peridotite eutectic relative to an MgSiO3 melt adiabat indicates that crystallization in a global magma ocean would begin at ∼100 GPa rather than at the bottom of the mantle, allowing for an early basal melt layer. The model peridotite melting curve lies ∼500 K above the mantle geotherm at the core-mantle boundary, indicating that it will not be molten unless the addition of other components reduces the solidus sufficiently. The model basalt melting curve intersects the geotherm at the base of the mantle, and partial melting of subducted oceanic crust is expected.

  13. Study of magnetization switching in coupled magnetic nanostructured systems

    NASA Astrophysics Data System (ADS)

    Radu, Cosmin

    A study of magnetization dynamics experiments in nanostructured materials using the rf susceptibility tunnel diode oscillator (TDO) method is presented, along with an extensive theoretical analysis. An original, computer-controlled experimental setup that measures the change in susceptibility with the variation in external magnetic field and sample temperature was constructed. The TDO-based experiment design and construction are explained in detail, highlighting the elements of originality. This experimental technique has proven reliable for characterizing samples with uncoupled magnetic structure and various magnetic anisotropies, such as CrO2, FeCo/IrMn and Co/SiO2 thin films. The TDO was subsequently used to explore magnetization switching in coupled magnetic systems, such as synthetic antiferromagnet (SAF) structures. Magnetoresistive random access memory (MRAM) is an important example of a device where the use of the SAF structure is essential. To support the understanding of SAF magnetic behavior, its configuration and applications are reviewed, with further details provided in an appendix. Current problems in increasing the scalability and decreasing the error rate of MRAM devices are closely connected to the switching properties of SAF structures. Several theoretical studies devoted to the concept of the SAF critical curve are reviewed. Notably, there has been no experimental determination of the SAF critical curve, owing to the difficulties of characterizing a magnetically coupled structure. Depending on the coupling strength between the two ferromagnetic layers, the SAF critical curve exhibits several new features that are nonexistent in uncoupled systems. Knowing the configuration of the SAF critical curve is of great importance in order to control its switching characteristics. A method of experimentally recording the critical curve for a SAF is proposed in this work for the first time.
In order to overcome technological limitations, a new way of recording the critical curve by using an additional magnetic bias field was explored. Keywords: magnetization dynamics, magnetic susceptibility, tunnel diode oscillator, critical curve, synthetic antiferromagnet, coupled magnetic structures, MRAM.

  14. Chemosensitivity testing of human tumors using a microplate adenosine triphosphate luminescence assay: clinical correlation for cisplatin resistance of ovarian carcinoma.

    PubMed

    Andreotti, P E; Cree, I A; Kurbacher, C M; Hartmann, D M; Linder, D; Harel, G; Gleiberman, I; Caruso, P A; Ricks, S H; Untch, M

    1995-11-15

    An ATP luminescence assay (TCA 100) was used to measure chemotherapeutic drug sensitivity and resistance of dissociated tumor cells cultured for 6 days in serum-free medium and 96-well polypropylene microplates. Studies were performed with surgical, needle biopsy, pleural, or ascitic fluid specimens using 10,000-20,000 cells/well. ATP measurements were used to determine tumor growth inhibition. Single agents and drug combinations were evaluated using the area under the curve and 50% inhibitory concentration (IC50) results for a series of test drug concentrations. The ATP luminometry method had high sensitivity, linearity, and precision for measuring the activity of single agents and drug combinations. Assay reproducibility was high, with intraassay and interassay coefficients of variation of 10-15% for percentage of tumor growth inhibition, 5-10% for the area under the curve, and 15-20% for IC50 results. Good correlation (r = 0.93) between the area under the curve and IC50 results was observed. Cytological studies with 124 specimens demonstrated selective growth of malignant cells in the serum-free culture system. Studies with malignant and benign specimens also showed selective growth of malignant cells in the serum-free medium used for assay. The assay had a success rate of 87% based on criteria for specimen histopathology, magnitude of cell growth, and dose-response drug activity. Cisplatin results for ovarian carcinoma are presented for 81 specimens from 70 untreated patients and 33 specimens from 30 refractory patients. A model for interpretation of these results based on the correlation of clinical response with the area under the curve and IC50 results indicates that the assay has > 90% accuracy for cisplatin resistance of ovarian carcinoma. Additional studies are in progress to evaluate the clinical efficacy of this assay.
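
    The two summary statistics used here, IC50 and area under the dose-response curve, can be sketched from a tabulated dose-response series. The cisplatin numbers below are hypothetical, chosen only to illustrate the calculation:

```python
import math

def ic50(doses, inhibition):
    """Dose giving 50% inhibition, by linear interpolation on
    log10(dose) between the two bracketing points."""
    for (d0, y0), (d1, y1) in zip(zip(doses, inhibition),
                                  zip(doses[1:], inhibition[1:])):
        if y0 < 50 <= y1:
            t = (50 - y0) / (y1 - y0)
            return 10 ** (math.log10(d0)
                          + t * (math.log10(d1) - math.log10(d0)))
    return None  # never crosses 50%

def auc(doses, inhibition):
    """Trapezoidal area under the %-inhibition curve over log10(dose)."""
    xs = [math.log10(d) for d in doses]
    return sum((xs[i + 1] - xs[i]) * (inhibition[i] + inhibition[i + 1]) / 2
               for i in range(len(xs) - 1))

# Hypothetical dose-response (µg/mL vs. % growth inhibition):
doses = [0.1, 1.0, 10.0, 100.0]
inhib = [5.0, 30.0, 70.0, 95.0]
print(round(ic50(doses, inhib), 3))  # 3.162 (= 10**0.5)
```

    The high correlation between AUC and IC50 reported above is expected, since both are monotone summaries of the same dose-response table.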

  15. Beyond the SCS curve number: A new stochastic spatial runoff approach

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.

    2015-12-01

    The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, that is complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
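
    For reference, the traditional SCS-CN event runoff that this record generalizes is a one-line formula: with potential retention S = 1000/CN - 10 (inches) and initial abstraction Ia = 0.2 S, runoff is Q = (P - Ia)^2 / (P - Ia + S) once rainfall P exceeds Ia. A minimal sketch:

```python
def scs_cn_runoff(P, CN, ia_ratio=0.2):
    """Classic SCS-CN event runoff (depths in inches).

    S = 1000/CN - 10 is the potential retention, Ia = ia_ratio * S the
    initial abstraction; no runoff occurs until rainfall exceeds Ia.
    """
    S = 1000.0 / CN - 10.0
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# 4 in of rain on a CN = 80 watershed:
print(round(scs_cn_runoff(P=4.0, CN=80), 3))  # 2.042
```

    The extended theory replaces this single deterministic curve with a PDF of runoff over the watershed; the formula above is what the spatial average reduces to under the original SCS assumptions.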

  16. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    PubMed

    Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P

    2009-12-01

    For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematic function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MR(glc)) estimation, and therapy response monitoring (Delta MR(glc)). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of (18)F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic (18)F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MR(glc) estimation and with Delta MR(glc) for therapy effects using serial sampling as the gold standard. Excellent individual fits to the function were derived with significantly different decay constants (P < 0.001). 
Correlations between AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0.880, 0.994, and 0.856, respectively. For MR(glc), these correlations were 0.963, 0.994, and 0.966, respectively. In response monitoring, these correlations were 0.947, 0.982, and 0.949, respectively. Additional scaling by 1 late arterial sample showed a significant improvement (P < 0.001). The fitted input function calibrated for AA and iDV performed similarly to IDIF. Performance improved significantly using 1 late arterial sample. The proposed model can be used when an IDIF is not available or when serial arterial sampling is not feasible.
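
    The functional form proposed above, a linear rise to a peak followed by a tri-exponential decay, can be sketched directly; the constants below are illustrative placeholders, not the fitted population medians reported by the authors:

```python
import math

def aptac(t, t_peak=0.6, a=700.0, b=80.0, c=20.0,
          l1=4.0, l2=0.3, l3=0.01):
    """Input-function shape from the abstract: linear rise from zero
    to the peak value, then a tri-exponential decay joined
    continuously at the peak. All parameter values are hypothetical."""
    if t <= 0.0:
        return 0.0
    if t < t_peak:
        return (t / t_peak) * (a + b + c)          # linear rise
    dt = t - t_peak
    return (a * math.exp(-l1 * dt) + b * math.exp(-l2 * dt)
            + c * math.exp(-l3 * dt))              # tri-exponential decay

def trapz_auc(f, t_end, n=6000):
    """Trapezoidal AUC, the comparison metric used in the validation."""
    h = t_end / n
    return h * (0.5 * (f(0.0) + f(t_end))
                + sum(f(i * h) for i in range(1, n)))

peak_value = aptac(0.6)           # continuity check at the peak
total = trapz_auc(aptac, 60.0)    # AUC over a 60-min scan
```

    Calibration by administered activity and distribution volume (or by one late arterial sample) amounts to rescaling this population curve per patient.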

  17. Bracing and exercise-based treatment for idiopathic scoliosis.

    PubMed

    Kalichman, Leonid; Kendelker, Liron; Bezalel, Tomer

    2016-01-01

    Various conservative therapies are available for treating adolescent idiopathic scoliosis (AIS); however, the disparities between them and the evidence of their efficacy and effectiveness are still unclear. To evaluate the effectiveness of different conservative treatments on AIS. A literature-based narrative review of the English-language medical literature. The most appropriate treatment for each patient should be chosen individually, based on various parameters. Bracing has been found to be the most effective conservative treatment for AIS. There is limited evidence that specific physical exercises are also an effective intervention for AIS. Exercise-based physical therapy, if correctly administered, can prevent a worsening of the curve and may decrease the need for bracing. In addition, physical exercises were found to be the only treatment improving respiratory function. Combining bracing with exercise increases treatment efficacy compared with a single treatment. Additional well-designed, good-quality studies are required to assess the effectiveness of different conservative methods in treating AIS. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Analysis of the sources of uncertainty for EDR2 film‐based IMRT quality assurance

    PubMed Central

    Shi, Chengyu; Papanikolaou, Nikos; Yan, Yulong; Weng, Xuejun; Jiang, gyu

    2006-01-01

    In our institution, patient‐specific quality assurance (QA) for intensity‐modulated radiation therapy (IMRT) is usually performed by measuring the dose to a point using an ion chamber and by measuring the dose to a plane using film. In order to perform absolute dose comparison measurements using film, an accurate calibration curve should be used. In this paper, we investigate the film response curve uncertainty factors, including film batch differences, film processor temperature effect, film digitization, and treatment unit. In addition, we reviewed 50 patient‐specific IMRT QA procedures performed in our institution in order to quantify the sources of error in film‐based dosimetry. Our study showed that the EDR2 film dosimetry can be done with less than 3% uncertainty. The EDR2 film response was not affected by the choice of treatment unit provided the nominal energy was the same. This investigation of the different sources of uncertainties in the film calibration procedure can provide a better understanding of the film‐based dosimetry and can improve quality control for IMRT QA. PACS numbers: 87.86.Cd, 87.53.Xd, 87.57.Nk PMID:17533329

  19. Predicting drug-target interactions by dual-network integrated logistic matrix factorization

    NASA Astrophysics Data System (ADS)

    Hao, Ming; Bryant, Stephen H.; Wang, Yanli

    2017-01-01

    In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing the profile kernel matrix; (2) diffusing the drug profile kernel matrix with the drug structure kernel matrix; (3) diffusing the target profile kernel matrix with the target sequence kernel matrix; and (4) building the DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under the precision-recall curve) and AUC (area under the receiver operating characteristic curve) based on 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends not only on the proposed objective function, but also on the nonlinear diffusion technique used, which is important but understudied in the DTI prediction field. In addition, we also compile a new DTI dataset to increase the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.
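
    The core of logistic matrix factorization can be sketched without the kernel-diffusion steps: fit drug factors U and target factors V so that sigmoid(U_i · V_j) approximates the observed 0/1 interaction, by gradient descent on a regularized log-loss. This single-network sketch omits DNILMF's diffusion and neighbor smoothing; the interaction matrix is a toy example:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lmf_train(Y, k=4, iters=2000, lr=0.05, reg=0.01, seed=0):
    """Minimal logistic matrix factorization: plain gradient descent
    on the regularized Bernoulli log-loss of sigmoid(U_i . V_j)."""
    rng = random.Random(seed)
    m, n = len(Y), len(Y[0])
    U = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(m)]
    V = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n)]
    for _ in range(iters):
        for i in range(m):
            for j in range(n):
                p = sigmoid(sum(U[i][f] * V[j][f] for f in range(k)))
                e = Y[i][j] - p  # gradient of log-loss w.r.t. the logit
                for f in range(k):
                    u, v = U[i][f], V[j][f]
                    U[i][f] += lr * (e * v - reg * u)
                    V[j][f] += lr * (e * u - reg * v)
    return U, V

# Tiny hypothetical interaction matrix (rows: drugs, cols: targets).
Y = [[1, 0, 1],
     [0, 1, 0],
     [1, 0, 1]]
U, V = lmf_train(Y)
p_hit = sigmoid(sum(u * v for u, v in zip(U[0], V[0])))   # observed 1
p_miss = sigmoid(sum(u * v for u, v in zip(U[0], V[1])))  # observed 0
```

    DNILMF improves on this baseline by replacing the raw profiles with kernel-diffused ones and averaging each new prediction with its nearest neighbors.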

  20. A serum protein-based algorithm for the detection of Alzheimer disease.

    PubMed

    O'Bryant, Sid E; Xiao, Guanghua; Barber, Robert; Reisch, Joan; Doody, Rachelle; Fairchild, Thomas; Adams, Perrie; Waring, Steven; Diaz-Arrastia, Ramon

    2010-09-01

    To develop an algorithm that separates patients with Alzheimer disease (AD) from controls. Longitudinal case-control study. The Texas Alzheimer's Research Consortium project. Patients: We analyzed serum protein-based multiplex biomarker data from 197 patients diagnosed with AD and 203 controls. Main Outcome Measure: The total sample was randomized equally into training and test sets, and random forest methods were applied to the training set to create a biomarker risk score. The biomarker risk score had a sensitivity and specificity of 0.80 and 0.91, respectively, and an area under the curve of 0.91 in detecting AD. When age, sex, education, and APOE status were added to the algorithm, the sensitivity, specificity, and area under the curve were 0.94, 0.84, and 0.95, respectively. These initial data suggest that serum protein-based biomarkers can be combined with clinical information to accurately classify AD. A disproportionate number of inflammatory and vascular markers were weighted most heavily in the analyses. Additionally, these markers consistently distinguished cases from controls in significance analysis of microarrays, logistic regression, and Wilcoxon analyses, suggesting the existence of an inflammatory-related endophenotype of AD that may provide targeted therapeutic opportunities for this subset of patients.

  1. Brownian motion curve-based textural classification and its application in cancer diagnosis.

    PubMed

    Mookiah, Muthu Rama Krishnan; Shah, Pratik; Chakraborty, Chandan; Ray, Ajoy K

    2011-06-01

    To develop an automated diagnostic methodology based on textural features of the oral mucosal epithelium to discriminate normal and oral submucous fibrosis (OSF). A total of 83 normal and 29 OSF images from histopathologic sections of the oral mucosa are considered. The proposed diagnostic mechanism consists of two parts: feature extraction using the Brownian motion curve (BMC) and design of a suitable classifier. The discrimination ability of the features has been substantiated by statistical tests. An error back-propagation neural network (BPNN) is used to classify OSF vs. normal. In the development of an automated oral cancer diagnostic module, the BMC has played an important role in characterizing textural features of the oral images. Fisher's linear discriminant analysis yields 100% sensitivity and 85% specificity, whereas the BPNN achieves 92.31% sensitivity and 100% specificity. In addition to intensity and morphology-based features, textural features are also very important, especially in the histopathologic diagnosis of oral cancer. In view of this, a set of textural features is extracted using the BMC for the diagnosis of OSF. Finally, a textural classifier is designed using the BPNN, which leads to a diagnostic performance with 96.43% accuracy.

  2. Improved detection of genetic markers of antimicrobial resistance by hybridization probe-based melting curve analysis using primers to mask proximal mutations: examples include the influenza H275Y substitution.

    PubMed

    Whiley, David M; Jacob, Kevin; Nakos, Jennifer; Bletchly, Cheryl; Nimmo, Graeme R; Nissen, Michael D; Sloots, Theo P

    2012-06-01

    Numerous real-time PCR assays have been described for detection of the influenza A H275Y alteration. However, the performance of these methods can be undermined by sequence variation in the regions flanking the codon of interest. This is a problem encountered more broadly in microbial diagnostics. In this study, we developed a modification of hybridization probe-based melting curve analysis, whereby primers are used to mask proximal mutations in the sequence targets of hybridization probes, so as to limit the potential for sequence variation to interfere with typing. The approach was applied to the H275Y alteration of the influenza A (H1N1) 2009 strain, as well as a Neisseria gonorrhoeae mutation associated with antimicrobial resistance. Assay performances were assessed using influenza A and N. gonorrhoeae strains characterized by DNA sequencing. The modified hybridization probe-based approach proved successful in limiting the effects of proximal mutations, with the results of melting curve analyses being 100% consistent with the results of DNA sequencing for all influenza A and N. gonorrhoeae strains tested. Notably, these included influenza A and N. gonorrhoeae strains exhibiting additional mutations in hybridization probe targets. Of particular interest was that the H275Y assay correctly typed influenza A strains harbouring a T822C nucleotide substitution, previously shown to interfere with H275Y typing methods. Overall our modified hybridization probe-based approach provides a simple means of circumventing problems caused by sequence variation, and offers improved detection of the influenza A H275Y alteration and potentially other resistance mechanisms.

  3. Exploring Algorithms for Stellar Light Curves With TESS

    NASA Astrophysics Data System (ADS)

    Buzasi, Derek

    2018-01-01

    The Kepler and K2 missions have produced tens of thousands of stellar light curves, which have been used to measure rotation periods, characterize photometric activity levels, and explore phenomena such as differential rotation. The quasi-periodic nature of rotational light curves, combined with the potential presence of additional periodicities not due to rotation, complicates the analysis of these time series and makes characterization of uncertainties difficult. A variety of algorithms have been used for the extraction of rotational signals, including autocorrelation functions, discrete Fourier transforms, Lomb-Scargle periodograms, wavelet transforms, and the Hilbert-Huang transform. In addition, in the case of K2 a number of different pipelines have been used to produce initial detrended light curves from the raw image frames. In the near future, TESS photometry, particularly that deriving from the full-frame images, will dramatically further expand the number of such light curves, but details of the pipeline to be used to produce photometry from the FFIs remain under development. K2 data offers us an opportunity to explore the utility of different reduction and analysis tool combinations applied to these astrophysically important tasks. In this work, we apply a wide range of algorithms to light curves produced by a number of popular K2 pipeline products to better understand the advantages and limitations of each approach and provide guidance for the most reliable and most efficient analysis of TESS stellar data.
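
    Of the period-finding algorithms listed, the Lomb-Scargle periodogram is the standard choice for unevenly sampled photometry. A minimal classical (Scargle 1982) implementation, applied to a toy noiseless "rotational" light curve with a 10-day period; the time stamps and frequency grid are illustrative:

```python
import math

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram for unevenly sampled data
    (mean-subtracted, normalized by the sample variance)."""
    ybar = sum(y) / len(y)
    yc = [v - ybar for v in y]
    var = sum(v * v for v in yc) / (len(y) - 1)
    power = []
    for f in freqs:
        w = 2 * math.pi * f
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        pc = sum(v * ci for v, ci in zip(yc, c)) ** 2 / sum(ci * ci for ci in c)
        ps = sum(v * si for v, si in zip(yc, s)) ** 2 / sum(si * si for si in s)
        power.append((pc + ps) / (2 * var))
    return power

# Irregularly sampled sinusoid with a 10-day period:
t = [0.0, 1.3, 2.1, 3.7, 5.2, 6.6, 8.0, 9.1, 11.4, 12.9, 14.2, 16.8,
     18.5, 20.1, 23.3, 25.7]
y = [math.sin(2 * math.pi * ti / 10.0) for ti in t]
freqs = [0.01 * k for k in range(1, 31)]   # 0.01-0.30 cycles/day
p = lomb_scargle(t, y, freqs)
best = freqs[p.index(max(p))]              # peak at 0.10 cycles/day
```

    Real rotational signals are quasi-periodic and spot-modulated, which is why the paper compares this estimator against autocorrelation, wavelet, and Hilbert-Huang alternatives rather than relying on any one of them.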

  4. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of the terrestrial laser scanner, analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces such as B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method doesn't use the number of parameters as the complexity measure of the approximating functions but rather their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
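
    The information-criterion idea can be sketched generically: fit models of increasing complexity, then trade residual error against parameter count via AIC = n ln(RSS/n) + 2k or BIC = n ln(RSS/n) + k ln(n). For a self-contained illustration this sketch uses polynomial degree as a stand-in for the number of B-spline control points; the data are synthetic, with a deterministic high-frequency term playing the role of noise:

```python
import math

def polyfit_rss(x, y, deg):
    """Least-squares polynomial fit via the normal equations
    (Gaussian elimination with partial pivoting); returns the RSS."""
    n, k = len(x), deg + 1
    A = [[sum(xi ** (i + j) for xi in x) for j in range(k)] for i in range(k)]
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return sum((yi - sum(cf * xi ** p for p, cf in enumerate(coef))) ** 2
               for xi, yi in zip(x, y))

def aic_bic(rss, n, k):
    """Gaussian-likelihood information criteria: lower is better."""
    ll = n * math.log(max(rss, 1e-12) / n)
    return ll + 2 * k, ll + k * math.log(n)

# Quadratic signal plus a small deterministic "noise" term:
x = [i / 10 for i in range(30)]
y = [1.0 + 2.0 * xi - 0.5 * xi * xi + 0.05 * math.sin(37.0 * xi)
     for xi in x]
rss = {d: polyfit_rss(x, y, d) for d in range(1, 6)}
bic = {d: aic_bic(rss[d], len(x), d + 1)[1] for d in range(1, 6)}
best = min(bic, key=bic.get)               # degree 2 expected
```

    The structural-risk-minimization alternative developed in the paper replaces the parameter count k with the Vapnik-Chervonenkis dimension, which is what makes it applicable to non-linear models.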

  5. Spiral blood flows in an idealized 180-degree curved artery model

    NASA Astrophysics Data System (ADS)

    Bulusu, Kartik V.; Kulkarni, Varun; Plesniak, Michael W.

    2017-11-01

    Understanding of cardiovascular flows has been greatly advanced by the Magnetic Resonance Velocimetry (MRV) technique and its potential for three-dimensional velocity encoding in regions of anatomic interest. The MRV experiments were performed on a 180-degree curved artery model using a Newtonian blood analog fluid at the Richard M. Lucas Center at Stanford University, employing a 3 Tesla General Electric (Discovery 750 MRI system) whole body scanner with an eight-channel cardiac coil. Analysis in two regions of the model artery was performed for flow with a Womersley number of 4.2. In the entrance region (or straight inlet pipe), the unsteady pressure drop per unit length, in-plane vorticity and wall shear stress for the pulsatile, carotid artery-based flow rate waveform were calculated. Along the 180-degree curved pipe (curvature ratio = 1/7), the near-wall vorticity and the stretching of the particle paths in the vorticity field are visualized. The resultant flow behavior in the idealized curved artery model is associated with parameters such as the Dean number and Womersley number. Additionally, using length scales corresponding to the axial and secondary flow, we attempt to understand the mechanisms leading to the formation of various structures observed during the pulsatile flow cycle. Supported by the GW Center for Biomimetics and Bioinspired Engineering (COBRE); MRV measurements in collaboration with Prof. John K. Eaton and Dr. Chris Elkins at Stanford University.
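
    The two dimensionless groups named above have simple definitions: the Womersley number alpha = R * sqrt(omega / nu) compares pulsation to viscous diffusion, and the Dean number De = Re * sqrt(r / R_c) measures curvature-driven secondary flow. The numerical values below are illustrative, chosen so that alpha lands at the abstract's 4.2 and the curvature ratio at its 1/7; they are not the experiment's actual settings:

```python
import math

def womersley(radius, omega, nu):
    """Womersley number: alpha = R * sqrt(omega / nu)."""
    return radius * math.sqrt(omega / nu)

def dean(reynolds, radius, radius_curv):
    """Dean number for a curved pipe: De = Re * sqrt(r / R_c)."""
    return reynolds * math.sqrt(radius / radius_curv)

# Illustrative values (SI units):
R = 0.01        # pipe radius, m
nu = 4.0e-6     # blood-analog kinematic viscosity, m^2/s
omega = 0.7056  # angular frequency of the scaled pulsatile cycle, rad/s
alpha = womersley(R, omega, nu)                 # 4.2
De = dean(reynolds=300.0, radius=1.0, radius_curv=7.0)  # Re / sqrt(7)
```

    Matching alpha and De between the scaled model and physiological conditions is what lets an idealized bench experiment stand in for an artery.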

  6. Waveform fitting and geometry analysis for full-waveform lidar feature extraction

    NASA Astrophysics Data System (ADS)

    Tsai, Fuan; Lai, Jhe-Syuan; Cheng, Yi-Hsiu

    2016-10-01

    This paper presents a systematic approach that integrates spline curve fitting and geometry analysis to extract full-waveform LiDAR features for land-cover classification. The cubic smoothing spline algorithm is used to fit the waveform curve of the received LiDAR signals. After that, the local peak locations of the waveform curve are detected using a second derivative method. According to the detected local peak locations, commonly used full-waveform features such as full width at half maximum (FWHM) and amplitude can then be obtained. In addition, the number of peaks, time difference between the first and last peaks, and the average amplitude are also considered as features of LiDAR waveforms with multiple returns. Based on the waveform geometry, dynamic time-warping (DTW) is applied to measure the waveform similarity. The sum of the absolute amplitude differences that remain after time-warping can be used as a similarity feature in a classification procedure. An airborne full-waveform LiDAR data set was used to test the performance of the developed feature extraction method for land-cover classification. Experimental results indicate that the developed spline curve-fitting algorithm and geometry analysis can extract helpful full-waveform LiDAR features to produce better land-cover classification than conventional LiDAR data and feature extraction methods. In particular, the multiple-return features and the dynamic time-warping index can improve the classification results significantly.
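
    The DTW similarity feature described above, the sum of absolute amplitude differences remaining after time-warping, is exactly the cumulative cost of the classic dynamic-programming alignment. A minimal sketch on two toy waveform pulses, one delayed by a sample:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time-warping distance with
    absolute-difference local cost: the minimum total amplitude
    difference over all monotone alignments of the two waveforms."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Two toy single-return pulses; the second is delayed by one sample.
w1 = [0, 1, 4, 9, 4, 1, 0]
w2 = [0, 0, 1, 4, 9, 4, 1]
print(dtw_distance(w1, w2))  # 1.0 — warping absorbs the time shift
```

    A plain Euclidean distance between the same two pulses would be much larger, which is why the warped residual is the more robust similarity feature for waveforms whose returns shift in time.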

  7. Impact of Perceptual Speed Calming Curve Countermeasures on Drivers’ Anticipation and Mitigation Ability : A Driving Simulator Study

    DOT National Transportation Integrated Search

    2018-02-01

    Horizontal curves are unavoidable in rural roads and are a serious crash risk to vehicle occupants. This study investigates the impact and effectiveness of three curve-based perceptual speed-calming countermeasures (advance curve warning signs, chevr...

  8. A Comparative Study of Electric Load Curve Changes in an Urban Low-Voltage Substation in Spain during the Economic Crisis (2008–2013)

    PubMed Central

    Lara-Santillán, Pedro M.; Mendoza-Villena, Montserrat; Fernández-Jiménez, L. Alfredo; Mañana-Canteli, Mario

    2014-01-01

    This paper presents a comparative study of the electricity consumption (EC) in an urban low-voltage substation before and during the economic crisis (2008–2013). This low-voltage substation supplies electric power to nearly 400 users. The EC was measured over an 11-year period (2002–2012) with a sampling time of 1 minute. The study described in the paper consists of detecting the changes produced in the load curves of this substation over time, due to changes in the behaviour of consumers. The EC was compared using representative curves per time period (precrisis and crisis). These representative curves were obtained after a computational process based on a search for days with load curves similar to that of a determined (base) date. This similarity was assessed by proximity on the calendar, day of the week, daylight time, and outdoor temperature. The last selection parameter was the error between the nearest-neighbour curves and the base-date curve. The obtained representative curves were linearized to determine changes in their structure (maximum and minimum consumption values, duration of the daily time slot, etc.). The results primarily indicate an increase in the EC in the night slot during the summer months in the crisis period. PMID:24895677
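
    The similar-day search can be sketched as a nearest-neighbour selection over the criteria the paper lists (calendar proximity, day of week, daylight time, outdoor temperature), followed by averaging the selected load curves. The distance weights and the four-sample daily curves below are illustrative, not the paper's actual parameters:

```python
def day_distance(cand, base):
    """Dissimilarity between a candidate day and the base date, from
    the selection criteria in the abstract. Weights are hypothetical."""
    d = abs(cand["doy"] - base["doy"])                    # calendar proximity
    d += 50.0 * (cand["weekday"] != base["weekday"])      # weekday mismatch
    d += 10.0 * abs(cand["daylight_h"] - base["daylight_h"])
    d += 0.5 * abs(cand["temp_c"] - base["temp_c"])
    return d

def representative_curve(days, base, k=3):
    """Average load curve of the k days most similar to the base date."""
    near = sorted(days, key=lambda c: day_distance(c, base))[:k]
    n = len(near[0]["load"])
    return [sum(c["load"][i] for c in near) / k for i in range(n)]

# Toy daily load curves (4 samples/day) around a hypothetical base date:
base = {"doy": 200, "weekday": "wd", "daylight_h": 14.5, "temp_c": 28.0}
days = [
    {"doy": 198, "weekday": "wd", "daylight_h": 14.6, "temp_c": 27.5,
     "load": [20.0, 35.0, 50.0, 30.0]},
    {"doy": 202, "weekday": "wd", "daylight_h": 14.4, "temp_c": 28.5,
     "load": [22.0, 33.0, 52.0, 28.0]},
    {"doy": 195, "weekday": "we", "daylight_h": 14.8, "temp_c": 26.0,
     "load": [15.0, 25.0, 40.0, 35.0]},   # weekend: effectively excluded
    {"doy": 201, "weekday": "wd", "daylight_h": 14.5, "temp_c": 29.0,
     "load": [21.0, 34.0, 51.0, 29.0]},
]
rep = representative_curve(days, base, k=3)
```

    The heavy weekday-mismatch penalty plays the role of the paper's day-of-week criterion: weekend days are pushed out of the neighbour set even when they are close on the calendar.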

  9. Using Design-Based Latent Growth Curve Modeling with Cluster-Level Predictor to Address Dependency

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Kwok, Oi-Man; Willson, Victor L.

    2014-01-01

    The authors compared the effects of using the true Multilevel Latent Growth Curve Model (MLGCM) with single-level regular and design-based Latent Growth Curve Models (LGCM) with or without the higher-level predictor on various criterion variables for multilevel longitudinal data. They found that random effect estimates were biased when the…

  10. Limitation of the Cavitron technique by conifer pit aspiration.

    PubMed

    Beikircher, B; Ameglio, T; Cochard, H; Mayr, S

    2010-07-01

    The Cavitron technique saves time and material in vulnerability analysis. The use of rotors with small diameters leads to high water pressure gradients (DeltaP) across samples, which may cause pit aspiration in conifers. In this study, the effect of pit aspiration on Cavitron measurements was analysed and a modified 'conifer method' was tested which avoids critical (i.e., pit-aspiration-inducing) DeltaP. Four conifer species were used (Juniperus communis, Picea abies, Pinus sylvestris, and Larix decidua) for vulnerability analysis based on the standard Cavitron technique and the conifer method. In addition, DeltaP thresholds for pit aspiration were determined and water extraction curves were constructed. Vulnerability curves obtained with the standard method generally showed a less negative P for the induction of embolism than curves of the conifer method. Differences were species-specific, with the smallest effects in Juniperus. Larix showed the most pronounced shifts in P(50) (pressure at 50% loss of conductivity) between the standard (-1.5 MPa) and the conifer (-3.5 MPa) methods. Pit aspiration occurred at the lowest DeltaP in Larix and at the highest in Juniperus. Accordingly, at a spinning velocity inducing P(50), DeltaP caused only a 4% loss of conductivity induced by pit aspiration in Juniperus, but about 60% in Larix. Water extraction curves were similar to vulnerability curves, indicating that spinning itself did not affect pits. Conifer pit aspiration can have a major influence on Cavitron measurements and lead to an overestimation of vulnerability thresholds when a small rotor is used. Thus, the conifer method presented here enables correct vulnerability analysis by avoiding artificial conductivity losses.
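    The P(50) shift between the two methods can be read off a measured vulnerability curve by simple interpolation. This is a generic sketch with made-up data points, not the authors' fitting procedure:

    ```python
    import numpy as np

    def p50_from_curve(pressures, plc):
        """Interpolate the xylem pressure at 50% loss of conductivity (P50)
        from a vulnerability curve given as (pressure, percent-loss) pairs."""
        plc = np.asarray(plc, dtype=float)
        pressures = np.asarray(pressures, dtype=float)
        order = np.argsort(plc)  # np.interp requires increasing x values
        return float(np.interp(50.0, plc[order], pressures[order]))
    ```

    Applied to two curves for the same species, a less negative P50 from the standard method than from the conifer method would indicate the pit-aspiration artifact described above.
    
    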

  11. The learning curve of laparoendoscopic single-Site (LESS) fundoplication: definable, short, and safe.

    PubMed

    Ross, Sharona B; Choung, Edward; Teta, Anthony F; Colibao, Lotiffa; Luberice, Kenneth; Paul, Harold; Rosemurgy, Alexander S

    2013-01-01

    This study of laparoendoscopic single-site (LESS) fundoplication for gastroesophageal reflux disease was undertaken to determine the "learning curve" for implementing LESS fundoplication. One hundred patients, 38% men, with a median age of 61 years and median body mass index of 26 kg/m(2), underwent LESS fundoplications. The operative times, placement of additional trocars, conversions to "open" operations, and complications were compared among patient quartiles to establish a learning curve. Median data are reported. The median operative times and complications did not differ among 25-patient cohorts. Additional trocars were placed in 27% of patients, 67% of whom were in the first 25-patient cohort. Patients undergoing LESS fundoplication experienced dramatic relief in the frequency and severity of all symptoms of reflux across all cohorts equally (P < .05), particularly for heartburn and regurgitation, without causing dysphagia. LESS fundoplication ameliorates symptoms of gastroesophageal reflux disease without apparent scarring. Notably, few operations required additional trocars after the first 25-patient cohort. Patient selection became more inclusive (eg, more "redo" fundoplications) with increasing experience, whereas operative times and complications remained relatively unchanged. The learning curve of LESS fundoplication is definable, short, and safe. We believe that patients will seek LESS fundoplication because of the efficacy and superior cosmetic outcomes; surgeons will need to meet this demand.

  12. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region. Different approximating functions are then constructed for the two kinds of regions respectively. For the major-interference region, some typical interferential curves are selected to predict the other curves. These typical curves are then processed by a curve-fitting method. For the minor-interference region, the data of each interferential curve are independently approximated. Finally, the approximating errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion, especially at high bit-rates for lossy compression.
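    The fit-plus-residual idea can be sketched minimally as below. The polynomial model and degree are illustrative stand-ins for the paper's region-specific approximating functions; the residual (which the paper entropy-codes, not shown here) is what makes the scheme lossless:

    ```python
    import numpy as np

    def fit_and_residual(curve, degree=4):
        """Approximate one interferential curve with a low-order polynomial
        and return (coefficients, residual). Keeping the exact residual
        alongside the fit makes reconstruction lossless."""
        x = np.arange(len(curve))
        coeffs = np.polyfit(x, curve, degree)
        residual = curve - np.polyval(coeffs, x)
        return coeffs, residual

    def reconstruct(coeffs, residual):
        """Invert fit_and_residual: fitted curve plus stored residual."""
        x = np.arange(len(residual))
        return np.polyval(coeffs, x) + residual
    ```

    A good fit leaves a residual with much smaller variance than the raw curve, which is what lowers the entropy-coded bit-rate.
    
    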

  13. Electrical characteristics of Graphene based Field Effect Transistor (GFET) biosensor for ADH detection

    NASA Astrophysics Data System (ADS)

    Selvarajan, Reena Sri; Hamzah, Azrul Azlan; Majlis, Burhanuddin Yeop

    2017-08-01

    Pristine graphene was first successfully produced by mechanical exfoliation and electrically characterized in 2004 by Andre Geim and Konstantin Novoselov at the University of Manchester. Since its discovery, graphene, also known as a `super' material, has enticed many researchers and engineers to explore its potential for ultrasensitive detection of analytes in biosensing applications. Among the myriad reported sensors, biosensors based on field effect transistors (FETs) have attracted much attention. Thus, implementing graphene as the conducting channel material hastens the opportunities for producing ultrasensitive biosensors for future device applications. Herein, we report the electrical characteristics of a graphene based field effect transistor (GFET) for ADH detection. The GFET was modelled and simulated using the Lumerical DEVICE charge transport solver (DEVICE CT). Electrical characteristics comprising transfer and output characteristic curves are reported in this study. The device shows an ambipolar curve and achieves a minimum conductivity of 0.23912 e5A at the Dirac point. However, the curve shifts to the left and the minimum conductivity changes significantly as the drain voltage is increased. The output characteristics of the GFET exhibit a linear Id - Vd dependence for gate voltages ranging from 0 to 1.5 V. In addition, the behavior of electrical transport through the GFET was analyzed at various simulation temperatures. The results clearly show that electrical transport in the GFET depends on the simulation temperature, as it varies the maximum resistance in the channel of the device. These unique electrical characteristics therefore make the GFET a promising candidate for ultrasensitive detection of small biomolecules such as ADH in biosensing applications.

  14. High resolution melt curve analysis based on methylation status for human semen identification.

    PubMed

    Fachet, Caitlyn; Quarino, Lawrence; Karnas, K Joy

    2017-03-01

    A high resolution melt curve assay to differentiate semen from blood, saliva, urine, and vaginal fluid based on methylation status at the Dapper Isoform 1 (DACT1) gene was developed. Stains made from blood, saliva, urine, semen, and vaginal fluid were obtained from volunteers and DNA was isolated using either organic extraction (saliva, urine, and vaginal fluid) or Chelex ® 100 extraction (blood and semen). Extracts were then subjected to bisulfite modification in order to convert unmethylated cytosines to uracil, consequently creating sequences whose amplicons have melt curves that vary depending on their initial methylation status. When primers designed to amplify the promoter region of the DACT1 gene were used, DNA from semen samples was distinguishable from other fluids by having a statistically significant lower melting temperature. The assay was found to be sperm-specific since semen from a vasectomized man produced a melting temperature similar to the non-semen body fluids. Blood and semen stains stored up to 5 months and tested at various intervals showed little variation in melt temperature, indicating the methylation status was stable during the course of the study. The assay is a more viable method for forensic science practice than most molecular-based methods for body fluid stain identification since it is time efficient and utilizes instrumentation common to forensic biology laboratories. In addition, the assay is advantageous over traditional presumptive chemical methods for body fluid identification since results are confirmatory and the assay offers the possibility of multiplexing to test for multiple body fluids simultaneously.
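    In high resolution melt analysis, the melting temperature is conventionally read as the peak of the negative derivative -dF/dT of the fluorescence signal. The sketch below shows that standard calculation on a synthetic melt curve; it is not the authors' exact pipeline:

    ```python
    import numpy as np

    def melt_temperature(temps, fluorescence):
        """Estimate Tm as the temperature at the peak of -dF/dT,
        the standard read-out of a high resolution melt curve."""
        deriv = -np.gradient(fluorescence, temps)
        return temps[np.argmax(deriv)]
    ```

    A statistically lower Tm for one body fluid, as reported for semen at the DACT1 promoter, would show up as a left-shifted derivative peak.
    
    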

  15. The challenge of forecasting impacts of flash floods: test of a simplified hydraulic approach and validation based on insurance claim data

    NASA Astrophysics Data System (ADS)

    Le Bihan, Guillaume; Payrastre, Olivier; Gaume, Eric; Moncoulon, David; Pons, Frédéric

    2017-11-01

    Up to now, flash flood monitoring and forecasting systems, based on rainfall radar measurements and distributed rainfall-runoff models, generally aimed at estimating flood magnitudes - typically discharges or return periods - at selected river cross sections. The approach presented here goes one step further by proposing an integrated forecasting chain for the direct assessment of flash flood possible impacts on inhabited areas (number of buildings at risk in the presented case studies). The proposed approach includes, in addition to a distributed rainfall-runoff model, an automatic hydraulic method suited for the computation of flood extent maps on a dense river network and over large territories. The resulting catalogue of flood extent maps is then combined with land use data to build a flood impact curve for each considered river reach, i.e. the number of inundated buildings versus discharge. These curves are finally used to compute estimated impacts based on forecasted discharges. The approach has been extensively tested in the regions of Alès and Draguignan, located in the south of France, where well-documented major flash floods recently occurred. The article presents two types of validation results. First, the automatically computed flood extent maps and corresponding water levels are tested against rating curves at available river gauging stations as well as against local reference or observed flood extent maps. Second, a rich and comprehensive insurance claim database is used to evaluate the relevance of the estimated impacts for some recent major floods.
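    At forecast time, the impact estimate for a reach reduces to a lookup on its precomputed impact curve (inundated buildings versus discharge). A minimal sketch with illustrative values, not the study's data:

    ```python
    import numpy as np

    def impact_from_discharge(q_forecast, q_grid, buildings_grid):
        """Interpolate the forecast impact (number of inundated buildings)
        on a reach's precomputed impact curve. Below the lowest grid
        discharge, np.interp clamps to the first value (zero impact)."""
        return float(np.interp(q_forecast, q_grid, buildings_grid))
    ```

    In an operational chain, `q_forecast` would come from the distributed rainfall-runoff model and the curve from the catalogue of automatically computed flood extent maps combined with land use data.
    
    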

  16. Cross-country transferability of multi-variable damage models

    NASA Astrophysics Data System (ADS)

    Wagenaar, Dennis; Lüdtke, Stefan; Kreibich, Heidi; Bouwer, Laurens

    2017-04-01

    Flood damage assessment is often done with simple damage curves based only on flood water depth. Additionally, damage models are often transferred in space and time, e.g. from region to region or from one flood event to another. Validation has shown that depth-damage curve estimates are associated with high uncertainties, particularly when applied in regions outside the area where the data for curve development was collected. Recently, progress has been made with multi-variable damage models created with data-mining techniques, i.e. Bayesian networks and random forests. However, it is still unknown to what extent and under which conditions model transfers are possible and reliable. Model validations in different countries will provide valuable insights into the transferability of multi-variable damage models. In this study we compare multi-variable models developed on the basis of flood damage datasets from Germany as well as from The Netherlands. Data from several German floods was collected using computer-aided telephone interviews. Data from the 1993 Meuse flood in the Netherlands is available, based on compensations paid by the government. The Bayesian network and random forest based models are applied and validated in both countries on the basis of the individual datasets. A major challenge was the harmonization of the variables between both datasets due to factors like differences in variable definitions, and regional and temporal differences in flood hazard and exposure characteristics. Results of model validations and comparisons in both countries are discussed, particularly in respect to encountered challenges and possible solutions for an improvement of model transferability.

  17. Simulation-Based Training - Evaluation of the Course Concept "Laparoscopic Surgery Curriculum" by the Participants.

    PubMed

    Köckerling, Ferdinand; Pass, Michael; Brunner, Petra; Hafermalz, Matthias; Grund, Stefan; Sauer, Joerg; Lange, Volker; Schröder, Wolfgang

    2016-01-01

    The learning curve in minimally invasive surgery is much longer than in open surgery. This is thought to be due to the higher demands made on the surgeon's skills. Therefore, the question raised at the outset of training in laparoscopic surgery is how such skills can be acquired by undergoing training outside the bounds of clinical activities to try to shorten the learning curve. Simulation-based training courses are one such model. In 2011, the surgery societies of Germany adopted the "laparoscopic surgery curriculum" as a recommendation for the learning content of systematic training courses for laparoscopic surgery. The curricular structure provides for four 2-day training courses. These courses offer an interrelated content, with each course focusing additionally on specific topics of laparoscopic surgery based on live operations, lectures, and exercises carried out on bio simulators. Between 1st January, 2012 and 31st March, 2016, a total of 36 training courses were conducted at the Vivantes Endoscopic Training Center in accordance with the "laparoscopic surgery curriculum." The training courses were attended by a total of 741 young surgeons and were evaluated as good to very good during continuous evaluation by the participants. Training courses based on the "laparoscopic surgery curriculum" for acquiring skills in laparoscopy are taken up and positively evaluated by young surgeons.

  18. Are precipitation-based intensity-duration-frequency curves appropriate for cost effective and resilient infrastructure design in snow-dominated regions? Next-generation curves with inclusion of rain-on-snow events

    NASA Astrophysics Data System (ADS)

    Yan, H.; Sun, N.; Wigmosta, M. S.; Hou, Z.

    2017-12-01

    There is a renewed focus on the design of infrastructure resilient to extreme hydrometeorological events. While precipitation-based intensity-duration-frequency (IDF) curves are commonly used as part of infrastructure design, a large percentage of peak runoff events in the snow-dominated regions are caused by snowmelt, particularly during rain-on-snow (ROS) events. In this study, we examined next-generation IDF (NG-IDF) curves with inclusion of snowmelt and ROS events to improve infrastructure design in snow-dominated regions. We compared NG-IDF curves to standard precipitation-based IDF curves for estimates of extreme events at 377 Snowpack Telemetry (SNOTEL) stations across the western United States with at least 30 years of high quality record. We found 38% of the stations were subject to under-design, many with significant underestimation of 100-year extreme events, where the precipitation-based IDF curves can underestimate water potentially available for runoff by as much as 121% due to snowmelt and ROS events. The regions with the greatest potential for under-design were in the Pacific Northwest, the Sierra Nevada, and the Middle and Southern Rockies. We also found the potential for over-design at 27% of the stations, primarily in the Middle Rockies and Arizona mountains. These results demonstrate the need to consider snow processes in development of IDF curves for engineering design procedures in snow-dominated regions.
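    The key quantity behind the NG-IDF curves is water available for runoff, which adds snowmelt to rainfall; a simplified sketch (daily SWE decrease as melt, empirical quantile as the T-year event) follows. Both simplifications are assumptions for illustration, not the study's frequency analysis:

    ```python
    import numpy as np

    def water_available_for_runoff(precip, swe):
        """Daily water available for runoff = precipitation plus snowmelt,
        where melt is taken as any decrease in snow water equivalent (SWE, mm)."""
        melt = np.maximum(0.0, -np.diff(swe, prepend=swe[0]))
        return precip + melt

    def return_level(annual_maxima, T):
        """Empirical T-year event as the (1 - 1/T) quantile of annual maxima,
        a crude stand-in for fitting an extreme-value distribution."""
        return float(np.quantile(annual_maxima, 1 - 1 / T))
    ```

    Comparing `return_level` computed from precipitation maxima versus water-available-for-runoff maxima at the same station reproduces the under-design diagnosis: at ROS-prone sites the latter can be substantially larger.
    
    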

  19. Eyewitness identification: Bayesian information gain, base-rate effect equivalency curves, and reasonable suspicion.

    PubMed

    Wells, Gary L; Yang, Yueran; Smalarz, Laura

    2015-04-01

    We provide a novel Bayesian treatment of the eyewitness identification problem as it relates to various system variables, such as instruction effects, lineup presentation format, lineup-filler similarity, lineup administrator influence, and show-ups versus lineups. We describe why eyewitness identification is a natural Bayesian problem and how numerous important observations require careful consideration of base rates. Moreover, we argue that the base rate in eyewitness identification should be construed as a system variable (under the control of the justice system). We then use prior-by-posterior curves and information-gain curves to examine data obtained from a large number of published experiments. Next, we show how information-gain curves are moderated by system variables and by witness confidence, and we note how information-gain curves reveal that lineups are consistently more proficient at incriminating the guilty than they are at exonerating the innocent. We then introduce a new type of analysis that we developed called base-rate effect-equivalency (BREE) curves. BREE curves display how much change in the base rate is required to match the impact of any given system variable. The results indicate that even relatively modest changes to the base rate can have more impact on the reliability of eyewitness identification evidence than do the traditional system variables that have received so much attention in the literature. We note how this Bayesian analysis of eyewitness identification has implications for the question of whether there ought to be a reasonable-suspicion criterion for placing a person into the jeopardy of an identification procedure. (c) 2015 APA, all rights reserved.
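    The Bayesian core of the analysis is Bayes' rule applied to the culprit-present base rate. The hit and false-identification rates below are illustrative numbers, not the paper's data:

    ```python
    def posterior_guilt(base_rate, hit_rate, false_id_rate):
        """Posterior probability that an identified suspect is the culprit,
        given the culprit-present base rate, the rate at which witnesses
        identify a present culprit, and the rate of false identifications
        from culprit-absent lineups."""
        p_id = base_rate * hit_rate + (1 - base_rate) * false_id_rate
        return base_rate * hit_rate / p_id
    ```

    Varying `base_rate` while holding the diagnostic rates fixed traces out a prior-by-posterior curve, which is how even modest base-rate changes can outweigh traditional system variables.
    
    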

  20. A curved ultrasonic actuator optimized for spherical motors: design and experiments.

    PubMed

    Leroy, Edouard; Lozada, José; Hafez, Moustapha

    2014-08-01

    Multi-degree-of-freedom angular actuators are commonly used in numerous mechatronic areas such as omnidirectional robots, robot articulations or inertially stabilized platforms. The conventional method to design these devices consists of placing multiple actuators in parallel or series using gimbals, which are bulky and difficult to miniaturize. Motors using a spherical rotor are interesting for miniature multi-degree-of-freedom actuators. In this paper, a new actuator is proposed. It is based on a curved piezoelectric element which has its inner contact surface adapted to the diameter of the rotor. This adaptation allows spherical motors to be built with a fully constrained rotor and without the need for an additional guiding system. The work presents a design methodology based on modal finite element analysis. A methodology for mode selection is proposed and a sensitivity analysis of the final geometry to uncertainties and added masses is discussed. Finally, experimental results that validate the actuator concept on a single degree-of-freedom ultrasonic motor set-up are presented. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Modeling of Non-isothermal Austenite Formation in Spring Steel

    NASA Astrophysics Data System (ADS)

    Huang, He; Wang, Baoyu; Tang, Xuefeng; Li, Junling

    2017-12-01

    A description of the austenitization kinetics of spring steel 60Si2CrA plays an important role in providing guidelines for industrial production. The dilatometric curves of 60Si2CrA steel were measured using a DIL805A dilatometer at heating rates of 0.3 K/s to 50 K/s (0.3 °C/s to 50 °C/s). Based on the dilatometric curves, a unified kinetics model using the internal state variable (ISV) method was derived to describe the non-isothermal austenitization kinetics of 60Si2CrA; the model covers both the incubation and transition periods. The material constants in the model were determined using a genetic algorithm-based optimization technique. Good agreement between predicted and experimental volume fractions of transformed austenite was obtained, indicating that the model is effective for describing the austenitization kinetics of 60Si2CrA steel. Compared with other modeling approaches for austenitization kinetics, this ISV-based model has advantages such as a simple formulation and explicit physical meaning, and can probably be used in engineering practice.

  2. A study of microstructural characteristics and differential thermal analysis of Ni-based superalloys

    NASA Technical Reports Server (NTRS)

    Aggarwal, M. D.; Lal, R. B.; Oyekenu, Samuel A.; Parr, Richard; Gentz, Stephen

    1989-01-01

    The objective of this work is to correlate the mechanical properties of the Ni-based superalloy MAR M246(Hf) used in the Space Shuttle Main Engine with its structural characteristics by systematic study of optical photomicrographs and differential thermal analysis. The authors developed a method of predicting the liquidus and solidus temperatures of various nickel-based superalloys (MAR-M247, Waspaloy, Udimet-41, polycrystalline and single crystals of CMSX-2 and CMSX-3) and compared the predictions with experimental differential thermal analysis (DTA) curves obtained using a Perkin-Elmer DTA 1700. The method of predicting these temperatures is based on the additive effect of the components dissolved in nickel. The results were compared with the experimental values.
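    An additive scheme of this kind can be sketched as a base melting point plus per-element contributions proportional to composition. The coefficients and composition below are purely illustrative assumptions, not the values derived in the study (only the pure-Ni melting point of about 1455 °C is a known reference):

    ```python
    def predict_liquidus(base_temp, composition, coefficients):
        """Estimate an alloy liquidus (°C) as the solvent melting point plus
        a sum of per-element contributions, each proportional to the
        element's weight percent. Coefficients are illustrative."""
        return base_temp + sum(coefficients.get(el, 0.0) * wt
                               for el, wt in composition.items())
    ```

    The same form, with a second coefficient set, would give the solidus; comparison with DTA onset temperatures then checks the additivity assumption.
    
    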

  3. STUDYING ATMOSPHERE-DOMINATED HOT JUPITER KEPLER PHASE CURVES: EVIDENCE THAT INHOMOGENEOUS ATMOSPHERIC REFLECTION IS COMMON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shporer, Avi; Hu, Renyu

    2015-10-15

    We identify three Kepler transiting planets, Kepler-7b, Kepler-12b, and Kepler-41b, whose orbital phase-folded light curves are dominated by planetary atmospheric processes including thermal emission and reflected light, while the impact of non-atmospheric (i.e., gravitational) processes, including beaming (Doppler boosting) and tidal ellipsoidal distortion, is negligible. Therefore, those systems allow a direct view of their atmospheres without being hampered by the approximations used in the inclusion of both atmospheric and non-atmospheric processes when modeling the phase-curve shape. We present here the analysis of Kepler-12b and Kepler-41b atmosphere based on their Kepler phase curve, while the analysis of Kepler-7b was already presented elsewhere. The model we used efficiently computes reflection and thermal emission contributions to the phase curve, including inhomogeneous atmospheric reflection due to longitudinally varying cloud coverage. We confirm Kepler-12b and Kepler-41b show a westward phase shift between the brightest region on the planetary surface and the substellar point, similar to Kepler-7b. We find that reflective clouds located on the west side of the substellar point can explain the phase shift. The existence of inhomogeneous atmospheric reflection in all three of our targets, selected due to their atmosphere-dominated Kepler phase curve, suggests this phenomenon is common. Therefore, it is also likely to be present in planetary phase curves that do not allow a direct view of the planetary atmosphere as they contain additional orbital processes. We discuss the implications of a bright-spot shift on the analysis of phase curves where both atmospheric and gravitational processes appear, including the mass discrepancy seen in some cases between the companion's mass derived from the beaming and ellipsoidal photometric amplitudes. Finally, we discuss the potential detection of non-transiting but otherwise similar planets, whose mass is too small to show a gravitational photometric signal, but their atmosphere is reflective enough to show detectable phase modulations.

  4. The DOHA algorithm: a new recipe for cotrending large-scale transiting exoplanet survey light curves

    NASA Astrophysics Data System (ADS)

    Mislis, D.; Pyrzas, S.; Alsubai, K. A.; Tsvetanov, Z. I.; Vilchez, N. P. E.

    2017-03-01

    We present DOHA, a new algorithm for cotrending photometric light curves obtained by transiting exoplanet surveys. The algorithm employs a novel approach to the traditional 'differential photometry' technique, by selecting the most suitable comparison star for each target light curve, using a two-step correlation search. Extensive tests on real data reveal that DOHA corrects both intra-night variations and long-term systematics affecting the data. Statistical studies conducted on a sample of ∼9500 light curves from the Qatar Exoplanet Survey reveal that DOHA-corrected light curves show an rms improvement of a factor of ∼2, compared to the raw light curves. In addition, we show that the transit detection probability in our sample can increase considerably, even up to a factor of 7, after applying DOHA.
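    The comparison-star selection can be sketched as a correlation search over candidate light curves. This one-step version is a simplification for illustration (DOHA uses a two-step correlation search, and photometric light curves are normally handled in magnitudes):

    ```python
    import numpy as np

    def best_comparison(target, candidates):
        """Pick the candidate light curve most correlated with the target,
        then cotrend by subtracting it (differential photometry in
        magnitude units). Returns (best index, corrected curve)."""
        corrs = [np.corrcoef(target, c)[0, 1] for c in candidates]
        best = int(np.argmax(corrs))
        return best, target - candidates[best]
    ```

    Subtracting the most correlated comparison removes the shared systematics (intra-night variations, long-term trends) while leaving any genuine signal, such as a transit, in the residual curve.
    
    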

  5. Estimating the R-curve from residual strength data

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1985-01-01

    A method is presented for estimating the crack-extension resistance curve (R-curve) from residual-strength (maximum load against original crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information can be used to estimate the failure loads of more complicated structures of the same material and thickness. The fundamentals of the R-curve concept are reviewed first. Then the analytical basis for the estimation method is presented. The estimation method has been verified in two ways. Data from the literature (involving several materials and different types of specimens) are used to show that the estimated R-curve is in good agreement with the measured R-curve. A recent predictive blind round-robin program offers a more crucial test. When the actual failure loads are disclosed, the predictions are found to be in good agreement.

  6. Resistance Curves in the Tensile and Compressive Longitudinal Failure of Composites

    NASA Technical Reports Server (NTRS)

    Camanho, Pedro P.; Catalanotti, Giuseppe; Davila, Carlos G.; Lopes, Claudio S.; Bessa, Miguel A.; Xavier, Jose C.

    2010-01-01

    This paper presents a new methodology to measure the crack resistance curves associated with fiber-dominated failure modes in polymer-matrix composites. These crack resistance curves not only characterize the fracture toughness of the material, but are also the basis for the identification of the parameters of the softening laws used in the analytical and numerical simulation of fracture in composite materials. The method proposed is based on the identification of the crack tip location by the use of Digital Image Correlation and the calculation of the J-integral directly from the test data using a simple expression derived for cross-ply composite laminates. It is shown that the results obtained using the proposed methodology yield crack resistance curves similar to those obtained using FEM-based methods in compact tension carbon-epoxy specimens. However, it is also shown that the Digital Image Correlation based technique can be used to extract crack resistance curves in compact compression tests for which FEM-based techniques are inadequate.

  7. A 1D-2D coupled SPH-SWE model applied to open channel flow simulations in complicated geometries

    NASA Astrophysics Data System (ADS)

    Chang, Kao-Hua; Sheu, Tony Wen-Hann; Chang, Tsang-Jung

    2018-05-01

    In this study, a one- and two-dimensional (1D-2D) coupled model is developed to solve the shallow water equations (SWEs). The solutions are obtained using a Lagrangian meshless method called smoothed particle hydrodynamics (SPH) to simulate shallow water flows in converging, diverging and curved channels. A buffer zone is introduced to exchange information between the 1D and 2D SPH-SWE models. Interpolated water discharge values and water surface levels at the internal boundaries are prescribed as the inflow/outflow boundary conditions in the two SPH-SWE models. In addition, instead of using the SPH summation operator, we directly solve the continuity equation by introducing a diffusive term to suppress oscillations in the predicted water depth. The performance of the two approaches in calculating the water depth is comprehensively compared through a case study of a straight channel. Additionally, three benchmark cases involving converging, diverging and curved channels are adopted to demonstrate the ability of the proposed 1D and 2D coupled SPH-SWE model through comparisons with measured data and predicted mesh-based numerical results. The proposed model provides satisfactory accuracy and guaranteed convergence.

  8. Diffusion Monte Carlo method for evaluating Hamaker constants

    NASA Astrophysics Data System (ADS)

    Maezono, Ryo; Hongo, Kenta

    We evaluated Hamaker constants for Si6H12 (CHS) to investigate its wettability, which is industrially useful but for which no reference values are available. The constant is fundamental to wettability but is not directly accessible by experiments. Ab initio estimations are therefore in demand and would have an impact on broader fields such as tribology, where wettability plays an important role. The evaluation of binding curves is itself a challenge for a practical molecule such as CHS, because highly accurate descriptions of the electron correlations in van der Waals binding become difficult at such sizes and with such anisotropy. We applied DMC to overcome this difficulty, showing a new direction for wettability issues. Since ab initio estimations rely on simple assumptions such as additivity (hence we denote the result A_add), they can include biases. Taking benzene as a benchmark, we compared A_add evaluated from several available binding curves with reported values of A_L (estimations based on Lifshitz theory). From this comparison, we obtain the trends of the biases in A_add due to non-additivity and anisotropy, because A_L is expected to capture these effects to some extent in a macroscopic manner. The trends obtained here explain surprisingly well the series of results for CHS.

  9. Automatic Analysis of Swift-XRT data

    NASA Astrophysics Data System (ADS)

    Evans, P. A.; Tyler, L. G.; Beardmore, A. P.; Osborne, J. P.

    2008-08-01

    The Swift spacecraft detects and autonomously observes ˜100 Gamma Ray Bursts (GRBs) per year, ˜96% of which are detected by the X-ray telescope (XRT). GRBs are accompanied by optical transients and the field of ground-based follow-up of GRBs has expanded significantly over the last few years, with rapid response instruments capable of responding to Swift triggers on timescales of minutes. To make the most efficient use of limited telescope time, follow-up astronomers need accurate positions of GRBs as soon as possible after the trigger. Additionally, information such as the X-ray light curve, is of interest when considering observing strategy. The Swift team at Leicester University have developed techniques to improve the accuracy of the GRB positions available from the XRT, and to produce science-grade X-ray light curves of GRBs. These techniques are fully automated, and are executed as soon as data are available.

  10. Robust Electrical Transfer System (RETS) for Solar Array Drive Mechanism SlipRing Assembly

    NASA Astrophysics Data System (ADS)

    Bommottet, Daniel; Bossoney, Luc; Schnyder, Ralph; Howling, Alan; Hollenstein, Christoph

    2013-09-01

    Demands for robust and reliable power transmission systems for sliprings for SADMs (Solar Array Drive Mechanisms) are increasing steadily. As a consequence, their performance with respect to the voltage breakdown limit must be known. An understanding of the overall shape of the breakdown voltage versus pressure curve is established, based on experimental measurements of DC (Direct Current) gas breakdown in complex geometries compared with a numerical simulation model. In addition, a detailed study was made of the functional behaviour of an entire satellite wing in a like-operational mode, comprising the solar cells, the power transmission lines, the SRA (SlipRing Assembly), the power S3R (Sequential Serial/shunt Switching Regulator) and the satellite load, to simulate the electrical power consumption. A test bench able to automatically measure (a) the breakdown voltage versus pressure curve and (b) the functional switching performance was developed and validated.

  11. Combining freeform optics and curved detectors for wide field imaging: a polynomial approach over squared aperture.

    PubMed

    Muslimov, Eduard; Hugot, Emmanuel; Jahn, Wilfried; Vives, Sebastien; Ferrari, Marc; Chambion, Bertrand; Henry, David; Gaschet, Christophe

    2017-06-26

    In recent years, significant progress has been achieved in the design and fabrication of optical systems based on freeform optical surfaces. They make it possible to build fast, wide-angle and high-resolution systems that are very compact and free of obscuration. However, design techniques for freeform surfaces remain underexplored. In the present paper we use the mathematical apparatus of orthogonal polynomials defined over a square aperture, originally developed for wavefront reconstruction, to describe the shape of a mirror surface. Two cases, namely Legendre polynomials and a generalization of the Zernike polynomials on a square, are considered. The potential advantages of these polynomial sets are demonstrated on the example of a three-mirror unobscured telescope with F/# = 2.5 and FoV = 7.2x7.2°. In addition, we discuss the possibility of using curved detectors in such a design.
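    A surface described by polynomials over a square aperture, as in this record, can be sketched with NumPy's 2D Legendre evaluator; the coefficient matrix below is illustrative, not a surface from the paper:

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    # illustrative coefficients: c[i, j] multiplies L_i(x) * L_j(y)
    c = np.zeros((3, 3))
    c[2, 0] = c[0, 2] = 1.0   # curvature-like L2 terms in x and y

    def sag(x, y):
        """Freeform mirror sag as a 2D Legendre expansion over the normalized
        square aperture, with x and y in [-1, 1]."""
        return legendre.legval2d(x, y, c)
    ```

    Orthogonality of the Legendre basis over the square is what makes individual coefficients meaningful for tolerancing and wavefront-style analysis.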

  12. Analysis of HD 73045 light curve data

    NASA Astrophysics Data System (ADS)

    Das, Mrinal Kanti; Bhatraju, Naveen Kumar; Joshi, Santosh

    2018-04-01

    In this work we analyzed the Kepler light curve data of HD 73045. The raw data were smoothed using standard filters. The power spectrum, obtained using a fast Fourier transform routine, shows the presence of more than one period. To account for any non-stationary behavior, we carried out a wavelet analysis to obtain the wavelet power spectrum. In addition, to identify scale-invariant structure, the data were analyzed using multifractal detrended fluctuation analysis. Further, to characterize the diversity of embedded patterns in the HD 73045 flux time series, we computed various entropy-based complexity measures, e.g. sample entropy, spectral entropy and permutation entropy. The presence of periodic structure in the time series was further analyzed using the visibility network and horizontal visibility network models of the time series. The degree distributions in the two network models confirm such structures.
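    The power-spectrum step can be sketched as follows, with a synthetic two-period signal standing in for the HD 73045 flux (the periods below are arbitrary):

    ```python
    import numpy as np

    def power_spectrum(flux, dt):
        """One-sided FFT power spectrum of an evenly sampled light curve."""
        flux = np.asarray(flux, dtype=float)
        flux = flux - flux.mean()                  # remove the DC level
        power = np.abs(np.fft.rfft(flux)) ** 2
        freqs = np.fft.rfftfreq(len(flux), d=dt)
        return freqs, power

    # synthetic signal with two frequencies, 0.5 and 1.2 cycles per unit time
    t = np.arange(0.0, 100.0, 0.1)
    flux = np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 1.2 * t)
    freqs, power = power_spectrum(flux, 0.1)
    dominant = freqs[1:][np.argmax(power[1:])]    # skip the zero-frequency bin
    ```

    The spectrum of this test signal shows two peaks, with the dominant one at the higher-amplitude frequency, mirroring the "more than one period" finding above.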

  13. Aging in complex interdependency networks.

    PubMed

    Vural, Dervis C; Morrison, Greg; Mahadevan, L

    2014-02-01

    Although species longevity is subject to a diverse range of evolutionary forces, the mortality curves of a wide variety of organisms are rather similar. Here we argue that qualitative and quantitative features of aging can be reproduced by a simple model based on the interdependence of fault-prone agents on one another. In addition to fitting our theory to the empirical mortality curves of six very different organisms, we establish the dependence of lifetime and aging rate on initial conditions, damage and repair rate, and system size. We compare the size distributions of disease and death and see that they have qualitatively different properties. We show that aging patterns are independent of the details of the interdependence network structure, which suggests that aging is a many-body effect, and that the qualitative and quantitative features of aging are not sensitively dependent on the details of the dependency structure or its formation.
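    A toy version of such an interdependency model can be sketched as follows; the network size, damage rate and failure threshold are illustrative choices, not the paper's parameters:

    ```python
    import random

    def lifetime(n=200, k=3, p_damage=0.005, threshold=0.5, seed=0, tmax=10000):
        """Toy interdependency-aging model: each agent depends on k random others,
        suffers random damage each step, and fails once more than `threshold` of
        its dependencies are dead. Returns the step at which <1% of agents remain."""
        rng = random.Random(seed)
        deps = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
        alive = [True] * n
        for t in range(1, tmax + 1):
            for i in range(n):                      # random damage
                if alive[i] and rng.random() < p_damage:
                    alive[i] = False
            changed = True
            while changed:                          # cascade of dependency failures
                changed = False
                for i in range(n):
                    if alive[i] and sum(not alive[j] for j in deps[i]) / k > threshold:
                        alive[i] = False
                        changed = True
            if sum(alive) < 0.01 * n:
                return t
        return tmax
    ```

    The cascade step is what turns a constant per-agent hazard into the accelerating, Gompertz-like mortality the abstract describes: late in life, most deaths come from dependency failures rather than direct damage.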

  14. UBVRIz LIGHT CURVES OF 51 TYPE II SUPERNOVAE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galbany, Lluis; Hamuy, Mario; Jaeger, Thomas de

    We present a compilation of UBVRIz light curves of 51 type II supernovae discovered during the course of four different surveys during 1986–2003: the Cerro Tololo Supernova Survey, the Calán/Tololo Supernova Program (C&T), the Supernova Optical and Infrared Survey (SOIRS), and the Carnegie Type II Supernova Survey (CATS). The photometry is based on template-subtracted images to eliminate any potential host galaxy light contamination, and calibrated from foreground stars. This work presents these photometric data, studies the color evolution using different bands, and explores the relation between the magnitude at maximum brightness and the brightness decline parameter (s) from maximum light through the end of the recombination phase. This parameter is found to be shallower for redder bands and appears to have the best correlation in the B band. In addition, it also correlates with the plateau duration, being shorter (longer) for larger (smaller) s values.

  15. UBVRIz Light Curves of 51 Type II Supernovae

    NASA Astrophysics Data System (ADS)

    Galbany, Lluís; Hamuy, Mario; Phillips, Mark M.; Suntzeff, Nicholas B.; Maza, José; de Jaeger, Thomas; Moraga, Tania; González-Gaitán, Santiago; Krisciunas, Kevin; Morrell, Nidia I.; Thomas-Osip, Joanna; Krzeminski, Wojtek; González, Luis; Antezana, Roberto; Wishnjewski, Marina; McCarthy, Patrick; Anderson, Joseph P.; Gutiérrez, Claudia P.; Stritzinger, Maximilian; Folatelli, Gastón; Anguita, Claudio; Galaz, Gaspar; Green, Elisabeth M.; Impey, Chris; Kim, Yong-Cheol; Kirhakos, Sofia; Malkan, Mathew A.; Mulchaey, John S.; Phillips, Andrew C.; Pizzella, Alessandro; Prosser, Charles F.; Schmidt, Brian P.; Schommer, Robert A.; Sherry, William; Strolger, Louis-Gregory; Wells, Lisa A.; Williger, Gerard M.

    2016-02-01

    We present a compilation of UBVRIz light curves of 51 type II supernovae discovered during the course of four different surveys during 1986-2003: the Cerro Tololo Supernova Survey, the Calán/Tololo Supernova Program (C&T), the Supernova Optical and Infrared Survey (SOIRS), and the Carnegie Type II Supernova Survey (CATS). The photometry is based on template-subtracted images to eliminate any potential host galaxy light contamination, and calibrated from foreground stars. This work presents these photometric data, studies the color evolution using different bands, and explores the relation between the magnitude at maximum brightness and the brightness decline parameter (s) from maximum light through the end of the recombination phase. This parameter is found to be shallower for redder bands and appears to have the best correlation in the B band. In addition, it also correlates with the plateau duration, being shorter (longer) for larger (smaller) s values.

  16. Monte Carlo-Based Dosimetry of Beta-Emitters for Intravascular Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, C.K.

    2002-06-25

    Monte Carlo simulations for radiation dosimetry, and the experimental verifications of the simulations, have been developed for the treatment geometry of intravascular brachytherapy, a form of radionuclide therapy for occluded coronary disease (restenosis). The Monte Carlo code MCNP4C has been used to calculate the radiation dose from the encapsulated array of beta-emitting seeds (Sr/Y source train). Solid water phantoms have been fabricated to measure the dose on radiochromic films that were exposed to the beta source train for both linear and curved coronary vessel geometries. While the dose difference for the 5-degree curved vessel at the prescription point of f+2.0 mm is within the 10% guideline set by the AAPM, the difference increased dramatically to 16.85% for the 10-degree case, which requires additional adjustment for acceptable dosimetry planning. The experimental dose measurements agree well with the simulation results.

  17. Effect of Low and Very Low Doses of Simple Phenolics on Plant Peroxidase Activity

    PubMed Central

    Malarczyk, Elżbieta; Kochmańska-Rdest, Janina; Paździoch-Czochra, Marzanna

    2004-01-01

    Changes in the activity of horseradish peroxidase resulting from the addition of ethanol-water dilutions of 19 phenolic compounds were observed. For each compound, the enzyme activity was plotted against the degree of dilution expressed as n = –log100 (mol/L) in the range 0 ≤ n ≤ 20. All the curves showed sinusoidal activity, more or less regular, with two to four peaks on average. Each analyzed compound had a characteristic sinusoidal shape, which was constant for samples of peroxidase from various commercial firms. This was clearly visible after fitting functions to the experimental results using the Marquardt–Levenberg least-squares algorithm. Among the 19 phenolics, the highest amplitudes were observed for phenol and for iso- and vanillic acids and aldehydes. The specific character of each of the analyzed curves offers the possibility of choosing proper dilutions of a phenolic compound for activating or inhibiting peroxidase activity. PMID:19330128
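    The study fits sinusoidal activity curves with the Marquardt-Levenberg algorithm; the simplified sketch below fits a sinusoid of known angular frequency by linear least squares instead (a full nonlinear fit would also adjust the frequency):

    ```python
    import numpy as np

    def fit_sinusoid(n, y, omega):
        """Least-squares fit of y ~ a*sin(omega*n) + b*cos(omega*n) + c for a
        fixed omega; equivalent to fitting amplitude and phase of one sinusoid."""
        X = np.column_stack([np.sin(omega * n), np.cos(omega * n), np.ones_like(n)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef  # (a, b, c)

    # synthetic activity-vs-dilution curve over n = 0..20 (illustrative data)
    n = np.arange(21, dtype=float)
    a, b, c = fit_sinusoid(n, 2.0 * np.sin(0.8 * n) + 1.0, 0.8)
    ```

    The amplitude of the fitted sinusoid, sqrt(a² + b²), is the quantity compared across compounds in the abstract.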

  18. Ecology and thermal inactivation of microbes in and on interplanetary space vehicle components

    NASA Technical Reports Server (NTRS)

    Campbell, J. E.

    1973-01-01

    The thermal inactivation curve for Bacillus subtilis var. niger spores on the Viking lander is examined. Tests were conducted at 113°C and 25% RH, and over a wide range of temperatures using 0.001% RH with additions of P2O5 to dry the environment. Results show that the 25% RH environment did not significantly reduce the survival curve, while the survival curves for spores treated under the drier 0.001% RH environment were reduced by a factor of 3.
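    Thermal inactivation curves of this kind are commonly summarized by the decimal reduction time D (the time for a tenfold drop in survivors), assuming log-linear survival; a minimal sketch with illustrative counts:

    ```python
    import math

    def d_value(t_min, n0, nt):
        """Decimal reduction time from a log-linear survival curve:
        log10(Nt) = log10(N0) - t/D, so D = t / (log10(N0) - log10(Nt))."""
        return t_min / (math.log10(n0) - math.log10(nt))

    # e.g. spores dropping from 1e6 to 1e3 survivors in 30 min give D = 10 min
    D = d_value(30.0, 1e6, 1e3)
    ```

    The factor-of-3 reduction reported above corresponds to the drier environment shortening D relative to the humid case.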

  19. Constructing Pairing-Friendly Elliptic Curves under Embedding Degree 1 for Securing Critical Infrastructures.

    PubMed

    Wang, Maocai; Dai, Guangming; Choo, Kim-Kwang Raymond; Jayaraman, Prem Prakash; Ranjan, Rajiv

    2016-01-01

    Information confidentiality is an essential requirement for cyber security in critical infrastructure. Identity-based cryptography, an increasingly popular branch of cryptography, is widely used to protect information confidentiality in the critical infrastructure sector due to its ability to compute a user's public key directly from the user's identity. However, computational requirements complicate the practical application of identity-based cryptography. In order to improve the efficiency of identity-based cryptography, this paper presents an effective method to construct pairing-friendly elliptic curves with a low Hamming weight of 4 under embedding degree 1. Based on an analysis of the Complex Multiplication (CM) method, the soundness of our method for calculating the characteristic of the finite field is proved. Three related algorithms to construct pairing-friendly elliptic curves are then put forward. Ten elliptic curves with a Hamming weight of 4 under 160 bits are presented to demonstrate the utility of our approach. Finally, the evaluation also indicates that it is more efficient to compute the Tate pairing with our curves than with those of Bertoni et al.
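    The embedding degree of an order-r subgroup over F_p is the smallest k with r dividing p^k − 1, so embedding degree 1 means p ≡ 1 (mod r). A minimal check, with toy parameters far below cryptographic size:

    ```python
    def embedding_degree(p, r, kmax=50):
        """Smallest k with r | p^k - 1: the embedding degree of an order-r
        subgroup of an elliptic curve over F_p (searched up to kmax)."""
        pk = 1
        for k in range(1, kmax + 1):
            pk = (pk * p) % r
            if pk == 1:
                return k
        return None
    ```

    The multiplicative reduction keeps the check cheap even for large p and r, since only p mod r and its powers are ever materialized.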

  20. Seismic fragility curves of bridge piers accounting for ground motions in Korea

    NASA Astrophysics Data System (ADS)

    Nguyen, Duy-Duan; Lee, Tae-Hyung

    2018-04-01

    Korea is located in a slight-to-moderate seismic zone. Nevertheless, several studies have indicated that the peak earthquake magnitude in the region can reach approximately 6.5. Accordingly, a seismic vulnerability evaluation of existing structures accounting for ground motions in Korea is important. The purpose of this paper is to develop seismic fragility curves for the piers of a steel box girder bridge, with and without base isolators, based on a set of ground motions recorded in Korea. A finite element simulation platform, OpenSees, is utilized to perform nonlinear time history analyses of the bridges. A series of damage states is defined based on a damage index expressed in terms of the column displacement ductility ratio. The fragility curves based on Korean motions were thereafter compared with fragility curves generated using worldwide earthquakes to assess the effect of the two ground motion groups on the seismic fragility curves of the bridge piers. The results reveal that both non- and base-isolated bridge piers are less vulnerable under the Korean ground motions than under worldwide earthquakes.
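    Seismic fragility curves of this kind are conventionally expressed as a lognormal CDF of the intensity measure; a minimal sketch with placeholder median capacity and dispersion (not the paper's fitted values):

    ```python
    import math

    def fragility(pga_g, median_g, beta):
        """Lognormal fragility curve: P(damage >= state | PGA) =
        Phi(ln(pga/median)/beta), with median capacity in g and
        log-standard deviation beta."""
        z = math.log(pga_g / median_g) / beta
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    ```

    Comparing two ground-motion sets, as done above, amounts to comparing the fitted (median, beta) pairs each set produces for the same damage state.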

  1. Constructing Pairing-Friendly Elliptic Curves under Embedding Degree 1 for Securing Critical Infrastructures

    PubMed Central

    Dai, Guangming

    2016-01-01

    Information confidentiality is an essential requirement for cyber security in critical infrastructure. Identity-based cryptography, an increasingly popular branch of cryptography, is widely used to protect information confidentiality in the critical infrastructure sector due to its ability to compute a user's public key directly from the user's identity. However, computational requirements complicate the practical application of identity-based cryptography. In order to improve the efficiency of identity-based cryptography, this paper presents an effective method to construct pairing-friendly elliptic curves with a low Hamming weight of 4 under embedding degree 1. Based on an analysis of the Complex Multiplication (CM) method, the soundness of our method for calculating the characteristic of the finite field is proved. Three related algorithms to construct pairing-friendly elliptic curves are then put forward. Ten elliptic curves with a Hamming weight of 4 under 160 bits are presented to demonstrate the utility of our approach. Finally, the evaluation also indicates that it is more efficient to compute the Tate pairing with our curves than with those of Bertoni et al. PMID:27564373

  2. Derivation of a Provisional, Age-dependent, AIS2+ Thoracic Risk Curve for the THOR50 Test Dummy via Integration of NASS Cases, PMHS Tests, and Simulation Data.

    PubMed

    Laituri, Tony R; Henry, Scott; El-Jawahri, Raed; Muralidharan, Nirmal; Li, Guosong; Nutt, Marvin

    2015-11-01

    A provisional, age-dependent thoracic risk equation (or, "risk curve") was derived to estimate moderate-to-fatal injury potential (AIS2+), pertaining to men with responses gaged by the advanced mid-sized male test dummy (THOR50). The derivation involved two distinct data sources: cases from real-world crashes (e.g., the National Automotive Sampling System, NASS) and cases involving post-mortem human subjects (PMHS). The derivation was therefore more comprehensive, as NASS datasets generally skew towards younger occupants, and PMHS datasets generally skew towards older occupants. However, known deficiencies had to be addressed (e.g., the NASS cases had unknown stimuli, and the PMHS tests required transformation of known stimuli into THOR50 stimuli). For the NASS portion of the analysis, chest-injury outcomes for adult male drivers about the size of the THOR50 were collected from real-world, 11-1 o'clock, full-engagement frontal crashes (NASS, 1995-2012 calendar years, 1985-2012 model-year light passenger vehicles). The screening for THOR50-sized men involved application of a set of newly-derived "correction" equations for self-reported height and weight data in NASS. Finally, THOR50 stimuli were estimated via field simulations involving attendant representative restraint systems, and those stimuli were then assigned to corresponding NASS cases (n=508). For the PMHS portion of the analysis, simulation-based closure equations were developed to convert PMHS stimuli into THOR50 stimuli. Specifically, closure equations were derived for the four measurement locations on the THOR50 chest by cross-correlating the results of matched-loading simulations between the test dummy and the age-dependent, Ford Human Body Model. The resulting closure equations demonstrated acceptable fidelity (n=75 matched simulations, R2≥0.99). These equations were applied to the THOR50-sized men in the PMHS dataset (n=20). 
The NASS and PMHS datasets were combined and subjected to survival analysis with event-frequency weighting and arbitrary censoring. The resulting risk curve--a function of peak THOR50 chest compression and age--demonstrated acceptable fidelity for recovering the AIS2+ chest injury rate of the combined dataset (i.e., IR_dataset=1.97% vs. curve-based IR_dataset=1.98%). Additional sensitivity analyses showed that (a) binary logistic regression yielded a risk curve with nearly-identical fidelity, (b) there was only a slight advantage of combining the small-sample PMHS dataset with the large-sample NASS dataset, (c) use of the PMHS-based risk curve for risk estimation of the combined dataset yielded relatively poor performance (194% difference), and (d) when controlling for the type of contact (lab-consistent or not), the resulting risk curves were similar.
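    The binary-logistic alternative mentioned in the sensitivity analysis has the familiar closed form; a sketch with placeholder coefficients (the paper's fitted values are not reproduced here):

    ```python
    import math

    def injury_risk(compression_mm, age_yr, b0=-8.0, b1=0.10, b2=0.05):
        """Binary-logistic risk curve p = 1/(1 + exp(-(b0 + b1*Cmax + b2*age))),
        a function of peak chest compression and occupant age. The coefficients
        b0, b1, b2 are illustrative placeholders only."""
        return 1.0 / (1.0 + math.exp(-(b0 + b1 * compression_mm + b2 * age_yr)))
    ```

    Monotonicity in both arguments is the structural property the age-dependent risk curve relies on: risk rises with compression for a fixed age, and with age for a fixed compression.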

  3. Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction

    NASA Astrophysics Data System (ADS)

    Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.

    2012-12-01

    The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which is comprised of over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect their system. In order to understand the duration, intensity and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of this data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. 
Based on this information, flow-turbidity regressions with ARMA(1,2) errors were fit to the observations. Preliminary model validation exercises at a 30-day forecast horizon show that the ARMA error models generally improve the predictive skill of the linear regression rating curves. Skill seems to vary based on the ambient hydrologic conditions at the onset of the forecast. For example, ARMA error model forecasts issued before a high flow/turbidity event do not show significant improvements over the rating curve approach. However, ARMA error model forecasts issued during the "falling limb" of the hydrograph are significantly more accurate than rating curves for both single day and accumulated event predictions. In order to assist in reservoir operations decisions associated with turbidity events and general water supply reliability, DEP has initiated design of an Operations Support Tool (OST). OST integrates a reservoir operations model with 2D hydrodynamic water quality models and a database compiling near-real-time data sources and hydrologic forecasts. Currently, OST uses conventional flow-turbidity rating curves and hydrologic forecasts for predictive turbidity inputs. Given the improvements in predictive skill over traditional rating curves, the ARMA error models are currently being evaluated as an addition to DEP's Operations Support Tool.
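    The regression-with-ARMA-errors idea above can be sketched in reduced form: fit a log-log rating curve, then model its residuals with an AR(1) term (the study uses ARMA(1,2) errors; the data below are synthetic):

    ```python
    import numpy as np

    def fit_rating_ar1(log_q, log_turb):
        """Fit a log-log flow-turbidity rating curve by least squares, then a
        lag-1 autoregressive coefficient on its residuals."""
        X = np.column_stack([np.ones_like(log_q), log_q])
        beta, *_ = np.linalg.lstsq(X, log_turb, rcond=None)
        resid = log_turb - X @ beta
        phi = float(resid[:-1] @ resid[1:]) / float(resid[:-1] @ resid[:-1])
        return beta[0], beta[1], phi, resid

    def forecast_next(b0, b1, phi, log_q_next, last_resid):
        """One-step forecast: rating-curve estimate plus AR(1)-propagated residual."""
        return b0 + b1 * log_q_next + phi * last_resid

    # synthetic record with an autocorrelated residual component
    log_q = np.linspace(0.0, 4.0, 50)
    log_turb = 1.0 + 2.0 * log_q + 0.1 * np.sin(np.arange(50.0))
    b0, b1, phi, resid = fit_rating_ar1(log_q, log_turb)
    ```

    Carrying the last residual forward through phi is exactly why such forecasts beat the bare rating curve on the falling limb, where residuals are strongly persistent.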

  4. Assessing the Classification Accuracy of Early Numeracy Curriculum-Based Measures Using Receiver Operating Characteristic Curve Analysis

    ERIC Educational Resources Information Center

    Laracy, Seth D.; Hojnoski, Robin L.; Dever, Bridget V.

    2016-01-01

    Receiver operating characteristic curve (ROC) analysis was used to investigate the ability of early numeracy curriculum-based measures (EN-CBM) administered in preschool to predict performance below the 25th and 40th percentiles on a quantity discrimination measure in kindergarten. Areas under the curve derived from a sample of 279 students ranged…
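    The area under the ROC curve used in such classification-accuracy studies equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney form); a minimal sketch:

    ```python
    def auc(scores_pos, scores_neg):
        """Area under the ROC curve via the Mann-Whitney U statistic:
        the fraction of (positive, negative) pairs ranked correctly,
        with ties counting one half."""
        wins = 0.0
        for sp in scores_pos:
            for sn in scores_neg:
                if sp > sn:
                    wins += 1.0
                elif sp == sn:
                    wins += 0.5
        return wins / (len(scores_pos) * len(scores_neg))
    ```

    An AUC of 0.5 means the screener is no better than chance; values approaching 1.0 indicate strong separation between the at-risk and not-at-risk groups.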

  5. Simplified curve fits for the thermodynamic properties of equilibrium air

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Tannehill, J. C.; Weilmuenster, K. J.

    1986-01-01

    New improved curve fits for the thermodynamic properties of equilibrium air were developed. The curve fits are for p = p(e,rho), a = a(e,rho), T = T(e,rho), s = s(e,rho), T = T(p,rho), h = h(p,rho), rho = rho(p,s), e = e(p,s) and a = a(p,s). These curve fits can be readily incorporated into new or existing Computational Fluid Dynamics (CFD) codes if real-gas effects are desired. The curve fits were constructed using Grabau-type transition functions to model the thermodynamic surfaces in a piecewise manner. The accuracies and continuity of these curve fits are substantially improved over those of previous curve fits appearing in NASA CR-2470. These improvements were due to the incorporation of a small number of additional terms in the approximating polynomials and careful choices of the transition functions. The ranges of validity of the new curve fits are temperatures up to 25,000 K and densities from 10^-7 to 100 amagats (rho/rho_0).
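    A Grabau-type transition function blends two local fits smoothly so the piecewise surface stays continuous; a schematic sketch (the logistic form and constants here are illustrative, not the report's actual curve-fit coefficients):

    ```python
    import math

    def grabau_blend(x, x0, k, f_low, f_high):
        """Blend two local polynomial 'fits' with a logistic transition weight
        centered at x0 (steepness k), yielding a smooth piecewise model."""
        w = 1.0 / (1.0 + math.exp(-k * (x - x0)))
        return (1.0 - w) * f_low(x) + w * f_high(x)

    # schematic: two constant local fits joined smoothly at x0 = 0
    low = lambda x: 1.0
    high = lambda x: 3.0
    ```

    Far from the transition point the blend reduces to whichever local fit governs that region, which is why continuity improves without sacrificing local accuracy.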

  6. Parametric estimates for the receiver operating characteristic curve generalization for non-monotone relationships.

    PubMed

    Martínez-Camblor, Pablo; Pardo-Fernández, Juan C

    2017-01-01

    Diagnostic procedures are based on establishing certain conditions and then checking whether those conditions are satisfied by a given individual. When the diagnostic procedure is based on a continuous marker, this is equivalent to fixing a region, or classification subset, and then checking whether the observed value of the marker belongs to that region. The receiver operating characteristic curve is a valuable and popular tool to study and compare the diagnostic ability of a given marker, and the area under the curve is frequently used as an index of global discrimination ability. This paper revises and widens the scope of the receiver operating characteristic curve definition by putting the classification subsets on which the final decision is based in the spotlight of the analysis. We revise the definition of the receiver operating characteristic curve in terms of particular classes of classification subsets and then focus on a generalization for situations in which both low and high values of the marker are associated with a higher probability of having the studied characteristic. Parametric and non-parametric estimators of this receiver operating characteristic curve generalization are investigated. Monte Carlo studies and real data examples illustrate their practical performance.
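    For markers where both low and high values indicate the condition, the generalized ROC classifies a subject as positive when the marker falls outside an interval; one operating point of such a curve can be sketched as:

    ```python
    def two_sided_operating_point(pos, neg, lo, hi):
        """One operating point of the generalized ROC: classify as positive when
        the marker lies outside [lo, hi], so both tails indicate the condition.
        Returns (false positive rate, true positive rate)."""
        tpr = sum(1 for x in pos if x < lo or x > hi) / len(pos)
        fpr = sum(1 for x in neg if x < lo or x > hi) / len(neg)
        return fpr, tpr
    ```

    Sweeping the interval (lo, hi) over all admissible pairs and keeping, for each FPR, the best TPR traces out the generalized curve; the conventional ROC is the special case of a one-sided threshold.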

  7. p-Curve and p-Hacking in Observational Research

    PubMed Central

    Bruns, Stephan B.; Ioannidis, John P. A.

    2016-01-01

    The p-curve, the distribution of statistically significant p-values of published studies, has been used to make inferences on the proportion of true effects and on the presence of p-hacking in the published literature. We analyze the p-curve for observational research in the presence of p-hacking. We show by means of simulations that even with minimal omitted-variable bias (e.g., unaccounted confounding) p-curves based on true effects and p-curves based on null effects with p-hacking cannot be reliably distinguished. We also demonstrate this problem using as a practical example the evaluation of the effect of malaria prevalence on economic growth between 1960 and 1996. These findings call into question recent studies that use the p-curve to infer that most published research findings are based on true effects in the medical literature and in a wide range of disciplines. p-values in observational research may need to be empirically calibrated to be interpretable with respect to the commonly used significance threshold of 0.05. Violations of randomization in experimental studies may also result in situations where the use of p-curves is similarly unreliable. PMID:26886098
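    The kind of simulation described can be sketched with two-sided z-tests; the `hack` step below (re-testing once after adding observations) is one crude p-hacking model, not the authors' exact design:

    ```python
    import math
    import numpy as np

    def z_pvalue(sample):
        """Two-sided p-value of a z-test for zero mean with known unit variance."""
        z = sample.mean() * math.sqrt(len(sample))
        return math.erfc(abs(z) / math.sqrt(2.0))

    def significant_pvalues(n_studies=2000, n=30, effect=0.0, hack=False, seed=1):
        """Simulate studies; with hack=True a non-significant study adds 10
        observations and re-tests once. Returns p-values below 0.05, i.e. the
        input to a p-curve."""
        rng = np.random.default_rng(seed)
        out = []
        for _ in range(n_studies):
            s = rng.normal(effect, 1.0, n)
            p = z_pvalue(s)
            if hack and p >= 0.05:
                s = np.concatenate([s, rng.normal(effect, 1.0, 10)])
                p = z_pvalue(s)
            if p < 0.05:
                out.append(p)
        return out
    ```

    Under the null without hacking the significant p-values are uniform on (0, 0.05); hacking piles extra mass just under 0.05, and a true effect skews mass toward small p — the shapes the p-curve literature tries to tell apart.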

  8. Polymorphic site index curves for white pine in the southern Appalachians

    Treesearch

    Donald E. Beck

    1971-01-01

    Site index curves are presented for natural stands of even-aged white pine in the southern Appalachians. The curves are based on measured height-growth trends in 42 stands. Shape of the height-growth curves was shown to change progressively with the level of site index, and these polymorphic trends are incorporated in the finished site-index curves. Comparison of the...

  9. Reconstructing the Geomagnetic Field in West Africa: First Absolute Intensity Results from Burkina Faso

    NASA Astrophysics Data System (ADS)

    Kapper, Lisa; Donadini, Fabio; Serneels, Vincent; Tema, Evdokia; Goguitchaichvili, Avto; Julio Morales, Juan

    2017-03-01

    We present absolute geomagnetic intensities from iron smelting furnaces discovered at the metallurgical site of Korsimoro, Burkina Faso. Up to now, archaeologists recognized four different types of furnaces based on different construction methods, which were related to four subsequent time periods. Additionally, radiocarbon ages obtained from charcoal confine the studied furnaces to ages ranging from 700-1700 AD, in good agreement with the archaeologically determined time periods for each type of furnace. Archaeointensity results reveal three main groups of Arai diagrams. The first two groups contain specimens with either linear Arai diagrams, or slightly curved diagrams or two phases of magnetization. The third group encompasses specimens with strong zigzag or curvature in their Arai diagrams. Specimens of the first two groups were accepted after applying selection criteria to guarantee the high quality of the results. Our data compared to palaeosecular variation curves show a similar decreasing trend between 900-1500 AD. However, they reveal larger amplitudes at around 800 AD and 1650 AD than the reference curves and geomagnetic field models. Furthermore, they agree well with archaeomagnetic data from Mali and Senegal around 800 AD and with volcanic data around 1700 AD.

  10. Physical growth curves of indigenous Xavante children in Central Brazil: results from a longitudinal study (2009-2012).

    PubMed

    Ferreira, Aline A; Welch, James R; Cunha, Geraldo Marcelo; Coimbra, Carlos E A

    2016-07-01

    The nutritional profile of Indigenous children in Brazil is comparable to those observed in some of the least developed regions of the world. Weight and height growth curves were characterised based on longitudinal data from a local Indigenous population experiencing the double burden of child under-nutrition and adult obesity. Anthropometric data were collected in six waves from 2009-2011 for children <10 years of age in two proximate Xavante villages in Central Brazil. Prevalence rates for stunting, wasting and thinness were calculated using WHO references. Weight and height data were fitted using generalised additive mixed models to generate growth curves. Prevalence rates of stunting and wasting were high, but cases of thinness and excess weight were negligible. Weight and height began close to WHO medians but fell substantially before 12 months. Boys, but not girls, were able to catch up in weight before age 10. From 3-10 years, height for both sexes remained between -2 and 0 z-scores. Impaired Xavante growth before 1 year, followed by inconsistent recovery before 10 years, reflects health and wellbeing disparities with respect to the Brazilian national population and a complex epidemiology of growth involving rapid nutritional change.
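    Stunting and wasting classifications rest on z-scores against growth references; the WHO standards use age- and sex-specific LMS parameters, but a simplified median/SD sketch conveys the idea (the reference values below are placeholders):

    ```python
    def z_score(value, ref_median, ref_sd):
        """Anthropometric z-score against a reference median and SD
        (a simplification of the WHO LMS method)."""
        return (value - ref_median) / ref_sd

    def is_stunted(height_cm, ref_median_cm, ref_sd_cm):
        """Stunting: height-for-age z-score below -2, the WHO cut-off."""
        return z_score(height_cm, ref_median_cm, ref_sd_cm) < -2.0
    ```

    The abstract's statement that heights stayed "between -2 and 0 z-scores" places the cohort below the reference median but, on average, above the stunting cut-off after age 3.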

  11. Reconstructing the Geomagnetic Field in West Africa: First Absolute Intensity Results from Burkina Faso

    PubMed Central

    Kapper, Lisa; Donadini, Fabio; Serneels, Vincent; Tema, Evdokia; Goguitchaichvili, Avto; Julio Morales, Juan

    2017-01-01

    We present absolute geomagnetic intensities from iron smelting furnaces discovered at the metallurgical site of Korsimoro, Burkina Faso. Up to now, archaeologists recognized four different types of furnaces based on different construction methods, which were related to four subsequent time periods. Additionally, radiocarbon ages obtained from charcoal confine the studied furnaces to ages ranging from 700–1700 AD, in good agreement with the archaeologically determined time periods for each type of furnace. Archaeointensity results reveal three main groups of Arai diagrams. The first two groups contain specimens with either linear Arai diagrams, or slightly curved diagrams or two phases of magnetization. The third group encompasses specimens with strong zigzag or curvature in their Arai diagrams. Specimens of the first two groups were accepted after applying selection criteria to guarantee the high quality of the results. Our data compared to palaeosecular variation curves show a similar decreasing trend between 900–1500 AD. However, they reveal larger amplitudes at around 800 AD and 1650 AD than the reference curves and geomagnetic field models. Furthermore, they agree well with archaeomagnetic data from Mali and Senegal around 800 AD and with volcanic data around 1700 AD. PMID:28350006

  12. Microfocal angiography of the pulmonary vasculature

    NASA Astrophysics Data System (ADS)

    Clough, Anne V.; Haworth, Steven T.; Roerig, David T.; Linehan, John H.; Dawson, Christopher A.

    1998-07-01

    X-ray microfocal angiography provides a means of assessing regional microvascular perfusion parameters using residue detection of vascular indicators. As an application of this methodology, we studied the effects of alveolar hypoxia, a pulmonary vasoconstrictor, on the pulmonary microcirculation to determine changes in regional blood mean transit time, volume and flow between control and hypoxic conditions. Video x-ray images of a dog lung were acquired as a bolus of radiopaque contrast medium passed through the lobar vasculature. X-ray time-absorbance curves were acquired from arterial and microvascular regions-of-interest during both control and hypoxic alveolar gas conditions. A mathematical model based on indicator-dilution theory was applied to the image residue curves to determine changes in microvascular perfusion parameters. The sensitivity of the model parameters to the model assumptions was analyzed. Generally, the model parameter describing regional microvascular volume, corresponding to the area under the microvascular absorbance curve, was the most robust. The results of the model analysis applied to the experimental data suggest a significant decrease in microvascular volume with hypoxia. However, additional model assumptions concerning the flow kinematics within the capillary bed may be required for assessing changes in regional microvascular flow and mean transit time from image residue data.
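    The indicator-dilution quantities named above follow directly from the time-absorbance curve: regional volume scales with its area, and mean transit time is its normalized first moment; a minimal sketch on a synthetic curve:

    ```python
    import numpy as np

    def _trapz(y, x):
        """Trapezoidal integral, written out to avoid version-specific NumPy names."""
        return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1]) / 2.0))

    def volume_and_mtt(t, c):
        """From a time-absorbance residue curve: area (proportional to regional
        blood volume) and mean transit time as the normalized first moment."""
        area = _trapz(c, t)
        mtt = _trapz(t * c, t) / area
        return area, mtt

    # synthetic symmetric bolus curve peaking at t = 5 s (illustrative only)
    t = np.linspace(0.0, 10.0, 101)
    c = np.maximum(0.0, 5.0 - np.abs(t - 5.0))
    area, mtt = volume_and_mtt(t, c)
    ```

    The abstract's finding that the volume parameter was most robust matches this formulation: area is a zeroth moment, far less sensitive to curve-shape assumptions than the first moment behind mean transit time.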

  13. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment

    PubMed Central

    Makowska, Joanna; Bagińska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyński, Lech; Scheraga, Harold A.

    2008-01-01

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric-titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH) and dimethylsulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the Electrostatically Driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, MD simulations are run with the AMBER force field and the Generalized-Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method.
The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach, and the titration curve in water calculated using the MD-based approach, have smooth shapes characteristic of the titration of weak multifunctional acids with small differences between the dissociation constants. Nevertheless, quantitative agreement between theoretically predicted and experimental titration curves is not achieved in all three solvents even with the MD-based approach which is manifested by a smaller pH range of the calculated titration curves with respect to the experimental curves. The poorer agreement obtained for water than for the non-aqueous solvents suggests a significant role of specific solvation in water, which cannot be accounted for by the mean-field solvation models. PMID:16509748

  14. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment.

    PubMed

    Makowska, Joanna; Bagińska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyński, Lech; Scheraga, Harold A

    2006-03-09

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH), and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method.
The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences between the dissociation constants. Nevertheless, quantitative agreement between theoretically predicted and experimental titration curves is not achieved in all three solvents even with the MD-based approach, which is manifested by a smaller pH range of the calculated titration curves with respect to the experimental curves. The poorer agreement obtained for water than for the nonaqueous solvents suggests a significant role of specific solvation in water, which cannot be accounted for by the mean-field solvation models.

  15. Do the disc degeneration and osteophyte contribute to the curve rigidity of degenerative scoliosis?

    PubMed

    Zhu, Feng; Bao, Hongda; Yan, Peng; Liu, Shunan; Bao, Mike; Zhu, Zezhang; Liu, Zhen; Qiu, Yong

    2017-03-29

    The factors associated with lateral curve flexibility in degenerative scoliosis have not been well documented. Disc degeneration can result in significant changes in stiffness and range of motion on lateral bending films. Osteophytes are commonly observed in the degenerative spine, but the relationship between osteophyte formation and curve flexibility remains controversial. The aim of the current study was to clarify whether disc degeneration and osteophyte formation are both associated with curve flexibility in degenerative scoliosis. A total of 85 patients were retrospectively analyzed. The inclusion criteria were as follows: age greater than 45 years, a diagnosis of degenerative scoliosis, and coronal Cobb angle greater than 20°. Curve flexibility was calculated based on Cobb angle, and range of motion (ROM) was based on disc angle evaluation. A regional disc degeneration score (RDS) was obtained according to the Pfirrmann classification, and an osteophyte formation score (OFS) was based on the Nathan classification. Spearman correlation was performed to analyze the relationship between curve flexibility and RDS as well as OFS. Moderate correlation was found between RDS and curve flexibility, with a Spearman coefficient of -0.487 (P = 0.009). Similarly, moderate correlation was observed between curve flexibility and OFS, with a Spearman coefficient of -0.429 (P = 0.012). A stronger correlation was found between apical ROM and OFS, with a Spearman coefficient of -0.627 (P < 0.001). Both disc degeneration and osteophyte formation correlated with curve rigidity. Pre-operative evaluation of both features may aid surgical decision-making in degenerative scoliosis patients.
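The analysis in this record is a rank correlation between ordinal scores and flexibility. A minimal sketch of that computation with invented per-patient values (not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-patient scores: higher regional disc degeneration score
# (RDS) paired with lower lateral curve flexibility (%); values are invented.
rds         = np.array([6, 8, 9, 11, 12, 14, 16, 18])
flexibility = np.array([55, 48, 50, 40, 34, 30, 22, 15])

# Spearman correlation works on ranks, so it captures the monotone
# (not necessarily linear) decrease of flexibility with degeneration.
rho, p = spearmanr(rds, flexibility)
print(f"Spearman rho = {rho:.3f} (p = {p:.4f})")
```

A strongly negative rho, as in the study's reported coefficients, indicates that flexibility falls as the degeneration score rises.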

  16. WE-E-18A-04: Precision In-Vivo Dosimetry Using Optically Stimulated Luminescence Dosimeters and a Pulsed-Stimulating Dose Reader

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Q; Herrick, A; Hoke, S

    Purpose: A new readout technology based on pulsed optically stimulated luminescence is introduced (microSTARii, Landauer, Inc., Glenwood, IL 60425). This investigation searches for approaches that maximize dosimetry accuracy in clinical applications. Methods: The sensitivity of each optically stimulated luminescence dosimeter (OSLD) was initially characterized by exposing it to a given radiation beam. After readout, the luminescence signal stored in the OSLD was erased by exposing its sensing area to a 21 W white LED light for 24 hours. A set of OSLDs with consistent sensitivities was selected to calibrate the dose reader. Higher-order nonlinear curves were also derived from the calibration readings. OSLDs with cumulative doses below 15 Gy were reused. Before each in-vivo measurement, the OSLD luminescence signal was erased with the white LED light. Results: For a set of 68 manufacturer-screened OSLDs, the measured sensitivities varied over a range of 17.3%. A sub-set of the OSLDs with sensitivities within ±1% was selected for the reader calibration. Three OSLDs in a group were exposed to a given radiation dose. Nine groups were exposed to radiation doses ranging from 0 to 13 Gy. Additional verifications demonstrated that the reader uncertainty is about 3%. With an external calibration function derived by fitting the OSLD readings to a 3rd-order polynomial, the dosimetry uncertainty dropped to 0.5%. The dose-luminescence response curves of individual OSLDs were characterized. All curves converge within 1% after the sensitivity correction. With all uncertainties considered, the systematic uncertainty is about 2%. Additional tests emulating in-vivo dosimetry by exposing the OSLDs under different radiation sources confirmed the claim. Conclusion: The sensitivity of each individual OSLD should be characterized initially. A 3rd-order polynomial function is a more accurate representation of the dose-luminescence response curve. The dosimetry uncertainty specified by the manufacturer is 4%; following the proposed approach, it can be controlled to 2%.
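The external calibration function described here, dose as a 3rd-order polynomial of the reader signal, can be sketched as follows. The reading/dose pairs are invented for illustration; only the fitting approach mirrors the abstract.

```python
import numpy as np

# Hypothetical reader calibration: mean luminescence signal (kilocounts) from
# OSLD groups exposed to known doses (the study used nine groups, 0-13 Gy).
readings_kc = np.array([0.0, 12.0, 25.0, 39.0, 56.0, 74.0, 95.0, 118.0, 145.0])
doses       = np.array([0.0, 0.5,  1.0,  2.0,  4.0,  6.0,  8.0,  10.0,  13.0])

# External calibration function: dose as a 3rd-order polynomial of the reading.
calibration = np.poly1d(np.polyfit(readings_kc, doses, deg=3))

dose = calibration(60.0)  # convert a new in-vivo reading to dose (Gy)
print(f"estimated dose: {dose:.2f} Gy")
```

Keeping the readings in kilocounts keeps the cubic fit well conditioned; with raw counts in the 1e5 range the Vandermonde matrix becomes nearly singular.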

  17. TDAAPS 2: Acoustic Wave Propagation in Attenuative Moving Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph A.

    This report outlines recent enhancements to the TDAAPS algorithm first described by Symons et al., 2005. One of the primary additions to the code is the ability to specify an attenuative media using standard linear fluid mechanisms to match reasonably general frequency versus loss curves, including common frequency versus loss curves for the atmosphere and seawater. Other improvements that will be described are the addition of improved numerical boundary conditions via various forms of Perfectly Matched Layers, enhanced accuracy near high contrast media interfaces, and improved physics options.

  18. Electrical conductivity of high-purity germanium crystals at low temperature

    NASA Astrophysics Data System (ADS)

    Yang, Gang; Kooi, Kyler; Wang, Guojian; Mei, Hao; Li, Yangyang; Mei, Dongming

    2018-05-01

    The temperature dependence of the electrical conductivity of single-crystal and polycrystalline high-purity germanium (HPGe) samples has been investigated in the temperature range from 7 to 100 K. The conductivity-versus-inverse-temperature curves for three single-crystal samples consist of two distinct temperature ranges: a high-temperature range where the conductivity increases to a maximum with decreasing temperature, and a low-temperature range where the conductivity decreases slowly with decreasing temperature. In contrast, the curves for three polycrystalline samples have, in addition to high- and low-temperature ranges with similar conductive behavior, a medium-temperature range where the conductivity decreases dramatically with decreasing temperature. The turning-point temperatures (Tm), which correspond to the maxima of the conductivity on these curves, are higher for the polycrystalline samples than for the single-crystal samples. Additionally, the net carrier concentrations of all samples have been calculated from the measured conductivity over the whole measurement temperature range. The calculated results show that the ionized carrier concentration increases with increasing temperature due to thermal excitation, but it reaches saturation around 40 K for the single-crystal samples and 70 K for the polycrystalline samples. All these differences between the single-crystal and polycrystalline samples can be attributed to trapping and scattering effects of the grain boundaries on the charge carriers. Relevant physical models have been proposed to explain these differences in conductive behavior between the two kinds of samples.

  19. Material quality assessment of silk nanofibers based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Brandoli Machado, Bruno; Nunes Gonçalves, Wesley; Martinez Bruno, Odemir

    2013-02-01

    In this paper, we propose a novel approach for texture analysis based on an artificial crawler model. Our method assumes that each agent can interact with the environment and with other agents. The evolution process converges to an equilibrium state according to the set of rules. For each textured image, the feature vector is composed of signatures of the live-agents curve at each time step. Experimental results revealed that combining the minimum and maximum signatures into one increases the classification rate. In addition, we pioneer the use of autonomous agents for characterizing silk fibroin scaffolds. The results strongly suggest that our approach can be successfully employed for texture analysis.

  20. Marginal abatement cost curves for NOx that account for renewable electricity, energy efficiency, and fuel switching

    EPA Science Inventory

    A marginal abatement cost curve (MACC) traces out the relationship between the quantity of pollution abated and the marginal cost of abating each additional unit. In the context of air quality management, MACCs typically are developed by sorting end-of-pipe controls by their resp...
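The MACC construction this record describes, sorting abatement options by marginal cost and accumulating the abated quantity into a step curve, can be sketched as follows. The measures, quantities, and costs are invented for illustration only.

```python
# Hypothetical NOx abatement options: (measure, tons abated, $ per ton).
measures = [
    ("SCR retrofit",      1200, 2500.0),
    ("low-NOx burners",    800,  900.0),
    ("energy efficiency",  500,  400.0),
    ("fuel switching",     600, 1800.0),
]

# A MACC sorts options by marginal cost and accumulates abated quantity,
# yielding a step curve of cost versus cumulative abatement.
macc = []
cumulative = 0
for name, tons, cost in sorted(measures, key=lambda m: m[2]):
    cumulative += tons
    macc.append((cumulative, cost, name))

for cum, cost, name in macc:
    print(f"{cum:5d} t abated, marginal cost ${cost:,.0f}/t  ({name})")
```

Reading the curve left to right gives the cheapest path to any abatement target; the EPA records extend this end-of-pipe ranking with renewables, efficiency, and fuel switching as additional options.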

  1. Regional and sectoral marginal abatement cost curves for NOx incorporating controls, renewable electricity, energy efficiency and fuel switching

    EPA Science Inventory

    A marginal abatement cost curve (MACC) traces out the relationship between the quantity of pollution abated and the marginal cost of abating each additional unit. In the context of air quality management, MACCs typically are developed by sorting end-of-pipe controls by their resp...

  2. Marginal abatement cost curve for NOx incorporating controls, renewable electricity, energy efficiency and fuel switching

    EPA Science Inventory

    A marginal abatement cost curve (MACC) traces out the relationship between the quantity of pollution abated and the marginal cost of abating each additional unit. In the context of air quality management, MACCs typically are developed by sorting end-of-pipe controls by their resp...

  3. Marginal abatement cost curve for NOx incorporating controls, renewable electricity, energy efficiency and fuel switching

    EPA Science Inventory

    A marginal abatement cost curve (MACC) traces out the relationship between the quantity of pollution abated and the marginal cost of abating each additional unit. In the context of air quality management, MACCs typically are developed by sorting end-of-pipe controls by their rela...

  4. A New Curve of Critical Nitrogen Concentration Based on Spike Dry Matter for Winter Wheat in Eastern China.

    PubMed

    Zhao, Ben; Ata-Ui-Karim, Syed Tahir; Yao, Xia; Tian, YongChao; Cao, WeiXing; Zhu, Yan; Liu, XiaoJun

    2016-01-01

    Diagnosing the status of crop nitrogen (N) helps to optimize crop yield, improve N use efficiency, and reduce the risk of environmental pollution. The objectives of the present study were to develop a critical N (Nc) dilution curve for winter wheat (based on spike dry matter [SDM] during the reproductive growth period), to compare this curve with the existing Nc dilution curve (based on plant dry matter [DM] of winter wheat), and to explore its ability to reliably estimate the N status of winter wheat. Four field experiments, using varied N fertilizer rates (0-375 kg ha^-1) and six cultivars (Yangmai16, Ningmai13, Ningmai9, Aikang58, Yangmai12, Huaimai 17), were conducted in the Jiangsu province of eastern China. Twenty plants from each plot were sampled to determine the SDM and spike N concentration (SNC) during the reproductive growth period. The spike Nc curve was described by Nc = 2.85×SDM^-0.17, with SDM ranging from 0.752 to 7.233 t ha^-1. The newly developed curve was lower than the Nc curve based on plant DM. The N nutrition index (NNI) for spike dry matter ranged from 0.62 to 1.1 during the reproductive growth period across the seasons. Relative yield (RY) increased with increasing NNI; however, when NNI was greater than 0.96, RY plateaued and remained stable. The spike Nc dilution curve can be used to correctly identify the N nutrition status of winter wheat to support N management during the reproductive growth period for winter wheat in eastern China.
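The reported curve Nc = 2.85 × SDM^-0.17 and the N nutrition index can be applied directly. A small sketch using the abstract's fitted coefficients and validity range; the sample measurement is hypothetical.

```python
def critical_n(sdm):
    """Critical spike N concentration (%) from the paper's dilution curve
    Nc = 2.85 * SDM**-0.17, fitted for SDM between 0.752 and 7.233 t/ha."""
    if not 0.752 <= sdm <= 7.233:
        raise ValueError("SDM outside the fitted range of the curve")
    return 2.85 * sdm ** -0.17

def nni(snc, sdm):
    """N nutrition index: measured spike N concentration over the critical
    value; the study reports relative yield plateauing once NNI > 0.96."""
    return snc / critical_n(sdm)

# Hypothetical measurement: 2.4% spike N at 3.0 t/ha spike dry matter.
index = nni(snc=2.4, sdm=3.0)
print(f"NNI = {index:.2f}")
```

An NNI near or above 1 would indicate non-limiting N status, while values well below 0.96 would indicate deficiency under the study's relative-yield criterion.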

  5. BOX-COUNTING DIMENSION COMPUTED BY α-DENSE CURVES

    NASA Astrophysics Data System (ADS)

    García, G.; Mora, G.; Redtwitz, D. A.

    We introduce a method to reduce to the real case the calculation of the box-counting dimension of subsets of the unit cube I^n, n > 1. The procedure is based on the existence of special types of α-dense curves (a generalization of the space-filling curves) in I^n, called δ-uniform curves.
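For context, the quantity being computed here, the box-counting dimension, is the slope of log N(ε) versus log(1/ε), where N(ε) counts the grid cells of side ε that the set occupies. This sketch is the standard direct grid count in the unit square, not the paper's α-dense-curve reduction.

```python
import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Slope of log N(eps) vs log(1/eps): N(eps) counts the occupied cells
    of a k-by-k grid over the unit square (eps = 1/k)."""
    pts = np.asarray(points, dtype=float)
    log_n, log_inv_eps = [], []
    for k in scales:
        # Map each point to its grid cell index, clamping points on the border.
        cells = set(map(tuple, np.minimum((pts * k).astype(int), k - 1)))
        log_n.append(np.log(len(cells)))
        log_inv_eps.append(np.log(k))
    slope, _ = np.polyfit(log_inv_eps, log_n, 1)
    return float(slope)

# Sanity check: a densely sampled diagonal segment has dimension 1.
t = np.linspace(0.0, 1.0, 20000)
dim_est = box_counting_dimension(np.column_stack([t, t]))
print(f"estimated dimension: {dim_est:.2f}")
```

The paper's contribution is to avoid this n-dimensional grid count by traversing the set with δ-uniform curves, turning the computation into a one-dimensional one.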

  6. Titration Curves: Fact and Fiction.

    ERIC Educational Resources Information Center

    Chamberlain, John

    1997-01-01

    Discusses ways in which datalogging equipment can enable titration curves to be measured accurately and how computing power can be used to predict the shape of curves. Highlights include sources of error, use of spreadsheets to generate titration curves, titration of a weak acid with a strong alkali, dibasic acids, weak acid and weak base, and…

  7. Establishing the Learning Curve of Robotic Sacral Colpopexy in a Start-up Robotics Program.

    PubMed

    Sharma, Shefali; Calixte, Rose; Finamore, Peter S

    2016-01-01

    To determine the learning curve of the following segments of a robotic sacral colpopexy: preoperative setup, operative time, postoperative transition, and room turnover. A retrospective cohort study to determine the number of cases needed to reach points of efficiency in the various segments of a robotic sacral colpopexy (Canadian Task Force II-2). A university-affiliated community hospital. Women who underwent robotic sacral colpopexy at our institution from 2009 to 2013 comprise the study population. Patient characteristics and operative reports were extracted from a patient database that has been maintained since the inception of the robotics program at Winthrop University Hospital and electronic medical records. Based on additional procedures performed, 4 groups of patients were created (A-D). Learning curves for each of the segment times of interest were created using penalized basis spline (B-spline) regression. Operative time was further analyzed using an inverse curve and sequential grouping. A total of 176 patients were eligible. Nonparametric tests detected no difference in procedure times between the 4 groups (A-D) of patients. The preoperative and postoperative points of efficiency were 108 and 118 cases, respectively. The operative points of proficiency and efficiency were 25 and 36 cases, respectively. Operative time was further analyzed using an inverse curve that revealed that after 11 cases the surgeon had reached 90% of the learning plateau. Sequential grouping revealed no significant improvement in operative time after 60 cases. Turnover time could not be assessed because of incomplete data. There is a difference in the operative time learning curve for robotic sacral colpopexy depending on the statistical analysis used. The learning curve of the operative segment showed an improvement in operative time between 25 and 36 cases when using B-spline regression. 
    When the operative-time data were fitted to an inverse curve, the surgeon reached 90% of the learning plateau after 11 cases. Using sequential grouping to describe the data, no improvement in operative time was seen after 60 cases. Ultimately, we believe that efficiency in operative time is attained after 30 to 60 cases when performing robotic sacral colpopexy. The learning curve for preoperative setup and postoperative transition, which reflects anesthesia and nursing staff performance, was approximately 110 cases. Copyright © 2016 AAGL. Published by Elsevier Inc. All rights reserved.

  8. Meta-analysis of Diagnostic Accuracy and ROC Curves with Covariate Adjusted Semiparametric Mixtures.

    PubMed

    Doebler, Philipp; Holling, Heinz

    2015-12-01

    Many screening tests dichotomize a measurement to classify subjects. Typically a cut-off value is chosen in a way that allows identification of an acceptable number of cases relative to a reference procedure, but does not produce too many false positives at the same time. Thus for the same sample many pairs of sensitivities and false positive rates result as the cut-off is varied. The curve of these points is called the receiver operating characteristic (ROC) curve. One goal of diagnostic meta-analysis is to integrate ROC curves and arrive at a summary ROC (SROC) curve. Holling, Böhning, and Böhning (Psychometrika 77:106-126, 2012a) demonstrated that finite semiparametric mixtures can describe the heterogeneity in a sample of Lehmann ROC curves well; this approach leads to clusters of SROC curves of a particular shape. We extend this work with the help of the [Formula: see text] transformation, a flexible family of transformations for proportions. A collection of SROC curves is constructed that approximately contains the Lehmann family but in addition allows the modeling of shapes beyond the Lehmann ROC curves. We introduce two rationales for determining the shape from the data. Using the fact that each curve corresponds to a natural univariate measure of diagnostic accuracy, we show how covariate adjusted mixtures lead to a meta-regression on SROC curves. Three worked examples illustrate the method.
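The Lehmann family underlying this record has the simple form sens = FPR^θ with 0 < θ < 1. As a toy illustration only (the paper's actual method uses covariate-adjusted semiparametric mixtures and a flexible transformation family, not this naive pooling), with invented study data:

```python
import numpy as np

# Under a Lehmann model each study's (FPR, sensitivity) pair satisfies
# sens = fpr**theta; log(sens)/log(fpr) recovers a per-study theta.
fpr  = np.array([0.05, 0.10, 0.20, 0.30])
sens = np.array([0.55, 0.65, 0.75, 0.82])

thetas = np.log(sens) / np.log(fpr)
theta = thetas.mean()          # crude pooled estimate across studies

# Summary ROC curve: sensitivity at any chosen false positive rate.
sroc = lambda x: x ** theta
print(f"theta = {theta:.3f}, sens at FPR 0.15 = {sroc(0.15):.3f}")
```

Smaller θ corresponds to a more accurate test (the curve hugs the upper-left corner); the paper generalizes this one-parameter shape to curves beyond the Lehmann family.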

  9. Effective Discharge and Annual Sediment Yield on Brazos River

    NASA Astrophysics Data System (ADS)

    Rouhnia, M.; Salehi, M.; Keyvani, A.; Ma, F.; Strom, K. B.; Raphelt, N.

    2012-12-01

    The geometry of an alluvial river alters dynamically over time due to sediment mobilization on the banks and bottom of the river channel at various flow rates. Many researchers have tried to define a single representative discharge for these morphological processes, such as the "bank-full discharge", "effective discharge", and "channel-forming discharge". Effective discharge is the flow rate at which the most sediment load is carried by the water over a long-term period. This project aimed to develop effective discharge estimates for six gaging stations along the Brazos River from Waco, TX to Rosharon, TX. The project was performed in cooperation with the In-stream Flow Team of the Texas Water Development Board (TWDB). Project objectives were: 1) developing "flow duration curves" for the six stations based on mean-daily discharge, using additional data downloaded from the U.S. Geological Survey website; 2) developing "rating curves" for the six gaging stations after sampling and field measurements under three different flow conditions; 3) developing a smooth "sediment yield histogram" with a well-distinguished peak identifying the effective discharge. The effective discharge was calculated using two methods of bin selection, manual and automatic; the automatic method is based on kernel density approximation. Cross-sectional geometry measurements, particle size distributions, and water samples were processed in the laboratory to obtain the suspended sediment concentration associated with each flow rate. The rating curves showed acceptable trends: the greater the flow rate, the more sediment was carried by the water.
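The workflow described, flow-frequency histogram times a sediment rating curve, peaking at the effective discharge, can be sketched as follows. The flow record and the rating-curve coefficients are invented, and the bins are fixed-width rather than kernel-selected as in the project's automatic method.

```python
import numpy as np

# Hypothetical mean-daily discharges (m^3/s); lognormal, like many flow records.
rng = np.random.default_rng(1)
q = rng.lognormal(mean=3.0, sigma=0.8, size=5000)

# Sediment rating curve Qs = a * Q^b fitted from field samples (values assumed).
a, b = 0.05, 1.8
edges = np.linspace(q.min(), q.max(), 26)        # 25 equal-width bins
counts, _ = np.histogram(q, bins=edges)
mids = 0.5 * (edges[:-1] + edges[1:])

# Sediment yield per bin = frequency of flows in bin * transport at bin midpoint.
yield_per_bin = counts * a * mids ** b
effective_q = mids[np.argmax(yield_per_bin)]
print(f"effective discharge ≈ {effective_q:.1f} m^3/s")
```

The peak sits above the median flow but well below the largest floods: moderate flows recur often enough that their cumulative transport dominates, which is the point of the effective-discharge concept.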

  10. Mathematical modeling improves EC50 estimations from classical dose-response curves.

    PubMed

    Nyman, Elin; Lindgren, Isa; Lövfors, William; Lundengård, Karin; Cervin, Ida; Sjöström, Theresia Arbring; Altimiras, Jordi; Cedersund, Gunnar

    2015-03-01

    The β-adrenergic response is impaired in failing hearts. When studying β-adrenergic function in vitro, the half-maximal effective concentration (EC50) is an important measure of ligand response. We previously measured the in vitro contraction force response of chicken heart tissue to increasing concentrations of adrenaline, and observed a decreasing response at high concentrations. The classical interpretation of such data is to assume a maximal response before the decrease, and to fit a sigmoid curve to the remaining data to determine EC50. Instead, we have applied a mathematical modeling approach to interpret the full dose-response curve in a new way. The developed model predicts a non-steady state, caused by a short resting time between increasing agonist concentrations, which affects the dose-response characterization. Therefore, an improved estimate of EC50 may be calculated using steady-state simulations of the model. The model-based estimation of EC50 is further refined using additional time-resolved data to decrease the uncertainty of the prediction. The resulting model-based EC50 (180-525 nm) is higher than the classically interpreted EC50 (46-191 nm). Mathematical modeling thus makes it possible to re-interpret previously obtained datasets, and to make accurate estimates of EC50 even when steady-state measurements are not experimentally feasible. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database, and may be accessed at http://jjj.bio.vu.nl/database/nyman. © 2015 FEBS.
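The "classical interpretation" the abstract contrasts itself with is a sigmoid (Hill) fit to the rising part of the dose-response data. A minimal sketch of that baseline with invented force measurements (not the chicken-heart data, and not the authors' mechanistic model):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Classical sigmoid dose-response (Hill) curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

# Hypothetical contraction-force responses at increasing adrenaline doses (nM),
# keeping only the rising portion before any high-dose decline.
conc  = np.array([1, 3, 10, 30, 100, 300, 1000, 3000], dtype=float)
force = np.array([2, 4, 10, 28, 55, 82, 95, 99], dtype=float)

popt, _ = curve_fit(hill, conc, force,
                    p0=[1.0, 95.0, 100.0, 1.0], bounds=(0, np.inf))
bottom, top, ec50, n = popt
print(f"EC50 ≈ {ec50:.0f} nM (Hill n = {n:.2f})")
```

The paper's argument is that when the tissue has not reached steady state between dose steps, this classical fit underestimates EC50, and a dynamic model simulated to steady state gives a higher, more accurate value.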

  11. Manga Vectorization and Manipulation with Procedural Simple Screentone.

    PubMed

    Yao, Chih-Yuan; Hung, Shih-Hsuan; Li, Guo-Wei; Chen, I-Yu; Adhitya, Reza; Lai, Yu-Chi

    2017-02-01

    Manga are a popular artistic form around the world; artists use simple line drawing and screentone to create all kinds of interesting productions. Vectorization helps to digitally reproduce these elements for proper content and intention delivery on electronic devices. Therefore, this study aims at transforming scanned Manga into a vector representation for interactive manipulation and real-time rendering at arbitrary resolution. Our system first decomposes a patch into rough Manga elements, including possible borders and shading regions, using adaptive binarization and a screentone detector. We classify detected screentone into simple and complex patterns: our system extracts simple-screentone properties for refining screentone borders, estimating lighting, compensating for missing strokes inside screentone regions, and later resolution-independent rendering with our procedural shaders. The system treats the others as complex screentone areas and vectorizes them with our proposed line tracer, which aims at locating the boundaries of all shading regions and polishing all shading borders with a curve-based Gaussian refiner. A user can lay down simple scribbles to cluster Manga elements intuitively into semantic components, and our system vectorizes these components into shading meshes along with embedded Bézier curves as a unified foundation for consistent manipulation, including pattern manipulation, deformation, and lighting addition. Our system can render the shading regions in real time and resolution-independently with our procedural shaders, and can draw borders with the curve-based shader. For Manga manipulation, the proposed vector representation can not only be magnified without artifacts but also deformed easily to generate interesting results.

  12. [Vegetation index estimation by chlorophyll content of grassland based on spectral analysis].

    PubMed

    Xiao, Han; Chen, Xiu-Wan; Yang, Zhen-Yu; Li, Huai-Yu; Zhu, Han

    2014-11-01

    Comparing existing remote sensing methods for estimating chlorophyll content, this paper confirms that the vegetation index is one of the most practical and popular research approaches; in recent years, grassland degradation has also become an increasingly serious problem. This paper first analyzes the measured reflectance spectral curves and their first-derivative curves in the grasslands of Songpan, Sichuan and Gongger, Inner Mongolia, conducts correlation analysis between these two spectral curves and chlorophyll content, and identifies the relationship between REP (red edge position) and grassland chlorophyll content: the higher the chlorophyll content, the higher the REIP (red-edge inflection point) value. The paper then constructs GCI (grassland chlorophyll index) and selects the most suitable band for retrieval. Finally, it calculates the GCI from a satellite hyperspectral image and verifies the results, with accuracy analysis, against chlorophyll content data collected in two field experiments. The results show that, for grassland chlorophyll content, GCI is more sensitive than other chlorophyll indices and has higher estimation accuracy. GCI is here proposed for the first time to estimate grassland chlorophyll content and has wide application potential for remote sensing retrieval of grassland chlorophyll content. In addition, the remote-sensing-based estimation method in this paper provides new research ideas for estimating other vegetation biochemical parameters, evaluating vegetation growth status, and monitoring grassland ecological environment change.
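The red edge position used in this record is conventionally located at the maximum of the first derivative of the reflectance spectrum in the red-to-NIR transition. A sketch on a synthetic spectrum (the window bounds and the sigmoid spectrum are illustrative assumptions, not the paper's data):

```python
import numpy as np

def red_edge_position(wavelengths_nm, reflectance):
    """Red edge position: wavelength of the steepest reflectance rise
    (maximum first derivative) within the 680-750 nm window."""
    wl = np.asarray(wavelengths_nm, dtype=float)
    refl = np.asarray(reflectance, dtype=float)
    deriv = np.gradient(refl, wl)
    window = (wl >= 680) & (wl <= 750)
    idx = np.argmax(np.where(window, deriv, -np.inf))
    return wl[idx]

# Synthetic vegetation spectrum: low red reflectance rising to the NIR plateau.
wl = np.arange(650.0, 801.0, 1.0)
refl = 0.05 + 0.45 / (1.0 + np.exp(-(wl - 715.0) / 10.0))   # inflection at 715 nm
rep = red_edge_position(wl, refl)
print(f"REP = {rep:.0f} nm")
```

Higher chlorophyll content deepens red absorption and shifts this inflection toward longer wavelengths, which is the relationship the paper exploits.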

  13. Association of total mixed ration particle fractions retained on the Penn State Particle Separator with milk, fat, and protein yield lactation curves at the cow level.

    PubMed

    Caccamo, M; Ferguson, J D; Veerkamp, R F; Schadt, I; Petriglieri, R; Azzaro, G; Pozzebon, A; Licitra, G

    2014-01-01

    As part of a larger project aiming to develop management evaluation tools based on results from test-day (TD) models, the objective of this study was to examine the effect of physical composition of total mixed rations (TMR) tested quarterly from March 2006 through December 2008 on milk, fat, and protein yield curves for 25 herds in Ragusa, Sicily. A random regression sire-maternal grandsire model was used to estimate variance components for milk, fat, and protein yields fitted on a full data set, including 241,153 TD records from 9,809 animals in 42 herds recorded from 1995 through 2008. The model included parity, age at calving, year at calving, and stage of pregnancy as fixed effects. Random effects were herd × test date, sire and maternal grandsire additive genetic effect, and permanent environmental effect modeled using third-order Legendre polynomials. Model fitting was carried out using ASREML. Afterward, for the 25 herds involved in the study, 9 particle size classes were defined based on the proportions of TMR particles on the top (19-mm) and middle (8-mm) screen of the Penn State Particle Separator. Subsequently, the model with estimated variance components was used to examine the influence of TMR particle size class on milk, fat, and protein yield curves. An interaction was included with the particle size class and days in milk. The effect of the TMR particle size class was modeled using a ninth-order Legendre polynomial. Lactation curves were predicted from the model while controlling for TMR chemical composition (crude protein content of 15.5%, neutral detergent fiber of 40.7%, and starch of 19.7% for all classes), to have pure estimates of particle distribution not confounded by nutrient content of TMR. We found little effect of class of particle proportions on milk yield and fat yield curves. Protein yield was greater for sieve classes with 10.4 to 17.4% of TMR particles retained on the top (19-mm) sieve. 
Optimal distributions different from those recommended may reflect regional differences based on climate and types and quality of forages fed. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
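
The random-regression machinery described above expands days in milk onto low-order Legendre polynomials. As a minimal illustration (assuming a conventional 5–305-day lactation window for standardization, which the abstract does not specify), such a basis can be built with NumPy:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(dim, order=3, dim_min=5.0, dim_max=305.0):
    """Legendre polynomial basis P_0..P_order evaluated at days in milk (DIM)
    standardized to [-1, 1], as used in random-regression lactation models.
    The 5-305 day window is an assumed convention, not from the paper."""
    x = 2.0 * (np.asarray(dim, dtype=float) - dim_min) / (dim_max - dim_min) - 1.0
    return legendre.legvander(x, order)  # shape (len(dim), order + 1)

basis = legendre_basis([5.0, 155.0, 305.0])
print(basis.shape)  # (3, 4)
print(basis[0])     # at DIM = 5, x = -1, so P_j(-1) = (-1)^j
```

A third-order basis gives each animal four random-regression coefficients, one per polynomial, whose linear combination traces its lactation curve.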

  14. A Dirichlet process model for classifying and forecasting epidemic curves

    PubMed Central

    2014-01-01

    Background A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. Methods The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997–2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). Results We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. 
Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods’ performance was comparable. Conclusions Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial. PMID:24405642

  15. Electrofacies analysis for coal lithotype profiling based on high-resolution wireline log data

    NASA Astrophysics Data System (ADS)

    Roslin, A.; Esterle, J. S.

    2016-06-01

    The traditional approach to coal lithotype analysis is based on a visual characterisation of coal in core, mine or outcrop exposures. As not all wells are fully cored, the petroleum and coal mining industries increasingly use geophysical wireline logs for lithology interpretation. This study demonstrates a method for interpreting coal lithotypes from geophysical wireline logs, and in particular for discriminating between bright or banded coal and dull coal at similar densities to a decimetre level. The study explores the optimum combination of geophysical log suites for training the coal electrofacies interpretation using neural network concepts, and then propagates the results to wells with fewer wireline data. This approach is objective and has a recordable reproducibility and rule set. In addition to conventional gamma ray and density logs, laterolog resistivity, microresistivity and PEF data were used in the study. Array resistivity data from a compact micro imager (CMI tool) were processed into a single microresistivity curve and integrated with the conventional resistivity data in the cluster analysis. Microresistivity data were included to test the hypothesis that the improved vertical resolution of the microresistivity curve can enhance the accuracy of the clustering analysis. The addition of the PEF log allowed discrimination between low-density bright to banded coal electrofacies and low-density inertinite-rich dull electrofacies. The results of the clustering analysis were validated statistically, and the electrofacies results were compared to manually derived coal lithotype logs.

  16. A generic standard additions based method to determine endogenous analyte concentrations by immunoassays to overcome complex biological matrix interference.

    PubMed

    Pang, Susan; Cowen, Simon

    2017-12-13

    We describe a novel generic method to derive the unknown endogenous concentrations of analyte within complex biological matrices (e.g. serum or plasma) based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. This technique is based on the method of standard additions used for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
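
The numerical step described above can be illustrated with a toy grid search: choose the endogenous-concentration guess that makes signal versus log total concentration most linear. This is only a sketch on synthetic data, not the authors' algorithm or fitting code:

```python
import numpy as np

def estimate_endogenous(spikes, signals, c0_grid):
    """Grid search for the endogenous concentration c0 that makes the
    signal vs. log10(spike + c0) relationship most linear (highest R^2)."""
    best_c0, best_r2 = None, -np.inf
    for c0 in c0_grid:
        x = np.log10(spikes + c0)           # log of the estimated total concentration
        r = np.corrcoef(x, signals)[0, 1]   # Pearson correlation with the signal
        if r * r > best_r2:
            best_c0, best_r2 = c0, r * r
    return best_c0

# Synthetic test sample: signal is exactly linear in log10(spike + 5.0),
# so the search should recover an endogenous concentration near 5.0.
spikes = np.array([1.0, 2.0, 4.0, 8.0])      # four spiked reaction wells
signals = 2.0 * np.log10(spikes + 5.0) + 0.3
est = estimate_endogenous(spikes, signals, np.linspace(0.5, 20.0, 1000))
print(round(est, 1))  # 5.0
```

The four-well example mirrors the abstract's claim that as few as four reaction wells can suffice; a real assay would of course add noise and a full sigmoid model.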

  17. Compressibility of two polyvinyl siloxane interocclusal record materials and its effect on mounted cast relationships.

    PubMed

    Campos, A A; Nathanson, D

    1999-10-01

    Addition silicones (polyvinyl siloxanes) are universally accepted as accurate and stable impression materials. They have also gained popularity as interocclusal record materials. However, it has not been established whether it is possible to work with polyvinyl siloxanes without changing the recorded maxillomandibular relations. This study examined the compressibility of 2 addition silicones as interocclusal record materials, analyzing the changes of maxillomandibular relations at the condyle region when different compressive forces are used to stabilize articulated casts. Sixteen interocclusal records, obtained from the same patient (8 of each polyvinyl siloxane, Blu-Mousse, Fast Set), were interposed between the patient casts in a new measuring system, yielding 48 curves of load versus maxillomandibular positional changes in 3 axes (x, y, z). These curves were compared with curves obtained with the casts in maximum intercuspation without interocclusal records (reference curves). Analysis of variance was used to compare maxillomandibular positional changes among the 3 groups (n = 48 each): Blu-Mousse, Fast Set, and control group or maximum intercuspation without interocclusal record. There was no significant change in maxillomandibular relations when forces up to 1 kgf were applied to stabilize the casts related by means of Blu-Mousse and Fast Set addition silicone interocclusal records. It is possible to use these polyvinyl siloxanes as interocclusal record materials without changing the recorded maxillomandibular relations.

  18. An approach to bioassessment of water quality using diversity measures based on species accumulative curves.

    PubMed

    Xu, Guangjian; Zhang, Wei; Xu, Henglong

    2015-02-15

    Traditional community-based bioassessment is time-consuming because it relies on full species-abundance data of a community. To improve bioassessment efficiency, the feasibility of diversity measures based on species accumulative curves for bioassessment of water quality status was studied using a dataset of microperiphyton fauna. The results showed that: (1) the species accumulative curves fitted the Michaelis-Menten equation well; (2) the β- and γ-diversity, as well as the number of samples to 50% of the maximum species number (Michaelis-Menten constant K), can be statistically estimated based on the formulation; (3) the rarefied α-diversity showed a significant negative correlation with the changes in the nutrient NH4-N; and (4) the estimated β-diversity and the K constant were significantly positively related to the concentration of NH4-N. The results suggest that the diversity measures based on species accumulative curves might be used as a potential bioindicator of water quality in marine ecosystems. Copyright © 2014 Elsevier Ltd. All rights reserved.
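
The Michaelis-Menten fit in result (1) can be sketched with a Lineweaver-Burk linearization; note the fitting method here is an assumption, since the abstract does not state how the equation was fitted:

```python
import numpy as np

def fit_michaelis_menten(n, s):
    """Fit S(n) = S_max * n / (K + n) via the Lineweaver-Burk linearization:
    1/S = (K/S_max) * (1/n) + 1/S_max, a straight line in 1/n."""
    slope, intercept = np.polyfit(1.0 / n, 1.0 / s, 1)
    s_max = 1.0 / intercept
    k = slope * s_max          # K = number of samples to reach 50% of S_max
    return s_max, k

# Hypothetical accumulation data: 40 total species, K = 3 samples (noise-free)
n = np.arange(1.0, 11.0)
s = 40.0 * n / (3.0 + n)
s_max, k = fit_michaelis_menten(n, s)
print(round(s_max, 2), round(k, 2))  # 40.0 3.0
```

On noisy field data a nonlinear least-squares fit would be preferable, since the reciprocal transform inflates the weight of small counts.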

  19. High-efficient full-duplex WDM-RoF system with sub-central station

    NASA Astrophysics Data System (ADS)

    Liu, Anliang; Yin, Hongxi; Wu, Bin

    2018-05-01

    With an additional sub-central station (S-CS), a highly efficient full-duplex radio-over-fiber (RoF) system compatible with wavelength-division-multiplexing technology is proposed and experimentally demonstrated in this paper. To improve the dispersion tolerance of the RoF system, a baseband data format is employed for the downlink and an all-optical down-conversion approach for the uplink. In addition, this RoF system not only makes full use of the fiber link resources but also realizes upstream transmission without any local light sources at remote base stations (BSs). A 10-GHz RoF experimental system with 1.25-Gb/s bidirectional transmission is established based on the S-CS structure. The feasibility and reliability of this RoF system are verified through experimentally obtained eye diagrams and bit error rate (BER) curves.

  20. Lorenz curve of a light beam: evaluating beam quality from a majorization perspective.

    PubMed

    Porras, Miguel A; Gonzalo, Isabel; Ahmir Malik, M

    2017-08-01

    We introduce a novel approach for the characterization of the quality of a laser beam that is not based on particular criteria for beam width definition. The Lorenz curve of a light beam is a sophisticated version of the so-called power-in-the-bucket curve, formed by the partial sums of discretized joint intensity distribution in the near and far fields, sorted in decreasing order. According to majorization theory, a higher Lorenz curve implies that all measures of spreading in phase space, and, in particular, all Rényi (and Shannon) entropy-based measures of the beam width products in near and far fields, are unanimously smaller, providing a strong assessment of a better beam quality. Two beams whose Lorenz curves intersect can be considered of relatively better or lower quality only according to specific criteria, which can be inferred from the plot of the respective Lorenz curves.
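
The construction described above reduces to three array operations: normalize the discretized intensity distribution, sort it in decreasing order, and take partial sums. A minimal sketch:

```python
import numpy as np

def lorenz_curve(intensity):
    """Lorenz curve of a discretized (joint near/far-field) intensity
    distribution: normalize, sort in decreasing order, take partial sums."""
    p = np.asarray(intensity, dtype=float).ravel()
    p = p / p.sum()              # normalize to a probability distribution
    p_sorted = np.sort(p)[::-1]  # decreasing order
    return np.cumsum(p_sorted)   # partial sums form the Lorenz curve

# A more concentrated distribution majorizes a flatter one: its Lorenz
# curve lies above everywhere, i.e. the "better quality" case.
concentrated = lorenz_curve([8, 1, 1, 0])
flat = lorenz_curve([3, 3, 2, 2])
print(np.all(concentrated >= flat))  # True
```

When two curves cross, as the abstract notes, neither distribution majorizes the other and a quality ranking requires an additional criterion.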

  1. 4963 Kanroku: Asteroid with a possible precession of rotation axis

    NASA Astrophysics Data System (ADS)

    Sokova, Iraida A.; Marchini, Alessandro; Franco, Lorenzo; Papini, Riccardo; Salvaggio, Fabio; Palmas, Teodora; Sokov, Eugene N.; Garlitz, Joe; Knight, Carl R.; Bretton, Marc

    2018-04-01

    Based on photometric observations of 4963 Kanroku as part of a campaign to measure its light-curve, changes of the light-curve profile have been detected. These changes are of a periodic nature, i.e. the profiles change with a detected period P = 16.4032 h. Based on simulations of the shape of the asteroid and using observational data, we make the assumption that such changes of the light-curve of the asteroid could be caused by the existence of a precession force acting on the axis of rotation of the asteroid. In simulations of the 4963 Kanroku light-curve that take into account the detected precession and the shape parameters of the asteroid, the modeled light-curves are in good agreement with those obtained from the observation campaign. Thus, the detected precession force may indicate a possible satellite of the asteroid 4963 Kanroku.

  2. Polymorphic site index curves for red fir in California and southern Oregon

    Treesearch

    K. Leroy Dolph

    1991-01-01

    Polymorphic site index curves were developed from stem analysis data of 194 dominant red fir trees in California and southern Oregon. Site index was based on breast-height age and total tree height, with a base age of 50 years at breast height. Site index curves for breast height ages 10 to 160 years are presented for approximate estimates of site index. For more...

  3. SU-F-J-178: A Computer Simulation Model Observer for Task-Based Image Quality Assessment in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; Mutic, S; Anastasio, M

    Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a singular value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). 
Framework flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and therefore has great potential for both assessment and optimization within radiation therapy.
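
The AUTOC figure of merit is simply the area under a TOC curve; a generic trapezoidal-rule sketch (the sample curves below are hypothetical, not the study's data):

```python
import numpy as np

def autoc(ntcp, tcp):
    """Area under a therapeutic operating characteristic (TOC) curve,
    integrated by the trapezoidal rule along increasing NTCP."""
    x, y = np.asarray(ntcp, dtype=float), np.asarray(tcp, dtype=float)
    order = np.argsort(x)            # sort sample points by the NTCP axis
    x, y = x[order], y[order]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical TOC samples: a curve lying closer to the top-left corner
# (high tumor control at low normal-tissue complication) has larger AUTOC.
x = np.linspace(0.0, 1.0, 101)
print(round(autoc(x, x), 6))               # diagonal TOC: 0.5
print(autoc(x, np.sqrt(x)) > autoc(x, x))  # True
```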

  4. Pressure-volume (P-V) curves in Atriplex nummularia Lindl. for evaluation of osmotic adjustment and water status under saline conditions.

    PubMed

    Teixeira Lins, Cíntia Maria; Rodrigues de Souza, Edivan; Farias de Melo, Hidelblandi; Silva Souza Paulino, Martha Katharinne; Dourado Magalhães, Pablo Rugero; Yago de Carvalho Leal, Lucas; Bentzen Santos, Hugo Rafael

    2018-03-01

    The survival of Atriplex nummularia plants in saline environments is possible mainly due to the presence of salt-accumulating epidermal vesicles. Commonly, destructive methods, such as plant material maceration and subsequent reading in osmometers, are employed in studies on water relations and osmotic adjustment and are inconvenient due to their underestimation of the total water potential inside the cells, which can cause overestimation of an osmotic adjustment that is not present. As a result, methods that preserve leaf structure, such as pressure-volume (P-V) curves, which take into consideration only the salts that compose the symplastic solution, are more adequate. Thus, the main objectives of this study were to compare methods for determining osmotic potential (Ψo) in Atriplex nummularia through destructive and leaf structure-preserving techniques and to determine the water relations of the species under increasing NaCl concentrations. Plants were subjected to daily irrigations, maintaining soil moisture at 80% of field capacity, with solutions of increasing NaCl concentration (0, 0.05, 0.1, 0.2, 0.25 and 0.3 M) for 84 days. Water potential, osmotic potential and osmotic adjustment were determined. In addition, P-V curves were constructed using pressure chambers. Water and osmotic potentials decreased linearly with increasing NaCl concentration in the irrigation solution. The main discrepancies observed were related to the osmotic adjustments determined through maceration and P-V curves. Based on the present research, it was possible to conclude that in studies with species that have salt-accumulating vesicles in the epidermis, such as the plants in the genus Atriplex, constructing P-V curves is more adequate than destructive methods. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  5. Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions.

    PubMed

    Joshi, Gaurav V; Duan, Yuanyuan; Della Bona, Alvaro; Hill, Thomas J; St John, Kenneth; Griggs, Jason A

    2013-11-01

    To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n=30) with polished (15 μm) or air abraded surface were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n=44) and 10 Hz (n=36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37°C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To prove the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of different variables, such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. There was an increase in fracture toughness values with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). The values for the fracture toughness ranged from 0.75 to 1.1 MPa·m^(1/2), reaching a plateau at different critical flaw sizes based on loading method. Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution was dependent on the flaw distribution, and it fit well to an exclusive flaw model. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  6. Fatigue loading and R-curve behavior of a dental glass-ceramic with multiple flaw distributions

    PubMed Central

    Joshi, Gaurav V.; Duan, Yuanyuan; Bona, Alvaro Della; Hill, Thomas J.; John, Kenneth St.; Griggs, Jason A.

    2013-01-01

    Objectives To determine the effects of surface finish and mechanical loading on the rising toughness curve (R-curve) behavior of a fluorapatite glass-ceramic (IPS e.max ZirPress) and to determine a statistical model for fitting fatigue lifetime data with multiple flaw distributions. Materials and Methods Rectangular beam specimens were fabricated by pressing. Two groups of specimens (n=30) with polished (15 μm) or air abraded surface were tested under rapid monotonic loading in oil. Additional polished specimens were subjected to cyclic loading at 2 Hz (n=44) and 10 Hz (n=36). All fatigue tests were performed using a fully articulated four-point flexure fixture in 37°C water. Fractography was used to determine the critical flaw size and estimate fracture toughness. To prove the presence of R-curve behavior, non-linear regression was used. Forward stepwise regression was performed to determine the effects on fracture toughness of different variables, such as initial flaw type, critical flaw size, critical flaw eccentricity, cycling frequency, peak load, and number of cycles. Fatigue lifetime data were fit to an exclusive flaw model. Results There was an increase in fracture toughness values with increasing critical flaw size for both loading methods (rapid monotonic loading and fatigue). The values for the fracture toughness ranged from 0.75 to 1.1 MPa·m1/2 reaching a plateau at different critical flaw sizes based on loading method. Significance Cyclic loading had a significant effect on the R-curve behavior. The fatigue lifetime distribution was dependent on the flaw distribution, and it fit well to an exclusive flaw model. PMID:24034441

  7. Effectiveness of Treatment of Idiopathic Scoliosis by SpineCor Dynamic Bracing with Special Physiotherapy Programme in SpineCor System.

    PubMed

    Rożek, Karina; Potaczek, Tomasz; Zarzycka, Maja; Lipik, Ewa; Jasiewicz, Barbara

    2016-10-28

    The SpineCor dynamic brace for the treatment of idiopathic scoliosis is designed to maintain the correct position of the spine and promote a new movement strategy, worn for 20 hours per day. The SpineCor exercise system intensifies and complements the brace treatment. This study evaluated the effectiveness of a comprehensive treatment of idiopathic scoliosis involving the SpineCor system. The study assessed a group of 40 patients (38 girls and 2 boys) with idiopathic scoliosis treated with the SpineCor brace. The average age at beginning of treatment was 13.1 yrs (10-15). Minimum treatment time was 18 months. 28 participants met the SRS criteria. Curve angles before and after bracing were measured on imaging studies at the beginning and end of treatment, then analyzed and compared. Rehabilitation focused on teaching active corrective movement throughout the brace treatment. A control group was formed of 33 patients, including 21 meeting the SRS criteria, who used the SpineCor dynamic brace but did not participate in the associated exercise programme. Among patients from the exercise group who met the SRS criteria, 25% demonstrated reduced curve angles, 35.7% demonstrated curve progression and 39.3% showed stabilization (no change). Among patients meeting the SRS criteria from the control group, a decrease in curve angle was observed in 14.3% of the patients, curve progression in 57.1% and stabilization in 28.6%. 1. The addition of a dedicated physiotherapy programme to SpineCor dynamic bracing improves the chances of obtaining a positive outcome. 2. It is necessary to further analyse the course of the comprehensive treatment, also with regard to other types of braces and kinesiotherapy programmes.

  8. Idiopathic Early-Onset Scoliosis: Growing Rods Versus Vertically Expandable Prosthetic Titanium Ribs at 5-Year Follow-up.

    PubMed

    Bachabi, Malick; McClung, Anna; Pawelek, Jeff B; El Hawary, Ron; Thompson, George H; Smith, John T; Vitale, Michael G; Akbarnia, Behrooz A; Sponseller, Paul D

    2018-06-08

    Distraction-based techniques allow spinal growth until skeletal maturity while preventing curve progression. Two multicenter early-onset scoliosis databases were used to identify patients with idiopathic spine abnormalities treated with traditional growing rods (TGR) or vertically expandable titanium ribs (VEPTR). Patients underwent at least 4 lengthenings and had at least 5-year follow-up. Significance was set at P<0.05. In total, 50 patients treated with TGR and 22 treated with VEPTR were included. Mean (±SD) age at surgery was 5.5 (±2.0) years for the TGR group versus 4.3 (±1.9) years for the VEPTR group (P=0.044); other demographic parameters were similar. VEPTR patients had more procedures (mean 15±4.2) than TGR patients (mean 10±4.0) (P=0.001). Unilateral constructs were present in 18% (4 of 22) of VEPTR and 16% (8 of 50) of TGR patients. Bilateral constructs spanned a mean 2.1 additional surgical levels and exposed patients to 1.6 fewer procedures than unilateral constructs. Curve correction was similar between bilateral and unilateral constructs. TGR patients experienced greater curve correction (50%) than VEPTR patients (27%) (P<0.001) and achieved a greater percentage of thoracic height gain (24%) than VEPTR patients (12%) (P=0.024). At latest follow-up, TGR patients had better maintenance of curve correction, less kyphosis, and 15% greater absolute gain in thoracic height versus VEPTR patients. TGR patients had a lower rate of wound complications (14%) than VEPTR patients (41%) (P=0.011). In patients with idiopathic early-onset scoliosis, TGRs produced greater initial curve correction, greater thoracic height gains, less kyphosis, and lower incidence of wound complications compared with VEPTR. Level III.

  9. Creative Tiling: A Story of 1000-and-1 Curves

    ERIC Educational Resources Information Center

    Al-Darwish, Nasir

    2012-01-01

    We describe a procedure that utilizes symmetric curves for building artistic tiles. One particular curve was found to mesh nicely with hundreds of other curves, resulting in eye-catching tiling designs. The results of our work serve as a good example of using ideas from 2-D graphics and algorithms in a practical web-based application.

  10. Salt-induced aggregation and fusion of dioctadecyldimethylammonium chloride and sodium dihexadecylphosphate vesicles.

    PubMed Central

    Carmona-Ribeiro, A M; Chaimovich, H

    1986-01-01

    Small dioctadecyldimethylammonium chloride (DODAC) vesicles prepared by sonication fuse upon addition of NaCl as detected by several methods (electron microscopy, trapped volume determinations, temperature-dependent phase transition curves, and osmometer behavior). In contrast, small sodium dihexadecyl phosphate (DHP) vesicles mainly aggregate upon NaCl addition as shown by electron microscopy and the lack of osmometer behavior. Scatter-derived absorbance changes of small and large DODAC or DHP vesicles as a function of time after salt addition were obtained for a range of NaCl or amphiphile concentrations. These changes were interpreted in accordance with a phenomenological model based upon fundamental light-scattering laws and simple geometrical considerations. Short-range hydration repulsion between DODAC (or DHP) vesicles is possibly the main energy barrier for the fusion process. PMID:3779002

  11. No evidence for an open vessel effect in centrifuge-based vulnerability curves of a long-vesselled liana (Vitis vinifera).

    PubMed

    Jacobsen, Anna L; Pratt, R Brandon

    2012-06-01

    Vulnerability to cavitation curves are used to estimate xylem cavitation resistance and can be constructed using multiple techniques. It was recently suggested that a technique that relies on centrifugal force to generate negative xylem pressures may be susceptible to an open vessel artifact in long-vesselled species. Here, we used custom centrifuge rotors to measure different sample lengths of 1-yr-old stems of grapevine to examine the influence of open vessels on vulnerability curves, thus testing the hypothesized open vessel artifact. These curves were compared with a dehydration-based vulnerability curve. Although samples differed significantly in the number of open vessels, there was no difference in the vulnerability to cavitation measured on 0.14- and 0.271-m-long samples of Vitis vinifera. Dehydration and centrifuge-based curves showed a similar pattern of declining xylem-specific hydraulic conductivity (K(s)) with declining water potential. The percentage loss in hydraulic conductivity (PLC) differed between dehydration and centrifuge curves and it was determined that grapevine is susceptible to errors in estimating maximum K(s) during dehydration because of the development of vessel blockages. Our results from a long-vesselled liana do not support the open vessel artifact hypothesis. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.
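
The percentage loss of hydraulic conductivity (PLC) underlying such vulnerability curves is a one-line calculation relative to maximum conductivity; a minimal sketch with made-up values:

```python
def percent_loss_conductivity(k_s, k_s_max):
    """PLC: percentage loss of xylem-specific hydraulic conductivity (K_s)
    relative to the maximum, fully hydrated value."""
    return 100.0 * (1.0 - k_s / k_s_max)

# Hypothetical example: a stem at one quarter of its maximum conductivity
print(percent_loss_conductivity(2.0, 8.0))  # 75.0
```

The abstract's caveat applies directly here: if vessel blockages during dehydration cause k_s_max to be underestimated, every PLC value computed from it is biased.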

  12. A unified degree day model describes survivorship of Copitarsia corruda Pogue & Simmons (Lepidoptera: Noctuidae) at different constant temperatures.

    PubMed

    Gómez, N N; Venette, R C; Gould, J R; Winograd, D F

    2009-02-01

    Predictions of survivorship are critical to quantify the probability of establishment by an alien invasive species, but survival curves rarely distinguish between the effects of temperature on development versus senescence. We report chronological and physiological age-based survival curves for a potentially invasive noctuid, recently described as Copitarsia corruda Pogue & Simmons, collected from Peru and reared on asparagus at six constant temperatures between 9.7 and 34.5 degrees C. Copitarsia spp. are not known to occur in the United States but are routinely intercepted at ports of entry. Chronological age survival curves differ significantly among temperatures. Survivorship at early age after hatch is greatest at lower temperatures and declines as temperature increases. Mean longevity was 220 (+/-13 SEM) days at 9.7 degrees C. Physiological age survival curves constructed with developmental base temperature (7.2 degrees C) did not correspond to those constructed with a senescence base temperature (5.9 degrees C). A single degree day survival curve with an appropriate temperature threshold based on senescence adequately describes survivorship under non-stress temperature conditions (5.9-24.9 degrees C).
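
The degree-day bookkeeping behind a unified survival curve is simple at a constant temperature: physiological age is chronological age times the temperature excess above the base. A sketch using the senescence base temperature from the abstract (the workflow itself is assumed, not the authors' code):

```python
T_BASE_SENESCENCE = 5.9  # degrees C, senescence threshold reported in the abstract

def degree_days(age_days, temp_c, t_base=T_BASE_SENESCENCE):
    """Physiological age in degree-days accumulated at a constant temperature
    (no accumulation below the base temperature)."""
    return age_days * max(temp_c - t_base, 0.0)

# The same physiological age is reached much sooner at warmer temperatures:
print(degree_days(220, 9.7))   # cool treatment: about 836 degree-days
print(degree_days(40, 24.9))   # warm treatment: about 760 degree-days
```

Plotting survivorship against this degree-day axis, rather than calendar days, is what lets the different constant-temperature treatments collapse onto a single curve.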

  13. A semiparametric separation curve approach for comparing correlated ROC data from multiple markers

    PubMed Central

    Tang, Liansheng Larry; Zhou, Xiao-Hua

    2012-01-01

    In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360

  14. Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    2016-12-08

    In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²_n. The objective function includes the inverse of an n by n matrix of covariances, W. The inverse of the W matrix has a closed-form solution. In addition, W⁻¹ is a tridiagonal matrix. The closed form and tridiagonal nature allow for a simpler expression of the objective function χ²_n. Minimization of this simpler expression will provide the optimal parameters for the fitted curve.
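
The computational payoff of a tridiagonal W⁻¹ can be shown directly: the quadratic form in the objective function collapses to diagonal and first-off-diagonal terms, costing O(n) instead of O(n²). An illustrative sketch (not the LANL implementation):

```python
import numpy as np

def chi2_tridiagonal(resid, diag, off):
    """Quadratic form r^T Winv r for a symmetric tridiagonal Winv:
    sum(d_i * r_i^2) + 2 * sum(e_i * r_i * r_{i+1}), an O(n) evaluation."""
    r = np.asarray(resid, dtype=float)
    return np.sum(diag * r**2) + 2.0 * np.sum(off * r[:-1] * r[1:])

# Check against the dense O(n^2) computation for a random tridiagonal Winv
rng = np.random.default_rng(0)
n = 5
d, e, r = rng.normal(size=n), rng.normal(size=n - 1), rng.normal(size=n)
winv = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
print(np.isclose(chi2_tridiagonal(r, d, e), r @ winv @ r))  # True
```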

  15. Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study.

    PubMed

    Zhang, Xiaoyong; Homma, Noriyasu; Ichiji, Kei; Takai, Yoshihiro; Yoshizawa, Makoto

    2015-05-01

    To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. A level set method (LSM)-based algorithm is developed to track tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary while the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve by using the tracking result in the previous frame and reuses the LSM to detect the tumor boundary in the subsequent frame so that the tracking processing can be continued without user intervention. The tracking algorithm is tested on three image datasets, including a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors' proposed method both in tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves a higher accuracy in tumor localization. 
In this paper, the authors presented a feasibility study of tracking tumor boundary in EPID images by using a LSM-based algorithm. Experimental results conducted on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor target. Compared with previous tracking methods, the authors' algorithm has the potential to improve the tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery, dose evaluation.

  16. Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoyong, E-mail: xiaoyong@ieee.org; Homma, Noriyasu, E-mail: homma@ieee.org; Ichiji, Kei, E-mail: ichiji@yoshizawa.ecei.tohoku.ac.jp

    2015-05-15

    Purpose: To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. Methods: A level set method (LSM)-based algorithm is developed to track tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary while the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve by using the tracking result in the previous frame and reuses the LSM to detect the tumor boundary in the subsequent frame so that the tracking processing can be continued without user intervention. The tracking algorithm is tested on three image datasets, including a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. Results: For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors’ proposed method both in tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves a higher accuracy in tumor localization.
Conclusions: In this paper, the authors presented a feasibility study of tracking tumor boundary in EPID images by using a LSM-based algorithm. Experimental results conducted on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor target. Compared with previous tracking methods, the authors’ algorithm has the potential to improve the tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery, dose evaluation.

  17. Applying Emax model and bivariate thin plate splines to assess drug interactions

    PubMed Central

    Kong, Maiying; Lee, J. Jack

    2014-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95% point-wise confidence interval as well as its 95% simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies. PMID:20036878

  18. Applying Emax model and bivariate thin plate splines to assess drug interactions.

    PubMed

    Kong, Maiying; Lee, J Jack

    2010-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95 per cent point-wise confidence interval as well as its 95 per cent simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies.
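
The Loewe additivity computation described above can be sketched numerically. In this illustration (not the authors' implementation; all function names and parameter values are hypothetical), each drug follows a sigmoid Emax model E(d) = Emax·d^h/(ED50^h + d^h), and the additive effect of a combination (d1, d2) is the effect E solving d1/D1(E) + d2/D2(E) = 1, found by bisection:

```python
def emax_inverse(effect, emax, ed50, hill=1.0):
    """Dose of a single drug producing `effect` under the sigmoid Emax
    model E(d) = emax * d**hill / (ed50**hill + d**hill)."""
    if not 0.0 < effect < emax:
        raise ValueError("effect must lie strictly between 0 and emax")
    return ed50 * (effect / (emax - effect)) ** (1.0 / hill)

def loewe_additive_effect(d1, d2, drug1, drug2, tol=1e-9):
    """Effect E of the combination (d1, d2) under Loewe additivity,
    i.e. the solution of d1/D1(E) + d2/D2(E) = 1, where Di(E) is the
    single-drug dose giving effect E. Solved by bisection, using the
    fact that the interaction index decreases as E increases."""
    upper = min(drug1["emax"], drug2["emax"]) - tol

    def index(e):
        return d1 / emax_inverse(e, **drug1) + d2 / emax_inverse(e, **drug2)

    lo, hi = tol, upper
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if index(mid) > 1.0:  # effect guess too low
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A handy sanity check: for two identical drugs, the Loewe additive effect of (d1, d2) reduces to the single-drug effect at dose d1 + d2.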

  19. Bayesian analysis of stage-fall-discharge rating curves and their uncertainties

    NASA Astrophysics Data System (ADS)

    Mansanarez, V.; Le Coz, J.; Renard, B.; Lang, M.; Pierrefeu, G.; Vauchel, P.

    2016-09-01

    Stage-fall-discharge (SFD) rating curves are traditionally used to compute streamflow records at sites where the energy slope of the flow is variable due to variable backwater effects. We introduce a model with hydraulically interpretable parameters for estimating SFD rating curves and their uncertainties. Conventional power functions for channel and section controls are used. The transition to a backwater-affected channel control is computed based on a continuity condition, solved either analytically or numerically. The practical use of the method is demonstrated with two real twin-gauge stations, the Rhône River at Valence, France, and the Guthusbekken stream at station 0003.0033, Norway. Those stations are typical of a channel control and a section control, respectively, when backwater-unaffected conditions apply. The performance of the method is investigated through sensitivity analysis to prior information on controls and to observations (i.e., available gaugings) for the station of Valence. These analyses suggest that precisely identifying SFD rating curves requires an adapted gauging strategy and/or informative priors. The Madeira River, one of the largest tributaries of the Amazon, provides a challenging case typical of large, flat, tropical river networks where bed roughness can also be variable in addition to slope. In this case, the difference in staff gauge reference levels must be estimated as another uncertain parameter of the SFD model. The proposed Bayesian method is a valuable alternative to the graphical and empirical techniques still proposed in hydrometry guidance and standards.
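
A simplified stage-fall-discharge computation can illustrate the kind of model being estimated. The sketch below is an assumption-laden illustration, not the authors' Bayesian model: it combines a conventional power-function channel control Q = a(h − b)^c with a variable-fall correction (F/F_ref)^m measured between twin gauges, and all parameter names are hypothetical.

```python
def sfd_discharge(h, h_aux, a, b, c, m=0.5, fall_ref=1.0):
    """Simplified stage-fall-discharge rating: a power-law channel
    control Q = a*(h - b)**c scaled by (F/fall_ref)**m, where the fall
    F = h - h_aux is measured between the twin gauges. The exponent
    m = 0.5 follows from a Manning-type friction law."""
    fall = h - h_aux
    if h <= b or fall <= 0.0:
        return 0.0  # below cease-to-flow stage, or no downstream fall
    return a * (h - b) ** c * (fall / fall_ref) ** m
```

When the fall equals the reference fall, the expression reduces to the conventional backwater-free power function.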

  20. Study on peak shape fitting method in radon progeny measurement.

    PubMed

    Yang, Jinmin; Zhang, Lei; Abdumomin, Kadir; Tang, Yushi; Guo, Qiuju

    2015-11-01

    Alpha spectrum measurement is one of the most important methods for measuring radon progeny concentration in the environment. However, the accuracy of this method is affected by peak tailing due to the energy losses of alpha particles. This article presents a peak shape fitting method that can overcome the peak tailing problem in most situations. On a typical measured alpha spectrum curve, consecutive peaks overlap even when their energies are not close to each other, and it is difficult to calculate the exact count of each peak. The peak shape fitting method uses a combination of Gaussian and exponential functions, which can depict the features of those peaks, to fit the measured curve. It provides the net counts of each peak explicitly, which are then used in the Kerr calculation procedure for radon progeny concentration measurement. The results show that the fitted curve agrees well with the measured curve, and the influence of peak tailing is reduced. The method was further validated by the agreement between radon equilibrium equivalent concentrations based on this method and the measured values of some commercial radon monitors, such as the EQF3220 and WLx. In addition, this method improves the accuracy of individual radon progeny concentration measurement. In particular, for the (218)Po peak, after eliminating the peak tailing influence, the calculated (218)Po concentration was reduced by 21%. © The Author 2015. Published by Oxford University Press. All rights reserved.
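
The peak model described above can be illustrated with an exponentially modified Gaussian mirrored so that the tail falls on the low-energy side, which is one common way to combine Gaussian and exponential components for alpha peaks. This is a hedged sketch: the paper's exact functional form may differ, and the net-count helper and parameter values are hypothetical.

```python
import math

def emg_low_tail(x, mu, sigma, lam):
    """Unit-area Gaussian peak with an exponential tail on the LOW side:
    the mirror image of the standard exponentially modified Gaussian,
    so the tail points toward lower energies as for degraded alphas."""
    xm = 2.0 * mu - x  # mirror about the peak position
    arg = (mu + lam * sigma ** 2 - xm) / (math.sqrt(2.0) * sigma)
    return 0.5 * lam * math.exp(0.5 * lam * (2.0 * mu + lam * sigma ** 2 - 2.0 * xm)) * math.erfc(arg)

def net_counts(amplitude, mu, sigma, lam, channels):
    """Net counts of one fitted peak: amplitude times the unit-area
    shape summed over the spectrum channels."""
    return amplitude * sum(emg_low_tail(c, mu, sigma, lam) for c in channels)
```

Because the shape has unit area, the fitted amplitude directly gives the net counts of an isolated peak, even where neighboring peaks overlap.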

  1. The Stroke Assessment of Fall Risk (SAFR): predictive validity in inpatient stroke rehabilitation.

    PubMed

    Breisinger, Terry P; Skidmore, Elizabeth R; Niyonkuru, Christian; Terhorst, Lauren; Campbell, Grace B

    2014-12-01

    To evaluate the relative accuracy of a newly developed Stroke Assessment of Fall Risk (SAFR) for classifying fallers and non-fallers, compared with a health system fall risk screening tool, the Fall Harm Risk Screen. Prospective quality improvement study conducted at an inpatient stroke rehabilitation unit at a large urban university hospital. Patients admitted for inpatient stroke rehabilitation (N = 419) with imaging or clinical evidence of ischemic or hemorrhagic stroke, between 1 August 2009 and 31 July 2010. Not applicable. Sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve for both scales' classifications, based on fall risk scores completed upon admission to inpatient stroke rehabilitation. A total of 68 (16%) participants fell at least once. The SAFR was significantly more accurate than the Fall Harm Risk Screen (p < 0.001), with an area under the curve of 0.73, positive predictive value of 0.29, and negative predictive value of 0.94. For the Fall Harm Risk Screen, the area under the curve was 0.56, positive predictive value was 0.19, and negative predictive value was 0.86. Sensitivity and specificity of the SAFR (0.78 and 0.63, respectively) were higher than those of the Fall Harm Risk Screen (0.57 and 0.48, respectively). An evidence-derived, population-specific fall risk assessment may more accurately predict fallers than a general fall risk screen for stroke rehabilitation patients. While the SAFR improves upon the accuracy of a general assessment tool, additional refinement may be warranted. © The Author(s) 2014.
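
The reported accuracy metrics can be reproduced from a 2×2 classification table and raw risk scores. A minimal sketch (illustrative only; the study's data are not reproduced here, and the function names are assumptions):

```python
def confusion_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of
    true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def auc_mann_whitney(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen faller scores higher than a
    randomly chosen non-faller, counting ties as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

The Mann-Whitney formulation avoids choosing thresholds explicitly, which is convenient when comparing two scales scored on different ranges.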

  2. High-resolution melting analysis (HRM) for differentiation of four major Taeniidae species in dogs Taenia hydatigena, Taenia multiceps, Taenia ovis, and Echinococcus granulosus sensu stricto.

    PubMed

    Dehghani, Mansoureh; Mohammadi, Mohammad Ali; Rostami, Sima; Shamsaddini, Saeedeh; Mirbadie, Seyed Reza; Harandi, Majid Fasihi

    2016-07-01

    Tapeworms of the genus Taenia include several species of important parasites with considerable medical and veterinary significance. Accurate identification of these species in dogs is the prerequisite of any prevention and control program. Here, we have applied an efficient method for differentiating four major taeniid species in dogs, i.e., Taenia hydatigena, T. multiceps, T. ovis, and Echinococcus granulosus sensu stricto. High-resolution melting (HRM) analysis is a simpler, less expensive, and faster technique than conventional DNA-based assays and enables detection of PCR amplicons in a closed system. Metacestode samples were collected from sheep at local abattoirs. All the isolates had already been identified by PCR-sequencing, and their sequence data were deposited in GenBank. Real-time PCR coupled with HRM analysis targeting the mitochondrial cox1 and ITS1 genes was used to differentiate taeniid species. Distinct melting curves were obtained from the ITS1 region, enabling accurate differentiation of the three Taenia species and E. granulosus in dogs. The HRM curves of the Taenia species and E. granulosus were clearly separated at Tm values of 85 to 87 °C. In addition, double-peak melting curves were produced in mixed infections. Cox1 melting curves were not decisive enough to distinguish the four taeniids. In this work, the efficiency of HRM analysis for differentiating four major taeniid species in dogs has been demonstrated using the ITS1 gene.
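
HRM differentiation rests on locating each amplicon's melting temperature, the point of steepest fluorescence loss. A minimal sketch of Tm extraction from a melt curve (illustrative; the function names are assumptions, not the instrument software, and the sigmoid generator merely stands in for normalized HRM data):

```python
import math

def synthetic_melt(temps, tm, width):
    """Sigmoidal fluorescence-vs-temperature curve with midpoint tm
    (a stand-in for normalized HRM data)."""
    return [1.0 / (1.0 + math.exp((t - tm) / width)) for t in temps]

def melting_temperature(temps, fluorescence):
    """Tm estimated as the temperature of steepest fluorescence loss,
    i.e. the maximum of -dF/dT from a central-difference derivative."""
    best_t, best_slope = temps[1], float("-inf")
    for i in range(1, len(temps) - 1):
        slope = -(fluorescence[i + 1] - fluorescence[i - 1]) / (temps[i + 1] - temps[i - 1])
        if slope > best_slope:
            best_t, best_slope = temps[i], slope
    return best_t
```

A mixed infection would show two local maxima of -dF/dT, the double-peak signature noted above.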

  3. Analysis of diffusion in curved surfaces and its application to tubular membranes.

    PubMed

    Klaus, Colin James Stockdale; Raghunathan, Krishnan; DiBenedetto, Emmanuele; Kenworthy, Anne K

    2016-12-01

    Diffusion of particles in curved surfaces is inherently complex compared with diffusion in a flat membrane, owing to the nonplanarity of the surface. The consequence of such nonplanar geometry on diffusion is poorly understood but is highly relevant in the case of cell membranes, which often adopt complex geometries. To address this question, we developed a new finite element approach to model diffusion on curved membrane surfaces based on solutions to Fick's law of diffusion and used this to study the effects of geometry on the entry of surface-bound particles into tubules by diffusion. We show that variations in tubule radius and length can distinctly alter diffusion gradients in tubules over biologically relevant timescales. In addition, we show that tubular structures tend to retain concentration gradients for a longer time compared with a comparable flat surface. These findings indicate that sorting of particles along the surfaces of tubules can arise simply as a geometric consequence of the curvature without any specific contribution from the membrane environment. Our studies provide a framework for modeling diffusion in curved surfaces and suggest that biological regulation can emerge purely from membrane geometry. © 2016 Klaus, Raghunathan, et al. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
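
The gradient-retention behavior described above can be reproduced qualitatively with a one-dimensional finite-difference model along a tubule axis. This is a simplified sketch, not the authors' finite element method on curved surfaces; it uses an explicit (FTCS) scheme with no-flux ends and arbitrary units.

```python
def diffuse_1d(conc, d_coeff, dx, dt, steps):
    """Explicit (FTCS) finite-difference diffusion along a tubule axis
    with no-flux ends. Stable when r = d_coeff*dt/dx**2 <= 0.5; the
    conservative end updates keep total mass exactly constant."""
    r = d_coeff * dt / dx ** 2
    if r > 0.5:
        raise ValueError("unstable time step: need d_coeff*dt/dx**2 <= 0.5")
    c = list(conc)
    for _ in range(steps):
        nxt = c[:]
        for i in range(1, len(c) - 1):
            nxt[i] = c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
        nxt[0] = c[0] + r * (c[1] - c[0])      # reflecting left end
        nxt[-1] = c[-1] + r * (c[-2] - c[-1])  # reflecting right end
        c = nxt
    return c
```

Loading half the domain and stepping forward shows the gradient relaxing but persisting over many steps, the qualitative behavior attributed to tubules above.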

  4. Multi-frequency properties of synthetic blazar radio light curves within the shock-in-jet scenario

    NASA Astrophysics Data System (ADS)

    Fromm, C. M.; Fuhrmann, L.; Perucho, M.

    2015-08-01

    Context. Blazars are among the most powerful extragalactic objects as a sub-class of active galactic nuclei. They launch relativistic jets and their emitted radiation shows strong variability across the entire electro-magnetic spectrum. The mechanisms producing the variability are still controversial, and different models have been proposed to explain the observed variations in multi-frequency blazar light curves. Aims: We investigate the capabilities of the classical shock-in-jet model to explain and reconstruct the observed evolution of flares in the turnover frequency - turnover flux density (νm-Sm) plane and their frequency dependent light curve parameters. With a detailed parameter space study, we provide the framework for future, detailed comparisons of observed flare signatures with the shock-in-jet scenario. Methods: Based on the shock model, we compute synthetic single-dish light curves at different radio frequencies (2.6 to 345 GHz) and for different physical conditions in a conical jet (e.g. magnetic field geometry and Doppler factor). From those we extract the slopes of the different energy loss stages within the (νm-Sm) plane and deduce the frequency dependence of different light curve parameters, such as flare amplitude, time scale, and cross-band delays. Results: The evolution of the Doppler factor along the jet has the strongest influence on the evolution of the flare and on the frequency dependent light curve parameters. The synchrotron stage can be hidden in the Compton or in the adiabatic stage, depending mainly on the evolution of the Doppler factor, which makes it difficult to detect its signature in observations. In addition, we show that the time lags between different frequencies can be used as an efficient tool to better constrain the physical properties of these objects. Appendix A is available in electronic form at http://www.aanda.org

  5. Search for light curve modulations among Kepler candidates. Three very low-mass transiting companions

    NASA Astrophysics Data System (ADS)

    Lillo-Box, J.; Ribas, A.; Barrado, D.; Merín, B.; Bouy, H.

    2016-07-01

    Context. Light curve modulations in the sample of Kepler planet candidates allow the disentangling of the nature of the transiting object by photometrically measuring its mass. This is possible by detecting the effects of the gravitational pull of the companion (ellipsoidal modulations) and, in some cases, the photometric imprints of the Doppler effect when observing in a broad band (Doppler beaming). Aims: We aim to photometrically unveil the nature of some transiting objects showing clear light curve modulations in the phase-folded Kepler light curve. Methods: We selected a subsample among the large crop of Kepler objects of interest (KOIs) based on their chances of showing detectable light curve modulations, i.e., close (a < 12 R⋆) and large (in terms of radius, according to their transit signal) candidates. We modeled their phase-folded light curves with consistent equations for the three effects, namely reflection, ellipsoidal, and beaming (known as REB modulations). Results: We provide detailed general equations for the fit of the REB modulations in the case of eccentric orbits. These equations are accurate to the photometric precisions achievable by current and forthcoming instruments and space missions. By using this mathematical apparatus, we find three close-in very low-mass companions (two of them in the brown dwarf mass domain) orbiting main-sequence stars (KOI-554, KOI-1074, and KOI-3728), and reject the planetary nature of the transiting objects (thus classifying them as false positives). Nevertheless, the detection of the REB modulations and the transit/eclipse signal allows the measurement of their mass and radius, which can provide important constraints for modeling their interiors, since just a few cases of low-mass eclipsing binaries are known. Additionally, these new systems can help to constrain the similarities in the formation process of the more massive and close-in planets (hot Jupiters), brown dwarfs, and very low-mass companions.
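
The three REB components can be sketched for a circular orbit as low-order harmonics of the orbital phase. This is a schematic model only (sign conventions depend on the chosen phase origin, here taken as inferior conjunction of the companion, and the amplitudes are free parameters); the paper's full equations also cover eccentric orbits.

```python
import math

def reb_model(phase, a_refl, a_ellip, a_beam):
    """Relative flux from reflection, ellipsoidal and beaming (REB)
    modulations for a circular orbit, with orbital phase in [0, 1) and
    phase 0 at inferior conjunction of the companion:
      reflection  ~ -cos(2*pi*phase)  (one maximum per orbit)
      ellipsoidal ~ -cos(4*pi*phase)  (maxima at both quadratures)
      beaming     ~  sin(2*pi*phase)  (asymmetric Doppler term)"""
    return (1.0
            - a_refl * math.cos(2.0 * math.pi * phase)
            - a_ellip * math.cos(4.0 * math.pi * phase)
            + a_beam * math.sin(2.0 * math.pi * phase))
```

The differing harmonic content is what lets a fit separate the three effects, and hence the companion mass, from a single phase-folded light curve.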

  6. A systematic evaluation of contemporary impurity correction methods in ITS-90 aluminium fixed point cells

    NASA Astrophysics Data System (ADS)

    da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham

    2017-06-01

    The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of the order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) have been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique have been obtained from three separate laboratories. In addition, a series of high-quality, long-duration freezing curves have been obtained for each cell, using three different high-quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves was then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well-defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.
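
The sum of individual estimates (SIE) correction named above amounts to summing, over all assayed impurities, the impurity's mole fraction times the slope of its liquidus line in the host metal. A minimal sketch (illustrative values only; real liquidus slopes must come from phase-diagram data for each impurity in aluminium):

```python
def sie_correction(impurities):
    """Sum of Individual Estimates (SIE): the total freezing-point
    shift is the sum over impurities of mole fraction times the
    liquidus slope of that impurity in the host metal (K per unit
    mole fraction). Negative slopes depress the freezing point."""
    return sum(mole_fraction * liquidus_slope
               for mole_fraction, liquidus_slope in impurities)
```

The corrected fixed-point temperature is then the observed value minus this shift; the hybrid SIE/Modified-OME method additionally bounds the contribution of impurities whose slopes are unknown.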

  7. Effects of chemical composition on the corrosion of dental alloys.

    PubMed

    Galo, Rodrigo; Ribeiro, Ricardo Faria; Rodrigues, Renata Cristina Silveira; Rocha, Luís Augusto; de Mattos, Maria da Glória Chiarello

    2012-01-01

    The aim of this study was to determine the effect of the oral environment on the corrosion of dental alloys with different compositions, using electrochemical methods. The corrosion rates were obtained from the current-potential curves and electrochemical impedance spectroscopy (EIS). The effect of artificial saliva on the corrosion of dental alloys was dependent on alloy composition. Dissolution of ions occurred in all tested dental alloys, and the results were strongly dependent on the overall alloy composition. Regarding the alloys containing nickel, the Ni-Cr and Ni-Cr-Ti alloys released 0.62 mg/L of Ni on average, while the Co-Cr dental alloy released between 0.01 and 0.03 mg/L of Co and Cr, respectively. The open-circuit potential stabilized at a higher level with lower deviation (standard deviation: Ni-Cr-6Ti = 32 mV/SCE and Co-Cr = 54 mV/SCE). The potentiodynamic curves showed that the Ni-based dental alloys with >70 wt% Ni had similar curves, and the Co-Cr dental alloy showed a low current density and hence a high resistance to corrosion compared with the Ni-based dental alloys. Some changes in microstructure were observed, which influenced the corrosion behavior of the alloys. The lower corrosion resistance also led to greater release of nickel ions into the medium. The quantity of Co ions released from the Co-Cr-Mo alloy was relatively small in the solutions. In addition, the quantity of Cr ions released into the artificial saliva from the Co-Cr alloy was lower than the Cr release from the Ni-based dental alloys.

  8. Micro-cone targets for producing high energy and low divergence particle beams

    DOEpatents

    Le Galloudec, Nathalie

    2013-09-10

    The present invention relates to micro-cone targets for producing high energy and low divergence particle beams. In one embodiment, the micro-cone target includes a substantially cone-shaped body including an outer surface, an inner surface, a generally flat and round, open-ended base, and a tip defining an apex. The cone-shaped body tapers along its length from the generally flat and round, open-ended base to the tip defining the apex. In addition, the outer surface and the inner surface connect the base to the tip, and the tip curves inwardly to define an outer surface that is concave, which is bounded by a rim formed at a juncture where the outer surface meets the tip.

  9. VizieR Online Data Catalog: Stellar surface gravity measures of KIC stars (Bastien+, 2016)

    NASA Astrophysics Data System (ADS)

    Bastien, F. A.; Stassun, K. G.; Basri, G.; Pepper, J.

    2016-04-01

    In our analysis we use all quarters from the Kepler mission except for Q0, and we only use the long-cadence light curves. Additionally, we only use the Pre-search Data Conditioning, Maximum A Posteriori (PDC-MAP) light curves, as further discussed in Section 3.4.1. (1 data file).

  10. Buckling Behavior of Long Anisotropic Plates Subjected to Elastically Restrained Thermal Expansion and Contraction

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.

    2004-01-01

    An approach for synthesizing buckling results for thin balanced and unbalanced symmetric laminates that are subjected to uniform heating or cooling and elastically restrained against thermal expansion or contraction is presented. This approach uses a nondimensional analysis for infinitely long, flexurally anisotropic plates that are subjected to combined mechanical loads. In addition, stiffness-weighted laminate thermal-expansion parameters and compliance coefficients are derived that are used to determine critical temperatures in terms of physically intuitive mechanical-buckling coefficients. Many results are presented for some common laminates that are intended to facilitate a structural designer's transition to the use of the generic buckling design curves. Several curves that illustrate the fundamental parameters used in the analysis are presented, for nine contemporary material systems, that provide physical insight into the buckling response in addition to providing useful design data. Examples are presented that demonstrate the use of the generic design curves.

  11. A New Curve of Critical Nitrogen Concentration Based on Spike Dry Matter for Winter Wheat in Eastern China

    PubMed Central

    Zhao, Ben; Ata-UI-Karim, Syed Tahir; Yao, Xia; Tian, YongChao; Cao, WeiXing; Zhu, Yan; Liu, XiaoJun

    2016-01-01

    Diagnosing the status of crop nitrogen (N) helps to optimize crop yield, improve N use efficiency, and reduce the risk of environmental pollution. The objectives of the present study were to develop a critical N (Nc) dilution curve for winter wheat (based on spike dry matter [SDM] during the reproductive growth period), to compare this curve with the existing Nc dilution curve (based on plant dry matter [DM] of winter wheat), and to explore its ability to reliably estimate the N status of winter wheat. Four field experiments, using varied N fertilizer rates (0–375 kg ha^(-1)) and six cultivars (Yangmai16, Ningmai13, Ningmai9, Aikang58, Yangmai12, Huaimai 17), were conducted in the Jiangsu province of eastern China. Twenty plants from each plot were sampled to determine the SDM and spike N concentration (SNC) during the reproductive growth period. The spike Nc curve was described by Nc = 2.85 × SDM^(-0.17), with SDM ranging from 0.752 to 7.233 t ha^(-1). The newly developed curve was lower than the Nc curve based on plant DM. The N nutrition index (NNI) for spike dry matter ranged from 0.62 to 1.1 during the reproductive growth period across the seasons. Relative yield (RY) increased with increasing NNI; however, when NNI was greater than 0.96, RY plateaued and remained stable. The spike Nc dilution curve can be used to correctly identify the N nutrition status of winter wheat to support N management during the reproductive growth period for winter wheat in eastern China. PMID:27732634
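
The curve and the N nutrition index reported above translate directly into code. A minimal sketch using the published relation Nc = 2.85 × SDM^(-0.17), valid for SDM between roughly 0.75 and 7.2 t/ha per the study (function names are illustrative):

```python
def critical_n_concentration(sdm):
    """Critical N concentration (%) from spike dry matter SDM (t/ha),
    using the curve reported above: Nc = 2.85 * SDM**-0.17."""
    return 2.85 * sdm ** -0.17

def n_nutrition_index(sdm, measured_snc):
    """NNI = measured spike N concentration / critical concentration.
    Per the study, relative yield plateaued once NNI exceeded ~0.96."""
    return measured_snc / critical_n_concentration(sdm)
```

An NNI below 1 flags N limitation; above 1 it suggests luxury N uptake.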

  12. Base flow of streams in the outcrop area of southeastern sand aquifer, South Carolina, Georgia, Alabama, and Mississippi

    USGS Publications Warehouse

    Stricker, Virginia

    1983-01-01

    The base flow component of streamflow was separated from hydrographs for unregulated streams in the Cretaceous and Tertiary clastic outcrop area of South Carolina, Georgia, Alabama, and Mississippi. The base flow values are used in estimating recharge to the sand aquifer. Relations developed between mean annual base flow and stream discharge at the 60- and 65-percent streamflow duration point can be used to approximate mean annual base flow in lieu of hydrograph separation methods for base flows above 10 cu ft/s. Base flow recession curves were used to derive estimates of hydraulic diffusivity of the aquifer which was converted to transmissivity using estimated specific yield. These base-flow-derived transmissivities are in general agreement with transmissivities derived from well data. The shape of flow duration curves of streams is affected by the lithology of the Coastal Plain sediments. Steep flow duration curves appear to be associated with basins underlain by clay or chalk where a low percentage of the discharge is base flow while flatter curves appear to be associated with basins underlain by sand and gravel where a high percentage of the discharge is base flow. (USGS)
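
Recession analysis of the kind used above starts from the exponential baseflow model Q(t) = Q0·exp(−t/k). A minimal sketch that recovers the recession constant k by least squares on ln Q (illustrative; the study's diffusivity derivation involves further steps not shown here):

```python
import math

def recession_constant(discharges, dt=1.0):
    """Characteristic recession time k (in units of dt) assuming the
    exponential baseflow model Q(t) = Q0 * exp(-t/k), estimated from a
    least-squares line through (t, ln Q)."""
    n = len(discharges)
    ts = [i * dt for i in range(n)]
    ys = [math.log(q) for q in discharges]
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    return -1.0 / slope
```

The recession constant, together with basin geometry and specific yield, is what links the falling limb of the hydrograph to aquifer diffusivity and transmissivity.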

  13. Gaussian decomposition of high-resolution melt curve derivatives for measuring genome-editing efficiency

    PubMed Central

    Zaboikin, Michail; Freter, Carl

    2018-01-01

    We describe a method for measuring genome editing efficiency from in silico analysis of high-resolution melt curve data. The melt curve data derived from amplicons of genome-edited or unmodified target sites were processed to remove the background fluorescent signal emanating from free fluorophore and then corrected for temperature-dependent quenching of fluorescence of double-stranded DNA-bound fluorophore. Corrected data were normalized and numerically differentiated to obtain the first derivatives of the melt curves. These were then mathematically modeled as a superposition of a minimal number of Gaussian components. Using Gaussian parameters determined by modeling of melt curve derivatives of unedited samples, we were able to model melt curve derivatives from genetically altered target sites, where the mutant population could be accommodated using an additional Gaussian component. From this, the proportion contributed by the mutant component in the target region amplicon could be accurately determined. Mutant component computations compared well with mutant frequency determination from next generation sequencing data. The results were also consistent with our earlier studies that used difference curve areas from high-resolution melt curves for determining the efficiency of genome-editing reagents. The advantage of the described method is that it does not require calibration curves to estimate the proportion of mutants in amplicons of genome-edited target sites. PMID:29300734
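
Once melt-curve derivatives are modeled as a sum of Gaussians, the mutant proportion follows from the analytic area of each component. A minimal sketch (hypothetical parameter values; the fitting step itself is omitted):

```python
import math

def gaussian_area(amplitude, sigma):
    """Analytic area under A*exp(-(x-mu)**2 / (2*sigma**2)):
    A * sigma * sqrt(2*pi); the mean does not enter."""
    return amplitude * sigma * math.sqrt(2.0 * math.pi)

def mutant_fraction(components, mutant_index):
    """Proportion of the melt-derivative signal carried by the Gaussian
    component attributed to the edited (mutant) population. Each
    component is (amplitude, mean, sigma)."""
    areas = [gaussian_area(a, s) for a, _mu, s in components]
    return areas[mutant_index] / sum(areas)
```

Because the areas are analytic, no numerical integration or calibration curve is needed once the Gaussian parameters are fitted.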

  14. ADDITIVITY ASSESSMENT OF TRIHALOMETHANE MIXTURES BY PROPORTIONAL RESPONSE ADDITION

    EPA Science Inventory

    If additivity is known or assumed, the toxicity of a chemical mixture may be predicted from the dose response curves of the individual chemicals comprising the mixture. As single chemical data are abundant and mixture data sparse, mixture risk methods that utilize single chemical...

  15. Single-aliquot EPR dosimetry of wallboard (drywall).

    PubMed

    Mistry, R; Thompson, J W; Boreham, D R; Rink, W J

    2011-11-01

    Electron paramagnetic resonance spectra and dose-response curves are presented for a variety of wallboard samples obtained from different manufacturing facilities, as well as for source gypsum and anhydrite. The intensity of the CO(3)(-) paramagnetic centre (G2) is enhanced with gamma radiation. Isothermal decay curves are used to propose annealing methods for the removal of the radiosensitive CO(3)(-) radical without affecting the unirradiated baseline. Post-irradiation annealing of wallboard prevents recuperation of the radiosensitive CO(3)(-) radical with additional irradiation. A single-aliquot additive dose procedure is developed that successfully measures test doses as low as 0.76 Gy.
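
The single-aliquot additive-dose procedure estimates the accrued (equivalent) dose by extrapolating a dose-response line to zero intensity. A minimal sketch assuming a linear response I = a·(D + De), the simplest additive-dose model (the paper's annealing and regeneration details are not reproduced here):

```python
def equivalent_dose(added_doses, intensities):
    """Additive-dose equivalent dose: fit the linear growth model
    I = a*(D + De) by least squares and extrapolate the line back to
    zero intensity; De = intercept/slope is the dose accrued before
    the laboratory irradiations."""
    n = len(added_doses)
    xbar = sum(added_doses) / n
    ybar = sum(intensities) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(added_doses, intensities))
             / sum((x - xbar) ** 2 for x in added_doses))
    intercept = ybar - slope * xbar
    return intercept / slope
```

With a single aliquot, all points come from the same sample after successive laboratory doses, which removes inter-aliquot normalization as an error source.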

  16. On the Relation of Setting and Early-Age Strength Development to Porosity and Hydration in Cement-Based Materials

    PubMed Central

    Lootens, Didier; Bentz, Dale P.

    2016-01-01

    Previous research has demonstrated a linear relationship between compressive strength (mortar cubes and concrete cylinders) and cumulative heat release normalized per unit volume of (mixing) water for a wide variety of cement-based mixtures at ages of 1 d and beyond. This paper utilizes concurrent ultrasonic reflection and calorimetry measurements to further explore this relationship from the time of specimen casting to 3 d. The ultrasonic measurements permit a continuous evaluation of thickening, setting, and strength development during this time period for comparison with the ongoing chemical reactions, as characterized by isothermal calorimetry measurements. Initially, the ultrasonic strength-heat release relation depends strongly on water-to-cement ratio, as well as admixture additions, with no universal behavior. Still, each individual strength-heat release curve is consistent with a percolation-based view of the cement setting process. However, beyond about 8 h for the systems investigated in the present study, the various strength-heat release curves merge towards a single relationship that broadly characterizes the development of strength as a function of heat released (fractional space filled), demonstrating that mortar and/or concrete strength at early ages can be effectively monitored using either ultrasonic or calorimetry measurements on small paste or mortar specimens. PMID:27046956

  17. On the Relation of Setting and Early-Age Strength Development to Porosity and Hydration in Cement-Based Materials.

    PubMed

    Lootens, Didier; Bentz, Dale P

    2016-04-01

    Previous research has demonstrated a linear relationship between compressive strength (mortar cubes and concrete cylinders) and cumulative heat release normalized per unit volume of (mixing) water for a wide variety of cement-based mixtures at ages of 1 d and beyond. This paper utilizes concurrent ultrasonic reflection and calorimetry measurements to further explore this relationship from the time of specimen casting to 3 d. The ultrasonic measurements permit a continuous evaluation of thickening, setting, and strength development during this time period for comparison with the ongoing chemical reactions, as characterized by isothermal calorimetry measurements. Initially, the ultrasonic strength-heat release relation depends strongly on water-to-cement ratio, as well as admixture additions, with no universal behavior. Still, each individual strength-heat release curve is consistent with a percolation-based view of the cement setting process. However, beyond about 8 h for the systems investigated in the present study, the various strength-heat release curves merge towards a single relationship that broadly characterizes the development of strength as a function of heat released (fractional space filled), demonstrating that mortar and/or concrete strength at early ages can be effectively monitored using either ultrasonic or calorimetry measurements on small paste or mortar specimens.

  18. Acoustic propagation in curved ducts with extended reacting wall treatment

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.

    1989-01-01

    A finite-element Galerkin formulation was employed to study the attenuation of acoustic waves propagating in two-dimensional S-curved ducts with absorbing walls without a mean flow. The reflection and transmission at the entrance and the exit of a curved duct were determined by coupling the finite-element solutions in the curved duct to the eigenfunctions of an infinite, uniform, hard-wall duct. In the frequency range where the duct height and acoustic wavelength are nearly equal, the effects of duct length, curvature (duct offset), and absorber thickness were examined. For a given offset in the curved duct, the length of the S-duct was found to significantly affect both the absorptive and reflective characteristics of the duct. A means of reducing the number of elements in the absorber region was also presented. In addition, for a curved duct, power attenuation contours were examined to determine conditions for maximum acoustic power absorption. Again, wall curvature was found to significantly affect the optimization process.

  19. The South Carolina bridge-scour envelope curves

    USGS Publications Warehouse

    Benedict, Stephen T.; Feaster, Toby D.; Caldwell, Andral W.

    2016-09-30

    The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a series of three field investigations to evaluate historical, riverine bridge scour in the Piedmont and Coastal Plain regions of South Carolina. These investigations included data collected at 231 riverine bridges, which led to the development of bridge-scour envelope curves for clear-water and live-bed components of scour. The application and limitations of the South Carolina bridge-scour envelope curves were documented in four reports, each report addressing selected components of bridge scour. The current investigation (2016) synthesizes the findings of these previous reports into a guidance manual providing an integrated procedure for applying the envelope curves. Additionally, the investigation provides limited verification for selected bridge-scour envelope curves by comparing them to field data collected outside of South Carolina from previously published sources. Although the bridge-scour envelope curves have limitations, they are useful supplementary tools for assessing the potential for scour at riverine bridges in South Carolina.

  20. Smooth time-dependent receiver operating characteristic curve estimators.

    PubMed

    Martínez-Camblor, Pablo; Pardo-Fernández, Juan Carlos

    2018-03-01

    The receiver operating characteristic curve is a popular graphical method often used to study the diagnostic capacity of continuous (bio)markers. When the considered outcome is a time-dependent variable, two main extensions have been proposed: the cumulative/dynamic receiver operating characteristic curve and the incident/dynamic receiver operating characteristic curve. In both cases, the main problem for developing appropriate estimators is the estimation of the joint distribution of the variables time-to-event and marker. As usual, different approximations lead to different estimators. In this article, the authors explore the use of a bivariate kernel density estimator which accounts for censored observations in the sample and produces smooth estimators of the time-dependent receiver operating characteristic curves. The performance of the resulting cumulative/dynamic and incident/dynamic receiver operating characteristic curves is studied by means of Monte Carlo simulations. Additionally, the influence of the choice of the required smoothing parameters is explored. Finally, two real applications are considered. An R package is also provided as a complement to this article.
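The cumulative/dynamic construction can be illustrated with a toy calculation that ignores censoring (handling censoring is precisely what the smoothed estimators above are designed for). At horizon t, cases are subjects with an event by t and controls are those still event-free; the AUC is then a Mann-Whitney statistic on the marker. All data below are simulated:

```python
import numpy as np

def cumulative_dynamic_auc(event_times, marker, t):
    """AUC of the cumulative/dynamic ROC at horizon t: cases are subjects
    with an event by time t, controls those event-free beyond t.
    (Censoring is ignored -- a deliberate simplification.)"""
    cases = marker[event_times <= t]
    controls = marker[event_times > t]
    # Probability that a random case outranks a random control (Mann-Whitney),
    # with ties counted as one half.
    diff = cases[:, None] - controls[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(1)
n = 2000
marker = rng.normal(size=n)
# Higher marker values give shorter times-to-event (exponential model).
event_times = rng.exponential(scale=np.exp(-marker))

print(round(cumulative_dynamic_auc(event_times, marker, t=1.0), 2))
```

Since the simulated marker is strongly prognostic, the AUC at t = 1 lands well above the uninformative value of 0.5.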

  1. Inverse Diffusion Curves Using Shape Optimization.

    PubMed

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  2. Closed loop engine control for regulating NOx emissions, using a two-dimensional fuel-air curve

    DOEpatents

    Bourn, Gary D.; Smith, Jack A.; Gingrich, Jess W.

    2007-01-30

    An engine control strategy that ensures that NOx emissions from the engine will be maintained at an acceptable level. The control strategy is based on a two-dimensional fuel-air curve, in which air manifold pressure (AMP) is a function of fuel header pressure and engine speed. The control strategy provides for closed loop NOx adjustment to a base AMP value derived from the fuel-air curve.
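The two-dimensional fuel-air curve amounts to a lookup surface that yields a target air manifold pressure (AMP) for each combination of fuel header pressure and engine speed, with a closed-loop NOx trim applied on top. The sketch below is a hypothetical illustration with an invented map and gain, not the patented implementation:

```python
import numpy as np

# Hypothetical 2-D fuel-air map: target air-manifold pressure (AMP, kPa)
# indexed by fuel header pressure (kPa, rows) and engine speed (rpm, cols).
fuel_axis = np.array([300.0, 400.0, 500.0])
speed_axis = np.array([800.0, 1000.0, 1200.0])
amp_table = np.array([[140.0, 150.0, 160.0],
                      [155.0, 165.0, 175.0],
                      [170.0, 180.0, 190.0]])

def base_amp(fuel_p, speed):
    """Bilinear interpolation into the fuel-air curve."""
    i = np.clip(np.searchsorted(fuel_axis, fuel_p) - 1, 0, len(fuel_axis) - 2)
    j = np.clip(np.searchsorted(speed_axis, speed) - 1, 0, len(speed_axis) - 2)
    fx = (fuel_p - fuel_axis[i]) / (fuel_axis[i + 1] - fuel_axis[i])
    fy = (speed - speed_axis[j]) / (speed_axis[j + 1] - speed_axis[j])
    top = amp_table[i, j] * (1 - fy) + amp_table[i, j + 1] * fy
    bot = amp_table[i + 1, j] * (1 - fy) + amp_table[i + 1, j + 1] * fy
    return top * (1 - fx) + bot * fx

def adjusted_amp(fuel_p, speed, nox_ppm, nox_target=50.0, gain=0.1):
    """Closed-loop trim: bias the base AMP in proportion to the NOx error."""
    return base_amp(fuel_p, speed) + gain * (nox_ppm - nox_target)

print(base_amp(350.0, 900.0))  # 152.5, the average of the four surrounding cells
```

At (350 kPa, 900 rpm) the query point sits at the center of its four surrounding cells, so the bilinear result is their average, 152.5 kPa; the NOx trim then nudges this setpoint up or down until measured emissions meet the target.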

  3. On the distribution of saliency.

    PubMed

    Berengolts, Alexander; Lindenbaum, Michael

    2006-12-01

    Detecting salient structures is a basic task in perceptual organization. Saliency algorithms typically mark edge-points with some saliency measure, which grows with the length and smoothness of the curve on which these edge-points lie. Here, we propose a modified saliency estimation mechanism that is based on probabilistically specified grouping cues and on curve length distributions. In this framework, the Shashua and Ullman saliency mechanism may be interpreted as a process for detecting the curve with maximal expected length. Generalized types of saliency naturally follow. We propose several specific generalizations (e.g., gray-level-based saliency) and rigorously derive the limitations on generalized saliency types. We then carry out a probabilistic analysis of expected length saliencies. Using ergodicity and asymptotic analysis, we derive the saliency distributions associated with the main curves and with the rest of the image. We then extend this analysis to finite-length curves. Using the derived distributions, we derive the optimal threshold on the saliency for discriminating between figure and background and bound the saliency-based figure-from-ground performance.

  4. Adaptive zero-tree structure for curved wavelet image coding

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Wang, Demin; Vincent, André

    2006-02-01

    We investigate the issue of efficient data organization and representation of the curved wavelet coefficients [curved wavelet transform (WT)]. We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. In the embedded zero-tree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT) coders, the parent-child relationship is defined in such a way that a parent has four children restricted to a square of 2×2 pixels; in the adaptive zero-tree structure, the parent-child relationship varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results using synthetic and natural images showed the effectiveness of the proposed adaptive zero-tree structure for encoding of the curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in terms of peak SNR (PSNR) compared to the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.

  5. Defining the learning curve of laparoendoscopic single-site Heller myotomy.

    PubMed

    Ross, Sharona B; Luberice, Kenneth; Kurian, Tony J; Paul, Harold; Rosemurgy, Alexander S

    2013-08-01

    Initial outcomes suggest laparoendoscopic single-site (LESS) Heller myotomy with anterior fundoplication provides safe, efficacious, and cosmetically superior outcomes relative to conventional laparoscopy. This study was undertaken to define the learning curve of LESS Heller myotomy with anterior fundoplication. One hundred patients underwent LESS Heller myotomy with anterior fundoplication. Symptom frequency and severity were scored using a Likert scale (0 = never/not bothersome to 10 = always/very bothersome). Symptom resolution, additional trocars, and complications were compared among patient quartiles. Median data are presented. Preoperative frequency/severity scores were: dysphagia = 10/8 and regurgitation = 8/7. Additional trocars were placed in 12 patients (10%), of whom all were in the first two quartiles. Esophagotomy/gastrotomy occurred in three patients. Postoperative complications occurred in 9 per cent. No conversions to "open" operations occurred. Length of stay was 1 day. Postoperative frequency/severity scores were: dysphagia = 2/0 and regurgitation = 0/0; scores were less than before myotomy (P < 0.001). There were no apparent scars, except where additional trocars were placed. LESS Heller myotomy with anterior fundoplication well palliates symptoms of achalasia with no apparent scar. Placement of additional trocars only occurred early in the experience. For surgeons proficient with the conventional laparoscopic approach, the learning curve of LESS Heller myotomy with anterior fundoplication is short and safe, because proficiency is quickly attained.

  6. Modeling Patterns of Activities using Activity Curves

    PubMed Central

    Dawadi, Prafulla N.; Cook, Diane J.; Schmitter-Edgecombe, Maureen

    2016-01-01

    Pervasive computing offers an unprecedented opportunity to unobtrusively monitor behavior and use the large amount of collected data to perform analysis of activity-based behavioral patterns. In this paper, we introduce the notion of an activity curve, which represents an abstraction of an individual’s normal daily routine based on automatically-recognized activities. We propose methods to detect changes in behavioral routines by comparing activity curves and use these changes to analyze the possibility of changes in cognitive or physical health. We demonstrate our model and evaluate our change detection approach using a longitudinal smart home sensor dataset collected from 18 smart homes with older adult residents. Finally, we demonstrate how big data-based pervasive analytics such as activity curve-based change detection can be used to perform functional health assessment. Our evaluation indicates that correlations do exist between behavior and health changes and that these changes can be automatically detected using smart homes, machine learning, and big data-based pervasive analytics. PMID:27346990

  7. Modeling Patterns of Activities using Activity Curves.

    PubMed

    Dawadi, Prafulla N; Cook, Diane J; Schmitter-Edgecombe, Maureen

    2016-06-01

    Pervasive computing offers an unprecedented opportunity to unobtrusively monitor behavior and use the large amount of collected data to perform analysis of activity-based behavioral patterns. In this paper, we introduce the notion of an activity curve , which represents an abstraction of an individual's normal daily routine based on automatically-recognized activities. We propose methods to detect changes in behavioral routines by comparing activity curves and use these changes to analyze the possibility of changes in cognitive or physical health. We demonstrate our model and evaluate our change detection approach using a longitudinal smart home sensor dataset collected from 18 smart homes with older adult residents. Finally, we demonstrate how big data-based pervasive analytics such as activity curve-based change detection can be used to perform functional health assessment. Our evaluation indicates that correlations do exist between behavior and health changes and that these changes can be automatically detected using smart homes, machine learning, and big data-based pervasive analytics.

  8. Dried blood spot analysis of creatinine with LC-MS/MS in addition to immunosuppressants analysis.

    PubMed

    Koster, Remco A; Greijdanus, Ben; Alffenaar, Jan-Willem C; Touw, Daan J

    2015-02-01

    In order to monitor creatinine levels or to adjust the dosage of renally excreted or nephrotoxic drugs, the analysis of creatinine in dried blood spots (DBS) could be a useful addition to DBS-based drug analysis. We developed a LC-MS/MS method for the analysis of creatinine in the same DBS extract that was used for the analysis of tacrolimus, sirolimus, everolimus, and cyclosporine A in transplant patients with the use of Whatman FTA DMPK-C cards. The method was validated using three different strategies: a seven-point calibration curve using the intercept of the calibration to correct for the natural presence of creatinine in reference samples, a one-point calibration curve at an extremely high concentration in order to diminish the contribution of the natural presence of creatinine, and the use of creatinine-[(2)H3] with an eight-point calibration curve. The validated range for creatinine was 120 to 480 μmol/L (seven-point calibration curve), 116 to 7000 μmol/L (one-point calibration curve), and 1.00 to 400.0 μmol/L for creatinine-[(2)H3] (eight-point calibration curve). The precision and accuracy results for all three validations showed a maximum CV of 14.0% and a maximum bias of -5.9%. Creatinine in DBS was found to be stable at ambient temperature and 32 °C for 1 week and at -20 °C for 29 weeks. Good correlations were observed between patient DBS samples and routine enzymatic plasma analysis, showing the capability of the DBS method to be used as an alternative to creatinine plasma measurement.
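The intercept-based correction for endogenous creatinine can be illustrated with a short calculation. When calibrators are prepared by spiking a blood matrix that already contains creatinine, the calibration line's intercept captures the endogenous response, and dividing it by the slope estimates the endogenous concentration. All concentrations and responses below are invented for the sketch, not taken from the validated method:

```python
import numpy as np

# Hypothetical spiked calibrators: known creatinine ADDED (umol/L) on top of
# the natural creatinine already present in the calibrator blood matrix.
added = np.array([0.0, 60.0, 120.0, 180.0, 240.0, 360.0, 480.0])
# Measured response ratios; the nonzero response at zero added reflects the
# endogenous creatinine in the matrix.
response = np.array([0.52, 0.82, 1.12, 1.43, 1.72, 2.33, 2.92])

slope, intercept = np.polyfit(added, response, 1)

# The intercept is the response of the endogenous creatinine alone, so the
# endogenous concentration is intercept / slope.
endogenous = intercept / slope

def concentration(resp):
    """Total creatinine in an unknown: invert the calibration line and add
    back the endogenous level the intercept accounted for."""
    return (resp - intercept) / slope + endogenous

print(round(endogenous, 1))
print(round(concentration(1.5), 1))
```

With these invented numbers the endogenous level comes out near 104 μmol/L, and an unknown with response 1.5 reads back as roughly 300 μmol/L total creatinine.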

  9. Design of airborne imaging spectrometer based on curved prism

    NASA Astrophysics Data System (ADS)

    Nie, Yunfeng; Xiangli, Bin; Zhou, Jinsong; Wei, Xiaoxiao

    2011-11-01

    A novel moderate-resolution imaging spectrometer spanning the visible to near-infrared wavelength range with a spectral resolution of 10 nm, which combines curved prisms with the Offner configuration, is introduced. Compared to conventional imaging spectrometers based on dispersive prisms or diffractive gratings, this design possesses the characteristics of small size, compact structure, and low mass, as well as little spectral line curve (smile) and spectral band curve (keystone or frown). Besides, the use of compound curved prisms with two or more different materials can greatly reduce the nonlinearity inevitably brought by prismatic dispersion. The utilization ratio of light radiation is much higher than in imaging spectrometers of the same type based on a combination of diffractive grating and concentric optics. In this paper, the Seidel aberration theory of curved prisms and the optical principles of the Offner configuration are described first. Then the optical design layout of the spectrometer is presented, and the performance of this design, including the spot diagram and MTF, is analyzed. Further, several types of telescope matching this system are provided. This work provides an innovative perspective on the optical design of airborne spectral imagers and can therefore offer theoretical guidance for imaging spectrometers of the same kind.

  10. Going Beyond, Going Further: The Preparation of Acid-Base Titration Curves.

    ERIC Educational Resources Information Center

    McClendon, Michael

    1984-01-01

    Background information, list of materials needed, and procedures used are provided for a simple technique for generating mechanically plotted acid-base titration curves. The method is suitable for second-year high school chemistry students. (JN)

  11. Development of regional curves relating bankfull-channel geometry and discharge to drainage area for streams in Pennsylvania and selected areas of Maryland

    USGS Publications Warehouse

    Chaplin, Jeffrey J.

    2005-01-01

    Natural-stream designs are commonly based on the dimensions of the bankfull channel, which is capable of conveying discharges that transport sediment without excessive erosion or deposition. Regional curves relate bankfull-channel geometry and discharge to drainage area in watersheds with similar runoff characteristics and commonly are utilized by practitioners of natural-stream design to confirm or refute selection of the field-identified bankfull channel. Data collected from 66 streamflow-gaging stations and associated stream reaches between December 1999 and December 2003 were used in one-variable ordinary least-squares regression analyses to develop regional curves relating drainage area to cross-sectional area, discharge, width, and mean depth of the bankfull channel. Watersheds draining to these stations are predominantly within the Piedmont, Ridge and Valley, and Appalachian Plateaus Physiographic Provinces of Pennsylvania and northern Maryland. Statistical analyses of physiography, percentage of watershed area underlain by carbonate bedrock, and percentage of watershed area that is glaciated indicate that carbonate bedrock, not physiography or glaciation, has a controlling influence on the slope of regional curves. Regional curves developed from stations in watersheds underlain by 30 percent or less carbonate bedrock generally had steeper slopes than the corresponding relations developed from watersheds underlain by greater than 30 percent carbonate bedrock. In contrast, there is little evidence to suggest that regional curves developed from stations in the Piedmont or Ridge and Valley Physiographic Province are different from the corresponding relations developed from stations in the Appalachian Plateaus Physiographic Province. 
On the basis of these findings, regional curves are presented to represent two settings that are independent of physiography: (1) noncarbonate settings characterized by watersheds with carbonate bedrock underlying 30 percent or less of watershed area, and (2) carbonate settings characterized by watersheds with carbonate bedrock underlying greater than 30 percent of watershed area. All regional curves presented in this report have slopes that are significantly different from zero and normally distributed residuals that vary randomly with drainage area. Drainage area explains the most variability in bankfull cross-sectional area and bankfull discharge in the noncarbonate setting (R2 = 0.92 for both). Less variability is explained in bankfull width and depth (R2 = 0.81 and 0.72, respectively). Regional curves representing the carbonate setting are generally not as statistically robust as the corresponding noncarbonate relations because there were only 11 stations available to develop these curves and drainage area cannot explain variance resulting from karst features. The carbonate regional curves generally are characterized by less confidence, lower R2 values, and higher residual standard errors. Poor representation by watersheds less than 40 mi2 causes the carbonate regional curves for bankfull discharge, cross-sectional area, and mean depth to be disproportionately influenced by the smallest watershed (values of Cook's Distance range from 3.6 to 8.4). Additional bankfull discharge and channel-geometry data from small watersheds might reduce this influence, increase confidence, and generally improve regional curves representing the carbonate setting. Limitations associated with streamflow-gaging station selection and development of the curves result in some constraints for the application of regional curves presented in this report. 
These curves apply only to streams within the study area in watersheds having land use, streamflow regulation, and drainage areas that are consistent with the criteria used for station selection. Regardless of the setting, the regional curves presented here are not intended for use as the sole method for estimation of bankfull characteristics; however, th
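The one-variable ordinary least-squares regressions behind such regional curves are conventionally fit in log-log space, so each curve is a power law in drainage area. A minimal sketch with invented station data (not the report's values):

```python
import numpy as np

# Hypothetical station data: drainage area (mi^2) and bankfull cross-sectional
# area (ft^2), following the power-law form used for regional curves:
# A_bf = a * DA^b, i.e. a straight line in log10-log10 space.
rng = np.random.default_rng(2)
drainage = np.array([2.0, 5.0, 12.0, 30.0, 75.0, 150.0, 320.0])
bankfull = 15.0 * drainage ** 0.7 * 10 ** rng.normal(0, 0.03, drainage.size)

# One-variable OLS fit in log space gives the exponent b and coefficient a.
x, y = np.log10(drainage), np.log10(bankfull)
b, log_a = np.polyfit(x, y, 1)

# Coefficient of determination for the fitted regional curve.
resid = y - (b * x + log_a)
r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

print(round(b, 2), round(10 ** log_a, 1), round(r2, 2))
```

Reading a bankfull estimate off the curve is then just `10 ** log_a * DA ** b`; the R² plays the role of the values (0.72 to 0.92) reported for the Pennsylvania and Maryland curves.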

  12. 46 CFR 173.025 - Additional intact stability standards: Counterballasted vessels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... of heel. T=angle of heel. EC01MR91.028 Where— GZ(1) is the righting arm curve at the displacement corresponding to the vessel without hooking load. GZ(2) is the righting arm curve at the displacement... of the hook load and the counterballast at the displacement with hook load. HA(2) is the heeling arm...

  13. 46 CFR 173.025 - Additional intact stability standards: Counterballasted vessels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of heel. T=angle of heel. EC01MR91.028 Where— GZ(1) is the righting arm curve at the displacement corresponding to the vessel without hooking load. GZ(2) is the righting arm curve at the displacement... of the hook load and the counterballast at the displacement with hook load. HA(2) is the heeling arm...

  14. 46 CFR 173.025 - Additional intact stability standards: Counterballasted vessels.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of heel. T=angle of heel. EC01MR91.028 Where— GZ(1) is the righting arm curve at the displacement corresponding to the vessel without hooking load. GZ(2) is the righting arm curve at the displacement... of the hook load and the counterballast at the displacement with hook load. HA(2) is the heeling arm...

  15. 46 CFR 173.025 - Additional intact stability standards: Counterballasted vessels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of heel. T=angle of heel. EC01MR91.028 Where— GZ(1) is the righting arm curve at the displacement corresponding to the vessel without hooking load. GZ(2) is the righting arm curve at the displacement... of the hook load and the counterballast at the displacement with hook load. HA(2) is the heeling arm...

  16. 46 CFR 173.025 - Additional intact stability standards: Counterballasted vessels.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of heel. T=angle of heel. EC01MR91.028 Where— GZ(1) is the righting arm curve at the displacement corresponding to the vessel without hooking load. GZ(2) is the righting arm curve at the displacement... of the hook load and the counterballast at the displacement with hook load. HA(2) is the heeling arm...

  17. The upper bound of abutment scour defined by selected laboratory and field data

    USGS Publications Warehouse

    Benedict, Stephen; Caldwell, Andral W.

    2015-01-01

    The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, conducted a field investigation of abutment scour in South Carolina and used that data to develop envelope curves defining the upper bound of abutment scour. To expand upon this previous work, an additional cooperative investigation was initiated to combine the South Carolina data with abutment-scour data from other sources and evaluate the upper bound of abutment scour with the larger data set. To facilitate this analysis, a literature review was made to identify potential sources of published abutment-scour data, and selected data, consisting of 446 laboratory and 331 field measurements, were compiled for the analysis. These data encompassed a wide range of laboratory and field conditions and represented field data from 6 states within the United States. The data set was used to evaluate the South Carolina abutment-scour envelope curves. Additionally, the data were used to evaluate a dimensionless abutment-scour envelope curve developed by Melville (1992), highlighting the distinct difference in the upper bound for laboratory and field data. The envelope curves evaluated in this investigation provide simple but useful tools for assessing the potential maximum abutment-scour depth in the field setting.

  18. A systematic methodology for creep master curve construction using the stepped isostress method (SSM): a numerical assessment

    NASA Astrophysics Data System (ADS)

    Miranda Guedes, Rui

    2018-02-01

    Long-term creep of viscoelastic materials is experimentally inferred through accelerating techniques based on the time-temperature superposition principle (TTSP) or on the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves from short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. The SSM further reduces the number of test specimens required to obtain the master curve, since only one specimen is necessary; the classical approach, using creep tests, demands at least one specimen per stress level to produce the set of creep curves upon which TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations to reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other represents an adhesive based on epoxy resin.
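The horizontal shifting that underlies TSSP-style master-curve construction can be sketched as follows. Creep curves measured at higher stresses are slid along the log-time axis until they superpose on the reference-stress curve; the recovered shifts are the (log) stress shift factors. The compliance law, stress levels, and shift factors below are invented for illustration and are not the SSM data-processing method proposed in the paper:

```python
import numpy as np

def compliance(t):
    """Toy creep-compliance law at the reference stress level."""
    return 1.0 + 0.4 * t ** 0.25

log_t = np.linspace(0.0, 3.0, 61)                  # short-term window, log10(s)
ref = compliance(10 ** log_t)                      # reference-stress curve

# Curves at higher stress are the reference curve accelerated in time:
# D_sigma(t) = D_ref(t / a_sigma).  Here log10(1/a_sigma) is 1 and 2.
accelerated = {60: compliance(10 ** (log_t + 1)),  # hypothetical 60 MPa data
               80: compliance(10 ** (log_t + 2))}  # hypothetical 80 MPa data

def best_shift(curve, candidates=np.linspace(0.0, 2.5, 251)):
    """Grid-search the log-time shift that best superposes `curve` onto ref."""
    errs = []
    for sh in candidates:
        mask = log_t + sh <= log_t[-1]             # stay inside the ref window
        pred = np.interp(log_t[mask] + sh, log_t, ref)
        errs.append(np.mean((curve[mask] - pred) ** 2))
    return candidates[int(np.argmin(errs))]

shifts = {s: best_shift(c) for s, c in accelerated.items()}
print(shifts)
```

The recovered shifts (about 1 and 2 decades) reproduce the factors used to generate the curves; appending each shifted curve after the reference extends the master curve far beyond the short-term test window.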

  19. Estimating site index of ponderosa pine in Northern California...standard curves, soil series, stem analysis

    Treesearch

    Robert F. Powers

    1972-01-01

    Four sets of standard site index curves based on statewide or regionwide averages were compared with data on natural growth from nine young stands of ponderosa pine in northern California. The curves tested were by Meyer; Dunning; Dunning and Reineke; and Arvanitis, Lindquist, and Palley. The effects of soils on height growth were also studied. Among the curves tested...

  20. Site index curves for northern hardwoods in northern Wisconsin and Upper Michigan.

    Treesearch

    Willard H. Carmean

    1978-01-01

    Site index curves based on stem analyses were computed for 13 species found in even-aged, second-growth northern hardwood stands. These curves showed that most species had similarly shaped height growth curves in early years, but after 40 years differences in both rate and pattern of growth between species were evident for trees growing on medium and good sites. Most...

  1. Nonlinear deformation of composites with consideration of the effect of couple-stresses

    NASA Astrophysics Data System (ADS)

    Lagzdiņš, A.; Teters, G.; Zilaucs, A.

    1998-09-01

    Nonlinear deformation of spatially reinforced composites under active loading (without unloading) is considered. All the theoretical constructions are based on the experimental data on unidirectional and ±π/4 cross-ply epoxy plastics reinforced with glass fibers. Based on the elastic properties of the fibers and EDT-10 epoxy binder, the linear elastic characteristics of a transversely isotropic unidirectionally reinforced fiberglass plastic are found, whereas the nonlinear characteristics are obtained from experiments. For calculating the deformation properties of the ±π/4 cross-ply plastic, a refined version of the Voigt method is applied that also takes into account the couple-stresses arising in the composite due to relative rotation of the reinforcement fibers. In addition, a fourth-rank damage tensor is introduced in order to account for the impact of fracture caused by the couple-stresses. The unknown constants are found from the experimental uniaxial tension curve for the cross-ply composite. The comparison between the computed curves and experimental data for other loading paths shows that the description of the nonlinear behavior of composites can be improved by considering the effect of couple-stresses generated by rotations of the reinforcing fibers.

  2. Normalized inverse characterization of sound absorbing rigid porous media.

    PubMed

    Zieliński, Tomasz G

    2015-06-01

    This paper presents a methodology for the inverse characterization of sound absorbing rigid porous media, based on standard measurements of the surface acoustic impedance of a porous sample. The model parameters need to be normalized to have a robust identification procedure which fits the model-predicted impedance curves with the measured ones. Such a normalization provides a substitute set of dimensionless (normalized) parameters unambiguously related to the original model parameters. Moreover, two scaling frequencies are introduced; they are not additional parameters, however, and for different yet reasonable assumptions of their values the identification procedure should eventually lead to the same solution. The proposed identification technique uses measured and computed impedance curves for a porous sample not only in the standard configuration, that is, set against the rigid termination piston in an impedance tube, but also with air gaps of known thicknesses between the sample and the piston. Therefore, all necessary analytical formulas for sound propagation in double-layered media are provided. The methodology is illustrated by one numerical test and by two examples based on the experimental measurements of the acoustic impedance and absorption of porous ceramic samples of different thicknesses and a sample of polyurethane foam.

  3. Variability Survey of ω Centauri in the Near-IR: Period-Luminosity Relations

    NASA Astrophysics Data System (ADS)

    Navarrete, Camila; Catelan, Márcio; Contreras Ramos, Rodrigo; Gran, Felipe; Alonso-García, Javier; Dékány, István

    2015-08-01

    ω Centauri (NGC 5139) is by far the most massive globular star cluster in the Milky Way, and has even been suggested to be the remnant of a dwarf galaxy. As such, it contains a large number of variable stars of different classes. Here we report on a deep, wide-field, near-infrared variability survey of ω Cen, carried out by our team using ESO's 4.1 m VISTA telescope. Our time-series data comprise 42 and 100 epochs in J and Ks, respectively. This unique dataset has allowed us to derive complete light curves for hundreds of variable stars in the cluster, and thereby perform a detailed analysis of the near-infrared period-luminosity (PL) relations for different variability classes, including type II Cepheids, SX Phoenicis, and RR Lyrae stars. In this contribution, in addition to describing our survey and presenting the derived light curves, we present the resulting PL relations for each of these variability classes, including the first calibration of this sort for the SX Phoenicis stars. Based on these relations, we also provide an updated (pulsational) distance modulus for ω Cen, compare with results based on independent techniques, and discuss possible sources of systematic errors.

  4. Z-Index Parameterization for Volumetric CT Image Reconstruction via 3-D Dictionary Learning.

    PubMed

    Bai, Ti; Yan, Hao; Jia, Xun; Jiang, Steve; Wang, Ge; Mou, Xuanqin

    2017-12-01

    Despite the rapid development of X-ray cone-beam CT (CBCT), image noise remains a major issue for low-dose CBCT. To suppress the noise effectively while retaining the structures in low-dose CBCT images, a sparse constraint based on a 3-D dictionary is incorporated into a regularized iterative reconstruction framework, defining the 3-D dictionary learning (3-DDL) method. In addition, by analyzing the sparsity level curve associated with different regularization parameters, a new adaptive parameter selection strategy is proposed to facilitate the 3-DDL method. To justify the proposed method, we first analyze the distributions of the representation coefficients associated with the 3-D dictionary and the conventional 2-D dictionary to compare their efficiencies in representing volumetric images. Then, multiple real-data experiments are conducted for performance validation. Based on these results, we found that: 1) the sparse coefficients of the 3-D dictionary follow a Laplacian distribution three orders of magnitude narrower than that of the 2-D dictionary, suggesting the higher representation efficiency of the 3-D dictionary; 2) the sparsity level curve demonstrates a clear Z shape and is hence referred to as the Z-curve; 3) the parameter associated with the maximum-curvature point of the Z-curve is a good choice, which can be located adaptively with the proposed Z-index parameterization (ZIP) method; 4) the proposed 3-DDL algorithm equipped with the ZIP method delivers reconstructions with the lowest root mean squared errors and the highest structural similarity index compared with the competing methods; 5) noise performance similar to that of the regular-dose FDK reconstruction, in terms of the standard deviation metric, can be achieved with the proposed method using (1/2)/(1/4)/(1/8) dose level projections. The contrast-to-noise ratio is improved by ~2.5/3.5 times in two different cases at the (1/8) dose level compared with the low-dose FDK reconstruction. The proposed method is thus expected to reduce the radiation dose by a factor of 8 for CBCT while preserving the discrimination of low-contrast tissues.
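    The maximum-curvature selection behind the ZIP idea can be illustrated numerically. This is a generic finite-difference sketch applied to a synthetic Z-shaped sparsity curve, not the authors' implementation; the logistic curve and its parameters are invented for illustration.

```python
import numpy as np

def max_curvature_index(x, y):
    """Index of the maximum-curvature point of a sampled curve y(x),
    using finite differences: kappa = |y''| / (1 + y'^2)^(3/2)."""
    dy = np.gradient(y, x)
    d2y = np.gradient(dy, x)
    kappa = np.abs(d2y) / (1.0 + dy**2) ** 1.5
    return int(np.argmax(kappa))

# Toy Z-shaped sparsity curve: high plateau -> steep drop -> low plateau
lam = np.linspace(0.0, 1.0, 201)                    # regularization parameter
sparsity = 1.0 / (1.0 + np.exp(40.0 * (lam - 0.5)))  # logistic "Z" shape
i = max_curvature_index(lam, sparsity)
lam_hat = lam[i]   # parameter picked at a "knee" of the Z-curve
```

    On this synthetic curve the selected point lands at one of the two knees rather than on a plateau or at the inflection, which is the qualitative behavior the Z-index parameterization exploits.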

  5. Using Spreadsheets to Produce Acid-Base Titration Curves.

    ERIC Educational Resources Information Center

    Cawley, Martin James; Parkinson, John

    1995-01-01

    Describes two spreadsheets for producing acid-base titration curves: one uses relatively simple cell formulae that can be written into the spreadsheet by inexperienced students, and the second uses more complex formulae that are best written by the teacher. (JRH)
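    The same cell formulae translate directly into a few lines of code. A minimal sketch for the simplest case, a strong acid titrated with a strong base, solving the charge balance as a quadratic in [H+]; the concentrations and volumes are illustrative, not taken from the article.

```python
import numpy as np

def strong_acid_base_ph(Ca, Va, Cb, Vb, Kw=1e-14):
    """pH during titration of a strong acid (concentration Ca, volume Va)
    with a strong base (concentration Cb, added volume Vb).
    Charge balance: [H+] - [OH-] = (Ca*Va - Cb*Vb) / (Va + Vb),
    solved for [H+] as a quadratic (the positive root)."""
    delta = (Ca * Va - Cb * Vb) / (Va + Vb)
    h = (delta + np.sqrt(delta**2 + 4.0 * Kw)) / 2.0
    return -np.log10(h)

# 25 mL of 0.1 M HCl titrated with 0.1 M NaOH, in 0.1 mL steps
vb = np.linspace(0.0, 50.0, 501)
ph = strong_acid_base_ph(0.1, 25.0, 0.1, vb)
```

    The curve starts near pH 1, passes through pH 7 exactly at the equivalence point (25 mL), and levels off above pH 12, reproducing the familiar sigmoidal titration shape.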

  6. Activities of Antibiotic Combinations against Resistant Strains of Pseudomonas aeruginosa in a Model of Infected THP-1 Monocytes

    PubMed Central

    Buyck, Julien M.

    2014-01-01

    Antibiotic combinations are often used for treating Pseudomonas aeruginosa infections, but their efficacy toward intracellular bacteria has not yet been investigated. We have studied combinations of representatives of the main antipseudomonal classes (ciprofloxacin, meropenem, tobramycin, and colistin) against intracellular P. aeruginosa in a model of THP-1 monocytes in comparison with bacteria growing in broth, using the reference strain PAO1 and two clinical isolates (resistant to ciprofloxacin and meropenem, respectively). Interaction between drugs was assessed by checkerboard titration (extracellular model only), by kill curves, and by using the fractional maximal effect (FME) method, which allows studying the effects of combinations when dose-effect relationships are not linear. For drugs used alone, simple sigmoidal functions could be fitted to all concentration-effect relationships (extracellular and intracellular bacteria), with static concentrations close to (ciprofloxacin, colistin, and meropenem) or slightly higher than (tobramycin) the MIC, and with maximal efficacy reaching the limit of detection in broth but only a 1 to 1.5 (colistin, meropenem, and tobramycin) to 2 to 3 (ciprofloxacin) log10 CFU decrease intracellularly. Extracellularly, all combinations proved additive by checkerboard titration but synergistic using the FME method, and more bactericidal in kill curve assays. Intracellularly, all combinations proved only additive, based on both the FME and kill curve assays. Thus, although combinations appeared to modestly improve antibiotic activity against intracellular P. aeruginosa, they do not allow eradication of these persistent forms of infections. Combinations including ciprofloxacin were the most active (even against the ciprofloxacin-resistant strain), which is probably related to the fact that this drug was the most effective alone intracellularly. PMID:25348528
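    Checkerboard titration results are conventionally summarized by the fractional inhibitory concentration (FIC) index. A sketch with hypothetical MIC values (not the study's data); the classification thresholds below are the commonly used ones, though conventions vary between studies.

```python
def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration index from a checkerboard assay:
    FICI = MIC_A(in combination)/MIC_A(alone) + MIC_B(in combination)/MIC_B(alone)."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def classify(fici):
    """Common interpretation thresholds (study conventions vary)."""
    if fici <= 0.5:
        return "synergy"
    if fici <= 4.0:
        return "additivity/indifference"
    return "antagonism"

# Hypothetical MICs (mg/L) for a drug pair against one strain:
fici = fic_index(mic_a_alone=0.25, mic_b_alone=1.0,
                 mic_a_combo=0.125, mic_b_combo=0.25)
result = classify(fici)
```

    Here FICI = 0.5 + 0.25 = 0.75, which falls in the additive/indifferent range, the same class reported for the extracellular checkerboard results above.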

  7. Initial laparoscopic basic skills training shortens the learning curve of laparoscopic suturing and is cost-effective.

    PubMed

    Stefanidis, Dimitrios; Hope, William W; Korndorffer, James R; Markley, Sarah; Scott, Daniel J

    2010-04-01

    Laparoscopic suturing is an advanced skill that is difficult to acquire. Simulator-based skills curricula have been developed and shown to transfer to the operating room, but currently available skills curricula need to be optimized. We hypothesized that mastering basic laparoscopic skills first would shorten the learning curve of a more complex laparoscopic task and reduce resource requirements for the Fundamentals of Laparoscopic Surgery suturing curriculum. Medical students (n = 20) with no previous simulator experience were enrolled in an IRB-approved protocol, pretested on the Fundamentals of Laparoscopic Surgery suturing model, and randomized into 2 groups. Group I (n = 10) trained (unsupervised) until proficiency levels were achieved on 5 basic tasks; Group II (n = 10) received no basic training. Both groups then trained (supervised) on the Fundamentals of Laparoscopic Surgery suturing model until previously reported proficiency levels were achieved. Two weeks later, both groups were retested; retention scores, training parameters, instruction requirements, and costs were compared between groups using t-tests. Baseline characteristics and performance were similar for both groups, and 9 of 10 subjects in each group achieved the proficiency levels. The initial performance on the simulator was better for Group I after basic skills training, and their suturing learning curve was shorter compared with Group II. In addition, Group I required less active instruction. Overall time required to finish the curriculum was similar for both groups, but the Group I training strategy cost less, with a savings of $148 per trainee. Teaching novices basic laparoscopic skills before a more complex laparoscopic task produces substantial cost savings. Additional studies are needed to assess the impact of such integrated curricula on ultimate educational benefit. Copyright (c) 2010 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  8. CONTRAIS: CONservative TReatment for Adolescent Idiopathic Scoliosis: a randomised controlled trial protocol

    PubMed Central

    2013-01-01

    Background Idiopathic scoliosis is a three-dimensional structural deformity of the spine that occurs in children and adolescents. Recent reviews on bracing and exercise treatment have provided some evidence for the effect of these interventions. The purpose of this study is to improve the evidence base regarding the effectiveness of conservative treatments for preventing curve progression in idiopathic scoliosis. Methods/design Patients: Previously untreated girls and boys with idiopathic scoliosis, 9 to 17 years of age, with at least one year of remaining growth and a curve Cobb angle of 25–40 degrees will be included. A total of 135 participants will be randomly allocated in groups of 45 patients each to receive one of three interventions. Interventions: All three groups will receive a physical activity prescription according to the World Health Organisation recommendations. One group will additionally wear a hyper-corrective night-time brace. One group will additionally perform postural scoliosis-specific exercises. Outcome: Participation in the study will last until the curve has progressed, or until cessation of skeletal growth. Outcome variables will be measured every 6 months. The primary outcome variable, failure of treatment, is defined as progression of the Cobb angle of more than 6 degrees, compared with the primary x-ray, seen on two consecutive spinal standing x-rays taken at a 6-month interval. Secondary outcome measures include the SRS-22r and EQ5D-Y quality of life questionnaires, the International Physical Activity Questionnaire (IPAQ) short form, and the Cobb angle at the end of the study. Discussion This trial will evaluate which of the tested conservative treatment approaches is the most effective for patients with adolescent idiopathic scoliosis. Trial registration NCT01761305 PMID:24007599

  9. CONTRAIS: CONservative TReatment for Adolescent Idiopathic Scoliosis: a randomised controlled trial protocol.

    PubMed

    Abbott, Allan; Möller, Hans; Gerdhem, Paul

    2013-09-05

    Idiopathic scoliosis is a three-dimensional structural deformity of the spine that occurs in children and adolescents. Recent reviews on bracing and exercise treatment have provided some evidence for the effect of these interventions. The purpose of this study is to improve the evidence base regarding the effectiveness of conservative treatments for preventing curve progression in idiopathic scoliosis. Previously untreated girls and boys with idiopathic scoliosis, 9 to 17 years of age, with at least one year of remaining growth and a curve Cobb angle of 25-40 degrees will be included. A total of 135 participants will be randomly allocated in groups of 45 patients each to receive one of three interventions. All three groups will receive a physical activity prescription according to the World Health Organisation recommendations. One group will additionally wear a hyper-corrective night-time brace. One group will additionally perform postural scoliosis-specific exercises. Participation in the study will last until the curve has progressed, or until cessation of skeletal growth. Outcome variables will be measured every 6 months. The primary outcome variable, failure of treatment, is defined as progression of the Cobb angle of more than 6 degrees, compared with the primary x-ray, seen on two consecutive spinal standing x-rays taken at a 6-month interval. Secondary outcome measures include the SRS-22r and EQ5D-Y quality of life questionnaires, the International Physical Activity Questionnaire (IPAQ) short form, and the Cobb angle at the end of the study. This trial will evaluate which of the tested conservative treatment approaches is the most effective for patients with adolescent idiopathic scoliosis. NCT01761305.

  10. Curing behavior and reaction kinetics of binder resins for 3D-printing investigated by dielectric analysis (DEA)

    NASA Astrophysics Data System (ADS)

    Möginger, B.; Kehret, L.; Hausnerova, B.; Steinhaus, J.

    2016-05-01

    3D printing is an efficient method in the field of additive manufacturing. In order to optimize the properties of manufactured parts, it is essential to adapt the curing behavior of the resin systems to the requirements. Thus, the effects of resin composition, e.g. of different additives such as thickeners and curing agents, on the curing behavior have to be known. As the resin transforms from a liquid to a solid glass, the time-dependent ion viscosity was measured using DEA with flat IDEX sensors. This allows for a sensitive measurement of resin changes, as the ion viscosity changes by two to four decades. The investigated resin systems are based on the monomers styrene and HEMA. To account for the effects of copolymerization in the calculation of the reaction kinetics, it was assumed that the reaction can be treated as a homo-polymerization with a reaction order n≠1. The measured ion viscosity curves are then fitted, for times exceeding the initiation phase representing the primary curing, with the solution of the reaction kinetics: the time-dependent degree of conversion (DC function). The measured ion viscosity curves can be fitted well with the DC function, and the resulting fit parameters distinguish clearly between the investigated resin compositions.
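    For an n-th order homo-polymerization with n ≠ 1, the rate law dα/dt = k (1 − α)^n with α(0) = 0 has the closed-form solution used as the DC function. A sketch with an illustrative rate constant and order, not the fitted values from the study:

```python
import numpy as np

def degree_of_conversion(t, k, n):
    """Solution of dα/dt = k (1 - α)^n, α(0) = 0, for n != 1:
    α(t) = 1 - (1 + (n - 1) k t)^(1 / (1 - n))."""
    return 1.0 - (1.0 + (n - 1.0) * k * t) ** (1.0 / (1.0 - n))

# Illustrative parameters: k in 1/s, dimensionless order n
t = np.linspace(0.0, 600.0, 601)          # time after the initiation phase, s
alpha = degree_of_conversion(t, k=0.01, n=1.5)
```

    The curve rises monotonically from 0 toward 1, and in a fit the parameters k and n would be adjusted so that a mapping of the (log) ion viscosity onto α matches the measured DEA signal.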

  11. Multiple performance measures are needed to evaluate triage systems in the emergency department.

    PubMed

    Zachariasse, Joany M; Nieboer, Daan; Oostenbrink, Rianne; Moll, Henriëtte A; Steyerberg, Ewout W

    2018-02-01

    Emergency department triage systems can be considered prediction rules with an ordinal outcome, where different directions of misclassification have different clinical consequences. We evaluated strategies to compare the performance of triage systems and aimed to propose a set of performance measures that should be used in future studies. We identified performance measures based on literature review and expert knowledge. Their properties are illustrated in a case study evaluating two triage modifications in a cohort of 14,485 pediatric emergency department visits. Strengths and weaknesses of the performance measures were systematically appraised. Commonly reported performance measures are measures of statistical association (34/60 studies) and diagnostic accuracy (17/60 studies). The case study illustrates that none of the performance measures fulfills all criteria for triage evaluation. Decision curves are the performance measures with the most attractive features but require dichotomization. In addition, paired diagnostic accuracy measures can be recommended for dichotomized analysis, and the triage-weighted kappa and Nagelkerke's R² for ordinal analyses. Other performance measures provide limited additional information. When comparing modifications of triage systems, decision curves and diagnostic accuracy measures should be used in a dichotomized analysis, and the triage-weighted kappa and Nagelkerke's R² in an ordinal approach. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Simple and Sensitive Paper-Based Device Coupling Electrochemical Sample Pretreatment and Colorimetric Detection.

    PubMed

    Silva, Thalita G; de Araujo, William R; Muñoz, Rodrigo A A; Richter, Eduardo M; Santana, Mário H P; Coltro, Wendell K T; Paixão, Thiago R L C

    2016-05-17

    We report the development of a simple, portable, low-cost, high-throughput visual colorimetric paper-based analytical device for the detection of procaine in seized cocaine samples. The interference of most common cutting agents found in cocaine samples was verified, and a novel electrochemical approach was used for sample pretreatment in order to increase the selectivity. Under the optimized experimental conditions, a linear analytical curve was obtained for procaine concentrations ranging from 5 to 60 μmol L(-1), with a detection limit of 0.9 μmol L(-1). The accuracy of the proposed method was evaluated using seized cocaine samples and an addition and recovery protocol.
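    A linear analytical (calibration) curve and a 3σ-type detection limit of the kind quoted above can be reproduced in a few lines. The signal values and the blank standard deviation below are assumptions for illustration, not the measured data from the paper:

```python
import numpy as np

# Hypothetical colorimetric intensities for procaine standards (a.u.)
conc = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])   # µmol/L
signal = 3.0 + 0.8 * conc + np.array([0.3, -0.2, 0.1, -0.4, 0.2, -0.1, 0.1])

# Least-squares calibration line: signal = slope * conc + intercept
slope, intercept = np.polyfit(conc, signal, 1)

# Detection limit via the 3-sigma criterion, where sd_blank is the
# standard deviation of replicate blank measurements (assumed value)
sd_blank = 0.24
lod = 3.0 * sd_blank / slope
```

    With these assumed numbers the detection limit comes out near 0.9 µmol/L; in practice the blank standard deviation (or the standard error of the intercept) would be estimated from replicate measurements.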

  13. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given threshold probability or over a range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model, as there is no readily available summary measure for evaluating the predictive performance. The key deterrent to using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need for additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches: the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
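    The quantities involved can be sketched as follows on simulated data. The net benefit formula is the standard one from decision curve analysis; for the weighting step, uniform weights are used here purely for illustration, whereas the paper's contribution is precisely to replace them with an estimated threshold-probability distribution.

```python
import numpy as np

def net_benefit(y, p, pt):
    """Net benefit of a risk model at threshold probability pt:
    NB = TP/n - (FP/n) * pt / (1 - pt)."""
    treat = p >= pt
    n = len(y)
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    return tp / n - fp / n * pt / (1.0 - pt)

def weighted_area(y, p, thresholds, weights):
    """Weighted area under the net benefit curve: net benefit evaluated
    on a grid of thresholds, averaged with normalized weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    nb = np.array([net_benefit(y, p, pt) for pt in thresholds])
    return float(np.sum(w * nb))

# Simulated outcomes and an informative risk model
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.3, size=2000)
p = np.clip(0.3 + 0.25 * (y - 0.3) + rng.normal(0.0, 0.1, 2000), 0.01, 0.99)

ts = np.linspace(0.05, 0.5, 10)                 # range of interest
wa = weighted_area(y, p, ts, weights=np.ones_like(ts))
```

    Two competing models would be compared by their weighted areas over the same threshold range, with the weights taken from the estimated distribution of patient threshold probabilities.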

  14. Corrosion behavior of pristine and added MgB2 in Phosphate Buffered Saline Solution

    NASA Astrophysics Data System (ADS)

    Batalu, D.; Bojin, D.; Ghiban, B.; Aldica, G.; Badica, P.

    2012-09-01

    We have obtained dense samples of MgB2 with Ho2O3 addition by Spark Plasma Sintering (SPS). The starting composition was (MgB2)0.975(HoO1.5)0.025, and we used addition powders with average particle sizes below and above 100 nm. For Mg and for pristine and added MgB2 samples, we measured potentiodynamic polarization curves in Phosphate Buffered Saline (PBS) solution media at room temperature. MgB2-based composites show corrosion/degradation effects. This behavior is in principle similar to that of Mg-based alloys in the same media. Our work suggests that the different morphologies and phase compositions of the SPS-ed samples influence the interaction with the corrosion medium; hence, additions can play an important role in controlling the corrosion rate. Pristine MgB2 shows a significant improvement in corrosion resistance compared with Mg. The best corrosion resistance is obtained for pristine MgB2, followed by MgB2 with nano-Ho2O3 and μ-Ho2O3 additions.

  15. Revision of the Rainfall-intensity Duration Curves for the commonwealth of Kentucky.

    DOT National Transportation Integrated Search

    1999-06-01

    The purpose of this study was to revise and update the existing Rainfall Intensity-Duration-Frequency (IDF) Curves for the Commonwealth of Kentucky. The nine curves that currently govern Kentucky are based on data from First-Order Weather Stations i...

  16. How Far Is Quasar UV/Optical Variability from a Damped Random Walk at Low Frequency?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo Hengxiao; Wang Junxian; Cai Zhenyi

    Studies have shown that UV/optical light curves of quasars can be described using the prevalent damped random walk (DRW) model, also known as the Ornstein–Uhlenbeck process. A white noise power spectral density (PSD) is expected at low frequency in this model; however, a direct observational constraint to the low-frequency PSD slope is difficult due to the limited lengths of the light curves available. Meanwhile, quasars show scatter in their DRW parameters that is too large to be attributed to uncertainties in the measurements and dependence on the variation of known physical factors. In this work we present simulations showing that, if the low-frequency PSD deviates from the DRW, the red noise leakage can naturally produce large scatter in the variation parameters measured from simulated light curves. The steeper the low-frequency PSD slope, the larger scatter we expect. Based on observations of SDSS Stripe 82 quasars, we find that the low-frequency PSD slope should be no steeper than −1.3. The actual slope could be flatter, which consequently requires that the quasar variabilities should be influenced by other unknown factors. We speculate that the magnetic field and/or metallicity could be such additional factors.
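    A DRW (Ornstein–Uhlenbeck) light curve of the kind simulated in such studies can be generated with the exact discrete-time update of the OU process. This is a generic sketch, not the authors' code; the damping timescale and amplitude below are typical magnitudes for quasar optical variability, chosen for illustration.

```python
import numpy as np

def simulate_drw(n, dt, tau, sf_inf, mean=0.0, seed=1):
    """Damped-random-walk (OU-process) light curve via the exact update
      x[i+1] = mean + (x[i] - mean) e^{-dt/tau}
               + sigma sqrt(1 - e^{-2 dt/tau}) N(0, 1),
    where sigma = sf_inf / sqrt(2) is the long-term standard deviation
    and sf_inf the asymptotic structure-function amplitude."""
    rng = np.random.default_rng(seed)
    sigma = sf_inf / np.sqrt(2.0)
    r = np.exp(-dt / tau)
    x = np.empty(n)
    x[0] = mean + sigma * rng.standard_normal()   # draw from stationary state
    for i in range(1, n):
        x[i] = mean + (x[i - 1] - mean) * r \
               + sigma * np.sqrt(1.0 - r**2) * rng.standard_normal()
    return x

# Illustrative parameters: daily sampling, tau = 200 d, SF_inf = 0.2 mag
lc = simulate_drw(n=20000, dt=1.0, tau=200.0, sf_inf=0.2)
```

    Because the update is exact, the PSD of the simulated series bends to white noise below 1/(2πτ); a red-noise-leakage experiment would instead generate the series from a PSD with a steeper low-frequency slope and re-fit DRW parameters on short segments.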

  17. An automated cell analysis sensing system based on a microfabricated rheoscope for the study of red blood cells physiology.

    PubMed

    Bransky, Avishay; Korin, Natanel; Nemirovski, Yael; Dinnar, Uri

    2006-08-15

    An automated rheoscope has been developed, utilizing a microfabricated glass flow cell, a high-speed camera, and advanced image-processing software. RBCs suspended in a high-viscosity medium were filmed flowing through a microchannel. Under these conditions, RBCs exhibit different orientations and deformations according to their location in the velocity profile. The rheoscope system produces valuable data such as the velocity profile of RBCs, their spatial distribution within a microchannel, and deformation index (DI) curves. The variation of DI across the channel height, due to the change in shear stress, was measured, carrying implications for diffractometry methods. These DI curves were taken at a constant flow rate and cover most of the relevant shear stress spectrum. This is an improvement over existing techniques for deformability measurements and may serve as a diagnostic tool for certain blood disorders. The DI curves were compared to measurements of the flowing RBCs' velocity profile. In addition, we found that RBCs flowing in a microchannel are mostly gathered in the center of the flow and maintain a characteristic spatial distribution. The spatial distribution in this region changes only slightly with increasing flow rate. Hence, the system described provides a means of examining the behavior of individual RBCs and may serve as a microfabricated diagnostic device for deformability measurement.

  18. The influence of tip shape on bending force during needle insertion

    PubMed Central

    van de Berg, Nick J.; de Jong, Tonke L.; van Gerwen, Dennis J.; Dankelman, Jenny; van den Dobbelsteen, John J.

    2017-01-01

    Steering of needles involves the planning and timely modifying of instrument-tissue force interactions to allow for controlled deflections during the insertion in tissue. In this work, the effect of tip shape on these forces was studied using 10 mm diameter needle tips. Six different tips were selected, including beveled and conical versions, with or without pre-bend or pre-curve. A six-degree-of-freedom force/torque sensor measured the loads during indentations in tissue simulants. The increased insertion (axial) and bending (radial) forces with insertion depth — the force-displacement slopes — were analyzed. Results showed that the ratio between radial and axial forces was not always proportional. This means that the tip load does not have a constant orientation, as is often assumed in mechanics-based steering models. For all tip types, the tip-load assumed a more radial orientation with increased axial load. This effect was larger for straight tips than for pre-bent or pre-curved tips. In addition, the force-displacement slopes were consistently higher for (1) increased tip angles, and for (2) beveled tips compared to conical tips. Needles with a bent or curved tip allow for an increased bending force and a decreased variability of the tip load vector orientation. PMID:28074939

  19. Investigation of Hot Deformation Behavior of Duplex Stainless Steel Grade 2507

    NASA Astrophysics Data System (ADS)

    Kingklang, Saranya; Uthaisangsuk, Vitoon

    2017-01-01

    Duplex stainless steels (DSSs) are increasingly employed in the chemical, petrochemical, nuclear, and energy industries due to their excellent combination of high strength and corrosion resistance. A better understanding of the deformation behavior and microstructure evolution of the material under hot working is important for achieving the desired mechanical properties. In this work, plastic flow curves and the microstructure development of DSS grade 2507 were investigated. Cylindrical specimens were subjected to hot compression tests at different elevated temperatures and strain rates in a deformation dilatometer. It was found that the stress-strain responses of the examined steel strongly depended on the forming rate and temperature: the flow stresses increased with higher strain rates and lower temperatures. Subsequently, the obtained stress-strain curves were predicted according to the Zener-Hollomon equation, and the determination of the material parameters for the constitutive model is presented. The calculated flow curves agreed well with the experimental results. Additionally, metallographic examinations of the hot-compressed samples were performed by optical microscopy using color tint etching. Area-based phase fractions of the existing phases were determined for each forming condition. The hardness of the specimens was measured and discussed in relation to the resulting microstructures. The proposed flow stress model can be used to design and optimize manufacturing processes at elevated temperatures for this DSS.
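    The Zener-Hollomon parameter and a hyperbolic-sine constitutive law of the type referred to above can be sketched as follows. The activation energy and constitutive constants are illustrative placeholders, not the values fitted for DSS grade 2507:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def zener_hollomon(strain_rate, T, Q):
    """Temperature-compensated strain rate: Z = strain_rate * exp(Q / (R T)),
    with Q the apparent activation energy for hot deformation (J/mol)."""
    return strain_rate * np.exp(Q / (R * T))

def sinh_law_stress(Z, A, alpha, n):
    """Flow stress from the hyperbolic-sine law Z = A [sinh(alpha * sigma)]^n,
    inverted as sigma = arcsinh((Z / A)^(1/n)) / alpha."""
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

# Illustrative (not fitted) constants; stress in MPa with alpha in 1/MPa
Q, A, alpha, n = 460e3, 1e17, 0.012, 4.5
Z_hot = zener_hollomon(1.0, 1273.0, Q)    # 1 s^-1 at 1000 °C
Z_cool = zener_hollomon(1.0, 1173.0, Q)   # 1 s^-1 at 900 °C
s_hot = sinh_law_stress(Z_hot, A, alpha, n)
s_cool = sinh_law_stress(Z_cool, A, alpha, n)
```

    The sketch reproduces the qualitative trend reported above: lowering the temperature (or raising the strain rate) increases Z and hence the predicted flow stress.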

  20. Identification of Reliable Components in Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS): a Data-Driven Approach across Metabolic Processes.

    PubMed

    Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun

    2015-11-04

    There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using a clustering method for this classification, we applied the idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between the conventional and modified methods applied to proton nuclear magnetic resonance ((1)H-NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were reliably detected. This strategy, named "cluster-aided MCR-ALS," will facilitate the attainment of more reliable results for metabolomics datasets.

  1. An original approach was used to better evaluate the capacity of a prognostic marker using published survival curves.

    PubMed

    Dantan, Etienne; Combescure, Christophe; Lorent, Marine; Ashton-Chess, Joanna; Daguin, Pascal; Classe, Jean-Marc; Giral, Magali; Foucher, Yohann

    2014-04-01

    Predicting chronic disease evolution from a prognostic marker is a key field of research in clinical epidemiology. However, the prognostic capacity of a marker is not systematically evaluated using the appropriate methodology. We proposed the use of simple equations to calculate time-dependent sensitivity and specificity based on published survival curves, together with other time-dependent indicators such as predictive values, likelihood ratios, and posttest probability ratios, to reappraise prognostic marker accuracy. The methodology is illustrated by back-calculating time-dependent indicators from published articles that present a marker as highly correlated with the time to event, conclude that the marker has high prognostic capacity, and present the Kaplan-Meier survival curves. The tools necessary to run these direct and simple computations are available online at http://www.divat.fr/en/online-calculators/evalbiom. Our examples illustrate that published conclusions about prognostic marker accuracy may be overoptimistic, thus giving potential for major mistakes in therapeutic decisions. Our approach should help readers better evaluate clinical articles reporting on prognostic markers. Time-dependent sensitivity and specificity inform on the inherent prognostic capacity of a marker for a defined prognostic time. Time-dependent predictive values, likelihood ratios, and posttest probability ratios may additionally contribute to interpreting the marker's prognostic capacity. Copyright © 2014 Elsevier Inc. All rights reserved.
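    The back-calculation from published survival curves reduces to Bayes' rule for cumulative (event-by-time-t) sensitivity and specificity. A sketch assuming a binary marker; the survival probabilities and marker prevalence below are hypothetical values read off Kaplan-Meier curves, not data from the cited examples.

```python
def time_dependent_accuracy(s_pos, s_neg, prev):
    """Cumulative sensitivity and specificity at a prognostic time t,
    back-calculated from published survival probabilities:
      s_pos: survival at t in the marker-positive group,
      s_neg: survival at t in the marker-negative group,
      prev:  proportion of marker-positive patients.
    Sens(t) = P(marker+ | event by t), Spec(t) = P(marker- | no event by t)."""
    events = prev * (1.0 - s_pos) + (1.0 - prev) * (1.0 - s_neg)
    survivors = prev * s_pos + (1.0 - prev) * s_neg
    sens = prev * (1.0 - s_pos) / events
    spec = (1.0 - prev) * s_neg / survivors
    return sens, spec

# Hypothetical values read off two Kaplan-Meier curves at t = 5 years
sens, spec = time_dependent_accuracy(s_pos=0.60, s_neg=0.90, prev=0.30)
```

    With these inputs, sensitivity is 0.12/0.19 ≈ 0.63 and specificity 0.63/0.81 ≈ 0.78; time-dependent predictive values and likelihood ratios follow from the same four probabilities.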

  2. Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.

    PubMed

    Ma, Yunbei; Zhou, Xiao-Hua

    2017-02-01

    For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and for the difference between the covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and for the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate the finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed methods in a real-world data set.

  3. Risk prediction models for selection of lung cancer screening candidates: A retrospective validation study

    PubMed Central

    ten Haaf, Kevin; Tammemägi, Martin C.; Han, Summer S.; Kong, Chung Yin; Plevritis, Sylvia K.; de Koning, Harry J.; Steyerberg, Ewout W.

    2017-01-01

    Background Selection of candidates for lung cancer screening based on individual risk has been proposed as an alternative to criteria based on age and cumulative smoking exposure (pack-years). Nine previously established risk models were assessed for their ability to identify those most likely to develop or die from lung cancer. All models considered age and various aspects of smoking exposure (smoking status, smoking duration, cigarettes per day, pack-years smoked, time since smoking cessation) as risk predictors. In addition, some models considered factors such as gender, race, ethnicity, education, body mass index, chronic obstructive pulmonary disease, emphysema, personal history of cancer, personal history of pneumonia, and family history of lung cancer. Methods and findings Retrospective analyses were performed on 53,452 National Lung Screening Trial (NLST) participants (1,925 lung cancer cases and 884 lung cancer deaths) and 80,672 Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO) ever-smoking participants (1,463 lung cancer cases and 915 lung cancer deaths). Six-year lung cancer incidence and mortality risk predictions were assessed for (1) calibration (graphically) by comparing the agreement between the predicted and the observed risks, (2) discrimination (area under the receiver operating characteristic curve [AUC]) between individuals with and without lung cancer (death), and (3) clinical usefulness (net benefit in decision curve analysis) by identifying risk thresholds at which applying risk-based eligibility would improve lung cancer screening efficacy. To further assess performance, risk model sensitivities and specificities in the PLCO were compared to those based on the NLST eligibility criteria. Calibration was satisfactory, but discrimination ranged widely (AUCs from 0.61 to 0.81). 
The models outperformed the NLST eligibility criteria over a substantial range of risk thresholds in decision curve analysis, with a higher sensitivity for all models and a slightly higher specificity for some models. The PLCOm2012, Bach, and Two-Stage Clonal Expansion incidence models had the best overall performance, with AUCs >0.68 in the NLST and >0.77 in the PLCO. These three models had the highest sensitivity and specificity for predicting 6-y lung cancer incidence in the PLCO chest radiography arm, with sensitivities >79.8% and specificities >62.3%. In contrast, the NLST eligibility criteria yielded a sensitivity of 71.4% and a specificity of 62.2%. Limitations of this study include the lack of identification of optimal risk thresholds, as this requires additional information on the long-term benefits (e.g., life-years gained and mortality reduction) and harms (e.g., overdiagnosis) of risk-based screening strategies using these models. In addition, information on some predictor variables included in the risk prediction models was not available. Conclusions Selection of individuals for lung cancer screening using individual risk is superior to selection criteria based on age and pack-years alone. The benefits, harms, and feasibility of implementing lung cancer screening policies based on risk prediction models should be assessed and compared with those of current recommendations. PMID:28376113
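
    Scoring a risk-threshold eligibility rule, as done when comparing the models with the NLST criteria, reduces to a confusion-matrix computation plus the standard net-benefit formula from decision curve analysis. A minimal sketch with invented numbers:

```python
def screening_performance(risks, cases, threshold):
    """Sensitivity, specificity, and net benefit of the rule
    'screen if predicted 6-year lung cancer risk >= threshold'."""
    tp = sum(1 for r, c in zip(risks, cases) if c and r >= threshold)
    fn = sum(1 for r, c in zip(risks, cases) if c and r < threshold)
    tn = sum(1 for r, c in zip(risks, cases) if not c and r < threshold)
    fp = sum(1 for r, c in zip(risks, cases) if not c and r >= threshold)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    # Net benefit at this threshold probability, as scored in decision
    # curve analysis: NB = TP/n - (FP/n) * pt / (1 - pt).
    n = len(risks)
    nb = tp / n - (fp / n) * threshold / (1 - threshold)
    return sens, spec, nb

# Toy cohort of four ever-smokers (risks and outcomes are invented).
risks = [0.010, 0.020, 0.030, 0.050]
cases = [False, False, True, True]
sens, spec, nb = screening_performance(risks, cases, 0.025)
```

    Sweeping `threshold` over a range and plotting `nb` for each model against the same quantity for the NLST criteria reproduces the decision-curve comparison described above.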

  4. Using ROC Curves to Choose Minimally Important Change Thresholds when Sensitivity and Specificity Are Valued Equally: The Forgotten Lesson of Pythagoras. Theoretical Considerations and an Example Application of Change in Health Status

    PubMed Central

    Froud, Robert; Abel, Gary

    2014-01-01

    Background Receiver Operating Characteristic (ROC) curves are used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Although methodologists agree that these should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives We aimed to compare the different approaches used with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of differences on conclusions in responder analyses. Methods Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach based on the sum of squares of 1-sensitivity and 1-specificity. Results There can be divergence in the thresholds chosen by different estimators. The cut-point selected by different estimators is dependent on the relationship between the cut-points in ROC space and the different contours described by the estimators. In particular, asymmetry and the number of possible cut-points affects threshold selection. Conclusion Choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the cut-point chosen by the sum of squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. 
PMID:25474472
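
    The preferred estimator can be stated compactly: over all candidate cut-points, choose the one minimizing (1 - sensitivity)^2 + (1 - specificity)^2, i.e. the ROC point closest to the top-left corner. A sketch with invented anchor data (not the trial's):

```python
def mic_cutpoint(changes, improved):
    """Choose the MIC threshold minimizing (1-sens)^2 + (1-spec)^2,
    i.e. the ROC point closest to the top-left corner (0, 1).

    changes  -- observed change scores
    improved -- True where an external anchor says the patient improved
    """
    best = None
    for cut in sorted(set(changes)):
        tp = sum(1 for c, imp in zip(changes, improved) if imp and c >= cut)
        fn = sum(1 for c, imp in zip(changes, improved) if imp and c < cut)
        tn = sum(1 for c, imp in zip(changes, improved) if not imp and c < cut)
        fp = sum(1 for c, imp in zip(changes, improved) if not imp and c >= cut)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        d2 = (1 - sens) ** 2 + (1 - spec) ** 2  # squared distance to (0, 1)
        if best is None or d2 < best[0]:
            best = (d2, cut, sens, spec)
    return best[1:]

# Invented change scores with an anchor-based improvement classification.
changes = [0, 1, 2, 3, 4, 5, 6, 7]
improved = [False, False, False, False, True, False, True, True]
cut, sens, spec = mic_cutpoint(changes, improved)
```

    Replacing `d2` with, say, the Youden index (sens + spec - 1) can select a different cut-point on asymmetric ROC curves, which is the divergence the paper examines.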

  5. Influence of natural organic matter on fate and transport of silver nanoparticles in saturated porous media: laboratory experiments and modeling

    NASA Astrophysics Data System (ADS)

    Kanel, Sushil R.; Flory, Jason; Meyerhoefer, Allie; Fraley, Jessica L.; Sizemore, Ioana E.; Goltz, Mark N.

    2015-03-01

    Understanding the fate and transport of silver nanoparticles (AgNPs) is of importance due to their widespread use and potential harmful effects on humans and the environment. The present study investigates the fate and transport of widely used Creighton AgNPs in saturated porous media. Previous investigations of AgNP transport in the presence of natural organic matter (NOM) report contradictory results regarding how the presence of NOM affected the stability and mobility of AgNPs. In this work, a nonreactive tracer, AgNPs and a mixture of AgNPs and NOM were injected into a background solution (0.01 mM of NaNO3) flowing through laboratory columns packed with water-saturated glass beads to obtain concentration versus time breakthrough curves. Transport of AgNPs in the presence of NOM was simulated with a model that accounted for both reversible and irreversible attachment. Based upon an analysis of the AgNP breakthrough curves, it was found that addition of NOM at concentrations ranging from 1 to 40 mg L-1 resulted in significant decreases in both the zeroth and first moments of the breakthrough curves. These observations may be attributed to NOM promoting AgNP aggregation and irreversible attachment. Raman and surface-enhanced Raman scattering analysis of NOM-AgNP mixtures revealed that a possible interaction of NOM with AgNP occurred through the carboxylic moieties (-COO-) located in the immediate vicinity of the metallic surface. At higher concentrations of NOM, both the zeroth and first moments of the breakthrough curves increased. Based on modeling and the literature, we hypothesize that as the NOM concentration increases, it begins to coat both the AgNPs and the glass beads, leading to a situation where AgNP transport may be described in the same way that transport of a sorbing hydrophobic compound partitioning to an immobile organic phase is typically described, assuming reversible, rate-limited sorption.
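
    The moment analysis used to quantify these breakthrough-curve changes can be sketched directly: the zeroth moment is the area under C(t), proportional to recovered nanoparticle mass, and the normalized first moment is the mean breakthrough time. The data below are invented:

```python
def trapezoid(y, x):
    """Trapezoidal integration of sampled data."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0
               for i in range(len(x) - 1))

def breakthrough_moments(t, c):
    """Zeroth moment (area under the curve, proportional to recovered
    mass) and normalized first moment (mean breakthrough time) of a
    concentration-vs-time breakthrough curve."""
    m0 = trapezoid(c, t)
    m1 = trapezoid([ti * ci for ti, ci in zip(t, c)], t)
    return m0, m1 / m0

# Illustrative triangular pulse peaking at t = 2 (not data from the study).
t = [0.0, 1.0, 2.0, 3.0, 4.0]
c = [0.0, 0.5, 1.0, 0.5, 0.0]
m0, mean_time = breakthrough_moments(t, c)
```

    A drop in `m0` after adding NOM indicates irreversible attachment of particles in the column, while a shift in `mean_time` reflects retardation.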

  6. Vacuum-ultraviolet photoionization studies of the microhydration of DNA bases (guanine, cytosine, adenine, and thymine).

    PubMed

    Belau, Leonid; Wilson, Kevin R; Leone, Stephen R; Ahmed, Musahid

    2007-08-09

    In this work, we report on a photoionization study of the microhydration of the four DNA bases. Gas-phase clusters of water with DNA bases [guanine (G), cytosine (C), adenine (A), and thymine (T)] are generated via thermal vaporization of the bases and expansion of the resultant vapor in a continuous supersonic jet expansion of water seeded in Ar. The resulting clusters are investigated by single-photon ionization with tunable vacuum-ultraviolet synchrotron radiation and mass analyzed using reflectron mass spectrometry. Photoionization efficiency (PIE) curves are recorded for the DNA bases and the following water (W) clusters: G, GWn (n = 1-3); C, CWn (n = 1-3); A, AWn (n = 1,2); and T, TWn (n = 1-3). Appearance energies (AE) are derived from the onset of these PIE curves (all energies in eV): G (8.1 +/- 0.1), GW (8.0 +/- 0.1), GW2 (8.0 +/- 0.1), and GW3 (8.0); C (8.65 +/- 0.05), CW (8.45 +/- 0.05), CW2 (8.4 +/- 0.1), and CW3 (8.3 +/- 0.1); A (8.30 +/- 0.05), AW (8.20 +/- 0.05), and AW2 (8.1 +/- 0.1); T (8.90 +/- 0.05); and TW (8.75 +/- 0.05), TW2 (8.6 +/- 0.1), and TW3 (8.6 +/- 0.1). The AEs of the DNA bases decrease slightly with the addition of water molecules (up to three) but do not converge to values found for photoinduced electron removal from DNA bases in solution.

  7. Sulcal set optimization for cortical surface registration.

    PubMed

    Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M

    2010-04-15

    Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time-consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimation of an optimal subset of size N(C) from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N(C) curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N(C) sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N(C) constraint curves. The optimal subsets of sulci are presented and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
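
    The subset-selection criterion can be sketched with the standard conditional-covariance identity for jointly Gaussian errors: the residual variance of the unconstrained curves given the constraints is S_uu - S_uc S_cc^{-1} S_cu. This is an illustrative reconstruction with a made-up 3-curve covariance, not the paper's data or code:

```python
from itertools import combinations

import numpy as np

def best_constraint_subset(cov, n_c):
    """Choose the n_c constraint curves minimizing the total conditional
    error variance of the remaining curves, assuming jointly Gaussian
    sulcal errors: Var(unconstrained | constrained) = S_uu - S_uc S_cc^-1 S_cu."""
    p = cov.shape[0]
    best = None
    for sub in combinations(range(p), n_c):
        rest = [i for i in range(p) if i not in sub]
        s_cc = cov[np.ix_(sub, sub)]
        s_uc = cov[np.ix_(rest, sub)]
        s_uu = cov[np.ix_(rest, rest)]
        cond = s_uu - s_uc @ np.linalg.solve(s_cc, s_uc.T)
        score = np.trace(cond)  # total residual error variance
        if best is None or score < best[0]:
            best = (score, sub)
    return best[1], best[0]

# Invented 3-curve error covariance: curve 0 is strongly correlated with
# both others, so constraining it should explain away most of their error.
cov = np.array([[1.0, 0.8, 0.8],
                [0.8, 1.0, 0.3],
                [0.8, 0.3, 1.0]])
subset, resid = best_constraint_subset(cov, 1)
```

    For realistic N the exhaustive search over combinations becomes expensive, which is why the error-model-based estimate of registration error matters in practice.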

  8. Heterozygote PCR product melting curve prediction.

    PubMed

    Dwight, Zachary L; Palais, Robert; Kent, Jana; Wittwer, Carl T

    2014-03-01

    Melting curve prediction of PCR products is limited to perfectly complementary strands. Multiple domains are calculated by recursive nearest neighbor thermodynamics. However, the melting curve of an amplicon containing a heterozygous single-nucleotide variant (SNV) after PCR is the composite of four duplexes: two matched homoduplexes and two mismatched heteroduplexes. To better predict the shape of composite heterozygote melting curves, 52 experimental curves were compared with brute force in silico predictions varying two parameters simultaneously: the relative contribution of heteroduplex products and an ionic scaling factor for mismatched tetrads. Heteroduplex products contributed 25.7 ± 6.7% to the composite melting curve, varying from 23%-28% for different SNV classes. The effect of ions on mismatch tetrads scaled to 76%-96% of normal (depending on SNV class) and averaged 88 ± 16.4%. Based on uMelt (www.dna.utah.edu/umelt/umelt.html) with an expanded nearest neighbor thermodynamic set that includes mismatched base pairs, uMelt HETS calculates helicity as a function of temperature for homoduplex and heteroduplex products, as well as the composite curve expected from heterozygotes. It is an interactive Web tool for efficient genotyping design, heterozygote melting curve prediction, and quality control of melting curve experiments. The application was developed in Actionscript and can be found online at http://www.dna.utah.edu/hets/. © 2013 WILEY PERIODICALS, INC.
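
    The composite-curve idea can be sketched as a weighted mixture of duplex helicity curves, using the roughly 25.7% heteroduplex contribution reported above; the logistic melting shape and all Tm values below are invented stand-ins for the nearest-neighbor thermodynamic prediction:

```python
import math

def helicity(temp, tm, width=1.5):
    """Logistic stand-in for one duplex's helicity-vs-temperature curve."""
    return 1.0 / (1.0 + math.exp((temp - tm) / width))

def composite_melting_curve(temps, tm_homo, tm_hetero, het_frac=0.257):
    """Heterozygote melting curve as a mixture of the two matched
    homoduplexes and two mismatched heteroduplexes, with the heteroduplex
    products weighted by the ~25.7% contribution reported in the study."""
    homo = [sum(helicity(t, tm) for tm in tm_homo) / len(tm_homo) for t in temps]
    het = [sum(helicity(t, tm) for tm in tm_hetero) / len(tm_hetero) for t in temps]
    return [(1 - het_frac) * a + het_frac * b for a, b in zip(homo, het)]

temps = [float(t) for t in range(70, 91)]
# Mismatched heteroduplexes melt earlier (lower Tm) than matched
# homoduplexes; all Tm values here are made up for illustration.
curve = composite_melting_curve(temps, tm_homo=[82.0, 82.5],
                                tm_hetero=[79.0, 79.5])
```

    The early heteroduplex melting produces the characteristic low-temperature shoulder that distinguishes heterozygote melting curves from homozygote ones.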

  9. New well testing applications of the pressure derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onur, M.

    1989-01-01

    This work presents new derivative type curves based on a new derivative group that is equal to the dimensionless pressure group divided by its logarithmic derivative with respect to the dimensionless time group. One major advantage of these type curves is that the type-curve match of field pressure/pressure-derivative data with the new derivative type curves is accomplished by moving the field data plot in only the horizontal direction. This type-curve match fixes time match-point values. The pressure change versus time data are then matched with the dimensionless pressure solution to determine match-point values. Well/reservoir parameters can then be estimated in the standard way. This two-step type-curve matching procedure increases the likelihood of obtaining a unique match. Moreover, the unique correspondence between the ordinate of the field data plot and the new derivative type curves should prove useful in determining whether given field data actually represents the well/reservoir model assumed by a selected type curve solution. It is also shown that the basic idea used in constructing the type curves can be used to ensure that proper semilog straight lines are chosen when analyzing pressure data by semilog methods. Analysis of both drawdown and buildup data is considered and actual field cases are analyzed using the new derivative type curves and the semilog identification method. This work also presents new methods based on the pressure derivative to analyze buildup data obtained at a well (fractured or unfractured) produced to pseudosteady-state prior to shut-in. By using a method of analysis based on the pressure derivative, it is shown that a well's drainage area at the instant of shut-in and the flow capacity can be computed directly from buildup data even in cases where conventional semilog straight lines are not well-defined.
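
    The new derivative group can be computed numerically from tabulated pressure data; for infinite-acting radial flow, where p_D = 0.5(ln t_D + 0.80907), the group reduces to ln t_D + 0.80907. A sketch using central differences in ln t (not the paper's code):

```python
import math

def derivative_group(times, pressures):
    """Dimensionless-pressure group divided by its logarithmic time
    derivative, p / (dp/d ln t), via central differences in ln t."""
    out = []
    for i in range(1, len(times) - 1):
        dln_t = math.log(times[i + 1]) - math.log(times[i - 1])
        dp = pressures[i + 1] - pressures[i - 1]
        out.append(pressures[i] / (dp / dln_t))
    return out

# Infinite-acting radial flow: p_D = 0.5*(ln t_D + 0.80907), so the
# logarithmic derivative is 0.5 and the group is ln(t_D) + 0.80907.
t = [10.0 ** k for k in range(2, 7)]
p = [0.5 * (math.log(td) + 0.80907) for td in t]
g = derivative_group(t, p)
```

    Because p / (dp/d ln t) is unchanged when pressure is multiplied by a constant, field data overlay these type curves with horizontal shifting only, which is the matching advantage noted above.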

  10. Determination of Iron Ion in the Water of a Natural Hot Spring Using Microfluidic Paper-based Analytical Devices.

    PubMed

    Ogawa, Kazuma; Kaneta, Takashi

    2016-01-01

    Microfluidic paper-based analytical devices (μPADs) were used to detect the iron ion content in the water of a natural hot spring in order to assess the applicability of this process to the environmental analysis of natural water. The μPADs were fabricated using a wax printer, with hydroxylamine added to the detection reservoirs to reduce Fe(3+) to Fe(2+), 1,10-phenanthroline to form a complex, and poly(acrylic acid) for ion-pair formation with an acetate buffer (pH 4.7). The calibration curve of Fe(3+) showed linearity from 100 to 1000 ppm in the semi-log plot, whereas the color intensity was proportional to the Fe(3+) concentration from 40 to 350 ppm. The calibration curve fluctuated from day to day in successive experiments over four days, indicating that a calibration curve must be constructed each day. When freshly prepared μPADs were compared with stored ones, no significant difference was found. The μPADs were applied to the determination of Fe(3+) in a sample of water from a natural hot spring. Both the accuracy and the precision of the μPAD method were evaluated by comparisons with the results obtained via conventional spectrophotometry. The results of the μPADs were in good agreement with, but less precise than, those obtained via conventional spectrophotometry. Consequently, the μPADs offer rapid and miniaturized operation, although with poorer precision than conventional spectrophotometry.

  11. THE IMPACT OF MOLECULAR GAS ON MASS MODELS OF NEARBY GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, B. S.; Blok, W. J. G. de; Walter, F.

    2016-04-15

    We present CO velocity fields and rotation curves for a sample of nearby galaxies, based on data from HERACLES. We combine our data with THINGS, SINGS, and KINGFISH results to provide a comprehensive sample of mass models of disk galaxies inclusive of molecular gas. We compare the kinematics of the molecular (CO from HERACLES) and atomic (H i from THINGS) gas distributions to determine the extent to which CO may be used to probe the dynamics in the inner part of galaxies. In general, we find good agreement between the CO and H i kinematics, with small differences in the inner part of some galaxies. We add the contribution of the molecular gas to the mass models in our galaxies by using two different conversion factors α_CO to convert CO luminosity to molecular gas mass surface density—the constant Milky Way value and the radially varying profiles determined in recent work based on THINGS, HERACLES, and KINGFISH data. We study the relative effect that the addition of the molecular gas has on the halo rotation curves for Navarro–Frenk–White and the observationally motivated pseudo-isothermal halos. The contribution of the molecular gas varies for galaxies in our sample—for those galaxies where there is a substantial molecular gas content, using different values of α_CO can result in significant differences in the relative contribution of the molecular gas and hence the shape of the dark matter halo rotation curves in the central regions of galaxies.
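
    The bookkeeping behind adding molecular gas to a mass model can be sketched as quadrature sums of rotation-velocity contributions; rescaling α_CO rescales the molecular gas surface density, and hence the squared velocity contribution, by the same factor. All velocities below are invented:

```python
import math

def halo_rotation(v_obs, v_star, v_atomic, v_molecular):
    """Subtract baryonic contributions in quadrature to get the dark
    matter halo rotation curve: V_halo^2 = V_obs^2 - V_*^2 - V_HI^2 - V_H2^2."""
    return [math.sqrt(vo**2 - vs**2 - va**2 - vm**2)
            for vo, vs, va, vm in zip(v_obs, v_star, v_atomic, v_molecular)]

def molecular_velocity_scaled(v_mol, alpha_ratio):
    """Rescale the molecular gas contribution for a different CO-to-H2
    conversion factor: surface density scales with alpha_CO, so V^2 does too."""
    return [v * math.sqrt(alpha_ratio) for v in v_mol]

# Invented rotation velocities (km/s) at three radii.
v_obs = [120.0, 150.0, 170.0]
v_star = [80.0, 90.0, 85.0]
v_hi = [20.0, 30.0, 40.0]
v_h2 = [30.0, 25.0, 15.0]
v_halo = halo_rotation(v_obs, v_star, v_hi, v_h2)
```

    Because the molecular gas is concentrated at small radii, changing α_CO mostly reshapes the inner halo rotation curve, which is the effect examined above.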

  12. Using a high-fidelity patient simulator with first-year medical students to facilitate learning of cardiovascular function curves.

    PubMed

    Harris, David M; Ryan, Kathleen; Rabuck, Cynthia

    2012-09-01

    Students are relying on technology for learning more than ever, and educators need to adapt to facilitate student learning. High-fidelity patient simulators (HFPS) are usually reserved for the clinical years of medical education and are geared to improve clinical decision skills, teamwork, and patient safety. Finding ways to incorporate HFPS into preclinical medical education represents more of a challenge, and there is limited literature regarding its implementation. The main objective of this study was to implement a HFPS activity into a problem-based curriculum to enhance the learning of basic sciences. More specifically, the focus was to aid in student learning of cardiovascular function curves and help students develop heart failure treatment strategies based on basic cardiovascular physiology concepts. Pretests and posttests, along with student surveys, were used to determine student knowledge and perception of learning in two first-year medical school classes. There was an increase of 21% and 22% in the percentage of students achieving correct answers on a posttest compared with their pretest score. The median number of correct questions increased from pretest scores of 2 and 2.5 to posttest scores of 4 and 5 of a possible total of 6 in each respective year. Student survey data showed agreement that the activity aided in learning. This study suggests that a HFPS activity can be implemented during the preclinical years of medical education to address basic science concepts. Additionally, it suggests that student learning of cardiovascular function curves and heart failure strategies is facilitated.

  13. Comparative study of sub-micrometer polymeric structures: Dot-arrays, linear and crossed gratings generated by UV laser based two-beam interference, as surfaces for SPR and AFM based bio-sensing

    NASA Astrophysics Data System (ADS)

    Csete, M.; Sipos, Á.; Kőházi-Kis, A.; Szalai, A.; Szekeres, G.; Mathesz, A.; Csákó, T.; Osvay, K.; Bor, Zs.; Penke, B.; Deli, M. A.; Veszelka, Sz.; Schmatulla, A.; Marti, O.

    2007-12-01

    Two-dimensional gratings are generated on poly-carbonate films spin-coated onto thin gold-silver bimetallic layers by two-beam interference method. Sub-micrometer periodic polymer dots and stripes are produced illuminating the poly-carbonate surface by p- and s-polarized beams of a frequency quadrupled Nd:YAG laser, and crossed gratings are generated by rotating the substrates between two sequential treatments. It is shown by pulsed force mode atomic force microscopy that the mean value of the adhesion is enhanced on the dot-arrays and on the crossed gratings. The grating-coupling on the two-dimensional structures results in double peaks on the angle dependent resonance curves of the surface plasmons excited by frequency doubled Nd:YAG laser. The comparison of the resonance curves proves that a surface profile ensuring minimal undirected scattering is required to optimize the grating-coupling, in addition to the minimal modulation amplitude, and to the optimal azimuthal orientation. The secondary minima are the narrowest in presence of linear gratings on multi-layers having optimized composition, and on crossed structures consisting of appropriately oriented polymer stripes. The large coupling efficiency and adhesion result in high detection sensitivity on the crossed gratings. Bio-sensing is realized by monitoring the rotated-crossed grating-coupled surface plasmon resonance curves, and detecting the chemical heterogeneity by tapping-mode atomic force microscopy. The interaction of Amyloid-β peptide, a pathogenetic factor in Alzheimer disease, with therapeutical molecules is demonstrated.

  14. Development of an x-ray prism for analyzer based imaging systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bewer, Brian; Chapman, Dean

    Analyzer crystal based imaging techniques such as diffraction enhanced imaging (DEI) and multiple imaging radiography (MIR) utilize the Bragg peak of perfect crystal diffraction to convert angular changes into intensity changes. These x-ray techniques extend the capability of conventional radiography, which derives image contrast from absorption, by providing large intensity changes for small angle changes introduced from the x-ray beam traversing the sample. Objects that have very little absorption contrast may have considerable refraction and ultrasmall angle x-ray scattering contrast improving visualization and extending the utility of x-ray imaging. To improve on the current DEI technique an x-ray prism (XRP) was designed and included in the imaging system. The XRP allows the analyzer crystal to be aligned anywhere on the rocking curve without physically moving the analyzer from the Bragg angle. By using the XRP to set the rocking curve alignment rather than moving the analyzer crystal physically the needed angle sensitivity is changed from submicroradians for direct mechanical movement of the analyzer crystal to tens of milliradians for movement of the XRP angle. However, this improvement in angle positioning comes at the cost of absorption loss in the XRP and depends on the x-ray energy. In addition to using an XRP for crystal alignment it has the potential for scanning quickly through the entire rocking curve. This has the benefit of collecting all the required data for image reconstruction in a single measurement thereby removing some problems with motion artifacts which remain a concern in current DEI/MIR systems especially for living animals.

  17. PERIODOGRAMS FOR MULTIBAND ASTRONOMICAL TIME SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VanderPlas, Jacob T.; Ivezic, Željko

    This paper introduces the multiband periodogram, a general extension of the well-known Lomb–Scargle approach for detecting periodic signals in time-domain data. In addition to advantages of the Lomb–Scargle method such as treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES, and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization which drives most of the variability into the so-called base model common to all bands, while fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. This decrease in the effective model complexity is the main reason for improved performance. After a pedagogical development of the formalism of least-squares spectral analysis, which motivates the essential features of the multiband model, we use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature and find that this method will be able to efficiently determine the correct period in the majority of LSST’s bright RR Lyrae stars with as little as six months of LSST data, a vast improvement over the years of data reported to be required by previous studies. A Python implementation of this method, along with code to fully reproduce the results reported here, is available on GitHub.
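
    The shared-period idea can be sketched in a simplified form: at each trial period, fit one sinusoid common to all bands plus per-band constant offsets, and score the chi-squared reduction. This is a toy stand-in for the regularized multiband model (which uses truncated Fourier series and Tikhonov regularization), with simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two bands of a simulated periodic variable observed at random times;
# the period and phase are shared, the mean magnitude differs per band.
true_period = 0.6
t = rng.uniform(0, 30, 120)
band = rng.integers(0, 2, 120)
mag = (np.where(band == 0, 15.0, 15.8)
       + 0.4 * np.sin(2 * np.pi * t / true_period)
       + rng.normal(0, 0.05, 120))

def multiband_power(t, band, mag, periods):
    """Least-squares 'periodogram': at each trial period, fit a single
    shared sinusoid plus a per-band constant offset, and score the
    fractional reduction in residual variance."""
    powers = []
    for p in periods:
        omega = 2 * np.pi / p
        cols = [band == 0, band == 1, np.sin(omega * t), np.cos(omega * t)]
        X = np.column_stack([c.astype(float) for c in cols])
        beta, *_ = np.linalg.lstsq(X, mag, rcond=None)
        resid = mag - X @ beta
        base = mag - np.where(band == 0, mag[band == 0].mean(),
                              mag[band == 1].mean())
        powers.append(1.0 - resid @ resid / (base @ base))
    return np.array(powers)

periods = np.linspace(0.4, 0.9, 501)
power = multiband_power(t, band, mag, periods)
best_period = periods[np.argmax(power)]
```

    The per-band offset columns play the role of the band-specific terms in the full model; the shared sine/cosine pair is the one-term "base model" that ties the bands together.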

  18. VizieR Online Data Catalog: Photometry/spectroscopic measurements for KA1858+4850 (Pei+, 2014)

    NASA Astrophysics Data System (ADS)

    Pei, L.; Barth, A. J.; Aldering, G. S.; Briley, M. M.; Carroll, C. J.; Carson, D. J.; Cenko, S. B.; Clubb, K. I.; Cohen, D. P.; Cucchiara, A.; Desjardins, T. D.; Edelson, R.; Fang, J. J.; Fedrow, J. M.; Filippenko, A. V.; Fox, O. D.; Furniss, A.; Gates, E. L.; Gregg, M.; Gustafson, S.; Horst, J. C.; Joner, M. D.; Kelly, P. L.; Lacy, M.; Laney, C. D.; Leonard, D. C.; Li, W.; Malkan, M. A.; Margon, B.; Neeleman, M.; Nguyen, M. L.; Prochaska, J. X.; Ross, N. R.; Sand, D. J.; Searcy, K. J.; Shivvers, I. S.; Silverman, J. M.; Smith, G. H.; Suzuki, N.; Smith, K. L.; Tytler, D.; Werk, J. K.; Worseck, G.

    2017-05-01

    We employed the Lick Observatory 3 m Shane telescope with the Kast Spectrograph and five other ground-based telescopes to spectroscopically and photometrically monitor KA1858+4850 from 2012 February to November. Reverberation mapping requires a continuum light curve with high sampling cadence and S/N. To achieve this, we obtained V-band images from ground-based telescopes and used aperture photometry to construct a light curve for KA1858+4850 that has nearly nightly sampling for a span of 290 days. For several reasons, we chose to use the V-band light curve rather than the Kepler light curve for reverberation measurements. (2 data files).

  19. Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models

    PubMed Central

    Hillis, Stephen L.

    2015-01-01

    A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405
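
    The "improperness" of the binormal model is easy to exhibit: the curve TPF = Phi(a + b * Phi^{-1}(FPF)) with b != 1 always dips below the chance line at one end. A short sketch (parameter values invented):

```python
from statistics import NormalDist

phi = NormalDist().cdf
phi_inv = NormalDist().inv_cdf

def binormal_roc(fpf, a, b):
    """Binormal ROC curve: TPF = Phi(a + b * Phi^{-1}(FPF))."""
    return phi(a + b * phi_inv(fpf))

# With b != 1 the binormal curve is 'improper': near one end it crosses
# below the chance line TPF = FPF, producing the hook noted above.
# Here b > 1, so the hook appears near the origin (where
# Phi^{-1}(FPF) < -a / (b - 1)).
a, b = 1.0, 1.8
grid = [i / 1000 for i in range(1, 1000)]
tpf = [binormal_roc(x, a, b) for x in grid]
hook = any(y < x for x, y in zip(grid, tpf))
```

    The binormal-LR (equivalently bi-chi-squared) model avoids this by construction, since a decision variable monotone in its likelihood ratio yields a concave, proper ROC curve.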

  20. Prestraining and Its Influence on Subsequent Fatigue Life

    NASA Technical Reports Server (NTRS)

    Halford, Gary R.; Mcgaw, Michael A.; Kalluri, Sreeramesh

    1995-01-01

    An experimental program was conducted to study the damaging effects of tensile and compressive prestrains on the fatigue life of nickel-base, Inconel 718 superalloy at room temperature. To establish baseline fatigue behavior, virgin specimens with a solid uniform gage section were fatigued to failure under fully-reversed strain-control. Additional specimens were prestrained to 2 percent, 5 percent, and 10 percent (engineering strains) in the tensile direction and to 2 percent (engineering strain) in the compressive direction under stroke-control, and were subsequently fatigued to failure under fully-reversed strain-control. Experimental results are compared with estimates of remaining fatigue lives (after prestraining) using three life prediction approaches: (1) the Linear Damage Rule; (2) the Linear Strain and Life Fraction Rule; and (3) the nonlinear Damage Curve Approach. The Smith-Watson-Topper parameter was used to estimate fatigue lives in the presence of mean stresses. Among the cumulative damage rules investigated, best remaining fatigue life predictions were obtained with the nonlinear Damage Curve Approach.

  1. The Final Kepler Planet Candidate Catalog (DR25)

    NASA Astrophysics Data System (ADS)

    Coughlin, Jeffrey; Thompson, Susan E.; Kepler Team

    2017-06-01

    We present Kepler's final planet candidate catalog, which is based on the Q1--Q17 DR25 data release and was created to allow for accurate calculations of planetary occurrence rates. We discuss improvements made to our fully automated candidate vetting procedure, which yields specific categories of false positives and a disposition score value to indicate decision confidence. We present the use of light curve inversion and scrambling, in addition to our continued use of pixel-level transit injection, to produce artificial planet candidates and false positives. Since these simulated data sets were subjected to the same automated vetting procedure as the real data set, we are able to measure both the completeness and reliability of the catalog. The DR25 catalog, source code, and a multitude of completeness and reliability data products are available at the Exoplanet Archive (http://exoplanetarchive.ipac.caltech.edu). The DR25 light curves and pixel-level data are available at MAST (http://archive.stsci.edu/kepler).

  2. Evaluation of postprandial glucose excursion using a novel minimally invasive glucose area-under-the-curve monitoring system.

    PubMed

    Kuranuki, Sachi; Sato, Toshiyuki; Okada, Seiki; Hosoya, Samiko; Seko, Akinobu; Sugihara, Kaya; Nakamura, Teiji

    2013-01-01

    To develop a minimally invasive interstitial fluid extraction technology (MIET) to monitor postprandial glucose area under the curve (AUC) without blood sampling, we evaluated the accuracy of glucose AUC measured by MIET and compared it with that obtained by blood sampling after food intake. Interstitial fluid glucose AUC (IG-AUC) following consumption of 6 different types of foods was measured by MIET. MIET consisted of stamping microneedle arrays, placing hydrogel patches on the areas, and calculating IG-AUC based on glucose levels in the hydrogels. Glycemic index (GI) was determined using IG-AUC and reference AUC measured by blood sampling. IG-AUC strongly correlated with reference AUC (R = 0.91), and GI determined using IG-AUC showed good correlation with that determined by reference AUC (R = 0.88). IG-AUC obtained by MIET can accurately predict the postprandial glucose excursion without blood sampling. In addition, the feasibility of GI measurement by MIET was confirmed.
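
    The abstract does not give the AUC formula; a common convention for postprandial glucose AUC (and the GI calculation built on it) is the incremental trapezoidal rule, sketched below with hypothetical sampling times and concentrations:

```python
def glucose_auc(times_min, glucose, baseline=None):
    """Incremental area under the glucose curve by the trapezoidal rule.

    times_min: sampling times in minutes; glucose: concentrations (e.g. mg/dL).
    Area is computed above the baseline (first sample by default), with
    negative increments clipped to zero, a common incremental-AUC convention."""
    if baseline is None:
        baseline = glucose[0]
    auc = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        a = max(glucose[i - 1] - baseline, 0.0)
        b = max(glucose[i] - baseline, 0.0)
        auc += 0.5 * (a + b) * dt
    return auc

# Glycemic index: AUC of the test food relative to the reference food, x100.
# The curves below are made-up illustrative values, not data from the study.
test_auc = glucose_auc([0, 30, 60, 90, 120], [90, 150, 130, 110, 95])
ref_auc = glucose_auc([0, 30, 60, 90, 120], [90, 170, 150, 120, 100])
gi = 100.0 * test_auc / ref_auc
```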

  3. Main-belt Asteroids in the K2 Uranus Field

    NASA Astrophysics Data System (ADS)

    Molnár, L.; Pál, A.; Sárneczky, K.; Szabó, R.; Vinkó, J.; Szabó, Gy. M.; Kiss, Cs.; Hanyecz, O.; Marton, G.; Kiss, L. L.

    2018-02-01

    We present the K2 light curves of a large sample of untargeted main-belt asteroids (MBAs) detected with the Kepler Space Telescope. The asteroids were observed within the Uranus superstamp, a relatively large, continuous field with a low stellar background designed to cover the planet Uranus and its moons during Campaign 8 of the K2 mission. The superstamp offered the possibility of obtaining precise, uninterrupted light curves of a large number of MBAs and thus determining unambiguous rotation rates for them. We obtained photometry for 608 MBAs, and were able to determine or estimate rotation rates for 90 targets, of which 86 had no known values before. In an additional 16 targets we detected incomplete cycles and/or eclipse-like events. We found the median rotation period to be significantly longer than that derived from ground-based observations, indicating that the latter are biased toward shorter rotation periods. Our study highlights the need for and benefits of further continuous photometry of asteroids.

  4. Testing for nonrandom shape similarity between sister cells using automated shape comparison

    NASA Astrophysics Data System (ADS)

    Guo, Monica; Marshall, Wallace F.

    2009-02-01

    Several reports in the biological literature have indicated that when a living cell divides, the two daughter cells have a tendency to be mirror images of each other in terms of their overall cell shape. This phenomenon would be consistent with inheritance of spatial organization from mother cell to daughters. However the published data rely on a small number of examples that were visually chosen, raising potential concerns about inadvertent selection bias. We propose to revisit this issue using automated quantitative shape comparison methods which would have no contribution from the observer and which would allow statistical testing of similarity in large numbers of cells. In this report we describe a first order approach to the problem using rigid curve matching. Using test images, we compare a pointwise correspondence based distance metric with a chamfer matching strategy and find that the latter provides better correspondence and smaller distances between aligned curves, especially when we allow nonrigid deformation of the outlines in addition to rotation.

  5. Electrochemical and spectroscopic study on the interaction between isoprenaline and DNA using multivariate curve resolution-alternating least squares.

    PubMed

    Ni, Yongnian; Wei, Min; Kokot, Serge

    2011-11-01

    Interaction of isoprenaline (ISO) with calf-thymus DNA was studied by spectroscopic and electrochemical methods. The behavior of ISO was investigated at a glassy carbon electrode (GCE) by cyclic voltammetry (CV) and differential pulse stripping voltammetry (DPSV); ISO was oxidized and an irreversible oxidation peak was observed. The binding constant K and the stoichiometric coefficient m of ISO with DNA were evaluated. Also, with the addition of DNA, hyperchromicity of the UV-vis absorption spectra of ISO was noted, while the fluorescence intensity decreased significantly. The multivariate curve resolution-alternating least squares (MCR-ALS) chemometrics method was applied to resolve the combined spectroscopic data matrix, which was obtained by the UV-vis and fluorescence methods. Pure spectra of ISO, DNA, and the ISO-DNA complex, and their concentration profiles, were then successfully obtained. The results indicated that the ISO molecule intercalated into the base-pairs of DNA, and the ISO-DNA complex was formed. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. BINARY CENTRAL STARS OF PLANETARY NEBULAE DISCOVERED THROUGH PHOTOMETRIC VARIABILITY. IV. THE CENTRAL STARS OF HaTr 4 AND Hf 2-2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hillwig, Todd C.; Schaub, S. C.; Bond, Howard E.

    We explore the photometrically variable central stars of the planetary nebulae HaTr 4 and Hf 2-2. Both have previously been classified as close binary star systems based on their light curves alone. Here, we present additional arguments and data confirming the identification of both as close binaries with an irradiated cool companion to the hot central star. We include updated light curves, orbital periods, and preliminary binary modeling for both systems. We also identify for the first time the central star of HaTr 4 as an eclipsing binary. Neither system has been well studied in the past, but we utilize the small amount of existing data to limit possible binary parameters, including system inclination. These parameters are then compared to nebular parameters to further our knowledge of the relationship between binary central stars of planetary nebulae and nebular shaping and ejection.

  7. VizieR Online Data Catalog: Lick AGN monitoring 2011: light curves (Barth+, 2015)

    NASA Astrophysics Data System (ADS)

    Barth, A. J.; Bennert, V. N.; Canalizo, G.; Filippenko, A. V.; Gates, E. L.; Greene, J. E.; Li, W.; Malkan, M. A.; Pancoast, A.; Sand, D. J.; Stern, D.; Treu, T.; Woo, J.-H.; Assef, R. J.; Bae, H.-J.; Brewer, B. J.; Cenko, S. B.; Clubb, K. I.; Cooper, M. C.; Diamond-Stanic, A. M.; Hiner, K. D.; Honig, S. F.; Hsiao, E.; Kandrashoff, M. T.; Lazarova, M. S.; Nierenberg, A. M.; Rex, J.; Silverman, J. M.; Tollerud, E. J.; Walsh, J. L.

    2015-05-01

    This project was allocated 69 nights at the Lick 3m Shane telescope, distributed between 2011 March 27 and June 13. Observations were conducted using the Kast double spectrograph (3440-5515Å on the blue side and 5410-8200Å on the red side). In order to extend our light curves for two AGNs, we also requested additional observations from other observers using the Kast spectrograph: Mrk 50 from 2011 January through March, and Zw 229-015 in June and July. For Zw 229-015, three additional observations were taken 20-23 days after the end of our main campaign. See section 3. (2 data files).

  8. Qualitative and quantitative analysis of an additive element in metal oxide nanometer film using laser induced breakdown spectroscopy.

    PubMed

    Xiu, Junshan; Liu, Shiming; Sun, Meiling; Dong, Lili

    2018-01-20

    The photoelectric performance of metal-ion-doped TiO2 film improves as the compositions and concentrations of additive elements change. In this work, TiO2 films doped with different Sn concentrations were obtained with the hydrothermal method. Qualitative and quantitative analysis of the Sn element in the TiO2 films was achieved with laser induced breakdown spectroscopy (LIBS), with the calibration curves plotted accordingly. The photoelectric characteristics of TiO2 films doped with different Sn contents were observed with UV-visible absorption spectra and J-V curves. All results showed that Sn doping red-shifts the optical absorption and advances the photoelectric properties of the TiO2 films. When the concentration of Sn doping in the TiO2 films, calculated from the LIBS calibration curves, was 11.89 mmol/L, the current density of the film was the largest, indicating the best photoelectric performance. This shows that LIBS is a potential and feasible measurement method for qualitative and quantitative analysis of additive elements in metal oxide nanometer films.

  9. AL Pictoris and FR Piscium: Two Regular Blazhko RR Lyrae Stars

    NASA Astrophysics Data System (ADS)

    de Ponthière, P.; Hambsch, F.-J.; Menzies, K.; Sabo, R.

    2014-12-01

    The results presented are a continuation of observing campaigns conducted by a small group of amateur astronomers interested in the Blazhko effect of RR Lyrae stars. The goal of these observations is to confirm the RR Lyrae Blazhko effect and to detect any additional Blazhko modulation which cannot be identified from all sky survey data-mining. The Blazhko effect of the two observed stars is confirmed, but no additional Blazhko modulations have been detected. The observation of the RR Lyrae star AL Pictoris during 169 nights was conducted from San Pedro de Atacama (Chile). From the observed light curve, 49 pulsation maxima have been measured. Fourier analyses of (O-C), magnitude at maximum light (Mmax), and the complete light curve have provided a confirmation of published pulsation and Blazhko periods, 0.548622 and 34.07 days, respectively. The second multi-longitude observation campaign focused on the RR Lyrae star FR Piscium and was performed from Europe, the United States, and Chile. Fourier analyses of the light curve and of 59 measured brightness maxima have improved the accuracy of pulsation and Blazhko periods to 0.45568 and 51.31 days, respectively. For both stars, no additional Blazhko modulations have been detected.

  10. A high-throughput microtiter plate based method for the determination of peracetic acid and hydrogen peroxide.

    PubMed

    Putt, Karson S; Pugh, Randall B

    2013-01-01

    Peracetic acid is gaining usage in numerous industries that have found a myriad of uses for its antimicrobial activity. However, rapid high-throughput quantitation methods for peracetic acid and hydrogen peroxide are lacking. Herein, we describe the development of a high-throughput microtiter-plate-based assay built upon well-known and trusted titration chemical reactions. The adaptation of these titration chemistries to rapid plate-based absorbance methods for the sequential determination of hydrogen peroxide specifically and the total amount of peroxides present in solution is described. The results of these methods were compared to those of a standard titration and found to be in good agreement. Additionally, the utility of the developed method is demonstrated through the generation of degradation curves of both peracetic acid and hydrogen peroxide in a mixed solution.

  11. A High-Throughput Microtiter Plate Based Method for the Determination of Peracetic Acid and Hydrogen Peroxide

    PubMed Central

    Putt, Karson S.; Pugh, Randall B.

    2013-01-01

    Peracetic acid is gaining usage in numerous industries that have found a myriad of uses for its antimicrobial activity. However, rapid high-throughput quantitation methods for peracetic acid and hydrogen peroxide are lacking. Herein, we describe the development of a high-throughput microtiter-plate-based assay built upon well-known and trusted titration chemical reactions. The adaptation of these titration chemistries to rapid plate-based absorbance methods for the sequential determination of hydrogen peroxide specifically and the total amount of peroxides present in solution is described. The results of these methods were compared to those of a standard titration and found to be in good agreement. Additionally, the utility of the developed method is demonstrated through the generation of degradation curves of both peracetic acid and hydrogen peroxide in a mixed solution. PMID:24260173

  12. Reconstruction of quadratic curves in 3D using two or more perspective views: simulation studies

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjeev; Sukavanam, N.; Balasubramanian, R.

    2006-01-01

    The shapes of many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have sufficient distinctive features to apply conventional feature-based reconstruction algorithms. In this paper, we describe a method of reconstructing a quadratic curve in 3-D space as the intersection of two cones containing the respective projected curve images. The correspondence between this pair of projections of the curve is assumed to be established in this work. Using least-squares curve fitting, the parameters of each curve in 2-D space are found, and from these the 3-D quadratic curve is reconstructed. Relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given. The described reconstruction methodology is evaluated through simulation studies. It is applicable to LBW decisions in cricket, missile trajectory estimation, robotic vision, path planning, etc.

  13. Empirical expression for DC magnetization curve of immobilized magnetic nanoparticles for use in biomedical applications

    NASA Astrophysics Data System (ADS)

    Elrefai, Ahmed L.; Sasayama, Teruyoshi; Yoshida, Takashi; Enpuku, Keiji

    2018-05-01

    We studied the magnetization (M-H) curve of immobilized magnetic nanoparticles (MNPs) used for biomedical applications. First, we performed numerical simulation on the DC M-H curve over a wide range of MNPs parameters. Based on the simulation results, we obtained an empirical expression for DC M-H curve. The empirical expression was compared with the measured M-H curves of various MNP samples, and quantitative agreements were obtained between them. We can also estimate the basic parameters of MNP from the comparison. Therefore, the empirical expression is useful for analyzing the M-H curve of immobilized MNPs for specific biomedical applications.
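
    The paper's empirical expression itself is not reproduced in the abstract. As a hedged illustration of the kind of M-H model being fitted, the classical Langevin function is the usual starting point for an ensemble of non-interacting superparamagnetic nanoparticles; the function names and parameter values below are assumptions for illustration, not the paper's expression:

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, the classical M-H shape
    for non-interacting superparamagnetic particles."""
    if abs(x) < 1e-6:
        return x / 3.0          # series expansion L(x) ~ x/3 avoids 1/x blow-up
    return 1.0 / math.tanh(x) - 1.0 / x

def magnetization(h_field, m_s, xi):
    """M(H) = M_s * L(xi * H): saturation magnetization times the Langevin shape.
    xi lumps particle moment and temperature; units here are hypothetical."""
    return m_s * langevin(xi * h_field)

# Saturates toward m_s at large field; linear (slope m_s*xi/3) near zero field.
m_low = magnetization(0.001, m_s=100.0, xi=1.0)
m_high = magnetization(1000.0, m_s=100.0, xi=1.0)
```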

  14. Reduction in secondary dendrite arm spacing in cast eutectic Al-Si piston alloys by cerium addition

    NASA Astrophysics Data System (ADS)

    Ahmad, R.; Asmael, M. B. A.; Shahizan, N. R.; Gandouz, S.

    2017-01-01

    The effects of Ce on the secondary dendrite arm spacing (SDAS) and mechanical behavior of Al-Si-Cu-Mg alloys were investigated. The reduction of SDAS at different Ce concentrations was evaluated in a directional solidification experiment via computer-aided cooling curve thermal analysis (CA-CCTA). The results showed that 0.1wt%-1.0wt% Ce addition resulted in a rapid solidification time, Δts, and a low solidification temperature, ΔTS, whereas 0.1wt% Ce resulted in a fast solidification time, Δtα-Al, of the α-Al phase. Furthermore, Ce addition refined the SDAS, which was reduced to approximately 36%. The mechanical properties of the alloys with and without Ce were investigated using tensile and hardness tests. The quality index (Q) and ultimate tensile strength (UTS) of the Al-Si-Cu-Mg alloys significantly improved with the addition of 0.1wt% Ce. Moreover, the base alloy hardness improved with increasing Ce concentration.

  15. Activation energy associated with the electromigration of oligosaccharides through viscosity modifier and polymeric additive containing background electrolytes.

    PubMed

    Kerékgyártó, Márta; Járvás, Gábor; Novák, Levente; Guttman, András

    2016-02-01

    The activation energy related to the electromigration of oligosaccharides can be determined from their measured electrophoretic mobilities at different temperatures. The effects of a viscosity modifier (ethylene glycol) and a polymeric additive (linear polyacrylamide) on the electrophoretic mobility of linear sugar oligomers with α1-4 linked glucose units (maltooligosaccharides) were studied in CE using the activation energy concept. The electrophoretic separations of 8-aminopyrene-1,3,6-trisulfonate-labeled maltooligosaccharides were monitored by LIF detection in the temperature range of 20-50°C, using either 0-60% ethylene glycol (viscosity modifier) or 0-3% linear polyacrylamide (polymeric additive) containing BGEs. Activation energy curves were constructed based on the slopes of the Arrhenius plots. With the use of linear polyacrylamide additive, solute size-dependent activation energy variations were found for the maltooligosaccharides with polymerization degrees below and above maltoheptaose (DP 7), probably due to molecular conformation changes and possible matrix interaction effects. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
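
    The activation energy concept referenced above reduces to fitting the slope of an Arrhenius plot, ln μ versus 1/T, for which Ea = -slope·R. A self-contained sketch with synthetic mobility data (the numbers are illustrative, not from the study):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(temps_K, mobilities):
    """Slope of the Arrhenius plot ln(mu) vs 1/T equals -Ea/R,
    so Ea = -slope * R (J/mol). Ordinary least squares, pure Python."""
    x = [1.0 / T for T in temps_K]
    y = [math.log(m) for m in mobilities]
    n = len(x)
    xm = sum(x) / n
    ym = sum(y) / n
    slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
            sum((xi - xm) ** 2 for xi in x)
    return -slope * R

# Synthetic mobilities generated with Ea = 20 kJ/mol; the fit should recover it.
Ea_true = 20000.0
temps = [293.15, 303.15, 313.15, 323.15]   # 20-50 C, as in the study's range
mob = [1e-8 * math.exp(-Ea_true / (R * T)) for T in temps]
Ea_fit = activation_energy(temps, mob)
```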

  16. A novel flexible capacitive touch pad based on graphene oxide film.

    PubMed

    Tian, He; Yang, Yi; Xie, Dan; Ren, Tian-Ling; Shu, Yi; Zhou, Chang-Jian; Sun, Hui; Liu, Xuan; Zhang, Cang-Hai

    2013-02-07

    Recently, graphene oxide (GO) supercapacitors with ultra-high energy densities have received significant attention. In addition to energy storage, GO capacitors might also have broad applications in renewable energy engineering, such as vibration and sound energy harvesting. Here, we experimentally create a macroscopic flexible capacitive touch pad based on GO film. An obvious touch "ON" to "OFF" voltage ratio up to ∼60 has been observed. Moreover, we tested the capacitor structure on both flat and curved surfaces and it showed high response sensitivity under fast touch rates. Collectively, our results raise the exciting prospect that the realization of macroscopic flexible keyboards with large-area graphene based materials is technologically feasible, which may open up important applications in control and interface design for solar cells, speakers, supercapacitors, batteries and MEMS systems.

  17. Superresolution confocal technology for displacement measurements based on total internal reflection.

    PubMed

    Kuang, Cuifang; Ali, M Yakut; Hao, Xiang; Wang, Tingting; Liu, Xu

    2010-10-01

    In order to achieve a higher axial resolution for displacement measurement, a novel method is proposed based on a total internal reflection filter and the confocal microscope principle. A theoretical analysis of the basic measurement principles is presented. The analysis reveals that the proposed confocal detection scheme greatly enhances the resolution obtained from the nonlinearity of the reflectance curve. In addition, a simple prototype system has been developed based on the theoretical analysis, and a series of experiments have been performed under laboratory conditions to verify the system's feasibility, accuracy, and stability. The experimental results demonstrate that the axial resolution in displacement measurements is better than 1 nm over a range of 200 nm, threefold better than what can be achieved using a plane reflector.

  18. Analysis of different tunneling mechanisms of InxGa1−xAs/AlGaAs tunnel junction light-emitting transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Cheng-Han; Wu, Chao-Hsin, E-mail: chaohsinwu@ntu.edu.tw; Graduate Institute of Photonics and Optoelectronics, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei 106, Taiwan

    The electrical and optical characteristics of tunnel junction light-emitting transistors (TJLETs) with different indium mole fractions (x = 5% and 2.5%) in the InxGa1−xAs base-collector tunnel junctions have been investigated. Two electron tunneling mechanisms (photon-assisted or direct tunneling) provide additional currents to the electrical output and resupply holes to the base region, resulting in the upward slope of the I-V curves and enhanced optical output under forward-active operation. The larger direct tunneling probability and stronger Franz-Keldysh absorption of the 5% TJLET lead to a higher collector current slope and less optical intensity enhancement when the base-collector junction is reverse-biased.

  19. A novel flexible capacitive touch pad based on graphene oxide film

    NASA Astrophysics Data System (ADS)

    Tian, He; Yang, Yi; Xie, Dan; Ren, Tian-Ling; Shu, Yi; Zhou, Chang-Jian; Sun, Hui; Liu, Xuan; Zhang, Cang-Hai

    2013-01-01

    Recently, graphene oxide (GO) supercapacitors with ultra-high energy densities have received significant attention. In addition to energy storage, GO capacitors might also have broad applications in renewable energy engineering, such as vibration and sound energy harvesting. Here, we experimentally create a macroscopic flexible capacitive touch pad based on GO film. An obvious touch ``ON'' to ``OFF'' voltage ratio up to ~60 has been observed. Moreover, we tested the capacitor structure on both flat and curved surfaces and it showed high response sensitivity under fast touch rates. Collectively, our results raise the exciting prospect that the realization of macroscopic flexible keyboards with large-area graphene based materials is technologically feasible, which may open up important applications in control and interface design for solar cells, speakers, supercapacitors, batteries and MEMS systems.

  20. Artificial neural network approach to predict surgical site infection after free-flap reconstruction in patients receiving surgery for head and neck cancer.

    PubMed

    Kuo, Pao-Jen; Wu, Shao-Chun; Chien, Peng-Chen; Chang, Shu-Shya; Rau, Cheng-Shyuan; Tai, Hsueh-Ling; Peng, Shu-Hui; Lin, Yi-Chun; Chen, Yi-Chun; Hsieh, Hsiao-Yun; Hsieh, Ching-Hua

    2018-03-02

    The aim of this study was to develop an effective surgical site infection (SSI) prediction model in patients receiving free-flap reconstruction after surgery for head and neck cancer using an artificial neural network (ANN), and to compare its predictive power with that of conventional logistic regression (LR). There were 1,836 patients with 1,854 free-flap reconstructions and 438 postoperative SSIs in the dataset for analysis. They were randomly assigned in a ratio of 7:3 into a training set and a test set. Based on comprehensive characteristics of patients and diseases in the absence or presence of operative data, prediction of SSI was performed at two time points (pre-operatively and post-operatively) with a feed-forward ANN and the LR models. In addition to the calculated accuracy, sensitivity, and specificity, the predictive performance of the ANN and LR models was assessed based on area under the curve (AUC) measures of receiver operating characteristic curves and the Brier score. The ANN had a significantly higher AUC for post-operative prediction (0.892) and pre-operative prediction (0.808) than LR (both P < 0.0001). In addition, the AUC of the ANN's post-operative prediction was significantly higher than that of its pre-operative prediction (P < 0.0001). With the highest AUC and the lowest Brier score (0.090), the post-operative prediction by ANN had the highest overall predictive performance in predicting SSI after free-flap reconstruction in patients receiving surgery for head and neck cancer.
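
    The AUC comparison central to this study can be computed without any ML library: the AUC equals the Mann-Whitney probability that a randomly chosen positive case outranks a randomly chosen negative one. A minimal sketch with hypothetical labels and model scores:

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative
    (ties count one half). O(n*m) pairwise version, fine for a sketch."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical data: label 1 = SSI occurred, scores = model outputs.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.90, 0.70, 0.40, 0.35, 0.30, 0.60, 0.20]
auc = auc_score(labels, scores)
```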

  1. Reliability Based Geometric Design of Horizontal Circular Curves

    NASA Astrophysics Data System (ADS)

    Rajbongshi, Pabitra; Kalita, Kuldeep

    2018-06-01

    Geometric design of a horizontal circular curve primarily involves the radius of the curve and the stopping sight distance at the curve section. The minimum radius is decided based on the lateral thrust exerted on vehicles, and the minimum stopping sight distance is provided to maintain safety in the longitudinal direction of travel. The available sight distance at a site can be regulated by changing the radius and the middle ordinate at the curve section. Both radius and sight distance depend on the design speed. Vehicle speed at any road section is a variable parameter; therefore, the 98th percentile speed is normally taken as the design speed. This work presents a probabilistic approach to evaluating stopping sight distance, considering the variability of all its input parameters. It is observed that the 98th percentile sight distance value is much lower than the sight distance corresponding to the 98th percentile speed. The distribution of the sight distance parameter is also studied and found to follow a lognormal distribution. Finally, reliability-based design charts are presented for both plain and hill regions, considering the effect of lateral thrust.
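
    The probabilistic treatment described above can be sketched as a Monte Carlo simulation over the standard stopping-sight-distance formula SSD = 0.278·v·t + v²/(254·f), with v in km/h and SSD in metres. The input distributions below are illustrative assumptions, not the paper's calibrated inputs:

```python
import random

def stopping_sight_distance(v_kmh, t_reaction_s, f_long):
    """Standard SSD: reaction distance 0.278*v*t plus braking distance
    v^2 / (254*f), v in km/h, result in metres."""
    return 0.278 * v_kmh * t_reaction_s + v_kmh ** 2 / (254.0 * f_long)

def ssd_percentile(n, pct, seed=0):
    """Monte Carlo sketch: sample speed, reaction time, and friction from
    assumed normal distributions (hypothetical means/sigmas) and return
    the pct-th percentile of the resulting SSD distribution."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        v = rng.gauss(80.0, 8.0)       # speed, km/h
        t = rng.gauss(2.5, 0.4)        # reaction time, s
        f = rng.gauss(0.35, 0.03)      # longitudinal friction coefficient
        samples.append(stopping_sight_distance(v, t, f))
    samples.sort()
    return samples[int(pct / 100.0 * n)]

# 98th-percentile SSD from the joint distribution, versus the deterministic
# SSD evaluated at the 98th-percentile speed (mean + 2.054 sigma) alone.
ssd_98 = ssd_percentile(20000, 98)
ssd_at_v98 = stopping_sight_distance(80.0 + 2.054 * 8.0, 2.5, 0.35)
```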

  2. Electrochemical approach for passivating steel and other metals and for the simultaneous production of a biocide to render water potable

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Potentiostatic polarization curves indicated that the cathodic reactions in deaerated KI-I2 water solutions were due to iodine reduction and hydrogen evolution. In the presence of oxygen an additional reduction wave appeared. Anodic polarization curves revealed that iodine could be produced in the potential region from +600 to +1000 mV vs. SCE.

  3. Using the Arduino with MakerPlot software for the display of resonance curves characteristic of a series LCR circuit

    NASA Astrophysics Data System (ADS)

    Atkin, Keith

    2016-11-01

    This paper shows how very simple circuitry attached to an Arduino microcontroller can be used for the measurement of both frequency and amplitude of a sinusoidal signal. It is also shown how the addition of a readily available software package, MakerPlot, can facilitate the display and investigation of resonance curves for a series LCR circuit.

  4. Management

    Treesearch

    L.J. Barrett

    1934-01-01

    The Station has received recent inquiries regarding site index curves for species other than the mixed hardwood stands for which such data are already available. The attached curves for second-growth yellow poplar and white pine will be found suitable for approximate site determinations in the Southern Appalachians. The white pine curves are based upon measurements of...

  5. A Simplified Micromechanical Modeling Approach to Predict the Tensile Flow Curve Behavior of Dual-Phase Steels

    NASA Astrophysics Data System (ADS)

    Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal

    2017-11-01

    Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels; the modeling approach developed in this work attempts to overcome specific limitations of both. This approach combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
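
    The two-step scheme (per-phase flow curves, then rule of mixtures) can be sketched compactly. Here the Hollomon law σ = K·εⁿ stands in for the dislocation-based single-phase curves, and all parameter values are hypothetical, not the paper's:

```python
def hollomon(strain, k, n):
    """Hollomon hardening law sigma = K * eps^n -- a common stand-in for
    single-phase flow curves (K in MPa; parameters hypothetical)."""
    return k * strain ** n

def dual_phase_flow(strain, f_martensite):
    """Rule of mixtures under the iso-strain assumption: composite stress is
    the phase-fraction-weighted sum of ferrite and martensite flow stresses
    evaluated at the same strain."""
    sigma_f = hollomon(strain, k=800.0, n=0.25)    # ferrite, MPa
    sigma_m = hollomon(strain, k=2200.0, n=0.12)   # martensite, MPa
    return (1.0 - f_martensite) * sigma_f + f_martensite * sigma_m

# Flow curve points for a dual-phase steel with 30% martensite.
curve = [(eps, dual_phase_flow(eps, 0.30)) for eps in (0.02, 0.05, 0.10)]
```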

  6. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    NASA Astrophysics Data System (ADS)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from empirical studies using animals as the subject of experiment or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behaviors were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulation results for the optical density ratio at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method for extended calibration curve studies using other wavelength pairs.
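
    The OD/ODR relationship used as the basis of the calibration curve can be illustrated directly: OD = log10(I0/I) from the detected flux, and the ODR is the ratio of pulsatile (systole minus diastole) ODs at two wavelengths. The wavelength pair and flux values below are assumptions for illustration; the abstract does not specify them:

```python
import math

def optical_density(i_incident, i_detected):
    """OD = log10(I0 / I), computed from the detected ray flux."""
    return math.log10(i_incident / i_detected)

def optical_density_ratio(od_red_sys, od_red_dia, od_ir_sys, od_ir_dia):
    """ODR: pulsatile OD at the red wavelength over pulsatile OD at the
    infrared wavelength -- the quantity mapped to SaO2 by a calibration
    curve. The red/IR labels are an assumption for this sketch."""
    return (od_red_sys - od_red_dia) / (od_ir_sys - od_ir_dia)

# Hypothetical detected fluxes (arbitrary units) at systole/diastole.
odr = optical_density_ratio(
    optical_density(1.0, 0.30), optical_density(1.0, 0.35),
    optical_density(1.0, 0.40), optical_density(1.0, 0.50),
)
```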

  7. Characterizing time series via complexity-entropy curves

    NASA Astrophysics Data System (ADS)

    Ribeiro, Haroldo V.; Jauregui, Max; Zunino, Luciano; Lenzi, Ervin K.

    2017-06-01

    The search for patterns in time series is a very common task when dealing with complex systems. This is usually accomplished by employing a complexity measure such as entropies and fractal dimensions. However, such measures usually only capture a single aspect of the system dynamics. Here, we propose a family of complexity measures for time series based on a generalization of the complexity-entropy causality plane. By replacing the Shannon entropy with a monoparametric entropy (the Tsallis q-entropy) and considering the proper generalization of the statistical complexity (q-complexity), we build up a parametric curve (the q-complexity-entropy curve) that is used for characterizing and classifying time series. Based on simple exact results and numerical simulations of stochastic processes, we show that these curves can distinguish among different long-range, short-range, and oscillating correlated behaviors. Also, we verify that simulated chaotic and stochastic time series can be distinguished based on whether these curves are open or closed. We further test this technique in experimental scenarios related to chaotic laser intensity, stock price, sunspot, and geomagnetic dynamics, confirming its usefulness. Finally, we prove that these curves enhance the automatic classification of time series with long-range correlations and interbeat intervals of healthy subjects and patients with heart disease.
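
    As a concrete illustration, the entropy half of such a curve can be sketched by estimating Bandt-Pompe ordinal-pattern probabilities and evaluating a normalized Tsallis q-entropy over a range of q. This is a simplified sketch: the paper's q-complexity additionally requires a generalized disequilibrium term, which is omitted here.

```python
import itertools
import math
import numpy as np

def ordinal_probs(x, d=3):
    # Bandt-Pompe probabilities of ordinal patterns of embedding dimension d.
    patterns = {p: 0 for p in itertools.permutations(range(d))}
    for i in range(len(x) - d + 1):
        patterns[tuple(np.argsort(x[i:i + d]))] += 1
    counts = np.array(list(patterns.values()), dtype=float)
    return counts / counts.sum()

def tsallis_entropy(p, q):
    # Tsallis q-entropy normalized by its maximum (uniform distribution);
    # q -> 1 recovers the normalized Shannon entropy.
    n = len(p)
    if abs(q - 1.0) < 1e-12:
        nz = p[p > 0]
        return float(-np.sum(nz * np.log(nz)) / math.log(n))
    s = (1.0 - np.sum(p ** q)) / (q - 1.0)
    s_max = (1.0 - n ** (1.0 - q)) / (q - 1.0)
    return float(s / s_max)

rng = np.random.default_rng(0)
probs = ordinal_probs(rng.standard_normal(5000), d=3)
h_curve = [tsallis_entropy(probs, q) for q in (0.5, 1.0, 2.0)]
```

    For white noise the ordinal patterns are nearly equiprobable, so the normalized entropy stays close to 1 for every q, whereas correlated or chaotic series bend the curve away from that corner.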

  8. An Approach of Estimating Individual Growth Curves for Young Thoroughbred Horses Based on Their Birthdays

    PubMed Central

    ONODA, Tomoaki; YAMAMOTO, Ryuta; SAWAMURA, Kyohei; MURASE, Harutaka; NAMBO, Yasuo; INOUE, Yoshinobu; MATSUI, Akira; MIYAKE, Takeshi; HIRAI, Nobuhiro

    2014-01-01

    ABSTRACT We propose an approach for estimating individual growth curves based on the birthday information of Japanese Thoroughbred horses, with consideration of the seasonal compensatory growth that is a typical characteristic of seasonally breeding animals. The compensatory growth patterns appear only during the winter and spring seasons in the life of growing horses, and the meeting point between winter and spring depends on the birthday of each horse. We previously developed new growth curve equations for Japanese Thoroughbreds adjusting for compensatory growth. Based on these equations, a parameter denoting the birthday information was added to model the individual growth curves for each horse by shifting the meeting points in the compensatory growth periods. A total of 5,594 and 5,680 body weight and age measurements of Thoroughbred colts and fillies, respectively, and 3,770 withers height and age measurements of both sexes were used in the analyses. The results of predicted error difference and the Akaike Information Criterion showed that the individual growth curves using birthday information fit the body weight and withers height data better than those without it. The individual growth curve for each horse would be a useful tool for the feeding management of young Japanese Thoroughbreds in compensatory growth periods. PMID:25013356

  9. SU-E-J-122: The CBCT Dose Calculation Using a Patient Specific CBCT Number to Mass Density Conversion Curve Based On a Novel Image Registration and Organ Mapping Method in Head-And-Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, J; Lasio, G; Chen, S

    2015-06-15

    Purpose: To develop a CBCT HU correction method using a patient-specific HU to mass density conversion curve based on a novel image registration and organ mapping method for head-and-neck radiation therapy. Methods: There are three steps to generate a patient-specific CBCT HU to mass density conversion curve. First, we developed a novel robust image registration method based on sparseness analysis to register the planning CT (PCT) and the CBCT. Second, a novel organ mapping method was developed to transfer the organs at risk (OAR) contours from the PCT to the CBCT, and the corresponding mean HU values of each OAR were measured in both the PCT and CBCT volumes. Third, a set of PCT and CBCT HU to mass density conversion curves were created based on the mean HU values of the OARs and the corresponding mass density of the OARs in the PCT. Then, we compared our proposed conversion curve with the traditional Catphan phantom-based CBCT HU to mass density calibration curve. Both curves were input into the treatment planning system (TPS) for dose calculation. Last, the PTV and OAR doses, DVHs and dose distributions of the CBCT plans were compared to the original treatment plan. Results: One head-and-neck case, which contained a pair of PCT and CBCT volumes, was used. The dose differences between the PCT and CBCT plans using the proposed method are −1.33% for the mean PTV, 0.06% for PTV D95%, and −0.56% for the left neck. The dose differences between plans of the PCT and the CBCT corrected using the CATPhan-based method are −4.39% for the mean PTV, 4.07% for PTV D95%, and −2.01% for the left neck. Conclusion: The proposed CBCT HU correction method achieves better agreement with the original treatment plan compared to the traditional CATPhan-based calibration method.

  10. More Rapidly Rotating PMS M Dwarfs with Light Curves Suggestive of Orbiting Clouds of Material

    NASA Astrophysics Data System (ADS)

    Stauffer, John; Rebull, Luisa; David, Trevor J.; Jardine, Moira; Collier Cameron, Andrew; Cody, Ann Marie; Hillenbrand, Lynne A.; Barrado, David; van Eyken, Julian; Melis, Carl; Briceno, Cesar

    2018-02-01

    In a previous paper, using data from K2 Campaign 2, we identified 11 very low mass members of the ρ Oph and Upper Scorpius star-forming region as having periodic photometric variability and phased light curves showing multiple scallops or undulations. All of the stars with the “scallop-shell” light curve morphology are mid-to-late M dwarfs without evidence of active accretion and with photometric periods generally <1 day. Their phased light curves have too much structure to be attributed to non-axisymmetrically distributed photospheric spots and rotational modulation. We have now identified an additional eight probable members of the same star-forming region plus three stars in the Taurus star-forming region with this same light curve morphology and sharing the same period and spectral type range as the previous group. We describe the light curves of these new stars in detail and present their general physical characteristics. We also examine the properties of the overall set of stars in order to identify common features that might help elucidate the causes of their photometric variability.

  11. Rational Degenerations of M-Curves, Totally Positive Grassmannians and KP2-Solitons

    NASA Astrophysics Data System (ADS)

    Abenda, Simonetta; Grinevich, Petr G.

    2018-03-01

    We establish a new connection between the theory of totally positive Grassmannians and the theory of M-curves using the finite-gap theory for solitons of the KP equation. Here and in the following, KP equation denotes the Kadomtsev-Petviashvili 2 equation [see (1)], which is the first flow from the KP hierarchy. We also assume that all KP times are real. We associate to any point of the real totally positive Grassmannian Gr^{tp}(N,M) a reducible curve which is a rational degeneration of an M-curve of minimal genus g = N(M-N), and we reconstruct the real algebraic-geometric data à la Krichever for the underlying real bounded multiline KP soliton solutions. From this construction, it follows that these multiline solitons can be explicitly obtained by degenerating regular real finite-gap solutions corresponding to smooth M-curves. In our approach, we rule the addition of each new rational component to the spectral curve via an elementary Darboux transformation which corresponds to a section of a specific projection Gr^{tp}(r+1, M-N+r+1) → Gr^{tp}(r, M-N+r).

  12. Compression-based integral curve data reuse framework for flow visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Fan; Bi, Chongke; Guo, Hanqi

    Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives: high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reuse framework can achieve tens of times acceleration in a resource-limited environment compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method can provide fast integral curve retrieval for more complex data, such as unstructured mesh data.

  13. Automatic Synthesis of Panoramic Radiographs from Dental Cone Beam Computed Tomography Data.

    PubMed

    Luo, Ting; Shi, Changrong; Zhao, Xing; Zhao, Yunsong; Xu, Jinqiu

    2016-01-01

    In this paper, we propose an automatic method of synthesizing panoramic radiographs from dental cone beam computed tomography (CBCT) data for directly observing the whole dentition without the superimposition of other structures. This method consists of three major steps. First, the dental arch curve is generated from the maximum intensity projection (MIP) of 3D CBCT data. Then, based on this curve, the long axial curves of the upper and lower teeth are extracted to create a 3D panoramic curved surface describing the whole dentition. Finally, the panoramic radiograph is synthesized by developing this 3D surface. Both open-bite shaped and closed-bite shaped dental CBCT datasets were applied in this study, and the resulting images were analyzed to evaluate the effectiveness of this method. With the proposed method, a single-slice panoramic radiograph can clearly and completely show the whole dentition without the blur and superimposition of other dental structures. Moreover, thickened panoramic radiographs can also be synthesized with increased slice thickness to show more features, such as the mandibular nerve canal. One feature of the proposed method is that it is automatically performed without human intervention. Another feature of the proposed method is that it requires thinner panoramic radiographs to show the whole dentition than those produced by other existing methods, which contributes to the clarity of the anatomical structures, including the enamel, dentine and pulp. In addition, this method can rapidly process common dental CBCT data. The speed and image quality of this method make it an attractive option for observing the whole dentition in a clinical setting.
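
    The first step, the maximum intensity projection, is just a maximum taken along one axis of the voxel array. A minimal numpy sketch on a synthetic volume (not real CBCT data):

```python
import numpy as np

def mip(volume, axis=0):
    # Maximum intensity projection: keep the brightest voxel along an axis.
    return volume.max(axis=axis)

vol = np.zeros((4, 5, 6))   # synthetic volume with a single bright voxel
vol[2, 3, 1] = 7.0
proj = mip(vol, axis=0)     # projection has shape (5, 6)
```

    The subsequent steps of the method (arch-curve extraction and unrolling of the 3D panoramic surface) operate on projections of this kind.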

  14. Utility of genetic and non-genetic risk factors in prediction of type 2 diabetes: Whitehall II prospective cohort study.

    PubMed

    Talmud, Philippa J; Hingorani, Aroon D; Cooper, Jackie A; Marmot, Michael G; Brunner, Eric J; Kumari, Meena; Kivimäki, Mika; Humphries, Steve E

    2010-01-14

    To assess the performance of a panel of common single nucleotide polymorphisms (genotypes) associated with type 2 diabetes in distinguishing incident cases of future type 2 diabetes (discrimination), and to examine the effect of adding genetic information to previously validated non-genetic (phenotype based) models developed to estimate the absolute risk of type 2 diabetes. Workplace based prospective cohort study with three 5 yearly medical screenings. 5535 initially healthy people (mean age 49 years; 33% women), of whom 302 developed new onset type 2 diabetes over 10 years. Non-genetic variables included in two established risk models-the Cambridge type 2 diabetes risk score (age, sex, drug treatment, family history of type 2 diabetes, body mass index, smoking status) and the Framingham offspring study type 2 diabetes risk score (age, sex, parental history of type 2 diabetes, body mass index, high density lipoprotein cholesterol, triglycerides, fasting glucose)-and 20 single nucleotide polymorphisms associated with susceptibility to type 2 diabetes. Cases of incident type 2 diabetes were defined on the basis of a standard oral glucose tolerance test, self report of a doctor's diagnosis, or the use of anti-diabetic drugs. A genetic score based on the number of risk alleles carried (range 0-40; area under receiver operating characteristics curve 0.54, 95% confidence interval 0.50 to 0.58) and a genetic risk function in which carriage of risk alleles was weighted according to the summary odds ratios of their effect from meta-analyses of genetic studies (area under receiver operating characteristics curve 0.55, 0.51 to 0.59) did not effectively discriminate cases of diabetes. The Cambridge risk score (area under curve 0.72, 0.69 to 0.76) and the Framingham offspring risk score (area under curve 0.78, 0.75 to 0.82) led to better discrimination of cases than did genotype based tests. 
Adding genetic information to phenotype based risk models did not improve discrimination and provided only a small improvement in model calibration and a modest net reclassification improvement of about 5% when added to the Cambridge risk score but not when added to the Framingham offspring risk score. The phenotype based risk models provided greater discrimination for type 2 diabetes than did models based on 20 common independently inherited diabetes risk alleles. The addition of genotypes to phenotype based risk models produced only minimal improvement in accuracy of risk estimation assessed by recalibration and, at best, a minor net reclassification improvement. The major translational application of the currently known common, small effect genetic variants influencing susceptibility to type 2 diabetes is likely to come from the insight they provide on causes of disease and potential therapeutic targets.
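
    The discrimination statistic used throughout this study, the area under the ROC curve, equals the probability that a randomly chosen case scores higher than a randomly chosen non-case, and the unweighted genetic score is simply a count of risk alleles. A self-contained sketch on simulated data (the effect size is a placeholder chosen to give near-chance discrimination, echoing the study's result):

```python
import numpy as np

def auc(scores, labels):
    # ROC AUC via the Mann-Whitney statistic; ties receive half credit.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(1)
n = 2000
genotypes = rng.binomial(2, 0.3, size=(n, 20))  # 20 SNPs, 0/1/2 risk alleles
score = genotypes.sum(axis=1)                   # unweighted score, range 0-40
# Outcome only weakly linked to the score, so the AUC sits near 0.5.
logit = -2.0 + 0.02 * (score - score.mean())
cases = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
genetic_auc = auc(score, cases)
```

    A weighted genetic risk function replaces the plain sum with allele counts weighted by per-SNP log odds ratios; with small per-allele effects, either version yields an AUC only marginally above 0.5.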

  15. Autonomous frequency domain identification: Theory and experiment

    NASA Technical Reports Server (NTRS)

    Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.

    1989-01-01

    The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. The spectral estimate ĥ = P_uy/P_uu is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δ_m = p − p̂ is then estimated by the cross-spectral estimate δ̂ = P_ue/P_uu, where e = y − ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve-fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δ_m are subsequently available to be used for optimization of robust controller performance and stability.
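
    The estimation chain above can be sketched numerically: a nonparametric spectral estimate ĥ = P_uy/P_uu of the plant, a (deliberately crude) reduced-order parametric model, and the additive-uncertainty estimate δ̂ = P_ue/P_uu from the output error. The first-order plant and the pure-gain "parametric model" below are placeholders, not the experiment's models:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
n = 1 << 14
u = rng.standard_normal(n)            # stochastic input
b, a = [0.3, 0.2], [1.0, -0.5]        # placeholder "true" plant
y = signal.lfilter(b, a, u)           # plant output

# Nonparametric estimate h_hat = P_uy / P_uu of the frequency response.
f, Puu = signal.welch(u, nperseg=512)
_, Puy = signal.csd(u, y, nperseg=512)
h_hat = Puy / Puu

# Crude reduced-order parametric model: a single least-squares gain.
gain = np.dot(u, y) / np.dot(u, u)
e = y - gain * u                      # output error e = y - y_hat

# Additive-uncertainty estimate delta_hat = P_ue / P_uu.
_, Pue = signal.csd(u, e, nperseg=512)
delta_hat = Pue / Puu
```

    By linearity of the cross-spectrum, δ̂ equals ĥ minus the parametric model's response, i.e. exactly the additive model error at each frequency.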

  16. Flood damage curves for consistent global risk assessments

    NASA Astrophysics Data System (ADS)

    de Moel, Hans; Huizinga, Jan; Szewczyk, Wojtek

    2016-04-01

    Assessing potential damage of flood events is an important component in flood risk management. Determining direct flood damage is commonly done using depth-damage curves, which denote the flood damage that would occur at specific water depths per asset or land-use class. Many countries around the world have developed flood damage models using such curves which are based on analysis of past flood events and/or on expert judgement. However, such damage curves are not available for all regions, which hampers damage assessments in those regions. Moreover, due to different methodologies employed for various damage models in different countries, damage assessments cannot be directly compared with each other, obstructing also supra-national flood damage assessments. To address these problems, a globally consistent dataset of depth-damage curves has been developed. This dataset contains damage curves depicting percent of damage as a function of water depth as well as maximum damage values for a variety of assets and land use classes (i.e. residential, commercial, agriculture). Based on an extensive literature survey concave damage curves have been developed for each continent, while differentiation in flood damage between countries is established by determining maximum damage values at the country scale. These maximum damage values are based on construction cost surveys from multinational construction companies, which provide a coherent set of detailed building cost data across dozens of countries. A consistent set of maximum flood damage values for all countries was computed using statistical regressions with socio-economic World Development Indicators from the World Bank. Further, based on insights from the literature survey, guidance is also given on how the damage curves and maximum damage values can be adjusted for specific local circumstances, such as urban vs. rural locations, use of specific building material, etc. 
This dataset can be used for consistent supra-national scale flood damage assessments, and guide assessment in countries where no damage model is currently available.
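
    A depth-damage assessment of the kind described reduces to interpolating a damage fraction at the flood depth and scaling by the maximum damage value. The curve and values below are placeholders for illustration, not entries from the dataset:

```python
import numpy as np

# Illustrative residential depth-damage curve: damage fraction vs depth [m].
DEPTHS   = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 6.0])
FRACTION = np.array([0.00, 0.25, 0.40, 0.60, 0.75, 1.00])

def flood_damage(depth_m, max_damage_per_m2, area_m2):
    # Direct damage = damage fraction at this depth x maximum damage value.
    frac = np.interp(depth_m, DEPTHS, FRACTION)  # clamps outside the range
    return frac * max_damage_per_m2 * area_m2

loss = flood_damage(1.5, max_damage_per_m2=600.0, area_m2=100.0)
```

    Country-specific maximum damage values, as described above, enter only through the scaling factor, which is what keeps the curves comparable across borders.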

  17. Stochastic resonance in an array of integrate-and-fire neurons with threshold

    NASA Astrophysics Data System (ADS)

    Zhou, Bingchang; Qi, Qianqian

    2018-06-01

    We investigate the phenomenon of stochastic resonance (SR) in parallel integrate-and-fire neuronal arrays with threshold, driven by additive noise or signal-dependent noise (SDN) and a noisy input signal. SR occurs in this system. Whether the system is subject to additive noise or SDN, the input noise η(t) weakens the performance of SR, but the array size N and signal parameter I1 promote the performance of SR. Signal parameter I0 promotes the performance of SR for additive noise, but the peak values of the output signal-to-noise ratio (SNRout) first decrease, then increase as I0 increases for SDN. Moreover, when N tends to infinity, the curve of SNRout for SDN first increases and then decreases, whereas for additive noise the curve of SNRout increases to reach a plateau. By comparing system performance with additive noise to that with SDN, we also find that the information transmission of a periodic signal with SDN is significantly better than with additive noise at limited array size N.

  18. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    A nonlinear optimization algorithm helps in finding the best-fit curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a quadratic expansion of the χ² statistic. It utilizes a nonlinear optimization algorithm to calculate the statistically weighted best-fit values of the parameters of the fitting function for which χ² is minimized. It provides the user with such statistical information as the goodness of fit and the estimated values of the parameters producing the highest degree of correlation between the experimental data and the mathematical model. Written in FORTRAN 77.
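
    The same weighted-χ² fitting idea is available today in scientific Python; a minimal sketch (the exponential model and data here are invented for illustration, not taken from NLINEAR):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    # Example fitting function: exponential decay plus offset.
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(2)
x = np.linspace(0.0, 5.0, 80)
sigma = np.full_like(x, 0.05)         # per-point measurement uncertainties
y = model(x, 2.0, 1.3, 0.5) + sigma * rng.standard_normal(x.size)

# Weighted least squares: minimizes chi^2 = sum(((y - model) / sigma)^2).
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0, 0.0],
                       sigma=sigma, absolute_sigma=True)
chi2 = float(np.sum(((y - model(x, *popt)) / sigma) ** 2))
```

    Goodness of fit can be judged from χ² per degree of freedom (here 77), and pcov gives the parameter covariance matrix.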

  19. Dynamic rating curve assessment for hydrometric stations and computation of the associated uncertainties: Quality and station management indicators

    NASA Astrophysics Data System (ADS)

    Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan

    2014-09-01

    A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curves, the quality of fit of the curve to these measurements, and the constant changes in the river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the “age” of the rating curve. The level of uncertainty at a given point in time is therefore particularly difficult to assess. A “dynamic” method has been developed to compute rating curves while calculating the associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations. A rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The model of uncertainty takes into account the uncertainties in the measurement of the water level, the quality of fit of the curve, the uncertainty of the gaugings, and the increase of the uncertainty of discharge estimates with the age of the rating curve, computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry such as “How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?” and “When and in what range of water flow rates should these gaugings be carried out?”. The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 km²) is used as an example throughout the paper. Other stations are used to illustrate certain points.
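
    The core object here, a stage-discharge rating curve, is commonly parameterized as Q = a(h − h0)^b and can be fitted to gaugings by linear regression in log space. A minimal sketch on synthetic gaugings (a known cease-to-flow stage h0 is assumed for simplicity; estimating h0 and the paper's full uncertainty model is a larger job):

```python
import numpy as np

def fit_rating_curve(stage, discharge, h0=0.0):
    # Fit Q = a * (h - h0)^b by least squares in log-log space.
    slope, intercept = np.polyfit(np.log(stage - h0), np.log(discharge), 1)
    return np.exp(intercept), slope   # (a, b)

rng = np.random.default_rng(3)
h = np.linspace(0.2, 2.0, 30)                                    # stages [m]
q = 3.0 * h ** 1.6 * np.exp(0.02 * rng.standard_normal(h.size))  # noisy gaugings
a_fit, b_fit = fit_rating_curve(h, q)
```

    In the dynamic method described above, a fit of this kind is recomputed for each new gauging, and the residual scatter feeds the uncertainty model.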

  20. Generalised model-independent characterisation of strong gravitational lenses. II. Transformation matrix between multiple images

    NASA Astrophysics Data System (ADS)

    Wagner, J.; Tessore, N.

    2018-05-01

    We determine the transformation matrix that maps multiple images with identifiable resolved features onto one another and that is based on a Taylor-expanded lensing potential in the vicinity of a point on the critical curve within our model-independent lens characterisation approach. From the transformation matrix, the same information about the properties of the critical curve at fold and cusp points can be derived as we previously found when using the quadrupole moment of the individual images as observables. In addition, we read off the relative parities between the images, so that the parity of all images is determined when one is known. We compare all retrievable ratios of potential derivatives to the actual values and to those obtained by using the quadrupole moment as observable for two- and three-image configurations generated by a galaxy-cluster scale singular isothermal ellipse. We conclude that using the quadrupole moments as observables, the properties of the critical curve are retrieved to a higher accuracy at the cusp points and to a lower accuracy at the fold points; the ratios of second-order potential derivatives are retrieved to comparable accuracy. We also show that the approach using ratios of convergences and reduced shear components is equivalent to ours in the vicinity of the critical curve, but yields more accurate results and is more robust because it does not require a special coordinate system as the approach using potential derivatives does. The transformation matrix is determined by mapping manually assigned reference points in the multiple images onto one another. If the assignment of the reference points is subject to measurement uncertainties under the influence of noise, we find that the confidence intervals of the lens parameters can be as large as the values themselves when the uncertainties are larger than one pixel. 
In addition, observed multiple images with resolved features are more extended than unresolved ones, so that higher-order moments should be taken into account to improve the reconstruction precision and accuracy.

  1. The Kosice meteorite fall: atmospheric trajectory and fragmentation from videos and radiometers

    NASA Astrophysics Data System (ADS)

    Borovicka, J.

    2012-01-01

    On 28 February 2010, 22h24m46s UT, a huge bolide of absolute magnitude -18 appeared over eastern Slovakia. Although this country is covered by the European Fireball Network (EN) and the Slovak Video Network, bad weather prevented direct imaging of the bolide by dedicated meteor cameras. Fortunately, three surveillance video cameras in Hungary recorded, at least partly, the event. These recordings allowed us to reconstruct the trajectory of the bolide and recover the meteorites. In addition, the light curve of the bolide was recorded by several EN camera radiometers, and sonic booms were registered by seismic stations in the region. The meteorites were classified as ordinary chondrites of type H5 (see Meteoritical Bulletin 100). I developed a model of atmospheric meteoroid fragmentation to fit the observed light curve. The model is based on the fact that meteoroid fragmentation leads to a sudden increase of a bolide's brightness, because the total meteoroid surface area increases after the fragmentation. A bright flare is produced if large numbers of small fragments or dust particles are released. I tried to model the whole light curve rigorously by setting up the mass distribution of fragments and/or dust particles released at each fragmentation point. The dust particles were allowed to be released either instantaneously or gradually. The ablation and radiation of individual particles were computed independently, and the summary light curve was computed. The deceleration at the end of the trajectory was taken into account as well. Based on the approximate calibration of the light curve, the initial mass of the meteoroid was estimated at 3500 kg (corresponding to a diameter of 1.2 m). The major fragmentation occurred at a height of 39 km. Only a few (probably three) large compact fragments of masses 20-100 kg survived this disruption. All of them fragmented again at lower heights below 30 km, producing minor flares on the light curve. 
In summary, Kosice was a weak meteoroid which fragmented heavily in the atmosphere and produced large numbers of small (under 10 g) meteorites. Nevertheless, some parts of the meteoroid were strong enough, so that a few relatively large (over 1 kg) meteorites exist as well. We were lucky that the three videos and the radiometric curves enabled us to reconstruct the trajectory and atmospheric fragmentation of the Kosice bolide, although the precision is, of course, lower than it would have been from regular meteor cameras. Full details will be published in the paper cited below. I am grateful to many people who collaborated in this work, especially Antal Igaz, Pavel Spurny, Juraj Toth, Pavel Kalenda, Jakub Haloda and Jan Svoren.

  2. The impact of galactic disc environment on star-forming clouds

    NASA Astrophysics Data System (ADS)

    Nguyen, Ngan K.; Pettitt, Alex R.; Tasker, Elizabeth J.; Okamoto, Takashi

    2018-03-01

    We explore the effect of different galactic disc environments on the properties of star-forming clouds through variations in the background potential in a set of isolated galaxy simulations. Rising, falling, and flat rotation curves expected in halo-dominated, disc-dominated, and Milky Way-like galaxies were considered, with and without an additional two-arm spiral potential. The evolution of each disc displayed notable variations that are attributed to different regimes of stability, determined by shear and gravitational collapse. The properties of a typical cloud were largely unaffected by the changes in rotation curve, but the production of small and large cloud associations was strongly dependent on this environment. This suggests that while differing rotation curves can influence where clouds are initially formed, the average bulk properties are effectively independent of the global environment. The addition of a spiral perturbation made the greatest difference to cloud properties, successfully sweeping the gas into larger, seemingly unbound, extended structures and creating large arm-interarm contrasts.

  3. Characterization of Type Ia Supernova Light Curves Using Principal Component Analysis of Sparse Functional Data

    NASA Astrophysics Data System (ADS)

    He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.

    2018-04-01

    With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated over the past years, we construct an empirical SNIa light curve model using a statistical method called functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is encoded in the broad-band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.
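
    On a dense common phase grid, the FPCA step reduces to an ordinary PCA (an SVD of the mean-centered curve matrix); the sparse, irregular-sampling machinery of the paper is what generalizes this. A toy sketch with one dominant shape mode (all templates and amplitudes invented):

```python
import numpy as np

rng = np.random.default_rng(4)
phase = np.linspace(-10.0, 40.0, 60)           # days from peak, common grid
mean_lc = np.exp(-0.5 * (phase / 12.0) ** 2)   # toy mean light-curve template
mode = -phase / 40.0 * mean_lc                 # toy "stretch-like" shape mode

# 100 toy supernovae: mean curve + one shape mode + observational noise.
true_scores = rng.standard_normal(100)
curves = (mean_lc
          + true_scores[:, None] * 0.2 * mode
          + 0.01 * rng.standard_normal((100, 60)))

# PCA via SVD of the mean-centered data; rows of vt are component functions.
centered = curves - curves.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
pc1_scores = centered @ vt[0]                  # first-component scores per SN
```

    The first component dominates the variance and its scores recover the injected shape parameter (up to sign), which is the sense in which each light curve is summarized "by a few numbers".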

  4. Determination of the human spine curve based on laser triangulation.

    PubMed

    Poredoš, Primož; Čelan, Dušan; Možina, Janez; Jezeršek, Matija

    2015-02-05

    The main objective of the present method was to automatically obtain a spatial curve of the thoracic and lumbar spine based on a 3D shape measurement of a human torso with developed scoliosis. Manual determination of the spine curve, based on palpation of the thoracic and lumbar spinous processes, was found to be an appropriate way to validate the method. A new, noninvasive, optical 3D method for human torso evaluation in medical practice is therefore introduced. Twenty-four patients with a confirmed clinical diagnosis of scoliosis were scanned using a specially developed 3D laser profilometer. The measuring principle of the system is based on laser triangulation with one-laser-plane illumination. The measurement took approximately 10 seconds over 700 mm of longitudinal translation along the back. The single-point measurement accuracy was 0.1 mm. Computer analysis of the measured surface returned two 3D curves. The first curve was determined by manual marking (manual curve), and the second was determined by detecting surface curvature extremes (automatic curve). The manual and automatic curve comparison is given as the root mean square deviation (RMSD) for each patient. The intra-operator study involved assessing 20 successive measurements of the same person, and the inter-operator study involved assessing measurements from 8 operators. The results obtained for the 24 patients showed that the typical RMSD between the manual and automatic curves was 5.0 mm in the frontal plane and 1.0 mm in the sagittal plane, which is a good result compared with palpatory accuracy (9.8 mm). The intra-operator repeatability of the presented method in the frontal and sagittal planes was 0.45 mm and 0.06 mm, respectively. The inter-operator repeatability assessment shows that the presented method is invariant to the operator of the computer program.
The main novelty of the presented paper is the development of a new, non-contact method that provides a quick, precise and non-invasive way to determine the spatial spine curve for patients with developed scoliosis and the validation of the presented method using the palpation of the spinous processes, where no harmful ionizing radiation is present.
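The manual-versus-automatic comparison above boils down to a root mean square deviation between corresponding curve samples; a minimal sketch, assuming hypothetical frontal-plane offsets rather than real patient data:

```python
import math

def rmsd(curve_a, curve_b):
    """Root mean square deviation between two curves sampled at
    corresponding points (one coordinate per sample, in mm)."""
    assert len(curve_a) == len(curve_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(curve_a, curve_b)) / len(curve_a))

# Hypothetical frontal-plane offsets (mm) of the manual vs. automatic curve:
manual    = [0.0, 2.5, 6.1, 9.0, 7.2, 3.3]
automatic = [0.4, 1.8, 5.0, 8.1, 6.0, 2.1]
print(round(rmsd(manual, automatic), 2))  # 0.96
```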

  5. Development of p-y curves of laterally loaded piles in cohesionless soil.

    PubMed

    Khari, Mahdy; Kassim, Khairul Anuar; Adnan, Azlan

    2014-01-01

Research on damage to structures supported by deep foundations has been quite intensive in the past decade. Kinematic interaction in soil-pile interaction is evaluated based on the p-y curve approach. Existing p-y curves have considered the effects of relative density on soil-pile interaction in sandy soil. However, the influence of pile wall surface roughness on p-y curves has not been sufficiently emphasized. The presented study was performed to develop a series of p-y curves for single piles through comprehensive experimental investigations. Modification factors were studied, namely, the effects of relative density and of the roughness of the pile wall surface. The model tests were subjected to lateral load in Johor Bahru sand. The new p-y curves were evaluated based on the experimental data and were compared to the existing p-y curves. The soil-pile reaction for various relative densities (from 30% to 75%) increased in the range of 40-95% for a smooth pile at small displacement and 90% at large displacement. For a rough pile, the ratio of the soil-pile reaction in dense to loose soil ranged from 2.0 to 3.0 from small to large displacements. Direct comparison of the developed p-y curves shows significant differences in magnitude and shape with respect to the existing load-transfer curves. Good agreement with experimental and design studies demonstrates the multidisciplinary applications of the present method.

  6. Development of p-y Curves of Laterally Loaded Piles in Cohesionless Soil

    PubMed Central

    Khari, Mahdy; Kassim, Khairul Anuar; Adnan, Azlan

    2014-01-01

Research on damage to structures supported by deep foundations has been quite intensive in the past decade. Kinematic interaction in soil-pile interaction is evaluated based on the p-y curve approach. Existing p-y curves have considered the effects of relative density on soil-pile interaction in sandy soil. However, the influence of pile wall surface roughness on p-y curves has not been sufficiently emphasized. The presented study was performed to develop a series of p-y curves for single piles through comprehensive experimental investigations. Modification factors were studied, namely, the effects of relative density and of the roughness of the pile wall surface. The model tests were subjected to lateral load in Johor Bahru sand. The new p-y curves were evaluated based on the experimental data and were compared to the existing p-y curves. The soil-pile reaction for various relative densities (from 30% to 75%) increased in the range of 40–95% for a smooth pile at small displacement and 90% at large displacement. For a rough pile, the ratio of the soil-pile reaction in dense to loose soil ranged from 2.0 to 3.0 from small to large displacements. Direct comparison of the developed p-y curves shows significant differences in magnitude and shape with respect to the existing load-transfer curves. Good agreement with experimental and design studies demonstrates the multidisciplinary applications of the present method. PMID:24574932

  7. The effect of the inner-hair-cell mediated transduction on the shape of neural tuning curves

    NASA Astrophysics Data System (ADS)

    Altoè, Alessandro; Pulkki, Ville; Verhulst, Sarah

    2018-05-01

    The inner hair cells of the mammalian cochlea transform the vibrations of their stereocilia into releases of neurotransmitter at the ribbon synapses, thereby controlling the activity of the afferent auditory fibers. The mechanical-to-neural transduction is a highly nonlinear process and it introduces differences between the frequency-tuning of the stereocilia and that of the afferent fibers. Using a computational model of the inner hair cell that is based on in vitro data, we estimated that smaller vibrations of the stereocilia are necessary to drive the afferent fibers above threshold at low (≤0.5 kHz) than at high (≥4 kHz) driving frequencies. In the base of the cochlea, the transduction process affects the low-frequency tails of neural tuning curves. In particular, it introduces differences between the frequency-tuning of the stereocilia and that of the auditory fibers resembling those between basilar membrane velocity and auditory fibers tuning curves in the chinchilla base. For units with a characteristic frequency between 1 and 4 kHz, the transduction process yields shallower neural than stereocilia tuning curves as the characteristic frequency decreases. This study proposes that transduction contributes to the progressive broadening of neural tuning curves from the base to the apex.

  8. Acceptance criteria for welds in ASTM A106 grade B steel pipe and plate

    NASA Technical Reports Server (NTRS)

    Hudson, C. M.; Wright, D. B., Jr.; Leis, B. N.

    1986-01-01

    Based on the RECERT Program findings, NASA-Langley funded a fatigue study of code-unacceptable welds. Usage curves were developed which were based on the structural integrity of the welds. The details of this study are presented in NASA CR-178114. The information presented is a condensation and reinterpretation of the information in NASA CR-178114. This condensation and reinterpretation generated usage curves for welds having: (1) indications 0.20 -inch deep by 0.40-inch long, and (2) indications 0.195-inch deep by 8.4-inches long. These curves were developed using the procedures used in formulating the design curves in Section VIII, Division 2 of the American Society of Mechanical Engineers Boiler and Pressure Vessel Code.

  9. Tectonic and kinematics of curved orogenic systems: insights from AMS analysis and paleomagnetism

    NASA Astrophysics Data System (ADS)

    Cifelli, Francesca; Mattei, Massimo

    2016-04-01

During the past few years, paleomagnetism has been considered a unique tool for constraining kinematic models of curved orogenic systems, because of its great potential in quantifying vertical axis rotations and in discriminating between primary and secondary (orocline s.l.) arcs. In fact, based on the spatio-temporal relationships between deformation and vertical axis rotation, curved orogens can be classified as primary or secondary (oroclines s.l.), depending on whether they formed in a self-similar manner without undergoing important variations in their original curved shape, or whether their map-view curvature is the result of bending about a vertical rotation axis. In addition to the kinematics of the arc and the timing of its curvature, a crucial factor for understanding the origin of belt curvature is the knowledge of the geodynamic process governing arc formation. In this context, the detailed reconstruction of the rotational history is mainly based on paleomagnetic and structural analyses (fold axes, kinematic indicators), which include the magnetic fabric. In fact, in curved fold and thrust belts, assuming that the magnetic lineation is tectonically originated and formed during layer-parallel shortening (LPS) before vertical axis rotations, the orientation of the magnetic lineation often strictly follows the curvature of the orogen. This assumption represents a fundamental prerequisite to fully understand the origin of orogenic arcs and to unravel the geodynamic processes responsible for their curvature. We present two case studies: the central Mediterranean arcs and the Alborz Mts in Iran. The Mediterranean area has represented an attractive region to apply paleomagnetic analysis, as it shows a large number of narrow arcs, whose present-day shape has been driven by the space-time evolution of the Mediterranean subduction system, which defines an irregular and rather diffuse plate boundary. The Alborz Mts.
form a sinuous range over 1,200 km long, defining from west to east a salient with a southward concavity that wraps the South Caspian basin to the north, and a southward reentrant whose apex encircles the Central Iranian block to the south. The integration of paleomagnetic and AMS data indicates that this orogen started to form as an almost straight E-W oriented range and acquired its present-day curved shape by means of opposite vertical axis rotations. Such a process was probably caused by the relative motion between different rigid blocks (South Caspian, Central Iran, and the Eastern Iranian Blocks) forming the collision zone and hence must be a crustal- to lithospheric-scale process.

  10. Conformable large-area position-sensitive photodetectors based on luminescence-collecting silicone waveguides

    NASA Astrophysics Data System (ADS)

    Bartu, Petr; Koeppe, Robert; Arnold, Nikita; Neulinger, Anton; Fallon, Lisa; Bauer, Siegfried

    2010-06-01

Position sensitive detection schemes based on the lateral photoeffect rely on inorganic semiconductors. Such position sensitive devices (PSDs) are reliable and robust, but preparation with large active areas is expensive and use on curved substrates is impossible. Here we present a novel route for the fabrication of conformable PSDs which allows easy preparation on large areas, and use on curved surfaces. Our device is based on stretchable silicone waveguides with embedded fluorescent dyes, used in conjunction with small silicon photodiodes. Impinging laser light (e.g., from a laser pointer) is absorbed by the dye in the PSD and re-emitted as fluorescence light at a larger wavelength. Due to the isotropic emission from the fluorescent dye molecules, most of the re-emitted light is coupled into the planar silicone waveguide and directed to the edges of the device. Here the light signals are detected via embedded small silicon photodiodes arranged in a regular pattern. Using a mathematical algorithm derived by extensively using models from the Global Positioning System (GPS) and human activity monitoring, the position of light spots is easily calculated. Additionally, the device shows high durability against mechanical stress when clamped in a uniaxial stretcher and mechanically loaded up to 15% strain. The ease of fabrication, conformability, and durability of the device suggest its use in interface devices and as sensor skin for future robots.

  11. Sensitivity of the acid-base properties of clays to the methods of preparation and measurement. 2. Evidence from continuous potentiometric titrations.

    PubMed

    Duc, Myriam; Gaboriaud, Fabien; Thomas, Fabien

    2005-09-01

The effects of experimental procedures on the acid-base consumption titration curves of montmorillonite suspensions were studied using continuous potentiometric titration. For that purpose, the hysteresis amplitudes between the acid and base branches were found to be useful to systematically evaluate the impacts of storage conditions (wet or dried), the atmosphere in the titration reactor, the solid-liquid ratio, the time interval between successive increments, and the ionic strength. In the case of storage conditions, the increase of the hysteresis was significantly higher for longer storage of clay in suspension and for drying procedures compared to "fresh" clay suspension. The titration carried out under air demonstrated carbonate contamination that could only be eliminated by performing experiments under inert gas. Interestingly, increasing the time intervals between successive increments of titrant strongly increased the amplitude of the hysteresis, which could be correlated with the slow kinetic process specifically observed for acid addition in acid media. Thus, such kinetic behavior is probably associated with dissolution processes of clay particles. However, the resulting curves recorded at different ionic strengths under optimized conditions did not show the common intersection point required to define a point of zero charge. Nevertheless, the ionic strength dependence of the point of zero net proton charge suggested that the point of zero charge of sodic montmorillonite could be estimated as lower than 5.

  12. A vision-based approach for tramway rail extraction

    NASA Astrophysics Data System (ADS)

    Zwemer, Matthijs H.; van de Wouw, Dennis W. J. M.; Jaspers, Egbert; Zinger, Sveta; de With, Peter H. N.

    2015-03-01

    The growing traffic density in cities fuels the desire for collision assessment systems on public transportation. For this application, video analysis is broadly accepted as a cornerstone. For trams, the localization of tramway tracks is an essential ingredient of such a system, in order to estimate a safety margin for crossing traffic participants. Tramway-track detection is a challenging task due to the urban environment with clutter, sharp curves and occlusions of the track. In this paper, we present a novel and generic system to detect the tramway track in advance of the tram position. The system incorporates an inverse perspective mapping and a-priori geometry knowledge of the rails to find possible track segments. The contribution of this paper involves the creation of a new track reconstruction algorithm which is based on graph theory. To this end, we define track segments as vertices in a graph, in which edges represent feasible connections. This graph is then converted to a max-cost arborescence graph, and the best path is selected according to its location and additional temporal information based on a maximum a-posteriori estimate. The proposed system clearly outperforms a railway-track detector. Furthermore, the system performance is validated on 3,600 manually annotated frames. The obtained results are promising, where straight tracks are found in more than 90% of the images and complete curves are still detected in 35% of the cases.
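The segment-to-graph idea can be sketched as a toy maximum-score path search over a DAG of candidate track segments. This is an illustrative simplification (hypothetical vertex names and scores), not the authors' max-cost arborescence implementation:

```python
# Track segments as graph vertices; feasible connections as weighted edges.
# A toy maximum-cost path search over a DAG of candidate segments, sketching
# the idea of scoring and selecting the best track reconstruction.

def best_track(edges, scores, start):
    """edges: {vertex: [(next_vertex, connection_cost), ...]};
    scores: per-segment evidence score. Returns (total_score, path)."""
    best = {}
    def solve(v):
        if v in best:
            return best[v]
        total, path = scores[v], [v]
        for nxt, w in edges.get(v, []):
            t, p = solve(nxt)
            if scores[v] + w + t > total:
                total, path = scores[v] + w + t, [v] + p
        best[v] = (total, path)
        return best[v]
    return solve(start)

# Hypothetical segment graph: s0 can connect to s1 or s2, both reaching s3.
edges = {"s0": [("s1", 2.0), ("s2", 0.5)], "s1": [("s3", 1.5)], "s2": [("s3", 0.2)]}
scores = {"s0": 1.0, "s1": 3.0, "s2": 0.5, "s3": 2.0}
print(best_track(edges, scores, "s0"))  # (9.5, ['s0', 's1', 's3'])
```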

  13. An Interoperability Consideration in Selecting Domain Parameters for Elliptic Curve Cryptography

    NASA Technical Reports Server (NTRS)

    Ivancic, Will (Technical Monitor); Eddy, Wesley M.

    2005-01-01

    Elliptic curve cryptography (ECC) will be an important technology for electronic privacy and authentication in the near future. There are many published specifications for elliptic curve cryptosystems, most of which contain detailed descriptions of the process for the selection of domain parameters. Selecting strong domain parameters ensures that the cryptosystem is robust to attacks. Due to a limitation in several published algorithms for doubling points on elliptic curves, some ECC implementations may produce incorrect, inconsistent, and incompatible results if domain parameters are not carefully chosen under a criterion that we describe. Few documents specify the addition or doubling of points in such a manner as to avoid this problematic situation. The safety criterion we present is not listed in any ECC specification we are aware of, although several other guidelines for domain selection are discussed in the literature. We provide a simple example of how a set of domain parameters not meeting this criterion can produce catastrophic results, and outline a simple means of testing curve parameters for interoperable safety over doubling.
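The pitfall can be illustrated with naive affine doubling over a small textbook curve, y² = x³ + 2x + 2 (mod 17) (a toy example, not a standardized curve): the doubling slope divides by 2y, so any y = 0 input must be special-cased to the point at infinity or the computation fails:

```python
# Affine point doubling on y^2 = x^3 + a*x + b over GF(p).
# The slope lam = (3x^2 + a) / (2y) is undefined when y = 0 (a point of
# order 2); code that omits this guard divides by zero or returns garbage.

def double_point(P, a, p):
    if P is None:          # point at infinity
        return None
    x, y = P
    if y % p == 0:         # order-2 point: its double is the point at infinity
        return None
    lam = (3 * x * x + a) * pow(2 * y, -1, p) % p
    x3 = (lam * lam - 2 * x) % p
    y3 = (lam * (x - x3) - y) % p
    return (x3, y3)

print(double_point((5, 1), a=2, p=17))   # (6, 3)
print(double_point((4, 0), a=2, p=17))   # None: the y = 0 case naive code misses
```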

  14. Declining Rotation Curves at z = 2 in ΛCDM Galaxy Formation Simulations

    NASA Astrophysics Data System (ADS)

    Teklu, Adelheid F.; Remus, Rhea-Silvia; Dolag, Klaus; Arth, Alexander; Burkert, Andreas; Obreja, Aura; Schulze, Felix

    2018-02-01

    Selecting disk galaxies from the cosmological, hydrodynamical simulation Magneticum Pathfinder, we show that almost half of our poster child disk galaxies at z = 2 show significantly declining rotation curves and low dark matter fractions, very similar to recently reported observations. These galaxies do not show any anomalous behavior, they reside in standard dark matter halos, and they typically grow significantly in mass until z = 0, where they span all morphological classes, including disk galaxies matching present-day rotation curves and observed dark matter fractions. Our findings demonstrate that declining rotation curves and low dark matter fractions in rotation-dominated galaxies at z = 2 appear naturally within the ΛCDM paradigm and reflect the complex baryonic physics, which plays a role at the peak epoch of star formation. In addition, we find some dispersion-dominated galaxies at z = 2 that host a significant gas disk and exhibit similar shaped rotation curves as the disk galaxy population, rendering it difficult to differentiate between these two populations with currently available observation techniques.

  15. Linear Titration Curves of Acids and Bases.

    PubMed

    Joseph, N R

    1959-05-29

    The Henderson-Hasselbalch equation, by a simple transformation, becomes pH - pK = pA - pB, where pA and pB are the negative logarithms of acid and base concentrations. Sigmoid titration curves then reduce to straight lines; titration curves of polyelectrolytes, to families of straight lines. The method is applied to the titration of the dipeptide glycyl aminotricarballylic acid, with four titrable groups. Results are expressed as Cartesian and d'Ocagne nomograms. The latter is of a general form applicable to polyelectrolytes of any degree of complexity.
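The transformation is easy to check numerically; a minimal sketch using a hypothetical acetate-like buffer (pK = 4.76) rather than the dipeptide data:

```python
import math

def ph_from_hh(pK, acid, base):
    """Henderson-Hasselbalch: pH = pK + log10([base]/[acid])."""
    return pK + math.log10(base / acid)

def ph_from_linear(pK, acid, base):
    """The same relation rewritten as pH - pK = pA - pB, with
    pA = -log10([acid]) and pB = -log10([base])."""
    pA, pB = -math.log10(acid), -math.log10(base)
    return pK + (pA - pB)

# Hypothetical buffer: pK = 4.76, 0.02 M acid, 0.08 M base
print(round(ph_from_hh(4.76, 0.02, 0.08), 3))      # 5.362
print(round(ph_from_linear(4.76, 0.02, 0.08), 3))  # 5.362
```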

  16. Distinguishing the albedo of exoplanets from stellar activity

    NASA Astrophysics Data System (ADS)

    Serrano, L. M.; Barros, S. C. C.; Oshagh, M.; Santos, N. C.; Faria, J. P.; Demangeon, O.; Sousa, S. G.; Lendl, M.

    2018-03-01

Context. Light curves show the flux variation from the target star and its orbiting planets as a function of time. In addition to the transit features created by the planets, the flux also includes the reflected light component of each planet, which depends on the planetary albedo. This signal is typically referred to as phase curve and could be easily identified if there were no additional noise. As well as instrumental noise, stellar activity, such as spots, can create a modulation in the data, which may be very difficult to distinguish from the planetary signal. Aims: We analyze the limitations imposed by stellar activity on the detection of the planetary albedo, considering the limitations imposed by the predicted level of instrumental noise and the short duration of the observations planned in the context of the CHEOPS mission. Methods: As initial condition, we have assumed that each star is characterized by just one orbiting planet. We built mock light curves that included a realistic stellar activity pattern, the reflected light component of the planet and an instrumental noise level, which we have chosen to be at the same level as predicted for CHEOPS. We then fit these light curves to try to recover the reflected light component, assuming the activity patterns can be modeled with a Gaussian process. Results: We estimate that at least one full stellar rotation is necessary to obtain a reliable detection of the planetary albedo. This result is independent of the level of noise, but it depends on the limitation of the Gaussian process to describe the stellar activity when the light curve time-span is shorter than the stellar rotation. As an additional result, we found that with a 6.5 magnitude star and the noise level of CHEOPS, it is possible to detect the planetary albedo down to a lower limit of Rp = 0.03 R*. Finally, in the presence of typical CHEOPS gaps in the simulations, we confirm that it is still possible to obtain a reliable albedo.

  17. Resource Allocation in Dynamic Environments

    DTIC Science & Technology

    2012-10-01

Utility Curve for the TOC Camera; Figure 20: Utility Curves for Ground Vehicle Camera and Squad Camera; Figure 21: Facial-Recognition Utility...A Facial-Recognition Server (FRS) can receive images from the smartphones the squads use, compare them to a local database, and then return the...fallback. In addition, each squad has the ability to capture images with a smartphone and send them to a Facial-Recognition Server in the TOC to

  18. Photometric light curves for ten rapidly rotating stars in Alpha Persei, the Pleiades, and the field

    NASA Technical Reports Server (NTRS)

    Prosser, Charles F.; Schild, Rudolph E.; Stauffer, John R.; Jones, Burton F.

    1993-01-01

    We present the results from a photometric monitoring program of ten rapidly rotating stars observed during 1991 using the FLWO 48-in. telescope. Brightness variations for an additional six cluster stars observed with the Lick 40-in. telescope are also given. The periods and light curves for seven Alpha Persei members, two Pleiades members, and one naked T Tauri field star are reported.

  19. The Regulus occultation light curve and the real atmosphere of Venus

    NASA Technical Reports Server (NTRS)

    Veverka, J.; Wasserman, L.

    1974-01-01

An inversion of the light curve observed during the July 7, 1959, occultation of Regulus by Venus leads to the conclusion that the light curve cannot be reconciled with models of the Venus atmosphere based on spacecraft observations. The event occurred in daylight and, under the consequently difficult observation conditions, it seems likely that the Regulus occultation light curve is marred by systematic errors in spite of the competence of the observers involved.

  20. Liquid Crystal Based Optical Phased Array for Steering Lasers

    DTIC Science & Technology

    2009-10-01

profile into the liquid crystal cell, the first step is to characterize the LC cell’s OPD curve with respect to the ramped voltage by a simple one...corresponding voltage value on the OPD vs. Voltage curve, the first entry voltage profile of a positive or negative micro-lens can be thereby...Fig. 2.6 Optical path delay (OPD) profile of ideal objective positive (blue curve) and negative (green curve) lens with 552 μm radius, no

  1. Strain- and stress-based forming limit curves for DP 590 steel sheet using Marciniak-Kuczynski method

    NASA Astrophysics Data System (ADS)

    Kumar, Gautam; Maji, Kuntal

    2018-04-01

This article deals with the prediction of strain- and stress-based forming limit curves for the advanced high strength steel DP590 sheet using the Marciniak-Kuczynski (M-K) method. Three yield criteria, namely von Mises, Hill's 48 and Yld2000-2d, and two hardening laws, i.e., the Hollomon power and Swift hardening laws, were considered to predict the forming limit curves (FLCs) for DP590 steel sheet. The effects of imperfection factor and initial groove angle on the prediction of the FLC were also investigated. It was observed that the FLCs shifted upward with increasing imperfection factor value. The initial groove angle was found to have significant effects on limit strains on the left side of the FLC, and an insignificant effect on the right side of the FLC for a certain range of strain paths. The limit strains were calculated at zero groove angle for the right side of the FLC, and a critical groove angle was used for the left side of the FLC. The numerically predicted FLCs considering the different combinations of yield criteria and hardening laws were compared with the published experimental results of FLCs for DP590 steel sheet. The FLC predicted using the combination of the Yld2000-2d yield criterion and the Swift hardening law was in better correlation with the experimental data. Stress-based forming limit curves (SFLCs) were also calculated from the limiting strain values obtained by the M-K model. Theoretically predicted SFLCs were compared with those obtained from the experimental forming limit strains. Stress-based forming limit curves were seen to represent the forming limits of DP590 steel sheet better than strain-based forming limit curves.
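The two hardening laws named above have simple closed forms; a sketch with illustrative constants (K, n, and the pre-strain ε₀ here are placeholders, not the paper's fitted DP590 parameters):

```python
def hollomon(strain, K, n):
    """Hollomon power law: sigma = K * eps**n."""
    return K * strain ** n

def swift(strain, K, eps0, n):
    """Swift law: sigma = K * (eps0 + eps)**n."""
    return K * (eps0 + strain) ** n

# Illustrative constants only (not fitted DP590 values):
for eps in (0.05, 0.10, 0.20):
    print(eps,
          round(hollomon(eps, K=980.0, n=0.18), 1),
          round(swift(eps, K=980.0, eps0=0.002, n=0.18), 1))
```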

  2. Craniofacial Reconstruction Using Rational Cubic Ball Curves

    PubMed Central

    Majeed, Abdul; Mt Piah, Abd Rahni; Gobithaasan, R. U.; Yahya, Zainor Ridzuan

    2015-01-01

    This paper proposes the reconstruction of craniofacial fracture using rational cubic Ball curve. The idea of choosing Ball curve is based on its robustness of computing efficiency over Bezier curve. The main steps are conversion of Digital Imaging and Communications in Medicine (Dicom) images to binary images, boundary extraction and corner point detection, Ball curve fitting with genetic algorithm and final solution conversion to Dicom format. The last section illustrates a real case of craniofacial reconstruction using the proposed method which clearly indicates the applicability of this method. A Graphical User Interface (GUI) has also been developed for practical application. PMID:25880632
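A rational cubic Ball curve can be evaluated directly from the cubic Ball basis; a minimal sketch with hypothetical control points and weights rather than real Dicom-derived boundary data:

```python
def rational_ball(t, pts, w):
    """Point on a rational cubic Ball curve at t in [0, 1].
    Cubic Ball basis: (1-t)^2, 2t(1-t)^2, 2t^2(1-t), t^2."""
    B = [(1 - t) ** 2, 2 * t * (1 - t) ** 2, 2 * t ** 2 * (1 - t), t ** 2]
    num_x = sum(wi * bi * p[0] for wi, bi, p in zip(w, B, pts))
    num_y = sum(wi * bi * p[1] for wi, bi, p in zip(w, B, pts))
    den = sum(wi * bi for wi, bi in zip(w, B))
    return (num_x / den, num_y / den)

# Hypothetical control points along a boundary arc (not real craniofacial data):
pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
w = [1.0, 2.0, 2.0, 1.0]   # positive weights pull the curve toward inner points
print(rational_ball(0.0, pts, w))  # (0.0, 0.0) -- interpolates the first point
print(rational_ball(1.0, pts, w))  # (4.0, 0.0) -- interpolates the last point
```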

  3. Future orientation, impulsivity, and problem behaviors: a longitudinal moderation model.

    PubMed

    Chen, Pan; Vazsonyi, Alexander T

    2011-11-01

    In the current study, based on a sample of 1,873 adolescents between 11.4 and 20.9 years of age from the first 3 waves of the National Longitudinal Study of Adolescent Health, we investigated the longitudinal effects of future orientation on levels of and developmental changes in problem behaviors, while controlling for the effects by impulsivity; we also tested the moderating effects by future orientation on the impulsivity-problem behaviors link over time. Additionally, we examined future orientation operationalized by items measuring education, marriage, and life domains. Findings based on growth curve analyses provided evidence of longitudinal effects by education and life future orientation on both levels of and developmental changes in problem behaviors; the effect of marriage future orientation was not significant for either test. In addition, only life future orientation moderated the effect by impulsivity on levels of problem behaviors over time. More specifically, impulsivity had a weaker effect on levels of problem behaviors over time for adolescents who reported higher levels of life future orientation.

  4. On Stability of Plane and Cylindrical Poiseuille Flows of Nanofluids

    NASA Astrophysics Data System (ADS)

    Rudyak, V. Ya.; Bord, E. G.

    2017-11-01

    Stability of plane and cylindrical Poiseuille flows of nanofluids to comparatively small perturbations is studied. Ethylene glycol-based nanofluids with silicon dioxide particles are considered. The volume fraction of nanoparticles is varied from 0 to 10%, and the particle size is varied from 10 to 210 nm. Neutral stability curves are constructed, and the most unstable modes of disturbances are found. It is demonstrated that nanofluids are less stable than base fluids; the presence of particles leads to additional destabilization of the flow. The greater the volume fraction of nanoparticles and the smaller the particle size, the greater the degree of this additional destabilization. In this case, the critical Reynolds number significantly decreases, and the spectrum of unstable disturbances becomes different; in particular, even for the volume fraction of particles equal to 5%, the wave length of the most unstable disturbances of the nanofluid with particles approximately 20 nm in size decreases almost by a factor of 4.

  5. Wall jet analysis for circulation control aerodynamics. Part 1: Fundamental CFD and turbulence modeling concepts

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; York, B. J.; Sinha, N.; Dvorak, F. A.

    1987-01-01

An overview of parabolic and PNS (Parabolized Navier-Stokes) methodology developed to treat highly curved sub- and supersonic wall jets is presented. The fundamental database to which these models were applied is discussed in detail. The analysis of strong curvature effects was found to require a semi-elliptic extension of the parabolic modeling to account for turbulent contributions to the normal pressure variations, as well as an extension of the turbulence models utilized, to account for the highly enhanced mixing rates observed in situations with large convex curvature. A noniterative, pressure-split procedure is shown to extend parabolic models to account for such normal pressure variations in an efficient manner, requiring minimal additional run time over a standard parabolic approach. A new PNS methodology is presented to solve this problem which extends parabolic methodology via the addition of a characteristic-based wave solver. Applications of this approach to analyze the interaction of wave and turbulence processes in wall jets are presented.

  6. Optical to extreme ultraviolet reddening curves for normal AGN dust and for dust associated with high-velocity outflows

    NASA Astrophysics Data System (ADS)

    Singh, Japneet; Gaskell, Martin; Gill, Jake

    2017-01-01

We use mid-IR (WIRE), optical (SDSS), and ultraviolet (GALEX) photometry of over 80,000 AGNs to derive mean attenuation curves from the optical to the rest-frame extreme ultraviolet (EUV) for (i) “normal” AGN dust dominating the optical reddening of AGNs and (ii) “BAL dust” - the dust causing the additional extinction in AGNs observed to have broad absorption lines (BALs). Our method confirms that the attenuation curve of “normal” AGN dust is flat in the ultraviolet, as found by Gaskell et al. (2004). In striking contrast to this, the attenuation curve for BAL dust is well fit by a steeply-rising, SMC-like curve. We confirm the shape of the theoretical Weingartner & Draine (2001) SMC curve out to 700 Angstroms, but the drop in attenuation at still shorter wavelengths (400 Angstroms) seems to be less than predicted. We find identical attenuation curves for high-ionization and low-ionization BALQSOs. We suggest that attenuation curves appearing to be steeper than the SMC are due to differences in underlying spectra and partial covering by BAL dust. This work was performed under the auspices of the Science Internship Program (SIP) of the University of California at Santa Cruz.

  7. Measurement and modeling of unsaturated hydraulic conductivity: Chapter 21

    USGS Publications Warehouse

    Perkins, Kim S.; Elango, Lakshmanan

    2011-01-01

This chapter will discuss, by way of examples, various techniques used to measure and model hydraulic conductivity as a function of water content, K(θ). The parameters that describe the K(θ) curve obtained by different methods are used directly in Richards’ equation-based numerical models, which have some degree of sensitivity to those parameters. This chapter will explore the complications of using laboratory-measured or estimated properties for field-scale investigations to shed light on how adequately the processes are represented. Additionally, some more recent concepts for representing unsaturated-zone flow processes will be discussed.
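As a concrete example of such a parameterization, the van Genuchten-Mualem closed form is one widely used K(θ) model whose parameters plug directly into Richards' equation codes; a sketch with hypothetical sandy-soil parameter values (the chapter itself surveys multiple methods):

```python
def k_unsat(theta, theta_r, theta_s, Ks, n):
    """van Genuchten-Mualem K(theta): K = Ks * Se^0.5 * (1 - (1 - Se^(1/m))^m)^2,
    with m = 1 - 1/n and Se the effective saturation. Parameter values below
    are illustrative, not taken from the chapter."""
    m = 1.0 - 1.0 / n
    Se = (theta - theta_r) / (theta_s - theta_r)   # effective saturation
    return Ks * Se ** 0.5 * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Hypothetical sandy-soil-like parameters: theta_r = 0.05, theta_s = 0.40, n = 2.0
for th in (0.10, 0.25, 0.40):
    print(th, k_unsat(th, 0.05, 0.40, Ks=1.0e-5, n=2.0))  # K(theta) rises to Ks at saturation
```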

  8. Spatial Light Modulators as Optical Crossbar Switches

    NASA Technical Reports Server (NTRS)

    Juday, Richard

    2003-01-01

    A proposed method of implementing cross connections in an optical communication network is based on the use of a spatial light modulator (SLM) to form controlled diffraction patterns that connect inputs (light sources) and outputs (light sinks). Sources would typically include optical fibers and/or light-emitting diodes; sinks would typically include optical fibers and/or photodetectors. The sources and/or sinks could be distributed in two dimensions; that is, on planes. Alternatively or in addition, sources and/or sinks could be distributed in three dimensions -- for example, on curved surfaces or in more complex (including random) three-dimensional patterns.

  9. Incipient plasticity and indentation response of MgO surfaces using molecular dynamics

    NASA Astrophysics Data System (ADS)

    Tran, Anh-Son; Hong, Zheng-Han; Chen, Ming-Yuan; Fang, Te-Hua

    2018-05-01

    The mechanical characteristics of magnesium oxide (MgO) under nanoindentation are studied using molecular dynamics (MD) simulation. The effects of indentation speed and temperature on the structural deformation and loading-unloading curve are investigated. Results show that the strained surface of the MgO expands to produce a greater relaxation of atoms in the surroundings of the indent. Dislocation propagation and pile-up in MgO become more significant as the temperature increases from 300 K to 973 K. In addition, with increasing temperature, highly strained atoms with large perturbations appear at the groove location.

  10. Review Article: A comparison of flood and earthquake vulnerability assessment indicators

    NASA Astrophysics Data System (ADS)

    de Ruiter, Marleen C.; Ward, Philip J.; Daniell, James E.; Aerts, Jeroen C. J. H.

    2017-07-01

    In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of vulnerability indicators used in the disciplines of earthquake and flood vulnerability assessment. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of index- and curve-based vulnerability models that use these indicators is described, comparing several characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess either of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability curve assessments and incorporating time-of-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend that future studies explore risk assessment methodologies across different hazard types.

  11. Population-based analysis of Alzheimer's disease risk alleles implicates genetic interactions.

    PubMed

    Ebbert, Mark T W; Ridge, Perry G; Wilson, Andrew R; Sharp, Aaron R; Bailey, Matthew; Norton, Maria C; Tschanz, JoAnn T; Munger, Ronald G; Corcoran, Christopher D; Kauwe, John S K

    2014-05-01

    Reported odds ratios and population attributable fractions (PAF) for late-onset Alzheimer's disease (LOAD) risk loci (BIN1, ABCA7, CR1, MS4A4E, CD2AP, PICALM, MS4A6A, CD33, and CLU) come from clinically ascertained samples. Little is known about the combined PAF for these LOAD risk alleles and the utility of these combined markers for case-control prediction. Here we evaluate these loci in a large population-based sample to estimate PAF and explore the effects of additive and nonadditive interactions on LOAD status prediction performance. 2419 samples from the Cache County Memory Study were genotyped for APOE and nine LOAD risk loci from AlzGene.org. We used logistic regression and receiver operator characteristic analysis to assess the LOAD status prediction performance of these loci using additive and nonadditive models and compared odds ratios and PAFs between AlzGene.org and Cache County. Odds ratios were comparable between Cache County and AlzGene.org when identical single nucleotide polymorphisms were genotyped. PAFs from AlzGene.org ranged from 2.25% to 37%; those from Cache County ranged from .05% to 20%. Including non-APOE alleles significantly improved LOAD status prediction performance (area under the curve = .80) over APOE alone (area under the curve = .78) when not constrained to an additive relationship (p < .03). We identified potential allelic interactions (p values uncorrected): CD33-MS4A4E (synergy factor = 5.31; p < .003) and CLU-MS4A4E (synergy factor = 3.81; p < .016). Although nonadditive interactions between loci significantly improve diagnostic ability, the improvement does not reach the desired sensitivity or specificity for clinical use. Nevertheless, these results suggest that understanding gene-gene interactions may be important in resolving Alzheimer's disease etiology. Copyright © 2014 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
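
    The area under the ROC curve reported above can be computed directly from predicted risk scores via the Mann-Whitney U statistic. A minimal numpy sketch on synthetic scores (illustrative only, not the Cache County data; the two score vectors merely stand in for an additive model and a model with interaction terms):

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = scores[labels == 1]                    # scores of cases
    neg = scores[labels == 0]                    # scores of controls
    wins = (pos[:, None] > neg[None, :]).sum()   # case outranks control
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([1, 1, 1, 0, 0, 0])
model_a = np.array([0.9, 0.6, 0.4, 0.5, 0.3, 0.2])   # e.g. additive-model scores
model_b = np.array([0.9, 0.7, 0.6, 0.5, 0.3, 0.2])   # e.g. scores with interactions
print(roc_auc(model_a, labels), roc_auc(model_b, labels))
```

    Comparing the two AUC values is exactly the kind of additive-versus-nonadditive contrast the abstract describes, albeit on toy data.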

  12. Spectrum simulation in DTSA-II.

    PubMed

    Ritchie, Nicholas W M

    2009-10-01

    Spectrum simulation is a useful practical and pedagogical tool. Particularly with complex samples or trace constituents, a simulation can help to understand the limits of the technique and the instrument parameters for the optimal measurement. DTSA-II, software for electron probe microanalysis, provides both easy-to-use and flexible tools for simulating common and less common sample geometries and materials. Analytical models based on φ(ρz) curves provide quick simulations of simple samples. Monte Carlo models based on electron and X-ray transport provide more sophisticated models of arbitrarily complex samples. DTSA-II provides a broad range of simulation tools in a framework with many different interchangeable physical models. In addition, DTSA-II provides tools for visualizing, comparing, manipulating, and quantifying simulated and measured spectra.

  13. Improved output power of GaN-based light-emitting diodes grown on a nanopatterned sapphire substrate

    NASA Astrophysics Data System (ADS)

    Chan, Chia-Hua; Hou, Chia-Hung; Tseng, Shao-Ze; Chen, Tsing-Jen; Chien, Hung-Ta; Hsiao, Fu-Li; Lee, Chien-Chieh; Tsai, Yen-Ling; Chen, Chii-Chang

    2009-07-01

    This letter describes the improved output power of GaN-based light-emitting diodes (LEDs) formed on a nanopatterned sapphire substrate (NPSS) prepared through etching with a self-assembled monolayer of 750-nm-diameter SiO2 nanospheres used as the mask. The output power of NPSS LEDs was 76% greater than that of LEDs on a flat sapphire substrate. Three-dimensional finite-difference time-domain calculation predicted a 40% enhancement in light extraction efficiency of NPSS LEDs. In addition, the reduction of full widths at half maximum in the ω-scan rocking curves for the (0 0 2) and (1 0 2) planes of GaN on NPSS suggested improved crystal quality.

  14. A PC-based inverse design method for radial and mixed flow turbomachinery

    NASA Technical Reports Server (NTRS)

    Skoe, Ivar Helge

    1991-01-01

    An inverse design method suitable for radial and mixed flow turbomachinery is presented. The codes are based on the streamline curvature concept and are therefore applicable to current personal computers of the 286/287 class. In addition to the imposed aerodynamic constraints, mechanical constraints are imposed during the design process to ensure that the resulting geometry satisfies production considerations and that structural considerations are taken into account. Because Bezier curves are used in the geometric modeling, the same subroutine is used to prepare input for both the aerodynamic and structural files, ensuring that identical geometric data are passed to structural analysis and production. To illustrate the method, a mixed flow turbine design is shown.
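
    Bézier geometry of the kind mentioned above is conventionally evaluated with de Casteljau's algorithm, repeated linear interpolation of the control polygon. A generic sketch (the control points are illustrative, not from the turbine design):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Blend each adjacent pair of points at parameter t
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier: the curve passes through its endpoints at t = 0 and t = 1
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
print(de_casteljau(ctrl, 0.5))
```

    Because the same control points drive every downstream consumer of the geometry, evaluating them with one routine is a simple way to guarantee the aero and structural files agree.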

  15. Estimates of genetic parameters and eigenvector indices for milk production of Holstein cows.

    PubMed

    Savegnago, R P; Rosa, G J M; Valente, B D; Herrera, L G G; Carneiro, R L R; Sesana, R C; El Faro, L; Munari, D P

    2013-01-01

    The objectives of the present study were to estimate genetic parameters of monthly test-day milk yield (TDMY) of the first lactation of Brazilian Holstein cows using random regression (RR), and to compare the genetic gains for milk production and persistency, derived from RR models, using eigenvector indices and selection indices that did not consider eigenvectors. The data set contained monthly TDMY of 3,543 first lactations of Brazilian Holstein cows calving between 1994 and 2011. The RR model included the fixed effect of the contemporary group (herd-month-year of test days), the covariate calving age (linear and quadratic effects), and a fourth-order regression on Legendre orthogonal polynomials of days in milk (DIM) to model the population-based mean curve. Additive genetic and nongenetic animal effects were fit as RR with 4 classes of residual variance random effect. Eigenvector indices based on the additive genetic RR covariance matrix were used to evaluate the genetic gains of milk yield and persistency compared with the traditional selection index (based on breeding values of milk yield until 305 DIM). The heritability estimates for monthly TDMY ranged from 0.12 ± 0.04 to 0.31 ± 0.04. The estimated correlations of additive genetic and nongenetic animal effects were close to 1 at adjacent monthly TDMY, with a tendency to diminish as the time between DIM classes increased. The first eigenvector was related to increasing the genetic response of milk yield; the second eigenvector was related to increasing the genetic gains of persistency, but it contributed to decreasing the genetic gains for total milk yield. Therefore, using this eigenvector to improve persistency will not change the shape of the genetic curve pattern. If the breeding goal is to improve milk production and persistency, complete sequential eigenvector indices (selection indices composed of all eigenvectors) could be used with higher economic values for persistency. However, if the breeding goal is to improve only milk yield, the traditional selection index is indicated. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
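
    Eigenvector indices of the kind used above come from the eigendecomposition of the additive genetic covariance matrix of the random-regression coefficients: each eigenvector supplies the weights of one index, and its eigenvalue is the genetic variance that index captures. A hedged numpy sketch on an illustrative 3×3 covariance (assumed values, not the estimated Holstein matrix):

```python
import numpy as np

# Illustrative additive-genetic covariance matrix of RR coefficients (assumed)
G = np.array([[4.0, 1.5, 0.5],
              [1.5, 2.0, 0.3],
              [0.5, 0.3, 1.0]])

# eigh handles symmetric matrices; reorder so the leading index comes first
eigvals, eigvecs = np.linalg.eigh(G)
order = eigvals.argsort()[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Column i of eigvecs weights the RR coefficients in the i-th eigenvector index;
# eigvals[i] / eigvals.sum() is the fraction of genetic variance it explains.
print(eigvals[0] / eigvals.sum())
```

    In this framework the first index typically tracks overall yield level and later indices reshape the lactation curve, mirroring the yield-versus-persistency trade-off described in the abstract.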

  16. Are driving and overtaking on right curves more dangerous than on left curves?

    PubMed

    Othman, Sarbaz; Thomson, Robert; Lannér, Gunnar

    2010-01-01

    It is well known that crashes on horizontal curves are a cause for concern in all countries due to the frequency and severity of crashes at curves compared to road tangents. A recent study of crashes in western Sweden reported a higher rate of crashes in right curves than left curves. To further understand this result, this paper reports the results of novel analyses of the responses of vehicles and drivers during negotiating and overtaking maneuvers on curves for right-hand traffic. The overall objectives of the study were to find road parameters for curves that affect vehicle dynamic responses, to analyze these responses during overtaking maneuvers on curves, and to link the results with driver behavior for different curve directions. The studied road features were speed, super-elevation, radius and friction, including their interactions, while the analyzed vehicle dynamic factors were lateral acceleration and yaw angular velocity. A simulation program, PC-Crash, was used to simulate road parameter and vehicle response interaction in curves. Overtaking maneuvers were simulated for all road feature combinations in a total of 108 runs. Analysis of variance (ANOVA) was performed, using a two-sided randomized block design, to find differences in vehicle responses for the curve parameters. To study driver response, a field test using an instrumented vehicle and 32 participants was reviewed, as it contained longitudinal speed and acceleration data for analysis. The simulation results showed that road features affect overtaking performance in right and left curves differently. Overtaking on right curves was sensitive to radius and the interaction of radius with road condition, while overtaking on left curves was more sensitive to super-elevation. Comparisons of lateral acceleration and yaw angular velocity during these maneuvers showed different vehicle response configurations depending on curve direction and maneuver path. 
The field test experiments also showed that drivers behave differently depending on the curve direction where both speed and acceleration were higher on right than left curves. The implication of this study is that curve direction should be taken into consideration to a greater extent when designing and redesigning curves. It appears that the driver and the vehicle are influenced by different infrastructure factors depending on the curve direction. In addition, the results suggest that the vehicle dynamics response alone cannot explain the higher crash risk in right curves. Further studies of the links between driver, vehicle, and highway characteristics are needed, such as naturalistic driving studies, to identify the key safety indicators for highway safety.
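
    To first order, the lateral acceleration examined in these simulations follows the point-mass relation a_lat = v²/R for speed v on a curve of radius R. A minimal sketch with illustrative numbers (not values from the study):

```python
def lateral_acceleration(speed_mps, radius_m):
    """Steady-state point-mass lateral acceleration v^2 / R, in m/s^2."""
    return speed_mps ** 2 / radius_m

# About 90 km/h (25 m/s) on a tight versus a gentle curve
v = 25.0
print(lateral_acceleration(v, 200.0), lateral_acceleration(v, 400.0))
```

    With super-elevation e, the unbalanced lateral acceleration is reduced by roughly g·e, which is one mechanism by which the road features above can interact with curve radius.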

  17. Characterization and Predictive Value of Segmental Curve Flexibility in Adolescent Idiopathic Scoliosis Patients.

    PubMed

    Yao, Guanfeng; Cheung, Jason P Y; Shigematsu, Hideki; Ohrt-Nissen, Søren; Cheung, Kenneth M C; Luk, Keith D K; Samartzis, Dino

    2017-11-01

    A prospective radiographic analysis of adolescent idiopathic scoliosis (AIS) patients managed with alternate-level pedicle screw fixation was performed. The objective of this study was to characterize segmental curve flexibility and to determine its predictive value in curve correction in AIS patients. Little is known regarding the distinct segmental curve characteristics and their ability to predict curve correction in patients with AIS. The segmental Cobb angle was measured on posteroanterior standing radiographs and on fulcrum bending radiographs. Radiographs were analyzed preoperatively and at 2 years postoperatively and the curve was divided into upper, mid, and lower segments based on predefined criteria. The segmental flexibility and the segmental fulcrum bending correction index (FBCI) were calculated. Eighty patients were included with mean age of 15 years. Preoperative mean segmental Cobb angles were 18, 31, and 17 degrees in the upper, mid, and lower segments, respectively. Segmental bending Cobb angles were 6, 13, and 4 degrees, respectively, corresponding to segmental flexibilities of 50%, 47%, and 83% in the upper, mid, and lower segments, respectively (P < 0.001). At 2-year follow up, the mean segmental FBCI were 155%, 131%, and 100% in the upper, mid, and lower segments, respectively (P < 0.001), which suggested that the lower segment of the curve was more flexible than the other segments and that higher correction was noted in the upper segments. A significant, positive correlation was noted between the segmental bending Cobb angle and the segmental FBCI (P < 0.05), whereby the strength of the correlation varied based on the curve segment. This is the first study to demonstrate the segmental variations in curve flexibility using the fulcrum bending radiograph in AIS patients. Curve flexibility is not uniform throughout the curve and different segments exhibit greater flexibility/correctibility than others. 
    Segmental flexibility should be considered in assessing AIS patients and in the clinical decision-making strategy to optimize curve correction outcomes.

  18. TU-EF-304-06: A Comparison of CT Number to Relative Linear Stopping Power Conversion Curves Used by Proton Therapy Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, P; Lowenstein, J; Kry, S

    Purpose: To compare the CT number (CTN) to relative linear stopping power (RLSP) conversion curves used by 14 proton institutions in their dose calculations. Methods: The proton institutions' CTN to RLSP conversion curves were collected by the Imaging and Radiation Oncology Core (IROC) Houston QA Center during its on-site dosimetry review audits. The CTN values were converted to scaled CT numbers. The scaling assigns a CTN of 0 to air and 1000 to water to allow intercomparison. The conversion curves were compared and the mean curve was calculated based on institutions' predicted RLSP values for air (CTN 0), lung (CTN 250), fat (CTN 950), water (CTN 1000), liver (CTN 1050), and bone (CTN 2000) points. Results: One institution's curve was found to have a unique curve shape between the scaled CTN of 1025 to 1225. This institution modified its curve based on the findings. Another institution had higher RLSP values than expected for both low and high CTNs. This institution recalibrated their two CT scanners and the new data placed their curve closer to the mean of all institutions. After corrections were made to several conversion curves, four institutions still fall outside 2 standard deviations at very low CTNs (100-200), and two institutions fall outside between CTN 850-900. The largest percent difference in RLSP values between institutions for the specific tissues reviewed was 22% for the lung point. Conclusion: The review and comparison of CTN to RLSP conversion curves allows IROC Houston to identify any outliers and make recommendations for improvement. Several institutions improved their clinical dose calculation accuracy as a result of this review. There is still room for improvement, particularly in the lung region of the curve. The IROC Houston QA Center is supported by NCI grant CA180803.
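
    The scaling described above (air to 0, water to 1000) is a linear map of raw CT numbers, after which a conversion curve between the tabulated tissue points can be applied by interpolation. A sketch with an illustrative lookup table (assumed values, not any institution's calibration):

```python
import numpy as np

def scale_ctn(hu, hu_air=-1000.0, hu_water=0.0):
    """Rescale Hounsfield-like CT numbers so that air -> 0 and water -> 1000."""
    return 1000.0 * (hu - hu_air) / (hu_water - hu_air)

# Illustrative scaled-CTN -> RLSP lookup table (assumed, roughly density-like)
ctn_pts  = np.array([0.0, 250.0, 950.0, 1000.0, 1050.0, 2000.0])
rlsp_pts = np.array([0.001, 0.25, 0.95, 1.0, 1.05, 1.6])

def rlsp(hu):
    """Piecewise-linear RLSP from a raw CT number via the scaled-CTN table."""
    return np.interp(scale_ctn(hu), ctn_pts, rlsp_pts)

print(float(rlsp(0.0)))   # water
```

    Putting every institution's curve on the same scaled-CTN axis is what makes the mean-curve and outlier comparison above possible.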

  19. Uninterrupted optical light curves of main-belt asteroids from the K2 mission

    NASA Astrophysics Data System (ADS)

    Szabó, R.; Pál, A.; Sárneczky, K.; Szabó, Gy. M.; Molnár, L.; Kiss, L. L.; Hanyecz, O.; Plachy, E.; Kiss, Cs.

    2016-11-01

    Context. Because the second reaction wheel failed, a new mission was conceived for the otherwise healthy Kepler space telescope. In the course of the K2 mission, the telescope is staring at the plane of the Ecliptic. Thousands of solar system bodies therefore cross the K2 fields and usually cause additional noise in the highly accurate photometric data. Aims: We here follow the principle that some person's noise is another person's signal and investigate the possibility of deriving continuous asteroid light curves. This is the first such endeavor. In general, we are interested in the photometric precision that the K2 mission can deliver on moving solar system bodies. In particular, we investigate space photometric optical light curves of main-belt asteroids. Methods: We studied the K2 superstamps that cover the fields of M35, and Neptune together with Nereid, which were observed in the long-cadence mode (29.4 min sampling). Asteroid light curves were generated by applying elongated apertures. We used the Lomb-Scargle method to determine periodicities that are due to rotation. Results: We derived K2 light curves of 924 main-belt asteroids in the M35 field and 96 in the path of Neptune and Nereid. The light curves are quasi-continuous and several days long. K2 observations are sensitive to longer rotational periods than typical ground-based surveys. Rotational periods are derived for 26 main-belt asteroids for the first time. The asteroid sample is dominated by faint objects (>20 mag). Owing to the faintness of the asteroids and the high density of stars in the M35 field, only 4.0% of the asteroids with at least 12 data points show clear periodicities or trends that signal a long rotational period, as opposed to 15.9% in the less crowded Neptune field. We found that the duty cycle of the observations had to reach 60% to successfully recover rotational periods. 
Full Tables 1-4 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/596/A40
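
    Period searches like the one above can be reproduced with a least-squares periodogram, equivalent in spirit to the Lomb-Scargle method, on unevenly sampled photometry. A self-contained sketch on synthetic data (illustrative, not K2 photometry):

```python
import numpy as np

def ls_power(t, y, freqs):
    """Least-squares periodogram: variance captured by a sinusoid at each trial frequency."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        design = np.column_stack([np.sin(w * t), np.cos(w * t)])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        fit = design @ coef
        power[i] = fit @ fit
    return power

# Synthetic unevenly sampled light curve with a known 1.3-day period
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 300))
y = np.sin(2.0 * np.pi * t / 1.3) + 0.1 * rng.normal(size=t.size)

freqs = np.linspace(0.05, 2.0, 2000)
best_freq = freqs[np.argmax(ls_power(t, y, freqs))]
print(1.0 / best_freq)   # recovered period, days
```

    For shape-driven asteroid light curves the rotation period is commonly twice the best-fit sinusoid period, since such light curves are double-peaked.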

  20. Homogeneous studies of transiting extrasolar planets - III. Additional planets and stellar models

    NASA Astrophysics Data System (ADS)

    Southworth, John

    2010-11-01

    I derive the physical properties of 30 transiting extrasolar planetary systems using a homogeneous analysis of published data. The light curves are modelled with the JKTEBOP code, with special attention paid to the treatment of limb darkening, orbital eccentricity and error analysis. The light from some systems is contaminated by faint nearby stars, which if ignored will systematically bias the results. I show that it is not realistically possible to account for this using only transit light curves: light-curve solutions must be constrained by measurements of the amount of contaminating light. A contamination of 5 per cent is enough to make the measurement of a planetary radius 2 per cent too low. The physical properties of the 30 transiting systems are obtained by interpolating in tabulated predictions from theoretical stellar models to find the best match to the light-curve parameters and the measured stellar velocity amplitude, temperature and metal abundance. Statistical errors are propagated by a perturbation analysis which constructs complete error budgets for each output parameter. These error budgets are used to compile a list of systems which would benefit from additional photometric or spectroscopic measurements. The systematic errors arising from the inclusion of stellar models are assessed by using five independent sets of theoretical predictions for low-mass stars. This model dependence sets a lower limit on the accuracy of measurements of the physical properties of the systems, ranging from 1 per cent for the stellar mass to 0.6 per cent for the mass of the planet and 0.3 per cent for other quantities. The stellar density and the planetary surface gravity and equilibrium temperature are not affected by this model dependence. 
An external test on these systematic errors is performed by comparing the two discovery papers of the WASP-11/HAT-P-10 system: these two studies differ in their assessment of the ratio of the radii of the components and the effective temperature of the star. I find that the correlations of planetary surface gravity and mass with orbital period have significance levels of only 3.1σ and 2.3σ, respectively. The significance of the latter has not increased with the addition of new data since Paper II. The division of planets into two classes based on Safronov number is increasingly blurred. Most of the objects studied here would benefit from improved photometric and spectroscopic observations, as well as improvements in our understanding of low-mass stars and their effective temperature scale.
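
    The contamination bias mentioned above can be sketched with simple transit-depth dilution: third light scales the observed depth by (1 − L3), and the radius ratio goes as the square root of the depth. This fixed-geometry estimate ignores limb darkening and the parameter compensation of a full light-curve fit, so it gives only the few-per-cent order of the quoted bias:

```python
def diluted_radius_ratio(k_true, third_light_frac):
    """Radius ratio inferred when a fraction of the total light is contamination."""
    depth_true = k_true ** 2                      # transit depth ~ (Rp/Rs)^2
    depth_obs = depth_true * (1.0 - third_light_frac)
    return depth_obs ** 0.5

k_true = 0.10                                     # illustrative Rp/Rs
k_obs = diluted_radius_ratio(k_true, 0.05)        # 5 per cent contamination
print(1.0 - k_obs / k_true)                       # fractional underestimate
```

    The underestimate here is 1 − √0.95, about 2.5 per cent, of the same order as the 2 per cent bias quoted for full light-curve solutions.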

  1. Regional Curves of Bankfull Channel Geometry for Non-Urban Streams in the Piedmont Physiographic Province, Virginia

    USGS Publications Warehouse

    Lotspeich, R. Russell

    2009-01-01

    Natural-channel design involves constructing a stream channel with the dimensions, slope, and plan-view pattern that would be expected to transport water and sediment and yet maintain habitat and aesthetics consistent with unimpaired stream segments, or reaches. Regression relations for bankfull stream characteristics based on drainage area, referred to as 'regional curves,' are used in natural stream channel design to verify field determinations of bankfull discharge and stream channel characteristics. One-variable, ordinary least-squares regressions relating bankfull discharge, bankfull cross-sectional area, bankfull width, bankfull mean depth, and bankfull slope to drainage area were developed on the basis of data collected at 17 streamflow-gaging stations in rural areas with less than 20 percent urban land cover within the basin area (non-urban areas) of the Piedmont Physiographic Province in Virginia. These regional curves can be used to estimate the bankfull discharge and bankfull channel geometry when the drainage area of a watershed is known. Data collected included bankfull cross-sectional geometry, flood-plain geometry, and longitudinal profile data. In addition, particle-size distributions of streambed material were determined, and data on basin characteristics were compiled for each reach. Field data were analyzed to determine bankfull cross-sectional area, bankfull width, bankfull mean depth, bankfull discharge, bankfull channel slope, and D50 and D84 particle sizes at each site. The bankfull geometry from the 17 sites surveyed during this study represents the average of two riffle cross sections for each site. Regional curves developed for the 17 sites had coefficient of determination (R2) values of 0.950 for bankfull cross-sectional area, 0.913 for bankfull width, 0.915 for bankfull mean depth, 0.949 for bankfull discharge, and 0.497 for bankfull channel slope. 
The regional curves represent conditions for streams with defined channels and bankfull features in the Piedmont Physiographic Province in Virginia with drainage areas ranging from 0.29 to 111 square miles. All sites included in the development of the regional curves were located on streams with current or historical U.S. Geological Survey streamflow-gaging stations. These curves can be used to verify bankfull features identified in the field and bankfull stage for ungaged streams in non-urban areas.
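
    Regional curves of this type are one-variable power laws fit by ordinary least squares in log-log space. A sketch on synthetic drainage-area data (the coefficient, exponent, and scatter are assumed for illustration, not the Virginia values):

```python
import numpy as np

# Synthetic drainage areas (square miles) and bankfull discharges for 17 sites,
# following an assumed power law Q = 60 * A^0.75 with lognormal scatter
rng = np.random.default_rng(1)
area = rng.uniform(0.3, 111.0, 17)
q_bankfull = 60.0 * area ** 0.75 * np.exp(0.05 * rng.normal(size=17))

# One-variable OLS in log-log space: log10 Q = log10 a + b * log10 A
b, log_a = np.polyfit(np.log10(area), np.log10(q_bankfull), 1)
resid = np.log10(q_bankfull) - (log_a + b * np.log10(area))
r2 = 1.0 - resid.var() / np.log10(q_bankfull).var()
print(b, 10.0 ** log_a, r2)
```

    The R² values quoted in the abstract are computed in this log-log space, which is why slope (the exponent) and intercept (the coefficient) translate directly into a power-law regional curve.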

  2. Direct numerical simulation of supersonic turbulent boundary layer subjected to a curved compression ramp

    NASA Astrophysics Data System (ADS)

    Tong, Fulin; Li, Xinliang; Duan, Yanhui; Yu, Changping

    2017-12-01

    Numerical investigations of a supersonic turbulent boundary layer over a longitudinal curved compression ramp are conducted using direct numerical simulation for a free stream Mach number M∞ = 2.9 and Reynolds number Reθ = 2300. The total turning angle is 24°, and the concave curvature radius is 15 times the thickness of the incoming turbulent boundary layer. Under the selected conditions, the shock foot is transformed into a fan of compression waves because of the weaker adverse pressure gradient. The time-averaged flow field in the curved ramp is statistically attached, whereas the instantaneous flow field is close to the intermittent transitory detachment state. Studies of coherent vortex structures show that large-scale vortex packets are enhanced significantly when the concave curvature is aligned in the spanwise direction. Consistent with findings of previous experiments, the effect of the concave curvature on the logarithmic region of the mean velocity profiles is found to be small. The intensity of the turbulent fluctuations is amplified across the curved ramp. Based on the analysis of the Reynolds stress anisotropy tensor, the evolutions of the turbulence state in the inner and outer layers of the boundary layer are considerably different. The curvature effect on the transport mechanism of the turbulent kinetic energy is studied using a balance analysis of the contributing terms in the transport equation. Furthermore, the Görtler instability in the curved ramp is quantitatively analyzed using a stability criterion. The instantaneous streamwise vorticity confirms the existence of Görtler-like structures, which are characterized by unsteady motion.
In addition, the dynamic mode decomposition analysis of the instantaneous flow field at the spanwise/wall-normal plane reveals that four dynamical relevant modes with performance loss of 16% provide an optimal low-order representation of the essential characteristics of the numerical data. The spatial structures of the dominated low-frequency dynamic modes are found to be similar to that of the Görtler-like vortices.

  3. Identifying Blocks Formed by Curved Fractures Using Exact Arithmetic

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Xia, L.; Yu, Q.; Zhang, X.

    2015-12-01

    Identifying blocks formed by fractures is important in rock engineering. Most studies assume the fractures to be perfectly planar, whereas curved fractures are rarely considered. However, large fractures observed in the field are often curved. This paper presents a new method for identifying rock blocks formed by both curved and planar fractures based on the element-block-assembling approach. The curved and planar fractures are represented as triangle meshes and planar discs, respectively. At the beginning of the identification method, the intersection segments between different triangle meshes are calculated and the intersected triangles are re-meshed to construct a piecewise linear complex (PLC). Then, the modeling domain is divided into tetrahedral subdomains under the constraint of the PLC and these subdomains are further decomposed into element blocks by extended planar fractures. Finally, the element blocks are combined and the subdomains are assembled to form complex blocks. The combination of two subdomains is skipped if and only if the common facet lies on a curved fracture. In this study, exact arithmetic is used to handle the computational errors that may threaten the robustness of the block identification program when degenerate cases are encountered. Specifically, a real number is represented as the ratio between two integers, and basic arithmetic such as addition, subtraction, multiplication and division between different real numbers can be performed exactly if an arbitrary-precision integer package is used. In this way, the exact construction of blocks can be achieved without introducing computational errors. Several analytical examples are given in this paper and the results show the effectiveness of this method in handling arbitrarily shaped blocks. Moreover, there is no limitation on the number of blocks in a block system. The results also suggest that degenerate cases can be handled without affecting the robustness of the identification program.
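
    The exact arithmetic described above, with each real number stored as a ratio of two arbitrary-precision integers, is available in many environments. Python's `fractions.Fraction` illustrates why it eliminates the rounding that breaks degenerate geometric predicates (a generic illustration, not the authors' implementation):

```python
from fractions import Fraction

# Floating point: 0.1 * 3 is not exactly 0.3, so sign tests near zero can misfire
print(0.1 * 3 == 0.3)                  # False

# Rational arithmetic: every intermediate value is an exact integer ratio
print(Fraction(1, 10) * 3 == Fraction(3, 10))   # True

# A 2-D orientation predicate evaluated exactly: collinear points give exactly 0
def orient(p, q, r):
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

pts = [(Fraction(0), Fraction(0)),
       (Fraction(1, 3), Fraction(1, 3)),
       (Fraction(2, 3), Fraction(2, 3))]
print(orient(*pts) == 0)               # True: the degeneracy is detected exactly
```

    Predicates like `orient` are exactly the kind of test that decides whether a facet lies on a fracture, which is why exact rationals make the degenerate cases robust.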

  4. Estimating the Exceedance Probability of the Reservoir Inflow Based on the Long-Term Weather Outlooks

    NASA Astrophysics Data System (ADS)

    Huang, Q. Z.; Hsu, S. Y.; Li, M. H.

    2016-12-01

    Long-term streamflow prediction is important not only for estimating the water storage of a reservoir but also for surface water intakes, which supply water for domestic use, agriculture, and industry. Climatological streamflow forecasts have traditionally been used to calculate the exceedance probability curve of streamflow for water resource management. In this study, we propose a stochastic approach to predict the exceedance probability curve of long-term streamflow from the seasonal weather outlook of the Central Weather Bureau (CWB), Taiwan. The approach incorporates a statistical downscaling weather generator and a catchment-scale hydrological model to convert the monthly outlook into daily rainfall and temperature series and to simulate the streamflow based on the outlook information. Moreover, we applied Bayes' theorem to derive a method for calculating the exceedance probability curve of the reservoir inflow based on the seasonal weather outlook and its imperfection. The results show that our approach can give exceedance probability curves reflecting the three-month weather outlook and its accuracy. We also show how improvement of the weather outlook affects the predicted exceedance probability curves of the streamflow. Our approach should be useful for the seasonal planning and management of water resources and their risk assessment.
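
    An empirical exceedance probability curve of the kind produced by such an ensemble can be built from simulated inflows with a plotting-position estimate. A minimal sketch using the Weibull formula m/(n+1) on illustrative values (not CWB or reservoir data):

```python
import numpy as np

def exceedance_curve(values):
    """Empirical exceedance probabilities using the Weibull plotting position m/(n+1)."""
    x = np.sort(np.asarray(values))[::-1]      # descending: largest flow exceeded least often
    p = np.arange(1, x.size + 1) / (x.size + 1.0)
    return x, p

# Illustrative simulated seasonal inflows (e.g. m^3/s)
inflows = [120.0, 85.0, 240.0, 60.0, 150.0]
x, p = exceedance_curve(inflows)
print(list(zip(x.tolist(), p.tolist())))
```

    Repeating this over many outlook-conditioned simulations, and weighting by the outlook's reliability, is the role Bayes' theorem plays in the approach above.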

  5. Vacuum-Ultraviolet photoionization studies of the microhydrationof DNA bases (Guanine, Cytosine, Adenine and Thymine)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belau, L.; Wilson, K.R.; Leone, S.R.

    2007-01-22

    In this work, we report on a photoionization study of the microhydration of the four DNA bases. Gas-phase clusters of water with DNA bases [guanine (G), cytosine (C), adenine (A), and thymine (T)] are generated via thermal vaporization of the bases and expansion of the resultant vapor in a continuous supersonic jet expansion of water seeded in Ar. The resulting clusters are investigated by single-photon ionization with tunable vacuum-ultraviolet synchrotron radiation and mass analyzed using reflectron mass spectrometry. Photoionization efficiency (PIE) curves are recorded for the DNA bases and the following water (W) clusters: G, GW_n (n = 1-3); C, CW_n (n = 1-3); A, AW_n (n = 1,2); and T, TW_n (n = 1-3). Appearance energies (AE) are derived from the onset of these PIE curves (all energies in eV): G (8.1 ± 0.1), GW (8.0 ± 0.1), GW_2 (8.0 ± 0.1), and GW_3 (8.0); C (8.65 ± 0.05), CW (8.45 ± 0.05), CW_2 (8.4 ± 0.1), and CW_3 (8.3 ± 0.1); A (8.30 ± 0.05), AW (8.20 ± 0.05), and AW_2 (8.1 ± 0.1); T (8.90 ± 0.05); and TW (8.75 ± 0.05), TW_2 (8.6 ± 0.1), and TW_3 (8.6 ± 0.1). The AEs of the DNA bases decrease slightly with the addition of water molecules (up to three) but do not converge to the values found for photoinduced electron removal from DNA bases in solution.
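
    Appearance energies like those above are read off the onset of the PIE curve. A deliberately simplified sketch that flags the first photon energy where the ion signal rises above the pre-onset baseline plus a noise threshold (synthetic, idealized data; real analyses fit the onset region rather than thresholding it):

```python
import numpy as np

def appearance_energy(energy_ev, signal, n_baseline=10, k_sigma=3.0):
    """First photon energy where the signal exceeds baseline mean + k*sigma."""
    base = signal[:n_baseline]
    threshold = base.mean() + k_sigma * base.std()
    above = np.nonzero(signal > threshold)[0]
    return float(energy_ev[above[0]]) if above.size else None

# Idealized noise-free PIE curve with a linear onset near 8.3 eV
e = np.arange(7.5, 9.5, 0.05)
sig = np.maximum(0.0, 5.0 * (e - 8.3))
print(appearance_energy(e, sig))
```

    The threshold choice (here 3σ over the first ten points) is an assumption; it controls how the quoted AE uncertainties of ±0.05 to ±0.1 eV would arise from baseline noise.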

  6. Tailored Algorithm for Sensitivity Enhancement of Gas Concentration Sensors Based on Tunable Laser Absorption Spectroscopy.

    PubMed

    Vargas-Rodriguez, Everardo; Guzman-Chavez, Ana Dinora; Baeza-Serrato, Roberto

    2018-06-04

    In this work, a novel tailored algorithm to enhance the overall sensitivity of gas concentration sensors based on the Direct Absorption Tunable Laser Absorption Spectroscopy (DA-ATLAS) method is presented. By using this algorithm, the sensor sensitivity can be custom-designed to be quasi constant over a much larger dynamic range compared with that obtained by typical methods based on a single statistics feature of the sensor signal output (peak amplitude, area under the curve, mean or RMS). Additionally, it is shown that with our algorithm, an optimal function can be tailored to get a quasi linear relationship between the concentration and some specific statistics features over a wider dynamic range. In order to test the viability of our algorithm, a basic C 2 H 2 sensor based on DA-ATLAS was implemented, and its experimental measurements support the simulated results provided by our algorithm.
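    The single statistical features that such a tailored algorithm improves upon can each be computed directly from one sampled absorption scan. A minimal sketch with invented sample values (this is not the paper's algorithm, only the baseline features it compares against):

```python
import math

def signal_features(samples, dt=1.0):
    """Single statistics features of one absorption scan: the quantities a
    typical single-feature DA-ATLAS concentration calibration is built on."""
    peak = max(samples)
    area = sum(samples) * dt              # rectangle-rule area under the curve
    mean = sum(samples) / len(samples)
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return {"peak": peak, "area": area, "mean": mean, "rms": rms}

# An illustrative triangular absorption peak sampled at dt = 0.1:
f = signal_features([0.0, 0.5, 1.0, 0.5, 0.0], dt=0.1)
```

    The tailored algorithm's contribution is to combine such features so the concentration-response stays near-linear over a wider range than any single feature allows.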

  7. Updated Intensity-Duration-Frequency Curves Under Different Future Climate Scenarios

    NASA Astrophysics Data System (ADS)

    Ragno, E.; AghaKouchak, A.

    2016-12-01

    Current infrastructure design procedures rely on Intensity-Duration-Frequency (IDF) curves derived under the assumption of temporal stationarity, meaning that occurrences of extreme events are expected to be time-invariant. However, numerous studies have observed more severe extreme events over time. Hence, the stationarity assumption for extreme-value analysis may not be appropriate in a warming climate. This issue raises concerns about the safety and resilience of existing and future infrastructure. Here we employ historical and projected (RCP 8.5) CMIP5 runs to investigate IDF curves for 14 urban areas across the United States. We first statistically assess changes in precipitation extremes using an energy-based test for equal distributions. Then, through a Bayesian inference approach for stationary and non-stationary extreme value analysis, we provide updated IDF curves based on climate model projections. This presentation summarizes the projected changes in statistics of extremes. We show that, based on CMIP5 simulations, extreme precipitation events in some urban areas can be 20% more severe in the future, even where the projected annual mean precipitation is expected to remain similar to the ground-based climatology.
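    The return levels that an IDF curve reports can be read off an empirical distribution of annual maxima. The sketch below uses the Weibull plotting position with invented rainfall depths as a simple stand-in for the study's Bayesian GEV fitting:

```python
def return_level(annual_max, T):
    """Empirical T-year return level from annual maxima, using the Weibull
    plotting position i/(n+1) and linear interpolation between order stats."""
    s = sorted(annual_max)
    n = len(s)
    x = (1.0 - 1.0 / T) * (n + 1)        # target position on the 1..n scale
    i = min(max(int(x), 1), n - 1)       # clamp so s[i-1] and s[i] both exist
    return s[i - 1] + (x - i) * (s[i] - s[i - 1])

# Twenty years of invented annual-maximum hourly rainfall depths (mm):
amax = [22, 25, 28, 30, 31, 33, 34, 36, 37, 39,
        40, 42, 44, 46, 48, 51, 55, 60, 68, 80]
rl10 = return_level(amax, 10)            # 10-year event
rl2 = return_level(amax, 2)              # median year
```

    A non-stationary analysis, as in the study, would let the distribution's parameters drift with time or a climate covariate, so the T-year return level itself changes between the historical and projected periods.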

  8. Boosting structured additive quantile regression for longitudinal childhood obesity data.

    PubMed

    Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael

    2013-07-25

    Childhood obesity and the investigation of its risk factors have become an important public health issue. Our work is based on and motivated by a German longitudinal study of 2,226 children, with up to ten measurements of body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach to longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects such as intercepts and slopes. Estimation is based on boosting, a computer-intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, in particular shrinking individual-specific effects toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, at both the population and individual-specific levels, adjusted for further risk factors, and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
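    The engine of quantile boosting is the negative gradient of the check (pinball) loss. A toy functional-gradient fit of a single constant quantile shows the mechanism (a deliberately minimal sketch, far simpler than the component-wise structured additive model in the paper):

```python
def pinball_neg_grad(y, f, tau):
    """Negative gradient of the check (pinball) loss at the current fit f:
    tau when the observation lies above the fit, tau - 1 otherwise."""
    return tau if y > f else tau - 1.0

def boost_quantile(ys, tau, steps=2000, nu=0.5):
    """Fit one constant tau-quantile by functional gradient descent, the
    base mechanism behind component-wise quantile boosting."""
    f = sum(ys) / len(ys)                  # offset: start from the mean
    for _ in range(steps):
        g = sum(pinball_neg_grad(y, f, tau) for y in ys) / len(ys)
        f += nu * g                        # step along the negative gradient
    return f

ys = list(range(1, 101))                   # toy data: 1..100
q90 = boost_quantile(ys, 0.9)              # converges into [90, 91)
```

    The full algorithm applies the same negative-gradient step, but in each iteration fits one penalized base learner (a smooth age effect, a varying coefficient, an individual intercept) to the gradients and updates only the best-fitting component, which is what produces the shrinkage toward zero.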

  9. Diagnostic value of fibronectin discriminant score for predicting liver fibrosis stages in chronic hepatitis C virus patients.

    PubMed

    Attallah, Abdelfattah M; Abdallah, Sanaa O; Attallah, Ahmed A; Omran, Mohamed M; Farid, Khaled; Nasif, Wesam A; Shiha, Gamal E; Abdel-Aziz, Abdel-Aziz F; Rasafy, Nancy; Shaker, Yehia M

    2013-01-01

    Several noninvasive predictive models have been developed to substitute for liver biopsy in fibrosis assessment. Our aim was to evaluate the diagnostic value of fibronectin, which reflects extracellular matrix metabolism, together with standard liver function tests, which reflect alterations in hepatic function. Chronic hepatitis C (CHC) patients (n = 145) were evaluated using ROC curves and stepwise multivariate discriminant analysis (MDA); the model was validated in 180 additional patients. A liver biochemical profile including transaminases, bilirubin, alkaline phosphatase, albumin, and complete blood count was estimated. Fibronectin concentration was determined using a monoclonal antibody and ELISA. A novel index named the fibronectin discriminant score (FDS), based on fibronectin, APRI, and albumin, was developed. FDS produced areas under ROC curves (AUC) of 0.91 for significant fibrosis and 0.81 for advanced fibrosis. The FDS correctly classified 79% of the significant liver fibrosis patients (F2-F4) with 87% sensitivity and 75% specificity. The relative risks [odds ratios (OR)] of having significant liver fibrosis, using the cut-off values determined by ROC curve analyses, were 6.1 for fibronectin, 4.9 for APRI, and 4.2 for albumin. FDS predicted liver fibrosis with an OR of 16.8 for significant fibrosis and 8.6 for advanced fibrosis. The FDS had an AUC and OR in the validation group similar to those in the estimation group, without statistically significant difference. FDS predicted liver fibrosis with a high degree of accuracy, potentially decreasing the number of liver biopsies required.
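    The AUC values reported above have a simple probabilistic reading: the chance that a randomly chosen fibrosis case scores higher than a randomly chosen non-case. A minimal sketch with made-up discriminant scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive case outscores a negative one, counting ties as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented discriminant scores for F2-F4 cases vs. F0-F1 cases:
a = auc([0.9, 0.8, 0.7, 0.6], [0.65, 0.5, 0.4, 0.3])
```

    Sweeping a cut-off over such scores traces out the ROC curve itself; the reported sensitivity/specificity pair corresponds to one chosen cut-off on that curve.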

  10. First photometric analysis of magnetic activity and orbital period variations for the semi-detached binary BU Vulpeculae

    NASA Astrophysics Data System (ADS)

    Wang, Jingjing; Zhang, Bin; Yu, Jing; Liu, Liang; Tian, Xiaoman

    2018-06-01

    Four sets of multi-color CCD photometric observations of the close binary BU Vul were carried out over four successive months in 2010. Our observations show obvious variations and asymmetry of the light curves on the timescale of a month, indicating high-level stellar spot activity on the surface of at least one component. The Wilson-Devinney (2010) program was used to determine the photometric solutions, which suggest that BU Vul is a semi-detached binary with the cool, less massive component filling its critical Roche lobe. The solutions also reveal that the spots on the primary and the secondary changed and drifted over 2010 July, August, and September. Based on analysis of the O - C curves of BU Vul, its orbital period shows a cyclic oscillation (T3 = 22.4 yr, A3 = 0.0029 d) superimposed on a secular increase. The continuous increase is possibly a result of mass transfer from the less massive component to the more massive one at a rate of dM/dt = -2.95 × 10⁻⁹ M⊙ yr⁻¹. The cyclic variation may be caused by the presence of a tertiary companion with extremely low luminosity. Combined with the distortions of the light curve on 2009 November 4, we infer that BU Vul has two additional companions in a quadruple system.

  11. Another look at the safety effects of horizontal curvature on rural two-lane highways.

    PubMed

    Saleem, Taha; Persaud, Bhagwant

    2017-09-01

    Crash Modification Factors (CMFs) are used to represent the effects on crashes of changes to highway design elements and are usually obtained from observational studies based on reported crashes. The design element of interest for this paper is horizontal curvature on rural 2-lane highways. The data for this study came from the Washington State database in the Highway Safety Information System (HSIS). Crash prediction models are developed for curve sections on rural 2-lane highways and for the tangent sections upstream and downstream of the curve sections. Different negative binomial models were developed for segments on level grades (<3%), moderate grades (3-6%), and steep grades (>6%) to account for the confounding effects of gradient. The relationships between crashes at different traffic volumes and deflection angles are explored to illustrate how to estimate CMFs for increases in the minimum radius, considering the effect of increased tangent length for sharper curves (an effect overlooked by the Highway Safety Manual CMF) in addition to the effect of gradient. The results of that exploration indicated that even at different design speeds and deflection angles, the CMF estimates for incremental increases in radius lie within the same range, and that the crash reduction rate (CRR) is higher for segments on higher grades than for those on lower grades. Copyright © 2017 Elsevier Ltd. All rights reserved.
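    How a CMF falls out of a fitted crash prediction model can be sketched as a ratio of model predictions. The log-linear form and the coefficient below are hypothetical illustrations, not the paper's fitted model:

```python
import math

# With a log-linked negative binomial model, mu = exp(b0 + b1*ln(AADT) + b2/R),
# the CMF for changing the radius from R1 to R2 is the ratio of predictions,
# and every term except the radius term cancels. b2 here is hypothetical.

def cmf_for_radius_change(r1_m, r2_m, b2=45.0):
    """CMF = mu(R2) / mu(R1) = exp(b2 * (1/R2 - 1/R1))."""
    return math.exp(b2 * (1.0 / r2_m - 1.0 / r1_m))

cmf = cmf_for_radius_change(200.0, 400.0)   # flattening a 200 m curve to 400 m
```

    A CMF below 1 indicates the change is expected to reduce crashes; the paper's point is that the tangent sections and gradient must enter the model too, or this ratio misattributes their effects to the radius.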

  12. The Milky Way's Circular Velocity Curve and Its Constraint on the Galactic Mass with RR Lyrae Stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ablimit, Iminhaji; Zhao, Gang, E-mail: iminhaji@nao.cas.cn, E-mail: gzhao@nao.cas.cn

    We present a sample of 1148 ab-type RR Lyrae (RRLab) variables identified from Catalina Surveys Data Release 1, combined with SDSS DR8 and LAMOST DR4 spectral data. We first use a large sample of 860 Galactic halo RRLab stars and derive the circular velocity distributions for the stellar halo. With the precise distances and carefully determined radial velocities (the center-of-mass radial velocities), and by considering the pulsation of the RRLab stars in our sample, we can obtain a reliable and comparable stellar halo circular velocity curve. We follow two different prescriptions for the velocity anisotropy parameter β in the Jeans equation to study the circular velocity curve and mass profile. Additionally, we test two different solar peculiar motions in our calculation. The best result, obtained with the adopted solar peculiar motion 1 of (U, V, W) = (11.1, 12, 7.2) km s⁻¹, is that the enclosed mass of the Milky Way within 50 kpc is (3.75 ± 1.33) × 10¹¹ M⊙, based on β = 0 and a circular velocity of 180 ± 31.92 km s⁻¹ at 50 kpc. This result is consistent with dynamical model results, and it is also comparable to the results of previous similar works.

  13. The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.

    PubMed

    Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun

    2018-01-01

    Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network using neural energy coding theory. The numerical simulation results showed that the periodicity of the network energy distribution was positively correlated with the number of neurons and the coupling strength, but negatively correlated with the signal transmission delay. Moreover, a relationship was established between the energy distribution features and the synchronous oscillation of the neural network: when the proportion of negative energy in the power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation results of a structural neural network based on the Wang-Zhang biophysical neuron model showed that the two models were essentially consistent.
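    The building block of such a network is the single Hodgkin-Huxley neuron. A minimal single-compartment version with standard squid-axon parameters and forward-Euler integration is sketched below; this is a generic textbook model, not the authors' network or energy-coding code:

```python
import math

# Standard squid-axon Hodgkin-Huxley parameters (uF/cm^2, mS/cm^2, mV).
C_M, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

# Gating-variable rate functions (note: singular at v = -40 and v = -55,
# which a float trajectory will not hit exactly; ignored in this sketch).
def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_end=50.0, dt=0.01):
    """Forward-Euler integration of one neuron driven by a constant current."""
    v, m, h, n = -65.0, 0.0529, 0.5961, 0.3177   # approximate resting state
    vs = []
    for _ in range(int(t_end / dt)):
        i_na = G_NA * m ** 3 * h * (v - E_NA)    # sodium current
        i_k = G_K * n ** 4 * (v - E_K)           # potassium current
        i_l = G_L * (v - E_L)                    # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        vs.append(v)
    return vs

vs = simulate()      # 10 uA/cm^2 drives repetitive spiking with overshoot
```

    The study's network couples many such neurons and tracks the power each membrane consumes or releases over time; the energy-coding analysis then works on those power traces rather than on the voltages alone.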

  14. A Secure ECC-based RFID Mutual Authentication Protocol to Enhance Patient Medication Safety.

    PubMed

    Jin, Chunhua; Xu, Chunxiang; Zhang, Xiaojun; Li, Fagen

    2016-01-01

    Patient medication safety is an important issue in patient medication systems. In order to prevent medication errors, integrating Radio Frequency Identification (RFID) technology into automated patient medication systems is required in hospitals. Based on RFID technology, such systems can provide medical evidence for patients' prescriptions and medicine doses. Owing to the mutual authentication between the medication server and the tag, an RFID authentication scheme is the best choice for automated patient medication systems. In this paper, we present an RFID mutual authentication scheme based on elliptic curve cryptography (ECC) to enhance patient medication safety. Our scheme achieves the required security properties and withstands various attacks found in other schemes. In addition, our scheme has better performance in terms of computational cost and communication overhead. Therefore, the proposed scheme is well suited for patient medication systems.
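    The elliptic curve operations such a protocol builds on can be sketched on a toy curve. The parameters below are a standard textbook group of order 19, far too small for any real security, and this is not the paper's protocol, only the underlying group arithmetic:

```python
# Toy short Weierstrass curve y^2 = x^3 + 2x + 2 over F_17.
P_MOD, A = 17, 2

def ec_add(P, Q):
    """Point addition; None plays the role of the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                  # P + (-P) = infinity
    if P == Q:                                       # tangent (doubling) slope
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                            # chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (5, 1)                                           # generator, order 19
# Diffie-Hellman-style agreement: both sides reach the same shared point.
shared_a = ec_mul(3, ec_mul(7, G))
shared_b = ec_mul(7, ec_mul(3, G))
```

    An ECC-based mutual authentication scheme relies on exactly this commutativity of scalar multiplication, with the tag's and server's secret scalars playing the roles of 3 and 7 on a cryptographically sized curve.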

  15. Predictive aging results for cable materials in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen, K.T.; Clough, R.L.

    1990-11-01

    In this report, we provide a detailed discussion of the methodology for predicting cable degradation versus dose rate, temperature, and exposure time, and its application to data obtained on a number of additional nuclear power plant cable insulation materials (a hypalon, a silicone rubber, and two ethylenetetrafluoroethylenes) and a jacket material (a hypalon). We then show that the predicted low-dose-rate results for our materials are in excellent agreement with long-term (7 to 9 years), low-dose-rate results recently obtained for the same material types actually aged under nuclear power plant conditions. Based on a combination of the modelling and long-term results, we find indications of reasonably similar degradation responses among several different commercial formulations for each of the following "generic" materials: hypalon, ethylenetetrafluoroethylene, silicone rubber, and PVC. If such "generic" behavior can be further substantiated through modelling and long-term results on additional formulations, predictions of cable life for other commercial materials of the same generic types would be greatly facilitated. Finally, to aid utilities in their cable life extension decisions, we use our modelling results to generate lifetime prediction curves for the materials modelled to date. These curves plot expected material lifetime versus dose rate and temperature down to the levels of interest for nuclear power plant aging. 18 refs., 30 figs., 3 tabs.

  16. In situ synthesis of exopolysaccharides by Leuconostoc spp. and Weissella spp. and their rheological impacts in fava bean flour.

    PubMed

    Xu, Yan; Wang, Yaqin; Coda, Rossana; Säde, Elina; Tuomainen, Päivi; Tenkanen, Maija; Katina, Kati

    2017-05-02

    Fava bean flour is regarded as a potential plant-based protein source, but its addition at high concentrations is restricted by its poor texture-improving ability and by anti-nutritional factors (ANF). Exopolysaccharides (EPS) produced by lactic acid bacteria (LAB) are regarded as good texture modifiers. In this study, fava bean flour was fermented with Leuconostoc spp. and Weissella spp. with or without sucrose addition, in order to evaluate their potential for EPS production. The contents of free sugars, organic acids, mannitol, and EPS in all fermented fava bean doughs were measured. Rheological properties of the sucrose-enriched doughs, including viscosity flow curves, hysteresis loops, and dynamic oscillatory sweep curves, were measured after fermentation. The degradation of raffinose family oligosaccharides (RFO), one of the ANF, was also studied by analyzing the RFO profiles of the different doughs. Quantification of EPS revealed the potential of Leuconostoc pseudomesenteroides DSM 20193 for EPS production, and the rheological analysis showed that the polymers produced by this strain have the highest thickening and gelling capability. Furthermore, viscous fava bean doughs containing plant proteins and in situ synthesized EPS may have potential applications in the food industry and fulfill consumers' increasing demands for "clean labels" and plant-originated food materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Modeling the Effects of Inhomogeneous Aerosols on the Hot Jupiter Kepler-7b’s Atmospheric Circulation

    NASA Astrophysics Data System (ADS)

    Roman, Michael; Rauscher, Emily

    2017-11-01

    Motivated by observational evidence of inhomogeneous clouds in exoplanetary atmospheres, we investigate how proposed simple cloud distributions can affect atmospheric circulation and infrared emission. We simulated temperatures and winds for the hot Jupiter Kepler-7b using a three-dimensional atmospheric circulation model that included a simplified aerosol radiative transfer model. We prescribed fixed cloud distributions and scattering properties based on results previously inferred from Kepler-7b optical phase curves, including inhomogeneous aerosols centered along the western terminator and hypothetical cases in which aerosols additionally extended across much of the planet's nightside. In all cases, a strong jet capable of advecting aerosols from the cooler nightside to the dayside was found to persist, but only at the equator. Colder temperatures at mid and polar latitudes might permit aerosols to form on the dayside without the need for advection. By altering the deposition and redistribution of heat, aerosols along the western terminator produced an asymmetric heating that effectively shifts the hottest spot farther east of the substellar point than expected for a uniform distribution. The addition of opaque high clouds on the nightside can partly mitigate this enhanced shift by retaining heat that contributes to warming west of the hotspot. These expected differences in infrared phase curves could place constraints on proposed cloud distributions and their infrared opacities for brighter hot Jupiters.

  18. Designing the Alluvial Riverbeds in Curved Paths

    NASA Astrophysics Data System (ADS)

    Macura, Viliam; Škrinár, Andrej; Štefunková, Zuzana; Muchová, Zlatica; Majorošová, Martina

    2017-10-01

    The paper presents a method of determining the shape of the riverbed in curves of a watercourse, based on the method of Ikeda (1975) developed for a slightly curved path in a sandy riverbed. Regulated rivers have essentially slightly and smoothly curved paths; therefore, this methodology provides an appropriate basis for river restoration. Based on research in the experimental reach of the Holeška Brook and in several alluvial mountain streams, the methodology was adjusted. The method also takes into account other important characteristics of the bottom material: the shape and orientation of the particles, settling velocity, and drag coefficients. Thus, the method is mainly meant for natural sand-gravel material, which is heterogeneous and whose particle shape is very different from spherical. The calculation of the river channel in the curved path provides the basis for the design of an optimal habitat, but also for the design of the foundations of bankside armouring of the channel. The input data are adapted to the conditions of design practice.

  19. Conducting Meta-Analyses Based on p Values

    PubMed Central

    van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.

    2016-01-01

    Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466
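    The right-skew intuition behind p-curve can be sketched with a Fisher-style combination of pp-values. This is a minimal illustration with made-up p-values, not the full p-curve or p-uniform estimators discussed in the article:

```python
import math

def pcurve_right_skew(pvals, alpha=0.05):
    """Fisher-style right-skew statistic: under H0 (no true effect),
    significant p-values are uniform on (0, alpha), so pp = p/alpha is
    uniform on (0, 1) and -2 * sum(ln pp) is chi-square with 2k d.f."""
    pps = [p / alpha for p in pvals if p < alpha]
    stat = -2.0 * sum(math.log(pp) for pp in pps)
    return stat, 2 * len(pps)

# A right-skewed set (evidential value) vs. a flat, null-like set:
stat_skewed, df = pcurve_right_skew([0.001, 0.002, 0.010, 0.030])
stat_flat, _ = pcurve_right_skew([0.012, 0.025, 0.037, 0.049])
```

    A markedly larger statistic for the skewed set signals evidential value; the article's point is that effect-size estimates built on this conditional-uniformity logic can break down under heterogeneity and p-hacking.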

  20. High Temperature Texturing of Engineered Materials in a Magnetic Field

    DTIC Science & Technology

    2003-03-01

    ...by 43.7% after magnetic annealing in a 19 T field. The kink in the demagnetization curve disappeared and, in addition, a much better squareness of ... the demagnetization curves was observed after the magnetic annealing (Figure 10). The improvement in the hard magnetic properties after magnetic ... A number of materials systems have been tested in a variety of magnetic fields (8-20 Tesla) and temperatures (500 to 1250°C). Four materials ...
