On cyclic yield strength in definition of limits for characterisation of fatigue and creep behaviour
NASA Astrophysics Data System (ADS)
Gorash, Yevgen; MacKenzie, Donald
2017-06-01
This study proposes cyclic yield strength as a potential characteristic of safe design for structures operating under fatigue and creep conditions. Cyclic yield strength is defined on a cyclic stress-strain curve, while monotonic yield strength is defined on a monotonic curve. Both strengths are identified by a two-step procedure in which the experimental stress-strain curves are fitted with the Ramberg-Osgood and Chaboche material models. A typical S-N curve in the stress-life approach for fatigue analysis has a distinctive minimum stress lower bound, the fatigue endurance limit. Comparison of cyclic yield strength and fatigue limit reveals that they are approximately equal. Thus, safe fatigue design is guaranteed in the purely elastic domain defined by cyclic yielding. A typical long-term strength curve in the time-to-failure approach for creep analysis has two inflections corresponding to the cyclic and monotonic yield strengths. These inflections separate three domains on the long-term strength curve, characterised by different creep fracture modes and creep deformation mechanisms. Therefore, safe creep design is guaranteed in the linear creep domain with brittle failure mode defined by cyclic yielding. These assumptions are confirmed using three structural steels for normal and high-temperature applications. The advantage of using cyclic yield strength to characterise fatigue and creep strength is its relatively quick experimental identification: the total duration of the cyclic tests needed to identify a cyclic stress-strain curve is much less than the typical durations of fatigue and creep rupture tests at stress levels around the cyclic yield strength.
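In the Ramberg-Osgood form used above, the cyclic yield strength at a plastic-strain offset has a closed form. A minimal sketch; the modulus and hardening parameters below are illustrative placeholders, not the values fitted in the study:

```python
def ramberg_osgood_strain(stress, E, K, n):
    """Total strain = elastic + plastic (Ramberg-Osgood):
    eps = sigma/E + (sigma/K)**(1/n)."""
    return stress / E + (stress / K) ** (1.0 / n)

def offset_yield_strength(K, n, offset=0.002):
    """Stress at which plastic strain equals the offset.
    Setting (sigma/K)**(1/n) = offset gives sigma = K * offset**n."""
    return K * offset ** n
```

Applied to a cyclic curve, K and n are the cyclic hardening coefficient and exponent; the same formula on the monotonic curve gives the monotonic yield strength.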
Jaime-Pérez, José Carlos; Jiménez-Castillo, Raúl Alberto; Vázquez-Hernández, Karina Elizabeth; Salazar-Riojas, Rosario; Méndez-Ramírez, Nereida; Gómez-Almaguer, David
2017-10-01
Advances in automated cell separators have improved the efficiency of plateletpheresis and the possibility of obtaining double products (DP). We assessed cell processor accuracy of predicted platelet (PLT) yields with the goal of a better prediction of DP collections. This retrospective proof-of-concept study included 302 plateletpheresis procedures performed on a Trima Accel v6.0 at the apheresis unit of a hematology department. Donor variables, software-predicted yield and actual PLT yield were statistically evaluated. Software prediction was optimized by linear regression analysis and its optimal cut-off to obtain a DP assessed by receiver operating characteristic (ROC) curve modeling. Three hundred and two plateletpheresis procedures were performed; on 271 (89.7%) occasions donors were men and on 31 (10.3%) women. Pre-donation PLT count had the best direct correlation with actual PLT yield (r = 0.486, P < .001). Mean software machine-derived values differed significantly from actual PLT yields, 4.72 × 10¹¹ vs. 6.12 × 10¹¹, respectively (P < .001). The following equation was developed to adjust these values: actual PLT yield = 0.221 + (1.254 × theoretical platelet yield). The ROC curve model showed an optimal apheresis device software prediction cut-off of 4.65 × 10¹¹ to obtain a DP, with a sensitivity of 82.2%, specificity of 93.3%, and an area under the curve (AUC) of 0.909. The Trima Accel v6.0 software consistently underestimated PLT yields. A simple correction derived from linear regression analysis accurately corrected this underestimation, and ROC analysis identified a precise cut-off to reliably predict a DP. © 2016 Wiley Periodicals, Inc.
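The published correction equation and the cut-off evaluation can be reproduced mechanically. A sketch applying the regression correction and computing sensitivity/specificity at a cutoff; the donor scores and outcomes below are invented toy data, not the study's records:

```python
def corrected_yield(predicted):
    """Regression correction of the device's predicted platelet
    yield reported in the abstract (units of 10^11 platelets)."""
    return 0.221 + 1.254 * predicted

def sensitivity_specificity(scores, labels, cutoff):
    """Classify score >= cutoff as 'predicts a double product' and
    compare against the actual boolean outcome labels."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and not y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the cutoff over all observed scores and plotting sensitivity against 1 − specificity traces the ROC curve from which the 4.65 × 10¹¹ threshold was chosen.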
A Microlensing Analysis of the Central Engine in the Lensed Quasar WFI J2033-4723
Chile. We combined these new data with published measurements from Vuissoz et al. (2008) to create a 13-season set of optical light curves. Employing the Bayesian Monte Carlo microlensing analysis technique of Kochanek (2004), we analyzed these light curves to yield the first-ever measurement of the
Reliable yields of public water-supply wells in the fractured-rock aquifers of central Maryland, USA
NASA Astrophysics Data System (ADS)
Hammond, Patrick A.
2018-02-01
Most studies of fractured-rock aquifers are about analytical models used for evaluating aquifer tests or numerical methods for describing groundwater flow, but there have been few investigations on how to estimate the reliable long-term drought yields of individual hard-rock wells. During the drought period of 1998 to 2002, many municipal water suppliers in the Piedmont/Blue Ridge areas of central Maryland (USA) had to institute water restrictions due to declining well yields. Previous estimates of the yields of those wells were commonly based on extrapolating drawdowns, measured during short-term single-well hydraulic pumping tests, to the first primary water-bearing fracture in a well. The extrapolations were often made from pseudo-equilibrium phases, frequently resulting in substantially over-estimated well yields. The methods developed in the present study to predict yields consist of extrapolating drawdown data from infinite acting radial flow periods or by fitting type curves of other conceptual models to the data, using diagnostic plots, inverse analysis and derivative analysis. Available drawdowns were determined by the positions of transition zones in crystalline rocks or thin-bedded consolidated sandstone/limestone layers (reservoir rocks). Aquifer dewatering effects were detected by type-curve matching of step-test data or by breaks in the drawdown curves constructed from hydraulic tests. Operational data were then used to confirm the predicted yields and compared to regional groundwater levels to determine seasonal variations in well yields. Such well yield estimates are needed by hydrogeologists and water engineers for the engineering design of water systems, but should be verified by the collection of long-term monitoring data.
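Extrapolating drawdown from an infinite-acting radial flow period is, on a semilog plot, a straight-line fit (the Cooper-Jacob approximation). A sketch of that step; the times and drawdowns below are synthetic, not data from the Maryland wells:

```python
import math

def fit_semilog(times, drawdowns):
    """Least-squares fit of s = a + b*log10(t) over the
    infinite-acting radial flow period (Cooper-Jacob straight line)."""
    x = [math.log10(t) for t in times]
    n = len(x)
    xb = sum(x) / n
    yb = sum(drawdowns) / n
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, drawdowns)) \
        / sum((xi - xb) ** 2 for xi in x)
    a = yb - b * xb
    return a, b

def extrapolate_drawdown(a, b, t):
    """Project the straight line to a long pumping time t, for
    comparison against the available drawdown."""
    return a + b * math.log10(t)
```

Comparing the projected drawdown at a drought-duration pumping time against the depth of the transition zone (the available drawdown) gives a reliable-yield estimate in the spirit of the methods described.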
Decision curve analysis: a novel method for evaluating prediction models.
Vickers, Andrew J; Elkin, Elena B
2006-01-01
Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. The authors sought a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. The authors describe decision curve analysis, a simple, novel method of evaluating predictive models. They start by assuming that the threshold probability of a disease or event at which a patient would opt for treatment is informative of how the patient weighs the relative harms of a false-positive and a false-negative prediction. This theoretical relationship is then used to derive the net benefit of the model across different threshold probabilities. Plotting net benefit against threshold probability yields the "decision curve." The authors apply the method to models for the prediction of seminal vesicle invasion in prostate cancer patients. Decision curve analysis identified the range of threshold probabilities in which a model was of value, the magnitude of benefit, and which of several models was optimal. Decision curve analysis is a suitable method for evaluating alternative diagnostic and prognostic strategies that has advantages over other commonly used measures and techniques.
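The net benefit underlying a decision curve has a simple closed form: true positives minus false positives, the latter weighted by the odds of the threshold probability. A minimal sketch on invented predictions and outcomes:

```python
def net_benefit(probs, outcomes, pt):
    """Net benefit of treating patients whose predicted probability
    meets threshold pt: TP/n - (FP/n) * pt/(1 - pt)."""
    n = len(probs)
    tp = sum(1 for p, y in zip(probs, outcomes) if p >= pt and y)
    fp = sum(1 for p, y in zip(probs, outcomes) if p >= pt and not y)
    return tp / n - (fp / n) * (pt / (1.0 - pt))

def decision_curve(probs, outcomes, thresholds):
    """Net benefit evaluated across thresholds: the 'decision curve'."""
    return [(pt, net_benefit(probs, outcomes, pt)) for pt in thresholds]
```

Plotting this against the "treat all" strategy (every probability set to 1) and "treat none" (net benefit 0) identifies the range of thresholds over which the model adds value.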
High-resolution mapping of yield curve shape and evolution for high porosity sandstones
NASA Astrophysics Data System (ADS)
Bedford, J. D.; Faulkner, D.; Wheeler, J.; Leclere, H.
2017-12-01
The onset of permanent inelastic deformation for porous rock is typically defined by a yield curve plotted in P-Q space, where P is the effective mean stress and Q is the differential stress. Sandstones usually have broadly elliptical yield curves, with the low-pressure side of the ellipse associated with localized brittle faulting (dilation) and the high-pressure side with distributed ductile deformation (compaction). However, recent work has shown that these curves may not be perfectly elliptical and that their shape evolves significantly with continued deformation. We therefore use a novel stress-probing methodology to map in high resolution the yield curve shape for Boise and Idaho Gray sandstones (36-38% porosity) and also investigate curve evolution with increasing deformation. The data reveal yield curves with a much flatter geometry than previously recorded for porous sandstone, and show that the compactive side of the curve partly comprises a near-vertical limb. The yield curve evolution is found to be strongly dependent on the nature of inelastic strain. Samples that were compacted under a deviatoric load, with a component of inelastic shear strain, were found to have yield curves with peaks approximately 50% higher than those of similar-porosity samples that were hydrostatically compacted (i.e., purely volumetric strain). The difference in yield curve evolution along the different loading paths is attributed to mechanical anisotropy that develops during deviatoric loading through the closure of preferentially oriented fractures. Increased shear strain also leads to the formation of a plateau at the peak of the yield curve as samples deform along the deviatoric loading path. These results have important implications for understanding how the strength of porous rock evolves along different stress paths, including during fluid extraction from hydrocarbon reservoirs where the stress state is rarely isotropic.
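The idealized elliptical cap that this work refines is often written in a normalized form, with P* the grain-crushing pressure at Q = 0. A sketch of that reference ellipse; the parameter values are illustrative, and the paper's point is precisely that measured caps deviate from this shape:

```python
import math

def cap_q(p, p_star, delta=0.5):
    """Differential stress Q on an idealized elliptical yield cap:
    (P/P* - 1/2)^2 / (1/2)^2 + (Q/P*)^2 / delta^2 = 1,
    so the cap spans 0 <= P <= P* with its peak Q = delta*P* at P*/2."""
    x = (p / p_star - 0.5) ** 2 / 0.25
    if x > 1.0:
        raise ValueError("P lies outside the cap")
    return delta * p_star * math.sqrt(1.0 - x)
```

A near-vertical limb on the compactive side, as reported here, means Q drops toward zero over a narrow range of P near P*, which this ellipse cannot reproduce.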
Conducting Meta-Analyses Based on p Values
van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.
2016-01-01
Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466
Pietzka, Ariane T.; Stöger, Anna; Huhulescu, Steliana; Allerberger, Franz; Ruppitsch, Werner
2011-01-01
The ability to accurately track Listeria monocytogenes strains involved in outbreaks is essential for control and prevention of listeriosis. Because current typing techniques are time-consuming, cost-intensive, technically demanding, and difficult to standardize, we developed a rapid and cost-effective method for typing of L. monocytogenes. In all, 172 clinical L. monocytogenes isolates and 20 isolates from culture collections were typed by high-resolution melting (HRM) curve analysis of a specific locus of the internalin B gene (inlB). All obtained HRM curve profiles were verified by sequence analysis. The 192 tested L. monocytogenes isolates yielded 15 specific HRM curve profiles. Sequence analysis revealed that these 15 HRM curve profiles correspond to 18 distinct inlB sequence types. The HRM curve profiles obtained correlated with the five phylogenetic groups I.1, I.2, II.1, II.2, and III. Thus, HRM curve analysis constitutes an inexpensive assay and represents an improvement in typing relative to classical serotyping or multiplex PCR typing protocols. This method provides a rapid and powerful screening tool for simultaneous preliminary typing of up to 384 samples in approximately 2 hours. PMID:21227395
Greinert, R; Detzler, E; Volkmer, B; Harder, D
1995-11-01
Human lymphocytes irradiated with graded doses of up to 5 Gy of 150 kV X rays were fused with mitotic CHO cells after delay times ranging from 0 to 14 h after irradiation. The yields of dicentrics seen under PCC conditions, using C-banding for centromere detection, and of excess acentric fragments observed in the PCC experiment were determined by image analysis. At 4 Gy the time course of the yield of dicentrics shows an early plateau for delay times up to 2 h, then an S-shaped rise and a final plateau which is reached after a delay time of about 8 to 10 h. Whereas the dose-yield curve measured at zero delay time is strictly linear, the shape of the curve obtained for 8 h delay time is linear-quadratic. The linear yield component, αD, is formed entirely in the fast process manifested in the early plateau, while the component βD² develops slowly in the subsequent hours. Analysis of the kinetics of the rise of the S-shaped curve for yield as a function of time leads to the postulate of an "intermediate product" of pairwise DNA lesion interaction, still fragile when subjected to the stress of PCC, but gradually processed into a stable dicentric chromosome. It is concluded that the observed difference in the kinetics of the α and β components explains a number of earlier results, especially the disappearance of the β component at high LET, and opens possibilities for chemical and physical modification of the β component during the extended formation process after irradiation observed here.
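The linear-quadratic dose-yield relation Y = αD + βD² is linear in α and β, so it can be fitted by ordinary least squares. A sketch solving the 2×2 normal equations directly; the doses and yields below are synthetic, not the measured aberration data:

```python
def fit_linear_quadratic(doses, yields):
    """Least-squares fit of Y = alpha*D + beta*D**2 (no intercept),
    solving the normal equations for the two coefficients."""
    s2 = sum(d ** 2 for d in doses)
    s3 = sum(d ** 3 for d in doses)
    s4 = sum(d ** 4 for d in doses)
    t1 = sum(d * y for d, y in zip(doses, yields))
    t2 = sum(d * d * y for d, y in zip(doses, yields))
    det = s2 * s4 - s3 * s3
    alpha = (t1 * s4 - t2 * s3) / det
    beta = (s2 * t2 - s3 * t1) / det
    return alpha, beta
```

Fitting the zero-delay data would force β ≈ 0 (a strictly linear curve), while the 8 h delay data yield a nonzero β, matching the slow formation of the quadratic component.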
Introduction to basic solar cell measurements
NASA Technical Reports Server (NTRS)
Brandhorst, H. W., Jr.
1976-01-01
The basic approaches to solar cell performance and diagnostic measurements are described. The light sources, equipment for I-V curve measurement, and the test conditions and procedures for performance measurement are detailed. Solar cell diagnostic tools discussed include analysis of I-V curves, series resistance and reverse saturation current determination, spectral response/quantum yield measurement, and diffusion length/lifetime determination.
NASA Astrophysics Data System (ADS)
Bedford, John D.; Faulkner, Daniel R.; Leclère, Henri; Wheeler, John
2018-02-01
Porous rock deformation has important implications for fluid flow in a range of crustal settings as compaction can increase fluid pressure and alter permeability. The onset of inelastic strain for porous materials is typically defined by a yield curve plotted in differential stress (Q) versus effective mean stress (P) space. Empirical studies have shown that these curves are broadly elliptical in shape. Here conventional triaxial experiments are first performed to document (a) the yield curve of porous bassanite (porosity ≈ 27-28%), a material formed from the dehydration of gypsum, and (b) the postyield behavior, assuming that P and Q track along the yield surface as inelastic deformation accumulates. The data reveal that after initial yield, the yield surface cannot be perfectly elliptical and must evolve significantly as inelastic strain is accumulated. To investigate this further, a novel stress-probing methodology is developed to map precisely the yield curve shape and subsequent evolution for a single sample. These measurements confirm that the high-pressure side of the curve is partly composed of a near-vertical limb. Yield curve evolution is shown to be dependent on the nature of the loading path. Bassanite compacted under differential stress develops a heterogeneous microstructure and has a yield curve with a peak that is almost double that of an equal porosity sample that has been compacted hydrostatically. The dramatic effect of different loading histories on the strength of porous bassanite highlights the importance of understanding the associated microstructural controls on the nature of inelastic deformation in porous rock.
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1991-01-01
The final report on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the reporting period are included. A methodology was developed to determine design and operation parameters that minimize error when deconvolution is included in data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics determine a curve in this space. The SNR and parameter values at the projection from the curve to the surface corresponding to the smallest error are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum in the error surface.
On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.
López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J
2015-04-01
Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
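As the abstract notes, daily milk yield follows from the first derivative of the fitted cumulative curve. A sketch with the Gompertz function, one of the six fitted forms; the parameter values below are invented for illustration, not the fitted estimates:

```python
import math

def gompertz(t, a, b, c):
    """Cumulative milk yield at day t: a * exp(-b * exp(-c*t)),
    with asymptote a (total production as t grows large)."""
    return a * math.exp(-b * math.exp(-c * t))

def gompertz_daily_yield(t, a, b, c):
    """Daily yield = dY/dt = a*b*c * exp(-c*t - b*exp(-c*t));
    it peaks at t = ln(b)/c, the inflection of the cumulative curve."""
    return a * b * c * math.exp(-c * t - b * math.exp(-c * t))
```

The Richards and Morgan functions add a shape parameter that lets this inflection point move, which is why they outperformed the fixed-inflection forms on the average curves.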
Experimental determination of the yield stress curve of the scotch pine wood materials
NASA Astrophysics Data System (ADS)
Günay, Ezgi; Aygün, Cevdet; Kaya, Şükrü Tayfun
2013-12-01
The yield stress curve is determined for pine wood specimens by conducting a series of tests. In this work, pinewood is modeled as a composite material with transversely isotropic fibers. Annual rings (wood grain) of the wood specimens are taken as the major fiber directions, with which the strain gauge directions are aligned. For this purpose, three types of tests are arranged: tensile, compression and torsion loading tests. All of the tests are categorized with respect to fiber orientations and their corresponding loading conditions, and each test within these categories is conducted separately. Tensile and compression tests are conducted in accordance with standards of the Turkish Standards Institution (TSE), whereas torsion tests are conducted in accordance with Standards Australia. Specimens are machined from Scotch pine, which is widely used in boat building and in other structural engineering applications; this species behaves more flexibly than the others. Strain gauges are installed on the specimen surfaces in such a way that measurements are performed along directions either parallel or perpendicular to the fiber directions. During the test and analysis phase, the orientation of strain gauge directions with respect to fiber directions is taken into account. Diagrams of normal stress vs. normal strain or shear stress vs. shear strain are plotted for each test. In each plot, the yield stress is determined by selecting the point on the diagram whose tangent has a slope 5% less than that of the elastic portion of the diagram. The geometric locus of these selected points constitutes a single yield stress curve on the σ1-σ2 principal plane. The resulting yield stress curve is plotted as an approximate ellipse which resembles the Tsai-Hill failure criterion. The results attained in this work compare well with those available in the literature.
Fission yield calculation using toy model based on Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Jubaidah, Kurniadi, Rizal
2015-09-01
The toy model is a new approximation for predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of a real nucleus. In this research, the toy nucleons are influenced only by a central force, and energy entanglement is neglected. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments constitute the fission yield. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other, described by five parameters: the scission point of the two curves (Rc), the mean of the left curve (μL), the mean of the right curve (μR), the deviation of the left curve (σL) and the deviation of the right curve (σR). The fission yield distribution is analyzed using Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution, whereas variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation of the fission yield calculation using the toy model successfully reproduces the tendency of experimental results, where the average light fission yield is in the range of 90
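The two-Gaussian picture can be sampled directly. A minimal Monte Carlo sketch; the mean, width, and total mass number below are illustrative placeholders, not the paper's parameters:

```python
import random

def sample_fission_yields(n_events, mu_light, sigma_light,
                          a_total=236, seed=42):
    """Draw the light-fragment mass from the left Gaussian; the heavy
    fragment follows from mass conservation, so a histogram of all
    fragment masses shows the two intersecting Gaussian peaks."""
    rng = random.Random(seed)
    events = []
    for _ in range(n_events):
        light = rng.gauss(mu_light, sigma_light)
        events.append((light, a_total - light))
    return events
```

Varying sigma_light here widens or narrows the asymmetric yield peaks, which is the sensitivity the abstract reports for σ and μ.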
Small-scale seismic inversion using surface waves extracted from noise cross correlation.
Gouédard, Pierre; Roux, Philippe; Campillo, Michel
2008-03-01
Green's functions can be retrieved between receivers from the correlation of ambient seismic noise or with an appropriate set of randomly distributed sources. This principle is demonstrated in small-scale geophysics using noise sources generated by human steps during a 10-min walk in the alignment of a 14-m-long accelerometer line array. The time-domain correlation of the records yields two surface wave modes extracted from the Green's function between each pair of accelerometers. A frequency-wave-number Fourier analysis yields each mode contribution and their dispersion curve. These dispersion curves are then inverted to provide the one-dimensional shear velocity of the near surface.
A study of Lusitano mare lactation curve with Wood's model.
Santos, A S; Silvestre, A M
2008-02-01
Milk yield and composition data from 7 nursing Lusitano mares (450 to 580 kg of body weight and 2 to 9 parities) were used in this study (5 measurements per mare for milk yield and 8 measurements for composition). Wood's lactation model was used to describe milk fat, protein, and lactose lactation curves. Mean values for the concentration of major milk components across the lactation period (180 d) were 5.9 g/kg of fat, 18.4 g/kg of protein, and 60.8 g/kg of lactose. Milk fat and protein (g/kg) decreased and lactose (g/kg) increased during the 180 d of lactation. Curves for milk protein and lactose yields (g) were similar in shape to the milk yield curve; protein yield peaked at 307 g on d 10 and lactose peaked at 816 g on d 45. The fat (g) curve was different in shape compared with milk, protein, and lactose yields. Total production of the major milk constituents throughout the 180 d of lactation was estimated to be 12.0, 36.1, and 124 kg for fat, protein, and lactose, respectively. The algebraic model fitted by a nonlinear regression procedure to the data resulted in reasonable prediction curves for milk yield (adjusted R² of 0.89) and the major constituents (adjusted R² ranging from 0.89 to 0.95). The lactation curves of major milk constituents in Lusitano mares were similar, both in shape and values, to those found in other horse breeds. The established curves facilitate the estimation of milk yield and variation of milk constituents at different stages of lactation for both nursing and dairy mares, providing important information relative to weaning time and foal supplementation.
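Wood's incomplete gamma function has a closed-form peak time and peak yield, which is how day-of-peak figures like those above are read off a fitted curve. A sketch; the parameter values are illustrative, not the fitted estimates:

```python
import math

def wood_yield(t, a, b, c):
    """Wood's lactation curve: y(t) = a * t**b * exp(-c*t)."""
    return a * t ** b * math.exp(-c * t)

def wood_peak(a, b, c):
    """Setting dy/dt = 0 gives t_peak = b/c; the peak yield is
    y(t_peak) = a * (b/c)**b * exp(-b)."""
    t_peak = b / c
    return t_peak, wood_yield(t_peak, a, b, c)
```

With b = 0.25 and c = 0.025, for example, the peak falls on day 10, the day reported here for protein yield.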
HS 0705+6700: a New Eclipsing sdB Binary
NASA Astrophysics Data System (ADS)
Drechsel, H.; Heber, U.; Napiwotzki, R.; Ostensen, R.; Solheim, J.-E.; Deetjen, J.; Schuh, S.
HS 0705+6700 is a newly discovered eclipsing sdB binary system consisting of an sdB primary and a cool secondary main sequence star. CCD photometry obtained in October and November 2000 with the 2.5m Nordic Optical Telescope (NOT; La Palma) in the B passband and with the 2.2m Calar Alto telescope (CAFOS, R filter) yielded eclipse light curves with complete orbital phase coverage at high time resolution. A periodogram analysis of 12 primary minimum times distributed over the time span from October 2000 to March 2001 allowed us to derive the following exact period and linear ephemeris: prim. min. = HJD 2451822.759782(22) + 0.09564665(39) · E. A total of 15 spectra taken with the 3.5m Calar Alto telescope (TWIN spectrograph) on March 11-12, 2001, were used to establish the radial velocity curve of the primary star (K1 = 85.8 km/s), and to determine its basic atmospheric parameters (Teff = 29300 K, log g = 5.47). The B and R light curves were solved using our Wilson-Devinney based light curve analysis code MORO (Drechsel et al. 1995, A&A 294, 723). The best fit solution yielded system parameters consistent with the spectroscopic results. Detailed results will be published elsewhere (Drechsel et al. 2001, A&A, in preparation).
Zhao, Bingwei; Wang, Xin; Yang, Xiaoyi
2015-12-01
Co-pyrolysis characteristics of Isochrysis (high lipid) and Chlorella (high protein) were investigated qualitatively and quantitatively based on DTG curves, biocrude yield and composition for individual pyrolysis and co-pyrolysis. DTG curves in co-pyrolysis were compared in detail with those in individual pyrolysis. An interaction was detected at 475-500°C in co-pyrolysis based on biocrude yields, and kinetic analysis indicates that the co-pyrolysis reaction mechanism is three-dimensional diffusion, in contrast to random nucleation followed by growth in individual pyrolysis. There is no obvious difference in the maximum biocrude yields between individual pyrolysis and co-pyrolysis, but carboxylic acids (IC21) decreased and N-heterocyclic compounds (IC12) increased in co-pyrolysis. Simulation results for biocrude yield from the Components Biofuel Model and Kinetics Biofuel Model indicate that the processes of co-pyrolysis comply, by and large, with those of individual pyrolysis in the solid phase. Variation of percentage content between co-pyrolysis and individual pyrolysis biocrude indicated an interaction in the gas phase. Copyright © 2015. Published by Elsevier Ltd.
Lovell, Charles R; Decker, Peter V; Bagwell, Christopher E; Thompson, Shelly; Matsui, George Y
2008-05-01
Methods to assess the diversity of the diazotroph assemblage in the rhizosphere of the salt marsh cordgrass, Spartina alterniflora were examined. The effectiveness of nifH PCR-denaturing gradient gel electrophoresis (DGGE) was compared to that of nifH clone library analysis. Seventeen DGGE gel bands were sequenced and yielded 58 nonidentical nifH sequences from a total of 67 sequences determined. A clone library constructed using the GC-clamp nifH primers that were employed in the PCR-DGGE (designated the GC-Library) yielded 83 nonidentical sequences from a total of 257 nifH sequences. A second library constructed using an alternate set of nifH primers (N-Library) yielded 83 nonidentical sequences from a total of 138 nifH sequences. Rarefaction curves for the libraries did not reach saturation, although the GC-Library curve was substantially dampened and appeared to be closer to saturation than the N-Library curve. Phylogenetic analyses showed that DGGE gel band sequencing recovered nifH sequences that were frequently sampled in the GC-Library, as well as sequences that were infrequently sampled, and provided a species composition assessment that was robust, efficient, and relatively inexpensive to obtain. Further, the DGGE method permits a large number of samples to be examined for differences in banding patterns, after which bands of interest can be sampled for sequence determination.
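Rarefaction curves like those compared for the two clone libraries can be computed analytically from the per-type clone counts using the hypergeometric formula, rather than by repeated random subsampling. A sketch; the counts below are invented, not the library data:

```python
from math import comb

def rarefaction_richness(counts, n):
    """Expected number of distinct sequence types observed in a
    random subsample of n clones drawn without replacement:
    E[S_n] = sum_i [1 - C(N - N_i, n) / C(N, n)], N = total clones."""
    N = sum(counts)
    return sum(1.0 - comb(N - ni, n) / comb(N, n) for ni in counts)
```

Evaluating this for n = 1 … N traces the rarefaction curve; a curve still climbing steeply at n = N, as for the N-Library here, indicates under-sampled diversity.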
Design, Fabrication and Test of Composite Curved Frames for Helicopter Fuselage Structure
NASA Technical Reports Server (NTRS)
Lowry, D. W.; Krebs, N. E.; Dobyns, A. L.
1984-01-01
Aspects of curved beam effects and their importance in designing composite frame structures are discussed. The curved beam effect induces radial flange loadings, which in turn cause flange curling. This curling increases the axial flange stresses and induces transverse bending. These effects are more important in composite structures due to their general inability to redistribute stresses by general yielding, as metal structures do. A detailed finite element analysis was conducted and used in the design of composite curved frame specimens. Five specimens were statically tested, and measured strains were compared with predictions. The curved frame effects must be accurately accounted for to avoid premature fracture; finite element methods can accurately predict most of the stresses, and no elastic relief from curved beam effects occurred in the composite frames tested. Finite element studies are presented for comparative curved beam effects on composite and metal frames.
Enrollment Projection within a Decision-Making Framework.
ERIC Educational Resources Information Center
Armstrong, David F.; Nunley, Charlene Wenckowski
1981-01-01
Two methods used to predict enrollment at Montgomery College in Maryland are compared and evaluated, and the administrative context in which they are used is considered. The two methods involve time series analysis (curve fitting) and indicator techniques (yield from components). (MSE)
Pluto's Atmosphere, Then and Now
NASA Astrophysics Data System (ADS)
Elliot, J. L.; Buie, M.; Person, M. J.; Qu, S.
2002-09-01
The KAO light curve for the 1988 stellar occultation by Pluto exhibits a sharp drop just below half light, but above this level the light curve is consistent with that of an isothermal atmosphere (T = 105 +/- 8 K, with N2 as its major constituent). The sharp drop in the light curve has been interpreted as being caused by: (i) a haze layer, (ii) a large thermal gradient, or (iii) some combination of these two. Modeling Pluto's atmosphere with a haze layer yields a normal optical depth >= 0.145 (Elliot & Young 1992, AJ 103, 991). On the other hand, if Pluto's atmosphere is assumed to be clear, the occultation light curve can be inverted with a new method that avoids the large-body approximations. Inversion of the KAO light curve with this method yields an upper isothermal part, followed by a sharp thermal gradient that reaches a maximum magnitude of -3.9 +/- 0.6 K km-1 at the end of the inversion (r = 1206 +/- 10 km). Even though we do not yet understand the cause of the sharp drop, the KAO light curve can be used as a benchmark for examining subsequent Pluto occultation light curves to determine whether Pluto's atmospheric structure has changed since 1988. As an example, the Mamiña light curve for the 2002 July 20 Pluto occultation of P126A was compared with the KAO light curve by Buie et al. (this conference), who concluded that Pluto's atmospheric structure has changed significantly since 1988. Further analysis and additional light curves from this and subsequent occultations (e.g. 2002 August 21) will allow us to elucidate the nature of these changes. This work was supported, in part, by grants from NASA (NAG5-9008 and NAG5-10444) and NSF (AST-0073447).
An Improvement of the Anisotropy and Formability Predictions of Aluminum Alloy Sheets
NASA Astrophysics Data System (ADS)
Banabic, D.; Comsa, D. S.; Jurco, P.; Wagner, S.; Vos, M.
2004-06-01
The paper presents a yield criterion for orthotropic sheet metals and its implementation in a theoretical model to calculate Forming Limit Curves. The proposed yield criterion has been validated for two aluminum alloys, AA3103-0 and AA5182-0. The biaxial tensile test of cross-shaped specimens has been used for the determination of the experimental yield locus. The new yield criterion has been implemented in the Marciniak-Kuczynski model for the calculation of limit strains. The calculated Forming Limit Curves have been compared with the experimental ones, determined by frictionless tests: the bulge test, the plane-strain test, and the uniaxial tensile test. The predicted Forming Limit Curves using the new yield criterion are in good agreement with the experimental ones.
Probabilistic analysis of structures involving random stress-strain behavior
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at point of ultimate load, and (5) engineering strain at point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le, K. C.; Tran, T. M.; Langer, J. S.
The statistical-thermodynamic dislocation theory developed in previous papers is used here in an analysis of high-temperature deformation of aluminum and steel. Using physics-based parameters that we expect theoretically to be independent of strain rate and temperature, we are able to fit experimental stress-strain curves for three different strain rates and three different temperatures for each of these two materials. Here, our theoretical curves include yielding transitions at zero strain in agreement with experiment. We find that thermal softening effects are important even at the lowest temperatures and smallest strain rates.
Intraday X-Ray Variability of QSOs/AGN Using the Chandra Archives
NASA Astrophysics Data System (ADS)
Tartamella, C.; Busche, J.
2005-05-01
X-ray variability is a common characteristic of Active Galactic Nuclei (AGN), and it can be used to probe the nuclear region at short time scales. Quantitative analysis of this variability has been difficult due to low signal-to-noise ratios and short time baselines, but serendipitous Chandra data acquired within the last six years have opened the door to such analysis. Cross-correlation of the Chandra archives with QSO/AGN catalogs on NASA's HEASARC website (e.g. Veron, Sloan) yields a sample of 50+ objects that satisfy the following criteria: absolute magnitude M ≤ -22.5, proper time baselines greater than 2 hours, and count rates leading to 10% error bars for 8+ flux points on the light curve. The sample includes a range of redshifts, magnitudes, and types (e.g. radio loud, radio quiet), and hence may yield empirical clues about luminosity or evolutionary trends. As a beginning of such analysis, we present 11 light curves for 9 objects for which the exposure time was greater than 10 hours. The variability was analyzed using three different statistical methods. The Kolmogorov-Smirnov (KS) test proved to be impractical because of the unavoidably small number of data points and the simplistic nature of the test. A χ2 test indicated in most cases that there were significant departures from constant brightness (as expected). Autocorrelation plots were also generated for each light curve. With more work and a larger sample size, these plots can be used to identify trends in the light curves, such as whether the variability is stochastic or periodic in nature. This test was useful even with the small number of data points available. In future work, more sophisticated analyses based on Fourier series, power density spectra, or wavelets are likely to yield more meaningful and useful results.
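The χ² test for departures from constant brightness can be sketched in a few lines. The light curve below is invented for illustration; the 10% error bars mirror the sample-selection criterion in the abstract:

```python
def chi2_constant(flux, err):
    """Chi-squared of a light curve against the best-fit constant
    model (inverse-variance weighted mean); returns (chi2, dof)."""
    w = [1.0 / e**2 for e in err]
    mean = sum(wi * f for wi, f in zip(w, flux)) / sum(w)
    chi2 = sum(((f - mean) / e)**2 for f, e in zip(flux, err))
    return chi2, len(flux) - 1

# Hypothetical 8-point light curve with 10% (0.10) error bars.
flux = [1.00, 1.05, 0.95, 1.30, 1.28, 1.02, 0.97, 1.25]
err = [0.10] * 8
chi2, dof = chi2_constant(flux, err)
variable = chi2 / dof > 2.0        # crude variability flag, threshold arbitrary
```

A large reduced χ² (here ≈ 2.2) rejects constant brightness; formally one would convert χ² and dof to a p-value rather than use a fixed threshold.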
Maximized exoEarth candidate yields for starshades
NASA Astrophysics Data System (ADS)
Stark, Christopher C.; Shaklan, Stuart; Lisman, Doug; Cady, Eric; Savransky, Dmitry; Roberge, Aki; Mandell, Avi M.
2016-10-01
The design and scale of a future mission to directly image and characterize potentially Earth-like planets will be impacted, to some degree, by the expected yield of such planets. Recent efforts to increase the estimated yields, by creating observation plans optimized for the detection and characterization of Earth-twins, have focused solely on coronagraphic instruments; starshade-based missions could benefit from a similar analysis. Here we explore how to prioritize observations for a starshade given the limiting resources of both fuel and time, present analytic expressions to estimate fuel use, and provide efficient numerical techniques for maximizing the yield of starshades. We implemented these techniques to create an approximate design reference mission code for starshades and used this code to investigate how exoEarth candidate yield responds to changes in mission, instrument, and astrophysical parameters for missions with a single starshade. We find that a starshade mission operates most efficiently somewhere between the fuel- and exposure-time-limited regimes and, as a result, is less sensitive to photometric noise sources as well as parameters controlling the photon collection rate in comparison to a coronagraph. We produced optimistic yield curves for starshades, assuming our optimized observation plans are schedulable and future starshades are not thrust-limited. Given these yield curves, detecting and characterizing several dozen exoEarth candidates requires either multiple starshades or an η ≳ 0.3.
Davis, Brett; Birch, Gavin
2010-08-01
Trace metal export by stormwater runoff from a major road and local street in urban Sydney, Australia, is compared using pollutant yield rating curves derived from intensive sampling data. The event loads of copper, lead and zinc are well approximated by logarithmic relationships with respect to total event discharge owing to the reliable appearance of a first flush in pollutant mass loading from urban roads. Comparisons of the yield rating curves for these three metals show that copper and zinc export rates from the local street are comparable with that of the major road, while lead export from the local street is much higher, despite a 45-fold difference in traffic volume. The yield rating curve approach allows problematic environmental data to be presented in a simple yet meaningful manner with less information loss. Copyright 2010 Elsevier Ltd. All rights reserved.
Piecewise-homotopy analysis method (P-HAM) for first order nonlinear ODE
NASA Astrophysics Data System (ADS)
Chin, F. Y.; Lem, K. H.; Chong, F. S.
2013-09-01
In the homotopy analysis method (HAM), the value of the auxiliary parameter h is chosen from the valid region of the h-curve, identified by the horizontal segment of the curve. Any h-value taken from the valid region will, provided that the order of deformation is large enough, in principle yield an approximation series that converges to the exact solution. However, an h-value chosen within this valid region does not always give a good approximation at finite order. This paper suggests an improved method called piecewise-HAM (P-HAM). Instead of a single h-value, this method uses many h-values. Each h-value comes from an individual h-curve, and each h-curve is plotted by fixing the time t at a different value. Each h-value produces a good approximation only in a neighborhood centered at the t on which its h-curve is based. The segments of these good approximations are then joined to form the approximation curve, further enhancing the convergence region. The P-HAM is illustrated and supported by examples.
Cardona, Samir Julián Calvo; Cadavid, Henry Cardona; Corrales, Juan David; Munilla, Sebastián; Cantet, Rodolfo J C; Rogberg-Muñoz, Andrés
2016-09-01
The κ-casein (CSN-3) and β-lactoglobulin (BLG) genes are extensively polymorphic in ruminants. Several association studies have estimated the effects of polymorphisms in these genes on milk yield, milk composition, and cheese-manufacturing properties. Usually, these results are based on production integrated over the lactation curve or on cross-sectional studies at specific days in milk (DIM). However, as differential expression of milk protein genes occurs over lactation, the effect of the polymorphisms may change over time. In this study, we fitted a mixed-effects regression model to test-day records of milk yield and milk quality traits (fat, protein, and total solids yields) from Colombian tropical dairy goats. We used the well-characterized A/B polymorphisms in the CSN-3 and BLG genes. We argued that this approach provided more efficient estimators than cross-sectional designs, given the same number and pattern of observations, and allowed exclusion of between-subject variation from model error. The BLG genotype AA showed a greater performance than the BB genotype for all traits along the whole lactation curve, whereas the heterozygote showed an intermediate performance. We observed no such constant pattern for the CSN-3 gene between the AA homozygote and the heterozygote (the BB genotype was absent from the sample). The differences among the genotypic effects of the BLG and the CSN-3 polymorphisms were statistically significant during peak and mid lactation (around 40-160 DIM) for the BLG gene and only for mid lactation (80-145 DIM) for the CSN-3 gene. We also estimated the additive and dominant effects of the BLG locus. The locus showed a statistically significant additive behavior along the whole lactation trajectory for all quality traits, whereas for milk yield the effect was not significant at later stages. In turn, we detected a statistically significant dominance effect only for fat yield in the early and peak stages of lactation (at about 1-45 DIM). 
The longitudinal analysis of test-day records allowed us to estimate the differential effects of polymorphisms along the lactation curve, pointing toward stages that could be affected by the gene. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Chan, George C. Y. [Bloomington, IN; Hieftje, Gary M [Bloomington, IN
2010-08-03
A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset creating a calibrated first dataset curve. If the calibrated first dataset curve has a variability along the location within the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma related errors) for each sought-for analyte.
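The cross-over step can be illustrated with straight-line stand-ins for the two calibrated dataset curves; the slopes and intercepts below are hypothetical, chosen only to show the geometry of the correction:

```python
def crossover(a1, b1, a2, b2):
    """Intersection of two lines y = a + b*x, standing in for the
    calibrated concentration-vs-plasma-position curves of the
    original and the diluted (dilution-corrected) sample."""
    x = (a2 - a1) / (b1 - b2)
    return x, a1 + b1 * x

# Hypothetical calibrated curves: the original sample drifts upward
# along the plasma axis, the dilution-corrected one drifts downward.
pos, conc = crossover(10.0, 0.5, 12.0, -0.5)   # conc: corrected value
```

In the method described above, a position-independent (flat) calibrated curve signals no plasma-related error; when the curves vary with position, their crossing point gives the analyte value free of those errors.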
Development of growth and yield models for southern hardwoods: site index determinations
John Paul McTague; Daniel J. Robison; David O' Loughlin; Joseph Roise; Robert Kellison
2006-01-01
Growth and yield data from across 13 southern States, collected from 1967 to 2004 from fully-stocked, even-aged southern hardwood forests on a variety of site types, were used to calculate site index curves. These derived curves provide an efficient means to evaluate the productivity-age relation, which varies across many sites. These curves were derived for mixed-species...
Behavioral Economics and Empirical Public Policy
ERIC Educational Resources Information Center
Hursh, Steven R.; Roma, Peter G.
2013-01-01
The application of economics principles to the analysis of behavior has yielded novel insights on value and choice across contexts ranging from laboratory animal research to clinical populations to national trends of global impact. Recent innovations in demand curve methods provide a credible means of quantitatively comparing qualitatively…
NASA Astrophysics Data System (ADS)
Hoffmann, Ryan; Dennison, J. R.; Abbott, Jonathan
2006-03-01
When incident energetic electrons interact with a material, they excite electrons within the material to escape energies. The electron emission is quantified as the ratio of emitted electrons to incident particle flux, termed electron yield. Measuring the electron yield of insulators is difficult due to dynamic surface charge accumulation which directly affects landing energies and the potential barrier that emitted electrons must overcome. Our recent measurements of highly insulating materials have demonstrated significant changes in total yield curves and yield decay curves for very small electron doses equivalent to a trapped charge density of <10^10 electrons /cm^3. The Chung-Everhart theory provides a basic model for the behavior of the electron emission spectra which we relate to yield decay curves as charge is allowed to accumulate. Yield measurements as a function of dose for polyimide (Kapton^TM) and microcrystalline SiO2 will be presented. We use our data and model to address the question of whether there is a minimal dose threshold at which the accumulated charge no longer affects the yield.
Potential advantages of curve sawing non-straight hardwood logs
Philip A. Araman
2007-01-01
Curve sawing is not new to the softwood industry. Softwood sawmill managers think about how fast they can push logs through their sawmill to maximize the yield of 1x and 2x lumber. Curve sawing helps mills maximize yield when sawing non-straight logs. Hardwood sawmill managers don't want to push logs through their sawmills, because they want to maximize lumber value...
Neck curve polynomials in neck rupture model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurniadi, Rizal; Perkasa, Yudha S.; Waris, Abdul
2012-06-06
The Neck Rupture Model describes the scission process in terms of the position of smallest neck radius in the liquid drop. Traditionally, the rupture position is determined randomly, hence the name Random Neck Rupture Model (RNRM). Here, neck curve polynomials have been employed in the Neck Rupture Model to calculate the fission yield of the neutron-induced fission reaction of {sup 280}X{sub 90}, varying the order of the polynomials as well as the temperature. The neck curve polynomial approximation has an important effect on the shape of the fission yield curve.
NASA Astrophysics Data System (ADS)
Nakwattanaset, Aeksuwat; Suranuntchai, Surasak
2018-03-01
Normally, Forming Limit Curves (FLCs) cannot account for shear fracture as well as a damage curve can. This article presents the experimental determination of the FLC for Advanced High Strength Steel (AHSS) sheets of grade JAC780Y using the Nakazima forming test and tensile tests on different sample geometries. From these results, the forming limit curve (strain space) was transformed into a damage curve (stress space) relating plastic strain and stress triaxiality. The transformation to stress space was carried out using the Hill-48 and von Mises yield functions, and the article shows that both yield criteria can be used in the transformation.
NASA Technical Reports Server (NTRS)
Yim, John T.
2017-01-01
A survey of low energy xenon ion impact sputter yields was conducted to provide a more coherent baseline set of sputter yield data and accompanying fits for electric propulsion integration. Data uncertainties are discussed and different available curve fit formulas are assessed for their general suitability. A Bayesian parameter fitting approach is used with a Markov chain Monte Carlo method to provide estimates for the fitting parameters while characterizing the uncertainties for the resulting yield curves.
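The Markov chain Monte Carlo parameter fit described above can be sketched with a toy random-walk Metropolis sampler. The threshold model, parameter values, and synthetic data below are invented for illustration; the survey's actual yield-curve formulas and data differ:

```python
import math
import random

random.seed(0)

# Toy threshold model for a yield curve (illustrative only; not a
# published sputter-yield formula): Y(E) = a * (E - Eth) above Eth.
def model(E, a, Eth):
    return a * (E - Eth) if E > Eth else 0.0

# Synthetic "measurements" with known truth a = 0.01, Eth = 50 eV.
energies = [60, 80, 100, 150, 200, 300]
sigma = 0.02
data = [model(E, 0.01, 50.0) + random.gauss(0, sigma) for E in energies]

def log_post(a, Eth):
    """Log-posterior: flat priors on a > 0, 0 < Eth < 60, Gaussian noise."""
    if a <= 0 or not 0 < Eth < 60:
        return -math.inf
    return -0.5 * sum((d - model(E, a, Eth))**2 / sigma**2
                      for d, E in zip(data, energies))

# Random-walk Metropolis sampling of the posterior.
a, Eth = 0.02, 30.0
lp = log_post(a, Eth)
samples = []
for _ in range(20000):
    a_new, Eth_new = a + random.gauss(0, 0.001), Eth + random.gauss(0, 2.0)
    lp_new = log_post(a_new, Eth_new)
    if math.log(random.random()) < lp_new - lp:
        a, Eth, lp = a_new, Eth_new, lp_new
    samples.append((a, Eth))

burned = samples[5000:]            # discard burn-in
a_mean = sum(s[0] for s in burned) / len(burned)
Eth_mean = sum(s[1] for s in burned) / len(burned)
```

The spread of the post-burn-in samples, not shown here, is what characterizes the uncertainty bands on the resulting yield curves.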
The effect of flow data resolution on sediment yield estimation and channel design
NASA Astrophysics Data System (ADS)
Rosburg, Tyler T.; Nelson, Peter A.; Sholtes, Joel S.; Bledsoe, Brian P.
2016-07-01
The decision to use either daily-averaged or sub-daily streamflow records has the potential to impact the calculation of sediment transport metrics and stream channel design. Using bedload and suspended load sediment transport measurements collected at 138 sites across the United States, we calculated the effective discharge, sediment yield, and half-load discharge using sediment rating curves over long time periods (median record length = 24 years) with both daily-averaged and sub-daily streamflow records. A comparison of sediment transport metrics calculated with both daily-averaged and sub-daily streamflow data at each site showed that daily-averaged flow data do not adequately represent the magnitude of high stream flows at hydrologically flashy sites. Daily-averaged streamflow data cause an underestimation of sediment transport and sediment yield (including the half-load discharge) at flashy sites. The degree of underestimation was correlated with the level of flashiness and the exponent of the sediment rating curve. No consistent relationship between the use of either daily-averaged or sub-daily streamflow data and the resultant effective discharge was found. When used in channel design, computed sediment transport metrics may have errors due to flow data resolution, which can propagate into design slope calculations which, if implemented, could lead to unwanted aggradation or degradation in the design channel. This analysis illustrates the importance of using sub-daily flow data in the calculation of sediment yield in urbanizing or otherwise flashy watersheds. Furthermore, this analysis provides practical charts for estimating and correcting these types of underestimation errors commonly incurred in sediment yield calculations.
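The underestimation mechanism described above, averaging flow before applying a nonlinear rating curve, can be demonstrated in a few lines. The hydrograph and rating-curve parameters are hypothetical:

```python
# Hypothetical flashy hydrograph: 24 "hourly" flows for one day.
subdaily = [2, 2, 3, 5, 20, 80, 150, 90, 40, 15, 8, 4] * 2
daily_avg = sum(subdaily) / len(subdaily)

# Hypothetical power-law rating curve Qs = a * Q**b with b > 1, so
# sediment load is a convex function of flow (Jensen's inequality).
a, b = 0.01, 2.0
load_subdaily = sum(a * q**b for q in subdaily) / len(subdaily)
load_daily = a * daily_avg**b

underestimate = 1 - load_daily / load_subdaily   # fraction of load missed
```

Because b > 1, the daily-averaged flow misses most of the load carried by the short high-flow peaks; for this toy hydrograph roughly 60% of the load is lost, and the loss grows with both flashiness and the rating-curve exponent, as the study reports.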
Durtschi, Jacob D; Stevenson, Jeffery; Hymas, Weston; Voelkerding, Karl V
2007-02-01
Real-time PCR data analysis for quantification has been the subject of many studies aimed at the identification of new and improved quantification methods. Several analysis methods have been proposed as superior alternatives to the common variations of the threshold crossing method. Notably, sigmoidal and exponential curve fit methods have been proposed. However, these studies have primarily analyzed real-time PCR with intercalating dyes such as SYBR Green. Clinical real-time PCR assays, in contrast, often employ fluorescent probes whose real-time amplification fluorescence curves differ from those of intercalating dyes. In the current study, we compared four analysis methods related to recent literature: two versions of the threshold crossing method, a second derivative maximum method, and a sigmoidal curve fit method. These methods were applied to a clinically relevant real-time human herpes virus type 6 (HHV6) PCR assay that used a minor groove binding (MGB) Eclipse hybridization probe as well as an Epstein-Barr virus (EBV) PCR assay that used an MGB Pleiades hybridization probe. We found that the crossing threshold method yielded more precise results when analyzing the HHV6 assay, which was characterized by lower signal/noise and less developed amplification curve plateaus. In contrast, the EBV assay, characterized by greater signal/noise and amplification curves with plateau regions similar to those observed with intercalating dyes, gave results with statistically similar precision by all four analysis methods.
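A minimal sketch of the second derivative maximum method on an idealized sigmoidal amplification curve; the logistic form and its parameters are invented for illustration, not taken from the assays in the study:

```python
import math

def logistic(c, F0=0.05, Fmax=1.0, c0=25.0, k=0.6):
    """Idealized sigmoidal real-time PCR amplification curve
    (hypothetical parameters; fluorescence vs. cycle number c)."""
    return F0 + Fmax / (1 + math.exp(-k * (c - c0)))

# Second-derivative-maximum method: take the quantification cycle at
# the point of maximum curvature, located here by finite differences
# (proportional to the second derivative on a uniform grid).
cycles = [i * 0.01 for i in range(4001)]          # cycles 0.00 .. 40.00
F = [logistic(c) for c in cycles]
d2 = [F[i + 1] - 2 * F[i] + F[i - 1] for i in range(1, len(F) - 1)]
cq = cycles[1 + max(range(len(d2)), key=d2.__getitem__)]
```

For a clean logistic this lands at the analytic point c0 - ln(2 + √3)/k ≈ 22.8; on noisy probe data with a weak plateau, as the study found for the HHV6 assay, this landmark becomes harder to locate reliably than a threshold crossing.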
J. Chris Toney; Karen G. Schleeweis; Jennifer Dungan; Andrew Michaelis; Todd Schroeder; Gretchen G. Moisen
2015-01-01
The North American Forest Dynamics (NAFD) projectâs Attribution Team is completing nationwide processing of historic Landsat data to provide a comprehensive annual, wall-to-wall analysis of US disturbance history, with attribution, over the last 25+ years. Per-pixel time series analysis based on a new nonparametric curve fitting algorithm yields several metrics useful...
NASA Astrophysics Data System (ADS)
Khaleghi, Mohammad Reza; Varvani, Javad
2018-02-01
The complex and variable nature of river sediment yield complicates estimation of long-term sediment yield and of sediment input into reservoirs. Sediment Rating Curves (SRCs) are generally used to estimate the suspended sediment load of rivers and drainage watersheds. Since the regression equations of the SRCs are obtained by logarithmic retransformation, they over- or underestimate the true sediment load of the rivers. To evaluate bias correction factors in the Kalshor and Kashafroud watersheds, seven hydrometric stations of this region with suitable upstream watersheds and spatial distribution were selected. Investigation of the accuracy index (ratio of estimated to observed sediment yield) and the precision index of different bias correction factors, FAO, Quasi-Maximum Likelihood Estimator (QMLE), smearing, and Minimum-Variance Unbiased Estimator (MVUE), with the LSD test showed that the FAO coefficient increases the estimation error at all of the stations. Application of MVUE in linear and mean-load rating curves has no statistically meaningful effect. QMLE and smearing factors increased the estimation error in the mean-load rating curve but have no effect on the linear rating curve estimation.
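Two of the correction factors named above, QMLE and smearing, can be sketched directly. The sketch assumes a fitted log-log rating curve with the hypothetical natural-log-space residuals shown:

```python
import math

# Hypothetical natural-log-space residuals of a fitted log-log
# sediment rating curve (they sum to zero, as regression residuals do).
residuals = [0.30, -0.25, 0.10, -0.40, 0.45, -0.05, 0.20, -0.35]
n = len(residuals)

# QMLE correction exp(s^2/2): assumes normally distributed residuals;
# s^2 uses n - 2 degrees of freedom for the two fitted parameters.
s2 = sum(r * r for r in residuals) / (n - 2)
qmle = math.exp(s2 / 2)

# Duan's nonparametric smearing correction: mean of exp(residuals).
smearing = sum(math.exp(r) for r in residuals) / n

# Either factor multiplies the retransformed (back-transformed) loads.
```

Both factors exceed 1, reflecting the systematic underestimation introduced by retransforming a least-squares fit in log space; which factor helps in practice is exactly what the study evaluates station by station.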
NASA Astrophysics Data System (ADS)
Zaman, Shakil Bin; Barlat, Frédéric; Kim, Jin Hwan
2018-05-01
Large-scale advanced high strength steel (AHSS) sheet specimens were deformed in uniaxial tension using a novel grip system mounted on an MTS universal tension machine. After pre-strain, they were used as pre-strained material to examine the anisotropic response in biaxial tension tests with various load ratios and in orthogonal tension tests at 45° and 90° from the pre-strain axis. The flow curve and the instantaneous r-value of the pre-strained steel in each of the aforementioned uniaxial testing conditions were also measured and compared with those of the undeformed steel. Furthermore, an exhaustive analysis of the yield surface was conducted, and the results before and after pre-strain were compared. The homogeneous anisotropic hardening (HAH) model [1] was employed to predict the behavior of the pre-strained material. It was found that the HAH-predicted flow curves after non-linear strain path change and the yield loci after uniaxial pre-strain were in good agreement with the experiments, while the r-value evolution after strain path change was qualitatively well predicted.
On the minimum quantum requirement of photosynthesis.
Zeinalov, Yuzeir
2009-01-01
An analysis of the shape of photosynthetic light curves is presented, and the existence of the initial non-linear part is shown to be a consequence of the operation of the non-cooperative (Kok's) mechanism of oxygen evolution or the effect of dark respiration. The effect of non-linearity on the quantum efficiency (yield) and quantum requirement is reconsidered. The essential conclusions are: 1) The non-linearity of the light curves cannot be compensated for by using suspensions of algae or chloroplasts with high (>1.0) optical density or absorbance. 2) The values of the maxima of the quantum efficiency curves, or of the minima of the quantum requirement curves, cannot be used to estimate the exact values of the maximum quantum efficiency and the minimum quantum requirement. These should be estimated only after extrapolating the linear, higher-intensity part of the quantum requirement curves to zero light intensity.
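The recommended extrapolation, fitting the linear high-intensity part of the quantum requirement curve and reading its intercept at zero light intensity, can be sketched as follows (the readings are hypothetical):

```python
def linfit(xs, ys):
    """Ordinary least-squares fit y = intercept + slope * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx)**2 for x in xs))
    return slope, my - slope * mx

# Hypothetical quantum-requirement readings in the linear, high-light
# part of the curve (arbitrary intensity units).
intensity = [40, 60, 80, 100, 120]
quantum_req = [9.0, 9.5, 10.0, 10.5, 11.0]

slope, qr_min = linfit(intensity, quantum_req)
# qr_min, the intercept at zero intensity, estimates the minimum
# quantum requirement; the curve's observed minimum would overstate it.
```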
Universal rescaling of flow curves for yield-stress fluids close to jamming
NASA Astrophysics Data System (ADS)
Dinkgreve, M.; Paredes, J.; Michels, M. A. J.; Bonn, D.
2015-07-01
The experimental flow curves of four different yield-stress fluids with different interparticle interactions are studied near the jamming concentration. By appropriate scaling with the distance to jamming all rheology data can be collapsed onto master curves below and above jamming that meet in the shear-thinning regime and satisfy the Herschel-Bulkley and Cross equations, respectively. In spite of differing interactions in the different systems, master curves characterized by universal scaling exponents are found for the four systems. A two-state microscopic theory of heterogeneous dynamics is presented to rationalize the observed transition from Herschel-Bulkley to Cross behavior and to connect the rheological exponents to microscopic exponents for the divergence of the length and time scales of the heterogeneous dynamics. The experimental data and the microscopic theory are compared with much of the available literature data for yield-stress systems.
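A minimal sketch of fitting the Herschel-Bulkley law σ = σ_y + K·γ̇ⁿ to a flow curve. The fitting strategy (grid search over the exponent, linear least squares for the other two parameters) and the synthetic data are illustrative assumptions, not the paper's procedure:

```python
def fit_hb(rates, stresses):
    """Fit the Herschel-Bulkley law sigma = sigma_y + K * rate**n:
    grid search over the exponent n, with linear least squares for
    the yield stress sigma_y and consistency K at each candidate n."""
    best = None
    for i in range(1, 100):
        n = i / 100
        xs = [r**n for r in rates]
        mx = sum(xs) / len(xs)
        my = sum(stresses) / len(stresses)
        K = (sum((x - mx) * (s - my) for x, s in zip(xs, stresses))
             / sum((x - mx)**2 for x in xs))
        sy = my - K * mx
        sse = sum((s - sy - K * x)**2 for x, s in zip(xs, stresses))
        if best is None or sse < best[0]:
            best = (sse, sy, K, n)
    return best[1], best[2], best[3]

# Synthetic flow curve with known sigma_y = 10 Pa, K = 3, n = 0.5.
rates = [0.01, 0.1, 1.0, 10.0, 100.0]
stresses = [10 + 3 * r**0.5 for r in rates]
sigma_y, K, n = fit_hb(rates, stresses)
```

The rescaling collapse described in the abstract amounts to fitting such laws above jamming (Herschel-Bulkley) and below it (Cross) and scaling the parameters with the distance to the jamming concentration.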
NASA Astrophysics Data System (ADS)
Bang, Sungsik; Rickhey, Felix; Kim, Minsoo; Lee, Hyungyil; Kim, Naksoo
2013-12-01
In this study we establish a process to predict hardening behavior considering the Bauschinger effect for zircaloy-4 sheets. When a metal is compressed after tension in forming, the yield strength decreases. For this reason, the Bauschinger effect should be considered in FE simulations of spring-back. We suggested a suitable specimen size and a method for determining the optimum tightening torque for simple shear tests. Shear stress-strain curves are obtained for five materials. We developed a method to convert the shear load-displacement curve to the effective stress-strain curve with FEA. We simulated the simple shear forward/reverse test using the combined isotropic/kinematic hardening model. We also investigated the change of the load-displacement curve by varying the hardening coefficients. We determined the hardening coefficients so that they follow the hardening behavior of zircaloy-4 in experiments.
Lactation curves of dairy camels in an intensive system.
Musaad, Abdelgadir; Faye, Bernard; Nikhela, Abdelmoneim Abu
2013-04-01
Weekly milk records of 47 she-camels in a multibreed dairy camel herd were collected over a period of 5 years. A total of 72 lactation curves were defined, and relationships with parity, calving season, lactation length, milk production level, following lactations, and dam weight were analyzed. Overall mean values were: milk yield up to 12 months, 1,970 ± 790 l; lactation length, 12.5 months; persistency, 94.7 %; weekly peak yield, 50.7 l; monthly peak yield, 220 ± 90 l; and number of weeks to reach peak yield, 28. The highest productivity was recorded in summer, with a weekly mean of 48.2 ± 19.4 l, compared with 34.1 ± 16.3 l in winter. The highest average yield was recorded for camels at sixth parity, whereas the highest weekly peak was at eighth parity and the highest persistency at fifth parity. Camels that calved during the cold months (November to February) were the most productive, with the highest persistency, peak yield, and longest lactation length. Four types of curves were identified, corresponding to different parities and milk yield levels. Based on these data, specific models for camels are proposed.
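Lactation curves like these are often summarized with Wood's incomplete-gamma model, y(t) = a·t^b·e^(−ct). The abstract does not name the model used, so the sketch below is purely illustrative, with hypothetical parameters chosen so the peak falls near the reported 28 weeks:

```python
import math

# Wood's incomplete-gamma lactation model (a common choice; not
# stated in the paper): y(t) = a * t**b * exp(-c * t), t in weeks.
def wood(t, a, b, c):
    return a * t**b * math.exp(-c * t)

# Hypothetical parameters; the peak occurs analytically at t = b / c.
a, b, c = 10.0, 0.56, 0.02
t_peak = b / c                       # weeks to peak yield
y_peak = wood(t_peak, a, b, c)
```

Persistency in such models relates to how slowly y(t) declines after t_peak, i.e. to a small c relative to b.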
NASA Astrophysics Data System (ADS)
Montejo, Ludguier D.; Jia, Jingfei; Kim, Hyun K.; Hielscher, Andreas H.
2013-03-01
We apply the Fourier Transform to absorption and scattering coefficient images of proximal interphalangeal (PIP) joints and evaluate the performance of these coefficients as classifiers using receiver operator characteristic (ROC) curve analysis. We find 25 features that yield a Youden index over 0.7, 3 features that yield a Youden index over 0.8, and 1 feature that yields a Youden index over 0.9 (90.0% sensitivity and 100% specificity). In general, scattering coefficient images yield better one-dimensional classifiers compared to absorption coefficient images. Using features derived from scattering coefficient images we obtain an average Youden index of 0.58 +/- 0.16, and an average Youden index of 0.45 +/- 0.15 when using features from absorption coefficient images.
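The Youden index used above can be computed directly from one-dimensional classifier scores. A minimal sketch with invented feature values (not the study's Fourier features):

```python
def youden(scores_pos, scores_neg):
    """Maximum Youden index J = max(sensitivity + specificity - 1)
    over all thresholds on a one-dimensional classifier score,
    assuming higher scores indicate the positive class."""
    best = 0.0
    for th in sorted(set(scores_pos + scores_neg)):
        sens = sum(s >= th for s in scores_pos) / len(scores_pos)
        spec = sum(s < th for s in scores_neg) / len(scores_neg)
        best = max(best, sens + spec - 1)
    return best

# Hypothetical feature values for affected vs. unaffected joints.
affected = [0.9, 0.8, 0.75, 0.6, 0.55]
unaffected = [0.5, 0.4, 0.35, 0.3, 0.2]
J = youden(affected, unaffected)
```

Perfectly separated groups, as in this toy example, give J = 1; the study's best single feature reached J over 0.9 (90.0% sensitivity, 100% specificity).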
Proposed method for determining the thickness of glass in solar collector panels
NASA Technical Reports Server (NTRS)
Moore, D. M.
1980-01-01
An analytical method was developed for determining the minimum thickness for simply supported, rectangular glass plates subjected to uniform normal pressure environmental loads such as wind, earthquake, snow, and deadweight. The method consists of comparing an analytical prediction of the stress in the glass panel to a glass breakage stress determined from fracture mechanics considerations. Based on extensive analysis using the nonlinear finite element structural analysis program ARGUS, design curves for the structural analysis of simply supported rectangular plates were developed. These curves yield the center deflection, center stress and corner stress as a function of a dimensionless parameter describing the load intensity. A method of estimating the glass breakage stress as a function of a specified failure rate, degree of glass temper, design life, load duration time, and panel size is also presented.
Some Supporting Evidence for Accurate Multivariate Perceptions with Chernoff Faces, Project 547.
ERIC Educational Resources Information Center
Wainer, Howard
A scheme, using features in a cartoon-like human face to represent variables, is tested as to its ability to graphically depict multivariate data. A factor analysis of Harman's "24 Psychological Tests" was performed and yielded four orthogonal factors. Nose width represented the loading on Factor 1; eye size on Factor 2; curve of mouth…
Characterization of time series via Rényi complexity-entropy curves
NASA Astrophysics Data System (ADS)
Jauregui, M.; Zunino, L.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.
2018-05-01
One of the most useful tools for distinguishing between chaotic and stochastic time series is the so-called complexity-entropy causality plane. This diagram involves two complexity measures: the Shannon entropy and the statistical complexity. Recently, this idea has been generalized by considering the Tsallis monoparametric generalization of the Shannon entropy, yielding complexity-entropy curves. These curves have proven to enhance the discrimination among different time series related to stochastic and chaotic processes of numerical and experimental nature. Here we further explore these complexity-entropy curves in the context of the Rényi entropy, which is another monoparametric generalization of the Shannon entropy. By combining the Rényi entropy with the proper generalization of the statistical complexity, we associate a parametric curve (the Rényi complexity-entropy curve) with a given time series. We explore this approach in a series of numerical and experimental applications, demonstrating the usefulness of this new technique for time series analysis. We show that the Rényi complexity-entropy curves enable the differentiation among time series of chaotic, stochastic, and periodic nature. In particular, time series of stochastic nature are associated with curves displaying positive curvature in a neighborhood of their initial points, whereas curves related to chaotic phenomena have a negative curvature; finally, periodic time series are represented by vertical straight lines.
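A minimal sketch of the entropy half of this construction, assuming the standard Bandt-Pompe ordinal-pattern probabilities; the statistical-complexity axis and the authors' exact normalizations are omitted, so this only traces how the normalized Rényi entropy varies with the parameter α:

```python
import math
from itertools import permutations
from collections import Counter

def ordinal_probs(series, d=3):
    """Bandt-Pompe ordinal-pattern probabilities with embedding dimension d."""
    counts = Counter()
    for i in range(len(series) - d + 1):
        window = series[i:i + d]
        pattern = tuple(sorted(range(d), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    return [counts[p] / total for p in permutations(range(d))]

def renyi_entropy(probs, alpha):
    """Normalized Rényi entropy of a distribution; alpha -> 1 recovers Shannon."""
    n = len(probs)
    if abs(alpha - 1.0) < 1e-9:
        h = -sum(p * math.log(p) for p in probs if p > 0)
    else:
        h = math.log(sum(p ** alpha for p in probs if p > 0)) / (1.0 - alpha)
    return h / math.log(n)
```

Evaluating `renyi_entropy(ordinal_probs(x), alpha)` over a grid of α values yields one coordinate of the parametric curve; pairing it with a generalized statistical complexity gives the full Rényi complexity-entropy curve described above.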
NASA Astrophysics Data System (ADS)
Ehinola, O. A.; Opoola, A. O.
2005-05-01
The Slingram electromagnetic (EM) survey using coil separations of 60 and 100 meters was carried out in 10 villages in the Akinyele area of Ibadan, southwestern Nigeria to aid in the development of groundwater. Five main rock types including an undifferentiated gneiss complex (Su), biotite-garnet schist/gneiss (Bs), quartzite and quartz schist (Q), migmatised undifferentiated biotite/hornblende gneiss (M) and pegmatite/quartz vein (P) underlie the study area. A total of 31 EM profiles was made to accurately locate prospective borehole sites in the field. Four main groups with different behavioural patterns were categorized from the EM profiles. Group 1 is characterized by a high density of positive (HDP) or a high density of negative (HDN) real and imaginary curves, Group 2 by parallel real and imaginary curves intersecting with negligible amplitude (PNA), Group 3 by frequent intersection of high density of negative minima (FHN) real and imaginary curves, and Group 4 by separate and approximately parallel (SAP) real and imaginary curves. Qualitative pictures of the overburden thickness and the extent of fracturing have been proposed from these behavioural patterns. A comparison of the borehole yield with the overburden thickness and the level of fracturing shows that borehole yield depends more on the fracture density than on the overburden thickness. The asymmetry of the anomaly was also found to be useful in the determination of the inclination of the conductor/fracture.
Chemical Research--Radiochemistry Report for Month Ending April 17, 1943
DOE R&D Accomplishments Database
Franck, J. Division Director
1952-01-01
1. A continuation of the detailed analysis of beta and soft and hard gamma activity associated with all fission product elements in a nitrate bombardment is presented. The "cooling" time has been extended to 170 days. The data for the individual elements are presented in tables as counts/min and in figures as percentages of total beta, soft gamma, and hard gamma radiations. 2. Calculations and graphs have been made on the heat generated by the longer-lived fission products. The method of analysis is presented. 3. Two new short-lived Rh fission product activities have been found. They are probably the daughters of the two long-lived Ru activities (30d, 200d). Re-evaluation of data on element 43 leads to the conclusion that the longest-lived 43 activity in measurable yields is the 6.1h (formerly 6.6h). New parent-daughter relationships in the rare-earth activities are given. 4. Theoretical beta absorption curves have been made using the Fermi distribution function and linear absorption curves for small energy intervals. A Feather analysis of the absorption curve leads to the theoretical maximum energy.
Light Curve and Orbital Period Analysis of VX Lac
NASA Astrophysics Data System (ADS)
Yılmaz, M.; Nelson, R. H.; Şenavcı, H. V.; İzci, D.; Özavcı, İ.; Gümüş, D.
2017-04-01
In this study, we performed simultaneous light-curve, radial-velocity, and orbital-period analyses of the eclipsing binary system VX Lac. Four-color (BVRI) light curves of the system were analysed using the W-D code. The results imply that VX Lac is a classic Algol-type binary with a mass ratio of q=0.27, of which the less massive secondary component fills its Roche lobe. The orbital period behaviour of the system was analysed by assuming a light-time effect (LITE) from a third body. The O-C analysis yielded a mass transfer rate of dM/dt=1.86×10⁻⁸ M⊙ yr⁻¹ and a minimal mass of the third body of M3=0.31M⊙. The residuals from the mass transfer and the third body were also analysed because another cyclic variation is seen in the O-C diagram. This periodic variation was examined under the hypotheses of stellar magnetic activity and a fourth body.
Wavelet analysis of stellar differential rotation. III. The Sun in white light
NASA Astrophysics Data System (ADS)
Hempelmann, A.
2003-02-01
Future space projects like KEPLER will deliver a vast quantity of high-precision light curves of stars. This paper describes a test concerning the observability of rotation, and even differential rotation, of slowly rotating stars from such data. Two published light curves of solar total irradiance measurements are investigated: the Nimbus-7 Earth Radiation Budget (ERB) observations between 1978 and 1993 and the Active Cavity Radiometer Irradiance Monitor I (ACRIM I) measurements between 1980 and 1989. Light curve analysis shows that oscillations on time-scales comparable to solar rotation, but of a complex pattern, are visible. Neither Fourier analysis nor time-frequency wavelet analysis yields the true rotation period during the more active phases of the solar cycle. The true rotation period dominates only for a short time during solar minimum. In the light of this study, even space-borne broad-band photometry may turn out to be an inappropriate instrument for studying stellar butterfly diagrams of stars rotating as slowly as the Sun. However, it was shown in Papers I and II of this series that chromospheric tracers like Lyman alpha, Mg II h+k and Ca II H+K are appropriate instruments for this task.
Brandfass, Christoph; Karlovsky, Petr
2006-01-23
Fusarium head blight (FHB) is a disease of cereal crops, which has a severe impact on wheat and barley production worldwide. Apart from reducing the yield and impairing grain quality, FHB leads to contamination of grain with toxic secondary metabolites (mycotoxins), which pose a health risk to humans and livestock. The Fusarium species primarily involved in FHB are F. graminearum and F. culmorum. A key prerequisite for a reduction in the incidence of FHB is an understanding of its epidemiology. We describe a duplex-PCR-based method for the simultaneous detection of F. culmorum and F. graminearum in plant material. Species-specific PCR products are identified by melting curve analysis performed in a real-time thermocycler in the presence of the fluorescent dye SYBR Green I. In contrast to multiplex real-time PCR assays, the method does not use doubly labeled hybridization probes. PCR with product differentiation by melting curve analysis offers a cost-effective means of qualitative analysis for the presence of F. culmorum and F. graminearum in plant material. This method is particularly suitable for epidemiological studies involving a large number of samples.
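Melting curve analysis of the kind described locates product-specific melting temperatures as peaks of the negative derivative -dF/dT of fluorescence with respect to temperature. The sketch below is a generic peak finder on hypothetical data, not the authors' instrument software:

```python
def melt_temps(temps, fluor, threshold=0.0):
    """Return temperatures where -dF/dT has a local maximum above threshold,
    i.e. candidate melting temperatures (Tm) of PCR products.
    temps must be strictly increasing; fluor is the raw fluorescence."""
    # central-difference estimate of -dF/dT at interior points
    deriv = []
    for i in range(1, len(temps) - 1):
        d = -(fluor[i + 1] - fluor[i - 1]) / (temps[i + 1] - temps[i - 1])
        deriv.append((temps[i], d))
    # local maxima of the derivative above the noise threshold
    peaks = []
    for j in range(1, len(deriv) - 1):
        t, d = deriv[j]
        if d > threshold and d >= deriv[j - 1][1] and d >= deriv[j + 1][1]:
            peaks.append(t)
    return peaks
```

Two species-specific amplicons with distinct Tm values would show up as two separate peaks, which is the basis for distinguishing F. culmorum from F. graminearum in one reaction.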
Code of Federal Regulations, 2014 CFR
2014-07-01
..., SERIES EE Maturities, Redemption Values, and Investment Yields of Series EE Savings Bonds General... securities. This curve relates the yield on a security to its time to maturity. Yields at particular points...
Radiochemistry and the Study of Fission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rundberg, Robert S.
These are slides from a lecture given at UC Berkeley. Radiochemistry has been used to study fission since its discovery. Radiochemical methods are used to determine cumulative mass yields. These measurements have led to the two-mode fission hypothesis to model the neutron energy dependence of fission product yields. Fission product yields can be used for the nuclear forensics of nuclear explosions. The mass yield curve depends on both the fuel and the neutron spectrum of a device. Recent studies have shown that the nuclear structure of the compound nucleus can affect the mass yield distribution. The following topics are covered: In the beginning: the discovery of fission; forensics using fission products: what can be learned from fission products, definitions of R-values and Q-values, fission bases, K-factors and fission chambers, limitations; the neutron energy dependence of the mass yield distribution (the two-mode fission hypothesis); the influence of nuclear structure on the mass yield distribution. In summary: Radiochemistry has been used to study fission since its discovery. Radiochemical measurements of fission product yields have provided the highest-precision data for developing fission models and for nuclear forensics. The two-mode fission hypothesis provides a description of the neutron energy dependence of the mass yield curve. However, data are still rather sparse and more work is needed near second and third chance fission. Radiochemical measurements have provided evidence for the importance of nuclear states in the compound nucleus in predicting the mass yield curve in the resonance region.
Using Principal Component Analysis to Improve Fallout Characterization
2017-03-23
… between actinide location and elemental composition in fallout from historic atmospheric nuclear weapons testing. Fifty spherical fallout samples were … a mathematical approach to solving the complex system of elemental variables while establishing correlations to actinide incorporation within the fallout … Figure 1: the double-hump curve for uranium-235, showing the effective fission yield by mass number for thermal neutrons.
Design of advanced beams considering elasto-plastic behaviour of material
NASA Astrophysics Data System (ADS)
Tolun, S.
1992-10-01
The paper proposes a computational procedure for the precise calculation of the limit and ultimate (design) loads that must be carried by an advanced aviation beam without permanent distortion and without rupture. Among several stress-strain curve representations, the one suited to a particular material is chosen for the applied-load, yield-load, and failure-load calculations, and nonlinear analysis is then performed.
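One widely used stress-strain representation of the kind mentioned is the Ramberg-Osgood curve (an assumption for illustration; the paper does not name its choice). A sketch of evaluating the curve and inverting it for load calculations, using the 0.2%-offset form:

```python
def ramberg_osgood_strain(stress, E, sigma_02, n):
    """Total strain = elastic part + plastic part (0.2%-offset Ramberg-Osgood):
    eps = stress/E + 0.002 * (stress/sigma_02)**n."""
    return stress / E + 0.002 * (stress / sigma_02) ** n

def stress_at_strain(target, E, sigma_02, n, lo=0.0, hi=2000.0):
    """Invert the monotonic strain(stress) relation by bisection;
    hi must bracket the answer (units: MPa here, a hypothetical range)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ramberg_osgood_strain(mid, E, sigma_02, n) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

By construction, at stress equal to the 0.2% proof stress the plastic strain is exactly 0.002, which is a convenient sanity check on any implementation.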
Comparison of random regression test-day models for Polish Black and White cattle.
Strabel, T; Szyda, J; Ptak, E; Jamrozik, J
2005-10-01
Test-day milk yields of first-lactation Black and White cows were used to select the model for routine genetic evaluation of dairy cattle in Poland. The population of Polish Black and White cows is characterized by small herd size, low level of production, and relatively early peak of lactation. Several random regression models for first-lactation milk yield were initially compared using the "percentage of squared bias" criterion and the correlations between true and predicted breeding values. Models with random herd-test-date effects, fixed age-season and herd-year curves, and random additive genetic and permanent environmental curves (Legendre polynomials of different orders were used for all regressions) were chosen for further studies. Additional comparisons included analyses of the residuals and shapes of variance curves in days in milk. The low production level and early peak of lactation of the breed required the use of Legendre polynomials of order 5 to describe age-season lactation curves. For the other curves, Legendre polynomials of order 3 satisfactorily described daily milk yield variation. Fitting third-order polynomials for the permanent environmental effect made it possible to adequately account for heterogeneous residual variance at different stages of lactation.
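The Legendre regressions described above can be sketched as an ordinary least-squares fit on the standardized days-in-milk scale. This is an illustration with hypothetical test-day data and a simple normal-equations solver, not the national evaluation software:

```python
def legendre(order, x):
    """Legendre polynomials P_0..P_order at x in [-1, 1], by recurrence."""
    p = [1.0, x]
    for k in range(1, order):
        p.append(((2 * k + 1) * x * p[k] - k * p[k - 1]) / (k + 1))
    return p[:order + 1]

def fit_lactation(dim, yields, order=3, dim_max=305):
    """Least-squares fit of test-day yields on a Legendre basis of the
    standardized days-in-milk scale x = 2*d/dim_max - 1, via normal equations."""
    X = [legendre(order, 2.0 * d / dim_max - 1.0) for d in dim]
    k = order + 1
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * y for r, y in zip(X, yields)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    coef = [0.0] * k
    for c in reversed(range(k)):
        coef[c] = (b[c] - sum(A[c][j] * coef[j] for j in range(c + 1, k))) / A[c][c]
    return coef
```

In the random regression models above, such coefficient vectors are fitted per animal (genetic and permanent environmental curves) rather than by plain least squares, but the basis construction is the same.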
Hyodo, T; Minagawa, K; Inoue, T; Fujimoto, J; Minami, N; Bito, R; Mikita, A
2013-12-01
A nicotine part-filter method can be applied to estimate smokers' mouth level exposure (MLE) to smoke constituents. The objectives of this study were (1) to generate calibration curves for 47 smoke constituents, (2) to estimate MLE to selected smoke constituents using Japanese smokers of commercially available cigarettes covering a wide range of International Organization for Standardization tar yields (1-21 mg/cigarette), and (3) to investigate relationships between MLE estimates and various machine-smoking yields. Five cigarette brands were machine-smoked under 7 different smoking regimes, and smoke constituents and nicotine content in part-filters were measured. Calibration curves were then generated. Spent cigarette filters were collected from a target of 50 smokers for each of the 15 brands and a total of 780 filters were obtained. Nicotine content in part-filters was then measured and MLE to each smoke constituent was estimated. Strong correlations were identified between nicotine content in part-filters and 41 out of the 47 smoke constituent yields. Estimates of MLE to acetaldehyde, acrolein, 1,3-butadiene, benzene, benzo[a]pyrene, carbon monoxide, and tar showed significant negative correlations with the corresponding constituent yields per mg nicotine under the Health Canada Intense smoking regime, whereas significant positive correlations were observed for N-nitrosonornicotine and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone. Copyright © 2013 Elsevier Inc. All rights reserved.
Hypervelocity impact on shielded plates
NASA Technical Reports Server (NTRS)
Smith, James P.
1993-01-01
A ballistic limit equation for hypervelocity impact on thin plates is derived analytically. This equation applies to cases of impulsive impact on a plate that is protected by a multi-shock shield, and it is valid for velocities above 6 km/s. Experimental tests were conducted at the NASA Johnson Space Center on square aluminum plates. By comparing the center deflections of these plates with the theoretical deflections of a rigid-plastic plate subjected to a blast load, one determines the dynamic yield strength of the plate material. The analysis is based on a theory for the expansion of the fragmented projectile and on a simple failure criterion. Curves are presented for the critical projectile radius versus the projectile velocity, and for the critical plate thickness versus the velocity. These curves are in good agreement with curves that have been generated empirically.
Planetary spectra for anisotropic scattering
NASA Technical Reports Server (NTRS)
Chamberlain, J. W.
1976-01-01
Some effects on planetary spectra that would be produced by departures from isotropic scattering are examined. The phase function is the simplest departure to handle analytically and the only phase function, other than the isotropic one, that can be incorporated into a Chandrasekhar first approximation. This approach has the advantage of illustrating effects resulting from anisotropies while retaining the simplicity that yields analytic solutions. The curve of growth is the sine qua non of planetary spectroscopy. The discussion emphasizes the difficulties and importance of ascertaining curves of growth as functions of observing geometry. A plea is made to observers to analyze their empirical curves of growth, whenever it seems feasible, in terms of coefficients which are the leading terms in radiative-transfer analysis. An algebraic solution to the two sets of anisotropic H functions is developed which gives emergent intensities accurate to 0.3%.
1993-12-02
… determined by Leco* analysis, with the highest impurity being C (< 91 wt. ppm) followed by O (< 39 ppm) and H (< 5 ppm). Results: The yield stress of single … Analysis: The slip trace analyses made after deformation along [0011, J021), and 17711 are summarized in Table 1. The characteristics of the slip traces … elastic recovery of the material as the indenter is removed. Following their analysis, we used the unloading portion of the curve to estimate the …
WTAQ - A computer program for aquifer-test analysis of confined and unconfined aquifers
Barlow, P.M.; Moench, A.F.
2004-01-01
Computer program WTAQ was developed to implement a Laplace-transform analytical solution for axial-symmetric flow to a partially penetrating, finite-diameter well in a homogeneous and anisotropic unconfined (water-table) aquifer. The solution accounts for wellbore storage and skin effects at the pumped well, delayed response at an observation well, and delayed or instantaneous drainage from the unsaturated zone. For the particular case of zero drainage from the unsaturated zone, the solution simplifies to that of axial-symmetric flow in a confined aquifer. WTAQ calculates theoretical time-drawdown curves for the pumped well and observation wells and piezometers. The theoretical curves are used with measured time-drawdown data to estimate hydraulic parameters of confined or unconfined aquifers by graphical type-curve methods or by automatic parameter-estimation methods. Parameters that can be estimated are horizontal and vertical hydraulic conductivity, specific storage, and specific yield. A sample application illustrates use of WTAQ for estimating hydraulic parameters of a hypothetical, unconfined aquifer by type-curve methods. Copyright ASCE 2004.
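For the confined-aquifer limiting case mentioned above, the theoretical time-drawdown curve reduces to the classical Theis solution. The sketch below evaluates it via the convergent series for the well function W(u); it is an illustration of the type-curve concept, not WTAQ itself:

```python
import math

def theis_drawdown(r, t, Q, T, S):
    """Theis (confined-aquifer) drawdown s = Q/(4*pi*T) * W(u),
    with u = r^2*S/(4*T*t); W(u) from its convergent series
    W(u) = -gamma - ln(u) + u - u^2/(2*2!) + u^3/(3*3!) - ...
    (accurate for small to moderate u)."""
    u = r * r * S / (4.0 * T * t)
    w = -0.5772156649015329 - math.log(u)  # Euler-Mascheroni constant
    term = 1.0
    for k in range(1, 30):
        term *= -u / k          # term = (-u)^k / k!
        w -= term / k           # adds (-1)^(k+1) * u^k / (k * k!)
    return Q / (4.0 * math.pi * T) * w
```

Plotting s against t for trial values of T and S and overlaying measured drawdowns is the manual type-curve matching that programs like WTAQ automate (and extend to the unconfined, delayed-drainage case).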
NASA Astrophysics Data System (ADS)
Ehinola, O. A.; Opoola, A. O.; Adesokan, H. A.
2006-04-01
The Slingram electromagnetic (EM) survey using a coil separation of 60 and 100 m was carried out in ten villages in the Akinyele area of Ibadan, southwestern Nigeria to aid in the development of groundwater. Five main rock types including an undifferentiated gneiss complex (Su), biotite-garnet schist/gneiss (Bs), quartzite and quartz schist (Q), migmatized undifferentiated biotite/hornblende gneiss (M) and pegmatite/quartz vein (P) underlie the study area. A total of 31 EM profiles was made to accurately locate prospective borehole sites in the field. Four main groups with different behavioural patterns were categorized from the EM profiles. Group 1 is characterized by a high density of positive (HDP) or a high density of negative (HDN) real and imaginary curves, Group 2 by parallel real and imaginary curves intersecting with negligible amplitude (PNA), Group 3 by frequent intersection of a high density of negative minima (FHN) real and imaginary curves, and Group 4 by separate and approximately parallel (SAP) real and imaginary curves. Qualitative pictures of the overburden thickness and the extent of fracturing have been proposed from these behavioural patterns. A comparison of the borehole yield with the overburden thickness and the level of fracturing shows that the borehole yield depends more on the fracture density than on the overburden thickness. The asymmetry of the anomaly was also found to be useful in the determination of the inclination of the conductor/fracture.
Wixted, John T; Mickes, Laura
2018-01-01
Receiver operating characteristic (ROC) analysis was introduced to the field of eyewitness identification 5 years ago. Since that time, it has been both influential and controversial, and the debate has raised an issue about measuring discriminability that is rarely considered. The issue concerns the distinction between empirical discriminability (measured by area under the ROC curve) vs. underlying/theoretical discriminability (measured by d' or variants of it). Under most circumstances, the two measures will agree about a difference between two conditions in terms of discriminability. However, it is possible for them to disagree, and that fact can lead to confusion about which condition actually yields higher discriminability. For example, if the two conditions have implications for real-world practice (e.g., a comparison of competing lineup formats), should a policymaker rely on the area-under-the-curve measure or the theory-based measure? Here, we illustrate the fact that a given empirical ROC yields as many underlying discriminability measures as there are theories that one is willing to take seriously. No matter which theory is correct, for practical purposes, the singular area-under-the-curve measure best identifies the diagnostically superior procedure. For that reason, area under the ROC curve informs policy in a way that underlying theoretical discriminability never can. At the same time, theoretical measures of discriminability are equally important, but for a different reason. Without an adequate theoretical understanding of the relevant task, the field will be in no position to enhance empirical discriminability.
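The two measures contrasted above can both be computed from the same identification data; here is a minimal sketch, with the equal-variance Gaussian model assumed for d' (one of several theories the passage alludes to):

```python
from statistics import NormalDist

def empirical_auc(pos, neg):
    """Empirical area under the ROC curve = P(score_pos > score_neg),
    with ties counting one half (Mann-Whitney formulation)."""
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def dprime(hit_rate, fa_rate):
    """Equal-variance Gaussian discriminability: d' = z(H) - z(F)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

The point of the passage is that `empirical_auc` is theory-free, whereas `dprime` changes meaning with every alternative signal-detection model one might substitute.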
Nuclear reactor descriptions for space power systems analysis
NASA Technical Reports Server (NTRS)
Mccauley, E. W.; Brown, N. J.
1972-01-01
For the small, high-performance reactors required for space electric applications, adequate neutronic analysis is of crucial importance, but in terms of computational time consumed, nuclear calculations probably yield the least amount of detail for a mission analysis study. It has been found that, after generating only a few designs of a reactor family in elaborate thermomechanical and nuclear detail, simple curve-fitting techniques can be used to assure the desired neutronic performance while the thermomechanical analysis is still performed in explicit detail. The resulting speed-up in computation time permits a broad, detailed examination of constraints by the mission analyst.
Spherical nanoindentation stress–strain curves
Pathak, Siddhartha; Kalidindi, Surya R.
2015-03-24
Although indentation experiments have long been used to measure the hardness and Young's modulus, the utility of this technique in analyzing the complete elastic–plastic response of materials under contact loading has only been realized in the past few years – mostly due to recent advances in testing equipment and analysis protocols. This paper provides a timely review of the recent progress made in this respect in extracting meaningful indentation stress–strain curves from the raw datasets measured in instrumented spherical nanoindentation experiments. These indentation stress–strain curves have produced highly reliable estimates of the indentation modulus and the indentation yield strength in the sample, as well as certain aspects of their post-yield behavior, and have been critically validated through numerical simulations using finite element models as well as direct in situ scanning electron microscopy (SEM) measurements on micro-pillars. Much of this recent progress was made possible through the introduction of a new measure of indentation strain and the development of new protocols to locate the effective zero-point of initial contact between the indenter and the sample in the measured datasets. As a result, this has led to a key advance in this field where it is now possible to reliably identify and analyze the initial loading segment in the indentation experiments.
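The load-depth-to-stress-strain conversion can be sketched under purely elastic Hertzian assumptions, with the contact radius estimated as a = sqrt(R*h). This is a simplified illustration; the protocols reviewed above additionally involve a measured zero-point correction and an elastic-stiffness-based contact radius, both omitted here:

```python
import math

def indentation_stress_strain(loads, depths, R):
    """Convert spherical indentation load-depth pairs (P in N, h in m) to
    (indentation stress, indentation strain) using
    sigma = P / (pi * a^2) and epsilon = (4 / (3*pi)) * h / a,
    with the contact radius approximated elastically as a = sqrt(R * h)."""
    out = []
    for P, h in zip(loads, depths):
        a = math.sqrt(R * h)                       # elastic contact radius
        stress = P / (math.pi * a * a)
        strain = 4.0 * h / (3.0 * math.pi * a)
        out.append((stress, strain))
    return out
```

A useful consistency check: for data generated by the Hertz law P = (4/3)·E_eff·sqrt(R)·h^(3/2), the stress/strain ratio from these definitions is exactly E_eff, so the initial segment of the curve comes out linear with the effective modulus as its slope.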
A geometric morphometric study into the sexual dimorphism of the human scapula.
Scholtz, Y; Steyn, M; Pretorius, E
2010-08-01
Sex determination is vital when attempting to establish identity from skeletal remains. Two approaches to sex determination exist: morphological and metrical. The aim of this paper was to use geometric morphometrics to study the shape of the scapula and its sexual dimorphism. The sample comprised 45 adult black male and 45 adult black female scapulae of known sex. The scapulae were photographed and 21 homologous landmarks were plotted for geometric morphometric analysis with the 'tps' series of programs, as well as the IMP package. Consensus thin-plate splines and vector plots for males and females were compared. The CVA and TwoGroup analyses indicated that significant differences exist between males and females. The lateral and medial borders of females are straighter, while the supraspinous fossa is more convexly curved than that of males. More than 91% of the females and 95% of the males were correctly assigned. Hotelling's T(2)-test yielded a significant p-value of 0.00039. In addition, 100 equidistant landmarks representing the curve only were also assigned. These, however, yielded considerably poorer results. It is concluded that it is better to use homologous landmarks rather than curve data only, as it is most probable that the shape of the outline relative to the fixed homologous points on the scapula is sexually dimorphic.
2013-01-01
Background Plasma glucose levels are important measures in medical care and research, and are often obtained from oral glucose tolerance tests (OGTT) with repeated measurements over 2–3 hours. It is common practice to use simple summary measures of OGTT curves. However, different OGTT curves can yield similar summary measures, and information of physiological or clinical interest may be lost. Our main aim was to extract information inherent in the shape of OGTT glucose curves, compare it with the information from simple summary measures, and explore the clinical usefulness of such information. Methods OGTTs with five glucose measurements over two hours were recorded for 974 healthy pregnant women in their first trimester. For each woman, the five measurements were transformed into smooth OGTT glucose curves by functional data analysis (FDA), a collection of statistical methods developed specifically to analyse curve data. The essential modes of temporal variation between OGTT glucose curves were extracted by functional principal component analysis. The resultant functional principal component (FPC) scores were compared with commonly used simple summary measures: fasting and two-hour (2-h) values, area under the curve (AUC) and simple shape index (2-h minus 90-min values, or 90-min minus 60-min values). Clinical usefulness of FDA was explored by regression analyses of glucose tolerance later in pregnancy. Results Over 99% of the variation between individually fitted curves was expressed in the first three FPCs, interpreted physiologically as “general level” (FPC1), “time to peak” (FPC2) and “oscillations” (FPC3). FPC1 scores correlated strongly with AUC (r=0.999), but less with the other simple summary measures (−0.42≤r≤0.79). FPC2 scores gave shape information not captured by simple summary measures (−0.12≤r≤0.40).
FPC2 scores, but not FPC1 nor the simple summary measures, discriminated between women who did and did not develop gestational diabetes later in pregnancy. Conclusions FDA of OGTT glucose curves in early pregnancy extracted shape information that was not identified by commonly used simple summary measures. This information discriminated between women with and without gestational diabetes later in pregnancy. PMID:23327294
Frøslie, Kathrine Frey; Røislien, Jo; Qvigstad, Elisabeth; Godang, Kristin; Bollerslev, Jens; Voldner, Nanna; Henriksen, Tore; Veierød, Marit B
2013-01-17
Plasma glucose levels are important measures in medical care and research, and are often obtained from oral glucose tolerance tests (OGTT) with repeated measurements over 2-3 hours. It is common practice to use simple summary measures of OGTT curves. However, different OGTT curves can yield similar summary measures, and information of physiological or clinical interest may be lost. Our main aim was to extract information inherent in the shape of OGTT glucose curves, compare it with the information from simple summary measures, and explore the clinical usefulness of such information. OGTTs with five glucose measurements over two hours were recorded for 974 healthy pregnant women in their first trimester. For each woman, the five measurements were transformed into smooth OGTT glucose curves by functional data analysis (FDA), a collection of statistical methods developed specifically to analyse curve data. The essential modes of temporal variation between OGTT glucose curves were extracted by functional principal component analysis. The resultant functional principal component (FPC) scores were compared with commonly used simple summary measures: fasting and two-hour (2-h) values, area under the curve (AUC) and simple shape index (2-h minus 90-min values, or 90-min minus 60-min values). Clinical usefulness of FDA was explored by regression analyses of glucose tolerance later in pregnancy. Over 99% of the variation between individually fitted curves was expressed in the first three FPCs, interpreted physiologically as "general level" (FPC1), "time to peak" (FPC2) and "oscillations" (FPC3). FPC1 scores correlated strongly with AUC (r=0.999), but less with the other simple summary measures (-0.42≤r≤0.79). FPC2 scores gave shape information not captured by simple summary measures (-0.12≤r≤0.40). FPC2 scores, but not FPC1 nor the simple summary measures, discriminated between women who did and did not develop gestational diabetes later in pregnancy.
FDA of OGTT glucose curves in early pregnancy extracted shape information that was not identified by commonly used simple summary measures. This information discriminated between women with and without gestational diabetes later in pregnancy.
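The functional PCA step in the two abstracts above can be sketched as ordinary PCA on discretized, centered curves, with power iteration recovering the leading component. This is a bare illustration of the idea, not the authors' FDA implementation (which first smooths the five measurements into continuous functions):

```python
def first_fpc(curves, iters=200):
    """First principal component of discretized curves (each a list of the
    same length): the leading eigenvector of the sample covariance matrix,
    found by power iteration, plus each curve's score on it."""
    n, m = len(curves), len(curves[0])
    mean = [sum(c[j] for c in curves) / n for j in range(m)]
    centered = [[c[j] - mean[j] for j in range(m)] for c in curves]
    cov = [[sum(c[i] * c[j] for c in centered) / (n - 1) for j in range(m)]
           for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    scores = [sum(c[j] * v[j] for j in range(m)) for c in centered]
    return v, scores
```

In the study above, the first component captured the "general level" of the glucose curve, and it was the scores on the second component ("time to peak") that discriminated later gestational diabetes.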
Gambarota, Giulio; Hitti, Eric; Leporq, Benjamin; Saint-Jalmes, Hervé; Beuf, Olivier
2017-01-01
Tissue perfusion measurements using intravoxel incoherent motion (IVIM) diffusion-MRI are of interest for investigations of liver pathologies. A confounding factor in the perfusion quantification is the partial volume between liver tissue and large blood vessels. The aim of this study was to assess and correct for this partial volume effect in the estimation of the perfusion fraction. MRI experiments were performed at 3 Tesla with a diffusion-MRI sequence at 12 b-values. Diffusion signal decays in liver were analyzed using the non-negative least squares (NNLS) method and the biexponential fitting approach. In some voxels, the NNLS analysis yielded a very fast-decaying component that was assigned to partial volume with the blood flowing in large vessels. Partial volume correction was performed by biexponential curve fitting, where the first data point (b = 0 s/mm²) was eliminated in voxels with a very fast-decaying component. Biexponential fitting with partial volume correction yielded parametric maps with perfusion fraction values smaller than biexponential fitting without partial volume correction. The results of the current study indicate that the NNLS analysis, in combination with biexponential curve fitting, makes it possible to correct for partial volume effects originating from blood flow in IVIM perfusion fraction measurements. Magn Reson Med 77:310-317, 2017. © 2016 Wiley Periodicals, Inc.
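A minimal sketch of the biexponential IVIM fit with the b = 0 point optionally dropped, as in the partial-volume correction described above; the starting values and bounds are illustrative assumptions, and the decay data are simulated:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_biexp(b, f, D_star, D):
    """IVIM biexponential signal model: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D)

def fit_perfusion_fraction(b_values, signal, drop_b0=False):
    """Fit (f, D*, D); optionally discard the b = 0 point, the correction strategy
    described in the abstract for voxels flagged by NNLS. Assumes b ascending, b[0] = 0."""
    b = np.asarray(b_values, float)
    s = np.asarray(signal, float) / signal[0]   # normalize to S0 before any point is dropped
    if drop_b0:
        b, s = b[b > 0], s[b > 0]
    p0 = [0.1, 0.05, 0.001]                     # assumed liver-typical starting values
    bounds = ([0.0, 0.003, 1e-5], [1.0, 1.0, 0.003])
    popt, _ = curve_fit(ivim_biexp, b, s, p0=p0, bounds=bounds)
    return popt                                 # f, D*, D

# simulated noiseless liver-like decay at 12 b-values (s/mm²)
b_vals = np.array([0, 10, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000], float)
sig = ivim_biexp(b_vals, 0.25, 0.05, 0.0012)
f_hat, Dstar_hat, D_hat = fit_perfusion_fraction(b_vals, sig)
```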
Inference regarding multiple structural changes in linear models with endogenous regressors☆
Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia
2012-01-01
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
A refined 'standard' thermal model for asteroids based on observations of 1 Ceres and 2 Pallas
NASA Technical Reports Server (NTRS)
Lebofsky, Larry A.; Sykes, Mark V.; Tedesco, Edward F.; Veeder, Glenn J.; Matson, Dennis L.
1986-01-01
An analysis of ground-based thermal IR observations of 1 Ceres and 2 Pallas in light of their recently determined occultation diameters and small amplitude light curves has yielded a new value for the IR beaming parameter employed in the standard asteroid thermal emission model which is significantly lower than the previous one. When applied to the reduction of thermal IR observations of other asteroids, this new value is expected to yield model diameters closer to actual values. The present formulation incorporates the IAU magnitude convention for asteroids that employs zero-phase magnitudes, including the opposition effect.
Recent Developments in the Formability of Aluminum Alloys
NASA Astrophysics Data System (ADS)
Banabic, Dorel; Cazacu, Oana; Paraianu, Liana; Jurco, Paul
2005-08-01
The paper presents a few recent contributions by the authors in the field of the formability of aluminum alloys. A new concept for calculating Forming Limit Diagrams (FLD) using the finite element method is presented. The article presents a new strategy for calculating both branches of an FLD, using a Hutchinson-Neale model implemented in a finite element code. The simulations have been performed with Abaqus/Standard. The constitutive model has been implemented using a UMAT subroutine. The plastic anisotropy of the sheet metal is described by the Cazacu-Barlat and the BBC2003 yield criteria. The theoretical predictions have been compared with the results given by the classical Hutchinson-Neale method and also with experimental data for different aluminum alloys. The comparison proves the capability of the finite element method to predict strain localization. A computer program for interactive calculation and graphical representation of different yield loci and forming limit diagrams has also been developed. The program is based on a Hutchinson-Neale model. Different yield criteria (Hill 1948, Barlat-Lian and BBC 2003) are implemented in this model. The program consists of three modules: a graphical interface for input, a module for the identification and visualization of the yield surfaces, and a module for calculating and visualizing the forming limit curves. A useful facility offered by the program is the possibility to perform sensitivity analysis for both the yield surface and the forming limit curves. The numerical results can be compared with experimental data, using the import/export facilities included in the program.
Long-term hydrological simulation based on the Soil Conservation Service curve number
NASA Astrophysics Data System (ADS)
Mishra, Surendra Kumar; Singh, Vijay P.
2004-05-01
Presenting a critical review of daily flow simulation models based on the Soil Conservation Service curve number (SCS-CN), this paper introduces a more versatile model based on the modified SCS-CN method, which specializes into seven cases. The proposed model was applied to the Hemavati watershed (area = 600 km²) in India and was found to yield satisfactory results in both calibration and validation. The model conserved monthly and annual runoff volumes satisfactorily. A sensitivity analysis of the model parameters was performed, including the effect of variation in storm duration. Finally, to investigate the model components, all seven variants of the modified version were tested for their suitability.
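For context, the classical SCS-CN rainfall-runoff relation that the modified method builds on can be sketched as follows (λ = 0.2 is the conventional initial-abstraction ratio; the paper's modified formulation is not reproduced here):

```python
def scs_cn_runoff(P, CN, lam=0.2):
    """Event runoff depth Q (mm) from rainfall P (mm) by the classical SCS-CN method.
    S is the potential maximum retention; Ia = lam*S is the initial abstraction."""
    S = 25400.0 / CN - 254.0      # mm, for CN on the standard 0-100 scale
    Ia = lam * S
    if P <= Ia:
        return 0.0                # all rainfall absorbed before runoff begins
    return (P - Ia) ** 2 / (P - Ia + S)

# e.g. a 100 mm storm on a watershed with CN = 75 yields roughly 41 mm of runoff
q = scs_cn_runoff(100.0, 75.0)
```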
Observation of an Opposition Surge on Triton
NASA Astrophysics Data System (ADS)
Herbert, B. D.; Buratti, B. J.; Schmidt, B.; Bauer, J. M.; Hicks, M. D.
2004-11-01
Ground-based observations of Neptune's moon Triton taken during the summers of 2000, 2003, and 2004 show a rotational light curve with a large amplitude. This is in stark contrast to data from the 1989 Voyager 2 flyby, which implies significant changes have occurred on Triton's surface since that time. The light curve has two notable regions, one that is significantly brighter than was observed in 1989 and one that is significantly darker. Data were also taken at a broad range of solar phase angles, allowing for a comprehensive study of the effects of phase on Triton's brightness. Analysis of the phase curve yields a solar phase coefficient close to zero for phases greater than 0.08 degrees, in close agreement with past studies that focused on higher phase angles. We also report a previously unrecognized opposition surge. Preliminary analysis suggests that the surge has different characteristics in the dark and bright regions currently visible on Triton, implying a non-homogeneous regolith. Funding for this project was provided in part by the New York Space Grant Consortium and the NASA Undergraduate Student Research Program.
Cumulative frequency distribution of past species extinctions
NASA Technical Reports Server (NTRS)
Raup, D. M.
1991-01-01
Analysis of Sepkoski's compendium of the time ranges of 30,000+ taxa yields a mean duration of 28.4 Ma for genera of fossil invertebrates. This converts to an average extinction rate of 3.5 percent per million years, or about one percent every 286,000 years. Using survivorship techniques, these estimates can be converted to the species level, yielding a Phanerozoic average of one percent species extinction every 40,000 years. Variation in extinction rates through time is far greater than the null expectation of a homogeneous birth-death model, and this reflects the well-known episodicity of extinction, ranging from a few large mass extinctions to so-called background extinction. The observed variation in rates can be used to construct a cumulative frequency distribution of extinction intensity, and this distribution, in the form of a kill curve for species, shows the expected waiting times between extinction events of a given intensity. The kill curve is an average description of the extinction record and does not imply any cause or causes of extinction. The kill curve shows, among other things, that only about five percent of total species extinctions in the Phanerozoic were involved in the five largest mass extinctions. The other 95 percent were distributed among large and small events not normally called mass extinctions. As an exploration of the possibly absurd proposition that most past extinctions were produced by the effects of large-body impact, the kill curve for species was mapped on the comparable distribution for comet and asteroid impacts. The result is a curve predicting the species kill for a given size of impacting object (expressed as crater size).
The results are reasonable in that impacts producing craters less than 30 km (diameter) cause negligible extinction but those producing craters 100-150 km (diameter) cause extinction of species in the range of 45-60 percent.
Laituri, Tony R; Henry, Scott; El-Jawahri, Raed; Muralidharan, Nirmal; Li, Guosong; Nutt, Marvin
2015-11-01
A provisional, age-dependent thoracic risk equation (or, "risk curve") was derived to estimate moderate-to-fatal injury potential (AIS2+), pertaining to men with responses gaged by the advanced mid-sized male test dummy (THOR50). The derivation involved two distinct data sources: cases from real-world crashes (e.g., the National Automotive Sampling System, NASS) and cases involving post-mortem human subjects (PMHS). The derivation was therefore more comprehensive, as NASS datasets generally skew towards younger occupants, and PMHS datasets generally skew towards older occupants. However, known deficiencies had to be addressed (e.g., the NASS cases had unknown stimuli, and the PMHS tests required transformation of known stimuli into THOR50 stimuli). For the NASS portion of the analysis, chest-injury outcomes for adult male drivers about the size of the THOR50 were collected from real-world, 11-1 o'clock, full-engagement frontal crashes (NASS, 1995-2012 calendar years, 1985-2012 model-year light passenger vehicles). The screening for THOR50-sized men involved application of a set of newly-derived "correction" equations for self-reported height and weight data in NASS. Finally, THOR50 stimuli were estimated via field simulations involving attendant representative restraint systems, and those stimuli were then assigned to corresponding NASS cases (n=508). For the PMHS portion of the analysis, simulation-based closure equations were developed to convert PMHS stimuli into THOR50 stimuli. Specifically, closure equations were derived for the four measurement locations on the THOR50 chest by cross-correlating the results of matched-loading simulations between the test dummy and the age-dependent, Ford Human Body Model. The resulting closure equations demonstrated acceptable fidelity (n=75 matched simulations, R2≥0.99). These equations were applied to the THOR50-sized men in the PMHS dataset (n=20). 
The NASS and PMHS datasets were combined and subjected to survival analysis with event-frequency weighting and arbitrary censoring. The resulting risk curve, a function of peak THOR50 chest compression and age, demonstrated acceptable fidelity for recovering the AIS2+ chest injury rate of the combined dataset (i.e., IR_dataset=1.97% vs. curve-based IR_dataset=1.98%). Additional sensitivity analyses showed that (a) binary logistic regression yielded a risk curve with nearly-identical fidelity, (b) there was only a slight advantage to combining the small-sample PMHS dataset with the large-sample NASS dataset, (c) use of the PMHS-based risk curve for risk estimation of the combined dataset yielded relatively poor performance (194% difference), and (d) when controlling for the type of contact (lab-consistent or not), the resulting risk curves were similar.
Type Ia supernova Hubble residuals and host-galaxy properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, A. G.; Aldering, G.; Aragon, C.
2014-03-20
Kim et al. introduced a new methodology for determining peak-brightness absolute magnitudes of type Ia supernovae from multi-band light curves. We examine the relation between their parameterization of light curves and Hubble residuals, based on photometry synthesized from the Nearby Supernova Factory spectrophotometric time series, with global host-galaxy properties. The K13 Hubble residual step with host mass is 0.013 ± 0.031 mag for a supernova subsample with data coverage corresponding to the K13 training; at ≪1σ, the step is not significant and lower than previous measurements. Relaxing the data coverage requirement, the Hubble residual step with host mass is 0.045 ± 0.026 mag for the larger sample; a calculation using the modes of the distributions, less sensitive to outliers, yields a step of 0.019 mag. The analysis of this article uses K13 inferred luminosities, as distinguished from previous works that use magnitude corrections as a function of SALT2 color and stretch parameters: steps at >2σ significance are found in SALT2 Hubble residuals in samples split by the values of their K13 x(1) and x(2) light-curve parameters. x(1) affects the light-curve width and color around peak (similar to the Δm₁₅ and stretch parameters), and x(2) affects colors, the near-UV light-curve width, and the light-curve decline 20-30 days after peak brightness. The novel light-curve analysis, increased parameter set, and magnitude corrections of K13 may be capturing features of SN Ia diversity arising from progenitor stellar evolution.
Lumber grade-yields for factory-grade northern red oak sawlogs
James G. Schroeder; Leland F. Hanks
1967-01-01
A report on the results of sawing 556 northern red oak sawlogs at four sawmills in West Virginia and Virginia, and the distribution of grades for the standard factory lumber produced. Tabular data give actual yields and curved grade-yield percentages.
Asteroid (367943) 2012 DA14 Flyby Spin State Analysis
NASA Astrophysics Data System (ADS)
Benson, Conor; Scheeres, Daniel J.; Moskovitz, Nicholas
2017-10-01
On February 15, 2013 asteroid 2012 DA14 experienced an extremely close Earth encounter, passing within 27700 km altitude. This flyby gave observers the chance to directly detect flyby-induced changes to the asteroid’s spin state and physical properties. The strongest shape and spin state constraints were provided by Goldstone delay-Doppler radar and visible-wavelength photometry taken after closest approach. These data indicated a roughly 40 m x 20 m object in non-principal axis (NPA) rotation. NPA states are described by two fundamental periods: Pφ is the average precession period of the long/short axis about the angular momentum vector and Pψ is the rotation period about the long/short axis. WindowCLEAN (Belton & Gandhi 1988) power spectrum analysis of the post-flyby light curve showed three prominent frequencies, two of which were 1:2 multiples of each other. Mueller et al. (2002) suggest peaks with this relationship are 1/Pφ and 2/Pφ, implying that Pφ = 6.35 hr. Likely values for Pψ were then 8.72, 13.95, or 23.39 hr. These Pφ, Pψ pairs yielded six candidate spin states in total, one LAM and one SAM per pair. Second- to fourth-order, two-dimensional Fourier series fits to the light curve were best for periods of 6.359 and 8.724 hr. The two other candidate pairs were also in the top ten fits. Inertia constraints of a roughly 2:1 uniform density ellipsoid eliminated two of the three SAM states. Using JPL Horizons ephemerides and Lambertian ellipsoids, simulated light curves were generated. The simulated and observed power spectra were then compared for all angular momentum poles and reasonable ellipsoid elongations. Only the Pφ = 6.359 hr and Pψ = 8.724 hr LAM state produced light curves consistent with the observed frequency structure. All other states were clearly incompatible. With two well-fitting poles found, phasing the initial attitude and angular velocity yielded plausible matches to the observed light curve.
Neglecting gravitational torques, neither pole agreed with the observed pre-flyby light curve, suggesting that the asteroid’s spin state changed during the encounter, consistent with numerical simulation predictions. The consistency between the pre-flyby observations and simulated states will be discussed.
DECIPHERING THERMAL PHASE CURVES OF DRY, TIDALLY LOCKED TERRESTRIAL PLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koll, Daniel D. B.; Abbot, Dorian S., E-mail: dkoll@uchicago.edu
2015-03-20
Next-generation space telescopes will allow us to characterize terrestrial exoplanets. To do so effectively it will be crucial to make use of all available data. We investigate which atmospheric properties can, and cannot, be inferred from the broadband thermal phase curve of a dry and tidally locked terrestrial planet. First, we use dimensional analysis to show that phase curves are controlled by six nondimensional parameters. Second, we use an idealized general circulation model to explore the relative sensitivity of phase curves to these parameters. We find that the feature of phase curves most sensitive to atmospheric parameters is the peak-to-trough amplitude. Moreover, except for hot and rapidly rotating planets, the phase amplitude is primarily sensitive to only two nondimensional parameters: (1) the ratio of dynamical to radiative timescales and (2) the longwave optical depth at the surface. As an application of this technique, we show how phase curve measurements can be combined with transit or emission spectroscopy to yield a new constraint for the surface pressure and atmospheric mass of terrestrial planets. We estimate that a single broadband phase curve, measured over half an orbit with the James Webb Space Telescope, could meaningfully constrain the atmospheric mass of a nearby super-Earth. Such constraints will be important for studying the atmospheric evolution of terrestrial exoplanets as well as characterizing the surface conditions on potentially habitable planets.
Quantifying Individual Brain Connectivity with Functional Principal Component Analysis for Networks.
Petersen, Alexander; Zhao, Jianyang; Carmichael, Owen; Müller, Hans-Georg
2016-09-01
In typical functional connectivity studies, connections between voxels or regions in the brain are represented as edges in a network. Networks for different subjects are constructed at a given graph density and are summarized by some network measure such as path length. Examining these summary measures for many density values yields samples of connectivity curves, one for each individual. This has led to the adoption of basic tools of functional data analysis, most commonly to compare control and disease groups through the average curves in each group. Such group differences, however, neglect the variability in the sample of connectivity curves. In this article, the use of functional principal component analysis (FPCA) is demonstrated to enrich functional connectivity studies by providing increased power and flexibility for statistical inference. Specifically, individual connectivity curves are related to individual characteristics such as age and measures of cognitive function, thus providing a tool to relate brain connectivity with these variables at the individual level. This individual level analysis opens a new perspective that goes beyond previous group level comparisons. Using a large data set of resting-state functional magnetic resonance imaging scans, relationships between connectivity and two measures of cognitive function (episodic memory and executive function) were investigated. The group-based approach was implemented by dichotomizing the continuous cognitive variable and testing for group differences, resulting in no statistically significant findings. To demonstrate the new approach, FPCA was implemented, followed by linear regression models with cognitive scores as responses, identifying significant associations of connectivity in the right middle temporal region with both cognitive scores.
Caccamo, M; Ferguson, J D; Veerkamp, R F; Schadt, I; Petriglieri, R; Azzaro, G; Pozzebon, A; Licitra, G
2014-01-01
As part of a larger project aiming to develop management evaluation tools based on results from test-day (TD) models, the objective of this study was to examine the effect of physical composition of total mixed rations (TMR) tested quarterly from March 2006 through December 2008 on milk, fat, and protein yield curves for 25 herds in Ragusa, Sicily. A random regression sire-maternal grandsire model was used to estimate variance components for milk, fat, and protein yields fitted on a full data set, including 241,153 TD records from 9,809 animals in 42 herds recorded from 1995 through 2008. The model included parity, age at calving, year at calving, and stage of pregnancy as fixed effects. Random effects were herd × test date, sire and maternal grandsire additive genetic effect, and permanent environmental effect modeled using third-order Legendre polynomials. Model fitting was carried out using ASREML. Afterward, for the 25 herds involved in the study, 9 particle size classes were defined based on the proportions of TMR particles on the top (19-mm) and middle (8-mm) screen of the Penn State Particle Separator. Subsequently, the model with estimated variance components was used to examine the influence of TMR particle size class on milk, fat, and protein yield curves. An interaction was included with the particle size class and days in milk. The effect of the TMR particle size class was modeled using a ninth-order Legendre polynomial. Lactation curves were predicted from the model while controlling for TMR chemical composition (crude protein content of 15.5%, neutral detergent fiber of 40.7%, and starch of 19.7% for all classes), to have pure estimates of particle distribution not confounded by nutrient content of TMR. We found little effect of class of particle proportions on milk yield and fat yield curves. Protein yield was greater for sieve classes with 10.4 to 17.4% of TMR particles retained on the top (19-mm) sieve. 
Optimal distributions different from those recommended may reflect regional differences based on climate and types and quality of forages fed. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Qiu, Hao; Mizutani, Tomoko; Saraya, Takuya; Hiramoto, Toshiro
2015-04-01
The four commonly used metrics for write stability were measured and compared on the same set of 2048 (2k) six-transistor (6T) static random access memory (SRAM) cells fabricated in a 65 nm bulk technology. A preferred metric should be effective for yield estimation and help predict the edge of stability. The results demonstrate that all metrics share the same worst SRAM cell. On the other hand, compared with the butterfly curve, which exhibits non-normality, and the write N-curve, in which no cell state flip occurs, the bit-line and word-line margins have good normality as well as almost perfect correlation. As a result, both the bit-line and word-line methods prove to be the preferred write stability metrics.
NASA Technical Reports Server (NTRS)
Boclair, J. W.; Braterman, P. S.
1999-01-01
Solutions containing di- and trivalent metal chlorides [M(II) = Mg2+, Zn2+, Co2+, Ni2+, Mn2+; M(III) = Al3+, Fe3+] were titrated with NaOH to yield hydrotalcite-like layered double hydroxides (LDH), [[M(II)]1-x[M(III)]x(OH)2][Cl]x·yH2O, by way of M(III) hydroxide/hydrous oxide intermediates. Analysis of the resultant titration curves yields nominal solubility constants for the LDH. The corresponding LDH stabilities are in the order Mg < Mn < Co ≈ Ni < Zn for M(II) and Al < Fe for M(III). The stability of LDH relative to the separate metal hydroxides/hydrous oxides is discussed.
Plastometric tests for plasticine as physical modelling material
NASA Astrophysics Data System (ADS)
Wójcik, Łukasz; Lis, Konrad; Pater, Zbigniew
2016-12-01
This paper presents results of plastometric tests for plasticine used as a material for physical modelling of metal forming processes. The tests were conducted by compressing cylindrical billets between flat dies at various temperatures. The aim of the research was to compare yield stresses and the course of the material flow curves. Tests were made on black and white plasticine. On the basis of the experimental results, the influence of changes in the forming parameters on the course of the flow curves was determined. The sensitivity of the yield stress to strain was evaluated for forging temperatures in the range of 0°C to 20°C and strain rates of ε̇ = 0.563, 0.0563, and 0.0056 s⁻¹. The experimental curves obtained in the compression tests were described by constitutive equations, and on the basis of the results the function that best describes the flow curves was chosen.
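The abstract does not specify which constitutive equations were fitted; as an illustrative sketch, a simple Hollomon hardening law can be fitted to a flow curve by least squares (synthetic data with assumed plasticine-like magnitudes):

```python
import numpy as np
from scipy.optimize import curve_fit

def hollomon(strain, K, n):
    """Hollomon hardening law, sigma = K * strain**n — one common constitutive
    form for flow curves, used here purely for illustration."""
    return K * strain ** n

# synthetic compression-test flow curve (stress in kPa, magnitudes assumed)
strain = np.linspace(0.05, 0.8, 16)
stress = 120.0 * strain ** 0.25
(K_hat, n_hat), _ = curve_fit(hollomon, strain, stress, p0=[100.0, 0.2])
```

The same `curve_fit` call works unchanged for other candidate flow-curve functions, so competing constitutive forms can be compared by their residuals.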
A Microlensing Analysis of the Central Engine in the Lensed Quasar WFI J2033-4723
NASA Astrophysics Data System (ADS)
Hyer, Gregory Edward; Morgan, Christopher; Bonvin, Vivien; Courbin, Fredric; Kochanek, Christopher; Falco, Emilio
2018-01-01
We report a detection of uncorrelated variability in 12 seasons of optical light curves of the gravitationally lensed quasar WFI J2033-4723 from the 1.3 m SMARTS telescope at CTIO and the 1.5 m EULER telescope at La Silla. We analyzed this variability using the Monte Carlo technique of Kochanek (2004) to yield the first measurement of the size of this quasar’s accretion disk.
Kessler, E C; Bruckmaier, R M; Gross, J J
2014-01-01
In dairy cows, milk yield increases rapidly after parturition until a peak at around wk 6 of lactation. However, the description of the shape of the lactation curve is commonly based on weekly average milk yields. For a more detailed analysis of the milk production curve from the very beginning of lactation, including the colostral period and the effect of colostrum yield on further lactational performance, the first 10 milkings after parturition, daily milk yields from d 1 to 28 of lactation, and the cumulative milk production on d 100 to 305 of lactation were investigated in 17 primiparous and 39 multiparous cows milked twice daily. Milk yield at the first milking after parturition (colostrum) ranged from 1.3 to 20.7 kg (Δ = 19.4 kg) in multiparous and from 1.8 to 10.9 kg in primiparous animals (Δ = 9.1 kg). At the tenth milking, milk production ranged from 9.2 to 21.5 kg (Δ = 12.3 kg) in multiparous and from 7.0 to 15.2 kg (Δ = 8.2 kg) in primiparous animals. Immediately after parturition, daily milk production increased rapidly, but after approximately 1 wk in lactation, the slope of the daily milk production curve flattened and continued more linearly. A nonlinear regression equation was used to determine this timely change, which occurred earlier in primiparous (d 6.9 ± 0.3) than in multiparous cows (d 8.2 ± 0.2). The correlation between the amount of first colostrum and milk production during further lactation decreased from 0.47 on d 5 to 0.32 on d 14. In multiparous cows, the correlation between total milk production of the previous 305-d standard lactation and the amount of first colostrum was not significant (correlation = 0.29), whereas the correlation with the daily production increased from 0.45 on d 5 to 0.69 on d 14. However, in primiparous animals, correlations between first-colostrum yield and daily milk yields up to d 28 of lactation were not significant, possibly due to the smaller sample size compared with multiparous animals.
First-colostrum yield and cumulative milk production of 100, 200, and 305 lactation days were not significantly correlated in multiparous and primiparous cows. In conclusion, the milk production during the first few milkings is widely independent from the overall production level of a cow. Potentially, genetic selection toward lower milk yield during the very first days after parturition at a simultaneously high lactational performance may be a tool to ensure sufficient colostrum quality and to reduce the metabolic load around parturition. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
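The nonlinear regression used above to locate the change in the slope of the early lactation curve is not given in the abstract; a common choice for lactation-curve fitting, shown here purely as a hedged example on synthetic data, is Wood's incomplete-gamma model:

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood's lactation curve, y = a * t**b * exp(-c*t): a scales the level,
    b governs the rise to peak, c the post-peak decline. Its peak is at t = b/c."""
    return a * t ** b * np.exp(-c * t)

# synthetic daily milk yields (kg) over a 305-d lactation, parameters assumed
days = np.arange(1, 306, dtype=float)
milk = wood(days, 15.0, 0.25, 0.004)
(a_hat, b_hat, c_hat), _ = curve_fit(wood, days, milk, p0=[10.0, 0.2, 0.003])
peak_day = b_hat / c_hat    # analytic peak day of the fitted curve
```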
Measurement of Moisture Sorption Isotherm by DVS Hydrosorb
NASA Astrophysics Data System (ADS)
Kurniawan, Y. R.; Purwanto, Y. A.; Purwanti, N.; Budijanto, S.
2018-05-01
Artificial rice made from corn flour, sago, glycerol monostearate, vegetable oil, water and jelly powder was developed by an extrusion method through the process stages of material mixing, extrusion, drying, packaging and storage. Sorption isotherm information on food ingredients is used to design and optimize drying, packaging and storage processes. The water sorption isotherm of the artificial rice was measured using a humidity-generating method with a Dynamic Vapor Sorption device, whose equilibration time is about 10 to 100 times faster than that of the saturated salt slurry method. The relative humidity is modified automatically by adjusting the proportion of the mixture of dry air and water-saturated air. This paper aims to develop the moisture sorption isotherm using the Hydrosorb 1000 Water Vapor Sorption Analyzer. Sample preparation was conducted by degassing the sample in a heating mantle at 65°C. The analysis parameters to be specified were the determination of Po, the sample data, the selection of water activity points, and the equilibrium conditions. The selected analysis temperatures were 30°C and 45°C. Each analysis lasted 45 hours, and adsorption and desorption curves were obtained. The selected bottom water activity point of 0.05 at 30°C and 45°C yielded adsorbed masses of 0.1466 mg/g and 0.3455 mg/g, respectively, whereas the selected top water activity point of 0.95 at 30°C and 45°C yielded adsorbed masses of 190.8734 mg/g and 242.4161 mg/g, respectively. Moisture sorption isotherm measurements of the artificial rice made from corn flour at temperatures of 30°C and 45°C using the Hydrosorb showed that the moisture sorption curve approximates a sigmoid-shaped type II curve commonly found in corn-based (high-carbohydrate) foodstuffs.
A simple and rapid method to isolate purer M13 phage by isoelectric precipitation.
Dong, Dexian; Sutaria, Sanjana; Hwangbo, Je Yeol; Chen, P
2013-09-01
M13 virus (phage) has been extensively used in phage display technology and nanomaterial templating. Our research aimed to use M13 phage to template sulfur nanoparticles for making lithium ion batteries. Traditional methods for harvesting M13 phage from Escherichia coli employ polyethylene glycol (PEG)-based precipitation, and the yield is usually measured by plaque counting. With this method, PEG residue is present in the M13 phage pellet and is difficult to eliminate. To resolve this issue, a method based on isoelectric precipitation was introduced and tested. The isoelectric method resulted in the production of purer phage with a higher yield, compared to the traditional PEG-based method. There is no significant variation in infectivity of the phage prepared using isoelectric precipitation, and the dynamic light scattering data indirectly prove that the phage structure is not damaged by pH adjustment. To maximize phage production, a dry-weight yield curve of M13 phage for various culture times was produced. The yield curve is proportional to the growth curve of E. coli. On a 200-mL culture scale, 0.2 g L(-1) M13 phage (dry-weight) was produced by the isoelectric precipitation method.
Rekhi, Gurpreet; Ng, Wai Yee; Lee, Jimmy
2018-03-01
There is a pressing need for reliable and valid rating scales to assess and measure depression in individuals at ultra-high risk (UHR) of psychosis. The aim of this study was to examine the clinical utility of the Calgary Depression Scale for Schizophrenia (CDSS) in individuals at UHR of psychosis. A total of 167 individuals at UHR of psychosis participated in this study. The Structured Clinical Interview for DSM-IV Axis I Disorders, the CDSS, the Beck Anxiety Inventory, and the Global Assessment of Functioning were administered. A receiver operating characteristic (ROC) curve analysis and factor analyses were performed, Cronbach's alpha was computed, and correlations between CDSS factor scores and other clinical variables were examined. The median CDSS total score was 5.0 (IQR 1.0-9.0). The area under the ROC curve was 0.886 and Cronbach's alpha was 0.855. A score of 7 on the CDSS yielded the highest sensitivity and specificity in detecting depression in UHR individuals. Exploratory factor analysis of the CDSS yielded two factors, depression-hopelessness and self-depreciation-guilt, which were confirmed by confirmatory factor analysis. Further analysis showed that the depression-hopelessness factor predicted functioning, whereas the self-depreciation-guilt factor was related to the severity of the attenuated psychotic symptoms. In conclusion, the CDSS demonstrates good psychometric properties when used to evaluate depression in individuals at UHR of psychosis. Our results also support a two-factor structure of the CDSS in UHR individuals. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
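Selecting the cutoff with "the highest sensitivity and specificity", as done for the CDSS score of 7, typically means maximizing Youden's J (sensitivity + specificity - 1) along the empirical ROC curve. A minimal sketch with hypothetical scores and diagnoses (not the study's data):

```python
import numpy as np

def roc_points(scores, labels):
    """Empirical ROC operating points, sweeping each unique score as a cutoff
    (score >= cutoff flags depression)."""
    cuts = np.unique(scores)[::-1]                       # descending cutoffs
    pos, neg = labels == 1, labels == 0
    tpr = np.array([(scores[pos] >= c).mean() for c in cuts])
    fpr = np.array([(scores[neg] >= c).mean() for c in cuts])
    return cuts, tpr, fpr

# hypothetical CDSS totals: label 1 = depressed, 0 = not depressed
scores = np.array([2, 3, 5, 6, 8, 9, 10, 12, 1, 2, 3, 4, 5, 6, 7, 0])
labels = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])

cuts, tpr, fpr = roc_points(scores, labels)
best_cut = cuts[np.argmax(tpr - fpr)]                    # Youden's J maximizer

# trapezoidal area under the ROC curve, with the (0, 0) corner prepended
f = np.concatenate([[0.0], fpr])
t = np.concatenate([[0.0], tpr])
auc = float(np.sum((f[1:] - f[:-1]) * (t[1:] + t[:-1]) / 2))
```

Here `best_cut` plays the role of the reported cutoff of 7, and `auc` the role of the reported 0.886.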
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Grafton, S. B.; Lutze, F. H.
1981-01-01
Dynamic stability derivatives are evaluated on the basis of rolling-flow, curved-flow and snaking tests. Attention is given to the hardware associated with curved-flow, rolling-flow and oscillatory pure-yawing wind-tunnel tests. It is found that the snaking technique, when combined with linear- and forced-oscillation methods, yields an important method for evaluating beta derivatives for current configurations at high angles of attack. Since the rolling flow model is fixed during testing, forced oscillations may be imparted to the model, permitting the measurement of damping and cross-derivatives. These results, when coupled with basic rolling-flow or rotary-balance data, yield a highly accurate mathematical model for studies of incipient spin and spin entry.
Human Fear Chemosignaling: Evidence from a Meta-Analysis.
de Groot, Jasper H B; Smeets, Monique A M
2017-10-01
Alarm pheromones are widely used in the animal kingdom. Notably, there are 26 published studies (N = 1652) highlighting a human capacity to communicate fear, stress, and anxiety via body odor from one person (66% males) to another (69% females). The question is whether the findings of this literature reflect a true effect, and what the average effect size is. These questions were answered by combining traditional meta-analysis with two novel meta-analytical tools, p-curve analysis and p-uniform, techniques that indicate whether findings are likely to reflect a true effect based on the distribution of p-values. A traditional random-effects meta-analysis yielded a small-to-moderate effect size (Hedges' g: 0.36, 95% CI: 0.31-0.41), p-curve analysis showed evidence diagnostic of a true effect (ps < 0.0001), and there was no evidence of publication bias. This meta-analysis did not assess the internal validity of the included studies; nonetheless, the combined results illustrate the statistical robustness of a field in human olfaction dealing with the human capacity to communicate certain emotions (fear, stress, anxiety) via body odor. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
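A random-effects pooled effect such as the reported Hedges' g of 0.36 is commonly computed with the DerSimonian-Laird estimator of between-study variance. A sketch with hypothetical per-study effects and standard errors (the meta-analysis method is assumed to be DerSimonian-Laird; the abstract does not name the estimator):

```python
import numpy as np

def dersimonian_laird(g, se):
    """Random-effects pooled effect using the DerSimonian-Laird tau^2 estimate."""
    w = 1.0 / se**2
    fixed = np.sum(w * g) / np.sum(w)                   # fixed-effect pooled estimate
    q = np.sum(w * (g - fixed) ** 2)                    # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)             # between-study variance
    w_star = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_star * g) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se_pooled, tau2

# hypothetical Hedges' g values and standard errors from k = 5 studies
g = np.array([0.30, 0.42, 0.25, 0.50, 0.35])
se = np.array([0.10, 0.12, 0.08, 0.15, 0.11])

pooled, se_pooled, tau2 = dersimonian_laird(g, se)
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```

The 95% confidence interval `ci` corresponds to the reported "0.36, 95% CI: 0.31-0.41" style of summary.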
Evolution of Starspots on LO Pegasi
NASA Astrophysics Data System (ADS)
Harmon, Robert; Bloodgood, Felise; Martin, Alec; Pellegrin, Kyle
2018-01-01
LO Pegasi is a young solar analog, a K main-sequence star that rotates with a period of 10.1538 hr. The rapid rotation yields a strong stellar dynamo associated with large starspots on the surface, which are regions where the magnetic field inhibits the convective transport of energy from below, so that the spots are cooler and thus darker than the surrounding photosphere. The star thus exhibits rotational modulation of its light curve as the starspots are carried into and out of view of Earth. CCD images of LO Peg were acquired at Perkins Observatory in Delaware, OH through standard B, V, R, and I photometric filters from 2017 June 1 to July 20. After subtracting dark frames and flat fielding the images, differential aperture photometry was performed to yield light curves through each of the four filters. The resulting light curves were then analyzed with the Light-curve Inversion program created by one of us (Harmon) to produce surface maps. Our observations indicated that LO Pegasi's light curve changed in both amplitude and shape between 2017 June and July, while its maximum brightness did not change. We present maps corresponding to these two distinct light curves, along with maps for data acquired from 2006-2016.
Nanomechanical properties of phospholipid microbubbles.
Buchner Santos, Evelyn; Morris, Julia K; Glynos, Emmanouil; Sboros, Vassilis; Koutsos, Vasileios
2012-04-03
This study uses atomic force microscopy (AFM) force-deformation (F-Δ) curves to investigate for the first time the Young's modulus of a phospholipid microbubble (MB) ultrasound contrast agent. The stiffness of the MBs was calculated from the gradient of the F-Δ curves, and the Young's modulus of the MB shell was calculated by employing two different mechanical models based on the Reissner and elastic membrane theories. We found that the relatively soft phospholipid-based MBs behave inherently differently from stiffer, polymer-based MBs [Glynos, E.; Koutsos, V.; McDicken, W. N.; Moran, C. M.; Pye, S. D.; Ross, J. A.; Sboros, V. Langmuir 2009, 25 (13), 7514-7522] and that elastic membrane theory is the most appropriate of the models tested for evaluating the Young's modulus of the phospholipid shell, agreeing with values available for living cell membranes, supported lipid bilayers, and synthetic phospholipid vesicles. Furthermore, we show that AFM F-Δ curves in combination with a suitable mechanical model can assess the shell properties of phospholipid MBs. The "effective" Young's modulus of the whole bubble was also calculated by analysis using Hertz theory. This analysis yielded values which are in agreement with results from studies which used Hertz theory to analyze similar systems such as cells.
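Extracting an "effective" Young's modulus from an F-Δ curve via Hertz theory amounts to fitting F = (4/3)·E_eff·√R·Δ^{3/2}. A minimal sketch with hypothetical bubble radius, modulus, and deformation range (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def hertz(delta, e_eff, radius=1e-6):
    # Hertzian contact: F = (4/3) * E_eff * sqrt(R) * delta^(3/2)
    return (4.0 / 3.0) * e_eff * np.sqrt(radius) * delta ** 1.5

# hypothetical F-Δ data for a microbubble of radius 1 um
delta = np.linspace(1e-9, 50e-9, 25)        # deformation (m)
force = hertz(delta, 1e5)                   # assumed "effective" modulus: 100 kPa

# with a one-element p0, curve_fit fits only e_eff and keeps the default radius
popt, _ = curve_fit(hertz, delta, force, p0=[1e4])
e_fit = popt[0]
```

In practice the fitted exponent regime (linear vs. 3/2-power) is what distinguishes the membrane/Reissner shell response from Hertzian whole-bubble deformation.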
Analysis of Self-Associating Proteins by Singular Value Decomposition of Solution Scattering Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Tim E.; Craig, Bruce A.; Kondrashkina, Elena
2008-07-08
We describe a method by which a single experiment can reveal both the association model (pathway and constants) and low-resolution structures of a self-associating system. Small-angle scattering data are collected from solutions at a range of concentrations. These scattering curves are mass-weighted linear combinations of the scattering from each oligomer. Singular value decomposition of the data yields a set of basis vectors from which the scattering curve for each oligomer is reconstructed using coefficients that depend on the association model. A search identifies the association pathway and constants that provide the best agreement between reconstructed and observed data. Using simulated data with realistic noise, our method finds the correct pathway and association constants. Depending on the simulation parameters, reconstructed curves for each oligomer differ from the ideal by 0.05-0.99% in median absolute relative deviation. The reconstructed scattering curves are fundamental to further analysis, including interatomic distance distribution calculation and low-resolution ab initio shape reconstruction of each oligomer in solution. This method can be applied to x-ray or neutron scattering data from small angles to moderate (or higher) resolution. Data can be taken under physiological conditions, or particular conditions (e.g., temperature) can be varied to extract fundamental association parameters (ΔH_ass, S_ass).
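The core idea, that SVD of a concentration series reveals how many oligomeric species contribute, can be sketched with simulated curves. The Gaussian "form factors" and mixing fractions below are hypothetical stand-ins, not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
q = np.linspace(0.01, 0.3, 200)                     # scattering-vector grid

# hypothetical stand-ins for monomer and dimer scattering curves (Guinier-like)
monomer = np.exp(-(20.0 * q) ** 2 / 3.0)
dimer = 2.0 * np.exp(-(28.0 * q) ** 2 / 3.0)

# each observed curve is a mass-weighted combination set by the association model
frac = np.array([0.9, 0.7, 0.5, 0.3, 0.1])          # monomer mass fraction per sample
data = np.array([f * monomer + (1.0 - f) * dimer for f in frac])
data += 1e-4 * rng.standard_normal(data.shape)      # realistic measurement noise

u, s, vt = np.linalg.svd(data, full_matrices=False)
n_oligomers = int(np.sum(s > 1e-2 * s[0]))          # significant singular values
```

The significant right singular vectors in `vt` are the basis from which per-oligomer curves are reconstructed once an association model supplies the mixing coefficients.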
Chovalopoulou, Maria-Eleni; Valakos, Efstratios D; Manolis, Sotiris K
2016-06-01
The aim of this study is to assess sexual dimorphism of adult crania in the vault and midsagittal curve of the vault using three-dimensional geometric morphometric methods. The study sample consisted of 176 crania of known sex (94 males, 82 females) belonging to individuals who lived during the 20th century in Greece. The three-dimensional co-ordinates of 31 ecto-cranial landmarks and 30 semi-landmarks were digitized using a MicroScribe 3DX contact digitizer. Generalized Procrustes analysis (GPA) was used to obtain size and shape variables for statistical analysis. Shape, size and form analyses were carried out by logistic regression and three discriminant function analyses. Results indicate that there are shape differences between sexes. Females in the region of the parietal bones are narrower and the axis forming the frontal and occipital bones is more elongated; the frontal bone is more vertical. Sex-specific shape differences give better classification results in the vault (79%) compared with the midsagittal curve of the neurocranium (68.8%). Size alone yielded better results for cranial vault (82%), while for the midsagittal curve of the vault the result is poorer (68.1%). As anticipated, the classification accuracy improves when both size and shape are combined (89.2% for vault, and 79.4% for midsagittal curve of the vault). These latter findings imply that, in contrast to the midsagittal curve of the neurocranium, the shape of the cranial vault can be used as an indicator of sex in the modern Greek population. Copyright © 2016. Published by Elsevier GmbH.
LSS 2018: A double-lined spectroscopic binary central star with an extremely large reflection effect
NASA Technical Reports Server (NTRS)
Drilling, J. S.
1985-01-01
LSS 2018, the central star of the planetary nebula DS1, was found to be a double-lined spectroscopic binary with a period of 8.571 hours. Light variations with the same period were observed in U, B, and V; in the wavelength regions covered by the two IUE cameras; and in the strength of the C III 4647 emission line. The light variations can be accurately predicted by a simple reflection effect, and an analysis of the light curves yields the angular diameter and effective temperature of the primary, the radii of the two stars in terms of their separation, and the inclination of the system. Analysis of the radial velocities then yields the masses of the two stars, their separation, the distance of the system, the absolute magnitude of the primary, and the size of the nebula.
Cheng, Hsien C
2009-01-01
Half-life and its derived pharmacokinetic (PK) parameters are calculated on the assumption that the terminal phase of drug disposition proceeds at a constant rate. In reality, this assumption may not hold, and a new method is needed for analyzing PK parameters when disposition does not follow first-order kinetics. Cumulative area under the concentration-time curve (AUC) is plotted against time to yield a hyperbolic (or sigmoidal) AUC-time relationship curve, which is then analyzed by the Hill equation to yield AUC(inf), the times to achieving 50% (T(AUC50%)) and 90% (T(AUC90%)) of AUC(inf), and the Hill slope. From these parameters, the AUC-time relationship curve can be reconstructed and the projected plasma concentration can be calculated for any time point. The time at which cumulative AUC reaches 90% (T(AUC90%)) can be used as an indicator of how fast a drug is cleared. Clearance is calculated in the traditional manner (i.v. dose/AUC(inf)), and the volume of distribution is proposed to be calculated at T(AUC50%) (0.5 i.v. dose/plasma concentration at T(AUC50%)). This method of estimating AUC is applicable to both i.v. and oral data. It is concluded that the Hill equation can be used as an alternative method for estimating AUC and analyzing PK parameters when disposition does not follow first-order kinetics, and T(AUC90%) is proposed as an indicator of how fast a drug is cleared from the system.
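The described fit can be sketched directly: model cumulative AUC as a Hill function of time, then solve for T(AUC90%) in closed form. The time points and parameter values below are hypothetical, chosen only to illustrate the calculation:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_auc(t, auc_inf, t50, n):
    # cumulative AUC vs time as a Hill (hyperbolic/sigmoidal) function
    return auc_inf * t ** n / (t50 ** n + t ** n)

# hypothetical cumulative-AUC samples at increasing times (noiseless sketch)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])      # h
auc = hill_auc(t, 100.0, 3.0, 1.2)

(auc_inf, t50, n), _ = curve_fit(hill_auc, t, auc, p0=[80.0, 2.0, 1.0])

# T(AUC90%): solve 0.9 = t^n / (t50^n + t^n)  =>  t90 = t50 * 9**(1/n)
t_auc90 = t50 * 9.0 ** (1.0 / n)
```

Here `t50` is T(AUC50%), and clearance would follow from the fitted `auc_inf` as dose / auc_inf.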
Onset of Plasticity via Relaxation Analysis (OPRA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandey, Amit; Wheeler, Robert; Shyam, Amit
2016-03-17
In crystalline metals and alloys, plasticity occurs through the movement of mobile dislocations, and the yield stress for engineering applications is traditionally quantified in terms of strain. The onset of irreversible plasticity, or "yielding", is generally identified by a deviation from linearity in the stress-strain plot or by a standard convention such as a 0.2% offset strain relative to the linear elastic response. In the present work, we introduce a new methodology for determining the true yield point based on stress relaxation. We show experimentally that this determination is self-consistent in nature and, as such, provides an objective observation of the very onset of plastic flow. Finally, our designation of yielding is no longer tied to the shape of the stress-strain curve but instead reflects the earliest signature of the activation of concerted irreversible dislocation motion in a test specimen under increasing load.
Davies, Frederick S.; Flore, James A.
1986-01-01
Roots of 1.5-year-old `Woodard' rabbiteye blueberry plants (Vaccinium ashei Reade) were flooded in containers or maintained at container capacity over a 5-day period. Carbon assimilation and stomatal and residual conductances were monitored on one fully expanded shoot per plant using an open-flow gas analysis system. Quantum yield was calculated from light response curves. Carbon assimilation and quantum yield of flooded plants decreased to 64 and 41% of control values, respectively, after 1 day of flooding and continued decreasing to 38 and 27% after 4 days. Stomatal and residual conductances to CO2 also decreased after 1 day of flooding compared with those of unflooded plants, with residual conductance severely limiting carbon assimilation after 4 days of flooding. Stomatal opening occurred in 75 to 90 minutes, and the rate of opening was unaffected by flooding. PMID:16664791
Experimental Investigation on the Mechanical Instability of Superelastic NiTi Shape Memory Alloy
NASA Astrophysics Data System (ADS)
Xiao, Yao; Zeng, Pan; Lei, Liping
2016-09-01
In this paper, primary attention is paid to the mechanical instability of superelastic NiTi shape memory alloy (SMA) during localized forward transformation at different temperatures. By inhibiting the localized phase transformation, we obtain the up-down-up mechanical response of NiTi SMA, which is closely related to the intrinsic material softening during localized martensitic transformation. Furthermore, the material parameters of the up-down-up stress-strain curve are extracted so that this database can be used for simulation and for validation of theoretical analyses. It is found that during forward transformation, the upper yield stress, lower yield stress, Maxwell stress, and nucleation stress of NiTi SMA all depend linearly on temperature. The relation between nucleation stress and temperature can be explained by the well-known Clausius-Clapeyron equation, whereas the relation between the upper/lower yield stresses and temperature lacks a theoretical explanation and needs further investigation.
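The reported linear stress-temperature dependence means each characteristic stress can be summarized by a single slope, the Clausius-Clapeyron coefficient dσ/dT. A minimal sketch with hypothetical (exactly linear) nucleation-stress data, not the paper's measurements:

```python
import numpy as np

# hypothetical nucleation stresses (MPa) at several test temperatures (K),
# constructed to be exactly linear for illustration
temperature = np.array([283.0, 293.0, 303.0, 313.0, 323.0])
stress = np.array([310.0, 375.0, 440.0, 505.0, 570.0])

# the slope d(sigma)/dT is the Clausius-Clapeyron coefficient of the transformation
slope, intercept = np.polyfit(temperature, stress, 1)
```

The same one-line fit applied to upper yield, lower yield, and Maxwell stresses yields the four temperature coefficients the abstract describes.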
ERTS-1 data collection systems used to predict wheat disease severities. [Riley County, Kansas
NASA Technical Reports Server (NTRS)
Kanemasu, E. T.; Schimmelpfenning, H.; Choy, E. C.; Eversmeyer, M. G.; Lenhert, D.
1974-01-01
The author has identified the following significant results. The feasibility of using the data collection system on ERTS-1 to predict wheat leaf rust severity and the resulting yield loss was tested. Ground-based data collection platforms (DCPs), placed in two commercial wheat fields in Riley County, Kansas, transmitted to the satellite such meteorological information as maximum and minimum temperature, relative humidity, and hours of free moisture. Meteorological data received from the two DCPs from April 23 to 29 were used to estimate the disease progress curve. Values from the curve were used to predict the percentage decrease in wheat yield resulting from leaf rust. The actual decrease in yield was obtained by applying a zinc and maneb spray (5.6 kg/ha) to control leaf rust, then comparing yields of the controlled (healthy) and noncontrolled (rusted) areas. In each field a 9% decrease in yield was predicted from the DCP-derived data; actual decreases were 12% and 9%.
Ultrafast current imaging by Bayesian inversion
Somnath, Suhas; Law, Kody J. H.; Morozovska, Anna; Maksymovych, Petro; Kim, Yunseok; Lu, Xiaoli; Alexe, Marin; Archibald, Richard K; Kalinin, Sergei V; Jesse, Stephen; Vasudevan, Rama K
2016-01-01
Spectroscopic measurement of current-voltage curves in scanning probe microscopy is the earliest and one of the most common methods for characterizing local energy-dependent electronic properties, providing insight into superconductive, semiconductor, and memristive behaviors. However, the quasistatic nature of these measurements renders them extremely slow. Here, we demonstrate a fundamentally new approach to dynamic spectroscopic current imaging via full information capture and Bayesian inference analysis. This "general-mode I-V" method allows rates three orders of magnitude faster than presently possible. The technique is demonstrated by acquiring I-V curves in ferroelectric nanocapacitors, yielding >100,000 I-V curves in <20 minutes. This allows detection of switching currents in the nanoscale capacitors, as well as determination of the dielectric constant. These experiments show the potential of full information capture and Bayesian inference for extracting physics from rapid I-V measurements, and the approach can be used for transport measurements in both atomic force and scanning tunneling microscopy. The data were analyzed using pycroscopy, an open-source Python package available at https://github.com/pycroscopy/pycroscopy
Export product diversification and the environmental Kuznets curve: evidence from Turkey.
Gozgor, Giray; Can, Muhlis
2016-11-01
Countries try to stabilize the demand for energy on one hand and to sustain economic growth on the other, but worsening global warming and climate change have put pressure on them. This paper estimates the environmental Kuznets curve for Turkey over the period 1971-2010, both in the short and the long run. For this purpose, a unit root test with structural breaks and a cointegration analysis with multiple endogenous structural breaks are used. The effects of energy consumption and export product diversification on CO2 emissions are also controlled for in the dynamic empirical models. It is observed that the environmental Kuznets curve hypothesis is valid in Turkey in both the short run and the long run. A positive effect of energy consumption on CO2 emissions is also obtained in the long run. In addition, it is found that greater product diversification of exports yields higher CO2 emissions in the long run. Inferences and policy implications are also discussed.
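The environmental Kuznets curve hypothesis is usually tested by regressing log emissions on log income and its square; the hypothesis holds when the squared term is negative, giving an inverted U with a turning point at -b1/(2·b2). A sketch with synthetic data (the paper's actual specification includes structural breaks and control variables omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical log income series and log CO2 emissions with an inverted-U shape
ln_gdp = np.linspace(7.0, 10.0, 40)
ln_co2 = -0.5 * ln_gdp**2 + 9.0 * ln_gdp - 30.0 + 0.05 * rng.standard_normal(40)

# EKC regression: ln(CO2) = b0 + b1*ln(GDP) + b2*ln(GDP)^2; EKC holds if b2 < 0
X = np.column_stack([np.ones_like(ln_gdp), ln_gdp, ln_gdp**2])
b0, b1, b2 = np.linalg.lstsq(X, ln_co2, rcond=None)[0]
turning_point = -b1 / (2.0 * b2)     # log income level where emissions peak
```

Additional regressors such as energy consumption and export diversification would enter as extra columns of `X`.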
Characterizing the UV-to-NIR shape of the dust attenuation curve of IR luminous galaxies up to z ˜ 2
NASA Astrophysics Data System (ADS)
Lo Faro, B.; Buat, V.; Roehlly, Y.; Alvarez-Marquez, J.; Burgarella, D.; Silva, L.; Efstathiou, A.
2017-12-01
In this work, we investigate the far-ultraviolet (UV) to near-infrared (NIR) shape of the dust attenuation curve of a sample of IR-selected dust obscured (ultra)luminous IR galaxies at z ∼ 2. The spectral energy distributions (SEDs) are fitted with Code Investigating GALaxy Emission, a physically motivated spectral-synthesis model based on energy balance. Its flexibility allows us to test a wide range of different analytical prescriptions for the dust attenuation curve, including the well-known Calzetti and Charlot & Fall curves, and modified versions of them. The attenuation curves computed under the assumption of our reference double power-law model are in very good agreement with those derived, in previous works, with radiative transfer (RT) SED fitting. We investigate the position of our galaxies in the IRX-β diagram and find this to be consistent with greyer slopes, on average, in the UV. We also find evidence for a flattening of the attenuation curve in the NIR with respect to more classical Calzetti-like recipes. This larger NIR attenuation yields larger derived stellar masses from SED fitting, by a median factor of ∼1.4 and up to a factor ∼10 for the most extreme cases. The star formation rate appears instead to be more dependent on the total amount of attenuation in the galaxy. Our analysis highlights the need for a flexible attenuation curve when reproducing the physical properties of a large variety of objects.
Relationship between Yield Point Phenomena and the Nanoindentation Pop-in Behavior of Steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, T.-H.; Oh, C.-S.; Lee, K.
2012-01-01
Pop-ins on the nanoindentation load-displacement curves of a ferritic steel were correlated with yield drops on its tensile stress-strain curves. To investigate the relationship between these two phenomena, nanoindentation and tensile tests were performed on annealed specimens, prestrained specimens, and specimens aged for various times after prestraining. Clear nanoindentation pop-ins were observed on annealed specimens; they disappeared when specimens were indented immediately after prestraining, but reappeared to varying degrees after strain aging. Yield drops in tensile tests showed a similar disappearance and reappearance, indicating that the two phenomena, at the nano- and macro-scale, respectively, are closely related and influenced by dislocation locking by solutes (Cottrell atmospheres).
Salminen, Kaisa A; Meyer, Achim; Imming, Peter; Raunio, Hannu
2011-12-01
Several in vitro criteria were used to assess whether three methylenedioxyphenyl (MDP) compounds, the isoquinoline alkaloids bulbocapnine, canadine, and protopine, are mechanism-based inactivators of CYP2C19. The recently reported fluorometric CYP2C19 progress curve analysis approach was applied first to determine whether these alkaloids show time-dependent inhibition. In this experiment, bulbocapnine, canadine, and protopine displayed time dependence and saturation in their inactivation kinetics, with K_I and k_inact values of 72.4 ± 14.7 μM and 0.38 ± 0.036 min^-1, 2.1 ± 0.63 μM and 0.18 ± 0.015 min^-1, and 7.1 ± 2.3 μM and 0.24 ± 0.021 min^-1, respectively. Additional studies were performed to determine whether the other specific criteria for mechanism-based inactivation were fulfilled: NADPH dependence, irreversibility, and involvement of a catalytic step in the enzyme inactivation. CYP2C19 activity was not significantly restored by dialysis when the enzyme had been inactivated by the alkaloids in the presence of an NADPH-regenerating system, and a metabolic-intermediate complex-associated increase in absorbance at approximately 455 nm was observed. In conclusion, the CYP2C19 progress curve analysis method revealed time-dependent inhibition by these alkaloids, and additional experiments confirmed its quasi-irreversible nature. This study showed that the CYP2C19 progress curve analysis method is useful for identifying novel mechanism-based inactivators and yields a wealth of information in one run. The alkaloids bulbocapnine, canadine, and protopine, present in herbal medicines, are new mechanism-based inactivators and the first MDP compounds shown to exhibit quasi-irreversible inactivation of CYP2C19.
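K_I and k_inact values like those reported are typically obtained by fitting observed inactivation rate constants (k_obs, extracted from the progress curves) to the saturation relation k_obs = k_inact·[I]/(K_I + [I]). A sketch with hypothetical concentrations and noiseless rates built from the reported protopine parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def kobs(i, kinact, ki):
    # observed inactivation rate vs inhibitor concentration (saturation kinetics)
    return kinact * i / (ki + i)

# hypothetical inhibitor concentrations (uM) and k_obs values (min^-1);
# the "true" parameters 0.24 min^-1 and 7.1 uM echo the reported protopine values
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 25.0, 50.0])
rate = kobs(conc, 0.24, 7.1)

popt, _ = curve_fit(kobs, conc, rate, p0=[0.1, 5.0])
kinact_fit, ki_fit = popt
```

The same two-parameter fit, repeated per compound, produces the K_I / k_inact pairs quoted in the abstract.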
Yang, Dongmei; Pan, Shaoan; Tyree, Melvin T
2016-08-01
Pressure-volume (PV) curve analysis is the most common and accurate way of estimating all components of the water relationships in leaves (water potential isotherms) as summarized in the Höfler diagram. PV curve analysis yields values of osmotic pressure, turgor pressure, and elastic modulus of cell walls as a function of relative water content. It allows the computation of symplasmic/apoplastic water content partitioning. For about 20 years, cavitation in xylem has been postulated as a possible source of error when estimating the above parameters, but, to the best of the authors' knowledge, no one has ever previously quantified its influence. Results in this paper provide independent estimates of osmotic pressure by PV curve analysis and by thermocouple psychrometer measurement. An anatomical evaluation was also used for the first time to compare apoplastic water fraction estimates from PV analysis with anatomical values. Conclusions include: (i) PV curve values of osmotic pressure are underestimated prior to correcting osmotic pressure for water loss by cavitation in Metasequoia glyptostroboides; (ii) psychrometer estimates of osmotic pressure obtained in tissues killed by freezing or heating agreed with PV values before correction for apoplastic water dilution; (iii) after correction for dilution effects, a solute concentration enhancement (0.27 MPa or 0.11 osmolal) was revealed. The possible sources of solute enhancement were starch hydrolysis and release of ions from the Donnan free space of needle cell walls. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Volume growth trends in a Douglas-fir levels-of-growing-stock study.
Robert O. Curtis
2006-01-01
Mean curves of increment and yield in gross total cubic volume and net merchantable cubic volume were derived from seven installations of the regional cooperative Levels-of-Growing-Stock Study (LOGS) in Douglas-fir. The technique used reduces the seven curves for each treatment for each variable of interest to a single set of readily interpretable mean curves. To a top...
NASA Astrophysics Data System (ADS)
von Paris, P.; Gratier, P.; Bordé, P.; Selsis, F.
2016-03-01
Context. Basic atmospheric properties, such as albedo and heat redistribution between the day- and nightsides, have been inferred for a number of planets using observations of secondary eclipses and thermal phase curves. Optical phase curves have not yet been used to constrain these atmospheric properties consistently. Aims: We model previously published phase curves of CoRoT-1b, TrES-2b, and HAT-P-7b, and infer albedos and recirculation efficiencies. These are then compared to previous estimates based on secondary eclipse data. Methods: We use a physically consistent model to construct optical phase curves, taking Lambertian reflection, thermal emission, ellipsoidal variations, and Doppler boosting into account. Results: CoRoT-1b shows a non-negligible scattering albedo (0.11 < AS < 0.3 at 95% confidence) as well as small day-night temperature contrasts, indicative of moderate to high redistribution of energy between the dayside and nightside. These values are contrary to previous secondary eclipse and phase curve analyses. In the case of HAT-P-7b, model results suggest a relatively high scattering albedo (AS ≈ 0.3). This confirms previous phase curve analysis but slightly contradicts values inferred from secondary eclipse data. For TrES-2b, both approaches yield very similar estimates of albedo and heat recirculation. Discrepancies between recirculation and albedo values inferred from secondary eclipses and from optical phase curves might hint that optical and IR observations probe different atmospheric layers, and hence different temperatures.
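The Lambertian reflection component of such a phase-curve model has a standard closed form: the planet/star flux contrast is A·(Rp/a)²·Φ(α), with the Lambert phase function Φ(α) = [sin α + (π − α) cos α]/π. A sketch with hypothetical albedo and scaled radius (the thermal, ellipsoidal, and Doppler terms of the full model are omitted):

```python
import numpy as np

def lambert_phase(alpha):
    """Lambertian phase function; alpha is the star-planet-observer angle (rad)."""
    return (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi

def reflected_flux_ratio(alpha, a_s, rp_over_a):
    # planet/star contrast for scattering albedo a_s and scaled radius Rp/a
    return a_s * rp_over_a**2 * lambert_phase(alpha)

alpha = np.linspace(0.0, np.pi, 100)
contrast = reflected_flux_ratio(alpha, a_s=0.3, rp_over_a=0.07)
```

Φ(0) = 1 at full phase and Φ(π) = 0 at new phase, so `contrast` peaks near secondary eclipse and vanishes near transit, as in the observed optical phase curves.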
Comparison of NACA 6-series and 4-digit airfoils for Darrieus wind turbines
NASA Astrophysics Data System (ADS)
Migliore, P. G.
1983-08-01
The aerodynamic efficiency of Darrieus wind turbines as affected by blade airfoil geometry was investigated. The analysis was limited to curved-blade machines having rotor solidities of 7-21 percent and operating at a Reynolds number of 3 × 10^6. Ten different airfoils, having thickness-to-chord ratios of 12, 15, and 18 percent, were studied. Performance estimates were made using a blade element/momentum theory approach. Results indicated that NACA 6-series airfoils yield peak power coefficients as great as those of NACA 4-digit airfoils and have broader, flatter power coefficient versus tip-speed-ratio curves. Sample calculations for an NACA 63(2)-015 airfoil showed an annual energy output increase of 17-27 percent, depending on rotor solidity, compared to an NACA 0015 airfoil.
Computerized breast parenchymal analysis on DCE-MRI
NASA Astrophysics Data System (ADS)
Li, Hui; Giger, Maryellen L.; Yuan, Yading; Jansen, Sanaz A.; Lan, Li; Bhooshan, Neha; Newstead, Gillian M.
2009-02-01
Breast density has been shown to be associated with the risk of developing breast cancer, and MRI has been recommended for screening high-risk women; however, it is still unknown how breast parenchymal enhancement on DCE-MRI is associated with breast density and breast cancer risk. Ninety-two DCE-MRI exams of asymptomatic women with normal MR findings were included in this study. The 3D breast volume was automatically segmented using a volume-growing-based algorithm. The extracted breast volume was classified into fibroglandular and fatty regions with a discriminant analysis method. The parenchymal kinetic curves within the breast fibroglandular region were extracted and categorized by fuzzy c-means clustering, and various parenchymal kinetic characteristics were extracted from the most enhancing voxels. Correlation analysis between the computer-extracted percent-dense measures and radiologist-assigned BIRADS density ratings yielded a correlation coefficient of 0.76 (p < 0.0001). From the kinetic analyses, 70% (64/92) of the most enhancing curves showed a persistent curve type and reached peak parenchymal intensity at the last postcontrast time point, with 89% (82/92) reaching peak intensity at either the 4th or 5th post-contrast time point. Women with dense breasts (BIRADS 3 and 4) were found to have more parenchymal enhancement at their peak time point (Ep), with an average Ep of 116.5%, while women with fatty breasts (BIRADS 1 and 2) showed an average Ep of 62.0%. In conclusion, breast parenchymal enhancement may be associated with breast density and may be potentially useful as an additional characteristic for assessing breast cancer risk.
NASA Astrophysics Data System (ADS)
Kumar, Gautam; Maji, Kuntal
2018-04-01
This article deals with the prediction of strain- and stress-based forming limit curves for advanced high-strength steel DP590 sheet using the Marciniak-Kuczynski (M-K) method. Three yield criteria, namely von Mises, Hill's 48 and Yld2000-2d, and two hardening laws, i.e., the Hollomon power law and the Swift hardening law, were considered to predict the forming limit curves (FLCs) for DP590 steel sheet. The effects of imperfection factor and initial groove angle on the prediction of the FLC were also investigated. It was observed that the FLCs shifted upward with increasing imperfection factor value. The initial groove angle was found to have a significant effect on limit strains on the left side of the FLC, and an insignificant effect on the right side for a certain range of strain paths. The limit strains were calculated at zero groove angle for the right side of the FLC, and a critical groove angle was used for the left side. The numerically predicted FLCs considering the different combinations of yield criteria and hardening laws were compared with published experimental FLCs for DP590 steel sheet. The FLC predicted using the combination of the Yld2000-2d yield criterion and the Swift hardening law was in better correlation with the experimental data. Stress-based forming limit curves (SFLCs) were also calculated from the limiting strain values obtained by the M-K model. Theoretically predicted SFLCs were compared with those obtained from the experimental forming limit strains. Stress-based forming limit curves were seen to represent the forming limits of DP590 steel sheet better than strain-based forming limit curves.
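The two hardening laws named above have simple closed forms: Hollomon takes σ = K·εⁿ, while Swift adds a pre-strain offset, σ = K·(ε₀ + ε)ⁿ. A minimal sketch in Python; the constants K, n, and ε₀ are illustrative placeholders, not the fitted DP590 values:

```python
import numpy as np

def hollomon(eps, K=980.0, n=0.18):
    """Hollomon power law: sigma = K * eps**n (MPa)."""
    return K * eps ** n

def swift(eps, K=980.0, eps0=0.002, n=0.18):
    """Swift law: sigma = K * (eps0 + eps)**n (MPa)."""
    return K * (eps0 + eps) ** n

eps = np.linspace(0.01, 0.30, 50)   # true plastic strain range
s_h = hollomon(eps)
s_s = swift(eps)
```

The pre-strain offset ε₀ keeps the Swift flow stress finite at zero plastic strain, which is one reason the two laws produce slightly different predicted limit strains when fed into the M-K analysis.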
Zhai, Xuetong; Chakraborty, Dev P
2017-06-01
The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets (possibly degenerate) with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks: one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates, the covariance matrix of the parameters, and other statistics.
A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable, and consequently yields higher performance than the binormal model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.
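The single-condition CBM described above has a simple closed form: with nondiseased ratings from N(0,1) and diseased ratings from the mixture (1−α)·N(0,1) + α·N(μ,1), sweeping a threshold ζ gives FPF = Φ(−ζ) and TPF = (1−α)·Φ(−ζ) + α·Φ(μ−ζ), and the area under the curve follows analytically. A hedged sketch of that unpaired curve (not the CORCBM bivariate fit itself):

```python
import numpy as np
from scipy.stats import norm

def cbm_roc(mu, alpha, zeta):
    """CBM operating point at decision threshold zeta:
    nondiseased ~ N(0,1); diseased ~ (1-alpha)*N(0,1) + alpha*N(mu,1)."""
    fpf = norm.sf(zeta)
    tpf = (1 - alpha) * norm.sf(zeta) + alpha * norm.sf(zeta - mu)
    return fpf, tpf

def cbm_auc(mu, alpha):
    """Closed-form area under the CBM ROC curve:
    P(diseased rating > nondiseased rating)."""
    return (1 - alpha) * 0.5 + alpha * norm.cdf(mu / np.sqrt(2))
```

As α → 0 the curve collapses to the chance diagonal (AUC 0.5), and as α → 1 it reduces to an equal-variance binormal ROC; intermediate α gives the proper, near-diagonal tail behavior that CORROC2's unconstrained binormal fits lack.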
An analysis of the massless planet approximation in transit light curve models
NASA Astrophysics Data System (ADS)
Millholland, Sarah; Ruch, Gerry
2015-08-01
Many extrasolar planet transit light curve models use the approximation of a massless planet. They approximate the planet as orbiting elliptically with the host star at the orbit’s focus instead of depicting the planet and star as both orbiting around a common center of mass. This approximation should generally be very good because the transit is a small fraction of the full-phase curve and the planet-to-stellar mass ratio is typically very small. However, to fully examine the legitimacy of this approximation, it is useful to perform a robust, all-parameter-space-encompassing statistical comparison between the massless planet model and the more accurate model. Towards this goal, we establish two questions: (1) In what parameter domain is the approximation invalid? (2) If characterizing an exoplanetary system in this domain, what is the error of the parameter estimates when using the simplified model? We first address question (1). Given each parameter vector in a finite space, we can generate the simplified and more complete model curves. Associated with these model curves is a measure of the deviation between them, such as the root mean square (RMS). We use Gibbs sampling to generate a sample that is distributed according to the RMS surface. The high-density regions in the sample correspond to a large deviation between the models. To determine the domains of these high-density areas, we first employ the Ordering Points to Identify the Clustering Structure (OPTICS) algorithm. We then characterize the subclusters by performing the Patient Rule Induction Method (PRIM) on the transformed Principal Component spaces of each cluster. This process yields descriptors of the parameter domains with large discrepancies between the models. To consider question (2), we start by generating synthetic transit curve observations in the domains specified by the above analysis.
We then derive the best-fit parameters of these synthetic light curves according to each model and examine the quality of agreement between the estimated parameters. Taken as a whole, these steps allow for a thorough analysis of the validity of the massless planet approximation.
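The sampling step can be illustrated with a toy stand-in: treat the RMS deviation between the two models as an unnormalized density over parameter space and draw from it with MCMC, so that high-discrepancy regions accumulate samples. The sketch below uses a simple Metropolis sampler (rather than the Gibbs sampler of the study) and a made-up two-parameter RMS surface; the real surface would come from differencing the massless-planet and two-body transit models:

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_surface(theta):
    """Toy stand-in for the model discrepancy, peaking where both
    rescaled parameters are large."""
    q, e = theta
    return np.exp(-((q - 0.8) ** 2 + (e - 0.8) ** 2) / 0.05)

def metropolis(f, x0, steps=5000, scale=0.15):
    """Metropolis sampler targeting a density proportional to f on [0,1]^2."""
    x = np.asarray(x0, float)
    fx = f(x)
    out = []
    for _ in range(steps):
        y = x + scale * rng.standard_normal(x.size)
        if np.all((y >= 0.0) & (y <= 1.0)):    # zero density outside the box
            fy = f(y)
            if rng.random() < fy / max(fx, 1e-300):
                x, fx = y, fy
        out.append(x.copy())
    return np.array(out)

sample = metropolis(rms_surface, [0.5, 0.5])
```

High-density regions of `sample` mark the parameter domains where the two models disagree most; the study then delineates these regions with OPTICS clustering and describes them with PRIM.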
Generation of High Frequency Response in a Dynamically Loaded, Nonlinear Soil Column
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, Robert Edward; Coleman, Justin Leigh
2015-08-01
Detailed guidance on linear seismic analysis of soil columns is provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998),” which is currently under revision. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain analysis, which includes evaluation of soil columns. When performing linear analysis, a given soil column is typically evaluated with a linear, viscous damped constitutive model. When submitted to a sine wave motion, this constitutive model produces a smooth hysteresis loop. For nonlinear analysis, the soil column can be modelled with an appropriate nonlinear hysteretic soil model. For the model in this paper, the stiffness and energy absorption result from a defined post-yielding shear stress versus shear strain curve. This curve is input with tabular data points. When submitted to a sine wave motion, this constitutive model produces a hysteresis loop that is similar in shape to the input tabular data points on the sides, with discontinuous, pointed ends. This paper compares linear and nonlinear soil column results. The results show that the nonlinear analysis produces additional high frequency response. The paper provides additional study to establish what portion of the high frequency response is due to numerical noise associated with the tabular input curve and what portion is accurately caused by the pointed ends of the hysteresis loop. Finally, the paper shows how the results are changed when a significant structural mass is added to the top of the soil column.
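The tabular curve described above amounts to a piecewise-linear backbone: shear stress is looked up by interpolation between the input data points. A minimal sketch (the data points are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical tabular backbone curve (shear strain [-], shear stress [kPa]);
# the paper's actual data points are not reproduced here.
strain_pts = np.array([0.0, 1e-4, 5e-4, 2e-3, 1e-2])
stress_pts = np.array([0.0, 12.0, 38.0, 70.0, 95.0])

def backbone_stress(gamma):
    """Piecewise-linear lookup on the tabular curve, antisymmetric in
    strain and flat (perfectly plastic) beyond the last point."""
    return np.interp(np.abs(gamma), strain_pts, stress_pts) * np.sign(gamma)
```

The slope discontinuities at the tabulated points are one suspected source of the spurious high-frequency content the paper tries to separate from the genuine response of the pointed loop ends.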
Sensitive Fibre-Based Thermoluminescence Detectors for High Resolution In-Vivo Dosimetry
NASA Astrophysics Data System (ADS)
Ghomeishi, Mostafa; Mahdiraji, G. Amouzad; Adikan, F. R. Mahamd; Ung, N. M.; Bradley, D. A.
2015-08-01
With interest in the potential of optical fibres as the basis of next-generation thermoluminescence dosimeters (TLDs), the development of suitable forms of material and their fabrication has become a fast-growing endeavour. The present study focuses on three types of Ge-doped optical fibres with different structural arrangements and/or shapes, namely conventional cylindrical fibre, capillary fibre, and flat fibre, all fabricated using the same optical fibre preform. For doses from 0.5 to 8 Gy, obtained at electron and photon energies, standard thermoluminescence (TL) characteristics of the optical fibres have been the subject of detailed investigation. The results show that in collapsing the capillary fibre into a flat shape, the TL yield is increased by a factor of 5.5, the yield being also some 3.2 times greater than that of the conventional cylindrical fibre fabricated from the same preform. This suggests a means of production of suitably sensitive TLDs for in-vivo dosimetry applications. Addressing the associated defects generating luminescence from each of the optical fibres, the study encompasses analysis of the TL glow curves, with computerized glow curve deconvolution (CGCD) and 2nd-order kinetics.
Group Velocity Dispersion Curves from Wigner-Ville Distributions
NASA Astrophysics Data System (ADS)
Lloyd, Simon; Bokelmann, Goetz; Sucic, Victor
2013-04-01
With the widespread adoption of ambient noise tomography, and the increasing number of local earthquakes recorded worldwide due to dense seismic networks and many very dense temporary experiments, we consider it worthwhile to evaluate alternative methods to measure surface wave group velocity dispersion curves. Moreover, the increased computing power of even a simple desktop computer makes it feasible to routinely use methods other than the typically employed multiple filtering technique (MFT). To that end we perform tests with synthetic and observed seismograms using the Wigner-Ville distribution (WVD) frequency-time analysis, and compare dispersion curves measured with WVD and MFT with each other. Initial results suggest WVD to be at least as good as MFT at measuring dispersion, albeit at a greater computational expense. We therefore need to investigate if, and under which circumstances, WVD yields better dispersion curves than MFT before considering routinely applying the method. As both MFT and WVD generally work well for teleseismic events and at longer periods, we explore how well the WVD method performs at shorter periods and for local events with smaller epicentral distances. Such dispersion information could potentially be beneficial for improving velocity structure resolution within the crust.
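For reference, the discrete Wigner-Ville distribution of an analytic signal x[n] is the Fourier transform over lag m of the instantaneous autocorrelation x[n+m]·x*[n−m]. A direct, unsmoothed O(N² log N) sketch; practical implementations add time and frequency smoothing to suppress the cross-terms that make the raw WVD hard to read for multicomponent seismograms:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal.
    Row n is the FFT over lag m of x[n+m] * conj(x[n-m]); note the
    WVD frequency axis is doubled, so a tone at f0 cycles/sample
    peaks at FFT bin 2*f0*N."""
    x = np.asarray(x, complex)
    N = x.size
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)           # lags that stay in-bounds
        r = np.zeros(N, complex)
        for m in range(-m_max, m_max + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(r).real           # WVD rows are real-valued
    return W

# Demo: a pure tone at 0.125 cycles/sample concentrates at bin 2*0.125*N.
N = 128
x = np.exp(2j * np.pi * 0.125 * np.arange(N))
W = wigner_ville(x)
```

For dispersion measurement, the group velocity curve is read off the time of the energy maximum in each frequency row, exactly as with the MFT envelope maxima.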
Piecewise SALT sampling for estimating suspended sediment yields
Robert B. Thomas
1989-01-01
A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...
Fracture mechanics evaluation of heavy welded structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, I.; Ericksson, C.W.; Zilberstein, V.A.
1982-05-01
This paper describes some applications of nondestructive examination (NDE) and engineering fracture mechanics to the evaluation of flaws in heavy welded structures. The paper discusses not only the widely recognized linear elastic fracture mechanics (LEFM) analysis, but also methods of elastic-plastic fracture mechanics (EPFM), such as COD, the J-integral, and the Failure Assessment Diagram. Examples are given to highlight the importance of interaction between the specialists providing input and the specialists performing the analysis. The paper points out that the critical parameters for as-welded structures calculated by these methods are conservative, since they are based on two pessimistic assumptions: that the magnitude of residual stress is always at the yield strength level, and that the residual stress always acts in the same direction as the applied (mechanical) stress. The suggestion is made that it would be prudent to use the COD or the FAD design curves for a conservative design. The appendix examines a J-design curve modified to include residual stresses.
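For context, a failure assessment diagram compares an assessment point (Lr, the applied load over the plastic limit load, versus Kr, the applied stress intensity over the fracture toughness) against a limiting curve; points under the curve are acceptable. A sketch using the widely quoted R6/BS 7910-type Option 1 curve, with the caveat that the exact coefficients and validity limits should be taken from the governing code, not from this sketch:

```python
import numpy as np

def fad_option1(Lr):
    """Option-1 failure assessment curve of the R6/BS 7910 type:
    the limiting locus of (Lr, Kr) pairs."""
    Lr = np.asarray(Lr, float)
    return (1 - 0.14 * Lr ** 2) * (0.3 + 0.7 * np.exp(-0.65 * Lr ** 6))

def is_acceptable(Kr, Lr):
    """An assessment point inside the FAD is acceptable."""
    return Kr < fad_option1(Lr)
```

Treating welding residual stress as a yield-magnitude secondary stress aligned with the applied stress, as the paper notes, pushes the assessment point up and to the right, which is why the resulting critical flaw sizes are conservative.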
Thoracic Idiopathic Scoliosis Severity Is Highly Correlated with 3D Measures of Thoracic Kyphosis.
Sullivan, T Barrett; Reighard, Fredrick G; Osborn, Emily J; Parvaresh, Kevin C; Newton, Peter O
2017-06-07
Loss of thoracic kyphosis has been associated with thoracic idiopathic scoliosis. Modern 3-dimensional (3D) imaging systems allow more accurate characterization of the scoliotic deformity than traditional radiographs. In this study, we utilized 3D calculations to characterize the association between increasing scoliosis severity and changes in the sagittal and axial planes. Patients evaluated in a scoliosis clinic and determined to have either a normal spine or idiopathic scoliosis were included in the analysis. All underwent upright, biplanar radiography with 3D reconstructions. Two-dimensional (2D) measurements of the magnitude of the thoracic major curve and the thoracic kyphosis were recorded. Image processing and MATLAB analysis were utilized to produce a 3D calculation of thoracic kyphosis and apical vertebral axial rotation. Regression analysis was performed to determine the correlation of 2D kyphosis, 3D kyphosis, and apical axial rotation with the magnitude of the thoracic major curve. The 442 patients for whom 2D and 3D data were collected had a main thoracic curve magnitude ranging from 1° to 118°. Linear regression analysis of the 2D and 3D T5-T12 kyphosis versus main thoracic curve magnitude yielded significant models (p < 0.05). The 2D model had a minimally negative slope (-0.07), a small R² value (0.02), and a poor correlation coefficient (-0.14). In contrast, the 3D model had a strongly negative slope (-0.54), a high R² value (0.56), and a strong correlation coefficient (-0.75). Curve magnitude also had a strong correlation with loss of 3D T1-T12 kyphosis and increasing apical axial rotation. Segmentally calculated 3D thoracic kyphosis had a strongly negative correlation with the magnitude of the main thoracic curve. With near uniformity, 3D thoracic kyphosis progressively decreased as scoliosis magnitude increased, at a rate of more than half the increase in the main thoracic curve magnitude.
Analysis confirmed a surprisingly strong correlation between scoliosis severity and loss of 3D kyphosis that was absent in the 2D analysis. A similarly strong correlation between curve magnitude and apical axial rotation was evident. These findings lend further credence to the concept that scoliosis progresses in the coronal, sagittal, and axial planes simultaneously. The findings of this study suggest that 3D assessment is critical for adequate characterization of the multiplanar deformity of idiopathic scoliosis and that deformity in the sagittal plane is linked to deformity in the coronal plane. Increasing severity of coronal plane curvature is associated with a progressive loss of thoracic kyphosis that should be anticipated so that the appropriate intraoperative techniques for correction of idiopathic scoliosis can be applied in all 3 planes.
Takahashi, Masahiro; Kozawa, Eito; Tanisaka, Megumi; Hasegawa, Kousei; Yasuda, Masanori; Sakai, Fumikazu
2016-06-01
We explored the role of histogram analysis of apparent diffusion coefficient (ADC) maps for discriminating uterine carcinosarcoma and endometrial carcinoma. We retrospectively evaluated findings in 13 patients with uterine carcinosarcoma and 50 patients with endometrial carcinoma who underwent diffusion-weighted imaging (b = 0, 500, 1000 s/mm²) at 3T with acquisition of corresponding ADC maps. We derived histogram data from regions of interest drawn on all slices of the ADC maps in which tumor was visualized, excluding areas of necrosis and hemorrhage in the tumor. We used the Mann-Whitney test to evaluate the capacity of histogram parameters (mean ADC value, 5th to 95th percentiles, skewness, kurtosis) to discriminate uterine carcinosarcoma and endometrial carcinoma and analyzed the receiver operating characteristic (ROC) curve to determine the optimum threshold value for each parameter and its corresponding sensitivity and specificity. Carcinosarcomas demonstrated significantly higher mean values of ADC, the 95th, 90th, 75th, 50th, and 25th percentiles, and kurtosis than endometrial carcinomas (P < 0.05). ROC curve analysis of the 75th percentile yielded the best area under the ROC curve (AUC; 0.904), sensitivity of 100%, and specificity of 78.0%, with a cutoff value of 1.034 × 10⁻³ mm²/s. Histogram analysis of ADC maps might be helpful for discriminating uterine carcinosarcomas and endometrial carcinomas. J. Magn. Reson. Imaging 2016;43:1301-1307. © 2015 Wiley Periodicals, Inc.
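The ROC analysis above reduces to ranking a per-patient histogram feature across the two groups. A small sketch of the empirical (Mann-Whitney) AUC for a 75th-percentile ADC feature, on simulated data; the group sizes match the study but the ADC numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

def auc_mann_whitney(pos, neg):
    """Empirical AUC: the fraction of (pos, neg) pairs ranked correctly,
    counting ties as half."""
    pos = np.asarray(pos)[:, None]
    neg = np.asarray(neg)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# Per-patient 75th-percentile ADC features (x1e-3 mm^2/s) from simulated
# ROI histograms of 500 voxels each:
score_cs = [np.percentile(rng.normal(1.3, 0.3, 500), 75) for _ in range(13)]
score_ec = [np.percentile(rng.normal(1.0, 0.3, 500), 75) for _ in range(50)]
auc = auc_mann_whitney(score_cs, score_ec)
```

This pairwise-ranking estimate of AUC is exactly the Mann-Whitney U statistic divided by the number of pairs, which is why the abstract's Mann-Whitney testing and ROC analysis are two views of the same comparison.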
NASA Astrophysics Data System (ADS)
Jerzykiewicz, M.; Lehmann, H.; Niemczura, E.; Molenda-Żakowicz, J.; Dymitrov, W.; Fagas, M.; Guenther, D. B.; Hartmann, M.; Hrudková, M.; Kamiński, K.; Moffat, A. F. J.; Kuschnig, R.; Leto, G.; Matthews, J. M.; Rowe, J. F.; Ruciński, S. M.; Sasselov, D.; Weiss, W. W.
2013-06-01
MOST time series photometry of μ Eri, an SB1 eclipsing binary with a rapidly rotating SPB primary, is reported and analysed. The analysis yields a number of sinusoidal terms, mainly due to the intrinsic variation of the primary, and the eclipse light curve. New radial-velocity observations are presented and used to compute parameters of a spectroscopic orbit. Frequency analysis of the radial-velocity residuals from the spectroscopic orbital solution fails to uncover periodic variations with amplitudes greater than 2 km s⁻¹. A Rossiter-McLaughlin anomaly is detected from observations covering ingress. From archival photometric indices and the revised Hipparcos parallax, we derive the primary's effective temperature, surface gravity, bolometric correction and luminosity. An analysis of a high signal-to-noise spectrogram yields the effective temperature and surface gravity in good agreement with the photometric values. From the same spectrogram, we determine the abundance of He, C, N, O, Ne, Mg, Al, Si, P, S, Cl and Fe. The eclipse light curve is solved by means of EBOP. For a range of mass of the primary, a value of mean density, very nearly independent of assumed mass, is computed from the parameters of the system. Contrary to a recent report, this value is approximately equal to the mean density obtained from the star's effective temperature and luminosity. Despite the limited frequency resolution of the MOST data, we were able to recover the closely spaced SPB frequency quadruplet discovered from the ground in 2002-2004. The other two SPB terms seen from the ground were also recovered. Moreover, our analysis of the MOST data adds 15 low-amplitude SPB terms with frequencies ranging from 0.109 to 2.786 d⁻¹.
Nuclear Forensics and Radiochemistry: Fission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rundberg, Robert S.
Radiochemistry has been used to study fission since its discovery. Radiochemical methods are used to determine cumulative mass yields. These measurements have led to the two-mode fission hypothesis to model the neutron energy dependence of fission product yields. Fission product yields can be used for the nuclear forensics of nuclear explosions. The mass yield curve depends on both the fuel and the neutron spectrum of a device. Recent studies have shown that the nuclear structure of the compound nucleus can affect the mass yield distribution.
A photometric method for the estimation of the oil yield of oil shale
Cuttitta, Frank
1951-01-01
A method is presented for the distillation and photometric estimation of the oil yield of oil-bearing shales. The oil shale is distilled in a closed test tube and the oil extracted with toluene. The optical density of the toluene extract is used in the estimation of oil content and is converted to percentage of oil by reference to a standard curve. This curve is obtained by relating the oil yields determined by the Fischer assay method to the optical density of the toluene extract of the oil evolved by the new procedure. The new method gives results similar to those obtained by the Fischer assay method in a much shorter time. The applicability of the new method to oil-bearing shale and phosphatic shale has been tested.
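The standard-curve step is a one-dimensional calibration: relate optical density to Fischer-assay yield once, then read unknown samples off the fitted line. A sketch with hypothetical calibration pairs (not the paper's data):

```python
import numpy as np

# Hypothetical calibration pairs: optical density of the toluene extract
# vs. oil yield (wt %) determined by the Fischer assay.
density = np.array([0.05, 0.12, 0.21, 0.33, 0.41, 0.55])
oil_pct = np.array([2.1, 5.0, 8.8, 13.9, 17.2, 23.1])

# Least-squares straight line through the calibration points:
slope, intercept = np.polyfit(density, oil_pct, 1)

def oil_yield(od):
    """Read percentage oil off the fitted standard curve for a
    measured optical density."""
    return slope * od + intercept
```

A linear standard curve is only a sketch assumption here; in practice the calibration would be checked for linearity over the working range (Beer-Lambert behavior of the extract) before being used routinely.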
NASA Astrophysics Data System (ADS)
Wang, Wei; Yuan, Hang; Wang, Xiangqin; Yu, Zengliang
2008-02-01
An identification of Phe dipeptide from L-phenylalanine monomers after keV nitrogen and argon ion implantation, using HPLC (high-performance liquid chromatography) and LC-MS (liquid chromatography-mass spectrometry) methods, is reported. The results showed a similar yield behavior for both ion species, namely: 1) the yield of dipeptides under alkalescent conditions was distinctly higher than that under acidic or neutral conditions; 2) for different ion species, the dose-yield curves tracked a similar trend, which was called a counter-saddle curve. The dipeptide formation may implicate a recombination-repair mechanism of damaged biomolecules that energetic ions have left in their wake. Accordingly, a physicochemical self-repair mechanism by radiation itself for ion-beam radiobiological effects is proposed.
Can we improve the clinical utility of respiratory rate as a monitored vital sign?
Chen, Liangyou; Reisner, Andrew T; Gribok, Andrei; McKenna, Thomas M; Reifman, Jaques
2009-06-01
Respiratory rate (RR) is a basic vital sign, measured and monitored throughout a wide spectrum of health care settings, although RR is historically difficult to measure in a reliable fashion. We explore an automated method that computes RR only during intervals of clean, regular, and consistent respiration and investigate its diagnostic use in a retrospective analysis of prehospital trauma casualties. At least 5 s of basic vital signs, including heart rate, RR, and systolic, diastolic, and mean arterial blood pressures, were continuously collected from 326 spontaneously breathing trauma casualties during helicopter transport to a level I trauma center. "Reliable" RR data were identified retrospectively using automated algorithms. The diagnostic performances of reliable versus standard RR were evaluated by calculation of the receiver operating characteristic curves using the maximum-likelihood method and comparison of the summary areas under the receiver operating characteristic curves (AUCs). Respiratory rate shows significant data-reliability differences. For identifying prehospital casualties who subsequently receive a respiratory intervention (hospital intubation or tube thoracotomy), standard RR yields an AUC of 0.59 (95% confidence interval, 0.48-0.69), whereas reliable RR yields an AUC of 0.67 (0.57-0.77), P < 0.05. For identifying casualties subsequently diagnosed with a major hemorrhagic injury and requiring blood transfusion, standard RR yields an AUC of 0.60 (0.49-0.70), whereas reliable RR yields 0.77 (0.67-0.85), P < 0.001. Reliable RR, as determined by an automated algorithm, is a useful parameter for the diagnosis of respiratory pathology and major hemorrhage in a trauma population. It may be a useful input to a wide variety of clinical scores and automated decision-support algorithms.
NASA Astrophysics Data System (ADS)
Caparanga, Alvin R.; Reyes, Rachael Anne L.; Rivas, Reiner L.; De Vera, Flordeliza C.; Retnasamy, Vithyacharan; Aris, Hasnizah
2017-11-01
This study utilized a 3^k factorial design with k = 2 varying factors, namely temperature and air velocity. The effects of temperature and air velocity on the drying rate curves and on the average particle diameter of the arrowroot starch were investigated. Extracted arrowroot starch samples were dried according to the designed parameters until constant weight was obtained. The resulting initial moisture content of the arrowroot starch was 49.4%. Higher temperatures correspond to higher drying rates and faster drying times, while air velocity had approximately negligible effect. Drying rate is a function of temperature and time. A constant-rate period was not observed in the drying of arrowroot starch. The drying curves were fitted against five mathematical models: Lewis, Page, Henderson and Pabis, Logarithmic, and Midilli. The Midilli model was the best fit for the experimental data since it yielded the highest R² and the lowest RMSE values for all runs. Scanning electron microscopy (SEM) was used for qualitative analysis and for determination of the average particle diameter of the starch granules, which ranged from 12.06 to 24.60 μm. ANOVA showed that the particle diameters of the runs varied significantly from each other, and the Taguchi design showed that high temperatures yield a lower average particle diameter, while high air velocities yield a higher average particle diameter.
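The model-fitting step can be sketched with the Midilli form MR = a·exp(−k·tⁿ) + b·t and a nonlinear least-squares fit. The data below are synthetic, not the arrowroot measurements, and the parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    """Midilli thin-layer drying model: MR = a*exp(-k*t**n) + b*t."""
    return a * np.exp(-k * t ** n) + b * t

# Synthetic moisture-ratio data with small measurement noise:
t = np.linspace(0.1, 10.0, 30)
rng = np.random.default_rng(2)
mr = midilli(t, 1.0, 0.25, 1.1, -0.002) + rng.normal(0.0, 0.005, t.size)

# Fit and compute the goodness-of-fit statistic used in the study:
popt, _ = curve_fit(midilli, t, mr, p0=[1.0, 0.3, 1.0, 0.0])
rmse = np.sqrt(np.mean((midilli(t, *popt) - mr) ** 2))
```

Fitting each candidate model this way and comparing R² and RMSE across runs is the selection procedure the abstract describes; the Midilli form usually wins such comparisons because its extra linear term b·t absorbs the slow late-stage moisture drift.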
Peter Hamner; Marshall S. White; Philip A. Araman
2006-01-01
Curve sawing is a primary log breakdown process that incorporates gang-saw technology to allow two-sided cants from logs with sweep to be cut parallel to the log surface or log axis. Since curve-sawn logs with sweep are cut along the grain, the potential for producing high quality straight-grain lumber and cants increases, and strength, stiffness, and dimensional...
Multi-color light curves and orbital period research of eclipsing binary V1073 Cyg
NASA Astrophysics Data System (ADS)
Tian, Xiao-Man; Zhu, Li-Ying; Qian, Sheng-Bang; Li, Lin-Jia; Jiang, Lin-Qiao
2018-02-01
New multi-color BVRcIc photometric observations are presented for the W UMa type eclipsing binary V1073 Cyg. The multi-color light curve analysis with the Wilson-Devinney procedure yielded the absolute parameters of this system, showing that V1073 Cyg is a shallow contact binary system with a fill-out factor f = 0.124(±0.011). We collected all available times of light minima spanning 119 yr, including CCD data, to construct the O−C curve, and performed a detailed O−C analysis. The O−C diagram shows that the period change is complex: a long-term continuous decrease and a cyclic variation coexist. The period is decreasing at a rate of Ṗ = −1.04(±0.18) × 10⁻¹⁰ d cycle⁻¹ and, as the period decreases, V1073 Cyg will evolve to the deep contact stage. The cyclic variation, with a period of P₃ = 82.7(±3.6) yr and an amplitude of A = 0.028(±0.002) d, may be explained by magnetic activity of one or both components or by the light travel time effect caused by a distant third companion with M₃(i′ = 90°) = 0.511 M⊙.
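The long-term part of the O−C analysis rests on a standard identity: for a linearly changing period, O−C grows quadratically with cycle number E, O−C ≈ ½(dP/dE)·E², so a parabola fitted to the timing residuals recovers the period change rate as twice the quadratic coefficient. A noise-free sketch using the quoted rate (the cycle span is illustrative):

```python
import numpy as np

# For a linearly shrinking period, O-C follows 0.5*(dP/dE)*E^2.
pdot = -1.04e-10                         # d / cycle, from the fitted solution
E = np.linspace(-70000, 70000, 61)       # cycle number
oc = 0.5 * pdot * E ** 2                 # O-C in days

# Parabola fit; the quadratic coefficient gives back half of dP/dE:
c2, c1, c0 = np.polyfit(E, oc, 2)
pdot_recovered = 2.0 * c2
```

In the real analysis the cyclic P₃ term is fitted simultaneously with this parabola, since a sinusoid sampled over a comparable baseline can masquerade as extra curvature.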
Yield stress in amorphous solids: A mode-coupling-theory analysis
NASA Astrophysics Data System (ADS)
Ikeda, Atsushi; Berthier, Ludovic
2013-11-01
The yield stress is a defining feature of amorphous materials which is difficult to analyze theoretically, because it stems from the strongly nonlinear response of an arrested solid to an applied deformation. Mode-coupling theory predicts the flow curves of materials undergoing a glass transition and thus offers predictions for the yield stress of amorphous solids. We use this approach to analyze several classes of disordered solids, using simple models of hard-sphere glasses, soft glasses, and metallic glasses for which the mode-coupling predictions can be directly compared to the outcome of numerical measurements. The theory correctly describes, at a qualitative level, the emergence of a yield stress of entropic nature in hard-sphere glasses and its rapid growth as density approaches random close packing. By contrast, the emergence of solid behavior in soft and metallic glasses, which originates from direct particle interactions, is not well described by the theory. We show that similar shortcomings arise in the description of the caging dynamics of the glass phase at rest. We discuss the range of applicability of mode-coupling theory to understand the yield stress and nonlinear rheology of amorphous materials.
Zhao, Ben; Ata-Ui-Karim, Syed Tahir; Yao, Xia; Tian, YongChao; Cao, WeiXing; Zhu, Yan; Liu, XiaoJun
2016-01-01
Diagnosing the status of crop nitrogen (N) helps to optimize crop yield, improve N use efficiency, and reduce the risk of environmental pollution. The objectives of the present study were to develop a critical N (Nc) dilution curve for winter wheat (based on spike dry matter [SDM] during the reproductive growth period), to compare this curve with the existing Nc dilution curve (based on plant dry matter [DM] of winter wheat), and to explore its ability to reliably estimate the N status of winter wheat. Four field experiments, using varied N fertilizer rates (0-375 kg ha⁻¹) and six cultivars (Yangmai16, Ningmai13, Ningmai9, Aikang58, Yangmai12, Huaimai17), were conducted in the Jiangsu province of eastern China. Twenty plants from each plot were sampled to determine the SDM and spike N concentration (SNC) during the reproductive growth period. The spike Nc curve was described by Nc = 2.85×SDM^(-0.17), with SDM ranging from 0.752 to 7.233 t ha⁻¹. The newly developed curve was lower than the Nc curve based on plant DM. The N nutrition index (NNI) for spike dry matter ranged from 0.62 to 1.1 during the reproductive growth period across the seasons. Relative yield (RY) increased with increasing NNI; however, when NNI was greater than 0.96, RY plateaued and remained stable. The spike Nc dilution curve can be used to correctly identify the N nutrition status of winter wheat to support N management during the reproductive growth period in eastern China.
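The reported dilution curve and the derived N nutrition index are straightforward to evaluate; a sketch using the coefficients from the abstract (valid only within the reported SDM range):

```python
# Sketch using the coefficients reported above: the critical N dilution
# curve Nc = 2.85 * SDM^(-0.17) (SDM in t/ha, reported valid range about
# 0.75-7.2 t/ha) and the N nutrition index NNI = measured spike N% / Nc.
def critical_n(sdm_t_ha):
    """Critical spike N concentration (%) at a given spike dry matter."""
    return 2.85 * sdm_t_ha ** -0.17

def nni(measured_n_pct, sdm_t_ha):
    """NNI > 1: N surplus; NNI < 1: N deficit (RY plateaus above ~0.96)."""
    return measured_n_pct / critical_n(sdm_t_ha)

print(critical_n(4.0))   # critical N% at SDM = 4 t/ha
print(nni(2.3, 4.0))     # slightly above 1: near-optimal N status
```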
A BASIC program for the removal of noise from reaction traces using Fourier filtering.
Brittain, T
1989-04-01
Software for the removal of noise from reaction curves using the principle of Fourier filtering has been written in BASIC to execute on a PC. The program inputs reaction traces which are subjected to a rotation-inversion process, to produce functions suitable for Fourier analysis. Fourier transformation into the frequency domain is followed by multiplication of the transform by a rectangular filter function, to remove the noise frequencies. Inverse transformation then yields a noise-reduced reaction trace suitable for further analysis. The program is interactive at each stage and could easily be modified to remove noise from a range of input data types.
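The pipeline the abstract describes can be sketched in Python (the original program is BASIC); detrending through the trace end levels stands in for the rotation-inversion step, which removes the baseline jump that would otherwise cause ringing:

```python
import numpy as np

# Python sketch of the pipeline described above: detrend with a linear
# ramp through the trace end levels (a simple stand-in for the
# rotation-inversion step, so the periodic extension has no jump),
# transform, apply a rectangular low-pass filter, and inverse-transform.
def fourier_filter(trace, keep_bins, edge=16):
    trace = np.asarray(trace, dtype=float)
    n = trace.size
    a = trace[:edge].mean()              # noise-robust end-level estimates
    b = trace[-edge:].mean()
    ramp = a + (b - a) * np.arange(n) / (n - 1)
    spec = np.fft.rfft(trace - ramp)
    spec[keep_bins:] = 0.0               # rectangular filter: drop high bins
    return np.fft.irfft(spec, n=n) + ramp

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = np.exp(-3.0 * t)                 # idealized reaction trace
noisy = clean + 0.05 * rng.standard_normal(t.size)
smoothed = fourier_filter(noisy, keep_bins=20)
print(np.abs(smoothed - clean).mean(), np.abs(noisy - clean).mean())
```

The cutoff (`keep_bins`) plays the role of the interactive filter-width choice in the original program.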
Zou, Kelly H; Resnic, Frederic S; Talos, Ion-Florin; Goldberg-Zimring, Daniel; Bhagwat, Jui G; Haker, Steven J; Kikinis, Ron; Jolesz, Ferenc A; Ohno-Machado, Lucila
2005-10-01
Medical classification accuracy studies often yield continuous data based on predictive models for treatment outcomes. A popular method for evaluating the performance of diagnostic tests is the receiver operating characteristic (ROC) curve analysis. The main objective was to develop a global statistical hypothesis test for assessing the goodness-of-fit (GOF) for parametric ROC curves via the bootstrap. A simple log (or logit) and a more flexible Box-Cox normality transformations were applied to untransformed or transformed data from two clinical studies to predict complications following percutaneous coronary interventions (PCIs) and for image-guided neurosurgical resection results predicted by tumor volume, respectively. We compared a non-parametric with a parametric binormal estimate of the underlying ROC curve. To construct such a GOF test, we used the non-parametric and parametric areas under the curve (AUCs) as the metrics, with a resulting p value reported. In the interventional cardiology example, logit and Box-Cox transformations of the predictive probabilities led to satisfactory AUCs (AUC=0.888; p=0.78, and AUC=0.888; p=0.73, respectively), while in the brain tumor resection example, log and Box-Cox transformations of the tumor size also led to satisfactory AUCs (AUC=0.898; p=0.61, and AUC=0.899; p=0.42, respectively). In contrast, significant departures from GOF were observed without applying any transformation prior to assuming a binormal model (AUC=0.766; p=0.004, and AUC=0.831; p=0.03), respectively. In both studies the p values suggested that transformations were important to consider before applying any binormal model to estimate the AUC. Our analyses also demonstrated and confirmed the predictive values of different classifiers for determining the interventional complications following PCIs and resection outcomes in image-guided neurosurgery.
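The non-parametric AUC referenced above is the empirical (Mann-Whitney) area, i.e. the fraction of diseased/non-diseased score pairs ranked correctly, with ties counted as half. A minimal sketch with hypothetical scores (the bootstrap GOF test repeats this on resamples and compares against the binormal AUC):

```python
# Sketch: the non-parametric AUC is the Mann-Whitney statistic over all
# diseased/non-diseased score pairs (ties count one half). The predicted
# probabilities below are hypothetical, not data from the studies above.
def empirical_auc(scores_pos, scores_neg):
    pairs = [(p, q) for p in scores_pos for q in scores_neg]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0 for p, q in pairs)
    return wins / len(pairs)

pos = [0.9, 0.8, 0.7, 0.6]   # diseased cases
neg = [0.5, 0.4, 0.7, 0.2]   # non-diseased cases
auc = empirical_auc(pos, neg)
print(auc)  # 0.90625
```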
Angeli, Vasiliki; Polymeris, George S; Sfampa, Ioanna K; Tsirliganis, Nestor C; Kitis, George
2017-04-01
Natural calcium fluoride has been commonly used as a thermoluminescence (TL) dosimeter due to its high luminescence intensity. The aim of this work is to correlate specific TL glow curves after bleaching with components of linearly modulated optically stimulated luminescence (LM-OSL) as well as continuous wave OSL (CW-OSL). A component-resolved analysis was applied to both the integrated intensity of the residual TL (RTL) glow curves and all OSL decay curves, using a Computerized Glow-Curve De-convolution (CGCD) procedure. All CW-OSL and LM-OSL components are correlated to the decay components of the integrated RTL signal, apart from two RTL components which cannot be directly correlated with any LM-OSL or CW-OSL component. The unique, stringent criterion for this correlation is the value of the decay constant λ of each bleaching component. There is only one unique bleaching component present in all three luminescence entities studied here, indicating that each TL trap yields at least three different bleaching components; different TL traps can yield bleaching components with similar λ values. According to the data of the present work, each RTL bleaching component receives electrons from at least two peaks. The results of the present study strongly suggest that the traps that contribute to TL and OSL are the same. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the meaning of the weighted alternative free-response operating characteristic figure of merit.
Chakraborty, Dev P; Zhai, Xuetong
2016-05-01
The free-response receiver operating characteristic (FROC) method is being increasingly used to evaluate observer performance in search tasks. Data analysis requires definition of a figure of merit (FOM) quantifying performance. While a number of FOMs have been proposed, the recommended one, namely the weighted alternative FROC (wAFROC) FOM, is not well understood. The aim of this work is to clarify the meaning of this FOM by relating it to the empirical area under a proposed wAFROC curve. The wAFROC FOM is defined in terms of a quasi-Wilcoxon statistic that involves weights, coding the clinical importance, assigned to each lesion. A new wAFROC curve is proposed, the y-axis of which incorporates the weights, giving more credit for marking clinically important lesions, while the x-axis is identical to that of the AFROC curve. An expression is derived relating the area under the empirical wAFROC curve to the wAFROC FOM. Examples are presented with small numbers of cases showing how AFROC and wAFROC curves are affected by correct and incorrect decisions and how the corresponding FOMs credit or penalize these decisions. The wAFROC, AFROC, and inferred ROC FOMs were applied to three clinical data sets involving multiple-reader FROC interpretations in different modalities. It is shown analytically that the area under the empirical wAFROC curve equals the wAFROC FOM. This theorem is the FROC analog of a well-known theorem developed in 1975 for ROC analysis, which gave meaning to a Wilcoxon-statistic-based ROC FOM. A similar equivalence applies between the area under the empirical AFROC curve and the AFROC FOM. The examples show explicitly that the wAFROC FOM gives equal importance to all diseased cases, regardless of the number of lesions, a desirable statistical property not shared by the AFROC FOM. Applications to the clinical data sets show that the wAFROC FOM yields results comparable to those obtained using the AFROC FOM.
The equivalence theorem gives meaning to the weighted AFROC FOM: it is identical to the empirical area under the weighted AFROC curve.
Rapid detection of G6PD mutations by multicolor melting curve analysis.
Xia, Zhongmin; Chen, Ping; Tang, Ning; Yan, Tizhen; Zhou, Yuqiu; Xiao, Qizhi; Huang, Qiuying; Li, Qingge
2016-09-01
The MeltPro G6PD assay is the first commercial genetic test for glucose-6-phosphate dehydrogenase (G6PD) deficiency. This multicolor melting curve analysis-based real-time PCR assay is designed to genotype 16 G6PD mutations prevalent in the Chinese population. We comprehensively evaluated both the analytical and clinical performances of this assay. All 16 mutations were accurately genotyped, and the standard deviation of the measured Tm was <0.3°C. The limit of detection was 1.0 ng/μL human genomic DNA. The assay could be run on four mainstream models of real-time PCR machines. The shortest running time (150 min) was obtained with the LightCycler 480 II. A clinical study using 763 samples collected from three hospitals indicated that, of 433 samples with reduced G6PD activity, the MeltPro assay identified 423 samples as mutant, yielding a clinical sensitivity of 97.7% (423/433). Of the 117 male samples with normal G6PD activity, the MeltPro assay confirmed that 116 samples were wild type, yielding a clinical specificity of 99.1% (116/117). Moreover, the MeltPro assay demonstrated 100% concordance with DNA sequencing for all targeted mutations. We concluded that the MeltPro G6PD assay is useful as a diagnostic or screening tool for G6PD deficiency in clinical settings. Copyright © 2016 Elsevier Inc. All rights reserved.
An assessment of the BEST procedure to estimate the soil water retention curve
NASA Astrophysics Data System (ADS)
Castellini, Mirko; Di Prima, Simone; Iovino, Massimo
2017-04-01
The Beerkan Estimation of Soil Transfer parameters (BEST) procedure represents a very attractive method to accurately and quickly obtain a complete hydraulic characterization of the soil (Lassabatère et al., 2006). However, further investigations are needed to check the prediction reliability of the soil water retention curve (Castellini et al., 2016). Four soils with different physical properties (texture, bulk density, porosity and stoniness) were considered in this investigation. Sites of measurement were located at Palermo University (PAL site) and Villabate (VIL site) in Sicily, Arborea (ARB site) in Sardinia and Foggia (FOG site) in Apulia. For a given site, the BEST procedure was applied and the water retention curve was estimated using the available BEST algorithms (i.e., slope, intercept and steady), considering the reference values of the infiltration constants (β=0.6 and γ=0.75). The water retention curves estimated by BEST were then compared with those obtained in the laboratory by the evaporation method (Wind, 1968). About ten experiments were carried out with both methods. A sensitivity analysis of the constants β and γ within their feasible ranges of variability (0.1<β<1.9 and 0.61<γ<0.79) was also carried out for each soil in order to establish: i) the impact of the infiltration constants in the three BEST algorithms on the saturated hydraulic conductivity Ks, the soil sorptivity S, and the retention curve scale parameter hg; ii) the effectiveness of the three BEST algorithms in the estimation of the soil water retention curve. The main results of the sensitivity analysis showed that S tended to increase for increasing β values and decreasing γ values for all the BEST algorithms and soils. On the other hand, Ks tended to decrease for increasing β and γ values. Our results also reveal that: i) the BEST-intercept and BEST-steady algorithms yield lower S and higher Ks values than BEST-slope; ii) these algorithms also yield more variable values.
For the latter, a higher sensitivity of these two alternative algorithms to β than to γ was established. The reduced sensitivity to γ may lead to a possible lack of correction of the simplified theoretical description of the parabolic two-dimensional and one-dimensional wetting front along the soil profile (Smettem et al., 1994). This likely resulted in lower S and higher Ks values. Nevertheless, these differences are expected to be negligible for practical applications (Di Prima et al., 2016). On the other hand, the intercept and steady algorithms yielded hg values independent of γ; hence, determining water retention curves by these algorithms appears questionable. The linear regression between the soil water retention curves of BEST-slope and BEST-intercept (note that the same result is obtained with BEST-steady, for a purely analytical reason) vs. the lab method showed the following main results: i) the BEST procedure generally tends to underestimate the soil water retention (the exception was the PAL site); depending on the soil and algorithm, the root mean square differences (RMSD) between BEST and the lab method ranged between 0.028 cm³/cm³ (VIL, BEST-slope) and 0.082 cm³/cm³ (FOG, BEST-intercept/steady); the highest RMSD values (0.124-0.140 cm³/cm³) were obtained at the PAL site; ii) depending on the soil, BEST-slope generally gave the lowest RMSD values (by a factor of 1.2-2.1); iii) when the whole variability range of β and γ was considered and a different couple of parameters was chosen (in general, extreme values of the parameters), lower RMSD values were detected in three out of four cases for BEST-slope; iv) the negligible observed differences in RMSD nevertheless suggest that using the reference values of the infiltration constants does not significantly worsen the soil water retention curve estimation; v) in 25% of the considered soils (PAL site), the BEST procedure was not able to reproduce the soil water retention curve in a sufficiently accurate way. In conclusion, our results showed that the BEST-slope algorithm yielded more accurate estimates of water retention data for three of the four sampled soils. Conversely, determining water retention curves by the intercept and steady algorithms may be questionable, since these algorithms overestimated hg, yielding values of this parameter independent of the proportionality coefficient γ. (*) The work was supported by the project "STRATEGA, Sperimentazione e TRAsferimento di TEcniche innovative di aGricoltura conservativA", financed by Regione Puglia - Servizio Agricoltura. References Castellini, M., Iovino, M., Pirastru, M., Niedda, M., Bagarello, V., 2016. Use of BEST Procedure to Assess Soil Physical Quality in the Baratz Lake Catchment (Sardinia, Italy). Soil Sci. Soc. Am. J. 80:742-755. doi:10.2136/sssaj2015.11.0389 Di Prima, S., Lassabatere, L., Bagarello, V., Iovino, M., Angulo-Jaramillo, R., 2016. Testing a new automated single ring infiltrometer for Beerkan infiltration experiments. Geoderma 262, 20-34. doi:10.1016/j.geoderma.2015.08.006 Lassabatère, L., Angulo-Jaramillo, R., Soria Ugalde, J.M., Cuenca, R., Braud, I., Haverkamp, R., 2006. Beerkan Estimation of Soil Transfer Parameters through Infiltration Experiments-BEST. Soil Sci. Soc. Am. J. 70:521-532. doi:10.2136/sssaj2005.0026 Smettem, K.R.J., Parlange, J.Y., Ross, P.J., Haverkamp, R., 1994. Three-dimensional analysis of infiltration from the disc infiltrometer: 1. A capillary-based theory. Water Resour. Res. 30, 2925-2929. doi:10.1029/94WR01787 Wind, G.P. 1968. Capillary conductivity data estimated by a simple method. In: Water in the Unsaturated Zone, Proceedings of Wageningen Symposium, June 1966, Vol. 1 (eds P.E. Rijtema & H. Wassink), pp. 181-191, IASAH, Gentbrugge, Belgium.
NASA Astrophysics Data System (ADS)
Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.
2018-04-01
Small-strain measurement in the triaxial test is considered significantly more accurate than external strain measurement using the conventional method, which is subject to systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, equally spaced at 120° from one another. The setup, using a 0.4 N resolution load cell and a 16-bit A/D converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength that can reduce the cost and time of experimental laboratory testing.
Photometric Study of Fourteen Low-mass Binaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korda, D.; Zasche, P.; Wolf, M.
2017-07-01
New CCD photometric observations of fourteen short-period low-mass eclipsing binaries (LMBs) in the photometric filters I, R, and V were used for a light curve analysis. A discrepancy remains, in general, between observed radii and those derived from theoretical modeling for LMBs. Mass calibration of all observed LMBs was performed using only the photometric indices. The light curve modeling of these selected systems was completed, yielding newly derived masses and radii for both components. We compared these systems with the compilation of other known double-lined LMB systems with uncertainties of masses and radii less than 5%, which includes 66 components of binaries for which both spectroscopy and photometry were combined. All of our systems are circular short-period binaries, and for some of them photospheric spots were also used. A purely photometric study of the light curves without spectroscopy seems unable to achieve high enough precision and accuracy in the masses and radii to act as a meaningful test of the M-R relation for low-mass stars.
Practical sliced configuration spaces for curved planar pairs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, E.
1999-01-01
In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.
Stellar occultation spikes as probes of atmospheric structure and composition. [for Jupiter
NASA Technical Reports Server (NTRS)
Elliot, J. L.; Veverka, J.
1976-01-01
The characteristics of spikes observed in occultation light curves of Beta Scorpii by Jupiter are discussed in terms of the gravity-gradient model. The occultation of Beta Sco by Jupiter on May 13, 1971, is reviewed, and the gravity-gradient model is defined as an isothermal atmosphere of constant composition in which the refractivity is a function only of the radial coordinate from the center of refraction, which is assumed to lie parallel to the local gravity gradient. The derivation of the occultation light curve in terms of the atmosphere, the angular diameter of the occulted star, and the occultation geometry is outlined. It is shown that analysis of the light-curve spikes can yield the He/H2 concentration ratio in a well-mixed atmosphere, information on fine-scale atmospheric structure, high-resolution images of the occulted star, and information on ray crossing. Observational limits are placed on the magnitude of horizontal refractivity gradients, and it is concluded that the spikes are the result of local atmospheric density variations: atmospheric layers, density waves, or turbulence.
Know the Planet, Know the Star: Precise Stellar Parameters with Kepler
NASA Astrophysics Data System (ADS)
Sandford, Emily; Kipping, David M.
2017-01-01
The Kepler space telescope has revolutionized exoplanetary science with unprecedentedly precise photometric measurements of the light curves of transiting planets. In addition to information about the planet and its orbit, encoded in each Kepler transiting planet light curve are certain properties of the host star, including the stellar density and the limb darkening profile. For planets with strong prior constraints on orbital eccentricity (planets to which we refer as “stellar anchors”), we may measure these stellar properties directly from the light curve. This method promises to aid greatly in the characterization of transiting planet host stars targeted by the upcoming NASA TESS mission and any long-period, singly-transiting planets discovered in the same systems. Using Bayesian inference, we fit a transit model, including a nonlinear limb darkening law, to a large sample of transiting planet hosts to measure their stellar properties. We present the results of our analysis, including posterior stellar density distributions for each stellar host, and show how the method yields superior precision to literature stellar properties in the majority of cases studied.
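The stellar density encoded in a transit light curve follows, for a circular orbit, the standard relation ρ* = (3π/GP²)(a/R*)³. A sketch (the constants and Earth-Sun example are illustrative assumptions, not the paper's Bayesian fit):

```python
import math

# Sketch of the standard circular-orbit relation behind light-curve
# stellar densities: rho_star = (3*pi / (G*P^2)) * (a/R_star)^3.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def stellar_density(period_days, a_over_rstar):
    """Mean stellar density in kg/m^3 from period and scaled semi-major axis."""
    P = period_days * 86400.0  # seconds
    return 3.0 * math.pi / (G * P * P) * a_over_rstar ** 3

rho = stellar_density(365.25, 215.0)  # Earth-Sun analog
print(rho)  # roughly the solar mean density, ~1.4e3 kg/m^3
```

In the paper's approach, a/R* (and hence ρ*) is itself inferred from the transit shape, with eccentricity priors for the "stellar anchor" planets.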
An item response curves analysis of the Force Concept Inventory
NASA Astrophysics Data System (ADS)
Morris, Gary A.; Harshman, Nathan; Branum-Martin, Lee; Mazur, Eric; Mzoughi, Taha; Baker, Stephen D.
2012-09-01
Several years ago, we introduced the idea of item response curves (IRC), a simplistic form of item response theory (IRT), to the physics education research community as a way to examine item performance on diagnostic instruments such as the Force Concept Inventory (FCI). We noted that a full-blown analysis using IRT would be a next logical step, which several authors have since taken. In this paper, we show that our simple approach not only yields similar conclusions in the analysis of the performance of items on the FCI to the more sophisticated and complex IRT analyses but also permits additional insights by characterizing both the correct and incorrect answer choices. Our IRC approach can be applied to a variety of multiple-choice assessments but, as applied to a carefully designed instrument such as the FCI, allows us to probe student understanding as a function of ability level through an examination of each answer choice. We imagine that physics teachers could use IRC analysis to identify prominent misconceptions and tailor their instruction to combat those misconceptions, fulfilling the FCI authors' original intentions for its use. Furthermore, the IRC analysis can assist test designers to improve their assessments by identifying nonfunctioning distractors that can be replaced with distractors attractive to students at various ability levels.
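The IRC construction itself is simple tabulation: for each total-score (ability) bin, compute the fraction of students selecting each answer choice of an item. A sketch with hypothetical data (not FCI responses):

```python
from collections import defaultdict

# Sketch of the IRC construction: for each total-score (ability) bin,
# the fraction of students selecting each answer choice of one item.
# Scores and answers below are hypothetical, not FCI data.
def item_response_curves(total_scores, item_answers):
    counts = defaultdict(lambda: defaultdict(int))
    for score, choice in zip(total_scores, item_answers):
        counts[score][choice] += 1
    curves = {}
    for score, by_choice in counts.items():
        n = sum(by_choice.values())
        for choice in by_choice:
            curves.setdefault(choice, {})[score] = by_choice[choice] / n
    return curves

scores = [3, 3, 5, 5, 5, 9, 9, 9]
answers = ['B', 'C', 'B', 'A', 'A', 'A', 'A', 'A']  # 'A' is correct
curves = item_response_curves(scores, answers)
print(curves['A'])  # selection of the correct choice rises with score
print(curves['B'])  # a distractor attractive to low scorers
```

Plotting each choice's curve against score is what reveals nonfunctioning distractors and ability-dependent misconceptions.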
Granja, M F; Pedraza, C M; Flórez, D C; Romero, J A; Palau, M A; Aguirre, D A
To evaluate the diagnostic performance of the length of tumor contact with the capsule (LTC) and the apparent diffusion coefficient (ADC) map in the prediction of microscopic extracapsular extension in patients with prostate cancer who are candidates for radical prostatectomy. We used receiver operating characteristic (ROC) curves to retrospectively study the diagnostic performance of the ADC map and the LTC as predictors of microscopic extracapsular extension in 92 patients with moderate- to high-risk prostate cancer who were examined between May 2011 and December 2013. The optimal cutoff for the ADC map was 0.87 × 10⁻³ mm²/s, which yielded an area under the ROC curve of 72% (95% CI: 57%-86%), corresponding to a sensitivity of 83% and a specificity of 61%. The optimal cutoff for the LTC was 17.5 mm, which yielded an area under the ROC curve of 74% (95% CI: 61%-87%), corresponding to a sensitivity of 91% and a specificity of 57%. Combining the two criteria improved the diagnostic performance, yielding an area under the ROC curve of 77% (95% CI: 62%-92%), corresponding to a sensitivity of 77% and a specificity of 61%. We constructed a logistic regression model, obtaining an area under the ROC curve of 82% (95% CI: 73%-93%). Using quantitative measures improves the diagnostic accuracy of multiparametric magnetic resonance imaging in the staging of prostate cancer. The values of the ADC and LTC were predictors of microscopic extracapsular extension, and the best results were obtained when both values were used in combination. Copyright © 2017 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
Description of 3D digital curves using the theory of free groups
NASA Astrophysics Data System (ADS)
Imiya, Atsushi; Oosawa, Muneaki
1999-09-01
In this paper, we propose a new descriptor for two- and three-dimensional digital curves using the theory of free groups. A spatial digital curve is expressed as a word in the free group on three generators. These three symbols correspond to the directions of the orthogonal coordinate axes, respectively. Since a digital curve is treated as a word, that is, a sequence of alphabetical symbols, this expression permits us to describe any geometric operation as rewriting rules for words. Furthermore, the symbolic derivative of words yields geometric invariants of digital curves under digital Euclidean motion. These invariants enable us to design algorithms for the matching and searching of partial structures of digital curves. Moreover, these symbolic descriptors define global and local distances for digital curves as edit distances.
Centritto, Mauro; Lauteri, Marco; Monteverdi, Maria Cristina; Serraj, Rachid
2009-01-01
Genotypic variations in leaf gas exchange and yield were analysed in five upland-adapted and three lowland rice cultivars subjected to a differential soil moisture gradient, varying from well-watered to severely water-stressed conditions. A reduction in the amount of water applied resulted in a significant decrease in leaf gas exchange and, subsequently, in above-ground dry mass and grain yield, which varied among genotypes and with distance from the line source. The comparison between the variable J and the Delta in recently synthesized sugars methods yielded congruent estimates of mesophyll conductance (g(m)), confirming the reliability of these two techniques. Our data demonstrate that g(m) is a major determinant of photosynthesis (A), because rice genotypes with inherently higher g(m) were capable of maintaining higher A under stressed conditions. Furthermore, A, g(s), and g(m) of water-stressed genotypes rapidly recovered to well-watered values upon relief of the water stress, indicating that drought did not cause any lasting metabolic limitation to photosynthesis. The comparisons between the A/C(i) and corresponding A/C(c) curves, measured in the genotypes with intrinsically higher and lower instantaneous A, confirmed this finding. Moreover, the effect of drought stress on grain yield was correlated with the effects on both A and total diffusional limitations to photosynthesis. Overall, these data indicate that genotypes with higher photosynthesis and conductances were also generally more productive across the entire soil moisture gradient. The analysis of Delta revealed substantial variation in water use efficiency among the genotypes, on both the long-term (leaf pellet analysis) and short-term (leaf soluble sugars analysis) scales.
IDF relationships using bivariate copula for storm events in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Ariff, N. M.; Jemain, A. A.; Ibrahim, K.; Wan Zin, W. Z.
2012-11-01
Intensity-duration-frequency (IDF) curves are used in many hydrologic designs for the purposes of water management and flood prevention. The IDF curves available in Malaysia are those obtained from a univariate analysis approach, which only considers the intensity of rainfalls at fixed time intervals. As several rainfall variables are correlated with each other, such as intensity and duration, this paper aims to derive IDF points for storm events in Peninsular Malaysia by means of bivariate frequency analysis. This is achieved by utilizing the relationship between storm intensities and durations using the copula method. Four types of copulas, namely the Ali-Mikhail-Haq (AMH), Frank, Gaussian and Farlie-Gumbel-Morgenstern (FGM) copulas, are considered because the correlation between storm intensity, I, and duration, D, is negative and these copulas are appropriate when the relationship between the variables is negative. The correlations are obtained by means of Kendall's τ estimation. The analysis was performed on twenty rainfall stations with hourly data across Peninsular Malaysia. Using Akaike's Information Criterion (AIC) to test goodness-of-fit, both the Frank and Gaussian copulas are found to be suitable to represent the relationship between I and D. The IDF points found by the copula method are compared to the IDF curves obtained from the typical IDF empirical formula of the univariate approach. This study indicates that storm intensities obtained from both methods are in agreement with each other for any given storm duration and for various return periods.
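The dependence-fitting step can be sketched as Kendall's τ from paired (intensity, duration) observations, followed by a copula parameter from a closed form. For the FGM copula τ = 2θ/9, so it only admits weak dependence (|τ| ≤ 2/9); stronger negative I-D dependence needs, e.g., the Frank or Gaussian copula, consistent with the study's finding. The storm data below are hypothetical:

```python
from itertools import combinations

# Sketch: Kendall's tau from paired storm (intensity, duration) data
# (ties ignored in this sketch), then the FGM copula parameter via the
# closed form tau = 2*theta/9, valid only for |tau| <= 2/9.
def kendall_tau(xs, ys):
    n = len(xs)
    conc = disc = 0
    for i, j in combinations(range(n), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

def fgm_theta(tau):
    if abs(tau) > 2.0 / 9.0:
        raise ValueError("|tau| too large for the FGM copula")
    return 4.5 * tau

intensity = [20.1, 15.3, 9.8, 7.2, 5.5]  # hypothetical storms (mm/h)
duration = [1.0, 3.0, 2.0, 5.0, 8.0]     # hours
tau = kendall_tau(intensity, duration)
print(tau)              # strongly negative: FGM inadmissible here
print(fgm_theta(-0.1))  # weak dependence: admissible theta
```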
UROKIN: A Software to Enhance Our Understanding of Urogenital Motion.
Czyrnyj, Catriona S; Labrosse, Michel R; Graham, Ryan B; McLean, Linda
2018-05-01
Transperineal ultrasound (TPUS) allows for objective quantification of mid-sagittal urogenital mechanics, yet current practice omits dynamic motion information in favor of analyzing only a rest frame and a peak motion frame. This work details the development of UROKIN, a semi-automated software which calculates kinematic curves of urogenital landmark motion. A proof-of-concept analysis was performed using UROKIN on TPUS videos recorded from 20 women with and 10 women without stress urinary incontinence (SUI) performing maximum voluntary contractions of the pelvic floor muscles. The anorectal angle and bladder neck were tracked, while the motion of the pubic symphysis was used to compensate for the error incurred by TPUS probe motion during imaging. Kinematic curves of landmark motion were generated for each video, and the curves were smoothed, time-normalized, and averaged within groups. Kinematic data yielded by the UROKIN software showed statistically significant differences between women with and without SUI in terms of the magnitude and timing characteristics of the kinematic curves depicting landmark motion. The results provide insight into the ways in which UROKIN may be useful for studying differences in pelvic floor muscle contraction mechanics between women with and without SUI and other pelvic floor disorders. The UROKIN software improves on methods described in the literature and provides a unique capacity to further our understanding of urogenital biomechanics.
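The time-normalization step mentioned above can be sketched as resampling each landmark trajectory onto a fixed grid (0-100% of the contraction) so curves from different videos can be averaged point-wise; linear interpolation is an assumption here, as the paper's exact scheme is not specified in the abstract:

```python
# Sketch of the conditioning step: resample a landmark trajectory onto a
# fixed number of time points so curves of different durations can be
# averaged point-wise. Linear interpolation is an assumption here.
def time_normalize(samples, n_points=101):
    m = len(samples)
    out = []
    for k in range(n_points):
        pos = k * (m - 1) / (n_points - 1)   # fractional index in samples
        i = min(int(pos), m - 2)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
    return out

curve = time_normalize([0.0, 1.0, 4.0, 9.0], n_points=7)
print(curve)  # [0.0, 0.5, 1.0, 2.5, 4.0, 6.5, 9.0]
```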
NASA Astrophysics Data System (ADS)
Budiarsa, I. N.; Gde Antara, I. N.; Dharma, Agus; Karnata, I. N.
2018-04-01
Under an indentation, the material undergoes a complex deformation. One of the most effective ways to analyse indentation has been the representative method. The concept, coupled with finite element (FE) modelling, has been used successfully in analysing sharp indenters. It is of great importance to extend this method to spherical indentation and the associated hardness systems. One particular case is the Rockwell B test, where the hardness is determined by two points on the P-h curve of a spherical indenter. In this case, an established link between material parameters and P-h curves can naturally lead to direct hardness estimation from the material parameters (e.g. yield stress (σy) and work hardening coefficient (n)). This could provide a useful tool for both research and industrial applications. Two methods to predict the P-h curve in spherical indentation have been established: one using a C1-C2 polynomial equation approach and the other using a depth approach. Both approaches have been successful. An effective method for representing the P-h curves using a normalized representative stress concept was established. The concept and methodology developed are used to predict hardness (HRB) values of materials through direct analysis and are validated with experimental data on selected samples of steel.
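Assuming the "C1-C2 polynomial" names a two-term loading-curve form P(h) = C1·h + C2·h² (one plausible reading; the form and all numbers below are illustrative assumptions), two measured (h, P) points determine both coefficients, mirroring the two load points of the Rockwell B test:

```python
# Sketch under an ASSUMED two-term loading-curve form for spherical
# indentation, P(h) = C1*h + C2*h^2; the form and the hypothetical
# (h, P) readings below are illustrative, not the paper's data.
def fit_c1_c2(h1, p1, h2, p2):
    """Solve P = C1*h + C2*h^2 exactly through two measured points."""
    det = h1 * h2 * h2 - h2 * h1 * h1
    c1 = (p1 * h2 * h2 - p2 * h1 * h1) / det
    c2 = (p2 * h1 - p1 * h2) / det
    return c1, c2

c1, c2 = fit_c1_c2(0.01, 50.0, 0.02, 140.0)  # hypothetical (mm, N) readings
print(c1, c2)
```

With C1 and C2 linked to (σy, n) via FE-calibrated relations, the same two points would yield a hardness estimate directly from material parameters.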
NASA Astrophysics Data System (ADS)
Torres-Perez, J. L.; Guild, L. S.; Armstrong, R.; Corredor, J. E.; Polanco, R.; Zuluaga-Montero, A. B.
2013-05-01
Coral reefs are highly heterogeneous ecosystems with a plethora of photosynthetic organisms forming most of the benthic communities. Usually, coral reef benthos is a composite of reef corals, different groups of algae, seagrasses, sandy bottoms, dead rubble, and even mangrove forests living in a relatively small area. The remote characterization of these important tropical ecosystems represents a challenge to scientists, particularly due to the similarity of the spectral signatures among some of these components. As such, we examined the similarities and differences between the spectral reflectance curves of nine scleractinian Caribbean shallow-water reef corals. Samples were also collected from each species for pigment analysis using High Performance Liquid Chromatography (HPLC). Reflectance curves were obtained with the aid of a GER-1500 hand-held field spectroradiometer enclosed in an underwater housing. Our findings showed that even though most of the pigmentation was directly related to the relationship of corals with their symbiotic dinoflagellates (zooxanthellae), the presence of other endolithic photosynthetic organisms can also contribute to the light absorption of corals and, hence, the reflectance spectra of each species. Also, the relative contribution of chlorophylls vs. carotenes or xanthophylls depends on the coral species, with some species relying more on Chlorophyll a and other species relying on Chlorophyll c2 and Peridinin with a small Chlorophyll a component. Pigments associated with the xanthophyll cycle of dinoflagellates (Diadinoxanthin and Diatoxanthin) were detected in most species. Pigments typical of endolithic organisms such as Zeaxanthin, Fucoxanthin, Violaxanthin and Siphonaxanthin were also detected in some coral species. The influence of major pigments on the reflectance curve was evidenced with a 2nd-derivative analysis, which could be used to discriminate among most species.
Further, integration of the area under the reflectance curve in the photosynthetically active radiation band (PAR; 400-700 nm) yielded an inverse relationship with total pigment concentration at up to a 97% confidence level. Corals were distinguished from seagrasses and other benthic components based on their reflectance and differences in curve inflection peaks. Special care needs to be taken when characterizing sandy bottoms, as they are influenced by the presence of photosynthetic microbiota, as reflected in their reflectance curves. The use of this integration is proposed as a novel non-invasive method to predict pigment changes in reef corals, with the aim of monitoring their health under the present climate change scenario.
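The two analyses described, a 2nd-derivative trace to locate pigment-driven inflections and integration of the area under the reflectance curve over the PAR band, can be sketched minimally as below. Function names are mine, the spectrum is illustrative, and unit wavelength spacing is assumed.

```python
def second_derivative(y):
    """Central-difference 2nd derivative (unit spacing); pigment
    absorption features appear as extrema of this trace."""
    return [y[i - 1] - 2 * y[i] + y[i + 1] for i in range(1, len(y) - 1)]

def par_area(wavelengths, reflectance, lo=400.0, hi=700.0):
    """Trapezoidal integral of reflectance over the PAR band (400-700 nm)."""
    area = 0.0
    for i in range(len(wavelengths) - 1):
        w0, w1 = wavelengths[i], wavelengths[i + 1]
        if lo <= w0 and w1 <= hi:
            area += 0.5 * (reflectance[i] + reflectance[i + 1]) * (w1 - w0)
    return area

# Flat 50% reflectance across the PAR band (illustrative spectrum only)
wl = list(range(400, 701))
refl = [0.5] * len(wl)
```

Lower PAR-band area corresponds to more absorption, hence the inverse relationship with total pigment concentration reported above.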
Liou, Kevin; Negishi, Kazuaki; Ho, Suyen; Russell, Elizabeth A; Cranney, Greg; Ooi, Sze-Yuan
2016-08-01
Global longitudinal strain (GLS) is well validated and has important applications in contemporary clinical practice. The aim of this analysis was to evaluate the accuracy of resting peak GLS in the diagnosis of obstructive coronary artery disease (CAD). A systematic literature search was performed through July 2015 using four databases. Data were extracted independently by two authors and correlated before analyses. Using a random-effect model, the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, and summary area under the curve for GLS were estimated with their respective 95% CIs. Screening of 1,669 articles yielded 10 studies with 1,385 patients appropriate for inclusion in the analysis. The mean age and left ventricular ejection fraction were 59.9 years and 61.1%. On the whole, 54.9% and 20.9% of the patients had hypertension and diabetes, respectively. Overall, abnormal GLS detected moderate to severe CAD with a pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of 74.4%, 72.1%, 2.9, and 0.35 respectively. The area under the curve and diagnostic odds ratio were 0.81 and 8.5. The mean values of GLS for those with and without CAD were -16.5% (95% CI, -15.8% to -17.3%) and -19.7% (95% CI, -18.8% to -20.7%), respectively. Subgroup analyses for patients with severe CAD and normal left ventricular ejection fractions yielded similar results. Current evidence supports the use of GLS in the detection of moderate to severe obstructive CAD in symptomatic patients. GLS may complement existing diagnostic algorithms and act as an early adjunctive marker of cardiac ischemia. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
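The likelihood ratios and diagnostic odds ratio relate to sensitivity and specificity by standard definitions. The sketch below recomputes them from the pooled sensitivity and specificity; because the meta-analysis pools each metric separately under a random-effects model, the published pooled LR+ (2.9) and DOR (8.5) need not match these naive recomputations exactly.

```python
def diagnostic_metrics(sens, spec):
    """Positive/negative likelihood ratios and diagnostic odds ratio
    computed directly from sensitivity and specificity."""
    lr_pos = sens / (1.0 - spec)          # how much a positive test raises odds
    lr_neg = (1.0 - sens) / spec          # how much a negative test lowers odds
    return lr_pos, lr_neg, lr_pos / lr_neg

# Pooled sensitivity and specificity reported in the meta-analysis
lr_pos, lr_neg, dor = diagnostic_metrics(0.744, 0.721)
```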
Aref-Eshghi, Erfan; Oake, Justin; Godwin, Marshall; Aubrey-Bassler, Kris; Duke, Pauline; Mahdavian, Masoud; Asghari, Shabnam
2017-03-01
The objective of this study was to define the optimal algorithm to identify patients with dyslipidemia using electronic medical records (EMRs). EMRs of patients attending primary care clinics in St. John's, Newfoundland and Labrador (NL), Canada during 2009-2010, were studied to determine the best algorithm for identification of dyslipidemia. Six algorithms containing three components, dyslipidemia ICD coding, lipid lowering medication use, and abnormal laboratory lipid levels, were tested against a gold standard, defined as the existence of any of the three criteria. Linear discriminate analysis, and bootstrapping were performed following sensitivity/specificity testing and receiver's operating curve analysis. Two validating datasets, NL records of 2011-2014, and Canada-wide records of 2010-2012, were used to replicate the results. Relative to the gold standard, combining laboratory data together with lipid lowering medication consumption yielded the highest sensitivity (99.6%), NPV (98.1%), Kappa agreement (0.98), and area under the curve (AUC, 0.998). The linear discriminant analysis for this combination resulted in an error rate of 0.15 and an Eigenvalue of 1.99, and the bootstrapping led to AUC: 0.998, 95% confidence interval: 0.997-0.999, Kappa: 0.99. This algorithm in the first validating dataset yielded a sensitivity of 97%, Negative Predictive Value (NPV) = 83%, Kappa = 0.88, and AUC = 0.98. These figures for the second validating data set were 98%, 93%, 0.95, and 0.99, respectively. Combining laboratory data with lipid lowering medication consumption within the EMR is the best algorithm for detecting dyslipidemia. These results can generate standardized information systems for dyslipidemia and other chronic disease investigations using EMRs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weaver, Jordan S.; Pathak, Siddhartha; Reichardt, Ashley
Experimentally quantifying the mechanical effects of radiation damage in reactor materials is necessary for the development and qualification of new materials for improved performance and safety. This can be achieved in a high-throughput fashion through a combination of ion beam irradiation and small scale mechanical testing in contrast to the high cost and laborious nature of bulk testing of reactor irradiated samples. The current paper focuses on using spherical nanoindentation stress-strain curves on unirradiated and proton irradiated (10 dpa at 360 °C) 304 stainless steel to quantify the mechanical effects of radiation damage. Spherical nanoindentation stress-strain measurements show a radiation-induced increase in indentation yield strength from 1.36 GPa to 2.72 GPa and a radiation-induced increase in indentation work hardening rate of 10 GPa–30 GPa. These measurements are critically compared against Berkovich nanohardness, micropillar compression, and micro-tension measurements on the same material and similar grain orientations. The ratio of irradiated to unirradiated yield strength increases by a similar factor of 2 when measured via spherical nanoindentation or Berkovich nanohardness testing. A comparison of spherical indentation stress-strain curves to uniaxial (micropillar and micro-tension) stress-strain curves was achieved using a simple scaling relationship which shows good agreement for the unirradiated condition and poor agreement in post-yield behavior for the irradiated condition. Finally, the disagreement between spherical nanoindentation and uniaxial stress-strain curves is likely due to the plastic instability that occurs during uniaxial tests but is absent during spherical nanoindentation tests.
Zhang, Sijia; Liu, Xianghua; Liu, Lizhong
2018-01-01
In this paper, the distribution of microstructure and mechanical properties along the rolling direction of tailor rolled blanks (TRB) was investigated. A tensile specimen with equal probability in yield (EYS) was first designed considering variation both in thickness and in material strength. The uniaxial tension test was carried out with a digital image correlation method to analyze the mechanical behaviors. The results showed that the strain distribution of the EYS was homogeneous, indicating that the new design philosophy for a TRB tensile specimen is reasonable and that the EYS is suitable for characterizing the mechanical behavior of TRB. The true stress-strain curves of the metal in different cross sections of the TRB were calculated. On the basis of the true stress-strain curves, a material model of TRB was constructed and then implemented in finite element simulations of TRB uniaxial tensile tests. The strain distributions of the numerical and experimental results were similar, and the error between the post-fracture elongation of the specimen obtained by experiment and by FE ranged from 9.51% to 13.06%. Therefore, the simulation results match the experimental results well, and the material model offers both high accuracy and practicability. PMID:29710772
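The abstract does not state how the true stress-strain curves were obtained from the tension tests; a common conversion, sketched below under the uniform-deformation (constant-volume) assumption and valid only up to the onset of necking, is the standard one from engineering quantities. The numerical example is illustrative, not TRB data.

```python
import math

def true_stress_strain(eng_stress, eng_strain):
    """Convert engineering stress/strain to true stress/strain under
    the constant-volume assumption (valid up to the onset of necking):
    sigma_t = sigma_e * (1 + eps_e),  eps_t = ln(1 + eps_e)."""
    return eng_stress * (1.0 + eng_strain), math.log(1.0 + eng_strain)

# Illustrative point (assumed values): 400 MPa at 10% engineering strain
true_s, true_e = true_stress_strain(400.0, 0.10)
```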
Zhao, Ben; Ata-UI-Karim, Syed Tahir; Yao, Xia; Tian, YongChao; Cao, WeiXing; Zhu, Yan; Liu, XiaoJun
2016-01-01
Diagnosing the status of crop nitrogen (N) helps to optimize crop yield, improve N use efficiency, and reduce the risk of environmental pollution. The objectives of the present study were to develop a critical N (Nc) dilution curve for winter wheat (based on spike dry matter [SDM] during the reproductive growth period), to compare this curve with the existing Nc dilution curve (based on plant dry matter [DM] of winter wheat), and to explore its ability to reliably estimate the N status of winter wheat. Four field experiments, using varied N fertilizer rates (0–375 kg ha^-1) and six cultivars (Yangmai16, Ningmai13, Ningmai9, Aikang58, Yangmai12, Huaimai 17), were conducted in the Jiangsu province of eastern China. Twenty plants from each plot were sampled to determine the SDM and spike N concentration (SNC) during the reproductive growth period. The spike Nc curve was described by Nc = 2.85×SDM^(-0.17), with SDM ranging from 0.752 to 7.233 t ha^-1. The newly developed curve was lower than the Nc curve based on plant DM. The N nutrition index (NNI) for spike dry matter ranged from 0.62 to 1.1 during the reproductive growth period across the seasons. Relative yield (RY) increased with increasing NNI; however, when NNI was greater than 0.96, RY plateaued and remained stable. The spike Nc dilution curve can be used to correctly identify the N nutrition status of winter wheat to support N management during the reproductive growth period for winter wheat in eastern China. PMID:27732634
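A minimal sketch of how the reported dilution curve and the N nutrition index (NNI) are applied. Function names are mine; the curve is only supported for SDM between 0.752 and 7.233 t/ha.

```python
def critical_n(sdm):
    """Critical N concentration (%) from spike dry matter SDM (t/ha),
    using the curve reported in the study: Nc = 2.85 * SDM**-0.17.
    Supported only for SDM between 0.752 and 7.233 t/ha."""
    return 2.85 * sdm ** -0.17

def nni(spike_n_conc, sdm):
    """N nutrition index: measured spike N concentration over the
    critical value at the same SDM; NNI < 1 indicates N deficiency."""
    return spike_n_conc / critical_n(sdm)
```

Because the exponent is negative, the critical concentration dilutes (declines) as the spike accumulates dry matter, which is the defining feature of an Nc dilution curve.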
Learning curve in robotic rectal cancer surgery: current state of affairs.
Jiménez-Rodríguez, Rosa M; Rubio-Dorado-Manzanares, Mercedes; Díaz-Pavón, José Manuel; Reyes-Díaz, M Luisa; Vazquez-Monchul, Jorge Manuel; Garcia-Cabrera, Ana M; Padillo, Javier; De la Portilla, Fernando
2016-12-01
Robotic-assisted rectal cancer surgery offers multiple advantages for surgeons, and it appears to yield clinical outcomes at short-term follow-up comparable to those of conventional laparoscopy. This surgical approach has emerged as a technique aimed at overcoming the limitations posed by rectal cancer and other surgical fields of difficult access, in order to obtain better outcomes and a shorter learning curve. A systematic review of the literature on robot-assisted rectal surgery was carried out according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The search was conducted in October 2015 in PubMed, MEDLINE and the Cochrane Central Register of Controlled Trials, for articles published in the last 10 years pertaining to the learning curve of robotic surgery for colorectal cancer. It used the following key words: "rectal cancer/learning curve/robotic-assisted laparoscopic surgery". A total of 34 references were identified, but only 9 full texts specifically addressed the analysis of the learning curve in robot-assisted rectal cancer surgery; 7 were case series and 2 were non-randomised case-comparison series. Eight papers used the cumulative sum (CUSUM) method, and only one author divided the series into two groups to compare them. The mean number of cases for phase I of the learning curve was calculated to be 29.7 patients; phase II corresponds to a mean of 37.4 patients. The mean number of cases required for the surgeon to be classed as an expert in robotic surgery was calculated to be 39 patients. The advantages of robotic surgery could have an impact on the learning curve for rectal cancer and lower the number of cases necessary for rectal resections.
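As a hedged sketch of the cumulative sum (CUSUM) technique used by eight of the reviewed papers: the running sum of deviations of each case's operative time from a reference value is plotted against case number, and changes in slope mark transitions between learning-curve phases. The operative times below are hypothetical, not data from any reviewed study.

```python
def cusum(values, target):
    """Cumulative sum of deviations from a target value. Applied to
    per-case operative times, a change of slope in this curve marks a
    transition between learning-curve phases."""
    total, curve = 0.0, []
    for v in values:
        total += v - target
        curve.append(total)
    return curve

# Hypothetical operative times (minutes) showing improvement after case 5
times = [240.0, 230.0, 235.0, 225.0, 215.0, 200.0, 195.0, 190.0, 185.0, 180.0]
target = sum(times) / len(times)
learning_curve = cusum(times, target)
```

The curve rises while cases take longer than the reference and falls once performance improves; the turning point is the end of the initial learning phase.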
Fabrication of slender elastic shells by the coating of curved surfaces
NASA Astrophysics Data System (ADS)
Lee, A.; Brun, P.-T.; Marthelot, J.; Balestra, G.; Gallaire, F.; Reis, P. M.
2016-04-01
Various manufacturing techniques exist to produce double-curvature shells, including injection, rotational and blow molding, as well as dip coating. However, these industrial processes are typically geared for mass production and are not directly applicable to laboratory research settings, where adaptable, inexpensive and predictable prototyping tools are desirable. Here, we study the rapid fabrication of hemispherical elastic shells by coating a curved surface with a polymer solution that yields a nearly uniform shell, upon polymerization of the resulting thin film. We experimentally characterize how the curing of the polymer affects its drainage dynamics and eventually selects the shell thickness. The coating process is then rationalized through a theoretical analysis that predicts the final thickness, in quantitative agreement with experiments and numerical simulations of the lubrication flow field. This robust fabrication framework should be invaluable for future studies on the mechanics of thin elastic shells and their intrinsic geometric nonlinearities.
Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots
Gilbert, Hunter B.; Webster, Robert J.
2016-01-01
Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C. PMID:27648473
A posteriori noise estimation in variable data sets. With applications to spectra and light curves
NASA Astrophysics Data System (ADS)
Czesla, S.; Molle, T.; Schmitt, J. H. M. M.
2018-01-01
Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise is accurately known prior to the measurement, so both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, which requires a sufficiently well-sampled data set to yield reliable results. The procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines for applying the procedure in situations not explicitly considered here, to promote its adoption in data analysis.
Identifying Children in Middle Childhood Who Are at Risk for Reading Problems.
Speece, Deborah L; Ritchey, Kristen D; Silverman, Rebecca; Schatschneider, Christopher; Walker, Caroline Y; Andrusik, Katryna N
2010-06-01
The purpose of this study was to identify and evaluate a universal screening battery for reading that is appropriate for older elementary students in a response-to-intervention model. Multiple measures of reading and reading correlates were administered to 230 fourth-grade children. Teachers rated children's reading skills, academic competence, and attention. Children were classified as not-at-risk or at-risk readers based on a three-factor model reflecting reading comprehension, word recognition/decoding, and word fluency. Predictors of reading status included group-administered tests of reading comprehension, silent word reading fluency, and teacher ratings of reading problems. Inclusion of individually administered tests and growth estimates did not add substantial variance. The receiver operating characteristic curve analysis yielded an area under the curve index of 0.90, suggesting this model may both accurately and efficiently screen older elementary students with reading problems.
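For readers unfamiliar with the AUC index reported above, it equals the probability that a randomly chosen at-risk child receives a higher risk score than a randomly chosen not-at-risk child (the Mann-Whitney formulation). A small illustrative sketch, not using the study's data:

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve as the fraction of (positive, negative)
    pairs in which the positive case scores higher; ties count half
    (Mann-Whitney formulation)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.90, as reported, means the screening battery ranks an at-risk reader above a not-at-risk reader 90% of the time.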
Spychalski, Michał; Skulimowski, Aleksander; Dziki, Adam; Saito, Yutaka
2017-12-01
To date, we lack a detailed description of the colorectal endoscopic submucosal dissection (ESD) learning curve that represents the experience of a Western center. The aim of this study was to define the critical points of the learning curve and to draw up lesion qualification guidelines tailored to the endoscopist's experience. We carried out a single-center prospective study. Between June 2013 and December 2016, 228 primary colorectal lesions were managed by the ESD procedure. In order to create a learning curve model and to carry out the analysis, the cases were divided into six periods, each consisting of 38 cases. The overall en bloc resection rate was 79.39%. The lowest en bloc resection rate (52.36%) was observed in the first period. After 76 completed procedures, the resection rate surged to 86%, accompanied by a significant increase in the mean procedure speed to ≥9 cm²/h. Lesion localization and diameter had a significant impact on the outcomes. After 76 procedures, en bloc resection rates of 90.9% and 90.67% were achieved for the left side of the colon and the rectum, respectively. In the right side of the colon, a statistically significantly lower resection rate of 67.57% was observed. We have shown that, in the setting of a Western center, colorectal ESD can yield excellent results. It seems that the key to success during the learning period is 'tailoring' lesion qualification guidelines to the experience of the endoscopist, as lesion diameter and localization strongly influence the outcomes.
Binary Sources and Binary Lenses in Microlensing Surveys of MACHOs
NASA Astrophysics Data System (ADS)
Petrovic, N.; Di Stefano, R.; Perna, R.
2003-12-01
Microlensing is an intriguing phenomenon which may yield information about the nature of dark matter. Early observational searches identified hundreds of microlensing light curves. The data set consisted mainly of point-lens light curves and binary-lens events in which the light curves exhibit caustic crossings. Very few mildly perturbed light curves were observed, although this latter type should constitute the majority of binary-lens light curves. Di Stefano (2001) has suggested that the failure to take binary effects into account may have influenced the estimates of optical depth derived from microlensing surveys. The work we report on here is the first step in a systematic analysis of binary lenses and binary sources and their impact on the results of statistical microlensing surveys. To assess the problem, we ran Monte Carlo simulations of various microlensing events involving binary stars (both as the source and as the lens). For each event with peak magnification > 1.34, we sampled the characteristic light curve and recorded the chi-squared value when fitting the curve with a point-lens model; we used this to assess the perturbation rate. We also recorded the parameters of each system, the maximum magnification, the times at which each light curve started and ended, and the number of caustic crossings. We found that both the binarity of sources and the binarity of lenses increased the lensing rate. While the binarity of sources had a negligible effect on the perturbation rates of the light curves, the binarity of lenses had a notable effect. The combination of binary sources and binary lenses produces an observable rate of interesting events exhibiting multiple "repeats", in which the magnification rises above and dips below 1.34 several times. Finally, the binarity of lenses impacted both the durations of the events and the maximum magnifications.
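The 1.34 threshold above is the standard point-source, point-lens (Paczynski) magnification at an impact parameter of one Einstein radius. A minimal sketch of this unperturbed model, against which the simulated binary light curves were fitted (function names are mine):

```python
import math

def point_lens_magnification(u):
    """Point-source, point-lens magnification at impact parameter u
    (in Einstein radii): A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)).
    u = 1 gives A = 3/sqrt(5) ~ 1.34, the usual event threshold."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def light_curve(u0, t_E, times, t0=0.0):
    """Unperturbed single-lens light curve for minimum impact parameter
    u0, Einstein crossing time t_E, and time of peak t0."""
    return [point_lens_magnification(
        math.sqrt(u0 * u0 + ((t - t0) / t_E) ** 2)) for t in times]
```

A binary lens or source distorts this smooth, symmetric curve; the chi-squared of a point-lens fit to the simulated data then quantifies the perturbation.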
This work was supported in part by the SAO intern program (NSF grant AST-9731923) and NASA contracts NAS8-39073 and NAS8-38248 (CXC).
An appraisal of the learning curve in robotic general surgery.
Pernar, Luise I M; Robertson, Faith C; Tavakkoli, Ali; Sheu, Eric G; Brooks, David C; Smink, Douglas S
2017-11-01
Robotic-assisted surgery is used with increasing frequency in general surgery for a variety of applications. In spite of this increase in usage, the learning curve is not yet defined. This study reviews the literature on the learning curve in robotic general surgery to inform adopters of the technology. PubMed and EMBASE searches yielded 3690 abstracts published between July 1986 and March 2016. The abstracts were evaluated based on the following inclusion criteria: written in English, reporting original work, focus on general surgery operations, and with explicit statistical methods. Twenty-six full-length articles were included in final analysis. The articles described the learning curves in colorectal (9 articles, 35%), foregut/bariatric (8, 31%), biliary (5, 19%), and solid organ (4, 15%) surgery. Eighteen of 26 (69%) articles report single-surgeon experiences. Time was used as a measure of the learning curve in all studies (100%); outcomes were examined in 10 (38%). In 12 studies (46%), the authors identified three phases of the learning curve. Numbers of cases needed to achieve plateau performance were wide-ranging but overlapping for different kinds of operations: 19-128 cases for colorectal, 8-95 for foregut/bariatric, 20-48 for biliary, and 10-80 for solid organ surgery. Although robotic surgery is increasingly utilized in general surgery, the literature provides few guidelines on the learning curve for adoption. In this heterogeneous sample of reviewed articles, the number of cases needed to achieve plateau performance varies by case type and the learning curve may have multiple phases as surgeons add more complex cases to their case mix with growing experience. Time is the most common determinant for the learning curve. The literature lacks a uniform assessment of outcomes and complications, which would arguably reflect expertise in a more meaningful way than time to perform the operation alone.
Comparative study of Acacia nilotica exudate gum and acacia gum.
Bhushette, Pravin R; Annapure, Uday S
2017-09-01
Over 900 species of Acacia trees are found on earth, and most of them produce gums. Acacia nilotica (babul tree) is one of the major gum-yielding acacia species found in the Indian subcontinent. A. nilotica gum (ANG) was collected from Maharashtra, India and characterised by proximate analysis and for its physicochemical, functional, rheological and thermal properties. These properties were then compared with those of commercially available Acacia gum (AG). The sugar composition of the gums indicated the presence of arabinose, galactose, and rhamnose in both ANG and AG. FTIR spectra of both gums showed the trend typical of polysaccharides; however, differences were observed in the fingerprint region. The rheological outcomes were derived from flow curve measurements of the gums at different concentrations and temperatures. Investigation of the flow curves of both gums revealed only a slight difference in viscosity profiles. Differences in the monosaccharide concentrations of the polysaccharides and in the proximate analysis of the gums could be responsible for the differences in their rheological and thermal properties. However, ANG closely resembles AG and can be substituted for it in numerous applications in the food and pharmaceutical industries. Copyright © 2017 Elsevier B.V. All rights reserved.
An Approach for Assessing Delamination Propagation Capabilities in Commercial Finite Element Codes
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2007-01-01
An approach for assessing the delamination propagation capabilities in commercial finite element codes is presented and demonstrated for one code. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. Good agreement between the load-displacement relationship obtained from the propagation analysis results and the benchmark results could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as may be expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2008-01-01
An approach for assessing the delamination propagation simulation capabilities in commercial finite element codes is presented and demonstrated. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. The load-displacement relationship and the total strain energy obtained from the propagation analysis results and the benchmark results were compared and good agreements could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as was expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.
General relativity exactly described in terms of Newton's laws within curved geometries
NASA Astrophysics Data System (ADS)
Savickas, D.
2014-07-01
Many years ago Milne and McCrea showed in their well-known paper that the Hubble expansion occurring in general relativity could be exactly described by the use of Newtonian mechanics. It will be shown that a similar method can be extended to, and used within, curved geometries when Newton's second law is expressed within a four-dimensional curved spacetime. The second law will be shown to yield an equation that is exactly identical to the geodesic equation of motion of general relativity. This in itself yields no new information concerning relativity since the equation is mathematically identical to the relativistic equation. However, when the time in the second law is defined to have a constant direction as effectively occurs in Newtonian mechanics, and no longer acts as a fourth dimension as exists in relativity theory, it separates into a vector equation in a curved three-dimensional space and an additional second scalar equation that describes conservation of energy. It is shown that the curved Newtonian equations of motion define the metric coefficients which occur in the Schwarzschild solution and that they also define its equations of motion. Also, because the curved Newtonian equations developed here use masses as gravitational sources, as occurs in Newtonian mechanics, they make it possible to derive the solution for other kinds of mass distributions and are used here to find the metric equation for a thin mass-rod and the equation of motion for a mass particle orbiting it in its relativistic gravitational field.
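For reference, the geodesic equation to which the curved-space form of Newton's second law is shown to be identical can be written in its standard textbook form (not copied from the paper; \(x^\mu\) are spacetime coordinates, \(\tau\) proper time, and \(\Gamma^{\mu}{}_{\alpha\beta}\) the Christoffel symbols):

```latex
\frac{d^{2}x^{\mu}}{d\tau^{2}}
  + \Gamma^{\mu}{}_{\alpha\beta}\,
    \frac{dx^{\alpha}}{d\tau}\,\frac{dx^{\beta}}{d\tau} = 0
```

In the Newtonian reading described above, the spatial components become a force-free vector equation in a curved three-dimensional space, while the time component supplies the scalar energy-conservation equation.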
White, J Wilson; Botsford, Louis W; Moffitt, Elizabeth A; Fischer, Douglas T
2010-09-01
Marine protected areas (MPAs) are growing in popularity as a conservation tool, and there are increasing calls for additional MPAs. Meta-analyses indicate that most MPAs successfully meet the minimal goal of increasing biomass inside the MPA, while some do not, leaving open the important question of what makes MPAs successful. An often-overlooked aspect of this problem is that the success of fishery management outside MPA boundaries (i.e., whether a population is overfished) affects how well MPAs meet both conservation goals (e.g., increased biomass) and economic goals (e.g., minimal negative effects on fishery yield). Using a simple example of a system with homogeneous habitat and periodically spaced MPAs, we show that, as area in MPAs increases, (1) conservation value (biomass) may initially be zero, implying no benefit, then at some point increases monotonically; and (2) fishery yield may be zero, then increases monotonically to a maximum beyond which further increase in MPA area causes yield to decline. Importantly, the points at which these changes in slope occur vary among species and depend on management outside MPAs. Decision makers considering the effects of a potential system of MPAs on multiple species are confronted by a number of such cost-benefit curves, and it is usually impossible to maximize benefits and minimize costs for all species. Moreover, the precise shape of each curve is unknown due to uncertainty regarding the fishery status of each species. Here we describe a decision-analytic approach that incorporates existing information on fishery stock status to present decision makers with the range of likely outcomes of MPA implementation. To summarize results from many species whose overfishing status is uncertain, our decision-analysis approach involves weighted averages over both overfishing uncertainty and species. 
In an example from an MPA decision process in California, USA, an optimistic projection of future fishery management success led to recommendation of fewer and smaller MPAs than that derived from a more pessimistic projection of future management success. This example illustrates how information on fishery status can be used to project potential outcomes of MPA implementation within a decision analysis framework and highlights the need for better population information.
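The weighted-averaging step described above can be sketched in a few lines. This is an illustrative toy, not the authors' model: the piecewise benefit curves, the thresholds, and the species weights are all assumptions.

```python
# Hypothetical sketch of the decision-analytic weighting: expected benefit
# of an MPA area fraction is averaged over overfishing uncertainty, then
# over species. All numbers and curve shapes are illustrative.

def biomass_benefit(area_frac, overfished):
    """Toy cost-benefit curve: benefit is zero until a threshold in MPA
    area, then rises linearly (mirroring the slope changes in the text)."""
    threshold = 0.1 if overfished else 0.3   # overfished stocks benefit sooner
    return max(0.0, area_frac - threshold)

def expected_benefit(area_frac, species):
    """Weighted average over overfishing uncertainty and over species."""
    total = 0.0
    for weight, p_overfished in species:
        ev = (p_overfished * biomass_benefit(area_frac, True)
              + (1 - p_overfished) * biomass_benefit(area_frac, False))
        total += weight * ev
    return total

# two hypothetical species: (species weight, probability of being overfished)
species = [(0.5, 0.8), (0.5, 0.2)]
print(round(expected_benefit(0.4, species), 3))
```

With a different MPA fraction or probability set, the same function traces out the range of likely outcomes presented to decision makers.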
Bryant, J R; Lopez-Villalobos, N; Holmes, C W; Pryce, J E; Pitman, G D; Davis, S R
2007-03-01
An evolutionary algorithm was applied to a mechanistic model of the mammary gland to find the parameter values that minimised the difference between predicted and actual lactation curves of milk yields in New Zealand Jersey cattle managed at different feeding levels. The effect of feeding level, genetic merit, body condition score at parturition and age on total lactation yields of milk, fat and protein, days in milk, live weight and evolutionary-algorithm-derived mammary gland parameters was then determined using a multiple regression model. The mechanistic model of the mammary gland fitted actual lactation curves with a high degree of accuracy. The senescence rate of quiescent (inactive) alveoli was highest at the very low feeding level. The active alveoli population at peak lactation was highest at very low feeding levels, but lower nutritional status at this feeding level prevented high milk yields from being achieved. Genetic merit had a significant linear effect on the active alveoli population at peak and mid to late lactation, with higher values in animals that had higher breeding values for milk yields. A type of genetic merit × feeding level scaling effect was observed for total yields of milk and fat, and total number of alveoli produced from conception until the end of lactation, with the benefits of increases in genetic merit being greater at high feeding levels. A genetic merit × age scaling effect was observed for total lactation protein yields. Initial rates of differentiation of progenitor cells declined with age. Production levels of alveoli from conception to the end of lactation were lowest in 5- to 8-year-old animals; however, in these older animals, quiescent alveoli were reactivated more frequently. The active alveoli population at peak lactation and rates of active alveoli proceeding to quiescence were highest in animals of intermediate body condition scores of 4.0 to 5.0.
The results illustrate the potential uses of a mechanistic model of the mammary gland to fit a lactation curve and to quantify the effects of feeding level, genetic merit, body condition score, and age on mammary gland dynamics throughout lactation.
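The fitting idea, an evolutionary search that minimises the difference between predicted and actual lactation curves, can be sketched as follows. As a stand-in for the study's mechanistic mammary-gland model, this toy fits Wood's gamma-type lactation curve to synthetic data; every parameter value here is illustrative.

```python
import math
import random

def wood(t, a, b, c):
    # Wood's gamma-type lactation curve, used here as a simple stand-in
    # for the mechanistic mammary-gland model fitted in the study
    return a * t**b * math.exp(-c * t)

def sse(params, data):
    """Sum of squared differences between predicted and observed yields."""
    return sum((wood(t, *params) - y) ** 2 for t, y in data)

def evolve(data, start, iters=12000, sigma=0.1, seed=1):
    """(1+1)-style evolutionary search: keep a multiplicative Gaussian
    mutation only if it lowers the error against the observed curve."""
    rng = random.Random(seed)
    best, best_err = start, sse(start, data)
    for _ in range(iters):
        cand = tuple(p * (1.0 + rng.gauss(0.0, sigma)) for p in best)
        err = sse(cand, data)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

# synthetic "actual" lactation curve from known (made-up) parameters
true_params = (20.0, 0.25, 0.04)
data = [(t, wood(t, *true_params)) for t in range(5, 305, 10)]
start = (15.0, 0.2, 0.05)
fit, fit_err = evolve(data, start)
print(round(fit_err, 4), "vs initial", round(sse(start, data), 1))
```

Real applications would replace `wood` with the mechanistic model and use population-based variation and selection rather than this minimal hill climb.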
On the behavior of certain ink aging curves.
Cantú, Antonio A
2017-09-01
This work treats writing inks, particularly ballpoint pen inks. It reviews those ink aging methods that are based on the analysis (measurement) of ink solvents (e.g., 2-phenoxyethanol, which is the most common among ballpoint pen inks). Each method involves measurements that are components of an ink aging parameter associated with the method. Only mass-independent parameters are considered. An ink solvent from an ink that is on an air-exposed substrate evaporates at a decreasing, never constant, rate as the ink ages. An ink aging parameter should reflect this behavior. That is, the graph of a parameter's experimentally determined values plotted against ink age (which yields the ink aging curve) should show this behavior. However, some experimentally determined aging curves contain outlying points that are below or above where they should be, or points corresponding to different ages that have the same ordinate (parameter value). Such curves are unfortunately useless, since they allow an ink to appear older or younger than it actually is at one or more points, or to appear the same age at two or more points. This work explains that one cause of this unexpected behavior is that the parameter values were improperly determined, such as when a measurement is made of an ink solvent that is not completely extracted (removed) from an ink sample by the chosen extractor, such as dry heat or a solvent. Copyright © 2017 Elsevier B.V. All rights reserved.
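A minimal sketch of the sanity check this behaviour implies: assuming the aging parameter should decrease strictly with age, any point that fails to drop below the last good value makes dating ambiguous. The function and data are hypothetical, not from the paper.

```python
def flag_anomalies(values):
    """Return indices of points where an (assumed strictly decreasing)
    ink aging parameter fails to drop below its last good value, so two
    ages could share one ordinate and dating becomes ambiguous."""
    bad = []
    last_good = values[0]
    for i in range(1, len(values)):
        if values[i] >= last_good:
            bad.append(i)          # outlier: curve is not strictly decreasing here
        else:
            last_good = values[i]
    return bad

# hypothetical parameter values measured at increasing ink ages
print(flag_anomalies([10.0, 8.0, 8.0, 9.0, 6.0]))   # → [2, 3]
```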
Gao, C Q; Yang, J X; Chen, M X; Yan, H C; Wang, X Q
2016-04-01
Two experiments were conducted to fit growth curves and to determine age-related changes in carcass characteristics, organs, serum biochemical parameters, and gene expression of intestinal nutrient transporters in the domestic pigeon (Columba livia). In experiment 1, body weight (BW) of 30 pigeons was determined at 1, 3, 7, 14, 21, 28, and 35 days of age to fit growth curves describing the growth of pigeons. In experiment 2, eighty-four 1-day-old squabs were grouped by weight into 7 groups. On d 1, 3, 7, 14, 21, 28, and 35, twelve birds from each group were randomly selected for slaughter and post-slaughter analysis. The results showed that BW of pigeons increased rapidly from d 1 to d 28 (a 25.7-fold increase) and then changed little until d 35. The Logistic, Gompertz, and Von Bertalanffy functions all fitted the growth curve of domestic pigeons well (R² > 0.90), and the Gompertz model showed the highest R² value among the models (R² = 0.9997). The fitted Gompertz equation was Y = 507.72 × e^(−3.76·e^(−0.17t)), where Y = BW of the pigeon (g) and t = time (d). In addition, breast meat yield (%) increased with age throughout the experiment, whereas leg meat yield (%) reached its peak on d 14. Serum total protein, albumin, globulin, and glucose concentrations increased with age, whereas serum uric acid concentration decreased (P < 0.05). Furthermore, the gene expression of nutrient transporters (y+LAT2, LAT1, B0AT1, PepT1, and NHE2) in the jejunum of pigeons increased with age. Correlation analysis showed that the gene expression of B0AT1, PepT1, and NHE2 had positive correlations with BW (0.73
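The reported Gompertz model can be evaluated directly; a quick sketch using the published coefficients (asymptote 507.72 g, shape 3.76, rate 0.17/d):

```python
import math

def gompertz_bw(t, a=507.72, b=3.76, k=0.17):
    """Gompertz growth model with the coefficients reported in the study:
    body weight (g) of a pigeon at age t (days)."""
    return a * math.exp(-b * math.exp(-k * t))

print(round(gompertz_bw(1), 1))    # hatchling weight
print(round(gompertz_bw(35), 1))   # close to the 507.72 g asymptote
```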
Extracting factors for interest rate scenarios
NASA Astrophysics Data System (ADS)
Molgedey, L.; Galic, E.
2001-04-01
Factor-based interest rate models are widely used for risk management purposes, for option pricing, and for identifying and capturing yield curve anomalies. The movements of a term structure of interest rates are commonly assumed to be driven by a small number of orthogonal factors such as SHIFT, TWIST and BUTTERFLY (BOW). These factors are usually obtained by a Principal Component Analysis (PCA) of historical bond prices (interest rates). Although PCA diagonalizes the covariance matrix of either the interest rates or the interest rate changes, it does not use both covariance matrices simultaneously. Furthermore, higher linear and nonlinear correlations are neglected. These correlations, as well as the mean-reverting properties of the interest rates, become crucial if one is interested in a longer time horizon (infrequent hedging or trading). We will show that Independent Component Analysis (ICA) is a more appropriate tool than PCA, since ICA uses the covariance matrix of the interest rates as well as the covariance matrix of the interest rate changes simultaneously. Additionally, higher linear and nonlinear correlations may be easily incorporated. The resulting factors are uncorrelated for various time delays, approximately independent, but nonorthogonal. This is in contrast to the factors obtained from PCA, which are orthogonal and uncorrelated for identical times only. Although factors from the ICA are nonorthogonal, it is sufficient to consider only a few factors in order to explain most of the variation in the original data. Finally, we present examples showing that ICA-based hedges outperform PCA-based hedges, specifically when the portfolio is sensitive to structural changes of the yield curve.
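The PCA baseline that ICA is compared against can be sketched on synthetic data: a dominant parallel "shift" driver should absorb most of the variance of the rate changes. The ICA step itself (which additionally diagonalizes time-lagged covariance matrices) is omitted for brevity, and the data and dimensions are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic daily changes of a 5-point term structure: one dominant
# parallel "shift" factor plus independent maturity-level noise
shift = rng.normal(size=(500, 1)) * np.ones((1, 5))
changes = shift + 0.1 * rng.normal(size=(500, 5))

# diagonalize the covariance matrix of the rate changes
cov = np.cov(changes, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)             # ascending eigenvalues
explained = eigvals[::-1] / eigvals.sum()     # descending variance shares
print(explained[0] > 0.9)                     # the shift factor dominates
```

A full ICA treatment would jointly diagonalize covariance matrices at several time lags, yielding factors uncorrelated for various delays rather than at identical times only.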
Hozo, Iztok; Tsalatsanis, Athanasios; Djulbegovic, Benjamin
2018-02-01
Decision curve analysis (DCA) is a widely used method for evaluating diagnostic tests and predictive models. It was developed based on expected utility theory (EUT) and has been reformulated using expected regret theory (ERG). Under certain circumstances, these 2 formulations yield different results. Here we describe these situations and explain the variation. We compare the derivations of the EUT- and ERG-based formulations of DCA for a typical medical decision problem: "treat none," "treat all," or "use model" to guide treatment. We illustrate the differences between the 2 formulations when applied to the following clinical question: at which probability of death we should refer a terminally ill patient to hospice? Both DCA formulations yielded identical but mirrored results when treatment effects are ignored; they generated significantly different results otherwise. Treatment effect has a significant effect on the results derived by EUT DCA and less so on ERG DCA. The elicitation of specific values for disutilities affected the results even more significantly in the context of EUT DCA, whereas no such elicitation was required within the ERG framework. EUT and ERG DCA generate different results when treatment effects are taken into account. The magnitude of the difference depends on the effect of treatment and the disutilities associated with disease and treatment effects. This is important to realize as the current practice guidelines are uniformly based on EUT; the same recommendations can significantly differ if they are derived based on ERG framework. © 2016 The Authors. Journal of Evaluation in Clinical Practice Published by John Wiley & Sons Ltd.
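For context, when treatment effects are ignored the expected-utility formulation of DCA reduces to the standard net-benefit quantity at a threshold probability; a minimal sketch with made-up counts (the regret-based variant rescales this quantity, as the paper discusses):

```python
def net_benefit(tp, fp, n, p_t):
    """Net benefit at threshold probability p_t: the standard
    expected-utility decision curve analysis quantity, with treatment
    effects ignored."""
    return tp / n - (fp / n) * (p_t / (1 - p_t))

# made-up counts: of 100 patients the model flags 30, 20 of them correctly
print(round(net_benefit(tp=20, fp=10, n=100, p_t=0.2), 3))  # → 0.175
```

Plotting this quantity against p_t for "treat none" (always 0), "treat all", and "use model" gives the familiar decision curves being compared in the two frameworks.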
Hozo, Iztok; Tsalatsanis, Athanasios
2016-01-01
Rationale, aims, and objectives: Decision curve analysis (DCA) is a widely used method for evaluating diagnostic tests and predictive models. It was developed based on expected utility theory (EUT) and has been reformulated using expected regret theory (ERG). Under certain circumstances, these 2 formulations yield different results. Here we describe these situations and explain the variation. Methods: We compare the derivations of the EUT- and ERG-based formulations of DCA for a typical medical decision problem: “treat none,” “treat all,” or “use model” to guide treatment. We illustrate the differences between the 2 formulations when applied to the following clinical question: at which probability of death we should refer a terminally ill patient to hospice? Results: Both DCA formulations yielded identical but mirrored results when treatment effects are ignored; they generated significantly different results otherwise. Treatment effect has a significant effect on the results derived by EUT DCA and less so on ERG DCA. The elicitation of specific values for disutilities affected the results even more significantly in the context of EUT DCA, whereas no such elicitation was required within the ERG framework. Conclusion: EUT and ERG DCA generate different results when treatment effects are taken into account. The magnitude of the difference depends on the effect of treatment and the disutilities associated with disease and treatment effects. This is important to realize as the current practice guidelines are uniformly based on EUT; the same recommendations can significantly differ if they are derived based on ERG framework. PMID:27981695
One hundred and fifty years of sprint and distance running – Past trends and future prospects
Weiss, Martin; Newman, Alexandra; Whitmore, Ceri; Weiss, Stephan
2016-01-01
Sprint and distance running have experienced remarkable performance improvements over the past century. Attempts to forecast running performances share an almost equally long history but have relied so far on relatively short data series. Here, we compile a comprehensive set of season-best performances for eight Olympically contested running events. With this data set, we conduct (1) an exponential time series analysis and (2) a power-law experience curve analysis to quantify the rate of past performance improvements and to forecast future performances until the year 2100. We find that the sprint and distance running performances of women and men improve exponentially with time and converge at yearly rates of 4% ± 3% and 2% ± 2%, respectively, towards their asymptotic limits. Running performances can also be modelled with the experience curve approach, yielding learning rates of 3% ± 1% and 6% ± 2% for the women's and men's events, respectively. Long-term trends suggest that: (1) women will continue to run 10–20% slower than men, (2) 9.50 s over the 100 m dash may only be broken at the end of this century and (3) several middle- and long-distance records may be broken within the next two to three decades. The prospects of witnessing a sub-2 hour marathon before 2100 remain inconclusive. Our results should be interpreted cautiously as forecasting human behaviour is intrinsically uncertain. The future season-best sprint and distance running performances will continue to scatter around the trends identified here and may yield unexpected improvements of standing world records. PMID:26088705
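The exponential-convergence model used in the time series analysis can be sketched directly; the asymptotic limit, starting performance, and yearly rate below are illustrative, not the fitted values from the study.

```python
import math

def season_best(t, limit, p0, rate):
    """Season-best time converging exponentially toward an asymptotic
    limit at a fixed yearly rate (illustrative parameters only)."""
    return limit + (p0 - limit) * math.exp(-rate * t)

# hypothetical 100 m dash: asymptote 9.4 s, 10.6 s at year 0, 2 %/yr
gap0 = season_best(0, 9.4, 10.6, 0.02) - 9.4
gap50 = season_best(50, 9.4, 10.6, 0.02) - 9.4
print(round(gap50 / gap0, 3))   # fraction of the initial gap remaining after 50 y
```

The experience curve alternative would instead model performance as a power law of cumulative "experience" (e.g., seasons contested), giving the quoted learning rates per doubling.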
A site classification for the mixed-conifer selection forests of the Sierra Nevada
Duncan Dunning
1942-01-01
The site-class curves presented . . . for the irregular pine-fir forests of California, were first prepared in connection with a yield-predicting procedure . . . developed in 1933. The original curves were designed principally for administrative use of the Forest Service in Region 5. Since they have now come to be accepted by other agencies and for general purposes,...
A Height–Diameter Curve for Longleaf Pine Plantations in the Gulf Coastal Plain
Daniel Leduc; Jeffery Goelz
2009-01-01
Tree height is a critical component of a complete growth-and-yield model because it is one of the primary components used in volume calculation. To develop an equation to predict total height from dbh for longleaf pine (Pinus palustris Mill.) plantations in the West Gulf region, many different sigmoidal curve forms, weighting functions, and ways of...
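One common sigmoidal form screened in such height-dbh studies is Chapman-Richards; a sketch with illustrative (not the published) parameters, adding the 1.37 m breast height at which dbh is measured:

```python
import math

def height_from_dbh(dbh_cm, a=30.0, b=0.06, c=1.3):
    """Chapman-Richards height-dbh curve, one common sigmoidal form;
    1.37 m is breast height, the other parameters are illustrative
    and are not the fit published for longleaf pine."""
    return 1.37 + a * (1.0 - math.exp(-b * dbh_cm)) ** c

print(round(height_from_dbh(25.0), 2))   # total height (m) at 25 cm dbh
```

In practice the candidate curve forms are fitted by weighted nonlinear least squares, with the weighting function chosen to stabilize the variance across diameter classes.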
NASA Technical Reports Server (NTRS)
Effinger, M.; Ellingson, B.; Spohnholtz, T.; Koenig, J.
2001-01-01
An idea is put forth for a nondestructive characterization (NDC) generated algorithm-N curve to replace an S-N curve. A scenario for NDC life determination has been proposed. There are many challenges for NDC life determination and prediction, but it could yield a grand payoff. The justification for NDC life determination and prediction is documented.
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li
2013-01-01
In Ramsay curve item response theory (RC-IRT, Woods & Thissen, 2006) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's (1981) EM algorithm, which yields maximum marginal likelihood estimates. This method, however,…
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li
2014-01-01
In Ramsay curve item response theory (RC-IRT) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's EM algorithm, which yields maximum marginal likelihood estimates. This method, however, does not produce the…
Effect of tensile deformation on micromagnetic parameters in 0.2% carbon steel and 2.25Cr-1Mo steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moorthy, V.; Vaidyanathan, S.; Jayakumar, T.
The influence of prior tensile deformation on the magnetic Barkhausen emission (MBE) and the hysteresis (B-H) curve has been studied in 0.2% carbon steel and 2.25Cr-1Mo steel under different tempered conditions. This study shows that the micromagnetic parameters can be used to identify the four stages of deformation, namely (1) perfectly elastic, (2) microplastic yielding, (3) macroyielding and (4) progressive plastic deformation. However, it is observed that the MBE profile shows more distinct changes at different stages of tensile deformation than the hysteresis curve. It has been established that the beginning of microplastic yielding and macroyielding can be identified from the MBE profile, which is not possible from the stress-strain plot. The onset of microplastic yielding can be identified from the decrease in the MBE peak height. The macroyielding can be identified from the merging of the initially present two-peak MBE profile into a single central peak with relatively higher peak height and narrow profile width. The difference between the variation of MBE and hysteresis curve parameters with strain beyond macroyielding indicates the difference in the deformation state of the surface and bulk of the sample.
Effect of yield curves and porous crush on hydrocode simulations of asteroid airburst
NASA Astrophysics Data System (ADS)
Robertson, D. K.; Mathias, D. L.
2017-03-01
Simulations of asteroid airburst are being conducted to obtain best estimates of damage areas and assess sensitivity to variables for asteroid characterization and mitigation efforts. The simulations presented here employed the ALE3D hydrocode to examine the breakup and energy deposition of asteroids entering the Earth's atmosphere, using the Chelyabinsk meteor as a test case. This paper examines the effect of increasingly complex material models on the energy deposition profile. Modeling the meteor as a rock having a single strength can reproduce airburst altitude and energy deposition reasonably well but is not representative of real rock masses (large bodies of material). Accounting for a yield curve that includes different tensile, shear, and compressive strengths shows that shear strength determines the burst altitude. Including yield curves and compaction of porous spaces in the material changes the detailed mechanics of the breakup but only has a limited effect on the burst altitude and energy deposition. Strong asteroids fail and create peak energy deposition close to the altitude at which ram dynamic pressure equals the material strength. Weak asteroids, even though they structurally fail at high altitude, require the increased pressure at lower altitude to disrupt and disperse the rubble. As a result, a wide range of weaker asteroid strengths produce peak energy deposition at a similar altitude.
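The closing observation, that peak energy deposition occurs near the altitude where ram dynamic pressure equals material strength, gives a simple back-of-envelope estimate for an exponential atmosphere. This is a hand calculation, not the ALE3D simulation; the strength, speed, and the rho·v² convention are assumptions.

```python
import math

RHO0 = 1.225     # sea-level air density, kg/m^3
SCALE = 7160.0   # atmospheric scale height, m (approximate)

def breakup_altitude_km(strength_pa, speed_ms):
    """Altitude where ram pressure rho*v^2 reaches the material strength,
    for an exponential atmosphere rho = RHO0 * exp(-h / SCALE).
    A rough estimate, not the hydrocode energy-deposition profile."""
    rho_needed = strength_pa / speed_ms ** 2
    return SCALE * math.log(RHO0 / rho_needed) / 1000.0

# Chelyabinsk-like entry: ~19 km/s, 1 MPa assumed effective strength
print(round(breakup_altitude_km(1.0e6, 19000.0), 1))
```

Consistent with the abstract, raising the strength in this estimate lowers the predicted failure altitude, while very weak bodies fail high but still need denser air below to disperse.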
Silva, F G; Torres, R A; Brito, L F; Euclydes, R F; Melo, A L P; Souza, N O; Ribeiro, J I; Rodrigues, M T
2013-12-11
The objective of this study was to identify the best random regression model using Legendre orthogonal polynomials for the genetic evaluation of Alpine goats and to estimate parameters for test-day milk yield. We analyzed 20,710 test-day milk yield records of 667 goats from the Goat Sector of the Universidade Federal de Viçosa. The evaluated models combined distinct fitting orders for the fixed curve (2-5), the random genetic curve (1-7), and the permanent environmental curve (1-7), with different numbers of residual variance classes (2, 4, 5, and 6). WOMBAT software was used for all genetic analyses. The best random regression model using Legendre orthogonal polynomials for genetic evaluation of test-day milk yield of Alpine goats considered a fixed curve of order 4, a curve of additive genetic effects of order 2, a curve of permanent environmental effects of order 7, and a minimum of 5 classes of residual variance, because it was the most economical model among those equivalent to the complete model by the likelihood ratio test. Phenotypic variance and heritability were higher at the end of the lactation period, indicating that the length of lactation has more genetic components relative to the production peak and persistence. It is very important that the evaluation utilize the best combination of fixed, additive genetic, and permanent environmental regressions and number of classes of heterogeneous residual variance for genetic evaluation using random regression models, thereby enhancing the precision and accuracy of parameter estimates and the prediction of genetic values.
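The Legendre covariables underlying such random regression models are easy to generate; a sketch using the Bonnet recursion on days in milk mapped to [-1, 1]. These are plain (unnormalized) Legendre polynomials; random regression software often multiplies them by a normalization constant.

```python
def legendre_basis(t, t_min, t_max, order):
    """Legendre polynomial covariables P_0..P_order evaluated at days in
    milk t mapped onto [-1, 1] (sketch, unnormalized)."""
    x = 2.0 * (t - t_min) / (t_max - t_min) - 1.0
    polys = [1.0, x]
    for n in range(1, order):
        # Bonnet recursion: (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)
        polys.append(((2 * n + 1) * x * polys[n] - n * polys[n - 1]) / (n + 1))
    return polys[: order + 1]

# mid-lactation (x = 0) for an assumed 5 d to 300 d test-day range
print(legendre_basis(152.5, 5, 300, 2))   # → [1.0, 0.0, -0.5]
```

Each animal's random genetic and permanent environmental curves are then linear combinations of these covariables, with the fitting orders compared as in the study.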
Weaver, Jordan S.; Pathak, Siddhartha; Reichardt, Ashley; ...
2017-06-27
Experimentally quantifying the mechanical effects of radiation damage in reactor materials is necessary for the development and qualification of new materials for improved performance and safety. This can be achieved in a high-throughput fashion through a combination of ion beam irradiation and small scale mechanical testing in contrast to the high cost and laborious nature of bulk testing of reactor irradiated samples. The current paper focuses on using spherical nanoindentation stress-strain curves on unirradiated and proton irradiated (10 dpa at 360 °C) 304 stainless steel to quantify the mechanical effects of radiation damage. Spherical nanoindentation stress-strain measurements show a radiation-induced increase in indentation yield strength from 1.36 GPa to 2.72 GPa and a radiation-induced increase in indentation work hardening rate of 10 GPa–30 GPa. These measurements are critically compared against Berkovich nanohardness, micropillar compression, and micro-tension measurements on the same material and similar grain orientations. The ratio of irradiated to unirradiated yield strength increases by a similar factor of 2 when measured via spherical nanoindentation or Berkovich nanohardness testing. A comparison of spherical indentation stress-strain curves to uniaxial (micropillar and micro-tension) stress-strain curves was achieved using a simple scaling relationship which shows good agreement for the unirradiated condition and poor agreement in post-yield behavior for the irradiated condition. Finally, the disagreement between spherical nanoindentation and uniaxial stress-strain curves is likely due to the plastic instability that occurs during uniaxial tests but is absent during spherical nanoindentation tests.
NASA Astrophysics Data System (ADS)
Weaver, Jordan S.; Pathak, Siddhartha; Reichardt, Ashley; Vo, Hi T.; Maloy, Stuart A.; Hosemann, Peter; Mara, Nathan A.
2017-09-01
Experimentally quantifying the mechanical effects of radiation damage in reactor materials is necessary for the development and qualification of new materials for improved performance and safety. This can be achieved in a high-throughput fashion through a combination of ion beam irradiation and small scale mechanical testing in contrast to the high cost and laborious nature of bulk testing of reactor irradiated samples. The current work focuses on using spherical nanoindentation stress-strain curves on unirradiated and proton irradiated (10 dpa at 360 °C) 304 stainless steel to quantify the mechanical effects of radiation damage. Spherical nanoindentation stress-strain measurements show a radiation-induced increase in indentation yield strength from 1.36 GPa to 2.72 GPa and a radiation-induced increase in indentation work hardening rate of 10 GPa-30 GPa. These measurements are critically compared against Berkovich nanohardness, micropillar compression, and micro-tension measurements on the same material and similar grain orientations. The ratio of irradiated to unirradiated yield strength increases by a similar factor of 2 when measured via spherical nanoindentation or Berkovich nanohardness testing. A comparison of spherical indentation stress-strain curves to uniaxial (micropillar and micro-tension) stress-strain curves was achieved using a simple scaling relationship which shows good agreement for the unirradiated condition and poor agreement in post-yield behavior for the irradiated condition. The disagreement between spherical nanoindentation and uniaxial stress-strain curves is likely due to the plastic instability that occurs during uniaxial tests but is absent during spherical nanoindentation tests.
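The factor-of-2 comparison can be illustrated with the simplest possible scaling, dividing indentation stress by a Tabor-like constraint factor. The factor 2.2 below is a textbook value, not necessarily the scaling relationship used in the paper; note that any constant factor preserves the irradiated-to-unirradiated ratio.

```python
def uniaxial_from_indentation(sigma_ind, constraint=2.2):
    """Estimate uniaxial stress from indentation stress via a Tabor-like
    constraint factor (an assumed value for illustration)."""
    return sigma_ind / constraint

y_unirr = uniaxial_from_indentation(1.36)   # GPa, unirradiated
y_irr = uniaxial_from_indentation(2.72)     # GPa, 10 dpa proton-irradiated
print(round(y_irr / y_unirr, 2))   # → 2.0: the ratio survives any constant scaling
```

This is why the radiation-hardening ratio agrees across spherical nanoindentation and Berkovich nanohardness even though the absolute stress scales differ, while the post-yield disagreement requires the plastic-instability explanation given above.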
TG-FTIR analysis on pyrolysis and combustion of marine sediment
NASA Astrophysics Data System (ADS)
Oudghiri, Fatiha; Allali, Nabil; Quiroga, José María; Rodríguez-Barroso, María Rocío
2016-09-01
In this paper, the pyrolysis and combustion of sediment are compared using thermogravimetric analysis (TG) coupled with Fourier transform infrared spectrometry (TG-FTIR). The TG results showed that the pyrolysis and the combustion of sediment each presented four weight-loss stages. The evolving gaseous products during pyrolysis were H2O, CO2 and hydrocarbons, while combustion yielded considerable amounts of CO2, in addition to H2O, CO, C=C, C=O and NH3. Comparing the pyrolysis and combustion TG-FTIR curves makes it possible to evaluate the effect of the presence of oxygen in the temperature range of 200-600 °C, which increases the volatilisation rate of organic matter in sediment. For better detection of organic and inorganic matter in sediment by TG-FTIR analysis, working in combustion mode is recommended.
Leslie, Mark; Holloway, Charles A
2006-01-01
When a company launches a new product into a new market, the temptation is to immediately ramp up sales force capacity to gain customers as quickly as possible. But hiring a full sales force too early just causes the firm to burn through cash and fail to meet revenue expectations. Before it can sell an innovative product efficiently, the entire organization needs to learn how customers will acquire and use it, a process the authors call the sales learning curve. The concept of a learning curve is well understood in manufacturing. Employees transfer knowledge and experience back and forth between the production line and purchasing, manufacturing, engineering, planning, and operations. The sales learning curve unfolds similarly through the give-and-take between the company--marketing, sales, product support, and product development--and its customers. As customers adopt the product, the firm modifies both the offering and the processes associated with making and selling it. Progress along the manufacturing curve is measured by tracking cost per unit: The more a firm learns about the manufacturing process, the more efficient it becomes, and the lower the unit cost goes. Progress along the sales learning curve is measured in an analogous way: The more a company learns about the sales process, the more efficient it becomes at selling, and the higher the sales yield. As the sales yield increases, the sales learning process unfolds in three distinct phases--initiation, transition, and execution. Each phase requires a different size--and kind--of sales force and represents a different stage in a company's production, marketing, and sales strategies. Adjusting those strategies as the firm progresses along the sales learning curve allows managers to plan resource allocation more accurately, set appropriate expectations, avoid disastrous cash shortfalls, and reduce both the time and money required to turn a profit.
Pricing strategy for aesthetic surgery: economic analysis of a resident clinic's change in fees.
Krieger, L M; Shaw, W W
1999-02-01
The laws of microeconomics explain how prices affect consumer purchasing decisions and thus overall revenues and profits. These principles can easily be applied to the behavior of aesthetic plastic surgery patients. The UCLA Division of Plastic Surgery resident aesthetics clinic recently offered a radical price change for its services. The effects of this change on demand for services and revenue were tracked. Economic analysis was applied to see if this price change resulted in the maximization of total revenues, or if additional price changes could further optimize them. Economic analysis of pricing involves several steps. The first step is to assess demand. The number of procedures performed by a given practice at different price levels can be plotted to create a demand curve. From this curve, price sensitivities of consumers can be calculated (price elasticity of demand). This information can then be used to determine the pricing level that creates demand for the exact number of procedures that yield optimal revenues. In economic parlance, revenues are maximized by pricing services such that elasticity is equal to 1 (the point of unit elasticity). At the UCLA resident clinic, average total fees per procedure were reduced by 40 percent. This resulted in a 250-percent increase in procedures performed for representative 4-month periods before and after the price change. Net revenues increased by 52 percent. Economic analysis showed that the price elasticity of demand before the price change was 6.2. After the price change it was 1. We conclude that the magnitude of the price change resulted in a fee schedule that yielded the highest possible revenues from the resident clinic. These results show that changes in price do affect total revenue and that the nature of these effects can be understood, predicted, and maximized using the tools of microeconomics.
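The elasticity calculation can be sketched with the standard midpoint (arc) formula applied to the reported changes (fees down 40 percent, volume up 250 percent). The index numbers are illustrative, and a single arc elasticity over the whole change will not reproduce the point elasticities (6.2 and 1) reported in the study.

```python
def arc_elasticity(q0, q1, p0, p1):
    """Midpoint (arc) price elasticity of demand: percentage change in
    quantity over percentage change in price, each relative to the
    midpoint of its old and new values."""
    dq = (q1 - q0) / ((q1 + q0) / 2.0)
    dp = (p1 - p0) / ((p1 + p0) / 2.0)
    return dq / dp

# index numbers for the clinic's change: fees -40 %, procedures +250 %
e = arc_elasticity(q0=100, q1=350, p0=100, p1=60)
print(round(e, 2))   # elastic demand: |e| > 1, so a price cut raises revenue
```

Because |e| > 1 here, demand is elastic over this range, consistent with the observed revenue increase after the price cut.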
David B. South; Curtis L. VanderSchaaf; Larry D. Teeter
2006-01-01
Some researchers claim that continuously increasing intensive plantation management will increase profits and reduce the unit cost of wood production while others believe in the law of diminishing returns. We developed four hypothetical production models where yield is a function of silvicultural effort. Models that produced unrealistic results were (1) an exponential...
Red alder stand development and dynamics.
R.L. Deal
2006-01-01
This paper synthesizes information on the development of natural pure red alder stands and the dynamics of mixed alder-conifer stands. Early research on red alder growth and yield focused on developing stand-volume and normal-yield tables for alder in the Pacific Northwest. Recent site-index estimation and height-growth curves were developed on a 20-year site base age....
Adjusting site index and age to account for genetic effects in yield equations for loblolly pine
Steven A. Knowe; G. Sam Foster
2010-01-01
Nine combinations of site index curves and age adjustments methods were evaluated for incorporating genetic effects for open-pollinated loblolly pine (Pinus taeda L.) families. An explicit yield system consisting of dominant height, basal area, and merchantable green weight functions was used to compare the accuracy of predictions associated with...
Economic weights for genetic improvement of lactation persistency and milk yield.
Togashi, K; Lin, C Y
2009-06-01
This study aimed to establish a criterion for measuring the relative weight of lactation persistency (the ratio of yield at 280 d in milk to peak yield) in a restricted selection index for the improvement of net merit comprising 3-parity total yield and total lactation persistency. The restricted selection index was compared with selection based on first-lactation total milk yield (I(1)), first-two-lactation total yield (I(2)), and first-three-lactation total yield (I(3)). Results show that genetic response in net merit due to selection on the restricted selection index could be greater than, equal to, or less than that due to the unrestricted index, depending upon the relative weight of lactation persistency and the restriction level imposed. When the relative weight of total lactation persistency is equal to the criterion, the restricted selection index is equal to the selection method compared (I(1), I(2), or I(3)). The restricted selection index yielded a greater response when the relative weight of total lactation persistency was above the criterion, but a lower response when it was below the criterion. The criterion varied depending upon the restriction level (c) imposed and the selection criteria compared. A curvilinear relationship (concave curve) exists between the criterion and the restriction level. The criterion increases as the restriction level deviates in either direction from 1.5. Without prior information on the economic weight of lactation persistency, the imposition of a restriction level of 1.5 on lactation persistency would maximize change in net merit. The procedure presented allows for simultaneous modification of multi-parity lactation curves.
Effects of eccentricities and lateral pressure on the design of stiffened compression panels
NASA Technical Reports Server (NTRS)
Giles, G. L.; Anderson, M. S.
1972-01-01
An analysis for determining the effects of eccentricities and lateral pressure on the design of stiffened compression panels is presented. The four types of panel stiffeners considered are integral, zee, integral zee, and integral tee. Mass-strength curves, which give the mass of the panel necessary to carry a specified load, are given along with related design equations needed to calculate the cross-sectional dimensions of the minimum-mass-stiffened panel. The results of the study indicate that the proportions of the panels are geometrically similar to the proportions of panels designed for no eccentricity or lateral pressure, but the cross-sectional dimensions are greater, resulting in significantly increased mass. The analytical minimum-mass designs of zee-stiffened panels are compared with designs from experimentally derived charts. An assumed eccentricity of 0.001 times the length of the panel is used to correlate the analytical and experimental data. Good correlation between the experimentally derived and the analytical curves is obtained for the range of loading where material yield governs the design. At lower loads the mass given by the analytical curve using this assumed eccentricity is greater than that given by the experimental results.
Gemperline, Paul J; Cash, Eric
2003-08-15
A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared spectroscopy (NIR). Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The introduction of incomplete or partial reference information into self-modeling curve resolution models is also described. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.
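The general shape of penalized alternating least squares can be sketched on synthetic data. This is a toy illustration, not Gemperline and Cash's algorithm: here the "soft" nonnegativity constraint is implemented by removing only a fraction of each negative excursion per iteration (a hard constraint would clip negatives entirely), and all data shapes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-component mixture: D (times x wavelengths) = C @ S.T + noise
t = np.linspace(0, 1, 60)
C_true = np.column_stack([np.exp(-((t - 0.3) / 0.1) ** 2),
                          np.exp(-((t - 0.7) / 0.1) ** 2)])
w = np.linspace(0, 1, 40)
S_true = np.column_stack([np.exp(-((w - 0.4) / 0.15) ** 2),
                          np.exp(-((w - 0.6) / 0.15) ** 2)])
D = C_true @ S_true.T + 0.001 * rng.standard_normal((60, 40))

def als_smcr(D, n_comp=2, n_iter=100, soft=0.5):
    """Alternating least squares with a soft nonnegativity correction:
    only a fraction `soft` of each negative excursion is removed per pass."""
    S = np.abs(rng.standard_normal((D.shape[1], n_comp)))
    for _ in range(n_iter):
        C = D @ S @ np.linalg.pinv(S.T @ S)
        C -= soft * np.minimum(C, 0)          # soft nonnegativity on profiles
        S = D.T @ C @ np.linalg.pinv(C.T @ C)
        S -= soft * np.minimum(S, 0)          # soft nonnegativity on spectra
    return C, S

C, S = als_smcr(D)
residual = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
```

Because the soft correction only nudges the factors toward nonnegativity, noise-driven distortions in the resolved profiles are damped less aggressively than with hard clipping, which is the qualitative advantage the abstract describes.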
Photometric and Spectral Study of the Saturnian Satellites
NASA Technical Reports Server (NTRS)
Newman, Sarah F.
2005-01-01
Photometric and spectral analysis of data from the Cassini Visual and Infrared Mapping Spectrometer (VIMS) has yielded intriguing findings regarding the surface properties of several of the icy Saturnian satellites. Spectral cubes were obtained of these satellites with a wavelength distribution in the IR far more extensive than from any previous observations. Disk-integrated solar phase curves were constructed in several key IR wavelengths that are indicative of key properties of the surface of the body, such as macroscopic roughness, fluffiness (or the porosity of the surface), global albedo and scattering properties of surface particles. Polynomial fits to these phase curves indicate a linear trend between albedo and the curvature of the phase functions. Rotational phase functions from Enceladus were found to exhibit a double-peaked sinusoidal curve, which shows larger amplitudes for bands corresponding to water ice and a linear amplitude-albedo trend. These functions indicate regions on the surface of the satellite of more recent geologic activity. In addition, recent images of Enceladus show tectonic features and an absence of impact craters on Southern latitudes which could be indicative of a younger surface. Investigations into the properties of these features using VIMS are underway.
Topographically driven crustal flow and its implication to the development of pinned oroclines
NASA Technical Reports Server (NTRS)
Hsui, Albert T.; Wilkerson, M. Scott; Marshak, Stephen
1990-01-01
Pinned oroclines, a type of curved orogen which results from lateral pinning of a growing fold-thrust belt, tend to resemble parabolic Newtonian curvature modified by different degrees of flattening at the flow front. It is proposed that such curves can be generated by Newtonian crustal flow driven by topographic variations. In this model, regional topographic differences create a regional flow which produces a parabolic flow front on interaction with lateral bounding obstacles. Local topographic variations modify the parabolic curves and yield more flat-crested, non-Newtonian-type curvatures. A finite-difference thin-skin tectonic simulation demonstrates that both Newtonian and non-Newtonian curved orogens can be produced within a Newtonian crust.
Using Mason number to predict MR damper performance from limited test data
NASA Astrophysics Data System (ADS)
Becnel, Andrew C.; Wereley, Norman M.
2017-05-01
The Mason number can be used to produce a single master curve which relates MR fluid stress versus strain rate behavior across a wide range of shear rates, temperatures, and applied magnetic fields. As applications of MR fluid energy absorbers expand to a variety of industries and operating environments, Mason number analysis offers a path to designing devices with desired performance from a minimal set of preliminary test data. Temperature strongly affects the off-state viscosity of the fluid, as the passive viscous force drops considerably at higher temperatures. Yield stress is not similarly affected, and stays relatively constant with changing temperature. In this study, a small model-scale MR fluid rotary energy absorber is used to measure the temperature correction factor of a commercially-available MR fluid from LORD Corporation. This temperature correction factor is identified from shear stress vs. shear rate data collected at four different temperatures. Measurements of the MR fluid yield stress are also obtained and related to a standard empirical formula. From these two MR fluid properties - temperature-dependent viscosity and yield stress - the temperature-corrected Mason number is shown to predict the force vs. velocity performance of a full-scale rotary MR fluid energy absorber. This analysis technique expands the design space of MR devices to high shear rates and allows for comprehensive predictions of overall performance across a wide range of operating conditions from knowledge only of the yield stress vs. applied magnetic field and a temperature-dependent viscosity correction factor.
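The master-curve idea in the abstract can be sketched with a Bingham-plastic model of MR fluid stress. The symbols, the Bingham form, and all numerical values below are illustrative assumptions; the paper's exact nondimensionalization and temperature correction may differ.

```python
# Mason number: ratio of viscous to field-induced (yield) stresses.
def mason_number(eta, gamma_dot, tau_y):
    return eta * gamma_dot / tau_y

# Bingham-plastic stress model for MR fluid in shear: tau = tau_y + eta * gamma_dot.
# Normalizing by tau_y gives tau / tau_y = 1 + Mn, a single master curve
# independent of temperature once the viscosity eta(T) is known.
def normalized_stress(eta, gamma_dot, tau_y):
    tau = tau_y + eta * gamma_dot
    return tau / tau_y

# Two hypothetical operating points: viscosity halves at high temperature
# while yield stress stays roughly constant, as the abstract notes.
cold = normalized_stress(eta=0.2, gamma_dot=1000.0, tau_y=40e3)
hot = normalized_stress(eta=0.1, gamma_dot=2000.0, tau_y=40e3)
# Equal Mason numbers -> equal normalized stress despite different conditions.
```

Plotting normalized stress against Mason number would collapse flow curves measured at different temperatures and shear rates onto one curve, which is what makes the limited-test-data prediction possible.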
Brenkman, H J F; Ruurda, J P; Verhoeven, R H A; van Hillegersberg, R
2017-09-01
Minimally invasive techniques for gastric cancer surgery have recently been introduced in the Netherlands, based on a proctoring program. The aim of this population-based cohort study was to evaluate the short-term oncological outcomes of minimally invasive gastrectomy (MIG) during its introduction in the Netherlands. The Netherlands Cancer Registry identified all patients with gastric adenocarcinoma who underwent gastrectomy with curative intent between 2010 and 2014. Multivariable analysis was performed to compare MIG and open gastrectomy (OG) on lymph node yield (≥15), R0 resection rate, and 1-year overall survival. The pooled learning curve per center of MIG was evaluated by groups of five subsequent procedures. Between 2010 and 2014, a total of 277 (14%) patients underwent MIG and 1633 (86%) patients underwent OG. During this period, the use of MIG and neoadjuvant chemotherapy increased from 4% to 39% (p < 0.001) and from 47% to 62% (p < 0.001), respectively. The median lymph node yield increased from 12 to 20 (p < 0.001), and the R0 resection rate remained stable, from 86% to 91% (p = 0.080). MIG and OG had a comparable lymph node yield (OR, 1.01; 95% CI, 0.75-1.36), R0 resection rate (OR, 0.86; 95% CI, 0.54-1.37), and 1-year overall survival (HR, 0.99; 95% CI, 0.75-1.32). A pooled learning curve of ten procedures was demonstrated for MIG, after which the conversion rate (13%-2%; p = 0.001) and lymph node yield were at a desired level (18-21; p = 0.045). With a proctoring program, the introduction of minimally invasive gastrectomy in Western countries is feasible and can be performed safely.
Identification of Preferential Groundwater Flow Pathways from Local Tracer Breakthrough Curves
NASA Astrophysics Data System (ADS)
Kokkinaki, A.; Sleep, B. E.; Dearden, R.; Wealthall, G.
2009-12-01
Characterizing preferential groundwater flow paths in the subsurface is a key factor in the design of in situ remediation technologies. When applying reaction-based remediation methods, such as enhanced bioremediation, preferential flow paths result in fast solute migration and potentially ineffective delivery of reactants, thereby adversely affecting treatment efficiency. The presence of such subsurface conduits was observed at the SABRe (Source Area Bioremediation) research site. Non-uniform migration of contaminants and electron donor during the field trials of enhanced bioremediation supported this observation. To better determine the spatial flow field of the heterogeneous aquifer, a conservative tracer test was conducted. Breakthrough curves were obtained at a reference plane perpendicular to the principal groundwater flow direction. The resulting dataset was analyzed using three different methods: peak arrival times, analytical solution fitting and moment analysis. Interpretation using the peak arrival time method indicated areas of fast plume migration. However, some of the high velocities are supported by single data points, thus adding considerable uncertainty to the estimated velocity distribution. Observation of complete breakthrough curves indicated different types of solute breakthrough, corresponding to different transport mechanisms. Sharp peaks corresponded to high conductivity preferential flow pathways, whereas more dispersed breakthrough curves with long tails were characteristic of significant dispersive mixing and dilution. While analytical solutions adequately quantified flow characteristics for the first type of curves, they failed to do so for the second type, in which case they gave unrealistic results. Therefore, a temporal moment analysis was performed to obtain complete spatial distributions of mass recovery, velocity and dispersivity. 
Though the results of moment analysis qualitatively agreed with the results of previous methods, more realistic estimates of velocities were obtained and the presence of one major preferential flow pathway was confirmed. However, low mass recovery and deviations from the 10% scaling rule for dispersivities indicate that insufficient spatial and temporal monitoring, as well as interpolation and truncation errors, introduced uncertainty in the flow and transport parameters estimated by the method of moments. The results of the three analyses are valuable for enhancing the understanding of mass transport and remediation performance. Comparing the different interpretation methods showed that, as the amount of concentration data considered in the analysis increased, the derived velocity fields became smoother and the estimated local velocities and dispersivities more realistic. In conclusion, moment analysis represents a smoothed average of the velocity across the entire breakthrough curve, whereas the peak arrival time, which may be a less well constrained estimate, represents the physical peak arrival and typically yields a higher velocity than the moment analysis. This is an important distinction when applying the results of the tracer test to field sites.
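The contrast between peak-arrival and temporal-moment velocity estimates can be sketched on a synthetic tailed breakthrough curve. The curve shape, sampling, and travel distance below are all invented for illustration.

```python
import math

# Synthetic breakthrough curve with a long tail: C(t) ~ t * exp(-t / tau).
tau = 5.0
times = [0.1 * i for i in range(1, 1200)]
conc = [t * math.exp(-t / tau) for t in times]

# Peak-arrival estimate: velocity from the time of maximum concentration.
t_peak = times[conc.index(max(conc))]

# Temporal moment estimate: mean arrival time = m1 / m0
# (rectangle sums suffice for this uniformly sampled sketch).
m0 = sum(conc)
m1 = sum(t * c for t, c in zip(times, conc))
t_mean = m1 / m0

L = 10.0  # hypothetical travel distance to the monitoring plane
v_peak = L / t_peak
v_moment = L / t_mean
# The tail pulls the mean arrival later than the peak, so v_peak > v_moment,
# matching the distinction drawn in the abstract.
```

For this curve the peak arrives at about t = 5 while the mean arrival is near t = 10, so the peak-based velocity is roughly twice the moment-based one.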
Identification of Apical and Cervical Curvature Radius of Human Molars.
Estrela, Carlos; Bueno, Mike R; Barletta, Fernando B; Guedes, Orlando A; Porto, Olavo C; Estrela, Cyntia R A; Pécora, Jesus Djalma
2015-01-01
To determine the frequency of apical and cervical curvatures in human molars using the radius method and cone-beam computed tomography (CBCT) images. Four hundred images of mandibular and maxillary first and second molars were selected from a database of CBCT exams. The radius of curvature of curved root canals was measured using a circumcenter based on three mathematical points. Radii were classified according to the following scores: 0 - straight line; 1 - large radius (r > 8 mm, mild curvature); 2 - intermediate radius (r > 4 and r < 8 mm, moderate curvature); and 3 - small radius (r ≤ 4 mm, severe curvature). The frequency of curved root canals was analyzed according to root canal, root thirds, and coronal and sagittal planes, and assessed using the chi-square test (significance at α = 0.05). Of the 1,200 evaluated root canals, 92.75% presented curved root canals in the apical third and 73.25% in the cervical third on coronal plane images; sagittal plane analysis yielded 89.75% of curved canals in the apical third and 77% in the cervical third. Root canals with a large radius were significantly more frequent when compared with the other categories, regardless of root third or plane. Most root canals of maxillary and mandibular first and second molars showed some degree of curvature in the apical and cervical thirds, regardless of the analyzed plane (coronal or sagittal).
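The circumcenter-based radius measurement can be sketched directly: three landmark points determine a circle, and its radius is then classified with the abstract's score thresholds. The point coordinates below are invented; only the thresholds come from the abstract.

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three non-collinear 2D points
    (the circumcenter construction behind the radius method)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Twice the triangle area via the shoelace formula; r = abc / (4 * area).
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    return a * b * c / (2.0 * area2)

def curvature_score(r):
    """Scores from the abstract: 1 mild (r > 8 mm), 2 moderate
    (4 < r <= 8 mm), 3 severe (r <= 4 mm)."""
    if r > 8.0:
        return 1
    if r > 4.0:
        return 2
    return 3

# Three hypothetical canal landmarks lying on a circle of radius 5 mm.
r = circumradius((5.0, 0.0), (0.0, 5.0), (-5.0, 0.0))
score = curvature_score(r)  # moderate curvature
```

A small radius means a tight bend, which is why the severe-curvature category corresponds to r ≤ 4 mm rather than a large radius.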
Broadband Photometric Reverberation Mapping Analysis on SDSS-RM and Stripe 82 Quasars
NASA Astrophysics Data System (ADS)
Zhang, Haowen; Yang, Qian; Wu, Xuebing; Shen, Yue
2018-01-01
We extended the broadband photometric reverberation mapping (PRM) code, JAVELIN and test the availability to get broad line region (BLR) time delays that are consistent with spectroscopic reverberation mapping (SRM) projects. Broadband light curves of SDSS-RM quasars produced by convolution with system transmission curve were used in the test. We find that under similar sampling conditions (evenly and frequently sampled), the key factor determining whether the broadband PRM code can yield lags consistent with spectroscopic projects is the flux ratio of line to the reference continuum, which is in line with the findings in Zu et al. (2016). We further find a crucial line-to-continuum flux ratio, above which the mean of the ratios between the lags from PRM and SRM becomes closer to unity, and the scatter is pronouncedly reduced. Based on this flux ratio criteria, we selected some of the quasars from Hernitschek et al. (2015) and carry out broadband PRM on this subset. The performance of damped random walking (DRW) model and power-law (PL) structure function model on broadband PRM are compared using mock light curves with high, even cadences and low, uneven ones, respectively. We find that DRW model performs better in carrying out broadband PRM than PL model both for high and low cadence light curves with other data qualities similar to SDSS-RM quasars.
NASA Astrophysics Data System (ADS)
Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.
2018-03-01
Methods to estimate the strain-life curve, divided into three categories (simple approximations, artificial neural network-based approaches, and continuum damage mechanics models), were examined, and their accuracy was assessed in strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations are the easiest for practical use, with their applicability having already been verified for a broad range of materials.
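One well-known member of the "simple approximations" class is the Uniform Material Law of Bäumel and Seeger, which fills the Coffin-Manson-Basquin coefficients from monotonic properties alone. The sketch below uses the UML constants for steels as I recall them (verify against the original reference) and a hypothetical ultimate strength.

```python
# Coffin-Manson-Basquin strain-life relation:
#   eps_a = (sigma_f / E) * (2N)**b + eps_f * (2N)**c
# Uniform Material Law (Baeumel-Seeger) estimates for steels, from
# monotonic properties only:
def uml_strain_life(sigma_u, E, two_N):
    sigma_f = 1.5 * sigma_u      # fatigue strength coefficient
    b = -0.087                   # fatigue strength exponent
    eps_f = 0.59                 # fatigue ductility coefficient (psi = 1 case)
    c = -0.58                    # fatigue ductility exponent
    return (sigma_f / E) * two_N ** b + eps_f * two_N ** c

# Hypothetical high-strength steel: sigma_u = 900 MPa, E = 210 GPa (210e3 MPa).
reversals = [1e2, 1e3, 1e4, 1e5, 1e6]
strain_amps = [uml_strain_life(900.0, 210e3, n) for n in reversals]
# Strain amplitude falls monotonically with life, tracing the strain-life curve.
```

The elastic (Basquin) term dominates at long lives and the plastic (Coffin-Manson) term at short lives, which is why such approximations can work with only monotonic inputs.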
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Young-Sang; Le Roy, Robert J.
2016-01-14
All available "conventional" absorption/emission spectroscopic data have been combined with photodissociation data and translational spectroscopy data in a global analysis that yields analytic potential energy and Born-Oppenheimer breakdown functions for the X¹Σ⁺ and A¹Π states of CH⁺ and its isotopologues that reproduce all of the data (on average) within their assigned uncertainties. For the ground X¹Σ⁺ state, this fully quantum mechanical "Direct-Potential-Fit" analysis yielded an improved empirical well depth of D_e = 34 362.8(3) cm⁻¹ and equilibrium bond length of r_e = 1.128 462 5(58) Å. For the A¹Π state, the resulting well depth and equilibrium bond length are D_e = 10 303.7(3) cm⁻¹ and r_e = 1.235 896(14) Å, while the electronic isotope shift from the hydride to the deuteride is ΔT_e = -5.99(±0.08) cm⁻¹.
Automated diagnosis of Alzheimer's disease with multi-atlas based whole brain segmentations
NASA Astrophysics Data System (ADS)
Luo, Yuan; Tang, Xiaoying
2017-03-01
Voxel-based analysis is widely used in quantitative analysis of structural brain magnetic resonance imaging (MRI) and automated disease detection, such as Alzheimer's disease (AD). However, noise at the voxel level may cause low sensitivity to AD-induced structural abnormalities. This can be addressed with the use of a whole brain structural segmentation approach which greatly reduces the dimension of features (the number of voxels). In this paper, we propose an automatic AD diagnosis system that combines such whole brain segmentations with advanced machine learning methods. We used a multi-atlas segmentation technique to parcellate T1-weighted images into 54 distinct brain regions and extract their structural volumes to serve as the features for principal-component-analysis-based dimension reduction and support-vector-machine-based classification. The relationship between the number of retained principal components (PCs) and the diagnosis accuracy was systematically evaluated, in a leave-one-out fashion, based on 28 AD subjects and 23 age-matched healthy subjects. Our approach yielded strong classification results, with 96.08% overall accuracy being achieved using the three foremost PCs. In addition, our approach yielded 96.43% specificity, 100% sensitivity, and 0.9891 area under the receiver operating characteristic curve.
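The pipeline shape (regional-volume features, PCA dimension reduction, a classifier, leave-one-out evaluation) can be sketched with numpy alone. Everything below is synthetic: the data are invented, and a nearest-centroid classifier stands in for the paper's support vector machine.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for regional-volume features: 20 "subjects" per group,
# 54 regions, with a few regions systematically smaller in the "AD" group.
healthy = rng.normal(10.0, 0.5, size=(20, 54))
ad = rng.normal(10.0, 0.5, size=(20, 54))
ad[:, :5] -= 3.0                      # pronounced atrophy in 5 regions
X = np.vstack([healthy, ad])
y = np.array([0] * 20 + [1] * 20)

def pca_fit(X, k):
    """Mean and top-k principal directions via SVD of the centered data."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

correct = 0
for i in range(len(y)):               # leave-one-out evaluation
    train = np.arange(len(y)) != i
    mu, W = pca_fit(X[train], k=3)    # three leading PCs, as in the paper
    Z = (X[train] - mu) @ W.T
    z = (X[i] - mu) @ W.T
    # Nearest-centroid classifier stands in for the SVM of the paper.
    centroids = [Z[y[train] == c].mean(axis=0) for c in (0, 1)]
    pred = int(np.linalg.norm(z - centroids[1]) < np.linalg.norm(z - centroids[0]))
    correct += int(pred == y[i])

accuracy = correct / len(y)
```

Note that PCA is refit inside each leave-one-out fold; fitting it once on all subjects would leak information from the held-out case into the model.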
Multi-frequency local wavenumber analysis and ply correlation of delamination damage.
Juarez, Peter D; Leckey, Cara A C
2015-09-01
Wavenumber domain analysis through use of scanning laser Doppler vibrometry has been shown to be effective for non-contact inspection of damage in composites. Qualitative and semi-quantitative local wavenumber analysis of realistic delamination damage and quantitative analysis of idealized damage scenarios (Teflon inserts) have been performed previously in the literature. This paper presents a new methodology based on multi-frequency local wavenumber analysis for quantitative assessment of multi-ply delamination damage in carbon fiber reinforced polymer (CFRP) composite specimens. The methodology is presented and applied to a real world damage scenario (impact damage in an aerospace CFRP composite). The methodology yields delamination size and also correlates local wavenumber results from multiple excitation frequencies to theoretical dispersion curves in order to robustly determine the delamination ply depth. Results from the wavenumber based technique are validated against a traditional nondestructive evaluation method.
Robert B. Thomas
1988-01-01
Rating curves are widely used for directly assessing changes in the suspended sediment delivery process and indirectly for estimating total yields. Four sampling methods were simulated over a 31-day record of suspended sediment from the North Fork of the Mad River near Korbel, California. The position and size of the four groups of plotted slope/intercept...
Early Detection of Progressive Adolescent Idiopathic Scoliosis: A Severity Index.
Skalli, Wafa; Vergari, Claudio; Ebermeyer, Eric; Courtois, Isabelle; Drevelle, Xavier; Kohler, Remi; Abelin-Genevois, Kariman; Dubousset, Jean
2017-06-01
Early detection of progressive adolescent idiopathic scoliosis (AIS) was assessed based on 3D quantification of the deformity. Based on 3D quantitative description of scoliosis curves, the aim is to assess a specific phenotype that could be an early detectable severity index for progressive AIS. Early detection of progressive scoliosis is important for adapted treatment to limit progression. However, progression risk assessment is mainly based on follow up, waiting for signs of rapid progression that generally occur during the growth peak. Sixty-five patients with mild scoliosis (16 boys, 49 girls, Cobb angle between 10 and 20°) and a Risser sign between 0 and 2 were followed from their first examination until a decision was made by the clinician, either considering the spine as stable at the end of growth (26 patients) or planning to brace because of progression (39 patients). Calibrated biplanar x-rays were performed and 3D reconstructions of the spine allowed calculation of six local parameters related to main curve deformity. For progressive curve 3D phenotype assessment, data were compared with those previously assessed for 30 severe scoliosis (Cobb angle > 35°), 17 scoliosis before brace (Cobb angle > 29°) and 53 spines of nonscoliosis subjects. A predictive discriminant analysis was performed to assess similarity of mild scoliosis curves either to those of scoliosis or nonscoliosis spines, yielding a severity index (S-index). The S-index value at first examination was compared with clinical outcome. At the first exam, 53 out of 65 predictions (82%) were in agreement with actual clinical outcome. Approximately 89% of the curves that were predicted as progressive proved accurate. Although still requiring large scale validation, results are promising for early detection of progressive curves. Level of Evidence: 2.
Gaia eclipsing binary and multiple systems. Supervised classification and self-organizing maps
NASA Astrophysics Data System (ADS)
Süveges, M.; Barblan, F.; Lecoeur-Taïbi, I.; Prša, A.; Holl, B.; Eyer, L.; Kochoska, A.; Mowlavi, N.; Rimoldini, L.
2017-07-01
Context. Large surveys producing tera- and petabyte-scale databases require machine-learning and knowledge discovery methods to deal with the overwhelming quantity of data and the difficulties of extracting concise, meaningful information with reliable assessment of its uncertainty. This study investigates the potential of a few machine-learning methods for the automated analysis of eclipsing binaries in the data of such surveys. Aims: We aim to aid the extraction of samples of eclipsing binaries from such databases and to provide basic information about the objects. We intend to estimate class labels according to two different, well-known classification systems, one based on the light curve morphology (EA/EB/EW classes) and the other based on the physical characteristics of the binary system (system morphology classes; detached through overcontact systems). Furthermore, we explore low-dimensional surfaces along which the light curves of eclipsing binaries are concentrated, and consider their use in the characterization of the binary systems and in the exploration of biases of the full unknown Gaia data with respect to the training sets. Methods: We have explored the performance of principal component analysis (PCA), linear discriminant analysis (LDA), Random Forest classification and self-organizing maps (SOM) for the above aims. We pre-processed the photometric time series by combining a double Gaussian profile fit and a constrained smoothing spline, in order to de-noise and interpolate the observed light curves. We achieved further denoising, and selected the most important variability elements from the light curves using PCA. Supervised classification was performed using Random Forest and LDA based on the PC decomposition, while SOM gives a continuous 2-dimensional manifold of the light curves arranged by a few important features. 
We estimated the uncertainty of the supervised methods due to the specific finite training set using ensembles of models constructed on randomized training sets. Results: We obtain excellent results (about 5% global error rate) with classification into light curve morphology classes on the Hipparcos data. The classification into system morphology classes using the Catalog and Atlas of Eclipsing binaries (CALEB) has a higher error rate (about 10.5%), most importantly due to the (sometimes strong) similarity of the photometric light curves originating from physically different systems. When trained on CALEB and then applied to Kepler-detected eclipsing binaries subsampled according to Gaia observing times, LDA and SOM provide tractable, easy-to-visualize subspaces of the full (functional) space of light curves that summarize the most important phenomenological elements of the individual light curves. The sequence of light curves ordered by their first linear discriminant coefficient is compared to results obtained using local linear embedding. The SOM method proves able to find a 2-dimensional embedded surface in the space of the light curves which separates the system morphology classes in its different regions, and also identifies a few other phenomena, such as the asymmetry of the light curves due to spots, eccentric systems, and systems with a single eclipse. Furthermore, when data from other surveys are projected to the same SOM surface, the resulting map yields a good overview of the general biases and distortions due to differences in time sampling or population.
Activation cross-sections of proton induced reactions on vanadium in the 37-65 MeV energy range
NASA Astrophysics Data System (ADS)
Ditrói, F.; Tárkányi, F.; Takács, S.; Hermanne, A.
2016-08-01
Experimental excitation functions for proton induced reactions on natural vanadium in the 37-65 MeV energy range were measured with the activation method using a stacked foil irradiation technique. Using high resolution gamma spectrometry, cross-section data for the production of 51,48Cr, 48V, 48,47,46,44m,44g,43Sc and 43,42K were determined. Comparisons with the earlier published data are presented and results predicted by different theoretical codes (EMPIRE and TALYS) are included. Thick target yields were calculated from a fit to our experimental excitation curves and compared with the earlier experimental yield data. Depth distribution curves to be used for thin layer activation (TLA) are also presented.
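Deriving a thick-target yield from an excitation function amounts to integrating the cross-section over the energy lost in the target, Y(E_in) ∝ ∫ σ(E)/S(E) dE, where S(E) is the stopping power. The sketch below uses invented smooth shapes for σ(E) and S(E) in arbitrary units; real calculations would use the measured excitation curve and tabulated stopping powers.

```python
# Thick-target yield from an excitation function sigma(E) and the stopping
# power S(E) = -dE/dx:  Y(E_in) proportional to integral of sigma(E)/S(E) dE
# over the energy interval the beam traverses in the target.
def thick_target_yield(sigma, stopping_power, e_out, e_in, steps=1000):
    h = (e_in - e_out) / steps
    total = 0.0
    for i in range(steps):
        e = e_out + (i + 0.5) * h     # midpoint rule
        total += sigma(e) / stopping_power(e) * h
    return total

# Invented shapes, purely for illustration (arbitrary units):
sigma = lambda e: max(0.0, e - 37.0) * 1.0e-3   # rises above a 37 MeV threshold
stopping = lambda e: 50.0 / e                   # stopping power falls with energy

y_partial = thick_target_yield(sigma, stopping, 37.0, 50.0)
y_full = thick_target_yield(sigma, stopping, 37.0, 65.0)
# A wider energy window (thicker effective target) gives a larger yield.
```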
A TUTORIAL ON THE USE OF EXCEL 2010 AND EXCEL FOR MAC 2011 FOR CONDUCTING DELAY-DISCOUNTING ANALYSES
Reed, Derek D; Kaplan, Brent A; Brewer, Adam T
2012-01-01
In recent years, researchers and practitioners in the behavioral sciences have profited from a growing literature on delay discounting. The purpose of this article is to provide readers with a brief tutorial on how to use Microsoft Office Excel 2010 and Excel for Mac 2011 to analyze discounting data to yield parameters for both the hyperbolic discounting model and area under the curve. This tutorial is intended to encourage the quantitative analysis of behavior in both research and applied settings by readers with relatively little formal training in nonlinear regression. PMID:22844143
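The two analyses the tutorial walks through can be sketched outside Excel as well: fitting the hyperbolic model V = A / (1 + kD) and computing normalized area under the curve. The grid-search fit below is a stdlib-only stand-in for Excel's Solver; the indifference-point data are generated, not taken from the article.

```python
# Hyperbolic discounting: V = A / (1 + k * D), plus area under the curve
# (AUC) via the trapezoid rule on normalized delays and values.
def hyperbolic(A, k, delay):
    return A / (1.0 + k * delay)

def fit_k(delays, values, A, k_grid):
    """Least-squares grid search for k (the tutorial uses Excel's Solver)."""
    def sse(k):
        return sum((v - hyperbolic(A, k, d)) ** 2 for d, v in zip(delays, values))
    return min(k_grid, key=sse)

def auc(delays, values, A):
    """Normalized area under the discounting curve (Myerson-style AUC)."""
    xs = [d / max(delays) for d in delays]
    ys = [v / A for v in values]
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

# Noiseless indifference points generated with k = 0.05 for a $100 reward.
delays = [0, 1, 7, 30, 90, 180, 365]
values = [hyperbolic(100.0, 0.05, d) for d in delays]
k_grid = [i / 1000.0 for i in range(1, 201)]    # candidate k from 0.001 to 0.200
k_hat = fit_k(delays, values, 100.0, k_grid)
area = auc(delays, values, 100.0)
```

AUC falls between 0 and 1, with smaller areas indicating steeper discounting; it is popular precisely because it needs no assumed functional form, unlike the fitted k.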
NASA Astrophysics Data System (ADS)
Chan, H. M.; van der Velden, B. H. M.; Loo, C. E.; Gilhuijs, K. G. A.
2017-08-01
We present a radiomics model to discriminate between patients at low risk and those at high risk of treatment failure at long-term follow-up based on eigentumors: principal components computed from volumes encompassing tumors in washin and washout images of pre-treatment dynamic contrast-enhanced (DCE-) MR images. Eigentumors were computed from the images of 563 patients from the MARGINS study. Subsequently, a least absolute shrinkage selection operator (LASSO) selected candidates from the components that contained 90% of the variance of the data. The model for prediction of survival after treatment (median follow-up time 86 months) was based on logistic regression. Receiver operating characteristic (ROC) analysis was applied and area-under-the-curve (AUC) values were computed as measures of training and cross-validated performances. The discriminating potential of the model was confirmed using Kaplan-Meier survival curves and log-rank tests. From the 322 principal components that explained 90% of the variance of the data, the LASSO selected 28 components. The ROC curves of the model yielded AUC values of 0.88, 0.77 and 0.73, for the training, leave-one-out cross-validated and bootstrapped performances, respectively. The bootstrapped Kaplan-Meier survival curves confirmed significant separation for all tumors (P < 0.0001). Survival analysis on immunohistochemical subgroups shows significant separation for the estrogen-receptor subtype tumors (P < 0.0001) and the triple-negative subtype tumors (P = 0.0039), but not for tumors of the HER2 subtype (P = 0.41). The results of this retrospective study show the potential of early-stage pre-treatment eigentumors for use in prediction of treatment failure of breast cancer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhakaran, SP.; Babu, R. Ramesh, E-mail: rampap2k@yahoo.co.in; Velusamy, P.
2011-11-15
Highlights: (1) Growth of a bulk single crystal of 8-hydroxyquinoline (8-HQ) by the vertical Bridgman technique for the first time. (2) The crystalline perfection is reasonably good. (3) The photoluminescence spectrum shows that the material is suitable for blue light emission. -- Abstract: A single crystal of the organic nonlinear optical material 8-hydroxyquinoline (8-HQ), of dimensions 52 mm (length) x 12 mm (dia.), was grown from the melt using the vertical Bridgman technique. The crystal system of the material was confirmed by powder X-ray diffraction analysis. The crystalline perfection of the grown crystal was examined by high-resolution X-ray diffraction. The low angular spread (around 400'') of the diffraction curve and the low full width at half maximum values show that the crystalline perfection is reasonably good. The recorded photoluminescence spectrum shows that the material is suitable for blue light emission. Optical transmittance in the UV and visible regions was measured, and mechanical strength was estimated from the Vickers microhardness test along the growth face of the grown crystal.
de Jager, L S; Andrews, A R
2000-11-01
A novel, fast screening method for organochlorine pesticides (OCPs) in water samples has been developed. Total analysis time was less than 9 min, allowing 11 samples to be screened per hour. The relatively new technique of solvent microextraction (SME) was used to extract and preconcentrate the pesticides into a single drop of hexane. The use of a conventional carbon dioxide cryotrap was investigated for introduction of the extract onto a micro-bore (0.1 mm) capillary column for fast GC analysis. A pulsed-discharge electron capture detector was used, which yielded selective and sensitive measurement of the pesticide peaks. Fast GC conditions were optimised and tested with the previously developed SME procedure. Calibration curves showed good linearity, with RSD values ranging from 12.0 to 28% and a detection limit of 0.25 ng mL-1 for most OCPs. Spiked river water samples were tested, and the developed screen was able to differentiate between spiked samples and samples containing no OCPs.
Xu, Yiliang; Chen, Baoliang
2013-10-01
The thermodynamic parameters of the conversion of two companion pairs of materials, i.e., rice straw vs dairy manure and rice bran vs chicken manure, to biochars were characterized by thermogravimetric analysis. The overall changes in activation energy (Ea) were well described by the Flynn-Wall method. The Ea values increased steeply from about 120 to 180 kJ/mol as the mass conversion (α) rose from 0.2 to 0.4, changed relatively little at 0.4 < α < 0.65, and then increased quickly at α > 0.65. The higher mineral contents of the manures resulted in larger Ea values. The individual conversion of hemicellulose, cellulose and lignin in the feedstocks was identified and their thermodynamic parameters (ΔH°, ΔG° and ΔS°) were calculated. The yields of biochars calculated from the TG curves were compared with the yields determined using muffle-furnace pyrolysis. Together with Fourier transform infrared spectral data, the distinct decomposition behaviours of the biomasses and manures were evaluated. Copyright © 2013 Elsevier Ltd. All rights reserved.
Area under precision-recall curves for weighted and unweighted data.
Keilwagen, Jens; Grosse, Ivo; Grau, Jan
2014-01-01
Precision-recall curves are highly informative about the performance of binary classifiers, and the area under these curves is a popular scalar performance measure for comparing different classifiers. However, for many applications class labels are not provided with absolute certainty, but with some degree of confidence, often reflected by weights or soft labels assigned to data points. Computing the area under the precision-recall curve requires interpolating between adjacent supporting points, but previous interpolation schemes are not directly applicable to weighted data. Hence, even in cases where weights were available, they had to be neglected for assessing classifiers using precision-recall curves. Here, we propose an interpolation for precision-recall curves that can also be used for weighted data, and we derive conditions for classification scores yielding the maximum and minimum area under the precision-recall curve. We investigate accordances and differences of the proposed interpolation and previous ones, and we demonstrate that taking into account existing weights of test data is important for the comparison of classifiers.
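A rough sketch of the weighted construction described above, in plain Python: examples are ranked by score, and weighted true/false positives accumulate into (recall, precision) supporting points. Note that the trapezoid step below is the naive linear interpolation; the paper's contribution is precisely a more careful interpolation between supporting points, so this is an approximation, not the authors' estimator.

```python
def weighted_pr_auc(scores, labels, weights):
    # Rank by decreasing score, accumulating weighted true/false positives.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(w for y, w in zip(labels, weights) if y == 1)
    tp = fp = 0.0
    points = []                      # (recall, precision) supporting points
    for i in order:
        if labels[i] == 1:
            tp += weights[i]
        else:
            fp += weights[i]
        points.append((tp / total_pos, tp / (tp + fp)))
    # Trapezoid in recall: a naive linear interpolation between
    # supporting points, used here only as a simple approximation.
    auc, prev_r, prev_p = 0.0, 0.0, 1.0
    for r, p in points:
        auc += (r - prev_r) * (p + prev_p) / 2.0
        prev_r, prev_p = r, p
    return auc

# Unit weights reduce to the ordinary precision-recall curve
perfect = weighted_pr_auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0], [1.0] * 4)
# Soft labels enter as per-example weights
weighted = weighted_pr_auc([0.9, 0.8, 0.4], [1, 0, 1], [2.0, 1.0, 1.0])
```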
Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.
Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang
2015-09-21
A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.
Nested taxa-area curves for eastern United States floras
Bennett, J.P.
1997-01-01
The slopes of log-log species-area curves have been studied extensively and found to be influenced by the range of areas under study. Two such studies of eastern United States floras have yielded species-area curve slopes which differ by more than 100%: 0.251 and 0.113. The first slope may be too steep because the flora of the world was included, and both may be too steep because noncontiguous areas were used. These two hypotheses were tested using a set of nested floras centered in Ohio and continuing up to the flora of the world. The results suggest that this set of eastern United States floras produces a log-log species-area curve with a slope of approximately 0.20 with the flora of the world excluded, and regardless of whether or not the floras are from nested areas. Genera- and family-area curves are less steep than species-area curves and show similar patterns. Taxa ratio curves also increase with area, with the species/family ratio showing the steepest slope.
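The slope z of a log-log species-area curve S = cA^z is simply the ordinary least-squares slope of log S on log A. A self-contained sketch on synthetic nested floras constructed to obey z = 0.20 exactly (illustration only, not the floristic data of the study):

```python
import math

def loglog_slope(areas, counts):
    # OLS slope of log10(count) on log10(area): the z-value of S = c * A^z
    xs = [math.log10(a) for a in areas]
    ys = [math.log10(s) for s in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic nested floras obeying S = 50 * A^0.20 exactly
areas = [1e2, 1e3, 1e4, 1e5, 1e6]
species = [50.0 * a ** 0.20 for a in areas]
z = loglog_slope(areas, species)
```

The same function applied to genus or family counts gives the shallower taxa-area slopes discussed above.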
NASA Astrophysics Data System (ADS)
Fu, Liming; Shan, Mokun; Zhang, Daoda; Wang, Huanrong; Wang, Wei; Shan, Aidang
2017-05-01
The microstructures and deformation behavior were studied in a high-temperature annealed high-manganese dual-phase (28 vol pct δ-ferrite and 72 vol pct γ-austenite) transformation-induced plasticity/twinning-induced plasticity (TRIP/TWIP) steel. The results showed that the steel exhibits a special Lüders-like yielding phenomenon at room temperature (RT) and 348 K (75 °C), while it shows continuous yielding at 423 K, 573 K and 673 K (150 °C, 300 °C and 400 °C) deformation. A significant TRIP effect takes place during Lüders-like deformation at RT and 348 K (75 °C) temperatures. Semiquantitative analysis of the TRIP effect on the Lüders-like yield phenomenon proves that a softening effect of the strain energy consumption of strain-induced transformation is mainly responsible for this Lüders-like phenomenon. The TWIP mechanism dominates the 423 K (150 °C) deformation process, while the dislocation glide controls the plasticity at 573 K (300 °C) deformation. The delta-ferrite, as a hard phase in annealed dual-phase steel, greatly affects the mechanical stability of austenite due to the heterogeneous strain distribution between the two phases during deformation. A delta-ferrite-aided TRIP effect, i.e., martensite transformation induced by localized strain concentration of the hard delta-ferrite, is proposed to explain this kind of Lüders-like phenomenon. Moreover, the tensile curve at RT exhibits an upward curved behavior in the middle deformation stage, which is principally attributed to the deformation twinning of austenite retained after Lüders-like deformation. The combination of the TRIP effect during Lüders-like deformation and the subsequent TWIP effect greatly enhances the ductility in this annealed high-manganese dual-phase TRIP/TWIP steel.
Sputtering of cobalt and chromium by argon and xenon ions near the threshold energy region
NASA Technical Reports Server (NTRS)
Handoo, A. K.; Ray, P. K.
1993-01-01
Sputtering yields of cobalt and chromium by argon and xenon ions with energies below 50 eV are reported. The targets were electroplated on copper substrates. Measurable sputtering yields were obtained from cobalt with ion energies as low as 10 eV. The ion beams were produced by an ion gun. A radioactive tracer technique was used for the quantitative measurement of the sputtering yield. Co-57 and Cr-51 were used as tracers. The yield-energy curves are observed to be concave, which brings into question the practice of finding threshold energies by linear extrapolation.
Hess, Andreas; Aksel, Nuri
2013-09-10
The yield stress of polyelectrolyte multilayer modified suspensions exhibits a surprising dependence on the polyelectrolyte conformation of multilayer films. The rheological data scale onto a universal master curve for each polyelectrolyte conformation as the particle volume fraction, φ, and the ionic strength of the background fluid, I, are varied. It is shown that rough films with highly coiled, brushy polyelectrolytes significantly enhance the yield stress. Moreover, via the ionic strength I of the background fluid, the dynamic yield stress of brushy polyelectrolyte multilayers can be finely adjusted over 2 decades.
Behavioral economics and empirical public policy.
Hursh, Steven R; Roma, Peter G
2013-01-01
The application of economics principles to the analysis of behavior has yielded novel insights on value and choice across contexts ranging from laboratory animal research to clinical populations to national trends of global impact. Recent innovations in demand curve methods provide a credible means of quantitatively comparing qualitatively different reinforcers as well as quantifying the choice relations between concurrently available reinforcers. The potential of the behavioral economic approach to inform public policy is illustrated with examples from basic research, pre-clinical behavioral pharmacology, and clinical drug abuse research as well as emerging applications to public transportation and social behavior. Behavioral Economics can serve as a broadly applicable conceptual, methodological, and analytical framework for the development and evaluation of empirical public policy. © Society for the Experimental Analysis of Behavior.
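One widely used demand-curve form in this literature is the exponential demand equation of Hursh and Silberberg (2008), in which Q0 is consumption at zero price, alpha governs the rate of decline, and k fixes the log range of consumption. A sketch with purely illustrative parameter values (none of these numbers come from the article):

```python
import math

def demand(C, Q0, alpha, k):
    # Exponential demand equation (Hursh & Silberberg, 2008):
    #   log10 Q = log10 Q0 + k * (exp(-alpha * Q0 * C) - 1)
    return 10.0 ** (math.log10(Q0) + k * (math.exp(-alpha * Q0 * C) - 1.0))

# Illustrative parameter values only; not estimates from the article.
Q0, alpha, k = 100.0, 0.002, 2.0
prices = [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]
consumption = [demand(c, Q0, alpha, k) for c in prices]
```

Because alpha is scale-free with respect to the reinforcer's units, fits of this form allow the quantitative comparison of qualitatively different reinforcers described above.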
Development and Validation Study of the Internet Overuse Screening Questionnaire
Lee, Han-Kyeong; Lee, Hae-Woo; Han, Joo Hyun; Park, Subin; Ju, Seok-Jin; Choi, Kwanwoo; Lee, Ji Hyeon; Jeon, Hong Jin
2018-01-01
Objective Concerns have grown over behavioral and emotional problems caused by excessive internet use. This study aimed to develop and standardize a questionnaire that can efficiently identify at-risk internet users through their internet usage habits. Methods Participants (n=158) were recruited at six I-will-centers located in Seoul, South Korea. From the initial pool of 36 questionnaire items, 28 preliminary items were selected through expert evaluation and panel discussions. The construct validity, internal consistency, and concurrent validity were examined. We also conducted receiver operating characteristic (ROC) curve analysis to assess the diagnostic ability of the Internet Overuse Screening-Questionnaire (IOS-Q). Results The exploratory factor analysis yielded a five-factor structure. Four factors with 17 items remained after items with unclear factor loadings were removed. The Cronbach’s alpha for the IOS-Q total score was 0.91, and test-retest reliability was 0.72. The correlation between Young’s internet addiction scale and the K-scale supported concurrent validity. ROC analysis showed that the IOS-Q has superior diagnostic ability, with an area under the curve of 0.87. At the cut-off point of 25.5, the sensitivity was 0.93 and specificity was 0.86. Conclusion Overall, this study supports the use of the IOS-Q for internet addiction research and for screening high-risk individuals. PMID:29669406
NASA Astrophysics Data System (ADS)
Glennie, Erin; Anyamba, Assaf
2018-06-01
A time series of Advanced Very High Resolution Radiometer (AVHRR) derived normalized difference vegetation index (NDVI) data was compared to National Agricultural Statistics Service (NASS) corn yield data in the United States Corn Belt from 1982 to 2014. The main objectives of the comparison were to assess 1) the consistency of regional Corn Belt responses to El Niño/Southern Oscillation (ENSO) teleconnection signals, and 2) the reliability of using NDVI as an indicator of crop yield. Regional NDVI values were used to model a seasonal curve and to define the growing season, May to October. Seasonal conditions in each county were represented by NDVI and land surface temperature (LST) composites, and corn yield was represented by average annual bushels produced per acre. Correlation analysis between the NDVI, LST, corn yield, and equatorial Pacific sea surface temperature anomalies revealed patterns in land surface dynamics and corn yield, as well as typical impacts of ENSO episodes. Growing seasons coincident with La Niña events were consistently warmer, but El Niño events did not consistently affect NDVI, temperature, or corn yield. Moreover, the El Niño and La Niña composite images suggest that impacts vary spatially across the Corn Belt. While corn is the dominant crop in the region, some inconsistencies between corn yield and NDVI may be attributed to soy crops and other background interference. The overall correlation between the total growing-season NDVI anomaly and detrended corn yield was 0.61 (p = 0.00013), though the strength of the relationship varies across the Corn Belt.
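The headline correlation combines two standard steps, removal of the linear yield trend and Pearson correlation of the residuals with the NDVI anomaly. A plain-Python sketch on invented series (not the NASS or AVHRR data):

```python
def pearson(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def detrend(y):
    # Subtract the least-squares linear trend over the time index
    n = len(y)
    t = list(range(n))
    mt, my = sum(t) / n, sum(y) / n
    b = (sum((a - mt) * (v - my) for a, v in zip(t, y))
         / sum((a - mt) ** 2 for a in t))
    a0 = my - b * mt
    return [v - (a0 + b * i) for i, v in enumerate(y)]

# Invented annual series (the study spans 1982-2014; toy length here)
yields_raw = [100.0, 104.0, 103.0, 110.0, 108.0, 115.0, 112.0, 120.0]
ndvi_anom = [-0.5, 0.6, -0.2, 0.9, -0.3, 0.8, -0.6, 0.7]
r = pearson(detrend(yields_raw), ndvi_anom)
```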
Commissioning an in-room mobile CT for adaptive proton therapy with a compact proton system.
Oliver, Jasmine A; Zeidan, Omar; Meeks, Sanford L; Shah, Amish P; Pukala, Jason; Kelly, Patrick; Ramakrishna, Naren R; Willoughby, Twyla R
2018-05-01
To describe the commissioning of AIRO mobile CT system (AIRO) for adaptive proton therapy on a compact double scattering proton therapy system. A Gammex phantom was scanned with varying plug patterns, table heights, and mAs on a CT simulator (CT Sim) and on the AIRO. AIRO-specific CT-stopping power ratio (SPR) curves were created with a commonly used stoichiometric method using the Gammex phantom. A RANDO anthropomorphic thorax, pelvis, and head phantom, and a CIRS thorax and head phantom were scanned on the CT Sim and AIRO. Clinically realistic treatment plans and nonclinical plans were generated on the CT Sim images and subsequently copied onto the AIRO CT scans for dose recalculation and comparison for various AIRO SPR curves. Gamma analysis was used to evaluate dosimetric deviation between both plans. AIRO CT values skewed toward solid water when plugs were scanned surrounded by other plugs in phantom. Low-density materials demonstrated largest differences. Dose calculated on AIRO CT scans with stoichiometric-based SPR curves produced over-ranged proton beams when large volumes of low-density material were in the path of the beam. To create equivalent dose distributions on both data sets, the AIRO SPR curve's low-density data points were iteratively adjusted to yield better proton beam range agreement based on isodose lines. Comparison of the stoichiometric-based AIRO SPR curve and the "dose-adjusted" SPR curve showed slight improvement on gamma analysis between the treatment plan and the AIRO plan for single-field plans at the 1%, 1 mm level, but did not affect clinical plans indicating that HU number differences between the CT Sim and AIRO did not affect dose calculations for robust clinical beam arrangements. Based on this study, we believe the AIRO can be used offline for adaptive proton therapy on a compact double scattering proton therapy system. © 2018 Orlando Health UF Health Cancer Center. 
Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Spontaneous swallowing frequency has potential to identify dysphagia in acute stroke.
Crary, Michael A; Carnaby, Giselle D; Sia, Isaac; Khanna, Anna; Waters, Michael F
2013-12-01
Spontaneous swallowing frequency has been described as an index of dysphagia in various health conditions. This study evaluated the potential of spontaneous swallow frequency analysis as a screening protocol for dysphagia in acute stroke. In a cohort of 63 acute stroke cases, swallow frequency rates (swallows per minute [SPM]) were compared with stroke and swallow severity indices, age, time from stroke to assessment, and consciousness level. Mean differences in SPM were compared between patients with versus without clinically significant dysphagia. Receiver operating characteristic curve analysis was used to identify the optimal threshold in SPM, which was compared with a validated clinical dysphagia examination for identification of dysphagia cases. Time series analysis was used to identify the minimally adequate time period to complete spontaneous swallow frequency analysis. SPM correlated significantly with stroke and swallow severity indices but not with age, time from stroke onset, or consciousness level. Patients with dysphagia demonstrated significantly lower SPM rates. SPM differed by dysphagia severity. Receiver operating characteristic curve analysis yielded a threshold of SPM≤0.40 that identified dysphagia (per the criterion referent) with 0.96 sensitivity, 0.68 specificity, and 0.96 negative predictive value. Time series analysis indicated that a 5- to 10-minute sampling window was sufficient to calculate spontaneous swallow frequency to identify dysphagia cases in acute stroke. Spontaneous swallowing frequency presents high potential to screen for dysphagia in acute stroke without the need for trained, available personnel.
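The sensitivity, specificity, and negative predictive value quoted for the SPM ≤ 0.40 threshold follow from a standard 2x2 screening table. A sketch with invented measurements (not the study's 63-case cohort):

```python
def screen_stats(spm, dysphagia, threshold=0.40):
    # A positive screen is a swallowing rate at or below threshold (SPM <= 0.40)
    tp = sum(1 for s, y in zip(spm, dysphagia) if s <= threshold and y)
    fn = sum(1 for s, y in zip(spm, dysphagia) if s > threshold and y)
    tn = sum(1 for s, y in zip(spm, dysphagia) if s > threshold and not y)
    fp = sum(1 for s, y in zip(spm, dysphagia) if s <= threshold and not y)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, npv

# Invented SPM values and criterion-referent labels (1 = dysphagia)
sens, spec, npv = screen_stats(
    [0.20, 0.35, 0.45, 0.50, 0.30, 0.90],
    [1, 1, 1, 0, 0, 0],
)
```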
Antioxidant-spotting in micelles and emulsions.
Aliaga, Carolina; López de Arbina, Amaia; Pastenes, Camila; Rezende, Marcos Caroli
2018-04-15
A simple protocol is described for locating the site of action of an antioxidant (AO) in a micro-heterogeneous mixture, based on the pattern of the reactivity curve towards the AO of a series of 4-alkanoyl TEMPO radicals. The resulting cut-off curves yield information regarding the hydrophobic microenvironment surrounding the reactive AO group, and its accessibility by the probe. Convex curves are an indication of an AO located in a more hydrophilic environment, while concave plots originate from AOs in a more hydrophobic location in the micro-heterogeneous system. Copyright © 2017 Elsevier Ltd. All rights reserved.
Surface wave phase velocities from 2-D surface wave tomography studies in the Anatolian plate
NASA Astrophysics Data System (ADS)
Arif Kutlu, Yusuf; Erduran, Murat; Çakır, Özcan; Vinnik, Lev; Kosarev, Grigoriy; Oreshin, Sergey
2014-05-01
We study fundamental-mode Rayleigh and Love surface wave propagation beneath the Anatolian plate. To examine the inter-station phase velocities, a two-station method is used along with the Multiple Filter Technique (MFT) in the Computer Programs in Seismology package (Herrmann and Ammon, 2004). The near-station waveform is deconvolved from the far-station waveform, removing the propagation effects between the source and the near station. This method requires that the two stations be aligned with the epicentre on a great circle path; both the azimuthal difference between the two stations and the azimuthal difference between the earthquake and the stations are restricted to be smaller than 5°. We selected 3378 teleseismic events (Mw >= 5.7) recorded with high signal-to-noise ratio by 394 broadband local stations within the years 1999-2013. After correction for the instrument response, suitable seismogram pairs are analyzed with the two-station method, yielding a collection of phase velocity curves in various period ranges (mainly 25-185 s). Diffraction from lateral heterogeneities, multipathing, and interference of Rayleigh and Love waves can alter the dispersion measurements. In order to obtain quality measurements, we select only smooth portions of the phase velocity curves, remove outliers and average over many measurements. We discard average phase velocity curves suspected of suffering from phase wrapping errors by comparing them with a reference Earth model (IASP91 by Kennett and Engdahl, 1991). The outlined analysis procedure yields 3035 Rayleigh and 1637 Love individual phase velocity curves. To obtain Rayleigh and Love wave travel times for a given region, we performed a 2-D tomographic inversion using the Fast Marching Surface Tomography (FMST) code developed by N. Rawlinson at the Australian National University. This software package is based on the multistage fast marching method of Rawlinson and Sambridge (2004a, 2004b).
The azimuthal coverage of the respective two-station paths is adequate to analyze the observed dispersion curves in terms of both azimuthal and radial anisotropy beneath the study region. This research is supported by a Joint Research Project of the Scientific and Technological Research Council of Turkey (TÜBİTAK, grant number 111Y190) and the Russian Foundation for Basic Research (RFBR).
Rapid Generation of Conceptual and Preliminary Design Aerodynamic Data by a Computer Aided Process
2000-06-01
methodologies, often blended with sensible ’guess-estimated’ values, due to peculiar requirements such as flexibility and robustness. ... Deriving appropriate blending interpolation between the given data from the ’raw’ aerodynamic data is a process which certainly requires ... yields generally ... Wing-like component patches are described by defining the evolution of a conic curve between two opposite boundary curves by means of blending functions.
Apsidal rotation in the eclipsing binary AG Persei
NASA Technical Reports Server (NTRS)
Koch, Robert H.; Woodward, Edith J.
1987-01-01
New three-filter light curves of AG Per are given. These yield times of minimum light in accord with the known rate of apsidal rotation but do not improve that rate. These light curves and all other published historical ones have been treated with the code EBOP and are shown to give largely consistent geometric and photometric parameters no matter which orientation of the orbit is displayed to the observer.
Study of the Effects of Metallurgical Factors on the Growth of Fatigue Microcracks.
1987-11-25
polycrystalline) yield stress. 8. The resulting model, predicated on the notion of orientation-dependent microplastic grains, predicts quantitatively the entire... Figure 5. Predicted crack growth curves for small cracks propagating from a microplastic grain into elastic-plastic, contiguous grains; Ao is defined as... or the crack tip opening displacement, δ.
Smith, Joseph P; Smith, Frank C; Ottaway, Joshua; Krull-Davatzes, Alexandra E; Simonson, Bruce M; Glass, Billy P; Booksh, Karl S
2017-08-01
The high-pressure, α-PbO2-structured polymorph of titanium dioxide (TiO2-II) was recently identified in micrometer-sized grains recovered from four Neoarchean spherule layers deposited between ∼2.65 and ∼2.54 billion years ago. Several lines of evidence support the interpretation that these layers represent distal impact ejecta layers. The presence of shock-induced TiO2-II provides physical evidence to further support an impact origin for these spherule layers. Detailed characterization of the distribution of TiO2-II in these grains may be useful for correlating the layers, estimating the paleodistances of the layers from their source craters, and providing insight into the formation of the TiO2-II. Here we report the investigation of TiO2-II-bearing grains from these four spherule layers using multivariate curve resolution-alternating least squares (MCR-ALS) applied to Raman microspectroscopic mapping. Raman spectra provide evidence of grains consisting primarily of rutile (TiO2) and TiO2-II, as shown by Raman bands at 174 cm-1 (TiO2-II), 426 cm-1 (TiO2-II), 443 cm-1 (rutile), and 610 cm-1 (rutile). Principal component analysis (PCA) yielded a predominantly three-phase system comprised of rutile, TiO2-II, and substrate-adhesive epoxy. Scanning electron microscopy (SEM) suggests heterogeneous grains containing polydispersed micrometer- and submicrometer-sized particles. MCR-ALS applied to the Raman microspectroscopic mapping yielded up to five distinct chemical components: three phases of TiO2 (rutile, TiO2-II, and anatase), quartz (SiO2), and substrate-adhesive epoxy. Spectral profiles and spatially resolved chemical maps of the pure chemical components were generated using MCR-ALS applied to the Raman microspectroscopic maps.
The spatial resolution of the Raman microspectroscopic maps was enhanced in comparable, cost-effective analysis times by limiting spectral resolution and optimizing spectral acquisition parameters. Using the resolved spectra of TiO2-II generated from the MCR-ALS analysis, a Raman spectrum for pure TiO2-II was estimated to further facilitate its identification.
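MCR-ALS itself is a simple alternating scheme: solve least squares for concentrations given spectra, then for spectra given concentrations, enforcing non-negativity at each step. A toy pure-Python sketch on a synthetic two-component, noise-free "map" of four pixels and four spectral channels; real MCR-ALS implementations add normalization and further constraints, so everything below is illustrative:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(M):
    # Inverse of a 2x2 matrix (enough for a two-component model)
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def clip(A):
    # Non-negativity constraint applied after each least-squares step
    return [[max(x, 0.0) for x in row] for row in A]

# Synthetic data: D = C_true * S_true^T, 4 pixels x 4 channels
S_true = [[1.0, 0.2], [0.8, 0.5], [0.1, 1.0], [0.0, 0.7]]  # channels x comps
C_true = [[1.0, 0.0], [0.7, 0.3], [0.2, 0.8], [0.0, 1.0]]  # pixels x comps
D = matmul(C_true, transpose(S_true))

S = [[0.5, 0.5], [0.5, 0.4], [0.4, 0.5], [0.3, 0.3]]       # rough initial spectra
for _ in range(200):
    # C-step: least squares for concentrations, then non-negativity
    C = clip(matmul(matmul(D, S), inv2(matmul(transpose(S), S))))
    # S-step: least squares for spectra, then non-negativity
    S = clip(matmul(matmul(transpose(D), C), inv2(matmul(transpose(C), C))))

D_hat = matmul(C, transpose(S))
err = sum((a - b) ** 2 for ra, rb in zip(D, D_hat) for a, b in zip(ra, rb))
```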
Application of a GCM Ensemble Seasonal Climate Forecasts to Crop Yield Prediction in East Africa
NASA Astrophysics Data System (ADS)
Ogutu, G.; Franssen, W.; Supit, I.; Hutjes, R. W. A.
2016-12-01
We evaluated the potential use of ECMWF System-4 seasonal climate forecasts (S4) for impact analysis over East Africa. Using the 15-member, 7-month ensemble forecasts initiated every month for 1981-2010, we tested precipitation (tp), air temperature (tas) and surface shortwave radiation (rsds) forecast skill against the WATCH Forcing Data ERA-Interim (WFDEI) re-analysis and other data sets. We used these forecasts as input to the WOFOST crop model to predict maize yields. Forecast skill is assessed using the anomaly correlation coefficient (ACC), the Ranked Probability Skill Score (RPSS) and the Relative Operating Characteristic Skill Score (ROCSS) for the MAM, JJA and OND growing seasons. Predicted maize yields (S4-yields) are verified against historical observed FAO and nationally reported (NAT) yield statistics, and against yields from the same crop model forced by WFDEI (WFDEI-yields). Predictability of the climate forecasts varies with season, location and lead time. The OND tp forecasts show skill over a larger area up to three months lead time compared to MAM and JJA. Upper- and lower-tercile tp forecasts are 20-80% better than climatology. Good tas forecast skill is apparent with three months lead time. The rsds forecasts are less skilful than tp and tas in all seasons when verified against WFDEI, but more skilful against the other data sets. The S4 forecasts capture ENSO-related anomalous years with region-dependent skill. Anomalous ENSO influence is also seen in simulated yields. Focusing on the main sowing dates in the northern (July), equatorial (March-April) and southern (December) regions, WFDEI-yields are lower than FAO and NAT yields, but the anomalies are comparable. Yield anomalies are predictable 3 months before sowing in most of the regions. Differences in interannual variability in the range of ±40% may be related to the sensitivity of WOFOST to drought stress, while the ACCs are largely positive, ranging from 0.3 to 0.6. Above- and below-normal yields are predictable with 2 months lead time.
We demonstrated the potential of using seasonal climate forecasts with a crop simulation model to predict anomalous maize yields over East Africa. The findings open a window to better use of climate forecasts in food-security early warning systems, and in pre-season policy and farm management decisions.
Novel methods for parameter-based analysis of myocardial tissue in MR images
NASA Astrophysics Data System (ADS)
Hennemuth, A.; Behrens, S.; Kuehnel, C.; Oeltze, S.; Konrad, O.; Peitgen, H.-O.
2007-03-01
The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters, which can be used to classify the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination of parameters related to the 17-segment-model proposed by the American Heart Association (AHA). As this simplification comes along with a considerable loss of information, our purpose is to provide methods for a more accurate analysis regarding topological and functional tissue features. In order to achieve this, we implemented registration methods for the motion correction of the perfusion sequence and the matching of the late enhancement information onto the perfusion image and vice versa. For the motion corrected perfusion sequence, vector images containing the voxel enhancement curves' semi-quantitative parameters are derived. The resulting vector images are combined with the late enhancement information and form the basis for the tissue examination. For the exploration of data we propose different modes: the inspection of the enhancement curves and parameter distribution in areas automatically segmented using the late enhancement information, the inspection of regions segmented in parameter space by user defined threshold intervals and the topological comparison of regions segmented with different settings. Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.
Carballido-Gamio, Julio; Krug, Roland; Huber, Markus B; Hyun, Ben; Eckstein, Felix; Majumdar, Sharmila; Link, Thomas M
2009-02-01
In vivo assessment of trabecular bone microarchitecture could improve the prediction of fracture risk and the efficacy of osteoporosis treatment and prevention. Geodesic topological analysis (GTA) is introduced as a novel technique to quantify the trabecular bone microarchitecture from high-spatial resolution magnetic resonance (MR) images. Trabecular bone parameters that quantify the scale, topology, and anisotropy of the trabecular bone network in terms of its junctions are the result of GTA. The reproducibility of GTA was tested with in vivo images of human distal tibiae and radii (n = 6) at 1.5 Tesla; and its ability to discriminate between subjects with and without vertebral fracture was assessed with ex vivo images of human calcanei at 1.5 and 3.0 Tesla (n = 30). GTA parameters yielded an average reproducibility of 4.8%, and their individual areas under the curve (AUC) of the receiver operating characteristic curve analysis for fracture discrimination performed better at 3.0 than at 1.5 Tesla reaching values of up to 0.78 (p < 0.001). Logistic regression analysis demonstrated that fracture discrimination was improved by combining GTA parameters, and that GTA combined with bone mineral density (BMD) allow for better discrimination than BMD alone (AUC = 0.95; p < 0.001). Results indicate that GTA can substantially contribute in studies of osteoporosis involving imaging of the trabecular bone microarchitecture. Copyright 2009 Wiley-Liss, Inc.
NASA Technical Reports Server (NTRS)
Atchison, C S; Miller, James A
1942-01-01
Tensile and compressive stress-strain curves, stress-deviation curves, and secant modulus-stress curves are given for longitudinal and transverse specimens of 17S-T, 24S-T, and 24S-RT aluminum-alloy sheet in thicknesses from 0.032 to 0.081 inch, 1025 carbon steel sheet in thicknesses of 0.054 and 0.120 inch, and chromium-nickel steel sheet in thicknesses from 0.020 to 0.0275 inch. Significant differences were found between the tensile and the compressive stress-strain curves, and also between the corresponding corollary curves; similarly, differences were found between the curves for the longitudinal and transverse directions. These differences are of particular importance in considering the compressive strength of aircraft structures made of thin sheet. They are explored further for the case of compression by giving tangent modulus-stress curves in longitudinal and transverse compression and dimensionless curves of the ratio of tangent modulus to Young's modulus and of the ratio of reduced modulus for a rectangular section to Young's modulus, both plotted against the ratio of stress to secant yield strength.
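Tangent- and secant-modulus curves of this kind follow directly from a Ramberg-Osgood fit of the stress-strain data. A minimal sketch of that calculation, with the caveat that the constants below (E, the reference stress s_ref, and the exponent n) are illustrative aluminum-alloy values, not numbers from this report:

```python
import numpy as np

def ramberg_osgood_moduli(stress, e_mod=10.6e6, s_ref=39e3, n=10.0):
    """Secant and tangent moduli (psi) from a Ramberg-Osgood strain fit:
    strain = stress/E + 0.002*(stress/s_ref)**n (constants are illustrative)."""
    strain = stress / e_mod + 0.002 * (stress / s_ref) ** n
    e_secant = stress / strain
    # tangent modulus = reciprocal of the curve's slope d(strain)/d(stress)
    e_tangent = 1.0 / (1.0 / e_mod + 0.002 * n * stress ** (n - 1.0) / s_ref ** n)
    return e_secant, e_tangent

s = np.array([5e3, 39e3])                  # stress levels, psi
e_sec, e_tan = ramberg_osgood_moduli(s)    # both approach E at low stress
```

Dividing e_tan by Young's modulus gives the dimensionless tangent-modulus ratio of the kind plotted in the report.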
SINGER, A.; GILLESPIE, D.; NORBURY, J.; EISENBERG, R. S.
2009-01-01
Ion channels are proteins with a narrow hole down their middle that govern a wide range of biological functions by controlling the flow of spherical ions from one macroscopic region to another. Ion channels do not change their conformation on the biological time scale once they are open, so they can be described by a combination of Poisson and drift-diffusion (Nernst–Planck) equations called PNP in biophysics. We use singular perturbation techniques to analyse the steady-state PNP system for a channel with a general geometry and a piecewise constant permanent charge profile. We construct an outer solution for the case of a constant permanent charge density in three dimensions that is also a valid solution of the one-dimensional system. The asymptotic current–voltage (I–V) characteristic curve of the device (obtained by the singular perturbation analysis) is shown to be a very good approximation of the numerical I–V curve (obtained by solving the system numerically). The physical constraint of non-negative concentrations implies a unique solution, i.e., for each given applied potential there corresponds a unique electric current (relaxing this constraint yields non-physical multiple solutions for sufficiently large voltages). PMID:19809600
A Biometric Latent Curve Analysis of Memory Decline in Older Men of the NAS-NRC Twin Registry
McArdle, John J.; Plassman, Brenda L.
2010-01-01
Previous research has shown cognitive abilities to have different biometric patterns of age-changes. Here we examined the variation in episodic memory (Words Recalled) for over 6,000 twin pairs who were initially aged 59-75, and were subsequently re-assessed up to three more times over 12 years. In cross-sectional analyses, variation in Education was explained by strong additive genetic influences (~43%) together with shared family influences (~35%) that were independent of age. The longitudinal phenotypic analysis of the Word Recall task showed systematic linear declines over age, but with positive influences of Education and Retesting. The longitudinal biometric estimation yielded: (a) A separation of non-shared environmental influences and transient measurement error (~50%); (b) Strong additive genetic components of this latent curve (~70% at age 60) with increases over age that reach about 90% by age 90. (c) The minor influences of shared family environment (~17% at age 60) were effectively eliminated by age 75. (d) Non-shared environmental effects play an important role over most of the life-span (peak of 42% at age 70) but their relative role diminishes after age 75. PMID:19404731
I-V Curves from Photovoltaic Modules Deployed in Tucson
NASA Astrophysics Data System (ADS)
Kopp, Emily; Brooks, Adria; Lonij, Vincent; Cronin, Alex
2011-10-01
More than 30 megawatts of photovoltaic (PV) modules are connected to the electric power grid in Tucson, AZ. However, predictions of PV system electrical yields are uncertain, in part because PV modules degrade at various rates (observed typically in the range 0% to 3%/yr). We present I-V curves (PV output current as a function of PV output voltage) as a means to study PV module efficiency, de-ratings, and degradation. A student-made I-V curve tracer for 100-Watt modules will be described. We present I-V curves for several different PV technologies operated at an outdoor test yard, and we compare new modules to modules that have been operated in the field for 10 years.
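As an illustration of what an I-V curve encodes, a single-diode model (with series and shunt resistance neglected) can be swept to locate the maximum power point. The photocurrent, saturation current and ideality factor below are hypothetical values, not measurements from the Tucson test yard:

```python
import numpy as np

def iv_curve(v, i_ph=5.0, i_0=1e-9, n=1.3, t_cell=298.15):
    """Ideal single-diode PV cell: I = I_ph - I_0 * (exp(V/(n*V_t)) - 1)."""
    k_b, q = 1.380649e-23, 1.602176634e-19
    v_t = k_b * t_cell / q                 # thermal voltage, ~25.7 mV at 25 C
    return i_ph - i_0 * (np.exp(v / (n * v_t)) - 1.0)

v = np.linspace(0.0, 0.75, 500)            # volts, single cell
i = np.clip(iv_curve(v), 0.0, None)        # clip beyond open-circuit voltage
p = v * i
v_mpp = v[np.argmax(p)]                    # maximum-power-point voltage
```

Degradation shows up as a drift over the years of the short-circuit current (i at v = 0), the open-circuit voltage (where i reaches 0), or the maximum power p.max() between successive traces.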
Maleki, Ali; Movahed, Hamed; Ravaghi, Parisa
2017-01-20
In this work, design, preparation and performance of magnetic cellulose/Ag nanobiocomposite as a recyclable and highly efficient heterogeneous nanocatalyst is described. Fourier transform infrared (FT-IR) spectroscopy, X-ray diffraction (XRD) pattern, vibrating sample magnetometer (VSM) curve, field-emission scanning electron microscopy (FE-SEM) image, energy dispersive X-ray (EDX) analysis and thermogravimetric analysis/differential thermal analysis (TGA/DTA) were used for the characterization. Then, its activity was investigated in the synthesis of 2-amino-6-(2-oxo-2H-chromen-3-yl)-4-phenylnicotinonitrile derivatives. The main advantages of the reaction are high yields and short reaction times. The remarkable magnetic property of the nanobiocomposite catalyst provides easy separation from the reaction mixture by an external magnet without considerable loss of its catalytic activity. Copyright © 2016 Elsevier Ltd. All rights reserved.
In-pile electrochemical measurements on AISI 316 L(N) IG and EUROFER 97 I: experimental results
NASA Astrophysics Data System (ADS)
Vankeerberghen, Marc; Bosch, Rik-Wouter; Van Nieuwenhoven, Rudi
2003-02-01
In-pile electrochemical measurements were performed in order to investigate the effect of radiation on the electrochemical corrosion behaviour of two materials: reduced activation ferritic-martensitic steel EUROFER 97 and stainless steel AISI 316 L(N) IG. The corrosion potential was continuously monitored during the whole irradiation period. At regular intervals and under various flux levels, polarisation resistance measurements and electrochemical impedance spectroscopy were performed. Polarisation curves were recorded at the end of the reactor cycle. Analysis showed that the corrosion potential increased and the polarisation resistance decreased with the flux level. The impedance data showed two semi-circles in the Nyquist diagram which contracted with increasing flux level. A fit of the impedance data yielded a decrease of solution and polarisation resistances with the flux level. The polarisation curves could be fitted with a standard Butler-Volmer representation after correction for the solution resistance and showed an increase in the corrosion current density with the flux level.
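The Butler-Volmer fit mentioned at the end can be reproduced with an ordinary least-squares routine. A sketch under stated assumptions: the Tafel slopes and corrosion current density below are invented, and the solution-resistance correction applied in the study is omitted:

```python
import numpy as np
from scipy.optimize import curve_fit

def butler_volmer(eta, i_corr, b_a, b_c):
    """Current density (uA/cm^2) vs overpotential eta (V); Tafel slopes in V/decade."""
    return i_corr * (np.exp(2.303 * eta / b_a) - np.exp(-2.303 * eta / b_c))

eta = np.linspace(-0.15, 0.15, 61)              # overpotential sweep, V
i_meas = butler_volmer(eta, 2.0, 0.12, 0.10)    # synthetic polarisation curve
popt, _ = curve_fit(butler_volmer, eta, i_meas, p0=[1.0, 0.1, 0.1])
i_corr_fit = popt[0]                            # recovered corrosion current density
```

Tracking i_corr_fit against flux level is how the reported increase in corrosion current density with flux would be quantified.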
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Guang; Choi, Kyoo Sil; Hu, Xiaohua
2016-01-15
A new inverse method was developed to predict the stress-strain behaviors of constituent phases in a multi-phase steel using the load-depth curves measured in nanoindentation tests combined with microhardness measurements. A power law hardening response was assumed for each phase, and an empirical relationship between hardness and yield strength was assumed. Adjustment was made to eliminate the indentation size effect and indenter bluntness effect. With the newly developed inverse method and statistical analysis of the hardness histogram for each phase, the average stress-strain curves of individual phases in a quench and partitioning (Q&P) steel, including austenite, tempered martensite and untempered martensite, were calculated and the results were compared with the phase properties obtained by in-situ high energy X-ray diffraction (HEXRD) test. It is demonstrated that multi-scale instrumented indentation tests together with the new inverse method are capable of determining the individual phase flow properties in multi-phase alloys.
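A commonly assumed hardness-to-flow-stress link is Tabor's relation (H roughly 3 times the flow stress); whether this paper uses exactly that form is not stated, so the sketch below is a generic illustration with invented numbers rather than the authors' method:

```python
import numpy as np

def flow_curve(eps, sigma_y, n, e_mod=200e3):
    """Elastic line up to yield, then Hollomon-type power-law hardening (MPa)."""
    eps_y = sigma_y / e_mod                      # yield strain
    return np.where(eps <= eps_y, e_mod * eps,
                    sigma_y * (eps / eps_y) ** n)

hardness_hv = 350.0                              # hypothetical phase hardness, HV
sigma_flow = hardness_hv * 9.81 / 3.0            # Tabor: H ~ 3 x flow stress (HV -> MPa)
eps = np.linspace(0.0, 0.05, 100)
sigma = flow_curve(eps, sigma_flow, 0.1)         # per-phase stress-strain estimate
```

Repeating this for each peak of the hardness histogram gives one average flow curve per phase, the quantity compared against HEXRD in the study.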
[Different sedimentation rates of X- and Y- sperm and the question of arbitrary sex determination].
Bhattacharya, B C
1962-01-01
Separation of X and Y sperm using their sedimentation rates is reported. A colloidal medium (egg yolk and glycocoll solution) of a particular viscosity and density was developed for this purpose. Rabbit and bull sperm yielded a 2-peaked curve when separated into sedimentation fractions, while rooster sperm gave a single-peak curve. Sedimentation rates of live and dead sperm were similar. 176 female rabbits were fertilized with various sedimentation fractions. 23.3% became pregnant, resulting in 122 young. The sex of the young was clearly related to the sedimentation rate of the sperm: the 2 uppermost fractions yielded 77.4% (P < 0.01) male offspring, while the 2 lowest fractions gave 28.2% males (P < 0.01) and the middle yielded 54.7% male young.
Wei, Likun; Huang, Xuxiong
2017-01-01
Microalga Nannochloropsis oculata is a promising alternative feedstock for biodiesel. Elevating its oil-yielding capacity is conducive to cost-saving biodiesel production. However, the regulatory processes of multi-factor collaborative stresses (MFCS) on the oil-yielding performance of N. oculata are unclear. The duration effects of MFCS (high irradiation, nitrogen deficiency and elevated iron supplementation) on N. oculata were investigated in an 18-d batch culture. Despite the reduction in cell division, the biomass concentration increased, resulting from the large accumulation of the carbon/energy-reservoir. However, different storage forms were found in different cellular storage compounds, and both the protein content and pigment composition swiftly and drastically changed. The analysis of four biodiesel properties using pertinent empirical equations indicated their progressive effective improvement in lipid classes and fatty acid composition. The variation curve of neutral lipid productivity was monitored with fluorescent Nile red and was closely correlated to the results from conventional methods. In addition, a series of changes in the organelles (e.g., chloroplast, lipid body and vacuole) and cell shape, dependent on the stress duration, were observed by TEM and LSCM. These changes presumably played an important role in the acclimation of N. oculata to MFCS and accordingly improved its oil-yielding performance. PMID:28346505
Composing chaotic music from the letter m
NASA Astrophysics Data System (ADS)
Sotiropoulos, Anastasios D.
Chaotic music is composed from a proposed iterative map depicting the letter m, relating the pitch, duration and loudness of successive steps. Each of the two curves of the letter m is based on the classical logistic map. Thus, the generating map is xn+1 = r xn(1/2 - xn) for xn between 0 and 1/2 defining the first curve, and xn+1 = r (xn - 1/2)(1 - xn) for xn between 1/2 and 1 representing the second curve. The parameter r which determines the height(s) of the letter m varies from 2 to 16, the latter value ensuring fully developed chaotic solutions for the whole letter m; r = 8 yielding full chaotic solutions only for its first curve. The m-model yields fixed points, bifurcation points and chaotic regions for each separate curve, as well as values of the parameter r greater than 8 which produce inter-fixed points, inter-bifurcation points and inter-chaotic regions from the interplay of the two curves. Based on this, music is composed from mapping the m-recurrence model solutions onto actual notes. The resulting musical score strongly depends on the sequence of notes chosen by the composer to define the musical range corresponding to the range of the chaotic mathematical solutions x from 0 to 1. Here, two musical ranges are used; one is the middle chromatic scale and the other is the seven-octave range. At the composer's will and, for aesthetics, within the same composition, notes can be the outcome of different values of r and/or shifted in any octave. Compositions with endings of non-repeating note patterns result from values of r in the m-model that do not produce bifurcations. Scores of chaotic music composed from the m-model and the classical logistic model are presented.
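The two-branch map is simple to implement. The sketch below iterates it and quantises each value onto a one-octave chromatic scale; the note mapping and the starting value are arbitrary illustrative choices, not the composer's:

```python
def m_map(x, r):
    """Two logistic-like branches forming the letter 'm' on [0, 1]."""
    return r * x * (0.5 - x) if x < 0.5 else r * (x - 0.5) * (1.0 - x)

def compose(x0=0.1, r=16.0, steps=32):
    """Iterate the m-map and map each value onto a chromatic scale."""
    scale = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
    notes, x = [], x0
    for _ in range(steps):
        x = m_map(x, r)                      # r = 16 keeps iterates inside [0, 1]
        notes.append(scale[min(int(x * 12), 11)])
    return notes

melody = compose()
```

With r = 16 each branch peaks at exactly 1 (at x = 1/4 and x = 3/4), which is why that value gives fully developed chaos over the whole letter m.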
NASA Astrophysics Data System (ADS)
Toyota, T.; Kimura, N.
2017-12-01
Sea ice rheology, which relates sea ice stress to the large-scale deformation of the ice cover, has been a longstanding issue in numerical sea ice modelling. At present the treatment of internal stress within sea ice area is based mostly on the rheology formulated by Hibler (1979), where the whole sea ice area behaves like an isotropic and plastic matter under the ordinary stress with the yield curve given by an ellipse with an aspect ratio (e) of 2, irrespective of sea ice area and horizontal resolution of the model. However, this formulation was initially developed to reproduce the seasonal variation of the perennial ice in the Arctic Ocean. As for its applicability to the seasonal ice zones (SIZ), where various types of sea ice are present, it still needs validation from observational data. In this study, the validity of this rheology was examined for the Sea of Okhotsk ice, typical of the SIZ, based on the AMSR-derived ice drift pattern in comparison with the result obtained for the Beaufort Sea. To examine the dependence on horizontal scale, coastal radar data recorded near the Hokkaido coast, Japan, were also used. The ice drift pattern was obtained by a maximum cross-correlation method with grid spacings of 37.5 km from the 89 GHz brightness temperature of AMSR-E for the entire Sea of Okhotsk and the Beaufort Sea, and 1.3 km from the coastal radar for the near-shore Sea of Okhotsk. The validity of this rheology was investigated from the standpoint of the work rate done by the deformation field, following the theory of Rothrock (1975). In the analysis, the relative rates of convergence were compared between theory and observation to check the shape of the yield curve, and the strain ellipse at each grid cell was estimated to see the horizontal variation of the deformation field. The result shows that an ellipse of e = 1.7-2.0 as the yield curve represents the observed relative convergence rates well for all the ice areas.
Since this result corresponds with the yield criteria of Tresca and von Mises for a 2D plastic matter, it suggests the validity and applicability of this rheology to the SIZ to some extent. However, it was also noted that the variation of the deformation field in the Sea of Okhotsk is much larger than in the Beaufort Sea, which indicates the need for careful treatment of grid size in the model.
Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.
Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G
2012-05-01
This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows the analysis of complex stimulated luminescence curves into their components and the evaluation of the associated luminescence parameters.
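The same Solver-style fit can be scripted directly. The sketch below deconvolves a synthetic two-peak glow curve, using Gaussian components as a crude stand-in for the first-order kinetics peak shapes actually used in TL/OSL analysis; all numbers are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(t, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian glow-peak surrogates."""
    g = lambda a, m, s: a * np.exp(-0.5 * ((t - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2)

temp = np.linspace(300.0, 550.0, 300)            # temperature axis, K
glow = two_peaks(temp, 1.0, 380.0, 18.0, 0.6, 460.0, 25.0)
p0 = [0.8, 370.0, 20.0, 0.5, 470.0, 20.0]        # rough initial guesses, Solver-style
popt, _ = curve_fit(two_peaks, temp, glow, p0=p0)
```

Each fitted triple (amplitude, peak position, width) is one resolved component, which is exactly what the spreadsheet Solver iterates toward.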
Giorgio Vacchiano; Renzo Motta; James N. Long; John D. Shaw
2008-01-01
Density management diagrams (DMD) are graphical tools used in the design of silvicultural regimes in even-aged forests. They depict the relationship between stand density, average tree size, stand yield and dominant height, based upon relevant ecological and allometric relationships such as the self-thinning rule, the yield-density effect, and site index curves. DMD...
Dose Calibration of the ISS-RAD Fast Neutron Detector
NASA Technical Reports Server (NTRS)
Zeitlin, C.
2015-01-01
The ISS-RAD instrument has been fabricated by Southwest Research Institute and delivered to NASA for flight to the ISS in late 2015 or early 2016. ISS-RAD is essentially two instruments that share a common interface to ISS. The two instruments are the Charged Particle Detector (CPD), which is very similar to the MSL-RAD detector on Mars, and the Fast Neutron Detector (FND), which is a boron-loaded plastic scintillator with readout optimized for the 0.5 to 10 MeV energy range. As the FND is completely new, it has been necessary to develop methodology to allow it to be used to measure the neutron dose and dose equivalent. This talk will focus on the methods developed and their implementation using calibration data obtained in quasi-monoenergetic (QMN) neutron fields at the PTB facility in Braunschweig, Germany. The QMN data allow us to determine an approximate response function, from which we estimate dose and dose equivalent contributions per detected neutron as a function of the pulse height. We refer to these as the "pSv per count" curves for dose equivalent and the "pGy per count" curves for dose. The FND is required to provide a dose equivalent measurement with an accuracy of ±10% of the known value in a calibrated AmBe field. Four variants of the analysis method were developed, corresponding to two different approximations of the pSv per count curve, and two different implementations, one for real-time analysis onboard ISS and one for ground analysis. We will show that the preferred method, when applied in either real-time or ground analysis, yields good accuracy for the AmBe field. We find that the real-time algorithm is more susceptible to chance-coincidence background than is the algorithm used in ground analysis, so that the best estimates will come from the latter.
Kim, DaeHee; Rhodes, Jeffrey A; Hashim, Jeffrey A; Rickabaugh, Lawrence; Brams, David M; Pinkus, Edward; Dou, Yamin
2018-06-07
A highly specific preoperative localizing test is required to select patients for minimally invasive parathyroidectomy (MIP) in lieu of traditional four-gland exploration. We hypothesized that Tc-99m sestamibi scan interpretation incorporating numerical measurements of the degree of asymmetrical activity from the bilateral thyroid beds can be useful in localizing a single adenoma for MIP. We devised a quantitative interpretation method for the Tc-99m sestamibi scan based on numerically graded asymmetrical activity in the early phase. The numerical ratio value of each scan was obtained by dividing the number of counts from symmetrically drawn regions of interest (ROI) over the bilateral thyroid beds. The final pathology and clinical outcome of 109 patients were used to perform receiver operating characteristic (ROC) curve analysis. ROC analysis yielded an area under the curve (AUC) of 0.71 (P = 0.0032), validating this method as a diagnostic tool. The optimal cut-off point for the ratio value with maximal combined sensitivity and specificity was found, with a corresponding sensitivity of 67.9% (56.5-77.2%, 95% CI) and specificity of 75.0% (52.8-91.8%, 95% CI). An additional higher cut-off with higher specificity and minimal possible sacrifice of sensitivity was also selected, yielding a sensitivity of 28.6% (18.8-38.6%, 95% CI) and specificity of 90.0% (69.6-98.8%, 95% CI). Our results demonstrated that the more asymmetrical the activity on the initial phase, the more likely a single parathyroid adenoma is to be localized on sestamibi scans. Using the early-phase Tc-99m sestamibi scan only, we were able to select patients for minimally invasive parathyroidectomy with 90% specificity. © 2018 The Royal Australian and New Zealand College of Radiologists.
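The cut-off selection described (maximal combined sensitivity and specificity) is the Youden-index criterion. A sketch with simulated ratio values; the score distributions and sample sizes below are invented, not the study's data:

```python
import numpy as np

def roc_points(scores, labels):
    """Sensitivity and specificity at each candidate threshold (score >= t is positive)."""
    thresholds = np.unique(scores)
    sens = np.array([(scores[labels == 1] >= t).mean() for t in thresholds])
    spec = np.array([(scores[labels == 0] < t).mean() for t in thresholds])
    return thresholds, sens, spec

# hypothetical early-phase ratio values: adenoma cases skew higher
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.3, 0.2, 80), rng.normal(1.0, 0.2, 30)])
labels = np.concatenate([np.ones(80, int), np.zeros(30, int)])
thr, sens, spec = roc_points(scores, labels)
best_cutoff = thr[np.argmax(sens + spec - 1.0)]   # Youden's J maximiser
```

Raising the cut-off beyond best_cutoff trades sensitivity for specificity, which is how the study's second, 90%-specificity threshold would be chosen.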
Using Machine Learning To Predict Which Light Curves Will Yield Stellar Rotation Periods
NASA Astrophysics Data System (ADS)
Agüeros, Marcel; Teachey, Alexander
2018-01-01
Using time-domain photometry to reliably measure a solar-type star's rotation period requires that its light curve have a number of favorable characteristics. The probability of recovering a period will be a non-linear function of these light curve features, which are either astrophysical in nature or set by the observations. We employ standard machine learning algorithms (artificial neural networks and random forests) to predict whether a given light curve will produce a robust rotation period measurement from its Lomb-Scargle periodogram. The algorithms are trained and validated using salient statistics extracted from both simulated light curves and their corresponding periodograms, and we apply these classifiers to the most recent Intermediate Palomar Transient Factory (iPTF) data release. With this pipeline, we anticipate measuring rotation periods for a significant fraction of the ∼4×10^8 stars in the iPTF footprint.
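The period-recovery step the classifiers gate on can be sketched with SciPy's Lomb-Scargle periodogram. The injected period, noise level and the "peak ratio" feature below are illustrative stand-ins for the paper's salient statistics:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 60.0, 300))        # irregular observation epochs, days
p_rot = 7.5                                     # injected rotation period, days
y = np.sin(2 * np.pi * t / p_rot) + 0.3 * rng.normal(size=300)

periods = np.linspace(1.0, 30.0, 2000)
power = lombscargle(t, y - y.mean(), 2 * np.pi / periods)   # angular frequencies
p_best = periods[np.argmax(power)]              # recovered rotation period
peak_ratio = power.max() / np.median(power)     # one candidate 'salient statistic'
```

Features like peak_ratio, together with sampling statistics of t, are the kind of inputs a random forest could use to predict whether p_best is trustworthy.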
Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zhong-Li, E-mail: zl.liu@163.com; Zhang, Xiu-Lu; Cai, Ling-Cang
A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.
PAPER-CHROMATOGRAM MEASUREMENT OF SUBSTANCES LABELLED WITH H-3 (in German)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, M.
1961-03-01
Compounds labelled with H-3 can be detected on a paper chromatogram using a methane flow counter with a count yield of 1%. The yield can be estimated from the beta maximum energy. A new double counter was developed which increases the count yield to 2% and also considerably decreases the margin of error. Calibration curves with leucine and glucosamine show satisfactory linearity between measured and applied activity in the range from 4 to 50 × 10^-3 µc of H-3. (auth)
Toward an Economic Definition of Sustainable Yield for Coastal Aquifers
NASA Astrophysics Data System (ADS)
Jenson, J. W.; Habana, N. C.; Lander, M.
2016-12-01
The concept of aquifer sustainable yield has long been criticized, debated, and even disparaged among groundwater hydrologists, but policy-makers and professional water resource managers inevitably ask them for unequivocal answers to such questions as "What is the absolute maximum volume of water that could be sustainably withdrawn from this aquifer?" We submit that it is therefore incumbent upon hydrologists to develop and offer valid practical definitions of sustainable yield that can be usefully applied to given conditions and types of aquifers. In coastal aquifers, water quality—in terms of salinity—is affected by changes in the natural water budget and the volume rate of artificial extraction. In principle, one can identify a family of assay curves for a given aquifer, showing the specific relationships between the quantity and quality of the water extracted under given conditions of recharge. The concept of the assay curve, borrowed from the literature of natural-resource extraction economics, has to our knowledge not yet found its way into the literature of applied hydrology. The relationships between recharge, extraction, and water quality that define the assay curve can be determined empirically from sufficient observations of groundwater response to recharge and extraction and can be estimated from models that have been reliably history-matched ("calibrated") to such data. We thus propose a working definition of sustainable yield for coastal aquifers in terms of the capacity that ultimately could be achieved by an ideal production system, given what is known or can be assumed about the natural limiting conditions. Accordingly, we also offer an approach for defining an ideal production system for a given aquifer, and demonstrate how observational data and/or modeling results can be used to develop assay curves of quality vs. 
quantity extracted, which can serve as reliable predictive tools for engineers, managers, regulators, and policy-makers interested in sustainable management of groundwater from coastal aquifers. Such tools can provide scientifically valid baselines against which to make informed economic evaluations of future options for holistic sustainable management of coastal aquifers.
NASA Astrophysics Data System (ADS)
Wagenbrenner, J.; Safeeq, M.; Hunsaker, C. T.
2017-12-01
Sediment yields are highly variable and controlled by multiple topographic, geomorphic, and hydrologic factors that make their generalization or prediction challenging. We examined the characteristics of sediment concentration across ten headwater catchments located in the Kings River Experimental Watersheds, Sierra Nevada, California. Study catchments ranged from 50 to 475 ha and spanned from 1,782 to 2,373 m in elevation in the rain-snow transition zone. Mean annual streamflow ranged from 281 to 408 mm in the low elevation Providence catchments and 436 to 656 mm in the high elevation Bull catchments. We measured suspended sediment concentration (SSC) and bedload sediment yield from 2004-2016. We related these outputs to catchment mean elevation, relief, slope, and drainage density as natural controls, and to runoff ratio, baseflow index, recession constant, and slope of the flow duration curve as hydrologic controls. SSC was higher in the high elevation Bull catchments (64 ± 34 mg L⁻¹) than in the low elevation Providence catchments (30 ± 17 mg L⁻¹). Measured SSC in both Bull and Providence declined with increasing catchment mean elevation (R > -0.5). We found the slope of the flow duration curve (R = 0.85) and the recession constant (R = -0.91) to be the two best predictors of SSC in Providence. In Bull, drainage area (R = 0.87) and baseflow index (R = -0.78) were the two best predictors of SSC. The intercept and slope of the suspended sediment yield - discharge rating curve (SSY-Q) in Providence were positively related to catchment relief. In contrast, the SSY-Q intercept increased and the SSY-Q slope declined with increasing relief in Bull. The mean annual bedload sediment yield varied between 0.4 Mg km⁻² and 4.2 Mg km⁻² across the ten watersheds, and bedload contributed a relatively small fraction of the total sediment load. Mean bedload sediment yields across the catchments were most associated with catchment slope and relief.
These preliminary results provide insight on the dynamics of sediment yield and the natural range of variability in small headwater Sierra Nevada streams. These results can guide selection of appropriate predictor variables for catchment-scale sediment yield models that inform forest management.
The role of off-line mass spectrometry in nuclear fission.
De Laeter, J R
1996-01-01
The role of mass spectrometry in nuclear fission has been invaluable since 1940, when A. O. C. Nier separated microgram quantities of (235)U from (238)U, using a gas source mass spectrometer. This experiment enabled the fissionable nature of (235)U to be established. During the Manhattan Project, the mass spectrometer was used to measure the isotope abundances of uranium after processing in various separation systems, in monitoring the composition of the gaseous products in the Oak Ridge Diffusion Plant, and as a helium leak detector. Following the construction of the first reactor at the University of Chicago, it was necessary to unravel the nuclear systematics of the various fission products produced in the fission process. Off-line mass spectrometry was able to identify stable and long-lived isotopes produced in fission, but more importantly, was used in numerous studies of the distribution of mass of the cumulative fission yields. Improvements in sensitivity enabled off-line mass spectrometric studies to identify fine structure in the mass-yield curve and, hence, demonstrate the importance of shell structure in nuclear fission. Solid-source mass spectrometry was also able to measure the cumulative fission yields in the valley of symmetry in the mass-yield curve, and enabled spontaneous fission yields to be quantified. Apart from the accurate measurement of abundances, the stable isotope mass spectrometric technique has been invaluable in establishing absolute cumulative fission yields for many isotopes making up the mass-yield distribution curve for a variety of fissile nuclides. Extensive mass spectrometric studies of noble gases in primitive meteorites revealed the presence of fission products from the now extinct nuclide (244)Pu, and have eliminated the possibility of fission products from a super-heavy nuclide contributing to isotopic anomalies in meteoritic material. 
Numerous mass spectrometric studies of the isotopic and elemental abundances of samples from the Oklo Natural Reactor have enabled the nuclear parameters of the various reactor zones to be calculated, and the mobility/retentivity of a number of elements to be established in the reactor zones and the surrounding rocks. These isotopic studies have given valuable information on the geochemical behavior of natural geological repositories for radioactive waste containment. © 1997 John Wiley & Sons, Inc.
THE INACTIVATION OF DILUTE SOLUTIONS OF CRYSTALLINE TRYPSIN BY X-RADIATION
McDonald, Margaret R.
1955-01-01
The proteolytic activity of dilute solutions of crystalline trypsin is destroyed by x-rays, the amount of inactivation being an exponential function of the radiation dose. The reaction yield increases steadily with increasing concentration of trypsin, varying, as the concentration of enzyme is increased from 1 to 300 µM, from 0.068 to 0.958 micromole of trypsin per liter inactivated per 1000 r with 0.005 N hydrochloric acid as the solvent, from 0.273 to 0.866 with 0.005 N sulfuric acid as the solvent, and from 0.343 to 0.844 with 0.005 N nitric acid as the solvent. When the reaction yields are plotted as a function of the initial concentration of trypsin, they fall on a curve given by the expression Y ∝ X^K, in which Y is the reaction yield, X is the concentration of trypsin, and K is a constant equal to 0.46, 0.20, and 0.16, respectively, with 0.005 N hydrochloric, sulfuric, and nitric acids as solvents. The differences between the reaction yields found with chloride and sulfate ions in 1 to 10 µM trypsin solutions are significant only in the pH range from 2 to 4. The amount of inactivation obtained with a given dose of x-rays depends on the pH of the solution being irradiated and the nature of the solvent. The reaction yield-pH curve is a symmetrical one, with minimum yields at about pH 7. Buffers such as acetate, citrate, borate and barbiturate, and other organic molecules such as ethanol and glucose, in concentrations as low as 20 µM, inhibit the inactivation of trypsin by x-radiation. Sigmoid inactivation-dose curves instead of exponential ones are obtained in the presence of ethanol. The reaction yields for the inactivation of trypsin solutions by x-rays are approximately 1.5 times greater when the irradiation is done at 26°C than when it is done at 5°C, when 0.005 N hydrochloric acid is the solvent. The dependence on temperature is less when 0.005 N sulfuric acid is used, and is negligible with 0.005 N nitric acid. 
The difficulties involved in interpreting radiation effects in aqueous systems, and in comparing the results obtained under different experimental conditions, are discussed. PMID:14367774
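The power-law relation Y ∝ X^K reported above can be recovered from yield measurements by a least-squares fit in log-log space. The sketch below is a generic illustration, not the paper's procedure; the synthetic data reuse the abstract's hydrochloric-acid exponent K = 0.46, while the prefactor 0.07 is purely hypothetical.

```python
import math

def fit_power_law(conc, yields):
    """Least-squares fit of log Y = log c + K*log X, returning (c, K)
    for the power law Y = c * X**K."""
    lx = [math.log(x) for x in conc]
    ly = [math.log(y) for y in yields]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    K = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    c = math.exp(my - K * mx)
    return c, K

# Synthetic data built from Y = 0.07 * X**0.46; the exponent matches the
# abstract's HCl value, the prefactor 0.07 is purely illustrative.
conc = [1.0, 3.0, 10.0, 30.0, 100.0, 300.0]
obs = [0.07 * x ** 0.46 for x in conc]
c, K = fit_power_law(conc, obs)
```

On noiseless power-law data the regression recovers both parameters exactly, up to floating-point rounding.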
Optical and electrical characterization of a back-thinned CMOS active pixel sensor
NASA Astrophysics Data System (ADS)
Blue, Andrew; Clark, A.; Houston, S.; Laing, A.; Maneuski, D.; Prydderch, M.; Turchetta, R.; O'Shea, V.
2009-06-01
This work reports the first characterization of a back-thinned Vanilla sensor, a 512×512 (25 μm square pixel) active pixel sensor (APS). Characterization of the detectors was carried out through the analysis of photon transfer curves to yield measurements of full well capacity, noise levels, gain constants, and linearity. Spectral characterization of the sensors was also performed in the visible and UV regions. A full comparison against non-back-thinned, front-illuminated Vanilla sensors is included. These measurements suggest that the Vanilla APS will be suitable for a wide range of applications, including particle physics and biomedical imaging.
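A photon transfer curve yields the gain constant from the slope of temporal variance versus mean signal in the shot-noise-limited regime. The sketch below illustrates that generic step, not the authors' pipeline; the flat-field statistics are hypothetical.

```python
def ptc_gain(means, variances):
    """Fit the slope of the photon transfer curve (temporal variance vs.
    mean signal) in the shot-noise regime; 1/slope is the conversion
    gain in electrons per DN."""
    n = len(means)
    mx = sum(means) / n
    my = sum(variances) / n
    slope = (sum((m - mx) * (v - my) for m, v in zip(means, variances))
             / sum((m - mx) ** 2 for m in means))
    return 1.0 / slope

# Hypothetical flat-field statistics: variance = 0.25 * mean (4 e-/DN).
means = [100.0, 500.0, 1000.0, 2000.0, 4000.0]
variances = [0.25 * m for m in means]
gain_e_per_dn = ptc_gain(means, variances)   # -> 4.0
```

Full well capacity is then read off the same curve as the signal level where the variance departs from this linear trend.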
Performance characterization of a cross-flow hydrokinetic turbine in sheared inflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forbush, Dominic; Polagye, Brian; Thomson, Jim
2016-12-01
A method for constructing a non-dimensional performance curve for a cross-flow hydrokinetic turbine in sheared flow is developed for a natural river site. The river flow characteristics are quasi-steady, with negligible vertical shear, persistent lateral shear, and synoptic changes dominated by long time scales (days to weeks). Performance curves developed from inflow velocities measured at individual points (randomly sampled) yield inconclusive turbine performance characteristics because of the spatial variation in mean flow. Performance curves using temporally- and spatially-averaged inflow velocities are more conclusive. The implications of sheared inflow are considered in terms of resource assessment and turbine control.
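Constructing a non-dimensional performance curve conventionally means plotting the power coefficient against tip-speed ratio, with the inflow speed taken as the spatially and temporally averaged value the abstract recommends. A minimal sketch with hypothetical numbers (rotor size, power, and speeds are invented):

```python
def power_coefficient(power_w, rho, area_m2, u_mean):
    """Cp = P / (0.5 * rho * A * U^3), with U the spatially and
    temporally averaged inflow speed."""
    return power_w / (0.5 * rho * area_m2 * u_mean ** 3)

def tip_speed_ratio(omega_rad_s, radius_m, u_mean):
    """TSR = omega * R / U, the non-dimensional rotor speed."""
    return omega_rad_s * radius_m / u_mean

# Hypothetical numbers for a small cross-flow rotor in fresh water.
cp = power_coefficient(1200.0, 1000.0, 1.0, 2.0)   # -> 0.3
tsr = tip_speed_ratio(8.0, 0.5, 2.0)               # -> 2.0
```

In sheared flow the choice of U is exactly the issue the abstract raises: point measurements scatter the (TSR, Cp) pairs, while the averaged U collapses them onto a single curve.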
NASA Astrophysics Data System (ADS)
Pini, Ronny; Benson, Sally M.
2017-10-01
We report results from an experimental investigation on the hysteretic behaviour of the capillary pressure curve for the supercritical CO2-water system in a Berea Sandstone core. Previous observations have highlighted the importance of subcore-scale capillary heterogeneity in developing local saturations during drainage; we show in this study that the same is true for the imbibition process. Spatially distributed drainage and imbibition scanning curves were obtained for mm-scale subsets of the rock sample non-invasively using X-ray CT imagery. Core- and subcore-scale measurements are well described using the Brooks-Corey formalism, which uses a linear trapping model to compute mobile saturations during imbibition. Capillary scaling yields two separate universal drainage and imbibition curves that are representative of the full subcore-scale data set. This enables accurate parameterisation of rock properties at the subcore-scale in terms of capillary scaling factors and permeability, which in turn serve as effective indicators of heterogeneity at the same scale even when hysteresis is a factor. As such, the proposed core-analysis workflow is quite general and provides the required information to populate numerical models that can be used to extend core-flooding experiments to conditions prevalent in the subsurface, which would be otherwise not attainable in the laboratory.
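The Brooks-Corey formalism mentioned above parameterises the drainage capillary pressure curve, and a linear trapping model relates the trapped non-wetting saturation to the saturation at flow reversal. A generic sketch; the entry pressure, pore-size index, and trapping coefficient below are hypothetical, not the paper's fitted values:

```python
def brooks_corey_pc(sw, swr, pe, lam):
    """Brooks-Corey drainage curve: Pc = Pe * Se**(-1/lambda),
    with effective saturation Se = (Sw - Swr) / (1 - Swr)."""
    se = (sw - swr) / (1.0 - swr)
    return pe * se ** (-1.0 / lam)

def trapped_saturation(s_nw_initial, c_linear):
    """Linear trapping model: trapped non-wetting saturation after
    imbibition is proportional to the saturation at flow reversal."""
    return c_linear * s_nw_initial

# Hypothetical parameters (not the paper's values).
pc = brooks_corey_pc(sw=0.6, swr=0.2, pe=5.0, lam=2.0)   # capillary pressure, kPa
s_trap = trapped_saturation(0.4, 0.6)                     # trapped CO2 saturation
```

Capillary scaling of the kind described then amounts to rescaling Pe voxel by voxel so that all subcore-scale curves collapse onto one universal curve.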
Cold denaturation as a tool to measure protein stability
Sanfelice, Domenico; Temussi, Piero Andrea
2016-01-01
Protein stability is an important issue for the interpretation of a wide variety of biological problems but its assessment is at times difficult. The most common parameter employed to describe protein stability is the temperature of melting, at which the populations of folded and unfolded species are identical. This parameter may yield ambiguous results. It would always be preferable to measure the whole stability curve. The calculation of this curve is greatly facilitated whenever it is possible to observe cold denaturation. Using Yfh1, one of the few proteins whose cold denaturation occurs at neutral pH and low ionic strength, we could measure the variation of its full stability curve under several environmental conditions. Here we show the advantages of gauging stability as a function of external variables using stability curves. PMID:26026885
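The full stability curve referred to above is usually the Gibbs-Helmholtz expression for the unfolding free energy, which crosses zero at the melting temperature and, when ΔCp is large enough, again at a cold denaturation temperature. A sketch with hypothetical parameters in a plausible range for a small protein:

```python
import math

def delta_g(T, Tm, dHm, dCp):
    """Gibbs-Helmholtz protein stability curve:
    dG(T) = dHm*(1 - T/Tm) + dCp*(T - Tm - T*ln(T/Tm))."""
    return dHm * (1.0 - T / Tm) + dCp * (T - Tm - T * math.log(T / Tm))

# Hypothetical parameters: K, kJ/mol, kJ/(mol*K).
Tm, dHm, dCp = 310.0, 300.0, 8.0
g_at_tm = delta_g(Tm, Tm, dHm, dCp)    # zero by construction
g_cool = delta_g(280.0, Tm, dHm, dCp)  # folded state still stable
g_cold = delta_g(230.0, Tm, dHm, dCp)  # cold-denatured regime
```

Observing cold denaturation pins down the low-temperature zero crossing, which is what makes the whole curvature (and hence ΔCp) well determined.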
Using Landsat to provide potato production estimates to Columbia Basin farmers and processors
NASA Technical Reports Server (NTRS)
1990-01-01
A summary of project activities relative to the estimation of potato yields in the Columbia Basin is given. Oregon State University is using a two-pronged approach to yield estimation, one using simulation models and the other using purely empirical models. The simulation modeling approach has used satellite observations to determine key dates in the development of the crop for each field identified as potatoes. In particular, these include planting dates, emergence dates, and harvest dates. These critical dates are fed into simulation models of crop growth and development to derive yield forecasts. Two empirical modeling approaches are illustrated. One relates tuber yield to estimates of cumulative intercepted solar radiation; the other relates tuber yield to the integral under the GVI curve.
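The "integral under the GVI curve" in the empirical model is the area under the season's greenness-index trajectory, commonly computed with the trapezoidal rule. A sketch with made-up GVI samples (the yield regression coefficient itself is not reproduced here):

```python
def area_under_curve(days, gvi):
    """Trapezoidal integral under the GVI trajectory; the empirical
    model relates this area linearly to tuber yield."""
    total = 0.0
    for i in range(1, len(days)):
        total += 0.5 * (gvi[i] + gvi[i - 1]) * (days[i] - days[i - 1])
    return total

# Made-up GVI samples over a growing season (days after planting).
days = [0, 30, 60, 90, 120]
gvi = [0.1, 0.4, 0.7, 0.6, 0.2]
area = area_under_curve(days, gvi)    # -> 55.5
```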
Warm Forming of Aluminum Alloys using a Coupled Thermo-Mechanical Anisotropic Material Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abedrabbo, Nader; Pourboghrat, Farhang; Carsley, John E.
Temperature-dependent anisotropic material models for two types of automotive aluminum alloys (5754-O and 5182-O) were developed and implemented in LS-Dyna as a user material subroutine (UMAT) for coupled thermo-mechanical finite element analysis (FEA) of warm forming of aluminum alloys. The anisotropy coefficients of the Barlat YLD2000 plane stress yield function for both materials were calculated for the range of temperatures 25 deg. C-260 deg. C. Curve fitting was used to calculate the anisotropy coefficients of YLD2000 and the flow stress as a function of temperature. This temperature-dependent material model was successfully applied to the coupled thermo-mechanical analysis of stretching of aluminum sheets, and results were compared with experiments.
W-curve alignments for HIV-1 genomic comparisons.
Cork, Douglas J; Lembark, Steven; Tovanabutra, Sodsai; Robb, Merlin L; Kim, Jerome H
2010-06-01
The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap-associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string-based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A tree-building method is developed with the W-curve that utilizes a novel cylindrical coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string-based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment.
A description of the W-curve generation process, including a comparison technique of aligning extremes of the curves to effectively phase-shift them past the HIV-1 gap problem, is presented. Besides yielding similar neighbor-joining phenogram topologies, most Mother and Infant C2-V5 sequences in the cohort pairs geometrically map closest to each other, indicating that W-curve heuristics overcame any gap problem.
The Rise and Fall of Type Ia Supernova Light Curves in the SDSS-II Supernova Survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayden, Brian T. (Notre Dame U.); Garnavich, Peter M.
2010-01-01
We analyze the rise and fall times of Type Ia supernova (SN Ia) light curves discovered by the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey. From a set of 391 light curves k-corrected to the rest-frame B and V bands, we find a smaller dispersion in the rising portion of the light curve compared to the decline. This is in qualitative agreement with computer models which predict that variations in radioactive nickel yield have less impact on the rise than on the spread of the decline rates. The differences we find in the rise and fall properties suggest that a single 'stretch' correction to the light curve phase does not properly model the range of SN Ia light curve shapes. We select a subset of 105 light curves well observed in both rise and fall portions of the light curves and develop a '2-stretch' fit algorithm which estimates the rise and fall times independently. We find the average time from explosion to B-band peak brightness is 17.38 ± 0.17 days, but with a spread of rise times which range from 13 days to 23 days. Our average rise time is shorter than the 19.5 days found in previous studies; this reflects both the different light curve template used and the application of the 2-stretch algorithm. The SDSS-II supernova set and the local SNe Ia with well-observed early light curves show no significant differences in their average rise-time properties. We find that slow-declining events tend to have fast rise times, but that the distribution of rise minus fall time is broad and single peaked. This distribution is in contrast to the bimodality in this parameter that was first suggested by Strovink (2007) from an analysis of a small set of local SNe Ia.
We divide the SDSS-II sample in half based on the rise minus fall value, t_r - t_f ≲ 2 days and t_r - t_f > 2 days, to search for differences in their host galaxy properties and Hubble residuals; we find no difference in host galaxy properties or Hubble residuals in our sample.
Classification of Uxo by Principal Dipole Polarizability
NASA Astrophysics Data System (ADS)
Kappler, K. N.
2010-12-01
Data acquired by multiple-transmitter, multiple-receiver time-domain electromagnetic devices show great potential for determining geometric and compositional information about near-surface conductive targets. We present an analysis of data from one such system, the Berkeley Unexploded-ordnance Discriminator (BUD). BUD data are succinctly reduced by processing the multi-static data matrices to obtain magnetic dipole polarizability matrices for each time gate. When viewed over all time gates, the projections of the data onto the principal polar axes yield so-called polarizability curves. These curves are especially well suited to discriminating between subsurface conductivity anomalies corresponding to objects of rotational symmetry and those corresponding to irregularly shaped objects. The curves have previously been successfully employed as library elements in a pattern recognition scheme aimed at discriminating harmless scrap metal from dangerous intact unexploded ordnance. However, previous polarizability-curve matching methods have only been applied at field sites known a priori to be contaminated by a single type of ordnance, and furthermore, the particular ordnance present in the subsurface was known to be large. Thus signal amplitude was a key element in the discrimination process. The work presented here applies feature-based pattern classification techniques to BUD field data where more than 20 categories of object are present. Data soundings from a calibration grid at the Yuma, AZ proving ground are used in a cross-validation study to calibrate the pattern recognition method. The resultant method is then applied to a Blind Test Grid. Results indicate that when lone UXO are present and SNR is reasonably high, polarizability curve matching successfully discriminates UXO from scrap metal when a broad range of objects are present.
Dynamic linear models to explore time-varying suspended sediment-discharge rating curves
NASA Astrophysics Data System (ADS)
Ahn, Kuk-Hyun; Yellen, Brian; Steinschneider, Scott
2017-06-01
This study presents a new method to examine long-term dynamics in sediment yield using time-varying sediment-discharge rating curves. Dynamic linear models (DLMs) are introduced as a time series filter that can assess how the relationship between streamflow and sediment concentration or load changes over time in response to a wide variety of natural and anthropogenic watershed disturbances or long-term changes. The filter operates by updating parameter values using a recursive Bayesian design that responds to 1 day-ahead forecast errors while also accounting for observational noise. The estimated time series of rating curve parameters can then be used to diagnose multiscale (daily-decadal) variability in sediment yield after accounting for fluctuations in streamflow. The technique is applied in a case study examining changes in turbidity load, a proxy for sediment load, in the Esopus Creek watershed, part of the New York City drinking water supply system. The results show that turbidity load exhibits a complex array of variability across time scales. The DLM highlights flood event-driven positive hysteresis, where turbidity load remained elevated for months after large flood events, as a major component of dynamic behavior in the rating curve relationship. The DLM also produces more accurate 1 day-ahead loading forecasts compared to other static and time-varying rating curve methods. The results suggest that DLMs provide a useful tool for diagnosing changes in sediment-discharge relationships over time and may help identify variability in sediment concentrations and loads that can be used to inform dynamic water quality management.
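A random-walk DLM of a log-log rating curve updates its intercept and slope with Kalman-filter recursions driven by one-step-ahead forecast errors, as described above. The sketch below is a generic two-parameter filter with invented noise settings and synthetic data, not the authors' calibrated model:

```python
def dlm_update(theta, P, x, y, W, v):
    """One recursive update of a random-walk dynamic linear model
    (Kalman filter) for the rating curve log C = a + b*log Q.
    theta = [a, b]; P = 2x2 parameter covariance; x = [1, log Q];
    y = observed log C; W = state evolution covariance; v = obs variance."""
    # Time update: parameters follow a random walk, so uncertainty grows.
    P = [[P[i][j] + W[i][j] for j in range(2)] for i in range(2)]
    # One-step-ahead forecast and its error.
    e = y - (theta[0] * x[0] + theta[1] * x[1])
    Px = [P[0][0] * x[0] + P[0][1] * x[1],
          P[1][0] * x[0] + P[1][1] * x[1]]
    q = x[0] * Px[0] + x[1] * Px[1] + v      # forecast variance
    K = [Px[0] / q, Px[1] / q]               # Kalman gain
    theta = [theta[0] + K[0] * e, theta[1] + K[1] * e]
    P = [[P[i][j] - K[i] * Px[j] for j in range(2)] for i in range(2)]
    return theta, P, e

# Noiseless synthetic record from a fixed rating curve log C = 1 + 2*log Q.
theta, P = [0.0, 0.0], [[10.0, 0.0], [0.0, 10.0]]
W = [[1e-6, 0.0], [0.0, 1e-6]]
for step in range(40):
    logq = float(step % 4)
    theta, P, e = dlm_update(theta, P, [1.0, logq], 1.0 + 2.0 * logq, W, 0.1)
```

With W > 0 the filter discounts old observations, which is what lets the fitted parameters drift after disturbances such as the flood-driven hysteresis the abstract describes.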
Fitting milk production curves through nonlinear mixed models.
Piccardi, Monica; Macchiavelli, Raúl; Funes, Ariel Capitaine; Bó, Gabriel A; Balzarini, Mónica
2017-05-01
The aim of this work was to fit and compare three non-linear models (Wood, MilkBot and diphasic) to model lactation curves from two approaches: with and without a cow random effect. Knowing the behaviour of lactation curves is critical for decision-making in a dairy farm. Knowledge of the model of milk production progress along each lactation is necessary not only at the mean population level (dairy farm), but also at the individual level (cow-lactation). The fits were made in a group of high-production and high-reproduction dairy farms, in first and third lactations in cool seasons. A total of 2167 complete lactations were involved, of which 984 were first lactations and the remaining ones third lactations (19,382 milk yield tests). PROC NLMIXED in SAS was used to make the fits and estimate the model parameters. The diphasic model proved to be computationally complex and barely practical. Regarding the classical Wood and MilkBot models, although the information criteria suggest the selection of MilkBot, the differences in the estimation of production indicators did not show a significant improvement. The Wood model was found to be a good option for fitting the expected value of lactation curves. Furthermore, all three models fitted better when the subject (cow) random effect was considered, which is related to the magnitude of production. The random effect improved the predictive potential of the models, but it did not have a significant effect on the production indicators derived from the lactation curves, such as milk yield and days in milk to peak.
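The Wood model referenced above is y(t) = a·t^b·exp(-c·t), which peaks at t = b/c days in milk. A sketch with hypothetical parameter values (not the study's fitted estimates):

```python
import math

def wood(t, a, b, c):
    """Wood lactation curve: daily yield y(t) = a * t**b * exp(-c*t)."""
    return a * t ** b * math.exp(-c * t)

def peak_day(b, c):
    """Setting dy/dt = 0 gives the day of peak yield, t = b/c."""
    return b / c

# Hypothetical parameters, purely for illustration.
a, b, c = 15.0, 0.25, 0.004
t_peak = peak_day(b, c)              # days in milk at peak
y_peak = wood(t_peak, a, b, c)       # peak daily yield
```

In a mixed-model fit of the kind described, a (and possibly b and c) would additionally carry a cow-level random effect.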
Smoot, Betty J.; Wong, Josephine F.; Dodd, Marylin J.
2013-01-01
Objective To compare diagnostic accuracy of measures of breast cancer–related lymphedema (BCRL). Design Cross-sectional design comparing clinical measures with the criterion standard of previous diagnosis of BCRL. Setting University of California San Francisco Translational Science Clinical Research Center. Participants Women older than 18 years and more than 6 months posttreatment for breast cancer (n=141; 70 with BCRL, 71 without BCRL). Interventions Not applicable. Main Outcome Measures Sensitivity, specificity, receiver operating characteristic (ROC) curve, and area under the curve (AUC) were used to evaluate accuracy. Results A total of 141 women were categorized as having (n=70) or not having (n=71) BCRL based on past diagnosis by a health care provider, which was used as the reference standard. Analyses of ROC curves for the continuous outcomes yielded AUCs of .68 to .88 (P<.001); of the physical measures, bioimpedance spectroscopy yielded the highest accuracy, with an AUC of .88 (95% confidence interval, .80–.96) for women whose dominant arm was the affected arm. The lowest accuracy was found using the 2-cm diagnostic cutoff score to identify previously diagnosed BCRL (AUC, .54–.65). Conclusions Our findings support the use of bioimpedance spectroscopy in the assessment of existing BCRL. Refining diagnostic cutoff values may improve accuracy of diagnosis and warrants further investigation. PMID:21440706
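The AUC values reported above equal the Mann-Whitney probability that a randomly chosen affected case scores higher than a randomly chosen unaffected one. A generic sketch with invented bioimpedance-style scores (not the study's data):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly drawn positive (BCRL) case scores
    above a randomly drawn negative case, counting ties as 1/2."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Invented inter-limb impedance ratios, for illustration only.
with_bcrl = [1.30, 1.22, 1.15, 1.08]
without_bcrl = [1.02, 1.05, 0.98, 1.10]
a = auc(with_bcrl, without_bcrl)      # -> 0.9375
```

An AUC of 0.5 corresponds to chance discrimination, which is why the 2-cm cutoff's .54–.65 range indicates poor accuracy.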
Crystal plasticity assisted prediction on the yield locus evolution and forming limit curves
NASA Astrophysics Data System (ADS)
Lian, Junhe; Liu, Wenqi; Shen, Fuhui; Münstermann, Sebastian
2017-10-01
The aim of this study is to predict the plastic anisotropy evolution and its associated forming limit curves of bcc steels purely based on their microstructural features by establishing an integrated multiscale modelling approach. Crystal plasticity models are employed to describe the micro deformation mechanism and correlate the microstructure with mechanical behaviour on micro and mesoscale. Virtual laboratory is performed considering the statistical information of the microstructure, which serves as the input for the phenomenological plasticity model on the macroscale. For both scales, the microstructure evolution induced evolving features, such as the anisotropic hardening, r-value and yield locus evolution are seamlessly integrated. The predicted plasticity behaviour by the numerical simulations are compared with experiments. These evolutionary features of the material deformation behaviour are eventually considered for the prediction of formability.
Modelling of loading, stress relaxation and stress recovery in a shape memory polymer.
Sweeney, J; Bonner, M; Ward, I M
2014-09-01
A multi-element constitutive model for a lactide-based shape memory polymer has been developed that represents loading to large tensile deformations, stress relaxation and stress recovery at 60, 65 and 70°C. The model consists of parallel Maxwell arms, each comprising neo-Hookean and Eyring elements. Guiu-Pratt analysis of the stress relaxation curves yields Eyring parameters. When these parameters are used to define the Eyring process in a single Maxwell arm, the resulting model yields at too low a stress, but gives good predictions for longer times. Stress dip tests show a very stiff response on unloading by a small strain decrement. This would create an unrealistically high stress on loading to large strain if it were modelled by an elastic element. Instead it is modelled by an Eyring process operating via a flow rule that introduces strain hardening after yield. When this process is incorporated into a second parallel Maxwell arm, the result is a model that fully represents both stress relaxation and stress dip tests at 60°C. At higher temperatures a third arm is required for valid predictions. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Wang, John T.; Bomarito, Geoffrey F.
2016-01-01
This study implements a plasticity tool to predict the nonlinear shear behavior of unidirectional composite laminates under multiaxial loadings, with an intent to further develop the tool for use in composite progressive damage analysis. The steps for developing the plasticity tool include establishing a general quadratic yield function, deriving the incremental elasto-plastic stress-strain relations using the yield function with associated flow rule, and integrating the elasto-plastic stress-strain relations with a modified Euler method and a substepping scheme. Micromechanics analyses are performed to obtain normal and shear stress-strain curves that are used in determining the plasticity parameters of the yield function. By analyzing a micromechanics model, a virtual testing approach is used to replace costly experimental tests for obtaining stress-strain responses of composites under various loadings. The predicted elastic moduli and Poisson's ratios are in good agreement with experimental data. The substepping scheme for integrating the elasto-plastic stress-strain relations is suitable for working with displacement-based finite element codes. An illustration problem is solved to show that the plasticity tool can predict the nonlinear shear behavior for a unidirectional laminate subjected to multiaxial loadings.
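A modified-Euler scheme with substepping, as described for integrating the elasto-plastic stress-strain relations, advances the solution with a second-order (Heun) estimate and uses the gap between the first- and second-order estimates to control the substep size. The scalar sketch below illustrates only the control logic on a simple ODE, not the actual constitutive equations:

```python
import math

def modified_euler_substep(f, y, x0, x1, tol=1e-8):
    """Integrate dy/dx = f(x, y) from x0 to x1 with the modified
    (Heun) Euler method and adaptive substepping: the gap between the
    first-order and second-order estimates drives the substep size."""
    x, h = x0, x1 - x0
    while x < x1:
        h = min(h, x1 - x)
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)            # slope at Euler predictor
        y2 = y + 0.5 * h * (k1 + k2)         # second-order (Heun) estimate
        err = 0.5 * h * abs(k2 - k1)         # local error estimate
        if err <= tol * max(1.0, abs(y2)):
            x, y = x + h, y2                 # accept substep
            h *= 1.5                         # and cautiously enlarge it
        else:
            h *= 0.5                         # reject and subdivide
    return y

# Sanity check on a problem with a known answer: y' = y, y(0) = 1 -> e.
approx = modified_euler_substep(lambda x, y: y, 1.0, 0.0, 1.0)
```

In the stress-integration setting, y would be the stress (and hardening) state, x the strain increment, and rejection of a substep would shrink the strain increment applied to the yield-surface return.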
Koppenol-Gonzalez, Gabriela V; Bouwmeester, Samantha; Vermunt, Jeroen K
2014-10-01
In studies on the development of cognitive processes, children are often grouped based on their ages before analyzing the data. After the analysis, the differences between age groups are interpreted as developmental differences. We argue that this approach is problematic because the variance in cognitive performance within an age group is considered to be measurement error. However, if a part of this variance is systematic, it can provide very useful information about the cognitive processes used by some children of a certain age but not others. In the current study, we presented 210 children aged 5 to 12 years with serial order short-term memory tasks. First we analyze our data according to the approach using age groups, and then we apply latent class analysis to form latent classes of children based on their performance instead of their ages. We display the results of the age groups and the latent classes in terms of serial position curves, and we discuss the differences in results. Our findings show that there are considerable differences in performance between the age groups and the latent classes. We interpret our findings as indicating that the latent class analysis yielded a much more meaningful way of grouping children in terms of cognitive processes than the a priori grouping of children based on their ages. Copyright © 2014 Elsevier Inc. All rights reserved.
Tallarida, Ronald J.; Raffa, Robert B.
2014-01-01
In this review we show that the concept of dose equivalence for two drugs, the theoretical basis of the isobologram, has a wider use in the analysis of pharmacological data derived from single and combination drug use. In both its application to drug combination analysis with isoboles and certain other actions, listed below, the determination of doses, or receptor occupancies, that yield equal effects provide useful metrics that can be used to obtain quantitative information on drug actions without postulating any intimate mechanism of action. These other drug actions discussed here include (1) combinations of agonists that produce opposite effects, (2) analysis of inverted U-shaped dose effect curves of single agents, (3) analysis on the effect scale as an alternative to isoboles and (4) the use of occupation isoboles to examine competitive antagonism in the dual receptor case. New formulas derived to assess the statistical variance for additive combinations are included, and the more detailed mathematical topics are included in the appendix. PMID:20546783
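The dose equivalence underlying the isobologram is commonly quantified with the Loewe additivity index: for a combination (d1, d2) producing the same effect as either drug alone at doses D1 and D2, additivity means d1/D1 + d2/D2 = 1. A sketch with hypothetical doses:

```python
def interaction_index(d1, D1, d2, D2):
    """Loewe additivity index for a dose pair (d1, d2) producing a given
    effect, where D1 and D2 are the equi-effective doses of each drug
    alone: 1 means additive, < 1 synergism, > 1 sub-additivity."""
    return d1 / D1 + d2 / D2

# Hypothetical equi-effective doses (arbitrary units).
gamma = interaction_index(d1=2.0, D1=8.0, d2=3.0, D2=4.0)   # -> 1.0, additive
```

Geometrically, dose pairs with index 1 lie on the straight additive isobole joining (D1, 0) and (0, D2).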
Analysis of three tests of the unconfined aquifer in southern Nassau County, Long Island, New York
Lindner, J.B.; Reilly, T.E.
1982-01-01
Drawdown and recovery data from three 2-day aquifer tests of the unconfined (water-table) aquifer in southern Nassau County, N.Y., during the fall of 1979, were analyzed. Several simple analytical solutions, a type-curve-matching procedure, and a Galerkin finite-element radial-flow model were used to determine hydraulic conductivity, ratio of horizontal to vertical hydraulic conductivity, and specific yield. Results of the curve-matching procedure covered a broad range of values that could be narrowed through consideration of data from other sources such as published reports, drillers' logs, or values determined by analytical solutions. Analysis by the radial-flow model was preferred because it allows for vertical variability in aquifer properties and solves the system for all observation points simultaneously, whereas the other techniques treat the aquifer as homogeneous and must treat each observation well separately. All methods produced fairly consistent results. The ranges of aquifer values at the three sites were: horizontal hydraulic conductivity, 140 to 380 feet per day; transmissivity, 11,200 to 17,100 feet squared per day; ratio of horizontal to vertical hydraulic conductivity, 2.4:1 to 7:1; and specific yield, 0.13 to 0.23. (USGS)
Manufacturing complexity analysis
NASA Technical Reports Server (NTRS)
Delionback, L. M.
1977-01-01
The analysis of the complexity of a typical system is presented. Starting with the subsystems of an example system, the step-by-step procedure for analysis of the complexity of an overall system is given. The learning curves for the various subsystems are determined as well as the concurrent numbers of relevant design parameters. Then trend curves are plotted for the learning curve slopes versus the various design-oriented parameters, e.g. number of parts versus slope of learning curve, or number of fasteners versus slope of learning curve, etc. Representative cuts are taken from each trend curve, and a figure-of-merit analysis is made for each of the subsystems. Based on these values, a characteristic curve is plotted which is indicative of the complexity of the particular subsystem. Each such characteristic curve is based on a universe of trend curve data taken from data points observed for the subsystem in question. Thus, a characteristic curve is developed for each of the subsystems in the overall system.
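Learning curves of the kind analyzed above are conventionally written as unit cost = a·n^b, with the quoted "slope" being the cost ratio per doubling of cumulative output, 2^b. A sketch with illustrative numbers (a 90% curve is an assumption, not a value from the text):

```python
import math

def unit_cost(first_unit_cost, n, b):
    """Crawford unit-cost learning curve: cost of unit n = a * n**b."""
    return first_unit_cost * n ** b

def slope_percent(b):
    """Conventional 'slope' of a learning curve: the cost ratio per
    doubling of cumulative output, 2**b (e.g. 0.9 for a 90% curve)."""
    return 2.0 ** b

b90 = math.log2(0.9)                  # exponent implied by a 90% curve
s = slope_percent(b90)                # -> 0.9
c4 = unit_cost(100.0, 4, b90)         # two doublings: -> 81.0
```

The trend curves described in the abstract would then relate this slope to design parameters such as part or fastener counts.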
Nagel-Alne, G E; Krontveit, R; Bohlin, J; Valle, P S; Skjerve, E; Sølverød, L S
2014-07-01
In 2001, the Norwegian Goat Health Service initiated the Healthier Goats program (HG), with the aim of eradicating caprine arthritis encephalitis, caseous lymphadenitis, and Johne's disease (caprine paratuberculosis) in Norwegian goat herds. The aim of the present study was to explore how control and eradication of the above-mentioned diseases by enrolling in HG affected milk yield by comparison with herds not enrolled in HG. Lactation curves were modeled using a multilevel cubic spline regression model where farm, goat, and lactation were included as random effect parameters. The data material contained 135,446 registrations of daily milk yield from 28,829 lactations in 43 herds. The multilevel cubic spline regression model was applied to 4 categories of data: enrolled early, control early, enrolled late, and control late. For enrolled herds, the early and late notations refer to the situation before and after enrolling in HG; for nonenrolled herds (controls), they refer to development over time, independent of HG. Total milk yield increased in the enrolled herds after eradication: the total milk yields in the fourth lactation were 634.2 and 873.3 kg in enrolled early and enrolled late herds, respectively, and 613.2 and 701.4 kg in the control early and control late herds, respectively. Day of peak yield differed between enrolled and control herds. The day of peak yield came on d 6 of lactation for the control early category for parities 2, 3, and 4, indicating an inability of the goats to further increase their milk yield from the initial level. For enrolled herds, on the other hand, peak yield came between d 49 and 56, indicating a gradual increase in milk yield after kidding. Our results indicate that enrollment in the HG disease eradication program improved the milk yield of dairy goats considerably, and that the multilevel cubic spline regression was a suitable model for exploring effects of disease control and eradication on milk yield. 
Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Film characteristics pertinent to coherent optical data processing systems.
Thomas, C E
1972-08-01
Photographic film is studied quantitatively as the input mechanism for coherent optical data recording and processing systems. The two important film characteristics are the amplitude transmission vs exposure (T(A) - E) curve and the film noise power spectral density. Both functions are measured as a function of the type of film, the type of developer, developer time and temperature, and the exposing and readout light wavelengths. A detailed analysis of a coherent optical spatial frequency analyzer reveals that the optimum dc bias point for 649-F film is an amplitude transmission of about 70%. This operating point yields minimum harmonic and intermodulation distortion, whereas the 50% amplitude transmission bias point recommended by holographers yields maximum diffraction efficiency. It is also shown that the effective ac gain or contrast of the film is nearly independent of the development conditions for a given film. Finally, the linear dynamic range of one particular coherent optical spatial frequency analyzer is shown to be about 40-50 dB.
NASA Technical Reports Server (NTRS)
Boclair, J. W.; Braterman, P. S.; Jiang, J.; Lou, S.; Yarberry, F.
1999-01-01
Solutions containing divalent metal [M(II) = Mg2+, Zn2+, Co2+, Ni2+, Mn2+] chlorides and CrCl3·6H2O were titrated with NaOH to yield, for M(II) = Zn, Co, and Ni, hydrotalcite-like layered double hydroxides (LDHs), [[M(II)]1-z[Cr(III)]z(OH)2][Cl]z·yH2O, in a single step, without intermediate formation of chromium hydroxide. Analysis of the resultant titration curves yields solubility constants for these compounds. These are in the order Zn < Ni ≈ Co, with a clear preference for formation of the phase with z = 1/3. With Mg2+ as chloride, titration gives a mixture of Cr(OH)3 and Mg(OH)2, but the metal sulfates give Mg2Cr(OH)6·1/2(SO4) by a two-step process. Titrimetric and spectroscopic evidence suggests short-range cation order in the one-step LDH systems.
Rg to Lg Scattering Observations and Modeling
NASA Astrophysics Data System (ADS)
Baker, G. E.; Stevens, J. L.; Xu, H.
2005-12-01
Lg is important to explosion yield estimation and earthquake/explosion discrimination, but the source of explosion generated Lg is still an area of active investigation. We investigate the contribution of Rg scattering to Lg. Common spectral nulls in vertical component Rg and Lg have been interpreted as evidence that scattered Rg is the dominant source of Lg in some areas. The nulls are assumed to result from non-spherical components of the explosion source, modeled as a CLVD located above the explosion. We compare Rg with 3-component Sg and Lg spectra in different source areas. Wavenumber synthetics and nonlinear source calculations constrain the predicted source spectra of Rg and directly generated Lg. Modal scattering calculations place bounds on the contribution of Rg to Lg relative to pS, S*, and directly generated S-waves. Rg recorded east and west of the Quartz 3 Deep Seismic Sounding explosion have persistent spectral nulls, but at different frequencies. The azimuthal dependence of the source spectra suggests that it may not be simply related to a CLVD source. The spectral nulls of Sg, Lg, and Lg coda do not correspond to the Rg spectral nulls, so for this overburied source, the spectral observations do not indicate that Rg scattering is a dominant contributor to Lg. Preliminary comparisons of Rg with Lg spectra for events from the Semipalatinsk Test Site yield a similar result. We compare Rg at 20-100 km with Lg at 650 km for Balapan and Degelen explosions with known yield and source depth. The events range from 130 to 50 percent of theoretical containment depth, so relative contributions from a CLVD are expected to vary significantly. For previously studied NTS and Kazakh depth-of-burial data, the use of three components provides further insight into scattering between components.
In a complementary analysis, to assess whether S-wave generation is affected by source depth or scaled depth, we have examined regional phase amplitudes of 13 Degelen explosions with known yields and source depths. Initial Pn, the entire P wavetrain, Sn, Lg, and Lg coda have similar log amplitude vs. log yield curves. The slope of those curves varies with frequency, from approximately 0.84 at 0.6 Hz to 0.65 at 6 Hz. We will complement these results with similar observations of Balapan explosion records.
A new comparison of hyperspectral anomaly detection algorithms for real-time applications
NASA Astrophysics Data System (ADS)
Díaz, María; López, Sebastián; Sarmiento, Roberto
2016-10-01
Due to the high spectral resolution that remotely sensed hyperspectral images provide, there has been an increasing interest in anomaly detection. The aim of anomaly detection is to single out pixels whose spectral signature differs significantly from the background spectra. Basically, anomaly detectors mark pixels with a certain score, considering as anomalies those whose scores are higher than a threshold. Receiver Operating Characteristic (ROC) curves have been widely used as an assessment measure in order to compare the performance of different algorithms. ROC curves are graphical plots which illustrate the trade-off between false positive and true positive rates. However, they are of limited use for deeper comparisons because they discard relevant factors required in real-time applications, such as run times, costs of misclassification and the ability to mark anomalies with high scores. This last point is fundamental in anomaly detection, since it allows anomalies to be distinguished easily from the background without any posterior processing. An extensive set of simulations has been carried out using different anomaly detection algorithms, comparing their performance and efficiency using several extra metrics that complement ROC curve analysis. Results support our proposal and demonstrate that ROC curves by themselves do not provide a good visualization of detection performance. Moreover, a figure of merit is proposed in this paper which encompasses in a single global metric all the measures yielded by the proposed additional metrics. This figure, named Detection Efficiency (DE), takes into account several crucial types of performance assessment that ROC curves do not consider. Results demonstrate that the algorithms with the best detection performance according to ROC curves do not have the highest DE values.
Consequently, the recommendation to use extra measures to properly evaluate performance has been supported and justified by the conclusions drawn from the simulations.
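Since the discussion above turns on how ROC curves are built and what they do and do not capture, here is a minimal, self-contained sketch of computing ROC points and the AUC from detector scores. The scores and labels are toy values, not data from the paper, and tie-handling is simplified.

```python
# Minimal sketch: building an ROC curve from anomaly-detector scores.
# Scores and labels below are illustrative, not from the paper.

def roc_points(scores, labels):
    """Return (FPR, TPR) pairs as the threshold sweeps over the scores."""
    pairs = sorted(zip(scores, labels), reverse=True)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / n_neg, tp / n_pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # detector scores
labels = [1, 1, 0, 1, 0, 0]               # 1 = true anomaly
pts = roc_points(scores, labels)
```

As the abstract argues, two detectors with identical `pts` can still differ in run time and in how far anomaly scores sit above the background, which is what motivates metrics beyond the AUC.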
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-01-01
Background Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results Simulation studies showed that repeated 10-fold cross-validation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
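The net-benefit quantity plotted in a decision curve has a simple closed form, NB(pt) = TP/n − (FP/n)·pt/(1 − pt), following Vickers and Elkin; the sketch below evaluates it directly from predicted probabilities, as the extension described above allows. The probabilities and outcomes are toy values, not from the paper.

```python
# Sketch of the net-benefit calculation underlying decision curve analysis.
# Predicted probabilities and observed outcomes below are toy values.

def net_benefit(probs, outcomes, pt):
    """Net benefit of treating patients whose predicted risk >= threshold pt."""
    n = len(probs)
    tp = sum(1 for p, y in zip(probs, outcomes) if p >= pt and y == 1)
    fp = sum(1 for p, y in zip(probs, outcomes) if p >= pt and y == 0)
    return tp / n - (fp / n) * (pt / (1 - pt))

probs    = [0.9, 0.8, 0.6, 0.4, 0.2, 0.1]  # model-predicted risks (toy)
outcomes = [1,   1,   0,   1,   0,   0]    # observed events
# A decision curve is this quantity traced over threshold probabilities:
curve = [(pt, net_benefit(probs, outcomes, pt)) for pt in (0.1, 0.3, 0.5)]
```

The weight pt/(1 − pt) encodes how a patient at the threshold trades off false positives against true positives, which is why no external cost data are needed.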
NASA Astrophysics Data System (ADS)
Zafiropoulos, Demetre; Facco, E.; Sarchiapone, Lucia
2016-09-01
In case of a radiation accident, it is well known that in the absence of physical dosimetry biological dosimetry based on cytogenetic methods is a unique tool to estimate individual absorbed dose. Moreover, even when physical dosimetry indicates an overexposure, scoring chromosome aberrations (dicentrics and rings) in human peripheral blood lymphocytes (PBLs) at metaphase is presently the most widely used method to confirm dose assessment. The analysis of dicentrics and rings in PBLs after Giemsa staining of metaphase cells is considered the most valid assay for radiation injury. This work shows that applying the fluorescence in situ hybridization (FISH) technique, using telomeric/centromeric peptide nucleic acid (PNA) probes in metaphase chromosomes for radiation dosimetry, could become a fast scoring, reliable and precise method for biological dosimetry after accidental radiation exposures. In both in vitro methods described above, lymphocyte stimulation is needed, and this limits the application in radiation emergency medicine where speed is considered to be a high priority. Using premature chromosome condensation (PCC), irradiated human PBLs (non-stimulated) were fused with mitotic CHO cells, and the yield of excess PCC fragments in Giemsa stained cells was scored. To score dicentrics and rings under PCC conditions, the necessary centromere and telomere detection of the chromosomes was obtained using FISH and specific PNA probes. Of course, a prerequisite for dose assessment in all cases is a dose-effect calibration curve. This work illustrates the various methods used; dose response calibration curves, with 95% confidence limits used to estimate dose uncertainties, have been constructed for conventional metaphase analysis and FISH. We also compare the dose-response curve constructed after scoring of dicentrics and rings using PCC combined with FISH and PNA probes. 
Also reported are dose-response curves showing scored dicentrics and rings per cell, combining PCC of lymphocytes and CHO cells with FISH using PNA probes at 10 h and 24 h after irradiation, and, finally, calibration data for excess PCC fragments (Giemsa) to be used if human blood is available immediately after irradiation or within 24 h.
Influence of caffeine on X-ray-induced killing and mutation in V79 cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharjee, S.B.; Bhattacharyya, N.; Chatterjee, S.
1987-02-01
Effects produced by caffeine on X-irradiated Chinese hamster V79 cells depended on the growth conditions of the cells. For exponentially growing cells, nontoxic concentrations of caffeine decreased the shoulder width of the survival curve, but the slope remained unchanged. The yield of mutants under the same conditions also remained unaffected. In the case of density-inhibited cells, delaying trypsinization for 24 h after X irradiation increased survival and decreased the yield of mutants. The presence of caffeine during this incubation period inhibited such recovery and significantly increased the yield of X-ray-induced mutants.
Design of air-gapped magnetic-core inductors for superimposed direct and alternating currents
NASA Technical Reports Server (NTRS)
Ohri, A. K.; Wilson, T. G.; Owen, H. A., Jr.
1976-01-01
Using data on standard magnetic-material properties and standard core sizes for air-gap-type cores, an algorithm designed for a computer solution is developed which optimally determines the air-gap length and locates the quiescent point on the normal magnetization curve so as to yield an inductor design with the minimum number of turns for a given ac voltage and frequency and with a given dc bias current superimposed in the same winding. Magnetic-material data used in the design are the normal magnetization curve and a family of incremental permeability curves. A second procedure, which requires a simpler set of calculations, starts from an assigned quiescent point on the normal magnetization curve and first screens candidate core sizes for suitability, then determines the required turns and air-gap length.
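As a rough companion to the design procedure above, the sketch below uses the standard gapped-core approximation L ≈ μ0·N²·A / (g + l/μr), which neglects fringing flux and saturation. All numbers are illustrative assumptions, not taken from the paper's core tables or magnetization data.

```python
import math

# Hedged sketch of the gapped-core inductance relation behind such designs:
#   L ~ mu0 * N^2 * A / (g + l_core / mu_r)
# neglecting fringing flux and saturation. Illustrative values only.

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def inductance(turns, area_m2, gap_m, core_len_m, mu_r):
    """Inductance of an air-gapped magnetic-core winding (no fringing)."""
    return MU0 * turns**2 * area_m2 / (gap_m + core_len_m / mu_r)

# Example: 100 turns, 1 cm^2 core area, 1 mm gap, 10 cm path, mu_r = 2000
L = inductance(100, 1e-4, 1e-3, 0.10, 2000)
```

Because the gap term g usually dominates l/μr, widening the gap lowers L but also desensitizes the inductor to the permeability drop caused by the superimposed dc bias, which is the trade-off the paper's algorithm optimizes.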
NASA Astrophysics Data System (ADS)
Hassan, A.-P.
2014-07-01
The small-punch testing (SPT) method is used for determining the mechanical properties of AISI 410 (0.14% C, 12% Cr) stainless steel. A thin disc-shaped specimen with known mechanical properties is pressed with a small ball until cracks appear in the specimen. The load-displacement curves are recorded. The yield strength and fracture energy computed from the obtained curve using known formulas show good agreement with the characteristics obtained by standard testing.
Quantum field theory on curved spacetimes: Axiomatic framework and examples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredenhagen, Klaus; Rejzner, Kasia
In this review article, we present a systematic development of quantum field theory on curved spacetimes. The leading principle is the emphasis on local properties. It turns out that this requires a reformulation of the QFT framework which also yields a new perspective on theories on Minkowski space. The aim of the present work is to provide an almost self-contained introduction to the framework, which should be accessible to both mathematical physicists and mathematicians.
Effect of curve sawing on lumber recovery and warp of short cherry logs containing sweep
Brian H. Bond; Philip Araman
2008-01-01
It has been estimated that approximately one-third of hardwood sawlogs have a significant amount of sweep and that 7 to nearly 40 percent of the yield is lost from logs that have greater than 1 inch of sweep. While decreased yield is important, for hardwood logs the loss of lumber value is likely more significant. A method that produced lumber while accounting for log...
The column strength of aluminum alloy 75S-T extruded shapes
NASA Technical Reports Server (NTRS)
Holt, Marshall; Leary, J R
1946-01-01
Because the tensile strength and tensile yield strength of alloy 75S-T are appreciably higher than those of the materials used in the tests leading to the use of the straight-line column curve, it appeared advisable to establish the curve of column strength by test rather than by extrapolation of relations determined empirically in the earlier tests. The object of this investigation was to determine the curve of column strength for extruded aluminum alloy 75S-T. In addition to three extruded shapes, a rolled-and-drawn round rod was included. Specimens of various lengths covering the range of effective slenderness ratios up to about 100 were tested.
Molecular dynamics study of the melting curve of NiTi alloy under pressure
NASA Astrophysics Data System (ADS)
Zeng, Zhao-Yi; Hu, Cui-E.; Cai, Ling-Cang; Chen, Xiang-Rong; Jing, Fu-Qian
2011-02-01
The melting curve of NiTi alloy was predicted by molecular dynamics simulations combined with an embedded atom model potential. The calculated thermal equation of state agrees well with our previous results obtained from the quasiharmonic Debye approximation. Fitting the well-known Simon form to our Tm data yields the melting curves for NiTi: Tm = 1850(1 + P/21.938)^0.328 K (one-phase method) and Tm = 1575(1 + P/7.476)^0.305 K (two-phase method). The two-phase simulations effectively eliminate the superheating present in one-phase simulations. At 1 bar, the melting temperature of NiTi is 1575 ± 25 K and the corresponding melting slope is 64 K/GPa.
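The two fitted Simon curves quoted above can be written as small helpers. A useful consistency check is that the ambient-pressure slope of the two-phase fit, T0·c/a, reproduces the reported 64 K/GPa.

```python
# The fitted Simon melting curves quoted in the abstract:
#   Tm(P) = T0 * (1 + P/a)**c, with P in GPa and Tm in K.

def simon_tm(p_gpa, t0, a, c):
    return t0 * (1 + p_gpa / a) ** c

def one_phase(p_gpa):   # one-phase fit (prone to superheating)
    return simon_tm(p_gpa, 1850.0, 21.938, 0.328)

def two_phase(p_gpa):   # two-phase (solid-liquid coexistence) fit
    return simon_tm(p_gpa, 1575.0, 7.476, 0.305)

# Ambient-pressure melting slope of the two-phase curve, dTm/dP = T0*c/a:
slope_k_per_gpa = 1575.0 * 0.305 / 7.476   # ~64 K/GPa, as reported
```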
NASA Astrophysics Data System (ADS)
Adeloye, A. J.; Ojha, C. S.; Soundharajan, B.; Remesan, R.
2013-12-01
There is considerable change in both the spatial and temporal patterns of monsoon rainfall in India, with implications for water resources availability and security. 'Mitigating the Impacts of Climate Change on India Agriculture' (MICCI) is one of five ongoing scientific efforts being sponsored as part of the UK-NERC/India-MOES Changing Water Cycle (South Asia) initiative to further the understanding of the problem and proffer solutions that are robust and effective. This paper focuses on assessing the implications of projected climate change on the yield and performance characteristics of the Pong Reservoir on the Beas River, Himachal Pradesh, India. The Pong serves both hydropower and irrigation needs and is therefore strategic for the socio-economic well-being of the region as well as sustaining the livelihoods of millions of farmers that rely on it for irrigation. Simulated baseline and climate-change perturbed hydro-climate scenarios developed as part of a companion Work Package of MICCI formed the basis of the analysis. For both of these scenarios, reservoir analyses were carried out using the Sequent Peak Algorithm (SPA) and Pong's existing level of releases to derive rule curves for the reservoir. These rule curves then formed the basis of further reservoir behaviour simulations in WEAP and the resulting performance of the reservoir was summarised in terms of reliability, resilience, vulnerability and sustainability. The whole exercise was implemented within a Monte Carlo framework to characterise the variability in the assessments. The results show that the rule curves developed using the future hydro-climate differ significantly from the baseline: higher storages will need to be maintained in the Pong in the future to achieve reliable performance.
As far as the overall performance of the reservoir is concerned, future reliability (both time-based and volume-based) is not significantly different from the baseline, provided the future simulations adopt the future rule curves. This is, however, not the case with the resilience, with the future hydro-climate resulting in a less resilient system when compared with the baseline. The resilience is the ability of the system to recover from a hydrological failure; consequently, lower resilience for the future systems is an indication that longer, continuous failure periods are likely with implications for the two purposes of the reservoir. For example, extended periods of water scarcity that may result from a low resilient system will mean that crops are likely to experience longer periods of water stress with implications for crop yields. In such situations, better operational practices that manage the available water through hedging and irrigation water scheduling will be required. Other interventions may include the introduction of water from other sources, e.g. groundwater.
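The Sequent Peak Algorithm used above to derive the rule curves has a compact recursive form: K[t+1] = max(0, K[t] + D[t] − Q[t]), with the required capacity being the maximum K over the (cycled) record. Below is a minimal sketch with toy inflows and a constant demand; the actual analysis used Pong's release schedule and Monte Carlo-sampled hydro-climate scenarios.

```python
# Minimal sketch of the Sequent Peak Algorithm (SPA) for required storage.
# Inflow and demand volumes below are toy values, not Beas River data.

def sequent_peak(inflows, demand):
    """Required reservoir capacity: K[t+1] = max(0, K[t] + demand - inflow)."""
    k = 0.0
    k_max = 0.0
    for q in 2 * list(inflows):   # cycle the record twice, a common SPA practice
        k = max(0.0, k + demand - q)
        k_max = max(k_max, k)
    return k_max

inflows = [10, 2, 3, 12, 8, 1]          # e.g. monthly inflow volumes
capacity = sequent_peak(inflows, demand=6.0)
```

Under a drier future inflow sequence the running deficit K grows larger before inflows refill it, which is exactly the "higher storages required" result the abstract reports.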
NASA Technical Reports Server (NTRS)
George, Kerry A.; Cucinotta, Francis A.
2009-01-01
The yield of chromosome damage in astronauts' blood lymphocytes has been shown to increase after long duration space missions of a few months or more. This provides a useful in vivo measurement of space radiation induced damage that takes into account individual radiosensitivity and considers the influence of microgravity and other stress conditions. We present our latest follow-up analyses of chromosome damage in astronauts' blood lymphocytes assessed by fluorescence in situ hybridization (FISH) chromosome painting and collected at various times, from directly after return from space to several years after flight. For most individuals the analysis of individual time-courses for translocations revealed a temporal decline of yields with different half-lives. Dose was derived from frequencies of chromosome exchanges using preflight calibration curves, and estimates derived from samples collected a few days after return to Earth lie within the range expected from physical dosimetry. However, a temporal decline in yields may indicate complications with the use of stable aberrations for retrospective dose reconstruction, and the differences in the decay time may reflect individual variability in risk from space radiation exposure. Limited data on three individuals who have participated in repeat long duration space flights indicate a lack of correlation between time in space and translocation yields, and show a possible adaptive response to space radiation exposure.
NASA Astrophysics Data System (ADS)
Christanto, N.; Sartohadi, J.; Setiawan, M. A.; Shrestha, D. B. P.; Jetten, V. G.
2018-04-01
Land use change influences hydrological and landscape processes such as runoff and sediment yield. The main objectives of this study are to assess land use change and its impact on the runoff and sediment yield of the upper Serayu Catchment. Land use changes from 1991 to 2014 have been analyzed. Spectral similarity and vegetation indices were used to classify the older image, so that the present and past images are comparable. The influence of past and present land use on runoff and sediment yield has been compared with field measurements. The land use changes lead to increased surface runoff, reflected in changed curve number (CN) values. The study shows that it is possible to classify a previously acquired image based on spectral characteristics and indices of major land cover types derived from a recently acquired image. This avoids the need for training samples, which would be difficult to obtain. It also demonstrates that it is possible to link land cover changes with land degradation processes and, finally, with sedimentation in the reservoir. The only condition is the requirement for a comparable dataset, which should not be difficult to generate. Any variation inherent in the data other than surface reflectance has to be corrected.
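The curve number (CN) values mentioned above enter runoff estimates through the standard SCS-CN relation (depths in inches): S = 1000/CN − 10 and Q = (P − 0.2S)² / (P + 0.8S) for P > 0.2S. This sketch, with illustrative storm depth and CN values rather than Serayu data, shows how a land use change that raises CN increases storm runoff.

```python
# Sketch of the SCS curve-number runoff relation behind the CN values
# discussed above (depths in inches). CN and storm values are illustrative.

def scs_runoff(p_in, cn):
    """Direct runoff depth Q from storm depth P and curve number CN."""
    s = 1000.0 / cn - 10.0        # potential maximum retention
    ia = 0.2 * s                  # initial abstraction
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in + 0.8 * s)

# A land cover change raising CN increases runoff for the same storm:
q_forest = scs_runoff(3.0, 60)    # e.g. forested cover
q_urban  = scs_runoff(3.0, 85)    # e.g. cleared/built-up cover
```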
Guillermina Socías, María; Van Nieuwenhove, Guido; Murúa, María Gabriela; Willink, Eduardo; Liljesthröm, Gerardo Gustavo
2016-04-01
The soybean stalk weevil, Sternechus subsignatus Boheman 1836 (Coleoptera: Curculionidae), is a very serious soybean pest in the Neotropical region. Both adults and larvae feed on soybean, causing significant yield losses. Adult survival was evaluated during three soybean growing seasons under controlled environmental conditions. A survival analysis was performed using a parametric survival fit approach in order to generate survival curves and obtain information that could help optimize integrated management strategies for this weevil pest. Sex of the weevils, crop season, fortnight in which weevils emerged, and their interaction were studied regarding their effect on adult survival. The results showed that females lived longer than males, but both genders were actually long-lived, reaching 224 and 176 d, respectively. Mean lifetime (l50) was 121.88±4.56 d for females and 89.58±2.72 d for males. Although variations were observed in adult longevities among emergence fortnights and soybean seasons, only in December and January fortnights of the 2007–2008 season and December fortnights of 2009–2010 did the statistically longest and shortest longevities occur, respectively. Survivorship data (lx) of adult females and males were fitted to the Weibull frequency distribution model. The survival curve was type I for both sexes, which indicated that mortality corresponded mostly to old individuals.
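The Weibull survivorship model fitted to the lx data above can be sketched as follows. The scale and shape parameters here are illustrative (chosen near the reported female l50), not the paper's fitted values; a shape parameter c > 1 yields the Type I pattern reported, with mortality concentrated in old individuals.

```python
import math

# Sketch of a Weibull survivorship curve, l(t) = exp(-(t/b)**c).
# Parameters below are illustrative, not the study's fitted values.

def weibull_lx(t, b, c):
    """Proportion surviving to age t (days)."""
    return math.exp(-((t / b) ** c))

def median_lifetime(b, c):
    """Age at which l(t) = 0.5, i.e. l50 = b * ln(2)**(1/c)."""
    return b * math.log(2) ** (1.0 / c)

b, c = 130.0, 3.0        # scale (days) and shape; c > 1 gives a Type I curve
l50 = median_lifetime(b, c)
```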
Carvalho, Lucas de F; Pianowski, Giselle; Filho, Nelson H
2017-05-01
The Clinical Dimensional Personality Inventory (IDCP) is a 163-item self-report tool developed for the assessment of 12 dimensions of personality pathology. One of the scales comprising the instrument-the Dependency scale-is intended to provide psychometric information on traits closely related to the Dependent Personality Disorder (DPD). In the present study, we used both Item Response Theory modeling and Receiver Operating Characteristic curve analysis to establishing a clinically meaningful cutoff for the IDCP Dependency Scale. Participants were 2.481 adults, comprised by outpatients diagnosed with DPD, outpatients diagnosed with other PDs, and adults from the general population. The Wright's item map graphing technique revealed that outpatients were located at the very high levels in the latent scale continuum of the Dependency Scale, with a very large effect size for the mean difference between patients and non-patients. The ROC curve analysis supported a cutoff at 2.3 points in the Dependency Scale, which yielded 0.86 of sensitivity and 0.79 of specificity. Findings from the present investigation suggest the IDCP Dependency Scale is useful as a screening tool of the core features of the DPD. We address potential clinical applications for the instrument, and discuss limitations from the present study. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Enhos, Asım; Sahin, Irfan; Can, Mehmet Mustafa; Biter, Ibrahim; Dinckal, Mustafa Hakan; Serebruany, Victor
2013-01-01
Objective To investigated the relationship between epicardial fat volume (EFV) and coronary collateral circulation (CCC) in patients with stable coronary artery disease (CAD). Methods The study population consisted of 152 consecutive patients with CAD who underwent coronary angiography and were found to have at least 95% significiant lesion in at least one major coronary artery. EFV was assessed utilizing 64-multislice computed tomography. The patients were classifield into impaired CCC group (Group 1, Rentrop grades 0−1, n = 58), or adequate CCC (Group 2, Rentrop grades 2−3, n = 94). Results The EFV values were significantly higher in paitients with adequate CCC than in those with impaired CCC. In multivariate logistic regression analysis, EFV (OR = 1.059; 95% CI: 1.035−1.085; P = 0.001); and presence of angina were independent predictors of adequate CCC. In receiver-operating characteristic curve analysis, the EFV value > 106.5 mL yielded an area under the curve value of 0.84, with the test sensitivity of 49.3%, and with 98.3% specifity. Conclusions High EFV, and the presence of angina independently predict adequate CCC in patients with stable coronary artery disease. This association offers new diagnostic opportinities to assess collateral flow by conventional ultrasound techniques. PMID:24454327
A fractal analysis of Quaternary, Cenozoic-Mesozoic, and Late Pennsylvanian sea level changes
NASA Technical Reports Server (NTRS)
Hsui, Albert T.; Rust, Kelly A.; Klein, George D.
1993-01-01
Sea level changes are related to both climatic variations and tectonic movements. The fractal dimensions of several sea level curves were compared to a modern climatic fractal dimension of 1.26 established for annual precipitation records. A similar fractal dimension (1.22) based on delta(O-18/O-16) in deep-sea sediments has been suggested to characterize climatic change during the past 2 m.y. Our analysis indicates that sea level changes over the past 150,000 to 250,000 years also exhibit comparable fractal dimensions. Sea level changes for periods longer than about 30 m.y. are found to produce fractal dimensions closer to unity and Missourian (Late Pennsylvanian) sea level changes yield a fractal dimension of 1.41. The fact that these sea level curves all possess fractal dimensions less than 1.5 indicates that sea level changes exhibit nonperiodic, long-run persistence. The different fractal dimensions calculated for the various time periods could be the result of a characteristic overprinting of the sediment record by prevailing processes during deposition. For example, during the Quaternary, glacio-eustatic sea level changes correlate well with the present climatic signature. During the Missourian, however, mechanisms such as plate reorganization may have dominated, resulting in a significantly different fractal dimension.
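One common route to a fractal dimension for series like these sea level curves is to estimate the increment-scaling exponent H from E|x(t+k) − x(t)| ~ k^H and use D = 2 − H, so that persistence corresponds to D < 1.5. The sketch below is an assumed method for illustration, not the authors' procedure; it recovers D ≈ 1.5 for a synthetic random walk (H ≈ 0.5).

```python
import math
import random

# Hedged sketch: estimate a fractal dimension D for a time series via the
# increment-scaling (variogram-type) exponent H, then D = 2 - H.
# Persistent records give H > 0.5, hence D < 1.5.

def fractal_dimension(series, lags=(1, 2, 4, 8, 16)):
    pts = []
    for k in lags:
        m = sum(abs(series[i + k] - series[i])
                for i in range(len(series) - k)) / (len(series) - k)
        pts.append((math.log(k), math.log(m)))
    # Least-squares slope of log(mean |increment|) vs log(lag) estimates H.
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    h = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return 2.0 - h

random.seed(0)
walk = [0.0]
for _ in range(20000):
    walk.append(walk[-1] + random.gauss(0, 1))   # synthetic random walk
d = fractal_dimension(walk)
```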
Song, Yang; Zhang, Yu-Dong; Yan, Xu; Liu, Hui; Zhou, Minxiong; Hu, Bingwen; Yang, Guang
2018-04-16
Deep learning is the most promising methodology for automatic computer-aided diagnosis of prostate cancer (PCa) with multiparametric MRI (mp-MRI). To develop an automatic approach based on a deep convolutional neural network (DCNN) to classify PCa and noncancerous tissues (NC) with mp-MRI. Retrospective. In all, 195 patients with localized PCa were collected from the PROSTATEx database. In total, 159/17/19 patients with 444/48/55 observations (215/23/23 PCas and 229/25/32 NCs) were randomly selected for training/validation/testing, respectively. T2-weighted, diffusion-weighted, and apparent diffusion coefficient images. A radiologist manually labeled the regions of interest of PCas and NCs and estimated the Prostate Imaging Reporting and Data System (PI-RADS) scores for each region. Inspired by VGG-Net, we designed a patch-based DCNN model to distinguish between PCa and NCs based on a combination of mp-MRI data. Additionally, an enhanced prediction method was used to improve the prediction accuracy. The performance of DCNN prediction was tested using a receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Moreover, the predicted result was compared with the PI-RADS score to evaluate its clinical value using decision curve analysis. Two-sided Wilcoxon signed-rank test with statistical significance set at 0.05. The DCNN produced excellent diagnostic performance in distinguishing between PCa and NC for testing datasets with an AUC of 0.944 (95% confidence interval: 0.876-0.994), sensitivity of 87.0%, specificity of 90.6%, PPV of 87.0%, and NPV of 90.6%. The decision curve analysis revealed that the joint model of PI-RADS and DCNN provided additional net benefits compared with the DCNN model and the PI-RADS scheme.
The proposed DCNN-based model with enhanced prediction yielded high performance in statistical analysis, suggesting that DCNN could be used in computer-aided diagnosis (CAD) for PCa classification. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
Hughes, G; Burnett, F J; Havis, N D
2013-11-01
Disease risk curves are simple graphical relationships between the probability of need for treatment and evidence related to risk factors. In the context of the present article, our focus is on factors related to the occurrence of disease in crops. Risk is the probability of adverse consequences; specifically in the present context it denotes the chance that disease will reach a threshold level at which crop protection measures can be justified. This article describes disease risk curves that arise when risk is modeled as a function of more than one risk factor, and when risk is modeled as a function of a single factor (specifically the level of disease at an early disease assessment). In both cases, disease risk curves serve as calibration curves that allow the accumulated evidence related to risk to be expressed on a probability scale. When risk is modeled as a function of the level of disease at an early disease assessment, the resulting disease risk curve provides a crop loss assessment model in which the downside is denominated in terms of risk rather than in terms of yield loss.
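A disease risk curve of the single-factor kind described above, mapping an early disease assessment to the probability of reaching the treatment threshold, is naturally modeled with logistic regression; the curve then calibrates the evidence onto a probability scale. The coefficients below are illustrative only, not fitted to any crop data.

```python
import math

# Sketch of a single-factor disease risk curve: a logistic model mapping
# an early disease assessment x (e.g. percent leaf area affected) to the
# probability that disease reaches the treatment threshold.
# Coefficients are illustrative, not fitted values.

def risk(x, b0=-3.0, b1=0.8):
    """Probability of needing treatment given early assessment x."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

def needs_treatment(x, policy_threshold=0.5):
    """Treat when the risk exceeds a chosen policy threshold."""
    return risk(x) >= policy_threshold

# The risk curve itself: probability traced over assessment levels.
curve = [(x, risk(x)) for x in range(0, 11)]
```

Framing the output as a probability of crossing the threshold, rather than a predicted yield loss, is exactly the "downside denominated in risk" point the article makes.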
Asymptotic scalings of developing curved pipe flow
NASA Astrophysics Data System (ADS)
Ault, Jesse; Chen, Kevin; Stone, Howard
2015-11-01
Asymptotic velocity and pressure scalings are identified for the developing curved pipe flow problem in the limit of small pipe curvature and high Reynolds numbers. The continuity and Navier-Stokes equations in toroidal coordinates are linearized about Dean's analytical curved pipe flow solution (Dean 1927). Applying appropriate scaling arguments to the perturbation pressure and velocity components and taking the limits of small curvature and large Reynolds number yields a set of governing equations and boundary conditions for the perturbations, independent of any Reynolds number and pipe curvature dependence. Direct numerical simulations are used to confirm these scaling arguments. Fully developed straight pipe flow is simulated entering a curved pipe section for a range of Reynolds numbers and pipe-to-curvature radius ratios. The maximum values of the axial and secondary velocity perturbation components along with the maximum value of the pressure perturbation are plotted along the curved pipe section. The results collapse when the scaling arguments are applied. The numerically solved decay of the velocity perturbation is also used to determine the entrance/development lengths for the curved pipe flows, which are shown to scale linearly with the Reynolds number.
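A hedged helper for the parameters in play above: secondary flow in curved pipes is conventionally characterized by the Dean number, De = Re·sqrt(a/R) in one common convention (definitions differ by constant factors across the literature), with pipe radius a and curvature radius R. The numbers below are illustrative.

```python
import math

# Dean number for curved pipe flow, one common convention:
#   De = Re * sqrt(a / R), pipe radius a, curvature radius R.
# Illustrative values; definitions vary by constant factors.

def dean_number(reynolds, pipe_radius, curvature_radius):
    return reynolds * math.sqrt(pipe_radius / curvature_radius)

# Small curvature ratio a/R with large Re, the regime of the analysis above:
de = dean_number(reynolds=5000, pipe_radius=0.01, curvature_radius=1.0)
```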
14-3-3η Autoantibodies: Diagnostic Use in Early Rheumatoid Arthritis.
Maksymowych, Walter P; Boire, Gilles; van Schaardenburg, Dirkjan; Wichuk, Stephanie; Turk, Samina; Boers, Maarten; Siminovitch, Katherine A; Bykerk, Vivian; Keystone, Ed; Tak, Paul Peter; van Kuijk, Arno W; Landewé, Robert; van der Heijde, Desiree; Murphy, Mairead; Marotta, Anthony
2015-09-01
To describe the expression and diagnostic use of 14-3-3η autoantibodies in early rheumatoid arthritis (RA). 14-3-3η autoantibody levels were measured using an electrochemiluminescent multiplexed assay in 500 subjects (114 disease-modifying antirheumatic drug-naive patients with early RA, 135 with established RA, 55 healthy, 70 autoimmune, and 126 other non-RA arthropathy controls). 14-3-3η protein levels were determined in an earlier analysis. Two-tailed Student t tests and Mann-Whitney U tests compared differences among groups. Receiver-operating characteristic (ROC) curves were generated and diagnostic performance was estimated by the area under the curve (AUC), as well as specificity, sensitivity, and likelihood ratios (LR) for optimal cutoffs. Median serum 14-3-3η autoantibody concentrations were significantly higher (p < 0.0001) in patients with early RA (525 U/ml) when compared with healthy controls (235 U/ml), disease controls (274 U/ml), autoimmune disease controls (274 U/ml), patients with osteoarthritis (259 U/ml), and all controls (265 U/ml). ROC curve analysis comparing early RA with healthy controls demonstrated a significant (p < 0.0001) AUC of 0.90 (95% CI 0.85-0.95). At an optimal cutoff of ≥ 380 U/ml, the ROC curve yielded a sensitivity of 73%, a specificity of 91%, and a positive LR of 8.0. Adding 14-3-3η autoantibodies to 14-3-3η protein positivity enhanced the identification of patients with early RA from 59% to 90%; addition of 14-3-3η autoantibodies to anticitrullinated protein antibodies (ACPA) and/or rheumatoid factor (RF) increased identification from 72% to 92%. Seventy-two percent of RF- and ACPA-seronegative patients were positive for 14-3-3η autoantibodies. 14-3-3η autoantibodies, alone and in combination with the 14-3-3η protein, RF, and/or ACPA identified most patients with early RA.
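The reported positive likelihood ratio follows directly from the quoted sensitivity and specificity, LR+ = sensitivity / (1 − specificity); the sketch below reproduces the value of about 8 from the abstract's numbers.

```python
# Likelihood-ratio arithmetic behind the reported ROC cutoff.
# With the abstract's sensitivity (0.73) and specificity (0.91),
# LR+ = sens / (1 - spec) reproduces the quoted positive LR of ~8.

def positive_lr(sensitivity, specificity):
    return sensitivity / (1.0 - specificity)

def negative_lr(sensitivity, specificity):
    return (1.0 - sensitivity) / specificity

lr_pos = positive_lr(0.73, 0.91)   # ~8.1
lr_neg = negative_lr(0.73, 0.91)   # ~0.30
```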
Johansson, K. Olof; Zádor, Judit; Elvati, Paolo; ...
2017-05-18
We present a critical evaluation of photoionization efficiency (PIE) measurements coupled with aerosol mass spectrometry for the identification of condensed soot-precursor species extracted from a premixed atmospheric-pressure ethylene/oxygen/nitrogen flame. Definitive identification of isomers by any means is complicated by the large number of potential isomers at masses likely to comprise particles at flame temperatures. This problem is compounded in PIE measurements by the similarity in ionization energies and PIE-curve shapes among many of these isomers. Nevertheless, PIE analysis can provide important chemical information. For example, our PIE curves show that neither pyrene nor fluoranthene alone can describe the signal from C16H10 isomers and that coronene alone cannot describe the PIE signal from C24H12 species. A linear combination of the reference PIE curves for pyrene and fluoranthene yields good agreement with flame-PIE curves measured at 202 u, which is consistent with pyrene and fluoranthene being the two major C16H10 isomers in the flame samples, but does not provide definite proof. The suggested ratio between fluoranthene and pyrene depends on the sampling conditions. We calculated the adiabatic-ionization energies (AIEs) of 24 C16H10 isomers. Despite the small number of isomers considered, the calculations show that the differences in AIEs between several of the isomers can be smaller than the average thermal energy at room temperature. The calculations also show that PIE analysis can sometimes be used to separate hydrocarbon species into those that contain mainly aromatic rings and those that contain significant aliphatic content, for the species sizes investigated in this study. Our calculations suggest an inverse relationship between AIE and the number of aromatic rings. We have demonstrated that further characterization of precursors can be facilitated by measurements that test species volatility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sustainability and Environmental Economics: Some Critical Foci
I present five seminal concepts of environmental economic thought and discuss their applicability to the idea of sustainability. These five, Maximum Sustainable Yield and Steady-state, the Environmental Kuznets curve, Substitutability, Discount rate and Intergenerational equity...
EXPERIMENTAL MEASUREMENT AND INTERPRETATION OF VOLT-AMPERE CURVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gingrich, J.E.; Warner, C.; Weeks, C.C.
1962-07-01
Cylindrical and parallel-plane cesium vapor thermionic converters were used for obtaining volt-ampere curves for systematic variations of emitter, collector, and cesium reservoir temperatures, with electrode spacings ranging from a few to many mean free paths, and with space charge conditions varying from electron-rich to ion-rich. The resulting curves exhibit much variety. The saturation currents agree well with the data of Houston and Aamodt for the space-charge-neutralized, few-mean-free-path cases. "Apparent" saturation currents for space charge limited cases were observed and were always less than the currents predicted by Houston and Aamodt. Several discontinuities in slope were observed in the reverse current portion of the curves, and these have tentatively been identified with volume ionization of atoms in both the ground and excited states. Similar processes may be important for obtaining the ignited mode. The methods used to measure static and dynamic volt-ampere curves are described. The use of a controlled-current load has yielded a "negative resistance" region in the curves which show the ignited mode. The curves obtained with poor current control do not show this phenomenon. Extinction is considered from the standpoint of Kaufmann's criterion for stability.
Optimal threshold estimation for binary classifiers using game theory.
Sanchez, Ignacio Enrique
2016-01-01
Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
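The proposed operating point, the intersection of the ROC curve with the descending diagonal (where sensitivity equals specificity), can be located numerically. A minimal sketch on synthetic classifier scores; the score distributions and the search procedure are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def minimax_threshold(scores, labels):
    """Return the cutoff where the ROC curve crosses the descending
    diagonal, i.e. where sensitivity (TPR) equals specificity (1 - FPR)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    best_t, best_gap = None, np.inf
    for t in np.unique(scores):
        tpr = np.mean(pos >= t)          # sensitivity
        fpr = np.mean(neg >= t)          # 1 - specificity
        gap = abs(tpr - (1.0 - fpr))     # distance from TPR = 1 - FPR
        if gap < best_gap:
            best_gap, best_t = gap, t
    return best_t

# Synthetic scores: negatives ~ N(0, 1), positives ~ N(2, 1)
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(2.0, 1.0, 500)])
labels = np.concatenate([np.zeros(500, dtype=int), np.ones(500, dtype=int)])
t_star = minimax_threshold(scores, labels)
```

For symmetric score distributions like these, the recovered cutoff sits near the midpoint of the two class means, where sensitivity and specificity coincide.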
Recession curve analysis for groundwater levels: case study in Latvia
NASA Astrophysics Data System (ADS)
Gailuma, A.; Vītola, I.; Abramenko, K.; Lauva, D.; Vircavs, V.; Veinbergs, A.; Dimanta, Z.
2012-04-01
Recession curve analysis is a powerful and effective technique in many research areas related to hydrogeology where observations must be made, such as water filtration and absorption of moisture, irrigation and drainage, planning of hydroelectric power production, and chemical leaching (elution of chemical substances). Analysis of the recession curves of surface-runoff hydrographs, performed to understand the interaction of precipitation and surface runoff, is well established in practice. The same method of recession curve analysis can be applied to observations of groundwater levels. Hydrographs for recession curve analysis were prepared manually for observation wells (MG2, BG2 and AG1) at agricultural monitoring sites in Latvia. Within this study, periods of declining groundwater level were extracted from the available monitoring data and split by month. The drop-down curves were shifted together manually (by changing the date) until the best match was found, yielding monthly drop-down curves representing each month separately. The monthly curves were then combined and joined manually to obtain a characteristic drop-down curve of the year for each well. In the recession curve analysis, upward segments were cut out of the initial curve, leaving only the declining portions; the curve is thereby brought closer to the groundwater flow itself, removing the impact of rain or drought periods. The drop-down curve is thus the part of the hydrograph data in which discharge dominates, without the influence of precipitation. Using recession curve analysis theory, the ready-made tool "A Visual Basic Spreadsheet Macro for Recession Curve Analysis" (K. Posavec et al., Ground Water 44(5): 764-767, 2006) was used for data selection and logarithmic function matching, and functions were also developed by manual processing of the data. To display the data, a mathematical model of data equalization was used, finding the corresponding or closest logarithmic recession function for the graph. The recession curves obtained were similar but not identical. With full knowledge of groundwater level fluctuations, it is possible to determine the filtration coefficient indirectly (without taking soil samples): a more rapid decline in the recession curve corresponds to better filtration conditions. This research could be very useful in construction planning, road construction, agriculture, etc. Acknowledgments: the authors gratefully acknowledge funding from the ESF project "Establishment of interdisciplinary scientist group and modeling system for groundwater research" (Agreement No. 2009/0212/1DP/1.1.1.2.0/09/APIA/VIAA/060EF7).
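The core operation of recession curve analysis, fitting a decay function to a declining limb, can be sketched for the common linear-reservoir (exponential) case, where the level follows h(t) = h0·exp(-αt). The data and recession constant below are synthetic illustrations, not the Latvian well records:

```python
import numpy as np

# Synthetic recession limb of a linear reservoir, h(t) = h0 * exp(-alpha * t);
# alpha and h0 are illustrative values, not fitted to the observation wells.
t = np.arange(0.0, 30.0)              # days since the recession began
alpha_true, h0_true = 0.08, 2.5
h = h0_true * np.exp(-alpha_true * t)

# A log-linear least-squares fit recovers the recession constant:
# ln h(t) = ln h0 - alpha * t
slope, intercept = np.polyfit(t, np.log(h), 1)
alpha_fit, h0_fit = -slope, np.exp(intercept)
```

A steeper fitted α corresponds to faster drainage, which is the sense in which the abstract links rapid recession to better filtration conditions.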
Resistance Curves in the Tensile and Compressive Longitudinal Failure of Composites
NASA Technical Reports Server (NTRS)
Camanho, Pedro P.; Catalanotti, Giuseppe; Davila, Carlos G.; Lopes, Claudio S.; Bessa, Miguel A.; Xavier, Jose C.
2010-01-01
This paper presents a new methodology to measure the crack resistance curves associated with fiber-dominated failure modes in polymer-matrix composites. These crack resistance curves not only characterize the fracture toughness of the material, but are also the basis for the identification of the parameters of the softening laws used in the analytical and numerical simulation of fracture in composite materials. The method proposed is based on the identification of the crack tip location by the use of Digital Image Correlation and the calculation of the J-integral directly from the test data using a simple expression derived for cross-ply composite laminates. It is shown that the results obtained using the proposed methodology yield crack resistance curves similar to those obtained using FEM-based methods in compact tension carbon-epoxy specimens. However, it is also shown that the Digital Image Correlation based technique can be used to extract crack resistance curves in compact compression tests for which FEM-based techniques are inadequate.
Analysis of Voyager spectra of the beta Cephei star nu Eridani
NASA Technical Reports Server (NTRS)
Porri, A.; Stalio, R.; Ali, B.; Polidan, R. S.; Morossi, C.
1994-01-01
Voyager 500-1700 A spectrophotometric observations of the beta Cephei star nu Eri are presented and discussed. The Voyager observations were obtained in 1981 and cover six pulsation cycles of the star. These data are supplemented with a set of nine International Ultraviolet Explorer (IUE) SWP high-resolution observations covering one pulsation cycle at an earlier epoch. Light curves are derived from the Voyager data at 1055 and 1425 A. These light curves are found to be consistent in both shape and period with published optical curves. The 1055 A light curve also exhibits a phenomenon not seen in the optical curves: a small but highly significant systematic increase in the flux of the maximum light phases while maintaining a constant minimum light level over the interval of observation. Substantially larger errors in the longer wavelength data preclude discussion of this phenomenon in the 1425 A light curve. Examination of the far-UV continuum in nu Eri during this period shows that the color temperature is lower for the brighter maxima. Analysis of the far-UV continuum at maximum and minimum light yields an effective temperature difference between these two phases of 2200 ± 750 K. Spectroscopically, three prominent features are seen in the Voyager data: a feature at 985 A mostly due to a blend of C III 977 A, H I Ly gamma 972 A, and N III 990 A; a feature at 1030 A due to H I Ly beta 1026 A and C II 1037 A; and the Si IV resonance doublet near 1400 A. A comparison of the 912-1700 A spectral region in nu Eri with a set of standard, i.e., nonpulsating, stars shows that nu Eri closely resembles the standard both in continuum shape and spectral line strengths, with the possible exception of a slight flux excess between 912 and 975 A. The equivalent width of the 985 A feature is shown to vary in strength over the pulsation cycle in antiphase with the light curve and variations seen in the C IV 1548-1551 lines from the IUE data.
This behavior of the 985 A feature is most likely caused by variations in the strength of the Ly gamma component of the blend. Comparisons are also made between nu Eri and the only other beta Cephei star studied in the far-UV, BW Vul, with the most notable differences between the two stars being the much larger delta(T(sub eff)) for BW Vul and the almost total absence of abnormalities in the observed spectrum of nu Eri.
Bayesian Inference and Application of Robust Growth Curve Models Using Student's "t" Distribution
ERIC Educational Resources Information Center
Zhang, Zhiyong; Lai, Keke; Lu, Zhenqiu; Tong, Xin
2013-01-01
Despite the widespread popularity of growth curve analysis, few studies have investigated robust growth curve models. In this article, the "t" distribution is applied to model heavy-tailed data and contaminated normal data with outliers for growth curve analysis. The derived robust growth curve models are estimated through Bayesian…
Optical Rotation Curves and Linewidths for Tully-Fisher Applications
NASA Astrophysics Data System (ADS)
Courteau, Stephane
1997-12-01
We present optical long-slit rotation curves for 304 northern Sb-Sc UGC galaxies from a sample designed for Tully-Fisher (TF) applications. Matching r-band photometry exists for each galaxy. We describe the procedures of rotation curve (RC) extraction and construction of optical profiles analogous to 21 cm integrated linewidths. More than 20% of the galaxies were observed twice or more, allowing for a proper determination of systematic errors. Various measures of maximum rotational velocity to be used as input in the TF relation are tested on the basis of their repeatability, minimization of TF scatter, and match with 21 cm linewidths. The best measure of TF velocity, V2.2 is given at the location of peak rotational velocity of a pure exponential disk. An alternative measure to V2.2 which makes no assumption about the luminosity profile or shape of the rotation curve is Vhist, the 20% width of the velocity histogram, though the match with 21 cm linewidths is not as good. We show that optical TF calibrations yield internal scatter comparable to, if not smaller than, the best calibrations based on single-dish 21 cm radio linewidths. Even though resolved H I RCs are more extended than their optical counterpart, a tight match between optical and radio linewidths exists since the bulk of the H I surface density is enclosed within the optical radius. We model the 304 RCs presented here plus a sample of 958 curves from Mathewson et al. (1992, APJS, 81, 413) with various fitting functions. An arctan function provides an adequate simple fit (not accounting for non-circular motions and spiral arms). More elaborate, empirical models may yield a better match at the expense of strong covariances. We caution against physical or "universal" parametrizations for TF applications.
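The arctan fitting function mentioned above can be written down directly; as a hypothetical illustration (the parameter values and disk scale length are assumed, not taken from the sample), the TF velocity measure V2.2 is then the model velocity evaluated at 2.2 disk scale lengths:

```python
import numpy as np

def arctan_rc(r, v_c, r_t):
    """Empirical arctan rotation-curve model: rises over a turnover
    radius r_t and flattens toward the asymptotic velocity v_c."""
    return (2.0 / np.pi) * v_c * np.arctan(r / r_t)

# Hypothetical best-fit parameters and disk scale length (illustrative only)
v_c, r_t = 210.0, 2.0      # asymptotic velocity (km/s), turnover radius (kpc)
r_d = 3.0                  # exponential-disk scale length (kpc)

# V2.2: the velocity at 2.2 disk scale lengths, where the rotation
# curve of a pure exponential disk peaks
v22 = arctan_rc(2.2 * r_d, v_c, r_t)
```

Because the arctan form approaches v_c only asymptotically, V2.2 always sits somewhat below the asymptotic velocity.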
Kim, Joo-Hwan; Kim, Jin Ho; Wang, Pengbin; Park, Bum Soo; Han, Myung-Soo
2016-01-01
The identification and quantification of Heterosigma akashiwo cysts in sediments by light microscopy can be difficult due to the small size and morphology of the cysts, which are often indistinguishable from those of other types of algae. Quantitative real-time PCR (qPCR) based assays represent a potentially efficient method for quantifying the abundance of H. akashiwo cysts, although standard curves must be based on cyst DNA rather than on vegetative cell DNA due to differences in gene copy number and DNA extraction yield between these two cell types. Furthermore, qPCR on sediment samples can be complicated by the presence of extracellular DNA debris. To solve these problems, we constructed a cyst-based standard curve and developed a simple method for removing DNA debris from sediment samples. This cyst-based standard curve was compared with a standard curve based on vegetative cells, as vegetative cells may have twice the gene copy number of cysts. To remove DNA debris from the sediment, we developed a simple method involving dilution with distilled water and heating at 75°C. A total of 18 sediment samples were used to evaluate this method. Cyst abundance determined using the qPCR assay without DNA debris removal yielded results up to 51-fold greater than with direct counting. By contrast, a highly significant correlation was observed between cyst abundance determined by direct counting and the qPCR assay in conjunction with DNA debris removal (r2 = 0.72, slope = 1.07, p < 0.001). Therefore, this improved qPCR method should be a powerful tool for the accurate quantification of H. akashiwo cysts in sediment samples.
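A qPCR standard curve of the kind described maps quantification-cycle (Cq) values to abundance through a log-linear fit to a dilution series. The sketch below uses hypothetical Cq values, not the cyst-based curve from the study:

```python
import numpy as np

# Hypothetical standard curve: Cq values measured for a 10-fold dilution
# series of known cyst counts (the numbers are illustrative assumptions).
cysts = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
cq = np.array([33.1, 29.8, 26.4, 23.1, 19.7])

# Cq = slope * log10(N) + intercept; slope near -3.32 means ~100% efficiency
slope, intercept = np.polyfit(np.log10(cysts), cq, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0

def quantify(cq_unknown):
    """Invert the standard curve: estimated cyst abundance for a Cq value."""
    return 10.0 ** ((cq_unknown - intercept) / slope)
```

Building the curve from cysts rather than vegetative cells, as the study does, folds the cyst-specific gene copy number and extraction yield into the fitted slope and intercept.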
Accuracy and reliability of the Pfeffer Questionnaire for the Brazilian elderly population
Dutra, Marina Carneiro; Ribeiro, Raynan dos Santos; Pinheiro, Sarah Brandão; de Melo, Gislane Ferreira; Carvalho, Gustavo de Azevedo
2015-01-01
The aging population calls for instruments to assess functional and cognitive impairment in the elderly, aiming to prevent conditions that affect functional abilities. Objective To verify the accuracy and reliability of the Pfeffer (FAQ) scale for the Brazilian elderly population and to evaluate the reliability and reproducibility of the translated version of the Pfeffer Questionnaire. Methods The Brazilian version of the FAQ was applied to 110 elderly people divided into two groups. Both groups were assessed by two blinded investigators at baseline and again after 15 days. In order to verify the accuracy and reliability of the instrument, sensitivity and specificity measurements for the presence or absence of functional and cognitive decline were calculated for various cut-off points and the ROC curve. Intra- and inter-examiner reliability were assessed using the Intraclass Correlation Coefficient (ICC) and Bland-Altman plots. Results For the occurrence of cognitive decline, the ROC curve yielded an area under the curve of 0.909 (95%CI of 0.845 to 0.972), sensitivity of 75.68% (95%CI of 93.52% to 100%) and specificity of 97.26%. For the occurrence of functional decline, the ROC curve yielded an area under the curve of 0.851 (95%CI of 64.52% to 87.33%) and specificity of 80.36% (95%CI of 69.95% to 90.76%). The ICC was excellent, with all values exceeding 0.75. On the Bland-Altman plots, intra-examiner agreement was good, with p > 0.05 and differences consistently close to 0. A systematic difference was found for inter-examiner agreement. Conclusion The Pfeffer Questionnaire is applicable in the Brazilian elderly population and showed reliability and reproducibility compared to the original test. PMID:29213959
Hsu, Sze-Bi; Yang, Ya-Tang
2016-04-01
We present the theory of a microfluidic bioreactor with a two-compartment growth chamber and periodic serial dilution. In the model, coexisting planktonic and biofilm populations exchange by adsorption and detachment. The criteria for coexistence and global extinction are determined by stability analysis of the global extinction state. Stability analysis yields the operating diagram in terms of the dilution and removal ratios, constrained by the plumbing action of the bioreactor. The special case of equal uptake function and logistic growth is analytically solved and explicit growth curves are plotted. The presented theory is applicable to generic microfluidic bioreactors with discrete growth chambers and periodic dilution at discrete time points. Therefore, the theory is expected to assist the design of microfluidic devices for investigating microbial competition and microbial biofilm growth under serial dilution conditions.
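The periodic-serial-dilution dynamics can be illustrated with a single logistic population (the paper's model additionally couples planktonic and biofilm compartments through adsorption and detachment). Growth between dilution events must outpace removal for persistence; all parameter values here are illustrative assumptions:

```python
def simulate(mu, K, D, T, x0=1.0, n_cycles=200, dt=0.01):
    """Logistic growth dx/dt = mu * x * (1 - x/K) integrated by forward
    Euler between dilution events; every T time units the population is
    multiplied by the dilution ratio D. Parameter values are illustrative."""
    x = x0
    steps = int(round(T / dt))
    for _ in range(n_cycles):
        for _ in range(steps):
            x += dt * mu * x * (1.0 - x / K)
        x *= D                      # serial-dilution event
    return x

# Heuristic low-density persistence condition: exp(mu * T) * D > 1
surviving = simulate(mu=1.0, K=10.0, D=0.5, T=2.0)     # e^2 * 0.5 > 1
washed_out = simulate(mu=1.0, K=10.0, D=0.05, T=1.0)   # e^1 * 0.05 < 1
```

The two runs bracket the stability boundary of the extinction state: the first settles onto a positive periodic orbit, the second is washed out, mirroring the coexistence/extinction dichotomy in the operating diagram.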
NASA Astrophysics Data System (ADS)
Yao, H. J.; Chang, P. Y.
2017-12-01
The Minzu Basin is located in central Taiwan, bounded by the Changhua fault in the west and the Chelungpu thrust fault in the east. The Chuoshui river flows through the basin and brings in thick unconsolidated gravel layers deposited over the Pleistocene rocks and gravels, so the area has great potential for groundwater development. However, there are not enough observation wells in the study area for a further investigation of groundwater characteristics. Therefore, we used the electrical resistivity imaging (ERI) method to estimate the depth of the groundwater table and the specific yield of the unconfined aquifer in the dry and wet seasons. We deployed 13 survey lines with the Wenner-Schlumberger array in the study area in March and June of 2017. Based on the ERI measurements and data from the nearby Xinming observation well, we converted resistivity into relative saturation with respect to the saturated background using Archie's law. We found that the depth distribution curve of the relative saturation has a shape similar to the soil-water characteristic curve. Hence we used the van Genuchten model to characterize the depth of the water table, and calculated the specific yield as the difference between the saturated and residual water contents. According to our preliminary results, the depth of groundwater ranges from 8 m to 10.7 m and the specific yield is about 0.095-0.146 in March; in June the depth of groundwater ranges from about 7.6 m to 9.8 m and the estimated specific yield is about 0.1-0.157. The average groundwater level in the wet season (June) is about 0.6 m higher than in March. We are now collecting more time-lapse data and making direct comparisons with data from recently completed observation wells, in order to verify our estimates from the resistivity surveys.
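The van Genuchten retention model and the specific-yield estimate described above can be sketched as follows; all parameter values are assumptions for illustration, not fitted to the ERI data:

```python
def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Van Genuchten retention curve: volumetric water content at suction
    head h, with m = 1 - 1/n. Parameter values used below are assumed."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

theta_r, theta_s = 0.05, 0.20   # residual / saturated water content (assumed)
alpha, n = 2.0, 2.5             # shape parameters, alpha in 1/m (assumed)

# Specific yield approximated as the drainable porosity,
# the difference between saturated and residual water contents
specific_yield = theta_s - theta_r
```

At zero suction the model returns the saturated content and it decreases monotonically with depth above the water table, which is the shape matched against the resistivity-derived saturation profile.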
Electrorheological suspensions of laponite in oil: rheometry studies.
Parmar, K P S; Méheust, Y; Schjelderupsen, Børge; Fossum, J O
2008-03-04
We have studied the effect of an external direct current (DC) electric field (approximately 1 kV/mm) on the rheological properties of colloidal suspensions consisting of aggregates of laponite particles in a silicone oil. Microscopy observations show that, under application of an electric field greater than a triggering field Ec ≈ 0.6 kV/mm, laponite aggregates assemble into chain- and/or column-like structures in the oil. Without an applied electric field, the steady-state shear behavior of such suspensions is Newtonian-like. Under application of an electric field larger than Ec, it changes dramatically as a result of the changes in the microstructure: a significant yield stress is measured, and under continuous shear the fluid is shear-thinning. The rheological properties, in particular the dynamic and static shear stress, were studied as a function of particle volume fraction for various strengths (including null) of the applied electric field. The flow curves at constant shear rate can be scaled with respect to both the particle fraction and electric field strength onto a master curve. This scaling is consistent with simple scaling arguments. The shape of the master curve accounts for the system's complexity; it approaches a standard power-law model at high Mason numbers. Both dynamic and static yield stresses are observed to depend on the particle fraction Φ and electric field E as Φ^β E^α, with α ≈ 1.85, and β ≈ 1 and 1.70 for the dynamic and static yield stresses, respectively. The yield stress was also determined as the critical stress at which a bifurcation occurs in the rheological behavior of suspensions submitted to a constant shear stress; a scaling law with α ≈ 1.84 and β ≈ 1.70 was obtained. The effectiveness of the latter technique confirms that such electrorheological (ER) fluids can be studied in the framework of thixotropic fluids.
The method is very reproducible; we suggest that it could be used routinely for studying ER fluids. The measured overall yield stress behavior of the suspensions may be explained in terms of standard conduction models for electrorheological systems. Interesting prospects include using such systems for guided self-assembly of clay nanoparticles.
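The reported power-law dependence of the yield stress on particle fraction and field, τ_y ∝ Φ^β E^α, can be recovered from measurements by multilinear regression in log space. The synthetic data below assume exponents near the measured values; the prefactor and sampling ranges are illustrative:

```python
import numpy as np

# Synthetic yield-stress data obeying tau_y = C * Phi**beta * E**alpha with
# exponents near the measured values; C and the sampled ranges are assumed.
rng = np.random.default_rng(3)
phi = rng.uniform(0.01, 0.10, 50)     # particle volume fraction
E = rng.uniform(0.7, 2.0, 50)         # field strength in kV/mm, above Ec
tau = 40.0 * phi ** 1.70 * E ** 1.85

# Multilinear regression in log space recovers both exponents at once:
# ln tau = ln C + beta * ln Phi + alpha * ln E
A = np.column_stack([np.ones_like(phi), np.log(phi), np.log(E)])
(log_c, beta, alpha), *_ = np.linalg.lstsq(A, np.log(tau), rcond=None)
```

Fitting both exponents jointly from one data set is the log-space analogue of collapsing the flow curves onto a single master curve.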
NASA Astrophysics Data System (ADS)
Gentile, G.; Famaey, B.; de Blok, W. J. G.
2011-03-01
We present an analysis of 12 high-resolution galactic rotation curves from The HI Nearby Galaxy Survey (THINGS) in the context of modified Newtonian dynamics (MOND). These rotation curves were selected to be the most reliable for mass modelling, and they are the highest quality rotation curves currently available for a sample of galaxies spanning a wide range of luminosities. We fit the rotation curves with the "simple" and "standard" interpolating functions of MOND, and we find that the "simple" function yields better results. We also redetermine the value of a0, and find a median value very close to the one determined in previous studies, a0 = (1.22 ± 0.33) × 10^-8 cm s^-2. Leaving the distance as a free parameter within the uncertainty of its best independently determined value leads to excellent quality fits for 75% of the sample. Among the three exceptions, two are also known to give relatively poor fits in Newtonian dynamics plus dark matter. The remaining case (NGC 3198) presents some tension between the observations and the MOND fit, which might, however, be explained by the presence of non-circular motions, by a small distance, or by a value of a0 at the lower end of our best-fit interval, 0.9 × 10^-8 cm s^-2. The best-fit stellar M/L ratios are generally in remarkable agreement with the predictions of stellar population synthesis models. We also show that the narrow range of gravitational accelerations found to be generated by dark matter in galaxies is consistent with the narrow range of additional gravity predicted by MOND.
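For the "simple" interpolating function μ(x) = x/(1+x), the MOND acceleration follows from the Newtonian one in closed form, which is what makes such rotation-curve fits convenient. A sketch using the median a0 quoted above (the helper name and test inputs are our illustrative assumptions):

```python
import math

A0 = 1.22e-8   # cm/s^2, the median best-fit value quoted in the abstract

def mond_accel_simple(g_newton, a0=A0):
    """Solve g * mu(g/a0) = g_N for mu(x) = x / (1 + x). The equation
    reduces to g^2 - g_N*g - g_N*a0 = 0; the positive root is returned.
    The rotation velocity at radius r then follows as v = sqrt(g * r)."""
    return 0.5 * (g_newton + math.sqrt(g_newton ** 2 + 4.0 * g_newton * a0))

g_deep = mond_accel_simple(1e-10)   # deep-MOND regime, g_N << a0
g_newt = mond_accel_simple(1e-5)    # Newtonian regime, g_N >> a0
```

In the deep-MOND regime the solution approaches sqrt(g_N·a0), which is what produces asymptotically flat rotation curves; for g_N much larger than a0 it reduces to the Newtonian value.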
New Risk Curves for NHTSA's Brain Injury Criterion (BrIC): Derivations and Assessments.
Laituri, Tony R; Henry, Scott; Pline, Kevin; Li, Guosong; Frankstein, Michael; Weerappuli, Para
2016-11-01
The National Highway Traffic Safety Administration (NHTSA) recently published a Request for Comments regarding a potential upgrade to the US New Car Assessment Program (US NCAP) - a star-rating program pertaining to vehicle crashworthiness. Therein, NHTSA (a) cited two metrics for assessing head risk: Head Injury Criterion (HIC15) and Brain Injury Criterion (BrIC), and (b) proposed to conduct risk assessment via its risk curves for those metrics, but did not prescribe a specific method for applying them. Recent studies, however, have indicated that the NHTSA risk curves for BrIC significantly overstate field-based head injury rates. Therefore, in the present three-part study, a new set of BrIC-based risk curves was derived, an overarching head risk equation involving risk curves for both BrIC and HIC15 was assessed, and some additional candidate predictor-variable assessments were conducted. Part 1 pertained to the derivation. Specifically, data were pooled from various sources: Navy volunteers, amateur boxers, professional football players, simple-fall subjects, and racecar drivers. In total, there were 4,501 cases, with brain injury reported in 63. Injury outcomes were approximated on the Abbreviated Injury Scale (AIS). The statistical analysis was conducted subject to ordinal logistic regression analysis (OLR), such that the various levels of brain injury were cast as a function of BrIC. The resulting risk curves, with Goodman-Kruskal Gamma = 0.83, were significantly different from those of NHTSA. Part 2 pertained to the assessment relative to field data. Two perspectives were considered: "aggregate" (ΔV=0-56 km/h) and "point" (high-speed, regulatory focus). For the aggregate perspective, the new risk curves for BrIC were applied in field models pertaining to belted, mid-size, adult drivers in 11-1 o'clock, full-engagement frontal crashes in the National Automotive Sampling System (NASS, 1993-2014 calendar years).
For the point perspective, BrIC data from tests were used. The assessments were conducted for minor, moderate, and serious injury levels for both Newer Vehicles (airbag-fitted) and Older Vehicles (not airbag-fitted). Curve-based injury rates and NASS-based injury rates were compared via average percent difference (AvgPctDiff). The new risk curves demonstrated significantly better fidelity than those from NHTSA. For example, for the aggregate perspective (n=12 assessments), the results were as follows: AvgPctDiff (present risk curves) = +67 versus AvgPctDiff (NHTSA risk curves) = +9378. Part 2 also contained a more comprehensive assessment. Specifically, BrIC-based risk curves were used to estimate brain-related injury probabilities, HIC15-based risk curves from NHTSA were used to estimate bone/other injury probabilities, and the maximum of the two resulting probabilities was used to represent the attendant head-injury probabilities. (Those HIC15-based risk curves yielded AvgPctDiff=+85 for that application.) Subject to the resulting 21 assessments, similar results were observed: AvgPctDiff (present risk curves) = +42 versus AvgPctDiff (NHTSA risk curves) = +5783. Therefore, based on the results from Part 2, if the existing BrIC metric is to be applied by NHTSA in vehicle assessment, we recommend that the corresponding risk curves derived in the present study be considered. Part 3 pertained to the assessment of various other candidate brain-injury metrics. Specifically, Parts 1 and 2 were revisited for HIC15, translational acceleration (TA), rotational acceleration (RA), rotational velocity (RV), and a different rotational brain injury criterion from NHTSA (BRIC). The rank-ordered results for the 21 assessments for each metric were as follows: RA, HIC15, BRIC, TA, BrIC, and RV. Therefore, of the six studied sets of OLR-based risk curves, the set for rotational acceleration demonstrated the best performance relative to NASS.
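Ordinal logistic regression of the kind used to derive such risk curves yields one cumulative-logit curve per severity level. A sketch with hypothetical intercepts and slope, not the fitted BrIC coefficients:

```python
import math

def p_at_or_above(x, intercept, slope):
    """Cumulative-logit model: P(injury severity >= level) at metric value x."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * x)))

# Hypothetical coefficients for three severity levels (e.g. AIS 1+, 2+, 3+);
# intercepts fall with severity so higher levels are rarer at the same BrIC.
intercepts = [-1.0, -2.5, -4.0]
slope = 3.0

probs = [p_at_or_above(0.8, a, slope) for a in intercepts]
```

The proportional-odds structure (shared slope, level-specific intercepts) guarantees that the risk curves for increasing severity never cross, which is what makes the family usable as nested AIS risk curves.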
Using LANDSAT to provide potato production estimates to Columbia Basin farmers and processors
NASA Technical Reports Server (NTRS)
1991-01-01
The estimation of potato yields in the Columbia Basin is described. The fundamental objective is to provide CROPIX with working models of potato production. A two-pronged approach to yield estimation was used: (1) simulation models, and (2) purely empirical models. The simulation modeling approach used satellite observations to determine certain key dates in the development of the crop for each field identified as potatoes; in particular, these include planting dates, emergence dates, and harvest dates. These critical dates are fed into simulation models of crop growth and development to derive yield forecasts. Purely empirical models were developed to relate yield to some spectrally derived measure of crop development. Two empirical approaches are presented: one relates tuber yield to estimates of cumulative intercepted solar radiation, the other relates tuber yield to the integral under the GVI (Global Vegetation Index) curve.
Simon, Amy A; Rowe, Jason F; Gaulme, Patrick; Hammel, Heidi B; Casewell, Sarah L; Fortney, Jonathan J; Gizis, John E; Lissauer, Jack J; Morales-Juberias, Raul; Orton, Glenn S; Wong, Michael H; Marley, Mark S
2016-02-01
Observations of Neptune with the Kepler Space Telescope yield a 49 day light curve with 98% coverage at a 1 minute cadence. A significant signature in the light curve comes from discrete cloud features. We compare results extracted from the light curve data with contemporaneous disk-resolved imaging of Neptune from the Keck 10-m telescope at 1.65 microns and Hubble Space Telescope visible imaging acquired nine months later. This direct comparison validates the feature latitudes assigned to the K2 light curve periods based on Neptune's zonal wind profile, and confirms observed cloud feature variability. Although Neptune's clouds vary in location and intensity on short and long timescales, a single large discrete storm seen in Keck imaging dominates the K2 and Hubble light curves; smaller or fainter clouds likely contribute to short-term brightness variability. The K2 Neptune light curve, in conjunction with our imaging data, provides context for the interpretation of current and future brown dwarf and extrasolar planet variability measurements. In particular we suggest that the balance between large, relatively stable, atmospheric features and smaller, more transient, clouds controls the character of substellar atmospheric variability. Atmospheres dominated by a few large spots may show inherently greater light curve stability than those which exhibit a greater number of smaller features.
BLIND EXTRACTION OF AN EXOPLANETARY SPECTRUM THROUGH INDEPENDENT COMPONENT ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P.; Tinetti, G.; Hollis, M. D. J.
2013-03-20
Blind-source separation techniques are used to extract the transmission spectrum of the hot-Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The detrending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. By not assuming any prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10%-30% larger than parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 μm. This represents a reasonable trade-off between a higher degree of objectivity for the non-parametric methods and smaller standard errors for the parametric detrending. Results are discussed in light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results.
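The core idea, separating a non-Gaussian systematic from the light-curve signal using only statistical independence, can be illustrated with a bare-bones two-source ICA: whiten the mixed channels, then rotate to maximize non-Gaussianity (excess kurtosis). This is not the authors' pipeline; the toy transit and the heavy-tailed "systematic" are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
transit = 1.0 - 0.01 * ((t > 0.4) & (t < 0.6))        # toy transit light curve
systematic = rng.laplace(size=t.size)                  # heavy-tailed non-Gaussian noise
mixing = rng.uniform(0.5, 1.5, size=(2, 2))
X = np.column_stack([transit, systematic]) @ mixing.T  # two observed "channels"

# Whiten the channels, then search the rotation that maximizes non-Gaussianity.
Xc = X - X.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
Z = Xc @ vecs @ np.diag(vals ** -0.5)

def excess_kurtosis(u):
    return np.mean(u ** 4) - 3.0 * np.mean(u ** 2) ** 2

angles = np.linspace(0.0, np.pi, 361)
best = max(angles, key=lambda a: abs(excess_kurtosis(Z[:, 0] * np.cos(a) + Z[:, 1] * np.sin(a))))
source_1 = Z[:, 0] * np.cos(best) + Z[:, 1] * np.sin(best)   # aligns with the systematic
source_2 = -Z[:, 0] * np.sin(best) + Z[:, 1] * np.cos(best)  # aligns with the transit
```

Production ICA implementations (e.g. FastICA) use fixed-point iterations rather than an angle scan, but the independence assumption exploited is the same.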
Curchod, Basile F E; Penfold, Thomas J; Rothlisberger, Ursula; Tavernelli, Ivano
2013-01-01
The implementation of local control theory using nonadiabatic molecular dynamics within the framework of linear-response time-dependent density functional theory is discussed. The method is applied to study the photoexcitation of lithium fluoride, for which we demonstrate that this approach can efficiently generate a pulse, on the fly, able to control the population transfer between two selected electronic states. Analysis of the computed control pulse yields insight into the photophysics of the process, identifying the relevant frequencies associated with the curvature of the initial- and final-state potential energy curves and their energy differences. The limitations inherent in the use of the trajectory surface hopping approach are also discussed.
Kinetic analysis of elastomeric lag damper for helicopter rotors
NASA Astrophysics Data System (ADS)
Liu, Yafang; Wang, Jidong; Tong, Yan
2018-02-01
Elastomeric lag dampers suppress ground resonance and air resonance, which play a significant role in the stability of the helicopter. In this paper, an elastomeric lag damper made from silicone rubber is built, and a series of experiments is conducted on it. The stress-strain curves of the elastomeric lag damper subjected to shear forces at different frequencies are obtained, and a finite element model is established based on the Burgers model. The simulation and test results show that the simple, linear model yields good predictions of damper energy dissipation and is adequate for predicting the stress-strain hysteresis loop within the operating frequency range for small-amplitude oscillations.
Flow curve analysis of a Pickering emulsion-polymerized PEDOT:PSS/PS-based electrorheological fluid
NASA Astrophysics Data System (ADS)
Kim, So Hee; Choi, Hyoung Jin; Leong, Yee-Kwong
2017-11-01
The steady shear electrorheological (ER) response of poly(3, 4-ethylenedioxythiophene): poly(styrene sulfonate)/polystyrene (PEDOT:PSS/PS) composite particles, which were initially fabricated from Pickering emulsion polymerization, was tested with a 10 vol% ER fluid dispersed in a silicone oil. The model independent shear rate and yield stress obtained from the raw torque-rotational speed data using a Couette type rotational rheometer under an applied electric field strength were then analyzed by Tikhonov regularization, which is the most suitable technique for solving an ill-posed inverse problem. The shear stress-shear rate data also fitted well with the data extracted from the Bingham fluid model.
Model equations for the Eiffel Tower profile: historical perspective and new results
NASA Astrophysics Data System (ADS)
Weidman, Patrick; Pinelis, Iosif
2004-07-01
Model equations for the shape of the Eiffel Tower are investigated. One model purported to be based on Eiffel's writing does not give a tower with the correct curvature. A second popular model not connected with Eiffel's writings provides a fair approximation to the tower's skyline profile of 29 contiguous panels. Reported here is a third model derived from Eiffel's concern about wind loads on the tower, as documented in his communication to the French Civil Engineering Society on 30 March 1885. The result is a nonlinear, integro-differential equation which is solved to yield an exponential tower profile. It is further verified that, as Eiffel wrote, "in reality the curve exterior of the tower reproduces, at a determined scale, the same curve of the moments produced by the wind". An analysis of the actual tower profile shows that it is composed of two piecewise continuous exponentials with different growth rates. This is explained by specific safety factors for wind loading that Eiffel & Company incorporated in the design of the free-standing tower. To cite this article: P. Weidman, I. Pinelis, C. R. Mecanique 332 (2004).
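The two-piecewise-exponential skyline described above can be sketched numerically; the base half-width, decay rates, and 100 m transition height below are invented for illustration, not Eiffel's dimensions:

```python
import numpy as np

def half_width(z):
    """Piecewise-exponential tower half-width versus height z (metres):
    a faster decay rate below a transition height, a slower one above,
    joined continuously. All constants are illustrative assumptions."""
    w_transition = 62.0 * np.exp(-0.010 * 100.0)
    return np.where(z < 100.0,
                    62.0 * np.exp(-0.010 * z),
                    w_transition * np.exp(-0.006 * (z - 100.0)))

z = np.linspace(0.0, 300.0, 301)
w = half_width(z)   # continuous at the transition and strictly decreasing
```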
Four spot laser anemometer and optical access techniques for turbine applications
NASA Astrophysics Data System (ADS)
Wernet, Mark P.
A time-of-flight anemometer (TOFA) system utilizing a spatial lead-lag filter for bipolar pulse generation has been constructed and tested. This system, called a four-spot laser anemometer, was specifically designed for use in high-speed, turbulent flows in the presence of walls or surfaces. The TOFA system uses elliptical spots to increase the flow acceptance angle to be comparable with that of a fringe-type anemometer. The tightly focused spots used in the four spot yield excellent flare light rejection capabilities. Good results have been obtained to 75 microns normal to a surface, with an f/2.5 collection lens. This system is being evaluated for use in a warm turbine facility. Results from both a particle-lag velocity experiment and boundary layer profiles will be discussed. In addition, an analysis of the use of curved windows in a turbine casing will be presented. Curved windows, matching the inner radius of the turbine casing, preserve the flow conditions, but introduce astigmatic aberrations. A correction optic was designed that virtually eliminates these astigmatic aberrations throughout the intrablade survey region for normal incidence.
Thermoluminescence response of rare earth activated zinc lithium borate glass
NASA Astrophysics Data System (ADS)
Saidu, A.; Wagiran, H.; Saeed, M. A.; Obayes, H. K.; Bala, A.; Usman, F.
2018-03-01
New zinc lithium borate glasses doped with terbium oxide were synthesized by high-temperature solid-state reaction. The amorphous nature of the glasses was confirmed by x-ray diffraction (XRD) analysis. The thermoluminescence (TL) responses of pure zinc lithium borate (ZLB) and zinc lithium borate doped with terbium (ZLB:Tb) exposed to gamma radiation were measured and compared. There is a significant enhancement in the TL yield of ZLB:Tb compared to that of pure ZLB. The effect of varying dopant (Tb4O7) concentration on the TL response of zinc lithium borate was investigated; a 0.3 mol% Tb concentration exhibited the strongest TL intensity. The thermoluminescence glow curve of the phosphor consists of a single isolated peak. The TL response of the new material to the exposed radiation is linear within the 0.5-100 Gy dose range, with sublinearity at the lower region of the curve. The new amorphous material exhibited high sensitivity. Reproducibility, thermal fading, and energy response of the proposed TLD were investigated and showed remarkable results, making the phosphor suitable for radiation dosimetry.
NASA Astrophysics Data System (ADS)
Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo
2016-03-01
In many geodetic engineering applications it is necessary to describe a data point cloud, measured e.g. by laser scanner, by means of free-form curves or surfaces, e.g. with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results that are seriously degraded by data gaps and outliers. Optimal and robust B-spline fitting depends, however, on optimal selection of the knot vector. Hence our approach combines Monte Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
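As a much simpler baseline than the Monte Carlo knot selection and M-estimation described above, a smoothing B-spline fit illustrates the role knot selection plays; here scipy chooses the knots implicitly via the smoothing factor, and the data are a synthetic stand-in for a scanned profile:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Noisy 1-D profile as a stand-in for one slice of a scanned point cloud
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)

# Smoothing cubic B-spline; the smoothing factor s (here n * sigma^2) controls
# how many knots are inserted -- the quantity the paper selects far more carefully.
tck = splrep(x, y, k=3, s=200 * 0.05 ** 2)
y_fit = splev(x, tck)
rms_error = float(np.sqrt(np.mean((y_fit - np.sin(x)) ** 2)))
```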
Foam morphology, frustration and topological defects in a Negatively curved Hele-Shaw geometry
NASA Astrophysics Data System (ADS)
Mughal, Adil; Schroeder-Turk, Gerd; Evans, Myfanwy
2014-03-01
We present preliminary simulations of foams and single bubbles confined in a narrow gap between parallel surfaces. Unlike previous work, in which the bounding surfaces are flat (the so-called Hele-Shaw geometry), we consider surfaces with non-vanishing Gaussian curvature. We demonstrate that the curvature of the bounding surfaces induces a geometric frustration in the preferred order of the foam. This frustration can be relieved by the introduction of topological defects (disclinations, dislocations and complex scar arrangements). We give a detailed analysis of these defects for foams confined in curved Hele-Shaw cells and compare our results with exotic honeycombs built by bees on surfaces of varying Gaussian curvature. Our simulations, while encompassing surfaces of constant Gaussian curvature (such as the sphere and the cylinder), focus on surfaces with negative Gaussian curvature and in particular triply periodic minimal surfaces (such as the Schwarz P-surface and Schoen's gyroid surface). We use the results from a sphere-packing algorithm to generate a Voronoi partition that forms the basis of a Surface Evolver simulation, which yields a realistic foam morphology.
Four spot laser anemometer and optical access techniques for turbine applications
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1987-01-01
A time-of-flight anemometer (TOFA) system utilizing a spatial lead-lag filter for bipolar pulse generation has been constructed and tested. This system, called a four-spot laser anemometer, was specifically designed for use in high-speed, turbulent flows in the presence of walls or surfaces. The TOFA system uses elliptical spots to increase the flow acceptance angle to be comparable with that of a fringe-type anemometer. The tightly focused spots used in the four spot yield excellent flare light rejection capabilities. Good results have been obtained to 75 microns normal to a surface, with an f/2.5 collection lens. This system is being evaluated for use in a warm turbine facility. Results from both a particle-lag velocity experiment and boundary layer profiles will be discussed. In addition, an analysis of the use of curved windows in a turbine casing will be presented. Curved windows, matching the inner radius of the turbine casing, preserve the flow conditions, but introduce astigmatic aberrations. A correction optic was designed that virtually eliminates these astigmatic aberrations throughout the intrablade survey region for normal incidence.
NASA Astrophysics Data System (ADS)
Wang, Zhiqiang; Jiang, Jingyi; Ma, Qing
2016-12-01
Climate change is affecting every aspect of human activity, especially agriculture. In China, extreme drought events caused by climate change have posed a great threat to food security. In this work we studied the drought risk of maize in the farming-pastoral ecotone of Northern China based on a physical vulnerability assessment. The physical vulnerability curve was constructed from the relationship between the drought hazard intensity index and the yield loss rate. The risk assessment of agricultural drought was conducted from the drought hazard intensity index and the physical vulnerability curve. The probability distribution of the drought hazard intensity index decreased from south-west to north-east and increased from south-east to north-west along the rainfall isoline. The physical vulnerability curve had a reduction effect in three parts of the farming-pastoral ecotone, which helped to reduce drought hazard vulnerability for spring maize. The risk of yield loss ratio calculated from the physical vulnerability curve was lower than that from the drought hazard intensity index alone, which suggests that the capacity of spring maize to resist and adapt to drought is increasing. In conclusion, the farming-pastoral ecotone in Northern China is highly sensitive to climate change and has a high probability of severe drought hazard. Physical vulnerability risk assessment can help better understand the physical vulnerability to agricultural drought and can also promote measures to adapt to climate change.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin Yunpeng; Sawin, Herbert H.
The surface roughness evolutions of single-crystal silicon, thermal silicon dioxide (SiO2), and the low-dielectric-constant film Coral in argon plasma have been measured by atomic force microscopy as a function of ion bombardment energy, ion impingement angle, and etching time in an inductively coupled plasma beam chamber, in which the plasma chemistry, ion energy, ion flux, and ion incident angle can be adjusted independently. The sputtering yield (or etching rate) scales linearly with the square root of ion energy at normal impingement angle; additionally, the angular dependence of the etching yield of all films in argon plasma followed the typical sputtering yield curve, with a maximum around 60-70 deg off-normal angle. All films stayed smooth after etching at normal angle but typically became rougher at grazing angles. In particular, at grazing angles the rms roughness level of all films increased as more material was removed; additionally, the striation structure formed at grazing angles can be either parallel or transverse to the beam impingement direction, depending on the off-normal angle. More interestingly, the sputtering-caused roughness evolution at different off-normal angles can be qualitatively explained by the corresponding angular-dependent etching yield curve. In addition, the roughening at grazing angles is a strong function of the type of surface; specifically, Coral suffers greater roughening compared to thermal silicon dioxide.
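The "typical sputtering yield curve" with a maximum near 60-70 deg off-normal can be mimicked by an empirical Yamamura-type form; the shape parameters below are made-up illustrative values, not fitted to the measured data:

```python
import numpy as np

def angular_yield(theta_deg, f=2.0, s=0.85):
    """Empirical Yamamura-type angular dependence of sputtering yield,
    normalized so Y(0) = 1. f and s are illustrative shape parameters;
    the peak sits where cos(theta) = s / f."""
    th = np.radians(theta_deg)
    return np.cos(th) ** (-f) * np.exp(-s * (1.0 / np.cos(th) - 1.0))

theta = np.linspace(0.0, 85.0, 851)
peak_angle = theta[np.argmax(angular_yield(theta))]   # lands in the 60-70 deg range
```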
In-situ determination of energy species yields of intense particle beams
Kugel, H.W.; Kaita, R.
1983-09-26
The present invention applies to a particle beam having a full-energy component of at least 25 keV, which is directed onto a beamstop target such that Rutherford backscattering, preferably near-surface backscattering, occurs. The geometry, material composition, and impurity concentration of the beamstop are predetermined using any suitable conventional technique. The energy-yield characteristic response of backscattered particles is measured over a range of angles using a fast ion electrostatic analyzer having a microchannel plate array at its focal plane. The knee of the resulting yield curve, on a plot of yield versus energy, is analyzed to determine the energy species components of various beam particles having the same mass.
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther
2015-01-01
Several key capabilities have been identified by the aerospace community as lacking in the material models for composite materials currently available within commercial transient dynamic finite element codes such as LS-DYNA. Some of the specific desired features that have been identified include the incorporation of both plasticity and damage within the material model, the capability of using the material model to analyze the response of both three-dimensional solid elements and two-dimensional shell elements, and the ability to simulate the response of composites with a variety of architectures, including laminates, weaves, and braids. In addition, a need has been expressed for a material model that utilizes tabulated, experimentally based input to define the evolution of plasticity and damage, as opposed to discrete input parameters (such as modulus and strength) and analytical functions based on curve fitting. To begin to address these needs, an orthotropic macroscopic plasticity-based model suitable for implementation within LS-DYNA has been developed. Specifically, the Tsai-Wu composite failure model has been generalized and extended to a strain-hardening-based orthotropic plasticity model with a non-associative flow rule. The coefficients in the yield function are determined based on tabulated stress-strain curves in the various normal and shear directions, along with selected off-axis curves. Rate dependence is incorporated into the yield function by using a series of tabulated input curves, each at a different constant strain rate. The non-associative flow rule is used to compute the evolution of the effective plastic strain. Systematic procedures have been developed to determine the values of the various coefficients in the yield function and the flow rule based on the tabulated input data.
An algorithm based on the radial return method has been developed to facilitate the numerical implementation of the material model. This paper presents in detail the development of the orthotropic plasticity model and the procedures used to obtain the required material parameters. Methods in which actual testing and selective numerical testing can be combined to yield the appropriate input data for the model are described. A specific laminated polymer matrix composite is examined to demonstrate the application of the model.
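The radial return idea can be illustrated with a deliberately simplified one-dimensional von Mises analogue with linear isotropic hardening; the orthotropic, tabulated model above is far more general, and the material constants here are invented:

```python
import numpy as np

def radial_return_1d(strain_path, E=200e3, sigma_y=350.0, H=1000.0):
    """One-dimensional radial-return (return-mapping) stress update with
    linear isotropic hardening. Stresses in MPa; E, sigma_y, and H are
    invented illustrative constants, not calibrated values."""
    eps_p, alpha = 0.0, 0.0          # plastic strain and hardening variable
    stresses = []
    for eps in strain_path:
        sigma_trial = E * (eps - eps_p)                    # elastic predictor
        f_trial = abs(sigma_trial) - (sigma_y + H * alpha)
        if f_trial > 0.0:                                  # plastic corrector:
            dgamma = f_trial / (E + H)                     # return to the yield surface
            eps_p += dgamma * np.sign(sigma_trial)
            alpha += dgamma
            sigma = sigma_trial - E * dgamma * np.sign(sigma_trial)
        else:                                              # elastic step
            sigma = sigma_trial
        stresses.append(sigma)
    return np.array(stresses)

stress = radial_return_1d(np.linspace(0.0, 0.01, 101))
```

For monotonic loading with linear hardening this incremental update reproduces the closed-form elastoplastic response exactly; the tabulated, rate-dependent model replaces the scalar yield check with the calibrated orthotropic yield function.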
Bossaert, P; Leroy, J L M R; De Vliegher, S; Opsomer, G
2008-09-01
High-yielding dairy cows are more susceptible to metabolic and reproductive disorders than low-yielding cows. Insulin plays a pivotal role in the development of both problems. In the present study, we aimed to assess the glucose-induced insulin responses of dairy cows at different time points relative to calving and to relate this to the metabolic status and the time of first ovulation. Twenty-three healthy, multiparous Holstein-Friesian cows with a high genetic merit for milk yield were studied from 14 d prepartum to 42 d postpartum. Intravenous glucose tolerance tests were performed on -14, 14, and 42 d relative to calving to evaluate the plasma insulin and glucose responses to a glucose load, as estimated by the peak concentration, the area under the curve (AUC), and the clearance rates of insulin and glucose. Blood samples were obtained at 3-d intervals and analyzed for glucose, insulin, and nonesterified fatty acids (NEFA). The time of first ovulation was defined by transrectal ultrasonography and plasma progesterone analysis. Glucose-induced insulin AUC and peak concentration decreased and glucose clearance increased during lactation compared with the dry period. Plasma NEFA concentrations were negatively related to insulin AUC and peak concentrations. Fourteen cows ovulated within 42 d postpartum, and the remaining 9 cows suffered from delayed resumption of ovarian function. Survival analysis demonstrated that cows with lower NEFA concentrations during the dry period tended to have earlier resumption of ovarian activity. In conclusion, our data suggest a decreased plasma insulin response to glucose postpartum in high-yielding dairy cows, possibly contributing to metabolic stress during the early postpartum period. It is hypothesized that NEFA impair glucose-induced insulin secretion in dairy cows. Additionally, our results suggest the importance of lipolysis during the transition period as a risk factor for delayed ovulation.
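The area under the curve (AUC) used above to summarise the insulin response is conventionally computed with the trapezoidal rule; the sampling times and concentrations below are illustrative, not the study's data:

```python
import numpy as np

def trapezoid_auc(times, conc):
    """Area under a concentration-time curve by the trapezoidal rule, as
    commonly used to summarise the IVGTT insulin response."""
    times, conc = np.asarray(times, dtype=float), np.asarray(conc, dtype=float)
    return float(np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(times)))

t_min = [0, 5, 10, 20, 30, 60]        # minutes after the glucose load (illustrative)
insulin = [10, 80, 60, 35, 20, 12]    # plasma insulin, microU/mL (illustrative)
insulin_auc = trapezoid_auc(t_min, insulin)
```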
Closing the N-use efficiency gap to achieve food and environmental security.
Cui, Zhenling; Wang, Guiliang; Yue, Shanchao; Wu, Liang; Zhang, Weifeng; Zhang, Fusuo; Chen, Xinping
2014-05-20
To achieve food and environmental security, closing the gap between actual and attainable N-use efficiency should be as important as closing yield gaps. Using a meta-analysis of 205 published studies from 317 study sites, including 1332 observations from rice, wheat, and maize systems in China, reactive N (Nr) losses and total N2O emissions from N fertilization both increased exponentially with increasing N application rate. On the basis of the N loss response curves from the literature meta-analysis, the direct N2O emission, NH3 volatilization, N leaching, N runoff, and total N2O emission (direct + indirect) were calculated using information from the survey of farmers. The PFP-N (kilogram of harvested product per kilogram of N applied (kg (kg of N)(-1))) for 6259 farmers was relatively low, at only 37, 23, and 32 kg (kg of N)(-1) for rice, wheat, and maize systems, respectively. In comparison, the PFP-N for the highest yield and PFP-N group (fields where the PFP-N was within the 80-100th percentile among those fields that achieved yields within the 80-100th percentile) averaged 62, 42, and 53 kg (kg of N)(-1) for rice, wheat, and maize systems, respectively. The corresponding grain yield would increase by 1.6-2.3 Mg ha(-1), while the N application rate would be reduced by 56-100 kg of N ha(-1), moving from the average farmer field to the highest yield and PFP-N group. In return, the Nr loss intensity (4-11 kg of N (Mg of grain)(-1)) and total N2O emission intensity (0.15-0.29 kg of N (Mg of grain)(-1)) would both be reduced significantly compared to current agricultural practices. In many circumstances, closing the PFP-N gap in intensive cropping systems is compatible with increased crop productivity and reductions in both Nr losses and total N2O emissions.
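PFP-N is a simple ratio, sketched below with illustrative numbers chosen to be roughly consistent with the reported maize-system average of 32 kg (kg of N)(-1); they are not the survey data themselves:

```python
def pfp_n(yield_kg_ha: float, n_rate_kg_ha: float) -> float:
    """Partial factor productivity of N: kg of harvested product per kg of N applied."""
    return yield_kg_ha / n_rate_kg_ha

# Illustrative: 7360 kg/ha maize grain at 230 kg N/ha gives PFP-N = 32
maize_pfp = pfp_n(7360.0, 230.0)
```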
Influence of Different Yield Loci on Failure Prediction with Damage Models
NASA Astrophysics Data System (ADS)
Heibel, S.; Nester, W.; Clausmeyer, T.; Tekkaya, A. E.
2017-09-01
Advanced high strength steels are widely used in the automotive industry to simultaneously improve crash performance and reduce the car body weight. A drawback of these multiphase steels is their sensitivity to damage effects and thus the reduction of ductility. For that reason the Forming Limit Curve is only partially suitable for this class of steels. An improvement in failure prediction can be obtained by using damage mechanics. The objective of this paper is to comparatively review the phenomenological damage model GISSMO and the Enhanced Lemaitre Damage Model. GISSMO is combined with three different yield loci, namely von Mises, Hill48 and Barlat2000 to investigate the influence of the choice of the plasticity description on damage modelling. The Enhanced Lemaitre Model is used with Hill48. An inverse parameter identification strategy for a DP1000 based on stress-strain curves and optical strain measurements of shear, uniaxial, notch and (equi-)biaxial tension tests is applied to calibrate the models. A strong dependency of fracture strains on the choice of yield locus can be observed. The identified models are validated on a cross-die cup showing ductile fracture with slight necking.
Electron impact fragmentation of thymine: partial ionization cross sections for positive fragments
NASA Astrophysics Data System (ADS)
van der Burgt, Peter J. M.; Mahon, Francis; Barrett, Gerard; Gradziel, Marcin L.
2014-06-01
We have measured mass spectra for positive ions for low-energy electron impact on thymine using a reflectron time-of-flight mass spectrometer. Using computer controlled data acquisition, mass spectra have been acquired for electron impact energies up to 100 eV in steps of 0.5 eV. Ion yield curves for most of the fragment ions have been determined by fitting groups of adjacent peaks in the mass spectra with sequences of normalized Gaussians. The ion yield curves have been normalized by comparing the sum of the ion yields to the average of calculated total ionization cross sections. Appearance energies have been determined. The nearly equal appearance energies of 83 u and 55 u observed in the present work strongly indicate that near threshold the 55 u ion is formed directly by the breakage of two bonds in the ring, rather than from a successive loss of HNCO and CO from the parent ion. Likewise 54 u is not formed by CO loss from 82 u. The appearance energies are in a number of cases consistent with the loss of one or more hydrogen atoms from a heavier fragment, but 70 u is not formed by hydrogen loss from 71 u.
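Fitting groups of adjacent peaks with sums of normalized Gaussians, as described above, can be sketched with a nonlinear least-squares fit; the peak positions, widths, and noise level below are toy values, not the thymine data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(x, *params):
    """Sum of normalized Gaussians; params = (area, centre, width) per peak,
    so each fitted area is directly an ion yield."""
    y = np.zeros_like(x)
    for area, mu, sig in zip(params[0::3], params[1::3], params[2::3]):
        y += area * np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))
    return y

# Two overlapping peaks as a toy stand-in for adjacent fragment-ion masses
x = np.linspace(53.0, 56.0, 300)
rng = np.random.default_rng(2)
data = gaussian_sum(x, 1.0, 54.0, 0.15, 0.6, 55.0, 0.15) + 0.02 * rng.standard_normal(x.size)

popt, _ = curve_fit(gaussian_sum, x, data, p0=[0.8, 54.1, 0.2, 0.5, 54.9, 0.2])
fitted_areas = popt[0::3]   # recovered ion yields for the two peaks
```

Repeating such a fit at each electron impact energy yields one point per fragment on the ion yield curve.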
Broadband Photometric Reverberation Mapping Analysis on SDSS-RM and Stripe 82 Quasars
NASA Astrophysics Data System (ADS)
Zhang, Haowen; Yang, Qian; Wu, Xue-Bing
2018-02-01
We modified the broadband photometric reverberation mapping (PRM) code JAVELIN and tested its ability to recover broad-line region time delays consistent with the spectroscopic reverberation mapping (SRM) project SDSS-RM. The broadband light curves of SDSS-RM quasars, produced by convolution with the system transmission curves, were used in the test. We found that under similar sampling conditions (evenly and frequently sampled), the key factor determining whether the broadband PRM code can yield lags consistent with the SRM project is the flux ratio of the broad emission line to the reference continuum, which is in line with previous findings. We further found a critical line-to-continuum flux ratio, about 6%, above which the mean of the ratios between the lags from PRM and SRM becomes closer to unity and the scatter is pronouncedly reduced. We also tested our code on a subset of SDSS Stripe 82 quasars and found that our program tends to give biased lag estimates due to the observation gaps when the R-L relation prior in the Markov chain Monte Carlo is discarded. The performance of the damped random walk (DRW) model and the power-law (PL) structure function model for broadband PRM were compared. We found that, given either SDSS-RM-like or Stripe 82-like light curves, the DRW model performs better at broadband PRM than the PL model.
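The DRW process underlying JAVELIN's light-curve model can be simulated with a few lines using the standard Ornstein-Uhlenbeck update; the cadence, damping timescale, and amplitude below are illustrative choices:

```python
import numpy as np

def simulate_drw(n, dt, tau, sigma, rng):
    """Damped random walk (Ornstein-Uhlenbeck) light curve: the stochastic
    model assumed for quasar continuum variability. tau is the damping
    timescale and sigma the asymptotic standard deviation."""
    x = np.zeros(n)
    decay = np.exp(-dt / tau)
    scatter = sigma * np.sqrt(1.0 - decay ** 2)
    for i in range(1, n):
        x[i] = decay * x[i - 1] + scatter * rng.standard_normal()
    return x

rng = np.random.default_rng(3)
lc = simulate_drw(2000, 1.0, tau=100.0, sigma=0.2, rng=rng)  # 2000 days, 1-day cadence
```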
Covariance Matrix Evaluations for Independent Mass Fission Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.
2015-01-15
Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimation of the parameters used by design codes. The aim of this work is to investigate the possibility of generating more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squares method through the CONRAD code. Preliminary results on the mass yield variance-covariance matrix will be presented and discussed on physical grounds for the ²³⁵U(n_th, f) and ²³⁹Pu(n_th, f) reactions.
Shi, Hong-Bin; Yu, Jia-Xing; Yu, Jian-Xiu; Feng, Zheng; Zhang, Chao; Li, Guang-Yong; Zhao, Rui-Ning; Yang, Xiao-Bo
2017-08-03
Previous studies have revealed the importance of microRNAs (miRNAs) as biomarkers in diagnosing human bladder cancer (BC). However, the results are discordant. Consequently, the potential of miRNAs as BC biomarkers was summarized in this meta-analysis. Relevant articles were systematically searched in CBM, PubMed, EMBASE, and the Chinese National Knowledge Infrastructure (CNKI). A bivariate model was used to calculate the pooled diagnostic parameters and the summary receiver operator characteristic (SROC) curve, thereby estimating overall predictive performance. STATA software was used throughout the analysis. Thirty-one studies from 10 articles, including 1556 cases and 1347 controls, were included in this meta-analysis. In short, the pooled sensitivity, area under the SROC curve, specificity, positive likelihood ratio, diagnostic odds ratio, and negative likelihood ratio were 0.72 (95%CI 0.66-0.76), 0.80 (0.77-0.84), 0.76 (0.71-0.81), 3.0 (2.4-3.8), 8 (5.0-12.0), and 0.37 (0.30-0.46), respectively. Additionally, subgroup and meta-regression analyses revealed significant differences between ethnicity, miRNA profiling, and specimen subgroups. These results suggest that Asian population-based studies, multiple-miRNA profiling, and blood-based assays might yield higher diagnostic accuracy than their counterparts. This meta-analysis demonstrates that miRNAs, particularly multiple miRNAs in blood, might be novel, useful biomarkers with relatively high sensitivity and specificity for the diagnosis of BC. However, further prospective studies with more samples should be performed for validation.
Using the weighted area under the net benefit curve for decision curve analysis.
Talluri, Rajesh; Shete, Sanjay
2016-07-18
Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given threshold probability or over a range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model because there is no readily available summary measure for evaluating predictive performance. The key deterrent to using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need for additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches: the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power than the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared with the standard method.
The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
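The net benefit underlying these curves is NB(p_t) = TP/n − (FP/n)·p_t/(1−p_t), and the proposal replaces the uniform weighting of thresholds with an estimated threshold distribution. A hedged toy sketch follows, with made-up predictions and equal weights standing in for the estimated threshold-probability distribution:

```python
def net_benefit(pred_probs, outcomes, pt):
    """Net benefit of a prediction model at threshold probability pt:
    NB(pt) = TP/n - (FP/n) * pt / (1 - pt)."""
    n = len(outcomes)
    tp = sum(1 for p, y in zip(pred_probs, outcomes) if p >= pt and y == 1)
    fp = sum(1 for p, y in zip(pred_probs, outcomes) if p >= pt and y == 0)
    return tp / n - (fp / n) * pt / (1.0 - pt)

def weighted_area(pred_probs, outcomes, thresholds, weights):
    """Weighted area under the net benefit curve over a threshold range.

    `weights` plays the role of the estimated threshold-probability density;
    uniform weights recover the plain (unweighted) summary.
    """
    total_w = sum(weights)
    return sum(w / total_w * net_benefit(pred_probs, outcomes, pt)
               for pt, w in zip(thresholds, weights))

# invented predicted risks, true outcomes, and a threshold range of interest
probs = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1, 0.05]
truth = [1, 1, 1, 0, 1, 0, 0, 0]
pts = [0.1, 0.2, 0.3, 0.4]
uniform = [1.0, 1.0, 1.0, 1.0]
wab = weighted_area(probs, truth, pts, uniform)
```

Comparing `wab` between two competing models over the same threshold range is the proposed summary comparison.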
Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image
NASA Astrophysics Data System (ADS)
Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti
2016-06-01
An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. Characteristics used for object recognition in an image can include color, shape, pattern, texture, and spatial information that represent objects in the digital image. A method was recently developed for image feature extraction that characterizes objects through simple-curve analysis and chain-code-based feature search. This study develops an algorithm for the analysis and recognition of curve type as the basis for object recognition in images, proposing the addition of complex-curve characteristics with a maximum of four branches to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.
Dorn, Melissa J; Bockstahler, Barbara A; Dupré, Gilles P
2017-05-01
OBJECTIVE To evaluate the pressure-volume relationship during capnoperitoneum in dogs and effects of body weight and body conformation. ANIMALS 86 dogs scheduled for routine laparoscopy. PROCEDURES Dogs were allocated into 3 groups on the basis of body weight. Body measurements, body condition score, and body conformation indices were calculated. Carbon dioxide was insufflated into the abdomen with a syringe, and pressure was measured at the laparoscopic cannula. Volume and pressure data were processed, and the yield point, defined by use of a cutoff volume (COV) and cutoff pressure (COP), was calculated. RESULTS 20 dogs were excluded because of recording errors, air leakage attributable to surgical flaws, or trocar defects. For the remaining 66 dogs, the pressure-volume curve was linear-like until the yield point was reached, and then it became visibly exponential. Mean ± SD COP was 5.99 ± 0.805 mm Hg. No correlation was detected between the yield point and body variables or body weight. Mean COV was 1,196.2 ± 697.9 mL (65.15 ± 20.83 mL of CO2/kg), and COV was correlated significantly with body weight and one of the body condition indices but not with other variables. CONCLUSION AND CLINICAL RELEVANCE In this study, there was a similar COP for all dogs of all sizes. In addition, results suggested that increasing the abdominal pressure after the yield point was reached did not contribute to a substantial increase in working space in the abdomen. No correlation was found between the yield point and body variables or body weight.
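One way to operationalise the yield point described above, where the pressure-volume curve departs from its initial linear-like behaviour and becomes exponential, is to flag the first sample at which the local slope exceeds a multiple of the initial slope. This is a heuristic sketch on synthetic data; the slope factor and curve shape are assumptions, not the study's definition:

```python
import math

def find_yield_point(volumes, pressures, slope_factor=2.0):
    """Return (cutoff volume, cutoff pressure) at the first sample where the
    local slope exceeds slope_factor times the initial linear-regime slope."""
    base_slope = (pressures[1] - pressures[0]) / (volumes[1] - volumes[0])
    for i in range(1, len(volumes) - 1):
        slope = (pressures[i + 1] - pressures[i]) / (volumes[i + 1] - volumes[i])
        if slope > slope_factor * base_slope:
            return volumes[i], pressures[i]
    return volumes[-1], pressures[-1]

# synthetic capnoperitoneum curve: linear up to 1000 mL, then exponential rise
vols = list(range(0, 2001, 100))
pres = [0.005 * v if v <= 1000 else 5.0 * math.exp((v - 1000) / 200.0) for v in vols]
cov, cop = find_yield_point(vols, pres)
```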
Saeidabadi, Mohammad Sadegh; Nili, Hassan; Dadras, Habibollah; Sharifiyazdi, Hassan; Connolly, Joanne; Valcanis, Mary; Raidal, Shane; Ghorashi, Seyed Ali
2017-06-01
Consumption of poultry products contaminated with Salmonella is one of the major causes of foodborne disease worldwide, and detection and differentiation of Salmonella spp. in poultry is therefore important. In this study, oligonucleotide primers were designed from the hemD gene, and a PCR followed by high-resolution melt (HRM) curve analysis was developed for rapid differentiation of Salmonella isolates. Amplicons of 228 bp were generated from 16 different Salmonella reference strains and from 65 clinical field isolates, mainly from poultry farms. HRM curve analysis of the amplicons differentiated Salmonella isolates, and analysis of the nucleotide sequence of the amplicons from selected isolates revealed that each melting curve profile was related to a unique DNA sequence. The relationship between reference strains and tested specimens was also evaluated using a mathematical model, without visual interpretation of HRM curves. In addition, the potential of PCR-HRM curve analysis was evaluated for genotyping of additional Salmonella isolates from different avian species. The findings indicate that PCR followed by HRM curve analysis provides a rapid and robust technique for genotyping Salmonella isolates to determine the serovar/serotype.
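Matching a sample's melt profile to reference strains without visual inspection can be sketched as a distance between normalised HRM curves. This is a generic illustration, not the paper's mathematical model, and all fluorescence values are invented:

```python
def normalise(curve):
    """Scale a melt curve's fluorescence values to the 0-1 range so that
    curves acquired at different overall intensities become comparable."""
    lo, hi = min(curve), max(curve)
    return [(v - lo) / (hi - lo) for v in curve]

def curve_distance(sample, reference):
    """Mean squared difference between two normalised HRM curves sampled on a
    shared temperature grid; small distances suggest matching genotypes."""
    s, r = normalise(sample), normalise(reference)
    return sum((a - b) ** 2 for a, b in zip(s, r)) / len(s)

ref = [100, 95, 80, 40, 10, 5]      # fluorescence vs. rising temperature
same = [200, 190, 160, 80, 20, 10]  # same melt shape, different scale
other = [100, 98, 95, 60, 12, 5]    # different melt shape
d_same = curve_distance(same, ref)
d_other = curve_distance(other, ref)
```

Assigning each specimen to the reference strain with the smallest distance is the simplest non-visual classification rule.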
NASA Technical Reports Server (NTRS)
Manning, Charles R., Jr.; Price, Howard L.
1961-01-01
Results are presented of rapid-heating tests of 17-7 PH and 12 MoV stainless-steel sheet heated to failure at temperature rates from about 1 F to 170 F per second under constant-load conditions. Yield and rupture strengths obtained from rapid-heating tests are compared with yield and tensile strengths obtained from short-time elevated-temperature tensile tests (30-minute exposure). A rate-temperature parameter was used to construct master curves from which yield and rupture stresses or temperatures can be predicted. A method for measuring strain by optical means is described.
NASA Technical Reports Server (NTRS)
Gibbs, Thomas W
1956-01-01
Specimens of HK31XA-H24 magnesium-alloy sheet from an experimental batch were heated to failure at nominal temperature rates from 0.2 F to 100 F per second under constant-load conditions. Rapid-heating yield and rupture stresses are presented and compared with the yield and ultimate stresses from elevated-temperature tensile stress-strain tests for 1/2-hour exposure. Linear temperature-rate parameters were used to correlate rapid-heating results by constructing master curves which can be used for predicting yield stresses and temperatures and for estimating rupture stresses and temperatures.
Antonios, Tarek F T; Nama, Vivek; Wang, Duolao; Manyonda, Isaac T
2013-09-01
Preeclampsia is a major cause of maternal and neonatal mortality and morbidity. The incidence of preeclampsia seems to be rising because of the increased prevalence of predisposing disorders, such as essential hypertension, diabetes, and obesity, and there is increasing evidence of widespread microcirculatory abnormalities before the onset of preeclampsia. We hypothesized that quantifying capillary rarefaction could be helpful in the clinical prediction of preeclampsia. We measured skin capillary density according to a well-validated protocol at 5 consecutive predetermined visits in 322 consecutive white women, of whom 16 developed preeclampsia. We found that structural capillary rarefaction at 20-24 weeks of gestation yielded a sensitivity of 0.87 and a specificity of 0.50 at a cutoff of 2 capillaries/field, with an area under the receiver operating characteristic curve of 0.70, whereas capillary rarefaction at 27-32 weeks of gestation yielded a sensitivity of 0.75 and a higher specificity of 0.77 at a cutoff of 8 capillaries/field, with an area under the receiver operating characteristic curve of 0.82. Combining capillary rarefaction with the uterine artery Doppler pulsatility index increased the sensitivity and specificity of the prediction. Multivariable analysis shows that the odds of preeclampsia are increased in women with a previous history of preeclampsia or chronic hypertension and in those with an increased uterine artery Doppler pulsatility index, but the most powerful and independent predictor of preeclampsia was capillary rarefaction at 27-32 weeks. Quantifying structural rarefaction of skin capillaries in pregnancy is a potentially useful clinical marker for the prediction of preeclampsia.
Corn response to climate stress detected with satellite-based NDVI time series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ruoyu; Cherkauer, Keith; Bowling, Laura
Corn growth conditions and yield are closely dependent on climate variability. Leaf growth, measured as the leaf area index, can be used to identify changes in crop growth in response to climate stress. This research was conducted to capture patterns of spatial and temporal corn leaf growth under climate stress for the St. Joseph River watershed, in northeastern Indiana. Leaf growth is represented by the Normalized Difference Vegetative Index (NDVI) retrieved from multiple years (2000–2010) of Landsat 5 TM images. By comparing NDVI values for individual image dates with the derived normal curve, the response of crop growth to environmental factors is quantified as NDVI residuals. Regression analysis revealed a significant relationship between yield and NDVI residual during the pre-silking period, indicating that NDVI residuals reflect crop stress in the early growing period that impacts yield. Both the mean NDVI residuals and the percentage of image pixels where corn was under stress (risky pixel rate) are significantly correlated with water stress. Dry weather is prone to hamper potential crop growth, with stress affecting most of the observed corn pixels in the area. Oversupply of rainfall at the end of the growing season was not found to have a measurable effect on crop growth, while above normal precipitation earlier in the growing season reduces the risk of yield loss at the watershed scale. Furthermore, the spatial extent of stress is much lower when precipitation is above normal than under dry conditions, masking the impact of small areas of yield loss at the watershed scale.
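The residual computation described above, observed NDVI minus the multi-year normal curve value for the same date, reduces to a per-pixel subtraction, and the risky pixel rate is the fraction of residuals below a stress threshold. A toy sketch (all NDVI values and the threshold are invented):

```python
def ndvi_residuals(doy_obs, ndvi_obs, normal_curve):
    """NDVI residuals: observed NDVI minus the multi-year 'normal' curve value
    for the same day of year. Negative residuals flag stressed crop growth.
    `normal_curve` maps day-of-year -> expected NDVI."""
    return [ndvi - normal_curve[d] for d, ndvi in zip(doy_obs, ndvi_obs)]

def risky_pixel_rate(residuals, threshold=-0.05):
    """Fraction of pixels whose residual falls below a stress threshold."""
    return sum(1 for r in residuals if r < threshold) / len(residuals)

normal = {150: 0.40, 180: 0.65, 210: 0.80}  # illustrative normal curve
obs_days = [150, 180, 210]
obs_ndvi = [0.38, 0.55, 0.79]
res = ndvi_residuals(obs_days, obs_ndvi, normal)
rate = risky_pixel_rate(res)
```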
Guzman, L; Ortega-Hrepich, C; Polyzos, N P; Anckaert, E; Verheyen, G; Coucke, W; Devroey, P; Tournaye, H; Smitz, J; De Vos, M
2013-05-01
Which baseline patient characteristics can help assisted reproductive technology practitioners to identify patients who are suitable for in-vitro maturation (IVM) treatment? In patients with polycystic ovary syndrome (PCOS) who undergo oocyte IVM in a non-hCG-triggered system, circulating anti-Müllerian hormone (AMH), antral follicle count (AFC) and total testosterone are independently related to the number of immature oocytes and hold promise as outcome predictors to guide the patient selection process for IVM. Patient selection criteria for IVM treatment have been described in normo-ovulatory patients, although patients with PCOS constitute the major target population for IVM. With this study, we assessed the independent predictive value of clinical and endocrine parameters that are related to oocyte yield in patients with PCOS undergoing IVM. Cohort study involving 124 consecutive patients with PCOS undergoing IVM whose data were prospectively collected. Enrolment took place between January 2010 and January 2012. Only data relating to the first IVM cycle of each patient were included. Patients with PCOS underwent oocyte retrieval for IVM after minimal gonadotrophin stimulation and no hCG trigger. Correlation coefficients were calculated to investigate which parameters are related to immature oocyte yield (patient's age, BMI, baseline hormonal profile and AMH, AFC). The independence of predictive parameters was tested using multivariate linear regression analysis. Finally, multivariate receiver operating characteristic (ROC) analyses for cumulus oocyte complexes (COC) yield were performed to assess the efficiency of the prediction model to select suitable candidates for IVM. 
Using multivariate regression analysis, circulating baseline AMH, AFC and baseline total testosterone serum concentration were incorporated into a model to predict the number of COC retrieved in an IVM cycle, with unstandardized coefficients [95% confidence interval (CI)] of 0.03 (0.02-0.03) (P < 0.001), 0.012 (0.008-0.017) (P < 0.001) and 0.37 (0.18-0.57) (P < 0.001), respectively. Logistic regression analysis shows that a prediction model based on AMH and AFC, with unstandardized coefficients (95% CI) of 0.148 (0.03-0.25) (P < 0.001) and 0.034 (-0.003-0.07) (P = 0.025), respectively, is a useful patient selection tool to predict the probability to yield at least eight COCs for IVM in patients with PCOS. In this population, patients with at least eight COC available for IVM have a statistically higher number of embryos of good morphological quality (2.9 ± 2.3; 0.9 ± 0.9; P < 0.001) and cumulative ongoing pregnancy rate [30.4% (24 out of 79); 11% (5 out of 45); P = 0.01] when compared with patients with less than eight COC. ROC curve analysis showed that this prediction model has an area under the curve of 0.7864 (95% CI = 0.6997-0.8732) for the prediction of oocyte yield in IVM. The proposed model has been constructed based on a genuine IVM system, i.e. no hCG trigger was given and none of the oocytes matured in vivo. However, other variables, such as needle type, aspiration technique and whether or not hCG-triggering is used, should be considered as confounding factors. The results of this study have to be confirmed using a second independent validation sample. The proposed model could be applied to patients with PCOS after confirmation through a further validation study. This study was supported by a research grant by the Institute for the Promotion of Innovation by Science and Technology in Flanders, Project number IWT 070719.
Barron, Daniel S; Fox, Peter T; Pardoe, Heath; Lancaster, Jack; Price, Larry R; Blackmon, Karen; Berry, Kristen; Cavazos, Jose E; Kuzniecky, Ruben; Devinsky, Orrin; Thesen, Thomas
2015-01-01
Noninvasive markers of brain function could yield biomarkers in many neurological disorders. Disease models constrained by coordinate-based meta-analysis are likely to increase this yield. Here, we evaluate a thalamic model of temporal lobe epilepsy that we proposed in a coordinate-based meta-analysis and extended in a diffusion tractography study of an independent patient population. Specifically, we evaluated whether thalamic functional connectivity (resting-state fMRI-BOLD) with temporal lobe areas can predict seizure onset laterality, as established with intracranial EEG. Twenty-four lesional and non-lesional temporal lobe epilepsy patients were studied. No significant differences in functional connection strength in patient and control groups were observed with Mann-Whitney Tests (corrected for multiple comparisons). Notwithstanding the lack of group differences, individual patient difference scores (from control mean connection strength) successfully predicted seizure onset zone as shown in ROC curves: discriminant analysis (two-dimensional) predicted seizure onset zone with 85% sensitivity and 91% specificity; logistic regression (four-dimensional) achieved 86% sensitivity and 100% specificity. The strongest markers in both analyses were left thalamo-hippocampal and right thalamo-entorhinal cortex functional connection strength. Thus, this study shows that thalamic functional connections are sensitive and specific markers of seizure onset laterality in individual temporal lobe epilepsy patients. This study also advances an overall strategy for the programmatic development of neuroimaging biomarkers in clinical and genetic populations: a disease model informed by coordinate-based meta-analysis was used to anatomically constrain individual patient analyses.
Towards a universal master curve in magnetorheology
NASA Astrophysics Data System (ADS)
Ruiz-López, José Antonio; Hidalgo-Alvarez, Roque; de Vicente, Juan
2017-05-01
We demonstrate that inverse ferrofluids behave as model magnetorheological fluids. A universal master curve is proposed, using a reduced Mason number, under the frame of a structural viscosity model where the magnetic field strength dependence is solely contained in the Mason number and the particle concentration is solely contained in the critical Mason number (i.e. the yield stress). A linear dependence of the critical Mason number with the particle concentration is observed that is in good agreement with a mean (average) magnetization approximation, particle level dynamic simulations and micromechanical models available in the literature.
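The structural viscosity model described can be written in reduced form as η/η∞ = 1 + Mn*/Mn, with the critical Mason number Mn* linear in particle concentration. The sketch below shows how curves for different concentrations collapse onto one master curve when plotted against the reduced Mason number (the proportionality constant k is a made-up placeholder):

```python
def reduced_viscosity(mason, mason_crit):
    """Structural viscosity model: eta/eta_inf = 1 + Mn*/Mn, where all field
    dependence enters via Mn and all concentration dependence via Mn*."""
    return 1.0 + mason_crit / mason

def critical_mason(phi, k=0.1):
    """Linear concentration dependence of the critical Mason number,
    Mn* = k * phi (k is an illustrative constant)."""
    return k * phi

# evaluating eta/eta_inf at the same reduced Mason numbers Mn/Mn* for two
# concentrations gives identical values: the curves collapse onto one master curve
results = {}
for phi in (0.1, 0.3):
    mn_star = critical_mason(phi)
    results[phi] = [reduced_viscosity(x * mn_star, mn_star) for x in (0.1, 1.0, 10.0)]
```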
Documentation of the Tonge-Ramesh Material Model for Release 2015-06-05-152756
2015-10-01
Documentation includes crush curves showing the pressure required to initiate pore collapse under pure hydrostatic loading, as a function of distension, for the 2 different granular materials. The model contains an additional yield surface that depends only on the hydrostatic pressure p = −(1/3) tr(σ); it is defined by a function f_φ(P, J^GP, J) involving the pressure P and the crush-curve pressures P_c and P_0.
LEED-AES-Thin Layer Electrochemical Studies of Hydrogen Adsorption on Platinum Single Crystals.
1982-08-01
of the voltammetry sweep and has been observed in HF but not H2SO4 as the electrolyte (1). This anomalous behavior is not easily explained by any...Fig. 3. Cyclic linear sweep voltammetry curve for Pt(311) in 0.1 N HF. Sweep rate: 50 uV/s. Solid line: second cycle 0.05 to 0.5 V; dotted line: fourth...Without such cycling, the hydrogen region of the voltammetry curves usually does not yield well-defined peaks in either polycrystalline or single crystal
Bertran, E A; Berlie, H D; Taylor, A; Divine, G; Jaber, L A
2017-02-01
To examine differences in the performance of HbA1c for diagnosing diabetes in Arabs compared with Europeans. The PubMed, Embase and Cochrane library databases were searched for records published between 1998 and 2015. Estimates of sensitivity, specificity and log diagnostic odds ratios for an HbA1c cut-point of 48 mmol/mol (6.5%) were compared between Arabs and Europeans, using a bivariate linear mixed-model approach. For studies reporting multiple cut-points, population-specific summary receiver operating characteristic (SROC) curves were constructed. In addition, sensitivity, specificity and Youden Index were estimated for strata defined by HbA1c cut-point and population type. Database searches yielded 1912 unique records; 618 full-text articles were reviewed. Fourteen studies met the inclusion criteria; hand-searching yielded three additional eligible studies. Three Arab (N = 2880) and 16 European populations (N = 49 127) were included in the analysis. Summary sensitivity and specificity for an HbA1c cut-point of 48 mmol/mol (6.5%) in both populations were 42% (33-51%), and 97% (95-98%). There was no difference in area under SROC curves between Arab and European populations (0.844 vs. 0.847; P = 0.867), suggesting no difference in HbA1c diagnostic accuracy between populations. Multiple cut-point summary estimates stratified by population suggest that Arabs have lower sensitivity and higher specificity at an HbA1c cut-point of 44 mmol/mol (6.2%) compared with European populations. Estimates also suggest similar test performance at cut-points of 44 mmol/mol (6.2%) and 48 mmol/mol (6.5%) for Arabs. Given the low sensitivity of HbA1c in the high-risk Arab American population, we recommend a combination of glucose-based and HbA1c testing to ensure an accurate and timely diagnosis of diabetes. © 2016 Diabetes UK.
Fauconnot, Laëtitia; Hau, Jörg; Aeschlimann, Jean-Marc; Fay, Laurent-Bernard; Dionisi, Fabiola
2004-01-01
Positional distribution of fatty acyl chains of triacylglycerols (TGs) in vegetable oils and fats (palm oil, cocoa butter) and animal fats (beef, pork and chicken fats) was examined by reversed-phase high-performance liquid chromatography (RP-HPLC) coupled to atmospheric pressure chemical ionization using a quadrupole mass spectrometer. Quantification of regioisomers was achieved for TGs containing two different fatty acyl chains (palmitic (P), stearic (S), oleic (O), and/or linoleic (L)). For seven pairs of 'AAB/ABA'-type TGs, namely PPS/PSP, PPO/POP, SSO/SOS, POO/OPO, SOO/OSO, PPL/PLP and LLS/LSL, calibration curves were established on the basis of the difference in relative abundances of the fragment ions produced by preferred losses of the fatty acid from the 1/3-position compared to the 2-position. In practice, the positional isomers AAB and ABA yield mass spectra showing a significant difference in relative abundance ratios of the ions AA(+) to AB(+). Statistical analysis of the validation data obtained from analysis of TG standards and spiked oils showed that, under repeatability conditions, least-squares regression can be used to establish calibration curves for all pairs. The regression models show linear behavior that allows the determination of the proportion of each regioisomer in an AAB/ABA pair, within a working range from 10 to 1000 microg/mL and a 95% confidence interval of +/-3% for three replicates. Copyright 2003 John Wiley & Sons, Ltd.
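The calibration step above, regressing regioisomer proportion on the relative abundance ratio of the AA(+) and AB(+) fragment ions, is ordinary least squares over standards of known composition. A toy sketch follows; the standard compositions and ion ratios are invented, not the paper's data:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a + b*x (a calibration curve)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# toy calibration standards: known fraction of the ABA regioisomer
# against the observed AA(+)/AB(+) relative abundance ratio
frac_aba = [0.00, 0.25, 0.50, 0.75, 1.00]
ion_ratio = [0.20, 0.35, 0.50, 0.65, 0.80]
a, b = fit_line(ion_ratio, frac_aba)

def predict_fraction(ratio):
    """Estimate the ABA fraction of an unknown from its measured ion ratio."""
    return a + b * ratio

unknown = predict_fraction(0.44)
```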
Kshirsagar, Parthraj R; Hegde, Harsha; Pai, Sandeep R
2016-05-01
This study was designed to understand the effect of storage in polypropylene microcentrifuge tubes and glass vials during ultra-flow liquid chromatographic (UFLC) analysis. One ml of methanol was placed in polypropylene microcentrifuge tubes (PP material, Autoclavable) and glass vials (Borosilicate) separately for 1, 2, 4, 8, 10, 20, 40, and 80 days intervals stored at -4°C. Contaminant peak was detected in methanol stored in polypropylene microcentrifuge tubes using UFLC analysis. The contaminant peak detected was prominent, sharp detectable at 9.176 ± 0.138 min on a Waters 250-4.6 mm, 4 μ, Nova-Pak C18 column with mobile phase consisting of methanol:water (70:30). It was evident from the study that long-term storage of biological samples prepared using methanol in polypropylene microcentrifuge tubes produce contaminant peak. Further, this may mislead in future reporting an unnatural compound by researchers. Long-term storage of biological samples prepared using methanol in polypropylene microcentrifuge tubes produce contaminant peakContamination peak with higher area under the curve (609993) was obtained in ultra-flow liquid chromatographic run for methanol stored in PP microcentrifuge tubesContamination peak was detected at retention time 9.113 min with a lambda max of 220.38 nm and 300 mAU intensity on the given chromatographic conditionsGlass vials serve better option over PP microcentrifuge tubes for storing biological samples. Abbreviations used: UFLC: Ultra Flow Liquid Chromatography; LC: Liquid Chromatography; MS: Mass spectrometry; AUC: Area Under Curve.
1991-05-22
infinite number of possi’le crystal orientations is assumed, this infinitely sided polyhedron becomes a curved yield surface. Plastic strain in the...families, each surface of yield polyhedron mentioned above expands and shifts differently. These slip directions are all more or less parallel to the...result, only the monotonic portion of test D29 was corrected for membrane compliance and used as part of the monotonic proportional test database
Tensile and Microindentation Stress-Strain Curves of Al-6061
Weaver, Jordan S [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Center for Integrated Nanotechnologies (CINT); Khosravani, Ali [Georgia Inst. of Technology, Atlanta, GA (United States); Castillo, Andrew [Georgia Inst. of Technology, Atlanta, GA (United States); Kalidind, Surya R [Georgia Inst. of Technology, Atlanta, GA (United States)
2016-07-13
Recent spherical microindentation stress-strain protocols were developed and validated on Al-6061 (DOI: 10.1186/s40192-016-0054-3). The scaling factor between the uniaxial yield strength and the indentation yield strength was determined to be about 1.9. The microindentation stress-strain protocols were then applied to a microstructurally graded sample in an effort to extract high throughput process-property relationships. The tensile and microindentation force-displacement and stress-strain data are presented in this data set.
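Given the reported scaling factor of about 1.9 between indentation yield strength and uniaxial yield strength, a microindentation measurement can be converted to an estimated uniaxial value. A one-line sketch (the 532 MPa input is an arbitrary illustration, not a value from the data set):

```python
def uniaxial_yield_from_indentation(y_indentation_mpa, scale=1.9):
    """Estimate uniaxial yield strength from spherical microindentation yield
    strength using the reported scaling factor of about 1.9."""
    return y_indentation_mpa / scale

est_uniaxial = uniaxial_yield_from_indentation(532.0)  # MPa, illustrative input
```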
CT-derived indices of canine osteosarcoma-affected antebrachial strength.
Garcia, Tanya C; Steffey, Michele A; Zwingenberger, Allison L; Daniel, Leticia; Stover, Susan M
2017-05-01
To improve the prediction of fractures in dogs with bone tumors of the distal radius by identifying computed tomography (CT) indices that correlate with antebrachial bone strength and fracture location. Prospective experimental study. Dogs with antebrachial osteosarcoma (n = 10), and normal cadaver bones (n = 9). Antebrachia were imaged with quantitative CT prior to biomechanical testing to failure. CT indices of structural properties were compared to yield force and maximum force using Pearson correlation tests. Straight beam failure (Fs), axial rigidity, curved beam failure (Fc), and craniocaudal bending moment of inertia (MOICrCd) CT indices most highly correlated (0.77 > R > 0.57) with yield and maximum forces when OSA-affected and control bones were included in the analysis. Considering only OSA-affected bones, Fs, Fc, and axial rigidity correlated highly (0.85 > R > 0.80) with maximum force. In affected bones, the location of minimum axial rigidity and maximum MOICrCd correlated highly (R > 0.85) with the actual fracture location. CT-derived axial rigidity, Fs, and MOICrCd have strong linear relationships with yield and maximum force. These indices should be further evaluated prospectively in OSA-affected dogs that do, and do not, experience pathologic fracture. © 2017 The American College of Veterinary Surgeons.
Irwin, R John; Irwin, Timothy C
2011-06-01
Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
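The decision rule described above, choosing the ROC point whose tangent slope equals the prior odds of non-disease times the relative utility ratio, can be implemented directly by maximising TPR − m·FPR over the candidate operating points. A hedged sketch with made-up ROC points, prevalence, and utility ratio:

```python
def optimal_roc_point(roc_points, prevalence, benefit_ratio):
    """Pick the ROC operating point whose tangent slope equals
        m = ((1 - prevalence) / prevalence) * benefit_ratio,
    where benefit_ratio = (U_TN - U_FP) / (U_TP - U_FN).
    Over a discrete set of candidate points this is equivalent to
    maximising TPR - m * FPR."""
    m = (1.0 - prevalence) / prevalence * benefit_ratio
    return max(roc_points, key=lambda pt: pt[1] - m * pt[0])  # pt = (FPR, TPR)

# invented candidate operating points (FPR, TPR) on an empirical ROC curve
roc = [(0.0, 0.0), (0.1, 0.6), (0.2, 0.8), (0.4, 0.9), (1.0, 1.0)]
best = optimal_roc_point(roc, prevalence=0.2, benefit_ratio=0.4)
```

Raising the prevalence or lowering the benefit ratio flattens the target slope and pushes the optimum toward more sensitive (higher-TPR) operating points.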
Hydrologic budget of the Beaverdam Creek basin, Maryland
Rasmussen, W.C.; Andreasen, Gordon E.
1959-01-01
A hydrologic budget is a statement accounting for the water gains and losses for selected periods in an area. Weekly measurements of precipitation, streamflow, surface-water storage, ground-water stage, and soil resistivity were made during a 2-year period, April 1, 1950, to March 28, 1952, in the Beaverdam Creek basin, Wicomico County, Md. The hydrologic measurements are summarized in two budgets, a total budget and a ground-water budget, and in supporting tables and graphs. The results of the investigation have some potentially significant applications because they describe a method for determining the annual replenishment of the water supply of a basin and the ways of water disposal under natural conditions. The information helps to determine the 'safe' yield of water in diversion from natural to artificial discharge. The drainage basin of Beaverdam Creek was selected because it appeared to have fewer hydrologic variables than are generally found. However, the methods may prove applicable in many places under a variety of conditions. The measurements are expressed in inches of water over the area of the basin. The equation of the hydrologic cycle is the budget balance: P = R + ET + ΔSW + ΔSM + ΔGW, where P is precipitation; R is runoff; ET is evapotranspiration; ΔSW is change in surface-water storage; ΔSM is change in soil moisture; and ΔGW is change in ground-water storage. In this report 'change' is the final quantity minus the initial quantity and thus is synonymous with 'increase.' Further, ΔGW = ΔH × Yg, in which ΔH is the change in ground-water stage and Yg is the gravity yield, or the specific yield of the sediments as measured during the short periods of declining ground-water levels characteristic of the area. The complex sum of the revised equation, P − R − ΔSW − ET − ΔSM, which is equal to ΔH × Yg, has been named the 'infiltration residual'; it is equivalent to ground-water recharge.
Two unmeasured, but not entirely unknown, quantities, evapotranspiration (ET) and gravity yield (Yg), are included in the equation. They are derived statistically by a method of convergent approximations, one of the contributions of this investigation. On the basis of laboratory analysis, well-field tests, and general information on rates of drainage from saturated sediments, a gravity yield of 14 percent was assumed as a first approximation. The equation was then solved, by weeks, for evapotranspiration, ET. The evapotranspiration losses were plotted against the calendar week. Using the time of year as a control, a smooth curve was fitted to the evapotranspiration data, and modified values of ET were read from the curve. These were used to compute weekly values of the infiltration residual, which were plotted against ground-water stage. The slope of the line of best fit gave a closer approximation of the gravity yield, Yg. The process was repeated. The approximations converged, so that a fourth and final approximation resulted in a close grouping of all the points along a line whose slope indicated a Yg of 11.0 percent, and a slightly asymmetric bell-shaped curve of total evapotranspiration by weeks was obtained that is considered representative of this area. Check calculations of gravity yield were made during periods of low evapotranspiration and high infiltration, which substantiate the computed average of 11.0 percent. Refinements in the method of deriving the ground-water budget were introduced to supplement the techniques developed by Meinzer and Stearns in the study of the Pomperaug River basin in Connecticut in 1913 and 1916. The hydrologic equation for the ground-water cycle may be written Gr = D + ΔH × Yg + ETg, in which Gr is ground-water recharge (infiltration); D is ground-water drainage; ΔH is the change in mean ground-water stage (final stage minus initial stage); Yg is gravity yield (taken as 11.0 percent in computations here); an
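The convergent-approximation procedure described above can be sketched on synthetic weekly data. This is a minimal illustration, not the authors' computation: runoff, surface-storage, and soil-moisture terms are set to zero for brevity, a moving average stands in for the fitted smooth ET curve, and the "true" gravity yield of 0.11 is an assumption built into the fake data.

```python
# Sketch of the convergent-approximation estimate of gravity yield Yg.
import numpy as np

weeks = np.arange(104)
et_true = 1.0 + 0.8 * np.sin(2 * np.pi * weeks / 52.0)   # smooth seasonal ET
dh = np.where(weeks % 2 == 0, 1.0, -1.0) * 3.0           # week-to-week stage change
yg_true = 0.11
precip = et_true + dh * yg_true                          # budget closes exactly

yg = 0.14                                                # first approximation
for _ in range(4):
    et_est = precip - dh * yg              # solve the budget for ET, by weeks
    kernel = np.ones(9) / 9.0
    et_smooth = np.convolve(et_est, kernel, mode="same")  # "fit a smooth curve"
    residual = precip - et_smooth          # infiltration residual
    interior = slice(5, -5)                # drop smoothing edge effects
    # slope of residual vs. stage change gives the next Yg approximation
    yg = np.polyfit(dh[interior], residual[interior], 1)[0]
```

Because the smoothing removes the high-frequency stage-change signal from the ET estimate, each pass shrinks the error in Yg, mirroring the convergence the authors report.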
The Importance of Protons in Reactive Transport Modeling
NASA Astrophysics Data System (ADS)
McNeece, C. J.; Hesse, M. A.
2014-12-01
The importance of pH in aqueous chemistry is evident; yet, its role in reactive transport is complex. Consider a column flow experiment through silica glass beads. Take the column to be saturated and flowing with a solution of a distinct pH. An instantaneous change in the influent solution pH can yield a breakthrough curve with both a rarefaction and a shock component (a composite wave). This behavior is unique among aqueous ions in transport and is more complex than intuition would suggest. Analysis of the hyperbolic limit of this physical system can explain these first-order transport phenomena. This analysis shows that transport behavior is heavily dependent on the shape of the adsorption isotherm; hence, accurate surface chemistry models are important in reactive transport. The proton adsorption isotherm has nonconstant concavity due to the proton's ability to partition into hydroxide. An eigenvalue analysis shows that an inflection point in the adsorption isotherm allows the development of composite waves. We use electrostatic surface complexation models to calculate realistic proton adsorption isotherms. Surface characteristics such as specific surface area and surface site density were determined experimentally. We validate the model by comparison against silica glass bead flow-through experiments. When coupled to surface complexation models, the transport equation captures the timing and behavior of breakthrough curves markedly better than with commonly used Langmuir assumptions. Furthermore, we use the adsorption isotherm to predict, a priori, the transport behavior of protons across pH composition space. Expansion of the model to multicomponent systems shows that proton adsorption can force composite waves to develop in the breakthrough curves of ions that would not otherwise exhibit such behavior.
Given the abundance of reactive surfaces in nature and the nonlinearity of chemical systems, we conclude that building a greater understanding of proton adsorption is of utmost importance to reactive transport modeling.
Rowe, Jason F.; Gaulme, Patrick; Hammel, Heidi B.; Casewell, Sarah L.; Fortney, Jonathan J.; Gizis, John E.; Lissauer, Jack J.; Morales-Juberias, Raul; Orton, Glenn S.; Wong, Michael H.; Marley, Mark S.
2017-01-01
Observations of Neptune with the Kepler Space Telescope yield a 49 day light curve with 98% coverage at a 1 minute cadence. A significant signature in the light curve comes from discrete cloud features. We compare results extracted from the light curve data with contemporaneous disk-resolved imaging of Neptune from the Keck 10-m telescope at 1.65 microns and Hubble Space Telescope visible imaging acquired nine months later. This direct comparison validates the feature latitudes assigned to the K2 light curve periods based on Neptune’s zonal wind profile, and confirms observed cloud feature variability. Although Neptune’s clouds vary in location and intensity on short and long timescales, a single large discrete storm seen in Keck imaging dominates the K2 and Hubble light curves; smaller or fainter clouds likely contribute to short-term brightness variability. The K2 Neptune light curve, in conjunction with our imaging data, provides context for the interpretation of current and future brown dwarf and extrasolar planet variability measurements. In particular we suggest that the balance between large, relatively stable, atmospheric features and smaller, more transient, clouds controls the character of substellar atmospheric variability. Atmospheres dominated by a few large spots may show inherently greater light curve stability than those which exhibit a greater number of smaller features. PMID:28127087
Movement between Mexico and Canada: Analysis of a New Migration Stream
Massey, Douglas; Brown, Amelia E.
2011-01-01
In this analysis we use data from the Mexican Migration Project to contrast processes of Mexican migration to Canada and the United States. All migrants to Canada entered through the Seasonal Agricultural Worker Program and, consistent with program criteria, migration there is strongly predicted by marital status and number of dependents, yielding a migrant population made up of males of prime labor-force age who are married and have multiple children at home. In contrast, the vast majority of migrants to the United States are undocumented and thus self-selected without regard to marital status or parenthood. Migration to the United States is strongly predicted by age, and migration probabilities display the age curve classically associated with labor migration. Within countries of destination, migrants to Canada enjoy superior labor market outcomes compared with those to the United States, with higher wages and more compact work schedules that yield higher earnings and shorter periods away from families compared with undocumented migrants to the United States. Labor migration to Canada also tends to operate as a circular flow with considerable repeat migration, whereas undocumented migrants to the United States do not come and go so regularly, as crossing the Mexico-U.S. border has become increasingly difficult and costly. PMID:24347678
Tiecco, Matteo; Corte, Laura; Roscini, Luca; Colabella, Claudia; Germani, Raimondo; Cardinali, Gianluigi
2014-07-25
Conductometry is widely used to determine the critical micellar concentration and the surface properties of micellar aggregates of amphiphiles. Current conductivity experiments on surfactant solutions are typically carried out by manual pipetting, yielding some tens of reading points within a couple of hours. In order to study the properties of surfactant-cell interactions, each amphiphile must be tested under different conditions against several types of cells. This calls for complex experimental designs, making the application of current methods seriously time-consuming, especially because long experiments risk causing alterations of the cells independently of the surfactant action. In this paper we present a novel, accurate and rapid automated procedure to obtain conductometric curves with several hundred reading points within tens of minutes. The method was validated with surfactant solutions alone and in combination with Saccharomyces cerevisiae cells. An easy-to-use R script calculates conductometric parameters and their statistical significance, with a graphic interface to visualize data and results. The validations showed that the procedure indeed works in the same manner with surfactant alone or in combination with cells, yielding around 1000 reading points within 20 min and with high accuracy, as determined by the regression analysis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
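The standard conductometric determination of the critical micellar concentration (CMC) locates the break in slope of the conductivity-versus-concentration curve. A minimal sketch of that analysis (not the authors' R script) fits two line segments around every candidate breakpoint, keeps the split with the lowest total squared error, and returns the intersection of the two lines; the data below are synthetic with an assumed CMC of 4.0.

```python
# Sketch: CMC from the slope break of a conductivity curve.
import numpy as np

def estimate_cmc(conc, kappa):
    """Scan breakpoints, fit a line on each side, keep the split with
    the lowest total squared error, return the line intersection."""
    best = (np.inf, None)
    for k in range(3, len(conc) - 3):          # need >= 3 points per segment
        lo = np.polyfit(conc[:k], kappa[:k], 1)
        hi = np.polyfit(conc[k:], kappa[k:], 1)
        sse = (np.sum((np.polyval(lo, conc[:k]) - kappa[:k]) ** 2)
               + np.sum((np.polyval(hi, conc[k:]) - kappa[k:]) ** 2))
        if sse < best[0]:
            best = (sse, (lo, hi))
    (a1, b1), (a2, b2) = best[1]
    return (b2 - b1) / (a1 - a2)               # x where the two lines cross

conc = np.linspace(0.1, 10.0, 200)             # synthetic concentrations
true_cmc = 4.0
kappa = np.where(conc < true_cmc,              # premicellar: steeper slope
                 6.0 * conc,
                 6.0 * true_cmc + 2.5 * (conc - true_cmc))
cmc = estimate_cmc(conc, kappa)
```

With dense automated readings, as in the paper, the breakpoint scan has many points per segment and the intersection estimate becomes correspondingly stable.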
Relating Cohesive Zone Model to Linear Elastic Fracture Mechanics
NASA Technical Reports Server (NTRS)
Wang, John T.
2010-01-01
The conditions required for a cohesive zone model (CZM) to predict a failure load of a cracked structure similar to that obtained by a linear elastic fracture mechanics (LEFM) analysis are investigated in this paper. This study clarifies why many different phenomenological cohesive laws can produce similar fracture predictions. Analytical results for five cohesive zone models are obtained, using five different cohesive laws that have the same cohesive work rate (CWR, the area under the traction-separation curve) but different maximum tractions. The effect of the maximum traction on the predicted cohesive zone length and the remote applied load at fracture is presented. Similar to the small-scale yielding condition required for an LEFM analysis to be valid, the cohesive zone length also needs to be much smaller than the crack length. This is a necessary condition for a CZM to obtain a fracture prediction equivalent to an LEFM result.
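The maximum-traction effect can be illustrated with a common order-of-magnitude estimate of cohesive zone length (a Hillerborg-type relation, l_cz ~ E·Gc/σ_max², not necessarily the paper's exact expression). The material values below are invented for illustration.

```python
# Sketch: how maximum traction controls cohesive zone length at fixed
# cohesive work rate Gc (Hillerborg-type estimate; values illustrative).

def cohesive_zone_length(E, Gc, sigma_max):
    return E * Gc / sigma_max ** 2

E = 70e9         # Pa, aluminum-like elastic modulus (assumed)
Gc = 200.0       # J/m^2, cohesive work rate = area under traction-separation
l1 = cohesive_zone_length(E, Gc, sigma_max=50e6)
l2 = cohesive_zone_length(E, Gc, sigma_max=100e6)
# Doubling the maximum traction at constant Gc quarters the zone length;
# LEFM equivalence requires that this length stay << the crack length.
```

This is why laws sharing the same CWR but different maximum tractions converge to the same LEFM-like prediction only once the cohesive zone is small relative to the crack.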
NASA Astrophysics Data System (ADS)
Taha, Mutasem O.; Habash, Maha; Khanfar, Mohammad A.
2014-05-01
Glucokinase (GK) is involved in normal glucose homeostasis and is therefore a valid target for drug design and discovery efforts. GK activators (GKAs) have excellent potential as treatments for hyperglycemia and diabetes. The recent interest in GKAs, together with docking limitations and a shortage of docking validation methods, prompted us to use our new 3D-QSAR analysis, namely docking-based comparative intermolecular contacts analysis (dbCICA), to validate docking configurations performed on a group of GKAs within the GK binding site. dbCICA assesses the consistency of docking by assessing the correlation between ligands' affinities and their contacts with binding site spots. Optimal dbCICA models were validated by receiver operating characteristic curve analysis and comparative molecular field analysis. dbCICA models were also converted into valid pharmacophores that were used as search queries to mine 3D structural databases for new GKAs. The search yielded several potent bioactivators that experimentally increased GK bioactivity up to 7.5-fold at 10 μM.
Jåstad, Eirik O; Torheim, Turid; Villeneuve, Kathleen M; Kvaal, Knut; Hole, Eli O; Sagstuen, Einar; Malinen, Eirik; Futsaether, Cecilia M
2017-09-28
The amino acid l-α-alanine is the most commonly used material for solid-state electron paramagnetic resonance (EPR) dosimetry, due to the formation of highly stable radicals upon irradiation, with yields proportional to the radiation dose. Two major alanine radical components designated R1 and R2 have previously been uniquely characterized from EPR and electron-nuclear double resonance (ENDOR) studies as well as from quantum chemical calculations. There is also convincing experimental evidence of a third minor radical component R3, and a tentative radical structure has been suggested, even though no well-defined spectral signature has been observed experimentally. In the present study, temperature-dependent EPR spectra of X-ray irradiated polycrystalline alanine were analyzed using five multivariate methods in further attempts to understand the composite nature of the alanine dosimeter EPR spectrum. Principal component analysis (PCA), maximum likelihood common factor analysis (MLCFA), independent component analysis (ICA), self-modeling mixture analysis (SMA), and multivariate curve resolution (MCR) were used to extract pure radical spectra and their fractional contributions from the experimental EPR spectra. All methods yielded spectral estimates resembling the established R1 spectrum. Furthermore, SMA and MCR consistently predicted both the established R2 spectrum and the shape of the R3 spectrum. The predicted shape of the R3 spectrum corresponded well with the proposed tentative spectrum derived from spectrum simulations. Thus, results from two independent multivariate data analysis techniques strongly support the previous evidence that three radicals are indeed present in irradiated alanine samples.
Maldonado, Fabien; Duan, Fenghai; Raghunath, Sushravya M.; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Garg, Kavita; Greco, Erin; Nath, Hrudaya; Robb, Richard A.; Bartholmai, Brian J.
2015-01-01
Rationale: Screening for lung cancer using low-dose computed tomography (CT) reduces lung cancer mortality. However, in addition to a high rate of benign nodules, lung cancer screening detects a large number of indolent cancers that generally belong to the adenocarcinoma spectrum. Individualized management of screen-detected adenocarcinomas would be facilitated by noninvasive risk stratification. Objectives: To validate that Computer-Aided Nodule Assessment and Risk Yield (CANARY), a novel image analysis software, successfully risk stratifies screen-detected lung adenocarcinomas based on clinical disease outcomes. Methods: We retrospectively identified 294 eligible patients diagnosed with lung adenocarcinoma spectrum lesions in the low-dose CT arm of the National Lung Screening Trial. The last low-dose CT scan before the diagnosis of lung adenocarcinoma was analyzed using CANARY blinded to clinical data. Based on their parametric CANARY signatures, all the lung adenocarcinoma nodules were risk stratified into three groups. CANARY risk groups were compared using survival analysis for progression-free survival. Measurements and Main Results: A total of 294 patients were included in the analysis. Kaplan-Meier analysis of all the 294 adenocarcinoma nodules stratified into the Good, Intermediate, and Poor CANARY risk groups yielded distinct progression-free survival curves (P < 0.0001). This observation was confirmed in the unadjusted and adjusted (age, sex, race, and smoking status) progression-free survival analysis of all stage I cases. Conclusions: CANARY allows the noninvasive risk stratification of lung adenocarcinomas into three groups with distinct post-treatment progression-free survival. Our results suggest that CANARY could ultimately facilitate individualized management of incidentally or screen-detected lung adenocarcinomas. PMID:26052977
Maldonado, Fabien; Duan, Fenghai; Raghunath, Sushravya M; Rajagopalan, Srinivasan; Karwoski, Ronald A; Garg, Kavita; Greco, Erin; Nath, Hrudaya; Robb, Richard A; Bartholmai, Brian J; Peikert, Tobias
2015-09-15
Screening for lung cancer using low-dose computed tomography (CT) reduces lung cancer mortality. However, in addition to a high rate of benign nodules, lung cancer screening detects a large number of indolent cancers that generally belong to the adenocarcinoma spectrum. Individualized management of screen-detected adenocarcinomas would be facilitated by noninvasive risk stratification. To validate that Computer-Aided Nodule Assessment and Risk Yield (CANARY), a novel image analysis software, successfully risk stratifies screen-detected lung adenocarcinomas based on clinical disease outcomes. We retrospectively identified 294 eligible patients diagnosed with lung adenocarcinoma spectrum lesions in the low-dose CT arm of the National Lung Screening Trial. The last low-dose CT scan before the diagnosis of lung adenocarcinoma was analyzed using CANARY blinded to clinical data. Based on their parametric CANARY signatures, all the lung adenocarcinoma nodules were risk stratified into three groups. CANARY risk groups were compared using survival analysis for progression-free survival. A total of 294 patients were included in the analysis. Kaplan-Meier analysis of all the 294 adenocarcinoma nodules stratified into the Good, Intermediate, and Poor CANARY risk groups yielded distinct progression-free survival curves (P < 0.0001). This observation was confirmed in the unadjusted and adjusted (age, sex, race, and smoking status) progression-free survival analysis of all stage I cases. CANARY allows the noninvasive risk stratification of lung adenocarcinomas into three groups with distinct post-treatment progression-free survival. Our results suggest that CANARY could ultimately facilitate individualized management of incidentally or screen-detected lung adenocarcinomas.
Kirkpatrick, Naomi C; Blacker, Hayley P; Woods, Wayne G; Gasser, Robin B; Noormohammadi, Amir H
2009-02-01
Coccidiosis is a significant disease of poultry caused by different species of Eimeria. Differentiation of Eimeria species is important for the quality control of the live attenuated Eimeria vaccines derived from monospecific lines of Eimeria spp. In this study, high-resolution melting (HRM) curve analysis of the amplicons generated from the second internal transcribed spacer of nuclear ribosomal DNA (ITS-2) was used to distinguish between seven pathogenic Eimeria species of chickens, and the results were compared with those obtained from the previously described technique, capillary electrophoresis. Using a series of known monospecific lines of Eimeria species, HRM curve analysis was shown to distinguish between Eimeria acervulina, Eimeria brunetti, Eimeria maxima, Eimeria mitis, Eimeria necatrix, Eimeria praecox and Eimeria tenella. Computerized analysis of the HRM curves and capillary electrophoresis profiles could detect the dominant species in several specimens containing different ratios of E. necatrix and E. maxima and of E. tenella and E. acervulina. The HRM curve analysis identified all of the mixtures as "variation" from the reference species, and also identified the minor species in some mixtures. Computerized HRM curve analysis also detected impurities in 21 possible different combinations of the seven Eimeria species. The PCR-based HRM curve analysis of the ITS-2 provides a powerful tool for the detection and identification of pure Eimeria species. The HRM curve analysis could also be used as a rapid tool in the quality assurance of Eimeria vaccine production to confirm the purity of the monospecific lines. The HRM curve analysis is rapid and reliable and can be performed in a single test tube in less than 3 h.
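The basic operation behind any melting-curve analysis is locating the melting temperature Tm as the peak of the negative derivative of the fluorescence signal; species-specific ITS-2 amplicons then separate by their Tm and curve shape. A minimal sketch on a synthetic sigmoid melt curve (the Tm of 84.5 °C is invented, not an Eimeria value):

```python
# Sketch: Tm from the negative derivative of a (synthetic) melt curve.
import numpy as np

temps = np.linspace(70.0, 95.0, 501)                    # °C grid
tm_true = 84.5
fluor = 1.0 / (1.0 + np.exp((temps - tm_true) / 0.8))   # fluorescence drops at melt

neg_deriv = -np.gradient(fluor, temps)                  # -dF/dT
tm_est = temps[np.argmax(neg_deriv)]                    # peak locates Tm
```

High-resolution instruments refine exactly this picture: finer temperature steps and normalized curve shapes allow mixtures and "variant" curves to be flagged against a monospecific reference.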
Cluster signal-to-noise analysis for evaluation of the information content in an image.
Weerawanich, Warangkana; Shimizu, Mayumi; Takeshita, Yohei; Okamura, Kazutoshi; Yoshida, Shoko; Yoshiura, Kazunori
2018-01-01
(1) To develop an observer-free method of analysing image quality related to observer performance in a detection task and (2) to analyse observer behaviour patterns in the detection of small mass changes in cone-beam CT images. 13 observers detected holes in a Teflon phantom in cone-beam CT images. Using the same images, we developed a new method, cluster signal-to-noise analysis, to detect the holes by applying various cut-off values using ImageJ and reconstructing cluster signal-to-noise curves. We then evaluated the correlation between cluster signal-to-noise analysis and the observer performance test. We measured the background noise in each image to evaluate the relationship with the false positive rates (FPRs) of the observers. Correlations between mean FPRs and intra- and interobserver variations were also evaluated. Moreover, we calculated true positive rates (TPRs) and accuracies from background noise and evaluated their correlations with the TPRs from observers. Cluster signal-to-noise curves were derived in cluster signal-to-noise analysis; they relate the detection of signals (true holes) to noise (false holes). This method correlated highly with the observer performance test (R² = 0.9296). In noisy images, increasing background noise resulted in higher FPRs and larger intra- and interobserver variations. TPRs and accuracies calculated from background noise had high correlation with actual TPRs from observers; R² was 0.9244 and 0.9338, respectively. Cluster signal-to-noise analysis can simulate the detection performance of observers and thus replace the observer performance test in the evaluation of image quality. Erroneous decision-making increased with increasing background noise.
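The cut-off scan underlying the cluster signal-to-noise idea can be sketched in a few lines: threshold an image at several levels and count the connected clusters that survive. The phantom below is a synthetic noise field with one bright patch standing in for a true hole; it is not the authors' Teflon phantom or ImageJ pipeline.

```python
# Sketch: counting clusters (true + false "holes") at varying cut-offs.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
image = rng.normal(0.0, 1.0, (64, 64))         # background noise
image[20:24, 20:24] += 5.0                     # one "true hole" signal

counts = []
for cutoff in (2.0, 3.0, 4.0):
    _, n_clusters = ndimage.label(image > cutoff)   # connected components
    counts.append(n_clusters)
# low cut-offs admit many noise clusters (false holes); high cut-offs
# leave mostly the true signal
```

Sweeping the cut-off and plotting surviving signal clusters against noise clusters yields exactly the kind of cluster signal-to-noise curve the abstract describes, with noisier backgrounds producing more false clusters at every cut-off.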
A new method for testing pile by single-impact energy and P-S curve
NASA Astrophysics Data System (ADS)
Xu, Zhao-Yong; Duan, Yong-Kang; Wang, Bin; Hu, Yi-Li; Yang, Run-Hai; Xu, Jun; Zhao, Jin-Ming
2004-11-01
By studying the pile-formula and stress-wave methods (e.g., the CASE method), the authors propose a new method for testing piles using the single-impact energy and P-S curves. The vibration and wave figures are recorded, and the dynamic and static displacements are measured by different transducers near the top of the pile when the pile is impacted by a heavy hammer or micro-rocket. By observing the transformation coefficient of driving energy (total energy), the energy consumed by wave motion and vibration, and so on, the vertical bearing capacity of a single pile is measured and calculated. Then, using the vibration wave diagram, the dynamic relation curve between force (P) and displacement (S) is calculated and the yield points are determined. Using the static-loading test, the dynamic results are checked and the relative constants of the dynamic-static P-S curves are determined. Then the subsidence quantity corresponding to the bearing capacity is determined. Moreover, the quality of the formed pile body can be judged from the shape of the P-S curves.
Rastkhah, E; Zakeri, F; Ghoranneviss, M; Rajabpour, M R; Farshidpour, M R; Mianji, F; Bayat, M
2016-03-01
An in vitro study of the dose responses of human peripheral blood lymphocytes was conducted with the aim of creating calibrated dose-response curves for biodosimetry measuring up to 4 Gy (0.25-4 Gy) of gamma radiation. The cytokinesis-blocked micronucleus (CBMN) assay was employed to obtain the frequencies of micronuclei (MN) per binucleated cell in blood samples from 16 healthy donors (eight males and eight females) in two age ranges, 20-34 and 35-50 years. The data were used to construct the calibration curves for men and women in the two age groups separately. An increase in micronucleus yield with dose in a linear-quadratic manner was observed in all groups. To verify the applicability of the constructed calibration curves, MN yields were measured in peripheral blood lymphocytes of two real overexposed subjects and three irradiated samples of unknown dose, and the results were compared with dose values obtained from measuring dicentric chromosomes. The comparison of the results obtained by the two techniques indicated good agreement between dose estimates. The average baseline frequency of MN for the 130 healthy non-exposed donors (77 men and 55 women, 20-60 years old, divided into four age groups) ranged from 6 to 21 micronuclei per 1000 binucleated cells. Baseline MN frequencies were higher for women and for the older age group. The results presented in this study point out that the CBMN assay is a reliable, easier and valuable alternative method for biological dosimetry.
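A linear-quadratic calibration curve has the form Y = c + αD + βD², and dose estimation inverts the quadratic for an observed MN yield. The sketch below uses invented coefficients on noise-free data purely to illustrate the fit and the inversion; it is not the study's calibration.

```python
# Sketch: fitting and inverting a linear-quadratic MN dose response.
# The coefficients c, alpha, beta are illustrative assumptions.
import numpy as np

doses = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])      # Gy
c, alpha, beta = 0.012, 0.03, 0.06                           # assumed
mn_per_cell = c + alpha * doses + beta * doses ** 2          # noise-free yields

# np.polyfit returns coefficients highest-degree first
beta_hat, alpha_hat, c_hat = np.polyfit(doses, mn_per_cell, 2)

def estimate_dose(y):
    """Invert Y = c + alpha*D + beta*D**2 for the positive root."""
    disc = alpha_hat ** 2 - 4.0 * beta_hat * (c_hat - y)
    return (-alpha_hat + np.sqrt(disc)) / (2.0 * beta_hat)
```

In practice the fit would be weighted by the Poisson scatter of MN counts, and separate curves would be kept per sex and age group, as in the study.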
Design, Optimization and Evaluation of Integrally Stiffened Al 7050 Panel with Curved Stiffeners
NASA Technical Reports Server (NTRS)
Slemp, Wesley C. H.; Bird, R. Keith; Kapania, Rakesh K.; Havens, David; Norris, Ashley; Olliffe, Robert
2011-01-01
A curvilinear stiffened panel was designed, manufactured, and tested in the Combined Load Test Fixture at NASA Langley Research Center. The panel was optimized for minimum mass subjected to constraints on buckling load, yielding, and crippling or local stiffener failure using a new analysis tool named EBF3PanelOpt. The panel was designed for a combined compression-shear loading configuration that is a realistic load case for a typical aircraft wing panel. The panel was loaded beyond buckling and strains and out-of-plane displacements were measured. The experimental data were compared with the strains and out-of-plane deflections from a high fidelity nonlinear finite element analysis and linear elastic finite element analysis of the panel/test-fixture assembly. The numerical results indicated that the panel buckled at the linearly elastic buckling eigenvalue predicted for the panel/test-fixture assembly. The experimental strains prior to buckling compared well with both the linear and nonlinear finite element model.
Three-dimensional elastic-plastic finite-element analysis of fatigue crack propagation
NASA Technical Reports Server (NTRS)
Goglia, G. L.; Chermahini, R. G.
1985-01-01
Fatigue cracks are a major problem in designing structures subjected to cyclic loading. Cracks frequently occur in structures such as aircraft and spacecraft. The inspection intervals of many aircraft structures are based on crack-propagation lives. Therefore, improved prediction of propagation lives under flight-load conditions (variable-amplitude loading) is needed to provide more realistic design criteria for these structures. The main thrust was to develop a three-dimensional, nonlinear, elastic-plastic, finite element program capable of extending a crack and changing boundary conditions for the model under consideration. The finite-element model is composed of 8-noded (linear-strain) isoparametric elements. In the analysis, the material is assumed to be elastic-perfectly plastic, and the cyclic stress-strain curve for the material is given. Zienkiewicz's initial-stress method, von Mises's yield criterion, and Drucker's normality condition under small-strain assumptions are used to account for plasticity. The three-dimensional analysis is capable of extending the crack and changing boundary conditions under cyclic loading.
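The elastic-perfectly plastic assumption above reduces, in one dimension, to a familiar predictor-corrector update: compute an elastic trial stress, then return it to the yield surface if it overshoots. This is only a 1D sketch of the constitutive idea, not the paper's 3D initial-stress implementation; modulus and yield stress are invented.

```python
# Sketch: 1D elastic-perfectly plastic stress update (values illustrative).

def stress_update(strain_increment, stress_old, E=200e9, sigma_y=250e6):
    trial = stress_old + E * strain_increment   # elastic predictor
    if abs(trial) <= sigma_y:
        return trial                            # still elastic
    return sigma_y if trial > 0 else -sigma_y   # plastic corrector: clip at yield

# Cycling beyond yield traces the perfectly plastic plateau:
stress = 0.0
for d_eps in (0.001, 0.001, -0.003):
    stress = stress_update(d_eps, stress)
```

In 3D the same predictor-corrector logic operates on the von Mises equivalent stress, with Drucker's normality condition fixing the direction of the plastic strain increment.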
NASA Astrophysics Data System (ADS)
Bhatta, G.; Zola, S.; Stawarz, Ł.; Ostrowski, M.; Winiarski, M.; Ogłoza, W.; Dróżdż, M.; Siwak, M.; Liakos, A.; Kozieł-Wierzbowska, D.; Gazeas, K.; Debski, B.; Kundera, T.; Stachowski, G.; Paliya, V. S.
2016-11-01
The detection of periodicity in the broadband non-thermal emission of blazars has so far been proven to be elusive. However, there are a number of scenarios that could lead to quasi-periodic variations in blazar light curves. For example, an orbital or thermal/viscous period of accreting matter around central supermassive black holes could, in principle, be imprinted in the multi-wavelength emission of small-scale blazar jets, carrying such crucial information about plasma conditions within the jet launching regions. In this paper, we present the results of our time series analysis of the ~9.2 yr long, and exceptionally well-sampled, optical light curve of the BL Lac object OJ 287. The study primarily used the data from our own observations performed at the Mt. Suhora and Kraków Observatories in Poland, and at the Athens Observatory in Greece. Additionally, SMARTS observations were used to fill some of the gaps in the data. The Lomb-Scargle periodogram and the weighted wavelet Z-transform methods were employed to search for possible quasi-periodic oscillations in the resulting optical light curve of the source. Both methods consistently yielded a possible quasi-periodic signal around the periods of ~400 and ~800 days, the former with a significance (over the underlying colored noise) of ≥99%. A number of likely explanations for this are discussed, with preference given to a modulation of the jet production efficiency by highly magnetized accretion disks. This supports previous findings and the interpretation reported recently in the literature for OJ 287 and other blazar sources.
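The Lomb-Scargle periodogram is the standard period-search tool for unevenly sampled photometry like this. A minimal sketch using SciPy's implementation, with a synthetic 400-day sinusoid standing in for the OJ 287 light curve (sampling and period are invented to mirror the numbers above):

```python
# Sketch: Lomb-Scargle period search on unevenly sampled data.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 3350.0, 600))      # ~9.2 yr, uneven sampling (days)
period_true = 400.0
y = np.sin(2 * np.pi * t / period_true)         # synthetic periodic signal
y = y - y.mean()                                # center before the periodogram

periods = np.linspace(100.0, 1000.0, 2000)      # trial periods (days)
ang_freqs = 2 * np.pi / periods                 # lombscargle wants angular freqs
power = lombscargle(t, y, ang_freqs)
period_peak = periods[np.argmax(power)]
```

A real analysis would additionally assess significance against the red-noise (colored) background, as the authors do, since blazar light curves show strong stochastic variability on their own.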
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerlach, Joerg; Kessler, Lutz; Paul, Udo
2007-05-17
The concept of forming limit curves (FLC) is widely used in industrial practice. The required data should be delivered by the material suppliers for typical material properties (measured on coils with properties within +/- one standard deviation of the mean production values). In particular, it should be noted that providing forming limit curves for the full variety of scatter in the mechanical properties, as would be needed to validate forming robustness, is impossible. Therefore a forecast of the expected limit strains without expensive and time-consuming experiments is necessary. In the paper, the quality of a regression analysis for determining forming limit curves based on tensile test results is presented and discussed. Owing to the specific definition of limit strains with FLCs following linear strain paths, the significance of this failure definition is limited. To consider nonlinear strain path effects, different methods are given in the literature. One simple method is the concept of limit stresses. It should be noted that the determined value of the critical stress depends on the extrapolation of the tensile test curve. When the yield curve extrapolation is very similar to an exponential function, the definition of the critical stress value is very complicated due to the low slope of the hardening function at large strains. A new method to determine general failure behavior in sheet metal forming is the combined use and interpretation of three criteria: the onset of material instability (comparable with the FLC concept), the value of critical shear fracture and the value of ductile fracture. This method seems to be particularly successful for newly developed high-strength steel grades in connection with more complex strain paths for some specific material elements. Nevertheless, the effort to identify the different failure material parameters or functions will increase, and the user has to learn to interpret the numerical results.
Diagnosis of adrenal insufficiency.
Dorin, Richard I; Qualls, Clifford R; Crapo, Lawrence M
2003-08-05
The cosyntropin stimulation test is the initial endocrine evaluation of suspected primary or secondary adrenal insufficiency. To critically review the utility of the cosyntropin stimulation test for evaluating adrenal insufficiency. The MEDLINE database was searched from 1966 to 2002 for all English-language papers related to the diagnosis of adrenal insufficiency. Studies with fewer than 5 persons with primary or secondary adrenal insufficiency or with fewer than 10 persons as normal controls were excluded. For secondary adrenal insufficiency, only studies that stratified participants by integrated tests of adrenal function were included. Summary receiver-operating characteristic (ROC) curves were generated from all studies that provided sensitivity and specificity data for 250-microg and 1-microg cosyntropin tests; these curves were then compared by using area under the curve (AUC) methods. All estimated values are given with 95% CIs. At a specificity of 95%, sensitivities were 97%, 57%, and 61% for summary ROC curves in tests for primary adrenal insufficiency (250-microg cosyntropin test), secondary adrenal insufficiency (250-microg cosyntropin test), and secondary adrenal insufficiency (1-microg cosyntropin test), respectively. The area under the curve for primary adrenal insufficiency was significantly greater than the AUC for secondary adrenal insufficiency for the high-dose cosyntropin test (P < 0.001), but AUCs for the 250-microg and 1-microg cosyntropin tests did not differ significantly (P > 0.5) for secondary adrenal insufficiency. At a specificity of 95%, summary ROC analysis for the 250-microg cosyntropin test yielded a positive likelihood ratio of 11.5 (95% CI, 8.7 to 14.2) and a negative likelihood ratio of 0.45 (CI, 0.30 to 0.60) for the diagnosis of secondary adrenal insufficiency. Cortisol response to cosyntropin varies considerably among healthy persons. 
The cosyntropin test performs well in patients with primary adrenal insufficiency, but the lower sensitivity in patients with secondary adrenal insufficiency necessitates use of tests involving stimulation of the hypothalamus if the pretest probability is sufficiently high. The operating characteristics of the 250-microg and 1-microg cosyntropin tests are similar.
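The likelihood ratios reported above for the 250-microg test follow arithmetically from the stated sensitivity (57%) and specificity (95%); a quick check of that arithmetic:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from a test's operating point."""
    lr_pos = sensitivity / (1.0 - specificity)   # LR+ = sens / (1 - spec)
    lr_neg = (1.0 - sensitivity) / specificity   # LR- = (1 - sens) / spec
    return lr_pos, lr_neg

# Secondary adrenal insufficiency, 250-microg test, at 95% specificity:
lr_pos, lr_neg = likelihood_ratios(0.57, 0.95)
print(round(lr_pos, 1), round(lr_neg, 2))  # 11.4 0.45
```

The point estimates land at the reported 0.45 for LR- and within rounding of the reported 11.5 for LR+ (the summary ROC fit, not the raw operating point, produced the published value).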
NASA Astrophysics Data System (ADS)
Askarimarnani, Sara; Willgoose, Garry; Fityus, Stephen
2017-04-01
Coal seam gas (CSG) is a form of natural gas that occurs in some coal seams. Coal seams have natural fractures with dual-porosity systems and low permeability. In the CSG industry, hydraulic fracturing is applied to increase the permeability and extract the gas more efficiently from the coal seam. The industry claims that it can design fracking patterns. Whether this is true or not, the public (and regulators) require assurance that once a well has been fracked, the fracking has occurred according to plan and the fracked well is safe. Thus defensible post-fracking testing methodologies for gas-generating wells are required. In 2009 a fracked well, HB02, owned by AGL, near Broke, NSW, Australia, was subjected to "traditional" water pump-testing as part of this assurance process. Interpretation with well type curves and simple single-phase models (i.e., water only, no gas) highlighted deficiencies in traditional water well approaches, with a systematic deviation from the qualitative characteristics of well drawdown curves (e.g., concavity versus convexity of drawdown with time). Accordingly, a multiphase (i.e., water and methane) model of the well was developed and compared with the observed data. This paper discusses the results of this multiphase testing using the TOUGH2 model and its EOS7C constitutive model. A key objective was to test a methodology, based on the GLUE Monte Carlo calibration technique, to calibrate the characteristics of the frack using the well test drawdown curve. GLUE involves a sensitivity analysis of how changes in the fracture properties change the well hydraulics, through an analysis of the drawdown curve and changes in the cone of depression. This was undertaken by varying the native coal, fracture, and gas parameters to see how those changes affected the match between simulations and the observed well drawdown.
Results from the GLUE analysis show how much information is contained in the well drawdown curve for estimating field-scale coal and gas generation properties, the fracture geometry, and the proppant characteristics. The results with the multiphase model show a better match to the drawdown than the single-phase model, but the differences between the best-fit drawdowns were small, and smaller than the difference between the best fit and the field data. However, the parameters derived to generate these best fits were very different for each model. We conclude that while satisfactory fits with single-phase groundwater models (e.g. MODFLOW, FEFLOW) can be achieved, the parameters derived will not be realistic, with potential implications for drawdowns and water yields in gas field modelling. Multiphase models are thus required, and we discuss some of the limitations of TOUGH2 for the CSG problem.
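The GLUE procedure described here — sample parameter sets, run the forward model, keep the "behavioural" runs whose misfit to the observed drawdown falls below a cutoff, and weight them by goodness of fit — can be sketched minimally as follows. The `simulate` and `param_sampler` callables are hypothetical stand-ins for the TOUGH2 forward runs and the coal/fracture/gas parameter priors; the threshold and the toy one-parameter model in the usage are illustrative only.

```python
import random

def glue_weights(observed, simulate, param_sampler, n_runs=1000, threshold=1.0):
    """Minimal GLUE sketch: Monte Carlo sample parameter sets, score each
    simulated drawdown curve against the observations, keep behavioural runs,
    and return normalised inverse-error likelihood weights."""
    behavioural = []
    for _ in range(n_runs):
        params = param_sampler()
        sim = simulate(params)
        # sum-of-squares misfit between simulated and observed drawdown
        sse = sum((s - o) ** 2 for s, o in zip(sim, observed))
        if sse < threshold:               # behavioural if misfit below cutoff
            behavioural.append((params, 1.0 / sse))
    total = sum(w for _, w in behavioural)
    return [(p, w / total) for p, w in behavioural]

# toy usage: a linear "drawdown" model with one scaling parameter near 1.0
random.seed(0)
observed = [1.0, 2.0, 3.0]
posterior = glue_weights(observed,
                         simulate=lambda p: [p * t for t in (1.0, 2.0, 3.0)],
                         param_sampler=lambda: random.uniform(0.5, 1.5),
                         n_runs=200)
```

The resulting weighted parameter ensemble, rather than a single best fit, is what GLUE uses to express calibration uncertainty.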
Exoplanet Classification and Yield Estimates for Direct Imaging Missions
NASA Astrophysics Data System (ADS)
Kopparapu, Ravi Kumar; Hébrard, Eric; Belikov, Rus; Batalha, Natalie M.; Mulders, Gijs D.; Stark, Chris; Teal, Dillon; Domagal-Goldman, Shawn; Mandell, Avi
2018-04-01
Future NASA concept missions that are currently under study, like the Habitable Exoplanet Imaging Mission (HabEx) and the Large Ultraviolet Optical Infrared Surveyor, could discover a large diversity of exoplanets. We propose here a classification scheme that distinguishes exoplanets into different categories based on their size and incident stellar flux, for the purpose of providing the expected number of exoplanets observed (yield) with direct imaging missions. The boundaries of this classification can be computed using the known chemical behavior of gases and condensates at different pressures and temperatures in a planetary atmosphere. In this study, we initially focus on condensation curves for sphalerite (ZnS), H2O, CO2, and CH4. The order in which these species condense in a planetary atmosphere defines the boundaries between different classes of planets. Broadly, the planets are divided into rocky planets (0.5–1.0 R⊕), super-Earths (1.0–1.75 R⊕), sub-Neptunes (1.75–3.5 R⊕), sub-Jovians (3.5–6.0 R⊕), and Jovians (6–14.3 R⊕) based on their planet sizes, and "hot," "warm," and "cold" based on the incident stellar flux. We then calculate planet occurrence rates within these boundaries for different kinds of exoplanets, η_planet, using the community coordinated results of NASA's Exoplanet Program Analysis Group's Science Analysis Group-13 (SAG-13). These occurrence rate estimates are in turn used to estimate the expected exoplanet yields for direct imaging missions of different telescope diameters.
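The size boundaries above translate directly into a lookup table; a minimal sketch using only the radius classes quoted in the abstract (the hot/warm/cold flux dimension, which depends on the species condensation curves, is omitted here):

```python
def classify_planet(radius_earth):
    """Assign a size class from the radius boundaries quoted in the abstract
    (0.5-1.0, 1.0-1.75, 1.75-3.5, 3.5-6.0, 6-14.3 Earth radii).
    Bins are half-open [lo, hi) for unambiguous boundary handling."""
    bounds = [(0.5, 1.0, "rocky"),
              (1.0, 1.75, "super-Earth"),
              (1.75, 3.5, "sub-Neptune"),
              (3.5, 6.0, "sub-Jovian"),
              (6.0, 14.3, "Jovian")]
    for lo, hi, name in bounds:
        if lo <= radius_earth < hi:
            return name
    return "outside classification"

print(classify_planet(1.0))   # super-Earth
print(classify_planet(2.4))   # sub-Neptune
```

The half-open convention at each boundary is an assumption for illustration; the paper computes the boundaries from atmospheric chemistry rather than treating them as fixed bin edges.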
Metabolism of dinosaurs as determined from their growth.
Lee, Scott A
2015-09-01
A model based on cellular properties is used to analyze the mass growth curves of 20 dinosaurs. This analysis yields the first measurement of the average cellular metabolism of dinosaurs. The organismal metabolism is also determined. The cellular metabolism of dinosaurs is found to decrease with mass at a slower rate than is observed in extant animals. The organismal metabolism increases with the mass of the dinosaur. These results come from both the Saurischia and Ornithischia branches of Dinosauria, suggesting that the observed metabolic features were common to all dinosaurs. The results from dinosaurs are compared to data from extant placental and marsupial mammals, a monotreme, and altricial and precocial birds, reptiles, and fish. Dinosaurs had cellular and organismal metabolisms in the range observed in extant mesotherms.
Theory of Financial Risk and Derivative Pricing
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe; Potters, Marc
2009-01-01
Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.
Theory of Financial Risk and Derivative Pricing - 2nd Edition
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe; Potters, Marc
2003-12-01
Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.
NASA Technical Reports Server (NTRS)
Palmer, J. M. (Principal Investigator); Slater, P. N.
1984-01-01
The newly built Caste spectropolarimeters gave satisfactory performance during tests in the solar radiometer and helicopter modes. A bandwidth normalization technique based on analysis of the moments of the spectral responsivity curves was used to analyze the spectral bands of the MSS and TM subsystems of LANDSAT 4 and 5 satellites. Results include the effective wavelength, the bandpass, the wavelength limits, and the normalized responsivity for each spectral channel. Temperature coefficients for TM PF channel 6 were also derived. The moments normalization method used yields sensor parameters whose derivation is independent of source characteristics (i.e., incident solar spectral irradiance, atmospheric transmittance, or ground reflectance). The errors expected using these parameters are lower than those expected using other normalization methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samec, R. G.; Melton, R. A.; Figg, E. R.
GSC 3355 0394 has an EB-type light curve, which is dominated by hot and cool spot activity. It displays night-to-night variations in light-curve shape. The period study yields six new times of minimum light and the first precision ephemeris, HJD T_min I = 2454408.9547 ± 0.0017 + (0.4621603 ± 0.0000008) d × E. V R_c I_c standard magnitudes are presented. BVRI Wilson synthetic light-curve solutions are calculated for both a Mode 4 (V1010 Oph-type, semidetached, more massive component filling its Roche lobe) configuration and a Mode 3 contact configuration (fill-out 100%, or critical contact). The critical contact solution has the lowest residuals. Four major spot regions are needed to model this binary; at least one is evidently a stream spot.
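A linear ephemeris like the one above predicts future times of minimum light directly from the reference epoch and period; a small sketch using the published values (uncertainties omitted):

```python
def predicted_minimum(cycle, t0=2454408.9547, period=0.4621603):
    """Predicted heliocentric Julian date of primary minimum from the linear
    ephemeris HJD T_min I = T0 + P * E, with T0 (days) and P (days) taken
    from the abstract and E the integer cycle count."""
    return t0 + period * cycle

# E = 0 recovers the reference minimum; E = 100 falls ~46.2 days later
print(round(predicted_minimum(0), 4))    # 2454408.9547
print(round(predicted_minimum(100), 4))  # 2454455.1707
```

Observed-minus-calculated (O-C) residuals against this line are what a period study such as this one examines for evidence of period change.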
NASA Technical Reports Server (NTRS)
Esposito, J. J.; Zabora, R. F.
1975-01-01
Pertinent mechanical and physical properties of six high conductivity metals were determined. The metals included Amzirc, NARloy Z, oxygen free pure copper, electroformed copper, fine silver, and electroformed nickel. Selection of these materials was based on their possible use in high performance reusable rocket nozzles. The typical room temperature properties determined for each material included tensile ultimate strength, tensile yield strength, elongation, reduction of area, modulus of elasticity, Poisson's ratio, density, specific heat, thermal conductivity, and coefficient of thermal expansion. Typical static tensile stress-strain curves, cyclic stress-strain curves, and low-cycle fatigue life curves are shown. Properties versus temperature are presented in graphical form for temperatures from 27.6K (-410 F) to 810.9K (1000 F).
High-pressure melting curve of hydrogen.
Davis, Sergio M; Belonoshko, Anatoly B; Johansson, Börje; Skorodumova, Natalia V; van Duin, Adri C T
2008-11-21
The melting curve of hydrogen was computed for pressures up to 200 GPa using molecular dynamics. The inter- and intramolecular interactions were described by the reactive force field (ReaxFF) model. The model describes the pressure-volume equation of state of solid hydrogen in good agreement with experiment up to pressures over 150 GPa; however, the corresponding equation of state for the liquid deviates considerably from density functional theory calculations. As a result, the computed melting curve, although it shares most of the known features, yields considerably lower melting temperatures compared to extrapolations of the available diamond anvil cell data. This failure of the ReaxFF model, which can reproduce many physical and chemical properties (including chemical reactions in hydrocarbons) of solid hydrogen, hints at an important change in the mechanism of interaction of hydrogen molecules in the liquid state.
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther
2016-01-01
The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased usage in the aerospace and automotive communities. In order to address a series of issues identified by the aerospace community as being desirable to include in a next generation composite impact model, an orthotropic, macroscopic constitutive model incorporating both plasticity and damage suitable for implementation within the commercial LS-DYNA computer code is being developed. The plasticity model is based on extending the Tsai-Wu composite failure model into a strain hardening-based orthotropic plasticity model with a non-associative flow rule. The evolution of the yield surface is determined based on tabulated stress-strain curves in the various normal and shear directions and is tracked using the effective plastic strain. To compute the evolution of damage, a strain equivalent semi-coupled formulation is used in which a load in one direction results in a stiffness reduction in multiple material coordinate directions. A detailed analysis is carried out to ensure that the strain equivalence assumption is appropriate for the derived plasticity and damage formulations that are employed in the current model. Procedures to develop the appropriate input curves for the damage model are presented and the process required to develop an appropriate characterization test matrix is discussed.
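The tabulated-input idea — look up the current yield stress from the effective plastic strain on a user-supplied stress-strain curve — reduces to simple interpolation. The following is a sketch with illustrative values, not the model's actual tables or LS-DYNA's internal lookup:

```python
import bisect

def yield_stress(eff_plastic_strain, strain_table, stress_table):
    """Linearly interpolate a tabulated hardening curve, the way a
    tabulated-input plasticity model evaluates the current yield stress
    from the effective plastic strain. Tables must be sorted by strain."""
    i = bisect.bisect_right(strain_table, eff_plastic_strain)
    if i == 0:
        return stress_table[0]           # below the table: clamp to first point
    if i >= len(strain_table):
        return stress_table[-1]          # beyond the table: hold the last value
    x0, x1 = strain_table[i - 1], strain_table[i]
    y0, y1 = stress_table[i - 1], stress_table[i]
    return y0 + (y1 - y0) * (eff_plastic_strain - x0) / (x1 - x0)

# illustrative hardening curve (effective plastic strain, stress in MPa)
strains = [0.0, 0.01, 0.05, 0.10]
stresses = [300.0, 330.0, 380.0, 400.0]
print(round(yield_stress(0.03, strains, stresses), 2))  # 355.0
```

In the actual model a separate table would be supplied for each normal and shear direction of the orthotropic yield surface; this sketch shows only the one-dimensional lookup.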
Kalachanis, Dimitrios; Manetas, Yiannis
2010-07-01
Limited evidence to date indicates low linear photosynthetic electron flow and CO2 assimilation rates in non-foliar chloroplasts. In this investigation, we used chlorophyll fluorescence techniques to locate possible limiting steps in photosystem function in exposed, non-stressed green fruits (both pericarps and seeds) of three species, while corresponding leaves served as controls. Compared with leaves, fruit photosynthesis was characterized by less photon trapping and lower quantum yields of electron flow, while non-photochemical quenching was higher and potentially linked to enhanced carotenoid/chlorophyll ratios. Analysis of fast chlorophyll fluorescence rise curves revealed possible limitations on both the donor (oxygen-evolving complex) and the acceptor (QA− → intermediate carriers) sides of photosystem II (PSII), indicating innately low PSII photochemical activity. On the other hand, PSI was characterized by faster reduction of its final electron acceptors and their small pool sizes. We argue that the fast reductive saturation of final PSI electron acceptors may divert electrons back to intermediate carriers, facilitating a cyclic flow around PSI, while the partial inactivation of linear flow precludes strong reduction of plastoquinone. As such, the photosynthetic attributes of fruit chloroplasts may act to replenish the ATP lost because of the hypoxia usually encountered in sink organs with high diffusive resistance to gas exchange.
Measurement of radial artery contrast intensity to assess cardiac microbubble behavior.
Sosnovik, David E; Januzzi, James L; Church, Charles C; Mertsch, Judith A; Sears, Andrea L; Fetterman, Robert C; Walovitch, Richard C; Picard, Michael H
2003-12-01
We sought to determine whether analysis of the contrast signal from the radial artery is better able to reflect changes in left ventricular (LV) microbubble dynamics than the signal from the LV itself. Assessment of microbubble behavior from images of the LV may be affected by attenuation from overlying microbubbles and nonuniform background signal intensities. The signal intensity from contrast in a peripheral artery is not affected by these artifacts and may, thus, be more accurate. After injection of a contrast bolus into a peripheral vein, signal intensity was followed simultaneously in the LV and radial artery. The measurements were repeated using continuous, triggered, low and high mechanical index harmonic imaging of the LV. Peak and integrated signal intensities ranged from 25 dB and 1550 dB/s, respectively, with radial artery imaging to 5.6 dB and 471 dB/s with ventricular imaging. Although differences in microbubble behavior during the different imaging protocols could be determined from both the LV and radial artery curves, analysis of the radial artery curves yielded more consistent and robust differences. The signal from microbubbles in the radial artery is not affected by shadowing and is, thus, a more accurate reflection of microbubble behavior in the LV than the signal from the LV itself. This may have important implications for the measurement of myocardial perfusion by contrast echocardiography.
Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A
2017-12-01
In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of cerebral arterial trees. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features, and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using a receiver operating characteristic curve. Geometric accuracy evaluation showed agreement between the constructed mesh and the raw MRA data sets, with an area-under-the-curve value of 0.87. Parametric meshing yielded, on average, 36.6% and 21.7% improvements in orthogonal and equiangular skew quality over unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree, down to the level of pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications.
Rousson, Valentin; Zumbrunn, Thomas
2011-06-22
Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and into a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome, and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Eventually, we expose that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
2011-01-01
Background Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and into a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. Methods We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. Results We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome, and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Eventually, we expose that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. Conclusions We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application. PMID:21696604
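The net benefit that decision curve analysis plots is, in its standard form, the true-positive rate minus the false-positive rate weighted by the odds of the threshold probability. The abstracts above do not restate the formula, so this sketch follows that standard definition with illustrative counts:

```python
def net_benefit(tp, fp, n, p_threshold):
    """Net benefit at threshold probability pt, the quantity plotted in
    decision curve analysis: TP/n - (FP/n) * pt/(1 - pt). The odds factor
    pt/(1 - pt) converts false positives into true-positive equivalents."""
    return tp / n - (fp / n) * (p_threshold / (1.0 - p_threshold))

# toy example: 100 subjects, a model flags 30 true and 10 false positives,
# evaluated at a treatment threshold of pt = 0.2
print(round(net_benefit(30, 10, 100, 0.2), 3))  # 0.275
```

The "overall net benefit" introduced in the paper combines this treated-side quantity with an analogous term for the untreated; the case-control correction replaces the sample prevalence implicit in TP/n and FP/n with an external prevalence estimate.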
Relative fission product yield determination in the USGS TRIGA Mark I reactor
NASA Astrophysics Data System (ADS)
Koehl, Michael A.
Fission product yield data sets are one of the most important and fundamental compilations of basic information in the nuclear industry. These data have a wide range of applications, including nuclear fuel burnup and nonproliferation safeguards. Relative fission yields constitute a major fraction of the reported yield data and reduce the number of required absolute measurements. Radiochemical separations of fission products reduce interferences, facilitate the measurement of low-level radionuclides, and are instrumental in the analysis of low-yielding symmetrical fission products. This is especially useful in the measurement of the valley nuclides and those on the extreme wings of the mass yield curve, including lanthanides, where absolute yields have high errors. This overall project was conducted in three stages: characterization of the neutron flux in irradiation positions within the U.S. Geological Survey TRIGA Mark I Reactor (GSTR), determining the mass attenuation coefficients of precipitates used in radiochemical separations, and measuring the relative fission products in the GSTR. Using the Westcott convention, the Westcott flux, modified spectral index, neutron temperature, and gold-based cadmium ratios were determined for various sampling positions in the USGS TRIGA Mark I reactor. The differential neutron energy spectrum measurement was obtained using the computer iterative code SAND-II-SNL. The mass attenuation coefficients for molecular precipitates were determined through experiment and compared to results using the EGS5 Monte Carlo computer code. Difficulties associated with sufficient production of fission product isotopes in research reactors limit the ability to complete a direct, experimental assessment of mass attenuation coefficients for these isotopes. Experimental attenuation coefficients of radioisotopes produced through neutron activation agree well with the EGS5 calculated results. 
This suggests mass attenuation coefficients of molecular precipitates can be approximated using EGS5, especially in the instance of radioisotopes produced predominantly through uranium fission. Relative fission product yields were determined for three sampling positions in the USGS TRIGA Mark I reactor through radiochemical analysis. The relative mass yield distribution for valley nuclides decreases with epithermal neutrons compared to thermal neutrons. Additionally, a proportionality constant which related the measured beta activity of a fission product to the number of fissions that occur in a sample of irradiated uranium was determined for the detector used in this study and used to determine the thermal and epithermal flux. These values agree well with a previous study which used activation foils to determine the flux. The results of this project clearly demonstrate that R-values can be measured in the GSTR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, R. D., E-mail: rdguerrerom@unal.edu.co; Arango, C. A., E-mail: caarango@icesi.edu.co; Reyes, A., E-mail: areyesv@unal.edu.co
We recently proposed a Quantum Optimal Control (QOC) method constrained to build pulses from analytical pulse shapes [R. D. Guerrero et al., J. Chem. Phys. 143(12), 124108 (2015)]. This approach was applied to control the dissociation channel yields of the diatomic molecule KH, considering three potential energy curves and one degree of freedom. In this work, we utilized this methodology to study the strong field control of the cis-trans photoisomerization of 11-cis retinal. This more complex system was modeled with a Hamiltonian comprising two potential energy surfaces and two degrees of freedom. The resulting optimal pulse, made of 6 linearly chirped pulses, was capable of controlling the population of the trans isomer on the ground electronic surface for nearly 200 fs. The simplicity of the pulse generated with our QOC approach offers two clear advantages: a direct analysis of the sequence of events occurring during the driven dynamics, and its reproducibility in the laboratory with current laser technologies.
Improved Photometry for the DASCH Pipeline
NASA Astrophysics Data System (ADS)
Tang, Sumin; Grindlay, Jonathan; Los, Edward; Servillat, Mathieu
2013-07-01
The Digital Access to a Sky Century@Harvard (DASCH) project is digitizing the ˜500,000 glass plate images obtained (full sky) by the Harvard College Observatory from 1885 to 1992. Astrometry and photometry for each resolved object are derived with photometric rms values of ˜0.15 mag for the initial photometry analysis pipeline. Here we describe new developments for DASCH photometry, applied to the Kepler field, that have yielded further improvements, including better identification of image blends and plate defects by measuring image profiles and astrometric deviations. A local calibration procedure using nearby stars in a similar magnitude range as the program star (similar to what has been done for visual photometry from the plates) yields additional improvement for a net photometric rms of ˜0.1 mag. We also describe statistical measures of light curves that are now used in the DASCH pipeline processing to identify new variables autonomously. The DASCH photometry methods described here are used in the pipeline processing for the data releases of DASCH data, as well as for a forthcoming paper on the long-term variables discovered by DASCH in the Kepler field.
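One simple way to realise a local calibration step of the kind described above is a median zero-point offset against nearby comparison stars of similar brightness; this is an illustrative assumption, and the actual DASCH procedure is more elaborate:

```python
import statistics

def local_zero_point(nearby_catalog_mags, nearby_measured_mags):
    """Sketch of a local photometric calibration: the zero-point offset is
    the median difference between catalogue and measured magnitudes of
    nearby comparison stars in a similar magnitude range as the program
    star; the median resists outliers such as blends and plate defects."""
    diffs = [c - m for c, m in zip(nearby_catalog_mags, nearby_measured_mags)]
    return statistics.median(diffs)

# illustrative comparison stars (catalogue mag, measured instrumental mag)
cat = [12.1, 12.5, 13.0, 12.8]
meas = [11.9, 12.3, 12.9, 12.6]
offset = local_zero_point(cat, meas)
print(round(offset, 2))  # 0.2
```

The offset would then be added to the program star's measured magnitude, tying each plate locally to the reference catalogue rather than relying on a single plate-wide solution.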
Global preamplification simplifies targeted mRNA quantification
Kroneis, Thomas; Jonasson, Emma; Andersson, Daniel; Dolatabadi, Soheila; Ståhlberg, Anders
2017-01-01
The need to perform gene expression profiling using next generation sequencing and quantitative real-time PCR (qPCR) on small sample sizes and single cells is rapidly expanding. However, to analyse few molecules, preamplification is required. Here, we studied global and target-specific preamplification using 96 optimised qPCR assays. To evaluate the preamplification strategies, we monitored the reactions in real-time using SYBR Green I detection chemistry followed by melting curve analysis. Next, we compared yield and reproducibility of global preamplification to that of target-specific preamplification by qPCR using the same amount of total RNA. Global preamplification generated 9.3-fold lower yield and 1.6-fold lower reproducibility than target-specific preamplification. However, the performance of global preamplification is sufficient for most downstream applications and offers several advantages over target-specific preamplification. To demonstrate the potential of global preamplification we analysed the expression of 15 genes in 60 single cells. In conclusion, we show that global preamplification simplifies targeted gene expression profiling of small sample sizes by a flexible workflow. We outline the pros and cons for global preamplification compared to target-specific preamplification. PMID:28332609
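The yield comparison above rests on standard qPCR arithmetic: a fold-difference in template maps to a Cq shift through the amplification efficiency. A minimal sketch (the perfect-efficiency assumption is illustrative; the study's 96 assays may have individually calibrated efficiencies):

```python
import math

def yield_fold_change(cq_ref, cq_test, efficiency=1.0):
    # Fold difference in starting template implied by a Cq shift; with perfect
    # efficiency (E = 1) every cycle doubles the product, so fold = 2**dCq
    return (1.0 + efficiency) ** (cq_test - cq_ref)
```

Under this assumption, the reported 9.3-fold lower yield of global preamplification corresponds to a Cq delay of log2(9.3), about 3.2 cycles, relative to target-specific preamplification.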
Yields of southwestern pinyon-juniper woodlands
Frederick W. Smith; Thomas Schuler
1988-01-01
Site quality and growth-growing stock relations were developed for southwestern woodlands of pinyon (Pinus edulis) and one-seed juniper (Juniperus monosperma) or Utah juniper (J. osteosperma). Anamorphic height-age site index curves for pinyon were developed from a regional sample of 60 woodlands. Site index was...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weaver, Jordan S.; Khosravani, Ali; Castillo, Andrew
2016-06-14
Recent spherical nanoindentation protocols have proven robust at capturing the local elastic-plastic response of polycrystalline metal samples at length scales much smaller than the grain size. In this work, we extend these protocols to length scales that include multiple grains to recover microindentation stress-strain curves. These new protocols are first established in this paper and then demonstrated for Al-6061 by comparing the measured indentation stress-strain curves with the corresponding measurements from uniaxial tension tests. More specifically, the scaling factor between the uniaxial yield strength and the indentation yield strength was determined to be about 1.9, which is significantly lower than the value of 2.8 used commonly in the literature. Furthermore, the reasons for this difference are discussed. Second, the benefits of these new protocols in facilitating high-throughput exploration of process-property relationships are demonstrated through a simple case study.
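A scaling factor of this kind can be estimated from paired measurements as a least-squares slope through the origin. The estimator and the example values below are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def indentation_scaling_factor(uniaxial_yield, indentation_yield):
    # Least-squares slope k through the origin for Y_ind = k * Y_uniaxial
    x = np.asarray(uniaxial_yield, float)
    y = np.asarray(indentation_yield, float)
    return float(x @ y / (x @ x))

def uniaxial_from_indentation(y_ind, k=1.9):
    # Convert an indentation yield strength using the reported factor of ~1.9
    return y_ind / k
```

Using 1.9 instead of the literature value of 2.8 changes the inferred uniaxial yield strength by nearly 50%, which is why the discrepancy matters for high-throughput property estimation.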
DOE Office of Scientific and Technical Information (OSTI.GOV)
Logsdon, W.A.; Begley, J.A.; Gottshall, C.L.
1978-03-01
The ASME Boiler and Pressure Vessel Code, Section III, Article G-2000, requires that dynamic fracture toughness data be developed for materials with specified minimum yield strengths greater than 50 ksi to provide verification and utilization of the ASME specified minimum reference toughness K_IR curve. In order to qualify ASME SA508 Class 2a and ASME SA533 Grade A Class 2 pressure vessel steels (minimum yield strengths equal 65 and 70 kip/in.², respectively) per this requirement, dynamic fracture toughness tests were performed on these materials. All dynamic fracture toughness values of SA508 Class 2a base and HAZ material, SA533 Grade A Class 2 base and HAZ material, and applicable weld metals exceeded the ASME specified minimum reference toughness K_IR curve.
Absolute dimensions and masses of eclipsing binaries. V. IQ Persei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacy, C.H.; Frueh, M.L.
1985-08-01
New photometric and spectroscopic observations of the 1.7 day eclipsing binary IQ Persei (B8 + A6) have been analyzed to yield very accurate fundamental properties of the system. Reticon spectroscopic observations obtained at McDonald Observatory were used to determine accurate radial velocities of both stars in this slightly eccentric large light-ratio binary. A new set of VR light curves obtained at McDonald Observatory were analyzed by synthesis techniques, and previously published UBV light curves were reanalyzed to yield accurate photometric orbits. Orbital parameters derived from both sets of photometric observations are in excellent agreement. The absolute dimensions, masses, luminosities, and apsidal motion period (140 yr) derived from these observations agree well with the predictions of theoretical stellar evolution models. The A6 secondary is still very close to the zero-age main sequence. The B8 primary is about one-third of the way through its main-sequence evolution. 27 references.
Cracking the chocolate egg problem: polymeric films coated on curved substrates
NASA Astrophysics Data System (ADS)
Brun, Pierre-Thomas; Lee, Anna; Marthelot, Joel; Balestra, Gioele; Gallaire, François; Reis, Pedro
2015-11-01
Inspired by the traditional chocolate egg recipe, we show that pouring a polymeric solution onto spherical molds yields a simple and robust route to fabricating thin curved elastic shells. The drainage dynamics naturally leads to uniform coatings frozen in time as the polymer cures, which are subsequently peeled off their mold. We show how the polymer curing affects the drainage dynamics and eventually selects the shell thickness and sets its uniformity. To this end, we perform coating experiments using two silicone-based elastomers, vinylpolysiloxane (VPS) and polydimethylsiloxane (PDMS). These results are rationalized by combining numerical simulations of the lubrication flow field with a theoretical model of the dynamics, yielding an analytical prediction of the formed shell characteristics. In particular, the robustness of the coating technique and its flexibility, two critical features for providing a generic framework for future studies, are shown to be an inherent consequence of the flow field (memory loss). The shell structure is both independent of initial conditions and tailorable by changing a single experimental parameter.
Bem, Daryl; Tressoldi, Patrizio; Rabeyron, Thomas; Duggan, Michael
2015-01-01
In 2011, one of the authors (DJB) published a report of nine experiments in the Journal of Personality and Social Psychology purporting to demonstrate that an individual's cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded, a generalized variant of the phenomenon traditionally denoted by the term precognition. To encourage replications, all materials needed to conduct them were made available on request. We here report a meta-analysis of 90 experiments from 33 laboratories in 14 countries which yielded an overall effect greater than 6 sigma (z = 6.40, p = 1.2 × 10^-10) with an effect size (Hedges' g) of 0.09. A Bayesian analysis yielded a Bayes factor of 5.1 × 10^9, greatly exceeding the criterion value of 100 for "decisive evidence" in support of the experimental hypothesis. When DJB's original experiments are excluded, the combined effect size for replications by independent investigators is 0.06 (z = 4.16, p = 1.1 × 10^-5), and the BF value is 3,853, again exceeding the criterion for "decisive evidence." The number of potentially unretrieved experiments required to reduce the overall effect size of the complete database to a trivial value of 0.01 is 544, and seven of eight additional statistical tests support the conclusion that the database is not significantly compromised by either selection bias or by intense "p-hacking", the selective suppression of findings or analyses that failed to yield statistical significance. P-curve analysis, a recently introduced statistical technique, estimates the true effect size of the experiments to be 0.20 for the complete database and 0.24 for the independent replications, virtually identical to the effect size of DJB's original experiments (0.22) and the closely related "presentiment" experiments (0.21).
We discuss the controversial status of precognition and other anomalous effects collectively known as psi.
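The combination of study-level statistics in a meta-analysis of this kind can be sketched with Stouffer's method for z-scores and a sample-size-weighted mean effect size. These are generic estimators, not necessarily the exact weighting scheme used in the analysis above:

```python
import math

def stouffer_z(z_scores, weights=None):
    # Stouffer's weighted combination of per-study z-scores
    if weights is None:
        weights = [1.0] * len(z_scores)
    num = sum(w * z for w, z in zip(weights, z_scores))
    den = math.sqrt(sum(w * w for w in weights))
    return num / den

def weighted_mean_effect(effects, sample_sizes):
    # Sample-size-weighted mean of per-study effect sizes (e.g. Hedges' g)
    total = sum(sample_sizes)
    return sum(n * g for g, n in zip(effects, sample_sizes)) / total
```

With equal weights, combining k studies that each report z scales the pooled statistic by sqrt(k), which is how many individually weak results can reach a large combined z.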
Robust, Adaptive Functional Regression in Functional Mixed Model Framework.
Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S
2011-09-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
Methods of Technological Forecasting,
1977-05-01
Trend Extrapolation; Progress Curve; Analogy; Trend Correlation; Substitution Analysis or Substitution Growth Curves; Envelope Curve; Advances in the State of the Art; Technological Mapping; Contextual Mapping; Matrix Input-Output Analysis; Mathematical Models; Simulation Models; Dynamic Modelling. CHAPTER IV ... Generation; Interaction between Needs and Possibilities; Map of the Technological Future; Cross-Impact Matrix; Discovery Matrix; Morphological Analysis
NASA Astrophysics Data System (ADS)
de Barros, Felipe P. J.; Ezzedine, Souheil; Rubin, Yoram
2012-02-01
The significance of conditioning predictions of environmental performance metrics (EPMs) on hydrogeological data in heterogeneous porous media is addressed. Conditioning EPMs on available data reduces uncertainty and increases the reliability of model predictions. We present a rational and concise approach to investigate the impact of conditioning EPMs on data as a function of the location of the environmentally sensitive target receptor, data types and spacing between measurements. We illustrate how the concept of comparative information yield curves introduced in de Barros et al. [de Barros FPJ, Rubin Y, Maxwell R. The concept of comparative information yield curves and its application to risk-based site characterization. Water Resour Res 2009;45:W06401. doi:10.1029/2008WR007324] could be used to assess site characterization needs as a function of flow and transport dimensionality and EPMs. For a given EPM, we show how alternative uncertainty reduction metrics yield distinct gains of information from a variety of sampling schemes. Our results show that uncertainty reduction is EPM dependent (e.g., travel times) and does not necessarily indicate uncertainty reduction in an alternative EPM (e.g., human health risk). The results show how the position of the environmental target, flow dimensionality and the choice of the uncertainty reduction metric can be used to assist in field sampling campaigns.
Electron impact fragmentation of adenine: partial ionization cross sections for positive fragments
NASA Astrophysics Data System (ADS)
van der Burgt, Peter J. M.; Finnegan, Sinead; Eden, Samuel
2015-07-01
Using computer-controlled data acquisition we have measured mass spectra of positive ions for electron impact on adenine, with electron energies up to 100 eV. Ion yield curves for 50 ions have been obtained and normalized by comparing their sum to the average of calculated total ionization cross sections. Appearance energies have been determined for 37 ions; for 20 ions for the first time. All appearance energies are consistent with the fragmentation pathways identified in the literature. Second onset energies have been determined for 12 fragment ions (for 11 ions for the first time), indicating the occurrence of more than one fragmentation process e.g. for 39 u (C2HN+) and 70 u (C2H4N3+). Matching ion yield shapes (118-120 u, 107-108 u, 91-92 u, and 54-56 u) provide new evidence supporting closely related fragmentation pathways and are attributed to hydrogen rearrangement immediately preceding the fragmentation. We present the first measurement of the ion yield curve of the doubly charged parent ion (67.5 u), with an appearance energy of 23.5 ± 1.0 eV. Contribution to the Topical Issue "COST Action Nano-IBCT: Nano-scale Processes Behind Ion-Beam Cancer Therapy", edited by Andrey Solov'yov, Nigel Mason, Gustavo García, Eugene Surdutovich.
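Appearance energies of this kind are commonly extracted by fitting a linear onset to the near-threshold ion yield. Below is a sketch assuming a simple grid search over candidate onsets with a closed-form slope fit; this is an illustrative method, not necessarily the authors' fitting procedure:

```python
import numpy as np

def fit_appearance_energy(energies, yields, ae_candidates):
    # Fit Y(E) = a * max(E - AE, 0) by scanning candidate onsets AE and
    # solving the slope a in closed form for each; return the best AE
    E = np.asarray(energies, float)
    Y = np.asarray(yields, float)
    best_ae, best_sse = None, np.inf
    for ae in ae_candidates:
        x = np.clip(E - ae, 0.0, None)
        denom = float(x @ x)
        a = float(x @ Y) / denom if denom > 0 else 0.0
        sse = float(np.sum((Y - a * x) ** 2))
        if sse < best_sse:
            best_ae, best_sse = ae, sse
    return best_ae
```

A second onset, as reported for 12 of the fragment ions, would show up as a residual linear rise above the first fit and can be located by repeating the scan on the residuals.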
Metallicity Differences in Type Ia Supernova Progenitors Inferred from Ultraviolet Spectra
NASA Astrophysics Data System (ADS)
Foley, Ryan J.; Kirshner, Robert P.
2013-05-01
Two "twin" Type Ia supernovae (SNe Ia), SNe 2011by and 2011fe, have extremely similar optical light-curve shapes, colors, and spectra, yet have different ultraviolet (UV) continua as measured in Hubble Space Telescope spectra and measurably different peak luminosities. We attribute the difference in the UV continua to significantly different progenitor metallicities. This is the first robust detection of different metallicities for SN Ia progenitors. Theoretical reasoning suggests that differences in metallicity also lead to differences in luminosity. SNe Ia with higher progenitor metallicities have lower 56Ni yields and lower luminosities for the same light-curve shape. SNe 2011by and 2011fe have different peak luminosities (ΔMV ≈ 0.6 mag), which correspond to different 56Ni yields: M_11fe(56Ni)/M_11by(56Ni) = 1.7 (+0.7/−0.5). From theoretical models that account for different neutron-to-proton ratios in progenitors, the differences in 56Ni yields for SNe 2011by and 2011fe imply that their progenitor stars were above and below solar metallicity, respectively. Although we can distinguish progenitor metallicities in a qualitative way from UV data, the quantitative interpretation in terms of abundances is limited by the present state of theoretical models.
Elastic and Plastic Behavior of an Ultrafine-Grained Mg Reinforced with BN Nanoparticles
NASA Astrophysics Data System (ADS)
Trojanová, Zuzanka; Dash, Khushbu; Máthis, Kristián; Lukáč, Pavel; Kasakewitsch, Alla
2018-04-01
Pure microcrystalline magnesium (µMg) reinforced with hexagonal boron nitride (hBN) nanoparticles was fabricated by a powder metallurgy process followed by hot extrusion. For comparison, pure magnesium powder was also consolidated by hot extrusion. Both materials exhibited a significant fiber texture. Mg-hBN nanocomposite (nc) and pure Mg specimens were deformed between room temperature and 300 °C in tension and compression. The yield strength, ultimate tensile and compressive strengths, and other characteristic stresses were evaluated and reported. The tensile and compressive strengths of the Mg-hBN nc are markedly superior to those of the monolithic counterpart as well as of Mg alloys. The compressive yield strength of µMg was 90 MPa, whereas the Mg-hBN nanocomposite showed 125 MPa at 200 °C. The tensile yield strength of µMg was 67 MPa, considerably lower than the 157 MPa recorded for the Mg-hBN nanocomposite at 200 °C. Under tensile stress the true stress-strain curves are flat, whereas the stress-strain curves observed in compression at temperatures up to 100 °C exhibited small local maxima at the onset of deformation followed by significant work hardening.
Warren, Ruth M L; Thompson, Deborah; Pointon, Linda J; Hoff, Rebecca; Gilbert, Fiona J; Padhani, Anwar R; Easton, Douglas F; Lakhani, Sunil R; Leach, Martin O
2006-06-01
To evaluate prospectively the accuracy of a lesion classification system designed for use in a magnetic resonance (MR) imaging high-breast-cancer-risk screening study. All participating patients provided written informed consent. Ethics committee approval was obtained. The results of 1541 contrast material-enhanced breast MR imaging examinations were analyzed; 1441 screening examinations were performed in 638 women aged 24-51 years at high risk for breast cancer, and 100 examinations were performed in 100 women aged 23-81 years. Lesion analysis was performed in 991 breasts, which were divided into design (491 breasts) and testing (500 breasts) sets. The reference standard was histologic analysis of biopsy samples, fine-needle aspiration cytology, or minimal follow-up of 24 months. The scoring system involved the use of five features: morphology (MOR), pattern of enhancement (POE), percentage of maximal focal enhancement (PMFE), maximal signal intensity-time ratio (MITR), and pattern of contrast material washout (POCW). The system was evaluated by means of (a) assessment of interreader agreement, as expressed in kappa statistics, for 315 breasts in which both readers analyzed the same lesion, (b) assessment of the diagnostic accuracy of the scored components with receiver operating characteristic curve analysis, and (c) logistic regression analysis to determine which components of the scoring system were critical to the final score. A new simplified scoring system developed with the design set was applied to the testing set. There was moderate reader agreement regarding overall lesion outcome (ie, malignant, suspicious, or benign) (kappa=0.58) and less agreement regarding the scored components. The area under the receiver operating characteristic curve (AUC) for the overall lesion score, 0.88, was higher than the AUC for any one component. The components MOR, POE, and POCW yielded the best overall result. PMFE and MITR did not contribute to diagnostic utility. 
Applying a simplified scoring system to the testing set yielded a nonsignificantly (P=.2) higher AUC than did applying the original scoring system (sensitivity, 84%; specificity, 86.0%). Good diagnostic accuracy can be achieved by using simple qualitative descriptors of lesion enhancement, including POCW. In the context of screening, quantitative enhancement parameters appear to be less useful for lesion characterization. Copyright (c) RSNA, 2006.
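The AUC values reported above can be computed nonparametrically as the Mann-Whitney statistic over all malignant/benign score pairs. A minimal sketch:

```python
def auc(pos_scores, neg_scores):
    # Mann-Whitney AUC: probability that a malignant lesion outscores a
    # benign one, counting ties as half a win
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.88 for the overall lesion score means that a randomly chosen malignant lesion outscores a randomly chosen benign one 88% of the time.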
Spallation of Cu by 500- and 1570-MeV π⁻
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.; Ruth, T.J.
1978-11-01
Relative yields of 36 products extending from 7Be to 65Zn have been measured for the interaction of 500- and 1570-MeV negative pions with Cu. These results are compared with calculations from the ISOBAR model, with earlier studies of Cu spallation with lower (resonance) energy pions, energetic protons, and heavy ions. Relative yield patterns at both π⁻ energies show only slight differences when compared to spallation by protons of comparable energy. Calculations from the ISOBAR model adequately reproduce the shapes of the mass yield and charge yield of the experimental data for 500-MeV π⁻. The calculation, however, overestimates the yield of neutron-rich isotopes from deep spallation. At the 1570-MeV π⁻ energy the yield patterns, charge-dispersion, and mass-yield curves are nearly identical to those for 2-GeV proton spallation. These results suggest that pion-nucleon resonance effects probably decrease at higher energies and that limiting fragmentation and factorization concepts may be applied to understanding high-energy pion spallation.
Spatial derivatives of flow quantities behind curved shocks of all strengths
NASA Technical Reports Server (NTRS)
Darden, C. M.
1984-01-01
Explicit formulas in terms of shock curvature are developed for spatial derivatives of flow quantities behind a curved shock in two-dimensional inviscid steady flow. Factors which render the equations indeterminate as the shock strength approaches zero have been cancelled analytically, so that the formulas are valid for shocks of any strength. An application of the method is shown in the solution of shock coalescence when nonaxisymmetric effects are felt through derivatives in the circumferential direction. The solution of this problem requires flow derivatives behind the shock in both the axial and radial directions.
General Relativity Exactly Described by Use of Newton's Laws within a Curved Geometry
NASA Astrophysics Data System (ADS)
Savickas, David
2014-03-01
The connection between general relativity and Newtonian mechanics is shown to be much closer than generally recognized. When Newton's second law is written in a curved geometry by using the physical components of a vector as defined in tensor calculus, and by replacing distance within the momentum's velocity by the vector metric ds in a curved geometry, the second law can then be easily shown to be exactly identical to the geodesic equation of motion occurring in general relativity. By using a time whose vector direction is constant, as similarly occurs in Newtonian mechanics, this equation can be separated into two equations, one of which is a curved three-dimensional equation of motion and the other an equation for energy. For the gravitational field of an isolated particle, they yield the Schwarzschild equations. They can be used to describe gravitation for any array of masses for which the Newtonian gravitational potential is known, and are applied here to describe motion in the gravitational field of a thin mass-rod.
Hansmann, Jan; Evers, Maximilian J; Bui, James T; Lokken, R Peter; Lipnik, Andrew J; Gaba, Ron C; Ray, Charles E
2017-09-01
To evaluate albumin-bilirubin (ALBI) and platelet-albumin-bilirubin (PALBI) grades in predicting overall survival in high-risk patients undergoing conventional transarterial chemoembolization for hepatocellular carcinoma (HCC). This single-center retrospective study included 180 high-risk patients (142 men, 59 y ± 9) between April 2007 and January 2015. Patients were considered high-risk based on laboratory abnormalities before the procedure (bilirubin > 2.0 mg/dL, albumin < 3.5 mg/dL, platelet count < 60,000/mL, creatinine > 1.2 mg/dL); presence of ascites, encephalopathy, portal vein thrombus, or transjugular intrahepatic portosystemic shunt; or Model for End-Stage Liver Disease score > 15. Serum albumin, bilirubin, and platelet values were used to determine ALBI and PALBI grades. Overall survival was stratified by ALBI and PALBI grades with substratification by Child-Pugh class (CPC) and Barcelona Liver Clinic Cancer (BCLC) stage using Kaplan-Meier analysis. C-index was used to determine discriminatory ability and survival prediction accuracy. Median survival for 79 ALBI grade 2 patients and 101 ALBI grade 3 patients was 20.3 and 10.7 months, respectively (P < .0001). Median survival for 30 PALBI grade 2 and 144 PALBI grade 3 patients was 20.3 and 12.9 months, respectively (P = .0667). Substratification yielded distinct ALBI grade survival curves for CPC B (P = .0022, C-index 0.892), BCLC A (P = .0308, C-index 0.887), and BCLC C (P = .0287, C-index 0.839). PALBI grade demonstrated distinct survival curves for BCLC A (P = 0.0229, C-index 0.869). CPC yielded distinct survival curves for the entire cohort (P = .0019) but not when substratified by BCLC stage (all P > .05). ALBI and PALBI grades are accurate survival metrics in high-risk patients undergoing conventional transarterial chemoembolization for HCC. Use of these scores allows for more refined survival stratification within CPC and BCLC stage. Copyright © 2017 SIR. Published by Elsevier Inc. 
All rights reserved.
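The survival stratification above relies on the Kaplan-Meier estimator. A minimal sketch with a toy cohort (illustrative data only, not the study's patients):

```python
def kaplan_meier(times, events):
    # Kaplan-Meier estimate; events[i] is 1 for an observed death, 0 for
    # censoring. Returns the event times and the survival curve S(t).
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve_t, curve_s = [], []
    i = 0
    while i < len(data):
        t = data[i][0]
        at_t = [e for tt, e in data if tt == t]   # all subjects at this time
        deaths = sum(at_t)
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve_t.append(t)
            curve_s.append(s)
        n_at_risk -= len(at_t)
        i += len(at_t)
    return curve_t, curve_s

def median_survival(times, events):
    # First time at which the survival curve drops to 0.5 or below
    for t, s in zip(*kaplan_meier(times, events)):
        if s <= 0.5:
            return t
    return None
```

Median survival figures such as the 20.3 versus 10.7 months for ALBI grades 2 and 3 are read off curves of exactly this form, one per grade.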
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rest, A.; Scolnic, D.; Riess, A.
2014-11-01
We present griz_P1 light curves of 146 spectroscopically confirmed Type Ia supernovae (SNe Ia; 0.03 < z < 0.65) discovered during the first 1.5 yr of the Pan-STARRS1 Medium Deep Survey. The Pan-STARRS1 natural photometric system is determined by a combination of on-site measurements of the instrument response function and observations of spectrophotometric standard stars. We find that the systematic uncertainties in the photometric system are currently 1.2% without accounting for the uncertainty in the Hubble Space Telescope Calspec definition of the AB system. A Hubble diagram is constructed with a subset of 113 out of 146 SNe Ia that pass our light curve quality cuts. The cosmological fit to 310 SNe Ia (113 PS1 SNe Ia + 222 light curves from 197 low-z SNe Ia), using only supernovae (SNe) and assuming a constant dark energy equation of state and flatness, yields w = −1.120 +0.360/−0.206 (stat) +0.269/−0.291 (sys). When combined with BAO+CMB(Planck)+H_0, the analysis yields Ω_M = 0.280 +0.013/−0.012 and w = −1.166 +0.072/−0.069 including all identified systematics. The value of w is inconsistent with the cosmological constant value of −1 at the 2.3σ level. Tension endures after removing either the baryon acoustic oscillation (BAO) or the H_0 constraint, though it is strongest when including the H_0 constraint. If we include WMAP9 cosmic microwave background (CMB) constraints instead of those from Planck, we find w = −1.124 +0.083/−0.065, which diminishes the discord to <2σ. We cannot conclude whether the tension with flat ΛCDM is a feature of dark energy, new physics, or a combination of chance and systematic errors. The full Pan-STARRS1 SN sample with ~3 times as many SNe should provide more conclusive results.
NASA Astrophysics Data System (ADS)
Mazzoni, M.; Agati, G.; Troup, G. J.; Pratesi, R.
2003-09-01
The absorption spectra of bilirubins were deconvoluted by two Gaussian curves of equal width representing the exciton bands of the non-degenerate molecular system. The two bands were used to study the wavelength dependence of the (4Z, 15Z) → (4Z, 15E) configurational photoisomerization quantum yield of the bichromophoric bilirubin-IXα (BR-IX), the intrinsically asymmetric bile pigment associated with jaundice, and the symmetrically substituted bilirubins (bilirubin-IIIα and mesobilirubin-XIIIα), when they are irradiated in aqueous solution bound to human serum albumin (HSA). The same study was performed for BR-IX in ammoniacal methanol solution (NH4OH/MeOH). The quantum yields of the configurational photoprocesses were fitted with a combination function of the two Gaussian bands normalized to the total absorption, using the proportionality coefficients and a scaling factor as parameters. The decrease of the (4Z, 15Z) → (4Z, 15E) quantum yield with increasing wavelength, which occurs for wavelengths longer than the most probable Franck-Condon transition of the molecule, did not result in a unique function of the exciton absorptions. In particular we found two ranges corresponding to different exciton interactions with different proportionality coefficients and scaling factors. The wavelength-dependent photoisomerization of bilirubins was described as an abrupt change in quantum yield as soon as the resulting excitation was strongly localized in each chromophore. The change was correlated to a variation of the interaction between the two chromophores when the short-wavelength exciton absorption became vanishingly small. With the help of the circular dichroism (CD) spectrum of BR-IX in HSA, a small band was resolved in the bilirubin absorption spectrum, delivering part of the energy required for the (4Z, 15Z) → (4Z, 15E) photoisomerization of the molecule.
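The two-Gaussian, equal-width deconvolution can be sketched as a constrained least-squares fit. The band amplitudes, positions, and width below are hypothetical values chosen only to demonstrate the model form, not the fitted bilirubin parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(wl, a1, mu1, a2, mu2, width):
    # Sum of two Gaussian exciton bands constrained to a common width
    g1 = a1 * np.exp(-((wl - mu1) ** 2) / (2 * width ** 2))
    g2 = a2 * np.exp(-((wl - mu2) ** 2) / (2 * width ** 2))
    return g1 + g2

wl = np.linspace(380.0, 520.0, 281)       # wavelength grid, nm
true = (0.6, 425.0, 1.0, 465.0, 18.0)     # hypothetical band parameters
spectrum = two_gaussians(wl, *true)       # synthetic absorption spectrum
popt, _ = curve_fit(two_gaussians, wl, spectrum,
                    p0=(0.5, 420.0, 0.9, 460.0, 15.0))
```

Sharing a single width parameter between the two bands is what encodes the exciton-pair constraint; fitting independent widths would be a different (less physical) model.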
NASA Astrophysics Data System (ADS)
Hargrove, W. W.; Norman, S. P.; Kumar, J.; Hoffman, F. M.
2017-12-01
National-scale polar analysis of MODIS NDVI allows quantification of the degree of seasonality expressed by local vegetation, and also selects the optimal start/end of a local "phenological year" that is empirically customized for the vegetation growing at each location. Interannual differences in timing of phenology make direct comparisons of vegetation health and performance between years difficult, whether at the same or different locations. By "sliding" the two phenologies in time using a Procrustean linear time shift, any particular phenological event or "completion milestone" can be synchronized, allowing direct comparison of differences in timing of other remaining milestones. Going beyond a simple linear translation, time can be "rubber-sheeted," compressed or dilated. Considering one phenology curve to be a reference, the second phenology can be "rubber-sheeted" to fit that baseline as well as possible by stretching or shrinking time to match multiple control points, which can be any recognizable phenological events. Similar to "rubber sheeting" to georectify a map inside a GIS, rubber sheeting a phenology curve also yields a warping signature that shows, at every time and every location, how many days the adjusted phenology is ahead of or behind the phenological development of the reference vegetation. Using such temporal methods to "adjust" phenologies may help to quantify vegetation impacts from frost, drought, wildfire, insects and diseases by permitting the most commensurate quantitative comparisons with unaffected vegetation.
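The "rubber-sheeting" step above amounts to a piecewise-linear time warp between matched control points, from which a warping signature (days ahead or behind the reference) follows directly. A minimal sketch, with invented day-of-year milestones rather than the study's MODIS-derived ones:

```python
import numpy as np

# Hypothetical matched phenological control points (day of year):
# greenup, peak greenness, onset of senescence, dormancy.
ref_milestones  = np.array([ 90.0, 150.0, 240.0, 300.0])  # reference phenology
test_milestones = np.array([100.0, 165.0, 250.0, 305.0])  # delayed phenology

def warp_time(t, src, dst):
    """Piecewise-linear 'rubber sheet': map times on the test curve
    onto the reference timeline via the matched control points."""
    return np.interp(t, src, dst)

# Warping signature: positive = test vegetation lags the reference by that many days.
days = np.arange(100, 306)
signature = days - warp_time(days, test_milestones, ref_milestones)
```

Between control points the lag varies linearly, so the signature shows the test site 10 days behind at greenup shrinking to 5 days behind by dormancy in this example.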
Bedside risk estimation of morbidly adherent placenta using simple calculator.
Maymon, R; Melcer, Y; Pekar-Zlotin, M; Shaked, O; Cuckle, H; Tovbin, J
2018-03-01
To construct a calculator for 'bedside' estimation of morbidly adherent placenta (MAP) risk based on ultrasound (US) findings. This retrospective study included all pregnant women with at least one previous cesarean delivery attending our US unit between December 2013 and January 2017. The examination was based on a scoring system which determines the probability of MAP. The study population included 471 pregnant women, 41 of whom (8.7%) were diagnosed with MAP. Based on ROC curve analysis, the most effective US criteria for detection of MAP were the presence of placental lacunae, obliteration of the utero-placental demarcation, and placenta previa. On multivariate logistic regression analysis, US findings of placental lacunae (OR = 3.5; 95% CI, 1.2-9.5; P = 0.01), obliteration of the utero-placental demarcation (OR = 12.4; 95% CI, 3.7-41.6; P < 0.0001), and placenta previa (OR = 10.5; 95% CI, 3.5-31.3; P < 0.0001) were associated with MAP. By combining these three parameters, the receiver operating characteristic curve was calculated, yielding an area under the curve of 0.93 (95% CI, 0.87-0.97). Accordingly, we have constructed a simple calculator for 'bedside' estimation of MAP risk. The calculator is mounted on the hospital's internet website ( http://www.assafh.org/Pages/PPCalc/index.html ). The risk estimation of MAP varies between 1.5 and 87%. The present calculator enables a simple 'bedside' MAP estimation, facilitating accurate and adequate antenatal risk assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huppenkothen, D.; Heil, L. M.; Watts, A. L.
2014-11-10
Quasi-periodic oscillations (QPOs) observed in the giant flares of magnetars are of particular interest due to their potential to open up a window into the neutron star interior via neutron star asteroseismology. However, only three giant flares have been observed. We therefore make use of the much larger data set of shorter, less energetic recurrent bursts. Here, we report on a search for QPOs in a large data set of bursts from the two most burst-active magnetars, SGR 1806-20 and SGR 1900+14, observed with the Rossi X-ray Timing Explorer. We find a single detection in an averaged periodogram comprising 30 bursts from SGR 1806-20, with a frequency of 57 Hz and a width of 5 Hz, remarkably similar to a giant flare QPO observed from SGR 1900+14. This QPO fits naturally within the framework of global magneto-elastic torsional oscillations employed to explain giant flare QPOs. Additionally, we uncover a limit on the applicability of Fourier analysis for light curves with low background count rates and strong variability on short timescales. In this regime, standard Fourier methodology and more sophisticated Fourier analyses fail in equal parts by yielding an unacceptably large number of false-positive detections. This problem is not straightforward to solve in the Fourier domain. Instead, we show how simulations of light curves can offer a viable solution for QPO searches in these light curves.
Statistical modelling of suspended sediment load in a small basin located in the Colombian Andes
NASA Astrophysics Data System (ADS)
Javier, Montoya Luis
2016-04-01
In this study, a statistical model for estimating sediment yield from available observations of water discharge and suspended sediment concentration was developed. A multivariate model was applied to analyse 33 years of daily suspended sediment load available at the La Garrucha gauging station. A regional analysis was conducted to obtain a non-dimensional sediment load duration curve. These curves were used to estimate the flow and sediment regime at another point within the basin, where the Calderas reservoir is located. The record of sedimentation in the reservoir was used to validate the estimated mean sediment load. Periodical flushing of the reservoir is necessary to maintain it at its best operating capacity. The non-dimensional sediment load duration curve was used to find the sediment concentration during the high-flow regime (the values met or exceeded 10% of the time). This high-flow sediment concentration has been taken as a concentration that allows an 'environmental flushing', because it reproduces the natural sediment regime of the river and releases a sediment concentration that the environment can withstand. The sediment transport capacity for this sediment load was verified with a 1D model in order to respect the environmental constraints downstream of the dam. Field data were collected to understand the physical phenomena involved in flushing dynamics in the reservoir and downstream of the dam. The model allows operating rules for flushing to be defined that minimize the environmental effects.
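The duration-curve construction used here can be sketched in a few lines: sort the record in descending order, assign each value an exceedance probability, and non-dimensionalise. The synthetic lognormal discharges below are an assumption for illustration, not the La Garrucha record.

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.lognormal(mean=2.0, sigma=1.0, size=365)   # hypothetical daily discharge, m^3/s

# Flow duration curve: percentage of time each discharge is equalled or exceeded.
q_sorted = np.sort(q)[::-1]                                       # descending
exceedance = 100.0 * np.arange(1, q.size + 1) / (q.size + 1)      # Weibull plotting position

# Discharge exceeded 10% of the time (the high-flow threshold used in the study).
q10 = np.interp(10.0, exceedance, q_sorted)

# Non-dimensional curve, here scaled by the median flow (one common choice).
q_nondim = q_sorted / np.median(q)
```

The same construction applied to the daily sediment-load series yields the non-dimensional sediment load duration curve; only the input variable changes.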
Utility of hepatic transaminases in children with concern for abuse.
Lindberg, Daniel M; Shapiro, Robert A; Blood, Emily A; Steiner, R Daryl; Berger, Rachel P
2013-02-01
Routine testing of hepatic transaminases, amylase, and lipase has been recommended for all children evaluated for physical abuse, but rates of screening are widely variable, even among abuse specialists, and data for amylase and lipase testing are lacking. A previous study of screening in centers that endorsed routine transaminase screening suggested that using a transaminase threshold of 80 IU/L could improve injury detection. Our objectives were to prospectively validate the test characteristics of the 80-IU/L threshold and to determine the utility of amylase and lipase to detect occult abdominal injury. This was a retrospective secondary analysis of the Examining Siblings To Recognize Abuse research network, a multicenter study in children younger than 10 years old who underwent subspecialty evaluation for physical abuse. We determined rates of identified abdominal injuries and results of transaminase, amylase, and lipase testing. Screening studies were compared by using basic test characteristics (sensitivity, specificity) and the area under the receiver operating characteristic curve. Abdominal injuries were identified in 82 of 2890 subjects (2.8%; 95% confidence interval: 2.3%-3.5%). Hepatic transaminases were obtained in 1538 (53%) subjects. Hepatic transaminases had an area under the receiver operating characteristic curve of 0.87. A threshold of 80 IU/L yielded sensitivity of 83.8% and specificity of 83.1%. The areas under the curve for amylase and lipase were 0.67 and 0.72, respectively. Children evaluated for physical abuse with transaminase levels >80 IU/L should undergo definitive testing for abdominal injury.
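The threshold evaluation described above reduces to counting true/false positives and negatives at the 80 IU/L cutoff. A minimal sketch with invented transaminase values (the real study's data are not reproduced here):

```python
import numpy as np

# Hypothetical peak transaminase values (IU/L); NOT the study's data.
injured   = np.array([150.0, 95.0, 300.0, 70.0, 120.0])   # children with abdominal injury
uninjured = np.array([25.0, 40.0, 85.0, 30.0, 55.0, 60.0])

THRESHOLD = 80.0                       # the screening cutoff validated in the study
tp = np.sum(injured > THRESHOLD)       # true positives
fn = np.sum(injured <= THRESHOLD)      # false negatives
tn = np.sum(uninjured <= THRESHOLD)    # true negatives
fp = np.sum(uninjured > THRESHOLD)     # false positives

sensitivity = tp / (tp + fn)           # 4/5 = 0.8 on this toy data
specificity = tn / (tn + fp)           # 5/6 ~= 0.83 on this toy data
```

Sweeping THRESHOLD over all observed values and plotting sensitivity against 1 - specificity would trace the ROC curve whose area the abstract reports as 0.87.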
Ning, Hui; Tao, Hong; Weng, Zhanping; Zhao, Xingbo
2016-12-01
Fatty acid-binding protein 4 (FABP4) is mainly expressed in adipocytes and macrophages and is demonstrated to be elevated in diabetes patients. The aim of this study was to evaluate the possible role of FABP4 in the diagnosis of GDM and to investigate the relationship between FABP4 and overweight, insulin resistance and the inflammatory marker TNF-α. A total of 46 women with GDM and 55 age-matched pregnant women without GDM (non-GDM) were eligible for the study. Demographic and biochemical parameters and fasting venous blood samples were collected from all cases in both groups. Serum concentrations of FABP4 were determined using enzyme-linked immunosorbent assay (ELISA). The predictive value of serum FABP4 level was evaluated using receiver operating characteristic (ROC) curve analysis. We found that the serum FABP4 levels were significantly higher in the GDM group than in the non-GDM group. The area under the ROC curve assay yielded a satisfactory result of 0.94 (95% confidence interval 0.90-0.98; p < 0.001). The best compromise between 86.96% specificity and 89.09% sensitivity was obtained with a cutoff value of 1.96 ng/mL for GDM diagnosis. Moreover, a significant positive correlation was observed between FABP4 and overweight, insulin resistance and TNF-α in pregnant women with GDM. These results suggest that serum FABP4 may potentially serve as a novel biomarker for the prediction of GDM.
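Choosing a "best compromise" cutoff like the 1.96 ng/mL above is commonly done by maximising Youden's J = sensitivity + specificity - 1 over candidate thresholds. A sketch on synthetic FABP4-like values (the study's data and exact method are not reproduced):

```python
import numpy as np

# Hypothetical serum FABP4 values, ng/mL; NOT the study's measurements.
gdm     = np.array([2.5, 3.1, 1.8, 2.2, 4.0, 2.9])   # cases
non_gdm = np.array([1.1, 0.9, 1.5, 2.0, 1.3, 0.8])   # controls

candidates = np.unique(np.concatenate([gdm, non_gdm]))
best_j, best_cutoff = -1.0, None
for c in candidates:
    sens = np.mean(gdm >= c)        # cases called positive at cutoff c
    spec = np.mean(non_gdm < c)     # controls called negative at cutoff c
    j = sens + spec - 1.0           # Youden's J statistic
    if j > best_j:
        best_j, best_cutoff = j, c
```

On this toy data the optimum lands at 1.8 ng/mL with J = 5/6; on real data one would also report the sensitivity/specificity pair at the chosen cutoff, as the abstract does.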
A Study of Poisson's Ratio in the Yield Region
NASA Technical Reports Server (NTRS)
Gerard, George; Wildhorn, Sorrel
1952-01-01
In the yield region of the stress-strain curve the variation in Poisson's ratio from the elastic to the plastic value is most pronounced. This variation was studied experimentally by a systematic series of tests on several aluminum alloys. The tests were conducted under simple tensile and compressive loading along three orthogonal axes. A theoretical variation of Poisson's ratio for an orthotropic solid was obtained from dilatational considerations. The assumptions used in deriving the theory were examined by use of the test data and were found to be in reasonable agreement with experimental evidence.
Static Properties of Fibre Metal Laminates
NASA Astrophysics Data System (ADS)
Hagenbeek, M.; van Hengel, C.; Bosker, O. J.; Vermeeren, C. A. J. R.
2003-07-01
In this article a brief overview of the static properties of Fibre Metal Laminates is given. Starting with the stress-strain relation, an effective calculation tool for uniaxial stress-strain curves is given. The method is valid for all Glare types. The Norris failure model is described in combination with a Metal Volume Fraction approach, leading to a useful tool to predict allowable blunt notch strength. The Volume Fraction approach is also useful in the case of the shear yield strength of Fibre Metal Laminates. Shear yield properties are measured using the Iosipescu test.
Gagg, Graham; Ghassemieh, Elaheh; Wiria, Florencia E
2013-09-01
A set of cylindrical porous titanium test samples were produced using the three-dimensional printing and sintering method with samples sintered at 900 °C, 1000 °C, 1100 °C, 1200 °C or 1300 °C. Following compression testing, it was apparent that the stress-strain curves were similar in shape to the curves that represent cellular solids. This is despite a relative density twice as high as what is considered the threshold for defining a cellular solid. As final sintering temperature increased, the compressive behaviour developed from being elastic-brittle to elastic-plastic and while Young's modulus remained fairly constant in the region of 1.5 GPa, there was a corresponding increase in 0.2% proof stress of approximately 40-80 MPa. The cellular solid model consists of two equations that predict Young's modulus and yield or proof stress. By fitting to experimental data and consideration of porous morphology, appropriate changes to the geometry constants allow modification of the current models to predict with better accuracy the behaviour of porous materials with higher relative densities (lower porosity).
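The "two equations" of the cellular solid model referred to above are the standard Gibson-Ashby scaling laws, E/Es = C1(ρ/ρs)² and σ/σys = C2(ρ/ρs)^1.5 for open cells. A sketch with textbook-style constants and assumed dense-titanium properties (the paper's modified geometry constants are not given here):

```python
# ASSUMED dense-Ti properties and conventional open-cell geometry constants;
# the study modifies these constants to suit higher relative densities.
E_S      = 110.0e9   # Young's modulus of solid titanium, Pa (assumed)
SIGMA_YS = 800.0e6   # yield stress of solid titanium, Pa (assumed)
C1, C2   = 1.0, 0.3  # Gibson-Ashby open-cell constants

def cellular_modulus(rel_density: float) -> float:
    """Gibson-Ashby: E scales with the square of relative density."""
    return C1 * E_S * rel_density ** 2

def cellular_yield(rel_density: float) -> float:
    """Gibson-Ashby: plastic collapse stress scales with the 3/2 power."""
    return C2 * SIGMA_YS * rel_density ** 1.5

E_porous     = cellular_modulus(0.6)   # e.g. relative density 0.6 (40% porosity)
sigma_porous = cellular_yield(0.6)
```

Refitting C1 and C2 to experimental data, as the authors do, shifts the curves without changing the power-law exponents.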
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2015-01-01
Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD), Manual v.1.2. The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that there is 95% confidence that the POD is greater than 90% (90/95 POD). DOEPOD is a methodology, implemented via software, that serves as a diagnostic tool providing detailed analysis of POD test data, guidance on establishing data distribution requirements, and resolution of test issues. DOEPOD relies on direct observation of detection occurrences. The DOEPOD capability has been developed to provide an efficient and accurate methodology that yields observed POD and confidence bounds for both hit/miss and signal-amplitude testing. DOEPOD does not assume prescribed POD logarithmic or similar functions with assumed adequacy over a wide range of flaw sizes and inspection system technologies, so multi-parameter curve fitting or model optimization approaches to generate a POD curve are not required. Applications of DOEPOD in support of inspector qualification are also included.
Spectroscopic monitoring of active Galactic nuclei from CTIO. 1: NGC 3227
NASA Technical Reports Server (NTRS)
Winge, Claudia; Peterson, Bradley M.; Horne, Keith; Pogge, Richard W.; Pastoriza, Miriani G.; Storchi-Bergmann, Thaisa
1995-01-01
The results of a five-month monitoring campaign on the Seyfert 1.5 galaxy NGC 3227 are presented. Variability was detected in the continuum and in the broad emission lines. Cross correlations of the 4200 A continuum light curve with the H beta and He II λ4686 emission-line light curves indicate delays of 18 +/- 5 and 16 +/- 2 days, respectively, between the continuum variations and the response of the lines. We apply a maximum entropy method to solve for the transfer function that relates the H beta and He II λ4686 lines and 4200 A continuum variability, and the result of this analysis suggests that there is a deficit of emission-line response due to gas along the line of sight to the continuum source for both lines. Using a composite off-nuclear spectrum, we synthesize the bulge stellar population, which is found to be mainly old (77% with age greater than 10 Gyr) with a metallicity twice the solar value. The synthesis also yields an internal color excess E(B - V) of approximately 0.04. The mean contribution of the stellar population to the inner 5″ × 10″ spectra during the campaign was approximately 40%.
Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.
Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo
2012-01-01
The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
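The nonparametric AUC estimator that papers like this build on is the Mann-Whitney form: the probability that a randomly chosen diseased subject's biomarker exceeds a randomly chosen non-diseased subject's, with ties counted as one half. The TDS-specific weighting is not reproduced here; this is only the standard simple-random-sampling baseline:

```python
import numpy as np

def empirical_auc(diseased, healthy):
    """Mann-Whitney / empirical AUC: P(X_diseased > X_healthy) + 0.5*P(tie)."""
    d = np.asarray(diseased, dtype=float)[:, None]   # column of case values
    h = np.asarray(healthy, dtype=float)[None, :]    # row of control values
    return np.mean((d > h) + 0.5 * (d == h))         # average over all pairs

# Toy biomarker values (assumed, for illustration only).
auc = empirical_auc([3.0, 2.5, 4.1, 2.0], [1.0, 2.0, 1.5])
```

Restricting the pairwise comparison to thresholds within a false-positive-rate window gives the analogous empirical pAUC; the paper's contribution is reweighting these estimators for the test-result-dependent sampling design.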
Assessing the potential for luminescence dating of basalts
Tsukamoto, S.; Duller, G.A.T.; Wintle, A.G.; Muhs, D.
2011-01-01
The possibility of dating basalt using luminescence was tested on four samples with independent age control from Cima volcanic field, California, with the ultimate aim of assessing whether the technique could be used to date sediments on the surface of Mars. Previous analysis of these samples had demonstrated that the infrared stimulated luminescence (IRSL) signal is most suitable for dating as it showed the lowest fading rate among various luminescence signals. In this study, changes in equivalent dose as a function of preheat are described. The ages for the two youngest Cima samples agree with the independent ages based on cosmogenic nuclide measurements (12.0 ± 0.8 ka). In the two older samples (dated to 320 and 580 ka by K-Ar), the luminescence behaviour is more complex and the form of the IRSL decay curve is seen to vary with dose. Mathematical fitting is used to isolate two components and their intensities are used to produce dose response curves. The slower component yields a larger equivalent dose. However, even using this component and after correction for fading, the ages obtained for the older samples are younger than the K-Ar ages. © 2010 Elsevier B.V.
Optimization of vehicle-trailer connection systems
NASA Astrophysics Data System (ADS)
Sorge, F.
2016-09-01
The three main requirements of a vehicle-trailer connection system are: en route stability, over- or under-steering restraint, and minimum off-tracking along a curved path. Linking the two units by four-bar trapeziums, wider stability margins may be attained in comparison with the conventional pintle-hitch for both instability types, divergent or oscillating. The stability maps are traced applying the Hurwitz method or the direct analysis of the characteristic equation at the instability threshold. Several types of four-bar linkages may be quickly tested, with the drawbars converging towards the trailer or the towing unit. The latter configuration appears preferable in terms of self-stability and may yield high critical speeds by optimising the geometrical and physical properties. Nevertheless, the system stability may be improved in general by additional vibration dampers in parallel with the connection linkage. Moreover, the four-bar connection may produce significant corrections of the under-steering or over-steering behaviour of the vehicle-train after a steering command from the driver. The off-tracking along curved paths may also be optimized or kept inside prefixed acceptable margins. Activating electronic stability systems if necessary, fair results are obtainable for both the steering behaviour and the off-tracking.
Brownian motion curve-based textural classification and its application in cancer diagnosis.
Mookiah, Muthu Rama Krishnan; Shah, Pratik; Chakraborty, Chandan; Ray, Ajoy K
2011-06-01
To develop an automated diagnostic methodology based on textural features of the oral mucosal epithelium to discriminate normal and oral submucous fibrosis (OSF). A total of 83 normal and 29 OSF images from histopathologic sections of the oral mucosa are considered. The proposed diagnostic mechanism consists of two parts: feature extraction using the Brownian motion curve (BMC) and design of a suitable classifier. The discrimination ability of the features has been substantiated by statistical tests. An error back-propagation neural network (BPNN) is used to classify OSF vs. normal. In development of an automated oral cancer diagnostic module, BMC has played an important role in characterizing textural features of the oral images. Fisher's linear discriminant analysis yields 100% sensitivity and 85% specificity, whereas BPNN leads to 92.31% sensitivity and 100% specificity. In addition to intensity and morphology-based features, textural features are also very important, especially in histopathologic diagnosis of oral cancer. In view of this, a set of textural features are extracted using the BMC for the diagnosis of OSF. Finally, a textural classifier is designed using BPNN, which leads to a diagnostic performance with 96.43% accuracy.
NASA Astrophysics Data System (ADS)
Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.
2016-08-01
The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on a coarse mesh and a fine mesh with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Kármán vortex street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in the aerodynamic performance between the straight-sided meshes and the curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation provided by the straight-sided meshes.
EVEREST: Pixel Level Decorrelation of K2 Light Curves
NASA Astrophysics Data System (ADS)
Luger, Rodrigo; Agol, Eric; Kruse, Ethan; Barnes, Rory; Becker, Andrew; Foreman-Mackey, Daniel; Deming, Drake
2016-10-01
We present EPIC Variability Extraction and Removal for Exoplanet Science Targets (EVEREST), an open-source pipeline for removing instrumental noise from K2 light curves. EVEREST employs a variant of pixel level decorrelation to remove systematics introduced by the spacecraft's pointing error and a Gaussian process to capture astrophysical variability. We apply EVEREST to all K2 targets in campaigns 0-7, yielding light curves with precision comparable to that of the original Kepler mission for stars brighter than Kp ≈ 13, and within a factor of two of the Kepler precision for fainter targets. We perform cross-validation and transit injection and recovery tests to validate the pipeline, and compare our light curves to the other de-trended light curves available for download at the MAST High Level Science Products archive. We find that EVEREST achieves the highest average precision of any of these pipelines for unsaturated K2 stars. The improved precision of these light curves will aid in exoplanet detection and characterization, investigations of stellar variability, asteroseismology, and other photometric studies. The EVEREST pipeline can also easily be applied to future surveys, such as the TESS mission, to correct for instrumental systematics and enable the detection of low signal-to-noise transiting exoplanets. The EVEREST light curves and the source code used to generate them are freely available online.
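The core of first-order pixel level decorrelation can be sketched very compactly: regress the summed light curve on the *fractional* pixel fluxes, which carry the pointing-induced systematics but are insensitive to the astrophysical signal, then subtract the fit. This is a toy illustration on synthetic data, not the EVEREST implementation (which adds higher PLD orders and a Gaussian process):

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_pix = 500, 9
pixels = rng.uniform(0.9, 1.1, size=(n_time, n_pix))   # hypothetical pixel time series
flux = pixels.sum(axis=1)                              # raw summed (SAP) light curve

# First-order PLD basis: each pixel's flux divided by the total at that cadence.
basis = pixels / flux[:, None]

# Linear least-squares fit of the systematics model to the light curve.
coeffs, *_ = np.linalg.lstsq(basis, flux, rcond=None)
model = basis @ coeffs

# Subtract the instrumental model, preserving the median flux level.
detrended = flux - model + np.median(flux)
```

Because the fractional fluxes sum to one at every cadence, the basis spans the constant vector, so the fit can never increase the scatter relative to the raw light curve.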
Daily Monitoring of TeV Gamma-Ray Emission from Mrk 421, Mrk 501, and the Crab Nebula with HAWC
NASA Astrophysics Data System (ADS)
Abeysekara, A. U.; Albert, A.; Alfaro, R.; Alvarez, C.; Álvarez, J. D.; Arceo, R.; Arteaga-Velázquez, J. C.; Avila Rojas, D.; Ayala Solares, H. A.; Barber, A. S.; Bautista-Elivar, N.; Becerra Gonzalez, J.; Becerril, A.; Belmont-Moreno, E.; BenZvi, S. Y.; Bernal, A.; Braun, J.; Brisbois, C.; Caballero-Mora, K. S.; Capistrán, T.; Carramiñana, A.; Casanova, S.; Castillo, M.; Cotti, U.; Cotzomi, J.; Coutiño de León, S.; De León, C.; De la Fuente, E.; Diaz Hernandez, R.; Dingus, B. L.; DuVernois, M. A.; Díaz-Vélez, J. C.; Ellsworth, R. W.; Engel, K.; Fiorino, D. W.; Fraija, N.; García-González, J. A.; Garfias, F.; Gerhardt, M.; González Muñoz, A.; González, M. M.; Goodman, J. A.; Hampel-Arias, Z.; Harding, J. P.; Hernandez, S.; Hernandez-Almada, A.; Hona, B.; Hui, C. M.; Hüntemeyer, P.; Iriarte, A.; Jardin-Blicq, A.; Joshi, V.; Kaufmann, S.; Kieda, D.; Lara, A.; Lauer, R. J.; Lee, W. H.; Lennarz, D.; León Vargas, H.; Linnemann, J. T.; Longinotti, A. L.; Raya, G. Luis; Luna-García, R.; López-Coto, R.; Malone, K.; Marinelli, S. S.; Martinez, O.; Martinez-Castellanos, I.; Martínez-Castro, J.; Matthews, J. A.; Miranda-Romagnoli, P.; Moreno, E.; Mostafá, M.; Nellen, L.; Newbold, M.; Nisa, M. U.; Noriega-Papaqui, R.; Pretz, J.; Pérez-Pérez, E. G.; Ren, Z.; Rho, C. D.; Rivière, C.; Rosa-González, D.; Rosenberg, M.; Ruiz-Velasco, E.; Salesa Greus, F.; Sandoval, A.; Schneider, M.; Schoorlemmer, H.; Sinnis, G.; Smith, A. J.; Springer, R. W.; Surajbali, P.; Taboada, I.; Tibolla, O.; Tollefson, K.; Torres, I.; Ukwatta, T. N.; Vianello, G.; Weisgarber, T.; Westerhoff, S.; Wisher, I. G.; Wood, J.; Yapici, T.; Younk, P. W.; Zepeda, A.; Zhou, H.
2017-06-01
We present results from daily monitoring of gamma-rays in the energy range from ~0.5 to ~100 TeV with the first 17 months of data from the High Altitude Water Cherenkov (HAWC) Observatory. Its wide field of view of 2 steradians and duty cycle of >95% are unique features compared to other TeV observatories that allow us to observe every source that transits over HAWC for up to ~6 hr each sidereal day. This regular sampling yields unprecedented light curves from unbiased measurements that are independent of seasons or weather conditions. For the Crab Nebula as a reference source, we find no variability in the TeV band. Our main focus is the study of the TeV blazars Markarian (Mrk) 421 and Mrk 501. A spectral fit for Mrk 421 yields a power-law index Γ = 2.21 ± 0.14 (stat) ± 0.20 (sys) and an exponential cut-off E0 = 5.4 ± 1.1 (stat) ± 1.0 (sys) TeV. For Mrk 501, we find an index Γ = 1.60 ± 0.30 (stat) ± 0.20 (sys) and exponential cut-off E0 = 5.7 ± 1.6 (stat) ± 1.0 (sys) TeV. The light curves for both sources show clear variability, and a Bayesian analysis is applied to identify changes between flux states. The highest per-transit fluxes observed from Mrk 421 exceed the Crab Nebula flux by a factor of approximately five. For Mrk 501, several transits show fluxes in excess of three times the Crab Nebula flux. In a comparison to lower energy gamma-ray and X-ray monitoring data with comparable sampling, we cannot identify clear counterparts for the most significant flaring features observed by HAWC.
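The spectral shape quoted above, a power law with an exponential cut-off dN/dE = A E^-Γ exp(-E/E0), is straightforward to fit in log-space. A sketch on noiseless synthetic data generated from the Mrk 421 best-fit values (the normalisation, energy grid, and fitting details are assumptions, not HAWC's likelihood analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

# log of dN/dE = A * E^-gamma * exp(-E/E0), fitted in log space for stability.
def log_flux(E, logA, gamma, E0):
    return logA - gamma * np.log(E) - E / E0

E = np.geomspace(0.5, 50.0, 20)            # energies in TeV (assumed grid)
y = log_flux(E, 0.0, 2.21, 5.4)            # synthetic spectrum at the Mrk 421 best fit

# Recover the parameters from rough starting values.
popt, _ = curve_fit(log_flux, E, y, p0=[0.0, 2.0, 3.0])
logA_fit, gamma_fit, E0_fit = popt
```

A real analysis would fit a forward-folded likelihood with instrument response and report statistical and systematic uncertainties separately, as in the abstract.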
Tracking spatial variation in river load from Andean highlands to inter-Andean valleys
NASA Astrophysics Data System (ADS)
Tenorio, Gustavo E.; Vanacker, Veerle; Campforts, Benjamin; Álvarez, Lenín; Zhiminaicela, Santiago; Vercruysse, Kim; Molina, Armando; Govers, Gerard
2018-05-01
Mountains play an important role in the denudation of continents and transfer erosion and weathering products to lowlands and oceans. The rates at which erosion and weathering processes take place in mountain regions have a substantial impact on the morphology and biogeochemistry of downstream reaches and lowlands. The controlling factors of physical erosion and chemical weathering and the coupling between the two processes are not yet fully understood. In this study, we report physical erosion and chemical weathering rates for five Andean catchments located in the southern Ecuadorian Andes and investigate their mutual interaction. During a 4-year monitoring period, we sampled river water at biweekly intervals, and we analyzed water samples for major ions and suspended solids. We derived the total annual dissolved, suspended sediment, and ionic loads from the flow frequency curves and adjusted rating curves and used the dissolved and suspended sediment yields as proxies for chemical weathering and erosion rates. In the 4-year period of monitoring, chemical weathering exceeds physical erosion in the high Andean catchments. Whereas physical erosion rates do not exceed 30 t km-2 y-1 in the relict glaciated morphology, chemical weathering rates range between 22 and 59 t km-2 y-1. The variation in chemical weathering is primarily controlled by intrinsic differences in bedrock lithology. Land use has no discernible impact on the weathering rate but leads to a small increase in base cation concentrations because of fertilizer leaching in surface water. When extending our analysis with published data on dissolved and suspended sediment yields from the northern and central Andes, we observe that the river load composition strongly changes in the downstream direction, indicating large heterogeneity of weathering processes and rates within large Andean basins.
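The "adjusted rating curves" step above is typically a power-law sediment rating curve Qs = aQ^b fitted in log-log space. A minimal sketch with invented discharge/load pairs (not the Ecuadorian monitoring data):

```python
import numpy as np

# Hypothetical paired observations (assumed values for illustration).
Q  = np.array([ 5.0, 12.0, 30.0,  60.0, 150.0])    # water discharge, m^3/s
Qs = np.array([ 2.0, 10.0, 55.0, 200.0, 1100.0])   # suspended sediment load, t/day

# Fit log(Qs) = b*log(Q) + log(a): slope is the rating exponent b.
b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
a = np.exp(log_a)

def rating_curve(q):
    """Estimated sediment load for a given discharge."""
    return a * q ** b
```

Integrating this rating curve over the flow frequency (duration) curve then gives the total annual suspended sediment yield, which is how the loads in the study are derived; bias corrections for the log-log back-transform are often added in practice.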
A mobile platform for automated screening of asthma and chronic obstructive pulmonary disease.
Chamberlain, Daniel B; Kodgule, Rahul; Fletcher, Richard Ribon
2016-08-01
Chronic Obstructive Pulmonary Disease (COPD) and asthma each represent a large proportion of the global disease burden; COPD is the third leading cause of death worldwide and asthma is one of the most prevalent chronic diseases, afflicting over 300 million people. Much of this burden is concentrated in the developing world, where patients lack access to physicians trained in the diagnosis of pulmonary disease. As a result, these patients experience high rates of underdiagnosis and misdiagnosis. To address this need, we present a mobile platform capable of screening for asthma and COPD. Our solution is based on a mobile smart phone and consists of an electronic stethoscope, a peak flow meter application, and a patient questionnaire. This data is combined with a machine learning algorithm to identify patients with asthma and COPD. To test and validate the design, we collected data from 119 healthy and sick participants using our custom mobile application and ran the analysis on a PC. For comparison, all subjects were examined by an experienced pulmonologist using a full pulmonary testing laboratory. Employing a two-stage logistic regression model, our algorithms were first able to identify patients with either asthma or COPD from the general population, yielding an ROC curve with an AUC of 0.95. Then, after identifying these patients, our algorithm was able to distinguish between patients with asthma and patients with COPD, yielding an ROC curve with an AUC of 0.97. This work represents an important milestone towards creating a self-contained mobile phone-based platform that can be used for screening and diagnosis of pulmonary disease in many parts of the world.
Physical limits on ground motion at Yucca Mountain
Andrews, D.J.; Hanks, T.C.; Whitney, J.W.
2007-01-01
Physical limits on possible maximum ground motion at Yucca Mountain, Nevada, the designated site of a high-level radioactive waste repository, are set by the shear stress available in the seismogenic depth of the crust and by limits on stress change that can propagate through the medium. We find in dynamic deterministic 2D calculations that maximum possible horizontal peak ground velocity (PGV) at the underground repository site is 3.6 m/sec, which is smaller than the mean PGV predicted by the probabilistic seismic hazard analysis (PSHA) at annual exceedance probabilities less than 10⁻⁶ per year. The physical limit on vertical PGV, 5.7 m/sec, arises from supershear rupture and is larger than that from the PSHA down to 10⁻⁸ per year. In addition to these physical limits, we also calculate the maximum ground motion subject to the constraint of known fault slip at the surface, as inferred from paleoseismic studies. Using a published probabilistic fault displacement hazard curve, these calculations provide a probabilistic hazard curve for horizontal PGV that is lower than that from the PSHA. In all cases the maximum ground motion at the repository site is found by maximizing constructive interference of signals from the rupture front, for physically realizable rupture velocity, from all parts of the fault. Vertical PGV is maximized for ruptures propagating near the P-wave speed, and horizontal PGV is maximized for ruptures propagating near the Rayleigh-wave speed. Yielding in shear with a Mohr-Coulomb yield condition reduces ground motion only a modest amount in events with supershear rupture velocity, because ground motion consists primarily of P waves in that case. The possibility of compaction of the porous unsaturated tuffs at the higher ground-motion levels is another attenuating mechanism that needs to be investigated.
HST/COS Detection of the Spectrum of the Subdwarf Companion of KOI-81
NASA Astrophysics Data System (ADS)
Matson, Rachel A.; Gies, Douglas R.; Guo, Zhao; Quinn, Samuel N.; Buchhave, Lars A.; Latham, David W.; Howell, Steve B.; Rowe, Jason F.
2015-06-01
KOI-81 is a totally eclipsing binary discovered by the Kepler mission that consists of a rapidly rotating B-type star and a small, hot companion. The system was forged through large-scale mass transfer that stripped the mass donor of its envelope and spun up the mass gainer star. We present an analysis of UV spectra of KOI-81 that were obtained with the Cosmic Origins Spectrograph on the Hubble Space Telescope that reveal for the first time the spectral features of the faint, hot companion. We present a double-lined spectroscopic orbit for the system that yields mass estimates of 2.92 M⊙ and 0.19 M⊙ for the B-star and hot subdwarf, respectively. We used a Doppler tomography algorithm to reconstruct the UV spectra of the components, and a comparison of the reconstructed and model spectra yields effective temperatures of 12 and 19-27 kK for the B-star and hot companion, respectively. The B-star is pulsating, and we identified a number of peaks in the Fourier transform of the light curve, including one that may indicate an equatorial rotation period of 11.5 hr. The B-star has an equatorial velocity that is 74% of the critical velocity where centrifugal and gravitational accelerations balance at the equator, and we fit the transit light curve by calculating a rotationally distorted model for the photosphere of the B-star. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program #12288.
Salem, A A
2007-03-01
A new method is presented for determining three phenoxy acids and one carbamate herbicide in water and soil samples using gas chromatography with mass spectrometric detection. Phenoxy acids are derivatized through a condensation reaction with a suitable aromatic amine. 1,1-Carbonyldiimidazole is used as a condensation reagent. Derivatization conditions are optimized with respect to the amount of analyte, amine, solvent, and derivatization reagent. The optimum derivatization yield is accomplished in acetonitrile. 4-Methoxyaniline is used as a derivatizing agent. The obtained derivatives are stable indefinitely. Enhancement in sensitivity is achieved by using the single-ion monitoring mass spectrometric mode. The effectiveness of the developed method is tested by determining the investigated compounds in water and soil samples. Analytes are concentrated from water samples using liquid-phase extraction (LPE) and solid-phase extraction (SPE). Soil samples are extracted using methanol. Detection limits of 1.00, 50.00, 100.00, and 1.00 ng/mL are obtained for 2-(1-methylethoxy)phenyl methylcarbamate (Baygon), 2-(3-chlorophenoxy)propionic acid (Cloprop), 2,4,5-trichlorophenoxyacetic acid, and 4-(2,4-dichlorophenoxy)butyric acid, respectively. LPE for spiked water samples yields recoveries in the range of 60.6-95.7%, with relative standard deviation (RSD) values of 1.07-7.85%, using single-component calibration curves. Recoveries of 44.8-275.5%, with RSD values ranging from 1.43% to 8.61%, were obtained using mixed-component calibration curves. SPE from water samples and soil samples showed low recoveries, attributed to the weak sorption capabilities of soil and Al2O3.
NASA Astrophysics Data System (ADS)
Calvo-Rathert, M.; Bogalo, M.; Gogichaishvili, A.; Vegas-Tubia, N.; Sologashvili, J.; Villalain, J.
2009-05-01
A paleomagnetic, rock-magnetic and paleointensity study was carried out on 21 basaltic lava flows belonging to four different sequences of late Pliocene age from southern Georgia (Caucasus): Diliska (5 flows), Kvemo Orozmani (5 flows), Dmanisi (11 flows) and Zemo Karabulaki (3 flows). Paleomagnetic analysis generally showed the presence of a single component (mainly in the Dmanisi sequence), but two more or less superimposed components were also observed in several other cases. All sites except one clearly displayed a normal-polarity characteristic component. Susceptibility-versus-temperature curves measured in an argon atmosphere on whole-rock powdered samples yielded low-Ti titanomagnetite as the main carrier of remanence, although a lower-Tc component (300-400 °C) was also observed in several cases. Both reversible and non-reversible k-T curves were measured. A pilot paleointensity study was performed with the Coe method on two samples from each of the sites considered suitable after interpretation of rock-magnetic and paleomagnetic results. The pilot study showed that reliable paleointensity results were mainly obtained from sites of the Dmanisi sequence. This thick sequence of basaltic lava flows records the upper end of the normal-polarity Olduvai subchron, a fact confirmed by 40Ar/39Ar dating of the uppermost lava flow and overlying volcanogenic ashes, which yields ages of 1.8 to 1.85 My. A new paleointensity experiment was carried out only on samples belonging to the Dmanisi sequence. Although this work is still in progress, first results show that paleointensities are low, with values between 10 and 20 µT in many cases and not higher than 30 µT. For comparison, the present-day field is 47 µT.
Steer, Penelope A.; Kirkpatrick, Naomi C.; O'Rourke, Denise; Noormohammadi, Amir H.
2009-01-01
Identification of fowl adenovirus (FAdV) serotypes is of importance in epidemiological studies of disease outbreaks and the adoption of vaccination strategies. In this study, real-time PCR and subsequent high-resolution melting (HRM)-curve analysis of three regions of the hexon gene were developed and assessed for their potential in differentiating 12 FAdV reference serotypes. The results were compared to previously described PCR and restriction enzyme analyses of the hexon gene. Both HRM-curve analysis of a 191-bp region of the hexon gene and restriction enzyme analysis failed to distinguish a number of serotypes used in this study. In addition, PCR of the region spanning nucleotides (nt) 144 to 1040 failed to amplify FAdV-5 in sufficient quantities for further analysis. However, HRM-curve analysis of the region spanning nt 301 to 890 proved a sensitive and specific method of differentiating all 12 serotypes. All melt curves were highly reproducible, and replicates of each serotype were correctly genotyped with a mean confidence value of more than 99% using normalized HRM curves. Sequencing analysis revealed that each profile was related to a unique sequence, with some sequences sharing greater than 94% identity. Melting-curve profiles were found to be related mainly to GC composition and distribution throughout the amplicons, regardless of sequence identity. The results presented in this study show that the closed-tube method of PCR and HRM-curve analysis provides an accurate, rapid, and robust genotyping technique for the identification of FAdV serotypes and can be used as a model for developing genotyping techniques for other pathogens. PMID:19036935
Performance of Koyna dam based on static and dynamic analysis
NASA Astrophysics Data System (ADS)
Azizan, Nik Zainab Nik; Majid, Taksiah A.; Nazri, Fadzli Mohamed; Maity, Damodar
2017-10-01
This paper discusses the performance of Koyna dam based on static pushover analysis (SPO) and incremental dynamic analysis (IDA). The SPO in this study considered two types of lateral load: inertial load and hydrodynamic load. The structure was analysed until damage appeared on the dam body. The IDA curves were developed based on 7 ground motions with the following characteristics: (i) the distance from the epicenter is less than 15 km, (ii) the magnitude is equal to or greater than 5.5, and (iii) the PGA is equal to or greater than 0.15 g. All ground motions were converted to response spectra and scaled to the elastic response spectrum, developed for soil type B using Eurocode 8, in order to match the ground-motion characteristics to the soil type. The SPO and IDA methods are able to determine the limit states of the dam. The limit states proposed in this study are the yielding and ultimate states, identified based on the crack patterns formed on the structural model. The maximum crest displacements from both methods are compared to define the limit states of the dam. The displacement at the yielding state for Koyna dam is 23.84 mm and 44.91 mm at the ultimate state. The results can be used as a guideline for monitoring Koyna dam under seismic loading, considering both static and dynamic analyses.
NASA Astrophysics Data System (ADS)
Ibrahim, F. A.; El-Yazbi, A. F.; Wagih, M. M.; Barary, M. A.
2017-09-01
Two highly sensitive, simple and selective spectrophotometric and spectrofluorimetric assays have been investigated for the analysis of ezogabine, levetiracetam and topiramate in their pure forms and in pharmaceutical dosage forms. The suggested methods depend on the condensation of the primary amino groups of the three drugs with acetylacetone and formaldehyde according to the Hantzsch reaction, yielding highly fluorescent, yellow-colored dihydropyridine derivatives. The reaction products of ezogabine, levetiracetam and topiramate were measured spectrophotometrically at 418, 390 and 380 nm or spectrofluorimetrically at λem/ex of 495/425 nm, 490/415 nm and 488/410 nm, respectively. Various experimental conditions were carefully studied to maximize the reaction yield. At the optimum reaction conditions, the calibration curves were rectilinear over the concentration ranges of 8-25, 60-180 and 80-200 μg/mL spectrophotometrically and 0.02-0.2, 0.2-1.2 and 0.2-1.5 μg/mL spectrofluorimetrically for ezogabine, levetiracetam and topiramate, respectively, with good correlation coefficients. The suggested methods were applied successfully to the analysis of ezogabine, levetiracetam and topiramate in their commercial tablets, with high percentage recoveries and negligible interference from various excipients in pharmaceutical dosage forms. The results were statistically analyzed and showed no significant difference between the developed and published methods. The procedures were validated and evaluated according to ICH guidelines, revealing good reproducibility and accuracy. Therefore, the two proposed methods may be considered of high interest for practical and reliable analysis of ezogabine, levetiracetam and topiramate in pharmaceutical dosage forms.
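A worked sketch of how such rectilinear calibration curves are typically used: fit a least-squares line of signal versus standard concentration, then invert it to read off an unknown's concentration. The absorbance values below are illustrative placeholders, not the paper's data.

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

def concentration(absorbance, slope, intercept):
    """Back-calculate the concentration of an unknown from its signal."""
    return (absorbance - intercept) / slope

# Hypothetical ezogabine standards over the 8-25 ug/mL range (illustrative data)
conc = [8.0, 12.0, 16.0, 20.0, 25.0]
absb = [0.170, 0.250, 0.330, 0.410, 0.510]   # lies exactly on A = 0.02*c + 0.01
slope, intercept = linfit(conc, absb)
```

Inverting the fitted line is only valid inside the rectilinear range quoted in the abstract; extrapolating beyond it is not.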
NASA Astrophysics Data System (ADS)
Shi, Guo-Jie; Wang, Jin-Guo; Hou, Zhao-Yang; Wang, Zhen; Liu, Rang-Su
2017-09-01
The mechanical properties and deformation mechanisms of an Au nanowire during tensile processes at different strain rates are revealed by the molecular dynamics method. It is found that the Au nanowire displays three distinct types of mechanical behavior under tension at low, medium and high strain rates, respectively. At the low strain rate, the stress-strain curve displays a periodic zigzag increase-decrease feature, and the plastic deformation results from dislocation slip. The dislocations nucleate, propagate, and finally annihilate in every decreasing stage of stress, and the nanowire can always recover to an FCC-ordered structure. At the medium strain rate, the stress-strain curve decreases gently during the plastic process, and the deformation is contributed by sliding and twinning. The dislocations formed in the yield stage do not fully propagate and escape from the nanowire. At the high strain rate, the stress-strain curve oscillates in a wave-like manner during the plastic process, and the deformation results from amorphization. The FCC atoms quickly transform into a disordered amorphous structure in the yield stage. The relative magnitude of the strain loading velocity and the phonon propagation velocity determines the different deformation mechanisms. The mechanical behavior of the Au nanowire is similar to that of Ni, Cu and Pt nanowires, but their deformation mechanisms are not completely identical.
Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.
Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2010-11-01
Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion-linked parameters. Complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation, in which the automatically obtained (AO) contours were compared with expert-drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert-drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). We proposed an automatic method that significantly reduced the time required for analysis of first-pass cardiac magnetic resonance perfusion images. The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and statistically insignificant difference in perfusion parameters when AO contours were compared with expert-drawn contours. Copyright © 2010 AUR. Published by Elsevier Inc. All rights reserved.
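The two agreement metrics reported above can be computed in a few lines; this is a generic sketch of the standard definitions, not the authors' implementation:

```python
import math

def dice(a, b):
    """Dice index between two binary masks given as flat 0/1 lists:
    2*|A intersect B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))

def point_to_curve_error(contour_a, contour_b):
    """Mean distance from each point of contour_a to its nearest
    point on contour_b (one-sided point-to-curve error)."""
    nearest = lambda p: min(math.hypot(p[0] - q[0], p[1] - q[1]) for q in contour_b)
    return sum(nearest(p) for p in contour_a) / len(contour_a)
```

A Dice index of 0.91, as reported, means the automatic and expert myocardial masks overlap in about 91% of their combined area.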
Thermoluminescence glow curve analysis and CGCD method for erbium-doped CaZrO3 phosphor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiwari, Ratnesh, E-mail: 31rati@gmail.com; Chopra, Seema
2016-05-06
This manuscript reports the synthesis and thermoluminescence (TL) study of CaZrO3 phosphor doped with Er3+ at a fixed concentration (1 mol%). The phosphors were prepared by a modified solid-state reaction method, and the powder sample was characterized by TL glow curve analysis, which confirmed 1 mol% as the optimized concentration for the UV-irradiated sample. The kinetic parameters were calculated by the computerized glow curve deconvolution (CGCD) technique. The trapping parameters give information on the dosimetric loss in the prepared phosphor and its usability in environmental and personal monitoring. CGCD is an advanced tool for the analysis of complicated TL glow curves.
Doppler interpretation of quasar red shifts.
Zapolsky, H S
1966-08-05
The hypothesis that the quasistellar sources (quasars) are local objects moving with velocities close to the speed of light is examined. Provided there is no observational cutoff on apparent bolometric magnitude for the quasars, the transverse Doppler effect leads to the expectation of fewer blue shifts than red shifts for an isotropic distribution of velocities. Such a distribution also yields a function N(z), the number of objects with red shift less than z, which is not inconsistent with the present data. On the basis of two extreme assumptions concerning the origin of such rapidly moving sources, we computed curves of red shift plotted against magnitude. In particular, the curve obtained on the assumption that the quasars originated from an explosion in or near our own galaxy is in as good agreement with the observations as the curve of cosmological red shift plotted against magnitude.
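The transverse Doppler argument can be made concrete with the standard relativistic Doppler formula; the helper names below are our own, not from the paper:

```python
import math

def redshift(beta, theta):
    """Relativistic Doppler shift: 1 + z = gamma * (1 - beta*cos(theta)),
    where theta is the angle between the source velocity and the line of
    sight toward the observer (theta = 0 means moving straight at us)."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return gamma * (1.0 - beta * math.cos(theta)) - 1.0

def blueshift_fraction(beta):
    """Fraction of an isotropic velocity distribution seen blueshifted:
    z < 0 requires cos(theta) > (1 - 1/gamma)/beta, a shrinking forward cone."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    cos_c = (1.0 - 1.0 / gamma) / beta
    return (1.0 - cos_c) / 2.0

# Purely transverse motion (theta = 90 deg) still gives z = gamma - 1 > 0,
# which is why an isotropic population shows fewer blueshifts than redshifts.
```

For example, at beta = 0.8 only a quarter of isotropically directed sources would appear blueshifted, and a source moving exactly sideways is redshifted by z = 2/3.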
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation to Prony's curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least squares solution can be applied to obtain the final form of the equation.
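For the simplest two-exponential case, Prony's direct solution from equally spaced samples can be sketched as follows. This is a minimal illustration of the classical method, not the I.G.D.S.-based implementation of the paper, and it assumes real, distinct exponents so the characteristic quadratic can be solved in closed form:

```python
import math

def prony2(y, dt):
    """Prony fit of four equally spaced samples y[k] = y(k*dt) to
    c1*exp(s1*t) + c2*exp(s2*t); assumes real, distinct exponents."""
    # Linear-prediction step: find a1, a2 with y[k+2] = a1*y[k+1] + a2*y[k]
    det = y[1] * y[1] - y[0] * y[2]
    a1 = (y[1] * y[2] - y[0] * y[3]) / det
    a2 = (y[1] * y[3] - y[2] * y[2]) / det
    # Characteristic roots mu of mu^2 - a1*mu - a2 = 0, where mu_i = exp(s_i*dt)
    disc = math.sqrt(a1 * a1 + 4.0 * a2)
    mu1, mu2 = (a1 + disc) / 2.0, (a1 - disc) / 2.0
    # Amplitudes from y[0] = c1 + c2 and y[1] = c1*mu1 + c2*mu2
    c1 = (y[1] - mu2 * y[0]) / (mu1 - mu2)
    c2 = y[0] - c1
    return (c1, math.log(mu1) / dt), (c2, math.log(mu2) / dt)

# Recover 2*exp(-0.5*t) + exp(-2*t) from samples at t = 0, 0.1, 0.2, 0.3
dt = 0.1
samples = [2.0 * math.exp(-0.5 * k * dt) + math.exp(-2.0 * k * dt) for k in range(4)]
terms = prony2(samples, dt)
```

The equal-spacing requirement enters in the linear-prediction step, which is exactly why the paper resamples the data to equal time increments before solving for the exponential constants.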
Relation between ground water and surface water in Brandywine Creek basin, Pennsylvania
Olmsted, F.H.; Hely, A.G.
1962-01-01
The relation between ground water and surface water was studied in Brandywine Creek basin, an area of 287 square miles in the Piedmont physiographic province in southeastern Pennsylvania. Most of the basin is underlain by crystalline rocks that yield only small to moderate supplies of water to wells, but the creek has an unusually well-sustained base flow. Streamflow records for the Chadds Ford, Pa., gaging station were analyzed; base flow recession curves and hydrographs of base flow were defined for the calendar years 1928-31 and 1952-53. Water budgets calculated for these two periods indicate that about two-thirds of the runoff of Brandywine Creek is base flow--a significantly higher proportion of base flow than in streams draining most other types of consolidated rocks in the region and almost as high as in streams in sandy parts of the Coastal Plain province in New Jersey and Delaware. Ground-water levels in 16 observation wells were compared with the base flow of the creek for 1952-53. The wells are assumed to provide a reasonably good sample of average fluctuations of the water table and its depth below the land surface. Three of the wells having the most suitable records were selected as index wells to use in a more detailed analysis. A direct, linear relation between the monthly average ground-water stage in the index wells and the base flow of the creek in winter months was found. The average ground-water discharge in the basin for 1952-53 was 489 cfs (316 mgd), of which slightly less than one-fourth was estimated to be loss by evapotranspiration. However, the estimated evapotranspiration from ground water, and consequently the estimated total ground-water discharge, may be somewhat high. The average gravity yield (short-term coefficient of storage) of the zone of water-table fluctuation was calculated by two methods. 
The first method, based on the ratio of the change in ground-water storage as calculated from a winter base-flow recession curve to the seasonal change in ground-water stage in the observation wells, gave values of about 7 percent (using 16 wells) and 7 1/2 percent (using 3 index wells). The second method, in which the change in ground-water storage is based on a hypothetical base-flow recession curve (derived from the observed linear relation between ground-water stage in the index wells and base flow), gave a value of about 10 1/2 percent. The most probable value of gravity yield is between 7 1/2 and 10 percent, but this estimate may require modification when more information on the average magnitude of water-table fluctuation and the sources of base flow of the creek becomes available. Rough estimates were made of the average coefficient of transmissibility of the rocks in the basin by use of the estimated total ground-water discharge for the period 1952-53, approximate values of the length of discharge areas, and average water-table gradients adjacent to the discharge areas. The estimated average coefficient of transmissibility for 1952-53 is roughly 1,000 gpd per foot. The transmissibility is variable, decreasing with decreasing ground-water stage. The seeming inconsistency between the small to moderate ground-water yield to wells and the high yield to streams is explained in terms of the deep permeable soils, the relatively high gravity yield of the zone of water-table fluctuation, the steep water-table gradients toward the streams, the relatively low transmissibility of the rocks, and the rapid decrease in gravity yield below the lower limit of water-table fluctuation. It is concluded that no simple relation exists between the amount of natural ground-water discharge in an area and the proportion of this discharge that can be diverted to wells.
Stephen R. Shifley; Hong S. He; Heike Lischke; Wen J. Wang; Wenchi Jin; Eric J. Gustafson; Jonathan R. Thompson; Frank R. Thompson; William D. Dijak; Jian Yang
2017-01-01
Context. Quantitative models of forest dynamics have followed a progression toward methods with increased detail, complexity, and spatial extent. Objectives. We highlight milestones in the development of forest dynamics models and identify future research and application opportunities. Methods. We reviewed...
Daniel M. Bishop; Floyd A. Johnson
1958-01-01
The increasing commercial importance of red alder (Alnus rubra) in the Pacific Northwest has created a demand for research on this species. Noting the lack of information on growth of alder, the Puget Sound Research Center Advisory Committee established a subcommittee in January 1956 to undertake construction of alder yield tables. Through the...
Oeckl, Patrick; Steinacker, Petra; von Arnim, Christine A F; Straub, Sarah; Nagl, Magdalena; Feneberg, Emily; Weishaupt, Jochen H; Ludolph, Albert C; Otto, Markus
2014-11-07
The impairment of the ubiquitin-proteasome system (UPS) is thought to be an early event in neurodegeneration, and monitoring UPS alterations might serve as a disease biomarker. Our aim was to establish an alternate method to antibody-based assays for the selective measurement of free monoubiquitin in cerebrospinal fluid (CSF). Free monoubiquitin was measured with liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MS/MS) in CSF of patients with Alzheimer's disease (AD), amyotrophic lateral sclerosis (ALS), behavioral variant of frontotemporal dementia (bvFTD), Creutzfeldt-Jakob disease (CJD), Parkinson's disease (PD), primary progressive aphasia (PPA), and progressive supranuclear palsy (PSP). The LC-MS/MS method showed excellent intra- and interassay precision (4.4-7.4% and 4.9-10.3%) and accuracy (100-107% and 100-106%). CSF ubiquitin concentration was increased compared with that of controls (33.0 ± 9.7 ng/mL) in AD (47.5 ± 13.1 ng/mL, p < 0.05) and CJD patients (171.5 ± 103.5 ng/mL, p < 0.001) but not in other neurodegenerative diseases. Receiver operating characteristic curve (ROC) analysis of AD vs control patients revealed an area under the curve (AUC) of 0.832, and the specificity and sensitivity were 75 and 75%, respectively. ROC analysis of AD and FTLD patients yielded an AUC of 0.776, and the specificity and sensitivity were 53 and 100%, respectively. In conclusion, our LC-MS/MS method may facilitate ubiquitin determination to a broader community and might help to discriminate AD, CJD, and FTLD patients.
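AUC, sensitivity, and specificity of the kind reported above can be reproduced from raw scores with a few lines; the concentration values in the demonstration are illustrative, not the study's data:

```python
def roc_auc(pos, neg):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive (patient) score exceeds a random negative (control) score."""
    wins = 0.0
    for p in pos:
        for n in neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(pos) * len(neg))

def sens_spec(pos, neg, cutoff):
    """Sensitivity and specificity at a given decision cutoff."""
    sens = sum(1 for s in pos if s >= cutoff) / len(pos)
    spec = sum(1 for s in neg if s < cutoff) / len(neg)
    return sens, spec
```

Sweeping the cutoff over all observed scores traces out the full ROC curve; the AUC summarizes it in a single number, with 0.5 meaning no discrimination and 1.0 perfect separation.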
Levman, Jacob E D; Gallego-Ortiz, Cristina; Warner, Ellen; Causer, Petrina; Martel, Anne L
2016-02-01
Magnetic resonance imaging (MRI)-enabled cancer screening has been shown to be a highly sensitive method for the early detection of breast cancer. Computer-aided detection systems have the potential to improve the screening process by standardizing radiologists to a high level of diagnostic accuracy. This retrospective study was approved by the institutional review board of Sunnybrook Health Sciences Centre. This study compares the performance of a proposed method for computer-aided detection (based on the second-order spatial derivative of the relative signal intensity) with the signal enhancement ratio (SER) on MRI-based breast screening examinations. Comparison is performed using receiver operating characteristic (ROC) curve analysis as well as free-response receiver operating characteristic (FROC) curve analysis. A modified computer-aided detection system combining the proposed approach with the SER method is also presented. The proposed method provides improvements in the rates of false positive markings over the SER method in the detection of breast cancer (as assessed by FROC analysis). The modified computer-aided detection system that incorporates both the proposed method and the SER method yields ROC results equal to that produced by SER while simultaneously providing improvements over the SER method in terms of false positives per noncancerous exam. The proposed method for identifying malignancies outperforms the SER method in terms of false positives on a challenging dataset containing many small lesions and may play a useful role in breast cancer screening by MRI as part of a computer-aided detection system.
Yoon, Jun Sik; Lee, Yu Rim; Kweon, Young-Oh; Tak, Won Young; Jang, Se Young; Park, Soo Young; Hur, Keun; Park, Jung Gil; Lee, Hye Won; Chun, Jae Min; Han, Young Seok; Lee, Won Kee
2018-05-23
To compare the clinical value of acoustic radiation force impulse (ARFI) elastography and transient elastography (TE) for hepatocellular carcinoma (HCC) recurrence prediction after radiofrequency ablation (RFA) and to investigate other predictors of HCC recurrence. Between 2011 and 2016, 130 patients with HCC who underwent ARFI elastography and TE within 6 months before curative RFA were prospectively enrolled. Independent predictors of HCC recurrence were analyzed separately using ARFI elastography and TE. The accuracy of ARFI elastography and TE in predicting HCC recurrence was determined by receiver operating characteristic curve analysis. Of all included patients (91 men; mean age, 63.5 years; range: 43-84 years), 51 (42.5%) experienced HCC recurrence during the follow-up period (median, 21.9 months). In multivariable analysis using ARFI velocity, serum albumin and ARFI velocity [hazard ratio: 2.873; 95% confidence interval (CI): 1.806-4.571; P<0.001] were independent predictors of recurrence, and in multivariable analysis using TE value, serum albumin and TE value (hazard ratio: 1.028; 95% CI: 1.013-1.043; P<0.001) were independent predictors of recurrence. The area under the receiver operating characteristic curve of ARFI elastography (0.821; 95% CI: 0.747-0.895) was not statistically different from that of TE (0.793; 95% CI: 0.712-0.874) for predicting HCC recurrence (P=0.827). The optimal ARFI velocity and TE cutoff values were 1.6 m/s and 14 kPa, respectively. ARFI elastography and TE showed comparable performance in predicting HCC recurrence after RFA.
de Heer, K; Kok, M G M; Fens, N; Weersink, E J M; Zwinderman, A H; van der Schee, M P C; Visser, C E; van Oers, M H J; Sterk, P J
2016-03-01
Currently, there is no noninvasive test that can reliably diagnose early invasive pulmonary aspergillosis (IA). An electronic nose (eNose) can discriminate various lung diseases through an analysis of exhaled volatile organic compounds. We recently published a proof-of-principle study showing that patients with prolonged chemotherapy-induced neutropenia and IA have a distinct exhaled breath profile (or breathprint) that can be discriminated with an eNose. An eNose is cheap and noninvasive, and it yields results within minutes. We determined whether Aspergillus fumigatus colonization may also be detected with an eNose in cystic fibrosis (CF) patients. Exhaled breath samples of 27 CF patients were analyzed with a Cyranose 320. Culture of sputum samples defined the A. fumigatus colonization status. eNose data were classified using canonical discriminant analysis after principal component reduction. Our primary outcome was cross-validated accuracy, defined as the percentage of correctly classified subjects using the leave-one-out method. The P value was calculated by the generation of 100,000 random alternative classifications. Nine of the 27 subjects were colonized by A. fumigatus. In total, 3 subjects were misclassified, resulting in a cross-validated accuracy of the Cyranose detecting IA of 89% (P = 0.004; sensitivity, 78%; specificity, 94%). Receiver operating characteristic (ROC) curve analysis showed an area under the curve (AUC) of 0.89. The results indicate that A. fumigatus colonization leads to a distinctive breathprint in CF patients. The present proof-of-concept data merit external validation and monitoring studies. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
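The leave-one-out accuracy and permutation p-value described above can be sketched generically as follows. The nearest-centroid classifier is a toy stand-in for the canonical discriminant analysis used in the study, and the data are hypothetical:

```python
import math, random

def loo_accuracy(X, y, classify):
    """Leave-one-out cross-validated accuracy: each sample is classified
    by a model built from all the other samples."""
    hits = 0
    for i in range(len(X)):
        hits += classify(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i]) == y[i]
    return hits / len(X)

def nearest_centroid(train_X, train_y, x):
    """Toy classifier: assign x to the class with the nearest mean."""
    def centroid(label):
        pts = [p for p, l in zip(train_X, train_y) if l == label]
        return [sum(c) / len(pts) for c in zip(*pts)]
    return min(set(train_y), key=lambda l: math.dist(x, centroid(l)))

def permutation_p(X, y, observed, trials=1000, seed=0):
    """P-value as the fraction of random label shuffles that reach the
    observed cross-validated accuracy or better."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        perm = y[:]
        rng.shuffle(perm)
        hits += loo_accuracy(X, perm, nearest_centroid) >= observed
    return hits / trials
```

The permutation step mirrors the study's approach of generating random alternative classifications: if shuffled labels rarely reach the observed accuracy, the breathprint separation is unlikely to be chance.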
Lee, Jiyeong; Joo, Eun-Jeong; Lim, Hee-Joung; Park, Jong-Moon; Lee, Kyu Young; Park, Arum; Seok, AeEun
2015-01-01
Objective Currently, there are few biological markers to aid in the diagnosis and treatment of depression, and these are not sufficient for diagnosis. We attempted to identify differentially expressed proteins during depressive moods as putative diagnostic biomarkers by using quantitative proteomic analysis of serum. Methods Blood samples were collected twice from five patients with major depressive disorder (MDD), at depressive status before treatment and at remission status during treatment. Samples were individually analyzed by liquid chromatography-tandem mass spectrometry for protein profiling. Differentially expressed proteins were analyzed by label-free quantification. Enzyme-linked immunosorbent assay (ELISA) results and receiver-operating characteristic (ROC) curves were used to validate the differentially expressed proteins. For validation, 8 patients with MDD, including 3 additional patients, and 8 matched normal controls were analyzed. Results The quantitative proteomic studies identified 10 proteins that were consistently upregulated or downregulated in 5 MDD patients. ELISA yielded results consistent with the proteomic analysis for 3 proteins. Expression levels were significantly different between normal controls and MDD patients. The 3 proteins were ceruloplasmin, inter-alpha-trypsin inhibitor heavy chain H4 and complement component 1qC, which were upregulated during the depressive status. The depressive status could be distinguished from the euthymic status from the ROC curves for these proteins, and this discrimination was enhanced when all 3 proteins were analyzed together. Conclusion This is the first proteomic study in MDD patients to compare intra-individual differences dependent on mood. This technique could be a useful approach to identify MDD biomarkers, but requires additional proteomic studies for validation. PMID:25866527
A framework for the probabilistic analysis of meteotsunamis
Geist, Eric L.; ten Brink, Uri S.; Gove, Matthew D.
2014-01-01
A probabilistic technique is developed to assess the hazard from meteotsunamis. Meteotsunamis are unusual sea-level events, generated when the speed of an atmospheric pressure or wind disturbance is comparable to the phase speed of long waves in the ocean. A general aggregation equation is proposed for the probabilistic analysis, based on previous frameworks established for both tsunamis and storm surges, incorporating different sources and source parameters of meteotsunamis. Parameterization of atmospheric disturbances and numerical modeling is performed for the computation of maximum meteotsunami wave amplitudes near the coast. A historical record of pressure disturbances is used to establish a continuous analytic distribution of each parameter as well as the overall Poisson rate of occurrence. A demonstration study is presented for the northeast U.S. in which only isolated atmospheric pressure disturbances from squall lines and derechos are considered. For this study, Automated Surface Observing System stations are used to determine the historical parameters of squall lines from 2000 to 2013. The probabilistic equations are implemented using a Monte Carlo scheme, where a synthetic catalog of squall lines is compiled by sampling the parameter distributions. For each entry in the catalog, ocean wave amplitudes are computed using a numerical hydrodynamic model. Aggregation of the results from the Monte Carlo scheme results in a meteotsunami hazard curve that plots the annualized rate of exceedance with respect to maximum event amplitude for a particular location along the coast. Results from using multiple synthetic catalogs, resampled from the parent parameter distributions, yield mean and quantile hazard curves. Further refinements and improvements for probabilistic analysis of meteotsunamis are discussed.
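The Monte Carlo aggregation step can be sketched as follows; the event rate and amplitude distribution below are hypothetical placeholders for the Poisson rate and parameter distributions fitted from the ASOS squall-line records:

```python
import random

def hazard_curve(event_rate, sample_amplitude, thresholds, n=10000, seed=1):
    """Annualized exceedance rate lambda(a) = event_rate * P(A > a),
    estimated from a Monte Carlo catalog of synthetic events."""
    rng = random.Random(seed)
    amps = [sample_amplitude(rng) for _ in range(n)]
    return [event_rate * sum(1 for a in amps if a > t) / n for t in thresholds]

def sample_amp(rng):
    # Hypothetical squall-line amplitude model: lognormal, median ~0.14 m.
    # In the actual framework this would come from hydrodynamic modeling
    # of a squall line sampled from the fitted parameter distributions.
    return rng.lognormvariate(-2.0, 0.8)

# Hypothetical rate of 5 squall-line events per year affecting the site
curve = hazard_curve(5.0, sample_amp, [0.05, 0.1, 0.2, 0.5])
```

Resampling the parent parameter distributions to build multiple synthetic catalogs, as the abstract describes, amounts to repeating this calculation with different seeds and parameter draws and then taking the mean and quantiles of the resulting curves.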
Constraining Relativistic Bow Shock Properties in Rotation-Powered Millisecond Pulsar Binaries
Wadiasingh, Zorawar; Harding, Alice K; Venter, Christo; Böttcher, Markus; Baring, Matthew G
2017-04-20
Multiwavelength follow-up of unidentified Fermi sources has vastly expanded the number of known galactic-field "black widow" and "redback" millisecond pulsar binaries. Focusing on their rotation-powered state, we interpret the radio to X-ray phenomenology in a consistent framework. We advocate the existence of two distinct modes differing in their intrabinary shock orientation, distinguished by the phase-centering of the double-peaked X-ray orbital modulation originating from mildly-relativistic Doppler boosting. By constructing a geometric model for radio eclipses, we constrain the shock geometry as functions of binary inclination and shock stand-off R0. We develop synthetic X-ray synchrotron orbital light curves and explore the model parameter space allowed by radio eclipse constraints applied on archetypal systems B1957+20 and J1023+0038. For B1957+20, from radio eclipses the stand-off is R0 ~ 0.15-0.3 fraction of the binary separation from the companion center, depending on the orbit inclination. Constructed X-ray light curves for B1957+20 using these values are qualitatively consistent with those observed, and we find occultation of the shock by the companion to be a minor influence, demanding significant Doppler factors to yield double peaks. For J1023+0038, radio eclipses imply R0 ≲ 0.4 while X-ray light curves suggest 0.1 ≲ R0 ≲ 0.3 (from the pulsar). Degeneracies in the model parameter space encourage further development to include transport considerations. Generically, the spatial variation along the shock of the underlying electron power-law index should yield energy dependence in the shape of the light curves, motivating future X-ray phase-resolved spectroscopic studies to probe the unknown physics of pulsar winds and relativistic shock acceleration therein.
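The double-peaked modulation from mildly relativistic Doppler boosting can be illustrated with a deliberately simplified toy: two shock-flank flows whose directions sweep past the observer once per orbit, each beamed by its Doppler factor. The flow speed, flank angle, and spectral index below are assumptions, not the paper's fitted values, and the geometry ignores inclination and occultation.

```python
# Toy double-peak light curve: flux ∝ δ^(3+α) summed over two shock
# flanks, with an edge-on observer. All parameters are illustrative.
import math

BETA = 0.5    # assumed bulk flow speed v/c (mildly relativistic)
CHI = 1.0     # assumed flank half-opening angle, rad
ALPHA = 1.0   # assumed electron/synchrotron spectral index
GAMMA = 1 / math.sqrt(1 - BETA**2)

def doppler(cos_psi):
    # Doppler factor for angle psi between flow and line of sight.
    return 1 / (GAMMA * (1 - BETA * cos_psi))

def flux(phase):
    # Two flank flow directions at orbital phase ± CHI.
    return sum(doppler(math.cos(phase + s * CHI)) ** (3 + ALPHA)
               for s in (+1, -1))

phases = [2 * math.pi * k / 720 for k in range(720)]
curve = [flux(p) for p in phases]
```

Each flank is maximally boosted when its flow points at the observer, so the summed light curve shows two peaks per orbit, echoing the phenomenology the abstract describes.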
Strong Lens Time Delay Challenge. II. Results of TDC1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Kai; Treu, Tommaso; Marshall, Phil
2015-02-10
We present the results of the first strong lens time delay challenge. The motivation, experimental design, and entry-level challenge are described in a companion paper. This paper presents the main challenge, TDC1, which consisted of analyzing thousands of simulated light curves blindly. The observational properties of the light curves cover the range in quality obtained for current targeted efforts (e.g., COSMOGRAIL) and expected from future synoptic surveys (e.g., LSST), and include simulated systematic errors. Seven teams participated in TDC1, submitting results from 78 different method variants. After describing each method, we compute and analyze basic statistics measuring accuracy (or bias) A, goodness of fit χ², precision P, and success rate f. For some methods we identify outliers as an important issue. Other methods show that outliers can be controlled via visual inspection or conservative quality control. Several methods are competitive, i.e., give |A| < 0.03, P < 0.03, and χ² < 1.5, with some of the methods already reaching sub-percent accuracy. The fraction of light curves yielding a time delay measurement is typically in the range f = 20%-40%. It depends strongly on the quality of the data: COSMOGRAIL-quality cadence and light curve lengths yield significantly higher f than does sparser sampling. Taking the results of TDC1 at face value, we estimate that LSST should provide around 400 robust time-delay measurements, each with P < 0.03 and |A| < 0.01, comparable to current lens modeling uncertainties. In terms of observing strategies, we find that A and f depend mostly on season length, while P depends mostly on cadence and campaign duration.
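The summary statistics can be sketched from their usual definitions: A is the mean fractional bias of the submitted delays, P the mean claimed fractional precision, χ² the mean squared normalized residual, and f the fraction of curves with a submitted measurement. The numbers below are invented, and the exact normalizations should be treated as assumptions about the challenge's conventions.

```python
# Sketch of TDC1-style metrics on a hypothetical submission.

def tdc_metrics(true, est, sigma, n_total):
    n = len(true)
    f = n / n_total                                            # success rate
    A = sum((e - t) / t for t, e in zip(true, est)) / n        # accuracy/bias
    P = sum(s / abs(t) for t, s in zip(true, sigma)) / n       # precision
    chi2 = sum(((e - t) / s) ** 2
               for t, e, s in zip(true, est, sigma)) / n       # goodness of fit
    return A, P, chi2, f

true_dt = [40.0, 80.0, 120.0]    # true delays (days), hypothetical
est_dt = [40.4, 79.2, 121.2]     # submitted estimates
sig_dt = [1.0, 1.2, 1.5]         # submitted uncertainties
A, P, chi2, f = tdc_metrics(true_dt, est_dt, sig_dt, n_total=10)
```

For this made-up entry, |A| and P sit near the competitive thresholds quoted above, while f = 0.3 falls in the typical 20%-40% range.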
NASA Technical Reports Server (NTRS)
Luecke, William E.; Ma, Li; Graham, Stephen M.; Adler, Matthew A.
2010-01-01
Ten commercial laboratories participated in an interlaboratory study to establish the repeatability and reproducibility of compression strength tests conducted according to ASTM International Standard Test Method E9. The test employed a cylindrical aluminum AA2024-T351 test specimen. Participants measured elastic modulus and 0.2 % offset yield strength, YS(0.2 % offset), using an extensometer attached to the specimen. The repeatability and reproducibility of the yield strength measurement, expressed as coefficients of variation, were cv_r = 0.011 and cv_R = 0.020. The reproducibility of the test across the laboratories was among the best that has been reported for uniaxial tests. The reported data indicated that using diametrically opposed extensometers, instead of a single extensometer, doubled the precision of the test method. Laboratories that did not lubricate the ends of the specimen measured yield stresses and elastic moduli that were smaller than those measured in laboratories that lubricated the specimen ends. A finite element analysis of the test specimen deformation for frictionless and perfect friction could not explain the discrepancy, however. The moduli measured from stress-strain data were reanalyzed using a technique that finds the optimal fit range and applies several quality checks to the data. The error in modulus measurements from stress-strain curves generally increased as the fit range decreased to less than 40 % of the stress range.
Bellasio, Chandra; Beerling, David J; Griffiths, Howard
2016-06-01
Combined photosynthetic gas exchange and modulated fluorometers are widely used to evaluate physiological characteristics associated with phenotypic and genotypic variation, whether in response to genetic manipulation or resource limitation in natural vegetation or crops. After describing relatively simple experimental procedures, we present the theoretical background to the derivation of photosynthetic parameters, and provide a freely available Excel-based fitting tool (EFT) that will be of use to specialists and non-specialists alike. We use data acquired in concurrent variable fluorescence-gas exchange experiments, where A/Ci and light-response curves have been measured under ambient and low oxygen. From these data, the EFT derives light respiration, initial PSII (photosystem II) photochemical yield, initial quantum yield for CO2 fixation, fraction of incident light harvested by PSII, initial quantum yield for electron transport, electron transport rate, rate of photorespiration, stomatal limitation, Rubisco (ribulose 1,5-bisphosphate carboxylase/oxygenase) rate of carboxylation and oxygenation, Rubisco specificity factor, mesophyll conductance to CO2 diffusion, light and CO2 compensation point, Rubisco apparent Michaelis-Menten constant, and Rubisco CO2-saturated carboxylation rate. As an example, a complete analysis of gas exchange data on tobacco plants is provided. We also discuss potential measurement problems and pitfalls, and suggest how such empirical data could subsequently be used to parameterize predictive photosynthetic models. © 2015 John Wiley & Sons Ltd.
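One of the simplest derived quantities listed above is the initial quantum yield for CO2 fixation: the slope of net assimilation versus irradiance in the light-limited region, with the intercept giving an estimate of light respiration. A minimal sketch on synthetic data (not the EFT's method or the tobacco dataset):

```python
# Initial quantum yield and light respiration from the low-light
# portion of a light-response curve (synthetic, noiseless data).

def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx   # slope, intercept

par = [0, 25, 50, 75, 100]               # µmol photons m-2 s-1 (low light)
a_net = [-1.0 + 0.05 * q for q in par]   # synthetic: phi = 0.05, Rd = 1.0

phi, intercept = linear_fit(par, a_net)  # initial quantum yield
rd = -intercept                          # light respiration estimate
```

Real curves bend over at higher light, so restricting the fit to the light-limited points (as done here by construction) is essential.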
NASA Astrophysics Data System (ADS)
Riddin, T. L.; Gericke, M.; Whiteley, C. G.
2006-07-01
A Fusarium oxysporum fungal strain was screened and found to be successful for the inter- and extracellular production of platinum nanoparticles. Nanoparticle formation was visually observed, over time, by the colour of the extracellular solution and/or the fungal biomass turning from yellow to dark brown, and their concentration was determined from the amount of residual hexachloroplatinic acid measured against a standard curve at 456 nm. The extracellular nanoparticles were characterized by transmission electron microscopy. Nanoparticles of varying size (10-100 nm) and shape (hexagons, pentagons, circles, squares, rectangles) were produced at both extracellular and intercellular levels by the Fusarium oxysporum. The particles precipitate out of solution and bioaccumulate by nucleation either intercellularly, on the cell wall/membrane, or extracellularly in the surrounding medium. The importance of pH, temperature and hexachloroplatinic acid (H2PtCl6) concentration in nanoparticle formation was examined through the use of a statistical response surface methodology. Only the extracellular production of nanoparticles proved to be statistically significant, with a concentration yield of 4.85 mg l-1 estimated by a first-order regression model. A second-order polynomial regression increased the predicted yield of nanoparticles to 5.66 mg l-1 and, after backward-step elimination, the final model gave a yield of 6.59 mg l-1.
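The second-order (response surface) step amounts to fitting a quadratic in the factors and reading off the stationary point. A one-factor slice with invented numbers (not the study's pH/temperature/concentration data) makes the idea concrete:

```python
# One-factor slice of a second-order response-surface fit:
# quadratic model of yield vs. pH, whose stationary point gives
# the predicted optimum. Data are synthetic and noiseless.
import numpy as np

ph = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
# Synthetic yields (mg/l) generated from y = -0.5*(pH - 5)^2 + 6.0
yld = -0.5 * (ph - 5.0) ** 2 + 6.0

b2, b1, b0 = np.polyfit(ph, yld, 2)   # second-order model coefficients
ph_opt = -b1 / (2 * b2)               # stationary point of the quadratic
y_opt = b0 + b1 * ph_opt + b2 * ph_opt ** 2
```

The full methodology fits all three factors plus interaction terms and tests each coefficient's significance before the backward-elimination step.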
Mechanical Properties, Short Time Creep, and Fatigue of an Austenitic Steel
Brnic, Josip; Turkalj, Goran; Canadija, Marko; Lanc, Domagoj; Krscanski, Sanjin; Brcic, Marino; Li, Qiang; Niu, Jitai
2016-01-01
The correct choice of material is one of the most important tasks in structural design. This study deals with determining and analyzing the mechanical properties of the material, and the material's resistance to short-time creep and fatigue. The material under consideration in this investigation is the austenitic stainless steel X6CrNiTi18-10. The results presenting ultimate tensile strength and 0.2 % offset yield strength at room and elevated temperatures are displayed in the form of engineering stress-strain diagrams. In addition, the creep behavior of the steel is presented in the form of creep curves. The material is considered to be creep resistant at temperatures of 400 °C and 500 °C when subjected to a stress which is less than 0.9 of the yield strength at the mentioned temperatures. Even when the applied stress at a temperature of 600 °C is less than 0.5 of the yield strength, the steel may be considered resistant to creep. Cyclic tensile fatigue tests were carried out at stress ratio R = 0.25 using a servo-pulser machine and the results were recorded. The analysis shows that a stress level of 434.33 MPa can be adopted as the fatigue limit. The impact energy was also determined and the fracture toughness assessed. PMID:28773424
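The stress ratio R = σmin/σmax fixes the mean stress and amplitude of the cycle once the maximum stress is chosen. As a worked example (taking σmax at the reported fatigue limit is an assumption for illustration):

```python
# Cycle parameters implied by stress ratio R = sigma_min / sigma_max.
# Using the reported fatigue-limit stress as sigma_max is illustrative.

R = 0.25
sigma_max = 434.33                        # MPa, reported fatigue limit
sigma_min = R * sigma_max                 # 108.58 MPa
sigma_mean = (sigma_max + sigma_min) / 2  # mean stress of the cycle
sigma_amp = (sigma_max - sigma_min) / 2   # stress amplitude
```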
The Effect of Artificial Aging on the Tensile Properties of Alclad 24S-T and 24S-T Aluminum Alloy
NASA Technical Reports Server (NTRS)
Kotanchik, Joseph N.; Woods, Walter; Zender, George W.
1943-01-01
An experimental study was made to determine the effect of artificial aging on the tensile properties of alclad 24S-T and 24S-T aluminum-alloy sheet material. The results of the tests show that certain combinations of aging time and temperature cause a marked increase in the yield strength and a small increase in the ultimate strength; these increases are accompanied by a very large decrease in elongation. A curve is presented that shows the maximum yield strengths that can be obtained by aging this material at various combinations of time and temperature. The higher values of yield stress are obtained in material aged at relatively longer times and lower temperatures.
Does technology acceleration equate to mask cost acceleration?
NASA Astrophysics Data System (ADS)
Trybula, Walter J.; Grenon, Brian J.
2003-06-01
The technology acceleration of the ITRS Roadmap has many implications for both the semiconductor supplier community and the manufacturers. INTERNATIONAL SEMATECH has re-evaluated the projected cost of advanced technology masks. Building on the methodology developed in 1996 for mask costs, this work provided a critical review of mask yields and factors relating to the manufacture of photolithography masks. The impact of the yields provided insight into the learning curve for leading-edge mask manufacturing. The projected mask set cost was surprising, and the ability to provide first- and second-year cost estimates provided additional information on technology introduction. From this information, the impact of technology acceleration can be added to the projected yields to evaluate the impact on mask costs.
NASA Technical Reports Server (NTRS)
Starkey, D.; Gehrels, Cornelis; Horne, Keith; Fausnaugh, M. M.; Peterson, B. M.; Bentz, M. C.; Kochanek, C. S.; Denney, K. D.; Edelson, R.; Goad, M. R.;
2017-01-01
We conduct a multi-wavelength continuum variability study of the Seyfert 1 galaxy NGC 5548 to investigate the temperature structure of its accretion disk. The 19 overlapping continuum light curves (1158 Å to 9157 Å) combine simultaneous Hubble Space Telescope, Swift, and ground-based observations over a 180 day period from 2014 January to July. Light-curve variability is interpreted as the reverberation response of the accretion disk to irradiation by a central time-varying point source. Our model yields the disk inclination i = 36° ± 10°, temperature T1 = (44 ± 6) × 10³ K at 1 light day from the black hole, and a temperature-radius slope (T ∝ r^(-α)) of α = 0.99 ± 0.03. We also infer the driving light curve and find that it correlates poorly with both the hard and soft X-ray light curves, suggesting that the X-rays alone may not drive the ultraviolet and optical variability over the observing period. We also decompose the light curves into bright, faint, and mean accretion-disk spectra. These spectra lie below that expected for a standard blackbody accretion disk accreting at L/L_Edd = 0.1.
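The quoted fit defines a simple temperature profile, and Wien's law then indicates which disk radii dominate at each wavelength, which is the physical basis of the wavelength-dependent reverberation lags. A sketch using the best-fit values (the Wien-peak step is an illustration, not part of the paper's model):

```python
# Disk temperature profile T(r) = T1 * r^(-alpha), r in light days,
# using the quoted best-fit values; Wien peak added for illustration.

T1 = 44e3          # K at 1 light day (quoted best fit)
ALPHA = 0.99       # quoted temperature-radius slope
WIEN_B = 2.898e-3  # m*K, Wien displacement constant

def disk_temperature(r_lt_days):
    return T1 * r_lt_days ** (-ALPHA)

def wien_peak_nm(T):
    return WIEN_B / T * 1e9

temps = [disk_temperature(r) for r in (0.5, 1.0, 2.0, 4.0)]
```

Larger radii are cooler and peak at longer wavelengths, so longer-wavelength light curves lag the driving continuum by roughly the extra light-travel time to those radii.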
Induction of micronuclei in human fibroblasts across the Bragg curve of energetic heavy ions
NASA Technical Reports Server (NTRS)
Hada, Megumi; Rusek, Adam; Cucinotta, Francis A.; Wu, Honglu
2006-01-01
The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X-rays, the presence of shielding does not always reduce the radiation risks for energetic charged particle exposure. Although the dose delivered by the charged particle increases sharply as the particle approaches the Bragg peak, the Bragg curve does not necessarily represent the biological damage along the particle traversal. The "biological Bragg curve" is dependent on the energy and the type of the primary particle, and may vary for different biological endpoints. To investigate "biological Bragg curves", we analyzed micronuclei (MN) induction along the particle traversal of Si and Fe ions at incident energies of 300 MeV/nucleon and 1 GeV/nucleon. A quantitative biological response curve did not reveal an increased yield of MN at the location of the Bragg peak. However, the ratio of mono- to bi-nucleated cells, which indicates inhibition in cell progression, increased at the Bragg peak location. These results confirm the hypothesis that "overkill" at the Bragg peak will affect the outcome of other biological endpoints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, K.R.; DeBusk, W.F.
Seasonal growth characteristics and biomass yield potential of 3 floating aquatic macrophytes cultured under nutrient-nonlimiting conditions were evaluated in central Florida's climatic conditions. The growth cycle (growth curve) of the plants was found to be complete when maximum plant density was reached and no additional increase in growth was recorded. Biomass yield per unit area and time was found to be maximum in the linear phase of the growth curve; plant density in this phase was defined as "operational plant density," a density range in which a biomass production system is operated to obtain the highest possible yields. Biomass yields were found to be 106, 72, and 41 t (dry wt) ha⁻¹ yr⁻¹, respectively, for water hyacinth (Eichhornia crassipes), water lettuce (Pistia stratiotes), and pennywort (Hydrocotyle umbellata). Operational plant density was found to be in the range of 500-2,000 g dry wt m⁻² for water hyacinth, 200-700 g dry wt m⁻² for water lettuce, and 250-650 g dry wt m⁻² for pennywort. Seasonality was observed in growth rates but not in operational plant density. Specific growth rate (% increase per day) was found to be maximum at low plant densities and decreased as plant density increased. Results show that water hyacinth and water lettuce can be successfully grown for a period of about 10 mo, while pennywort, a cool-season plant, can be integrated into a water hyacinth/water lettuce biomass production system to obtain high yields in the winter.
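The qualitative behavior described (a growth curve that saturates at maximum density, with specific growth rate highest at low density) is what a logistic model produces. A minimal sketch with assumed parameters, not the measured Florida data:

```python
# Logistic growth-curve sketch: density saturates at carrying
# capacity K, while specific growth rate falls linearly with density.
# K and R0 are illustrative assumptions.
import math

K = 2000.0   # assumed maximum plant density, g dry wt m-2
R0 = 0.05    # assumed intrinsic growth rate, per day

def density(t, n0=50.0):
    """Plant density after t days, starting from n0."""
    return K / (1 + (K / n0 - 1) * math.exp(-R0 * t))

def specific_growth_rate(n):
    # Logistic: (1/N) dN/dt = R0 * (1 - N/K), in % per day.
    return 100 * R0 * (1 - n / K)
```

In this model the absolute growth rate dN/dt peaks at N = K/2, i.e., mid-way through the linear phase, which is where an "operational plant density" harvesting band would be centered.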
NASA Astrophysics Data System (ADS)
Mitra, B.; Basu, S.; Bereznyakov, D.; Pereira, A.; Naithani, K. J.
2015-12-01
Drought across different agro-climatic regions of the world has the capacity to drastically impact the yield potential of rice. Consequently, there is growing interest in developing drought-tolerant rice varieties with high yield. We parameterized two photosynthesis models based on light and CO2 response curves for seven rice genotypes with different drought survival mechanisms: sensitive (Nipponbare, TEJ), resistant (Bengal, TRJ), and avoidance by osmotic adjustment (Kaybonnet, TRJ; IRAT177, TRJ; N22, Aus; Vandana, Aus; and O. glaberrima, 316603). All rice genotypes were grown in greenhouse conditions (24 °C ± 3 °C air temperature and ~600 μmol m-2 s-1 light intensity) with light/dark cycles of 10/14 h in water-filled trays simulating flooded conditions. Measurements were conducted on fully grown plants (35-60 days old) under simulated flooded and drought conditions. Preliminary results have shown that the drought-sensitive genotype Nipponbare has the lowest photosynthetic carboxylation capacity (Vcmax) and a similar electron transport rate (Jmax) compared to the drought-resistant genotype IRAT177. Mitochondrial respiration (Rd) was similar across all genotypes, while the quantum yield of the drought-sensitive genotype was greater than that of the drought-resistant genotypes. While both drought-tolerant and drought-sensitive rice genotypes have the same photosynthetic yield, from an irrigation perspective the former would require fewer 'drops per grain'. This has enormous economic and management implications on account of dwindling water resources across the world due to drought.
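The Vcmax parameter fitted from A/Ci curves enters through the Rubisco-limited assimilation rate of the Farquhar-type models mentioned above. A worked evaluation with textbook-style parameter values (assumptions, not this study's fits):

```python
# Rubisco-limited net assimilation from the Farquhar model:
# A = Vcmax*(Ci - Gamma*) / (Ci + Kc*(1 + O/Ko)) - Rd.
# All parameter values are illustrative assumptions.

VCMAX = 80.0        # µmol m-2 s-1, carboxylation capacity
GAMMA_STAR = 40.0   # µmol mol-1, CO2 compensation point (no Rd)
KC = 270.0          # µmol mol-1, Michaelis constant for CO2
KO = 165.0          # mmol mol-1, Michaelis constant for O2
O2 = 210.0          # mmol mol-1, ambient O2
RD = 1.0            # µmol m-2 s-1, mitochondrial respiration

def a_rubisco(ci):
    return VCMAX * (ci - GAMMA_STAR) / (ci + KC * (1 + O2 / KO)) - RD

a_ambient = a_rubisco(280.0)   # sub-stomatal CO2 under ambient air
```

Fitting Vcmax in practice inverts this relation over the low-Ci portion of the A/Ci curve; comparing fitted Vcmax between genotypes is how a statement like "Nipponbare has the lowest carboxylation capacity" is reached.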
NASA Technical Reports Server (NTRS)
Hu, Shoufeng; Bark, Jong S.; Nairn, John A.
1993-01-01
A variational analysis of the stress state in microcracked cross-ply laminates has been used to investigate the phenomenon of curved microcracking in [S/90n]s laminates. Previous investigators proposed that the initiation and orientation of curved microcracks are controlled by local maxima and stress trajectories of the principal stresses. We have implemented a principal stress model using a variational mechanics stress analysis and were able to make predictions about curved microcracks. The predictions agree well with experimental observations and therefore support the assertion that the variational analysis gives an accurate stress state that is useful for modeling the microcracking properties of cross-ply laminates. An important prediction about curved microcracks is that they are a late stage of microcracking damage. They occur only when the crack density of straight microcracks exceeds the critical crack density for curved microcracking. The predicted critical crack density for curved microcracking agrees well with experimental observations.