Sample records for results average values

  1. Profile of Prospective Physics Teachers on Assessment Literacy

    NASA Astrophysics Data System (ADS)

    Efendi, R.; Rustaman, N. Y.; Kaniawati, I.

    2017-02-01

A study of the assessment literacy of prospective physics teachers was conducted with 45 participants. Data were collected using a test covering seven competencies. The profile of the prospective physics teachers on assessment literacy was determined with descriptive statistics, in the form of respondent average values. The findings show that the prospective physics teachers were weak in all competency areas. The highest average value was obtained for choosing assessment methods appropriate for instructional decisions, and the lowest for communicating assessment results to students, parents, other lay audiences, and other educators. An in-depth study to identify the reasons underlying these results was still in progress; a further aspect was planned for administration in the following semester.

  2. Improved simulation of group averaged CO2 surface concentrations using GEOS-Chem and fluxes from VEGAS

    NASA Astrophysics Data System (ADS)

    Chen, Z. H.; Zhu, J.; Zeng, N.

    2013-01-01

CO2 measurements have been combined with simulated CO2 distributions from a transport model in order to produce optimal estimates of CO2 surface fluxes in inverse modeling. However, one persistent problem in using model-observation comparisons for this goal is compatibility: observations at a single site reflect underlying processes of various scales that usually cannot be fully resolved by model simulations at the grid points nearest the site, due to lack of spatial or temporal resolution or to processes missing from the models. In this article we group site observations from multiple stations according to atmospheric mixing regimes and surface characteristics. Group averaged values of CO2 concentration from model simulations and observations are used to evaluate the regional model results; averaging over a group of stations reduces the noise of individual stations. The difference in group averaged values between observations and model results reflects the uncertainty of the large-scale flux in the region containing the grouped stations. We compared group averaged values from model runs driven by two biospheric fluxes, from the Carnegie-Ames-Stanford-Approach (CASA) model and from VEgetation-Global-Atmosphere-Soil (VEGAS), against observations. The results show that the modeled group averaged CO2 concentrations with fluxes from VEGAS improve significantly for most regions. Large differences between both model results and observations remain for the group averaged values in the North Atlantic, Indian Ocean, and South Pacific Tropics, implying possibly large uncertainties in the fluxes there.
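
The grouping step described above can be sketched in a few lines; the station names, group assignments, and concentration values below are purely hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical monthly CO2 concentrations (ppm) at stations, each
# assigned to a mixing-regime / surface-characteristics group.
obs = {"MLO": 400.1, "SPO": 398.9, "BRW": 402.3, "ALT": 401.7}
mod = {"MLO": 399.2, "SPO": 399.5, "BRW": 404.0, "ALT": 403.0}
groups = {"tropics": ["MLO"], "high_lat": ["BRW", "ALT"], "south": ["SPO"]}

def group_average(values, groups):
    """Average station values within each group to damp single-station noise."""
    return {g: float(np.mean([values[s] for s in members]))
            for g, members in groups.items()}

obs_avg = group_average(obs, groups)
mod_avg = group_average(mod, groups)
# A persistent group-mean model-minus-observation mismatch points at
# regional flux error rather than single-site noise.
mismatch = {g: mod_avg[g] - obs_avg[g] for g in groups}
```

Because the group mean damps station-level noise, a mismatch that survives averaging is more plausibly a regional flux error than a siting artifact.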

  3. Occupational exposure assessment of magnetic fields generated by induction heating equipment-the role of spatial averaging.

    PubMed

    Kos, Bor; Valič, Blaž; Kotnik, Tadej; Gajšek, Peter

    2012-10-07

    Induction heating equipment is a source of strong and nonhomogeneous magnetic fields, which can exceed occupational reference levels. We investigated a case of an induction tempering tunnel furnace. Measurements of the emitted magnetic flux density (B) were performed during its operation and used to validate a numerical model of the furnace. This model was used to compute the values of B and the induced in situ electric field (E) for 15 different body positions relative to the source. For each body position, the computed B values were used to determine their maximum and average values, using six spatial averaging schemes (9-285 averaging points) and two averaging algorithms (arithmetic mean and quadratic mean). Maximum and average B values were compared to the ICNIRP reference level, and E values to the ICNIRP basic restriction. Our results show that in nonhomogeneous fields, the maximum B is an overly conservative predictor of overexposure, as it yields many false positives. The average B yielded fewer false positives, but as the number of averaging points increased, false negatives emerged. The most reliable averaging schemes were obtained for averaging over the torso with quadratic averaging, with no false negatives even for the maximum number of averaging points investigated.
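
The comparison of averaging algorithms can be illustrated as follows; the field samples and the reference level are invented for illustration and are not the ICNIRP figures or the paper's measurements.

```python
import numpy as np

# Hypothetical magnetic flux density samples (microtesla) at averaging
# points over the torso, strongly nonhomogeneous near the source; the
# 100 uT "reference level" is illustrative only.
b_points = np.array([250.0, 120.0, 60.0, 30.0, 15.0, 8.0])
reference_level = 100.0

b_max = b_points.max()                    # worst-case single point
b_arith = b_points.mean()                 # arithmetic mean
b_quad = np.sqrt(np.mean(b_points ** 2))  # quadratic (RMS) mean

# The quadratic mean weights hot spots more heavily than the arithmetic
# mean, which is why it is less prone to false negatives as the number
# of averaging points grows.
assessments = {
    "max": b_max > reference_level,
    "arithmetic": b_arith > reference_level,
    "quadratic": b_quad > reference_level,
}
```

Here the single maximum flags an exceedance, the arithmetic mean does not, and the quadratic mean does: the same qualitative pattern the study reports for nonhomogeneous fields.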

  4. Appraisal of application possibilities of smoothed splines to designation of the average values of terrain curvatures measured after the termination of hard coal exploitation conducted at medium depth

    NASA Astrophysics Data System (ADS)

    Orwat, J.

    2018-01-01

This paper presents the results of calculations of the average values of terrain curvatures measured after the termination of subsequent exploitation stages in coal bed 338/2, located at medium depth. The curvatures were measured on neighbouring segments of measuring line No. 1, established perpendicular to the runways of four longwalls (No. 001, 002, 005 and 007). The average courses of the measured curvatures were derived from the average courses of the measured inclinations, which in turn were calculated from the average values of the measured subsidence. The subsidence averages were obtained by least-squares approximation using smoothed splines, with reference to the theoretical courses given by the formulas of S. Knothe and J. Bialek, using standard values of the parameters: the roof-rock subsidence coefficient a, the exploitation rim Aobr and the angle of the main influences range β. The standard deviations between the average and measured curvatures σC and the variability coefficients of random scattering of curvatures MC were calculated and compared with values reported in the literature; on this basis, the suitability of smoothed splines for determining the average course of the observed curvatures of a mining area was appraised.
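
The chain from averaged subsidence to inclinations to curvatures on neighbouring segments can be sketched with finite differences; the measuring-line geometry and subsidence trough below are synthetic, and the spline-smoothing step itself is omitted.

```python
import numpy as np

# Hypothetical measuring line: points every 25 m, with an already
# averaged (smooth) subsidence trough in metres centred at x = 250 m.
x = np.arange(0.0, 500.0 + 25.0, 25.0)
w = -1.5 * np.exp(-((x - 250.0) / 120.0) ** 2)   # subsidence profile

# Inclination of each segment = subsidence difference over segment length.
incl = np.diff(w) / np.diff(x)

# Curvature on neighbouring segments = inclination difference over the
# distance between segment midpoints.
mid = 0.5 * (x[:-1] + x[1:])
curv = np.diff(incl) / np.diff(mid)
```

Each differentiation shortens the profile by one point, which is why curvatures are attributed to pairs of neighbouring segments rather than to survey points.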

  5. How long the singular value decomposed entropy predicts the stock market? - Evidence from the Dow Jones Industrial Average Index

    NASA Astrophysics Data System (ADS)

    Gu, Rongbao; Shao, Yanmin

    2016-07-01

In this paper, a new concept of multi-scale singular value decomposition entropy based on DCCA cross-correlation analysis is proposed and its predictive power for the Dow Jones Industrial Average Index is studied. Using Granger causality analysis with different time scales, we find that the singular value decomposition entropy has predictive power for the Dow Jones Industrial Average Index at horizons of less than one month, but not beyond one month. This quantifies how long the singular value decomposition entropy predicts the stock market, extending Caraiani's result in Caraiani (2014). The result also reveals an essential characteristic of the stock market as a chaotic dynamic system.
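
One common construction of singular value decomposition entropy can be sketched as follows; the embedding dimension and test series are arbitrary choices, and this is a generic sketch rather than the exact multi-scale DCCA-based variant proposed in the paper.

```python
import numpy as np

def svd_entropy(series, m=10):
    """SVD entropy of a series: embed it in an m-column trajectory
    matrix, normalise the singular values to a distribution, and return
    the Shannon entropy of that distribution."""
    n = len(series) - m + 1
    traj = np.column_stack([series[i:i + n] for i in range(m)])
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
noise_entropy = svd_entropy(rng.standard_normal(500))  # unstructured
trend_entropy = svd_entropy(np.linspace(0.0, 1.0, 500))  # rank ~2
```

Unstructured noise spreads weight over all singular values (high entropy), while a low-rank structured series concentrates it (low entropy), which is what makes the quantity a candidate market-state indicator.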

  6. Average of delta: a new quality control tool for clinical laboratories.

    PubMed

    Jones, Graham R D

    2016-01-01

Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternative approach, average of delta, which combines these concepts, using the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The models assessed the expected scatter of the average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision, within- and between-subject biological variation, and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings, and average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
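
A minimal simulation of the average-of-delta idea, with invented sodium-like numbers and an artificially introduced assay bias; the sample sizes and variances are illustrative, not the paper's model settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired results (mmol/L): each patient has a previous and
# a current result; between-subject spread is wide relative to the
# measurement noise, which is the setting favouring average of delta.
true_vals = rng.normal(140.0, 3.0, size=200)
prev = true_vals + rng.normal(0.0, 2.0, size=200)
curr = true_vals + rng.normal(0.0, 2.0, size=200)
curr[100:] += 4.0            # assay bias appears from this sample on

deltas = curr - prev         # delta check value per patient

def average_of_delta(deltas, n=20):
    """Mean of the most recent n deltas; it sits near zero for a stable
    assay and drifts toward the added bias when one appears, regardless
    of the between-subject spread."""
    return float(np.mean(deltas[-n:]))

aod_before = float(np.mean(deltas[80:100]))   # window before the bias
aod_after = average_of_delta(deltas, n=20)    # window after the bias
```

The between-subject variation cancels in each delta, so the rolling mean of deltas tracks the added bias more directly than an average of raw results would.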

  7. Relating illness complexity to reimbursement in CKD patients.

    PubMed

    Bessette, Russell W; Carter, Randy L

    2011-01-01

Despite significant investments of federal and state dollars to transition patient medical records to an all-electronic system, a chasm still exists between health care quality and payment for it. A major reason for this gap is the difficulty in evaluating health care outcomes based on claims data. Since both payers and patients may not appreciate how illness complexity impacts treatment outcomes, it is difficult to determine fair provider compensation. Chronic kidney disease (CKD) typifies these problems and is often associated with comorbidities that impact cost, health, and work productivity. Thus, the objective of this study was to evaluate an illness complexity score (ICS) based on a linear regression of select blood values that might assist in predicting average monthly reimbursements in CKD patients. A second objective was to compare the results of this ICS prediction to results obtained by prediction of average monthly reimbursement using CKD stage. A third objective was to analyze the relationship between the change in ICS, estimated glomerular filtration rate (eGFR), and CKD stage over time to average monthly reimbursement. We calculated parsimonious values for select variables associated with CKD patients and compared the ICS to ordinal staging of renal disease. Data from 177 de-identified patients were collected over 13 months, including 15 blood chemistry observations along with complete claims data for all medical expenses.
The results of our study demonstrated that the association between average ICS values throughout the entire study period predicted average monthly reimbursements with an R² value of 0.41. Comparing that value to the association between the average CKD stage and average monthly reimbursement demonstrated an R² value of 0.08. Thus, ICS offers five times greater sensitivity over CKD staging as a measure of illness complexity. Sorting the patient population by changes in CKD stage or ICS over the entire study period revealed significant differences between the two scoring methods. Groups scored by ICS demonstrated greater sensitivity by capturing dysfunction in other organ systems and had a better association with reimbursement than groups scored by CKD staging.
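
The modelling approach, z-scores of average lab values entering a linear regression for average monthly reimbursement summarized by R², can be sketched as follows; all data are synthetic and only three of the fifteen predictors are included.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 177  # patient count as in the study; the values below are synthetic

# Hypothetical average monthly lab values for three predictors.
eGFR = rng.normal(45.0, 15.0, n)
phos = rng.normal(4.0, 0.8, n)
pth = rng.normal(150.0, 60.0, n)

def zscores(v):
    """Standardise a predictor, as the ICS does for blood values."""
    return (v - v.mean()) / v.std()

X = np.column_stack([np.ones(n), zscores(eGFR), zscores(phos), zscores(pth)])

# Synthetic "average monthly reimbursement" partly driven by the labs.
y = (2000.0 - 300.0 * zscores(eGFR) + 150.0 * zscores(phos)
     + rng.normal(0.0, 400.0, n))

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
resid = y - X @ beta
r2 = 1.0 - resid.var() / y.var()               # fraction of variance explained
```

The fitted R² plays the role of the 0.41 reported for the ICS: the share of reimbursement variance the standardised lab averages account for.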

  8. Extreme values in the Chinese and American stock markets based on detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Cao, Guangxi; Zhang, Minjia

    2015-10-01

This paper focuses on the comparative analysis of extreme values in the Chinese and American stock markets based on the detrended fluctuation analysis (DFA) algorithm, using the daily data of the Shanghai composite index and the Dow Jones Industrial Average. The empirical results indicate that the multifractal detrended fluctuation analysis (MF-DFA) method is more objective than the traditional percentile method. The range of extreme values of the Dow Jones Industrial Average is smaller than that of the Shanghai composite index, and the extreme values of the Dow Jones Industrial Average are more clustered in time. The extreme values of both the Chinese and American stock markets are concentrated in 2008, consistent with the financial crisis of that year. Moreover, we investigate whether extreme events affect the cross-correlation between the Chinese and American stock markets using the multifractal detrended cross-correlation analysis algorithm. The results show that extreme events have no discernible effect on the cross-correlation between the Chinese and American stock markets.
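
A plain (monofractal) DFA fluctuation function, the building block of the MF-DFA method named above, can be sketched as follows; the input is simulated white noise, for which the scaling exponent should come out near 0.5.

```python
import numpy as np

def dfa_fluctuation(x, scale):
    """Detrended fluctuation F(s): integrate the series, split it into
    windows of length `scale`, remove a linear fit in each window, and
    return the RMS of the residuals."""
    y = np.cumsum(x - np.mean(x))            # profile (integrated series)
    n_win = len(y) // scale
    resid2 = []
    for k in range(n_win):
        seg = y[k * scale:(k + 1) * scale]
        t = np.arange(scale)
        coef = np.polyfit(t, seg, 1)         # local linear trend
        resid2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
    return float(np.sqrt(np.mean(resid2)))

rng = np.random.default_rng(3)
wn = rng.standard_normal(4096)
scales = [16, 32, 64, 128]
f = [dfa_fluctuation(wn, s) for s in scales]
# F(s) ~ s^alpha; the log-log slope estimates the scaling exponent.
alpha = float(np.polyfit(np.log(scales), np.log(f), 1)[0])
```

MF-DFA generalises this by raising the windowed residual variances to a range of powers q before averaging, giving a spectrum of exponents instead of the single alpha.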

  9. Deciphering Seasonal Variations of Diet and Water in Modern White-Tailed Deer by In Situ Analysis of Osteons in Cortical Bone

    NASA Astrophysics Data System (ADS)

    Larson, T. E.; Longstaffe, F. J.

    2004-12-01

In situ stable carbon and oxygen isotope compositions of biogenic apatite were obtained from longitudinally-cut sections of cortical bone from femurs of modern domesticated sheep and free-range White-Tailed deer, using an IR-laser and a GC-continuous flow interface. Ablation pits averaged 200x50 microns, making it possible to analyze individual osteons. Since cortical bone is remodelled along osteons throughout a mammal's lifetime, isotopic data at this resolution provide information about seasonal variations in diet and drinking water. The O-isotope results were calibrated using laser analyses of NBS-18 and NBS-19, which produced a value of 26.39±0.46 permil (n=27) for WS-1 calcite (accepted value, 26.25 permil). C-isotope results were calibrated using a CO2 reference gas, producing a value of 0.76±0.40 permil (n=27) for WS-1, also in excellent agreement with its accepted value of 0.74 permil. Average O- and C-isotope values for a local domestic sheep (southwestern Ontario, Canada) were 12.20±0.58 and -15.70±0.35 permil (n=27), respectively. No isotopic trend occurred along or across individual osteons. This pattern is consistent with the sheep's relatively unchanging food and water sources. The free-range White-Tailed deer came from Pinery Provincial Park (PPP), southwestern Ontario. Its O- and C-isotope compositions varied systematically across individual osteons and were negatively correlated (R²=0.56). O-isotope values ranged from 13.4 to 15.5 permil; the highest values correlated with summer and the lowest values, with winter. The O-isotope compositions of the main water source (Old Ausable River Channel) varied similarly during the deer's lifetime: winter average, -10.7±0.5 permil; summer average, -8.6±0.4 permil. The C-isotope results for the deer osteons varied from -19.7 to -15.9 permil. This variation can be explained by changes in food sources.
Summer diets of deer in PPP consist mainly of leafy fractions of C3 vegetation, especially sumac, cedar, oak and pine (average leaf C-isotope value, -28.4±0.8 permil). During winter, when leafy material is unavailable and deep snow inhibits access to vegetation in general, deer strip bark from vegetation (average bark C-isotope value, -25.6±0.8 permil). Certain C4 grasses (little bluestem and sandreed grass, average C-isotope value, -12.7±0.2 permil), which are abundant in unforested dune areas of PPP, commonly stand above the snow cover, and hence are also available for consumption. Deer may also range more widely in the winter, feeding on corn stalks and husks that escaped both harvest and snow cover (average C-isotope value, -11.3±0.2 permil).

  10. How we remember the emotional intensity of past musical experiences

    PubMed Central

    Schäfer, Thomas; Zimmermann, Doreen; Sedlmeier, Peter

    2014-01-01

    Listening to music usually elicits emotions that can vary considerably in their intensity over the course of listening. Yet, after listening to a piece of music, people are easily able to evaluate the music's overall emotional intensity. There are two different hypotheses about how affective experiences are temporally processed and integrated: (1) all moments' intensities are integrated, resulting in an averaged value; (2) the overall evaluation is built from specific single moments, such as the moments of highest emotional intensity (peaks), the end, or a combination of these. Here we investigated what listeners do when building an overall evaluation of a musical experience. Participants listened to unknown songs and provided moment-to-moment ratings of experienced intensity of emotions. Subsequently, they evaluated the overall emotional intensity of each song. Results indicate that participants' evaluations were predominantly influenced by their average impression but that, in addition, the peaks and end emotional intensities contributed substantially. These results indicate that both types of processes play a role: All moments are integrated into an averaged value but single moments might be assigned a higher value in the calculation of this average. PMID:25177311
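
The candidate integration rules can be expressed directly; the moment-to-moment ratings and the blending weights below are illustrative only, not the study's estimates.

```python
import numpy as np

# Hypothetical moment-to-moment emotional-intensity ratings (0-10),
# sampled over the course of one song.
ratings = np.array([2.0, 3.0, 5.0, 9.0, 6.0, 4.0, 4.0, 7.0])

# The three single-moment summaries the study compares.
features = {
    "average": float(ratings.mean()),  # hypothesis 1: integrate everything
    "peak": float(ratings.max()),      # highest-intensity moment
    "end": float(ratings[-1]),         # final moment
}

# The study's finding, sketched: the overall evaluation is dominated by
# the average, with smaller peak and end contributions. Weights invented.
w = {"average": 0.6, "peak": 0.25, "end": 0.15}
predicted_overall = sum(w[k] * features[k] for k in w)
```

Because the peak and end exceed the average here, any positive weight on them pushes the predicted overall evaluation above the plain mean, mirroring the reported pattern.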

  11. On the impacts of computing daily temperatures as the average of the daily minimum and maximum temperatures

    NASA Astrophysics Data System (ADS)

    Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan

    2017-12-01

Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that the calculation of daily temperature based on the average of minimum and maximum daily readings leads to an overestimation of the daily values of 10+% when focusing on extremes and values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (~5-10% fewer trends detected in comparison with the reference data).
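
The bias in question is simply the gap between the min/max midpoint and the true hourly mean; a toy day with a brief afternoon spike (all values invented) shows how the midpoint overshoots.

```python
import numpy as np

# Hypothetical hourly temperatures (deg C) for one day: a sinusoidal
# diurnal cycle plus a short mid-afternoon spike.
hours = np.arange(24)
temps = 20.0 + 8.0 * np.sin((hours - 9.0) * np.pi / 12.0)
temps[15] += 4.0   # brief spike raises the max but barely moves the mean

t_minmax = 0.5 * (temps.min() + temps.max())   # conventional daily value
t_hourly = temps.mean()                        # reference daily value
bias = t_minmax - t_hourly                     # overestimation from min/max
```

A one-hour excursion shifts the hourly mean by only 1/24 of its size, but shifts the min/max midpoint by half of it, which is why the bias concentrates in the extremes.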

  12. Trading in markets with noisy information: an evolutionary analysis

    NASA Astrophysics Data System (ADS)

    Bloembergen, Daan; Hennes, Daniel; McBurney, Peter; Tuyls, Karl

    2015-07-01

    We analyse the value of information in a stock market where information can be noisy and costly, using techniques from empirical game theory. Previous work has shown that the value of information follows a J-curve, where averagely informed traders perform below market average, and only insiders prevail. Here we show that both noise and cost can change this picture, in several cases leading to opposite results where insiders perform below market average, and averagely informed traders prevail. Moreover, we investigate the effect of random explorative actions on the market dynamics, showing how these lead to a mix of traders being sustained in equilibrium. These results provide insight into the complexity of real marketplaces, and show under which conditions a broad mix of different trading strategies might be sustainable.

  13. Incremental Value of Repeated Risk Factor Measurements for Cardiovascular Disease Prediction in Middle-Aged Korean Adults: Results From the NHIS-HEALS (National Health Insurance System-National Health Screening Cohort).

    PubMed

    Cho, In-Jeong; Sung, Ji Min; Chang, Hyuk-Jae; Chung, Namsik; Kim, Hyeon Chang

    2017-11-01

    Increasing evidence suggests that repeatedly measured cardiovascular disease (CVD) risk factors may have an additive predictive value compared with single measured levels. Thus, we evaluated the incremental predictive value of incorporating periodic health screening data for CVD prediction in a large nationwide cohort with periodic health screening tests. A total of 467 708 persons aged 40 to 79 years and free from CVD were randomly divided into development (70%) and validation subcohorts (30%). We developed 3 different CVD prediction models: a single measure model using single time point screening data; a longitudinal average model using average risk factor values from periodic screening data; and a longitudinal summary model using average values and the variability of risk factors. The development subcohort included 327 396 persons who had 3.2 health screenings on average and 25 765 cases of CVD over 12 years. The C statistics (95% confidence interval [CI]) for the single measure, longitudinal average, and longitudinal summary models were 0.690 (95% CI, 0.682-0.698), 0.695 (95% CI, 0.687-0.703), and 0.752 (95% CI, 0.744-0.760) in men and 0.732 (95% CI, 0.722-0.742), 0.735 (95% CI, 0.725-0.745), and 0.790 (95% CI, 0.780-0.800) in women, respectively. The net reclassification index from the single measure model to the longitudinal average model was 1.78% in men and 1.33% in women, and the index from the longitudinal average model to the longitudinal summary model was 32.71% in men and 34.98% in women. Using averages of repeatedly measured risk factor values modestly improves CVD predictability compared with single measurement values. Incorporating the average and variability information of repeated measurements can lead to great improvements in disease prediction. URL: https://www.clinicaltrials.gov. Unique identifier: NCT02931500. © 2017 American Heart Association, Inc.
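
The two kinds of longitudinal feature contrasted above, the average and the variability of repeated measurements, can be sketched as follows; the blood-pressure readings are invented.

```python
import numpy as np

# Hypothetical systolic blood pressure readings (mmHg) from periodic
# screenings for three people.
screenings = {
    "A": [118.0, 122.0, 120.0],
    "B": [118.0, 145.0, 97.0],    # same mean as A, far more variable
    "C": [150.0, 155.0, 152.0],
}

def longitudinal_summary(readings):
    """Average plus variability (sample SD) of repeated measurements:
    the two kinds of feature a longitudinal summary model combines."""
    v = np.asarray(readings)
    return {"mean": float(v.mean()), "sd": float(v.std(ddof=1))}

features = {pid: longitudinal_summary(r) for pid, r in screenings.items()}
```

Persons A and B are indistinguishable to a single-measure or longitudinal-average model, but the SD feature separates them, which is the information the longitudinal summary model exploits.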

  14. Modeling the oxidative capacity of the atmosphere of the south coast air basin of California. 1. Ozone formation metrics.

    PubMed

    Griffin, Robert J; Revelle, Meghan K; Dabdub, Donald

    2004-02-01

    Metrics associated with ozone (O3) formation are investigated using the California Institute of Technology (CIT) three-dimensional air-quality model. Variables investigated include the O3 production rate (P(O3)), O3 production efficiency (OPE), and total reactivity (the sum of the reactivity of carbon monoxide (CO) and all organic gases that react with the hydroxyl radical). Calculations are spatially and temporally resolved; surface-level and vertically averaged results are shown for September 9, 1993 for three Southern California locations: Central Los Angeles, Azusa, and Riverside. Predictions indicate increasing surface-level O3 concentrations with distance downwind, in line with observations. Surface-level and vertically averaged P(O3) values peak during midday and are highest downwind; surface P(O3) values are greater than vertically averaged values. Surface OPEs generally are highest downwind and peak during midday in downwind locations. In contrast, peaks occur in early morning and late afternoon in the vertically averaged case. Vertically averaged OPEs tend to be greater than those for the surface. Total reactivities are highest in upwind surface locations and peak during rush hours; vertically averaged reactivities are smaller and tend to be more uniform temporally and spatially. Total reactivity has large contributions from CO, alkanes, alkenes, aldehydes, unsubstituted monoaromatics, and secondary organics. Calculations using estimated emissions for 2010 result in decreases in P(O3) values and reactivities but increases in OPEs.

  15. The behavior of antibiotic resistance genes and arsenic influenced by biochar during different manure composting.

    PubMed

    Cui, Erping; Wu, Ying; Jiao, Yanan; Zuo, Yiru; Rensing, Christopher; Chen, Hong

    2017-06-01

The effect of two different biochar types, rice straw biochar (RSB) and mushroom biochar (MB), on chicken manure composting was previously examined by monitoring the fate of antibiotic resistance genes (ARGs) and arsenic. The behavior of ARGs and arsenic in other kinds of manure composting with the same biochar types had not been examined. In this study, we added either RSB or MB to pig and duck manure composts to study the behavior of ARGs (tet genes, sul genes, and chloramphenicol resistance genes) and arsenic under the same experimental conditions. The results showed that the average removal values of the selected ARGs were 2.56 and 2.09 log units in duck and pig manure compost, respectively, without the addition of biochar. The effect of biochar addition on the average removal value of ARGs depended on the type of biochar and manure: in pig manure compost, MB addition increased the average removal value of ARGs while RSB addition decreased it, and both biochar additions had a negative influence on the average removal value of ARGs in duck manure compost. Analytical results also demonstrated that MB addition reduced total arsenic and the percentage of bioavailable arsenic more than RSB did.
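
Removal in "log units" is the difference of base-10 logarithms of gene abundance before and after composting; a sketch with invented abundances follows.

```python
import math

# Hypothetical ARG abundances (gene copies per gram of dry compost)
# before and after composting; the gene names and numbers are invented.
initial = {"tetW": 1.0e9, "sul1": 5.0e8}
final = {"tetW": 4.0e6, "sul1": 2.0e6}

def log_removal(before, after):
    """Removal in log10 units, the metric quoted in the abstract."""
    return math.log10(before) - math.log10(after)

removals = {g: log_removal(initial[g], final[g]) for g in initial}
average_removal = sum(removals.values()) / len(removals)
```

A removal of about 2.4 log units, as here, means abundances fell by a factor of roughly 250, which is the scale on which the 2.56 and 2.09 figures above should be read.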

  16. Spatial averaging of fields from half-wave dipole antennas and corresponding SAR calculations in the NORMAN human voxel model between 65 MHz and 2 GHz.

    PubMed

    Findlay, R P; Dimbylow, P J

    2009-04-21

If an antenna is located close to a person, the electric and magnetic fields produced by the antenna will vary in the region occupied by the human body. To obtain a mean value of the field for comparison with reference levels, the Institute of Electrical and Electronic Engineers (IEEE) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) recommend spatially averaging the squares of the field strength over the height of the body. This study attempts to assess the validity and accuracy of spatial averaging when used for half-wave dipoles at frequencies between 65 MHz and 2 GHz and distances of lambda/2, lambda/4 and lambda/8 from the body. The differences between mean electric field values calculated using ten field measurements and the true averaged value were approximately 15% in the 600 MHz to 2 GHz range. The results presented suggest that the use of modern survey equipment, which takes hundreds rather than tens of measurements, is advisable to arrive at a sufficiently accurate mean field value. Whole-body averaged and peak localized SAR values, normalized to calculated spatially averaged fields, were calculated for the NORMAN voxel phantom. It was found that the reference levels were conservative for all whole-body SAR values, but not for localized SAR, particularly in the 1-2 GHz region when the dipole was positioned very close to the body. However, if the maximum field is used for normalization of calculated SAR as opposed to the lower spatially averaged value, the reference levels provide a conservative estimate of the localized SAR basic restriction for all frequencies studied.

  17. Average and recommended half-life values for two neutrino double beta decay: Upgrade-2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barabash, A. S.

    2013-12-30

All existing positive results on two neutrino double beta decay in different nuclei were analyzed. Using the procedure recommended by the Particle Data Group, weighted average values for the half-lives of ⁴⁸Ca, ⁷⁶Ge, ⁸²Se, ⁹⁶Zr, ¹⁰⁰Mo, ¹⁰⁰Mo-¹⁰⁰Ru (0₁⁺), ¹¹⁶Cd, ¹³⁰Te, ¹³⁶Xe, ¹⁵⁰Nd, ¹⁵⁰Nd-¹⁵⁰Sm (0₁⁺) and ²³⁸U were obtained. Existing geochemical data were analyzed and recommended values for the half-lives of ¹²⁸Te and ¹³⁰Ba are proposed. I recommend the use of these results as the most currently reliable values for the half-lives.
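
The Particle Data Group procedure referred to is, in outline, an inverse-variance weighted average with an error scale factor applied when the inputs are mutually inconsistent; a sketch with invented half-life measurements follows.

```python
import math

# Hypothetical half-life measurements (units of 1e19 yr) with 1-sigma
# errors, standing in for several experiments on the same isotope.
values = [2.30, 2.50, 2.35]
errors = [0.12, 0.20, 0.10]

# Inverse-variance weighted mean and its error.
weights = [1.0 / e ** 2 for e in errors]
mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
err = 1.0 / math.sqrt(sum(weights))

# PDG prescription (in outline): inflate the error by S = sqrt(chi2/(N-1))
# when chi2 indicates the measurements are mutually inconsistent.
chi2 = sum(((v - mean) / e) ** 2 for v, e in zip(values, errors))
scale = max(1.0, math.sqrt(chi2 / (len(values) - 1)))
err_scaled = err * scale
```

With these consistent inputs the scale factor stays at 1 and the quoted error is just the naive weighted-average error; discrepant inputs would inflate it.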

  18. Body density differences between negro and caucasian professional football players

    PubMed Central

    Adams, J.; Bagnall, K. M.; McFadden, K. D.; Mottola, M.

    1981-01-01

Other workers have shown that the bone density for the average negro is greater than for the average caucasian. This would lead to greater values of body density for the average negro but it is confused because the average negro has a different body form (and consequently different proportions of body components) compared with the average caucasian. This study of body density of a group of professional Canadian football players investigates whether or not to separate negroes from caucasians when considering the formation of regression equations for prediction of body density. Accordingly, a group of 7 negroes and 7 caucasians were matched somatotypically and a comparison was made of their body density values obtained using a hydrostatic weighing technique and a closed-circuit helium dilution technique for measuring lung volumes. The results show that if somatotype is taken into account then no significant difference in body density values is found between negro and caucasian professional football players. The players do not have to be placed in separate groups but it remains to be seen whether or not these results apply to general members of the population. PMID:7317724

  19. The Effect of Fuel Quality on Carbon Dioxide and Nitrogen Oxide Emissions, While Burning Biomass and RDF

    NASA Astrophysics Data System (ADS)

    Kalnacs, J.; Bendere, R.; Murasovs, A.; Arina, D.; Antipovs, A.; Kalnacs, A.; Sprince, L.

    2018-02-01

The article analyses the variations in the carbon dioxide emission factor depending on parameters characterising biomass and RDF (refuse-derived fuel). The influence of moisture, ash content, heat of combustion, and carbon and nitrogen content on the emission factors has been reviewed, and their average values determined. Options for improving the fuel to reduce emissions of carbon dioxide and nitrogen oxide have been analysed. Systematic measurements of biomass parameters have been performed to determine their average values, the seasonal limits of variation in these parameters, and their mutual relations. Typical average values of RDF parameters and their limits of variation have been determined.

  20. 75 FR 14569 - Polyethylene Retail Carrier Bags from Taiwan: Final Determination of Sales at Less Than Fair Value

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-26

    ... excludes (1) polyethylene bags that are not printed with logos or store names and that are closeable with... comparison methodology to TCI's targeted sales and the average-to-average comparison methodology to TCI's non... average-to-average comparison method does not account for such price differences and results in the...

  1. Analyzing Patients' Values by Applying Cluster Analysis and LRFM Model in a Pediatric Dental Clinic in Taiwan

    PubMed Central

    Lin, Shih-Yen; Liu, Chih-Wei

    2014-01-01

    This study combines cluster analysis and LRFM (length, recency, frequency, and monetary) model in a pediatric dental clinic in Taiwan to analyze patients' values. A two-stage approach by self-organizing maps and K-means method is applied to segment 1,462 patients into twelve clusters. The average values of L, R, and F excluding monetary covered by national health insurance program are computed for each cluster. In addition, customer value matrix is used to analyze customer values of twelve clusters in terms of frequency and monetary. Customer relationship matrix considering length and recency is also applied to classify different types of customers from these twelve clusters. The results show that three clusters can be classified into loyal patients with L, R, and F values greater than the respective average L, R, and F values, while three clusters can be viewed as lost patients without any variable above the average values of L, R, and F. When different types of patients are identified, marketing strategies can be designed to meet different patients' needs. PMID:25045741
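
The classification rule described (loyal above the overall L, R and F averages, lost below all three) can be sketched directly; the cluster averages below are invented, and R is assumed coded so that larger is better, as the paper's "greater than the respective average" rule implies.

```python
# Hypothetical cluster averages of L (length), R (recency) and F
# (frequency), all coded so that larger values are better.
clusters = {
    "c1": {"L": 900.0, "R": 0.8, "F": 12.0},
    "c2": {"L": 200.0, "R": 0.2, "F": 2.0},
    "c3": {"L": 700.0, "R": 0.3, "F": 9.0},
}
overall = {"L": 500.0, "R": 0.5, "F": 6.0}

def classify(cluster, overall):
    """Loyal: above average on all of L, R and F. Lost: below average
    on all three. Anything in between is a mixed segment."""
    above = [cluster[k] > overall[k] for k in ("L", "R", "F")]
    if all(above):
        return "loyal"
    if not any(above):
        return "lost"
    return "mixed"

labels = {name: classify(c, overall) for name, c in clusters.items()}
```

The mixed segments are where the paper's customer value and customer relationship matrices add detail, since a single above/below rule cannot separate them.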

  2. Analyzing patients' values by applying cluster analysis and LRFM model in a pediatric dental clinic in Taiwan.

    PubMed

    Wu, Hsin-Hung; Lin, Shih-Yen; Liu, Chih-Wei

    2014-01-01

    This study combines cluster analysis and LRFM (length, recency, frequency, and monetary) model in a pediatric dental clinic in Taiwan to analyze patients' values. A two-stage approach by self-organizing maps and K-means method is applied to segment 1,462 patients into twelve clusters. The average values of L, R, and F excluding monetary covered by national health insurance program are computed for each cluster. In addition, customer value matrix is used to analyze customer values of twelve clusters in terms of frequency and monetary. Customer relationship matrix considering length and recency is also applied to classify different types of customers from these twelve clusters. The results show that three clusters can be classified into loyal patients with L, R, and F values greater than the respective average L, R, and F values, while three clusters can be viewed as lost patients without any variable above the average values of L, R, and F. When different types of patients are identified, marketing strategies can be designed to meet different patients' needs.
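The cluster-labelling rule described in the abstract (a cluster is "loyal" if its average L, R and F all exceed the overall averages, "lost" if none do) can be sketched as follows. This is an illustrative toy, not the study's SOM + K-means pipeline, and the (L, R, F) records are invented:

```python
def mean(xs):
    return sum(xs) / len(xs)

def classify_cluster(cluster, overall):
    """Label a cluster by comparing its average L, R, F to the overall averages."""
    avg = [mean([p[i] for p in cluster]) for i in range(3)]
    above = [a > o for a, o in zip(avg, overall)]
    if all(above):
        return "loyal"
    if not any(above):
        return "lost"
    return "mixed"

# toy (L, R, F) records for two hand-made clusters
loyal_like = [(30, 5, 12), (28, 6, 10), (33, 4, 14)]
lost_like = [(4, 1, 1), (6, 2, 2), (5, 1, 1)]
everyone = loyal_like + lost_like
overall = [mean([p[i] for p in everyone]) for i in range(3)]

print(classify_cluster(loyal_like, overall))  # loyal
print(classify_cluster(lost_like, overall))   # lost
```

In the study itself, monetary is handled separately (via the customer value matrix) because it is covered by the national health insurance program; the sketch above therefore only uses L, R and F.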

  3. Upscaled soil-water retention using van Genuchten's function

    USGS Publications Warehouse

    Green, T.R.; Constantz, J.E.; Freyberg, D.L.

    1996-01-01

    Soils are often layered at scales smaller than the block size used in numerical and conceptual models of variably saturated flow. Consequently, the small-scale variability in water content within each block must be homogenized (upscaled). Laboratory results have shown that a linear volume average (LVA) of water content at a uniform suction is a good approximation to measured water contents in heterogeneous cores. Here, we upscale water contents using van Genuchten's function for both the local and upscaled soil-water-retention characteristics. The van Genuchten (vG) function compares favorably with LVA results, laboratory experiments under hydrostatic conditions in 3-cm cores, and numerical simulations of large-scale gravity drainage. Our method yields upscaled vG parameter values by fitting the vG curve to the LVA of water contents at various suction values. In practice, it is more efficient to compute direct averages of the local vG parameter values. Nonlinear power averages quantify a feasible range of values for each upscaled vG shape parameter; upscaled values of N are consistently less than the harmonic means, reflecting broad pore-size distributions of the upscaled soils. The vG function is useful for modeling soil-water retention at large scales, and these results provide guidance for its application.
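The two upscaling routes compared in the abstract can be sketched with the van Genuchten retention function, a linear volume average (LVA) of water contents across layers, and a nonlinear power average of a local parameter. This is a minimal illustration with textbook-style parameter values, not the authors' code:

```python
def vg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten water content at suction h (h >= 0), with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def lva(h, layers, fractions):
    """Linear volume average of water content over a layered soil at suction h."""
    return sum(f * vg_theta(h, *p) for f, p in zip(fractions, layers))

def power_mean(values, p):
    """Nonlinear power average; p -> 0 gives the geometric mean, p = -1 the harmonic mean."""
    if p == 0:
        prod = 1.0
        for v in values:
            prod *= v
        return prod ** (1.0 / len(values))
    return (sum(v ** p for v in values) / len(values)) ** (1.0 / p)

# two hypothetical layers: (theta_r, theta_s, alpha [1/cm], n)
sand = (0.045, 0.43, 0.145, 2.68)
loam = (0.078, 0.43, 0.036, 1.56)
theta_up = lva(100.0, [sand, loam], [0.5, 0.5])  # upscaled theta at h = 100 cm
n_harmonic = power_mean([sand[3], loam[3]], -1)  # one candidate upscaled n
```

The abstract's method fits the vG curve to LVA water contents over a range of suctions; the direct power averages of the local parameters (as in `n_harmonic` above) are the cheaper alternative it also evaluates.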

  4. Calibration of a texture-based model of a ground-water flow system, western San Joaquin Valley, California

    USGS Publications Warehouse

    Phillips, Steven P.; Belitz, Kenneth

    1991-01-01

    The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. 
The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
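The averaging methods named in the calibration procedure can be sketched as equivalent conductivities of a coarse/fine mixture. The values for Kcoarse, Kfine and the coarse fraction below are invented for illustration:

```python
def k_arithmetic(f_coarse, k_coarse, k_fine):
    """Flow parallel to layering (horizontal direction): arithmetic average."""
    return f_coarse * k_coarse + (1 - f_coarse) * k_fine

def k_geometric(f_coarse, k_coarse, k_fine):
    """Fraction-weighted geometric average."""
    return k_coarse ** f_coarse * k_fine ** (1 - f_coarse)

def k_harmonic(f_coarse, k_coarse, k_fine):
    """Flow perpendicular to layering (vertical direction): harmonic average."""
    return 1.0 / (f_coarse / k_coarse + (1 - f_coarse) / k_fine)

kc, kf, f = 10.0, 0.01, 0.4   # m/day, hypothetical end members; 40% coarse
kh = k_arithmetic(f, kc, kf)  # dominated by the coarse end member
kv = k_harmonic(f, kc, kf)    # dominated by the fine end member
```

The ordering harmonic < geometric < arithmetic always holds for unequal end members, which is why arithmetic averaging in the horizontal and geometric or harmonic averaging in the vertical reproduce the anisotropy of layered sediments.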

  5. Averages of B-Hadron, C-Hadron, and tau-lepton properties as of early 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amhis, Y.; et al.

    2012-07-01

    This article reports world averages of measurements of b-hadron, c-hadron, and tau-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through the end of 2011. In some cases results available in the early part of 2012 are included. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays and CKM matrix elements.

  6. Deviation Value for Conventional X-ray in Hospitals in South Sulawesi Province from 2014 to 2016

    NASA Astrophysics Data System (ADS)

    Bachtiar, Ilham; Abdullah, Bualkar; Tahir, Dahlan

    2018-03-01

    This paper describes the conventional X-ray machine parameters tested in the region of South Sulawesi from 2014 to 2016. The objective of this research is to determine the deviation of each parameter of the conventional X-ray machines. The testing parameters were analyzed using quantitative methods with a participatory observational approach. Data collection was performed by testing the output of conventional X-ray machines using a non-invasive X-ray multimeter. The test parameters include tube voltage (kV) accuracy, radiation output linearity, reproducibility, and radiation beam quality (half-value layer, HVL). The results of the analysis show that the four conventional X-ray test parameters have varying deviation spans: the tube voltage (kV) accuracy has an average value of 4.12%, the radiation output linearity an average of 4.47%, the reproducibility an average of 0.62%, and the radiation beam quality (HVL) an average of 3.00 mm.

  7. A diameter distribution approach to estimating average stand dominant height in Appalachian hardwoods

    Treesearch

    John R. Brooks

    2007-01-01

    A technique for estimating stand average dominant height based solely on field inventory data is investigated. Using only 45.0919 percent of the largest trees per acre in the diameter distribution resulted in estimates of average dominant height that were within 4.3 feet of the actual value, when averaged over stands of very different structure and history. Cubic foot...

  8. [Study on the content and carbon isotopic composition of water dissolved inorganic carbon from rivers around Xi'an City].

    PubMed

    Guo, Wei; Li, Xiang-Zhong; Liu, Wei-Guo

    2013-04-01

    In this study, the content and isotopic compositions of water dissolved inorganic carbon (DIC) from four typical rivers (Chanhe, Bahe, Laohe and Heihe) around Xi'an City were studied to trace the possible sources of DIC. The results of this study showed that the content of DIC in the four rivers varied from 0.34 to 5.66 mmol·L⁻¹ with an average value of 1.23 mmol·L⁻¹. In general, the content of DIC increased from the headstream to the river mouth. The δ¹³C(DIC) of the four rivers ranged from -13.3‰ to -7.2‰, with an average value of -10.1‰. The δ¹³C(DIC) values of river water were all negative (average value of -12.6‰) at the headstream of the four rivers, but the δ¹³C(DIC) values of downstream water were more positive (with an average value of -9.4‰). In addition, δ¹³C(DIC) of river water showed relatively negative values (the average value of δ¹³C(DIC) was -10.5‰) near the estuary of the rivers. The variation of the DIC content and its carbon isotope suggested that the DIC sources of the rivers varied from the headstream to the river mouth. The negative δ¹³C(DIC) value indicated that the DIC may originate from soil CO2 at the headstream of the rivers. On the other hand, the δ¹³C(DIC) values of river water at the middle and lower reaches of the rivers were more positive, showing that soil CO2 produced by respiration of C4 plants (like corn) and soil carbonates with positive δ¹³C values may be imported into river water. Meanwhile, the input of pollutants with low δ¹³C(DIC) values may result in a decrease of δ¹³C(DIC) values in the rivers. The study indicated that the DIC content and carbon isotope may be used to trace the sources of DIC in rivers around Xi'an City. Our study may provide some basic information for tracing the sources of DIC of rivers in the small watershed area in the Loess Plateau of China.

  9. Statistical-techniques-based computer-aided diagnosis (CAD) using texture feature analysis: application in computed tomography (CT) imaging to fatty liver disease

    NASA Astrophysics Data System (ADS)

    Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae

    2012-09-01

    This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 × 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of the variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver ( p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest in the average gray level, relatively high in the skewness and the entropy, and relatively low in the uniformity, the relative smoothness and the average contrast. The recognition rate for a fatty liver had the same scale as that for the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for the automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
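The six parameters listed are standard statistical texture descriptors computed from the region's gray-level histogram. The following is a minimal sketch for an 8-bit region of analysis, not the paper's implementation:

```python
import math

def texture_features(pixels, levels=256):
    """Histogram-based texture descriptors of a flattened gray-level region."""
    n = len(pixels)
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    p = [h / n for h in hist]                                # normalized histogram
    mean = sum(i * pi for i, pi in enumerate(p))             # average gray level
    var = sum((i - mean) ** 2 * pi for i, pi in enumerate(p))
    contrast = math.sqrt(var)                                # average contrast (std dev)
    smooth = 1 - 1 / (1 + var / (levels - 1) ** 2)           # relative smoothness
    skew = sum((i - mean) ** 3 * pi                          # normalized third moment
               for i, pi in enumerate(p)) / (levels - 1) ** 2
    uniformity = sum(pi ** 2 for pi in p)
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return mean, contrast, smooth, skew, uniformity, entropy

# a perfectly uniform 50 x 50 ROA: zero contrast, maximal uniformity
flat = texture_features([128] * 2500)
```

On real CT data, each 50 × 50 pixel window would be flattened to such a pixel list and the six descriptors fed to the discriminant analysis.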

  10. Investigation of Acoustic Structure Quantification in the Diagnosis of Thyroiditis.

    PubMed

    Park, Jisang; Hong, Hyun Sook; Kim, Chul-Hee; Lee, Eun Hye; Jeong, Sun Hye; Lee, A Leum; Lee, Heon

    2016-03-01

    The objective of this study was to evaluate the ability of acoustic structure quantification (ASQ) to diagnose thyroiditis. The echogenicity of 439 thyroid lobes, as determined using ASQ, was quantified and analyzed retrospectively. Thyroiditis was categorized into five subgroups. The results were presented in a modified chi-square histogram as the mode, average, ratio, blue mode, and blue average. We determined the cutoff values of ASQ from ROC analysis to detect and differentiate thyroiditis from a normal thyroid gland. We obtained data on the sensitivity and specificity of the cutoff values to distinguish between euthyroid patients with thyroiditis and patients with a normal thyroid gland. The mean ASQ values for patients with thyroiditis were statistically significantly greater than those for patients with a normal thyroid gland (p < 0.001). The AUCs were as follows: 0.93 for the ratio, 0.91 for the average, 0.90 for the blue average, 0.87 for the mode, and 0.87 for the blue mode. For the diagnosis of thyroiditis, the cutoff values were greater than 0.27 for the ratio, greater than 116.7 for the mean, and greater than 130.7 for the blue average. The sensitivities and specificities were as follows: 84.0% and 96.6% for the ratio, 85.3% and 83.0%, for the average, and 79.1% and 93.2% for the blue average, respectively. The ASQ parameters were successful in distinguishing patients with thyroiditis from patients with a normal thyroid gland, with likelihood ratios of 24.7 for the ratio, 5.0 for the average, and 11.6 for the blue average. With the use of the aforementioned cutoff values, the sensitivities and specificities for distinguishing between patients with thyroiditis and euthyroid patients without thyroiditis were 77.05% and 94.92% for the ratio, 85.25% and 82.20% for the average, and 77.05% and 92.37% for the blue average, respectively. ASQ can provide objective and quantitative analysis of thyroid echogenicity. 
ASQ parameters were successful in distinguishing between patients with thyroiditis and individuals without thyroiditis, with likelihood ratios of 24.7 for the ratio, 5.0 for the average, and 11.6 for the blue average.
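The likelihood ratios quoted follow from the standard definition LR+ = sensitivity / (1 − specificity); recomputing them from the reported sensitivity/specificity pairs reproduces the stated values:

```python
def positive_lr(sensitivity, specificity):
    """Positive likelihood ratio: sensitivity / (1 - specificity)."""
    return sensitivity / (1.0 - specificity)

lr_ratio = positive_lr(0.840, 0.966)    # ~24.7, as reported for the ratio
lr_average = positive_lr(0.853, 0.830)  # ~5.0, as reported for the average
lr_blue = positive_lr(0.791, 0.932)     # ~11.6, as reported for the blue average
```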

  11. Comparison of Cerebral Oximeter and Pulse Oximeter Values in the First 72 Hours in Premature, Asphyctic and Healthy Newborns

    PubMed Central

    Kaya, A; Okur, M; Sal, E; Peker, E; Köstü, M; Tuncer, O; Kirimi, E

    2014-01-01

    ABSTRACT Aim: The monitoring of oxygenation is essential for providing patient safety and optimal results. We aimed to determine brain oxygen saturation values in healthy, asphyctic and premature newborns and to compare cerebral oximeter and pulse oximeter values in the first 72 hours of life in neonatal intensive care units. Methods: This study was conducted at the neonatal intensive care unit (NICU) of Van Yüzüncü Yil University Research and Administration Hospital. Seventy-five neonatal infants were included in the study (28 asphyxia, 24 premature and 23 mature healthy infants for control group). All infants were studied within the first 72 hours of life. We used a Somanetics 5100C cerebral oximeter (INVOS cerebral/somatic oximeter, Troy, MI, USA). The oxygen saturation information was collected by a Nellcor N-560 pulse oximeter (Nellcor-Puriton Bennet Inc, Pleasanton, CA, USA). Results: In the asphyxia group, the cerebral oximeter average was 76.85 ± 14.1, the pulse oximeter average was 91.86 ± 5.9 and the heart rate average was 139.91 ± 22.3. Among the premature group, the cerebral oximeter average was 79.08 ± 9.04, the pulse oximeter average was 92.01 ± 5.3 and the heart rate average was 135.35 ± 17.03. In the control group, the cerebral oximeter average was 77.56 ± 7.6, the pulse oximeter average was 92.82 ± 3.8 and the heart rate average was 127.04 ± 19.7. Conclusion: Cerebral oximeter is a promising modality in bedside monitoring in neonatal intensive care units. It is complementary to pulse oximeter. It may be used routinely in neonatal intensive care units. PMID:25867556

  12. SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Supanich, MP

    2015-06-15

    Purpose: The planar average dose in a C-arm Cone Beam CT (CBCT) acquisition had been estimated in the past by averaging the four peripheral dose measurements in a CTDI phantom and then using the standard 2/3rds peripheral and 1/3 central CTDIw method (hereafter referred to as Dw). The accuracy of this assumption has not been investigated, and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16 cm CTDI phantoms using a 0.6 cc ionization chamber at each of the 4 peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at different known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut-out. Dw was calculated using the ionization chamber measurements and film dose values at the location of each of the dose bores. Results: The planar average doses obtained using the red and green pixel color calibration curves were each within 10% agreement of the planar average dose estimated using the Dw method applied to film dose values at the bore locations. Additionally, an average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: The method of calculating the planar average dose at the central plane of a C-arm CBCT non-360 rotation by calculating Dw from peripheral and central dose bore measurements is a reasonable approach to estimating the planar average dose. Research Grant, Siemens AG.
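The Dw estimate described in the abstract is a simple weighted sum; a minimal sketch with invented dose values:

```python
def weighted_dose(peripheral, central):
    """Dw = (2/3) * mean of the four peripheral bore doses + (1/3) * central bore dose."""
    return (2.0 / 3.0) * (sum(peripheral) / len(peripheral)) + (1.0 / 3.0) * central

# hypothetical bore measurements in mGy (four peripheral, one central)
dw = weighted_dose([12.0, 11.5, 12.4, 11.9], 10.2)
```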

  13. [Similarities and differences in absorption characteristics and composition of CDOM between Taihu Lake and Chaohu Lake].

    PubMed

    Shi, Kun; Li, Yun-mei; Wang, Qiao; Yang, Yu; Jin, Xin; Wang, Yan-fei; Zhang, Hong; Yin, Bin

    2010-05-01

    Field experiments were conducted separately in Taihu Lake and Chaohu Lake in April and June 2009. The changes in the absorption spectra characteristics of chromophoric dissolved organic matter (CDOM) are analyzed using spectral differential analysis technology. According to the spectral differential characteristics of the absorption coefficient, the absorption coefficient from 240 to 450 nm is divided into different stages, and the value of the spectral slope S is calculated in each stage. In Stage A, the S values of CDOM in Taihu Lake and Chaohu Lake are 0.0166-0.0102 nm(-1) [average (0.0132 +/- 0.0017) nm(-1)] and 0.029-0.017 nm(-1) [average (0.0214 +/- 0.0024) nm(-1)], respectively. In Stage B, the S values are 0.0187-0.0148 nm(-1) [average (0.0169 +/- 0.001) nm(-1)] and 0.0179-0.0055 nm(-1) [average (0.0148 +/- 0.002) nm(-1)]. In Stage C, the S values are 0.0208-0.0164 nm(-1) [average (0.0186 +/- 0.0009) nm(-1)] and 0.0253-0.0161 nm(-1) [average (0.0197 +/- 0.002) nm(-1)]. The results can be summarized as follows: (1) The absorption coefficient of water in Taihu Lake, and its contribution to the absorption of each component, is less than that of water in Chaohu Lake; however, the standardized absorption coefficient is larger than that in Chaohu Lake. (2) In both Taihu Lake and Chaohu Lake, the derivative spectra of the CDOM absorption coefficient reach a valley at 260 nm, then rise to a peak at 290 nm; the CDOM absorption coefficient can be divided into three stages. (3) Generally speaking, the content of CDOM in Taihu Lake is less than in Chaohu Lake. (4) The spectrum slope (S value) of CDOM is related to the composition of CDOM: when the content of humic acid in CDOM gets higher, the S value of Stage B is the most sensitive, followed by the S value of Stage C; oppositely, the S value of Stage B is the most sensitive, followed by the S value of Stage A, and the least sensitive is in Stage B.
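The spectral slope S in each stage is conventionally obtained from the exponential model a(λ) = a(λ₀)·exp(−S·(λ − λ₀)); a sketch (with synthetic, noiseless data) recovering S by a log-linear least-squares fit:

```python
import math

def fit_slope(wavelengths, absorption):
    """Least-squares slope of ln(a) versus wavelength; S is its negative."""
    n = len(wavelengths)
    xbar = sum(wavelengths) / n
    ybar = sum(math.log(a) for a in absorption) / n
    num = sum((x - xbar) * (math.log(a) - ybar)
              for x, a in zip(wavelengths, absorption))
    den = sum((x - xbar) ** 2 for x in wavelengths)
    return -num / den

# synthetic absorption coefficients over one stage, S = 0.017 nm^-1 by construction
lam = list(range(240, 301, 10))
a = [2.5 * math.exp(-0.017 * (x - 240)) for x in lam]
s = fit_slope(lam, a)
```

On measured spectra the fit would be repeated separately over each stage (A, B, C) identified from the derivative spectrum.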

  14. Study on relationship of nitric oxide, oxidation, peroxidation, lipoperoxidation with chronic chole-cystitis

    PubMed Central

    Zhou, Jun-Fu; Cai, Dong; Zhu, You-Gen; Yang, Jin-Lu; Peng, Cheng-Hong; Yu, Yang-Hai

    2000-01-01

    AIM: To study the relationship of injury induced by nitric oxide, oxidation, peroxidation and lipoperoxidation with chronic cholecystitis. METHODS: The values of plasma nitric oxide (P-NO), plasma vitamin C (P-VC), plasma vitamin E (P-VE), plasma β-carotene (P-β-CAR), plasma lipoperoxides (P-LPO), erythrocyte superoxide dismutase (E-SOD), erythrocyte catalase (E-CAT) and erythrocyte glutathione peroxidase (E-GSH-Px) activities and the erythrocyte lipoperoxides (E-LPO) level in 77 patients with chronic cholecystitis and 80 healthy control subjects were determined. Differences in the above average values between the patient group and the control group, and between preoperative and postoperative patients, were analyzed and compared; the linear regression and correlation of the disease course with the above determination values, as well as the stepwise regression and correlation of the course with the values, were analyzed. RESULTS: Compared with the control group, the average values of P-NO, P-LPO and E-LPO were significantly increased (P < 0.01), and those of P-VC, P-VE, P-β-CAR, E-SOD, E-CAT and E-GSH-Px decreased (P < 0.01) in the patient group. The analysis of linear regression and correlation showed that with prolonging of the course, the values of P-NO, P-LPO and E-LPO in the patients gradually ascended and the values of P-VC, P-VE, P-β-CAR, E-SOD, E-CAT and E-GSH-Px descended (P < 0.01). The analysis of stepwise regression and correlation indicated that the correlation of the course with the P-NO, P-VE and P-β-CAR values was the closest. Compared with the preoperative patients, the average values of P-NO, P-LPO and E-LPO were significantly decreased (P < 0.01) and the average values of P-VC, E-SOD, E-CAT and E-GSH-Px were increased (P < 0.01) in postoperative patients. There was no significant difference in the average values of P-VE and P-β-CAR between preoperative and postoperative patients. 
CONCLUSION: Chronic cholecystitis could induce the increase of nitric oxide, oxidation, peroxidation and lipoperoxidation. PMID:11819637

  15. Assessment of natural radioactivity levels in soil samples from some areas in Assiut, Egypt.

    PubMed

    El-Gamal, Hany; Farid, M El-Azab; Abdel Mageed, A I; Hasabelnaby, M; Hassanien, Hassanien M

    2013-12-01

    The natural radioactivity of soil samples from Assiut city, Egypt, was studied. The activity concentrations of 28 samples were measured with a NaI(Tl) detector. The radioactivity concentrations of (226)Ra, (232)Th, and (40)K showed large variations, so the results were classified into two groups (A and B) to facilitate the interpretation of the results. Group A represents samples collected from different locations in Assiut and characterized by low activity concentrations with average values of 46.15 ± 9.69, 30.57 ± 4.90, and 553.14 ± 23.19 for (226)Ra, (232)Th, and (40)K, respectively. Group B represents samples mainly collected from the area around Assiut Thermal Power Plant and characterized by very high activity concentrations with average values of 3,803 ± 145, 1,782 ± 98, and 1,377 ± 78 for (226)Ra, (232)Th, and (40)K, respectively. In order to evaluate the radiological hazard of the natural radioactivity, the radium equivalent activity (Raeq), the absorbed dose rate (D), the annual effective dose rate (E), the external hazard index (H ex), and the annual gonadal dose equivalent (AGDE) have been calculated and compared with the internationally approved values. For group A, the calculated averages of these parameters are in good agreement with the international recommended values except for the absorbed dose rate and the AGDE values which are slightly higher than the international recommended values. However, for group B, all obtained averages of these parameters are much higher by several orders of magnitude than the international recommended values. The present work provides a background of radioactivity concentrations in the soil of Assiut.
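The radium equivalent activity mentioned above is conventionally Raeq = A(Ra) + 1.43·A(Th) + 0.077·A(K). The sketch below applies it to the group A averages from the abstract; the activity units are assumed to be Bq/kg, which the abstract does not state:

```python
def radium_equivalent(a_ra, a_th, a_k):
    """Conventional radium equivalent activity: Ra_eq = A_Ra + 1.43*A_Th + 0.077*A_K."""
    return a_ra + 1.43 * a_th + 0.077 * a_k

# group A averages for (226)Ra, (232)Th, (40)K taken from the abstract
raeq_group_a = radium_equivalent(46.15, 30.57, 553.14)
```

The coefficients 1.43 and 0.077 equate the gamma dose of Th and K to that of Ra, so a single number can be compared against the commonly cited 370 Bq/kg limit; the group A result falls well below it.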

  16. Resonance behaviour of whole-body averaged specific energy absorption rate (SAR) in the female voxel model, NAOMI

    NASA Astrophysics Data System (ADS)

    Dimbylow, Peter

    2005-09-01

    Finite-difference time-domain (FDTD) calculations have been performed of the whole-body averaged specific energy absorption rate (SAR) in a female voxel model, NAOMI, under isolated and grounded conditions from 10 MHz to 3 GHz. The 2 mm resolution voxel model, NAOMI, was scaled to a height of 1.63 m and a mass of 60 kg, the dimensions of the ICRP reference adult female. Comparison was made with SAR values from a reference male voxel model, NORMAN. A broad SAR resonance in the NAOMI values was found around 900 MHz and a resulting enhancement, up to 25%, over the values for the male voxel model, NORMAN. This latter result confirmed previously reported higher values in a female model. The effect of differences in anatomy was investigated by comparing values for 10-, 5- and 1-year-old phantoms rescaled to the ICRP reference values of height and mass which are the same for both sexes. The broad resonance in the NAOMI child values around 1 GHz is still a strong feature. A comparison has been made with ICNIRP guidelines. The ICNIRP occupational reference level provides a conservative estimate of the whole-body averaged SAR restriction. The linear scaling of the adult phantom using different factors in longitudinal and transverse directions, in order to match the ICRP stature and weight, does not exactly reproduce the anatomy of children. However, for public exposure the calculations with scaled child models indicate that the ICNIRP reference level may not provide a conservative estimate of the whole-body averaged SAR restriction, above 1.2 GHz for scaled 5- and 1-year-old female models, although any underestimate is by less than 20%.

  17. Resonance behaviour of whole-body averaged specific energy absorption rate (SAR) in the female voxel model, NAOMI.

    PubMed

    Dimbylow, Peter

    2005-09-07

    Finite-difference time-domain (FDTD) calculations have been performed of the whole-body averaged specific energy absorption rate (SAR) in a female voxel model, NAOMI, under isolated and grounded conditions from 10 MHz to 3 GHz. The 2 mm resolution voxel model, NAOMI, was scaled to a height of 1.63 m and a mass of 60 kg, the dimensions of the ICRP reference adult female. Comparison was made with SAR values from a reference male voxel model, NORMAN. A broad SAR resonance in the NAOMI values was found around 900 MHz and a resulting enhancement, up to 25%, over the values for the male voxel model, NORMAN. This latter result confirmed previously reported higher values in a female model. The effect of differences in anatomy was investigated by comparing values for 10-, 5- and 1-year-old phantoms rescaled to the ICRP reference values of height and mass which are the same for both sexes. The broad resonance in the NAOMI child values around 1 GHz is still a strong feature. A comparison has been made with ICNIRP guidelines. The ICNIRP occupational reference level provides a conservative estimate of the whole-body averaged SAR restriction. The linear scaling of the adult phantom using different factors in longitudinal and transverse directions, in order to match the ICRP stature and weight, does not exactly reproduce the anatomy of children. However, for public exposure the calculations with scaled child models indicate that the ICNIRP reference level may not provide a conservative estimate of the whole-body averaged SAR restriction, above 1.2 GHz for scaled 5- and 1-year-old female models, although any underestimate is by less than 20%.

  18. Do the pollution related to high-traffic roads in urbanised areas pose a significant threat to the local population?

    PubMed

    Kobza, Joanna; Geremek, Mariusz

    2017-01-01

    Many large neighbourhoods are located near heavy-traffic roads; therefore, it is necessary to control the levels of air pollution near road exposure. The primary air pollutants emitted by motor vehicles are CO, NO2 and PM. Various investigations identify key health outcomes to be consistently associated with NO2 and CO. The objective of this study was a measurement-based assessment to determine whether, near high-traffic roads such as motorways and expressways, the concentrations of CO and NO2 are within normal limits and do not pose a threat to the local population. Average daily values (arithmetic values calculated for 1-h values within 24 h or less, depending on result availability) were measured for concentrations of NO2 and CO by automatic stations belonging to the Voivodship Environmental Protection Inspectorate in Katowice, in areas with a similar dominant source of pollutant emission. The measurements were made at three sites near the motorway and expressway, where the average daily traffic intensity is 100983 and 35414 vehicles, respectively. No evidence was found of the average daily values exceeding the maximum allowable NO2 concentration for the protection of human health in the measurement area of the stations. No daily average values exceeding the admissible CO concentration (8-h moving average) were noted in the examined period. The results clearly show a lack of hazard for the general population's health in terms of increased concentrations of CO and NO2 closely related to the high-intensity car traffic found on the selected motorways and expressways located near the city centres.
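The CO criterion above is an 8-h moving average of hourly values; a minimal sketch over an invented series of hourly CO readings:

```python
def moving_average(values, window=8):
    """Rolling mean over each full window of consecutive hourly values."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# hypothetical hourly CO concentrations in mg/m3
hourly_co = [1.2, 1.5, 1.1, 0.9, 1.4, 1.8, 2.0, 1.6, 1.3, 1.0]
peak_8h = max(moving_average(hourly_co))  # compared against the admissible limit
```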

  19. Study of the effect of electromagnetic fields on indoor and outdoor radon concentrations

    NASA Astrophysics Data System (ADS)

    Haider, Lina M.; Shareef, N. R.; Darwoysh, H. H.; Mansour, H. L.

    2018-05-01

    In the present work, the effect of electromagnetic fields produced by high-voltage power lines (132 kV) and indoor equipment on the indoor and outdoor average radon concentrations in the Al-Kazaliya and Hay Al-Adil regions of Baghdad city was studied using CR-39 track detectors and a gaussmeter. The measurements of the present study have shown that the highest value of the indoor average radon concentration (76.56 ± 8.44 Bq/m3) was recorded for sample A1 (Hay Al-Adel) at a distance of 20 m from the high-voltage power lines, while the lowest value of the indoor average radon concentration (30.46 ± 8.44 Bq/m3) was recorded for sample A3 (Hay Al-Adil) at a distance of 50 m from the high-voltage power lines. The indoor gaussmeter measurements were found to range from 30.2 mG to 38.5 mG. The highest outdoor average radon concentration and the highest gaussmeter measurement were found for sample (1), with values of (92.63 ± 11.2 Bq/m3) and (87.24 ± 2.85 mG), directly under the high-voltage power lines, while the lowest outdoor average radon concentration and the lowest gaussmeter measurement were found for sample (4), with values of (34.19 ± 6.33 Bq/m3) and (1.16 ± 0.14 mG), at a distance of 120 m from the high-voltage power lines. The results of the present work have shown that there might be an influence of the electromagnetic field on radon concentrations in areas close to high-voltage power lines and in houses that have used many electric appliances for a long period of time.

  20. Implementation of ICARE learning model using visualization animation on biotechnology course

    NASA Astrophysics Data System (ADS)

    Hidayat, Habibi

    2017-12-01

    ICARE is a learning model that directly ensures that students actively participate in the learning process, here using animated visualization media. ICARE comprises five key elements of the learning experience for children and adults: introduction, connection, application, reflection and extension. The use of the ICARE system ensures that participants have the opportunity to apply what they have learned, so that the message delivered by the lecturer can be understood and retained by students for a long time. This learning model was deemed capable of improving learning outcomes and interest in learning when following the Biotechnology learning process with the ICARE learning model applied using animated visualization. The model motivated students to participate in the learning process, and the learning outcomes obtained increased compared with before. From the students' results in the Biotechnology course, applying the ICARE learning model with animated visualization improved student outcomes: the average mid-term test value of 70.98, with a percentage of 75%, increased to an average final test value of 71.57, with a percentage of 68.63%. Students' interest in learning also increased, as seen from student activity in each cycle: the first cycle obtained an average value of 33.5 (fair category), the second cycle an average value of 36.5 (good category), and the third cycle an average value of 36.5 (good category).

  1. Measurement and interpretation of skin prick test results.

    PubMed

    van der Valk, J P M; Gerth van Wijk, R; Hoorn, E; Groenendijk, L; Groenendijk, I M; de Jong, N W

    2015-01-01

    There are several methods to read skin prick test (SPT) results in type-I allergy testing. A commonly used method is to characterize the wheal size by its 'average diameter'. A more accurate method is to scan the area of the wheal to calculate its actual size. In both methods, SPT results can be corrected for the histamine sensitivity of the skin by dividing the result of the allergic reaction by that of the histamine control. The objectives of this study are to compare different techniques of quantifying SPT results, to determine a cut-off value for a positive SPT based on the histamine equivalent prick-index (HEP) area, and to study the accuracy of the different SPT methods in predicting cashew nut reactions in double-blind placebo-controlled food challenge (DBPCFC) tests. Data from 172 children with cashew nut sensitisation were used for the analysis. All patients underwent a DBPCFC with cashew nut. For each patient, the average diameter and scanned area of the wheal were recorded, along with the same data for the histamine-induced wheal. The accuracy in predicting the outcome of the DBPCFC using four different SPT readings (average diameter, area, HEP-index diameter, HEP-index area) was compared in a receiver operating characteristic (ROC) plot. Characterizing the wheal size by the average diameter method is inaccurate compared with the scanning method. A wheal average diameter of 3 mm is generally considered the positive SPT cut-off value, and an equivalent HEP-index area cut-off value of 0.4 was calculated. The four SPT methods yielded comparable areas under the curve (AUC) of 0.84, 0.85, 0.83 and 0.83, respectively, and showed comparable accuracy in predicting cashew nut reactions in a DBPCFC. The 'scanned area method' is theoretically more accurate in determining the wheal area than the 'average diameter method' and is recommended in academic research. A HEP-index area of 0.4 was determined as the cut-off value for a positive SPT. However, in clinical practice the 'average diameter method' is also useful, because it provides similar accuracy in predicting cashew nut allergic reactions in the DBPCFC. Trial number NTR3572.
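
    The HEP-index correction described above can be sketched in a few lines. The function names and wheal areas below are illustrative, not from the study; only the 0.4 cut-off is taken from the abstract.

```python
# Hypothetical sketch: computing a histamine equivalent prick (HEP) index
# from scanned wheal areas and applying the 0.4 cut-off from the abstract.
# Names and numeric areas are illustrative, not from the study.

def hep_index_area(allergen_area_mm2: float, histamine_area_mm2: float) -> float:
    """Correct the allergen wheal area for skin reactivity by dividing
    by the histamine-control wheal area."""
    if histamine_area_mm2 <= 0:
        raise ValueError("histamine control area must be positive")
    return allergen_area_mm2 / histamine_area_mm2

def is_positive_spt(hep_area: float, cutoff: float = 0.4) -> bool:
    # cut-off of 0.4 taken from the abstract
    return hep_area >= cutoff

print(hep_index_area(12.0, 20.0))                   # 0.6
print(is_positive_spt(hep_index_area(12.0, 20.0)))  # True
```

    Dividing by the histamine control normalizes for individual skin reactivity, which is what makes a single cut-off value transferable between patients.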

  2. The national survey of natural radioactivity in concrete produced in Israel.

    PubMed

    Kovler, Konstantin

    2017-03-01

    The main goal of the current survey was to collect the results of natural radiation tests of concrete produced in the country, to analyze the results statistically, and to make recommendations for further regulation on the national scale. In total, 109 concrete mixes produced commercially during the years 2012-2014 by concrete plants in Israel were analyzed. The average concentrations of NORM did not exceed the values recognized in the EU and were close to the values obtained from Mediterranean countries such as Greece, Spain and Italy. It was also found that, although the average radon emanation coefficient of concrete containing coal fly ash (FA) was lower than that of concrete mixes without FA, there was no significant difference between the two sub-populations of concrete mixes (with and without FA) in either the total radiation index (addressing gamma radiation and radon together) or the gamma-radiation-only index. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. The production of calibration specimens for impact testing of subsize Charpy specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander, D.J.; Corwin, W.R.; Owings, T.D.

    1994-09-01

    Calibration specimens have been manufactured for checking the performance of a pendulum impact testing machine that has been configured for testing subsize specimens, both half-size (5.0 × 5.0 × 25.4 mm) and third-size (3.33 × 3.33 × 25.4 mm). Specimens were fabricated from quenched-and-tempered 4340 steel, heat treated to produce different microstructures that would result in either high or low absorbed energy levels on testing. A large group of both half- and third-size specimens was tested at −40°C. The test results were analyzed for average value and standard deviation, and these values were used to establish calibration limits for the Charpy impact machine when testing subsize specimens. The average values plus or minus two standard deviations were set as the acceptable limits for the average of five tests for calibration of the impact testing machine.
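
    The calibration rule described above (acceptable limits at the average plus or minus two standard deviations, applied to the average of five tests) can be sketched as follows; the absorbed-energy values are invented for illustration.

```python
# Sketch of the calibration-limit procedure: limits are mean ± 2*SD of the
# reference population, and the machine passes if the average of five
# calibration tests falls within them. Energy values (J) are invented.
import statistics

def calibration_limits(energies):
    mean = statistics.mean(energies)
    sd = statistics.stdev(energies)  # sample standard deviation
    return mean - 2 * sd, mean + 2 * sd

def passes_calibration(five_test_results, limits):
    lo, hi = limits
    return lo <= statistics.mean(five_test_results) <= hi

reference = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 9.7]  # illustrative
limits = calibration_limits(reference)
print(passes_calibration([10.0, 10.2, 9.9, 10.1, 10.0], limits))  # True
```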

  4. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amhis, Y.; et al.

    2014-12-23

    This article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group (HFAG) using results available through summer 2014. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays and CKM matrix elements.

  5. Proxy-based reconstruction of erythemal UV doses over Estonia for 1955-2004

    NASA Astrophysics Data System (ADS)

    Eerme, K.; Veismann, U.; Lätt, S.

    2006-08-01

    A proxy-based reconstruction of the erythemally-weighted UV doses for 1955-2004 has been performed for the Tartu-Tõravere Meteorological Station (58°16' N, 26°28' E, 70 m a.s.l.) site. The pyrheliometer-measured daily sum of direct irradiance on partly cloudy and clear days, and the pyranometer-measured daily sum of global irradiance on overcast days, were used as cloudiness-related proxies. The TOMS ozone data were used to detect daily deviations from the climatic value (averaged annual cycle). In 1998-2004, the biases between the measured and reconstructed daily doses were, on average, within ±10% in 55.5% of the cases and within ±20% in 83.5% of the cases. In the summer half-year these amounts were 62% and 88%, respectively. In most years the results for longer intervals did not differ significantly if no correction was made for the daily deviations of total ozone from its climatic value. The annual and summer half-yearly erythemal doses (the latter contributing, on average, 89% of the annual value) agreed within ±2%, except for the years after major volcanic eruptions and one extremely fine weather year (2002). Using the daily relative sunshine duration as a proxy without detailed correction for atmospheric turbidity results in biases of 2-4% in the summer half-yearly dose in the years after major volcanic eruptions and in a few other years of high atmospheric turbidity. The year-to-year variations of the summer half-yearly erythemal dose in 1955-2004 were found to be within 92-111% of their average value. Exclusion of eight extreme years narrows this range for the remaining years to 95-105.5%. Due to the quasi-periodic alternation of wet and dry periods, the interval of cloudy summers 1976-1993 regularly manifests summer half-yearly erythemal dose values lower than the 1955-2004 average. Since 1996/1997, midwinters have been darker than average.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Chu-Lin; Perfect, Edmund; Kang, Misun

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 × 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using the Beer-Lambert law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents, the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
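
    The per-pixel Beer-Lambert step described above can be sketched with a simple exponential attenuation model. The attenuation coefficient below is an invented placeholder (not from the paper), and the normalization mirrors the division by the saturated image described in the abstract.

```python
# Hedged sketch of the Beer-Lambert conversion from transmitted neutron
# flux to water thickness per pixel, plus the saturation normalization.
# MU_WATER is an illustrative attenuation coefficient, not a paper value.
import math

MU_WATER = 0.37  # mm^-1, illustrative

def water_thickness_mm(I, I_dry, mu=MU_WATER):
    """Beer-Lambert: I = I_dry * exp(-mu * t)  =>  t = ln(I_dry / I) / mu."""
    return math.log(I_dry / I) / mu

def relative_saturation(I_wet, I_saturated, I_dry, mu=MU_WATER):
    """Normalize by the fully saturated image, as done in the paper to
    suppress scattering effects at high water contents."""
    return (water_thickness_mm(I_wet, I_dry, mu)
            / water_thickness_mm(I_saturated, I_dry, mu))
```

    By construction, a pixel whose transmitted flux equals the saturated-image flux has a relative saturation of exactly 1, regardless of the assumed attenuation coefficient.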

  7. Predicting charmonium and bottomonium spectra with a quark harmonic oscillator

    NASA Technical Reports Server (NTRS)

    Norbury, J. W.; Badavi, F. F.; Townsend, L. W.

    1986-01-01

    The nonrelativistic quark model is applied to heavy (nonrelativistic) meson (two-body) systems to obtain sufficiently accurate predictions of the spin-averaged mass levels of the charmonium and bottomonium spectra as an example of the three-dimensional harmonic oscillator. The present calculations do not include any spin dependence; rather, mass values are averaged over different spins. Results for a charmed quark mass value of 1500 MeV/c2 show that the simple harmonic oscillator model provides good agreement with experimental values for the 3P states, and adequate agreement for the 3S1 states.
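
    As a hedged sketch of the model class named above: for a three-dimensional harmonic oscillator the spin-averaged levels take the form M(n, l) = 2m_q + (2n + l + 3/2)ħω. The oscillator spacing below is a made-up fitting parameter; only the 1500 MeV/c2 charmed quark mass comes from the abstract.

```python
# Illustrative spin-averaged mass formula for a quark-antiquark pair in a
# 3-D harmonic oscillator potential. hbar_omega is an invented fitting
# parameter, not a value from the paper.

def meson_mass(n, l, m_q=1500.0, hbar_omega=500.0):
    """Spin-averaged mass in MeV/c^2 for radial quantum number n
    and orbital angular momentum l."""
    return 2 * m_q + (2 * n + l + 1.5) * hbar_omega

# The spacing between successive S states is 2*hbar_omega:
print(meson_mass(1, 0) - meson_mass(0, 0))  # 1000.0
```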

  8. North Atlantic Basin Tropical Cyclone Activity in Relation to Temperature and Decadal-Length Oscillation Patterns

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2009-01-01

    Yearly frequencies of North Atlantic basin tropical cyclones, their locations of origin, peak wind speeds, average peak wind speeds, lowest pressures, and average lowest pressures for the interval 1950-2008 are examined. The effects of El Nino and La Nina on the tropical cyclone parametric values are investigated. Yearly and 10-year moving average (10-yma) values of tropical cyclone parameters are compared against those of temperature and decadal-length oscillation, employing both linear and bivariate analysis, and first differences in the 10-yma are determined. A discussion of the 2009 North Atlantic basin hurricane season, updating earlier results, is given.
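
    The 10-yma and first-difference quantities used above can be computed with a simple sliding window; the yearly counts below are invented for illustration.

```python
# Minimal sketch of a 10-year moving average (10-yma) and its first
# differences, applied to an invented yearly storm-count series.

def moving_average(series, window=10):
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

def first_differences(values):
    return [b - a for a, b in zip(values, values[1:])]

counts = [7, 11, 8, 12, 19, 13, 9, 15, 28, 16, 10, 14]  # invented counts
yma = moving_average(counts)
print(yma)                     # three overlapping 10-year means
print(first_differences(yma))  # year-to-year change of the smoothed series
```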

  9. Assessment of natural radioactivity and radiation hazard indices in soil samples of East Khasi Hills District, Meghalaya, India

    NASA Astrophysics Data System (ADS)

    Lyngkhoi, B.; Nongkynrih, P.

    2018-04-01

    The activity concentrations of naturally occurring radionuclides such as 40K, 238U and 232Th were determined for 20 (twenty) villages of the East Khasi Hills District of Meghalaya, India, using gamma-ray spectroscopy. This district is adjacent to the South-West Khasi Hills District, located in the same state, where a heavy deposit of uranium has been identified [1]. The measured activities of 40K, 238U and 232Th ranged from 93.4 to 606.3, 23.2 to 140.9 and 25.1 to 158.9 Bq kg-1, with average values of 207.9, 45.6 and 63.8 Bq kg-1, respectively. The average activity concentration of 40K is lower than the world average value of 400.0 Bq kg-1, while for 238U and 232Th the average concentrations are above the world average values of 35.0 and 30.0 Bq kg-1, respectively. The calculated absorbed gamma dose rate from the natural radionuclides ranged from 37.4 to 186.5 nGy h-1, with an average of 71.3 nGy h-1. The outdoor annual effective dose rate received by an individual ranged from 50.0 to 230.0 µSv y-1, with an average value of 87.5 µSv y-1. The physical and chemical properties of the soil have no effect on the concentrations of the naturally occurring radionuclides, as revealed by the absence of any positive correlation between the physical/chemical parameters and the radionuclide concentrations in the soil samples [2]. Good positive correlations are observed among the radionuclide concentrations and with the measured dose rate. The findings show that the external and internal hazard indices resulting from the measured activity concentrations of natural radionuclides in soil from the sampling areas are less than the internationally recommended safety limit of 1 (unity), with the exception of Mylliem (1.12), where the external hazard index is slightly higher.
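
    As a hedged illustration of the hazard-index check mentioned above: surveys of this kind commonly define the external hazard index as Hex = C_Ra/370 + C_Th/259 + C_K/4810 (activities in Bq/kg, safety limit 1). The paper's exact formula is not reproduced here, so treat this as an assumption; the inputs are the district averages quoted in the abstract.

```python
# Commonly used external hazard index (assumed formula, not verified
# against this paper), applied to the abstract's average activities.

def external_hazard_index(c_ra, c_th, c_k):
    """Hex = C_Ra/370 + C_Th/259 + C_K/4810, activities in Bq/kg."""
    return c_ra / 370.0 + c_th / 259.0 + c_k / 4810.0

# District averages from the abstract (238U taken as the Ra-series term):
hex_avg = external_hazard_index(45.6, 63.8, 207.9)
print(round(hex_avg, 3))  # well below the safety limit of 1
```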

  10. Methods of developing core collections based on the predicted genotypic value of rice ( Oryza sativa L.).

    PubMed

    Li, C T; Shi, C H; Wu, J G; Xu, H M; Zhang, H Z; Ren, Y L

    2004-04-01

    The selection of an appropriate sampling strategy and clustering method is important in the construction of core collections based on predicted genotypic values, in order to retain the greatest degree of genetic diversity of the initial collection. In this study, methods of developing rice core collections were evaluated based on the predicted genotypic values of 13 quantitative traits for 992 rice varieties. The genotypic values of the traits were predicted by the adjusted unbiased prediction (AUP) method. Based on the predicted genotypic values, Mahalanobis distances were calculated and employed to measure the genetic similarities among the rice varieties. Six hierarchical clustering methods, including the single linkage, median linkage, centroid, unweighted pair-group average, weighted pair-group average and flexible-beta methods, were combined with random, preferred and deviation sampling to develop 18 core collections of rice germplasm. The results show that the deviation sampling strategy in combination with the unweighted pair-group average method of hierarchical clustering retains the greatest degree of genetic diversity of the initial collection. The core collections sampled using predicted genotypic values had more genetic diversity than those based on phenotypic values.
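
    A single merge step of the unweighted pair-group average (UPGMA) method that the study found best can be sketched in pure Python; the 4×4 distance matrix is invented, standing in for the Mahalanobis distances used in the paper.

```python
# One UPGMA merge step on a precomputed distance matrix: the two closest
# clusters are merged, with inter-cluster distance defined as the
# unweighted average of pairwise member distances. Distances are invented.

def upgma_merge_once(dist, clusters):
    pairs = [(i, j) for i in range(len(clusters))
             for j in range(i + 1, len(clusters))]

    def cluster_dist(a, b):
        return sum(dist[x][y] for x in clusters[a] for y in clusters[b]) / (
            len(clusters[a]) * len(clusters[b]))

    i, j = min(pairs, key=lambda p: cluster_dist(*p))
    merged = clusters[i] + clusters[j]
    rest = [c for k, c in enumerate(clusters) if k not in (i, j)]
    return rest + [merged]

D = [[0, 2, 6, 10],
     [2, 0, 5, 9],
     [6, 5, 0, 4],
     [10, 9, 4, 0]]
clusters = upgma_merge_once(D, [[0], [1], [2], [3]])
print(clusters)  # [[2], [3], [0, 1]]
```

    Repeating the step until the desired number of clusters remains yields the hierarchy from which core-collection entries are then sampled.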

  11. Time prediction of failure of a type of lamp by using a general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses estimation of a basic survival model to obtain the average predicted value of lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random time variable model used as the basis is the exponential distribution model, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative function. The model obtained is then used to predict the average failure time for the type of lamp, by grouping the data into several intervals, taking the average failure value in each interval, and calculating the average failure time of the model based on each interval; the p-value obtained from the test result is 0.3296.
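
    The exponential base model named above can be sketched directly: with constant hazard λ, S(t) = exp(−λt) and the mean failure time is 1/λ. The failure times below are invented, and the maximum-likelihood estimate of λ is one over the sample mean; the paper's composite model builds on this base and is not reproduced here.

```python
# Sketch of the exponential base model: constant hazard lam, survival
# S(t) = exp(-lam*t), mean failure time 1/lam. Data are invented.
import math

def survival(t, lam):
    return math.exp(-lam * t)

def mean_failure_time(lam):
    return 1.0 / lam

# MLE for lam from observed failure times: n / sum(times) = 1 / mean
failure_times = [950.0, 1210.0, 780.0, 1460.0, 1100.0]  # hours, invented
lam_hat = len(failure_times) / sum(failure_times)
print(mean_failure_time(lam_hat))  # equals the sample mean of the data
```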

  12. Vesta surface thermal properties map

    USGS Publications Warehouse

    Capria, Maria Teresa; Tosi, F.; De Santis, Maria Cristina; Capaccioni, F.; Ammannito, E.; Frigeri, A.; Zambon, F; Fonte, S.; Palomba, E.; Turrini, D.; Titus, T.N.; Schroder, S.E.; Toplis, M.J.; Liu, J.Y.; Combe, J.-P.; Raymond, C.A.; Russell, C.T.

    2014-01-01

    The first-ever regional thermal properties map of Vesta has been derived from the temperatures retrieved from infrared data acquired by the Dawn mission. The low average value of thermal inertia, 30 ± 10 J m−2 s−0.5 K−1, indicates a surface covered by a fine regolith. A range of thermal inertia values suggesting terrains with different physical properties has been determined. The lower thermal inertia of the regions north of the equator suggests that they are covered by an older, more processed surface. A few specific areas have higher than average thermal inertia values, indicative of a more compact material. The highest thermal inertia value has been determined on the Marcia crater, known for its pitted terrain and the presence of hydroxyl in the ejecta. Our results suggest that this type of terrain can be the result of soil compaction following the degassing of a local subsurface reservoir of volatiles.

  13. Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system

    NASA Astrophysics Data System (ADS)

    Wang, Daobin; Yuan, Lihua; Lei, Jingli; Wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye

    2017-12-01

    In this paper, we analyze preamble-based joint estimation of the channel and the laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). In order to reduce the impact of noise on the estimation accuracy, we propose an estimation method based on inter-frame averaging. This method averages the cross-correlation function of real-valued pilots over multiple FBMC frames, and the laser-frequency offset is estimated from the phase of this average. After correcting the LFO, the final channel response is likewise acquired by averaging channel estimation results over multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is thoroughly designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of our method is demonstrated numerically using different fiber and LFO values. The obtained results show that the proposed method can improve transmission performance significantly.
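
    The inter-frame averaging idea can be illustrated with a deliberately simplified signal model (a pure tone rather than the full FBMC/OQAM pilot scheme): per-frame correlations are averaged before the phase is taken, so zero-mean noise tends to cancel. All names and numbers below are illustrative.

```python
# Simplified sketch: estimate a frequency offset from the phase of a
# cross-correlation averaged over several frames. Not the paper's full
# pilot scheme; the noise terms and offset value are invented.
import cmath

def estimate_offset(corrs, delay_s):
    """Average per-frame correlations, then convert phase to frequency."""
    avg = sum(corrs) / len(corrs)
    return cmath.phase(avg) / (2 * cmath.pi * delay_s)

# Three noisy frame correlations for a true offset of 100 Hz over 1 ms:
true_phase = 2 * cmath.pi * 100.0 * 1e-3
corrs = [cmath.exp(1j * true_phase) + e
         for e in (0.05, -0.03j, -0.02 + 0.01j)]
print(estimate_offset(corrs, 1e-3))  # close to 100 Hz
```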

  14. Real Diffusion-Weighted MRI Enabling True Signal Averaging and Increased Diffusion Contrast

    PubMed Central

    Eichner, Cornelius; Cauley, Stephen F; Cohen-Adad, Julien; Möller, Harald E; Turner, Robert; Setsompop, Kawin; Wald, Lawrence L

    2015-01-01

    This project aims to characterize the impact of underlying noise distributions on diffusion-weighted imaging. The noise floor is a well-known problem for traditional magnitude-based diffusion-weighted MRI (dMRI) data, leading to biased diffusion model fits and inaccurate signal averaging. Here, we introduce a total-variation-based algorithm to eliminate shot-to-shot phase variations of complex-valued diffusion data with the intention to extract real-valued dMRI datasets. The obtained real-valued diffusion data are no longer superimposed by a noise floor but instead by a zero-mean Gaussian noise distribution, yielding dMRI data without signal bias. We acquired high-resolution dMRI data with strong diffusion weighting and, thus, low signal-to-noise ratio. Both the extracted real-valued and traditional magnitude data were compared regarding signal averaging, diffusion model fitting and accuracy in resolving crossing fibers. Our results clearly indicate that real-valued diffusion data enables idealized conditions for signal averaging. Furthermore, the proposed method enables unbiased use of widely employed linear least squares estimators for model fitting and demonstrates an increased sensitivity to detect secondary fiber directions with reduced angular error. The use of phase-corrected, real-valued data for dMRI will therefore help to clear the way for more detailed and accurate studies of white matter microstructure and structural connectivity on a fine scale. PMID:26241680
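
    The noise-floor effect described above is easy to reproduce in simulation: averaging magnitudes of complex Gaussian noise converges to a positive Rician floor, while averaging the phase-corrected real part converges to the true signal. All parameters below are invented.

```python
# Simulation of the magnitude noise floor vs. unbiased real-valued
# averaging for a fully attenuated (zero) diffusion signal.
import math
import random

random.seed(0)
true_signal = 0.0   # fully attenuated signal
n = 20000
sigma = 1.0

mag_avg = 0.0
real_avg = 0.0
for _ in range(n):
    re = true_signal + random.gauss(0, sigma)
    im = random.gauss(0, sigma)
    mag_avg += math.hypot(re, im) / n   # Rician magnitude sample
    real_avg += re / n                  # phase-corrected real part

print(mag_avg)   # near sigma*sqrt(pi/2) ≈ 1.25: the noise floor
print(real_avg)  # near 0: unbiased
```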

  15. Aged Riverine Particulate Organic Carbon in Four UK Catchments

    NASA Astrophysics Data System (ADS)

    Adams, Jessica; Tipping, Edward; Bryant, Charlotte; Helliwell, Rachel; Toberman, Hannah; Quinton, John

    2016-04-01

    The riverine transport of particulate organic matter (POM) is a significant flux in the carbon cycle, and affects macronutrients and contaminants. We used radiocarbon to characterise POM at 9 riverine sites in four UK catchments (Avon, Conwy, Dee, Ribble) over a one-year period. High-discharge samples were collected on three or four occasions at each site. Suspended particulate matter (SPM) was obtained by centrifugation, and the samples were analysed for carbon isotopes. Concentrations of SPM and SPM organic carbon (OC) contents were also determined, and were found to have a significant negative correlation. For the 7 rivers draining predominantly rural catchments, PO14C values, expressed as percent modern carbon absolute (pMC), varied little among samplings at each site, and there was no significant difference in the average values among the sites. The overall average PO14C value for the 7 sites of 91.2 pMC corresponded to an average age of 680 14C years, but this value arises from the mixing of differently-aged components, and therefore significant amounts of organic matter older than the average value are present in the samples. Although topsoil erosion is probably the major source of the riverine POM, the average PO14C value is appreciably lower than topsoil values (which are typically 100 pMC). This is most likely explained by inputs of older subsoil OC from bank erosion, or the preferential loss of high-14C topsoil organic matter by mineralisation during riverine transport. The significantly lower average PO14C of samples from the River Calder (76.6 pMC) can be ascribed to components containing little or no radiocarbon, derived either from industrial sources or historical coal mining, and this effect is also seen in the River Ribble, downstream of its confluence with the Calder. At the global scale, the results significantly expand the available information on PO14C in rivers draining catchments with low erosion rates.

  16. Channel Characterization for Free-Space Optical Communications

    DTIC Science & Technology

    2012-07-01

    parameters. From the path-average parameters, a Cn2 profile model, called the HAP model, was constructed so that the entire channel from air to ground...SR), both of which are required to estimate the Power in the Bucket (PIB) and Power in the Fiber (PIF) associated with the FOENEX data beam. UCF was...of the path-average values of Cn2, the resulting HAP Cn2 profile model led to values of ground-level Cn2 that compared very well with actual

  17. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2016

    DOE PAGES

    Amhis, Y.; Banerjee, Sw.; Ben-Haim, E.; ...

    2017-12-21

    Here, this article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays, and Cabibbo–Kobayashi–Maskawa matrix elements.

  18. Averages of b-hadron, c-hadron, and τ-lepton properties as of summer 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amhis, Y.; Banerjee, Sw.; Ben-Haim, E.

    Here, this article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays, and Cabibbo–Kobayashi–Maskawa matrix elements.

  19. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  20. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  1. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  2. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  3. 9 CFR 439.10 - Criteria for obtaining accreditation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... absolute value of the average standardized difference must not exceed the following: (i) For food chemistry... samples must be less than 5.0. A result will have a large deviation measure equal to zero when the absolute value of the result's standardized difference, (d), is less than 2.5 and otherwise a measure equal...

  4. Wear resistance of Polymethyl Methacrylate (PMMA) with the Addition of Bone Ash, Hydroxylapatite and Keratin

    NASA Astrophysics Data System (ADS)

    Emre, G.; Akkus, A.; Karamış, M. B.

    2018-01-01

    In this study, the mechanical and tribological properties of PMMA (known as the main prosthesis material) with additions of keratin, bone ash and hydroxylapatite were investigated. Hydroxylapatite, bone ash, and keratin were added to the PMMA in proportions of 1%, 3% and 5%, respectively. The resulting mixtures were put into molds and solidified in order to obtain samples to be used in the wear experiments. Each experiment was conducted with three prepared samples, and the wear data were compared according to the average values of the samples. In the wear test, the results were also evaluated against the average values obtained from each group and the results of the control group. It was observed that the wear resistance of the PMMA including 3% or 5% bone ash and of the PMMA including 5% keratin flour was higher than that of the control group.

  5. Numerical investigation of the relationship between magnetic stiffness and minor loop size in the HTS levitation system

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Li, Chengshan

    2017-10-01

    The effect of minor loop size on magnetic stiffness has received little attention in experimental and theoretical studies of high temperature superconductor (HTS) magnetic levitation systems. In this work, we numerically investigate the average magnetic stiffness obtained with minor loop traverses Δz (or Δx) varying from 0.1 mm to 2 mm in the zero-field-cooling and field-cooling regimes, respectively. Approximate values of the magnetic stiffness at zero traverse are obtained by linear extrapolation. Compared with the average magnetic stiffness obtained from any given minor loop traverse, these approximate values are not always close to the average magnetic stiffness produced by the smallest minor loops. The relative deviation ranges of the average magnetic stiffness obtained with the usual minor loop traverses (1 or 2 mm) are presented as ratios of the approximate values to the average stiffness for different moving processes and two typical cooling conditions. The results show that most of the average magnetic stiffness values are markedly influenced by the minor loop size, indicating that the magnetic stiffness obtained from a single minor loop traverse Δz or Δx of, for example, 1 or 2 mm can generally incur a large deviation.
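
    The linear-extrapolation step mentioned above can be sketched with an ordinary least-squares line through stiffness values measured at several traverses, evaluated at zero traverse; the data points below are invented.

```python
# Sketch of linear extrapolation to zero minor-loop traverse: fit a
# least-squares line to (traverse, stiffness) pairs and read off the
# intercept. Data values are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    return a, my - a * mx

traverses = [0.1, 0.5, 1.0, 2.0]   # mm, invented
stiffness = [9.8, 9.0, 8.0, 6.0]   # N/mm, invented (exactly linear here)
slope, intercept = fit_line(traverses, stiffness)
print(intercept)  # estimated stiffness at zero traverse
```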

  6. High order cell-centered scheme totally based on cell average

    NASA Astrophysics Data System (ADS)

    Liu, Ze-Yu; Cai, Qing-Dong

    2018-05-01

    This work clarifies the concept of the cell average by pointing out the differences between the cell average and the cell centroid value, which are the averaged cell-centered value and the pointwise cell-centered value, respectively. An interpolation based on cell averages is constructed, and a high order QUICK-like numerical scheme is designed for this interpolation. A new approach to error analysis, similar to Taylor expansion, is introduced in this work.
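
    The distinction drawn above can be made concrete with a one-line example: for u(x) = x² on a cell [0, h], the cell average exceeds the centroid value by h²/12, an O(h²) difference, which is why the two must not be conflated in a high-order scheme.

```python
# Cell average vs. centroid value for u(x) = x^2 on the cell [0, h]:
# the two agree only to second order in the cell width h.

def centroid_value(h):
    return (h / 2) ** 2    # u evaluated at the cell center

def cell_average(h):
    return h ** 2 / 3      # (1/h) * integral of x^2 over [0, h]

h = 0.1
print(cell_average(h) - centroid_value(h))  # h^2/12, about 8.33e-4
```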

  7. The Research of Feature Extraction Method of Liver Pathological Image Based on Multispatial Mapping and Statistical Properties

    PubMed Central

    Liu, Huiling; Xia, Bingbing; Yi, Dehui

    2016-01-01

    We propose a new feature extraction method for liver pathological images based on multispatial mapping and statistical properties. For liver pathological images with Hematein Eosin staining, the R and B channels reflect the sensitivity of liver pathological images better, while the entropy space and Local Binary Pattern (LBP) space better reflect the texture features of the image. To obtain more comprehensive information, we map liver pathological images to the entropy space, LBP space, R space, and B space. Because the traditional Higher Order Local Autocorrelation Coefficients (HLAC) cannot reflect the overall information of the image, we propose an average-corrected HLAC feature. We calculate the statistical properties and the average gray value of pathological images and then update the current pixel value as the absolute value of the difference between the current pixel gray value and the average gray value, which is more sensitive to gray value changes in pathological images. Lastly, the HLAC template is used to calculate the features of the updated image. The experimental results show that the improved multispatial mapping features give better classification performance for liver cancer. PMID:27022407
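
    The average-correction step described above amounts to replacing each pixel with the absolute difference from the image mean; a minimal sketch on an invented 2×2 image:

```python
# Average-correction update: pixel -> |pixel - image mean|, making the
# representation sensitive to deviations from the average gray value.

def average_correct(image):
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return [[abs(p - mean) for p in row] for row in image]

img = [[10, 20],
       [30, 40]]
print(average_correct(img))  # [[15.0, 5.0], [5.0, 15.0]]
```

    The HLAC template would then be applied to this corrected image rather than the raw one.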

  8. [Impact of smoking ban at indoor public places on indoor air quality].

    PubMed

    Bilir, Nazmi; Özcebe, Hilal

    2012-01-01

    This study evaluates the effect of smoke-free policy at hospitality workplaces on indoor air quality. The study includes 151 hospitality venues (restaurants, cafes, bars and tea-houses) in eight provinces of Turkey. PM2.5 measurements were made at each venue three months before and 4-5 months after the implementation of the smoking ban. Measurements were made by two engineers using a SidePak PM2.5 monitor. During the 30 minutes of measurement, the device takes multiple samples, measures PM2.5 particles, and calculates the average value and standard deviation of the measurements. Two kinds of evaluation were carried out on the results: for each province, the increase or decrease after implementation was evaluated for each venue in the study, and average PM2.5 values were calculated for each province from the PM2.5 values of its venues. The average PM2.5 values before implementation were generally higher than the post-implementation values. Nevertheless, in some provinces higher values were found during the second measurements, particularly at restaurants. There is therefore a need to enforce the smoking ban at hospitality workplaces.

  9. Physical and mechanical properties of spinach for whole-surface online imaging inspection

    NASA Astrophysics Data System (ADS)

    Tang, Xiuying; Mo, Chang Y.; Chan, Diane E.; Peng, Yankun; Qin, Jianwei; Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin

    2011-06-01

    The physical and mechanical properties of baby spinach were investigated, including density, Young's modulus, fracture strength, and friction coefficient. The average apparent density of baby spinach leaves was 0.5666 g/mm3. Tensile tests were performed in the parallel, perpendicular, and diagonal directions with respect to the midrib of each leaf. The test results showed that the mechanical properties of spinach are anisotropic. For the parallel, diagonal, and perpendicular test directions, the average Young's modulus values were 2.137 MPa, 1.0841 MPa, and 0.3914 MPa, respectively, and the average fracture strength values were 0.2429 MPa, 0.1396 MPa, and 0.1113 MPa, respectively. The static and kinetic friction coefficients between baby spinach and a conveyor belt were also measured: the average coefficients of kinetic and maximum static friction were 1.2737 and 1.3635, respectively, for the adaxial (front side) leaf surface, and 1.1780 and 1.2451, respectively, for the abaxial (back side) leaf surface. This work provides the basis for future development of a whole-surface online imaging inspection system that can be used by the commercial vegetable processing industry to reduce food safety risks.

  10. A bone marrow toxicity model for 223Ra alpha-emitter radiopharmaceutical therapy

    NASA Astrophysics Data System (ADS)

    Hobbs, Robert F.; Song, Hong; Watchman, Christopher J.; Bolch, Wesley E.; Aksnes, Anne-Kirsti; Ramdahl, Thomas; Flux, Glenn D.; Sgouros, George

    2012-05-01

    Ra-223, an α-particle emitting bone-seeking radionuclide, has recently been used in clinical trials for osseous metastases of prostate cancer. We investigated the relationship between absorbed fraction-based red marrow dosimetry and cell level-dosimetry using a model that accounts for the expected localization of this agent relative to marrow cavity architecture. We show that cell level-based dosimetry is essential to understanding potential marrow toxicity. The GEANT4 software package was used to create simple spheres representing marrow cavities. Ra-223 was positioned on the trabecular bone surface or in the endosteal layer and simulated for decay, along with the descendants. The interior of the sphere was divided into cell-size voxels and the energy was collected in each voxel and interpreted as dose cell histograms. The average absorbed dose values and absorbed fractions were also calculated in order to compare those results with previously published values. The absorbed dose was predominantly deposited near the trabecular surface. The dose cell histogram results were used to plot the percentage of cells that received a potentially toxic absorbed dose (2 or 4 Gy) as a function of the average absorbed dose over the marrow cavity. The results show (1) a heterogeneous distribution of cellular absorbed dose, strongly dependent on the position of the cell within the marrow cavity; and (2) that increasing the average marrow cavity absorbed dose, or equivalently, increasing the administered activity resulted in only a small increase in potential marrow toxicity (i.e. the number of cells receiving more than 4 or 2 Gy), for a range of average marrow cavity absorbed doses from 1 to 20 Gy. The results from the trabecular model differ markedly from a standard absorbed fraction method while presenting comparable average dose values. 
These results suggest that increasing the amount of administered radioactivity may not substantially increase the risk of toxicity, a result unavailable to the absorbed fraction method of dose calculation.
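The per-cell toxicity measure described above (the fraction of cells receiving at least 2 or 4 Gy, as a function of the average marrow-cavity dose) amounts to a simple reduction over the dose cell histogram. A minimal sketch, with illustrative voxel doses rather than output from the GEANT4 model:

```python
import numpy as np

def fraction_above(voxel_doses, threshold_gy):
    """Fraction of cell-size voxels receiving at least `threshold_gy`,
    the toxicity measure derived from the dose cell histogram."""
    d = np.asarray(voxel_doses, dtype=float)
    return float((d >= threshold_gy).mean())

# Toy example: dose falls off steeply away from the trabecular surface,
# so the cavity-average dose can rise while few additional cells cross
# the 2 or 4 Gy toxicity thresholds.
doses = np.array([30.0, 10.0, 3.0, 1.0, 0.5, 0.2, 0.1, 0.05])
avg = doses.mean()
frac2 = fraction_above(doses, 2.0)
frac4 = fraction_above(doses, 4.0)
```

Here the average dose is about 5.6 Gy, yet only a minority of voxels exceed 2 Gy, mirroring the heterogeneity the study reports.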

  11. Assessment of indoor radon, thoron concentrations, and their relationship with seasonal variation and geology of Udhampur district, Jammu & Kashmir, India.

    PubMed

    Kumar, Ajay; Sharma, Sumit; Mehra, Rohit; Narang, Saurabh; Mishra, Rosaline

    2017-07-01

    Background: The inhalation doses resulting from exposure to radon, thoron, and their progeny are important quantities in estimating the radiation risk for epidemiological studies, as the average global annual effective dose due to radon and its progeny is 1.3 mSv, compared to 2.4 mSv from all other natural sources of ionizing radiation. Objectives: The annual inhalation dose has been assessed with the aim of investigating the health risk to the inhabitants of the studied region. Methods: Time-integrated, deposition-based 222Rn/220Rn sensors were used to measure concentrations in 146 dwellings of Udhampur district, Jammu and Kashmir. An active smart RnDuo monitor was also used for comparison purposes. Results: Indoor radon and thoron concentrations varied from 11 to 58 Bq m(-3) with an average value of 29 ± 9 Bq m(-3), and from 25 to 185 Bq m(-3) with an average value of 83 ± 32 Bq m(-3), respectively. About 10.7% of dwellings had values higher than the world average of 40 Bq m(-3) given by UNSCEAR. The relationships of indoor radon and thoron levels with seasons, ventilation conditions, and geological formations are discussed. Conclusions: The observed concentrations and the average annual effective dose due to radon, thoron, and their progeny in the study area were found to be below the level recommended by ICRP. The 222Rn and 220Rn concentrations measured with the active and passive techniques are in good agreement.

  12. Occupational exposure to electric fields and induced currents associated with 400 kV substation tasks from different service platforms.

    PubMed

    Korpinen, Leena H; Elovaara, Jarmo A; Kuisti, Harri A

    2011-01-01

    The aim of the study was to investigate occupational exposure to electric fields, average current densities, and average total contact currents during 400 kV substation tasks performed from different service platforms (main transformer inspection, maintenance of the operating device of a disconnector, maintenance of the operating device of a circuit breaker). The average values were calculated over measured periods of about 2.5 min. In many work tasks the maximum electric field strengths exceeded the action values proposed in EU Directive 2004/40/EC, but the average electric fields (0.2-24.5 kV/m) were at least 40% lower than the maximum values. The average current densities were 0.1-2.3 mA/m² and the average total contact currents 2.0-143.2 µA, clearly less than the limit values of the EU Directive. The average values of the currents in the head and the contact currents were 16-68% lower than the maximum values when the average was taken over all cases in the same substation. In the future it is important to pay attention to the fact that the action and limit values of the EU Directive differ significantly. It is also important to take into account that workers' exposure to electric fields, current densities, and total contact currents is generally lower when average values over a measured time period (e.g., 2.5 min) are used than when exposure is defined only by the maximum values. © 2010 Wiley-Liss, Inc.

  13. A comparative statistical study of long-term agroclimatic conditions affecting the growth of US winter wheat: Distributions of regional monthly average precipitation on the Great Plains and the state of Maryland and the effect of agroclimatic conditions on yield in the state of Kansas

    NASA Technical Reports Server (NTRS)

    Welker, J.

    1981-01-01

    A histogram analysis of average monthly precipitation over 30 and 84 year periods for both Maryland and Kansas was made and the results compared. A second analysis, a statistical assessment of the effect of average monthly precipitation on Kansas winter wheat yield was made. The data sets covered the three periods of 1941-1970, 1887-1970, and 1887-1921. Analyses of the limited data sets used (only the average monthly precipitation and temperature were correlated against yield) indicated that fall precipitation values, especially those of September and October, were more important to winter wheat yield than were spring values, particularly for the period 1941-1970.

  14. Symphysis pubis width and unaffected hip joint width in patients with slipped upper femoral epiphysis: widening compared with normal values.

    PubMed

    Tins, Bernhard; Cassar-Pullicino, Victor; Haddaway, Mike

    2010-04-01

    The exact pathomechanism of slipped upper femoral epiphysis (SUFE) remains elusive. This paper suggests a generalised abnormality of the development or maturation of cartilage as a possible cause. It is proposed that SUFE is part of a generalised abnormality of cartilage formation or maturation resulting in abnormal measurements of cartilaginous joint structures. Radiographs of SUFE patients were assessed for the width of the unaffected hip joint and the symphysis pubis, and compared with previously published normal values. Fifty-one patients were assessed, 35 male and 16 female. The average age was 12 years 11 months for both sexes combined, 13 years 8 months for boys, and 11 years 4 months for girls. Width of the symphysis pubis was assessed on 46 datasets, and comparison with normal values was performed using the Wilcoxon paired rank test, with statistical significance set at p < 0.05. The average expected width was 5.8 mm (5.4-6.2 mm) and the average measured width was 7.3 mm (3.5-12 mm), median 7.0 mm; the difference is statistically significant. Cartilage thickness of the uninvolved hip joint could be assessed in 46 cases, and comparison using the Wilcoxon paired rank test again showed a statistically significant difference (p < 0.05): the average expected width was 4.9 mm (3.6-6.5 mm), the average measured width 5.5 mm (4-8 mm), median 5.3 mm. The results indicate that SUFE patients display a generalised increased width of joint cartilage for their age. This could be due to increased cartilage formation, decreased maturation, or a combination of the two, and could explain the increased mechanical vulnerability of these children to normal or abnormal stresses, despite the histologically normal organisation of the physis shown in previous studies.

  15. Experimental studies on cycling stable characteristics of inorganic phase change material CaCl2·6H2O-MgCl2·6H2O modified with SrCl2·6H2O and CMC

    NASA Astrophysics Data System (ADS)

    He, Meizhi; Yang, Luwei; Zhang, Zhentao

    2018-01-01

    Using the mass-ratio method, the binary eutectic hydrated-salt inorganic phase change thermal energy storage system CaCl2·6H2O-20wt% MgCl2·6H2O was prepared, and a modified inorganic phase change material (PCM) was obtained by adding 1wt% SrCl2·6H2O as nucleating agent and 0.5wt% carboxymethyl cellulose (CMC) as thickening agent. The PCM was frozen and melted for 100 cycles under programmable temperature control while the cooling-melting curves were recorded. After every 10 cycles, the PCM was characterized by differential scanning calorimetry (DSC), X-ray diffraction (XRD) and a density meter, and the variation of phase change temperature, supercooling degree, superheat degree, latent heat, crystal structure and density with cycle number was analysed. The results showed that the average phase change temperatures for the cooling and heating processes were 25.70°C and 27.39°C, respectively, with small changes. The average supercooling and superheat degrees were 0.59°C and 0.49°C, respectively, with a maximum value of 1.10°C. The average value and standard deviation of the latent heat of fusion were 120.62 J/g and 1.90 J/g, respectively. The non-molten white solid sediment resulting from phase separation was identified by XRD as tachyhydrite (CaMg2Cl6·12H2O). Density measurements of the PCM after every 10 cycles suggested that the total mass of tachyhydrite was limited. In summary, the modified inorganic PCM CaCl2·6H2O-20wt% MgCl2·6H2O-1wt% SrCl2·6H2O-0.5wt% CMC maintained excellent cycling stability over 100 cycles, providing a reference for practical use.

  16. Verification of MCNP simulation of neutron flux parameters at TRIGA MK II reactor of Malaysia.

    PubMed

    Yavar, A R; Khalafi, H; Kasesaz, Y; Sarmani, S; Yahaya, R; Wood, A K; Khoo, K S

    2012-10-01

    A 3-D model of the 1 MW TRIGA Mark II research reactor was simulated. Neutron flux parameters were calculated using the MCNP-4C code and compared with experimental results obtained by k(0)-INAA and the absolute method. The average values of φ(th), φ(epi), and φ(fast) from the MCNP code were (2.19±0.03)×10(12) cm(-2)s(-1), (1.26±0.02)×10(11) cm(-2)s(-1) and (3.33±0.02)×10(10) cm(-2)s(-1), respectively. These average values were consistent with the experimental results obtained by k(0)-INAA. The findings show good agreement between the MCNP code results and the experimental results. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Comparison of the Effects of Walking with and without Nordic Pole on Upper Extremity and Lower Extremity Muscle Activation.

    PubMed

    Shim, Je-Myung; Kwon, Hae-Yeon; Kim, Ha-Roo; Kim, Bo-In; Jung, Ju-Hyeon

    2013-12-01

    [Purpose] The aim of this study was to assess the effect of Nordic pole walking on the electromyographic activities of upper extremity and lower extremity muscles. [Subjects and Methods] The subjects were randomly divided into two groups: a walking-without-Nordic-poles group (n=13) and a walking-with-Nordic-poles group (n=13). The EMG data were collected while the subjects walked on a treadmill for 30 minutes, by measuring from one heel strike to the next. [Results] Both the average and maximum values of upper extremity muscle activity were higher in the group that used Nordic poles than in the group that did not, and the differences were statistically significant. The average value for muscle activity of the latissimus dorsi increased, but the difference was not statistically significant, although there was a statistically significant increase in its maximum value. The average and maximum values for muscle activity of the lower extremity did not show large differences in either group, and the differences were not statistically significant. [Conclusion] The use of Nordic poles increased muscle activity of the upper extremity compared with regular walking but did not affect the lower extremity.

  18. Comparison of the Effects of Walking with and without Nordic Pole on Upper Extremity and Lower Extremity Muscle Activation

    PubMed Central

    Shim, Je-myung; Kwon, Hae-yeon; Kim, Ha-roo; Kim, Bo-in; Jung, Ju-hyeon

    2014-01-01

    [Purpose] The aim of this study was to assess the effect of Nordic pole walking on the electromyographic activities of upper extremity and lower extremity muscles. [Subjects and Methods] The subjects were randomly divided into two groups: a walking-without-Nordic-poles group (n=13) and a walking-with-Nordic-poles group (n=13). The EMG data were collected while the subjects walked on a treadmill for 30 minutes, by measuring from one heel strike to the next. [Results] Both the average and maximum values of upper extremity muscle activity were higher in the group that used Nordic poles than in the group that did not, and the differences were statistically significant. The average value for muscle activity of the latissimus dorsi increased, but the difference was not statistically significant, although there was a statistically significant increase in its maximum value. The average and maximum values for muscle activity of the lower extremity did not show large differences in either group, and the differences were not statistically significant. [Conclusion] The use of Nordic poles increased muscle activity of the upper extremity compared with regular walking but did not affect the lower extremity. PMID:24409018

  19. [Geographical distribution of the Serum creatinine reference values of healthy adults].

    PubMed

    Wei, De-Zhi; Ge, Miao; Wang, Cong-Xia; Lin, Qian-Yi; Li, Meng-Jiao; Li, Peng

    2016-11-20

    To explore the relationship between serum creatinine (Scr) reference values in healthy adults and geographic factors, and to provide evidence for establishing Scr reference values in different regions. We collected 29 697 Scr reference values from healthy adults measured by 347 medical facilities in 23 provinces, 4 municipalities and 5 autonomous regions. We chose 23 geographical factors, analyzed their correlation with the Scr reference values, and identified the factors correlated significantly with them. Two predictive models were constructed using principal component analysis and ridge regression analysis, and the optimal model was chosen by comparing how well the two models' predicted results fitted the measured results. The distribution map of Scr reference values was drawn using the Kriging interpolation method. Seven geographic factors, including latitude, annual sunshine duration, annual average temperature, annual average relative humidity, annual precipitation, annual temperature range and topsoil (silt) cation exchange capacity, were found to correlate significantly with the Scr reference values. The overall distribution of Scr reference values was high in the south and low in the north, varying consistently with latitude. The data for the geographic factors in a given region allow the prediction of Scr values in healthy adults. Analysis of these geographical factors can help determine region-specific reference values and improve the accuracy of clinical diagnoses.
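A ridge regression predictive model of the kind described above can be sketched with its closed-form solution; the synthetic data below merely stand in for the geographic factors and Scr values, and the penalty λ is a placeholder:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^(-1) X^T y.
    A column of ones is appended for the intercept (penalized here for
    simplicity; real implementations usually exclude the intercept)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    n = Xb.shape[1]
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(n), Xb.T @ y)

def ridge_predict(X, w):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w

# Synthetic stand-in: 3 "geographic factors" predicting an Scr-like value.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 70.0 + rng.normal(scale=0.1, size=50)
w = ridge_fit(X, y, lam=0.1)
```

With 23 candidate factors, the ridge penalty tames the collinearity among climatic variables that ordinary least squares would amplify, which is presumably why the study paired it with principal component analysis.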

  20. The retinal nerve fibre layer thickness in glaucomatous hydrophthalmic eyes assessed by scanning laser polarimetry with variable corneal compensation in comparison with age-matched healthy children.

    PubMed

    Hložánek, Martin; Ošmera, Jakub; Ležatková, Pavlína; Sedláčková, Petra; Filouš, Aleš

    2012-12-01

    To compare the thickness of the retinal nerve fibre layer (RNFL) in hydrophthalmic glaucomatous eyes in children with age-matched healthy controls using scanning laser polarimetry with variable corneal compensation (GDxVCC). Twenty hydrophthalmic eyes of 20 patients with a mean age of 10.64 ± 3.02 years being treated for congenital or infantile glaucoma were included in the analysis. RNFL thickness measured by GDxVCC was evaluated in standard Temporal-Superior-Nasal-Inferior-Temporal (TSNIT) parameters. The results were compared to TSNIT values of an age-matched control group of 120 healthy children published recently as referential values. The correlation between horizontal corneal diameter and RNFL thickness in hydrophthalmic eyes was also investigated. The mean ± SD values of TSNIT Average, Superior Average, Inferior Average and TSNIT SD in hydrophthalmic eyes were 52.3 ± 11.4, 59.7 ± 17.1, 62.0 ± 15.6 and 20.0 ± 7.8 μm, respectively. All these values were significantly lower compared to the referential TSNIT parameters of age-matched healthy eyes (p = 0.021, p = 0.001, p = 0.003 and p = 0.018, respectively). A substantial number of hydrophthalmic eyes lay below the level of 5% probability of normality in the respective TSNIT parameters: 30% of the eyes in TSNIT average, 50% in superior average, 30% in inferior average and 45% in TSNIT SD. No significant correlation between enlarged corneal diameter and RNFL thickness was found. The mean values of all standard TSNIT parameters assessed using GDxVCC in hydrophthalmic glaucomatous eyes in children were significantly lower in comparison with the referential values of healthy age-matched children. © 2011 The Authors. Acta Ophthalmologica © 2011 Acta Ophthalmologica Scandinavica Foundation.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, P; Wang, J; Zhong, H

    Purpose: To evaluate the reproducibility of radiomics features in repeated computed tomographic (CT) scans of rectal cancer, and to choose stable radiomics features for rectal cancer. Methods: 40 rectal cancer patients were enrolled in this study, each of whom underwent two CT scans an average of 8.7 days apart (5 to 17 days), before any treatment was delivered. The rectal gross tumor volume (GTV) was delineated and segmented by an experienced oncologist in both CTs. In total, more than 2000 radiomics features were defined in this study, divided into four groups (I: GLCM, II: GLRLM, III: Wavelet GLCM and IV: Wavelet GLRLM). For each group, five types of features were extracted (Max slice: features from the largest slice of the target images; Max value: features from all slices, taking the maximum value; Min value: the minimum feature value over all slices; Average value: the average feature value over all slices; Matrix sum: all slices of the target images are translated into GLCM and GLRLM matrices, all matrices are superposed, and features are extracted from the superposed matrix). A LoG (Laplacian of Gaussian) filter with different parameters was also applied to these images. Concordance correlation coefficients (CCC) and intra-class correlation coefficients (ICC) were calculated to assess reproducibility. Results: 403 radiomics features were extracted from each type of the patients' medical images. Features of the average type were the most reproducible, and the different filters had little effect on the radiomics features. For the average type features, 253 of 403 features (62.8%) showed high reproducibility (ICC ≥ 0.8), 133 of 403 features (33.0%) showed medium reproducibility (0.5 ≤ ICC < 0.8) and 17 of 403 features (4.2%) showed low reproducibility (ICC < 0.5). Conclusion: The average type radiomics features are the most stable features in rectal cancer. Further analysis of these features of rectal cancer can be warranted for treatment monitoring and prognosis prediction.
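Per-feature reproducibility between the two scans can be quantified with Lin's concordance correlation coefficient, one of the two agreement measures named above. A minimal sketch (the sample feature values are illustrative):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two repeated
    measurements of the same feature:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()  # population variances (ddof=0)
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Illustrative values of one feature across four patients, scan 1 vs scan 2.
scan1 = [0.82, 1.10, 0.95, 1.30]
scan2 = [0.80, 1.15, 0.93, 1.28]
ccc = concordance_ccc(scan1, scan2)
```

Unlike Pearson correlation, the CCC also penalizes a systematic offset between the scans, so perfectly correlated but shifted measurements score below 1; features could then be binned with cutoffs such as the 0.8 and 0.5 levels used in the abstract.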

  2. [Distribution of Urban Soil Heavy Metal and Pollution Evaluation in Different Functional Zones of Yinchuan City].

    PubMed

    Wang, You-qi; Bai, Yi-ru; Wang, Jian-yu

    2016-02-15

    Surface soil samples (0-20 cm) from eight different functional areas in Yinchuan city were collected, with 10 samples in each functional area. The pollution characteristics and sources of the urban soil heavy metals (Zn, Cd, Pb, Mn, Cu and Cr) in the eight functional areas were evaluated by mathematical statistics and geostatistical analysis methods, and the spatial distributions of the heavy metals were plotted with a geographic information system (GIS). The average values of total Zn, Cd, Pb, Mn, Cu and Cr were 74.87, 0.15, 29.02, 553.55, 40.37 and 80.79 mg x kg(-1), respectively. The average values of the soil heavy metals were higher than the soil background values of Ningxia, which indicates accumulation of the heavy metals in urban soil. The single-factor pollution index of the soil heavy metals followed the sequence Cu > Pb > Zn > Cr > Cd > Mn. The average values of total Zn, Cd, Pb and Cr were higher in the north east, south west and central city, while the average values of Mn and Cu were higher in the north east and central city. According to the Nemerow synthesis index, there was moderate pollution in the road and industrial areas of Yinchuan, while the other functional areas showed slight pollution. The pollution degree of the different functional areas was as follows: road > industrial area > business district > medical treatment area > residential area > public park > development zone > science and education area. The results indicate that soil heavy metal pollution in Yinchuan City has been affected by human activities accompanying economic development.
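The single-factor and Nemerow synthesis indices used above follow standard formulas; a minimal sketch, where the background value is illustrative rather than the actual Ningxia background value:

```python
import math

def single_factor_index(concentration, background):
    """Single-factor pollution index P_i = C_i / S_i, the ratio of the
    measured concentration to the regional background value."""
    return concentration / background

def nemerow_index(p_values):
    """Nemerow synthesis index combining the single-factor indices:
    P_N = sqrt((mean(P_i)^2 + max(P_i)^2) / 2)."""
    p_avg = sum(p_values) / len(p_values)
    p_max = max(p_values)
    return math.sqrt((p_avg ** 2 + p_max ** 2) / 2.0)

# Illustrative only: the abstract's average Cu value against an assumed
# (hypothetical) background of 20 mg/kg, then a toy index combination.
p_cu = single_factor_index(40.37, 20.0)
p_n = nemerow_index([1.0, 2.0])
```

By weighting the maximum single-factor index, the Nemerow index lets one strongly polluted metal dominate the rating of a functional area, which is the intended behaviour for screening.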

  3. Personalized Pseudophakic Model for Refractive Assessment

    PubMed Central

    Ribeiro, Filomena J.; Castanheira-Dinis, António; Dias, João M.

    2012-01-01

    Purpose: To test a pseudophakic eye model that allows for intraocular lens (IOL) power calculation, both in normal eyes and in extreme conditions, such as post-LASIK. Methods: Participants: The model's efficacy was tested in 54 participants (104 eyes) who underwent LASIK and were assessed before and after surgery, thus allowing the same method to be tested in the same eye after changing only the corneal topography. Modelling: The Liou-Brennan eye model was used as a starting point, and biometric values were replaced by individual measurements. Detailed corneal surface data were obtained from topography (Orbscan®), and a grid of elevation values was used to define corneal surfaces in an optical ray-tracing software (Zemax®). To determine IOL power, optimization criteria based on values of the modulation transfer function (MTF), weighted according to the contrast sensitivity function (CSF), were applied. Results: Pre-operative refractive assessment calculated by our eye model correlated very strongly with SRK/T (r = 0.959, p<0.001) with no difference of average values (16.9±2.9 vs 17.1±2.9 D, p>0.05). Comparison of post-operative refractive assessment obtained using our eye model with the average of currently used formulas showed a strong correlation (r = 0.778, p<0.001), with no difference of average values (21.5±1.7 vs 21.8±1.6 D, p>0.05). Conclusions: The results suggest that personalized pseudophakic eye models and ray-tracing allow the same methodology to be used regardless of previous LASIK, independent of population averages and commonly used regression correction factors, which represents a clinical advantage. PMID:23056450

  4. [Research on the mercury species in Jiaozhou Bay in spring].

    PubMed

    Xu, Liao-Qi; Liu, Ru-Hai; Wang, Jin-Yu; Tang, Ai-Kun; Wang, Shu

    2012-01-01

    In April 2010, seawater samples collected every twenty minutes in Jiaozhou Bay were separated and analyzed in situ and in the laboratory to study mercury speciation and its daily variation, and to further understand the fate and effect of mercury in the offshore environment. Results showed that the dissolved elemental mercury (DEM) concentration of the seawater ranged from 38.2 pg x L(-1) to 156 pg x L(-1), with an average value of 97.5 pg x L(-1). The highest and lowest values appeared at around 13:00 and 17:30, respectively, under the influence of tide and light intensity. The DEM concentration gradually declined with depth; DEM in the surface sea primarily derived from photoreduction of bivalent mercury. Dissolved mercury (DHg) concentrations ranged from 7.32 ng x L(-1) to 49.1 ng x L(-1) (average value 13.9 ng x L(-1)), and dissolved reactive mercury (RHg) from 4.39 ng x L(-1) to 19.3 ng x L(-1) (average value 7.94 ng x L(-1)). The maximum peaks of DHg and RHg both appeared around 13:00, due to polluted seawater carried by tidal movement at the lowest tide. The variation of RHg and DHg concentrations with depth was similar at different times. Under the influence of light and water temperature, the ratio of RHg to DHg was higher in the surface water. RHg accounted for 62% of DHg, so the mercury had relatively high activity and biological availability, and contributed to the formation of DEM. The methylmercury concentration was low, with an average value of 0.30 ng x L(-1), and some samples were below the detection limit.

  5. Simulation study of entropy production in the one-dimensional Vlasov system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Zongliang, E-mail: liangliang1223@gmail.com; Wang, Shaojie

    2016-07-15

    The coarse-grain averaged distribution function of the one-dimensional Vlasov system is obtained by numerical simulation. The entropy productions in the cases of a random field, linear Landau damping, and the bump-on-tail instability are computed with the coarse-grain averaged distribution function. The computed entropy production converges with increasing coarse-grain averaging length. When the distribution function differs only slightly from a Maxwellian distribution, the converged value agrees with the result computed using the definition of thermodynamic entropy. The choice of averaging length used to compute the coarse-grain averaged distribution function is discussed.
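Coarse-grain averaging and the associated entropy change can be illustrated on a toy phase-space grid; the simulation's actual Vlasov solver and averaging lengths are not reproduced here, only the averaging and entropy arithmetic:

```python
import numpy as np

def coarse_grain(f, bx, bv):
    """Block-average a distribution f(x, v) over cells of bx-by-bv
    fine-grid points: the coarse-grain average."""
    nx, nv = f.shape
    return f.reshape(nx // bx, bx, nv // bv, bv).mean(axis=(1, 3))

def entropy(f, dx, dv):
    """Boltzmann-type entropy S = -sum f ln f dx dv (0 ln 0 taken as 0)."""
    mask = f > 0
    return -np.sum(f[mask] * np.log(f[mask])) * dx * dv

# Toy distribution on an 8x8 phase-space grid with unit cell size.
rng = np.random.default_rng(1)
f = rng.random((8, 8)) + 0.1
s_fine = entropy(f, 1.0, 1.0)
fc = coarse_grain(f, 2, 2)               # coarse cells are 2x2 fine cells
s_coarse = entropy(fc, 2.0, 2.0)
# Coarse-graining can only increase this entropy (Jensen's inequality,
# since -f ln f is concave), which is the entropy production being measured.
```

The block average conserves the total phase-space density, so the entropy difference s_coarse - s_fine isolates the filamentation-driven production.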

  6. A new edge detection algorithm based on Canny idea

    NASA Astrophysics Data System (ADS)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. To overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and filtering based on Euclidean distance are applied to the image; second, the Frei-Chen operator is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local regions of the gradient amplitude to obtain a set of threshold values. The high threshold is set to half of the average of all the calculated thresholds, and the low threshold to half of the high threshold. Experimental results show that the new method effectively suppresses noise, preserves edge information, and improves edge detection accuracy.
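The threshold-selection rule (Otsu per local region, high threshold = half the average of the local thresholds, low = half of high) can be sketched as follows; the block size and histogram binning are our assumptions, not parameters given in the abstract:

```python
import numpy as np

def otsu_threshold(values, nbins=64):
    """Otsu's method: the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # weight of the lower class
    mu = np.cumsum(p * centers)       # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # empty classes contribute 0
    return centers[np.argmax(sigma_b)]

def canny_thresholds(grad_mag, block=32):
    """Otsu threshold per local block of the gradient-amplitude image;
    high = half the average of the block thresholds, low = half of high,
    following the rule described in the abstract."""
    h, w = grad_mag.shape
    ts = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            ts.append(otsu_threshold(grad_mag[i:i + block, j:j + block].ravel()))
    high = 0.5 * float(np.mean(ts))
    return high, 0.5 * high

# Toy bimodal gradient-amplitude image (values near 1 and near 9).
rng = np.random.default_rng(0)
grad = np.where(rng.random((64, 64)) < 0.5, 1.0, 9.0)
high, low = canny_thresholds(grad, block=32)
```

The resulting pair would then feed the usual Canny hysteresis step: pixels above `high` seed edges, and pixels above `low` extend them.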

  7. The reliability of the Bad Sobernheim Stress Questionnaire (BSSQbrace) in adolescents with scoliosis during brace treatment

    PubMed Central

    Botens-Helmus, Christine; Klein, Rolf; Stephan, Carola

    2006-01-01

    Background: A new instrument has been developed to assess the stress scoliosis patients experience whilst wearing their brace. The aim of this study was to test the reliability of this new instrument. Methods: Eight questions are provided focussing on this topic only, including two questions to test credibility. A maximum score of 24 can be achieved (from 0 for most stress to 24 for least stress). We have proposed a subdivision of the score values as follows: 0–8 (strong stress), 9–16 (medium stress) and 17–24 (little stress). 85 patients were invited to take part in this study and to complete the BSSQbrace questionnaire twice, once at the first presentation and again after a further three days. 62 patients with an average age of 14.5 years and an average Cobb angle of 40° returned fully completed questionnaires. Results: The average stress value was 12.5/24 at the first measurement and 12.4/24 at the second measurement. The ceiling value was 23 and the floor value 2. There was a correlation of 0.88 (intraclass correlation coefficient) between the values of the two measurements. Cronbach's alpha was 0.97. Conclusion: The BSSQbrace questionnaire is reliable, with good internal consistency and reproducibility. It can be used to measure the coping strategies a patient uses and the impairment a patient feels whilst wearing a brace. PMID:17176483

  8. Sediment Flux of Particulate Organic Phosphorus in the Open Black Sea

    NASA Astrophysics Data System (ADS)

    Parkhomenko, A. V.; Kukushkin, A. S.

    2018-03-01

    The interannual variation of the monthly average (weighted average) concentrations of particulate organic phosphorus (PPOM) in the photosynthetic layer, oxycline, redox zone, and H2S zone in the open Black Sea is estimated based on long-term observation data. The suspension sedimentation rates from the studied layers are assessed using model calculations and published data. The annual variation of the PPOM sediment fluxes from the photosynthetic layer, oxycline, redox zone, and upper H2S zone to the anaerobic zone of the sea, and the corresponding annual average values, are estimated for the first time. A regular decrease of the annual average PPOM flux with depth in the upper active layer is demonstrated. A correlation between the annual average values of the PPOM sediment flux from the photosynthetic layer and the ascending phosphate flux to this layer is shown, which suggests their balance in the open sea. The results are discussed in terms of the phosphorus biogeochemical cycle and the concept of new and regenerative primary production in the open Black Sea.

  9. Estimation of Aerosol Direct Radiative Effects Over the Mid-Latitude North Atlantic from Satellite and In Situ Measurements

    NASA Technical Reports Server (NTRS)

    Bergstrom, Robert W.; Russell, P. B.

    2000-01-01

We estimate solar radiative flux changes due to aerosols over the mid-latitude North Atlantic by combining optical depths from AVHRR measurements with aerosol properties from the recent TARFOX program. Results show that, over the ocean, the aerosol decreases the net radiative flux at the tropopause and therefore has a cooling effect. Cloud-free, 24-hour average flux changes range from -9 W/m² near the eastern US coast in summer to -1 W/m² in the mid-Atlantic during winter. Cloud-free North Atlantic regional averages range from -5.1 W/m² in summer to -1.7 W/m² in winter, with an annual average of -3.5 W/m². Cloud effects, estimated from ISCCP data, reduce the regional annual average to -0.8 W/m². All values are for the moderately absorbing TARFOX aerosol (ω(0.55 μm) = 0.9); values for a nonabsorbing aerosol are approx. 30% more negative. We compare our results to a variety of other calculations of aerosol radiative effects.

  10. On the use of hydroxyl radical kinetics to assess the number-average molecular weight of dissolved organic matter.

    PubMed

    Appiani, Elena; Page, Sarah E; McNeill, Kristopher

    2014-10-21

Dissolved organic matter (DOM) is involved in numerous environmental processes, and its molecular size is important in many of these, such as DOM bioavailability, DOM sorptive capacity, and the formation of disinfection byproducts during water treatment. The size and size distribution of the molecules composing DOM remain an open question. In this contribution, an indirect method to assess the average size of DOM is described, based on the quenching of hydroxyl radical (HO•) by DOM. HO• is often assumed to be relatively unselective, reacting with nearly all organic molecules with similar rate constants. Literature values for the HO• reaction with organic molecules were surveyed to assess this assumption and to determine a representative quenching rate constant (k_rep = 5.6 × 10⁹ M⁻¹ s⁻¹). This value was used to assess the average molecular weight of various humic and fulvic acid isolates as model DOM, using literature HO• quenching constants, k_C,DOM. The results obtained by this method were compared with previous estimates of average molecular weight. The average molecular weight (Mn) values obtained with this approach are lower than the Mn measured by other techniques such as size exclusion chromatography (SEC), vapor pressure osmometry (VPO), and field-flow fractionation (FFF). This suggests that DOM is an especially good quencher of HO•, reacting at rates close to the diffusion-controlled limit. It was further observed that humic acids generally react faster than fulvic acids. The high reactivity of humic acids toward HO• is in line with the antioxidant properties of DOM. The benefit of this method is that it provides a firm upper bound on the average molecular weight of DOM, based on the kinetic limits of the HO• reaction. The results indicate low average molecular weight values, which is most consistent with the recent understanding of DOM. A possible DOM size distribution is discussed to reconcile the small size of DOM with the large-molecule behavior observed in other studies.
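The kinetic bound described above amounts to a one-line calculation. A minimal sketch follows; the mass-normalized quenching constant and carbon fraction are hypothetical placeholders, and only k_rep = 5.6 × 10⁹ M⁻¹ s⁻¹ comes from the abstract:

```python
# Kinetic upper bound on the number-average molecular weight (Mn) of DOM:
# Mn <= k_rep / k_mass, where k_rep is a representative per-molecule HO*
# rate constant (M^-1 s^-1) and k_mass is a measured mass-normalized
# quenching constant for a DOM isolate (L mg-C^-1 s^-1). The k_mass and
# carbon-fraction values below are hypothetical, not from the paper.

K_REP = 5.6e9  # M^-1 s^-1, representative HO* rate constant (from the abstract)

def mn_upper_bound(k_mass_L_per_mgC_s: float, carbon_fraction: float = 0.5) -> float:
    """Upper bound on Mn in g/mol.

    k_mass is per mg of carbon; dividing by the carbon mass fraction
    converts it to per mg of DOM before forming the ratio.
    """
    k_per_mg_dom = k_mass_L_per_mgC_s / carbon_fraction  # L mg^-1 s^-1
    k_per_g_dom = k_per_mg_dom * 1e3                     # L g^-1 s^-1
    return K_REP / k_per_g_dom                           # g/mol

# Example with a hypothetical k_mass of 2.5e4 L mg-C^-1 s^-1:
print(f"Mn upper bound: {mn_upper_bound(2.5e4):.0f} g/mol")
```

A faster-quenching isolate (larger k_mass) yields a smaller bound, which is why near-diffusion-limited quenching implies low Mn.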

  11. Reduction of Averaging Time for Evaluation of Human Exposure to Radiofrequency Electromagnetic Fields from Cellular Base Stations

    NASA Astrophysics Data System (ADS)

    Kim, Byung Chan; Park, Seong-Ook

In order to determine exposure compliance with the electromagnetic fields from a base station's antenna in the far-field region, we should calculate the spatially averaged field value in a defined space. This value is calculated from the measured values obtained at several points within the restricted space. According to the ICNIRP guidelines, at each point in the space, the reference levels are averaged over any 6 min (from 100 kHz to 10 GHz) for the general public. Therefore, the more points we use, the longer the measurement time becomes. For practical application, it is very advantageous to spend less time on measurement. In this paper, we analyzed the difference between average values over 6 min and over shorter periods and compared it with the standard uncertainty for measurement drift. Based on the standard deviation from the 6-min average value, the proposed minimum averaging time is 1 min.
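The comparison described above can be sketched as follows; the 1 Hz sampling rate, noise level, and field magnitude are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Sketch: compare a 6-min average field value with a shorter-window
# average, as in the study. Samples are synthetic (one per second);
# the 5% fluctuation level is an assumption for illustration.
rng = np.random.default_rng(0)
field = 1.0 + 0.05 * rng.standard_normal(360)  # 6 min of 1 Hz samples, V/m

avg_6min = field.mean()        # reference value per the ICNIRP guidelines
avg_1min = field[:60].mean()   # candidate shorter averaging window

# Deviation of the short-window average from the 6-min reference;
# the study compares this against the measurement-drift uncertainty.
deviation = abs(avg_1min - avg_6min)
print(f"6-min avg = {avg_6min:.4f}, 1-min avg = {avg_1min:.4f}, "
      f"deviation = {deviation:.4f} V/m")
```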

  12. Modulation of thyroid hormone receptors by non-thyroidal stimuli

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ErkenBrack, D.E.; Clemons, G.K.

    1988-01-01

The ability of non-thyroidal stimuli to affect the binding affinity and capacity of solubilized nuclear receptors for thyroid hormones was studied in a normal homeostatic system (erythropoiesis) and a pathobiologic one (lung-ozone interaction). No significant effects on affinity were found, as Kd control values for receptors derived from rat bone marrow averaged 57 (±28) pM while experimental (hypoxic) values averaged 89 (±55) pM. Kd control values in rat lung were found to average 142 (±22) pM while average values derived from experimental protocols with ozone and methimazole were 267 (±44) pM and 161 (±35) pM, respectively. Finally, Kd control values for receptors derived from cultured MEL cells averaged 19 (±2.6) pM while experimental values during exposure to DMSO or IGF1 were 23 (±3.6) pM and 26 (±11) pM respectively. In contrast, binding capacity (expressed as fmoles of hormone bound per unit protein of solubilized receptor) was markedly perturbed in several tissues by various agents: ozone effects on lung were shown by an average control value of 3.3 (±0.4) as opposed to an experimental average of 28 (±1.9); and hypoxia effects on erythroid tissue were displayed by an average control value of 0.7 (±0.07) as opposed to the experimental figure of 1.8 (±0.03). In cultured MEL cells, binding capacity was seen to be increased from control values of 388 (±15) sites/cell to 1243 (±142) sites/cell after DMSO exposure and 2002 (±10) sites/cell after IGF1 exposure. Parallel experiments done with receptors derived from rat liver yielded values similar to those reported by other investigators and were unaffected by the experimental agents.

  13. Average M shell fluorescence yields for elements with 70≤Z≤92

    NASA Astrophysics Data System (ADS)

    Kahoul, A.; Deghfel, B.; Aylikci, V.; Aylikci, N. K.; Nekkab, M.

    2015-03-01

Theoretical, experimental and analytical methods for the calculation of the average M-shell fluorescence yield (ω̄M) of different elements are very important because of the large number of their applications in various areas of physical chemistry and medical research. In this paper, the bulk of the average M-shell fluorescence yield measurements reported in the literature, covering the period 1955 to 2005, are interpolated using an analytical function to deduce the empirical average M-shell fluorescence yield in the atomic number range 70 ≤ Z ≤ 92. The results were compared with the theoretical and fitted values reported by other authors. Reasonable agreement was typically obtained between our results and those of other works.
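The interpolation idea can be sketched as follows. The yield data points are invented placeholders, the quadratic fit is an arbitrary choice, and the (ω/(1-ω))^(1/4) linearizing transform is a convention commonly used in fluorescence-yield fits, not necessarily the paper's exact analytical function:

```python
import numpy as np

# Sketch: fit a low-order polynomial in Z to (a transform of) measured
# average M-shell fluorescence yields, then use the fit as an empirical
# yield for any Z in range. Data points below are invented placeholders.
Z = np.array([70, 74, 78, 82, 86, 90, 92])
omega = np.array([0.010, 0.015, 0.021, 0.028, 0.037, 0.047, 0.053])

# Linearizing transform often used for fluorescence yields:
y = (omega / (1.0 - omega)) ** 0.25
coeffs = np.polyfit(Z, y, 2)          # empirical fit y(Z)

def omega_empirical(z: float) -> float:
    """Empirical average M-shell yield from the fitted function."""
    t = np.polyval(coeffs, z) ** 4    # undo the transform
    return t / (1.0 + t)

print(f"empirical average M-shell yield at Z=80: {omega_empirical(80):.4f}")
```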

  14. Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y

Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates, which include the object filled uniformly with water (the conventional initial estimate), uniformly with bone, uniformly with the average μ-value (the IAM magnitude initialization method), and with the perfect spatial μ-distribution but a wrong magnitude (initialization in terms of distribution). A 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR-derived μ-maps was also evaluated using computed tomography μ-maps as the gold standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases.
Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce a quantitative μ-map/emission estimate when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR-derived μ-map was accurate to within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low-count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using an MR-derived μ-map with corrections, as demonstrated in this work.

  15. Optimization of ultrasonic emulsification conditions for the production of orange peel essential oil nanoemulsions.

    PubMed

    Hashtjin, Adel Mirmajidi; Abbasi, Soleiman

    2015-05-01

The aim of the present study was to investigate the influence of emulsifying conditions on some physical and rheological properties of orange peel essential oil (OPEO) in water nanoemulsions. In this regard, using the response surface methodology, the influence of ultrasonication conditions, including sonication amplitude (70–100%), sonication time (90–150 s) and process temperature (5–45 °C), on the mean droplet diameter (Z-average value), polydispersity index (PDI), and viscosity of the OPEO nanoemulsions was evaluated. In addition, the flow behavior and stability of selected nanoemulsions were evaluated during storage (up to 3 months) at different temperatures (5, 25 and 45 °C). Based on the results of the optimization, the optimum conditions for producing OPEO nanoemulsions (Z-average value 18.16 nm) were determined as 94% (sonication amplitude), 138 s (sonication time) and 37 °C (process temperature). Moreover, analysis of variance (ANOVA) showed high coefficients of determination (R² > 0.95) for the response surface models of the energy input and Z-average. In addition, the flow behavior of the produced nanoemulsions was Newtonian, and the effect of time and storage temperature as well as their interaction on the Z-average value was highly significant (P < 0.0001).

  16. A Group Neighborhood Average Clock Synchronization Protocol for Wireless Sensor Networks

    PubMed Central

    Lin, Lin; Ma, Shiwei; Ma, Maode

    2014-01-01

Clock synchronization is a very important issue for applications of wireless sensor networks. The sensors need to keep a strict clock so that users can know exactly what happens in the monitoring area at the same time. This paper proposes a novel internal distributed clock synchronization solution using group neighborhood averaging. Each sensor node collects the offset and skew rate of its neighbors. Group averages of the offset and skew rate values are calculated instead of using the conventional point-to-point averaging method. The sensor node then returns the compensated values to the neighbors. The propagation delay is considered and compensated. An analytical analysis of offset and skew compensation is presented. Simulation results validate the effectiveness of the protocol and reveal that it allows sensor networks to quickly establish a consensus clock and maintain a small deviation from the consensus clock. PMID:25120163
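The group-averaging step might be sketched as below. The function shape, the uniform propagation-delay compensation, and all numbers are illustrative assumptions, not the paper's protocol specification:

```python
# Minimal sketch of the group-averaging idea: instead of pairwise
# averaging, a node collects offsets and skew rates from all neighbors,
# compensates the reported offsets for propagation delay, and forms
# group averages that include its own estimate.

def group_average_update(own_offset, own_skew, neighbors, prop_delay=0.0):
    """neighbors: list of (offset, skew_rate) pairs reported by neighbors.

    A single propagation delay is subtracted from each reported offset
    before averaging, standing in for the protocol's delay compensation.
    """
    offsets = [off - prop_delay for off, _ in neighbors] + [own_offset]
    skews = [sk for _, sk in neighbors] + [own_skew]
    group_offset = sum(offsets) / len(offsets)
    group_skew = sum(skews) / len(skews)
    return group_offset, group_skew

# Three neighbors plus the node itself move toward consensus values:
off, skew = group_average_update(0.8, 1.001,
                                 [(1.2, 0.999), (1.0, 1.000), (0.6, 1.002)],
                                 prop_delay=0.1)
print(off, skew)
```

Iterating this update across all nodes is what drives the network toward the consensus clock described in the abstract.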

  17. Underground and ground-level particulate matter concentrations in an Italian metro system

    NASA Astrophysics Data System (ADS)

    Cartenì, Armando; Cascetta, Furio; Campana, Stefano

    2015-01-01

All around the world, many studies and experimental results have found elevated concentrations of particulate matter (PM) in underground metro systems, with non-negligible implications for human health due to protracted exposure to fine particles. Starting from this consideration, an intensive particulate sampling campaign was carried out in January 2014, measuring PM concentrations in the Naples (Italy) Metro Line 1, both at station platforms and inside trains. Naples Metro Line 1 is about 18 km long, with 17 stations (3 ground-level and 14 below ground). Experimental results show that the average PM10 concentrations measured at the underground station platforms range between 172 and 262 μg/m³, whilst the average PM2.5 concentrations range between 45 and 60 μg/m³. By contrast, at ground-level stations no significant difference between station platforms and urban environment measurements was observed. Furthermore, a direct correlation between train passages and PM concentrations was observed, with increases of up to 42% above the average value. This correlation is possibly caused by the re-suspension of particles due to the turbulence induced by trains. The main original finding was the real-time estimation of PM levels inside trains travelling in both the ground-level and underground sections of Line 1. The results show that high concentrations of both PM10 (average values between 58 μg/m³ and 138 μg/m³) and PM2.5 (average values between 18 μg/m³ and 36 μg/m³) were also measured inside trains. Furthermore, measurements show that windows left open on trains caused an increase in PM concentrations inside trains in the underground section, while in the ground-level section the clean air entering the trains produced an environmental "washing effect". Finally, it was estimated that every passenger spends on average about 70 min per day exposed to high levels of PM.

  18. 49 CFR 537.9 - Determination of fuel economy values and average fuel economy.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 6 2014-10-01 2014-10-01 false Determination of fuel economy values and average fuel economy. 537.9 Section 537.9 Transportation Other Regulations Relating to Transportation... ECONOMY REPORTS § 537.9 Determination of fuel economy values and average fuel economy. (a) Vehicle...

  19. 49 CFR 537.9 - Determination of fuel economy values and average fuel economy.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 6 2012-10-01 2012-10-01 false Determination of fuel economy values and average fuel economy. 537.9 Section 537.9 Transportation Other Regulations Relating to Transportation... ECONOMY REPORTS § 537.9 Determination of fuel economy values and average fuel economy. (a) Vehicle...

  20. 49 CFR 537.9 - Determination of fuel economy values and average fuel economy.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 6 2013-10-01 2013-10-01 false Determination of fuel economy values and average fuel economy. 537.9 Section 537.9 Transportation Other Regulations Relating to Transportation... ECONOMY REPORTS § 537.9 Determination of fuel economy values and average fuel economy. (a) Vehicle...

  1. 49 CFR 537.9 - Determination of fuel economy values and average fuel economy.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 6 2011-10-01 2011-10-01 false Determination of fuel economy values and average fuel economy. 537.9 Section 537.9 Transportation Other Regulations Relating to Transportation... ECONOMY REPORTS § 537.9 Determination of fuel economy values and average fuel economy. (a) Vehicle...

  2. Land use change influences soil C, N, and P stoichiometry under ‘Grain-to-Green Program’ in China

    PubMed Central

    Fazhu, Zhao; Jiao, Sun; Chengjie, Ren; Di, Kang; Jian, Deng; Xinhui, Han; Gaihe, Yang; Yongzhong, Feng; Guangxin, Ren

    2015-01-01

Changes in land use might affect the combined C, N and P stoichiometry in soil. The Grain-to-Green Program (GTGP), which converts low-yield croplands or abandoned lands into forest, shrub, and/or grassland, was the largest land reforestation project in China. This study collected the reported C, N and P contents of soil in GTGP zones to identify the factors driving the changes in the C:N, C:P, and N:P values. The results showed that the annual average precipitation exerted significant effects on the C:P value, and its effect on the N:P value became significant 20 years after the change in land use. The annual average temperature was the main factor affecting the C:N value during the first 10 years, while the annual average precipitation strongly affected this value afterwards. In addition, "Redfield-like" interactions between C, N, and P in the soil may exist. A linear regression revealed significant positive correlations between the C:N, C:P, and N:P values and the restoration age, temperature, and precipitation after a change in land use. Therefore, large-scale changes in land use under the GTGP might significantly affect the C:N, C:P and N:P ratios in soil. PMID:25988714

  3. Rock Fracture Toughness Under Mode II Loading: A Theoretical Model Based on Local Strain Energy Density

    NASA Astrophysics Data System (ADS)

    Rashidi Moghaddam, M.; Ayatollahi, M. R.; Berto, F.

    2018-01-01

The values of mode II fracture toughness reported in the literature for several rocks are studied theoretically using a modified criterion based on the strain energy density averaged over a control volume around the crack tip. The modified criterion takes into account the effect of the T-stress in addition to the singular terms of the stresses/strains. The experimental results relate to mode II fracture tests performed on semicircular bend and Brazilian disk specimens. There is good agreement between the theoretical predictions of the generalized averaged strain energy density criterion and the experimental results. The theoretical results reveal that the value of mode II fracture toughness is affected by the size of the control volume around the crack tip and also by the magnitude and sign of the T-stress.

  4. Comparison of Cerebral Oximeter and Pulse Oximeter Values in the First 72 Hours in Premature, Asphyctic and Healthy Newborns.

    PubMed

    Kaya, A; Okur, M; Sal, E; Peker, E; Köstü, M; Tuncer, O; Kırımi, E

    2014-12-01

The monitoring of oxygenation is essential for providing patient safety and optimal outcomes. We aimed to determine brain oxygen saturation values in healthy, asphyctic and premature newborns and to compare cerebral oximeter and pulse oximeter values in the first 72 hours of life in neonatal intensive care units. This study was conducted at the neonatal intensive care unit (NICU) of Van Yüzüncü Yil University Research and Administration Hospital. Seventy-five neonatal infants were included in the study (28 asphyxia, 24 premature and 23 mature healthy infants as the control group). All infants were studied within the first 72 hours of life. We used a Somanetics 5100C cerebral oximeter (INVOS cerebral/somatic oximeter, Troy, MI, USA). The oxygen saturation information was collected by a Nellcor N-560 pulse oximeter (Nellcor-Puriton Bennet Inc, Pleasanton, CA, USA). In the asphyxia group, the cerebral oximeter average was 76.85 ± 14.1, the pulse oximeter average was 91.86 ± 5.9 and the heart rate average was 139.91 ± 22.3. In the premature group, the cerebral oximeter average was 79.08 ± 9.04, the pulse oximeter average was 92.01 ± 5.3 and the heart rate average was 135.35 ± 17.03. In the control group, the cerebral oximeter average was 77.56 ± 7.6, the pulse oximeter average was 92.82 ± 3.8 and the heart rate average was 127.04 ± 19.7. The cerebral oximeter is a promising modality for bedside monitoring in neonatal intensive care units. It is complementary to the pulse oximeter and may be used routinely in neonatal intensive care units.

  5. Urban 'Dry Island' in Moscow

    NASA Astrophysics Data System (ADS)

    Lokoshchenko, Mikhail A.

    2017-04-01

The urban 'dry island' (UDI) phenomenon over Moscow has been studied and analyzed for the period from the end of the 19th century to recent years using data from the ground meteorological network. The phenomenon consists in lower values of relative humidity in the city compared with the surrounding rural zone. Its causes are, firstly, the limited area of forest and the smaller number of other water vapor sources inside the city and, in addition, the indirect influence of the urban heat island (UHI), i.e. higher air temperature T inside the city. The mean annual water vapor pressure E does not show systematic changes in Moscow over the last 146 years. The linear regression coefficient K of its course is only 0.0015 hPa/year; thus, since 1870 the average water content in the ground air layer above Moscow has increased only slightly, by 0.2 hPa; such a small difference is negligible and statistically non-significant. Unlike this parameter, the mean annual relative humidity F shows a rapid and systematic (steady in time) fall at an average rate of K = -0.06 %/year over the last 146 years; in other words, it decreased from 81% in the 1870s to nearly 72% in recent years. Inside the city this is the result of the general increase in T due both to global warming and to the intensification of the Moscow UHI. Long-term changes of the spatial field of F in Moscow have been studied in detail for separate periods from the 1890s to recent years. As a result, the urban 'dry island' is found to be a real physical phenomenon closely connected with the UHI; the absolute value of its intensity, like that of the UHI, is increasing in time: from -4% at the end of the 19th century to -8 to -9% now. During the last two decades the UDI, like the UHI, has become much stronger in Moscow than before. For instance, averaged over the five years 2010 to 2014, the F value at the 'Balchug' station in the city centre (close to the Moscow Kremlin) is the lowest among all stations in the region, 68.0%; the mean F values in urban and rural areas, based on data from 5 urban and 13 rural stations for the same period, are 73.2 and 76.6% respectively. Hence the maximum intensity of the UDI, i.e. the difference between the values at the central urban station and the rural stations, is -8.6%, whereas the spatially averaged intensity, the difference between the average values of all urban and all rural stations, is -3.4%. Thus, the UDI in recent years is mapped by two isovapores: 70 and 75%. The difference between the values of E inside and outside the city is small: for example, averaged over the 7 years from 1991 to 1997 it was only 0.1 hPa, which is not statistically significant. Thus, unlike relative humidity, absolute humidity does not demonstrate stable-in-time local effects such as an urban island.

  6. RESPIRATION AND INTENSITY DEPENDENCE OF PHOTOSYNTHESIS IN CHLORELLA

    PubMed Central

    Brackett, Frederick S.; Olson, Rodney A.; Crickard, Robert G.

    1953-01-01

    1. Respiration changes as a result of illumination. 2. In the absence of glucose or other supply of substrate, respiration decays in the dark showing at least two types—a fast decay in a few minutes and a slow decay lasting hours. 3. Respiratory response to illumination is delayed. 4. Intermittent illumination (in the absence of glucose, etc.) produces a periodic variation in respiration with a delay or phase lag. 5. Periodic variation of respiration may produce a higher average value in the dark than in the light due to the lag and depending upon the period of intermittent illumination. 6. Based upon average respiration values our data confirm the Kok effect. 7. Interpolated values of respiration, however, result in photosynthetic rates which are linearly dependent upon intensity of illumination. 8. Thus the quantum efficiency is found to be independent of intensity, over the wide range of intensities investigated. PMID:13035068

  7. Biodiversity of free-living marine nematodes in the southern Yellow Sea, China

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoshou; Xu, Man; Hua, Er; Zhang, Zhinan

    2016-02-01

Biodiversity patterns of free-living marine nematodes were studied using specific, taxonomic and phylogenetic diversity measures in the southern Yellow Sea, China. The results showed that the average Shannon-Wiener diversity index (H′) in the study area was 3.17. Higher values were distributed in the eastern part of the Shandong coastal waters and the northern part of the Jiangsu coastal waters, while lower values were distributed in the southern Yellow Sea Cold Water Mass (YSCWM). The average taxonomic diversity (Δ) was 62.09 in the study region. Higher values were distributed in the transitional areas between the coastal areas and the southern YSCWM, while lower values were distributed near the northern part of the Jiangsu coastal waters and the YSCWM. Correlation analysis of species diversity and taxonomic diversity showed that the two kinds of diversity indices were partly independent, suggesting that combining them can reflect the ecological characteristics better. A test against the 95% probability funnels of average taxonomic distinctness and variation in taxonomic distinctness suggested that Station 8794 (in the YSCWM) fell outside the 95% probability funnels, which may be due to environmental stress. Correlation analysis between marine nematode biodiversity and environmental variables showed that the sediment characteristics (Mdø and silt-clay fraction) and phaeophorbide a (Pha-a) were the most important factors determining the biodiversity patterns of marine nematodes.
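The Shannon-Wiener index used above is straightforward to compute. A minimal sketch with invented abundance counts follows (the paper's reported study-area average is 3.17; the counts here are placeholders, not the paper's data):

```python
import math

# Shannon-Wiener diversity H' = -sum(p_i * ln p_i), computed from
# species abundance counts at a single (hypothetical) station.

def shannon_wiener(counts):
    """H' from a list of per-species individual counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

counts = [12, 8, 5, 5, 3, 2, 1]  # individuals per species (hypothetical)
print(f"H' = {shannon_wiener(counts):.2f}")
```

More species with more even abundances push H′ higher, which is why the coastal stations with richer, more even communities score above the Cold Water Mass stations.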

  8. [An investigation of ionizing radiation dose in a manufacturing enterprise of ion-absorbing type rare earth ore].

    PubMed

    Zhang, W F; Tang, S H; Tan, Q; Liu, Y M

    2016-08-20

Objective: To investigate radioactive source term dose monitoring and estimation results in a manufacturing enterprise processing ion-absorbing rare earth ore and the possible ionizing radiation dose received by its workers. Methods: Ionizing radiation monitoring data for posts in the control area and supervised area of the workplace were collected, and the annual average effective dose, directly estimated or estimated using formulas, was evaluated and analyzed. Results: In the control area and supervised area of the workplace for this rare earth ore, α surface contamination activity had a maximum value of 0.35 Bq/cm² and a minimum value of 0.01 Bq/cm²; β radioactive surface contamination activity had a maximum value of 18.8 Bq/cm² and a minimum value of 0.22 Bq/cm². Among 14 monitoring points in the workplace, the maximum annual average effective dose of occupational exposure was 1.641 mSv/a, which did not exceed the authorized limit for workers (5 mSv/a) but exceeded the authorized limit for the general public (0.25 mSv/a). The radionuclide specific activity of ionic mixed rare earth oxides was determined to be 0.9. Conclusion: The annual average effective dose of occupational exposure in this enterprise does not exceed the authorized limit for workers, but it exceeds the authorized limit for the general public. Attention should be paid to the key radiation-related steps of the process, especially with regard to public exposure.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stratz, S. Adam; Jones, Steven J.; Mullen, Austin D.

Newly established adsorption enthalpy and entropy values of 12 lanthanide hexafluoroacetylacetonates, denoted Ln[hfac]4, along with the experimental and theoretical methodology used to obtain these values, are presented for the first time. The results of this work can be used in conjunction with theoretical modeling techniques to optimize a large-scale gas-phase separation experiment using isothermal chromatography. The results to date indicate average adsorption enthalpy and entropy values of the 12 Ln[hfac]4 complexes ranging from -33 to -139 kJ/mol and -299 to -557 J/(mol·K), respectively.

  10. Real time detection of farm-level swine mycobacteriosis outbreak using time series modeling of the number of condemned intestines in abattoirs.

    PubMed

    Adachi, Yasumoto; Makita, Kohei

    2015-09-01

Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspection, and the veterinary authority is expected to inform the producer so that corrective actions can be taken when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis would therefore be a useful threshold for detecting an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for the 2 years between 2011 and 2012. For the modeling, periodicities were first checked using the Fast Fourier Transform, and the ensemble average profiles for the weekly periodicity were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of the minimum Akaike information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of wholly or partially condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from the three producers with the highest rates of condemnation due to mycobacteriosis.
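The expected-value construction (weekly ensemble average plus a model of the residual) can be sketched as follows. This uses synthetic counts, and a simple AR(1) least-squares fit stands in for the paper's AIC-selected ARIMA model:

```python
import numpy as np

# Sketch of the expected-value construction described in the abstract:
# a weekly ensemble-average profile captures the periodicity, a simple
# AR(1) model (a stand-in for the ARIMA fit) models the residual, and
# their sum is the time-dependent expected condemned-carcass count.
rng = np.random.default_rng(1)
weeks, days = 104, 7                                 # two years of daily counts
weekly_profile = np.array([5.0, 4.0, 3.5, 3.0, 3.5, 4.5, 1.0])  # hypothetical
counts = np.maximum(0, np.rint(
    np.tile(weekly_profile, weeks) + rng.normal(0, 1, weeks * days)))

# Weekly ensemble average: mean count for each day of the week.
ensemble = counts.reshape(weeks, days).mean(axis=0)

# AR(1) fit on the residual via least squares: r_t ~ phi * r_{t-1}.
resid = counts - np.tile(ensemble, weeks)
phi = np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])

def expected(day_index, prev_resid):
    """Expected count for a given day of the week, given yesterday's residual."""
    return ensemble[day_index % days] + phi * prev_resid

print(f"Monday expectation given yesterday's residual +2: "
      f"{expected(0, 2.0):.2f}")
```

An outbreak alarm would then compare the observed count with a confidence interval around this expectation, as in the study.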

  11. Variation and significance of surface heat after the mechanical sand control of Qinghai-Tibet Railway was covered with sandy sediments

    NASA Astrophysics Data System (ADS)

    Xie, Shengbo; Qu, Jianjun; Mu, Yanhu; Xu, Xiangtian

Mechanical control of drifting sand, used to protect the Qinghai-Tibet Railway from sand damage, inevitably results in sand deposition, and the change in radiation and heat flux after the ground surface is covered with sandy sediments has remained unclear. These variations were studied in this work through field observations along with laboratory analyses and tests. After the ground surface was covered with sandy sediments produced by the mechanical control of sand along the Qinghai-Tibet Railway, the reflectivity increased: the annual average reflectivity of the surface covered with sandy sediments was higher than that of the surface without sandy sediments, the value increasing by 0.043. Moreover, the surface shortwave radiation increased, whereas the surface net radiation decreased. The annual average surface shortwave radiant flux density on the sandy sediments was higher than that without sandy sediments, the value increasing by 7.291 W·m⁻². The annual average surface net radiant flux density on the sandy sediments decreased by 9.639 W·m⁻² compared with that without sandy sediments. The soil heat flux also decreased; the annual average heat flux in the sandy sediments decreased by 0.375 W·m⁻² compared with that without sandy sediments. These variations reduced the heat input from the surface of the sandy sediments to the underlying ground, which is beneficial for preventing permafrost degradation in the sand-control sections of the railway.

  12. Cloud-to-ground lightning activity in Colombia: A 14-year study using lightning location system data

    NASA Astrophysics Data System (ADS)

    Herrera, J.; Younes, C.; Porras, L.

    2018-05-01

    This paper presents an analysis of 14 years of cloud-to-ground lightning observations in Colombia using lightning location system (LLS) data. The first Colombian LLS operated from 1997 to 2001. After a few years, this system was upgraded, and a new LLS has been operating since 2007. Data obtained from these two systems were analyzed to obtain lightning parameters used in designing lightning protection systems. The flash detection efficiency was estimated using average peak current maps and previously published theoretical results. Lightning flash multiplicity was evaluated using a stroke grouping algorithm, resulting in average values of about 1.0 and 1.6 for positive and negative flashes respectively, for both LLS. The time variation of this parameter changes only slightly over the years considered in this study. The first stroke peak current for negative and positive flashes shows median values close to 29 kA and 17 kA respectively for both networks, with a strong dependence on the flash detection efficiency. On average, negative and positive flashes account for 74.04% and 25.95% of occurrences respectively. The daily variation shows a peak between 23 and 02 h. The monthly variation of this parameter exhibits a bimodal behavior typical of regions located near the Equator. The lightning flash density was obtained by dividing the study area into 3 × 3 km cells, resulting in maximum average values of 25 and 35 flashes km⁻² yr⁻¹ for each network respectively. A comparison of these results with global lightning activity hotspots shows good correlation. In addition, the lightning flash density shows an inverse relation with altitude.
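    The flash-density computation (counting flashes in 3 × 3 km cells and normalizing by cell area and observation period) reduces to a simple binning step; a minimal sketch, with hypothetical flash coordinates in km:

```python
from collections import Counter

def flash_density(flash_xy_km, cell_km=3.0, years=14):
    # Bin flash locations into cell_km x cell_km cells, then convert
    # counts to flashes per km^2 per year.
    cells = Counter((int(x // cell_km), int(y // cell_km)) for x, y in flash_xy_km)
    cell_area = cell_km * cell_km
    return {cell: n / (cell_area * years) for cell, n in cells.items()}
```

    The maximum over the returned dictionary gives the peak density the abstract reports per network.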

  13. An examination of elicitation method on fundamental frequency and repeatability of average airflow measures in children age 4:0-5:11 years.

    PubMed

    Brehm, Susan Baker; Weinrich, Barbara D; Sprouse, Dana C; May, Shelley K; Hughes, Michael R

    2012-11-01

    The purpose of this study was to determine the effect of task type on fundamental frequency (F(0)) and the short-term repeatability of average airflow values in preschool/kindergarten-age children. Prospective, experimental. Thirty healthy children (age 4.0-5.11 years) were included in this study. Participants completed three tasks (sustained vowel, counting, and storytelling) used to elicit measurements of F(0). With a 10-minute interval, participants also completed two trials of sustained /a/ at a comfortable pitch and loudness level for the measurement of average airflow rate. F(0) and intensity of the vowel production were recorded for both trials. A repeated measures analysis of variance revealed a significant main effect for task type elicitation on F(0) values (P=0.0003). A significant difference between elicitation tasks for F(0) was observed in the comparison of the counting and storytelling task (P<0.0001). A paired t test revealed no significant difference in average airflow rate across two trials (P=0.872). The change in F(0) and intensity was measured across the trials, and separate analyses of covariance revealed that these changes did not significantly influence average airflow values, (P=0.809) and (P=0.365), respectively. The results of this study demonstrated that F(0) may be influenced by task type in young children. Average airflow values appear to be stable over a short time period. This information is important in determining methods of evaluation and the reliability of instrumental measures in young children with voice disorders. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  14. Height, weight and body mass index values of mid-19th century New York legislative officers.

    PubMed

    Bodenhorn, Howard

    2010-07-01

    Previous studies of mid-19th century American heights and body mass index values have used potentially unrepresentative groups: students in military academies, prisoners, and African Americans. This paper uses an alternative source with heights and weights of ordinary people employed in a wide variety of occupations. The results reveal the operation of the antebellum paradox in that average heights declined between men born circa 1820 and those born circa 1840. Average weights also declined for adult males, suggesting a decline in mid-19th century nutritional status. 2010 Elsevier B.V. All rights reserved.

  15. Signal-averaged P wave in patients with paroxysmal atrial fibrillation.

    PubMed

    Rosenheck, S

    1997-10-01

    The theoretical and experimental rationale for the atrial signal-averaged ECG in patients with AF is delay in intra-atrial and interatrial conduction. Similar to the ventricular signal-averaged ECG, the atrial signal-averaged ECG averages a large number of consecutive P waves that match a previously created template. P wave triggering is preferred over QRS triggering because it aligns the waveforms more accurately. However, the small amplitude of the atrial ECG and its gradual rise from the isoelectric line can make the start point difficult to define when P wave triggering is used. Studies using P wave triggering and those using QRS triggering both demonstrate a prolonged P wave duration in patients with paroxysmal AF. The negative predictive value of this test is relatively high, at 60%-80%. The positive predictive value of the atrial signal-averaged ECG in predicting the risk of AF is considerably lower than the negative predictive value. All the data accumulated prospectively on the predictive value of P wave signal-averaging come from patients undergoing coronary bypass surgery or following MI; its value in other patients with paroxysmal AF has not yet been determined. The clinical role of frequency-domain analysis (alone or added to time-domain analysis) remains undefined. Because of this limited knowledge of the predictive value of P wave signal-averaging, the technique has not yet entered clinical practice, and further research is needed before the atrial signal-averaged ECG becomes part of clinical testing.

  16. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
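    A sketch of the LSS idea in formulas (reconstructed from the abstract; the exact functional and constraints follow the authors' paper and may differ in detail): instead of integrating an ill-conditioned initial value problem, one seeks the trajectory and time dilation closest to a reference trajectory $u_r$,

```latex
\min_{u,\,\tau}\ \frac{1}{T}\int_{0}^{T}
  \left[\,\lVert u(\tau(t)) - u_r(t)\rVert^{2}
  + \alpha^{2}\Big(\frac{d\tau}{dt}-1\Big)^{2}\right] dt
\quad \text{subject to} \quad \frac{du}{ds} = f(u, s),
```

    and the derivative of the long-time average with respect to the parameter $s$ is then obtained by linearizing this well-conditioned minimization.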

  17. Child-Langmuir flow with periodically varying anode voltage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rokhlenko, A.

    Using the Lagrangian technique, we study settled Child-Langmuir flows in one-dimensional planar diodes whose anode voltages vary periodically around given positive values. Our goal is to determine analytically whether the average currents in these systems can exceed the famous Child-Langmuir limit found long ago for the stationary current. The main result of our study is that in a periodic quasi-stationary regime the average current can exceed the Child-Langmuir maximum by as much as 50% compared with its adiabatic average value. The cathode current in this case takes the form of rectangular pulses produced by a very special triangular voltage modulation. This regime, i.e., the periodicity, shape of pulses, and their amplitude, needs to be carefully chosen for the best performance.

  18. The salivary glands of Ameiva ameiva (Teiidae, Lacertilia). A morphological, morphometric and histochemical study.

    PubMed

    Lopes, R A; Costa, J R; Piccolo, A M; Petenusci, S O

    1982-01-01

    The authors studied morphologically, morphometrically, and histochemically the mucosubstances and proteins in the salivary glands of the lizard Ameiva. Based on the results, the authors concluded: 1. The labial salivary gland is formed by small mucous and mucoserous glands; the sublingual gland by mucoserous cells. 2. Mucous cells show neutral and sulphated mucosubstances and sialic acid. Mucoserous cells of the labial gland show neutral mucosubstance, sialic acid, hyaluronic acid and protein radicals. Mucoserous cells of the sublingual gland show neutral mucosubstance, sialic acid and protein radicals. 3. The average values for acinar area were 1,198.11 μm² for mucoserous acini and 2,105.95 μm² for mucous acini of the labial salivary gland. The average values for nuclear volume were 47.41 μm³ for mucoserous cells and 38.97 μm³ for mucous cells. 4. The average values for acinar area and nuclear volume of the mucoserous cells of the sublingual gland were, respectively, 1,474.62 μm² and 67.77 μm³.

  19. CMEs, the Tail of the Solar Wind Magnetic Field Distribution, and 11-yr Cosmic Ray Modulation at 1 AU. Revised

    NASA Technical Reports Server (NTRS)

    Cliver, E. W.; Ling, A. G.; Richardson, I. G.

    2003-01-01

    Using a recent classification of the solar wind at 1 AU into its principal components (slow solar wind, high-speed streams, and coronal mass ejections (CMEs)) for 1972-2000, we show that the monthly-averaged galactic cosmic ray intensity is anti-correlated with the percentage of time that the Earth is embedded in CME flows. We suggest that this correlation results primarily from a CME-related change in the tail of the distribution function of hourly-averaged values of the solar wind magnetic field (B) between solar minimum and solar maximum. The number of high-B (≥ 10 nT) values increases by a factor of approx. 3 from minimum to maximum (from 5% of all hours to 17%), with about two-thirds of this increase due to CMEs. On an hour-to-hour basis, average changes of cosmic ray intensity at Earth become negative for solar wind magnetic field values ≥ 10 nT.

  20. Disinfection Byproducts in Drinking Water and Evaluation of Potential Health Risks of Long-Term Exposure in Nigeria.

    PubMed

    Benson, Nsikak U; Akintokun, Oyeronke A; Adedapo, Adebusayo E

    2017-01-01

    Levels of trihalomethanes (THMs) in drinking water from water treatment plants (WTPs) in Nigeria were studied using a gas chromatograph (GC Agilent 7890A with autosampler Agilent 7683B) equipped with an electron capture detector (ECD). The mean concentrations of the trihalomethanes ranged from zero in raw water samples to 950 μg/L in treated water samples. Average concentration values of THMs in primary and secondary disinfection samples exceeded the standard maximum contaminant levels. The average THMs concentrations followed the order TCM > BDCM > DBCM > TBM. EPA-developed models were adopted for the estimation of chronic daily intakes (CDI) and excess cancer incidence through the ingestion pathway. A higher average intake was observed in adults (4.52 × 10⁻² mg/kg-day), while the ingestion in children (3.99 × 10⁻² mg/kg-day) showed comparable values. The total lifetime cancer incidence rate was relatively higher in adults than in children, with median values 244 and 199 times the negligible risk level.
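    The chronic daily intake referred to above follows the standard EPA ingestion equation CDI = (C × IR × EF × ED) / (BW × AT); a minimal sketch with illustrative parameter values (not the study's):

```python
def chronic_daily_intake(c_mg_l, ir_l_day, ef_d_yr, ed_yr, bw_kg, at_days):
    # CDI (mg/kg-day) = (C * IR * EF * ED) / (BW * AT), the standard
    # EPA exposure equation for the ingestion pathway:
    # C = concentration, IR = intake rate, EF = exposure frequency,
    # ED = exposure duration, BW = body weight, AT = averaging time.
    return (c_mg_l * ir_l_day * ef_d_yr * ed_yr) / (bw_kg * at_days)

# Illustrative adult defaults: 2 L/day intake, 350 days/yr, 30 yr exposure,
# 70 kg body weight, averaging time equal to the exposure duration in days.
cdi = chronic_daily_intake(0.05, 2.0, 350, 30, 70.0, 30 * 365)
```

    Cancer risk is then obtained by multiplying the CDI by a compound-specific slope factor.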

  1. The fractional urinary fluoride excretion of adults consuming naturally and artificially fluoridated water and the influence of water hardness: a randomized trial.

    PubMed

    Villa, A; Cabezas, L; Anabalón, M; Rugg-Gunn, A

    2009-09-01

    To assess whether there was any significant difference in the average fractional urinary fluoride excretion (FUFE) values among adults consuming (NaF) fluoridated Ca-free water (reference water), naturally fluoridated hard water and an artificially (H2SiF6) fluoridated soft water. Sixty adult females (N=20 for each treatment) participated in this randomized, double-blind trial. The experimental design of this study provided an indirect estimation of the fluoride absorption in different types of water through the assessment of the fractional urinary fluoride excretion of volunteers. Average daily FUFE values (daily amount of fluoride excreted in urine/daily total fluoride intake) were not significantly different between the three treatments (Kruskal-Wallis; p = 0.62). The average 24-hour FUFE value (n=60) was 0.69; 95% C.I. 0.65-0.73. The results of this study suggest that the absorption of fluoride is not affected by water hardness.

  2. Average M shell fluorescence yields for elements with 70≤Z≤92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahoul, A., E-mail: ka-abdelhalim@yahoo.fr (LPMRN Laboratory, Department of Materials Science, Faculty of Sciences and Technology, Mohamed El Bachir El Ibrahimi University, Bordj-Bou-Arreridj 34030); Deghfel, B.

    2015-03-30

    Theoretical, experimental and analytical methods for the calculation of the average M-shell fluorescence yield (ω̄M) of different elements are important because of the large number of applications in various areas of physical chemistry and medical research. In this paper, the bulk of the average M-shell fluorescence yield measurements reported in the literature, covering the period 1955 to 2005, are interpolated using an analytical function to deduce empirical average M-shell fluorescence yields in the atomic range 70≤Z≤92. The results were compared with the theoretical and fitted values reported by other authors. Reasonable agreement was typically obtained between our results and other works.

  3. Using polarizable POSSIM force field and fuzzy-border continuum solvent model to calculate pK(a) shifts of protein residues.

    PubMed

    Sharma, Ity; Kaminski, George A

    2017-01-15

    Our Fuzzy-Border (FB) continuum solvent model has been extended and modified to produce hydration parameters for small molecules using the POlarizable Simulations Second-order Interaction Model (POSSIM) framework, with an average error of 0.136 kcal/mol. It was then used to compute pKa shifts for carboxylic and basic residues of the turkey ovomucoid third domain (OMTKY3) protein. The average unsigned errors in the acid and base pKa values were 0.37 and 0.4 pH units, respectively, versus 0.58 and 0.7 pH units as calculated with a previous version of the polarizable protein force field and the Poisson-Boltzmann continuum solvent. This POSSIM/FB result was produced with explicit refitting of the hydration parameters to the pKa values of the carboxylic and basic residues of the OMTKY3 protein; thus, the values of the acidity constants can be viewed as additional fitting target data. In addition to calculating pKa shifts for the OMTKY3 residues, we have studied aspartic acid residues of RNase Sa. This was done without any further refitting of the parameters, and agreement with the experimental pKa values is within an average unsigned error of 0.65 pH units. This result included the Asp79 residue, which is buried and thus has a high experimental pKa value of 7.37 units. The presented model is therefore capable of reproducing pKa results for residues in an environment that is significantly different from the solvated protein surface used in the fitting, and the POSSIM force field and the FB continuum solvent parameters have been demonstrated to be sufficiently robust and transferable. © 2016 Wiley Periodicals, Inc.

  4. Somatic cell counts in bulk milk and their importance for milk processing

    NASA Astrophysics Data System (ADS)

    Savić, N. R.; Mikulec, D. P.; Radovanović, R. S.

    2017-09-01

    Bulk tank milk somatic cell counts are an indicator of mammary gland health in dairy herds and may be regarded as an indirect measure of milk quality. Elevated somatic cell counts are correlated with changes in milk composition. The aim of this study was to assess the somatic cell counts that significantly affect the quality of milk and dairy products. We examined the somatic cell counts in bulk tank milk samples from 38 farms during a period of 6 months, from December to May of the next year. Flow cytometry (Fossomatic) was used for determination of somatic cell counts. In the same samples, the content of total proteins and lactose was determined by Milcoscan. Our results showed that the average values for bulk tank milk samples were 273,605/ml from morning milking and 292,895/ml from evening milking. The average values for total protein content from morning and evening milking were 3.31% and 3.34%, respectively. The average values for lactose content from morning and evening milking were 4.56% and 4.63%, respectively. The highest somatic cell count (516,000/ml) was detected in a bulk tank milk sample from evening milk in the winter, and the lowest lactose content was 4.46%. Our results showed that the obtained values for bulk tank milk somatic cell counts did not significantly affect the content of total proteins and lactose.

  5. Computational Fluid-Dynamic Analysis after Carotid Endarterectomy: Patch Graft versus Direct Suture Closure.

    PubMed

    Domanin, Maurizio; Buora, Adelaide; Scardulla, Francesco; Guerciotti, Bruno; Forzenigo, Laura; Biondetti, Pietro; Vergara, Christian

    2017-10-01

    Closure technique after carotid endarterectomy (CEA) remains an issue of debate. Routine use of patch graft (PG) has been advocated to reduce restenosis, stroke, and death, but its protective effect, particularly against late restenosis, is less evident, and recent studies call this thesis into question. This study aims to compare PG and direct suture (DS) by means of computational fluid dynamics (CFD). To identify carotid regions with flow recirculation more prone to restenosis development, we analyzed the time-averaged oscillatory shear index (OSI) and relative residence time (RRT), well-known indices correlated with plaque formation. CFD was performed in 12 patients (13 carotids) who underwent surgery for stenosis >70%, 9 with PG and 4 with DS. Flow conditions were modeled using patient-specific boundary conditions derived from Doppler ultrasound and geometries from magnetic resonance angiography. The mean value of the spatially averaged OSI was 0.07 for the PG group and 0.03 for the DS group; the percentage of area with OSI above a threshold of 0.2 was 10.1% and 3.7%, respectively. The mean of the spatially averaged RRT values was 4.4 1/Pa for the PG group and 1.6 1/Pa for the DS group; the percentage of area with RRT values above a threshold of 4 1/Pa was 22.5% and 6.5%, respectively. Both OSI and RRT values were higher when PG was preferred to DS, and the areas of disturbed flow were also wider. The highest absolute values computed by means of CFD were observed when PG was used indiscriminately regardless of carotid diameters. DS does not seem to create negative hemodynamic conditions with potential adverse effects on long-term outcomes, in particular when CEA is performed at the common carotid artery and/or the bulb or when the ICA diameter is greater than 5.0 mm. Copyright © 2017 Elsevier Inc. All rights reserved.
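    OSI and RRT are not defined in the abstract; using the definitions conventional in the hemodynamics literature (OSI = ½(1 − |⟨τ⟩| / ⟨|τ|⟩) and RRT = 1 / ((1 − 2·OSI)·TAWSS) = 1 / |⟨τ⟩|), a per-point computation over one cardiac cycle can be sketched as:

```python
def osi_and_rrt(wss_samples):
    # wss_samples: (tau_x, tau_y) wall shear stress vectors at one wall
    # point over one cardiac cycle, uniform time steps assumed.
    n = len(wss_samples)
    mean_x = sum(t[0] for t in wss_samples) / n
    mean_y = sum(t[1] for t in wss_samples) / n
    mag_of_mean = (mean_x ** 2 + mean_y ** 2) ** 0.5          # |<tau>|
    mean_of_mag = sum((tx ** 2 + ty ** 2) ** 0.5
                      for tx, ty in wss_samples) / n          # <|tau|>
    osi = 0.5 * (1.0 - mag_of_mean / mean_of_mag)  # 0 (steady) .. 0.5
    rrt = 1.0 / mag_of_mean                        # = 1 / ((1 - 2*OSI)*TAWSS)
    return osi, rrt
```

    High OSI marks oscillatory, direction-reversing shear; high RRT marks regions where particles linger near the wall, the regions the study associates with restenosis.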

  6. Exercise reduces depressive symptoms in adults with arthritis: Evidential value

    PubMed Central

    Kelley, George A; Kelley, Kristi S

    2016-01-01

    AIM To determine whether evidential value exists that exercise reduces depression in adults with arthritis and other rheumatic conditions. METHODS Utilizing data derived from a prior meta-analysis of 29 randomized controlled trials comprising 2449 participants (1470 exercise, 979 control) with fibromyalgia, osteoarthritis, rheumatoid arthritis or systemic lupus erythematosus, a new method, P-curve, was utilized to assess for evidential value as well as to rule out selective reporting of statistically significant results regarding exercise and depression in adults with arthritis and other rheumatic conditions. Using the method of Stouffer, Z-scores were calculated to examine selective-reporting bias. An alpha (P) value < 0.05 was deemed statistically significant. In addition, the average power of the tests included in P-curve, adjusted for publication bias, was calculated. RESULTS Fifteen of 29 studies (51.7%) with exercise and depression results were statistically significant (P < 0.05), while none of the results were statistically significant with respect to exercise increasing depression in adults with arthritis and other rheumatic conditions. Right-skew, dismissing selective reporting, was identified (Z = −5.28, P < 0.0001). In addition, the included studies did not lack evidential value (Z = 2.39, P = 0.99), nor did they lack evidential value and were P-hacked (Z = 5.28, P > 0.99). The relative frequencies of P-values were 66.7% at 0.01, 6.7% each at 0.02 and 0.03, 13.3% at 0.04 and 6.7% at 0.05. The average power of the tests included in P-curve, corrected for publication bias, was 69%. Diagnostic plot results revealed that the observed power estimate was a better fit than the alternatives. CONCLUSION Evidential value results provide additional support that exercise reduces depression in adults with arthritis and other rheumatic conditions. PMID:27489782
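    The Stouffer combination used in the analysis sums probit-transformed p-values; a generic sketch follows (the full P-curve procedure first recomputes pp-values from each reported test, which is omitted here):

```python
from statistics import NormalDist

def stouffer_z(p_values):
    # Stouffer's method for one-sided p-values:
    # Z = sum(Phi^-1(1 - p_i)) / sqrt(k).
    nd = NormalDist()
    zs = [nd.inv_cdf(1.0 - p) for p in p_values]
    return sum(zs) / len(zs) ** 0.5
```

    A strongly positive combined Z indicates right-skew of the p-value distribution, i.e., evidential value rather than selective reporting.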

  7. Disaggregating measurement uncertainty from population variability and Bayesian treatment of uncensored results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.

    2012-04-17

    In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, that their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a prior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
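    The disaggregation step rests on the identity that, for unbiased measurements with independent errors, the observed variance is the population variance plus the mean measurement variance. A minimal sketch of the resulting moment estimate (the paper's full method then builds per-individual Bayesian posteriors, not shown here):

```python
import statistics

def population_sd(results, uncertainties):
    # For unbiased measurements with independent errors:
    # Var(observed) = Var(population) + mean(measurement variance),
    # so the population SD is estimated by subtraction (clipped at zero,
    # which is the regime where the 239Pu case in the paper breaks down).
    var_obs = statistics.variance(results)
    mean_meas_var = statistics.mean(u * u for u in uncertainties)
    return max(0.0, var_obs - mean_meas_var) ** 0.5
```

    When the population variability is small relative to the measurement noise, the subtraction clips to zero and carries no information, mirroring the paper's 239Pu result.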

  8. Average capacity optimization in free-space optical communication system over atmospheric turbulence channels with pointing errors.

    PubMed

    Liu, Chao; Yao, Yong; Sun, Yun Xu; Xiao, Jun Jun; Zhao, Xin Hui

    2010-10-01

    A model is proposed to study the average capacity optimization in free-space optical (FSO) channels, accounting for effects of atmospheric turbulence and pointing errors. For a given transmitter laser power, it is shown that both transmitter beam divergence angle and beam waist can be tuned to maximize the average capacity. Meanwhile, their optimum values strongly depend on the jitter and operation wavelength. These results can be helpful for designing FSO communication systems.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamankaradeniz, R.; Horuz, I.

    In this study, the characteristics of a solar-assisted heat pump are investigated theoretically and experimentally for clear days during the seven months of the winter season in Istanbul, Turkey. A theoretical model was developed and a computer program was written on this basis. Characteristics such as daily average collector efficiency and solar radiation, monthly average heat transfer at the condenser, monthly average cooling capacity, the mean COP, and the mean COP for the total system were examined. The theoretical results were found to be in good agreement with the experimental values.

  10. Deformation of a plate with periodically changing parameters

    NASA Astrophysics Data System (ADS)

    Naumova, Natalia V.; Ivanov, Denis; Voloshinova, Tatiana

    2018-05-01

    Deformation of a reinforced square plate under external pressure is considered. An averaged fourth-order partial differential equation for the plate deflection w is obtained, and a new mathematical model of the plate is proposed. Asymptotic averaging and the finite element method (ANSYS) are used to obtain the values of normal deflections of the plate surface. A comparison of the numerical and asymptotic results is performed.

  11. Measurement of 89Y(n,2n) spectral averaged cross section in LR-0 special core reactor spectrum

    NASA Astrophysics Data System (ADS)

    Košťál, Michal; Losa, Evžen; Baroň, Petr; Šolc, Jaroslav; Švadlenková, Marie; Koleška, Michal; Mareček, Martin; Uhlíř, Jan

    2017-12-01

    The present paper describes a reaction rate measurement of 89Y(n,2n)88Y in the well-defined reactor spectrum of a special core assembled in the LR-0 reactor and compares this value with the results of simulation. The reaction rate is derived from the measurement of the activity of 88Y using gamma-ray spectrometry of an irradiated Y2O3 sample. The resulting cross section averaged over the special core spectrum is 43.9 ± 1.5 μb; averaged over the 235U fission spectrum it is 0.172 ± 0.006 mb. This cross section is important because it is used as a high-energy neutron monitor and is therefore included in the International Reactor Dosimetry and Fusion File. Calculations of reaction rates were performed with the MCNP6 code using the ENDF/B-VII.0, JEFF-3.1, JEFF-3.2, JENDL-3.3, JENDL-4, ROSFOND-2010, CENDL-3.1 and IRDFF nuclear data libraries. The agreement with the uranium description by the CIELO library is very good, while with the ENDF/B-VII.0 description of uranium an underprediction of about 10% on average is observed.

  12. Neutron Lifetime and Axial Coupling Connection

    NASA Astrophysics Data System (ADS)

    Czarnecki, Andrzej; Marciano, William J.; Sirlin, Alberto

    2018-05-01

    Experimental studies of neutron decay, n → p e ν̄, exhibit two anomalies. The first is an 8.6(2.1) s, roughly 4σ, difference between the average beam-measured neutron lifetime, τn(beam) = 888.0(2.0) s, and the more precise average trapped ultracold neutron determination, τn(trap) = 879.4(6) s. The second is a 5σ difference between the pre-2002 average axial coupling, gA, as measured in neutron decay asymmetries, gA(pre-2002) = 1.2637(21), and the more recent, post-2002, average gA(post-2002) = 1.2755(11), where, following the UCNA Collaboration division, experiments are classified by the date of their most recent result. In this Letter, we correlate those τn and gA values using a (slightly) updated relation τn(1 + 3gA²) = 5172.0(1.1) s. Consistency with that relation and better precision suggest τn(favored) = 879.4(6) s and gA(favored) = 1.2755(11) as preferred values for those parameters. Comparisons of gA(favored) with recent lattice QCD and muonic hydrogen capture results are made. A general constraint on exotic neutron decay branching ratios, < 0.27%, is discussed and applied to a recently proposed solution to the neutron lifetime puzzle.
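    The quoted relation makes the consistency check a one-line computation; a sketch confirming that the post-2002 axial coupling reproduces the trapped-neutron lifetime:

```python
def neutron_lifetime(g_a, const_s=5172.0):
    # Invert the relation tau_n * (1 + 3 * gA^2) = 5172.0 s for tau_n.
    return const_s / (1.0 + 3.0 * g_a * g_a)

# gA = 1.2755 gives tau_n close to 879.4 s, the trapped-UCN average.
tau = neutron_lifetime(1.2755)
```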

  13. Automatic user customization for improving the performance of a self-paced brain interface system.

    PubMed

    Fatourechi, Mehrdad; Bashashati, Ali; Birch, Gary E; Ward, Rabab K

    2006-12-01

    Customizing the parameter values of brain interface (BI) systems by a human expert has the advantage of being fast and computationally efficient. However, as the number of users and EEG channels grows, this process becomes increasingly time consuming and exhausting. Manual customization also introduces inaccuracies in the estimation of the parameter values. In this paper, the performance of a self-paced BI system whose design parameter values were automatically user customized using a genetic algorithm (GA) is studied. The GA automatically estimates the shapes of movement-related potentials (MRPs), whose features are then extracted to drive the BI. Offline analysis of the data of eight subjects revealed that automatic user customization improved the true positive (TP) rate of the system by an average of 6.68% over that whose customization was carried out by a human expert, i.e., by visually inspecting the MRP templates. On average, the best improvement in the TP rate (an average of 9.82%) was achieved for four individuals with spinal cord injury. In this case, the visual estimation of the parameter values of the MRP templates was very difficult because of the highly noisy nature of the EEG signals. For four able-bodied subjects, for which the MRP templates were less noisy, the automatic user customization led to an average improvement of 3.58% in the TP rate. The results also show that the inter-subject variability of the TP rate is also reduced compared to the case when user customization is carried out by a human expert. These findings provide some primary evidence that automatic user customization leads to beneficial results in the design of a self-paced BI for individuals with spinal cord injury.

  14. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. 
However, there was no predictive improvement noted over the results obtained from the explicit Reynolds stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides a mechanism to the end-user for reducing model-form errors.
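    The autocorrelation-based check on time-averaged statistics can be sketched numerically. The snippet below is an illustrative sketch, not the paper's implementation: it estimates the integral time scale of a fluctuating signal from its autocorrelation, which bounds how many statistically independent samples a finite time series contains. A synthetic AR(1) signal stands in for a resolved velocity trace.

```python
import numpy as np

def integral_time_scale(u, dt):
    """Integral time scale from the autocorrelation of a fluctuating signal,
    integrating up to the first zero crossing (a common convention)."""
    up = u - u.mean()                        # fluctuation about the mean
    acf = np.correlate(up, up, mode="full")[up.size - 1:]
    rho = acf / acf[0]                       # normalized, lags >= 0
    zero = int(np.argmax(rho < 0.0))         # first zero crossing (0 if none)
    if zero == 0:
        zero = rho.size
    return rho[:zero].sum() * dt

# Synthetic AR(1) signal whose decorrelation time is roughly dt/(1 - a)
rng = np.random.default_rng(0)
dt, n, a = 0.01, 5000, 0.95
u = np.empty(n)
u[0] = 0.0
for i in range(1, n):
    u[i] = a * u[i - 1] + rng.standard_normal()

T = integral_time_scale(u, dt)               # expect on the order of 0.2
n_independent = n * dt / (2.0 * T)           # effective sample count
```

    A record long compared to its integral time scale contains many independent samples, which is the practical criterion behind "acceptably accurate" time averages.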

  15. Parameter interdependence and uncertainty induced by lumping in a hydrologic model

    NASA Astrophysics Data System (ADS)

    Gallagher, Mark R.; Doherty, John

    2007-05-01

    Throughout the world, watershed modeling is undertaken using lumped parameter hydrologic models that represent real-world processes in an abstract manner, yet rely on algorithms that mirror real-world processes and on parameters named after real-world hydraulic properties. In most cases, values are assigned to the parameters of such models through calibration against flows at watershed outlets. One criterion by which the utility of the model and the success of the calibration process are judged is that realistic values are assigned to parameters through this process. This study employs regularization theory to examine the relationship between lumped parameters and the corresponding real-world hydraulic properties. It demonstrates that any kind of parameter lumping or averaging can induce a substantial amount of "structural noise," which devices such as Box-Cox transformation of flows and autoregressive moving average (ARMA) modeling of residuals are unlikely to render homoscedastic and uncorrelated. Furthermore, values estimated for lumped parameters are unlikely to represent average values of the hydraulic properties after which they are named, and are often contaminated to a greater or lesser degree by the values of hydraulic properties which they do not purport to represent at all. As a result, the question of how rigidly they should be bounded during the parameter estimation process remains open.

  16. The Economic Value of Career Counseling Services for College Students in South Korea

    ERIC Educational Resources Information Center

    Choi, Bo Young; Lee, Ji Hee; Kim, Areum; Kim, Boram; Cho, Daeyeon; Lee, Sang Min

    2013-01-01

    This study investigated college students' perception of the monetary value of career counseling services by using the contingent valuation method. The results of a multivariate survival analysis based on interviews with a convenience sample of 291 undergraduate students in South Korea indicate that, on average, participants' expressed willingness…

  17. Economic and Educational Correlates of TIMSS Results

    ERIC Educational Resources Information Center

    Mikk, Jaan

    2005-01-01

    The good knowledge of the correlates of educational achievement highlights the ways to the efficient use of economic and human capital in raising the efficiency of education. The present paper investigates the correlates and compares the values of the correlates for the Republic of Lithuania with the average international values. The data for the…

  18. Selected Hydrologic Applications of LANDSAT-2 Data: an Evaluation. [Snowmelt in the American River Basin and soil moisture studies at the Phoenix, Arizona Test Site and at Luverne, Minnesota

    NASA Technical Reports Server (NTRS)

    Wiesnet, D. R.; Mcginnis, D. F., Jr.; Matson, M. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. Estimates of soil moisture were obtained from visible, near-IR, gamma ray and microwave data. Attempts using GOES thermal-IR were unsuccessful due to its coarse resolution (8 km). Microwave data were the most effective for soil moisture estimation, with and without vegetative cover. Gamma rays provided only one value for the test site, produced by many data points obtained from overlapping 150-meter-diameter circles. Even though the resulting averaged value was near the averaged field moisture value, this method suffers from atmospheric contaminants, the need to fly at low altitudes, and the necessity of prior calibration of a given site. Visible and near-IR relationships are present for bare fields but appear to be limited to soil moisture levels between 5 and 20%. The densely vegetated alfalfa fields correlated with near-IR reflectance only; soil moisture values from wheat fields showed no relation to either visible or near-IR MSS data.

  19. Effect of Leg-to-Body Ratio on Body Shape Attractiveness.

    PubMed

    Kiire, Satoru

    2016-05-01

    Recent studies have examined various aspects of human physical attractiveness. Attractiveness is considered an evolved psychological mechanism acquired via natural selection because an attractive body reflects an individual's health and fertility. The length of the legs is an often-emphasized aspect of attractiveness and has been investigated using the leg-to-body ratio (LBR), which reflects nutritional status of the infant, health status, fecundity, and other factors that are predictive of physical fitness. However, previous studies of leg length and physical fitness have produced mixed results. The present study investigated the relationship between LBR, defined as the height to perineum divided by total height, and perceived attractiveness. Three-dimensional stimuli (11 male and 11 female) were constructed with various LBR features. Each stimulus was rated by 40 women and 40 men in Japan on a 7-point scale. The results showed that the values closest to the average LBRs were rated as the most attractive. Furthermore, by fitting a quadratic curve on the relationship between attractiveness and LBR, an inverted U-shaped curve with the peak located at the average LBR was observed. In addition, high LBR values were rated as more attractive in females, whereas the opposite was true for males. These results suggest that average LBR is indicative of good health and good reproductive potential, whereas more extreme values are avoided because they could be indicative of diseases and other maladaptive conditions.
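    The quadratic-fit procedure described in the abstract is straightforward to reproduce. The sketch below uses hypothetical 7-point-scale ratings (not the study's data) to show how the vertex of a fitted parabola locates the most attractive LBR:

```python
import numpy as np

# Hypothetical ratings peaking near an assumed average LBR of ~0.50
lbr = np.array([0.42, 0.44, 0.46, 0.48, 0.50, 0.52, 0.54, 0.56, 0.58])
rating = np.array([2.9, 3.6, 4.4, 4.9, 5.1, 4.8, 4.3, 3.5, 2.8])

# Fit an inverted U: rating ~ a*lbr^2 + b*lbr + c, then locate the vertex.
a, b, c = np.polyfit(lbr, rating, deg=2)
peak_lbr = -b / (2.0 * a)   # LBR rated most attractive under the fit
```

    A negative leading coefficient confirms the inverted-U shape, and the vertex gives the preferred ratio.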

  20. IQ and the values of nations.

    PubMed

    Kanazawa, Satoshi

    2009-07-01

    The origin of values and preferences is an unresolved theoretical question in behavioural and social sciences. The Savanna-IQ Interaction Hypothesis, derived from the Savanna Principle and a theory of the evolution of general intelligence, suggests that more intelligent individuals may be more likely to acquire and espouse evolutionarily novel values and preferences (such as liberalism and atheism and, for men, sexual exclusivity) than less intelligent individuals, but that general intelligence may have no effect on the acquisition and espousal of evolutionarily familiar values. Macro-level analyses show that nations with higher average intelligence are more liberal (have greater highest marginal individual tax rate and, as a result, lower income inequality), less religious (a smaller proportion of the population believes in God or considers themselves religious) and more monogamous. The average intelligence of a population appears to be the strongest predictor of its level of liberalism, atheism and monogamy.

  1. Abrupt shift in δ18O values at Medicine Lake volcano (California, USA)

    USGS Publications Warehouse

    Donnelly-Nolan, J. M.

    1998-01-01

    Oxygen-isotope analyses of lavas from Medicine Lake volcano (MLV), in the southern Cascade Range, indicate a significant change in δ18O in Holocene time. In the Pleistocene, basaltic lavas with <52% SiO2 averaged +5.9‰, intermediate lavas averaged +5.7‰, and silicic lavas (≥63.0% SiO2) averaged +5.6‰. No analyzed Pleistocene rhyolites or dacites have values greater than +6.3‰. In post-glacial time, basalts were similar at +5.7‰ to those erupted in the Pleistocene, but intermediate lavas average +6.8‰ and silicic lavas +7.4‰ with some values as high as +8.5‰. The results indicate a change in the magmatic system supplying the volcano. During the Pleistocene, silicic lavas resulted either from melting of low-18O crust or from fractionation combined with assimilation of very-low-18O crustal material such as hydrothermally altered rocks similar to those found in drill holes under the center of the volcano. By contrast, Holocene silicic lavas were produced by assimilation and/or wholesale melting of high-18O crustal material such as that represented by inclusions of granite in lavas on the upper flanks of MLV. This sudden shift in assimilant indicates a fundamental change in the magmatic system. Magmas are apparently ponding in the crust at a very different level than in Pleistocene time.

  2. Variability of radon and thoron equilibrium factors in indoor environment of Garhwal Himalaya.

    PubMed

    Prasad, Mukesh; Rawat, Mukesh; Dangwal, Anoop; Kandari, Tushar; Gusain, G S; Mishra, Rosaline; Ramola, R C

    2016-01-01

    The measurements of radon, thoron and their progeny concentrations have been carried out in the dwellings of Uttarkashi and Tehri districts of Garhwal Himalaya, India using LR-115 detector based pin-hole dosimeters and DRPS/DTPS techniques. The equilibrium factors for radon, thoron and their progeny were calculated by using the values measured with these techniques. The average values of the equilibrium factor between radon and its progeny have been found to be 0.44, 0.39, 0.39 and 0.28 for the rainy, autumn, winter and summer seasons, respectively. For thoron and its progeny, the average values of the equilibrium factor have been found to be 0.04, 0.04, 0.04 and 0.03 for the rainy, autumn, winter and summer seasons, respectively. The equilibrium factor between radon and its progeny has been found to be dependent on the seasonal changes. However, the equilibrium factor for thoron and its progeny has been found to be the same for the rainy, autumn and winter seasons but slightly different for the summer season. The annual average equilibrium factors for radon and thoron have been found to vary from 0.23 to 0.80 with an average of 0.42 and from 0.01 to 0.29 with an average of 0.07, respectively. The detailed discussion of the measurement techniques and the explanation of the results obtained are given in the paper. Copyright © 2015 Elsevier Ltd. All rights reserved.
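    The equilibrium factor itself is a simple ratio: the equilibrium-equivalent concentration (EEC) of the progeny divided by the gas concentration. A minimal sketch, with illustrative concentrations chosen to reproduce the reported rainy-season radon mean:

```python
# Equilibrium factor F = EEC / C: equilibrium-equivalent progeny concentration
# divided by the gas concentration (both in Bq/m^3). The concentrations below
# are illustrative, not measured values from the study.
def equilibrium_factor(eec_progeny, gas_concentration):
    return eec_progeny / gas_concentration

f_radon = equilibrium_factor(eec_progeny=44.0, gas_concentration=100.0)  # 0.44
```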

  3. Effect of γ-Al2O3/water nanofluid on the thermal performance of shell and coil heat exchanger with different coil torsions

    NASA Astrophysics Data System (ADS)

    Elshazly, K. M.; Sakr, R. Y.; Ali, R. K.; Salem, M. R.

    2017-06-01

    This work investigated experimentally the thermal performance of a shell and coil heat exchanger with different coil torsions (λ) for γ-Al2O3/water nanofluid flow. Five helically coiled tubes (HCTs) with 0.0442 ≤ λ ≤ 0.1348 were tested within the turbulent flow regime. The average size of the γ-Al2O3 particles is 40 nm, and the volume concentration (φ) is varied from 0 to 2%. Results showed that reducing coil torsion enhances the heat transfer rate and increases the HCT friction factor (fc). It is also noticed that the HCT average Nusselt number (Nut) and fc of nanofluids increase with increasing γ-Al2O3 volume concentration. The thermal performance index, TPI = (ht,nf/ht,bf)/(ΔPc,nf/ΔPc,bf), increases with increasing nanoparticle concentration, coil torsion, HCT-side inlet temperature and nanofluid flow rate. Over the studied range of HCT Reynolds number, the average value of TPI is 1.34 at φ = 0.5% and 2.24 at φ = 2%, while its average value is 1.64 at λ = 0.0442 and 2.01 at λ = 0.1348. One of the main contributions is to provide heat-equipment designers with Nut and fc correlations for practical configurations of shell and coil heat exchangers over a wide range of nanofluid concentrations.
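    The TPI definition quoted in the abstract can be computed directly. The numbers below are illustrative, not the study's measurements:

```python
# TPI = (h_t,nf / h_t,bf) / (dP_c,nf / dP_c,bf): the heat-transfer gain of
# the nanofluid over the base fluid divided by its pressure-drop penalty.
def thermal_performance_index(h_nf, h_bf, dp_nf, dp_bf):
    return (h_nf / h_bf) / (dp_nf / dp_bf)

# Illustrative values: a ~49% heat-transfer gain against a ~9.5%
# pressure-drop penalty yields TPI > 1, i.e. a net benefit.
tpi = thermal_performance_index(h_nf=5200.0, h_bf=3500.0,
                                dp_nf=1.15e4, dp_bf=1.05e4)
```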

  4. Exercise reduces depressive symptoms in adults with arthritis: Evidential value.

    PubMed

    Kelley, George A; Kelley, Kristi S

    2016-07-12

    To determine whether evidential value exists that exercise reduces depression in adults with arthritis and other rheumatic conditions. Utilizing data derived from a prior meta-analysis of 29 randomized controlled trials comprising 2449 participants (1470 exercise, 979 control) with fibromyalgia, osteoarthritis, rheumatoid arthritis or systemic lupus erythematosus, a new method, P-curve, was utilized to assess evidentiary worth as well as to dismiss the possibility of selective reporting of statistically significant results regarding exercise and depression in adults with arthritis and other rheumatic conditions. Using the method of Stouffer, Z-scores were calculated to examine selective-reporting bias. An alpha (P) value < 0.05 was deemed statistically significant. In addition, the average power of the tests included in P-curve, adjusted for publication bias, was calculated. Fifteen of 29 studies (51.7%) with exercise and depression results were statistically significant (P < 0.05), while none of the results were statistically significant with respect to exercise increasing depression in adults with arthritis and other rheumatic conditions. Right-skew to dismiss selective reporting was identified (Z = -5.28, P < 0.0001). In addition, the included studies did not lack evidential value (Z = 2.39, P = 0.99), nor did they lack evidential value and were P-hacked (Z = 5.28, P > 0.99). The relative frequencies of P-values were 66.7% at 0.01, 6.7% each at 0.02 and 0.03, 13.3% at 0.04 and 6.7% at 0.05. The average power of the tests included in P-curve, corrected for publication bias, was 69%. Diagnostic plot results revealed that the observed power estimate was a better fit than the alternatives. Evidential value results provide additional support that exercise reduces depression in adults with arthritis and other rheumatic conditions.
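    Stouffer's method, used here to examine selective-reporting bias, combines one-sided p-values into a single Z-score. A minimal sketch with hypothetical p-values (Python's `statistics.NormalDist` supplies the inverse normal CDF):

```python
from statistics import NormalDist

def stouffer_z(p_values):
    """Stouffer's method: map each one-sided p-value to a z-score and
    combine as Z = sum(z_i) / sqrt(k)."""
    nd = NormalDist()
    z = [nd.inv_cdf(1.0 - p) for p in p_values]
    return sum(z) / len(z) ** 0.5

# Hypothetical one-sided p-values from individually significant trials
p = [0.001, 0.004, 0.01, 0.02, 0.03, 0.04]
z_combined = stouffer_z(p)
p_combined = 1.0 - NormalDist().cdf(z_combined)   # combined one-sided p
```

    A large positive combined Z (tiny combined p) is the kind of aggregate evidence the abstract reports.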

  5. Determination of the Sampler Type and Rainfall Effect on the Deposition Fluxes of the Polychlorinated Biphenyls

    PubMed Central

    Birgül, Askin; Tasdemir, Yücel

    2012-01-01

    Atmospheric concentration and deposition samples were collected between June 2008 and June 2009 at an urban sampling site (Yavuzselim, Turkey). Eighty-three polychlorinated biphenyl (PCB) congeners were targeted in the collected samples. It was found that 90% of the total PCB concentration was in the gas phase. Deposition samples were collected by a wet-dry deposition sampler (WDDS) and a bulk deposition sampler (BDS). The average total deposition flux measured with the BDS in dry periods was 5500 ± 2400 pg/(m²·day); the average dry deposition flux measured by the WDDS in the same period was 6400 ± 3300 pg/(m²·day). The results indicated that the sampler type affected the measured flux values. Bulk deposition samples were also collected in rainy periods by using the BDS, and the average flux value was 8700 ± 3100 pg/(m²·day). The measured flux values were lower than the values reported for urban and industrial areas. Dry deposition velocities for the WDDS and BDS samples were calculated as 0.48 ± 0.35 cm/s and 0.13 ± 0.15 cm/s, respectively. PMID:22629199
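    A dry deposition velocity is obtained by dividing the measured flux by the airborne concentration, with care for units. The sketch below assumes an illustrative particle-phase concentration (the abstract reports fluxes and velocities, not concentrations), chosen to land near the WDDS value of 0.48 cm/s:

```python
# v_d = F / C with unit handling: flux in pg/(m^2*day), concentration in
# pg/m^3, result in cm/s. The concentration value is an assumption made
# purely for illustration.
def deposition_velocity_cm_s(flux_pg_m2_day, conc_pg_m3):
    v_m_per_day = flux_pg_m2_day / conc_pg_m3   # m/day
    return v_m_per_day * 100.0 / 86400.0        # m/day -> cm/s

v = deposition_velocity_cm_s(flux_pg_m2_day=6400.0, conc_pg_m3=15.4)
```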

  6. Temporal and Spatial Variation of Water Yield Modulus in the Yangtze River Basin in Recent 60 Years

    NASA Astrophysics Data System (ADS)

    Shi, Xiaoqing; Weng, Baisha; Qin, Tianling

    2018-01-01

    The Yangtze River Basin is the largest river basin in Asia and the third largest in the world; its gross water resources rank first among the river basins of the country, and it occupies an important position in the national water resources strategic layout. Under the influence of climate change and human activities, the water cycle has changed: the temporal and spatial distribution of precipitation in the basin is more uneven, and floods are frequent. In order to explore the water yield conditions in the Yangtze River Basin, we selected the Water Yield Modulus (WYM) as the evaluation index and analyzed the temporal and spatial evolution characteristics of the WYM in the basin using the climate tendency method and the M-K (Mann-Kendall) trend test. The results showed that the average WYM of the Yangtze River Basin in 1956-2015 is between 103,600 and 1,262,900 m³/km², with an average value of 562,300 m³/km², which is greater than the national average of 295,000 m³/km². The minimum value appeared in the northwestern part of the Tongtian River district, and the maximum value appeared in the northeast of the Dongting Lake district. The rate of change in 1956-2015 is between -0.68/a and 0.79/a; the WYM showed a non-significant downward trend in the western part, while the upward trend in the eastern part reached the α = 0.01 significance level. The minimum value appeared in the Tongtian River district, the largest value appeared in the Hangjia Lake district, and the average tendency rate over the whole basin is 0.04/a.
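    The Mann-Kendall trend test used above is distribution-free and easy to sketch. The version below omits the tie correction and applies the usual normal approximation; the annual series is synthetic:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns the S statistic
    and the normal-approximation Z. Z > 0 suggests an upward trend."""
    n = len(x)
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A rising synthetic annual series should give a clearly positive Z.
series = [10 + 0.5 * t + (-1) ** t for t in range(30)]
s_stat, z_stat = mann_kendall(series)
```

    A |Z| above 2.576 corresponds to the α = 0.01 significance level cited for the eastern part of the basin.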

  7. Comparative analyses of bicyclists and motorcyclists in vehicle collisions focusing on head impact responses.

    PubMed

    Wang, Xinghua; Peng, Yong; Yi, Shengen

    2017-11-01

    To investigate the differences in head impact responses between bicyclists and motorcyclists in vehicle collisions. A series of vehicle-bicycle and vehicle-motorcycle lateral impact simulations for four vehicle types at seven vehicle speeds (30, 35, 40, 45, 50, 55 and 60 km/h) and three two-wheeler moving speeds (5, 7.5 and 10 km/h for the bicycle; 10, 12.5 and 15 km/h for the motorcycle) was established based on PC-Crash software. To explore the differences more comprehensively, additional impact scenarios with other initial conditions, such as impact angle (0, π/3, 2π/3 and π) and impact position (left, middle and right part of the vehicle front end), were also added. Extensive comparisons were then made with regard to average head peak linear acceleration, average head impact speed, average head peak angular acceleration, average head peak angular speed and head injury severity. The results showed prominent differences in kinematics and body postures between bicyclists and motorcyclists even under the same impact conditions. The variation of bicyclist head impact responses with changing impact conditions differed markedly from that of motorcyclists. The average head peak linear acceleration, average head impact speed and average head peak angular acceleration values were higher for motorcyclists than for bicyclists in most cases, while bicyclists received greater average head peak angular speed values. The head injuries of motorcyclists also worsened faster with increased vehicle speed. The results may provide a deeper understanding of two-wheeler safety and help reduce the public health burden of road traffic accidents.

  8. Quantized Average Consensus on Gossip Digraphs with Reduced Computation

    NASA Astrophysics Data System (ADS)

    Cai, Kai; Ishii, Hideaki

    The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit surplus by requiring only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special structure on the network called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.
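    The surplus idea can be illustrated with a toy simulation. The update rule below is a simplified stand-in, not the authors' exact algorithm: each node keeps an integer state plus an integer surplus, communication is strictly one-directional along a digraph edge, and the invariant that matters, the sum of all states and surpluses, is preserved even though the state sum alone is not:

```python
import random

def gossip_step(x, s, i, j):
    """Directed communication i -> j: j absorbs i's surplus, then trades one
    unit between its state and the surplus pool, nudging x[j] toward x[i].
    The global quantity sum(x) + sum(s) is left unchanged."""
    pool = s[j] + s[i]
    s[i] = 0
    if x[i] > x[j] and pool > 0:
        x[j] += 1
        pool -= 1
    elif x[i] < x[j]:
        x[j] -= 1
        pool += 1
    s[j] = pool

random.seed(1)
n = 4
x = [0, 0, 0, 8]                 # integer states, average = 2
s = [0] * n                      # surplus bookkeeping variables
edges = [(k, (k + 1) % n) for k in range(n)]   # strongly connected cycle
invariant = sum(x) + sum(s)
for _ in range(500):
    gossip_step(x, s, *random.choice(edges))
```

    States stay integers throughout, and since an increment requires a strictly larger in-neighbor (and a decrement a strictly smaller one), the max-min spread never grows; the conserved total is what makes the quantized average recoverable.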

  9. Numerical and experimental research on pentagonal cross-section of the averaging Pitot tube

    NASA Astrophysics Data System (ADS)

    Zhang, Jili; Li, Wei; Liang, Ruobing; Zhao, Tianyi; Liu, Yacheng; Liu, Mingsheng

    2017-07-01

    Averaging Pitot tubes have been widely used in many fields because of their simple structure and stable performance. This paper introduces a new shape of the cross-section of an averaging Pitot tube. Firstly, the structure of the averaging Pitot tube and the distribution of pressure taps are given. Then, a mathematical model of the airflow around it is formulated. After that, a series of numerical simulations are carried out to optimize the geometry of the tube. The distribution of the streamline and pressures around the tube are given. To test its performance, a test platform was constructed in accordance with the relevant national standards and is described in this paper. Curves are provided, linking the values of flow coefficient with the values of Reynolds number. With a maximum deviation of only  ±3%, the results of the flow coefficient obtained from the numerical simulations were in agreement with those obtained from experimental methods. The proposed tube has a stable flow coefficient and favorable metrological characteristics.

  10. Measurement of 226Ra, 232Th, 137Cs and 40K activities of Wheat and Corn Products in Ilam Province – Iran and Resultant Annual Ingestion Radiation Dose

    PubMed Central

    CHANGIZI, Vahid; SHAFIEI, Elham; ZAREH, Mohammad Reza

    2013-01-01

    Background: Natural background radiation is the main source of human exposure to radioactive material, and soils naturally contain radioactive minerals. The aim of this study is to determine natural (238U, 232Th, 40K) and artificial (137Cs) radioactivity levels in wheat and corn fields of Ilam province. Methods: An HPGe detector was used to measure the activity concentrations of the 238U and 232Th series, 40K and 137Cs in wheat and corn samples taken from different regions of Ilam province, Iran. Results: In wheat and corn samples, the average activity concentrations of 226Ra, 232Th, 40K and 137Cs were found to be 1.67, 0.5, 91.73 and 0.01 Bq/kg and 0.81, 0.85, 101.52 and 0.07 Bq/kg (dry weight), respectively. Hex and Hin in the present work are lower than 1: the average value of Hex was found to be 0.02 for wheat field samples and 0.025 for corn samples, and the average value of Hin to be 0.025 and 0.027, respectively. The obtained AGDE values are 30.49 mSv/yr for wheat field samples and 37.89 mSv/yr for corn samples; the average AEDE rate values are 5.28 mSv/yr for wheat field samples and 6.13 mSv/yr for corn samples. Transfer factors (TFs) of long-lived radionuclides such as 137Cs, 226Ra, 232Th and 40K from soils to corn and wheat plants have been studied by radiotracer experiments. Conclusion: The natural radioactivity levels in Ilam province are not in the range of high morbidity risk and are within international standards. PMID:26056646
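    The external hazard index reported above is conventionally computed as Hex = CRa/370 + CTh/259 + CK/4810 with activities in Bq/kg (the standard UNSCEAR-style formula; that the authors used exactly these coefficients is an assumption). A sketch using the reported wheat-sample averages:

```python
# Standard external hazard index (assumed formula, UNSCEAR-style):
#   H_ex = C_Ra/370 + C_Th/259 + C_K/4810, activities in Bq/kg.
# Values <= 1 are considered radiologically acceptable.
def h_ex(c_ra, c_th, c_k):
    return c_ra / 370.0 + c_th / 259.0 + c_k / 4810.0

h_wheat = h_ex(c_ra=1.67, c_th=0.5, c_k=91.73)   # ~0.026, well below 1
```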

  11. Climatic signals registered as Carbon isotopic values in Metasequoia leaf tissues: A statistical analysis

    NASA Astrophysics Data System (ADS)

    Yang, H.; Blais, B.; Perez, G.; Pagani, M.

    2006-12-01

    To examine climatic signals registered as carbon isotopic values in leaf tissues of C3 plants, we collected mature sun and shade leaves from Metasequoia trees germinated from the 1947 batch of seeds from China and planted along a latitudinal gradient of the United States. Samples from 40 individual trees, along with fossilized material from the early Tertiary of the Canadian Arctic, were analyzed for C concentration and isotopic values using EA-IRMS after the removal of free lipids. The generated datasets were then merged with climate data compiled for each tree site, recorded as average values over the past thirty years (1971-2002, NOAA database). When the isotope data were cross-plotted against each geographic and climatic indicator (latitude, Mean Annual Temperature (MAT), Average Summer Mean Temperature (ASMT, June-August), Mean Annual Precipitation (MAP), and Average Summer Mean Precipitation (ASMP)), correlation patterns were revealed. The best-correlating trend was obtained between the temperature parameters and the C isotopic values, and this correlation is stronger in the northern leaf samples than in the southern samples. We discovered a strong positive correlation between latitude and the offset of C isotopic values between shade and sun leaves. This investigation represents a comprehensive examination of climatic signals registered as C isotopic values in a single species with a single genetic source. The results bear implications for paleoclimatic interpretations of C isotopic signals obtained from fossil plant tissues.

  12. The Choice of Spatial Interpolation Method Affects Research Conclusions

    NASA Astrophysics Data System (ADS)

    Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.

    2017-12-01

    Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of these studies have adopted interpolation procedures, including kriging, moving average or inverse distance weighting (IDW) and nearest point, without due regard to their uncertainties. This study compared the results of modelled representations of popular interpolation procedures from two commonly used GIS packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. Data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in Ere stream at Ayepe-Olode, in southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream and along a palm oil effluent discharge point in the stream); four stations were sited along each location (Figure 1). Data were first subjected to examination of their spatial distributions and associated variogram variables (nugget, sill and range) using PAleontological STatistics (PAST3), before the mean values of the variables were interpolated in the selected GIS software using each of the kriging (simple), moving average and nearest point approaches. Further, the determined variogram variables were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas the values of conductivity were interpolated to vary from 120.1 to 219.5 µScm-1 with kriging, they varied from 105.6 to 220.0 µScm-1 and from 135.0 to 173.9 µScm-1 with the nearest point and moving average interpolations, respectively (Figure 2).
    It also showed that whereas the computed variogram model produced the best-fit lines (with the least associated error value, SSerror) with the Gaussian model, the spherical model was assumed as the default for all the distributions in the software, such that the value of the nugget was assumed to be 0.00 when it was rarely so (Figure 3). The study concluded that interpolation procedures may affect decisions and conclusions based on modelling inferences.
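    The disagreement between interpolators is easy to reproduce in miniature. The sketch below compares inverse distance weighting with nearest-point interpolation at a single query location; coordinates and conductivity values are hypothetical, not the study's data:

```python
import numpy as np

def idw(xy, values, q, power=2.0, eps=1e-12):
    """Inverse distance weighting at query point q."""
    d = np.linalg.norm(xy - q, axis=1)
    if d.min() < eps:                       # query coincides with a sample
        return float(values[d.argmin()])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

def nearest(xy, values, q):
    """Nearest-point interpolation at query point q."""
    return float(values[np.linalg.norm(xy - q, axis=1).argmin()])

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
cond = np.array([120.1, 150.0, 180.0, 219.5])   # conductivity, uS/cm
q = np.array([0.25, 0.25])

v_idw = idw(pts, cond, q)       # distance-weighted blend of all stations
v_nn = nearest(pts, cond, q)    # the closest station wins outright
```

    The same samples yield different surfaces because each method encodes a different spatial assumption, which is exactly why the choice of interpolator can change a study's conclusions.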

  13. Gas chemical adsorption characterization of lanthanide hexafluoroacetylacetonates

    DOE PAGES

    Stratz, S. Adam; Jones, Steven J.; Mullen, Austin D.; ...

    2017-03-21

    Newly established adsorption enthalpy and entropy values of 12 lanthanide hexafluoroacetylacetonates, denoted Ln[hfac]4, along with the experimental and theoretical methodology used to obtain these values, are presented for the first time. The results of this work can be used in conjunction with theoretical modeling techniques to optimize a large-scale gas-phase separation experiment using isothermal chromatography. The results to date indicate average adsorption enthalpy and entropy values of the 12 Ln[hfac]4 complexes ranging from -33 to -139 kJ/mol and from -299 to -557 J/(mol·K), respectively.

  14. Spectral analysis of 87-lead body surface signal-averaged ECGs in patients with previous anterior myocardial infarction as a marker of ventricular tachycardia.

    PubMed

    Hosoya, Y; Kubota, I; Shibata, T; Yamaki, M; Ikeda, K; Tomoike, H

    1992-06-01

    Few studies have examined the relation between the body surface distribution of high- and low-frequency components within the QRS complex and ventricular tachycardia (VT). Eighty-seven-lead signal-averaged ECGs were obtained from 30 normal subjects (N group) and 30 patients with previous anterior myocardial infarction (MI) with VT (MI-VT[+] group, n = 10) or without VT (MI-VT[-] group, n = 20). The onset and offset of the QRS complex were determined from 87-lead root mean square values computed from the averaged (but not filtered) ECG waveforms. Fast Fourier transform analysis was performed on the signal-averaged ECGs. The resulting Fourier coefficients were attenuated by use of the transfer function, and then the inverse transform was done with five frequency ranges (0-25, 25-40, 40-80, 80-150, and 150-250 Hz). From the QRS onset to the QRS offset, the time integral of the absolute value of the reconstructed waveforms was calculated for each of the five frequency ranges. The body surface distributions of these areas were expressed as QRS area maps. The maximal values of the QRS area maps were compared among the three groups. In the frequency ranges of 0-25 and 150-250 Hz, there were no significant differences in the maximal values among these three groups. Both MI groups had significantly smaller maximal values of QRS area maps in the frequency ranges of 25-40 and 40-80 Hz compared with the N group. The MI-VT(+) group had significantly smaller maximal values in the frequency ranges of 40-80 and 80-150 Hz than the MI-VT(-) group. These three groups were clearly differentiated by the maximal values of the 40-80-Hz QRS area map. It was suggested that the maximal value of the 40-80-Hz QRS area map was a new marker for VT after anterior MI.
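    The band-limited reconstruction step can be sketched as an FFT round-trip: transform the signal-averaged waveform, zero every coefficient outside one band (here 40-80 Hz), inverse-transform, and integrate the absolute value over the window. The waveform below is synthetic, not ECG data:

```python
import numpy as np

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)        # 0.5 s window, 500 samples
# synthetic "waveform": a low-frequency body plus a small 60 Hz component
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 60 * t)

X = np.fft.rfft(x)
f = np.fft.rfftfreq(x.size, d=1 / fs)
band = (f >= 40) & (f < 80)          # keep only the 40-80 Hz range
x_band = np.fft.irfft(np.where(band, X, 0.0), n=x.size)

# time integral of |reconstruction|: the "QRS area" for this band
area_40_80 = np.sum(np.abs(x_band)) / fs
```

    Only the 60 Hz component survives the band mask, so the reconstruction isolates the mid-frequency content whose area the study maps over the body surface.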

  15. Effect of crack curvature on stress intensity factors for ASTM standard compact tension specimens

    NASA Technical Reports Server (NTRS)

    Alam, J.; Mendelson, A.

    1983-01-01

    The stress intensity factors (SIF) are calculated using the method of lines for the compact tension specimen in tensile and shear loading for curved crack fronts. For the purely elastic case, it was found that as the crack front curvature increases, the SIF value at the center of the specimen decreases while increasing at the surface. For the higher values of crack front curvatures, the maximum value of the SIF occurs at an interior point located adjacent to the surface. A thickness average SIF was computed for parabolically applied shear loading. These results were used to assess the requirements of ASTM standards E399-71 and E399-81 on the shape of crack fronts. The SIF is assumed to reflect the average stress environment near the crack edge.

  16. Ozone and its projection in regard to climate change

    NASA Astrophysics Data System (ADS)

    Melkonyan, Ani; Wagner, Patrick

    2013-03-01

    In this paper, the dependence of ozone-forming potential on temperature was analysed based on data from two stations (with an industrial and rural background, respectively) in North Rhine-Westphalia, Germany, for the period of 1983-2007. After examining the interrelations between ozone, NOx and temperature, a projection of the days with ozone exceedance (over a limit value of a daily maximum 8-h average ≥ 120 μg m-3 for 25 days per year averaged for 3 years) in terms of global climate change was made using probability theory and an autoregression integrated moving average (ARIMA) model. The results show that with a temperature increase of 3 K, the frequency of days when ozone exceeds its limit value will increase by 135% at the industrial station and by 87% at the rural background station.
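    The exceedance metric in parentheses (daily maximum 8-h average ≥ 120 μg m-3) can be computed from hourly data as sketched below. The convention of assigning each 8-h mean to the day on which it ends is an assumption, and the hourly series is synthetic:

```python
import numpy as np

def exceedance_days(hourly, limit=120.0, window=8):
    """Count days whose maximum running 8-h mean is >= limit. Each running
    mean is assigned to the day on which its last hour falls."""
    hourly = np.asarray(hourly, dtype=float)
    run = np.convolve(hourly, np.ones(window) / window, mode="valid")
    end_day = (np.arange(run.size) + window - 1) // 24
    return int(sum(run[end_day == d].max() >= limit
                   for d in range(int(end_day.max()) + 1)))

# Three synthetic days: clean, polluted, clean. The polluted day exceeds,
# and so does the next morning via 8-h windows reaching back into it.
hourly = [100.0] * 24 + [130.0] * 24 + [100.0] * 24
n_exceed = exceedance_days(hourly)
```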

  17. Orbit-averaged quantities, the classical Hellmann-Feynman theorem, and the magnetic flux enclosed by gyro-motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, R. J., E-mail: rperkins@pppl.gov; Bellan, P. M.

    Action integrals are often used to average a system over fast oscillations and obtain reduced dynamics. It is not surprising, then, that action integrals play a central role in the Hellmann-Feynman theorem of classical mechanics, which furnishes the values of certain quantities averaged over one period of rapid oscillation. This paper revisits the classical Hellmann-Feynman theorem, rederiving it in connection to an analogous theorem involving the time-averaged evolution of canonical coordinates. We then apply a modified version of the Hellmann-Feynman theorem to obtain a new result: the magnetic flux enclosed by one period of gyro-motion of a charged particle in a non-uniform magnetic field. These results further demonstrate the utility of the action integral with regard to obtaining orbit-averaged quantities and the usefulness of this formalism in characterizing charged particle motion.

  18. Automated brain tumor segmentation in magnetic resonance imaging based on sliding-window technique and symmetry analysis.

    PubMed

    Lian, Yanyun; Song, Zhijian

    2014-01-01

    Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, the manual tumor segmentation commonly used in the clinic is time-consuming and challenging, and none of the existing automated methods is sufficiently robust, reliable, and efficient for clinical application. An accurate, automated tumor segmentation method was therefore developed to provide reproducible, objective results close to those of manual segmentation. Exploiting the symmetry of the human brain, we employed a sliding-window technique with the correlation coefficient to locate the tumor. First, the image to be segmented was normalized, rotated, denoised, and bisected. Next, two windows, one in the left and one in the right half of the brain image, were moved simultaneously pixel by pixel, first vertically and then horizontally, while the correlation coefficient between the two windows was computed; the pair of windows with the minimal correlation coefficient was retained, the window with the higher average gray value was taken as the tumor location, and the pixel with the highest gray value within it as the tumor seed point. Finally, the segmentation threshold was set from the average gray value of the pixels in a square of side length 10 pixels centered at the seed point, and threshold segmentation followed by morphological operations yielded the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. The average rate of correct location was 93.4% over 575 tumor-containing slices, the average Dice similarity coefficient was 0.77 per scan, and the average time per scan was 40 seconds. This fully automated, simple, and efficient segmentation method for brain tumors is promising for future clinical use, and the correlation coefficient proves a new and effective feature for tumor location.
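
    The symmetry-based search can be sketched in one dimension: two windows slide in step over the left half-profile and the mirrored right half-profile, the position with the lowest correlation flags the asymmetric region, and the brighter window gives the tumor side. The profiles and window width below are invented for illustration:

```python
def pearson(x, y):
    # Pearson correlation; flat windows are treated as perfectly
    # symmetric (r = 1) so they are never chosen as the minimum
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return 1.0
    return sxy / (sx * sy)

def locate_asymmetry(left, right, w):
    # slide two windows of width w in step over the two half-profiles;
    # the pair with the lowest correlation marks the asymmetric region,
    # and the brighter of the two windows gives the side
    best_pos, best_r = 0, 2.0
    for i in range(len(left) - w + 1):
        r = pearson(left[i:i + w], right[i:i + w])
        if r < best_r:
            best_r, best_pos = r, i
    lw = left[best_pos:best_pos + w]
    rw = right[best_pos:best_pos + w]
    side = "left" if sum(lw) > sum(rw) else "right"
    return best_pos, side
```

    On a toy pair of profiles where the left half carries a bright bump against a shared ramp, the minimum-correlation window lands on the bump and the left side is reported.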

  19. Measurement of natural and 137Cs radioactivity concentrations at Izmit Bay (Marmara Sea), Turkey

    NASA Astrophysics Data System (ADS)

    Öksüz, I.; Güray, R. T.; Özkan, N.; Yalçin, C.; Ergül, H. A.; Aksan, S.

    2016-03-01

    In order to determine the radioactivity level at Izmit Bay (Marmara Sea), marine sediment samples were collected from five different locations. The radioactivity concentrations of the naturally occurring 238U, 232Th and 40K isotopes and also that of the artificial isotope 137Cs were measured by using gamma-ray spectroscopy. Preliminary results show that the radioactivity concentrations of the 238U and 232Th isotopes are lower than the average worldwide values, while the radioactivity concentration of 40K is higher than the average worldwide value. A small amount of 137Cs contamination, which might be caused by the Chernobyl accident, was also detected.

  20. Structure-activity relationships of pyrethroid insecticides. Part 2. The use of molecular dynamics for conformation searching and average parameter calculation

    NASA Astrophysics Data System (ADS)

    Hudson, Brian D.; George, Ashley R.; Ford, Martyn G.; Livingstone, David J.

    1992-04-01

    Molecular dynamics simulations have been performed on a number of conformationally flexible pyrethroid insecticides. The results indicate that molecular dynamics is a suitable tool for conformational searching of small molecules given suitable simulation parameters. The structures derived from the simulations are compared with the static conformation used in a previous study. Various physicochemical parameters have been calculated for a set of conformations selected from the simulations using multivariate analysis. The averaged values of the parameters over the selected set (and the factors derived from them) are compared with the single conformation values used in the previous study.

  1. Early Dose Response to Yttrium-90 Microsphere Treatment of Metastatic Liver Cancer by a Patient-Specific Method Using Single Photon Emission Computed Tomography and Positron Emission Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Janice M.; Department of Radiation Oncology, Wayne State University, Detroit, MI; Wong, C. Oliver

    2009-05-01

    Purpose: To evaluate a patient-specific single photon emission computed tomography (SPECT)-based method of dose calculation for treatment planning of yttrium-90 (⁹⁰Y) microsphere selective internal radiotherapy (SIRT). Methods and Materials: Fourteen consecutive ⁹⁰Y SIRTs for colorectal liver metastasis were retrospectively analyzed. Absorbed dose to tumor and normal liver tissue was calculated by partition methods with two different tumor/normal liver vascularity ratios: an average 3:1 and a patient-specific ratio derived from pretreatment technetium-99m macroaggregated albumin SPECT. Tumor response was quantitatively evaluated from fluorine-18 fluoro-2-deoxy-D-glucose positron emission tomography scans. Results: Positron emission tomography showed a significant decrease in total tumor standardized uptake value (average, 52%). There was a significant difference in the tumor absorbed dose between the average and specific methods (p = 0.009). Response vs. dose curves fit by linear and linear-quadratic modeling showed similar results. Linear fit r values increased for all tumor response parameters with the specific method (+0.20 for mean standardized uptake value). Conclusion: Tumor dose calculated with the patient-specific method was more predictive of response in liver-directed ⁹⁰Y SIRT.

  2. Effects of pH and dose on nasal absorption of scopolamine hydrobromide in human subjects

    NASA Technical Reports Server (NTRS)

    Ahmed, S.; Sileno, A. P.; deMeireles, J. C.; Dua, R.; Pimplaskar, H. K.; Xia, W. J.; Marinaro, J.; Langenback, E.; Matos, F. J.; Putcha, L.; hide

    2000-01-01

    PURPOSE: The present study was conducted to evaluate the effects of formulation pH and dose on nasal absorption of scopolamine hydrobromide, the single most effective drug available for the prevention of nausea and vomiting induced by motion sickness. METHODS: Human subjects received scopolamine nasally at a dose of 0.2 mg/0.05 mL or 0.4 mg/0.10 mL, blood samples were collected at different time points, and plasma scopolamine concentrations were determined by LC-MS/MS. RESULTS: Following administration of the 0.2 mg dose, the average Cmax values were 262+/-118, 419+/-161, and 488+/-331 pg/mL for the pH 4.0, 7.0, and 9.0 formulations, respectively. At the 0.4 mg dose, the average Cmax values were 503+/-199, 933+/-449, and 1,308+/-473 pg/mL for the pH 4.0, 7.0, and 9.0 formulations, respectively. At the 0.2 mg dose, the AUC values were 23,208+/-6,824, 29,145+/-9,225, and 25,721+/-5,294 pg x min/mL for formulation pH 4.0, 7.0, and 9.0, respectively. At the 0.4 mg dose, the average AUC value was higher for the pH 9.0 formulation (70,740+/-29,381 pg x min/mL) than for the pH 4.0 (59,573+/-13,700 pg x min/mL) and pH 7.0 (55,298+/-17,305 pg x min/mL) formulations. Both the Cmax and AUC values were almost doubled by doubling the dose. On the other hand, the average Tmax values decreased linearly with a decrease in formulation pH at both doses. For example, at the 0.4 mg dose, the average Tmax values were 26.7+/-5.8, 15.0+/-10.0, and 8.8+/-2.5 minutes at formulation pH 4.0, 7.0, and 9.0, respectively. CONCLUSIONS: Nasal absorption of scopolamine hydrobromide in human subjects increased substantially with increases in formulation pH and dose.
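
    The Cmax, Tmax, and AUC summaries reported above can be reproduced from a concentration-time profile with the linear trapezoidal rule, the standard non-compartmental approach. A minimal sketch (the sample profile is invented for illustration):

```python
def pk_summary(times, conc):
    # Cmax and Tmax from the peak; AUC(0-t) by the linear trapezoidal rule
    cmax = max(conc)
    tmax = times[conc.index(cmax)]
    auc = sum((conc[i] + conc[i + 1]) / 2.0 * (times[i + 1] - times[i])
              for i in range(len(times) - 1))
    return cmax, tmax, auc
```

    For a profile peaking at 100 pg/mL at 10 min, the function returns that peak together with the trapezoidal area under the curve.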

  3. [Forest lightning fire forecasting for Daxing'anling Mountains based on MAXENT model].

    PubMed

    Sun, Yu; Shi, Ming-Chang; Peng, Huan; Zhu, Pei-Lin; Liu, Si-Lin; Wu, Shi-Lei; He, Cheng; Chen, Feng

    2014-04-01

    The Daxing'anling Mountains are among the areas with the highest occurrence of forest lightning fire in Heilongjiang Province, so developing a lightning-fire forecast model that accurately predicts forest fires in this area is important. Based on data on forest lightning fires and environmental variables, the MAXENT model was used to predict lightning fire in the Daxing'anling region. First, we performed collinearity diagnostics on each environmental variable, evaluated the importance of the environmental variables using the training gain and the jackknife method, and then evaluated the prediction accuracy of the MAXENT model using the maximum Kappa value and the AUC value. The results showed that the variance inflation factor (VIF) values of lightning energy and neutralized charge were 5.012 and 6.230, respectively; being collinear with the other variables, they were excluded from model training. Daily rainfall, the number of cloud-to-ground lightning strikes, and the current intensity of cloud-to-ground lightning were the three most important factors affecting forest lightning fires, while the daily average wind speed and the slope were of less importance. As the proportion of test data increased, the maximum Kappa and AUC values increased. The maximum Kappa values were above 0.75 with an average of 0.772, while all AUC values were above 0.5 with an average of 0.859. Having achieved a moderate level of prediction accuracy, the MAXENT model can be used to predict forest lightning fire in the Daxing'anling Mountains.
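
    The collinearity screen above rests on the variance inflation factor; when a predictor is explained by a single other variable, it reduces to VIF = 1/(1 − r²). A minimal sketch of that special case (the data are invented, and the common rule of thumb that VIF above roughly 5 signals collinearity is an assumption, not necessarily the authors' exact cutoff):

```python
def pearson(x, y):
    # Pearson correlation coefficient of two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def vif_two_predictor(x, y):
    # VIF of x given a single other predictor y: 1 / (1 - r^2)
    r = pearson(x, y)
    return 1.0 / (1.0 - r * r)
```

    A near-collinear pair produces a large VIF, while an unrelated pair stays close to 1; in the general multi-predictor case, r² is replaced by the R² of regressing the predictor on all the others.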

  4. In vivo skin imaging for hydration and micro relief-measurement.

    PubMed

    Kardosova, Z; Hegyi, V

    2013-01-01

    We present the results of our work with a device used to measure skin capacitance before and after the application of moisturizing creams, together with an experiment performed on cellulose filter papers soaked with different solvents. The measurements were performed with a device built around a capacitance sensor, which provides the investigator with a capacitance image of the skin. The capacitance values are coded in a range of 256 gray levels, so skin hydration can be characterized by parameters derived from the gray-level histogram using dedicated software. The images obtained by the device allow highly precise observation of skin topography. Measuring skin capacitance yields new, objective, and reliable information about topographical, physical, and chemical parameters of the skin. The study shows a good correlation between average grayscale values and skin hydration. In future work we need to complete more comparative studies, map the average grayscale values to skin hydration levels, and use them to follow the dynamics of skin micro-relief and hydration changes (Fig. 6, Ref. 15).

  5. Energy gain calculations in Penning fusion systems using a bounce-averaged Fokker-Planck model

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Miley, G. H.; Barnes, D. C.; Knoll, D. A.

    2000-11-01

    In spherical Penning fusion devices, a spherical cloud of electrons, confined in a Penning-like trap, creates the ion-confining electrostatic well. Fusion energy gains for these systems have been calculated in optimistic conditions (i.e., spherically uniform electrostatic well, no collisional ion-electron interactions, single ion species) using a bounce-averaged Fokker-Planck (BAFP) model. Results show that steady-state distributions in which the Maxwellian ion population is dominant correspond to lowest ion recirculation powers (and hence highest fusion energy gains). It is also shown that realistic parabolic-like wells result in better energy gains than square wells, particularly at large well depths (>100 kV). Operating regimes with fusion power to ion input power ratios (Q-value) >100 have been identified. The effect of electron losses on the Q-value has been addressed heuristically using a semianalytic model, indicating that large Q-values are still possible provided that electron particle losses are kept small and well depths are large.

  6. Phantom studies investigating extravascular density imaging for partial volume correction of 3-D PET ¹⁸FDG studies

    NASA Astrophysics Data System (ADS)

    Wassenaar, R. W.; Beanlands, R. S. B.; deKemp, R. A.

    2004-02-01

    Limited scanner resolution and cardiac motion contribute to partial volume (PV) averaging of cardiac PET images. An extravascular (EV) density image, created by subtracting a blood pool scan from a transmission image, has been used to correct for PV averaging in H₂¹⁵O studies with 2-D imaging, but not with 3-D imaging of other tracers such as ¹⁸FDG. A cardiac phantom emulating the left ventricle was used to characterize the method for use in 3-D PET studies. Measurement of the average myocardial activity showed PV losses of 32% below the true activity (p<0.001). Initial application of the EV density correction still yielded a myocardial activity 13% below the true value (p<0.001). This failure of the EV density image was due to the 1.66 mm thick plastic barrier separating the myocardial and ventricular chambers within the phantom. Upon removal of this artifact by morphological dilation of the blood pool, the corrected myocardial value was within 2% of the true value (p=ns). Spherical ROIs (2 to 10 mm in diameter), evenly distributed about the myocardium, were also used to calculate the average activity. The EV density image was able to account for PV averaging throughout this range of diameters to within 5% accuracy; however, a small bias appeared as the size of the ROIs increased. This indicated a slight mismatch between the emission and transmission image resolutions, a result of differences in data acquisition (i.e., span and ring difference) and default smoothing. These results show that the use of an EV density image to correct for PV averaging is possible with 3-D PET. A method of correcting barrier effects in phantoms has been presented, as well as a process for evaluating resolution mismatch.

  7. Analysis of the solar radiation data for Beer Sheva, Israel, and its environs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kudish, A.I.; Ianetz, A.

    The solar radiation climate of Beer Sheva, Israel, is reported upon in detail. The database utilized in this analysis consisted of global radiation on a horizontal surface, normal incidence beam radiation, and global radiation on a south-facing surface tilted at 40°. Monthly-average hourly and daily values are reported for each of these three types of measured radiation, together with the calculated monthly-average daily values for the components of the global radiation, viz. the horizontal beam and diffuse radiations. The monthly-average hourly and daily clearness index values have also been calculated and analyzed. Monthly-average daily frequency distributions of the clearness index values are reported for each month. The solar radiation climate of Beer Sheva has also been compared to those reported for a number of countries in this region. The annual-average daily global radiation incident on a horizontal surface is 18.91 MJ/m² and that for normal incidence beam radiation is 21.17 MJ/m². The annual-average daily fraction of the horizontal global radiation that is beam is 0.72. The annual-average daily value for the clearness index is 0.587 and the average frequency of clear days annually is 58.6%. The authors conclude, based upon the above analysis, that Beer Sheva and its environs are characterized by relatively high average-daily irradiation rates, both global and beam, and a relatively high frequency of clear days.

  8. Diagnostic features of quantitative comb-push shear elastography for breast lesion differentiation

    PubMed Central

    Denis, Max; Gregory, Adriana; Mehrmohammadi, Mohammad; Kumar, Viksit; Meixner, Duane; Fazzio, Robert T.; Fatemi, Mostafa

    2017-01-01

    Background Lesion stiffness measured by shear wave elastography has been shown to effectively separate benign from malignant breast masses. The aim of this study was to evaluate different aspects of the performance of Comb-push Ultrasound Shear Elastography (CUSE) in differentiating breast masses. Methods With written signed informed consent, this HIPAA-compliant, IRB-approved prospective study included patients from April 2014 through August 2016 with breast masses identified on conventional imaging. Data from 223 patients (19–85 years, mean 59.93±14.96 years) with 227 suspicious breast masses identifiable by ultrasound (mean size 1.83±2.45 cm) were analyzed. CUSE was performed on all patients. Three regions of interest (ROI), 3 mm in diameter each, were selected inside the lesion on the B-mode ultrasound image in areas that also appeared in the corresponding shear wave map. Lesion elasticity values were measured in terms of the Young's modulus, and statistical analyses were performed against the pathology results. Results Pathology revealed 108 lesions as malignant and 115 lesions as benign. Additionally, 4 lesions (BI-RADS 2 and 3) were considered benign and were not biopsied. Average lesion stiffness measured by CUSE resulted in 84.26% sensitivity (91 of 108), 89.92% specificity (107 of 119), 85.6% positive predictive value, 89% negative predictive value, and 0.91 area under the curve (P<0.0001). Stiffness maps showed spatial continuity such that maximum and average elasticity did not yield significantly different results (P > 0.21). Conclusion CUSE was able to distinguish between benign and malignant breast masses with high sensitivity and specificity. The continuity of the stiffness maps allowed multiple quantification ROIs covering large areas of the lesions to be chosen, with similar diagnostic performance based on average and maximum elasticity. The overall results of this study highlight the clinical value of CUSE in the differentiation of breast masses based on their stiffness. PMID:28257467

  9. Some induced intuitionistic fuzzy aggregation operators applied to multi-attribute group decision making

    NASA Astrophysics Data System (ADS)

    Su, Zhi-xin; Xia, Guo-ping; Chen, Ming-yuan

    2011-11-01

    In this paper, we define various induced intuitionistic fuzzy aggregation operators, including the induced intuitionistic fuzzy ordered weighted averaging (OWA) operator, the induced intuitionistic fuzzy hybrid averaging (I-IFHA) operator, the induced interval-valued intuitionistic fuzzy OWA operator, and the induced interval-valued intuitionistic fuzzy hybrid averaging (I-IIFHA) operator, and we establish various properties of these operators. An approach based on the I-IFHA operator and the intuitionistic fuzzy weighted averaging (WA) operator is then developed to solve multi-attribute group decision-making (MAGDM) problems in which attribute weights and the decision makers' (DMs') weights are real numbers and the attribute values provided by the DMs are intuitionistic fuzzy numbers (IFNs); an approach based on the I-IIFHA operator and the interval-valued intuitionistic fuzzy WA operator is developed for MAGDM problems where the attribute values provided by the DMs are interval-valued IFNs. Furthermore, an induced intuitionistic fuzzy hybrid geometric operator and an induced interval-valued intuitionistic fuzzy hybrid geometric operator are proposed. Finally, a numerical example illustrates the developed approaches.
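
    The OWA family underlying these operators is easy to state in code: OWA weights attach to rank positions after sorting the arguments, while the induced variants order the arguments by a separate inducing variable. A plain real-valued (non-fuzzy) sketch of those two definitions:

```python
def owa(values, weights):
    # OWA: weights attach to rank positions after sorting in descending order
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def iowa(pairs, weights):
    # induced OWA: (inducing value, argument) pairs are ordered by the
    # inducing variable, not by the argument itself
    ordered = [v for _, v in sorted(pairs, key=lambda p: p[0], reverse=True)]
    return sum(w * v for w, v in zip(weights, ordered))
```

    With weight vector (1, 0, ..., 0) OWA reduces to the maximum, and with uniform weights to the plain average; the intuitionistic fuzzy versions in the paper apply the same ordering idea to membership/non-membership pairs.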

  10. Assessment of the effects of CT dose in averaged x-ray CT images of a dose-sensitive polymer gel

    NASA Astrophysics Data System (ADS)

    Kairn, T.; Kakakhel, M. B.; Johnston, H.; Jirasek, A.; Trapp, J. V.

    2015-01-01

    The signal-to-noise ratio achievable in x-ray computed tomography (CT) images of polymer gels can be increased by averaging over multiple scans of each sample. However, repeated scanning delivers a small additional dose to the gel which may compromise the accuracy of the dose measurement. In this study, a NIPAM-based polymer gel was irradiated and then CT scanned 25 times, with the resulting data used to derive an averaged image and a "zero-scan" image of the gel. Comparison between these two results and the first scan of the gel showed that the averaged and zero-scan images provided better contrast, higher contrast-to-noise and higher signal-to-noise than the initial scan. The pixel values (Hounsfield units, HU) in the averaged image were not noticeably elevated, compared to the zero-scan result, and the gradients used in the linear extrapolation of the zero-scan images were small and symmetrically distributed around zero. These results indicate that the averaged image was not artificially lightened by the small, additional dose delivered during CT scanning. This work demonstrates the broader usefulness of the zero-scan method as a means to verify the dosimetric accuracy of gel images derived from averaged x-ray CT data.
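
    The zero-scan estimate is obtained per pixel by fitting the CT number against scan index and extrapolating the line back to zero scans. A minimal least-squares sketch (the pixel trace is invented for illustration):

```python
def zero_scan(pixel_values):
    # least-squares line HU = a * n + b over scan numbers n = 1..N;
    # the intercept b estimates the pixel value before any scan dose,
    # and the gradient a is the per-scan drift discussed in the abstract
    n = len(pixel_values)
    xs = range(1, n + 1)
    mx = sum(xs) / n
    my = sum(pixel_values) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, pixel_values))
    a = sxy / sxx
    b = my - a * mx
    return b, a
```

    For a pixel drifting by exactly 2 HU per scan from a baseline of 5 HU, the fit recovers intercept 5 and gradient 2; in the study, gradients clustered symmetrically around zero, indicating no dose-induced lightening.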

  11. Mean Arterial Blood Pressure Correlates with Neurological Recovery after Human Spinal Cord Injury: Analysis of High Frequency Physiologic Data

    PubMed Central

    Hawryluk, Gregory; Whetstone, William; Saigal, Rajiv; Ferguson, Adam; Talbott, Jason; Bresnahan, Jacqueline; Dhall, Sanjay; Pan, Jonathan; Beattie, Michael

    2015-01-01

    Abstract Current guidelines for the care of patients with acute spinal cord injuries (SCIs) recommend maintaining mean arterial pressure (MAP) values of 85–90 mm Hg for 7 days after an acute SCI; however, little evidence supports this recommendation. We sought to better characterize the relationship between MAP values and neurological recovery. A computer system automatically collected and stored q1 min physiological data from intensive care unit monitors on patients with SCI over a 6-year period. Data for 100 patients with acute SCI were collected. Seventy-four of these patients had American Spinal Injury Association Impairment Scale (AIS) grades determined by physical examination on admission and at the time of hospital discharge. Average MAP values as well as the proportion of MAP values below thresholds were explored for values from 120 mm Hg to 40 mm Hg in 1 mm Hg increments; the relationship between these measures and outcome was explored at various time points up to 30 days from the time of injury. A total of 994,875 q1 min arterial line blood pressure measurements were recorded for the included patients amid 1,688,194 min of recorded intensive care observations. A large proportion of measures were below 85 mm Hg despite generally acceptable average MAP values. Higher average MAP values correlated with improved recovery in the first 2–3 days after SCI, while the proportion of MAP values below the accepted threshold of 85 mm Hg appeared to be a stronger correlate, decreasing in strength over the first 5–7 days after injury. This study provides strong evidence of a correlation between MAP values and neurological recovery; it does not, however, provide evidence of a causal relationship. Duration of hypotension may be more important than average MAP. The findings support the notion of MAP thresholds in SCI recovery, and the highest MAP values correlated with the greatest degree of neurological recovery. 
The results are concordant with current guidelines in suggesting that MAP thresholds >85 mm Hg may be appropriate after acute SCI. PMID:25669633
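
    The two exposure measures compared in the study, the average MAP and the proportion of q1 min readings below a threshold, can be computed directly from a minute-by-minute record; the 85 mm Hg default below mirrors the guideline value:

```python
def map_exposure(map_values, threshold=85):
    # average MAP and the fraction of q1 min readings below the threshold
    avg = sum(map_values) / len(map_values)
    frac_below = sum(1 for v in map_values if v < threshold) / len(map_values)
    return avg, frac_below
```

    This illustrates the study's key observation: a record can have an acceptable average (here exactly 85 mm Hg) while still spending half its minutes below the threshold.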

  12. [Comparing the value of ecological protection in Sanjiang Plain wetland, Northeast China based on the stated preference method.

    PubMed

    Fan, Zi Juan; Ao, Chang Lin; Mao, Bi Qi; Chen, Hong Guang; Wang, Xu Dong

    2017-02-01

    The stated preference method, which includes the contingent valuation method (CVM) and choice experiments (CE), is usually used to evaluate the non-market value of environmental goods. In this paper, the stated preference method was adopted to evaluate the non-market value of the Sanjiang Plain wetland. A willingness-to-pay (WTP) evaluation model was constructed based on random utility theory, and the average WTP of CVM and CE was obtained, respectively. The average WTP elicited by CE was 379 yuan per year, and the marginal WTPs of the selection attributes water conservation, wetland area, natural landscape, and biodiversity were 114.00, 72.55, 59.55, and 37.09 yuan per year, respectively. Meanwhile, the average WTP elicited by CVM was 134 yuan per year. The influence of various factors on WTP was analyzed and reasons for protest responses were discussed. Results showed that the respondents' WTP elicited by CE was significantly higher than that by CVM, and that socio-economic attributes such as level of education and personal annual income had a significant positive impact on respondents' WTP. There was no significant difference in the reasons for protest responses between CVM and CE. In addition, CE allowed analysis of multiple attributes and multiple levels, and the WTP of the wetland's selection attributes could be calculated. Therefore, CE was better able to reveal respondents' preference information than CVM, and its assessment results were closer to the actual value.

  13. Turning Fiction Into Non-fiction for Signal-to-Noise Ratio Estimation -- The Time-Multiplexed and Adaptive Split-Symbol Moments Estimator

    NASA Astrophysics Data System (ADS)

    Simon, M.; Dolinar, S.

    2005-08-01

    A means is proposed for realizing the generalized split-symbol moments estimator (SSME) of signal-to-noise ratio (SNR), i.e., one whose implementation on average allows for a number of subdivisions (observables), 2L, per symbol beyond the conventional value of two, with other than an integer value of L. In theory, the generalized SSME was previously shown to yield optimum performance for a given true SNR, R, when L = R/√2, and thus, in general, the resulting estimator was referred to as the fictitious SSME. Here we present a time-multiplexed version of the SSME that allows it to achieve its optimum value of L as above (to the extent that it can be computed as the average of a sum of integers) at each value of SNR, and as such turns fiction into non-fiction. Also proposed is an adaptive algorithm that allows the SSME to rapidly converge to its optimum value of L when in fact one has no a priori information about the true value of SNR.

  14. SU-D-18C-02: Feasibility of Using a Short ASL Scan for Calibrating Cerebral Blood Flow Obtained From DSC-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P; Chang, T; Huang, K

    2014-06-01

    Purpose: This study aimed to evaluate the feasibility of using a short arterial spin labeling (ASL) scan to calibrate dynamic susceptibility contrast (DSC) MRI in a group of patients with internal carotid artery (ICA) stenosis. Methods: Six patients with unilateral ICA stenosis were enrolled in the study on a 3T clinical MRI scanner. ASL cerebral blood flow (CBF) maps were calculated by averaging different numbers of dynamic points (N = 1-45) acquired with a Q2TIPS sequence. For the DSC perfusion analysis, an arterial input function was selected to derive the relative cerebral blood flow (rCBF) map and the delay (Tmax) map. A patient-specific calibration factor (CF) was calculated from the mean ASL- and DSC-CBF obtained from three different masks: (1) Tmax < 3 s; (2) a combined gray matter mask with mask 1; (3) mask 2 with large vessels removed. One CF value was created for each number of averages using each of the three masks to calibrate the DSC-CBF map. The CF value for the largest number of averages (NL = 45) was used to determine the acceptable range (<10%, <15%, and <20%) of CF values corresponding to the minimally acceptable number of averages (NS) for each patient. Results: Comparing DSC CBF maps corrected by CF values of NL (CBFL) in the ACA, MCA, and PCA territories, all masks resulted in smaller CBF on the ipsilateral side than on the contralateral side of the MCA territory (p<.05). The values obtained from mask 1 were significantly different from those of mask 3 (p<.05). Using mask 3, the median values of NS were 4 (<10%), 2 (<15%), and 2 (<20%), with worst cases (maximum NS) of 25, 4, and 4, respectively. Conclusion: This study found that reliable calibration of DSC-CBF can be achieved from a short pulsed ASL scan. We suggest using a mask based on the Tmax threshold, including gray matter only and excluding large vessels, for performing the calibration.

  15. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
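
    A minimal long-only sketch of the idea: a moving-average cross-over entry combined with a dynamic trailing stop. The fixed-percentage trail below is an illustrative stand-in for the paper's dynamic threshold, whose exact definition is not reproduced here:

```python
def backtest_long_only(prices, window, trail=0.05):
    # enter when price crosses above its moving average; exit when price
    # drops below (1 - trail) times the highest price seen since entry
    position, entry, peak, ret = False, 0.0, 0.0, 1.0
    for t in range(window, len(prices)):
        ma = sum(prices[t - window:t]) / window
        p = prices[t]
        if not position and p > ma:
            position, entry, peak = True, p, p
        elif position:
            peak = max(peak, p)
            if p < peak * (1.0 - trail):
                ret *= p / entry      # stopped out at the trailing level
                position = False
    if position:                      # mark to market at the end
        ret *= prices[-1] / entry
    return ret - 1.0                  # cumulative return
```

    The trailing exit is what shortens drawdowns relative to the plain cross-over rule: once the price falls a fixed fraction from its post-entry peak, the position is closed instead of waiting for the reverse cross-over.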

  16. Averaging principle for second-order approximation of heterogeneous models with homogeneous models.

    PubMed

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-11-27

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε(2)) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).
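
    The O(ε²) equivalence can be checked numerically for any smooth symmetric outcome: halving the heterogeneity level ε should shrink the gap to the homogeneous outcome by roughly a factor of four. A toy outcome function, invented purely for illustration:

```python
def outcome(a, b):
    # a smooth, symmetric toy outcome of two heterogeneous parameters
    return 1.0 / a + 1.0 / b

def heterogeneity_gap(m, eps):
    # gap between the heterogeneous outcome at (m + eps, m - eps)
    # and the homogeneous outcome at the average value m
    return abs(outcome(m + eps, m - eps) - outcome(m, m))
```

    The gap vanishes at ε = 0 and scales quadratically in ε, which is exactly the second-order equivalence the averaging principle asserts.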

  17. Averaging principle for second-order approximation of heterogeneous models with homogeneous models

    PubMed Central

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ɛ2) equivalent to the outcome of the corresponding homogeneous model, where ɛ is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing). PMID:23150569

  18. Adverse Effects of UV-B Radiation on Plants Growing at Schirmacher Oasis, East Antarctica.

    PubMed

    Singh, Jaswant; Singh, Rudra P

    2014-01-01

    This study aimed to assess the impacts of ultraviolet-B (UV-B) radiation over a 28-day period on the pigment levels of Umbilicaria aprina and Bryum argenteum growing in the field. The depletion of stratospheric ozone is most prominent over Antarctica, which receives more UV-B radiation than most other parts of the planet. Although UV-B radiation adversely affects all flora, Antarctic plants are better equipped to survive its damaging effects owing to defenses provided by UV-B absorbing compounds and other screening pigments. UV-B radiation and daily average ozone values were measured with a sun photometer, and the photosynthetic pigments of exposed and unexposed plants were analyzed by standard spectrophotometric methods. The daily average atmospheric ozone values were recorded from 5 January to 2 February 2008. The maximum daily average ozone value (310.7 Dobson Units (DU)) was recorded on 10 January 2008. On that day, average UV-B spectral irradiances were 0.016, 0.071, and 0.186 W m(-2) at wavelengths of 305, 312, and 320 nm, respectively. The minimum daily average ozone value (278.6 DU) was recorded on 31 January 2008. On that day, average UV-B spectral irradiances were 0.018, 0.085, and 0.210 W m(-2) at wavelengths of 305, 312, and 320 nm, respectively. Our results indicate that following prolonged UV-B exposure, total chlorophyll levels decreased gradually in both species, whereas levels of UV-B absorbing compounds, phenolics, and carotenoids gradually increased.

  19. Adverse Effects of UV-B Radiation on Plants Growing at Schirmacher Oasis, East Antarctica

    PubMed Central

    Singh, Jaswant; Singh, Rudra P.

    2014-01-01

    This study aimed to assess the impacts of ultraviolet-B (UV-B) radiation over a 28-day period on the pigment levels of Umbilicaria aprina and Bryum argenteum growing in the field. The depletion of stratospheric ozone is most prominent over Antarctica, which receives more UV-B radiation than most other parts of the planet. Although UV-B radiation adversely affects all flora, Antarctic plants are better equipped to survive its damaging effects owing to defenses provided by UV-B absorbing compounds and other screening pigments. UV-B radiation and daily average ozone values were measured with a sun photometer, and the photosynthetic pigments of exposed and unexposed plants were analyzed by standard spectrophotometric methods. The daily average atmospheric ozone values were recorded from 5 January to 2 February 2008. The maximum daily average ozone value (310.7 Dobson Units (DU)) was recorded on 10 January 2008. On that day, average UV-B spectral irradiances were 0.016, 0.071, and 0.186 W m-2 at wavelengths of 305, 312, and 320 nm, respectively. The minimum daily average ozone value (278.6 DU) was recorded on 31 January 2008. On that day, average UV-B spectral irradiances were 0.018, 0.085, and 0.210 W m-2 at wavelengths of 305, 312, and 320 nm, respectively. Our results indicate that following prolonged UV-B exposure, total chlorophyll levels decreased gradually in both species, whereas levels of UV-B absorbing compounds, phenolics, and carotenoids gradually increased. PMID:24748743

  20. Assessment of average of normals (AON) procedure for outlier-free datasets including qualitative values below limit of detection (LoD): an application within tumor markers such as CA 15-3, CA 125, and CA 19-9.

    PubMed

    Usta, Murat; Aral, Hale; Mete Çilingirtürk, Ahmet; Kural, Alev; Topaç, Ibrahim; Semerci, Tuna; Hicri Köseoğlu, Mehmet

    2016-11-01

    Average of normals (AON) is a quality control procedure that is sensitive only to systematic errors that can occur in an analytical process in which patient test results are used. The aim of this study was to develop an alternative model in order to apply the AON quality control procedure to datasets that include qualitative values below limit of detection (LoD). The reported patient test results for tumor markers, such as CA 15-3, CA 125, and CA 19-9, analyzed by two instruments, were retrieved from the information system over a period of 5 months, using the calibrator and control materials with the same lot numbers. The median as a measure of central tendency and the median absolute deviation (MAD) as a measure of dispersion were used for the complementary model of AON quality control procedure. The u bias values, which were determined for the bias component of the measurement uncertainty, were partially linked to the percentages of the daily median values of the test results that fall within the control limits. The results for these tumor markers, in which lower limits of reference intervals are not medically important for clinical diagnosis and management, showed that the AON quality control procedure, using the MAD around the median, can be applied for datasets including qualitative values below LoD.
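    A minimal sketch of the median/MAD variant of AON control limits described above, assuming limits of the form median ± k·MAD (the study's exact limit rule and k are assumptions here):

```python
import statistics

def aon_mad_limits(values, k=3.0):
    """Median-based AON control limits: median +/- k * MAD.

    The MAD (median absolute deviation) replaces the standard deviation,
    so qualitative "below LoD" results recoded as a constant at the LoD
    do not distort the dispersion estimate the way they would a mean/SD.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return med - k * mad, med + k * mad

def daily_median_in_control(daily_values, limits):
    """True if the daily median of reported patient results is inside the limits."""
    lo, hi = limits
    return lo <= statistics.median(daily_values) <= hi
```

    Because both the center and the spread are median-based, a heavy tail of high tumor-marker results (or a spike of LoD-valued results) shifts the limits far less than mean/SD-based AON would.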

  1. Probabilistic Prognosis of Environmental Radioactivity Concentrations due to Radioisotopes Discharged to Water Bodies from Nuclear Power Plants.

    PubMed

    Tomás Zerquera, Juan; Mora, Juan C; Robles, Beatriz

    2017-11-15

    Because the contributions involved are very low, comparing the environmental radioactivity attributable to nuclear power plants (NPPs) with modeled values is recognized as a complex task. In order to compare probabilistic prognoses of radioactivity concentrations with environmental measurement values, an exercise was performed using public data on routine radioactive discharges from three representative Spanish nuclear power plants. Specifically, data on liquid discharges from three Spanish NPPs (Almaraz, Vandellós II, and Ascó) to three different aquatic bodies (river, lake, and coast) were used. Results modelled using generic conservative models, together with Monte Carlo techniques for uncertainty propagation, were compared with radioactivity concentrations measured in the environment in the surroundings of these NPPs. Probability distribution functions were inferred for the source term and used as an input to the model to estimate the radioactivity concentrations in the environment due to discharges to the water bodies. Radioactivity concentrations measured in bottom sediments were used in the exercise because of their accumulation properties. Of all the radioisotopes measured in the environmental monitoring programs around the NPPs, only Cs-137, Sr-90, and Co-60 had values greater than their respective detection limits. Of those, Sr-90 and Cs-137 are easily measured in the environment, but a significant contribution from radioactive fall-out due to atmospheric nuclear explosions exists, and therefore their values cannot be attributed to the NPPs. In contrast, Co-60 is especially useful as an indicator of radioactive discharges from NPPs because its presence in the environment can be attributed solely to the impact of the closest nuclear facilities. All the modelled values for Co-60 showed a reasonable correspondence with measured environmental data, being conservative in two of the three cases.
The most conservative predictions obtained with the models were the activity concentrations in the sediments of a lake (Almaraz), where the modelled values were, on average, two times higher than the measurements. For rivers (Ascó), calculated results were adequately conservative, up to 3.4 times higher on average. However, the results for coasts (Vandellós II) were in the same range as the environmental measurements, with predictions at most 1.1 times higher than measured values. Only for this specific case of coasts could it be argued that the models are not conservative enough, although the results are, on average, relatively close to the real values.

  2. Disinfection Byproducts in Drinking Water and Evaluation of Potential Health Risks of Long-Term Exposure in Nigeria

    PubMed Central

    Akintokun, Oyeronke A.; Adedapo, Adebusayo E.

    2017-01-01

    Levels of trihalomethanes (THMs) in drinking water from water treatment plants (WTPs) in Nigeria were studied using a gas chromatograph (GC Agilent 7890A with autosampler Agilent 7683B) equipped with electron capture detector (ECD). The mean concentrations of the trihalomethanes ranged from zero in raw water samples to 950 μg/L in treated water samples. Average concentration values of THMs in primary and secondary disinfection samples exceeded the standard maximum contaminant levels. Results for the average THMs concentrations followed the order TCM > BDCM > DBCM > TBM. EPA-developed models were adopted for the estimation of chronic daily intakes (CDI) and excess cancer incidence through ingestion pathway. Higher average intake was observed in adults (4.52 × 10−2 mg/kg-day), while the ingestion in children (3.99 × 10−2 mg/kg-day) showed comparable values. The total lifetime cancer incidence rate was relatively higher in adults than children with median values 244 and 199 times the negligible risk level. PMID:28900447

  3. Study of Water Quality Changes due to Offshore Dike Development Plan at Semarang Bay

    NASA Astrophysics Data System (ADS)

    Wibowo, M.; Hakim, B. A.

    2018-03-01

    The coast of Semarang Bay is currently experiencing rapid growth because Semarang is a center of economic growth in Central Java. At the same time, the coast also experiences a variety of very complex problems, such as tidal flooding, land subsidence, and coastal damage due to erosion and sedimentation. To overcome these problems, BPPT and other institutions have proposed the construction of an offshore dike. The offshore dike is a technological intervention in the marine environment that will affect the hydrodynamic balance in coastal waters, including water quality in Semarang Bay. Therefore, a water quality modeling study is necessary to determine the changes in water quality that would result. The study was conducted using the MIKE-21 Eco Lab module from DHI. The results show that the offshore dike would change water quality in the western and eastern impoundments that it forms. In the western impoundment, the average DO value declines by 81.56%-93.32% and the average BOD value rises by 22.01%-31.19%; in the eastern impoundment, the average DO value increases by 75.80%-83.19%, while the average BOD value decreases by 95.04%-96.01%. To prevent a downward trend in water quality due to the construction of the offshore dike, precautions are necessary in the upstream area before flows enter Semarang Bay.

  4. Health risk profile for terrestrial radionuclides in soil around artisanal gold mining area at Alsopag, Sudan

    NASA Astrophysics Data System (ADS)

    Idriss, Hajo; Salih, Isam; Alaamer, Abdulaziz S.; AL-Rajhi, M. A.; Osman, Alshfia; Adreani, Tahir Elamin; Abdelgalil, M. Y.; Ali, Nagi I.

    2018-06-01

    This study assesses radiation hazard parameters due to terrestrial radionuclides in the soil around artisanal gold mining, addressing the issue of natural radioactivity in mining areas. The levels of 238U, 232Th, 40K, and 226Ra in soil (using gamma spectrometry), 222Rn in soil, and 222Rn in air were determined. Radiation hazard parameters were then computed, including the absorbed dose D, annual effective dose E, radium equivalent activity Raeq, external hazard index Hex, annual gonadal dose equivalent AGDE, and excess lifetime cancer risk ELCR due to the inhalation of radon (222Rn) and consumption of radium (226Ra) in vegetation. Uranium (238U), thorium (232Th), and potassium (40K) averages were, respectively, 26, 36, and 685 Becquerel per kilogram (Bq kg-1). Soil radon (4671 Bq m-3) and radon in air (14.77 Bq m-3) were found to be lower than worldwide values. Nevertheless, the average 40K concentration of 685 Bq kg-1 is slightly higher than the United Nations Scientific Committee on the Effects of Atomic Radiation average value of 412 Bq kg-1. The results indicate that some of the radiation hazard parameters are of concern. The mean absorbed dose rate (62.49 nGy h-1) was slightly higher than the worldwide average value of 57 nGy h-1 (~45% from 40K), and the AGDE (444 μSv year-1) was higher than the worldwide average reported value (300 μSv year-1). This study highlights the necessity of launching an extensive nationwide radiation protection program in the mining areas for regulatory control.

  5. Seasonal atmospheric deposition variations of polychlorinated biphenyls (PCBs) and comparison of some deposition sampling techniques.

    PubMed

    Birgül, Askın; Tasdemir, Yücel

    2011-03-01

    Ambient air and bulk deposition samples were collected between June 2008 and June 2009. Eighty-three polychlorinated biphenyl (PCB) congeners were targeted in the samples. The average gas and particle PCB concentrations were found as 393 ± 278 and 70 ± 102 pg/m(3), respectively, and 85% of the atmospheric PCBs were in the gas phase. Bulk deposition samples were collected by using a sampler made of stainless steel. The average PCB bulk deposition flux value was determined as 6,020 ± 4,350 pg/m(2) day. The seasonal bulk deposition fluxes were not statistically different from each other, but the summer flux had higher values. Flux values differed depending on the precipitation levels. The average flux value in the rainy periods was 7,480 ± 4,080 pg/m(2) day while the average flux value in dry periods was 5,550 ± 4,420 pg/m(2) day. The obtained deposition values were lower than the reported values given for the urban and industrialized areas, yet close to the ones for the rural sites. The reported deposition values were also influenced by the type of the instruments used. The average dry deposition and total deposition velocity values calculated based on deposition and concentration values were found as 0.23 ± 0.21 and 0.13 ± 0.13 cm/s, respectively.
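    The deposition velocities quoted above follow from dividing a deposition flux by the corresponding air concentration; a unit-conversion sketch, assuming the usual relation v_d = F/C (the study's exact phase partitioning of flux and concentration is not reproduced here):

```python
def deposition_velocity_cm_s(flux_pg_m2_day, conc_pg_m3):
    """Deposition velocity v_d = flux / concentration, converted to cm/s.

    flux in pg/(m^2 * day), concentration in pg/m^3.
    """
    v_m_per_day = flux_pg_m2_day / conc_pg_m3      # m/day
    return v_m_per_day * 100.0 / 86400.0           # 100 cm/m, 86400 s/day
```

    With this convention, a flux of 8640 pg/(m²·day) against a concentration of 100 pg/m³ corresponds to 0.1 cm/s.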

  6. Average grip strength: a meta-analysis of data obtained with a Jamar dynamometer from individuals 75 years or more of age.

    PubMed

    Bohannon, Richard W; Bear-Lehman, Jane; Desrosiers, Johanne; Massy-Westropp, Nicola; Mathiowetz, Virgil

    2007-01-01

    Although strength diminishes with age, average values for grip strength have not been available heretofore for discrete strata after 75 years. The purpose of this meta-analysis was to provide average values for the left and right hands of men and women 75-79, 80-84, 85-89, and 90-99 years. Contributing to the analysis were 7 studies and 739 subjects with whom the Jamar dynamometer and standard procedures were employed. Based on the analysis, average values for the left and right hands of men and women in each age stratum were derived. The derived values can serve as a standard of comparison for individual patients. An individual whose grip strength is below the lower limit of the confidence intervals of each stratum can be confidently considered to have less than average grip strength.
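    A sample-size-weighted pooling of study means is one simple way such stratum averages and confidence intervals can be derived; this sketch ignores between-study heterogeneity and may not match the meta-analysis's exact weighting:

```python
import math

def pooled_mean_ci(studies, z=1.96):
    """Sample-size-weighted pooled mean and approximate 95% CI.

    `studies` is a list of (n, mean, sd) tuples, one per contributing
    study for a given age stratum / hand / sex.
    """
    total_n = sum(n for n, _, _ in studies)
    mean = sum(n * m for n, m, _ in studies) / total_n
    # pooled within-study variance (ignores between-study heterogeneity)
    var = sum((n - 1) * sd ** 2 for n, _, sd in studies) / (total_n - len(studies))
    se = math.sqrt(var / total_n)
    return mean, (mean - z * se, mean + z * se)
```

    The lower CI bound plays the role described in the abstract: an individual whose grip strength falls below it can be considered below average for the stratum.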

  7. Using Acoustic Structure Quantification During B-Mode Sonography for Evaluation of Hashimoto Thyroiditis.

    PubMed

    Rhee, Sun Jung; Hong, Hyun Sook; Kim, Chul-Hee; Lee, Eun Hye; Cha, Jang Gyu; Jeong, Sun Hye

    2015-12-01

    This study aimed to evaluate the usefulness of Acoustic Structure Quantification (ASQ; Toshiba Medical Systems Corporation, Nasushiobara, Japan) values in the diagnosis of Hashimoto thyroiditis using B-mode sonography and to identify a cutoff ASQ level that differentiates Hashimoto thyroiditis from normal thyroid tissue. A total of 186 thyroid lobes with Hashimoto thyroiditis and normal thyroid glands underwent sonography with ASQ imaging. The quantitative results were reported in an echo amplitude analysis (Cm(2)) histogram with average, mode, ratio, standard deviation, blue mode, and blue average values. Receiver operating characteristic curve analysis was performed to assess the diagnostic ability of the ASQ values in differentiating Hashimoto thyroiditis from normal thyroid tissue. Intraclass correlation coefficients of the ASQ values were obtained between 2 observers. Of the 186 thyroid lobes, 103 (55%) had Hashimoto thyroiditis, and 83 (45%) were normal. There was a significant difference between the ASQ values of Hashimoto thyroiditis glands and those of normal glands (P < .001). The ASQ values in patients with Hashimoto thyroiditis were significantly greater than those in patients with normal thyroid glands. The areas under the receiver operating characteristic curves for the ratio, blue average, average, blue mode, mode, and standard deviation were: 0.936, 0.902, 0.893, 0.855, 0.846, and 0.842, respectively. The ratio cutoff value of 0.27 offered the best diagnostic performance, with sensitivity of 87.38% and specificity of 95.18%. The intraclass correlation coefficients ranged from 0.86 to 0.94, which indicated substantial agreement between the observers. Acoustic Structure Quantification is a useful and promising sonographic method for diagnosing Hashimoto thyroiditis. Not only could it be a helpful tool for quantifying thyroid echogenicity, but it also would be useful for diagnosis of Hashimoto thyroiditis. 
© 2015 by the American Institute of Ultrasound in Medicine.

  8. Biochemical parameters as monitoring markers of the inflammatory reaction by patients with chronic obstructive pulmonary disease (COPD)

    PubMed

    Lenártová, Petra; Kopčeková, Jana; Gažarová, Martina; Mrázová, Jana; Wyka, Joanna

    Chronic obstructive pulmonary disease (COPD) is an airway inflammatory disease caused by inhalation of toxic particles, mainly from cigarette smoking, and is now accepted as a disease with systemic characteristics. The aim of this work was to investigate and compare selected biochemical parameters in patients with and without COPD. The observation group consisted of clinically stable patients with COPD (n = 60). The control group consisted of healthy persons from the general population, without COPD, who were divided into two subgroups: smokers (n = 30) and non-smokers (n = 30). Laboratory parameters were measured with the automated clinical chemistry analyzer LISA 200. Albumin showed an average value of 39.55 g.l-1 in the patient group, 38.89 g.l-1 in smokers, and 44.65 g.l-1 in non-smokers. The average value of prealbumin was 0.28 ± 0.28 g.l-1 in the patient group and 0.30 ± 0.04 g.l-1 in the smokers group. The average value of orosomucoid in patients was 1.11 ± 0.90 mg.ml-1; in the smokers group, the mean value was 0.60 ± 0.13 mg.ml-1. The level of C-reactive protein (CRP) in the patient group reached an average value of 15.31 ± 22.04 mg.l-1, compared with 5.18 ± 4.58 mg.l-1 in the smokers group. The prognostic inflammatory and nutritional index (PINI) showed a mean value of 4.65 ± 10.77 in the patient group and 0.026 ± 0.025 in smokers. The results of this work show that PINI values in COPD patients are significantly higher than in smokers (P < 0.001). This, along with the other monitored parameters, indicates inflammation as well as a catabolic process occurring in the organism of patients with COPD.
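    PINI is conventionally computed as (CRP × α1-acid glycoprotein) / (albumin × prealbumin). The sketch below assumes the conventional unit set (CRP and AGP/orosomucoid in mg/L, albumin in g/L, prealbumin in mg/L); the study's exact computation and units are not stated in the abstract:

```python
def pini(crp_mg_l, agp_mg_l, albumin_g_l, prealbumin_mg_l):
    """Prognostic Inflammatory and Nutritional Index (PINI).

    Conventional form:
        PINI = (CRP [mg/L] * alpha-1-acid glycoprotein [mg/L])
               / (albumin [g/L] * prealbumin [mg/L])

    Inflammation markers sit in the numerator and nutritional (visceral
    protein) markers in the denominator, so higher PINI means higher
    inflammatory/nutritional risk; PINI < 1 is usually read as low risk.
    """
    return (crp_mg_l * agp_mg_l) / (albumin_g_l * prealbumin_mg_l)
```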

  9. White Matter Integrity in High-Altitude Pilots Exposed to Hypobaria

    PubMed Central

    McGuire, Stephen A.; Boone, Goldie R.E.; Sherman, Paul M.; Tate, David F.; Wood, Joe D.; Patel, Beenish; Eskandar, George; Wijtenburg, S. Andrea; Rowland, Laura M.; Clarke, Geoffrey D.; Grogan, Patrick M.; Sladky, John H.; Kochunov, Peter V.

    2017-01-01

    Introduction Nonhypoxic hypobaric (low atmospheric pressure) occupational exposure, such as experienced by U.S. Air Force U-2 pilots and safety personnel operating inside altitude chambers, is associated with increased subcortical white matter hyperintensity (WMH) burden. The pathophysiological mechanisms underlying this discrete WMH change remain unknown. The objectives of this study were to demonstrate that occupational exposure to nonhypoxic hypobaria is associated with altered white matter integrity as quantified by fractional anisotropy (FA) measured using diffusion tensor imaging and relate these findings to WMH burden and neurocognitive ability. Methods There were 102 U-2 pilots and 114 age- and gender-controlled, health-matched controls who underwent magnetic resonance imaging. All pilots performed neurocognitive assessment. Whole-brain and tract-wise average FA values were compared between pilots and controls, followed by comparison within pilots separated into high and low WMH burden groups. Neurocognitive measurements were used to help interpret group difference in FA values. Results Pilots had significantly lower average FA values than controls (0.489/0.500, respectively). Regionally, pilots had higher FA values in the fronto-occipital tract where FA values positively correlated with visual-spatial performance scores (0.603/0.586, respectively). There was a trend for high burden pilots to have lower FA values than low burden pilots. Discussion Nonhypoxic hypobaric exposure is associated with significantly lower average FA in young, healthy U-2 pilots. This suggests that recurrent hypobaric exposure causes diffuse axonal injury in addition to focal white matter changes. PMID:28323582

  10. Analysis of GSC 2475-1587 and GSC 841-277: Two Eclipsing Binary Stars Found During Asteroid Lightcurve Observations

    NASA Astrophysics Data System (ADS)

    Stephens, R. D.; Warner, B. D.

    2006-05-01

    When observing asteroids we select from two to five comparison stars for differential photometry, taking the average value of the comparisons for the single value to be subtracted from the value for the asteroid. As a check, the raw data of each comparison star are plotted as is the difference between any single comparison and the average of the remaining stars in the set. On more than one occasion, we have found that at least one of the comparisons was variable. In two instances, we took time away from our asteroid lightcurve work to determine the period of the two binaries and attempted to model the system using David Bradstreet's Binary Maker 3. Unfortunately, neither binary showed a total eclipse. Therefore, our results are not conclusive and present only one of many possibilities.

  11. Cost Effectiveness of Adopted Quality Requirements in Hospital Laboratories

    PubMed Central

    HAMZA, Alneil; AHMED-ABAKUR, Eltayib; ABUGROUN, Elsir; BAKHIT, Siham; HOLI, Mohamed

    2013-01-01

    Background: The present study was designed as a quasi-experiment to assess adoption of the essential clauses of particular clinical laboratory quality management requirements based on the International Organization for Standardization standard ISO 15189 in hospital laboratories, and to evaluate the cost effectiveness of compliance with ISO 15189. Methods: The quality management intervention based on ISO 15189 was conducted through three phases: a pre-intervention phase, an intervention phase, and a post-intervention phase. Results: In the pre-intervention phase, compliance with ISO 15189 was 49% for the study group vs. 47% for the control group (P value 0.48), while the post-intervention results displayed 54% vs. 79% compliance with ISO 15189 for the study group and control group respectively, a statistically significant difference (P value 0.00), with effect size (Cohen's d) of 0.00 in the pre-intervention phase and 0.99 in the post-intervention phase. The annual average cost per test for the study group and control group was 1.80 ± 0.25 vs. 1.97 ± 0.39, respectively, with P value 0.39, whereas the post-intervention results showed that the annual average total cost per test for the study group and control group was 1.57 ± 0.23 vs. 2.08 ± 0.38 (P value 0.019), respectively, with a cost-effectiveness ratio of 0.88 in the pre-intervention phase and 0.52 in the post-intervention phase. Conclusion: The planned adoption of quality management system (QMS) requirements in clinical laboratories had a great effect in increasing the percentage of compliance with quality management system requirements, raising the average total cost effectiveness, and improving the analytical process capability of the testing procedure. PMID:23967422

  12. Microstructure and mesh sensitivities of mesoscale surrogate driving force measures for transgranular fatigue cracks in polycrystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castelluccio, Gustavo M.; McDowell, David L.

    The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum life design of fatigue resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary both to achieve description of a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant amplitude remote applied straining are characterized in terms of the extreme value distributions of volume averaged FIPs. Grain averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Volume averaging over bands that encompass critical transgranular slip planes appears to present the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.

  13. Microstructure and mesh sensitivities of mesoscale surrogate driving force measures for transgranular fatigue cracks in polycrystals

    DOE PAGES

    Castelluccio, Gustavo M.; McDowell, David L.

    2015-05-22

    The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum life design of fatigue resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary both to achieve description of a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant amplitude remote applied straining are characterized in terms of the extreme value distributions of volume averaged FIPs. Grain averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Volume averaging over bands that encompass critical transgranular slip planes appears to present the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.

  14. Jitter compensation circuit

    DOEpatents

    Sullivan, James S.; Ball, Don G.

    1997-01-01

    The instantaneous V_co signal on a charging capacitor is sampled and the charge voltage on capacitor C_o is captured just prior to its discharge into the first stage of the magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V_co signal is split between a gain stage (G=0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high speed comparator. The 97.5% portion of the averaged V_co signal is applied to the negative input of a differential amplifier gain stage (G=10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V_co signal from the instantaneous value of the sampled V_co signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V_co value squared divided by the total volt-second product of the magnetic compression circuit.
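    The error-stage arithmetic described above reduces to error = 10 × (sampled V_co − 0.975 × averaged V_co); a sketch with assumed parameter names (the actual circuit is analog, so this only models the arithmetic):

```python
def jitter_error_signal(v_co_sampled, v_co_averaged, gain=10.0, frac=0.975):
    """Error signal of the differential amplifier stage:
    error = gain * (instantaneous sampled V_co - frac * averaged V_co).

    When the sampled charge voltage equals its long-term average, the
    error is gain * (1 - frac) * V_co; drift above or below the average
    moves the error up or down, shifting the comparator trip point.
    """
    return gain * (v_co_sampled - frac * v_co_averaged)
```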

  15. Jitter compensation circuit

    DOEpatents

    Sullivan, J.S.; Ball, D.G.

    1997-09-09

    The instantaneous V_co signal on a charging capacitor is sampled and the charge voltage on capacitor C_o is captured just prior to its discharge into the first stage of the magnetic modulator. The captured signal is applied to an averaging circuit with a long time constant and to the positive input terminal of a differential amplifier. The averaged V_co signal is split between a gain stage (G = 0.975) and a feedback stage that determines the slope of the voltage ramp applied to the high speed comparator. The 97.5% portion of the averaged V_co signal is applied to the negative input of a differential amplifier gain stage (G = 10). The differential amplifier produces an error signal by subtracting 97.5% of the averaged V_co signal from the instantaneous value of the sampled V_co signal and multiplying the difference by ten. The resulting error signal is applied to the positive input of a high speed comparator. The error signal is then compared to a voltage ramp that is proportional to the averaged V_co value squared divided by the total volt-second product of the magnetic compression circuit. 11 figs.

  16. Radiation Surveys of the Naval Postgraduate School LINAC.

    DTIC Science & Technology

    1992-06-01

    This report documents radiation surveys supporting personnel dosimetry at the NPS LINAC, resulting in a reduction of the TLD-measured neutron dose evaluation for personnel. Average TLD neutron energy correction factors (NECFs) were determined for combinations of electron energy and slit width; the final value, obtained at 90 MeV electron energy, was NECF = 0.341 ± 0.015.

  17. Neutron Lifetime and Axial Coupling Connection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czarnecki, Andrzej; Marciano, William J.; Sirlin, Alberto

    Here, experimental studies of neutron decay, n→pe¯ν, exhibit two anomalies. The first is an 8.6(2.1) s, roughly 4σ difference between the average beam-measured neutron lifetime, τ_n^beam = 888.0(2.0) s, and the more precise average trapped ultracold neutron determination, τ_n^trap = 879.4(6) s. The second is a 5σ difference between the pre-2002 average axial coupling, g_A, as measured in neutron decay asymmetries, g_A^pre-2002 = 1.2637(21), and the more recent, post-2002, average g_A^post-2002 = 1.2755(11), where, following the UCNA Collaboration division, experiments are classified by the date of their most recent result. In this Letter, we correlate those τ_n and g_A values using a (slightly) updated relation τ_n(1+3g_A²) = 5172.0(1.1) s. Consistency with that relation and better precision suggest τ_n^favored = 879.4(6) s and g_A^favored = 1.2755(11) as preferred values for those parameters. Comparisons of g_A^favored with recent lattice QCD and muonic hydrogen capture results are made. A general constraint on exotic neutron decay branching ratios, <0.27%, is discussed and applied to a recently proposed solution to the neutron lifetime puzzle.

  18. Detection of Periodic Leg Movements by Machine Learning Methods Using Polysomnographic Parameters Other Than Leg Electromyography

    PubMed Central

    Umut, İlhan; Çentik, Güven

    2016-01-01

    The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of problems during the recording process and increases the storage volume. In this study, we aimed to detect periodic leg movements (PLM) in sleep using channels other than leg electromyography (EMG), by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software was developed for the analysis of the PSG records; it combines machine learning algorithms, statistical methods, and DSP methods. To classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour algorithm had the highest average classification rate (91.87%) and the lowest average classification error (RMSE = 0.2850), while the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error (RMSE = 0.3705). The results show that PLM can be classified with high accuracy (91.87%) without a leg EMG record being present. PMID:27213008

  19. Detection of Periodic Leg Movements by Machine Learning Methods Using Polysomnographic Parameters Other Than Leg Electromyography.

    PubMed

    Umut, İlhan; Çentik, Güven

    2016-01-01

    The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of problems during the recording process and increases the storage volume. In this study, we aimed to detect periodic leg movements (PLM) in sleep using channels other than leg electromyography (EMG), by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software was developed for the analysis of the PSG records; it combines machine learning algorithms, statistical methods, and DSP methods. To classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour algorithm had the highest average classification rate (91.87%) and the lowest average classification error (RMSE = 0.2850), while the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error (RMSE = 0.3705). The results show that PLM can be classified with high accuracy (91.87%) without a leg EMG record being present.
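    The K-nearest-neighbour approach that performed best can be illustrated with a toy version (a self-contained sketch on synthetic two-feature epochs, not the authors' software; the feature values and labels are invented for illustration):

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among the k nearest training points."""
    order = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Toy "epochs": two hypothetical per-epoch features (e.g. a heart-rate summary
# and an EEG band-power summary), labelled 1 = PLM epoch, 0 = no PLM.
train = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25),
         (0.90, 0.80), (0.80, 0.90), (0.85, 0.75)]
labels = [0, 0, 0, 1, 1, 1]

print(knn_predict(train, labels, (0.12, 0.18)))  # 0
print(knn_predict(train, labels, (0.88, 0.82)))  # 1
```

    In the study, such epoch-level predictions from the non-EMG channels are what the reported 91.87% classification rate summarizes.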

  20. Evolution of the Orszag-Tang vortex system in a compressible medium. I - Initial average subsonic flow

    NASA Technical Reports Server (NTRS)

    Dahlburg, R. B.; Picone, J. M.

    1989-01-01

    The results of fully compressible, Fourier collocation, numerical simulations of the Orszag-Tang vortex system are presented. The initial conditions for this system consist of a nonrandom, periodic field in which the magnetic and velocity field contain X points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial, average Mach number of the flow. In these numerical simulations, this initial Mach number is varied from 0.2 to 0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. It is found that compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as the mass density and the nonsolenoidal flow field. These effects include (1) a retardation of growth of correlation between the magnetic field and the velocity field, (2) the emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.

  1. Neutron Lifetime and Axial Coupling Connection

    DOE PAGES

    Czarnecki, Andrzej; Marciano, William J.; Sirlin, Alberto

    2018-05-16

    Here, experimental studies of neutron decay, n → p e ν̄, exhibit two anomalies. The first is an 8.6(2.1) s, roughly 4σ difference between the average beam-measured neutron lifetime, τ_n^beam = 888.0(2.0) s, and the more precise average trapped ultracold neutron determination, τ_n^trap = 879.4(6) s. The second is a 5σ difference between the pre-2002 average axial coupling, g_A, as measured in neutron decay asymmetries, g_A^pre-2002 = 1.2637(21), and the more recent, post-2002 average, g_A^post-2002 = 1.2755(11), where, following the UCNA Collaboration division, experiments are classified by the date of their most recent result. In this Letter, we correlate those τ_n and g_A values using a (slightly) updated relation τ_n(1 + 3g_A²) = 5172.0(1.1) s. Consistency with that relation and better precision suggest τ_n^favored = 879.4(6) s and g_A^favored = 1.2755(11) as preferred values for those parameters. Comparisons of g_A^favored with recent lattice QCD and muonic hydrogen capture results are made. A general constraint on exotic neutron decay branching ratios, <0.27%, is discussed and applied to a recently proposed solution to the neutron lifetime puzzle.

  2. The impact of network characteristics on the diffusion of innovations

    NASA Astrophysics Data System (ADS)

    Peres, Renana

    2014-05-01

    This paper studies the influence of network topology on the speed and reach of new product diffusion. While previous research has focused on comparing network types, this paper explores explicitly the relationship between topology and measurements of diffusion effectiveness. We study simultaneously the effect of three network metrics: the average degree, the relative degree of social hubs (i.e., the ratio of the average degree of highly-connected individuals to the average degree of the entire population), and the clustering coefficient. A novel network-generation procedure based on random graphs with a planted partition is used to generate 160 networks with a wide range of values for these topological metrics. Using an agent-based model, we simulate diffusion on these networks and check the dependence of the net present value (NPV) of the number of adopters over time on the network metrics. We find that the average degree and the relative degree of social hubs have a positive influence on diffusion. This result emphasizes the importance of high network connectivity and strong hubs. The clustering coefficient has a negative impact on diffusion, a finding that contributes to the ongoing controversy on the benefits and disadvantages of transitivity. These results hold for both monopolistic and duopolistic markets, and were also tested on a sample of 12 real networks.
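    The outcome measure used in the paper, the net present value of the number of adopters over time, can be sketched as follows (an illustrative discount rate and invented adoption curves, not the paper's agent-based simulation):

```python
def npv_adopters(adopters_per_period, rate=0.1):
    """Discounted sum of new adopters: sum over t of a_t / (1 + rate)**t."""
    return sum(a / (1.0 + rate) ** t
               for t, a in enumerate(adopters_per_period, start=1))

# Faster diffusion front-loads adopters and therefore yields a higher NPV,
# which is why NPV is a sensible single number for "speed and reach".
fast = [50, 30, 15, 5]
slow = [5, 15, 30, 50]
print(npv_adopters(fast) > npv_adopters(slow))  # True
```

    Under this measure, network changes that accelerate early adoption (higher average degree, stronger hubs) raise the NPV even when the eventual reach is the same.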

  3. Application of the SCL-PBL method to increase learning quality in the Industrial Statistics course, Department of Industrial Engineering, Pancasila University

    NASA Astrophysics Data System (ADS)

    Darmawan, M.; Hidayah, N. Y.

    2017-12-01

    Currently, there has been a paradigm change in learning models in college, from the Teacher Centered Learning (TCL) model to Student Centered Learning (SCL). It is generally assumed that the SCL model is better than the TCL model. The 2nd Industrial Statistics course in the Department of Industrial Engineering, Pancasila University, belongs to the Basic Engineering group. So far, the applied learning model has referred mostly to the TCL model, and field facts show that the learning outcomes are less than satisfactory. Over three consecutive semesters, the even semesters of 2013/2014, 2014/2015, and 2015/2016, the grade averages obtained were 56.0, 61.1, and 60.5, respectively. In the even semester of 2016/2017, Classroom Action Research (CAR) was conducted for this course through the implementation of the SCL model with the Problem Based Learning (PBL) method. The hypothesis proposed is that the SCL-PBL model will be able to improve the final grade of the course. The results show that the average grade of the course increased to 73.27. This value was then tested using ANOVA, and the test concluded that the average grade was significantly different from the average grades of the previous three semesters.

  4. Study of T-wave morphology parameters based on Principal Components Analysis during acute myocardial ischemia

    NASA Astrophysics Data System (ADS)

    Baglivo, Fabricio Hugo; Arini, Pedro David

    2011-12-01

    Electrocardiographic repolarization abnormalities can be detected by Principal Components Analysis of the T-wave. In this work we studied the effect of signal averaging on the mean value and reproducibility of the ratio of the 2nd to the 1st eigenvalue of the T-wave (T21W) and the absolute and relative T-wave residuum (TrelWR and TabsWR) in the ECG during ischemia induced by Percutaneous Coronary Intervention. The intra-subject and inter-subject variability of the T-wave parameters was also analyzed. Results showed that TrelWR and TabsWR evaluated from the average of 10 complexes had lower values and higher reproducibility than those obtained from 1 complex. On the other hand, T21W calculated from 10 complexes did not show statistical differences versus T21W calculated on single beats. The results of this study corroborate that, with a signal-averaging technique, the 2nd and 1st eigenvalues are not affected by noise while the 4th to 8th eigenvalues are strongly affected by it, suggesting the use of the signal-averaging technique before calculation of the absolute and relative T-wave residuum. Finally, we have shown that the T-wave morphology parameters present high intra-subject stability.
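    The T21W parameter is the ratio λ2/λ1 of the eigenvalues of the T-wave covariance matrix. For two leads the eigenvalues follow in closed form from the 2×2 covariance matrix (a generic sketch with synthetic samples, not the study's multi-lead computation):

```python
import math

def t21w(x, y):
    """Ratio of the 2nd to the 1st eigenvalue of the 2x2 covariance of x, y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / n
    syy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via trace/determinant.
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    return lam2 / lam1

# Nearly collinear "leads" (a one-dimensional T-wave loop) give a tiny ratio.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [2 * v + 0.01 * (-1) ** i for i, v in enumerate(x)]
print(t21w(x, y) < 0.01)  # True
```

    Noise inflates the smaller eigenvalues, which is why averaging beats before computing the ratio matters for the higher-order eigenvalues.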

  5. Evolution of the Orszag-Tang vortex system in a compressible medium. I. Initial average subsonic flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlburg, R.B.; Picone, J.M.

    In this paper the results of fully compressible, Fourier collocation, numerical simulations of the Orszag-Tang vortex system are presented. The initial conditions for this system consist of a nonrandom, periodic field in which the magnetic and velocity field contain X points but differ in modal structure along one spatial direction. The velocity field is initially solenoidal, with the total initial pressure field consisting of the superposition of the appropriate incompressible pressure distribution upon a flat pressure field corresponding to the initial, average Mach number of the flow. In these numerical simulations, this initial Mach number is varied from 0.2 to 0.6. These values correspond to average plasma beta values ranging from 30.0 to 3.3, respectively. It is found that compressible effects develop within one or two Alfven transit times, as manifested in the spectra of compressible quantities such as the mass density and the nonsolenoidal flow field. These effects include (1) a retardation of growth of correlation between the magnetic field and the velocity field, (2) the emergence of compressible small-scale structure such as massive jets, and (3) bifurcation of eddies in the compressible flow field. Differences between the incompressible and compressible results tend to increase with increasing initial average Mach number.

  6. Evaluation results of the 700 deg C Chinese strain gauges. [for gas turbine engine

    NASA Technical Reports Server (NTRS)

    Hobart, H. F.

    1985-01-01

    Gauges fabricated from specially developed Fe-Cr-Al-V-Ti-Y alloy wire in the Republic of China were evaluated for use in static strain measurement of hot gas turbine engines. Tests covered gauge factor variation with temperature, apparent strain, and drift. Results of gauge factor versus temperature tests show the gauge factor decreasing with increasing temperature. The average slope is about -3.5 percent per 100 K, with an uncertainty band of ±8 percent. Values of room temperature gauge factor for the Chinese and Kanthal A-1 gauges averaged 2.73 and 2.12, respectively. The room temperature gauge factor of the Chinese gauges was specified to be 2.62. The apparent strain data for both the Chinese alloy and Kanthal A-1 showed large cycle-to-cycle nonrepeatability. All apparent strain curves had a similar S-shape, first going negative and then rising to positive values with increasing temperature. The mean curve for the Chinese gauges between room temperature and 100 K had a total apparent strain of 1500 microstrain. The equivalent value for Kanthal A-1 was about 9000 microstrain. Drift tests at 950 K for 50 hr show an average drift rate of about -9 microstrain/hr. Short-term (1 hr) rates are higher, averaging about -40 microstrain for the first hour. In the temperature range 700 to 870 K, however, short-term drift rates can be as high as 1700 microstrain for the first hour. Therefore, static strain measurements in this temperature range should be avoided.

  7. Diagnostic features of quantitative comb-push shear elastography for breast lesion differentiation.

    PubMed

    Bayat, Mahdi; Denis, Max; Gregory, Adriana; Mehrmohammadi, Mohammad; Kumar, Viksit; Meixner, Duane; Fazzio, Robert T; Fatemi, Mostafa; Alizad, Azra

    2017-01-01

    Lesion stiffness measured by shear wave elastography has been shown to effectively separate benign from malignant breast masses. The aim of this study was to evaluate different aspects of the performance of Comb-push Ultrasound Shear Elastography (CUSE) in differentiating breast masses. With written signed informed consent, this HIPAA-compliant, IRB-approved prospective study included patients from April 2014 through August 2016 with breast masses identified on conventional imaging. Data from 223 patients (19-85 years, mean 59.93 ± 14.96 years) with 227 suspicious breast masses identifiable by ultrasound (mean size 1.83 ± 2.45 cm) were analyzed. CUSE was performed on all patients. Three regions of interest (ROI), 3 mm in diameter each, were selected inside the lesion on the B-mode ultrasound image, in areas that also appeared in the corresponding shear wave map. Lesion elasticity values were measured in terms of the Young's modulus. Statistical analyses were performed against the pathology results. Pathology revealed 108 lesions as malignant and 115 lesions as benign. Additionally, 4 lesions (BI-RADS 2 and 3) were considered benign and were not biopsied. Average lesion stiffness measured by CUSE resulted in 84.26% sensitivity (91 of 108), 89.92% specificity (107 of 119), 85.6% positive predictive value, 89% negative predictive value, and 0.91 area under the curve (P < 0.0001). Stiffness maps showed spatial continuity, such that maximum and average elasticity did not yield significantly different results (P > 0.21). CUSE was able to distinguish between benign and malignant breast masses with high sensitivity and specificity. The continuity of the stiffness maps allowed multiple quantification ROIs to be chosen, covering large areas of lesions and yielding similar diagnostic performance based on average and maximum elasticity. The overall results of this study highlight the clinical value of CUSE in the differentiation of breast masses based on their stiffness.
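    The headline operating characteristics follow directly from the reported counts (a quick re-derivation of the abstract's sensitivity and specificity; the counts are taken from the text, and the stated predictive values additionally depend on the study's prevalence):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 91 of 108 malignant lesions called positive; 107 of 119 benign called negative.
sens, spec = sens_spec(tp=91, fn=17, tn=107, fp=12)
print(f"{sens:.2%} {spec:.2%}")  # 84.26% 89.92%
```

    These match the abstract's reported 84.26% sensitivity and 89.92% specificity.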

  8. Service use and financial performance in a replication program on adult day centers.

    PubMed

    Reifler, B V; Cox, N J; Jones, B N; Rushing, J; Yates, K

    1999-01-01

    The authors describe results from Partners in Caregiving: The Dementia Services Program, and present information on service utilization and financial performance among a group of 48 adult day centers across the United States from 1992 to 1996. Centers, with nonrandom assignment, received either grant support (average value: $93,000) or intensive technical assistance (average value: $39,000). Sites reported baseline data and submitted utilization information (enrollment and census) and financial data (revenue and expenses) quarterly. Overall, there were significant increases in enrollment, census, and financial performance (percent of cash expenses met through operating revenue) over the 4-year period. The grant-supported and technical-assistance sites had similar rates of improvement. Results provide data on service utilization and financial performance and demonstrate gains that can be achieved in these areas through improved marketing and financial management.

  9. Attitude towards technology, social media usage and grade-point average as predictors of global citizenship identification in Filipino University Students.

    PubMed

    Lee, Romeo B; Baring, Rito; Maria, Madelene Sta; Reysen, Stephen

    2017-06-01

    We examine the influence of a positive attitude towards technology, number of social media network memberships, and grade-point average (GPA) on global citizenship identification antecedents and outcomes. Students (N = 3628) at a university in the Philippines completed a survey assessing the above constructs. The results showed that attitude towards technology, number of social network site memberships, and GPA predicted global citizenship identification, and subsequent prosocial outcomes (e.g. intergroup helping, responsibility to act for the betterment of the world), through the perception that valued others prescribe a global citizen identity (normative environment) and perceived knowledge of the world and felt interconnectedness with others (global awareness). The results highlight the associations of technology and academic performance with a global identity and associated values. © 2015 International Union of Psychological Science.

  10. Use of computer code for dose distribution studies in a 60Co industrial irradiator

    NASA Astrophysics Data System (ADS)

    Piña-Villalpando, G.; Sloan, D. P.

    1995-09-01

    This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes, with an apparent density of 0.13 g/cm3; that product was chosen because of its uniform size, large quantity, and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique, build-up factor fitting is done by geometric progression, and combinatorial geometry is used for the system description. The main modifications to the code were related to source simulation: point sources were used instead of pencil sources, and energy and anisotropic emission spectra were included. As a result, for the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was about 3% lower than the experimental average value (14.3 kGy).

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, S; Suh, T; Chung, J

    Purpose: The purpose of this study is to evaluate the dosimetric and radiobiological impact of the Acuros XB (AXB) and Anisotropic Analytic Algorithm (AAA) dose calculation algorithms on prostate stereotactic body radiation therapy (SBRT) plans with both conventional flattened (FF) and flattening-filter-free (FFF) modes. Methods: For thirteen patients with prostate cancer, SBRT planning was performed using a 10-MV photon beam with FF and FFF modes. The total dose prescribed to the PTV was 42.7 Gy in 7 fractions. All plans were initially calculated using the AAA algorithm in the Eclipse treatment planning system (11.0.34), and were then re-calculated using AXB with the same MUs and MLC files. The four types of plans, for the different algorithms and beam modes, were compared in terms of homogeneity and conformity. To evaluate the radiobiological impact, tumor control probability (TCP) and normal tissue complication probability (NTCP) calculations were performed. Results: For the PTV, both calculation algorithms and beam modes led to comparable homogeneity and conformity. However, the averaged TCP values in AXB plans were always lower than in AAA plans, with an average difference of 5.3% and 6.1% for the 10-MV FFF and FF beams, respectively. In addition, the averaged NTCP values for organs at risk (OARs) were comparable. Conclusion: This study showed that prostate SBRT plans gave comparable dosimetric results with the different dose calculation algorithms and delivery beam modes. For the biological results, even though NTCP values for both calculation algorithms and beam modes were similar, AXB plans produced slightly lower TCP than the AAA plans.

  12. Precision half-life measurement of 11C: The most precise mirror transition Ft value

    NASA Astrophysics Data System (ADS)

    Valverde, A. A.; Brodeur, M.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Blankstein, D.; Brown, G.; Burdette, D. P.; Frentz, B.; Gilardy, G.; Hall, M. R.; King, S.; Kolata, J. J.; Long, J.; Macon, K. T.; Nelson, A.; O'Malley, P. D.; Skulski, M.; Strauss, S. Y.; Vande Kolk, B.

    2018-03-01

    Background: The precise determination of the Ft value in T = 1/2 mixed mirror decays is an important avenue for testing the standard model of the electroweak interaction through the determination of V_ud in nuclear β decays. 11C is an interesting case, as its low mass and small Q_EC value make it particularly sensitive to violations of the conserved vector current hypothesis. The present dominant source of uncertainty in the 11C Ft value is the half-life. Purpose: A high-precision measurement of the 11C half-life was performed, and a new world average half-life was calculated. Method: 11C was created by transfer reactions and separated using the TwinSol facility at the Nuclear Science Laboratory at the University of Notre Dame. It was then implanted into a tantalum foil, and β counting was used to determine the half-life. Results: The new half-life, t_1/2 = 1220.27(26) s, is consistent with the previous values but significantly more precise. A new world average was calculated, t_1/2^world = 1220.41(32) s, and a new estimate for the Gamow-Teller to Fermi mixing ratio ρ is presented along with standard model correlation parameters. Conclusions: The new 11C world average half-life allows the calculation of an Ft^mirror value that is now the most precise for all superallowed mixed mirror transitions. This gives a strong impetus for an experimental determination of ρ, to allow the determination of V_ud from this decay.
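    World averages of this kind are conventionally formed by inverse-variance weighting of the individual measurements (a generic sketch; the second input value below is hypothetical, since the abstract does not list the earlier measurements that enter the actual average):

```python
def weighted_average(values, errors):
    """Inverse-variance weighted mean and its uncertainty."""
    weights = [1.0 / e ** 2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5

# Illustrative only: combine the new half-life with a hypothetical older value.
mean, err = weighted_average([1220.27, 1221.0], [0.26, 0.8])
print(f"{mean:.2f}({err:.2f}) s")
```

    The more precise measurement dominates the weight, which is why the new result pulls the world average close to 1220.27 s while shrinking its uncertainty.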

  13. Using Zipf-Mandelbrot law and graph theory to evaluate animal welfare

    NASA Astrophysics Data System (ADS)

    de Oliveira, Caprice G. L.; Miranda, José G. V.; Japyassú, Hilton F.; El-Hani, Charbel N.

    2018-02-01

    This work deals with the construction and testing of metrics of welfare based on behavioral complexity, using assumptions derived from Zipf-Mandelbrot law and graph theory. To test these metrics we compared yellow-breasted capuchins (Sapajus xanthosternos) (Wied-Neuwied, 1826) (PRIMATES CEBIDAE) found in two institutions, subjected to different captive conditions: a Zoobotanical Garden (hereafter, ZOO; n = 14), in good welfare condition, and a Wildlife Rescue Center (hereafter, WRC; n = 8), in poor welfare condition. In the Zipf-Mandelbrot-based analysis, the power law exponent was calculated using behavior frequency values versus behavior rank value. These values allow us to evaluate variations in individual behavioral complexity. For each individual we also constructed a graph using the sequence of behavioral units displayed in each recording (average recording time per individual: 4 h 26 min in the ZOO, 4 h 30 min in the WRC). Then, we calculated the values of the main graph attributes, which allowed us to analyze the complexity of the connectivity of the behaviors displayed in the individuals' behavioral sequences. We found significant differences between the two groups for the slope values in the Zipf-Mandelbrot analysis. The slope values for the ZOO individuals approached -1, with graphs representing a power law, while the values for the WRC individuals diverged from -1, differing from a power law pattern. Likewise, we found significant differences for the graph attributes average degree, weighted average degree, and clustering coefficient when comparing the ZOO and WRC individual graphs. However, no significant difference was found for the attributes modularity and average path length. Both analyses were effective in detecting differences between the patterns of behavioral complexity in the two groups. The slope values for the ZOO individuals indicated a higher behavioral complexity when compared to the WRC individuals. 
Similarly, graph construction and the calculation of its attributes values allowed us to show that the complexity of the connectivity among the behaviors was higher in the ZOO than in the WRC individual graphs. These results show that the two measuring approaches introduced and tested in this paper were capable of capturing the differences in welfare levels between the two conditions, as shown by differences in behavioral complexity.
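    The slope used in the Zipf-Mandelbrot analysis above is, in essence, a least-squares fit of log frequency against log rank (a minimal sketch on an invented behavioural frequency table, not the study's data):

```python
import math

def zipf_slope(frequencies):
    """Least-squares slope of log(frequency) vs log(rank), ranks from 1."""
    freqs = sorted(frequencies, reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A perfect Zipf distribution (frequency ~ 1/rank) gives a slope of -1,
# the value the ZOO individuals' behavioural repertoires approached.
print(round(zipf_slope([1000 / r for r in range(1, 21)]), 6))  # -1.0
```

    Departures of the fitted slope from -1, as seen in the WRC individuals, then quantify the loss of power-law structure in the behavioural repertoire.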

  14. Occupational burnout levels in emergency medicine--a nationwide study and analysis.

    PubMed

    Popa, Florian; Raed, Arafat; Purcarea, Victor Lorin; Lală, Adrian; Bobirnac, George

    2010-01-01

    The specific nature of the emergency medical act manifests itself strongly through a wide series of psycho-traumatizing factors, augmented both by the vulnerable situation of the patient and the paroxysmal character of the act. It has also been recognized that the physical demands and distress levels are the highest among all medical specialties, which is a valuable marker for establishing the quality of the medical act. We surveyed a total of 4725 emergency medical workers with the MBI-HSS instrument, receiving 4693 valid surveys (99.32% response rate). Professional categories included Emergency Department doctors (M-EMD), ambulance doctors (M-AMB), ED doctors with field work in emergency and resuscitation (including mobile intensive care units and airborne intensive care units) (M-SMU), medical nurses in Emergency Departments (N-EMD), medical nurses in the ambulance service (N-AMB), ED medical nurses with field activity in emergency and resuscitation (N-SMU), ambulance drivers (DRV), and paramedics (EMT). The n values for every category of subjects and the percentage of system coverage (table 3) show that we covered an estimated total of 29.94% of Romanian emergency medical field workers. The MBI-HSS results show a moderate to high level of occupational stress for the surveyed subjects. The average values for the three parameters, corresponding to the entire Romanian emergency medical field, were 1.41 for EE, 0.99 for DP, and 4.47 for PA (95% CI). Average results stratified by professional category show higher EE average values (v) for the M-SMU (v = 2.01, 95% CI) and M-EMD (v = 2.21, 95% CI) groups, corresponding to higher DP values for the same groups (vM-EMD = 1.41 and vM-SMU = 1.22, 95% CI). PA values for these groups are below average, corresponding to an increased risk factor for high degrees of burnout. Calculated PA values are 4.30 for the M-EMD group and 4.20 for the M-SMU group. 
Of all surveyed groups, our study shows a high risk of burnout, consisting of high emotional exhaustion (EE) and high depersonalization (DP) values, for Emergency Department doctors (M-EMD) and emergency and resuscitation service doctors (M-SMU). Possible explanations may be linked to high patient flow, Emergency Department crowding, long work hours, and individual parameters such as coping mechanisms, social development, and work environment.

  15. [Nitrogen balance assessment in burn patients].

    PubMed

    Beça, Andreia; Egipto, Paula; Carvalho, Davide; Correia, Flora; Oliveira, Bruno; Rodrigues, Acácio; Amarante, José; Medina, J Luís

    2010-01-01

    The burn injury probably represents the largest stimulus for muscle protein catabolism. This state is characterized by an accelerated catabolism of lean or skeletal mass that results in a clinically negative nitrogen balance and muscle wasting. The determination of an appropriate value for protein intake is essential, since it is positively related to the nitrogen balance (NB); accordingly, several authors argue that a positive NB is the key parameter associated with the nutritional improvement of a burn patient. Our aims were to evaluate the degree of protein catabolism by assessment of the nitrogen balance, and to define the nutritional support (protein needs) to implement in patients with burned surface area (BSA) ≥ 10%. We prospectively evaluated the clinical files and scrutinized the clinical variables of interest. The NB was estimated according to three formulae. Each gram of nitrogen calculated by the NB was then converted into grams of protein, subtracted from or added to the protein intake (administered enterally or parenterally), and divided by kg of reference weight (kg Rweight), in an attempt to estimate the daily protein needs. The cohort consisted of 10 patients, 6 females, with an average age of 58(23) years and a mean BSA of 21.4(8.4)%, ranging from a minimum of 10.0% to a maximum of 35.0%. The average number of days of hospitalization in the burn unit was 64.8(36.5). We observed significant differences between the 3 methods used for calculating the NB (p = 0.004); on average, the NB was positive. When formula A was used, the average value of the NB was higher. Regarding the attempt to estimate the needs in g Prot/kg Rweight/day, most of the values did not exceed, on average, 2.6 g Prot/kg Rweight/day, and no significant differences were found between patients with a BSA of 10-20% and those with a BSA > 20%. 
Despite being able to estimate the protein catabolism through these formulae, and despite verifying that most values were above zero, wide individual fluctuations were visible over time. Based on the reference that recommends a value of 1.5-2 g Prot/kg Rweight/day, we can conclude that it is underestimated when compared with the mean value of 2.6 g Prot/kg Rweight/day we established.
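    The nitrogen-to-protein conversion underlying these estimates uses the standard factor of 6.25 g protein per g nitrogen (a generic clinical sketch; the urinary-nitrogen formula shown is one common variant, not necessarily any of the study's three formulae, and the input values are hypothetical):

```python
def nitrogen_balance(protein_intake_g, urinary_urea_n_g, insensible_loss_g=4.0):
    """NB (g N/day) = nitrogen intake - (urinary urea N + insensible losses)."""
    intake_n = protein_intake_g / 6.25  # 6.25 g protein per g nitrogen
    return intake_n - (urinary_urea_n_g + insensible_loss_g)

def protein_needs_per_kg(protein_intake_g, nb_g, ref_weight_kg):
    """Convert an NB deficit/surplus back into g protein/kg Rweight/day."""
    return (protein_intake_g - nb_g * 6.25) / ref_weight_kg

nb = nitrogen_balance(protein_intake_g=120.0, urinary_urea_n_g=18.0)
print(round(nb, 2))                                     # -2.8 g N/day
print(round(protein_needs_per_kg(120.0, nb, 70.0), 2))  # 1.96 g Prot/kg/day
```

    A negative NB thus translates directly into extra grams of protein per kg needed to close the deficit, which is the logic behind the study's 2.6 g Prot/kg Rweight/day estimate.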

  16. The average direct current offset values for small digital audio recorders in an acoustically consistent environment.

    PubMed

    Koenig, Bruce E; Lacey, Douglas S

    2014-07-01

    In this research project, nine small digital audio recorders were tested using five sets of 30-min recordings at all available recording modes, with consistent audio material, identical source and microphone locations, and identical acoustic environments. The averaged direct current (DC) offset values and standard deviations were measured for 30-sec and 1-, 2-, 3-, 6-, 10-, 15-, and 30-min segments. The research found an inverse association between segment lengths and the standard deviation values and that lengths beyond 30 min may not meaningfully reduce the standard deviation values. This research supports previous studies indicating that measured averaged DC offsets should only be used for exclusionary purposes in authenticity analyses and exhibit consistent values when the general acoustic environment and microphone/recorder configurations were held constant. Measured average DC offset values from exemplar recorders may not be directly comparable to those of submitted digital audio recordings without exactly duplicating the acoustic environment and microphone/recorder configurations. © 2014 American Academy of Forensic Sciences.
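    The segment analysis described above reduces to computing the mean sample value (the DC offset) of each segment and the spread of those means across segments (a minimal sketch on a synthetic noisy waveform, not the recorders' audio data):

```python
import math
import random

def segment_dc_offsets(samples, segment_len):
    """Mean sample value (DC offset) of each full segment."""
    return [sum(samples[i:i + segment_len]) / segment_len
            for i in range(0, len(samples) - segment_len + 1, segment_len)]

def spread(values):
    """Population standard deviation of the segment means."""
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

# Synthetic signal: a constant 0.02 offset plus zero-mean noise.
random.seed(0)
samples = [0.02 + random.gauss(0.0, 0.5) for _ in range(6000)]
short = spread(segment_dc_offsets(samples, 100))
long_ = spread(segment_dc_offsets(samples, 1000))
# Longer segments average away more of the noise, so the spread of
# their means shrinks -- the inverse association the study reports.
print(f"spread(100-sample means)={short:.4f}  spread(1000-sample means)={long_:.4f}")
```

    The plateau the authors observe beyond 30-minute segments corresponds to the point where further averaging no longer meaningfully reduces this spread.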

  17. Force sum rules for stepped surfaces of jellium

    NASA Astrophysics Data System (ADS)

    Farjam, Mani

    2007-03-01

    The Budd-Vannimenus theorem for the jellium surface is generalized to stepped surfaces of jellium. Our sum rules show that the average value of the electrostatic potential over the stepped jellium surface equals the value of the potential at the corresponding flat jellium surface. Several sum rules are tested against numerical results obtained within the Thomas-Fermi model of stepped surfaces.

  18. Seasonal variation in natural abundance of δ13C and 15N in Salicornia brachiata Roxb. populations from a coastal area of India.

    PubMed

    Chaudhary, Doongar R; Seo, Juyoung; Kang, Hojeong; Rathore, Aditya P; Jha, Bhavanath

    2018-05-01

    High and fluctuating salinity is characteristic of coastal salt marshes and strongly affects the physiology of halophytes, consequently changing the stable isotope distribution. The natural abundance of stable isotopes (δ13C and δ15N) in the halophyte Salicornia brachiata and the physico-chemical characteristics of soils were analysed in order to investigate the stable isotope distribution in different populations over a growing period in the coastal area of Gujarat, India. Aboveground and belowground biomass of S. brachiata was collected from six populations at five times (September 2014, November 2014, January 2015, March 2015 and May 2015). The δ13C values in aboveground (-30.8 to -23.6‰, average: -26.6 ± 0.4‰) and belowground biomass (-30.0 to -23.1‰, average: -26.3 ± 0.4‰) were similar. The δ13C values were positively correlated with soil salinity and Na concentration, and negatively correlated with soil mineral nitrogen. The δ15N values of aboveground biomass (6.7-16.1‰, average: 9.6 ± 0.4‰) were comparatively higher than those of belowground biomass (5.4-13.2‰, average: 7.8 ± 0.3‰). The δ15N values were negatively correlated with soil available P. We conclude that the variation in δ13C values of S. brachiata was possibly caused by soil salinity (and associated Na content) and N limitation, which demonstrates the potential of δ13C as an indicator of stress in plants.

  19. Intra- and Interobserver Variability of Cochlear Length Measurements in Clinical CT.

    PubMed

    Iyaniwura, John E; Elfarnawany, Mai; Riyahi-Alam, Sadegh; Sharma, Manas; Kassam, Zahra; Bureau, Yves; Parnes, Lorne S; Ladak, Hanif M; Agrawal, Sumit K

    2017-07-01

    The cochlear A-value measurement exhibits significant inter- and intraobserver variability, and its accuracy is dependent on the visualization method in clinical computed tomography (CT) images of the cochlea. An accurate estimate of the cochlear duct length (CDL) can be used to determine electrode choice and to frequency map the cochlea based on the Greenwood equation. Studies have described estimating the CDL using a single A-value measurement; however, the observer variability has not been assessed. Clinical and micro-CT images of 20 cadaveric cochleae were acquired. Four specialists measured A-values on clinical CT images using both standard views and multiplanar reconstructed (MPR) views. Measurements were repeated to assess intraobserver variability. Observer variabilities were evaluated using intraclass correlation and absolute differences. Accuracy was evaluated by comparison to the gold-standard micro-CT images of the same specimens. Interobserver variability was good (average absolute difference: 0.77 ± 0.42 mm) using standard views and fair (average absolute difference: 0.90 ± 0.31 mm) using MPR views. Intraobserver variability had an average absolute difference of 0.31 ± 0.09 mm for the standard views and 0.38 ± 0.17 mm for the MPR views. MPR view measurements were more accurate than standard views, with average relative errors of 9.5 and 14.5%, respectively. There was significant observer variability in A-value measurements using both the standard and MPR views. Creating the MPR views increased variability between experts; however, MPR views yielded more accurate results. Automated A-value measurement algorithms may help to reduce variability and increase accuracy in the future.

  20. Energy balance studies and plasma catecholamine values for patients with healed burns.

    PubMed

    Wallace, B H; Cone, J B; Caldwell, F T

    1991-01-01

    We report heat balance studies and plasma catecholamine values for 49 children and young adults with healed burn wounds (age range 0.6 to 31 years and burn range 1% to 82% body surface area burned; mean 41%). All measurements were made during the week of discharge. Heat production for patients with healed burns was not significantly different from predicted normal values. However, compartmented heat loss demonstrated a persistent increment in evaporative heat loss that was secondary to continued elevation of cutaneous water vapor loss immediately after wound closure. A reciprocal decrement in dry heat loss was demonstrated (as a result of a cooler average surface temperature, 0.84 degree C cooler than the average integrated skin temperature of five normal volunteers who were studied in our unit under similar environmental conditions). Mean values for plasma catecholamines were in the normal range: epinephrine = 56 +/- 37 pg/ml, norepinephrine = 385 +/- 220 pg/ml, and dopamine = 34 +/- 29 pg/ml. In conclusion, patients with freshly healed burn wounds have normal rates of heat production; however, there is a residual increment in transcutaneous water vapor loss, which produces surface cooling and decreased average surface temperature, which in turn lowers dry heat loss by an approximately equivalent amount.

  1. Probabilistic models for capturing more physicochemical properties on protein-protein interface.

    PubMed

    Guo, Fei; Li, Shuai Cheng; Du, Pufeng; Wang, Lusheng

    2014-06-23

    Protein-protein interactions play a key role in a multitude of biological processes, such as signal transduction, de novo drug design, immune responses, and enzymatic activities. It is of great interest to understand how proteins interact with each other. The general approach is to explore all possible poses and identify near-native ones with the energy function. The key issue here is to design an effective energy function based on various physicochemical properties. In this paper, we first identify two new features: the coupled dihedral angles on the interfaces and the geometrical information on π-π interactions. We study these two features through statistical methods: a mixture of bivariate von Mises distributions is used to model the correlation of the coupled dihedral angles, while a mixture of bivariate normal distributions is used to model the orientation of the aromatic rings in π-π interactions. Using 6438 complexes, we parametrize the joint distribution of each new feature. Then, we propose a novel method to construct the energy function for protein-protein interface prediction, which includes the new features as well as existing energy items such as dDFIRE energy, side-chain energy, atom contact energy, and amino acid energy. Experiments show that our method outperforms the state-of-the-art methods ZRANK and ClusPro, using the CAPRI evaluation criteria of Irmsd value and Fnat value. On Benchmark v4.0, our method has an average Irmsd value of 3.39 Å and Fnat value of 62%, which improves upon the average Irmsd value of 3.89 Å and Fnat value of 49% for ZRANK, and the average Irmsd value of 3.99 Å and Fnat value of 46% for ClusPro. On the CAPRI targets, our method has an average Irmsd value of 3.56 Å and Fnat value of 42%, which improves upon the average Irmsd value of 4.27 Å and Fnat value of 39% for ZRANK, and the average Irmsd value of 5.15 Å and Fnat value of 30% for ClusPro.

  2. Nitrogen and Phosphorus Pollutants in Cosmetics Wastewater and Its Treatment Process of a Certain Brand

    NASA Astrophysics Data System (ADS)

    Ma, Guosheng; Chen, Juan

    2018-02-01

    Cosmetics wastewater is one of the sources of the nitrogen and phosphorus pollutants that cause eutrophication of water bodies. This paper tests the cosmetics wastewater produced during the production process with the American Hach method; pH and other indicators were monitored over a whole production cycle. The results show that the pH value of the wastewater is 8.6~8.7 (average 8.67), SS 880~1090 mg·L-1 (average 968.57), TN 65.2~100.4 mg·L-1 (average 80.50), TP 6.6~11.4 mg·L-1 (average 9.84), NH3-N 44.2~77.0 mg·L-1 (average 55.61), and COD 4650~5900 mg·L-1 (average 5490). After treatment, the nitrogen and phosphorus pollutants in the wastewater can meet the discharge standard.

  3. Assessment of differences between repeated pulse wave velocity measurements in terms of 'bias' in the extrapolated cardiovascular risk and the classification of aortic stiffness: is a single PWV measurement enough?

    PubMed

    Papaioannou, T G; Protogerou, A D; Nasothimiou, E G; Tzamouranis, D; Skliros, N; Achimastos, A; Papadogiannis, D; Stefanadis, C I

    2012-10-01

    Currently, there is no recommendation regarding the minimum number of pulse wave velocity (PWV) measurements needed to optimize an individual's cardiovascular risk (CVR) stratification. The aim of this study was to examine differences between three single consecutive and averaged PWV measurements in terms of the extrapolated CVR and the classification of aortic stiffness as normal. In 60 subjects who were referred for CVR assessment, three repeated measurements of blood pressure (BP), heart rate and PWV were performed. Reproducibility was evaluated by the intraclass correlation coefficient (ICC) and the mean±s.d. of differences. The absolute differences between single and averaged PWV measurements were classified as: ≤0.25, 0.26-0.49, 0.50-0.99 and ≥1 m s(-1). A difference ≥0.5 m s(-1) (corresponding to a 7.5% change in CVR, meta-analysis data from >12 000 subjects) was considered clinically meaningful; PWV values (single or averaged) were classified as normal according to the respective age-corrected normal values (European Network data). The kappa statistic was used to evaluate agreement between classifications. PWV for the first, second and third measurement was 7.0±1.9, 6.9±1.9 and 6.9±2.0 m s(-1), respectively (P=0.319); BP and heart rate did not vary significantly. Good reproducibility between single measurements was observed (ICC>0.94, s.d. ranged between 0.43 and 0.64 m s(-1)). A high percentage with a difference ≥0.5 m s(-1) was observed between: any pair of the three single PWV measurements (26.6-38.3%); the first or second single measurement and the average of the first and second (18.3%); and any single measurement and the average of three measurements (10-20%). In only up to 5% of cases was a difference ≥0.5 m s(-1) observed between the average of three and the average of any two PWV measurements. There was no significant agreement regarding classification of PWV as normal between the first or second measurement and the averaged PWV values.
There was significant agreement in the classification made by the average of the first two and the average of three PWV measurements (κ=0.85, P<0.001). Even when high reproducibility in PWV measurement is achieved, single measurements provide quite variable results in terms of the extrapolated CVR and the classification of aortic stiffness as normal. The average of two PWV measurements provides similar results to the average of three.
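    The difference bands and the 0.5 m/s cut-off used above can be expressed compactly. A hypothetical sketch (function names are ours, not the study's):

```python
def averaged_pwv(measurements):
    """Arithmetic mean of repeated PWV measurements (m/s)."""
    return sum(measurements) / len(measurements)

def pwv_difference_band(pwv_a, pwv_b):
    """Classify |difference| into the study's bands and flag clinical meaning.

    Returns (band_label, clinically_meaningful), where a difference
    >= 0.5 m/s corresponds to roughly a 7.5% change in extrapolated CVR.
    """
    d = abs(pwv_a - pwv_b)
    if d <= 0.25:
        band = "<=0.25"
    elif d < 0.5:
        band = "0.26-0.49"
    elif d < 1.0:
        band = "0.50-0.99"
    else:
        band = ">=1"
    return band, d >= 0.5
```

For the study's mean values (7.0, 6.9 and 6.9 m/s), averaged_pwv gives about 6.93 m/s.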

  4. SU-F-J-224: Impact of 4D PET/CT On PERCIST Classification of Lung and Liver Metastases in NSCLC and Colorectal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meier, J; Lopez, B; Mawlawi, O

    2016-06-15

    Purpose: To quantify the impact of 4D PET/CT on PERCIST metrics in lung and liver tumors in NSCLC and colorectal cancer patients. Methods: 32 patients presenting lung or liver tumors of 1-3 cm size affected by respiratory motion were scanned on a GE Discovery 690 PET/CT. The bed position with lesion(s) affected by motion was acquired in a 12-minute PET LIST mode and unlisted into 8 bins with respiratory gating. Three different CT maps were used for attenuation correction: a clinical helical CT (CT-clin), an average CT (CT-ave), and an 8-phase 4D CINE CT (CT-cine). All reconstructions were 3D OSEM, 2 iterations, 24 subsets, 6.4 Gaussian filtration, 192×192 matrix, non-TOF, and non-PSF. Reconstructions using CT-clin and CT-ave used only 3 out of the 12 minutes of the data (clinical protocol); all 12 minutes were used for the CT-cine reconstruction. The percent change of SUVbw-peak and SUVbw-max was calculated between PET-CTclin and PET-CTave. The same percent change was also calculated between PET-CTclin and PET-CTcine in each of the 8 bins and in the average of all bins. A 30% difference from PET-CTclin classified lesions as progressive metabolic disease (PMD), using the maximum bin value and the average of the eight bin values. Results: 30 lesions in 25 patients were evaluated. Using the bin with the maximum SUVbw-peak and SUVbw-max difference, 4 and 13 lesions were classified as PMD, respectively. Using the average bin values for SUVbw-peak and SUVbw-max, 3 and 6 lesions were classified as PMD, respectively. Using PET-CTave values for SUVbw-peak and SUVbw-max, 4 and 3 lesions were classified as PMD, respectively. Conclusion: These results suggest that response evaluation in 4D PET/CT is dependent on the SUV measurement (SUVpeak vs. SUVmax), the number of bins (single or average), and the CT map used for attenuation correction.
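    The PERCIST-style rule described above reduces to a percent change against the PET-CTclin value with a 30% threshold. A hedged sketch (whether the criterion counts only increases or any 30% difference is our reading of the abstract):

```python
def percent_change(suv_ref, suv_new):
    """Percent change of an SUV metric relative to the reference reconstruction."""
    return 100.0 * (suv_new - suv_ref) / suv_ref

def is_pmd(suv_ctclin, suv_other, threshold_pct=30.0):
    """Flag progressive metabolic disease: |percent change| >= threshold."""
    return abs(percent_change(suv_ctclin, suv_other)) >= threshold_pct
```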

  5. Compilation of PZC and IEP of sparingly soluble metal oxides and hydroxides from literature.

    PubMed

    Kosmulski, Marek

    2009-11-30

    The values of PZC and IEP of metal oxides reported in the literature are affected by the choice of the specimens studied. Specimens that have a PZC and IEP similar to the "recommended" value are preferred by scientists. This biased choice leads to an accumulation of results for a few specimens, while the other specimens are seldom studied or are subjected to washing procedures aimed at shifting the original IEP toward the "recommended" value. Taking the average or median of all published PZC and IEP values for a certain oxide as the "recommended" value thus substantiates previously published results, owing to the overrepresentation of certain specimens in the sample.

  6. Dual-test monitoring of hyperglycemia using daily glucose and weekly fructosamine values.

    PubMed

    Carter, A W; Borchardt, N; Cooney, M; Greene, D

    2001-01-01

    The purpose of this study was to assess the impact of using a dual-test blood glucose/fructosamine home monitoring system to assist individuals identified as having the potential for poor glycemic control in achieving values closer to normal. Forty-eight subjects found to have a fasting blood glucose value of > or = 126 mg/dL, a casual blood glucose value of > or = 140 mg/dL, and/or a blood fructosamine value of > or = 310 micromol/L agreed to perform daily self-testing for 90 days and were provided a dual-test blood glucose/fructosamine home monitoring system and testing supplies at no charge. Medication changes/compliance along with dietary and exercise habits were compared to testing results by the principal investigator at approximately 30-day intervals. The desired goal of this project was to achieve and/or maintain a fasting blood glucose value of < or = 110 mg/dL, a casual blood glucose value of < or = 140 mg/dL and a blood fructosamine value of < or = 310 micromol/L by encouraging each individual to realize the effect of dietary intake and exercise habits and to understand the importance of medication compliance, if appropriate, in achieving better overall glycemic control. Four subjects withdrew from the study prior to completion; 11 of the remaining 44 completed 60 days of testing and 33 of 44 completed 90 days of testing. Regular monitoring and counseling achieved an average reduction in blood glucose of 27.5% and a 16.6% reduction in average blood fructosamine when compared to the original screening results of these 44 individuals. This study indicates that the addition of weekly fructosamine values to daily blood glucose values provides both the patient and clinician valuable information to evaluate the impact of dietary, exercise, and medication therapy changes on glycemic control by bridging the existing gap between daily blood glucose values and quarterly HbA1c confirmation of intervention results.

  7. Chemical and physical characteristics of coal and carbonaceous shale samples from the Salt Range coal field, Punjab Province, Pakistan

    USGS Publications Warehouse

    Warwick, Peter D.; Shakoor, T.; Javed, Shahid; Mashhadi, S.T.A.; Hussain, H.; Anwar, M.; Ghaznavi, M.I.

    1990-01-01

    Sixty coal and carbonaceous shale samples collected from the Paleocene Patala Formation in the Salt Range coal field, Punjab Province, Pakistan, were analyzed to examine the relationships between coal bed chemical and physical characteristics and depositional environments. Results of proximate and ultimate analyses, reported on an as-received basis, indicate that the coal beds have an average ash yield of 24.23 percent, average sulfur content of 5.32 percent, average pyritic sulfur content of 4.07 percent, and average calorific value of 8943 Btu (4972 kcal/kg). Thirty-five coal samples, analyzed on a whole-coal, dry basis for selected trace elements and oxides, have anomalously high average concentrations of Ti, at 0.3& percent; Zr, at 382 ppm; and Se, at 11.4 ppm, compared to worldwide averages for these elements in coal. Some positive correlation coefficients, significant at the 0.01 level, are those between total sulfur and As, pyritic sulfur and As, total sulfur and sample location, organic sulfur and Se, calorific value (Btu) and sample location, and coal bed thickness and Se. Calorific values for the samples, calculated on a moist, mineral-matter-free basis, indicate that the apparent rank of the coal is high volatile C bituminous. Variations observed in the chemical and physical characteristics of the coal beds may be related to depositional environments. Total ash yields and concentrations of Se and organic sulfur increase toward more landward depositional environments and may be related to an increase of fluvial influence on peat deposition. Variations in pyritic sulfur concentrations may be related to post-peat pyrite-filled burrows commonly observed in the upper part of the coal bed. The thickest coal beds, which have the lowest ash content and highest calorific values, formed from peats deposited in back-barrier, tidal flat environments of the central and western parts of the coal field. 
The reasons for correlations between Se and coal bed thickness and Se and ash content are not clear and may be a product of averaging.
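    The "moist, mineral-matter-free basis" used for the apparent-rank determination above is conventionally obtained with the Parr formula of ASTM D388. The exact procedure used in the report is not stated, so treat this as a sketch under that assumption:

```python
def parr_mmmf_btu(btu_moist, ash_pct, sulfur_pct):
    """Calorific value on a moist, mineral-matter-free basis (Parr formula).

    Inputs are as-received (moist) Btu/lb, ash percent and total sulfur percent.
    """
    return 100.0 * (btu_moist - 50.0 * sulfur_pct) / (
        100.0 - (1.08 * ash_pct + 0.55 * sulfur_pct))
```

Plugging in the sample averages above (8943 Btu, 24.23 percent ash, 5.32 percent sulfur) gives roughly 12,200 Btu/lb, consistent with a high volatile C bituminous apparent rank.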

  8. Non-Contact Determination of Antisymmetric Plate Wave Velocity in Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Kautz, Harold E.

    1996-01-01

    A 13 mJ, 4 ns, 1064 nm Nd:YAG laser pulse was employed to produce ultrasonic plate waves in 20 percent porous SiC/SiC composite tensile specimens of three different architectures. An air-coupled 0.5 MHz transducer was used to detect and collect the waveforms, which contained first antisymmetric plate wave pulses for determining the shear wave velocity (VS). These results were compared to VS values determined on the same specimens with 0.5 MHz ultrasonic transducers with contact coupling. Averages of four noncontact determinations on each of 18 specimens were compared to averages of four contact values. The noncontact VS values fall in the same range as the contact ones. The standard deviations for the noncontact VS values averaged 2.8 percent, and those for the contact measurements averaged 2.3 percent, indicating similar reproducibility. Repeated laser pulsing at the same location always led to deterioration of the ultrasonic signal. However, the signal would recover in about 24 hr in air, indicating that no permanent damage was produced.

  9. Comparing CT perfusion with oxygen partial pressure in a rabbit VX2 soft-tissue tumor model.

    PubMed

    Sun, Chang-Jin; Li, Chao; Lv, Hai-Bo; Zhao, Cong; Yu, Jin-Ming; Wang, Guang-Hui; Luo, Yun-Xiu; Li, Yan; Xiao, Mingyong; Yin, Jun; Lang, Jin-Yi

    2014-01-01

    The aim of this study was to evaluate the oxygen partial pressure in the rabbit VX2 tumor model using 64-slice perfusion CT and to compare the results with those obtained using the oxygen microelectrode method. Perfusion CT was performed on 45 successfully constructed rabbit models of a VX2 brain tumor. The perfusion values of the brain tumor region of interest, the blood volume (BV), the time to peak (TTP) and the peak enhancement intensity (PEI) were measured. The results were compared with the partial pressure of oxygen (PO2) of the same region of interest obtained using the oxygen microelectrode method. The perfusion values ranged from 1.3 to 127.0 ml/min/ml (average, 21.1 ± 26.7 ml/min/ml); BV ranged from 1.2 to 53.5 ml/100g (average, 22.2 ± 13.7 ml/100g); PEI ranged from 8.7 to 124.6 HU (average, 43.5 ± 28.7 HU); and TTP ranged from 8.2 to 62.3 s (average, 38.8 ± 14.8 s). The PO2 in the corresponding region ranged from 0.14 to 47 mmHg (average, 16 ± 14.8 mmHg). The perfusion CT values positively correlated with the tumor PO2, which can be used for evaluating tumor hypoxia in clinical practice.

  10. Comparison the Psychological Wellbeing of University Students from Hungary and Romania

    ERIC Educational Resources Information Center

    Barth, Anita; Nagy, Ildikó; Kiss, János

    2015-01-01

    The results obtained in our research of mental distress indicators and results of conflict management strategies are consistent with the results of international studies. Students participating in the study (N = 237) reached the highest average results in the field of personal growth, while we measured the lowest value in the fields of autonomy…

  11. Potential of solid waste utilization as source of refuse derived fuel (RDF) energy (case study at temporary solid waste disposal site in West Jakarta)

    NASA Astrophysics Data System (ADS)

    Indrawati, D.; Lindu, M.; Denita, P.

    2018-01-01

    This study aims to measure the volume of solid waste generated as well as its density, composition, and characteristics, to analyze the potential of waste at the TPS to become RDF material, and to analyze the best composition mixture of RDF materials. The results show that average solid waste generation at the TPS reaches 40.80 m3/day; the largest share is the organic waste component at 77.9%, while the smallest is metal and rubber at 0.1%. The average water content and ash content of solid waste at the TPS are 27.7% and 6.4% respectively, while the average potential calorific value is 728.71 kcal/kg. A comparison of solid waste characteristics at three TPS indicates that TPS Tanjung Duren has the greatest potential for processing into RDF material, with a calorific value of 893.73 kcal/kg, a water content of 24.6%, and a low ash content of 6.11%. This research has also shown that the best composition for RDF composite material is a rubber, wood, and textile mixture exposed to outdoor drying conditions, because it produced low water and ash contents of 10.8% and 9.6%, thus optimizing the calorific value at 4,372.896 kcal/kg.

  12. Effect of humic acid in leachate on specific methanogenic activity of anaerobic granular sludge.

    PubMed

    Guo, Mengfei; Xian, Ping; Yang, Longhui; Liu, Xi; Zhan, Longhui; Bu, Guanghui

    2015-01-01

    In order to determine the effects of humic acid (HA) in anaerobically treated landfill leachate on granular sludge, the anaerobic biodegradability of HA as well as the influences of HA on the total cumulative methane production, the anaerobic methanization process and the specific methanogenic activity (SMA) of granular sludge are studied in this paper. Experimental results show that, as a non-biodegradable organic pollutant, HA is also difficult for microbes to decompose in the anaerobic reaction process. The presence of HA and changes in its concentration have no significant influence on the total cumulative methane production or the anaerobic methanization process of granular sludge. Moreover, the total cumulative methane production cannot reflect the inhibition of toxicants on the methanogenic activity of granular sludge given sufficient reaction time. Results also show that HA plays a promoting role on the SMA of granular sludge. Without buffering agent the SMA value increased by 19.2% on average, due to the buffering and regulating ability of HA, while with buffering agent the SMA value increased by 5.4% on average, due to the retaining effect of HA on the morphology of the sludge particles. However, in the presence of leachate the SMA value decreased by 27.6% on average, because the toxic effect of the toxicants in the leachate on granular sludge is much larger than the promoting effect of HA.

  13. Long-Term Kinetics of Serum and Xanthoma Cholesterol Radioactivity in Patients with Hypercholesterolemia

    PubMed Central

    Samuel, Paul; Perl, William; Holtzman, Charles M.; Rochman, Norman D.; Lieberman, Sidney

    1972-01-01

    In four patients with hypercholesterolemia (type II hyperlipoproteinemia) and xanthomatosis the decay of serum cholesterol specific activity was followed for 53-63 wk after pulse labeling. Specific activity of biopsied xanthoma cholesterol was measured four times in the course of the study. The xanthoma specific activity curve crossed and thereafter remained above the serum specific activity curve. The average ratio of xanthoma to serum specific activity was 4.7 at the end of the study. The final half-time of the xanthoma decay curves was significantly greater (average: 200 days) than the slowest half-time of serum specific activity decay (average: 93 days). The data were analyzed by input-output analysis and yielded the following results. The average value for the total input rate of body cholesterol (IT) (sum of dietary and biosynthesized cholesterol) was 1.29 g/day. The average size of the rapidly miscible pool of cholesterol (Ma) was 55.7 g. and of the total exchangeable body mass of cholesterol (M) 116.5 g. The average value of M - Ma (remaining exchangeable mass of cholesterol) was 60.8 g. The derived values for exchangeable masses of cholesterol, in the present patients with marked hypercholesterolemia, were significantly larger than in a group of patients with normal serum lipids in previous studies. One of the four patients died of a sudden acute myocardial infarction 53 wk after pulse labeling. Specific activity of aortic wall and atheroma cholesterol was 3.12 times that of serum. The ratio was close to 2 for adipose tissue and spleen, and was slightly above 1 or was close to unity in most other organs studied, with the exception of brain which showed a ratio of 0.19. PMID:5009114
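    The "final half-time" quoted above is the half-life of the terminal, log-linear tail of the specific-activity decay curve. A minimal illustration of how such a half-time follows from two points on that tail (synthetic numbers, not the study's data):

```python
import math

def half_time_days(t1, a1, t2, a2):
    """Half-time (days) from two (time, specific activity) points lying on a
    mono-exponential segment of a decay curve."""
    k = (math.log(a1) - math.log(a2)) / (t2 - t1)  # decay constant, 1/day
    return math.log(2.0) / k
```

An activity that falls from 100 to 25 over 200 days has halved twice, so the half-time is 100 days.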

  14. 40 CFR 133.101 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... arithmetic mean of pollutant parameter values for samples collected in a period of 7 consecutive days. (b) 30-day average. The arithmetic mean of pollutant parameter values of samples collected in a period of 30... percentile value for the 30-day average effluent quality achieved by a treatment works in a period of at...
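    A minimal illustration of the two definitions excerpted above: both are plain arithmetic means of pollutant parameter values over windows of consecutive days.

```python
import statistics

def seven_day_average(daily_values):
    """Arithmetic mean of sample values collected over 7 consecutive days."""
    assert len(daily_values) == 7
    return statistics.fmean(daily_values)

def thirty_day_average(daily_values):
    """Arithmetic mean of sample values collected over 30 consecutive days."""
    assert len(daily_values) == 30
    return statistics.fmean(daily_values)
```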

  15. Cost effectiveness of adopted quality requirements in hospital laboratories.

    PubMed

    Hamza, Alneil; Ahmed-Abakur, Eltayib; Abugroun, Elsir; Bakhit, Siham; Holi, Mohamed

    2013-01-01

    The present study was designed as a quasi-experiment to assess adoption of the essential clauses of particular clinical laboratory quality management requirements, based on the International Organization for Standardization standard ISO 15189, in hospital laboratories, and to evaluate the cost effectiveness of compliance with ISO 15189. The quality management intervention based on ISO 15189 proceeded through three phases: a pre-intervention phase, an intervention phase and a post-intervention phase. In the pre-intervention phase, compliance with ISO 15189 was 49% for the study group vs. 47% for the control group (P value 0.48), while the post-intervention results displayed 54% vs. 79% compliance for the study group and control group respectively, a statistically significant difference (P value 0.00), with an effect size (Cohen's d) of 0.00 in the pre-intervention phase and 0.99 in the post-intervention phase. The annual average cost per test for the study group and control group was 1.80 ± 0.25 vs. 1.97 ± 0.39, respectively (P value 0.39), whereas the post-intervention results showed that the annual average total cost per test for the study group and control group was 1.57 ± 0.23 vs. 2.08 ± 0.38 (P value 0.019), respectively, with a cost-effectiveness ratio of 0.88 in the pre-intervention phase and 0.52 in the post-intervention phase. The planned adoption of quality management system (QMS) requirements in clinical laboratories greatly increased the percentage of compliance with the quality management system requirements, raised the average total cost effectiveness, and improved the analytical process capability of the testing procedure.

  16. Land Reclamation in Brazilian Amazônia: A chronosequence study of floristic development in the national forest of Jamiri-RO mined areas

    NASA Astrophysics Data System (ADS)

    Fengler, Felipe; Ribeiro, Admilson; Longo, Regina; Merides, Marcela; Soares, Herlon; Melo, Wanderley

    2017-04-01

    Although reclamation techniques for forest ecosystem recovery have been developed over the past decades, there is still great difficulty in establishing environmental assessments, especially in comparison with non-disturbed ecosystems. This work evaluated the results and limitations of reclamation on cassiterite-mined areas in the Brazilian Amazônia. Floristic variables from 29 plots located on 15-year-old native species reforestation sites and two plots from preserved open/closed canopy forests were analyzed in a chronosequence (2010-2015). Regeneration density, species richness, average girth, and average height were evaluated every year by means of cluster analysis (Euclidean distance, Ward method) and submitted to multiscale bootstrap resampling (α=5%). Regression analysis was conducted for each group identified in 2015 in order to verify differences in development across the chronosequence. The results showed the existence of two main groups in 2010, one to which all mined plots were allocated and another with the open/closed canopy plots. After 2011 some mined areas became allocated to the open/closed canopy plots group. From 2013 onward, open/closed canopy plots appeared shuffled among the formed groups, indicating that conditions at the reclamation sites had become similar to natural areas. Finally, in 2015 three main groups were formed. The regression analysis showed that group three had a higher trend of development for regeneration density, with a higher angular coefficient and higher values. For species richness all the groups had a similar trend, with values lower than the open/closed canopy forest. For average girth, higher trends were observed in group one, and all values were near the open canopy forest in 2015. Average height showed better trends and higher values in group two. It was concluded that all mined sites had undergone a forest recovery process. 
However, different responses to the reclamation process were observed due to differences in the degraded soils' characteristics. Keywords: Recovery, Restoration, Forest, Chronosequence, Cassiterite.

  17. Analysis of Yearly Traffic Fluctuation on Latvian Highways

    NASA Astrophysics Data System (ADS)

    Freimanis, A.; Paeglītis, A.

    2015-11-01

    Average annual daily traffic and average annual truck traffic are the two most used metrics for road management decisions. They are calculated from data gathered by continuous counting stations embedded in the road pavement, manual counting sessions, or mobile counting devices. The last two usually do not last longer than a couple of weeks, so the information gathered is influenced by yearly traffic fluctuations. Data containing a total of 8,186,871 vehicles over 1,989 days of records from 4 WIM stations installed on highways in Latvia were used in this study. Each of the files was supposed to contain data from only one day, and additional data were deleted. No other data-cleaning steps were performed, which increased the number of vehicles, as counting systems sometimes split one vehicle into two. Weekly traffic and weekly truck traffic were normalized against their respective average values. Each weekly value was then plotted against its number in the year for better visual perception. Weekly traffic amplitudes were used to assess differences between locations, and standard deviations were used to compare the fluctuation of truck and regular traffic at the same location. Results show that truck traffic fluctuates more than regular traffic during the year, especially around holidays. Differences between counting locations were larger for regular traffic than for truck traffic. These results show that average annual daily traffic could be influenced more if short-term counting results are adjusted by factors derived from unsuitable continuous counting stations, whereas truck traffic is more influenced by the time of year in which the counting is done.
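    The normalization and comparison described above can be sketched in a few lines of NumPy: weekly counts are divided by the yearly average weekly count so that values fluctuate around 1.0, and the standard deviations of the normalized series compare truck against regular traffic at the same location. The counts below are illustrative, not the Latvian WIM data.

```python
import numpy as np

# Illustrative weekly counts at one counting location (oldest week first).
weekly_trucks = np.array([420, 450, 480, 510, 530, 390, 300, 460], dtype=float)
weekly_all    = np.array([5100, 5200, 5300, 5250, 5400, 4800, 4300, 5350], dtype=float)

# Normalize each series against its own average weekly value.
norm_trucks = weekly_trucks / weekly_trucks.mean()
norm_all    = weekly_all / weekly_all.mean()

# Amplitude (max - min) compares locations; the standard deviation compares
# truck vs. regular traffic fluctuation at the same location.
print(norm_trucks.std() > norm_all.std())   # trucks fluctuate more here
```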

  18. Dosimetric Consistency of Co-60 Teletherapy Unit- a ten years Study

    PubMed Central

    Baba, Misba H; Mohib-ul-Haq, M.; Khan, Aijaz A.

    2013-01-01

    Objective The goal of radiation standards and dosimetry is to ensure that the output of a teletherapy unit is within ±2% of the stated value and that the outputs of the treatment dose calculation methods are within ±5%. In the present paper, we studied the dosimetry of the Cobalt-60 (Co-60) teletherapy unit at Sher-I-Kashmir Institute of Medical Sciences (SKIMS) for the last 10 years. Radioactivity is the phenomenon of disintegration of unstable nuclides called radionuclides. Among these radionuclides, Cobalt-60, incorporated in the telecobalt unit, is commonly used in the therapeutic treatment of cancer. Cobalt-60, being unstable, decays continuously into Ni-60 with a half-life of 5.27 years, resulting in a decrease in its activity and hence in its dose rate (output). It is, therefore, mandatory to measure the dose rate of the Cobalt-60 source regularly so that the patient receives the same dose every time, as prescribed by the radiation oncologist. Underdosage may lead to unsatisfactory treatment of cancer, and overdosage may cause radiation hazards. Our study emphasizes the consistency between the actual output and the output obtained using the decay method. Methodology The present study calculates the actual dose rate of the Co-60 teletherapy unit by the two standard techniques used for external beam radiotherapy of various cancers: source-to-surface distance (SSD) and source-to-axis distance (SAD). A year-wise comparison was then made between the average actual dosimetric output (dose rate) and the average expected output values obtained by using the decay method for Co-60. Results The present study shows that the average output (dose rate) obtained by actual dosimetry is consistent with the expected output values obtained using the decay method. The values obtained by actual dosimetry are within ±2% of the expected values. 
Conclusion The year-wise comparison shows that the deviation of the average output obtained by actual dosimetry, performed regularly as part of the quality assurance of the telecobalt radiotherapy unit, from the expected output is within the permissible limits. Thus our study shows a trend towards uniformity and better dose delivery. PMID:23559901
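    The decay method described above follows directly from the 5.27-year half-life: the expected output at time t is the reference output scaled by 0.5^(t/T½). A minimal sketch, in which the reference output, elapsed time, and measured value are all assumed for illustration:

```python
# Expected Co-60 output from radioactive decay, and deviation of a measured
# output from it. D0, t, and the measured value are illustrative assumptions.
T_HALF_YEARS = 5.27            # half-life of Co-60
d0 = 150.0                     # output at the reference dosimetry date (assumed)
t = 1.0                        # years elapsed since that reference date

expected = d0 * 0.5 ** (t / T_HALF_YEARS)   # decay-method expected output
measured = 130.0                            # hypothetical measured output

deviation_pct = 100.0 * (measured - expected) / expected
print(abs(deviation_pct) <= 2.0)            # within the ±2% tolerance?
```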

  19. A Long-Term Comparison of GPS Carrierphase Frequency Transfer and Two-Way Satellite Time/Frequency Transfer

    DTIC Science & Technology

    2007-01-01

    and frequency transfer (TWSTFT) were performed along three transatlantic links over the 6-month period 29 January - 31 July 2006. The GPSCPFT and... TWSTFT results were subtracted in order to estimate the combined uncertainty of the methods. The frequency values obtained from GPSCPFT and TWSTFT ...values were equal to or less than the frequency-stability values σy(GPSCPFT)−y(TWSTFT)(τ) (or TheoBR(τ)) computed for the corresponding averaging

  20. Model for forecasting Olea europaea L. airborne pollen in South-West Andalusia, Spain

    NASA Astrophysics Data System (ADS)

    Galán, C.; Cariñanos, Paloma; García-Mozo, Herminia; Alcázar, Purificación; Domínguez-Vilches, Eugenio

    Data on predicted average and maximum airborne pollen concentrations and the dates on which these maximum values are expected are of undoubted value to allergists and allergy sufferers, as well as to agronomists. This paper reports on the development of predictive models for calculating total annual pollen output, on the basis of pollen and weather data compiled over the last 19 years (1982-2000) for Córdoba (Spain). Models were tested in order to predict the 2000 pollen season; in addition, and in view of the heavy rainfall recorded in spring 2000, the 1982-1998 data set was used to test the model for 1999. The results of the multiple regression analysis show that the variables exerting the greatest influence on the pollen index were rainfall in March and temperatures over the months prior to the flowering period. For prediction of maximum values and dates on which these values might be expected, the start of the pollen season was used as an additional independent variable. Temperature proved the best variable for this prediction. Results improved when the 5-day moving average was taken into account. Testing of the predictive model for 1999 and 2000 yielded fairly similar results. In both cases, the difference between expected and observed pollen data was no greater than 10%. However, significant differences were recorded between forecast and expected maximum and minimum values, owing to the influence of rainfall during the flowering period.
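    The 5-day moving average that improved the predictions above is a standard smoothing step; a minimal sketch with illustrative daily pollen counts (not the Córdoba data):

```python
import numpy as np

# Smooth a daily pollen series with a 5-day moving average. The counts are
# illustrative; the study's data are not reproduced here.
daily = np.array([10, 14, 30, 55, 120, 200, 160, 90, 40, 20], dtype=float)
window = np.ones(5) / 5
smoothed = np.convolve(daily, window, mode="valid")   # 5-day running mean
print(smoothed)
```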

  1. [Spectrum simulation based on data derived from red tide].

    PubMed

    Liu, Zhen-Yu; Cui, Ting-Wei; Yue, Jie; Jiang, Tao; Cao, Wen-Xi; Ma, Yi

    2011-11-01

    The present paper utilizes the absorption data of red tide water measured during the growing and dying course to retrieve imaginary part of the index of refraction based on Mie theory, carries out the simulation and analysis of average absorption efficiency factors, average backscattering efficiency factors and scattering phase function. The analysis of the simulation shows that Mie theory can be used to reproduce the absorption property of Chaetoceros socialis with an average error of 11%; the average backscattering efficiency factors depend on the value of absorption whose maximum value corresponds to the wavelength range from 400 to 700 nanometer; the average backscattering efficiency factors showed a maximum value on 17th with a low value during the outbreak of red tide and the minimum on 21th; the total scattering, weakly depending on the absorption, is proportional to the size parameters which represent the relative size of cell diameter with respect to the wavelength, while the angle scattering intensity is inversely proportional to wavelength.

  2. Astrometric observations of visual binaries using 26-inch refractor during 2007-2014 at Pulkovo

    NASA Astrophysics Data System (ADS)

    Izmailov, I. S.; Roshchina, E. A.

    2016-04-01

    We present the results of 15184 astrometric observations of 322 visual binaries carried out in 2007-2014 at Pulkovo observatory. In 2007, the 26-inch refractor (F = 10413 mm, D = 65 cm) was equipped with the CCD camera FLI ProLine 09000 (FOV 12' × 12', 3056 × 3056 pixels, 0.238 arcsec pixel⁻¹). Telescope automation and the installation of a weather-monitoring system allowed us to increase the number of observations significantly. Visual binary and multiple systems with angular separations in the interval 1."1-78."6, 7."3 on average, were included in the observing program. The results were studied in detail for systematic errors using calibration star pairs. No dependence of the errors on temperature, pressure, or hour angle was detected. The dependence of the 26-inch refractor's scale on temperature was taken into account in the calculations. The accuracy of measurement of a single CCD image is in the range of 0."0005 to 0."289, 0."021 on average along both coordinates. Mean errors in annual average values of angular separation and position angle are equal to 0."005 and 0.°04, respectively. The results are available at http://izmccd.puldb.ru/vds.htm and in the Strasbourg Astronomical Data Center (CDS). The catalog presents the separations and position angles per night of observation and as annual averages, together with errors for all values and the standard deviation of a single observation. We also present a comparison of 50 pairs of stars having known orbital solutions with their ephemerides.

  3. The value of vital sign trends for detecting clinical deterioration on the wards

    PubMed Central

    Churpek, Matthew M; Adhikari, Richa; Edelson, Dana P

    2016-01-01

    Aim Early detection of clinical deterioration on the wards may improve outcomes, and most early warning scores only utilize a patient’s current vital signs. The added value of vital sign trends over time is poorly characterized. We investigated whether adding trends improves accuracy and which methods are optimal for modelling trends. Methods Patients admitted to five hospitals over a five-year period were included in this observational cohort study, with 60% of the data used for model derivation and 40% for validation. Vital signs were utilized to predict the combined outcome of cardiac arrest, intensive care unit transfer, and death. The accuracy of models utilizing both the current value and different trend methods were compared using the area under the receiver operating characteristic curve (AUC). Results A total of 269,999 patient admissions were included, which resulted in 16,452 outcomes. Overall, trends increased accuracy compared to a model containing only current vital signs (AUC 0.78 vs. 0.74; p<0.001). The methods that resulted in the greatest average increase in accuracy were the vital sign slope (AUC improvement 0.013) and minimum value (AUC improvement 0.012), while the change from the previous value resulted in an average worsening of the AUC (change in AUC −0.002). The AUC increased most for systolic blood pressure when trends were added (AUC improvement 0.05). Conclusion Vital sign trends increased the accuracy of models designed to detect critical illness on the wards. Our findings have important implications for clinicians at the bedside and for the development of early warning scores. PMID:26898412
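    The three trend methods compared above (slope, minimum value, change from the previous value) are simple functions of a patient's prior measurements. A minimal sketch over an assumed lookback window of systolic blood pressure readings:

```python
import numpy as np

# Trend features for one vital sign over a lookback window (values illustrative).
sbp = np.array([128.0, 122.0, 115.0, 108.0])   # systolic BP, oldest first
hours = np.array([0.0, 6.0, 12.0, 18.0])       # measurement times

slope = np.polyfit(hours, sbp, 1)[0]   # least-squares slope (mmHg/hour)
minimum = sbp.min()                    # minimum value in the window
delta = sbp[-1] - sbp[-2]              # change from the previous value
print(round(slope, 3), minimum, delta)
```

    In the study, the slope and minimum-value features improved the AUC while the change-from-previous feature slightly worsened it.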

  4. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, Jr., David (Inventor)

    2016-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
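    The averaging step the patent describes, assigning CFD points to sub-areas and taking the mean of each sub-area's points, can be sketched as follows; the point coordinates, parameter values, and sub-area membership tests are all illustrative:

```python
# Average a CFD parameter over the points falling inside each sub-area.
# Coordinates, values, and the sub-area predicates are illustrative.
points = [
    ((0.1, 0.2), 410.0), ((0.3, 0.1), 430.0),   # ((x, y), parameter value)
    ((1.2, 0.4), 520.0), ((1.4, 0.3), 540.0),
]
sub_areas = {"A": lambda x, y: x < 1.0, "B": lambda x, y: x >= 1.0}

averages = {}
for name, contains in sub_areas.items():
    vals = [v for (x, y), v in points if contains(x, y)]
    averages[name] = sum(vals) / len(vals)   # mean over the sub-set of points
print(averages)   # {'A': 420.0, 'B': 530.0}
```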

  5. High-Precision Half-Life and Branching Ratio Measurements for the Superallowed β+ Emitter 26Alm

    NASA Astrophysics Data System (ADS)

    Finlay, P.; Svensson, C. E.; Demand, G. A.; Garrett, P. E.; Green, K. L.; Leach, K. G.; Phillips, A. A.; Rand, E. T.; Ball, G.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Leslie, J. R.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Grinyer, G. F.; Sumithrarachchi, C. S.; Williams, S. J.; Triambak, S.

    2013-03-01

    High-precision half-life and branching-ratio measurements for the superallowed β+ emitter 26Alm were performed at the TRIUMF-ISAC radioactive ion beam facility. An upper limit of ≤15 ppm at 90% C.L. was determined for the sum of all possible non-analogue β+/EC decay branches of 26Alm, yielding a superallowed branching ratio of 100.0000 (+0, −0.0015)%. A value of T1/2 = 6.34654(76) s was determined for the 26Alm half-life, which is consistent with, but 2.5 times more precise than, the previous world average. Combining these results with world-average measurements yields an ft value of 3037.58(60) s, the most precisely determined for any superallowed β emitter to date. This high-precision ft value for 26Alm provides a new benchmark to refine theoretical models of isospin-symmetry-breaking effects in superallowed β decays.
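    Combining a new measurement with a previous world average, as above, is conventionally done with an inverse-variance weighted mean. A minimal sketch; the two (value, uncertainty) pairs are illustrative, not the paper's exact inputs:

```python
# Inverse-variance weighted average of independent measurements.
# The inputs below are illustrative (value, one-sigma uncertainty) pairs.
def weighted_average(measurements):
    """Return the inverse-variance weighted mean and its combined uncertainty."""
    weights = [1.0 / err ** 2 for _, err in measurements]
    mean = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    error = (1.0 / sum(weights)) ** 0.5
    return mean, error

mean, error = weighted_average([(6.34654, 0.00076), (6.3465, 0.0019)])
print(round(mean, 5), round(error, 5))
```

    The more precise measurement dominates the weighted mean, and the combined uncertainty is smaller than either input uncertainty.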

  6. Data Point Averaging for Computational Fluid Dynamics Data

    NASA Technical Reports Server (NTRS)

    Norman, David, Jr. (Inventor)

    2014-01-01

    A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.

  7. Re-evaluation of forest biomass carbon stocks and lessons from the world's most carbon-dense forests.

    PubMed

    Keith, Heather; Mackey, Brendan G; Lindenmayer, David B

    2009-07-14

    From analysis of published global site biomass data (n = 136) from primary forests, we discovered (i) the world's highest known total biomass carbon density (living plus dead) of 1,867 tonnes carbon per ha (average value from 13 sites) occurs in Australian temperate moist Eucalyptus regnans forests, and (ii) average values of the global site biomass data were higher for sampled temperate moist forests (n = 44) than for sampled tropical (n = 36) and boreal (n = 52) forests (n is number of sites per forest biome). Spatially averaged Intergovernmental Panel on Climate Change biome default values are lower than our average site values for temperate moist forests, because the temperate biome contains a diversity of forest ecosystem types that support a range of mature carbon stocks or have a long land-use history with reduced carbon stocks. We describe a framework for identifying forests important for carbon storage based on the factors that account for high biomass carbon densities, including (i) relatively cool temperatures and moderately high precipitation producing rates of fast growth but slow decomposition, and (ii) older forests that are often multiaged and multilayered and have experienced minimal human disturbance. Our results are relevant to negotiations under the United Nations Framework Convention on Climate Change regarding forest conservation, management, and restoration. Conserving forests with large stocks of biomass from deforestation and degradation avoids significant carbon emissions to the atmosphere, irrespective of the source country, and should be among allowable mitigation activities. Similarly, management that allows restoration of a forest's carbon sequestration potential also should be recognized.

  8. Re-evaluation of forest biomass carbon stocks and lessons from the world's most carbon-dense forests

    PubMed Central

    Keith, Heather; Mackey, Brendan G.; Lindenmayer, David B.

    2009-01-01

    From analysis of published global site biomass data (n = 136) from primary forests, we discovered (i) the world's highest known total biomass carbon density (living plus dead) of 1,867 tonnes carbon per ha (average value from 13 sites) occurs in Australian temperate moist Eucalyptus regnans forests, and (ii) average values of the global site biomass data were higher for sampled temperate moist forests (n = 44) than for sampled tropical (n = 36) and boreal (n = 52) forests (n is number of sites per forest biome). Spatially averaged Intergovernmental Panel on Climate Change biome default values are lower than our average site values for temperate moist forests, because the temperate biome contains a diversity of forest ecosystem types that support a range of mature carbon stocks or have a long land-use history with reduced carbon stocks. We describe a framework for identifying forests important for carbon storage based on the factors that account for high biomass carbon densities, including (i) relatively cool temperatures and moderately high precipitation producing rates of fast growth but slow decomposition, and (ii) older forests that are often multiaged and multilayered and have experienced minimal human disturbance. Our results are relevant to negotiations under the United Nations Framework Convention on Climate Change regarding forest conservation, management, and restoration. Conserving forests with large stocks of biomass from deforestation and degradation avoids significant carbon emissions to the atmosphere, irrespective of the source country, and should be among allowable mitigation activities. Similarly, management that allows restoration of a forest's carbon sequestration potential also should be recognized. PMID:19553199

  9. Simulated Keratometry Repeatability in Subjects with & without Down Syndrome

    PubMed Central

    Ravikumar, Ayeswarya; Marsack, Jason D.; Benoit, Julia S.; Anderson, Heather A.

    2016-01-01

    Purpose To assess the repeatability of simulated keratometry measures obtained with Zeiss Atlas topography for subjects with and without Down syndrome (DS). Methods Corneal topography was attempted on 140 subjects with DS and 138 controls (aged 7 to 59 years). Subjects who had at least 3 measures in each eye were included in the analysis (DS: n=140 eyes (70 subjects); controls: n=264 eyes (132 subjects)). For each measurement, the steep corneal power (K), corneal astigmatism, flat K orientation, power-vector representation of astigmatism (J0, J45), and astigmatic dioptric difference were determined (collectively termed keratometry values here). For flat K orientation comparisons, only eyes with >0.50 DC of astigmatism were included (DS: n=131 eyes (68 subjects); controls: n=217 eyes (119 subjects)). Repeatability was assessed using 1) group mean variability (average standard deviation (SD) across subjects), 2) coefficient of repeatability (COR), 3) coefficient of variation (COV), and 4) intraclass correlation coefficient (ICC). Results The keratometry values showed good repeatability, as evidenced by low group mean variability for DS vs control eyes (≤0.26D vs ≤0.09D for all dioptric values; 4.51° vs 3.16° for flat K orientation); however, the group mean variability was significantly higher in DS eyes than in control eyes for all parameters (p≤0.03). On average, group mean variability was 2.5× greater in DS eyes than in control eyes across the keratometry values. Other metrics of repeatability also indicated good repeatability for both populations for each keratometry value, although repeatability was always better in the control eyes. Conclusions DS eyes showed more variability (on average 2.5×) compared to controls for all keratometry values. 
Although the differences were statistically significant, on average 91% of DS eyes had variability ≤0.50D for steep K and astigmatism, and 75% of DS eyes had variability ≤5 degrees for flat K orientation. PMID:27741083
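    Two of the repeatability summaries above can be sketched directly: group mean variability is the mean within-subject SD across repeated measures, and the COR form used below (1.96·√2·Sw, with Sw the pooled within-subject SD) is one common definition, assumed here since the paper does not state its exact formula. The readings are illustrative.

```python
import numpy as np

# Three repeated steep-K readings per eye, in diopters (illustrative values).
measures = np.array([
    [44.10, 44.20, 44.05],
    [43.80, 43.95, 43.90],
    [45.00, 44.90, 45.10],
])
sd_per_eye = measures.std(axis=1, ddof=1)          # within-subject SD per eye
group_mean_variability = sd_per_eye.mean()         # metric 1 in the abstract
sw = np.sqrt((sd_per_eye ** 2).mean())             # pooled within-subject SD
cor = 1.96 * np.sqrt(2) * sw                       # common COR definition (assumed)
cov = 100 * group_mean_variability / measures.mean()   # COV as a percentage
print(round(group_mean_variability, 3), round(cor, 3), round(cov, 3))
```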

  10. 40 CFR Table 2 to Subpart Mmmmm of... - Operating Limits for New or Reconstructed Flame Lamination Affected Sources

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... scrubber, maintain the daily average pressure drop across the venturi within the operating range value... . . . You must . . . 1. Scrubber a. Maintain the daily average scrubber inlet liquid flow rate above the minimum value established during the performance test. b. Maintain the daily average scrubber effluent pH...

  11. 40 CFR Table 2 to Subpart Mmmmm of... - Operating Limits for New or Reconstructed Flame Lamination Affected Sources

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... . . . You must . . . 1. Scrubber a. Maintain the daily average scrubber inlet liquid flow rate above the minimum value established during the performance test. b. Maintain the daily average scrubber effluent pH... scrubber, maintain the daily average pressure drop across the venturi within the operating range value...

  12. Comment on ``Annual variation of geomagnetic activity'' by Alicia L. Clúa de Gonzales et al.

    NASA Astrophysics Data System (ADS)

    Sonnemann, G. R.

    2002-10-01

    Clúa de Gonzales et al. (J. Atmos. Terr. Phys. 63 (2001) 367) analyzed the monthly means of the geomagnetic aa-index, available since 1868, and found enhanced geomagnetic activity in July outside of the known seasonal course of the semiannual variation. They pointed out that this behavior is mainly caused by the high values of the geomagnetic activity. Their analysis confirmed results obtained from an analysis of Ap-values nearly 30 years ago but widely unknown to the scientific community. At that time the entire year was analyzed using running means of the activity values averaged to the same date. Aside from the July period, the calculations revealed distinct deviations from the seasonal course, called geomagnetic singularities. The most marked singularity occurs from the middle of March to the end of March, characterized by a strong increase from, on average, relatively calm values to the strongest ones of the entire year. Some typical time patterns around and after equinox are repeated half a year later. An analysis in 1998 on the basis of the available aa-values confirmed the findings derived from Ap-values and from the local activity index Ak from Niemegk, Germany, available since 1890. The new results are presented and discussed. Special attention is paid to the statistical problem of the persistence of geomagnetic perturbations. The main question under consideration is whether the variation of the mean activity is caused by an accidental accumulation of strong perturbations occurring within certain intervals of days. We assume that the most marked variations of the mean value are not accidental and result from internal processes within the earth's atmosphere, whereas other, particularly small-scale, features are most probably accidental.

  13. Magnetic resonance imaging DTI-FT study on schizophrenic patients with typical negative first symptoms.

    PubMed

    Gu, Chengyu; Zhang, Ying; Wei, Fuquan; Cheng, Yougen; Cao, Yulin; Hou, Hongtao

    2016-09-01

    Magnetic resonance imaging (MRI) with diffusion-tensor imaging (DTI) together with a white matter fiber tracking (FT) technique was used to assess differences in brain white matter structure and function in schizophrenic patients with typical first negative symptoms. In total, 30 schizophrenic patients with typical first negative symptoms, comprising the observation group, were paired 1:1 according to gender, age, right-handedness, and education with 30 healthy individuals in a control group. Individuals in each group underwent routine MRI and DTI examination of the brain, and diffusion-tensor tractography (DTT) data were obtained through whole-brain analysis based on voxels and tractography. The results were expressed as fractional anisotropy (FA) values. The schizophrenic patients were evaluated using the Positive and Negative Syndrome Scale (PANSS) as well as a Global Assessment Scale (GAS). The results of the study showed that routine MRI identified no differences between the two groups. However, compared with the control group, the FA values obtained by DTT from the deep left prefrontal cortex, the right deep temporal lobe, the white matter of the inferior frontal gyrus, and part of the corpus callosum were significantly lower in the observation group (P<0.05). The PANSS positive scale value in the observation group averaged 7.7±1.5 and the negative scale averaged 46.6±5.9, while the general psychopathology scale averaged 65.4±10.3 and GAS averaged 53.8±19.2. In the Pearson statistical analysis, the FA values of the left deep prefrontal cortex, the right deep temporal lobe, the white matter of the inferior frontal gyrus, and part of the corpus callosum in the observation group were negatively correlated with the negative scale (P<0.05) and positively correlated with GAS (P<0.05). 
In conclusion, a decrease in the FA values of the left deep prefrontal cortex, the right deep temporal lobe, the white matter of the inferior frontal gyrus, and part of the corpus callosum may be associated with schizophrenia with typical first negative symptoms, and the application of MRI DTI-FT can improve diagnostic accuracy.
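    The Pearson correlation step above, relating regional FA values to a clinical scale across subjects, can be sketched with SciPy; the FA and scale values below are synthetic, chosen only to illustrate a negative correlation:

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic per-subject values: regional FA vs. PANSS negative scale.
fa = np.array([0.42, 0.40, 0.38, 0.36, 0.34, 0.33])
negative_scale = np.array([40.0, 43.0, 45.0, 48.0, 52.0, 55.0])

r, p = pearsonr(fa, negative_scale)   # correlation coefficient and p-value
print(round(r, 3), p < 0.05)          # strongly negative correlation here
```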

  14. Dosimetric Consistency of Co-60 Teletherapy Unit- a ten years Study.

    PubMed

    Baba, Misba H; Mohib-Ul-Haq, M; Khan, Aijaz A

    2013-01-01

    The goal of the Radiation standards and Dosimetry is to ensure that the output of the Teletherapy Unit is within ±2% of the stated one and the output of the treatment dose calculation methods are within ±5%. In the present paper, we studied the dosimetry of Cobalt-60 (Co-60) Teletherapy unit at Sher-I-Kashmir Institute of Medical Sciences (SKIMS) for last 10 years. Radioactivity is the phenomenon of disintegration of unstable nuclides called radionuclides. Among these radionuclides, Cobalt-60, incorporated in Telecobalt Unit, is commonly used in therapeutic treatment of cancer. Cobalt-60 being unstable decays continuously into Ni-60 with half life of 5.27 years thereby resulting in the decrease in its activity, hence dose rate (output). It is, therefore, mandatory to measure the dose rate of the Cobalt-60 source regularly so that the patient receives the same dose every time as prescribed by the radiation oncologist. The under dosage may lead to unsatisfactory treatment of cancer and over dosage may cause radiation hazards. Our study emphasizes the consistency between actual output and output obtained using decay method. The methodology involved in the present study is the calculations of actual dose rate of Co-60 Teletherapy Unit by two techniques i.e. Source to Surface Distance (SSD) and Source to Axis Distance (SAD), used for the External Beam Radiotherapy, of various cancers, using the standard methods. Thereby, a year wise comparison has been made between average actual dosimetric output (dose rate) and the average expected output values (obtained by using decay method for Co-60.). The present study shows that there is a consistency in the average output (dose rate) obtained by the actual dosimetry values and the expected output values obtained using decay method. The values obtained by actual dosimetry are within ±2% of the expected values. 
The results thus obtained in a year wise comparison of average output by actual dosimetry done regularly as a part of Quality Assurance of the Telecobalt Radiotherapy Unit and its deviation from the expected output data is within the permissible limits. Thus our study shows a trend towards uniformity and a better dose delivery.

  15. Center-to-Limb Variation of Deprojection Errors in SDO/HMI Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Falconer, David; Moore, Ronald; Barghouty, Nasser; Tiwari, Sanjiv K.; Khazanov, Igor

    2015-04-01

    For use in investigating the magnetic causes of coronal heating in active regions and for use in forecasting an active region’s productivity of major CME/flare eruptions, we have evaluated various sunspot-active-region magnetic measures (e.g., total magnetic flux, free-magnetic-energy proxies, magnetic twist measures) from HMI Active Region Patches (HARPs) after the HARP has been deprojected to disk center. From a few tens of thousand HARP vector magnetograms (of a few hundred sunspot active regions) that have been deprojected to disk center, we have determined that the errors in the whole-HARP magnetic measures from deprojection are negligibly small for HARPS deprojected from distances out to 45 heliocentric degrees. For some purposes the errors from deprojection are tolerable out to 60 degrees. We obtained this result by the following process. For each whole-HARP magnetic measure: 1) for each HARP disk passage, normalize the measured values by the measured value for that HARP at central meridian; 2) then for each 0.05 Rs annulus, average the values from all the HARPs in the annulus. This results in an average normalized value as a function of radius for each measure. Assuming no deprojection errors and that, among a large set of HARPs, the measure is as likely to decrease as to increase with HARP distance from disk center, the average of each annulus is expected to be unity, and, for a statistically large sample, the amount of deviation of the average from unity estimates the error from deprojection effects. 
    The deprojection errors arise from 1) errors in the transverse field being deprojected into the vertical field for HARPs observed at large distances from disk center, 2) increasingly larger foreshortening at larger distances from disk center, and 3) possible errors in transverse-field-direction ambiguity resolution. From the compiled set of measured values of whole-HARP magnetic nonpotentiality parameters measured from deprojected HARPs, we have examined the relation between each nonpotentiality parameter and the speed of CMEs from the measured active regions. For several different nonpotentiality parameters we find there is an upper limit to the CME speed, the limit increasing as the value of the parameter increases.
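    The error-estimation recipe above, normalizing each HARP's measure by its own central-meridian value, binning by distance from disk center in 0.05 Rs annuli, and reading each annulus mean's deviation from unity as the deprojection error, can be sketched with synthetic data:

```python
import numpy as np

# Synthetic normalized measures: value / central-meridian value for many HARPs.
# With no deprojection error, each annulus mean should sit near 1.0.
rng = np.random.default_rng(1)
radius = rng.uniform(0.0, 0.9, size=2000)             # distance in solar radii
normalized = 1.0 + rng.normal(0.0, 0.05, size=2000)   # measure / CM value

edges = np.linspace(0.0, 0.9, 19)                     # 0.05 Rs annuli
bins = np.digitize(radius, edges) - 1
bin_means = np.array([normalized[bins == i].mean() for i in range(18)])
deviation = np.abs(bin_means - 1.0)                   # deprojection-error estimate
print(deviation.max() < 0.05)                         # small for error-free data
```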

  16. Impact of increasing heat waves on U.S. ozone episodes in the 2050s: Results from a multimodel analysis using extreme value theory

    NASA Astrophysics Data System (ADS)

    Shen, L.; Mickley, L. J.; Gilleland, E.

    2016-04-01

    We develop a statistical model using extreme value theory to estimate the 2000-2050 changes in ozone episodes across the United States. We model the relationships between daily maximum temperature (Tmax) and maximum daily 8 h average (MDA8) ozone in May-September over 2003-2012 using a Point Process (PP) model. At ~20% of the sites, a marked decrease in the ozone-temperature slope occurs at high temperatures, defined as ozone suppression. The PP model sometimes fails to capture ozone-Tmax relationships, so we refit the ozone-Tmax slope using logistic regression and a generalized Pareto distribution (GPD) model. We then apply the resulting hybrid extreme-value-theory model to projections of Tmax from an ensemble of downscaled climate models. Assuming constant anthropogenic emissions at the present level, we find an average increase of 2.3 days a⁻¹ in ozone episodes (>75 ppbv) across the United States by the 2050s, with a change of +3 to +9 days a⁻¹ at many sites.
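    The GPD component of the hybrid model above fits the distribution of exceedances over a high threshold. A minimal sketch with SciPy on synthetic ozone values (the gamma-distributed data stand in for observed MDA8 ozone; nothing here reproduces the study's fits):

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic MDA8 ozone values (ppbv); gamma-distributed for illustration only.
rng = np.random.default_rng(42)
ozone = rng.gamma(shape=8.0, scale=6.0, size=5000)

threshold = 75.0                          # episode threshold from the abstract
excess = ozone[ozone > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0).
c, loc, scale = genpareto.fit(excess, floc=0.0)

p_exceed = excess.size / ozone.size       # empirical episode frequency
print(round(p_exceed, 3), round(scale, 3))
```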

  17. SU-E-I-09: Application of LiF:Mg,Cu,P (TLD-100H) Dosimeters in Diagnostic Radiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sina, S; Zeinali, B; Karimipourfard, M

    Purpose: Accurate dosimetry is essential in diagnostic radiology. The goal of this study is to verify the application of LiF:Mg,Cu,P (TLD-100H) in obtaining the entrance skin dose (ESD) of patients undergoing diagnostic radiology. The results of dosimetry performed with TLD-100H were compared with those obtained with TLD-100, a common dosimeter in diagnostic radiology. Methods: In this study the ESD values were measured using two types of thermoluminescence dosimeters (TLD-100 and TLD-100H) for 16 patients undergoing diagnostic radiology (lumbar spine imaging). The ESD values were also obtained by placing the two types of TLDs on the surface of a Rando phantom for different imaging techniques and views (AP and lateral). The TLD chips were annealed with a standard procedure, and the ECC values for each TLD were obtained by exposing the chips to equal amounts of radiation. Each time, three TLD chips were covered with thin dark plastic covers and placed on the surface of the phantom or the patient. The average reading of the three chips was used to obtain the dose. Results: The results show close agreement between the doses measured by the two dosimeters. According to the results of this study, the TLD-100H dosimeters have higher sensitivities (i.e., signal (nC)/dose) than TLD-100. The ESD values varied between 2.71 mGy and 26.29 mGy with an average of 11.89 mGy for TLD-100, and between 2.55 mGy and 27.41 mGy with an average of 12.32 mGy for TLD-100H. Conclusion: The TLD-100H dosimeters are suggested as effective dosimeters for low-dose fields because of their higher sensitivities.

  18. Influence of Lumber Volume Maximization on Value in Sawing Hardwood Sawlogs

    Treesearch

    Philip H. Steele; Francis G. Wagner; Lalit Kumar; Philip A. Araman

    1992-01-01

    Research based on applying volume-maximizing sawing solutions to idealized hardwood log forms has shown that average lumber yield can be increased by 6 percent. It is possible, however, that a lumber volume-maximizing solution may result in a decrease in lumber grade and a net reduction in total value of sawn lumber. The objective of this study was to determine the...

  19. Clustering techniques: measuring the performance of contract service providers.

    PubMed

    Cruz, Antonio Miguel; Perilla, Sandra Patricia Usaquén; Pabón, Nidia Nelly Vanegas

    2010-01-01

    This paper investigates the use of a clustering technique to characterize the providers of maintenance services in a health-care institution according to their performance. A characterization of the inventory of equipment from seven pilot areas was carried out first (including 264 medical devices). The characterization study concluded that the inventory as a whole is old [exploitation time (ET)/useful life (UL) average is 0.78] and has high maintenance service costs relative to the original cost of acquisition (service cost/acquisition cost average 8.61%). The performance of maintenance service providers was then monitored. The variables monitored were response time (RT), service time (ST), availability, and turnaround time (TAT). Finally, the study grouped maintenance service providers into the following clusters according to performance. Cluster 0: the best performance, with the lowest values of TAT, RT, and ST and an average TAT of 1.46 days; Clusters 1 and 2: the poorest performance, with the highest values of TAT, RT, and ST and an average TAT of 9.79 days; and Cluster 3: medium-quality performance, with intermediate values of TAT, RT, and ST and an average TAT of 2.56 days.
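The grouping step can be sketched with a minimal Lloyd's k-means over (TAT, RT, ST) vectors. The provider values below are made up for illustration, and the deterministic initialization (spreading initial centroids across the input order, assuming k ≥ 2) is a simplification of real k-means seeding:

```python
import math

def kmeans(points, k, iters=50):
    """Minimal Lloyd's k-means for small performance vectors."""
    # Deterministic initialization: k points spread across the input
    # order (assumes k >= 2 and len(points) >= k).
    centroids = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assign each point to its nearest centroid.
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[nearest].append(p)
        # Recompute centroids; an empty cluster keeps its old centroid.
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical (TAT, RT, ST) values in days for five providers.
providers = [(1.2, 0.5, 0.8), (1.6, 0.6, 0.9), (2.4, 1.0, 1.2),
             (9.5, 4.0, 5.0), (10.1, 4.2, 5.5)]
centroids, clusters = kmeans(providers, k=2)
```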

  20. Estimation of open water evaporation using land-based meteorological data

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Zhao, Yong

    2017-10-01

    Water surface evaporation is an important process in the hydrologic and energy cycles. Accurate simulation of water evaporation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors on water surface evaporation, the paper first adopted the Dalton model to simulate water surface evaporation. The results showed that the simulation precision was poor for the peak value zone. To improve the model simulation's precision, a modified Dalton model considering relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.
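The qualification metric used above is a mean absolute relative error. A minimal sketch follows; the grading bands are inferred from the abstract's wording (<5% excellent, <10% good, <20% qualified) and are an assumption, not the paper's stated thresholds:

```python
def average_relative_error(simulated, observed):
    """Mean absolute relative error, in percent, of paired series."""
    errors = [abs(s - o) / abs(o) for s, o in zip(simulated, observed)]
    return 100.0 * sum(errors) / len(errors)

def grade(error_pct):
    """Illustrative bands: <5% excellent, <10% good, <20% qualified.
    The exact thresholds are assumed for this sketch."""
    if error_pct < 5.0:
        return "excellent"
    if error_pct < 10.0:
        return "good"
    if error_pct < 20.0:
        return "qualified"
    return "unqualified"
```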

  1. Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Co-Axial Supersonic Free-Jet Experiment

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Edwards, J. R.

    2009-01-01

    Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The baseline value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was noted when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid simulation results showed the same trends as the baseline Reynolds-averaged predictions. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions are suggested as a remedy to this dilemma. Comparisons between resolved second-order turbulence statistics and their modeled Reynolds-averaged counterparts were also performed.

  2. Effect of Build Angle on Surface Properties of Nickel Superalloys Processed by Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Covarrubias, Ernesto E.; Eshraghi, Mohsen

    2018-03-01

    Aerospace, automotive, and medical industries use selective laser melting (SLM) to produce complex parts through solidifying successive layers of powder. This additive manufacturing technique has many advantages, but one of the biggest challenges facing this process is the resulting surface quality of the as-built parts. The purpose of this research was to study the surface properties of Inconel 718 alloys fabricated by SLM. The effect of build angle on the surface properties of as-built parts was investigated. Two sets of sample geometries including cube and rectangular artifacts were considered in the study. It was found that, for angles between 15° and 75°, theoretical calculations based on the "stair-step" effect were consistent with the experimental results. Downskin surfaces showed higher average roughness values compared to the upskin surfaces. No significant difference was found between the average roughness values measured from cube and rectangular test artifacts.

  3. Analysis of aquifer tests in the Punjab region of West Pakistan

    USGS Publications Warehouse

    Bennett, Gordon D.; Sheikh, Ijaz Ahmed; Alr, Sabire

    1967-01-01

    The results of 141 pumping tests in the Punjab Plain of West Pakistan are reported. Methods of test analysis are described in detail, and an outline of the theory underlying these methods is given. The lateral permeability of the screened interval is given for all tests; the specific yield of the material at water-table depth is given for 16 tests; and the vertical permeability of the material between the water table and the top of the screen is given for 14 tests. The lateral permeabilities are predominantly in the range 0.001 to 0.006 cfs per sq ft; the average value is 0.0032 cfs per sq ft. Specific yields generally range from 0.02 to 0.26; the average value is 0.14. All vertical permeability results fall in the range 10⁻⁵ to 10⁻³ cfs per sq ft.

  4. Dynamic Average-Value Modeling of Doubly-Fed Induction Generator Wind Energy Conversion Systems

    NASA Astrophysics Data System (ADS)

    Shahab, Azin

    In a Doubly-fed Induction Generator (DFIG) wind energy conversion system, the rotor of a wound rotor induction generator is connected to the grid via a partial scale ac/ac power electronic converter which controls the rotor frequency and speed. In this research, detailed models of the DFIG wind energy conversion system with Sinusoidal Pulse-Width Modulation (SPWM) scheme and Optimal Pulse-Width Modulation (OPWM) scheme for the power electronic converter are developed in detail in PSCAD/EMTDC. As the computer simulation using the detailed models tends to be computationally extensive, time consuming and even sometimes not practical in terms of speed, two modified approaches (switching-function modeling and average-value modeling) are proposed to reduce the simulation execution time. The results demonstrate that the two proposed approaches reduce the simulation execution time while the simulation results remain close to those obtained using the detailed model simulation.

  5. Measuring radio-signal power accurately

    NASA Technical Reports Server (NTRS)

    Goldstein, R. M.; Newton, J. W.; Winkelstein, R. A.

    1979-01-01

    Absolute value of signal power in weak radio signals is determined by computer-aided measurements. Equipment operates by averaging received signal over several-minute period and comparing average value with noise level of receiver previously calibrated.

  6. Outcome measures in stapes surgery: postoperative results are independent from preoperative parameters.

    PubMed

    Koopmann, Mario; Weiss, Daniel; Savvas, Eleftherios; Rudack, Claudia; Stenner, Markus

    2015-09-01

    The aim of this study was to compare audiometric results before and after stapes surgery and identify potential prognostic factors to appropriately select patients with otosclerosis who will most likely benefit from surgery. We enrolled 126 patients with otosclerosis (162 consecutive ears) in our study who underwent stapes surgery between 2007 and 2012 at our institution. Preoperative and postoperative data including pure-tone audiometry, speech audiometry, stapedial reflex audiometry and surgical data were analyzed. The average preoperative air-bone gap (ABG) was 28.9 ± 8.6 dB. Male patients and patients older than 45 years of age had greater preoperative ABGs in comparison to females and younger patients. Postoperative ABGs were 11.2 ± 7.4 dB. The average ABG gain was 17.7 ± 11.1 dB. Preoperative audiometric data, age, gender and type of surgery did not influence the postoperative results. Stapes surgery offers predictable results independent of disease progression or patient-related factors. While absolute values of hearing improvement are instrumental in reflecting the audiometric results of a cohort, relative values better reflect an individual's audiometric data and thus the patient's benefit.

  7. Relationships and redundancies of selected hemodynamic and structural parameters for characterizing virtual treatment of cerebral aneurysms with flow diverter devices.

    PubMed

    Karmonik, C; Anderson, J R; Beilner, J; Ge, J J; Partovi, S; Klucznik, R P; Diaz, O; Zhang, Y J; Britz, G W; Grossman, R G; Lv, N; Huang, Q

    2016-07-26

    To quantify the relationship and to demonstrate redundancies between hemodynamic and structural parameters before and after virtual treatment with a flow diverter device (FDD) in cerebral aneurysms. Steady computational fluid dynamics (CFD) simulations were performed for 10 cerebral aneurysms where FDD treatment with the SILK device was simulated by virtually reducing the porosity at the aneurysm ostium. Velocity and pressure values proximal and distal to and at the aneurysm ostium as well as inside the aneurysm were quantified. In addition, dome-to-neck ratios and size ratios were determined. Multiple correlation analysis (MCA) and hierarchical cluster analysis (HCA) were conducted to demonstrate dependencies between both structural and hemodynamic parameters. Velocities in the aneurysm were reduced by 0.14 m/s on average and correlated significantly (p<0.05) with velocity values in the parent artery (average correlation coefficient: 0.70). Pressure changes in the aneurysm correlated significantly with pressure values in the parent artery and aneurysm (average correlation coefficient: 0.87). MCA found statistically significant correlations between velocity values and between pressure values, respectively. HCA sorted velocity parameters, pressure parameters and structural parameters into different hierarchical clusters. HCA of aneurysms based on the parameter values yielded similar results by either including all (n=22) or only non-redundant parameters (n=2, 3 and 4). Hemodynamic and structural parameters before and after virtual FDD treatment show strong inter-correlations. Redundancy of parameters was demonstrated with hierarchical cluster analysis.

  8. Statistical strategies for averaging EC50 from multiple dose-response experiments.

    PubMed

    Jiang, Xiaoqi; Kopp-Schneider, Annette

    2015-11-01

    In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring averaging of EC50 estimates from a series of experiments. Two statistical strategies, mixed-effect modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments, in which (a) complete and explicit dose-response relationships are observed in all experiments, or (b) only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method to average EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We subsequently provided a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
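The meta-analysis strategy of averaging EC50 estimates can be sketched as a fixed-effect inverse-variance average on the log scale. The interface below (EC50 point estimates plus standard errors of the log-EC50 from each experiment) is an assumed data layout for illustration, not the authors' implementation:

```python
import math

def pooled_ec50(ec50_estimates, log_se):
    """Fixed-effect meta-analytic average of EC50 values.

    Each experiment contributes its log-EC50, weighted by the inverse
    of the squared standard error of that log-estimate; the pooled
    estimate is transformed back to the original scale."""
    logs = [math.log(e) for e in ec50_estimates]
    weights = [1.0 / (s * s) for s in log_se]
    mean_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return math.exp(mean_log), se_pooled
```

With equal standard errors this reduces to the geometric mean of the EC50 estimates.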

  9. [Relationships between landscape structure and rocky desertification in karst region of northwestern Guangxi].

    PubMed

    Zhang, Xiao-nan; Wang, Ke-lin; Chen, Hong-song; Zhang, Wei

    2008-11-01

    By using canonical correspondence analysis (CCA), sixteen landscape indices were adopted to quantitatively analyze the relationships between the landscape structure and rocky desertification in karst region of Huanjiang County, Guangxi Province. The results showed that the first and the second ordination axis of CCA were strongly correlated to the factors of average patch area, average dry land patch area, landscape shape index, and landscape aggregation index. The potential rocky desertification in the region was highly positively correlated with the average dry land patch area and the average fractal dimensions of dry land and shrub land, but negatively correlated with the patch numbers of dry land. Light rocky desertification had obvious positive correlations with the fractal dimension index, average fractal dimension of unused land, and patch numbers of shrub land; while moderate and strong rocky desertification had high positive correlations with the average unused land patch area but negative correlation with the average fractal dimension of shrub land. To some extent, rocky desertification degree might be represented by the values of landscape indices. The gradient variation in karst rocky desertification along landscape structure was clearly presented by the results of CCA.

  10. The Accuracy of Stated Energy Contents of Reduced-Energy, Commercially Prepared Foods

    PubMed Central

    Urban, Lorien E.; Dallal, Gerard E.; Robinson, Lisa M.; Ausman, Lynne M.; Saltzman, Edward; Roberts, Susan B.

    2010-01-01

    The accuracy of stated energy contents of reduced-energy restaurant foods and frozen meals purchased from supermarkets was evaluated. Measured energy values of 29 quick-serve and sit-down restaurant foods averaged 18% more than stated values, and measured energy values of 10 frozen meals purchased from supermarkets averaged 8% more than originally stated. These differences substantially exceeded laboratory measurement error but did not achieve statistical significance due to considerable variability in the degree of underreporting. Some individual restaurant items contained up to 200% of stated values and, in addition, free side dishes increased provided energy to an average of 245% of stated values for the entrees they accompanied. These findings suggest that stated energy contents of reduced-energy meals obtained from restaurants and supermarkets are not consistently accurate, and in this study averaged more than measured values, especially when free side dishes were taken into account. If widespread, this phenomenon could hamper efforts to self-monitor energy intake to control weight, and could also reduce the potential benefit of recent policy initiatives to disseminate information on food energy content at the point of purchase. PMID:20102837

  11. The impact of reforestation in the northeast United States on precipitation and surface temperature

    NASA Astrophysics Data System (ADS)

    Clark, Allyson

    Since the 1920s, forest coverage in the northeastern United States has recovered from disease, clearing for agricultural and urban development, and the demands of the timber industry. Such a dramatic change in ground cover can influence heat and moisture fluxes to the atmosphere, as measured in altered landscapes in Australia, Israel, and the Amazon. In this study, the impacts of recent reforestation in the northeastern United States on summertime precipitation and surface temperature were quantified by comparing average modern values to 1950s values. Weak positive relationships were found between reforestation and both average monthly precipitation and daily minimum temperatures, a weak negative relationship with average daily maximum surface temperature, and no relationship with average surface temperature. Results of the observational analysis were compared with results obtained from reforestation scenarios simulated with the BUGS5 global climate model. The single difference between the model runs was the amount of forest coverage in the northeast United States; three levels of forest were defined: a grassland state with 0% forest coverage, a completely forested state with approximately 100% forest coverage, and a control state with forest coverage closely resembling modern forest coverage. The three simulations showed larger-magnitude average changes in precipitation and in all temperature variables than the observations. The difference in magnitudes between the model simulations and the observations was much larger than the difference in the amount of reforestation in each case. Additionally, unlike in the observations, a negative relationship was found between average daily minimum temperature and the amount of forest coverage, implying that additional factors influence temperature and precipitation in the real world that are not accounted for in the model.

  12. Comparative statistical analysis of carcinogenic and non-carcinogenic effects of uranium in groundwater samples from different regions of Punjab, India.

    PubMed

    Saini, Komal; Singh, Parminder; Bajwa, Bikramjit Singh

    2016-12-01

    An LED fluorimeter has been used for microanalysis of uranium concentration in groundwater samples collected from six districts of South West (SW), West (W) and North East (NE) Punjab, India. The average uranium content in water samples of SW Punjab is observed to be higher than the WHO and USEPA recommended safe limit of 30 µg l⁻¹ as well as the AERB proposed limit of 60 µg l⁻¹, whereas for the W and NE regions of Punjab, the average uranium concentration was within the AERB recommended limit of 60 µg l⁻¹. The average value observed in SW Punjab is around 3-4 times that observed in W Punjab, and more than 17 times the average value observed in the NE region of Punjab. Statistical analyses of carcinogenic as well as non-carcinogenic risks due to uranium have been carried out for each studied district.

  13. [Study of the reliability in one dimensional size measurement with digital slit lamp microscope].

    PubMed

    Wang, Tao; Qi, Chaoxiu; Li, Qigen; Dong, Lijie; Yang, Jiezheng

    2010-11-01

    To study the reliability of the digital slit lamp microscope as a tool for quantitative analysis in one-dimensional size measurement. Three single-blinded observers acquired and repeatedly measured images of 4.00 mm and 10.00 mm targets on a vernier caliper, simulating the human eye pupil and cornea diameter, under a China-made digital slit lamp microscope at objective magnifications of 4x, 10x, 16x, 25x, and 40x for the 4.00 mm target and of 4x, 10x, and 16x for the 10.00 mm target. The correctness and precision of the measurements were compared. For the 4.00 mm images, the average values measured by the three investigators were between 3.98 and 4.06; for the 10.00 mm images, the average values fell within 10.00 to 10.04. For the 4.00 mm images, except for A4, B25, C16, and C25, significant differences were noted between the measured values and the true value. For the 10.00 mm images, except for A10, statistically significant differences were found between the measured values and the true value. Comparing the results of the same size measured at different magnifications by the same investigator, except for investigator A's measurements of the 10.00 mm dimension, the measurements of all remaining investigators differed significantly across magnifications. Comparing measurements of the same size among investigators, measurements of 4.00 mm at 4-fold magnification showed no significant difference among the investigators; the remaining results were statistically significant. The coefficients of variation of all measurement results were less than 5%; as magnification increased, the coefficient of variation decreased. Measurement with the digital slit lamp microscope in one dimension has good reliability, but reliability analysis should be performed before it is used for quantitative analysis, to reduce systematic errors.
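The coefficient of variation reported above is just the sample standard deviation relative to the mean. A minimal sketch with made-up repeated readings of the 4.00 mm target:

```python
import statistics

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation / mean x 100."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated measurements (mm) of the 4.00 mm target.
readings = [3.98, 4.02, 4.06]
cv = coefficient_of_variation(readings)
```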

  14. 50/50 JP5/ATJ5 Specification and Fit-for-Purpose Test Results

    DTIC Science & Technology

    2014-07-02

    identical to the average CRC handbook JP-5 values. The minor discrepancies between these results are within the experimental error of the method… NAVAIR SYSCOM Report 441/14-011, 2 July 2014. Prepared by: Kristin L. Weisser.

  15. Statistical Considerations of Data Processing in Giovanni Online Tool

    NASA Technical Reports Server (NTRS)

    Suhung, Shen; Leptoukh, G.; Acker, J.; Berrick, S.

    2005-01-01

    The GES DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni) is a web-based interface for the rapid visualization and analysis of gridded data from a number of remote sensing instruments. The GES DISC currently employs several Giovanni instances to analyze various products, such as Ocean-Giovanni for ocean products from SeaWiFS and MODIS-Aqua, TOMS & OMI Giovanni for atmospheric chemical trace gases from TOMS and OMI, and MOVAS for aerosols from MODIS (http://giovanni.gsfc.nasa.gov). Foremost among the Giovanni statistical functions is data averaging. Two aspects of this function are addressed here. The first deals with the accuracy of averaging gridded mapped products vs. averaging from the ungridded Level 2 data. Some mapped products contain mean values only; others contain additional statistics, such as the number of pixels (NP) for each grid cell, standard deviation, etc. Since NP varies spatially and temporally, averaging with or without weighting by NP will give different results. In this paper, we address differences of various weighting algorithms for some datasets utilized in Giovanni. The second aspect is related to different averaging methods affecting data quality and interpretation for data with non-normal distribution. The present study demonstrates results of different spatial averaging methods using gridded SeaWiFS Level 3 mapped monthly chlorophyll a data. Spatial averages were calculated using three different methods: arithmetic mean (AVG), geometric mean (GEO), and maximum likelihood estimator (MLE). Biogeochemical data, such as chlorophyll a, are usually considered to have a log-normal distribution. The study determined that differences between methods tend to increase with increasing size of a selected coastal area, with no significant differences in most open oceans. The GEO method consistently produces values lower than AVG and MLE. The AVG method produces values larger than MLE in some cases, but smaller in other cases.
Further studies indicated that significant differences between AVG and MLE methods occurred in coastal areas where data have large spatial variations and a log-bimodal distribution instead of log-normal distribution.
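The three spatial averaging methods can be compared directly: for log-normally distributed data, the maximum-likelihood estimate of the population mean is exp(mu + sigma²/2), with mu and sigma² estimated from the log-transformed values. A sketch on made-up chlorophyll-like values (not SeaWiFS data):

```python
import math

def arithmetic_mean(x):
    """AVG: plain arithmetic mean."""
    return sum(x) / len(x)

def geometric_mean(x):
    """GEO: exponential of the mean of the logs."""
    return math.exp(arithmetic_mean([math.log(v) for v in x]))

def lognormal_mle_mean(x):
    """MLE of the mean of a log-normal population:
    exp(mu + sigma^2/2), with mu and sigma^2 estimated from log(x)."""
    logs = [math.log(v) for v in x]
    mu = arithmetic_mean(logs)
    var = sum((l - mu) ** 2 for l in logs) / len(logs)
    return math.exp(mu + var / 2.0)

# Made-up skewed chlorophyll-a values (mg m^-3).
chl = [0.08, 0.12, 0.20, 0.35, 1.50]
avg, geo, mle = arithmetic_mean(chl), geometric_mean(chl), lognormal_mle_mean(chl)
```

By the AM-GM inequality GEO never exceeds AVG, and since exp(var/2) > 1 for any spread in the data, GEO is also below MLE, consistent with the study's finding that GEO is consistently the lowest.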

  16. Unveiling signatures of interdecadal climate changes by Hilbert analysis

    NASA Astrophysics Data System (ADS)

    Zappalà, Dario; Barreiro, Marcelo; Masoller, Cristina

    2017-04-01

    A recent study demonstrated that, in a class of networks of oscillators, the optimal network reconstruction from dynamics is obtained when the similarity analysis is performed not on the original dynamical time series, but on transformed series obtained by Hilbert transform. [1] That motivated us to use the Hilbert transform to study another kind of (in a broad sense) "oscillating" series, namely temperature series. We found that Hilbert analysis of SAT (Surface Air Temperature) time series uncovers meaningful information about climate and is therefore a promising tool for the study of other climatological variables. [2] In this work we analysed a large dataset of SAT series, performing the Hilbert transform and further analysis with the goal of finding signs of climate change during the analysed period. We used the publicly available ERA-Interim reanalysis dataset. [3] In particular, we worked on daily SAT time series, from 1979 to 2015, at 16380 points arranged over a regular grid on the Earth's surface. From each SAT time series we calculated the anomaly series and also, by using the Hilbert transform, the instantaneous amplitude and instantaneous frequency series. Our first approach is to calculate the relative variation: the difference between the average value over the last 10 years and the average value over the first 10 years, divided by the average value over the whole analysed period. We performed these calculations on our transformed series, frequency and amplitude, using both average and standard deviation values. Furthermore, for comparison with an established analysis method, we performed the same calculations on the anomaly series. We plotted these results as maps, where the colour of each site indicates the value of its relative variation. Finally, to gain insight into the interpretation of our results on real SAT data, we generated synthetic sinusoidal series with various levels of additive noise.
By applying Hilbert analysis to the synthetic data, we uncovered a clear trend between mean amplitude and mean frequency: as the noise level grows, the amplitude increases while the frequency decreases. Research funded in part by AGAUR (Generalitat de Catalunya), EU LINC project (Grant No. 289447) and Spanish MINECO (FIS2015-66503-C3-2-P).
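The instantaneous amplitude and frequency used above come from the analytic signal. A sketch using the generic FFT construction of the Hilbert transform (equivalent to what scipy.signal.hilbert computes); the toy "seasonal" series is synthetic, not ERA-Interim data:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT construction of the Hilbert
    transform: zero the negative-frequency half of the spectrum,
    double the positive half, and inverse-transform."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

# Toy series: ten years of daily samples with a pure annual cycle.
t = np.arange(3650)
sat = 10.0 * np.sin(2.0 * np.pi * t / 365.0)
z = analytic_signal(sat)
amplitude = np.abs(z)                        # instantaneous amplitude
phase = np.unwrap(np.angle(z))
frequency = np.diff(phase) / (2.0 * np.pi)   # instantaneous frequency, cycles/day
```

For this noise-free sinusoid the instantaneous amplitude is the constant 10 and the instantaneous frequency is the constant 1/365 cycles per day; adding noise perturbs both, which is the effect the authors probe with their synthetic series.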

  17. Measurement of natural and ¹³⁷Cs radioactivity concentrations at Izmit Bay (Marmara Sea), Turkey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Öksüz, İ., E-mail: ibrahim-ksz@yahoo.com; Güray, R. T., E-mail: tguray@kocaeli.edu.tr; Özkan, N., E-mail: nozkan@kocaeli.edu.tr

    In order to determine the radioactivity level at Izmit Bay (Marmara Sea), marine sediment samples were collected from five different locations. The radioactivity concentrations of the naturally occurring ²³⁸U, ²³²Th and ⁴⁰K isotopes and also that of the artificial isotope ¹³⁷Cs were measured by using gamma-ray spectroscopy. Preliminary results show that the radioactivity concentrations of the ²³⁸U and ²³²Th isotopes are lower than the worldwide average values, while the radioactivity concentration of ⁴⁰K is higher than the worldwide average value. A small amount of ¹³⁷Cs contamination, which might be caused by the Chernobyl accident, was also detected.

  18. Methods to determine pumped irrigation-water withdrawals from the Snake River between Upper Salmon Falls and Swan Falls Dams, Idaho, using electrical power data, 1990-95

    USGS Publications Warehouse

    Maupin, Molly A.

    1999-01-01

    Pumped withdrawals compose most of the irrigation-water diversions from the Snake River between Upper Salmon Falls and Swan Falls Dams in southwestern Idaho. Pumps at 32 sites along the reach lift water as high as 745 feet to irrigate croplands on plateaus north and south of the river. The number of pump sites at which withdrawals are being continuously measured has been steadily decreasing, from 32 in 1990 to 7 in 1998. A cost-effective and accurate means of estimating annual irrigation-water withdrawals at pump sites that are no longer continuously measured was needed. Therefore, the U.S. Geological Survey began a study in 1998, as part of its Water-Use Program, to determine power-consumption coefficients (PCCs) for each pump site so that withdrawals could be estimated by using electrical power-consumption and total head data. PCC values for each pump site were determined by using withdrawal data that were measured by the U.S. Geological Survey during 1990–92 and 1994–95, energy data reported by Idaho Power Company during the same period, and total head data collected at each site during a field inventory in 1998. Individual average annual withdrawals for the 32 pump sites ranged from 1,120 to 44,480 acre-feet; average PCC values ranged from 103 to 1,248 kilowatthours per acre-foot. During the 1998 field season, power demand, total head, and withdrawal at 18 sites were measured to determine 1998 PCC values. Most of the 1998 PCC values were within 10 percent of the 5-year average, which demonstrates that withdrawals for a site that is no longer continuously measured can be calculated with reasonable accuracy by using the PCC value determined from this study and annual power-consumption data. K-factors, coefficients that describe the amount of energy necessary to lift water, were determined for each pump site by using values of PCC and total head and ranged from 1.11 to 1.89 kilowatthours per acre-foot per foot. 
Statistical methods were used to define the relations among PCC values and selected pump-site characteristics. Multiple correlation analysis between average PCC values and total head, total horsepower, and total number of pumps revealed that the strongest correlation was between average PCC and total head. Linear regression of these two variables resulted in a strong coefficient of determination (R2=0.986) and a representative K-factor of 1.463. Pump sites were subdivided into two groups on the basis of total head: 0 to 300 feet and greater than 300 feet. Regression of average PCC values for eight pump sites with total head less than 300 feet produced a good coefficient of determination (R2=0.870) and a representative K-factor of 1.682. The second group consisted of 10 pump sites with total head greater than 300 feet; regression produced a coefficient of determination of R2=0.939 and a representative K-factor of 1.405. Data on pump-site characteristics were successfully used to determine individual PCC and K-factor values. Statistical relations between pump-site characteristics and PCC values were defined and used to determine regression equations that resulted in good coefficients of determination and representative K-factors. The individual PCC values will be used in the future to calculate irrigation-water withdrawals at sites that are no longer continuously measured. The representative K-factors and regression equations will be used to calculate irrigation-water withdrawals at sites that have not been previously measured and where total head and power consumption are known.
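
The bookkeeping behind PCCs and K-factors is simple enough to sketch in code. The site records below are invented for illustration (they are not USGS data), but the identities PCC = energy/withdrawal and K = PCC/head, and a regression through the origin to obtain a representative K, follow the abstract:

```python
# Hypothetical pump-site records: (total head in feet, annual energy in kWh,
# annual withdrawal in acre-feet). Values are illustrative, not USGS data.
sites = [
    (120.0, 2.1e6, 11000.0),
    (300.0, 9.5e6, 21000.0),
    (745.0, 4.6e7, 43000.0),
]

def pcc(energy_kwh, withdrawal_af):
    """Power-consumption coefficient: kilowatthours per acre-foot."""
    return energy_kwh / withdrawal_af

def k_factor(pcc_value, head_ft):
    """Energy needed to lift one acre-foot of water one foot."""
    return pcc_value / head_ft

# Representative K-factor via least squares through the origin (PCC ~ K * head):
heads = [h for h, _, _ in sites]
pccs = [pcc(e, w) for _, e, w in sites]
k_rep = sum(h * p for h, p in zip(heads, pccs)) / sum(h * h for h in heads)

def estimate_withdrawal(energy_kwh, head_ft, k=k_rep):
    """Withdrawal (acre-feet) at an unmeasured site from power and head."""
    return energy_kwh / (k * head_ft)
```

Once PCC (or a representative K and the head) is known for a site, the annual withdrawal at a site that is no longer continuously measured reduces to a single division by the annual power consumption, which is exactly the use the abstract describes.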

  19. Association between costs and quality of acute myocardial infarction care hospitals under the Korea National Health Insurance program.

    PubMed

    Kang, Hee-Chung; Hong, Jae-Seok

    2017-08-01

If cost reductions produce a cost-quality trade-off, healthcare policy makers need to be more circumspect about the use of cost-reduction initiatives. Additional empirical evidence about the relationship between cost and quality is needed to design a value-based payment system. We examined the association between cost and quality performance for acute myocardial infarction (AMI) care at the hospital level. This cross-sectional study examined 69 hospitals with 6599 patients hospitalized in 2008 under the Korea National Health Insurance (KNHI) program. We separately estimated hospital-specific effects on cost and quality using fixed-effects models adjusting for average patient risk, and then examined the association between the estimated hospital effects on treatment cost and on quality. Hospitals were distributed over all 4 cost × quality quadrants rather than concentrated in the trade-off quadrants (i.e., above-average cost with above-average quality, and below-average cost with below-average quality). We found no significant trade-off between cost and quality among hospitals providing AMI care in Korea. Our results further contribute to formulating a rationale for value-based hospital-level incentive programs by supporting the necessity of different approaches depending on which of these 4 quadrants a hospital's quality location falls in.
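
The four cost × quality quadrants used above are just a two-way split around the averages. A minimal sketch, with hospital-specific effect estimates invented for illustration:

```python
# Hypothetical hospital-specific effect estimates: name -> (cost, quality).
hospitals = {"A": (1.2, 0.9), "B": (0.8, 1.1), "C": (1.3, 1.2), "D": (0.7, 0.8)}

avg_cost = sum(c for c, _ in hospitals.values()) / len(hospitals)
avg_qual = sum(q for _, q in hospitals.values()) / len(hospitals)

def quadrant(cost, quality):
    """Place a hospital in one of the 4 cost x quality quadrants."""
    c = "above-avg cost" if cost >= avg_cost else "below-avg cost"
    q = "above-avg quality" if quality >= avg_qual else "below-avg quality"
    return (c, q)

labels = {name: quadrant(c, q) for name, (c, q) in hospitals.items()}

# A strict cost-quality trade-off would concentrate hospitals in the
# (above, above) and (below, below) quadrants; the study instead found
# hospitals spread over all four.
```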

  20. Variability of dental cone beam CT grey values for density estimations

    PubMed Central

    Pauwels, R; Nackaerts, O; Bellaiche, N; Stamatakis, H; Tsiklakis, K; Walker, A; Bosmans, H; Bogaerts, R; Jacobs, R; Horner, K

    2013-01-01

    Objective The aim of this study was to investigate the use of dental cone beam CT (CBCT) grey values for density estimations by calculating the correlation with multislice CT (MSCT) values and the grey value error after recalibration. Methods A polymethyl methacrylate (PMMA) phantom was developed containing inserts of different density: air, PMMA, hydroxyapatite (HA) 50 mg cm−3, HA 100, HA 200 and aluminium. The phantom was scanned on 13 CBCT devices and 1 MSCT device. Correlation between CBCT grey values and CT numbers was calculated, and the average error of the CBCT values was estimated in the medium-density range after recalibration. Results Pearson correlation coefficients ranged between 0.7014 and 0.9996 in the full-density range and between 0.5620 and 0.9991 in the medium-density range. The average error of CBCT voxel values in the medium-density range was between 35 and 1562. Conclusion Even though most CBCT devices showed a good overall correlation with CT numbers, large errors can be seen when using the grey values in a quantitative way. Although it could be possible to obtain pseudo-Hounsfield units from certain CBCTs, alternative methods of assessing bone tissue should be further investigated. Advances in knowledge The suitability of dental CBCT for density estimations was assessed, involving a large number of devices and protocols. The possibility for grey value calibration was thoroughly investigated. PMID:23255537
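
The two quantities reported above (the Pearson correlation between CBCT grey values and CT numbers, and an average grey-value error after linear recalibration) can be computed as follows. The phantom readings below are invented for illustration; they are not measurements from the study:

```python
import math

# Hypothetical insert readings for one device: MSCT CT numbers (HU) and the
# corresponding raw CBCT grey values, ordered air .. aluminium.
hu   = [-1000.0, 30.0, 100.0, 120.0, 300.0, 1500.0]
grey = [-900.0, 90.0, 160.0, 180.0, 420.0, 1700.0]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(grey, hu)

# Least-squares recalibration grey -> pseudo-HU, then the mean absolute
# residual, analogous to the "average error after recalibration" above.
n = len(hu)
mg, mh = sum(grey) / n, sum(hu) / n
slope = (sum((g - mg) * (h - mh) for g, h in zip(grey, hu))
         / sum((g - mg) ** 2 for g in grey))
intercept = mh - slope * mg
avg_error = sum(abs(slope * g + intercept - h) for g, h in zip(grey, hu)) / n
```

A high r with a large avg_error is exactly the situation the abstract warns about: good overall linearity does not by itself make the grey values safe to use quantitatively.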

  1. Spatialization of instantaneous and daily average net radiation and soil heat flux in the territory of Itaparica, Northeast Brazil

    NASA Astrophysics Data System (ADS)

    Lopes, Helio L.; Silva, Bernardo B.; Teixeira, Antônio H. C.; Accioly, Luciano J. O.

    2012-09-01

This work aims to quantify the energy exchanges between atmosphere and surface by modeling both net radiation and soil heat flux in relation to land use and cover. The methodology involved modeling and mapping physical and biophysical parameters using MODIS images and the SEBAL algorithm in an area of native vegetation and irrigated crops. The results showed variations in the values of the estimated parameters for different land cover types, mainly in caatinga cover. Dense caatinga presents a mean soil heat flux (Go) of 124.9 W m-2, while sparse caatinga with incidence of erosion presents an average value of 132.6 W m-2. For irrigated plots cultivated with banana, coconut, and papaya the mean Go values were 103.8, 98.6, and 113.9 W m-2, respectively. With regard to the instantaneous net radiation (Rn), dense caatinga presented a mean value of 626.1 W m-2, while sparse caatinga presented a mean value of 575.2 W m-2. Irrigated areas cultivated with banana, coconut, and papaya presented Rn of 658.1, 647.4 and 617.9 W m-2, respectively. Applying the daily mean net radiation (RnDAve), it was found that dense caatinga had a mean value of 417.1 W m-2, while sparse caatinga had a mean value of 379.9 W m-2. For the irrigated crops of banana, coconut and papaya the RnDAve values were 430.9, 431.3 and 411.6 W m-2, respectively. A sinusoidal model can be applied to determine the maximum and RnDAve for the diverse classes of LULC; however, the results need to be compared with field data to validate this model.

  2. Indoor 222Rn measurement and hazards indices in houses of Al-Najaf province - Iraq

    NASA Astrophysics Data System (ADS)

    Ebrahiem, Sameera A.; Falih, Esraa H.; Mahdi, Hind Abdul Majeed; Shaban, Auday H.

    2018-05-01

In this paper, 222Rn concentrations were measured in houses in ten regions of Al-Najaf province using a RAD-7 detector. The results indicate that the lowest 222Rn concentration was found in the Al-Motanaby region (88 Bq/m3), while the highest was found in the Al-Forat region (193 Bq/m3), with an average value of 143.4±27.6 Bq/m3; all results are below the recommended range of 200-300 Bq/m3. The radiation hazard indices (PAEC, EP, AED and CPPP) were also found to be below the global limits.

  3. A comparative study of DIGNET, average, complete, single hierarchical and k-means clustering algorithms in 2D face image recognition

    NASA Astrophysics Data System (ADS)

    Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.

    2014-06-01

The study in this paper is part of broader research on discovering facial sub-clusters in face databases of different ethnicities. These new sub-clusters, along with other metadata (such as race, sex, etc.), lead to a vector for each face in the database in which each component represents the likelihood that a given face belongs to each cluster. This vector is then used as a feature vector in a human identification and tracking system based on faces and other biometrics. The first stage in this system involves a clustering method which evaluates and compares the clustering results of five different clustering algorithms (average, complete, and single hierarchical algorithms, k-means, and DIGNET), and selects the best strategy for each data collection. In this paper we present the comparative performance of the clustering results of DIGNET and four clustering algorithms (average, complete, and single hierarchical, and k-means) on fabricated 2D and 3D samples, and on actual face images from various databases, using four standard metrics: the silhouette figure, the mean silhouette coefficient, the Hubert Γ test statistic, and the classification accuracy of each clustering result. The results showed that, in general, DIGNET gives more trustworthy results than the other algorithms when the metric values are above a specific acceptance threshold. However, when the evaluation metrics have values lower than the acceptance threshold but not too low (too low corresponds to ambiguous or false results), the clustering results need to be verified by the other algorithms.
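
Of the four evaluation metrics listed, the mean silhouette coefficient is the easiest to reproduce. A self-contained sketch on toy 2-D points (not the face data from the study):

```python
import math

def mean_silhouette(points, labels):
    """Mean silhouette coefficient: s(i) = (b - a) / max(a, b), where a is the
    mean intra-cluster distance of point i and b is its mean distance to the
    nearest other cluster."""
    scores = []
    for i, p in enumerate(points):
        own = [math.dist(p, q) for j, q in enumerate(points)
               if j != i and labels[j] == labels[i]]
        a = sum(own) / len(own)
        b = min(
            sum(math.dist(p, q) for j, q in enumerate(points)
                if labels[j] == lab)
            / sum(1 for j in range(len(points)) if labels[j] == lab)
            for lab in set(labels) if lab != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated blobs should score close to 1:
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labs = [0, 0, 0, 1, 1, 1]
score = mean_silhouette(pts, labs)
```

Values near 1 indicate compact, well-separated clusters; values near or below the acceptance threshold would, per the abstract, trigger verification of the clustering by the other algorithms.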

  4. Using Copulas in the Estimation of the Economic Project Value in the Mining Industry, Including Geological Variability

    NASA Astrophysics Data System (ADS)

    Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal

    2017-12-01

Geological variability is one of the main factors influencing the viability of mining investment projects and the technical risk of geology projects. To date, analyses of the economic viability of new extraction fields for the KGHM Polska Miedź S.A. underground copper mine at the Fore Sudetic Monocline have assumed a constant, averaged content of useful elements. The research presented in this article aims to verify the value of production from copper and silver ore, for the same economic background, using variable cash flows resulting from the local variability of useful-element content. Furthermore, the economic model of the ore is investigated for significant differences between the model value estimated using a linear correlation between useful-element content and mine-face height, and an approach in which the correlation of model parameters is based on the copula best matching an information-capacity criterion. Using a copula allows the simulation to take multivariable dependencies into account simultaneously, giving a better reflection of the dependency structure than linear correlation does. Calculation results of the economic model used for deposit valuation indicate that modelling the correlation between copper and silver with a copula generates higher variation in possible project value than modelling based on linear correlation, while the average deposit value remains unchanged.
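
The core idea (coupling two marginals through a copula rather than through a linear correlation of the values themselves) can be sketched with a Gaussian copula. The marginal distributions and all numbers below are invented for illustration and are not the deposit model from the article:

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_pair(rho, inv_cdf_x, inv_cdf_y, rng):
    """One (x, y) draw whose dependence is a Gaussian copula with parameter
    rho and whose marginals are given by the two inverse CDFs."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    return inv_cdf_x(norm_cdf(z1)), inv_cdf_y(norm_cdf(z2))

# Illustrative marginals, NOT the deposit model from the article:
def inv_cu(u):
    return 1.0 + 1.5 * u                # Cu grade: uniform on 1.0-2.5 %

def inv_ag(u):
    return -20.0 * math.log(1.0 - u)    # Ag grade: exponential, mean 20 g/t

rng = random.Random(42)
pairs = [gaussian_copula_pair(0.8, inv_cu, inv_ag, rng) for _ in range(5000)]
xs, ys = zip(*pairs)

# Sample Pearson correlation of the simulated grades:
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
corr = (sum((x - mx) * (y - my) for x, y in pairs)
        / math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys)))
```

Any marginal with an invertible CDF can be plugged in: the copula fixes the dependence structure independently of the marginals, which is what lets a simulation reflect multivariable dependencies that a single linear correlation coefficient cannot.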

  5. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goethe, Martin, E-mail: martingoethe@ub.edu; Rubi, J. Miguel; Fita, Ignacio

As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically behave more smoothly as a function of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased considerably by introducing environment-specific Lennard-Jones parameters, accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.

  6. Thermal motion in proteins: Large effects on the time-averaged interaction energies

    NASA Astrophysics Data System (ADS)

    Goethe, Martin; Fita, Ignacio; Rubi, J. Miguel

    2016-03-01

As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies generally do not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically behave more smoothly as a function of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased considerably by introducing environment-specific Lennard-Jones parameters, accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
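
The smoothing effect itself is easy to demonstrate numerically: sample distances around an average with a thermal spread and compare the time-averaged energy ⟨V(r)⟩ with V(⟨r⟩). The Lennard-Jones parameters and the fluctuation width below are illustrative, not the paper's fitted values:

```python
import math
import random

def lj(r, eps=0.2, sigma=3.4):
    """Lennard-Jones pair potential (eps in kcal/mol, sigma in Angstrom;
    illustrative parameters)."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

rng = random.Random(0)
r_avg, r_std = 4.0, 0.3   # average inter-atomic distance and thermal spread

samples = [rng.gauss(r_avg, r_std) for _ in range(100000)]
e_time_avg = sum(lj(r) for r in samples) / len(samples)   # <V(r)>
e_at_avg = lj(r_avg)                                      # V(<r>)
delta = e_time_avg - e_at_avg
```

Because the repulsive wall of the potential rises much faster than the attractive tail decays, the time-averaged energy differs systematically from the potential evaluated at the average distance; this is the ⟨V(r)⟩ ≠ V(⟨r⟩) discrepancy the abstract quantifies.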

  7. Height and body mass index values of nineteenth-century New York legislators.

    PubMed

    Bodenhorn, Howard

    2010-03-01

    Previous studies of mid-nineteenth-century American BMI values have used data created by military academies and penitentiaries. This paper uses an alternative data set, constructed from legislative documents in which the heights and weights of New York State legislators were recorded. The results reveal that middle- to upper-middle class Americans maintained BMI values closer to the modern standard than did students and prisoners. The average BMI value among this group was 24 and their height-weight combinations did not greatly diverge from historical mortality risk optima. Copyright 2009 Elsevier B.V. All rights reserved.

  8. Toward an organ based dose prescription method for the improved accuracy of murine dose in orthovoltage x-ray irradiators

    PubMed Central

    Belley, Matthew D.; Wang, Chu; Nguyen, Giao; Gunasingha, Rathnayaka; Chao, Nelson J.; Chen, Benny J.; Dewhirst, Mark W.; Yoshizumi, Terry T.

    2014-01-01

    Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 application for tomographic emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution, however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs. PMID:24593746

  9. Evaluating the Regional Impact of Aircraft Emissions on Climate

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Wuebbles, D. J.; Khodayari, A.

    2017-12-01

Unlike other transportation sectors, where pollutant emissions usually occur only near the Earth's surface, aviation emissions occur primarily at altitudes of 8-12 km, impacting the upper troposphere and lower stratosphere (UTLS). At these altitudes, the pollutants can contribute significantly to greenhouse gas (GHG) concentrations and to the formation of secondary aerosols, which can affect climate change. This study examines the regional climate forcing resulting from aviation emissions. Most previous studies have focused on aviation effects on climate using globally averaged metric values, which give no information about the spatial variability of the effects. While aviation emissions have significant spatial variability in the sign and magnitude of the response, the strength of regional effects is hidden by the global averaging of climate change effects. In this study, the chemistry-climate Community Atmosphere Model (CAM-chem5) is used to examine the regional climate effects in 4 latitude bands (90oS-28oS, 28oS-28oN, 28oN-60oN, 60oN-90oN) and 3 regions (contiguous United States, Europe, and East Asia). The most regionally important aviation emissions are short-lived species: black carbon (BC) and sulfates emitted directly from aircraft, and short-lived O3 induced indirectly by NOx emissions. The regionality of these short-lived impacts is explored and compared to the globally averaged effects. The results indicate that BC and sulfates have more regionality than O3. The radiative forcings for short-lived agents over the United States, Europe, and East Asia are around 2-4 times their corresponding global values. The results also suggest that climate forcings will be most underestimated over the United States when globally averaged values are used without considering regional heterogeneity.

  10. Toward an organ based dose prescription method for the improved accuracy of murine dose in orthovoltage x-ray irradiators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belley, Matthew D.; Wang, Chu; Nguyen, Giao

    2014-03-15

Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 application for tomographic emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution, however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs.

  11. Prevalence and characteristics of atherosclerosis and peripheral arterial disease in a Chinese population of Inner Mongolia.

    PubMed

    Wang, Y; Li, J; Zhao, D; Wei, Y; Hou, L; Hu, D; Xu, Y

    2011-01-01

To investigate the relationships among brachial-ankle pulse wave velocity (BaPWV), radial augmentation index (radial AI), ankle-brachial index (ABI), and carotid intima-media thickness (carotid IMT), and to study the prevalence and characteristics of atherosclerosis and peripheral arterial disease in a Chinese population of Inner Mongolia. Participants were recruited from Inner Mongolia in China through cluster multistage random sampling. BaPWV, radial AI, ABI, and carotid IMT values were measured in each subject. A total of 1,236 participants from the natural population of Inner Mongolia were included in this study. The average ABI value was 1.082 ± 0.093. The average IMT values of the common carotid, internal carotid, and carotid artery bifurcation were 0.70 ± 0.21, 0.77 ± 0.24, and 0.78 ± 0.25 mm, respectively. The average BaPWV was 1450.5 ± 301.5 cm/s, and the average radial AI was 78.9 ± 16.8%. BaPWV, radial AI, and carotid IMT values were significantly positively correlated with age. BaPWV values were significantly positively correlated with radial AI, and both BaPWV and radial AI were positively correlated with the IMT values of the common carotid, internal carotid, and carotid artery bifurcation. A U-shaped relationship was observed: radial AI values first decreased and then increased as ABI values increased. The data suggest that BaPWV, radial AI, and carotid IMT values are positively correlated with each other, and that radial AI is related to ABI in a U-shaped curve in a Chinese population of Inner Mongolia.

  12. The Effort to Reduce a Muscle Fatigue Through Gymnastics Relaxation and Ergonomic Approach for Computer Users in Central Building State University of Medan

    NASA Astrophysics Data System (ADS)

    Gultom, Syamsul; Darma Sitepu, Indra; Hasibuan, Nurman

    2018-03-01

Fatigue due to long and continuous computer use can lead to problems of dominant fatigue associated with decreased performance and work motivation. Specific targets achieved in the first phase of this research were: (1) identifying complaints of workers who use computers, using the Bourdon Wiersma test kit; (2) finding the right relaxation and work-posture design as a solution to reduce muscle fatigue in computer-based workers. This study used the research-and-development method, which aims to produce new products or refine existing ones. The final products are a back-holder prototype, a monitor filter, and a relaxation exercise routine, together with a manual on how to perform it while in front of the computer, to lower the fatigue level of computer users in Unimed's Administration Center. In the first phase, observations and interviews were conducted and the fatigue level of employees who use computers at Unimed's Administration Center was identified using the Bourdon Wiersma test, with the following results: (1) the average speed of respondents in BAUK, BAAK, and BAPSI after working, with an interpreted speed score of 8.4, WS 13, was in a good-enough category; (2) the average accuracy of respondents after working, with an interpreted accuracy score of 5.5, WS 8, was in the doubtful category, showing that computer users at the Unimed Administration Center experienced significant tiredness; (3) the consistency of the fatigue measurements of computer users, likewise with an interpreted score of 5.5, WS 8, was in the doubtful category, meaning that computer users at the Unimed Administration Center suffered extreme fatigue.
In phase II, based on the results of the first phase, the researchers offer solutions: the back-holder prototype, the monitor filter, and a properly designed relaxation exercise to reduce the fatigue level. Furthermore, to maximize the benefit of the exercise itself, a manual will be given to employees who regularly work in front of computers at Unimed's Administration Center.

  13. 40 CFR 86.1862-04 - Maintenance of records and submittal of information relevant to compliance with fleet average NOX...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Control of Air Pollution From New and In-Use Light-Duty Vehicles, Light-Duty Trucks, and Complete Otto... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... NOX value achieved; and (iv) All values used in calculating the fleet average NOX value achieved. (2...

  14. 40 CFR 86.1862-04 - Maintenance of records and submittal of information relevant to compliance with fleet average NOX...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Control of Air Pollution From New and In-Use Light-Duty Vehicles, Light-Duty Trucks, and Complete Otto... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... NOX value achieved; and (iv) All values used in calculating the fleet average NOX value achieved. (2...

  15. 40 CFR 86.1862-04 - Maintenance of records and submittal of information relevant to compliance with fleet average NOX...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Control of Air Pollution From New and In-Use Light-Duty Vehicles, Light-Duty Trucks, and Complete Otto... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... NOX value achieved; and (iv) All values used in calculating the fleet average NOX value achieved. (2...

  16. 40 CFR 86.1862-04 - Maintenance of records and submittal of information relevant to compliance with fleet average NOX...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Control of Air Pollution From New and In-Use Light-Duty Vehicles, Light-Duty Trucks, and Complete Otto... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... NOX value achieved; and (iv) All values used in calculating the fleet average NOX value achieved. (2...

  17. Low-complexity peak-to-average power ratio reduction scheme for flip-orthogonal frequency division multiplexing visible light communication system based on μ-law mapping

    NASA Astrophysics Data System (ADS)

    Wang, Jianping; Zhang, Peiran; Lu, Huimin; Feng, LiFang

    2017-06-01

An orthogonal frequency division multiplexing (OFDM) technique called flipped OFDM (flip-OFDM) is well suited to visible light communication systems, which require the transmitted signal to be real and positive. Flip-OFDM uses two consecutive OFDM subframes to transmit the positive and negative parts of the signal. However, the peak-to-average power ratio (PAPR) of flip-OFDM is increased tremendously by the low total average power that arises from the many zero values in both the positive and flipped frames. We first analyze the performance of flip-OFDM and compare it with conventional DC-biased OFDM (DCO-OFDM); we then propose a flip-OFDM scheme combined with μ-law mapping to reduce the high PAPR. The simulation results show that the PAPR of the system is reduced by about 17.2 and 5.9 dB compared with the normal flip-OFDM and DCO-OFDM signals, respectively.
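
Both the mechanism and the fix can be seen in a few lines: a frame with many zeros has low average power and therefore high PAPR, and μ-law companding boosts small amplitudes while preserving the peak, which pulls the PAPR down. The frame below is a crude surrogate for a flip-OFDM frame (half of the slots zero), not a simulation of the proposed scheme:

```python
import math
import random

def papr_db(x):
    """Peak-to-average power ratio of a discrete signal, in dB."""
    peak = max(v * v for v in x)
    avg = sum(v * v for v in x) / len(x)
    return 10.0 * math.log10(peak / avg)

def mu_law_compress(x, mu=255.0):
    """mu-law companding of a non-negative signal (peak amplitude preserved)."""
    peak = max(x)
    return [peak * math.log(1.0 + mu * v / peak) / math.log(1.0 + mu)
            for v in x]

rng = random.Random(7)
# Surrogate flip-OFDM frame: half the slots are zero (the flipped-frame
# structure), the rest are half-normal magnitudes; illustrative only.
frame = [abs(rng.gauss(0.0, 1.0)) if i % 2 == 0 else 0.0 for i in range(4096)]

before = papr_db(frame)
after = papr_db(mu_law_compress(frame))
```

The dB figures from such a toy signal will not match the 17.2 dB reported above, which comes from the full flip-OFDM simulation, but the direction of the effect is the same: companding raises the average power relative to the unchanged peak.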

  18. Assessment of radionuclides in the soil of residential areas of the Chittagong metropolitan city, Bangladesh and evaluation of associated radiological risk

    PubMed Central

    Rashed-Nizam, Quazi Muhammad; Rahman, Md. Mashiur; Kamal, Masud; Chowdhury, Mantazul Islam

    2015-01-01

Soil samples from the three residential hubs of Chittagong city, Bangladesh were analyzed using gamma spectrometry to estimate radiation hazard due to natural radioactive sources and the anthropogenic nuclide 137Cs. The activity concentration of 226Ra was found to be in the range 11–25 Bq.kg−1, 232Th in the range 38–59 Bq.kg−1 and 40K in the range 246–414 Bq.kg−1. These results were used to calculate radiological hazard parameters including the Excess Lifetime Cancer Risk (ELCR). The estimated outdoor gamma exposure rates were 40.6–63.8 nGy.h−1. The radiation hazard index (radium equivalent activity) ranged from 90 to 140 Bq.kg−1. The average value of the ELCR was found to be 0.21 × 10−3, which is lower than the world average. Sporadic fallout of 137Cs was observed with an average value of 2.0 Bq.kg−1. PMID:25237039
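
The two derived quantities above follow from standard weighted sums of the measured activity concentrations. A sketch using the commonly used radium-equivalent weights (1.43 for 232Th, 0.077 for 40K) and the UNSCEAR absorbed-dose-rate coefficients; these are the standard formulas, though the abstract does not state which coefficients the authors used:

```python
def radium_equivalent(a_ra, a_th, a_k):
    """Radium equivalent activity (Bq/kg) from 226Ra, 232Th, 40K activities."""
    return a_ra + 1.43 * a_th + 0.077 * a_k

def absorbed_dose_rate(a_ra, a_th, a_k):
    """Outdoor absorbed gamma dose rate in air (nGy/h), UNSCEAR coefficients."""
    return 0.462 * a_ra + 0.604 * a_th + 0.0417 * a_k

# Plugging in the extremes of the reported activity ranges (226Ra 11-25,
# 232Th 38-59, 40K 246-414 Bq/kg) brackets the values quoted in the abstract:
dose_low = absorbed_dose_rate(11, 38, 246)
dose_high = absorbed_dose_rate(25, 59, 414)
raeq_high = radium_equivalent(25, 59, 414)
```

dose_low and dose_high come out near 38 and 64 nGy/h, consistent with the reported 40.6–63.8 nGy/h (sample-wise extremes need not combine the per-nuclide extremes exactly), and raeq_high sits near the reported 140 Bq/kg upper value.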

  19. Calculated SAR distributions in a human voxel phantom due to the reflection of electromagnetic fields from a ground plane between 65 MHz and 2 GHz

    NASA Astrophysics Data System (ADS)

    Findlay, R. P.; Dimbylow, P. J.

    2008-05-01

    If an electromagnetic field is incident normally onto a perfectly conducting ground plane, the field is reflected back into the domain. This produces a standing wave above the ground plane. If a person is present within the domain, absorption of the field in the body may cause problems regarding compliance with electromagnetic guidelines. To investigate this, the whole-body averaged specific energy absorption rate (SAR), localised SAR and ankle currents in the voxel model NORMAN have been calculated for a variety of these exposures under grounded conditions. The results were normalised to the spatially averaged field, a technique used to determine a mean value for comparison with guidelines when the field varies along the height of the body. Additionally, the external field values required to produce basic restrictions for whole-body averaged SAR have been calculated. It was found that in all configurations studied, the ICNIRP reference levels and IEEE MPEs provided a conservative estimate of these restrictions.

  20. Contouring Variability of the Penile Bulb on CT Images: Quantitative Assessment Using a Generalized Concordance Index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carillo, Viviana; Cozzarini, Cesare; Perna, Lucia

    2012-11-01

    Purpose: Within a multicenter study (DUE-01) focused on the search for predictors of erectile dysfunction and urinary toxicity after radiotherapy for prostate cancer, a dummy run exercise on penile bulb (PB) contouring on computed tomography (CT) images was carried out. The aim of this study was to quantitatively assess interobserver contouring variability by application of the generalized DICE index. Methods and Materials: Fifteen physicians from different institutes drew the PB on CT images of 10 patients. The spread of DICE values was used to objectively select those observers who significantly disagreed with the others. The analyses were performed with a dedicated module in the VODCA software package. Results: DICE values were found to change significantly among observers and patients. The mean DICE value was 0.67, ranging between 0.43 and 0.80. The statistics of the DICE coefficients identified 4 of 15 observers who systematically showed a value below the average (p value range, 0.013-0.059): mean DICE values were 0.62 for the 4 'bad' observers compared with 0.69 for the 11 'good' observers. For all bad observers, the main cause of the disagreement was identified. Average DICE values were significantly worse than the average in 2 of 10 patients (0.60 vs. 0.70, p < 0.05) because of the limited visibility of the PB. Excluding the bad observers and the 'bad' patients, the mean DICE value increased from 0.67 to 0.70; interobserver variability, expressed in terms of the standard deviation of the DICE spread, was also reduced. Conclusions: The obtained DICE values of around 0.7 show an acceptable agreement, considering the small dimension of the PB. Additional strategies to improve this agreement are under consideration and include an additional tutorial for the so-called bad observers with a recontouring procedure, or the recontouring of the PB by a single observer for all patients included in the DUE-01 study.
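As a rough illustration of the overlap metric used in this study, the pairwise Dice coefficient on rasterized contours (voxel index sets) can be sketched as follows; averaging over all observer pairs stands in here for the generalized concordance index, and the contour data are hypothetical:

```python
from itertools import combinations

def dice(a, b):
    """Dice similarity coefficient between two voxel sets: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty contours agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

def mean_pairwise_dice(contours):
    """Average Dice over all observer pairs (a simple stand-in for the
    generalized concordance index used in the study)."""
    pairs = list(combinations(contours, 2))
    return sum(dice(a, b) for a, b in pairs) / len(pairs)

# three hypothetical observers' contours as sets of voxel indices
obs = [
    {(0, 0), (0, 1), (1, 0), (1, 1)},
    {(0, 0), (0, 1), (1, 0)},
    {(0, 1), (1, 0), (1, 1), (2, 1)},
]
gdi = mean_pairwise_dice(obs)
```

A low average for one observer against all others is what flags a "bad" observer in the analysis described above.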

  1. SU-E-T-548: How To Decrease Spine Dose In Patients Who Underwent Stereotactic Spine Radiosurgery?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acar, H; Altinok, A; Kucukmorkoc, E

    2014-06-01

    Purpose: Stereotactic radiosurgery for spine metastases involves irradiation using a single high-dose fraction. The purpose of this study was to dosimetrically compare stereotactic spine radiosurgery (SRS) plans using a relatively new volumetric modulated arc therapy (VMAT) technique against fixed-field intensity-modulated radiotherapy (IMRT). Plans were evaluated for target conformity and spinal cord sparing. Methods: Fifteen previously treated patients were replanned using the Eclipse 10.1 TPS AAA calculation algorithm. IMRT plans with 7 fields were generated. The arc plans used 2 full-arc configurations. Arc and IMRT plans were normalized and prescribed to deliver 16.0 Gy in a single fraction to 90% of the planning target volume (PTV). PTVs consisted of the vertebral body expanded by 3 mm, excluding the PRV-cord, where the cord was expanded by 2 mm. RTOG 0631 recommendations were applied for treatment planning. Partial spinal cord volume was defined as 5 mm above and below the radiosurgery target volume. Plans were compared for conformity and gradient index as well as spinal cord sparing. Results: The conformity index values of the fifteen patients for the two treatment planning techniques are shown in Table 1. Conformity index values for the 2 full-arc plans (average CI = 0.84) were higher than those of the IMRT plans (average CI = 0.79). The gradient index values are shown in Table 2. Gradient index values for the 2 full-arc plans (average GI = 3.58) were higher than those of the IMRT plans (average GI = 2.82). The spinal cord doses are shown in Table 3. D0.35cc, D0.03cc and partial spinal cord D10% values in the 2 full-arc plans (average D0.35cc = 819.3 cGy, D0.03cc = 965.4 cGy, 10% partial spinal = 718.1 cGy) were lower than those of the IMRT plans (average D0.35cc = 877.4 cGy, D0.03cc = 1071.4 cGy, 10% partial spinal = 805.1 cGy).
Conclusions: The two-arc VMAT technique is superior to the 7-field IMRT technique in terms of spinal cord sparing as well as conformity and gradient indexes.

  2. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
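A minimal sketch of this style of quaternion averaging (not the Note's exact derivation): the scalar-weighted average can be taken as the dominant eigenvector of M = Σᵢ wᵢ qᵢqᵢᵀ, which handles the q/−q sign ambiguity automatically. The power iteration and the example quaternions here are illustrative choices, not the paper's algorithm:

```python
def average_quaternion(quats, weights=None):
    """Scalar-weighted quaternion average: the dominant eigenvector of
    M = sum_i w_i * q_i q_i^T. Sign-insensitive, since (-q)(-q)^T = q q^T."""
    n = len(quats)
    w = weights or [1.0 / n] * n
    # accumulate the 4x4 symmetric matrix M
    M = [[sum(wi * q[r] * q[c] for wi, q in zip(w, quats)) for c in range(4)]
         for r in range(4)]
    v = [0.5, 0.5, 0.5, 0.5]  # power iteration converges to the dominant eigenvector
    for _ in range(200):
        v = [sum(M[r][c] * v[c] for c in range(4)) for r in range(4)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

# two measurements of the same attitude, reported with opposite signs
avg = average_quaternion([(0.6, 0.8, 0.0, 0.0), (-0.6, -0.8, 0.0, 0.0)])
```

For widely spread quaternion sets a proper symmetric eigensolver should replace the power iteration; it is used here only to keep the sketch dependency-free.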

  3. A Web-based tool for UV irradiance data: predictions for European and Southeast Asian sites.

    PubMed

    Kift, Richard; Webb, Ann R; Page, John; Rimmer, John; Janjai, Serm

    2006-01-01

    There is a range of UV models available, but one needs significant pre-existing knowledge and experience to be able to use them. In this article a comparatively simple Web-based model developed for the SoDa (Integration and Exploitation of Networked Solar Radiation Databases for Environment Monitoring) project is presented. This is a clear-sky model with modifications for cloud effects. To determine whether the model produces realistic UV data, the output is compared with one-year sets of hourly measurements at sites in the United Kingdom and Thailand. The accuracy of the output depends on the input, but reasonable results were obtained with the use of the default database inputs and improved when pyranometer rather than modeled data provided the global radiation input needed to estimate the UV. The average modeled values of UV for the UK site were found to be within 10% of measurements. For the tropical sites in Thailand the average modeled values were within 11–20% of measurements for the four sites with the use of the default SoDa database values. These results improved when pyranometer data and TOMS ozone data from 2002 replaced the standard SoDa database values, reducing the error range for all four sites to less than 15%.

  4. The analysis of corneal asphericity (Q value) and its related factors of 1,683 Chinese eyes older than 30 years.

    PubMed

    Xiong, Ying; Li, Jing; Wang, Ningli; Liu, Xue; Wang, Zhao; Tsai, Frank F; Wan, Xiuhua

    2017-01-01

    To determine the corneal Q value and its related factors in Chinese subjects older than 30 years. Cross-sectional study. 1,683 participants (1,683 eyes) from the Handan Eye Study were involved, including 955 females and 728 males with an average age of 53.64 years (range, 30 to 107 years). The corneal Q values of the anterior and posterior surfaces were measured at 3.0, 5.0 and 7.0 mm aperture diameters using the Bausch & Lomb Orbscan IIz (software version 3.12). Age, gender and refractive power were recorded. The average Q values of the anterior surface at 3.0, 5.0 and 7.0 mm aperture diameters were -0.28±0.18, -0.28±0.18, and -0.29±0.18, respectively. The average Q value of the anterior surface at the 5.0 mm aperture diameter was negatively correlated with age (B = -0.003, p<0.01) and refractive power (B = -0.013, p = 0.016). The average Q values of the posterior surface at 3.0, 5.0, and 7.0 mm were -0.26±0.216, -0.26±0.214, and -0.26±0.215, respectively. The average Q value of the posterior surface at the 5.0 mm aperture diameter was positively correlated with age (B = 0.002, p = 0.036) and refractive power (B = 0.016, p = 0.043). The corneal Q value of the elderly Chinese subjects differs from that of previously reported European and American subjects, and the Q value appears to be correlated with age and refractive power.

  5. Modeling the Effect of Summertime Heating on Urban Runoff Temperature

    NASA Astrophysics Data System (ADS)

    Thompson, A. M.; Gemechu, A. L.; Norman, J. M.; Roa-Espinosa, A.

    2007-12-01

    Urban impervious surfaces absorb and store thermal energy, particularly during warm summer months. During a rainfall/runoff event, thermal energy is transferred from the impervious surface to the runoff, causing it to become warmer. As this higher-temperature runoff enters receiving waters, it can be harmful to coldwater habitat. A simple model has been developed for the net energy flux at the impervious surfaces of urban areas to account for the heat transferred to runoff. Runoff temperature is determined as a function of the physical characteristics of the impervious areas, the weather, and the heat transfer between the moving film of runoff and the heated impervious surfaces that commonly exist in urban areas. Runoff from pervious surfaces was predicted using the Green-Ampt Mein-Larson infiltration excess method. Theoretical results were compared to experimental results obtained from a plot-scale field study conducted at the University of Wisconsin's West Madison Agricultural Research Station. Surface temperatures and runoff temperatures from asphalt and sod plots were measured throughout 15 rainfall simulations under various climatic conditions during the summers of 2004 and 2005. Average asphalt runoff temperatures ranged from 23.2°C to 37.1°C. Predicted asphalt runoff temperatures were in close agreement with measured values for most of the simulations (average RMSE = 4.0°C). Average pervious runoff temperatures ranged from 19.7°C to 29.9°C and were closely approximated by the rainfall temperature (RMSE = 2.8°C). Predicted combined asphalt and sod runoff temperatures using a flow-weighted average were in close agreement with observed values (average RMSE = 3.5°C).
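The flow-weighted averaging step mentioned above reduces to a discharge-weighted mean temperature. A minimal sketch, with made-up flows and temperatures, assuming equal volumetric heat capacity for both runoff streams:

```python
def flow_weighted_temperature(flows, temps):
    """Flow-weighted average temperature of combined runoff streams:
    T_mix = sum(Q_i * T_i) / sum(Q_i). Assumes equal volumetric heat capacity."""
    total = sum(flows)
    return sum(q * t for q, t in zip(flows, temps)) / total

# e.g. combining hypothetical asphalt runoff (3 L/s at 35 °C)
# with sod runoff (1 L/s at 24 °C)
t_mix = flow_weighted_temperature([3.0, 1.0], [35.0, 24.0])
```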

  6. 7 CFR 760.641 - Adjustments made to NAMP to reflect loss of quality.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the FSA State committee. The adjustment factor will be based on the average actual market price... market price of a crop due to a reduction in the intrinsic characteristics of the production resulting... crop for which the value is reduced due to excess moisture resulting from a disaster related condition...

  7. Phase 1 of the near term hybrid passenger vehicle development program. Appendix D: Sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Traversi, M.

    1979-01-01

    Data are presented on the sensitivity of: (1) mission analysis results to the boundary values given for number of passenger cars and average annual vehicle miles traveled per car; (2) vehicle characteristics and performance to specifications; and (3) tradeoff study results to the expected parameters.

  8. Appropriateness of selecting different averaging times for modelling chronic and acute exposure to environmental odours

    NASA Astrophysics Data System (ADS)

    Drew, G. H.; Smith, R.; Gerard, V.; Burge, C.; Lowe, M.; Kinnersley, R.; Sneath, R.; Longhurst, P. J.

    Odour emissions are episodic, characterised by periods of high emission rates interspersed with periods of low emissions. It is frequently the short-term, high-concentration peaks that result in annoyance in the surrounding population. Dispersion modelling is accepted as a useful tool for odour impact assessment, and two approaches can be adopted. The first approach, modelling the hourly average concentration, can underestimate the peaks in total odour concentration that result in annoyance and complaints. The second modelling approach involves the use of short averaging times. This study assesses the appropriateness of using different averaging times to model the dispersion of odour from a landfill site. We also examine the perception of odour in the community in conjunction with the modelled odour dispersal, by using community monitors to record incidents of odour. The results show that with the shorter averaging times, the modelled pattern of dispersal reflects the pattern of observed odour incidents recorded in the community monitoring database, with the modelled odour dispersing further in a north-easterly direction. Therefore, the current regulatory method of dispersion modelling, using hourly averaging times, is less successful at capturing peak concentrations, and does not capture the pattern of odour emission as indicated by the community monitoring database. The use of short averaging times is therefore of greater value in predicting the likely nuisance impact of an odour source and in framing appropriate regulatory controls.

  9. Evaluation of Hologic Aptima HIV-1 Quant Dx Assay on the Panther System on HIV Subtypes

    PubMed Central

    Hack, Holly R.; Nair, Sangeetha V.; Worlock, Andrew; Malia, Jennifer A.; Peel, Sheila A.; Jagodzinski, Linda L.

    2016-01-01

    Quantitation of the HIV-1 viral load in plasma is the current standard of care for clinical monitoring of HIV-infected individuals undergoing antiretroviral therapy. This study evaluated the analytical and clinical performances of the Aptima HIV-1 Quant Dx assay (Hologic, San Diego, CA) for monitoring viral load by using 277 well-characterized subtype samples, including 171 cultured virus isolates and 106 plasma samples from 35 countries, representing all major HIV subtypes, recombinants, and circulating recombinant forms (CRFs) currently in circulation worldwide. Linearity of the Aptima assay was tested on each of 6 major HIV-1 subtypes (A, B, C, D, CRF01_AE, and CRF02_AG) and demonstrated an R2 value of ≥0.996. The performance of the Aptima assay was also compared to those of the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 v.2 (CAP/CTM) and Abbott m2000 RealTime HIV-1 (RealTime) assays on all subtype samples. The Aptima assay values averaged 0.21 log higher than the CAP/CTM values and 0.30 log higher than the RealTime values, and the values were >0.4 log higher than CAP/CTM values for subtypes F and G and than RealTime values for subtypes C, F, and G and CRF02_AG. Two samples demonstrated results with >1-log differences from RealTime results. When the data were adjusted by the average difference, 94.9% and 87.0% of Aptima results fell within 0.5 log of the CAP/CTM and RealTime results, respectively. The linearity and accuracy of the Aptima assay in correctly quantitating all major HIV-1 subtypes, coupled with the completely automated format and high throughput of the Panther system, make this system well suited for reliable measurement of viral load in the clinical laboratory. PMID:27510829

  10. Evaluation of Hologic Aptima HIV-1 Quant Dx Assay on the Panther System on HIV Subtypes.

    PubMed

    Manak, Mark M; Hack, Holly R; Nair, Sangeetha V; Worlock, Andrew; Malia, Jennifer A; Peel, Sheila A; Jagodzinski, Linda L

    2016-10-01

    Quantitation of the HIV-1 viral load in plasma is the current standard of care for clinical monitoring of HIV-infected individuals undergoing antiretroviral therapy. This study evaluated the analytical and clinical performances of the Aptima HIV-1 Quant Dx assay (Hologic, San Diego, CA) for monitoring viral load by using 277 well-characterized subtype samples, including 171 cultured virus isolates and 106 plasma samples from 35 countries, representing all major HIV subtypes, recombinants, and circulating recombinant forms (CRFs) currently in circulation worldwide. Linearity of the Aptima assay was tested on each of 6 major HIV-1 subtypes (A, B, C, D, CRF01_AE, and CRF02_AG) and demonstrated an R2 value of ≥0.996. The performance of the Aptima assay was also compared to those of the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 v.2 (CAP/CTM) and Abbott m2000 RealTime HIV-1 (RealTime) assays on all subtype samples. The Aptima assay values averaged 0.21 log higher than the CAP/CTM values and 0.30 log higher than the RealTime values, and the values were >0.4 log higher than CAP/CTM values for subtypes F and G and than RealTime values for subtypes C, F, and G and CRF02_AG. Two samples demonstrated results with >1-log differences from RealTime results. When the data were adjusted by the average difference, 94.9% and 87.0% of Aptima results fell within 0.5 log of the CAP/CTM and RealTime results, respectively. The linearity and accuracy of the Aptima assay in correctly quantitating all major HIV-1 subtypes, coupled with the completely automated format and high throughput of the Panther system, make this system well suited for reliable measurement of viral load in the clinical laboratory. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
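The "adjusted by the average difference" comparison described above amounts to removing the mean log10 bias between assays and counting pairs within 0.5 log of the reference. A minimal sketch with hypothetical paired viral loads (copies/mL), not data from this study:

```python
import math

def within_half_log_fraction(test_vals, ref_vals):
    """Fraction of paired results within 0.5 log10 of the reference assay,
    after removing the mean log10 bias between the two assays."""
    diffs = [math.log10(t) - math.log10(r) for t, r in zip(test_vals, ref_vals)]
    bias = sum(diffs) / len(diffs)  # average inter-assay difference in log10 units
    return sum(1 for d in diffs if abs(d - bias) <= 0.5) / len(diffs)

# hypothetical paired viral loads showing a consistent ~0.3-log positive bias
aptima_like = [2000, 40000, 800000]
reference   = [1000, 20000, 400000]
frac = within_half_log_fraction(aptima_like, reference)
```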

  11. The composition, heating value and renewable share of the energy content of mixed municipal solid waste in Finland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horttanainen, M., E-mail: mika.horttanainen@lut.fi; Teirasvuo, N.; Kapustina, V.

    Highlights: • New experimental data on mixed MSW properties in a Finnish case region. • The share of renewable energy of mixed MSW. • The results were compared with earlier international studies. • The average share of renewable energy was 30% and the average LHVar 19 MJ/kg. • Well-operating source separation decreases the renewable energy content of MSW. - Abstract: For the estimation of greenhouse gas emissions from waste incineration it is essential to know the renewable share of the energy content of the combusted waste. Composition and heating value information is generally available, but the renewable energy share or the heating values of different fractions of waste have rarely been determined. In this study, data from Finnish studies concerning the composition and energy content of mixed MSW were collected, new experimental data on the compositions, heating values and renewable share of energy were presented, and the results were compared with estimations drawn from earlier international studies. In the town of Lappeenranta in south-eastern Finland, the share of renewable energy ranged between 25% and 34% in the energy content tests implemented for two sample trucks. The heating values of the waste and of the plastic waste fractions were high in the samples compared with earlier studies in Finland. These high values were caused by good source separation and led to a low share of renewable energy content in the waste. The results showed that in mixed municipal solid waste the renewable share of the energy content can be significantly lower than the general assumptions (50–60%) when the source separation of organic waste, paper and cardboard is carried out successfully. However, the number of samples was small for drawing extensive conclusions concerning the heating values and renewable share of energy, and additional research is needed for this purpose.
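The renewable share of the energy content discussed above is an energy-weighted fraction, not a mass fraction. A minimal sketch; the component names, mass fractions and lower heating values below are purely illustrative assumptions, not the study's data:

```python
def renewable_energy_share(fractions, lhv, renewable):
    """Renewable share of the energy content of a waste mix:
    sum of (mass fraction * LHV) over renewable components, divided by
    the same sum over all components."""
    total = sum(fractions[k] * lhv[k] for k in fractions)
    ren = sum(fractions[k] * lhv[k] for k in fractions if k in renewable)
    return ren / total

frac = {"bio": 0.30, "paper": 0.20, "plastic": 0.25, "other": 0.25}  # mass fractions
lhv = {"bio": 4.0, "paper": 13.0, "plastic": 35.0, "other": 10.0}    # MJ/kg, illustrative
share = renewable_energy_share(frac, lhv, {"bio", "paper"})
```

Because plastics carry a disproportionate share of the energy, good source separation of organics and paper drives this energy-weighted share well below the mass-based intuition, as the abstract reports.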

  12. Ambient air metallic pollutant study at HAF areas during 2013-2014

    NASA Astrophysics Data System (ADS)

    Fang, Guor-Cheng; Kuo, Yu-Chen; Zhuang, Yuan-Jie

    2015-05-01

    This study characterized diurnal variations of total suspended particulate (TSP) concentrations, dry deposition fluxes and dry deposition velocities of metallic elements at the Taichung Harbor (Harbor), Gong Ming Junior High School (Airport) and Sha lu Farmland (Farmland) sampling sites in central Taiwan between August 2013 and July 2014. The results indicated that: 1) ambient air particulate concentrations and dry depositions ranked as Harbor > Farmland > Airport during the day-time sampling period, whereas dry deposition velocities ranked as Airport > Harbor > Farmland. 2) Ambient air particulate concentrations and dry depositions ranked as Airport > Harbor > Farmland during the night-time sampling period, whereas dry deposition velocities ranked as Farmland > Harbor > Airport. 3) The metallic element Zn had the highest average concentrations at the Airport, Harbor and Farmland sites among all the metallic elements during the day-time sampling period. 4) There were significant differences for the metallic elements Cr, Cu, Zn and Pb in dry depositions at these three characteristic sampling sites (HAF) for the night-time sampling period. The only exception was Cd, for which there were no significant differences between the Airport and Farmland sampling sites during the night-time sampling period. 5) The highest average values for Cu in TSP among the three characteristic sampling sites occurred during the fall and winter seasons, whereas the highest average values in dry deposition occurred during the spring and summer seasons. 6) The highest average values for Cd in TSP among the three characteristic sampling sites occurred during the spring and summer seasons, whereas the highest average values in dry deposition occurred during fall and winter.

  13. Exploring the uncertainty in attributing sediment contributions in fingerprinting studies due to uncertainty in determining element concentrations in source areas.

    NASA Astrophysics Data System (ADS)

    Gomez, Jose Alfonso; Owens, Phillip N.; Koiter, Alex J.; Lobb, David

    2016-04-01

    One of the major sources of uncertainty in attributing sediment sources in fingerprinting studies is the uncertainty in determining the concentrations of the elements used in the mixing model due to the variability of the concentrations of these elements in the source materials (e.g., Kraushaar et al., 2015). The uncertainty in determining the "true" concentration of a given element in each one of the source areas depends on several factors, among them the spatial variability of that element, the sampling procedure and sampling density. Researchers have limited control over these factors, and usually sampling density tends to be sparse, limited by time and the resources available. Monte Carlo analysis has been used regularly in fingerprinting studies to explore the probable solutions within the measured variability of the elements in the source areas, providing an appraisal of the probability of the different solutions (e.g., Collins et al., 2012). This problem can be considered analogous to the propagation of uncertainty in hydrologic models due to uncertainty in the determination of the values of the model parameters, and there are many examples of Monte Carlo analysis of this uncertainty (e.g., Freeze, 1980; Gómez et al., 2001). Some of these model analyses rely on the simulation of "virtual" situations that were calibrated from parameter values found in the literature, with the purpose of providing insight about the response of the model to different configurations of input parameters. This approach - evaluating the answer for a "virtual" problem whose solution could be known in advance - might be useful in evaluating the propagation of uncertainty in mixing models in sediment fingerprinting studies. In this communication, we present the preliminary results of an on-going study evaluating the effect of variability of element concentrations in source materials, sampling density, and the number of elements included in the mixing models. 
For this study a virtual catchment was constructed, composed of three sub-catchments, each 500 x 500 m in size. We assumed that there was no selectivity in sediment detachment or transport. A numerical exercise was performed considering these variables: 1) variability of element concentration: three levels with CVs of 20 %, 50 % and 80 %; 2) sampling density: 10, 25 and 50 "samples" per sub-catchment and element; and 3) number of elements included in the mixing model: two (determined) and five (overdetermined). This resulted in a total of 18 (3 x 3 x 2) possible combinations. The five fingerprinting elements considered in the study were C, N, 40K, Al and Pavail, and their average values, taken from the literature, were: sub-catchment 1: 4.0 %, 0.35 %, 0.50 ppm, 5.0 ppm, 1.42 ppm, respectively; sub-catchment 2: 2.0 %, 0.18 %, 0.20 ppm, 10.0 ppm, 0.20 ppm, respectively; and sub-catchment 3: 1.0 %, 0.06 %, 1.0 ppm, 16.0 ppm, 7.8 ppm, respectively. For each sub-catchment, three maps of the spatial distribution of each element were generated using the random generator of Mejia and Rodriguez-Iturbe (1974) as described in Freeze (1980), using the average value and the three different CVs defined above. Each map for each source area and property was generated for a 100 x 100 square grid, each grid cell being 5 m x 5 m. Maps were randomly generated for each property and source area. In doing so, we did not consider the possibility of cross-correlation among properties. Spatial autocorrelation was assumed to be weak. The reason for generating the maps was to create a "virtual" situation where all the element concentration values at each point are known. Simultaneously, we arbitrarily set the percentages of sediment coming from the sub-catchments to 30 %, 10 % and 60 % for sub-catchments 1, 2 and 3, respectively. Using these values, we determined the element concentrations in the sediment.
The exercise consisted of applying different sampling strategies in a virtual environment to determine an average value for each of the different maps of element concentration and each sub-catchment, under different sampling densities: 200 different average values for the "high" sampling density (average of 50 samples); 400 different average values for the "medium" sampling density (average of 25 samples); and 1,000 different average values for the "low" sampling density (average of 10 samples). All these combinations of possible values of element concentrations in the source areas were solved against the sediment concentrations already determined for the "true" solution, using limSolve (Soetaert et al., 2014) in the R language. The sediment source solutions found for the different situations and values were analyzed in order to: 1) evaluate the uncertainty in the sediment source attribution; and 2) explore strategies to detect the most probable solutions that might lead to improved methods for constructing the most robust mixing models. Preliminary results on these points will be presented and discussed in this communication. Key words: sediment, fingerprinting, uncertainty, variability, mixing model. References: Collins, A.L., Zhang, Y., McChesney, D., Walling, D.E., Haley, S.M., Smith, P. 2012. Sediment source tracing in a lowland agricultural catchment in southern England using a modified procedure combining statistical analysis and numerical modelling. Science of the Total Environment 414: 301-317. Freeze, R.A. 1980. A stochastic-conceptual analysis of rainfall-runoff processes on a hillslope. Water Resources Research 16: 391-408.
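In the determined case (as many tracers plus the sum-to-one constraint as sources), the noise-free version of this mixing model can be inverted exactly. A minimal sketch using the mean C and N concentrations and the 30/10/60 % proportions quoted in the text, with a simple Gaussian elimination standing in for limSolve:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))  # pivot row
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= f * A[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):  # back-substitution
        x[i] = (A[i][3] - sum(A[i][c] * x[c] for c in range(i + 1, 3))) / A[i][i]
    return x

# mean C (%) and N (%) in the three sub-catchments (values from the text)
C = [4.0, 2.0, 1.0]
N = [0.35, 0.18, 0.06]
true_p = [0.3, 0.1, 0.6]                       # imposed source proportions
sed_C = sum(p * c for p, c in zip(true_p, C))  # resulting sediment concentrations
sed_N = sum(p * n for p, n in zip(true_p, N))
# two tracer equations plus the sum-to-one constraint -> exactly determined
p = solve3([C, N, [1.0, 1.0, 1.0]], [sed_C, sed_N, 1.0])
```

The study's Monte Carlo question is what happens to this inversion when the source concentrations are themselves only sample means with CVs of 20-80 %.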

  14. The effects of noise in cardiac diffusion tensor imaging and the benefits of averaging complex data.

    PubMed

    Scott, Andrew D; Nielles-Vallespin, Sonia; Ferreira, Pedro F; McGill, Laura-Ann; Pennell, Dudley J; Firmin, David N

    2016-05-01

    There is growing interest in cardiac diffusion tensor imaging (cDTI), but, unlike other diffusion MRI applications, there has been little investigation of the effects of noise on the parameters typically derived. One method of mitigating noise floor effects when there are multiple image averages, as in cDTI, is to average the complex rather than the magnitude data, but the phase contains contributions from bulk motion, which must be removed first. The effects of noise on the mean diffusivity (MD), fractional anisotropy (FA), helical angle (HA) and absolute secondary eigenvector angle (E2A) were simulated with various diffusion weightings (b values). The effect of averaging complex versus magnitude images was investigated. In vivo cDTI was performed in 10 healthy subjects with b = 500, 1000, 1500 and 2000 s/mm². A technique for removing the motion-induced component of the image phase present in vivo was implemented by subtracting a low-resolution copy of the phase from the original images before averaging the complex images. MD, FA, E2A and the transmural gradient in HA were compared for un-averaged, magnitude- and complex-averaged reconstructions. Simulations demonstrated an over-estimation of FA and MD at low b values and an under-estimation at high b values. The transition is relatively signal-to-noise ratio (SNR) independent and occurs at a higher b value for FA (b = 1000-1250 s/mm²) than MD (b ≈ 250 s/mm²). E2A is under-estimated at low and high b values with a transition at b ≈ 1000 s/mm², whereas the bias in HA is comparatively small. The under-estimation of FA and MD at high b values is caused by noise floor effects, which can be mitigated by averaging the complex data. Understanding the parameters of interest and the effects of noise informs the selection of the optimal b values. When complex data are available, they should be used to maximise the benefit from the acquisition of multiple averages. The combination of complex data is also a valuable step towards segmented acquisitions. Copyright © 2016 John Wiley & Sons, Ltd.
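The noise-floor effect described above is easy to reproduce: averaging magnitudes of a zero signal leaves a positive bias (about σ√(π/2) for complex Gaussian noise), whereas averaging the complex samples first lets the noise cancel before the magnitude is taken. A toy sketch with synthetic noise only, no MRI specifics:

```python
import random

random.seed(0)  # deterministic demo

def complex_noise(sigma):
    """One complex sample of zero-mean Gaussian noise."""
    return complex(random.gauss(0.0, sigma), random.gauss(0.0, sigma))

# N repeated "measurements" of a signal that is truly zero, plus complex noise
N, sigma = 10000, 1.0
samples = [complex_noise(sigma) for _ in range(N)]

# magnitude averaging: rectified noise never cancels -> positive "noise floor"
mag_avg = sum(abs(z) for z in samples) / N  # expected ~ sigma * sqrt(pi/2) ≈ 1.25
# complex averaging: noise cancels in the complex plane before taking |.|
avg_mag = abs(sum(samples) / N)             # expected ~ sigma / sqrt(N) ≈ 0.0125
```

The same bias is what inflates the high-b diffusion signal and drives the FA/MD under-estimation reported in the abstract; the in vivo complication is that bulk-motion phase must be stripped before the complex sum is meaningful.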

  15. Tectonic setting of Jurassic basins in Central Mongolia: Insights from the geochemistry of Tsagaan-Ovoo oil shale

    NASA Astrophysics Data System (ADS)

    Erdenetsogt, B. O.; Hong, S. K.; Choi, J.; Odgerel, N.; Lee, I.; Ichinnorov, N.; Tsolmon, G.; Munkhnasan, B.

    2017-12-01

    Tsagaan-Ovoo syncline hosting Lower-Middle Jurassic oil shale is a part of Saikhan-Ovoo the largest Jurassic sedimentary basin in Central Mongolia. It is generally accepted that early Mesozoic basins are foreland basins. In total, 18 oil shale samples were collected from an open-pit mine. The contents of organic carbon, and total nitrogen and their isotopic compositions as well as major element concentrations were analyzed. The average TOC content is 12.4±1.2 %, indicating excellent source rock potential. C/N ratios show an average of 30.0±1.2, suggesting terrestrial OM. The average value of δ15N is +3.9±0.2‰, while that of δ13Corg is -25.7±0.1‰. The isotopic compositions argue for OM derived dominantly from land plant. Moreover, changes in δ15N values of analyzed samples reflect variations in algal OM concentration of oil shale. The lowest δ15N value (+2.5‰) was obtained from base section, representing the highest amount of terrestrial OM, whereas higher δ15N values (up to +5.2‰) are recorded at top section, reflecting increased amount of algal OM. On the other hand, changes in δ15N value may also represent changes in redox state of water column in paleolake. The oil shale at bottom of section with low δ15N value was accumulated under oxic condition, when the delivery of land plant OM was high. With increase in subsidence rate through time, lake was deepened and water column was depleted in oxygen probably due to extensive phytoplankton growth, which results increase in algae derived OM contents as well as bulk δ15N of oil shale. The average value of CAI for Tsagan-Ovoo oil shale is 81.6±1.3, reflecting intensive weathering in the source area. The plotted data on A-CN-K diagram displays that oil shale was sourced mainly from Early Permian granodiorite and diorite, which are widely distributed around Tsagaan-Ovoo syncline. To infer tectonic setting, two multi-dimensional discrimination diagrams were used. 
The results suggest that the tectonic setting of the Tsagaan-Ovoo syncline, in which the studied oil shale was deposited, was a continental rift. This finding contradicts the generally accepted contractional deformation during the early Mesozoic in Mongolia and China. Further detailed study is required to decipher the tectonic settings of the central Mongolian Jurassic basins.
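The weathering index reported above (commonly the chemical index of alteration, CIA) is computed from molar oxide proportions. A minimal sketch, with hypothetical molar values chosen only to illustrate a strongly weathered sample:

```python
def chemical_index_of_alteration(al2o3, cao_star, na2o, k2o):
    """CIA = 100 * Al2O3 / (Al2O3 + CaO* + Na2O + K2O), all in molar
    proportions; CaO* is the CaO held in the silicate fraction only."""
    return 100.0 * al2o3 / (al2o3 + cao_star + na2o + k2o)

# Hypothetical molar proportions for a strongly weathered mudrock
cia = chemical_index_of_alteration(al2o3=0.16, cao_star=0.01,
                                   na2o=0.01, k2o=0.015)
```

Values near 100 indicate intense chemical weathering; fresh granodiorite typically plots near 50 on the same scale.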

  16. Triple oxygen isotopes indicate urbanization affects sources of nitrate in wet and dry atmospheric deposition

    NASA Astrophysics Data System (ADS)

    Nelson, David M.; Tsunogai, Urumu; Ding, Dong; Ohyama, Takuya; Komatsu, Daisuke D.; Nakagawa, Fumiko; Noguchi, Izumi; Yamaguchi, Takashi

    2018-05-01

Atmospheric nitrate deposition resulting from anthropogenic activities negatively affects human and environmental health. Identifying deposited nitrate that is produced locally vs. that originating from long-distance transport would help inform efforts to mitigate such impacts. However, distinguishing the relative transport distances of atmospheric nitrate in urban areas remains a major challenge since it may be produced locally and/or be transported from upwind regions. To address this uncertainty we assessed spatiotemporal variation in monthly weighted-average Δ17O and δ15N values of wet and dry nitrate deposition during one year at urban and rural sites along the western coast of the northern Japanese island of Hokkaido, downwind of the East Asian continent. Δ17O values of nitrate in wet deposition at the urban site mirrored those of wet and dry deposition at the rural site, ranging between approximately +23 and +31‰ with higher values during winter and lower values in summer, which suggests the greater relative importance of oxidation of NO2 by O3 during winter and by OH during summer. In contrast, Δ17O values of nitrate in dry deposition at the urban site were lower (+19 to +25‰) and displayed less distinct seasonal variation. Furthermore, the difference between δ15N values of nitrate in wet and dry deposition was, on average, 3‰ greater at the urban than the rural site, and Δ17O and δ15N values were correlated for both forms of deposition at both sites with the exception of dry deposition at the urban site. These results suggest that, relative to nitrate in wet and dry deposition in rural environments and wet deposition in urban environments, nitrate in dry deposition in urban environments forms from relatively greater oxidation of NO by peroxy radicals and/or oxidation of NO2 by OH.
Given greater concentrations of peroxy radicals and OH in cities, these results imply that dry nitrate deposition results from local NOx emissions more so than wet deposition, which is transported longer distances. These results illustrate the value of stable isotope data for distinguishing the transport distances and reaction pathways of atmospheric nitrate pollution.
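The triple-oxygen anomaly used above is commonly defined through a linear approximation. A minimal sketch, assuming the linear definition with a mass-dependent slope of about 0.52 (the study's exact convention may differ), with hypothetical delta values:

```python
def cap_delta_17o(delta_17o: float, delta_18o: float,
                  beta: float = 0.52) -> float:
    """Triple-oxygen isotope anomaly, linear approximation:
    Delta17O = delta17O - beta * delta18O, with beta ~ 0.52 for
    mass-dependent fractionation. Inputs and output in per mil."""
    return delta_17o - beta * delta_18o

# Nitrate formed via O3 oxidation inherits a large positive anomaly;
# the illustrative delta values below are hypothetical.
anomaly = cap_delta_17o(delta_17o=20.0, delta_18o=10.0)
```

Mass-dependent processes leave the anomaly near zero, so a large positive value traces oxygen atoms transferred from ozone.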

  17. Intuitive theories of information: beliefs about the value of redundancy.

    PubMed

    Soll, J B

    1999-03-01

    In many situations, quantity estimates from multiple experts or diagnostic instruments must be collected and combined. Normatively, and all else equal, one should value information sources that are nonredundant, in the sense that correlation in forecast errors should be minimized. Past research on the preference for redundancy has been inconclusive. While some studies have suggested that people correctly place higher value on uncorrelated inputs when collecting estimates, others have shown that people either ignore correlation or, in some cases, even prefer it. The present experiments show that the preference for redundancy depends on one's intuitive theory of information. The most common intuitive theory identified is the Error Tradeoff Model (ETM), which explicitly distinguishes between measurement error and bias. According to ETM, measurement error can only be averaged out by consulting the same source multiple times (normatively false), and bias can only be averaged out by consulting different sources (normatively true). As a result, ETM leads people to prefer redundant estimates when the ratio of measurement error to bias is relatively high. Other participants favored different theories. Some adopted the normative model, while others were reluctant to mathematically average estimates from different sources in any circumstance. In a post hoc analysis, science majors were more likely than others to subscribe to the normative model. While tentative, this result lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized. Copyright 1999 Academic Press.
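The normative point about redundancy can be checked numerically: averaging two estimates helps most when their errors are uncorrelated. A minimal Monte Carlo sketch with illustrative parameters (not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0
n_trials = 20000

def rmse_of_average(rho):
    """RMSE of the average of two estimates whose errors have unit
    variance and correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    errors = rng.multivariate_normal([0.0, 0.0], cov, size=n_trials)
    combined = (true_value + errors).mean(axis=1)
    return np.sqrt(np.mean((combined - true_value) ** 2))

rmse_independent = rmse_of_average(rho=0.0)  # nonredundant sources
rmse_redundant = rmse_of_average(rho=0.9)    # redundant sources
# Independent errors average out (RMSE ~ 1/sqrt(2)); correlated
# (redundant) errors largely survive the averaging.
```

This is exactly the normative case in which nonredundant sources deserve a higher value: shared bias cannot be averaged away.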

  18. Quantified moving average strategy of crude oil futures market based on fuzzy logic rules and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing

    2017-09-01

The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot tell them the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which fuzzy logic rules are used to determine the strength of the trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use the type of moving average (one of four), the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experimental data. Each experiment was repeated 20 times. The results show that, first, the fuzzy moving average strategy obtains a more stable rate of return than conventional moving average strategies. Second, the holding-amount series is highly sensitive to the price series. Third, simple moving average methods are more efficient. Last, the fuzzy extents of extremely low, high, and very high are the most popular. These results are helpful in investment decisions.
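The underlying (non-fuzzy) signal generation that such strategies build on can be sketched as a moving average crossover. This is a generic illustration, not the paper's fuzzy rule set, and the window lengths are arbitrary:

```python
import numpy as np

def moving_average(prices, window):
    """Trailing simple moving average; first window-1 entries are NaN."""
    out = np.full(len(prices), np.nan)
    for i in range(window - 1, len(prices)):
        out[i] = prices[i - window + 1 : i + 1].mean()
    return out

def crossover_signals(prices, short=5, long=20):
    """+1 = buy (short MA crosses above long MA), -1 = sell, 0 = hold."""
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    above = short_ma > long_ma          # NaN comparisons evaluate False
    signals = np.zeros(len(prices), dtype=int)
    signals[1:][above[1:] & ~above[:-1]] = 1    # upward cross
    signals[1:][~above[1:] & above[:-1]] = -1   # downward cross
    return signals

# Synthetic price path: decline, then recovery that triggers a buy
prices = np.concatenate([np.linspace(100, 90, 30), np.linspace(90, 110, 30)])
signals = crossover_signals(prices)
```

The fuzzy extension described above would then scale the size of each trade rather than emitting all-or-nothing signals.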

  19. Effects of epidemic threshold definition on disease spread statistics

    NASA Astrophysics Data System (ADS)

    Lagorio, C.; Migueles, M. V.; Braunstein, L. A.; López, E.; Macri, P. A.

    2009-03-01

We study the statistical properties of SIR epidemics in random networks, when an epidemic is defined as only those SIR propagations that reach or exceed a minimum size sc. Using percolation theory to calculate the average fractional size of an epidemic, we find that the strength of the spanning link percolation cluster P∞ is an upper bound to the average fractional epidemic size. For small values of sc, P∞ is no longer a good approximation, and the average fractional size has to be computed directly. We find that the choice of sc is generally (but not always) guided by the network structure and the value of the transmissibility T of the disease in question. If the goal is to always obtain P∞ as the average epidemic size, one should choose sc to be the typical size of the largest percolation cluster at the critical percolation threshold for the transmissibility. We also study Q, the probability that an SIR propagation reaches the epidemic mass sc, and find that it is well characterized by percolation theory. We apply our results to real networks (DIMES and Tracerouter) to measure the consequences of the choice of sc on predictions of the average outcome sizes of computer failure epidemics.
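The quantity Q can be estimated directly by simulating bond-percolation-style SIR propagations and counting those that reach the cutoff. A minimal sketch on an Erdős–Rényi graph with illustrative parameters (the paper's networks and thresholds differ):

```python
import random

def erdos_renyi(n, p, rng):
    """Adjacency lists of a G(n, p) random graph."""
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                neighbors[i].append(j)
                neighbors[j].append(i)
    return neighbors

def sir_outbreak_size(neighbors, T, seed_node, rng):
    """SIR as bond percolation: each S-I contact transmits with prob T."""
    infected = {seed_node}
    frontier = [seed_node]
    while frontier:
        nxt = []
        for node in frontier:
            for nb in neighbors[node]:
                if nb not in infected and rng.random() < T:
                    infected.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(infected)

def epidemic_probability(neighbors, T, s_c, runs, rng):
    """Q: fraction of propagations whose final size reaches s_c."""
    n = len(neighbors)
    hits = sum(
        sir_outbreak_size(neighbors, T, rng.randrange(n), rng) >= s_c
        for _ in range(runs)
    )
    return hits / runs

rng = random.Random(42)
graph = erdos_renyi(200, 0.03, rng)              # mean degree ~ 6
q_hat = epidemic_probability(graph, T=0.5, s_c=20, runs=200, rng=rng)
```

Well above the percolation threshold, most propagations that survive the first few generations reach the giant cluster, so Q approaches the probability of early survival.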

  20. Role of spatial averaging in multicellular gradient sensing.

    PubMed

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
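The subtraction argument rests on the standard variance identity for a difference of correlated measurements. A small numerical check with an illustrative covariance (not the paper's LEGI model):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
cov = [[1.0, 0.6], [0.6, 1.0]]  # correlated noise at the two edges
a, b = rng.multivariate_normal([1.0, 0.0], cov, size=n).T

# The gradient readout is a subtraction, so its noise obeys
# Var(a - b) = Var(a) + Var(b) - 2 Cov(a, b): shrinking the covariance
# (as transverse averaging does in the model above) can raise the
# readout noise even while the individual variances fall.
lhs = np.var(a - b)
rhs = np.var(a) + np.var(b) - 2 * np.cov(a, b)[0, 1]
```

With the sampled covariance of 0.6, both sides come out near 1 + 1 - 1.2 = 0.8.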

  1. Role of spatial averaging in multicellular gradient sensing

    NASA Astrophysics Data System (ADS)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-06-01

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.

  2. Acoustical conditions for speech communication in active elementary school classrooms

    NASA Astrophysics Data System (ADS)

    Sato, Hiroshi; Bradley, John

    2005-04-01

Detailed acoustical measurements were made in 34 active elementary school classrooms with a typical rectangular room shape in schools near Ottawa, Canada. There was an average of 21 students in the classrooms. The measurements were made to obtain accurate indications of the acoustical quality of conditions for speech communication during actual teaching activities. Mean speech and noise levels were determined from the distribution of recorded sound levels, and the average speech-to-noise ratio was 11 dBA. Measured mid-frequency reverberation times (RT) during the same occupied conditions varied from 0.3 to 0.6 s, and were a little less than those for the unoccupied rooms. RT values were not related to noise levels. Octave band speech and noise levels, useful-to-detrimental ratios, and Speech Transmission Index values were also determined. Key results included: (1) the average vocal effort of teachers corresponded to a level louder than the Pearsons "raised" voice level; (2) teachers increase their voice level to overcome ambient noise; (3) effective speech levels can be enhanced by up to 5 dB by early reflection energy; and (4) student activity is the dominant noise source, increasing average noise levels by up to 10 dBA during teaching activities. [Work supported by CLLRnet.]
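Mean levels derived from a distribution of recorded sound levels are energy averages, not arithmetic ones. A minimal sketch of the standard Leq-style computation (generic acoustics practice, not necessarily the study's exact procedure):

```python
import math

def energy_average_db(levels_db):
    """Energy (Leq-style) average of sound levels in dB: convert to
    intensities, average, convert back. Arithmetic averaging of dB
    values would understate the result."""
    mean_intensity = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_intensity)

leq = energy_average_db([60.0, 70.0])  # dominated by the louder level
```

Here the energy average is about 67.4 dB, noticeably above the arithmetic mean of 65 dB, which is why loud student activity pulls the average noise level up so strongly.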

  3. The Effect of Spatial and Spectral Resolution in Determining NDVI

    NASA Astrophysics Data System (ADS)

    Boelman, N. T.

    2003-12-01

We explore the impact that the varying spatial and spectral resolutions of several sensors (a field-portable spectroradiometer, Landsat, MODIS, and AVHRR) have on determining the average Normalized Difference Vegetation Index (NDVI) at Imnavait Creek, a small arctic tundra watershed located on the North Slope of Alaska. We found that at the fields of view (FOVs) of less than 20 m2 that were sampled, the average NDVI value for this watershed is 0.65, compared to 0.77 at FOVs equal to or greater than 20 m2. In addition, we found that at FOVs less than 20 m2, the average NDVI value calculated according to each of the Landsat, MODIS, and AVHRR band definitions (controlled by spectral resolution) was similar. However, at FOVs equal to or greater than 20 m2, the average NDVI value calculated according to AVHRR's broad-band definitions was significantly and consistently higher than the narrow-band NDVI values from both Landsat and MODIS. We speculate that these differences in NDVI exist because high leaf-area-index vegetation communities associated with water tracks are commonly spaced between 10 and 20 m apart in arctic tundra landscapes and are often only included when spectral sampling is conducted at FOVs greater than tens of square meters. These results suggest that both spatial resolution alone and its interaction with spectral resolution have to be considered when interpreting commonly used global-scale NDVI datasets. This is because, traditionally, the fundamental relationships between NDVI and ecosystem parameters, such as CO2 fluxes, aboveground biomass, and net primary productivity, have been established at scales of less than 20 m2. Other ecosystems, such as landscapes with isolated tree islands in boreal forest-tundra ecotones, may exhibit similar scaling patterns that need to be considered when interpreting global-scale NDVI datasets.
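NDVI itself is a simple band ratio, and its nonlinearity is one reason footprint size matters: averaging reflectances over a large FOV and then computing NDVI differs from averaging per-pixel NDVI. A minimal sketch with hypothetical reflectances:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), elementwise."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Three small-footprint pixels (hypothetical NIR/red reflectances)
nir = np.array([0.45, 0.50, 0.60])
red = np.array([0.08, 0.07, 0.05])

per_pixel_mean = ndvi(nir, red).mean()       # average of small-FOV NDVI
aggregated = ndvi(nir.mean(), red.mean())    # NDVI of one large FOV
```

The two quantities differ slightly even for this mild example; with strongly contrasting cover types (e.g., water tracks vs. dry tundra) the gap grows, which is consistent with the FOV dependence reported above.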

  4. Excellent amino acid racemization results from Holocene sand dollars

    NASA Astrophysics Data System (ADS)

    Kosnik, M.; Kaufman, D. S.; Kowalewski, M.; Whitacre, K.

    2015-12-01

Amino acid racemization (AAR) is widely used as a cost-effective method to date molluscs in time-averaging and taphonomic studies, but it has not been attempted for echinoderms despite their paleobiological importance. Here we demonstrate the feasibility of AAR geochronology in Holocene-aged Peronella peronii (Echinodermata: Echinoidea) collected from Sydney Harbour (Australia). Using standard HPLC methods we determined the extent of AAR in 74 Peronella tests and performed replicate analyses on 18 tests. We sampled multiple areas of two individuals and identified the outer edge as a good sampling location. Multiple replicate analyses from the outer edge of 18 tests spanning the observed range of D/Ls yielded median coefficients of variation < 4% for Asp, Phe, Ala, and Glu D/L values, which overlaps with the analytical precision. Correlations between D/L values across 155 HPLC injections sampled from 74 individuals are also very high (Pearson r2 > 0.95) for these four amino acids. The ages of 11 individuals spanning the observed range of D/L values were determined using 14C analyses, and Bayesian model averaging was used to determine the best AAR age model. The averaged age model was mainly composed of time-dependent reaction kinetics models (TDK, 71%) based on phenylalanine (Phe, 94%). Modelled ages ranged from 14 to 5539 yr, and the median 95% confidence interval for the 74 analysed individuals is ±28% of the modelled age. In comparison, the median 95% confidence interval for the 11 calibrated 14C ages was ±9% of the median age estimate. Overall, Peronella yields exceptionally high-quality AAR D/L values and appears to be an excellent substrate for AAR geochronology. This work opens the way for time-averaging and taphonomic studies of echinoderms similar to those conducted on molluscs.
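The replicate-quality metric used above, the coefficient of variation, is straightforward to compute. A minimal sketch with hypothetical D/L replicate values (not the study's measurements):

```python
import numpy as np

def coefficient_of_variation(replicates):
    """CV (%) = 100 * sample standard deviation / mean."""
    r = np.asarray(replicates, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()

# Three hypothetical replicate D/L measurements from one test edge
cv = coefficient_of_variation([0.42, 0.43, 0.41])
```

A CV near 2-3%, as here, sits comfortably under the < 4% median reported for the four diagnostic amino acids.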

  5. Variation in the reference Shields stress for bed load transport in gravel‐bed streams and rivers

    USGS Publications Warehouse

    Mueller, Erich R.; Pitlick, John; Nelson, Jonathan M.

    2005-01-01

    The present study examines variations in the reference shear stress for bed load transport (τr) using coupled measurements of flow and bed load transport in 45 gravel‐bed streams and rivers. The study streams encompass a wide range in bank‐full discharge (1–2600 m3/s), average channel gradient (0.0003–0.05), and median surface grain size (0.027–0.21 m). A bed load transport relation was formed for each site by plotting individual values of the dimensionless transport rate W* versus the reach‐average dimensionless shear stress τ*. The reference dimensionless shear stress τ*r was then estimated by selecting the value of τ* corresponding to a reference transport rate of W* = 0.002. The results indicate that the discharge corresponding to τ*r averages 67% of the bank‐full discharge, with the variation independent of reach‐scale morphologic and sediment properties. However, values of τ*r increase systematically with average channel gradient, ranging from 0.025–0.035 at sites with slopes of 0.001–0.006 to values greater than 0.10 at sites with slopes greater than 0.02. A corresponding relation for the bank‐full dimensionless shear stress τ*bf, formulated with data from 159 sites in North America and England, mirrors the relation between τ*r and channel gradient, suggesting that the bank‐full channel geometry of gravel‐ and cobble‐bedded streams is adjusted to a relatively constant excess shear stress, τ*bf − τ*r, across a wide range of slopes.
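The dimensionless (Shields) shear stress referenced above follows from the depth-slope product for the reach-average boundary shear stress. A minimal sketch, with representative rather than site-specific values:

```python
RHO_W = 1000.0   # water density, kg/m^3
RHO_S = 2650.0   # sediment density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def dimensionless_shear_stress(depth_m, slope, d50_m):
    """tau* = rho g h S / ((rho_s - rho) g D50), using the depth-slope
    approximation of the reach-average boundary shear stress."""
    tau = RHO_W * G * depth_m * slope            # boundary shear stress, Pa
    return tau / ((RHO_S - RHO_W) * G * d50_m)

# Representative gravel-bed reach: 1 m deep, slope 0.005, D50 = 5 cm
tau_star = dimensionless_shear_stress(depth_m=1.0, slope=0.005, d50_m=0.05)
```

The resulting value of about 0.06 falls between the low-slope (0.025-0.035) and steep-slope (> 0.10) ranges of τ*r reported above, consistent with its intermediate gradient.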

  6. Poster — Thur Eve — 20: CTDI Measurements using a Radiochromic Film-based clinical protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quintero, C.; Bekerat, H.; DeBlois, F.

    2014-08-15

The purpose of the study was to evaluate the accuracy and reproducibility of a radiochromic film-based protocol for measuring the computed tomography dose index (CTDI) as part of annual QA on CT scanners and kV-CBCT systems attached to linear accelerators. The energy dependence of the Gafchromic XR-QA2® film model was tested over a range of imaging beam qualities (50-140 kVp). Film pieces were irradiated in air to known values of air-kerma (up to 10 cGy). Calibration curves (film reflectance change vs. air-kerma in air) were created for each beam quality, and film responses for the same air-kerma values were compared. Film strips were placed into the holes of a CTDI phantom and irradiated using several clinical scanning protocols. The film reflectance change was converted into dose to water and used to calculate CTDIvol values, and measured and tabulated CTDIvol values were compared. Average variations of ±5.2% in the mean film reflectance change were observed in the 80-140 kVp range, and 11.1% between 50 and 140 kVp. Measured CTDI values were on average 10% lower than tabulated CTDI values for CT simulators, and 44% higher for CBCT systems. Results showed a mean variation of 2.6% for the same machine and protocol. The variation of film response is within ±5%, resulting in a ±15% systematic error in dose estimation if a single calibration curve is used. The relatively large discrepancy between measured and tabulated CTDI values strongly supports the trend towards replacing the CTDI value with an equilibrium dose measurement at the center of a cylindrical phantom, as suggested by TG-111.
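CTDIvol values of the kind compared above combine center and peripheral phantom measurements with the standard 1/3-2/3 weighting. A minimal sketch; the doses and the tabulated value below are hypothetical, not the study's data:

```python
def ctdi_w(center_mgy, periphery_mgy):
    """Weighted CTDI from the center hole and the average of the
    peripheral holes of a CTDI phantom (standard 1/3-2/3 weighting)."""
    return center_mgy / 3.0 + 2.0 * periphery_mgy / 3.0

def ctdi_vol(center_mgy, periphery_mgy, pitch):
    """Volume CTDI: weighted CTDI divided by the helical pitch."""
    return ctdi_w(center_mgy, periphery_mgy) / pitch

# Hypothetical film-derived doses (mGy) vs. a scanner-reported value
measured = ctdi_vol(center_mgy=10.0, periphery_mgy=13.0, pitch=1.0)
tabulated = 13.3
percent_diff = 100.0 * (measured - tabulated) / tabulated
```

A measured-vs-tabulated deficit of around 10%, as in this made-up example, matches the average discrepancy the study reports for CT simulators.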

  7. Color change of the snapper (Pagrus auratus) and Gurnard (Chelidonichthys kumu) skin and eyes during storage: effect of light polarization and contact with ice.

    PubMed

    Balaban, Murat O; Stewart, Kelsie; Fletcher, Graham C; Alçiçek, Zayde

    2014-12-01

    Ten gurnard and 10 snapper were stored on ice. One side always contacted the ice; the other side was always exposed to air. At different intervals for up to 12 d, the fish were placed in a light box, and the images of both sides were taken using polarized and nonpolarized illumination. Image analysis resulted in average L*, a*, and b* values of skin, and average L* values of the eyes. The skin L* value of gurnard changed significantly over time while that of snapper was substantially constant. The a* and b* values of both fish decreased over time. The L* values of eyes were significantly lower for polarized images, and significantly lower for the side of fish exposed to air only. This may be a concern in quality evaluation methods such as QIM. The difference of colors between the polarized and nonpolarized images was calculated to quantify the reflection off the surface of fish. For accurate measurement of surface color and eye color, use of polarized light is recommended. © 2014 Institute of Food Technologists®
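One common way to quantify the difference between the average colors of polarized and nonpolarized images is the CIE76 color difference in L*a*b* space; the study does not specify its metric, and the triples below are hypothetical:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    L*, a*, b* triples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Hypothetical average skin colors from polarized vs. nonpolarized images
diff = delta_e_ab((52.0, 14.0, 20.0), (58.0, 12.0, 24.0))
```

Differences above roughly 2-3 ΔE units are generally perceptible, so a gap of this size between the two illumination modes would be visible to assessors, which is the concern raised above for schemes such as QIM.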

  8. Synthesis, characterization and optical studies of conjugated Schiff base polymer containing thieno[3,2-b]thiophene and 1,2,4-triazole groups

    NASA Astrophysics Data System (ADS)

    Cetin, Adnan; Korkmaz, Adem; Kaya, Esin

    2018-02-01

A conjugated polyschiff base, poly(N-((thieno[3,2-b]thiophen-2-yl)methylene)-1H-1,2,4-triazol-5-amine) (poly(TTMA)), was synthesized by condensation polymerization between thieno[3,2-b]thiophene-2,5-dicarboxaldehyde and 3,5-diamino-1,2,4-triazole. Poly(TTMA) was characterized by FT-IR, 1H NMR, and 13C NMR spectra and by thermal analysis. The number average molecular weight (Mn) and polydispersity index of poly(TTMA) were determined by gel permeation chromatography (GPC). In addition, the optical properties of poly(TTMA) solutions were investigated at different molarities. The band gap (Eg) value of poly(TTMA) decreased with increasing molarity, as did the absorption band edge values. The average transmittance values of poly(TTMA) increased with increasing molarity, and the highest molar extinction coefficient values were found in the near-ultraviolet region; these extinction coefficients also decreased with increasing molarity. These results show that poly(TTMA) is a candidate for the fabrication of optoelectronic devices owing to its suitable optical properties and low optical band gap.
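The molar extinction coefficients discussed above follow from the Beer-Lambert law. A minimal sketch with a hypothetical absorbance reading (not the study's data):

```python
def molar_extinction(absorbance: float, path_cm: float,
                     molarity: float) -> float:
    """Beer-Lambert law: A = epsilon * l * c, so
    epsilon = A / (l * c), in L mol^-1 cm^-1."""
    return absorbance / (path_cm * molarity)

# Hypothetical reading: A = 0.75 in a 1 cm cuvette at 1e-4 mol/L
epsilon = molar_extinction(absorbance=0.75, path_cm=1.0, molarity=1e-4)
```

Comparing epsilon across wavelengths at fixed concentration is what locates the near-ultraviolet maximum reported above.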

  9. High-Precision Half-Life Measurement for the Superallowed β+ Emitter 22Mg

    NASA Astrophysics Data System (ADS)

    Dunlop, Michelle

    2017-09-01

    High precision measurements of the Ft values for superallowed Fermi beta transitions between 0+ isobaric analogue states allow for stringent tests of the electroweak interaction. These transitions provide an experimental probe of the Conserved-Vector-Current hypothesis, the most precise determination of the up-down element of the Cabibbo-Kobayashi-Maskawa matrix, and set stringent limits on the existence of scalar currents in the weak interaction. To calculate the Ft values several theoretical corrections must be applied to the experimental data, some of which have large model dependent variations. Precise experimental determinations of the ft values can be used to help constrain the different models. The uncertainty in the 22Mg superallowed Ft value is dominated by the uncertainty in the experimental ft value. The adopted half-life of 22Mg is determined from two measurements which disagree with one another, resulting in the inflation of the weighted-average half-life uncertainty by a factor of 2. The 22Mg half-life was measured with a precision of 0.02% via direct β counting at TRIUMF's ISAC facility, leading to an improvement in the world-average half-life by more than a factor of 3.
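The inflation of a weighted-average uncertainty when the inputs disagree follows the standard scale-factor procedure. A minimal sketch with hypothetical half-life inputs (not the actual 22Mg measurements):

```python
import numpy as np

def weighted_average(values, errors):
    """Inverse-variance weighted mean with a PDG-style scale factor:
    when chi^2 per degree of freedom exceeds 1 (discrepant inputs),
    the uncertainty is inflated by sqrt(chi^2 / (N - 1))."""
    values = np.asarray(values, dtype=float)
    errors = np.asarray(errors, dtype=float)
    w = 1.0 / errors**2
    mean = np.sum(w * values) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    chi2 = np.sum(w * (values - mean) ** 2)
    ndf = len(values) - 1
    scale = max(1.0, np.sqrt(chi2 / ndf)) if ndf > 0 else 1.0
    return mean, err * scale, scale

# Two discrepant hypothetical half-life measurements (seconds)
mean, err, scale = weighted_average([3.8755, 3.8745], [0.0002, 0.0002])
```

With inputs separated by five combined standard deviations, as in this made-up pair, the scale factor is large; a new high-precision measurement shrinks both the weighted error and the inflation, which is the effect the study exploits.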

  10. The value of vital sign trends for detecting clinical deterioration on the wards.

    PubMed

    Churpek, Matthew M; Adhikari, Richa; Edelson, Dana P

    2016-05-01

Early detection of clinical deterioration on the wards may improve outcomes, and most early warning scores utilize only a patient's current vital signs. The added value of vital sign trends over time is poorly characterized. We investigated whether adding trends improves accuracy, and which methods are optimal for modelling trends. Patients admitted to five hospitals over a five-year period were included in this observational cohort study, with 60% of the data used for model derivation and 40% for validation. Vital signs were utilized to predict the combined outcome of cardiac arrest, intensive care unit transfer, and death. The accuracy of models utilizing both the current value and different trend methods was compared using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patient admissions were included, which resulted in 16,452 outcomes. Overall, trends increased accuracy compared to a model containing only current vital signs (AUC 0.78 vs. 0.74; p<0.001). The methods that resulted in the greatest average increase in accuracy were the vital sign slope (AUC improvement 0.013) and minimum value (AUC improvement 0.012), while the change from the previous value resulted in an average worsening of the AUC (change in AUC -0.002). The AUC increased most for systolic blood pressure when trends were added (AUC improvement 0.05). Vital sign trends increased the accuracy of models designed to detect critical illness on the wards. Our findings have important implications for clinicians at the bedside and for the development of early warning scores. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
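The trend representations compared in the study (slope, minimum, change from the previous value) amount to simple feature extractors over a window of readings. A minimal sketch with hypothetical vital sign data:

```python
import numpy as np

def trend_features(values):
    """Trend summaries of the kinds compared above: least-squares
    slope over time, minimum value, and change from the previous
    reading, alongside the current value."""
    values = np.asarray(values, dtype=float)
    t = np.arange(len(values))
    slope = np.polyfit(t, values, 1)[0]
    return {
        "current": values[-1],
        "slope": slope,
        "minimum": values.min(),
        "delta_prev": values[-1] - values[-2],
    }

# Hypothetical falling systolic blood pressure readings (mmHg)
feats = trend_features([130, 124, 118, 110, 102])
```

A steep negative slope flags the deterioration even while the current value alone might still look acceptable, which mirrors the study's finding that slope and minimum add the most discrimination.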

  11. Plasma fatty acid profile and alternative nutrition.

    PubMed

    Krajcovicová-Kudlácková, M; Simoncic, R; Béderová, A; Klvanová, J

    1997-01-01

    Plasma profile of fatty acids was examined in a group of children consisting of 7 vegans, 15 lactoovovegetarians and 10 semivegetarians. The children were 11-15 years old and the average period of alternative nutrition was 3.4 years. The results were compared with a group of 19 omnivores that constituted an average sample with respect to biochemical and hematological parameters from a larger study of health and nutritional status of children in Slovakia. Alternative nutrition groups had significantly lower values of saturated fatty acids. The content of oleic acid was identical to omnivores. A significant increase was observed for linoleic and alpha-linolenic (n-3) acids. The dihomo-gamma-linolenic (n-6) acid and arachidonic (n-6) acid values were comparable to omnivores for all alternative nutrition groups. Values of n-3 polyunsaturated fatty acids in lactoovovegetarians were identical to those of omnivores whereas they were significantly increased in semivegetarians consuming fish twice a week. Due to the total exclusion of animal fats from the diet, vegans had significantly reduced values of palmitoleic acid as well as eicosapentaenoic (n-3) acid and docosahexaenoic (n-3) acid resulting in an increased n-6/n-3 ratio. Values of plasma fatty acids found in alternative nutrition groups can be explained by the higher intake of common vegetable oils (high content of linoleic acid), oils rich in alpha-linolenic acid (cereal germs, soybean oil, walnuts), as well as in n-3 polyunsaturated fatty acids (fish). The results of fatty acids (except n-3 in vegans) and other lipid parameters confirm the beneficial effect of vegetarian nutrition in the prevention of cardiovascular diseases.

  12. Value of Adult Volunteer Leaders in the New Mexico 4-H Program.

    ERIC Educational Resources Information Center

    Hutchins, Julie K.; Seevers, Brenda S.; Van Leeuwen, Dawn

    2002-01-01

    Using data from 187 New Mexico 4-H program volunteers, economic value of volunteer time was determined by calculating the average number of hours spent in 1 year and multiplying the number by the average hourly wage for nonagricultural workers. The data on volunteer activities and their value can be used for program planning, recruitment, and…

  13. 40 CFR 80.825 - How is the refinery or importer annual average toxics value determined?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... number of batches of gasoline produced or imported during the averaging period. i = Individual batch of... toxics value, Ti, of each batch of gasoline is determined using the Phase II Complex Model specified at...

  14. Survey of radiation belt energetic electron pitch angle distributions based on the Van Allen Probes MagEIS measurements: Electron Pitch Angle Distributions

    DOE PAGES

    Shi, Run; Summers, Danny; Ni, Binbin; ...

    2016-12-30

A statistical survey of electron pitch angle distributions (PADs) is performed based on the pitch angle-resolved flux observations from the Magnetic Electron Ion Spectrometer (MagEIS) instrument on board the Van Allen Probes during the period from 1 October 2012 to 1 May 2015. By fitting the measured PADs to a sin^n(α) form, where α is the local pitch angle and n is the power-law index, we investigate the dependence of PADs on electron kinetic energy, magnetic local time (MLT), the geomagnetic Kp index, and L shell. The difference in electron PADs between the inner and outer belt is distinct. In the outer belt, the common averaged n values are less than 1.5, except for large values of the Kp index and high electron energies. The averaged n values vary considerably with MLT, with a peak in the afternoon sector and an increase with increasing L shell. In the inner belt, the averaged n values are much larger, with a common value greater than 2. The PADs show a slight dependence on MLT, with a weak maximum at noon. A distinct region with steep PADs lies in the outer edge of the inner belt where the electron flux is relatively low. The distance between the inner and outer belt and the intensity of the geomagnetic activity together determine the variation of PADs in the inner belt. Finally, besides being dependent on electron energy, magnetic activity, and L shell, the results show a clear dependence on MLT, with higher n values on the dayside.
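Extracting the power-law index n from a measured PAD can be done with a linear fit in log space, since log j = log A + n log(sin α). A minimal sketch with synthetic data (not MagEIS fluxes):

```python
import numpy as np

def fit_pad_index(alpha_deg, flux):
    """Fit j(alpha) = A * sin(alpha)^n by least squares in log space."""
    alpha = np.radians(np.asarray(alpha_deg, dtype=float))
    x = np.log(np.sin(alpha))
    y = np.log(np.asarray(flux, dtype=float))
    n, log_a = np.polyfit(x, y, 1)
    return n, np.exp(log_a)

# Synthetic pancake PAD with n = 2, peaked at 90 degrees
angles = np.array([20, 35, 50, 65, 80, 90])
flux = 500.0 * np.sin(np.radians(angles)) ** 2
n_fit, a_fit = fit_pad_index(angles, flux)
```

Larger fitted n means a more sharply peaked (pancake) distribution, which is why the inner belt's n > 2 indicates steeper PADs than the outer belt's n < 1.5.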

  15. Survey of radiation belt energetic electron pitch angle distributions based on the Van Allen Probes MagEIS measurements: Electron Pitch Angle Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Run; Summers, Danny; Ni, Binbin

A statistical survey of electron pitch angle distributions (PADs) is performed based on the pitch angle-resolved flux observations from the Magnetic Electron Ion Spectrometer (MagEIS) instrument on board the Van Allen Probes during the period from 1 October 2012 to 1 May 2015. By fitting the measured PADs to a sin^n(α) form, where α is the local pitch angle and n is the power-law index, we investigate the dependence of PADs on electron kinetic energy, magnetic local time (MLT), the geomagnetic Kp index, and L shell. The difference in electron PADs between the inner and outer belt is distinct. In the outer belt, the common averaged n values are less than 1.5, except for large values of the Kp index and high electron energies. The averaged n values vary considerably with MLT, with a peak in the afternoon sector and an increase with increasing L shell. In the inner belt, the averaged n values are much larger, with a common value greater than 2. The PADs show a slight dependence on MLT, with a weak maximum at noon. A distinct region with steep PADs lies in the outer edge of the inner belt where the electron flux is relatively low. The distance between the inner and outer belt and the intensity of the geomagnetic activity together determine the variation of PADs in the inner belt. Finally, besides being dependent on electron energy, magnetic activity, and L shell, the results show a clear dependence on MLT, with higher n values on the dayside.

  16. [Influence of traffic restriction on road and construction fugitive dust].

    PubMed

    Tian, Gang; Li, Gang; Qin, Jian-Ping; Fan, Shou-Bin; Huang, Yu-Hu; Nie, Lei

    2009-05-15

    By continuously monitoring road and construction dust fall during the "Good Luck Beijing" sport events, the reduction in road and construction dust fall caused by the traffic restriction was studied. The contribution of road and construction dust to particulate matter in Beijing's atmospheric environment, and its share of total local PM10 emissions, were analyzed. The results show that the traffic restriction reduced road and construction dust fall significantly. The average dust fall on ring roads was 0.27 g·(m²·d)⁻¹ during the traffic restriction, compared with 0.81 and 0.59 g·(m²·d)⁻¹ one month and 7 days before, respectively. The average dust fall on major and minor arterial roads was 0.21 g·(m²·d)⁻¹ during the restriction, compared with 0.54 and 0.58 g·(m²·d)⁻¹ one month and 7 days before. Road emissions were thus reduced by 60%-70% relative to the period before the restriction. The average dust fall at civil and utility construction sites was 0.61 and 1.06 g·(m²·d)⁻¹ during the restriction, compared with 1.15 and 1.55 g·(m²·d)⁻¹ 20 days before; construction dust was reduced by 30%-47%. Road and construction dust are major sources of atmospheric particulate matter in Beijing, contributing 21%-36% of the ambient PM10 concentration. PM10 emitted from roads and construction sites accounts for 42%-72% and 30%-51% of local emissions, while local PM10 accounts for 50% and 70% of total emissions.
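The quoted 60%-70% road-dust reduction follows directly from the dust-fall averages; a quick arithmetic check using the ring-road values from the abstract:

```python
def pct_reduction(before, during):
    """Percent reduction of 'during' relative to the 'before' value."""
    return 100.0 * (before - during) / before

# Ring roads: 0.81 (1 month before) and 0.59 (7 days before)
# vs 0.27 g/(m2*d) during the traffic restriction
ring_1mo = pct_reduction(0.81, 0.27)  # about 66.7%
ring_7d = pct_reduction(0.59, 0.27)   # about 54.2%
```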

  17. DFT study of gases adsorption on sharp tip nano-catalysts surface for green fertilizer synthesis

    NASA Astrophysics Data System (ADS)

    Yahya, Noorhana; Irfan, Muhammad; Shafie, Afza; Soleimani, Hassan; Alqasem, Bilal; Rehman, Zia Ur; Qureshi, Saima

    2016-11-01

    The energy minimization and spin modifications of sorbates with sorbents in the magnetic induction method (MIM) play a vital role in fertilizer yield. Hence, this article focuses on the interaction of sorbates/reactants (H2, N2 and CO2), in terms of average total adsorption energies, average isosteric heats of adsorption, magnetic moments, band gap energies and spin modifications, over identical cone-tip nanocatalysts (sorbents) of Fe2O3, Fe3O4 (magnetic), CuO and Al2O3 (non-magnetic) for green nano-fertilizer synthesis. The adsorption energies, band structures and densities of states of the reactants with the sorbents are classical and quantum mechanical concepts, illustrated and supported here by the ADSORPTION LOCATOR and Cambridge Serial Total Energy Package (CASTEP) modules, following classical and first-principles DFT simulations respectively. Maximum values of total average energies, total average adsorption energies and average adsorption energies of H2, N2 and CO2 molecules are reported as -14.688 kcal/mol, -13.444 kcal/mol, -3.130 kcal/mol, - kcal/mol and -6.348 kcal/mol over Al2O3 cone tips respectively, with minimum values over the magnetic cone tips. The maximum and minimum values of the average isosteric heats of adsorption of H2, N2 and CO2 are found to be 3.081, 4.842 and 6.848 kcal/mol (over aluminum oxide) and 0.988, 1.554 and 2.236 kcal/mol (over Fe3O4 cone tips), respectively. In addition, upon adsorption of the reactants over the identical cone sorbents, the maximum and minimum values of net spin, electrons and number of bands are 82 and zero, 260 and 196, and 206 and 118 for the Fe3O4 and Al2O3 cones respectively. The maximum and lowest band gap energies are found to be 0.188 eV and 0.018 eV for the Al2O3 and Fe3O4 cone structures respectively. Ultimately, with the adsorption of the reactants, an identical increment of 14 electrons in each of the up and down spins results.

  18. [A comparative evaluation of the methods for determining nitrogen dioxide in an industrial environment].

    PubMed

    Panev, T

    1991-01-01

    The purpose of the present work is a comparative evaluation of the different types of detector tubes--analysis, long-term and passive--for the determination of NO2, with the results compared against those obtained by the spectrophotometric method with the Saltzman reagent. Measurements were performed in the repair hall of a diesel-bus garage during one working shift. The results indicate that the analysing tubes for NO2 agree well with the spectrophotometric method. The average-shift concentrations of NO2 measured by long-term and passive tubes were compared with the average values obtained with the analytical tubes and with the analytical method.

  19. Consistency of the structure of Legendre transform in thermodynamics with the Kolmogorov-Nagumo average

    NASA Astrophysics Data System (ADS)

    Scarfone, A. M.; Matsuzoe, H.; Wada, T.

    2016-09-01

    We show the robustness of the structure of the Legendre transform in thermodynamics against the replacement of the standard linear average with the Kolmogorov-Nagumo nonlinear average for evaluating the expectation values of macroscopic physical observables. The consequence of this statement is twofold: (1) the relationships between the expectation values and the corresponding Lagrange multipliers still hold in the present formalism; (2) the universality of the Gibbs equation, as well as other thermodynamic relations, is unaffected by the structure of the average used in the theory.
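For reference, the Kolmogorov-Nagumo average mentioned above is the standard quasi-arithmetic mean defined through an invertible function φ:

```latex
\langle x \rangle_{\phi} \;=\; \phi^{-1}\!\left( \sum_{i} p_i \, \phi(x_i) \right),
\qquad \sum_{i} p_i = 1 .
```

The linear average is recovered for φ(x) = x; the abstract's claim is that the Legendre-transform structure of thermodynamics survives this generalization.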

  20. Testosterone Trajectories and Reference Ranges in a Large Longitudinal Sample of Male Adolescents

    PubMed Central

    Khairullah, Ammar; Cousino Klein, Laura; Ingle, Suzanne M.; May, Margaret T.; Whetzel, Courtney A.; Susman, Elizabeth J.; Paus, Tomáš

    2014-01-01

    Purpose Pubertal dynamics plays an important role in physical and psychological development of children and adolescents. We aim to provide reference ranges of plasma testosterone in a large longitudinal sample. Furthermore, we describe a measure of testosterone trajectories during adolescence that can be used in future investigations of development. Methods We carried out longitudinal measurements of plasma testosterone in 2,216 samples obtained from 513 males (9 to 17 years of age) from the Avon Longitudinal Study of Parents and Children. We used integration of a model fitted to each participant’s testosterone trajectory to calculate a measure of average exposure to testosterone over adolescence. We pooled these data with corresponding values reported in the literature to provide a reference range of testosterone levels in males between the ages of 6 and 19 years. Results The average values of total testosterone in the ALSPAC sample range from 0.82 nmol/L (Standard Deviation [SD]: 0.09) at 9 years of age to 16.5 (SD: 2.65) nmol/L at 17 years of age; these values are congruent with other reports in the literature. The average exposure to testosterone is associated with different features of testosterone trajectories such as Peak Testosterone Change, Age at Peak Testosterone Change, and Testosterone at 17 years of age as well as the timing of the growth spurt during puberty. Conclusions The average exposure to testosterone is a useful measure for future investigations using testosterone trajectories to examine pubertal dynamics. PMID:25268961
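The "average exposure" measure described in the Methods (integrating a fitted trajectory and dividing by the age span) can be sketched as a trapezoidal integration; the trajectory values below are illustrative, not ALSPAC data:

```python
def average_exposure(ages, levels):
    """Average hormone exposure over a period: the time-integral of the
    trajectory divided by its duration (composite trapezoidal rule)."""
    area = 0.0
    for (a0, l0), (a1, l1) in zip(zip(ages, levels),
                                  zip(ages[1:], levels[1:])):
        area += 0.5 * (l0 + l1) * (a1 - a0)
    return area / (ages[-1] - ages[0])

# Illustrative trajectory rising from ~0.8 to ~16.5 nmol/L between 9 and 17 y
ages = [9, 11, 13, 15, 17]
testosterone = [0.8, 2.5, 8.0, 13.5, 16.5]
avg = average_exposure(ages, testosterone)
```

In the paper the integrand is a model fitted per participant rather than raw samples, but the averaging step is the same division of integrated exposure by the age range.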

  1. The spatial distribution of fossil fuel CO2 traced by Δ(14)C in the leaves of gingko (Ginkgo biloba L.) in Beijing City, China.

    PubMed

    Niu, Zhenchuan; Zhou, Weijian; Zhang, Xiaoshan; Wang, Sen; Zhang, Dongxia; Lu, Xuefeng; Cheng, Peng; Wu, Shugang; Xiong, Xiaohu; Du, Hua; Fu, Yunchong

    2016-01-01

    Atmospheric fossil fuel CO2 (CO2ff) information is an important reference for local governments in China when formulating energy-saving and emission-reduction policies. The spatial distribution of CO2ff in Beijing City was traced by Δ(14)C in the leaves of ginkgo (Ginkgo biloba L.) from late March to September 2009. The Δ(14)C values were in the range -35.2 ± 2.8 to 15.5 ± 3.2 ‰ (average 3.4 ± 11.8 ‰), with high values found at suburban sites (average 12.8 ± 3.1 ‰) and low values at road sites (average -8.4 ± 18.1 ‰). The CO2ff concentrations varied from 11.6 ± 3.7 to 32.5 ± 9.0 ppm, with an average of 16.4 ± 4.9 ppm. The CO2ff distribution in Beijing City showed spatial heterogeneity: CO2ff hotspots were found at road sites, resulting from vehicle emissions, while low CO2ff concentrations were found at suburban sites because of lower fossil fuel usage. Additionally, CO2ff concentrations in the northwest area were generally higher than those in the southeast area due to the disadvantageous topography.
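Tracing CO2ff from Δ(14)C typically relies on a two-endmember mass balance in which fossil carbon is 14C-free (Δ = -1000 ‰). A sketch of that standard calculation, with illustrative numbers rather than the paper's background values:

```python
def co2_fossil(co2_meas, delta_meas, delta_bg, delta_ff=-1000.0):
    """Fossil-fuel CO2 component from a two-endmember 14C mass balance:
    CO2ff = CO2_meas * (delta_bg - delta_meas) / (delta_bg - delta_ff).
    Deltas are Delta-14C in permil; fossil carbon has Delta = -1000 permil.
    """
    return co2_meas * (delta_bg - delta_meas) / (delta_bg - delta_ff)

# Illustrative: 400 ppm observed, site Delta14C = -8.4 permil (road-site
# average from the abstract), assumed clean-background Delta14C = +40 permil
ff = co2_fossil(400.0, -8.4, 40.0)  # roughly 19 ppm of fossil CO2
```

The more negative the measured Δ(14)C relative to background, the larger the inferred fossil component, which is why the road sites with Δ ≈ -8 ‰ emerge as CO2ff hotspots.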

  2. A new method to estimate average hourly global solar radiation on the horizontal surface

    NASA Astrophysics Data System (ADS)

    Pandey, Pramod K.; Soupir, Michelle L.

    2012-10-01

    A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses a transmission function (Tf,ij), developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day of year (Julian), optimized parameter values, solar constant (H0), and latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were then applied in the model to predict average hourly global solar radiation at four other locations in the United States (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA). Model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R²). The sensitivities of the parameters to the prediction were estimated. Results show that the model performed very well: correlation coefficients (r) range from 0.96 to 0.99, while coefficients of determination (R²) range from 0.92 to 0.98. For daily and monthly predictions, error percentages (MABE and RMSE) were less than 20%. The approach proposed here can be useful for predicting average hourly global solar radiation on horizontal surfaces at different locations, using readily available data (latitude and longitude of the location) as inputs.
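The performance metrics used here (r, MABE, RMSE, R²) are standard and easy to reproduce; a minimal sketch on toy observed/predicted series (not the GSRHS data):

```python
import math

def metrics(obs, pred):
    """Correlation r, RMSE, mean absolute (bias) error, and R^2
    between observed and predicted series."""
    n = len(obs)
    mo = sum(obs) / n
    mp = sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    return {
        "r": cov / (so * sp),
        "RMSE": math.sqrt(sse / n),
        "MABE": sum(abs(o - p) for o, p in zip(obs, pred)) / n,
        "R2": 1.0 - sse / sum((o - mo) ** 2 for o in obs),
    }

m = metrics([100, 200, 300, 400], [110, 190, 310, 390])
```

Note that r measures only linear association while R² and RMSE penalize bias, which is why papers usually report several of these together.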

  3. [Acid volatile sulfide and bioaccumulation of Cr in sediments from a municipal polluted river].

    PubMed

    Li, Feng; Wen, Yan-Mao; Zhu, Ping-Ting; Jin, Hui; Song, Wei-Wei; Dai, Rui-Zhi

    2009-03-15

    Samples of sediment, overlying water, pore water, and benthic invertebrates were collected at 13 stations along a typical municipally polluted river in the Pearl River Delta. The samples were analyzed to study the relationships between acid volatile sulfide (AVS) and Cr(III) and Cr(VI) in sediment, overlying water, and pore water, as well as Cr in Limnodrilus sp. Based on the "Cr hypothesis", the relationship between AVS and the bioavailability of Cr in heavily polluted areas was explored to extend the utility of AVS measurements in sediment assessment. The mean value of total Cr in sediment was 329.57 mg/kg, 9.4 times the background value (35 mg/kg), indicating that the study area has been seriously polluted by Cr. The concentrations of Cr(VI) in sediment and overlying water were low, indicating that most of the Cr was in the form of Cr(III). In the study area, AVS was relatively high, with an average value of 650.38 mg/kg, while Cr in the pore water was low, with an average of 68.42 microg/L. Cr(VI) in the pore water was below the detection limit except at station Z1. Cr concentrations in Limnodrilus sp. ranged from 12.46 mg/kg to 38.99 mg/kg dry weight, with an average of 25.85 mg/kg, higher than other similar results in the literature, showing that Cr accumulation in Limnodrilus sp. was significant. A further analysis showed a significant correlation between Cr in Limnodrilus sp. and Cr in the pore water (r = 0.614, p < 0.05). Since most of the Cr in pore water was in the form of Cr(III), the toxicity of Cr(III) in pore water to organisms cannot be neglected in this heavily polluted river.

  4. Analysis of the choice of food products and the energy value of diets of female middle- and long-distance runners depending on the self-assessment of their nutritional habits

    PubMed

    Głąbska, Dominika; Jusińska, Marta

    2018-01-01

    A properly balanced diet is especially important for young athletes, as it influences not only their physical development but also the results obtained during training and competition. The aim of the study was to assess the choice of food products and the energy value of the diets of female middle- and long-distance runners, depending on the self-assessment of their nutritional habits. The study was conducted in a group of 40 female middle- and long-distance runners, aged 15-25, who declared an average diet (n=15, 37.5%) or an outstanding diet (n=25, 62.5%). Participants kept a three-day dietary record of consumed dishes and beverages, based on self-reported data. The choice of products, the energy value of the diets, and macronutrient intake were compared depending on the self-assessment of nutritional habits. Runners declaring an outstanding diet were characterized by significantly lower intake of dairy beverages than runners declaring an average diet (p=0.0459), but simultaneously by higher intake of mushrooms (p=0.0453). No difference in the energy value of the diets was found between the groups. Runners declaring an outstanding diet were also characterized by significantly lower intake of lactose (p=0.0119) but higher intake of cholesterol (p=0.0307). The female middle- and long-distance runners analysed in the present study do not assess the quality of their diet reliably, so they probably lack sufficient nutritional knowledge. There is a need for nutritional education among professional runners and their coaches, in order to improve the quality of runners' diets and, as a result, perhaps also their sporting results.

  5. High-precision branching ratio measurement for the superallowed β+ emitter Ga62

    NASA Astrophysics Data System (ADS)

    Finlay, P.; Ball, G. C.; Leslie, J. R.; Svensson, C. E.; Towner, I. S.; Austin, R. A. E.; Bandyopadhyay, D.; Chaffey, A.; Chakrawarthy, R. S.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Kanungo, R.; Leach, K. G.; Mattoon, C. M.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Ressler, J. J.; Sarazin, F.; Savajols, H.; Schumaker, M. A.; Wong, J.

    2008-08-01

    A high-precision branching ratio measurement for the superallowed β+ decay of Ga62 was performed at the Isotope Separator and Accelerator (ISAC) radioactive ion beam facility. The 8π spectrometer, an array of 20 high-purity germanium detectors, was employed to detect the γ rays emitted following Gamow-Teller and nonanalog Fermi β+ decays of Ga62, and the SCEPTAR plastic scintillator array was used to detect the emitted β particles. Thirty γ rays were identified following Ga62 decay, establishing the superallowed branching ratio to be 99.858(8)%. Combined with the world-average half-life and a recent high-precision Q-value measurement for Ga62, this branching ratio yields an ft value of 3074.3±1.1 s, making Ga62 one of the most precisely determined superallowed ft values. Comparison between the superallowed ft value determined in this work and the world-average corrected Ft value allows the large nuclear-structure-dependent correction for Ga62 decay to be experimentally determined from the CVC hypothesis to better than 7% of its own value, the most precise experimental determination for any superallowed emitter. These results provide a benchmark for the refinement of the theoretical description of isospin-symmetry breaking in A⩾62 superallowed decays.

  6. Understanding the potential sources and environmental impacts of dissolved and suspended organic carbon in the diversified Ramganga River, Ganges Basin, India

    NASA Astrophysics Data System (ADS)

    Khan, Mohd Yawar Ali; Tian, Fuqiang

    2018-06-01

    The river network is one of the important transporters of nutrients from land to the oceans and regularly provides storage for several compounds. Variations in the suspended and dissolved discharge of a river are more substantial than changes in water discharge. Suspended and dissolved organic carbon (SOC and DOC) are important components of the carbon cycle and serve as essential food sources for aquatic food webs. In the present study, 26 water samples were collected from different locations over a 642 km stretch of the Ramganga River and its adjoining tributaries to observe the spatial variation of DOC, dissolved inorganic carbon (DIC), SOC and suspended inorganic carbon (SIC) in river water. The DOC and DIC values of the Ramganga River range from 1.49 to 4.65 and 9.61 to 36.6 mg L-1, with average concentrations of 2.5 and 20 mg L-1, respectively, while for the tributaries these values range from 0.09 to 4.52 and 4.61 to 42.36 mg L-1, with average concentrations of 2.13 and 21.1 mg L-1, respectively. The SOC and SIC values in the Ramganga River range from 1.31 to 22.15 and 1.27 to 10.14 g kg-1, with average concentrations of 6.29 and 4.24 g kg-1, respectively, whereas in the tributaries these values range from 0.80 to 47.23 and 0.31 to 22.94 g kg-1, with average concentrations of 9.25 and 5.14 g kg-1, respectively. The results also show higher values of DOC compared with SOC, and these values show an increasing pattern with decreasing elevation.

  7. On the Relationship Between Global Land-Ocean Temperature and Various Descriptors of Solar-Geomagnetic Activity and Climate

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2014-01-01

    Examined are sunspot cycle- (SC-) length averages of the annual January-December values of the Global Land-Ocean Temperature Index in relation to SC-length averages of annual values of various descriptors of solar-geomagnetic activity and climate, incorporating lags of 0-5 yr. For the overall interval SC12-SC23, the temperature index is inferred to correlate best against the reduced geomagnetic aa parameter incorporating a lag of 5 yr, where this parameter refers to the resultant aa value after removing that portion of the annual aa average value due to the yearly variation of sunspot number (SSN). The inferred correlation is statistically important at confidence level cl > 99.9%, having a coefficient of linear correlation r = 0.865 and standard error of estimate se = 0.149 degC. Excluding the most recent cycles SC22 and SC23, the inferred correlation is stronger, having r = 0.969 and se = 0.048 degC. With respect to the overall trend in the temperature index, which has been upwards towards warmer temperatures since SC12 (1878-1888), solar-geomagnetic activity parameters are now trending downwards (since SC19). For SC20-SC23, in contrast, comparison of the temperature index against SC-length averages of the annual value of the Mauna Loa carbon dioxide index is found to be highly statistically important (cl >> 99.9%), having r = 0.9994 and se = 0.012 degC for a lag of 2 yr. On the basis of the inferred preferential linear correlation with the carbon dioxide index, the current ongoing SC24 is inferred to have a warmer average temperature than was seen in SC23 (i.e., >0.526 degC), probably in excess of 0.68 degC (relative to the 1951-1980 base period).

  8. Adaptive Anchoring Model: How Static and Dynamic Presentations of Time Series Influence Judgments and Predictions.

    PubMed

    Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick

    2018-01-01

    When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. 
Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
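The anchoring-on-recent-values-with-insufficient-adjustment idea that ADAM formalizes can be caricatured in a few lines. This is not ADAM itself, only the classic heuristic the abstract references, with hypothetical weighting parameters:

```python
def anchored_forecast(series, recency=0.5, adjustment=0.6):
    """Anchoring-and-adjustment sketch: the anchor is an exponentially
    recency-weighted average of the series (higher `recency` weights
    later observations more), and the forecast adjusts the anchor only
    partway toward the series mean, reproducing trend damping."""
    n = len(series)
    weights = [(1 - recency) ** (n - 1 - i) for i in range(n)]
    anchor = sum(w * x for w, x in zip(weights, series)) / sum(weights)
    mean = sum(series) / n
    return anchor + adjustment * (mean - anchor)
```

For the upward trend [1, 2, 3, 4, 5] this sketch forecasts about 3.46, well below any plausible continuation of the trend, i.e. the underestimation of upward trends (trend damping) described above; shifting weight to recent events (higher `recency`) mimics the dynamic-presentation condition.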

  9. Hybrid Stochastic Forecasting Model for Management of Large Open Water Reservoir with Storage Function

    NASA Astrophysics Data System (ADS)

    Kozel, Tomas; Stary, Milos

    2017-12-01

    The main advantage of stochastic forecasting is the fan of possible values it provides, which a deterministic forecasting method cannot. The future development of a random process is described better by stochastic than by deterministic forecasting, and discharge at a measurement profile can be categorized as a random process. This article presents the construction and application of a forecasting model for a managed large open water reservoir with a supply function. The model is based on neural networks (NS) and zone models, which forecast values of average monthly flow from input values of average monthly flow, the learned neural network, and random numbers. Part of the data was sorted into one moving zone, created around the last measured average monthly flow, and the correlation matrix was assembled only from data belonging to that zone. The model was compiled for forecasts of 1 to 12 months, using backward monthly flows (NS inputs) from 2 to 11 months for model construction. The data were corrected for asymmetry using the Box-Cox rule (Box, Cox, 1964), with the value of r found by optimization; in the next step the data were transformed to a standard normal distribution. The data have a monthly step and the forecast is not recurring. A 90-year real flow series was used to compile the model: the first 75 years were used for calibration (the matrix of input-output relationships), and the last 15 years only for validation. Outputs of the model were compared with the real flow series. For comparison between the real flow series (a 100% successful forecast) and the forecasts, both were applied to the management of an artificial reservoir. The course of water reservoir management using a genetic algorithm (GE) + the real flow series was compared with a fuzzy model (Fuzzy) + the forecast made by the moving zone model. During evaluation, the best size of the zone was sought.
    Results show that the highest number of inputs did not give the best results, and the ideal size of the zone is in the interval from 25 to 35, where the course of management was almost the same for all numbers in the interval. The resulting course of management was compared with the course obtained using GE + the real flow series. The comparison showed that the fuzzy model with forecasted values was able to manage the main malfunctions, and the artificial disturbances introduced by the model were found to be essential once the values of water volume during management were evaluated. The forecasting model in combination with the fuzzy model provides very good results in the management of a water reservoir with a storage function and can be recommended for this purpose.

  10. Analysis of financing efficiency of big data industry in Guizhou province based on DEA models

    NASA Astrophysics Data System (ADS)

    Li, Chenggang; Pan, Kang; Luo, Cong

    2018-03-01

    Taking 20 listed enterprises in the big data industry of Guizhou province as samples, this paper uses the DEA method to evaluate the financing efficiency of the big data industry in Guizhou province. The results show that the pure technical efficiency of big data enterprises in Guizhou province is high, with a mean value of 0.925. The mean value of scale efficiency is 0.749, and the mean value of comprehensive efficiency is 0.693; the comprehensive financing efficiency is therefore low. Based on these results, this paper puts forward policy recommendations to improve the financing efficiency of the big data industry in Guizhou.

  11. Simulating maize yield and biomass with spatial variability of soil field capacity

    USGS Publications Warehouse

    Ma, Liwang; Ahuja, Lajpat; Trout, Thomas; Nolan, Bernard T.; Malone, Robert W.

    2015-01-01

    Spatial variability in field soil properties is a challenge for system modelers who use single representative values, such as means, for model inputs, rather than their distributions. In this study, the root zone water quality model (RZWQM2) was first calibrated for 4 yr of maize (Zea mays L.) data at six irrigation levels in northern Colorado and then used to study spatial variability of soil field capacity (FC) estimated in 96 plots on maize yield and biomass. The best results were obtained when the crop parameters were fitted along with FCs, with a root mean squared error (RMSE) of 354 kg ha–1 for yield and 1202 kg ha–1 for biomass. When running the model using each of the 96 sets of field-estimated FC values, instead of calibrating FCs, the average simulated yield and biomass from the 96 runs were close to measured values with a RMSE of 376 kg ha–1 for yield and 1504 kg ha–1 for biomass. When an average of the 96 FC values for each soil layer was used, simulated yield and biomass were also acceptable with a RMSE of 438 kg ha–1 for yield and 1627 kg ha–1 for biomass. Therefore, when there are large numbers of FC measurements, an average value might be sufficient for model inputs. However, when the ranges of FC measurements were known for each soil layer, a sampled distribution of FCs using the Latin hypercube sampling (LHS) might be used for model inputs.
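The Latin hypercube sampling (LHS) suggested for sampling FC distributions stratifies each variable's range into n equal intervals and draws once per interval, so even small samples cover the whole range. A minimal pure-Python sketch (the FC bounds below are hypothetical, not the study's measurements):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: each variable's range is split into
    n_samples equal strata, one draw per stratum, with the strata
    shuffled independently per variable."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples  # uniform within stratum
            samples[i][d] = lo + u * (hi - lo)
    return samples

# e.g. FC for three soil layers, each with its own measured range
fc = latin_hypercube(10, [(0.20, 0.30), (0.18, 0.28), (0.15, 0.25)])
```

Each of the 10 rows could then serve as one set of layer FC inputs for a model run, matching the abstract's suggestion of using a sampled FC distribution rather than a single mean.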

  12. Brief communication: Using averaged soil moisture estimates to improve the performances of a regional-scale landslide early warning system

    NASA Astrophysics Data System (ADS)

    Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Fanti, Riccardo; Casagli, Nicola

    2018-03-01

    We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. Another approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.
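The first approach, gating the rainfall thresholds on a minimum soil moisture, reduces to a two-condition check; a sketch with hypothetical parameter values (the system's actual thresholds are not given in the abstract):

```python
def landslide_alert(rain_intensity, rain_duration, soil_moisture,
                    sm_min=0.25, a=25.0, b=-0.55):
    """First approach sketched: a rainfall intensity-duration threshold
    I = a * D**b is evaluated only when mean soil moisture exceeds
    sm_min; drier conditions suppress the alarm entirely.
    All parameter values here are hypothetical."""
    if soil_moisture < sm_min:
        return False  # too dry for landslide triggering, skip threshold
    return rain_intensity > a * rain_duration ** b
```

Suppressing the rainfall check below `sm_min` is exactly the mechanism by which the back analysis reduced false alarms: rainfall that exceeds the threshold over dry soil no longer raises an alert.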

  13. Multi-documents summarization based on clustering of learning object using hierarchical clustering

    NASA Astrophysics Data System (ADS)

    Mustamiin, M.; Budi, I.; Santoso, H. B.

    2018-03-01

    Open Educational Resources (OER) portals provide teaching, learning and research resources that are in the public domain and freely accessible. Learning contents, or Learning Objects (LO), are granular and can be reused to construct new learning materials. LO ontology-based searching techniques can be used to search for LO in the Indonesian OER. In this research, LO from search results are used as ingredients to create new learning materials according to the topic searched by users. Summarization based on grouping of LO uses Hierarchical Agglomerative Clustering (HAC) with dependency on the context of the user's query, which achieves an average F-measure of 0.487, while summarization by K-Means achieves an average F-measure of only 0.336.

  14. Economic value of U.S. fossil fuel electricity health impacts.

    PubMed

    Machol, Ben; Rizk, Sarah

    2013-02-01

    Fossil fuel energy has several externalities not accounted for in the retail price, including associated adverse human health impacts, future costs from climate change, and other environmental damages. Here, we quantify the economic value of health impacts associated with PM(2.5) and PM(2.5) precursors (NO(x) and SO(2)) on a per kilowatt hour basis. We provide figures based on state electricity profiles, national averages and fossil fuel type. We find that the economic value of improved human health associated with avoiding emissions from fossil fuel electricity in the United States ranges from a low of $0.005-$0.013/kWh in California to a high of $0.41-$1.01/kWh in Maryland. When accounting for the adverse health impacts of imported electricity, the California figure increases to $0.03-$0.07/kWh. Nationally, the average economic value of health impacts associated with fossil fuel usage is $0.14-$0.35/kWh. For coal, oil, and natural gas, respectively, associated economic values of health impacts are $0.19-$0.45/kWh, $0.08-$0.19/kWh, and $0.01-$0.02/kWh. For coal and oil, these costs are larger than the typical retail price of electricity, demonstrating the magnitude of the externality. When the economic value of health impacts resulting from air emissions is considered, our analysis suggests that on average, U.S. consumers of electricity should be willing to pay $0.24-$0.45/kWh for alternatives such as energy efficiency investments or emission-free renewable sources that avoid fossil fuel combustion. The economic value of health impacts is approximately an order of magnitude larger than estimates of the social cost of carbon for fossil fuel electricity. In total, we estimate that the economic value of health impacts from fossil fuel electricity in the United States is $361.7-886.5 billion annually, representing 2.5-6.0% of the national GDP. Published by Elsevier Ltd.

  15. 78 FR 1835 - Hand Trucks and Certain Parts Thereof From the People's Republic of China: Preliminary Results of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-09

    ... Integration (Xiamen) Co., Ltd. (New-Tec) were below normal value (NV). In addition, we are not rescinding this... November 30, 2011: Weighted- Manufacturer/exporter average margin (percent) New-Tec Integration (Xiamen) Co...

  16. Near Real-Time Event Detection & Prediction Using Intelligent Software Agents

    DTIC Science & Technology

    2006-03-01

    value was 0.06743. Multiple autoregressive integrated moving average (ARIMA) models were then built to see if the raw data, differenced data, or...slight improvement. The best adjusted r^2 value was found to be 0.1814. Successful results were not expected from linear or ARIMA-based modelling ...appear, 2005. [63] Mora-Lopez, L., Mora, J., Morales-Bueno, R., et al. Modelling time series of climatic parameters with probabilistic finite

  17. Cost-benefit analysis of the 55-mph speed limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forester, T.H.; McNown, R.F.; Singell, L.D.

    1984-01-01

    This article presents the results of an empirical study which estimates the reduction in fatalities resulting from the imposed 55-mph speed limit. Time series data for the US from 1952 to 1979 are employed in a regression model capturing the relation between fatalities, average speed, variability of speed, and the speed limit. Also discussed are alternative approaches to valuing human life and the value of time. A series of benefit-cost ratios is provided based on alternative measures of the benefits and costs of life saving. The paper concludes that the 55-mph speed limit is not cost efficient unless additional time on the highway is valued significantly below levels estimated in the best research on the value of time. 12 references, 1 table.

  18. Limitations of signal averaging due to temporal correlation in laser remote-sensing measurements.

    PubMed

    Menyuk, N; Killinger, D K; Menyuk, C R

    1982-09-15

    Laser remote sensing involves the measurement of laser-beam transmission through the atmosphere and is subject to uncertainties caused by strong fluctuations due primarily to speckle, glint, and atmospheric-turbulence effects. These uncertainties are generally reduced by taking average values of increasing numbers of measurements. An experiment was carried out to directly measure the effect of signal averaging on back-scattered laser return signals from a diffusely reflecting target using a direct-detection differential-absorption lidar (DIAL) system. The improvement in accuracy obtained by averaging over increasing numbers of data points was found to be smaller than that predicted for independent measurements. The experimental results are shown to be in excellent agreement with a theoretical analysis which considers the effect of temporal correlation. The analysis indicates that small but long-term temporal correlation severely limits the improvement available through signal averaging.
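    The limitation described above can be illustrated with the standard expression for the variance of the mean of N equally spaced samples whose correlation decays as rho^k (an AR(1)-type sketch with hypothetical parameters, not the paper's measured correlations):

```python
# Variance of the mean of n samples with exponential temporal correlation
# rho_k = rho**k (AR(1) model; parameters are illustrative, not measured):
#   Var(mean) = (sigma^2 / n) * [1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * rho**k]
def mean_variance_factor(n, rho):
    """Return Var(mean)/sigma^2 for n equally spaced correlated samples."""
    s = sum((1 - k / n) * rho**k for k in range(1, n))
    return (1 + 2 * s) / n

n = 1000
uncorrelated = 1 / n                       # classic 1/N reduction
correlated = mean_variance_factor(n, 0.9)  # even modest correlation hurts

print(f"1/N = {uncorrelated:.2e}, correlated factor = {correlated:.2e}")
```

    With rho = 0.9 the variance of the mean is roughly 19 times larger than the independent-sample prediction, which is the qualitative effect the experiment observed.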

  19. [Telemetry data based on comparative study of physical activity in patients with resynchronization device].

    PubMed

    Melczer, Csaba; Melczer, László; Goják, Ilona; Kónyi, Attila; Szabados, Sándor; Raposa, L Bence; Oláh, András; Ács, Pongrác

    2017-05-01

    The effect of regular physical activity on health is widely recognized, and several studies have shown its key importance for heart patients. The present study aimed to define physical activity (PA) percentage values and to convert them into metabolic equivalent (MET) values, which describe oxygen consumption during physical activity. A total of seventeen patients with heart disease (3 females and 14 males; age 57.35 ± 9.54 yrs; body mass 98.71 ± 9.89 kg; average BMI 36.69 ± 3.67) were recruited into the study. The values measured by Cardiac Resynchronisation Therapy devices and by external accelerometers (ActiGraph GT3X+) were studied over a 7-day period. Using the two sets of values describing physical performance, a linear regression was calculated, providing a mathematical equation by which the physical activity percentage values were converted into MET values. During the 6-minute walk test (6MWT) the patients achieved an average of 416.6 ± 48.2 m. During the 6MWT the measured values averaged 1.85 ± 0.18 METs, while the weekly MET values averaged 1.12 ± 0.06. This clearly shows that the test is a challenge for the patients compared to their regular daily physical activity levels. With our method, based on the values received from the physical activity sensor implanted in the resynchronisation devices, changes in patients' health status could be monitored telemetrically with the assistance of the implanted electronic device. Orv Hetil. 2017; 158(17): 748-753.
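    The PA%-to-MET conversion described above is an ordinary least-squares line; a minimal sketch with invented paired values (the patient-level data are not given in the abstract):

```python
# Fit MET = a * PA% + b by least squares, then use it to convert new PA%
# readings. The paired values below are invented for illustration only.
pa_percent = [2.0, 4.0, 5.5, 7.0, 9.0, 12.0]   # device physical-activity %
met = [1.02, 1.10, 1.15, 1.22, 1.31, 1.45]     # accelerometer METs

n = len(pa_percent)
mx = sum(pa_percent) / n
my = sum(met) / n
a = sum((x - mx) * (y - my) for x, y in zip(pa_percent, met)) / \
    sum((x - mx) ** 2 for x in pa_percent)
b = my - a * mx

def pa_to_met(pa):
    """Convert a device PA% reading into an estimated MET value."""
    return a * pa + b

print(f"MET = {a:.4f} * PA% + {b:.4f}")
```

    Once fitted on simultaneous device/accelerometer recordings, the equation lets the implanted sensor's PA% stream be reported directly in METs.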

  20. Seasonal comparison of two spatially distributed evapotranspiration mapping methods

    NASA Astrophysics Data System (ADS)

    Kisfaludi, Balázs; Csáki, Péter; Péterfalvi, József; Primusz, Péter

    2017-04-01

    More rainfall is disposed of through evapotranspiration (ET) on a global scale than through runoff and storage combined. In Hungary, about 90% of the precipitation returns to the atmosphere through evapotranspiration and only 10% goes to surface runoff and groundwater recharge. Evapotranspiration is therefore a very important element of the water balance and a suitable parameter for the calibration of hydrological models. Monthly ET values of two MODIS-data-based ET products were compared for the area of Hungary and for the vegetation period of the year 2008. The differences were assessed by land cover types and by elevation zones. One ET map was MOD16, which aims at global coverage and is provided by the MODIS Global Evaporation Project. The other method, called CREMAP, was developed at the Budapest University of Technology and Economics for regional-scale ET mapping. CREMAP was validated for the area of Hungary with good results, but ET maps were produced only for the period 2000-2008. The aim of this research was to evaluate the performance of the MOD16 product against the CREMAP method. The average difference between the two products was highest during summer, with CREMAP estimating higher ET values by about 25 mm/month. In spring and autumn, MOD16 ET values were higher by an average of 6 mm/month. The differences by land cover types showed a seasonal pattern similar to the average differences, and they correlated strongly with each other. Practically the same difference values were calculated for arable lands and forests, which together cover nearly 75% of the area of the country. It can therefore be said that the seasonal changes had the same effect on the two methods' ET estimates in each land cover type. The analysis by elevation zones showed that at elevations lower than 200 m AMSL the trends of the difference values were similar to the average differences. The correlation between the values of these elevation zones was also strong. 
However, a weaker correlation was found between the values of the elevation zones below and above 200 m AMSL. Therefore, elevation has some effect on the differences between the ET values of the CREMAP and MOD16 methods. This research was supported by the "Agroclimate.2" (VKSZ_12-1-2013-0034) EU-national joint funded research project.

  1. [Clinical use of continuous glucose monitoring system in gestational diabetes mellitus and type 2 diabetes complicated with pregnancy].

    PubMed

    Song, Yilin; Yang, Huixia

    2014-08-01

    To compare the clinical use of the continuous glucose monitoring system (CGMS) and self-monitoring of blood glucose (SMBG) when monitoring the blood glucose level of patients with gestational diabetes mellitus (GDM) or type 2 diabetes mellitus (DM) complicated with pregnancy. A total of 99 patients with GDM (n = 70) or type 2 DM complicated with pregnancy (n = 29), either hospitalized at or attending the outpatient clinic of Peking University First Hospital, were recruited from Aug 2012 to Apr 2013. The CGMS was used to monitor their blood glucose level over a 72-hour period, while SMBG was also taken seven times daily. The correlations between these blood glucose levels and the patients' glycosylated hemoglobin (HbA1c) levels were analyzed by comparing the average, maximum, and minimum blood glucose values and the times at which these extreme values appeared under the two monitoring methods; the amount of insulin used was recorded as well. (1) The maximum, minimum and average blood glucose values in the GDM group were (8.7 ± 1.2), (4.5 ± 0.6) and (6.3 ± 0.6) mmol/L for SMBG vs. (10.1 ± 1.7), (3.1 ± 0.7) and (6.0 ± 0.6) mmol/L for CGMS. These values in the DM group were (10.1 ± 2.2), (4.5 ± 1.0) and (6.9 ± 1.1) mmol/L for SMBG vs. (12.2 ± 2.6), (2.8 ± 0.8) and (6.6 ± 1.1) mmol/L for CGMS. Between the two methods, the maximum and average values of the two groups showed significant differences (P < 0.01) while the minimum values showed no significant differences (P > 0.05). (2) In the GDM group, the average blood glucose values of CGMS and SMBG were significantly correlated (r = 0.864, P < 0.01). The maximum values presented the same result (r = 0.734, P < 0.01). No correlation was found between the minimum values of CGMS and SMBG (r = 0.138, P > 0.05). In the DM group, the average values of the two methods were significantly correlated (r = 0.962, P < 0.01) and the maximum values showed the same result (r = 0.831, P < 0.01); a correlation was also observed for the minimum values (r = 0.460, P < 0.05). 
(3) There was a significant correlation between the average value of CGMS and the HbA1c level (r = 0.400, P < 0.01), and the average value of SMBG and the HbA1c level were also correlated (r = 0.031, P < 0.05) in the GDM group; the average values of CGMS (r = 0.695, P < 0.01) and SMBG (r = 0.673, P < 0.01) were both significantly correlated with the HbA1c level in the DM group. (4) In the GDM group, 37% (26/70) of the minimum SMBG values appeared 30 minutes before breakfast, while 34% (24/70) appeared 30 minutes before lunch; 86% (60/70) of the maximum SMBG values were evenly distributed 2 hours after each of the three meals. In the DM group, 41% (12/29) of the minimum SMBG values appeared 30 minutes before lunch, while 21% (6/29) and 14% (4/29) appeared 30 minutes before breakfast and dinner, respectively; about 30% of the maximum SMBG values appeared 2 hours after each of the three meals. (5) In the GDM group, 23% (16/70) of the minimum CGMS values occurred between 0:00-2:59 am, and most of the other minimum CGMS values were evenly distributed over the rest of the day, except for the 3% (2/70) found during 18:00-20:59 pm. 43% (30/70) of the maximum CGMS values appeared during 6:00-8:59 am, only 1% (1/70) and 3% (2/70) appeared during 0:00-2:59 am and 21:00-23:59 pm, and the rest were evenly distributed over the other times of the day. In the DM group, 34% (10/29) of the minimum CGMS values were found during 0:00-2:59 am and 14% (4/29) during 9:00-11:59 am and 15:00-17:59 pm; 45% (13/29) of the maximum CGMS values appeared during 6:00-8:59 am, none during 21:00-23:59 pm, 0:00-2:59 am or 3:00-5:59 am, and the rest were evenly distributed over the other times of the day. (6) 64% (45/70) of the patients in the GDM group did not require insulin treatment, while 36% (25/70) did. 
Of the patients who received insulin treatment, 64% (16/25) adjusted the insulin dosage according to their blood glucose levels after CGMS. In the DM group, 14% (4/29) did not receive insulin treatment; of the others who did (86%, 25/29), 60% (15/25) adjusted the insulin dosage according to their blood glucose levels after CGMS. Both CGMS and SMBG could correctly reflect patients' blood glucose levels. It was more difficult to control blood glucose levels in patients with type 2 DM complicated with pregnancy than in GDM patients. Compared with SMBG, CGMS could detect postprandial hyperglycemia and nocturnal hypoglycemia more effectively.
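    The comparisons above rest on Pearson correlations between paired CGMS and SMBG readings; a self-contained sketch with invented readings:

```python
import math

# Pearson correlation between paired glucose readings (values invented
# for illustration; the study's patient data are not in the abstract).
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

smbg = [5.8, 6.1, 6.4, 6.0, 6.7, 6.3]   # mmol/L, invented
cgms = [5.6, 6.0, 6.2, 5.9, 6.5, 6.1]   # mmol/L, invented
r = pearson_r(smbg, cgms)
print(f"r = {r:.3f}")
```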

  2. Mode 1 and Mode 2 Analysis of Graphite/Epoxy Composites Using Double Cantilever Beam and End-Notched Flexure Tests

    NASA Technical Reports Server (NTRS)

    Hufnagel, Kathleen P.

    1995-01-01

    The critical strain energy release rates associated with debonding of the adhesive bondlines in graphite/epoxy IM6/3501-6 interlaminar fracture specimens were investigated. Two panels were manufactured for this investigation; however, Panel Two was laid up incorrectly. As a result, data collected from Panel Two serve no real purpose in this investigation. Double Cantilever Beam (DCB) specimens were used to determine the opening Mode I interlaminar fracture toughness, G1(sub c), of unidirectional fiber-reinforced composites. The five specimens tested from Panel One had an average value of 946.42 J/sq m for G1(sub c) with an acceptable coefficient of variation. The critical strain energy release rate, G2(sub c), for initiation of delamination under in-plane shear loading was investigated using the End-Notched Flexure (ENF) test. Four specimens were tested from Panel One and an average value of 584.98 J/sq m for G2(sub c) was calculated. Calculations from the DCB and ENF test results for Panel One represent typical values of G1(sub c) and G2(sub c) for adhesive debonding in the material studied in this investigation.
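    For context, DCB toughness values like those above are commonly reduced from load-displacement data with the simple-beam-theory expression G_Ic = 3Pδ/(2Ba); the sketch below uses invented inputs, not the report's measurements:

```python
# Simple-beam-theory data reduction for a DCB specimen:
#   G_Ic = 3 * P * delta / (2 * B * a)
# P: critical load (N), delta: opening displacement (m),
# B: specimen width (m), a: delamination length (m).
# The numbers below are invented for illustration only.
def g1c_beam_theory(P, delta, B, a):
    return 3.0 * P * delta / (2.0 * B * a)

G = g1c_beam_theory(P=60.0, delta=0.004, B=0.025, a=0.05)
print(f"G_Ic = {G:.1f} J/m^2")
```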

  3. Sizing procedures for sun-tracking PV system with batteries

    NASA Astrophysics Data System (ADS)

    Nezih Gerek, Ömer; Başaran Filik, Ümmühan; Filik, Tansu

    2017-11-01

    Deciding the optimum number of PV panels, wind turbines and batteries (i.e. a complete renewable energy system) for minimum cost and complete energy balance is a challenging and interesting problem. In the literature, rough data models or limited recorded data, together with low-resolution hourly averaged meteorological values, are used to test sizing strategies. In this study, active sun-tracking and fixed PV solar power generation values of ready-to-serve commercial products were recorded throughout 2015-2016. Simultaneously, several outdoor parameters (solar radiation, temperature, humidity, wind speed/direction, pressure) were recorded at high resolution. The hourly energy consumption values of a standard 4-person household, constructed on our campus in Eskisehir, Turkey, were also recorded for the same period. During sizing, novel parametric random-process models for wind speed, temperature, solar radiation, energy demand and electricity generation curves were developed, and it was observed that these models provide sizing results with lower loss-of-load probability (LLP) in Monte Carlo experiments that consider average and minimum performance cases. Furthermore, another novel cost optimization strategy was adopted to show that solar-tracking PV panels provide lower costs by enabling a reduced number of installed batteries. Results were verified over real recorded data.
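    The Monte Carlo sizing idea can be sketched as follows; the generation and demand models here are toy stand-ins (uniform random draws), not the paper's parametric random-process models:

```python
import random

# Monte Carlo estimate of loss-of-load probability (LLP) for a PV + battery
# system: simulate hourly battery state of charge and count deficit hours.
# All parameters below are illustrative, not the paper's values.
def estimate_llp(battery_kwh, pv_scale, hours=24 * 365, seed=1):
    rng = random.Random(seed)
    soc = battery_kwh            # start fully charged (kWh stored)
    deficit_hours = 0
    for h in range(hours):
        daylight = 6 <= h % 24 <= 18
        gen = pv_scale * rng.uniform(0.0, 1.0) if daylight else 0.0
        demand = rng.uniform(0.3, 1.2)       # household load, kWh/h
        soc = min(battery_kwh, soc + gen - demand)
        if soc < 0:                          # demand could not be met
            deficit_hours += 1
            soc = 0.0
    return deficit_hours / hours

llp_small = estimate_llp(battery_kwh=5.0, pv_scale=1.0)
llp_large = estimate_llp(battery_kwh=20.0, pv_scale=4.0)
print(f"LLP small system: {llp_small:.3f}, large system: {llp_large:.3f}")
```

    Sizing then amounts to searching over (panel count, battery count) for the cheapest configuration whose estimated LLP falls below a target.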

  4. [Tracing Sources of Sulfate Aerosol in Nanjing Northern Suburb Using Sulfur and Oxygen Isotopes].

    PubMed

    Wei, Ying; Guo, Zhao-bing; Ge, Xin; Zhu, Sheng-nan; Jiang, Wen-juan; Shi, Lei; Chen, Shu

    2015-04-01

    To trace the sources of sulfate contributing to atmospheric aerosol, PM2.5 samples for isotopic analysis were collected in the northern suburb of Nanjing during January 2014. The sulfur and oxygen isotopic compositions of sulfate from these samples were determined by EA-IRMS. Source identification and apportionment were carried out using stable isotopic and chemical evidence, combined with the absolute principal component analysis (APCA) method. The Δ34S values of aerosol sulfate ranged from 2.7 to 6.4 per thousand, with an average of 5.0 ± 0.9 per thousand, while the Δ18O values ranged from 10.6 to 16.1 per thousand, with an average of 12.5 ± 1.37 per thousand. In conjunction with air mass trajectories, the results suggested that aerosol sulfates were dominated by local anthropogenic sulfate, followed by contributions from long-distance transported sulfate. There was a minor effect of some other low-Δ34S sulfates, which might be expected from biogenic sources. Absolute principal component analysis results showed that the contributions of anthropogenic sulfate and long-distance transported sulfate were 46.74% and 31.54%, respectively.

  5. Modelling audiovisual integration of affect from videos and music.

    PubMed

    Gao, Chuanji; Wedell, Douglas H; Kim, Jongwan; Weber, Christine E; Shinkareva, Svetlana V

    2018-05-01

    Two experiments examined how affective values from visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five parameter differential weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high arousal combinations and no interaction effects. This pattern was explained by a three parameter constant weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
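    The differential-weight averaging model has the general form response = Σ w_i s_i / Σ w_i, including an initial-state term; the parameter values below are hypothetical, not the paper's fitted five-parameter estimates:

```python
# Information-integration (averaging) model for bimodal affect ratings:
#   response = (w0*s0 + wA*valA + wV*valV) / (w0 + wA + wV)
# w0, s0: weight and value of the initial state; wA, wV: modality weights
# (video weighted more heavily, per the paper's qualitative finding).
# All numeric parameters here are hypothetical illustrations.
def averaged_response(val_audio, val_video, wA=1.0, wV=1.6, w0=0.5, s0=0.0):
    return (w0 * s0 + wA * val_audio + wV * val_video) / (w0 + wA + wV)

# Congruency effect: two congruent negative components yield a more extreme
# state than the video alone (unimodal = initial state plus one channel).
both = averaged_response(-2.0, -2.0)
video_only = (0.5 * 0.0 + 1.6 * -2.0) / (0.5 + 1.6)
print(f"bimodal {both:.2f} vs video-only {video_only:.2f}")
```

    Because averaging divides by the total weight, adding a congruent second channel pulls the response further from the neutral initial state, reproducing the congruency effect described above.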

  6. Optimal size for heating efficiency of superparamagnetic dextran-coated magnetite nanoparticles for application in magnetic fluid hyperthermia

    NASA Astrophysics Data System (ADS)

    Shaterabadi, Zhila; Nabiyouni, Gholamreza; Soleymani, Meysam

    2018-06-01

    Dextran-coated magnetite (Fe3O4) nanoparticles with average particle sizes of 4 and 19 nm were synthesized through in situ and semi-two-step co-precipitation methods, respectively. The experimental results confirm the formation of a pure magnetite phase as well as the presence of a dextran layer on the surface of the modified magnetite nanoparticles. The results also reveal that both samples exhibit superparamagnetic behavior. Furthermore, calorimetric measurements show that the dextran-coated Fe3O4 nanoparticles with an average size of 4 nm cannot produce any appreciable heat under a biologically safe alternating magnetic field used in hyperthermia therapy, whereas the larger ones (average size of 19 nm) are able to raise the temperature of their surrounding medium to above the therapeutic range. In addition, measured specific absorption rate (SAR) values confirm that magnetite nanoparticles with an average size of 19 nm are excellent candidates for application in magnetic hyperthermia therapy.
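    SAR values such as those mentioned above are typically obtained calorimetrically from the initial heating slope, SAR = C·m_sample·(dT/dt)/m_magnetite; the numbers below are illustrative, not measurements from this work:

```python
# Calorimetric specific absorption rate (SAR) from the initial-slope method.
# All values below are invented for illustration only.
c_water = 4185.0       # J/(kg K), specific heat of the carrier fluid
m_sample = 1.0e-3      # kg of ferrofluid sample
m_fe3o4 = 5.0e-6       # kg of magnetite contained in the sample
dT_dt = 0.02           # K/s, initial heating slope under the AC field

# SAR in W per kg of magnetic material
sar = c_water * m_sample * dT_dt / m_fe3o4
print(f"SAR = {sar:.0f} W/kg of Fe3O4")
```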

  7. Likelihood Ratios for Glaucoma Diagnosis Using Spectral Domain Optical Coherence Tomography

    PubMed Central

    Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M.; Weinreb, Robert N.; Medeiros, Felipe A.

    2014-01-01

    Purpose: To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral domain optical coherence tomography (spectral-domain OCT). Design: Observational cohort study. Methods: 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the receiver operating characteristic (ROC) curve. Results: Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive LRs, i.e., LRs greater than 1, whereas RNFL thickness values higher than 86 μm were associated with negative LRs, i.e., LRs smaller than 1. A modified Fagan nomogram was provided to assist calculation of the post-test probability of disease from the calculated likelihood ratios and the pretest probability of disease. Conclusion: The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision-making. PMID:23972303
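    The Fagan-nomogram step described above is Bayes' rule in odds form; a sketch with a hypothetical LR value (the paper derives its LRs from ROC-curve tangents):

```python
# Post-test probability via Bayes' rule in odds form (the calculation a
# Fagan nomogram performs graphically):
#   post-test odds = pre-test odds * LR
def post_test_probability(pretest_p, lr):
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)

# e.g. a thin RNFL measurement with a hypothetical LR of 8 and a 20%
# pre-test probability of glaucoma:
p = post_test_probability(0.20, 8.0)
print(f"post-test probability = {p:.2f}")
```

    An LR of 1 leaves the probability unchanged, which is why values above and below 86 μm in the study map to LRs greater and smaller than 1, respectively.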

  8. Scattering of electromagnetic wave by the layer with one-dimensional random inhomogeneities

    NASA Astrophysics Data System (ADS)

    Kogan, Lev; Zaboronkova, Tatiana; Grigoriev, Gennadii., IV.

    A great deal of attention has been paid to the study of the probability characteristics of electromagnetic waves scattered by one-dimensional fluctuations of medium dielectric permittivity. However, the problem of determining the probability density and average intensity of the field inside a stochastically inhomogeneous medium with arbitrary extension of fluctuations has not been considered yet. It is the purpose of the present report to find and to analyze the indicated functions for a plane electromagnetic wave scattered by a layer with one-dimensional fluctuations of permittivity. We assumed that the length and the amplitude of individual fluctuations, as well as the interval between them, are random quantities. All of the indicated fluctuation parameters are supposed to be independent random values possessing Gaussian distributions. We considered the stationary-in-time cases of both small-scale and large-scale rarefied inhomogeneities. Mathematically, the problem can be reduced to the solution of a Fredholm integral equation of the second kind for the Hertz potential (U). Using the decomposition of the field into a series of multiply scattered waves, we obtained an expression for the probability density of the field of the plane wave and determined the moments of the scattered field. We have shown that all odd moments of the centered field (U − ⟨U⟩) are equal to zero and the even moments depend on the intensity. It was found that the probability density of the field possesses a Gaussian distribution. The average field is small compared with the standard deviation of the scattered field for all considered cases of inhomogeneities. The average intensity of the field is of the order of the standard deviation of the field-intensity fluctuations and decreases as the inhomogeneity length increases in the case of small-scale inhomogeneities. The behavior of the average intensity is more complicated in the case of large-scale medium inhomogeneities. 
The average intensity is an oscillating function of the average fluctuation length if the standard deviation of the fluctuations of the inhomogeneity length is greater than the wavelength. When the standard deviation of the fluctuations of the medium inhomogeneity extension is smaller than the wavelength, the average intensity depends weakly on the average fluctuation extension. The obtained results may be used for analysis of electromagnetic wave propagation in media with fluctuating parameters caused by such factors as the leaves of trees, cumulus clouds, internal gravity waves with a chaotic phase, etc. Acknowledgment: This work was supported by the Russian Foundation for Basic Research (projects 08-02-97026 and 09-05-00450).

  9. Variability of ionospheric TEC during solar and geomagnetic minima (2008 and 2009): external high speed stream drivers

    NASA Astrophysics Data System (ADS)

    Verkhoglyadova, O. P.; Tsurutani, B. T.; Mannucci, A. J.; Mlynczak, M. G.; Hunt, L. A.; Runge, T.

    2013-02-01

    We study solar wind-ionosphere coupling through the late declining phase/solar minimum and geomagnetic minimum phases during the last solar cycle (SC23) - 2008 and 2009. This interval was characterized by sequences of high-speed solar wind streams (HSSs). The concomitant geomagnetic response was moderate geomagnetic storms and high-intensity, long-duration continuous auroral activity (HILDCAA) events. The JPL Global Ionospheric Map (GIM) software and the GPS total electron content (TEC) database were used to calculate the vertical TEC (VTEC) and estimate daily averaged values in separate latitude and local time ranges. Our results show distinct low- and mid-latitude VTEC responses to HSSs during this interval, with the low-latitude daytime daily averaged values increasing by up to 33 TECU (annual average of ~20 TECU) near local noon (12:00 to 14:00 LT) in 2008. In 2009, during the minimum geomagnetic activity (MGA) interval, the response to HSSs was a maximum of ~30 TECU increases with a slightly lower average value than in 2008. There was a weak nighttime ionospheric response to the HSSs. A well-studied solar cycle declining phase interval, 10-22 October 2003, was analyzed for comparative purposes, with daytime low-latitude VTEC peak values of up to ~58 TECU (event average of ~55 TECU). The ionospheric VTEC changes during 2008-2009 were similar but ~60% less intense on average. There is evidence of correlations of filtered daily averaged VTEC data with the Ap index and solar wind speed. We use the infrared NO and CO2 emission data obtained with SABER on TIMED as a proxy for the radiation balance of the thermosphere. It is shown that infrared emissions increase during HSS events, possibly due to increased energy input into the auroral region associated with HILDCAAs. 
The 2008-2009 HSS intervals were ~85% less intense than the 2003 early declining phase event, with annual averages of daily infrared NO emission power of ~3.3 × 10^10 W and ~2.7 × 10^10 W in 2008 and 2009, respectively. The roles of disturbance dynamos caused by high-latitude winds (due to particle precipitation and Joule heating in the auroral zones) and of prompt penetrating electric fields (PPEFs) in the solar wind-ionosphere coupling during these intervals are discussed. A correlation between geoeffective interplanetary electric field components and HSS intervals is shown. Both PPEF and disturbance dynamo mechanisms could play important roles in solar wind-ionosphere coupling during prolonged (up to days) external driving within HILDCAA intervals.

  10. Preliminary evaluation of magnitude and frequency of floods in selected small drainage basins in Ohio

    USGS Publications Warehouse

    Kolva, J.R.

    1985-01-01

    A previous study of flood magnitudes and frequencies in Ohio concluded that existing regionalized flood equations may not be adequate for estimating peak flows in small basins that are heavily forested, surface mined, or located in northwestern Ohio. In order to provide a larger data base for improving estimation of flood peaks in these basins, 30 crest-stage gages were installed in 1977, in cooperation with the Ohio Department of Transportation, to provide a 10-year record of flood data. The study area consists of two distinct parts: northwestern Ohio, which contains 8 sites, and southern and eastern Ohio, which contains 22 sites in small forested or surface-mined drainage basins. Basin characteristics were determined for all 30 sites for 1978 conditions. Annual peaks were recorded or estimated for all 30 sites for water years 1978-82; an additional year of peak discharges was available at four sites. The 2-year (Q2) and 5-year (Q5) flood peaks were determined from these annual peaks. Q2 and Q5 values also were calculated using published regionalized regression equations for Ohio. The ratios of observed to predicted 2-year (R2) and 5-year (R5) values were then calculated. This study found that observed flood peaks are lower than estimated peaks by a significant amount in surface-mined basins. The average R2 values are 0.51 for basins with more than 40 percent surface-mined land and 0.68 for sites with any surface-mined land. The average R5 value is 0.55 for sites with more than 40 percent surface-mined land and 0.61 for sites with any surface-mined land. Estimated flood peaks for forested basins agree fairly well with the observed values. R2 values average 0.87 for sites with 20 percent or more forested land but no surface-mined land, and R5 values average 0.96. If all sites with more than 20 percent forested land and some surface-mined land are considered, the R2 values average 0.86 and the R5 values average 0.82.

  11. Computational estimation of magnetically induced electric fields in a rotating head

    NASA Astrophysics Data System (ADS)

    Ilvonen, Sami; Laakso, Ilkka

    2009-01-01

    Change in a magnetic field or, similarly, movement in a strong static magnetic field induces electric fields in human tissues, which could potentially cause harmful effects. In this paper, the fields induced by different rotational movements of a head in a strong homogeneous magnetic field are computed numerically. Average field magnitudes near the retinas and inner ears are studied in order to gain insight into the causes of phosphenes and vertigo-like effects, which are associated with extremely low-frequency (ELF) magnetic fields. The induced electric fields are calculated in four different anatomically realistic head models using an efficient finite-element method (FEM) solver. The results are compared with the basic restriction limits of IEEE and ICNIRP. Under rotational movement of the head, with a magnetic flux rate of change of 1 T s-1, the maximum IEEE-averaged electric field and maximum ICNIRP-averaged current density were 337 mV m-1 and 8.84 mA m-2, respectively. The limits set by IEEE seem significantly stricter than those by ICNIRP. The results show that a magnetic flux rate of change of 1 T s-1 may induce electric fields in the range of 50 mV m-1 near the retinas, and possibly even larger values near the inner ears. These results provide information for approximating the threshold electric field values for phosphenes and vertigo-like effects.

  12. Georgia fishery study: implications for dose calculations. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turcotte, M.D.S.

    Fish consumption will contribute a major portion of the estimated individual and population doses from L-Reactor liquid releases and Cs-137 remobilization in Steel Creek. It is therefore important that the values for fish consumption used in dose calculations be as realistic as possible. Since publication of the L-Reactor Environmental Information Document (EID), data have become available on sport fishing in the Savannah River. These data provide SRP with site-specific sport fish harvest and consumption values for use in dose calculations. The Georgia fishery data support the total population fish consumption and calculated dose reported in the EID. The data indicate, however, that both the EID average and maximum individual fish consumption have been underestimated, although each to a different degree. The average fish consumption value used in the EID is approximately 3% below the lower limit of the fish consumption range calculated using the Georgia data. Maximum fish consumption in the EID has been underestimated by approximately 60%, and doses to the maximum individual should also be recalculated. Future dose calculations should utilize an average adult fish consumption value of 11.3 kg/yr and a maximum adult fish consumption value of 34 kg/yr. Consumption values for the teen and child age groups should be increased proportionally: (1) teen average = 8.5, maximum = 25.9 kg/yr; and (2) child average = 3.6, maximum = 11.2 kg/yr. 8 refs.

  13. Design of e-learning with a flipped learning model to improve understanding of the basic concepts of the loop control structure

    NASA Astrophysics Data System (ADS)

    Handayani, D. P.; Sutarno, H.; Wihardi, Y.

    2018-05-01

    This study aimed to design and build e-learning with a flipped classroom model to improve SMK students' understanding of concepts in the basic programming subject. This research and development study obtained data from a survey questionnaire given to class X RPL students at SMK Negeri 2 Bandung and from interviews with an RPL productive teacher. Data were also obtained from expert validation questionnaires and from students' assessments of the e-learning with the flipped classroom model, as well as from a multiple-choice test used to measure improvements in conceptual understanding. The results of this research are: 1) the developed e-learning with a flipped classroom model was considered good and worthy of use, with average ratings of 86.3% from media experts and 85.5% from subject matter experts, while students rated the e-learning with the flipped classroom model as very good, with a rating of 79.15%; 2) the e-learning with a flipped classroom model showed an increase in the average score from 26.67 on the pre-test, before using the e-learning, to 63.37 on the post-test after using it; this is strengthened by a gain-index calculation showing a 50% improvement in students' conceptual understanding, which falls in the moderate category.
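    The gain index reported in this record can be reproduced with Hake's normalized gain, a standard way of scoring pre-/post-test improvement. The record does not name the exact formula used, so this is a plausible reconstruction under the assumption of a maximum score of 100, not the authors' own computation:

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: fraction of the possible improvement achieved."""
    return (post - pre) / (max_score - pre)

# Record's averages: pre-test 26.67, post-test 63.37 (out of an assumed 100)
g = normalized_gain(26.67, 63.37)  # ≈ 0.50, the 'moderate' band (0.3 <= g < 0.7)
```

    A gain of about 0.50 matches the record's "50% with moderate criteria".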

  14. Lear jet boundary layer/shear layer laser propagation experiments

    NASA Technical Reports Server (NTRS)

    Gilbert, K.

    1980-01-01

    Optical degradations of aircraft turbulent boundary layers with shear layers generated by aerodynamic fences are analyzed. A collimated 2.5 cm diameter helium-neon laser (0.63 microns) traversed the approximate 5 cm thick natural aircraft boundary layer in double pass via a reflective airfoil. In addition, several flights examined shear layer-induced optical degradation. Flight altitudes ranged from 1.5 to 12 km, while Mach numbers were varied from 0.3 to 0.8. Average line spread function (LSF) and Modulation Transfer Function (MTF) data were obtained by averaging a large number of tilt-removed curves. Fourier transforming the resulting average MTF yields an LSF, thus affording a direct comparison of the two optical measurements. Agreement was good for the aerodynamic fence arrangement, but only fair in the case of a turbulent boundary layer. Values of phase variance inferred from the LSF instrument for a single pass through the random flow and corrected for a large aperture ranged from 0.08 to 0.11 waves (lambda = .63 microns) for the boundary layer. Corresponding values for the fence vary from 0.08 to 0.16 waves. Extrapolation of these values to 10.6 microns suggests negligible degradation for a CO2 laser transmitted through a 5 cm thick, subsonic turbulent boundary layer.

  15. Effects of Temperature and Strain Rate on Tensile Deformation Behavior of 9Cr-0.5Mo-1.8W-VNb Ferritic Heat-Resistant Steel

    NASA Astrophysics Data System (ADS)

    Guo, Xiaofeng; Weng, Xiaoxiang; Jiang, Yong; Gong, Jianming

    2017-09-01

    A series of uniaxial tensile tests was carried out at different strain rates and temperatures to investigate the effects of temperature and strain rate on the tensile deformation behavior of P92 steel. In the temperature range of 30-700 °C, the variations of flow stress, average work-hardening rate, tensile strength and ductility with temperature all show three temperature regimes. At intermediate temperatures, the material exhibited serrated flow behavior; the peak in flow stress, the maximum in average work-hardening rate, and the abnormal variations in tensile strength and ductility indicate the occurrence of DSA, whereas the sharp decrease in flow stress, average work-hardening rate and strength values, and the remarkable increase in ductility values with increasing temperature from 450 to 700 °C imply that dynamic recovery plays a dominant role in this regime. Additionally, for temperatures ranging from 550 to 650 °C, a significant decrease in flow stress values is observed with decreasing strain rate. This phenomenon suggests that strain rate has a strong influence on flow stress. Based on the experimental results above, an Arrhenius-type constitutive equation is proposed to predict the flow stress.

  16. Climatic irregular staircases: generalized acceleration of global warming

    PubMed Central

    De Saedeleer, Bernard

    2016-01-01

    Global warming rates mentioned in the literature are often restricted to a couple of arbitrary periods of time, or to isolated values of the starting year, lacking a global view. In this study, we perform on the contrary an exhaustive parametric analysis of the NASA GISS LOTI data, and also of the HadCRUT4 data. The starting year systematically varies between 1880 and 2002, and the averaging period from 5 to 30 yr, not only decades; the ending year also varies. In this way, we uncover a whole unexplored space of values for the global warming rate, and access the full picture. Additionally, stairstep averaging and linear least-squares fitting to determine climatic trends have so far been mutually exclusive. We propose here an original hybrid method which combines both approaches in order to derive a new type of climatic trend. We find that there is an overall acceleration of the global warming whatever the value of the averaging period, and that 99.9% of the 3029 Earth's climatic irregular staircases are rising. Graphical evidence is also given that choosing an El Niño year as starting year gives lower global warming rates, except if there is a volcanic cooling in parallel. Our rates agree with and generalize several results mentioned in the literature. PMID:26813867
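    The hybrid method is described only qualitatively in the abstract. A minimal sketch of one way to combine stairstep averaging with a linear least-squares fit (block length, data shape, and function name are illustrative choices, not taken from the paper):

```python
def staircase_trend(years, anomalies, period=10):
    """Average anomalies over consecutive blocks ('stairsteps'), then fit a
    linear least-squares trend through the block midpoints and block means.
    Returns the trend slope in anomaly units per year."""
    n = (len(years) // period) * period        # drop any incomplete trailing block
    mids, means = [], []
    for i in range(0, n, period):
        mids.append(sum(years[i:i + period]) / period)
        means.append(sum(anomalies[i:i + period]) / period)
    mx = sum(mids) / len(mids)
    my = sum(means) / len(means)
    slope = sum((x - mx) * (y - my) for x, y in zip(mids, means)) \
        / sum((x - mx) ** 2 for x in mids)
    return slope

# Synthetic check: a perfectly linear 0.01 degC/yr series recovers its slope
years = list(range(1880, 2000))
temps = [0.01 * (y - 1880) for y in years]
rate = staircase_trend(years, temps, period=10)
```

    Varying the starting year, ending year, and `period` over a grid, as the paper does, would then produce the full space of warming rates.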

  17. Short communication: Serum composition of milk subjected to re-equilibration by dialysis at different temperatures, after pH adjustments.

    PubMed

    Zhao, Zhengtao; Corredig, Milena

    2016-04-01

    The objective of this work was to investigate the properties of casein micelles after pH adjustment and their re-equilibration to the original pH and serum composition. Re-equilibration was carried out by dialyzing against skim milk at 2 different temperatures (4 or 22 °C). Turbidity, the average radius of the casein micelles, and the composition of the soluble phase were measured at different pH values, ranging between 5.5 and 8. Acidification led to the solubilization of colloidal calcium phosphate and decrease of the average radius of the micelles. With re-equilibration, casein dissociation occurred. In milk with pH values greater than 6.0, the average radius was recovered after re-equilibration. At pH values greater than neutral, an increase of the radius of casein micelles and increased dissociation of the casein were found. After re-equilibration, the radius of micelles and soluble protein in the serum decreased. The results were not affected by the temperature of re-equilibration. The changes to the calcium phosphate equilibrium and the dissociation of the micelles will have important consequences to the functionality of casein micelles. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. Climatic irregular staircases: generalized acceleration of global warming.

    PubMed

    De Saedeleer, Bernard

    2016-01-27

    Global warming rates mentioned in the literature are often restricted to a couple of arbitrary periods of time, or to isolated values of the starting year, lacking a global view. In this study, we perform on the contrary an exhaustive parametric analysis of the NASA GISS LOTI data, and also of the HadCRUT4 data. The starting year systematically varies between 1880 and 2002, and the averaging period from 5 to 30 yr, not only decades; the ending year also varies. In this way, we uncover a whole unexplored space of values for the global warming rate, and access the full picture. Additionally, stairstep averaging and linear least-squares fitting to determine climatic trends have so far been mutually exclusive. We propose here an original hybrid method which combines both approaches in order to derive a new type of climatic trend. We find that there is an overall acceleration of the global warming whatever the value of the averaging period, and that 99.9% of the 3029 Earth's climatic irregular staircases are rising. Graphical evidence is also given that choosing an El Niño year as starting year gives lower global warming rates, except if there is a volcanic cooling in parallel. Our rates agree with and generalize several results mentioned in the literature.

  19. Finite-Temperature Behavior of PdH x Elastic Constants Computed by Direct Molecular Dynamics

    DOE PAGES

    Zhou, X. W.; Heo, T. W.; Wood, B. C.; ...

    2017-05-30

    In this paper, robust time-averaged molecular dynamics has been developed to calculate finite-temperature elastic constants of a single crystal. We find that when the averaging time exceeds a certain threshold, the statistical errors in the calculated elastic constants become very small. We applied this method to compare the elastic constants of Pd and PdH 0.6 at representative low (10 K) and high (500 K) temperatures. The values predicted for Pd match reasonably well with ultrasonic experimental data at both temperatures. In contrast, the predicted elastic constants for PdH 0.6 only match well with ultrasonic data at 10 K; whereas, at 500 K, the predicted values are significantly lower. We hypothesize that at 500 K, the facile hydrogen diffusion in PdH 0.6 alters the speed of sound, resulting in significantly reduced values of predicted elastic constants as compared to the ultrasonic experimental data. Finally, literature mechanical testing experiments seem to support this hypothesis.

  20. An improved PRoPHET routing protocol in delay tolerant network.

    PubMed

    Han, Seung Deok; Chung, Yun Won

    2015-01-01

    In a delay tolerant network (DTN), an end-to-end path is not guaranteed and packets are delivered from a source node to a destination node via store-carry-forward based routing. In DTN, a source node or an intermediate node stores packets in its buffer and carries them while it moves around. These packets are forwarded to other nodes based on predefined criteria and are finally delivered to a destination node via multiple hops. In this paper, we improve the dissemination speed of the PRoPHET (probabilistic routing protocol using history of encounters and transitivity) protocol by employing the epidemic protocol for disseminating a message m if its forwarding counter and hop counter values are smaller than or equal to the threshold values. The performance of the proposed protocol was analyzed in terms of delivery probability, average delay, and overhead ratio. Numerical results show that the proposed protocol can improve the delivery probability, average delay, and overhead ratio of the PRoPHET protocol by appropriately selecting the threshold forwarding counter and threshold hop counter values.
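    A minimal sketch of the forwarding rule the abstract describes, assuming the usual PRoPHET fallback of forwarding only to peers with higher delivery predictability (threshold values and names here are illustrative, not the paper's):

```python
def should_forward(pred_self, pred_peer, fwd_count, hop_count,
                   fwd_threshold=2, hop_threshold=2):
    """Hybrid rule sketched from the abstract: replicate like epidemic
    routing while the message is 'young' (both counters at or below their
    thresholds), then fall back to PRoPHET's delivery-predictability test."""
    if fwd_count <= fwd_threshold and hop_count <= hop_threshold:
        return True                      # epidemic phase: always replicate
    return pred_peer > pred_self         # PRoPHET phase

should_forward(0.6, 0.3, fwd_count=1, hop_count=1)   # epidemic phase: forward
should_forward(0.6, 0.3, fwd_count=5, hop_count=4)   # peer less likely: hold
```

    Tuning `fwd_threshold` and `hop_threshold` trades dissemination speed against overhead, which is the selection problem the paper studies.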

  1. Long-term biases in geomagnetic K and aa indices

    USGS Publications Warehouse

    Love, J.J.

    2011-01-01

    Analysis is made of the geomagnetic-activity aa index and its source K-index data from groups of ground-based observatories in Britain and Australia, 1868.0-2009.0, solar cycles 11-23. The K data show persistent biases, especially for high (low) K-activity levels at British (Australian) observatories. From examination of multiple subsets of the K data we infer that the biases are not predominantly the result of changes in observatory location, localized induced magnetotelluric currents, changes in magnetometer technology, or the modernization of K-value estimation methods. Instead, the biases appear to be artifacts of the latitude-dependent scaling used to assign K values to particular local levels of geomagnetic activity. The biases are not effectively removed by the weighting factors used to estimate aa. We show that long-term averages of the aa index, such as annual averages, are dominated by medium-level geomagnetic activity having K values of 3 and 4. © 2011 Author(s).

  2. Assessment of metal contamination in coastal sediments of Al-Khobar area, Arabian Gulf, Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alharbi, Talal; El-Sorogy, Abdelbaset

    2017-05-01

    An assessment of marine pollution due to heavy metals was made for coastal sediments collected along the Al-Khobar coastline, in the Arabian Gulf, Saudi Arabia, by analyzing Al, V, Cr, Mn, Cu, Zn, Cd, Pb, Hg, Mo, Sr, Se, As, Fe, Co and Ni using an inductively coupled plasma-mass spectrometer (ICP-MS). The results indicated that the distribution of most metals was largely controlled by inputs of terrigenous material and was most strongly associated with the distribution of Al in the sediments. In general, Sr, Cr, Zn, Cu, V, Hg, Mo and Se show severe enrichment factors. Average values of Cu and Hg greatly exceed the ERL and the Canadian ISQG values. Average Ni was higher than the ERL and the ERM values. The severe enrichment of some metals in the studied sediment could be partially attributed to anthropogenic activities, notably oil spills from exploration and transportation, saline water desalination plants on the Al-Khobar coast, and other industrial activities in the region.

  3. [Characteristics of organic carbon forms in the sediment of Wuliangsuhai and Daihai Lakes].

    PubMed

    Mao, Hai-Fang; He, Jiang; Lü, Chang-Wei; Liang, Ying; Liu, Hua-Lin; Wang, Feng-Jiao

    2011-03-01

    The characteristics and differences of organic carbon forms in the sediments of Wuliangsuhai and Daihai Lakes, which have different eutrophication types, were discussed in the present study. The results showed that the range of total organic carbon (TOC) content in Wuliangsuhai Lake was 4.50-22.83 g x kg(-1) with an average of 11.80 g x kg(-1); the range of heavy-fraction organic carbon content was 3.38-21.67 g x kg(-1) with an average of 10.76 g x kg(-1); the range of light-fraction organic carbon content was 0.46-1.80 g x kg(-1) with an average of 1.04 g x kg(-1); and the range of ROC content was 0.62-3.64 g x kg(-1) with an average of 2.11 g x kg(-1). In Daihai Lake, the range of total organic carbon content was 6.84-23.46 g x kg(-1) with an average of 14.94 g x kg(-1); the range of heavy-fraction organic carbon content was 5.27-22.23 g x kg(-1) with an average of 13.89 g x kg(-1); the range of light-fraction organic carbon content was 0.76-1.57 g x kg(-1); and the range of ROC content was 1.54-7.08 g x kg(-1) with an average of 3.62 g x kg(-1). The results indicated that heavy-fraction organic carbon was the major component of the organic carbon and plays an important role in the accumulation of organic carbon in the sediments of the two lakes. The content of light-fraction organic carbon was similar in the sediments of the two lakes, whereas the contents of total organic carbon and heavy-fraction organic carbon in the sediment of Wuliangsuhai Lake were less than those in the sediment of Daihai Lake, and the value of LFOC/TOC in Wuliangsuhai Lake was larger than that in Daihai Lake. Humin was the dominant component of the sediment humus, followed by fulvic acid, in the two lakes. The values of HM/HS in the sediments of Wuliangsuhai Lake ranged from 43.06% to 77.25% with an average of 62.15%, and the values of HM/HS in the sediments of Daihai Lake ranged from 49.23% to 73.85% with an average of 65.30%. Tightly combined humus was the dominant form in the sediment humus of the two lakes, followed by loosely combined humus. As a whole, the carbon storage of the two lakes was relatively stable, but the values of PQ, LFOC/TOC, the ratio of loosely to tightly combined humus, and HA/FA revealed that, in the sediment of Wuliangsuhai Lake, the humification degree of organic matter was lower than that of Daihai Lake, while the activity of humus was higher; thus its carbon storage is less stable than that of Daihai Lake.

  4. Evaluating multicenter DTI data in Huntington's disease on site specific effects: An ex post facto approach☆

    PubMed Central

    Müller, Hans-Peter; Grön, Georg; Sprengelmeyer, Reiner; Kassubek, Jan; Ludolph, Albert C.; Hobbs, Nicola; Cole, James; Roos, Raymund A.C.; Duerr, Alexandra; Tabrizi, Sarah J.; Landwehrmeyer, G. Bernhard; Süssmuth, Sigurd D.

    2013-01-01

    Purpose Assessment of the feasibility to average diffusion tensor imaging (DTI) metrics of MRI data acquired in the course of a multicenter study. Materials and methods Sixty-one early stage Huntington's disease patients and forty healthy controls were studied using four different MR scanners at four European sites with acquisition protocols as close as possible to a given standard protocol. The potential and feasibility of averaging data acquired at different sites was evaluated quantitatively by region-of-interest (ROI) based statistical comparisons of coefficients of variation (CV) across centers, as well as by testing for significant group-by-center differences on averaged fractional anisotropy (FA) values between patients and controls. In addition, a whole-brain based statistical between-group comparison was performed using FA maps. Results The ex post facto statistical evaluation of CV and FA-values in a priori defined ROIs showed no differences between sites above chance indicating that data were not systematically biased by center specific factors. Conclusion Averaging FA-maps from DTI data acquired at different study sites and different MR scanner types does not appear to be systematically biased. A suitable recipe for testing on the possibility to pool multicenter DTI data is provided to permit averaging of DTI-derived metrics to differentiate patients from healthy controls at a larger scale. PMID:24179771

  5. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information.

    PubMed

    Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E

    2011-01-01

    Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or they assume that treatment has a separate effect on every transition. An alternative is to fit a series of models, each assuming that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov chain Monte Carlo simulation in WinBUGS, or Monte Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models are reduced by up to eight- or ninefold compared with the unstructured models, and the expected loss attaching to decision uncertainty by factors of several hundred. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  6. 40 CFR 80.90 - Conventional gasoline baseline emissions determination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Conventional gasoline baseline... gasoline baseline emissions determination. (a) Annual average baseline values. For any facility of a refiner or importer of conventional gasoline, the annual average baseline values of the facility's exhaust...

  7. 40 CFR 80.90 - Conventional gasoline baseline emissions determination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Conventional gasoline baseline... gasoline baseline emissions determination. (a) Annual average baseline values. For any facility of a refiner or importer of conventional gasoline, the annual average baseline values of the facility's exhaust...

  8. 40 CFR 80.90 - Conventional gasoline baseline emissions determination.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Conventional gasoline baseline... gasoline baseline emissions determination. (a) Annual average baseline values. For any facility of a refiner or importer of conventional gasoline, the annual average baseline values of the facility's exhaust...

  9. 40 CFR 80.90 - Conventional gasoline baseline emissions determination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Conventional gasoline baseline... gasoline baseline emissions determination. (a) Annual average baseline values. For any facility of a refiner or importer of conventional gasoline, the annual average baseline values of the facility's exhaust...

  10. 40 CFR 80.90 - Conventional gasoline baseline emissions determination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Conventional gasoline baseline... gasoline baseline emissions determination. (a) Annual average baseline values. For any facility of a refiner or importer of conventional gasoline, the annual average baseline values of the facility's exhaust...

  11. The effect of geographical indices on left ventricular structure in healthy Han Chinese population

    NASA Astrophysics Data System (ADS)

    Cen, Minyi; Ge, Miao; Liu, Yonglin; Wang, Congxia; Yang, Shaofang

    2017-02-01

    The left ventricular posterior wall thickness (LVPWT) and interventricular septum thickness (IVST) are generally regarded as the functional parts of the left ventricular (LV) structure. This paper aims to examine the effects of geographical indices on healthy Han adults' LV structural indices and to offer a scientific basis for developing a unified standard for the reference values of adults' LV structural indices in China. Fifteen terrain, climate, and soil indices were examined as geographical explanatory variables. Statistical analysis was performed using correlation analysis. Moreover, a back propagation neural network (BPNN) and a support vector regression (SVR) were applied to develop models predicting the values of the two indices. After the prediction models were built, distribution maps were produced. The results show that the LV structural indices are characteristically associated with latitude, longitude, altitude, average temperature, average wind velocity, topsoil sand fraction, topsoil silt fraction, topsoil organic carbon, and topsoil sodicity. The model test analyses show that the BPNN model possesses better simulative and predictive ability than the SVR model. The distribution maps of the LV structural indices show that, in China, the values are higher in the west and lower in the east. These results demonstrate that the reference values of adults' LV structural indices differ with the geographical environment. The reference values of LV structural indices in one region can be calculated by setting up a BPNN, which showed better applicability in this study. The distribution of the reference values of the LV structural indices can be seen clearly on the geographical distribution map.

  12. The effect of geographical indices on left ventricular structure in healthy Han Chinese population.

    PubMed

    Cen, Minyi; Ge, Miao; Liu, Yonglin; Wang, Congxia; Yang, Shaofang

    2017-02-01

    The left ventricular posterior wall thickness (LVPWT) and interventricular septum thickness (IVST) are generally regarded as the functional parts of the left ventricular (LV) structure. This paper aims to examine the effects of geographical indices on healthy Han adults' LV structural indices and to offer a scientific basis for developing a unified standard for the reference values of adults' LV structural indices in China. Fifteen terrain, climate, and soil indices were examined as geographical explanatory variables. Statistical analysis was performed using correlation analysis. Moreover, a back propagation neural network (BPNN) and a support vector regression (SVR) were applied to develop models predicting the values of the two indices. After the prediction models were built, distribution maps were produced. The results show that the LV structural indices are characteristically associated with latitude, longitude, altitude, average temperature, average wind velocity, topsoil sand fraction, topsoil silt fraction, topsoil organic carbon, and topsoil sodicity. The model test analyses show that the BPNN model possesses better simulative and predictive ability than the SVR model. The distribution maps of the LV structural indices show that, in China, the values are higher in the west and lower in the east. These results demonstrate that the reference values of adults' LV structural indices differ with the geographical environment. The reference values of LV structural indices in one region can be calculated by setting up a BPNN, which showed better applicability in this study. The distribution of the reference values of the LV structural indices can be seen clearly on the geographical distribution map.

  13. A Conversion Formula for Comparing Pulse Oximeter Desaturation Rates Obtained with Different Averaging Times

    PubMed Central

    Vagedes, Jan; Bialkowski, Anja; Wiechers, Cornelia; Poets, Christian F.; Dietz, Klaus

    2014-01-01

    Objective The number of desaturations determined in recordings of pulse oximeter saturation (SpO2) primarily depends on the time over which values are averaged. As the averaging time in pulse oximeters is not standardized, it varies considerably between centers. To make SpO2 data comparable, it is thus desirable to have a formula that allows conversion between desaturation rates obtained using different averaging times for various desaturation levels and minimal durations. Methods Oxygen saturation was measured for 170 hours in 12 preterm infants with a mean number of 65 desaturations <90% per hour of arbitrary duration by using a pulse oximeter in a 2–4 s averaging mode. Using 7 different averaging times between 3 and 16 seconds, the raw red-to-infrared data were reprocessed to determine the number of desaturations (D). The whole procedure was carried out for 7 different minimal desaturation durations (≥1, ≥5, ≥10, ≥15, ≥20, ≥25, ≥30 s) below SpO2 threshold values of 80%, 85% or 90% to finally reach a conversion formula. The formula was validated by splitting the infants into two groups of six children each and using one group each as a training set and the other one as a test set. Results Based on the linear relationship found between the logarithm of the desaturation rate and the logarithm of the averaging time, the conversion formula is: D2 = D1 (T2/T1)c, where D2 is the desaturation rate for the desired averaging time T2, and D1 is the desaturation rate for the original averaging time T1, with the exponent c depending on the desaturation threshold and the minimal desaturation duration. The median error when applying this formula was 2.6%. Conclusion This formula enables the conversion of desaturation rates between different averaging times for various desaturation thresholds and minimal desaturation durations. PMID:24489887
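    The conversion formula is stated explicitly in the abstract and is straightforward to apply. Note that the exponent c must be taken from the paper's tables for the chosen SpO2 threshold and minimal desaturation duration; the value used in the example below is purely illustrative:

```python
def convert_desaturation_rate(d1: float, t1: float, t2: float, c: float) -> float:
    """The paper's conversion formula: D2 = D1 * (T2 / T1) ** c.
    d1: desaturation rate at the original averaging time t1 (seconds);
    t2: desired averaging time; c: exponent depending on the desaturation
    threshold and minimal duration (tabulated in the paper)."""
    return d1 * (t2 / t1) ** c

# e.g. 65 desaturations/h recorded at 3 s averaging, converted to 16 s,
# with an illustrative exponent c = -0.8 (not a value from the paper):
d16 = convert_desaturation_rate(65, 3, 16, -0.8)
```

    With a negative exponent, longer averaging times yield fewer detected desaturations, as expected from the smoothing effect of averaging.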

  14. [Applying the clustering technique for characterising maintenance outsourcing].

    PubMed

    Cruz, Antonio M; Usaquén-Perilla, Sandra P; Vanegas-Pabón, Nidia N; Lopera, Carolina

    2010-06-01

    Using clustering techniques for characterising companies providing health institutions with maintenance services. The study analysed seven pilot areas' equipment inventory (264 medical devices). Clustering techniques were applied using 26 variables. Response time (RT), operation duration (OD), availability and turnaround time (TAT) were amongst the most significant ones. Average biomedical equipment obsolescence value was 0.78. Four service provider clusters were identified: clusters 1 and 3 had better performance, lower TAT, RT and DR values (56 % of the providers coded O, L, C, B, I, S, H, F and G, had 1 to 4 day TAT values:

  15. Comparing the Value of Nonprofit Hospitals' Tax Exemption to Their Community Benefits.

    PubMed

    Herring, Bradley; Gaskin, Darrell; Zare, Hossein; Anderson, Gerard

    2018-01-01

    The tax-exempt status of nonprofit hospitals has received increased attention from policymakers interested in examining the value they provide instead of paying taxes. We use 2012 data from the Internal Revenue Service (IRS) Form 990, Centers for Medicare and Medicaid Services (CMS) Hospital Cost Reports, and the American Hospital Association's (AHA) Annual Survey to compare the value of community benefits with the tax exemption. We contrast nonprofits' total community benefits with what for-profits provide and distinguish between charity and other community benefits. We find that the value of the tax exemption averages 5.9% of total expenses, while total community benefits average 7.6% of expenses, incremental nonprofit community benefits beyond those provided by for-profits average 5.7% of expenses, and incremental charity alone averages 1.7% of expenses. The incremental community benefit exceeds the tax exemption for only 62% of nonprofits. Policymakers should be aware that the tax exemption is a rather blunt instrument, with many nonprofits benefiting greatly from it while providing relatively few community benefits.

  16. Environmental radiation and potential ecological risk levels in the intertidal zone of southern region of Tamil Nadu coast (HBRAs), India.

    PubMed

    Punniyakotti, J; Ponnusamy, V

    2018-02-01

    Natural radioactivity content and heavy metal concentration in intertidal zone sand samples from the southern region of the Tamil Nadu coast, India, have been analyzed using a gamma ray spectrometer and ICP-OES, respectively. From gamma spectral analysis, the average radioactivity contents of 238U, 232Th, and 40K in the intertidal zone sand samples are 12.13±4.21, 59.03±4.26, and 197.03±26.24 Bq/kg, respectively. The average radioactivity content of 232Th alone is higher than the world average value. From the heavy metal analysis, the average Cd, Cr, Cu, Ni, Pb, and Zn concentrations are 3.1, 80.24, 82.84, 23.66, 91.67, and 137.07 ppm, respectively. The average Cr and Ni concentrations are lower, whereas the other four metal (Cd, Cu, Pb, and Zn) concentrations are higher than world surface rock average values. From pollution assessment parameter values, the pollution level is "uncontaminated to moderately contaminated" in the study area. Copyright © 2017 Elsevier Ltd. All rights reserved.
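    The record does not name which pollution assessment parameter was used; the phrase "uncontaminated to moderately contaminated" matches the classification bands of Mueller's geoaccumulation index, so that index is sketched here under that assumption (the concentrations and background value below are illustrative, not taken from the study):

```python
import math

def geoaccumulation_index(concentration: float, background: float) -> float:
    """Mueller's geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn)).
    The factor 1.5 buffers natural variability in background values."""
    return math.log2(concentration / (1.5 * background))

# Igeo <= 0: uncontaminated; 0 < Igeo <= 1: uncontaminated to moderately
# contaminated (illustrative values, not the study's data)
igeo = geoaccumulation_index(4.0, 2.0)
```

    A metal whose measured concentration sits between 1.5x and 3x its background thus falls in the 0-1 band reported for this study area.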

  17. Multi-decadal analysis of root-zone soil moisture applying the exponential filter across CONUS

    NASA Astrophysics Data System (ADS)

    Tobin, Kenneth J.; Torres, Roberto; Crow, Wade T.; Bennett, Marvin E.

    2017-09-01

    This study applied the exponential filter to produce an estimate of root-zone soil moisture (RZSM). Four types of microwave-based surface satellite soil moisture products were used. The core remotely sensed data for this study came from NASA's long-running AMSR-E mission. Additionally, three other products were obtained from the European Space Agency Climate Change Initiative (CCI). These datasets were blended based on all available satellite observations (CCI-active, CCI-passive, and CCI-combined). All of these products were at 0.25° resolution with daily values. We applied the filter to produce a soil water index (SWI) that others have successfully used to estimate RZSM. The only unknown in this approach was the characteristic time of soil moisture variation (T). We examined five different eras (1997-2002; 2002-2005; 2005-2008; 2008-2011; 2011-2014) that represented periods with different satellite data sensors. SWI values were compared with in situ soil moisture data from the International Soil Moisture Network at depths ranging from 20 to 25 cm. Selected networks included the US Department of Energy Atmospheric Radiation Measurement (ARM) program (25 cm), the Soil Climate Analysis Network (SCAN; 20.32 cm), SNOwpack TELemetry (SNOTEL; 20.32 cm), and the US Climate Reference Network (USCRN; 20 cm). We selected in situ stations with reasonably complete records. Periods with freezing temperatures and rainfall were filtered out using data from the Parameter-elevation Regressions on Independent Slopes Model (PRISM). Additionally, we only examined sites where surface and root-zone soil moisture had a reasonably high lagged r value (r > 0.5). The unknown T value was constrained based on two approaches: optimization of root mean square error (RMSE) and calculation based on the normalized difference vegetation index (NDVI) value. 
Both approaches yielded comparable results, although, as expected, the optimization approach generally outperformed the NDVI-based estimates. The best results were noted at stations that had an absolute bias within 10%. SWI estimates were more impacted by the in situ network than by the surface satellite product used to drive the exponential filter. The average Nash-Sutcliffe coefficients (NS) for ARM ranged from -0.1 to 0.3 and were similar to the results obtained from the USCRN network (0.2-0.3). NS values from the SCAN and SNOTEL networks were slightly higher (0.1-0.5). These results indicated that this approach had some skill in providing an estimate of RZSM. In terms of RMSE (in volumetric soil moisture), ARM values actually outperformed those from other networks (0.02-0.04). SCAN and USCRN average RMSE values ranged from 0.04 to 0.06, and SNOTEL average RMSE values were higher (0.05-0.07). These values were close to 0.04, which is the baseline accuracy designated for many satellite soil moisture missions.
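
    The exponential filter referenced above is commonly implemented as a recursive update with a time-varying gain (the Wagner/Albergel form). A minimal sketch, assuming surface soil moisture observations with their times in days and a characteristic time T in days; the function and variable names are illustrative, not the study's code:

    ```python
    import numpy as np

    def swi_exponential_filter(ssm, t_days, T):
        """Recursive exponential filter: SWI_n = SWI_{n-1} + K_n (SSM_n - SWI_{n-1}),
        with gain K_n = K_{n-1} / (K_{n-1} + exp(-dt / T)).

        ssm    : surface soil moisture observations
        t_days : observation times in days
        T      : characteristic time scale in days
        """
        swi = np.empty(len(ssm), dtype=float)
        swi[0] = ssm[0]
        K = 1.0  # gain initialized to 1
        for n in range(1, len(ssm)):
            dt = t_days[n] - t_days[n - 1]
            K = K / (K + np.exp(-dt / T))
            swi[n] = swi[n - 1] + K * (ssm[n] - swi[n - 1])
        return swi
    ```

    A larger T smooths more strongly, so the SWI relaxes toward the surface series more slowly, which is why T can be tuned against deeper in situ observations.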

  18. Estimating the effect of hospital closure on areawide inpatient hospital costs: a preliminary model and application.

    PubMed Central

    Shepard, D S

    1983-01-01

    A preliminary model is developed for estimating the extent of savings, if any, likely to result from discontinuing a specific inpatient service. By examining the sources of referral to the discontinued service, the model estimates potential demand and how cases will be redistributed among remaining hospitals. This redistribution determines average cost per day in hospitals that receive these cases, relative to average cost per day of the discontinued service. The outflow rate, which measures the proportion of cases not absorbed in other acute care hospitals, is estimated as 30 percent for the average discontinuation. The marginal cost ratio, which relates marginal costs of cases absorbed in surrounding hospitals to the average costs in those hospitals, is estimated as 87 percent in the base case. The model was applied to the discontinuation of all inpatient services in the 75-bed Chelsea Memorial Hospital, near Boston, Massachusetts, using 1976 data. As the precise value of key parameters is uncertain, sensitivity analysis was used to explore a range of values. The most likely result is a small increase ($120,000) in the area's annual inpatient hospital costs, because many patients are referred to more costly teaching hospitals. A similar situation may arise with other urban closures. For service discontinuations to generate savings, recipient hospitals must be low in costs, the outflow rate must be large, and the marginal cost ratio must be low. PMID:6668181
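
    The accounting in the abstract above can be sketched as simple arithmetic: closing a service saves its own costs, cases absorbed elsewhere add costs at the receiving hospitals' marginal rate, and outflow cases leave the acute-care system entirely. This is an interpretive reconstruction with illustrative numbers and names, not the paper's actual model or data:

    ```python
    def net_cost_change(closed_patient_days, closed_cost_per_day,
                        receiving_cost_per_day, outflow_rate=0.30,
                        marginal_cost_ratio=0.87):
        """Areawide inpatient cost change from discontinuing a service.

        Savings are the discontinued service's own costs; added costs are
        the days absorbed by surrounding hospitals, priced at their average
        cost scaled by the marginal cost ratio. A positive return value
        means areawide costs rise."""
        savings = closed_patient_days * closed_cost_per_day
        absorbed_days = closed_patient_days * (1.0 - outflow_rate)
        added = absorbed_days * marginal_cost_ratio * receiving_cost_per_day
        return added - savings
    ```

    With the base-case parameters, redirecting patients to hospitals whose daily cost exceeds roughly 1/(0.70 × 0.87) ≈ 1.64 times the closed hospital's cost produces a net increase, which matches the paper's finding for referrals to costly teaching hospitals.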

  19. Physico-mechanical and wear properties of novel sustainable sour-weed fiber reinforced polyester composites

    NASA Astrophysics Data System (ADS)

    Patel, Vinay Kumar; Chauhan, Shivani; Katiyar, Jitendra Kumar

    2018-04-01

    In this study, a novel natural fiber, sour-weed (botanically known as Rumex acetosella), has been introduced for the first time as a natural reinforcement for a polyester matrix. The natural-fiber polyester composites were fabricated by the hand lay-up technique using different fiber sizes and weight percentages. For the sour-weed/polyester composites, physical properties (density, water absorption, and hardness), mechanical properties (tensile and impact), and wear properties (sand abrasion and sliding wear) were investigated for sour-weed fiber sizes of 0.6 mm, 5 mm, 10 mm, 15 mm, and 20 mm at 3, 6, and 9 weight percent loading in the polyester matrix. Furthermore, the multi-criteria optimization technique TOPSIS was applied to the averaged results to rank the composites. From the optimized results, the sour-weed composite reinforced with 15 mm fibers at 6 wt% loading ranked best, exhibiting the best overall properties among all fabricated composites: average tensile strength of 34.33 MPa, average impact strength of 10 J, average hardness of 12 Hv, average specific sand abrasion wear rate of 0.0607 mm³ N⁻¹ m⁻¹, average specific sliding wear rate of 0.00290 mm³ N⁻¹ m⁻¹, average water absorption of 3.446%, and average density of 1.013.
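
    TOPSIS, the ranking method named above, orders alternatives by their closeness to an ideal solution built from the best value of each criterion. A generic sketch of the standard algorithm; the matrix, weights, and benefit/cost labels below are hypothetical, not the paper's measurements:

    ```python
    import numpy as np

    def topsis(matrix, weights, benefit):
        """TOPSIS closeness scores (higher = better ranked).

        matrix  : alternatives x criteria performance values
        weights : criterion weights (summing to 1)
        benefit : True where larger is better, False where smaller is better
        """
        M = np.asarray(matrix, dtype=float)
        # vector-normalize each criterion column, then apply weights
        V = M / np.linalg.norm(M, axis=0) * np.asarray(weights, dtype=float)
        benefit = np.asarray(benefit)
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal
        d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal
        return d_neg / (d_pos + d_neg)
    ```

    For the composites above, tensile strength would be a benefit criterion while wear rate, water absorption, and density would be cost criteria, so the best-ranked sample balances all of them rather than maximizing any single one.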

  20. [The importance of Injury Severity Score (ISS) in the management of thoracolumbar burst fracture].

    PubMed

    Rezende, Rodrigo; Avanzi, Osmar

    2009-02-01

    Few publications relate the injury severity score (ISS) to thoracolumbar burst fractures. For that reason, and because of the frequency with which these fractures occur, we evaluated the severity of trauma in these patients. We evaluated 190 burst fractures of the spine, classified according to Denis, using the Abbreviated Injury Scale (AIS) codes to calculate the ISS, which takes the three most severely injured body regions, squares each score, and sums the results. Among the 190 cases evaluated, the median ISS was 13 and the average was 14.4. Males presented a higher ISS than females. Young adult patients presented higher average and median ISS values than older patients. The higher the ISS, the longer the hospitalization period, except for patients with an ISS over 35. Fractures at the thoracic level showed higher ISS values than the rest. The ISS is directly related to surgical treatment and mortality. The ISS values found show that even a less severe trauma can cause a thoracic or lumbar burst fracture. The ISS showed no correlation with sex or fracture level, but it is proportional to the hospitalization period, surgical treatment, and mortality rate, and inversely proportional to the age of the patients.
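
    The ISS computation described above (square the AIS codes of the three most severely injured body regions and sum them) can be written in a few lines. The AIS = 6 → ISS = 75 rule included below is the standard ISS convention, not something stated in the abstract:

    ```python
    def injury_severity_score(ais_by_region):
        """ISS: sum of squares of the three highest AIS severity codes
        among the body regions. By convention, an unsurvivable injury
        (AIS 6) in any region sets the ISS to its maximum of 75."""
        if any(a == 6 for a in ais_by_region):
            return 75
        top3 = sorted(ais_by_region, reverse=True)[:3]
        return sum(a * a for a in top3)
    ```

    Because scores are squared, the ISS scale is bounded by 75, which is why the abstract treats values over 35 as a distinct severe group.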

  1. 40 CFR 63.482 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... operation operated in a batch mode. Block polymer means a polymer where the polymerization is controlled... frequent block average values. Continuous unit operation means a unit operation operated in a continuous... (EPM) result from the polymerization of ethylene and propylene and contain a saturated chain of the...

  2. 40 CFR 63.482 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... operation operated in a batch mode. Block polymer means a polymer where the polymerization is controlled... frequent block average values. Continuous unit operation means a unit operation operated in a continuous... (EPM) result from the polymerization of ethylene and propylene and contain a saturated chain of the...

  3. 40 CFR 63.482 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... operation operated in a batch mode. Block polymer means a polymer where the polymerization is controlled... frequent block average values. Continuous unit operation means a unit operation operated in a continuous... (EPM) result from the polymerization of ethylene and propylene and contain a saturated chain of the...

  4. Energy cost of physical activities in 12-y-old girls: MET values and the influence of body weight.

    PubMed

    Spadano, J L; Must, A; Bandini, L G; Dallal, G E; Dietz, W H

    2003-12-01

    Few data exist on the energy cost of specific activities in children. The influence of body weight on the energy cost of activity, when expressed as metabolic equivalents (METs), has not been rigorously explored. The aims were to provide MET data on five specific activities in 12-y-old girls and to test the hypothesis that measured MET values are independent of body weight. In seventeen 12-y-old girls, resting metabolic rate (RMR) and the energy expended while sitting, standing, walking on a flat treadmill at 3.2 and at 4.8 km/h, and walking on a treadmill at a 10% incline at 4.8 km/h were measured using indirect calorimetry. MET values were calculated by dividing the energy expenditure of an activity by the subject's RMR. The influence of body weight was assessed using simple linear regression. The observed METs were more consistent with published values for similar activities in adults than with those offered for children. Body weight was a statistically significant predictor of the MET of all three walking activities, but not of the MET of sitting or standing. Body weight explained 25% of the variance in the MET value for walking at 3.2 km/h, 39% for walking at 4.8 km/h, and 63% for walking at a 10% incline at 4.8 km/h. METs for the three walking activities were not independent of body weight. The use of average MET values to estimate the energy cost of these three activities would result in an underestimation of their energy cost in heavier girls and an overestimation in lighter girls. These results suggest that the estimation of total energy expenditure from activity diary, recall, and direct observation data using average MET values may be biased by body weight.
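
    The two calculations in the abstract are straightforward: a MET is the activity's energy expenditure divided by the subject's measured RMR, and the "variance explained" figures are the r² of a simple linear fit of MET on body weight. A sketch with hypothetical numbers (names illustrative, not from the study):

    ```python
    import numpy as np

    def met_values(activity_ee, rmr):
        """METs: per-activity energy expenditure divided by measured RMR
        (both in the same units, e.g. kcal/min)."""
        return np.asarray(activity_ee, dtype=float) / rmr

    def met_weight_r2(mets, weights_kg):
        """Fraction of MET variance explained by body weight in a
        simple linear regression (the squared Pearson correlation)."""
        r = np.corrcoef(mets, weights_kg)[0, 1]
        return r * r
    ```

    An r² of 0.63, as reported for incline walking, means a fixed published MET would systematically misestimate energy cost for girls far from the average weight.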

  5. Isobaric vapor-liquid equilibria for binary systems α-phenylethylamine + toluene and α-phenylethylamine + cyclohexane at 100 kPa

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoru; Gao, Yingyu; Ban, Chunlan; Huang, Qiang

    2016-09-01

    In this paper the results of a vapor-liquid equilibria study at 100 kPa are presented for two binary systems: α-phenylethylamine(1) + toluene(2) and α-phenylethylamine(1) + cyclohexane(2). The binary VLE data of the two systems were correlated with the Wilson, NRTL, and UNIQUAC models. For each binary system the deviations between the results of the correlations and the experimental data have been calculated. For both binary systems the average relative deviations in temperature for the three models were lower than 0.99%. The average absolute deviations in vapor-phase composition (mole fractions) and in temperature T were lower than 0.0271 and 1.93 K, respectively. Thermodynamic consistency has been tested for all vapor-liquid equilibrium data by the Herington method. The values calculated by the Wilson and NRTL equations satisfied the thermodynamic consistency test for both systems, while the values calculated by the UNIQUAC equation did not.

  6. A new algorithm to reduce noise in microscopy images implemented with a simple program in python.

    PubMed

    Papini, Alessio

    2012-03-01

    All microscopy images contain noise, which increases as the instrument (e.g., a transmission electron microscope or light microscope) approaches its resolution limit. Many methods are available to reduce noise; one of the most commonly used is image averaging. We propose here to use the mode of pixel values instead. Simple Python programs process a given number of images recorded consecutively from the same subject. The programs calculate the mode of the pixel values at a given position (a, b). The result is a new image containing at (a, b) the mode of those values; the final pixel value therefore corresponds to one read in at least two of the pixels at position (a, b). Applying the program to a set of images degraded with salt-and-pepper noise and GIMP hurl noise with 10-90% standard deviation showed that the mode performs better than averaging with three to eight images. The data suggest that the mode would be more efficient (in the sense of requiring fewer recorded images to reduce noise below a given limit) for a lower number of total noisy pixels and high standard deviation (as with impulse and salt-and-pepper noise), while averaging would be more efficient when the number of varying pixels is high and the standard deviation is low, as in many Gaussian-noise-affected images. The two methods may be used serially. Copyright © 2011 Wiley Periodicals, Inc.
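
    The per-pixel mode described above can be sketched in a few lines of NumPy. This is a simple reimplementation of the idea, not the author's program; ties between equally frequent values resolve to the smallest value here:

    ```python
    import numpy as np

    def mode_denoise(frames):
        """Per-pixel mode across consecutively recorded frames.

        frames : sequence of identically shaped uint8 images
        Returns an image whose pixel (a, b) holds the most frequent
        value observed at (a, b) across the stack."""
        stack = np.stack(frames).astype(np.uint8)
        out = np.empty(stack.shape[1:], dtype=np.uint8)
        for idx in np.ndindex(*stack.shape[1:]):
            vals, counts = np.unique(stack[(slice(None),) + idx],
                                     return_counts=True)
            out[idx] = vals[np.argmax(counts)]
        return out
    ```

    With three frames, an impulse-noise value that corrupts a pixel in only one frame is outvoted by the two clean readings, which is exactly why the mode handles salt-and-pepper noise with so few images.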

  7. Simulation of load traffic and stepped speed control of conveyor

    NASA Astrophysics Data System (ADS)

    Reutov, A. A.

    2017-10-01

    The article examines the possibilities of simulating stepped control of conveyor speed within the Mathcad, Simulink, and Stateflow software environments. To check the efficiency of the control algorithms and to determine the characteristics of the control system more accurately, it is necessary to simulate the speed-control process with real traffic values for a work shift or for a day. For evaluating belt workload and the absence of spillage, it is necessary to use empirical values of load flow over a shorter period of time. Analytical formulas for the optimal speed step values were derived using empirical load values. The simulation checks the acceptability of an algorithm and determines the optimal regulation parameters corresponding to the load flow characteristics. The average speed and the number of speed switchings during simulation are adopted as criteria of regulation efficiency. A simulation example is implemented in Mathcad. The average conveyor speed decreases substantially under two-step and three-step control; a further increase in the number of regulation steps decreases average speed only insignificantly but considerably increases the frequency of speed switching. The incremental speed-regulation algorithm uses different numbers of stages for growing and falling load traffic, which allows smooth control of conveyor speed under monotonic variation of the load flow; load flow oscillation, however, leads to unjustified increases or decreases of speed. The results can be applied to the design of belt conveyors with adjustable drives.

  8. Response of marine bacterioplankton to a massive under-ice phytoplankton bloom in the Chukchi Sea (Western Arctic Ocean)

    NASA Astrophysics Data System (ADS)

    Ortega-Retuerta, E.; Fichot, C. G.; Arrigo, K. R.; Van Dijken, G. L.; Joux, F.

    2014-07-01

    The activity of heterotrophic bacterioplankton and their response to changes in primary production in the Arctic Ocean are essential to understanding biogenic carbon flows in the area. In this study, we explored the patterns of bacterial abundance (BA) and bacterial production (BP) in waters coinciding with a massive under-ice phytoplankton bloom in the Chukchi Sea in summer 2011, where chlorophyll a (chl a) concentrations were up to 38.9 mg m⁻³. Contrary to our expectations, BA and BP did not show their highest values coinciding with the bloom. In fact, bacterial biomass was only 3.5% of phytoplankton biomass. Similarly, average DOC values were similar inside (57.2 ± 3.1 μM) and outside (64.3 ± 4.8 μM) the bloom patch. Regression analyses showed relatively weak coupling, in terms of slope values, between chl a or primary production and BA or BP. Multiple regression analyses indicated that both temperature and chl a explained BA and BP variability in the Chukchi Sea. This temperature dependence was confirmed experimentally, as higher incubation temperatures (6.6 °C vs. 2.2 °C) enhanced BA and BP, with Q10 values of BP up to 20.0. Together, these results indicate that low temperatures, in conjunction with low dissolved organic matter release, can prevent bacteria from efficiently processing a higher proportion of the carbon fixed by phytoplankton, with further consequences for carbon cycling in the area.

  9. Cooperation prevails when individuals adjust their social ties.

    PubMed

    Santos, Francisco C; Pacheco, Jorge M; Lenaerts, Tom

    2006-10-20

    Conventional evolutionary game theory predicts that natural selection favours the selfish and strong even though cooperative interactions thrive at all levels of organization in living systems. Recent investigations demonstrated that a limiting factor for the evolution of cooperative interactions is the way in which they are organized, cooperators becoming evolutionarily competitive whenever individuals are constrained to interact with few others along the edges of networks with low average connectivity. Despite this insight, the conundrum of cooperation remains since recent empirical data shows that real networks exhibit typically high average connectivity and associated single-to-broad-scale heterogeneity. Here, a computational model is constructed in which individuals are able to self-organize both their strategy and their social ties throughout evolution, based exclusively on their self-interest. We show that the entangled evolution of individual strategy and network structure constitutes a key mechanism for the sustainability of cooperation in social networks. For a given average connectivity of the population, there is a critical value for the ratio W between the time scales associated with the evolution of strategy and of structure above which cooperators wipe out defectors. Moreover, the emerging social networks exhibit an overall heterogeneity that accounts very well for the diversity of patterns recently found in acquired data on social networks. Finally, heterogeneity is found to become maximal when W reaches its critical value. These results show that simple topological dynamics reflecting the individual capacity for self-organization of social ties can produce realistic networks of high average connectivity with associated single-to-broad-scale heterogeneity. 
On the other hand, they show that cooperation cannot evolve as a result of "social viscosity" alone in heterogeneous networks with high average connectivity, requiring the additional mechanism of topological co-evolution to ensure the survival of cooperative behaviour.

  10. Curie point depth in the SW Caribbean using the radially averaged spectra of magnetic anomalies

    NASA Astrophysics Data System (ADS)

    Salazar, Juan M.; Vargas, Carlos A.; Leon, Hermann

    2017-01-01

    We have estimated the Curie Point Depth (CPD) using the radially averaged power spectrum in a tectonically complex area located in the SW Caribbean basin. The data analyzed came from the World Digital Magnetic Anomaly Map, and three methods were used to compare results and evaluate uncertainties: Centroid, Spectral Peak, and Forward Modeling. Results agree across the three methods, suggesting that CPD values in the area range between 6 km and 50 km. The results share the following characteristics: A) High values (> 30 km) occur in continental regions; B) There is a trend of maximum CPD values along the SW-NE direction, from the Central Cordillera in Colombia to Lake Maracaibo in Venezuela; C) There are CPD maxima at the Sierra Nevada de Santa Marta (Colombia) as well as at the Costa Rica - Nicaragua and Nicaragua - Honduras borders. The lowest CPD values (< 20 km) are associated with the coastal regions and offshore. We also tested the results by estimating the geothermal gradient and comparing it with measured observations in the study area. Our results suggest at least five thermal terrains in the SW Caribbean Basin: A) The area comprising the Venezuela Basin, the Beata Ridge, and the Colombia Basin up to the longitude of the Providencia Trough; B) The area that includes zones north of the Cocos Ridge and the Panama Basin up to the trench; C) The orogenic region of the northern Andes, including areas of the Santa Marta Massif; D) The continental sector that encompasses Nicaragua, northern Costa Rica, and eastern Honduras; E) Areas of northern Venezuela and Colombia, NW Colombia, the Panamanian territory, and the transition zones between the Upper and Lower Nicaragua Rise.

  11. Determination of prescription dose for Cs-131 permanent implants using the BED formalism including resensitization correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Wei, E-mail: wei.luo@uky.edu; Molloy, Janelle; Aryal, Prakash

    2014-02-15

    Purpose: The current widely used biological equivalent dose (BED) formalism for permanent implants is based on the linear-quadratic model, which includes cell repair and repopulation but not resensitization (redistribution and reoxygenation). The authors propose a BED formalism that includes all four biological effects (4Rs) and show how it can be used to calculate appropriate prescription doses for permanent implants with Cs-131. Methods: A resensitization correction was added to the BED calculation for permanent implants to account for the 4Rs. Using the same BED, the prescription doses with Au-198, I-125, and Pd-103 were converted to isoeffective Cs-131 prescription doses. The conversion factor F, the ratio of the Cs-131 dose to the equivalent dose with the other reference isotope (F_r: with resensitization; F_n: without resensitization), was thus derived and used for actual prescription. Different values of biological parameters such as α, β, and relative biological effectiveness for different types of tumors were used for the calculation. Results: Prescription doses with I-125, Pd-103, and Au-198 ranging from 10 to 160 Gy were converted into prescription doses with Cs-131. The difference in dose conversion factors with (F_r) and without (F_n) resensitization was significant but varied with different isotopes and different types of tumors. The conversion factors also varied with different doses. For I-125, the average values of F_r/F_n were 0.51/0.46 for fast-growing tumors and 0.88/0.77 for slow-growing tumors. For Pd-103, the average values of F_r/F_n were 1.25/1.15 for fast-growing tumors and 1.28/1.22 for slow-growing tumors. For Au-198, the average values of F_r/F_n were 1.08/1.25 for fast-growing tumors and 1.00/1.06 for slow-growing tumors. 
Using the biological parameters for the HeLa/C4-I cells, the averaged value of F_r was 1.07/1.11 (rounded to 1.1), and the averaged value of F_n was 1.75/1.18. An F_r of 1.1 has been applied to gynecological cancer implants, with acute reactions and outcomes as expected based on extensive experience with permanent implants. The calculation also gave an average Cs-131 dose of 126 Gy converted from the I-125 dose of 144 Gy for prostate implants. Conclusions: Inclusion of an allowance for resensitization led to significant dose corrections for Cs-131 permanent implants and should be applied to prescription dose calculation. The adjustment of the Cs-131 prescription doses with resensitization correction for gynecological permanent implants was consistent with clinical experience and observations. However, the Cs-131 prescription doses converted from other implant doses can be further adjusted based on new experimental results, clinical observations, and clinical outcomes.

  13. 17 CFR 275.205-2 - Definition of “specified period” over which the asset value of the company or fund under...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... period” over which the asset value of the company or fund under management is averaged. 275.205-2 Section... REGULATIONS, INVESTMENT ADVISERS ACT OF 1940 § 275.205-2 Definition of “specified period” over which the asset value of the company or fund under management is averaged. (a) For purposes of this rule: (1) Fulcrum...

  14. Probability distributions of continuous measurement results for conditioned quantum evolution

    NASA Astrophysics Data System (ADS)

    Franquet, A.; Nazarov, Yuli V.

    2017-02-01

    We address the statistics of continuous weak linear measurement on a few-state quantum system that is subject to a conditioned quantum evolution. For a conditioned evolution, both the initial and final states of the system are fixed: the latter is achieved by the postselection in the end of the evolution. The statistics may drastically differ from the nonconditioned case, and the interference between initial and final states can be observed in the probability distributions of measurement outcomes as well as in the average values exceeding the conventional range of nonconditioned averages. We develop a proper formalism to compute the distributions of measurement outcomes, and evaluate and discuss the distributions in experimentally relevant setups. We demonstrate the manifestations of the interference between initial and final states in various regimes. We consider analytically simple examples of nontrivial probability distributions. We reveal peaks (or dips) at half-quantized values of the measurement outputs. We discuss in detail the case of zero overlap between initial and final states demonstrating anomalously big average outputs and sudden jump in time-integrated output. We present and discuss the numerical evaluation of the probability distribution aiming at extending the analytical results and describing a realistic experimental situation of a qubit in the regime of resonant fluorescence.

  15. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean-atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has superior performance: faster convergence and an enhanced signal-to-noise ratio.
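
    The select-then-average step described above can be sketched as follows. This is a schematic of the idea with an assumed quantile-based spread cutoff; the paper's actual ASA selection criterion may differ in detail:

    ```python
    import numpy as np

    def asa_parameter(posterior_params, ensemble_spread, quantile=0.3):
        """Adaptive-spatial-average sketch: keep grid points whose
        ensemble spread falls in the lowest `quantile` (low spread
        taken as a proxy for a well-constrained estimate), then
        average their posterior parameter values into one global
        uniform parameter."""
        params = np.asarray(posterior_params, dtype=float)
        spread = np.asarray(ensemble_spread, dtype=float)
        cutoff = np.quantile(spread, quantile)
        good = spread <= cutoff
        return params[good].mean()
    ```

    Filtering by spread before averaging is what raises the signal-to-noise ratio: poorly constrained grid points, whose posterior values are dominated by noise, no longer dilute the global estimate.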

  16. The value-based medicine comparative effectiveness and cost-effectiveness of penetrating keratoplasty for keratoconus.

    PubMed

    Roe, Richard H; Lass, Jonathan H; Brown, Gary C; Brown, Melissa M

    2008-10-01

    To perform a base case, comparative effectiveness, and cost-effectiveness (cost-utility) analysis of penetrating keratoplasty for patients with severe keratoconus. Visual acuity data were obtained from a large, retrospective multicenter study in which patients with keratoconus with less than 20/40 best corrected visual acuity and/or the inability to wear contact lenses underwent penetrating keratoplasty, with an average follow-up of 2.1 years. The results were combined with other retrospective studies investigating complication rates of penetrating keratoplasty. The data were then incorporated into a cost-utility model using patient preference-based, time trade-off utilities, computer-based decision analysis, and a net present value model to account for the time value of outcomes and money. The comparative effectiveness of the intervention is expressed in quality-of-life gain and QALYs (quality-adjusted life-years), and the cost-effectiveness results are expressed in the outcome of $/QALY (dollars spent per QALY). Penetrating keratoplasty in 1 eye for patients with severe keratoconus results in a comparative effectiveness (value gain) of 16.5% improvement in quality of life every day over the 44-year life expectancy of the average patient with severe keratoconus. Discounting the total value gain of 5.36 QALYs at a 3% annual discount rate yields 3.05 QALYs gained. The incremental cost for penetrating keratoplasty, including all complications, is $5934 ($5913 discounted at 3% per year). Thus, the incremental cost-utility (discounted at 3% annually) for this intervention is $5913/3.05 QALYs = $1942/QALY. If both eyes undergo corneal transplant, the total discounted value gain is 30% and the overall cost-utility is $2003. Surgery on the second eye confers a total discounted value gain of 2.5 QALYs, yielding a quality-of-life gain of 11.6% and a discounted cost-utility of $2238/QALY. 
Penetrating keratoplasty for patients with severe keratoconus seems to be a comparatively effective and cost-effective procedure when compared with other interventions across different medical specialties.

  17. Human perceptions of colour rendition vary with average fidelity, average gamut, and gamut shape

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royer, MP; Wilkerson, A.; Wei, M.

An experiment was conducted to evaluate how subjective impressions of color quality vary with changes in average fidelity, average gamut, and gamut shape (which considers the specific hues that are saturated or desaturated). Twenty-eight participants each evaluated 26 lighting conditions, created using four seven-channel tunable LED luminaires, in a 3.1 m by 3.7 m room filled with objects selected to cover a range of hue, saturation, and lightness. IES TM-30 fidelity index (Rf) values ranged from 64 to 93, IES TM-30 gamut index (Rg) values from 79 to 117, and IES TM-30 Rcs,h1 values (a proxy for gamut shape) from -19% to 26%. All lighting conditions delivered the same nominal illuminance and chromaticity. Participants were asked to rate each condition on eight-point semantic differential scales for saturated-dull, normal-shifted, and like-dislike. They were also asked one multiple-choice question, classifying the condition as saturated, dull, normal, or shifted. The findings suggest that gamut shape is more important than average gamut for human preference, with reds playing a more important role than other hues. Additionally, average fidelity alone is a poor predictor of human perceptions, although Rf was somewhat better than CIE Ra. The most preferred source had a CIE Ra value of 68, and 9 of the top 12 rated products had a CIE Ra value of 73 or less, which indicates that the commonly used criterion of CIE Ra ≥ 80 may exclude a majority of preferred light sources.

  18. [Low caloric value and high salt content in the meals served in school canteens].

    PubMed

    Paiva, Isabel; Pinto, Carlos; Queirós, Laurinda; Meister, Maria Cristina; Saraiva, Margarida; Bruno, Paula; Antunes, Delfina; Afonso, Manuel

    2011-01-01

School lunch can either aggravate poor food quality, by excess or deficiency, or help compensate for it. School meals should contribute to combating the obesity epidemic and to feeding disadvantaged children. The objective was to study the nutritional composition of the catering in canteens of public schools from the northern municipalities of the District of Porto: Vila do Conde, Póvoa de Varzim, Santo Tirso and Trofa. Meals were subjected to laboratory analysis. Thirty-two meals, four per school, were analysed; the reference values for the analysis of the nutritional composition of the meals were the Dietary Reference Intakes (USA) and Eating Well at School (UK). The average energy content per meal was 447 kcal and the median 440 kcal (22% of daily calories). The average amounts of nutrients per meal were: lipids 9.8 g, carbohydrates 65.7 g and proteins 24.0 g. On average, the contributions to the meal's energy were: 20% fat, 59% carbohydrate and 21% protein. In more than 75% of the meals the lipid contribution was below the lower bound of the reference range. The average sodium chloride content per meal was 3.4 g (95% confidence interval for the mean: 3.0 to 3.8 g), well above the recommended maximum of 1.5 g. The average fibre content per meal, 10.8 g, was above the minimum considered appropriate. In conclusion, the low caloric value of the meals was mainly due to their low fat content, and the salt content of all components of the meals was very high.

  19. Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing

    NASA Technical Reports Server (NTRS)

    Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.

    1995-01-01

Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving-average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no-leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded from a visible-wavelength video camera. These data were processed frame by frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20-second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full-frame mean value versus time verify the effectiveness of the system.
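The filter cascade described above maps naturally onto array operations. The sketch below is a minimal illustration only, assuming grayscale frames in a (time, height, width) array; the choice of a first-difference highpass, the window length, and the threshold are illustrative, not the parameters used in the study:

```python
import numpy as np

def detect_leak(frames, ma_len=5, threshold=10.0):
    """Temporal leak detection on a stack of video frames.

    frames: array of shape (T, H, W), grayscale intensities.
    Returns (leak_mask, mean_trace): the binary leak/no-leak decision per
    pixel per frame, and the full-frame mean of the filtered magnitude,
    a single time-varying indicator of leak intensity and extent.
    """
    frames = np.asarray(frames, dtype=float)
    # Digital highpass filter (here: first difference along time) responds
    # to the sudden intensity change that accompanies a leak.
    hp = np.diff(frames, axis=0)
    # Moving-average filter along time sustains the response to a step change.
    kernel = np.ones(ma_len) / ma_len
    ma = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, hp)
    # Absolute value, then threshold for the binary decision at each pixel.
    mag = np.abs(ma)
    leak_mask = mag > threshold
    # Full-frame averaging gives the single time-varying mean value estimate.
    mean_trace = mag.mean(axis=(1, 2))
    return leak_mask, mean_trace
```

A step change confined to a small region produces a localized detection and a corresponding bump in the full-frame mean trace around the onset frame.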

  20. Performance in the 6-minute walk test and postoperative pulmonary complications in pulmonary surgery: an observational study.

    PubMed

    Santos, Bruna F A; Souza, Hugo C D; Miranda, Aline P B; Cipriano, Federico G; Gastaldi, Ada C

    2016-01-01

To assess functional capacity in the preoperative phase of pulmonary surgery by comparing predicted and obtained values for the six-minute walk test (6MWT) in patients with and without postoperative pulmonary complication (PPC). METHOD: Twenty-one patients in the preoperative phase of open thoracotomy were evaluated using the 6MWT, followed by monitoring of the postoperative evolution of each participant, who underwent the routine treatment. Participants were then divided into two groups: the group with PPC and the group without PPC. The results were also compared with the predicted values using reference equations for the 6MWT. RESULTS: Over half (57.14%) of patients developed PPC. The 6MWT was associated with the odds for PPC (odds ratio=22, p=0.01); the group without PPC in the postoperative period walked 422.38 (SD=72.18) meters during the 6MWT, while the group with PPC walked an average of 340.89 (SD=100.93) meters (p=0.02). The distance traveled by the group without PPC was 80% of the predicted value, whereas the group with PPC averaged less than 70% (p=0.03), with more appropriate predicted values for the reference equations. CONCLUSION: The 6MWT is an easy, safe, and feasible test for routine preoperative evaluation in pulmonary surgery and may indicate patients with a higher chance of developing PPC.

  1. Health risk assessment of heavy metals and metalloid in drinking water from communities near gold mines in Tarkwa, Ghana.

    PubMed

    Bortey-Sam, Nesta; Nakayama, Shouta M M; Ikenaka, Yoshinori; Akoto, Osei; Baidoo, Elvis; Mizukawa, Hazuki; Ishizuka, Mayumi

    2015-07-01

    Concentrations of heavy metals and metalloid in borehole drinking water from 18 communities in Tarkwa, Ghana, were measured to assess the health risk associated with its consumption. Mean concentrations of heavy metals (μg/L) exceeded recommended values in some communities. If we take into consideration the additive effect of heavy metals and metalloid, then oral hazard index (HI) results raise concerns about the noncarcinogenic adverse health effects of drinking groundwater in Huniso. According to the US Environmental Protection Agency's (USEPA) guidelines, HI values indicating noncarcinogenic health risk for adults and children in Huniso were 0.781 (low risk) and 1.08 (medium risk), respectively. The cancer risk due to cadmium (Cd) exposure in adults and children in the sampled communities was very low. However, the average risk values of arsenic (As) for adults and children through drinking borehole water in the communities indicated medium cancer risk, but high cancer risk in some communities such as Samahu and Mile 7. Based on the USEPA assessment, the average cancer risk values of As for adults (3.65E-05) and children (5.08E-05) indicated three (adults) and five (children) cases of neoplasm in a hundred thousand inhabitants. The results of this study showed that residents in Tarkwa who use and drink water from boreholes could be at serious risk from exposure to these heavy metals and metalloid.
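The additive effect invoked here is captured by the hazard index: a sum of per-element hazard quotients, each a chronic daily intake divided by a reference dose. The sketch below shows that arithmetic only; the intake rate, body weight, and the arsenic RfD used in the example are illustrative USEPA-style assumptions, not values taken from the study:

```python
def chronic_daily_intake(conc_ug_per_l, intake_l_per_day, body_weight_kg):
    """Chronic daily intake in mg/kg/day from a water concentration in ug/L."""
    return conc_ug_per_l * 1e-3 * intake_l_per_day / body_weight_kg

def hazard_index(concentrations_ug_per_l, rfds_mg_per_kg_day,
                 intake_l_per_day, body_weight_kg):
    """Hazard index: sum of per-element hazard quotients (CDI / RfD).

    Under the additive assumption, HI < 1 suggests low noncarcinogenic
    risk, while HI > 1 flags a potential concern.
    """
    return sum(
        chronic_daily_intake(c, intake_l_per_day, body_weight_kg)
        / rfds_mg_per_kg_day[element]
        for element, c in concentrations_ug_per_l.items()
    )
```

For example, 10 ug/L of arsenic with an assumed 2 L/day intake, 70 kg body weight, and an illustrative RfD of 3e-4 mg/kg/day yields an HI just below 1.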

  2. Nutritive value of selected variety breads and pastas.

    PubMed

    Ranhotra, G S; Gelroth, J A; Novak, F A; Bock, M A; Winterringer, G L; Matthews, R H

    1984-03-01

    Nine types of commercially produced variety breads, plain bagels, corn tortillas, and three types of pasta products were obtained from each of four cities, New York, San Francisco, Atlanta, and Kansas City. Proximate components and 12 minerals and vitamins were determined in these and in cooked pasta products. Available carbohydrate and energy values were calculated. On the average, French, Italian, and pita breads were lower in moisture than other breads. Protein in bread products averaged between 7.6% and 10.4% and in cooked pastas and tortillas between 4.4% and 5.3%. Bagels averaged 10.2% protein. Insoluble dietary fiber in whole wheat bread averaged 5.6%; for most products, dietary fiber values were five- to eightfold higher than crude fiber values. Pasta products and tortillas were virtually free of sodium. Sodium in bread products averaged between 379 and 689 mg/100 gm. Although all pasta products and most bread products were enriched, calcium was often not included. Iron averaged from 2.16 to 3.29 mg/100 gm in bread products and 3.10 to 4.24 mg/100 gm in dry pasta products. Products made with unrefined or less-refined flours and/or containing germ and bran tended to be high in phosphorus, magnesium, zinc, and manganese, and, to a lesser extent, in copper. A good portion of potassium, thiamin, riboflavin, and niacin in pasta products was lost during cooking.

  3. Average is Over

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2018-02-01

The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails: one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  4. Improvements in world-wide intercomparison of PV module calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salis, E.; Pavanello, D.; Field, M.

The calibration of the electrical performance of seven photovoltaic (PV) modules was compared between four reference laboratories on three continents. The devices included two samples in standard and two in high-efficiency crystalline silicon technology, two CI(G)S and one CdTe module. The reference value for each PV module parameter was calculated from the average of the results of all four laboratories, weighted by the respective measurement uncertainties. All single results were then analysed with respect to this reference value using the E n number approach. For the four modules in crystalline silicon technology, the results agreed in general within +/-0.5%, with all values within +/-1% and all E n numbers well within [-1, 1], indicating further scope for reducing quoted measurement uncertainty. Regarding the three thin-film modules, deviations were on average roughly twice as large, i.e. in general from +/-1% to +/-2%. A number of inconsistent results were observable, although within the 5% that can be statistically expected on the basis of the E n number approach. Most inconsistencies can be traced to the preconditioning procedure of one participant, although contribution of other factors cannot be ruled out. After removing these obvious inconsistent results, only two real outliers remained, representing less than 2% of the total number of measurands. The results presented show improved agreement for the calibration of PV modules with respect to previous international exercises. For thin-film PV modules, the preconditioning of the devices prior to calibration measurements is the most critical factor for obtaining consistent results, while the measurement processes seem consistent and repeatable.
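The uncertainty-weighted reference value and the E n consistency check used in such intercomparisons can be sketched as follows. This is a minimal illustration in the ISO 13528 style; the coverage factor k=2 and the sample numbers in the example are assumptions, not the laboratories' actual data, and the treatment of correlation between a lab and a reference it contributes to is deliberately ignored here:

```python
import math

def weighted_reference(values, uncertainties):
    """Uncertainty-weighted mean of laboratory results and its standard
    uncertainty: each result is weighted by 1/u_i**2."""
    w = [1.0 / u**2 for u in uncertainties]
    ref = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    u_ref = 1.0 / math.sqrt(sum(w))
    return ref, u_ref

def en_number(value, u_value, ref, u_ref, k=2.0):
    """E_n number for one laboratory result against the reference value.

    Uses expanded (k=2) uncertainties; |E_n| <= 1 indicates agreement
    within the claimed uncertainties. Assumes the reference is treated
    as independent of the individual result.
    """
    return (value - ref) / math.sqrt((k * u_value)**2 + (k * u_ref)**2)
```

With four hypothetical results, the tighter measurements dominate the weighted reference, and each lab's E_n quantifies its consistency with that reference.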

  5. Improvements in world-wide intercomparison of PV module calibration

    DOE PAGES

    Salis, E.; Pavanello, D.; Field, M.; ...

    2017-09-14

The calibration of the electrical performance of seven photovoltaic (PV) modules was compared between four reference laboratories on three continents. The devices included two samples in standard and two in high-efficiency crystalline silicon technology, two CI(G)S and one CdTe module. The reference value for each PV module parameter was calculated from the average of the results of all four laboratories, weighted by the respective measurement uncertainties. All single results were then analysed with respect to this reference value using the E n number approach. For the four modules in crystalline silicon technology, the results agreed in general within +/-0.5%, with all values within +/-1% and all E n numbers well within [-1, 1], indicating further scope for reducing quoted measurement uncertainty. Regarding the three thin-film modules, deviations were on average roughly twice as large, i.e. in general from +/-1% to +/-2%. A number of inconsistent results were observable, although within the 5% that can be statistically expected on the basis of the E n number approach. Most inconsistencies can be traced to the preconditioning procedure of one participant, although contribution of other factors cannot be ruled out. After removing these obvious inconsistent results, only two real outliers remained, representing less than 2% of the total number of measurands. The results presented show improved agreement for the calibration of PV modules with respect to previous international exercises. For thin-film PV modules, the preconditioning of the devices prior to calibration measurements is the most critical factor for obtaining consistent results, while the measurement processes seem consistent and repeatable.

  6. Finite element modelling versus classic beam theory: comparing methods for stress estimation in a morphologically diverse sample of vertebrate long bones

    PubMed Central

    Brassey, Charlotte A.; Margetts, Lee; Kitchener, Andrew C.; Withers, Philip J.; Manning, Phillip L.; Sellers, William I.

    2013-01-01

    Classic beam theory is frequently used in biomechanics to model the stress behaviour of vertebrate long bones, particularly when creating intraspecific scaling models. Although methodologically straightforward, classic beam theory requires complex irregular bones to be approximated as slender beams, and the errors associated with simplifying complex organic structures to such an extent are unknown. Alternative approaches, such as finite element analysis (FEA), while much more time-consuming to perform, require no such assumptions. This study compares the results obtained using classic beam theory with those from FEA to quantify the beam theory errors and to provide recommendations about when a full FEA is essential for reasonable biomechanical predictions. High-resolution computed tomographic scans of eight vertebrate long bones were used to calculate diaphyseal stress owing to various loading regimes. Under compression, FEA values of minimum principal stress (σmin) were on average 142 per cent (±28% s.e.) larger than those predicted by beam theory, with deviation between the two models correlated to shaft curvature (two-tailed p = 0.03, r2 = 0.56). Under bending, FEA values of maximum principal stress (σmax) and beam theory values differed on average by 12 per cent (±4% s.e.), with deviation between the models significantly correlated to cross-sectional asymmetry at midshaft (two-tailed p = 0.02, r2 = 0.62). In torsion, assuming maximum stress values occurred at the location of minimum cortical thickness brought beam theory and FEA values closest in line, and in this case FEA values of τtorsion were on average 14 per cent (±5% s.e.) higher than beam theory. Therefore, FEA is the preferred modelling solution when estimates of absolute diaphyseal stress are required, although values calculated by beam theory for bending may be acceptable in some situations. PMID:23173199

  7. Potential uncertainty reduction in model-averaged benchmark dose estimates informed by an additional dose study.

    PubMed

    Shao, Kan; Small, Mitchell J

    2011-10-01

A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted.
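The model-combination step of BMA can be illustrated with a lightweight stand-in: approximating posterior model probabilities by BIC weights and averaging the per-model BMD estimates. Note this is only a sketch of the combination arithmetic; the study itself derives model probabilities from full MCMC posteriors, and the likelihoods and BMD values in the example are invented for illustration:

```python
import math

def bma_weights(log_likelihoods, n_params, n_obs):
    """Approximate posterior model probabilities via BIC weights.

    BIC = -2*logL + k*ln(n); weights are proportional to exp(-BIC/2),
    normalized to sum to 1 across the candidate models.
    """
    bics = [-2.0 * ll + k * math.log(n_obs)
            for ll, k in zip(log_likelihoods, n_params)]
    bmin = min(bics)                      # subtract min for numerical stability
    raw = [math.exp(-0.5 * (b - bmin)) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def bma_bmd(bmds, weights):
    """Model-averaged benchmark dose: probability-weighted combination."""
    return sum(w * b for w, b in zip(weights, bmds))
```

Two models with equal parameter counts but a log-likelihood gap of 2 receive weights of roughly 0.88 and 0.12, so the averaged BMD leans heavily toward the better-fitting model.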

  8. Influence of CT automatic tube current modulation on uncertainty in effective dose.

    PubMed

    Sookpeng, S; Martin, C J; Gentle, D J

    2016-01-01

Computed tomography (CT) scanners are equipped with automatic tube current modulation (ATCM) systems that adjust the current to compensate for variations in patient attenuation. CT dosimetry variables are not defined for ATCM situations and, thus, only the averaged values are displayed and analysed. The patient effective dose (E), which is derived from a weighted sum of organ equivalent doses, will be modified by the ATCM. Values for E for chest-abdomen-pelvis CT scans have been calculated using the ImPACT spreadsheet for patients on five CT scanners. Values for E resulting from the z-axis modulation under ATCM have been compared with results assessed using the same effective mAs values with constant tube currents. Mean values for E under ATCM were within ±10 % of those for fixed tube currents for all scanners. Cumulative dose distributions under ATCM have been simulated for two patient scans using single-slice dose profiles measured in elliptical and cylindrical phantoms on one scanner. Contributions to the effective dose from organs in the upper thorax under ATCM are 30-35 % lower for superficial tissues (e.g. breast) and 15-20 % lower for deeper organs (e.g. lungs). The effect on doses to organs in the abdomen depends on body shape, and they can be 10-22 % higher for larger patients. Results indicate that scan dosimetry parameters, dose-length product and effective mAs averaged over the whole scan can provide an assessment in terms of E that is sufficiently accurate to quantify relative risk for routine patient exposures under ATCM.

  9. Translating HbA1c measurements into estimated average glucose values in pregnant women with diabetes.

    PubMed

    Law, Graham R; Gilthorpe, Mark S; Secher, Anna L; Temple, Rosemary; Bilous, Rudolf; Mathiesen, Elisabeth R; Murphy, Helen R; Scott, Eleanor M

    2017-04-01

This study aimed to examine the relationship between average glucose levels, assessed by continuous glucose monitoring (CGM), and HbA1c levels in pregnant women with diabetes to determine whether calculations of standard estimated average glucose (eAG) levels from HbA1c measurements are applicable to pregnant women with diabetes. CGM data from 117 pregnant women (89 women with type 1 diabetes; 28 women with type 2 diabetes) were analysed. Average glucose levels were calculated from 5-7 day CGM profiles (mean 1275 glucose values per profile) and paired with a corresponding (±1 week) HbA1c measure. In total, 688 average glucose-HbA1c pairs were obtained across pregnancy (mean six pairs per participant). Average glucose level was used as the dependent variable in a regression model. Covariates were gestational week, study centre and HbA1c. There was a strong association between HbA1c and average glucose values in pregnancy (coefficient 0.67 [95% CI 0.57, 0.78]), i.e. a 1% (11 mmol/mol) difference in HbA1c corresponded to a 0.67 mmol/l difference in average glucose. The random effects model that included gestational week as a curvilinear (quadratic) covariate fitted best, allowing calculation of a pregnancy-specific eAG (PeAG). This showed that an HbA1c of 8.0% (64 mmol/mol) gave a PeAG of 7.4-7.7 mmol/l (depending on gestational week), compared with a standard eAG of 10.2 mmol/l. The PeAG associated with maintaining an HbA1c level of 6.0% (42 mmol/mol) during pregnancy was between 6.4 and 6.7 mmol/l, depending on gestational week. The HbA1c-average glucose relationship is altered by pregnancy. Routinely generated standard eAG values do not account for this difference between pregnant and non-pregnant individuals and, thus, should not be used during pregnancy. Instead, the PeAG values deduced in the current study are recommended for antenatal clinical care.

  10. Dispersion of the Neutron Emission in U{sup 235} Fission

    DOE R&D Accomplishments Database

    Feynman, R. P.; de Hoffmann, F.; Serber, R.

    1955-01-01

Equations are developed which allow the calculation of the average number of neutrons per U-235 fission from experimental measurements. Experimental methods are described, the results of which give a value of (7.8 ± 0.6) neutrons per U-235 thermal fission.

  11. Random-matrix approach to the statistical compound nuclear reaction at low energies using the Monte-Carlo technique [PowerPoint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko

    2015-11-10

This theoretical treatment of low-energy compound nucleus reactions begins with the Bohr hypothesis, with corrections, and various statistical theories. The author investigates the statistical properties of the scattering matrix containing a Gaussian Orthogonal Ensemble (GOE) Hamiltonian in the propagator. The following conclusions are reached: For all parameter values studied, the numerical average of MC-generated cross sections coincides with the result of the Verbaarschot, Weidenmueller, Zirnbauer triple-integral formula. Energy average and ensemble average agree reasonably well when the width Γ is one or two orders of magnitude larger than the average resonance spacing d. In the strong-absorption limit, the channel degree-of-freedom ν_a is 2. The direct reaction increases the inelastic cross sections while the elastic cross section is reduced.

  12. Background estimation and player detection in badminton video clips using histogram of pixel values along temporal dimension

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu

    2015-12-01

    Computer vision is an important tool for sports video processing. However, its application in badminton match analysis is very limited. In this study, we proposed a straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compared the results with the naive averaging method and the mixture of Gaussians methods, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture of Gaussians player detection method. The preliminary results indicated that the proposed histogram-based method could estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking and further studies are warranted for automated match analysis.
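A per-pixel temporal histogram of the kind proposed can be sketched as follows: the mode of each pixel's intensity histogram over time serves as the background estimate, which, unlike a naive temporal average, is robust to players briefly passing through a pixel. The bin count and detection threshold below are illustrative choices, not the study's parameters:

```python
import numpy as np

def background_from_histogram(frames, n_bins=32):
    """Estimate a static background as the per-pixel histogram mode
    along the temporal dimension of a (T, H, W) grayscale stack."""
    frames = np.asarray(frames, dtype=float)
    t, h, w = frames.shape
    edges = np.linspace(frames.min(), frames.max() + 1e-9, n_bins + 1)
    # Map every pixel value to a histogram bin index.
    bins = np.clip(np.digitize(frames, edges) - 1, 0, n_bins - 1)
    background = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            counts = np.bincount(bins[:, i, j], minlength=n_bins)
            m = counts.argmax()
            # Centre of the most populated bin = background estimate.
            background[i, j] = 0.5 * (edges[m] + edges[m + 1])
    return background

def detect_players(frames, background, threshold=20.0):
    """Foreground mask: pixels that deviate far from the background."""
    return np.abs(np.asarray(frames, dtype=float) - background) > threshold
```

A pixel occluded by a player in only a few frames still gets the correct background value, because the background intensity dominates its temporal histogram.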

  13. ASCA Observations of Distant Clusters of Galaxies

    NASA Astrophysics Data System (ADS)

    Tsuru, T. G.

We present results from ASCA observations of distant clusters of galaxies. The observed clusters are: CL0016+16, A370, A959, AC118, Zw3136, MS1305.4+2941, A1851, A963, A2163, MS0839.8+2938, A665, A1689, A2218, A586 and A1413. Their redshifts range from 0.14 to 0.55, with an average of 0.245. The negative correlation between metal abundance and plasma temperature seen in nearby clusters is also detected in the distant clusters, with no apparent difference between the two correlations. This suggests that no strong metal evolution has taken place from z = 0.2-0.3 to z = 0. Velocity dispersion data are available for seven clusters in our sample, and their betaspec values are all above the average of nearby clusters. The average betaspec for the distant clusters is 1.85 with an rms scatter of 0.62, significantly higher than the nearby clusters' value of betaspec = 0.94 plus or minus 0.08 with an rms scatter of 0.46.

  14. Determination of total tin in geological materials by electrothermal atomic-absorption spectrophotometry using a tungsten-impregnated graphite furnace

    USGS Publications Warehouse

    Zhou, L.; Chao, T.T.; Meier, A.L.

    1984-01-01

    An electrothermal atomic-absorption spectrophotometric method is described for the determination of total tin in geological materials, with use of a tungsten-impregnated graphite furnace. The sample is decomposed by fusion with lithium metaborate and the melt is dissolved in 10% hydrochloric acid. Tin is then extracted into trioctylphosphine oxide-methyl isobutyl ketone prior to atomization. Impregnation of the furnace with a sodium tungstate solution increases the sensitivity of the determination and improves the precision of the results. The limits of determination are 0.5-20 ppm of tin in the sample. Higher tin values can be determined by dilution of the extract. Replicate analyses of eighteen geological reference samples with diverse matrices gave relative standard deviations ranging from 2.0 to 10.8% with an average of 4.6%. Average tin values for reference samples were in general agreement with, but more precise than, those reported by others. Apparent recoveries of tin added to various samples ranged from 95 to 111% with an average of 102%. ?? 1984.

  15. The Cold Land Processes Experiment (CLPX-1): Analysis and Modelling of LSOS Data (IOP3 Period)

    NASA Technical Reports Server (NTRS)

    Tedesco, Marco; Kim, Edward J.; Cline, Don; Graf, Tobias; Koike, Toshio; Hardy, Janet; Armstrong, Richard; Brodzik, Mary

    2004-01-01

Microwave brightness temperatures at 18.7, 36.5, and 89 GHz collected at the Local-Scale Observation Site (LSOS) of the NASA Cold Land Processes Field Experiment in February 2003 (third Intensive Observation Period) were simulated using a Dense Media Radiative Transfer model (DMRT) based on the Quasi-Crystalline Approximation with Coherent Potential (QCA-CP). Inputs to the model were averaged from LSOS snow pit measurements, although different averages were used for the lower frequencies than for the highest one, due to the different penetration depths and to the stratigraphy of the snowpack. Mean snow particle radius was computed as a best-fit parameter. Results show that the model was able to satisfactorily reproduce the brightness temperatures measured by the University of Tokyo's Ground-Based Microwave Radiometer system (GBMR-7). The values of the best-fit snow particle radii were found to fall within the range of values obtained by averaging the field-measured mean particle sizes for the three classes of small, medium and large grain sizes measured at the LSOS site.

  16. Viking bistatic radar observations of the hellas basin on Mars: preliminary results.

    PubMed

    Simpson, R A; Tyler, G L; Brenkle, J P; Sue, M

    1979-01-05

    Preliminary reduction of Viking bistatic radar data gives root-mean-square surface slopes in the Hellas basin on Mars of about 4 degrees on horizontal scales averaged over 10 centimeters to 100 meters. This roughness decreases slightly with position along the ground track, south to north. The dielectric constant in this area appears to be approximately 3.1, greater than the martian average. These values are characteristic of lunar maria and are similar to those found near the Viking lander site in Chryse with the use of Earth-based radar.

  17. Choquet integral as an alternative aggregation method to measure the overall academic performance of primary school students: A case study

    NASA Astrophysics Data System (ADS)

    Kasim, Maznah Mat; Abdullah, Siti Rohana Goh

    2014-07-01

Many averaging methods are available to aggregate a set of numbers into a single number. However, these methods do not consider the interdependencies between the criteria underlying the numbers. This paper highlights the Choquet integral as an alternative aggregation method in which estimates of the interdependencies between criteria are incorporated into the aggregation process. The interdependency values can be estimated using the lambda fuzzy measure method. By accounting for the interdependencies, or interactions, between the criteria, the resulting aggregated values are more meaningful than those obtained with ordinary averaging methods. The application of the Choquet integral is illustrated in a case study of the overall academic achievement of year-six pupils in a selected primary school in a northern state of Malaysia.
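A minimal sketch of the two ingredients described here, solving for the Sugeno lambda from the criterion densities and then computing the discrete Choquet integral, might look as follows. The densities and scores in the example are invented for illustration; the paper's actual pupil data are not reproduced:

```python
def solve_lambda(densities, tol=1e-10):
    """Solve prod(1 + lam*g_i) = 1 + lam for the Sugeno lambda-measure.

    densities: criterion densities g_i, each in (0, 1). If they sum to 1
    the measure is additive and lambda = 0.
    """
    s = sum(densities)
    if abs(s - 1.0) < tol:
        return 0.0

    def f(lam):
        p = 1.0
        for g in densities:
            p *= 1.0 + lam * g
        return p - (1.0 + lam)

    # lam = 0 is always a root; the meaningful root lies in (-1, 0) when
    # the densities sum to more than 1, in (0, inf) when they sum to less.
    if s > 1.0:
        lo, hi = -1.0 + tol, -tol
    else:
        lo, hi = tol, 1.0
        while f(hi) < 0.0:        # expand until the sign changes
            hi *= 2.0
    for _ in range(200):          # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def choquet(scores, densities):
    """Discrete Choquet integral of the scores w.r.t. the lambda-fuzzy
    measure built from the given densities."""
    lam = solve_lambda(densities)

    def measure(subset):
        if lam == 0.0:
            return sum(densities[i] for i in subset)
        p = 1.0
        for i in subset:
            p *= 1.0 + lam * densities[i]
        return (p - 1.0) / lam

    # Sort criteria by descending score; accumulate score * measure increment.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    total, prev = 0.0, 0.0
    for k in range(1, len(order) + 1):
        cur = measure(order[:k])
        total += scores[order[k - 1]] * (cur - prev)
        prev = cur
    return total
```

When the densities sum to 1, lambda is 0 and the Choquet integral collapses to the ordinary weighted mean; densities summing to less than 1 give a positive lambda (a superadditive measure), which shifts the aggregate toward the lower scores.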

  18. Combining remotely sensed and other measurements for hydrologic areal averages

    NASA Technical Reports Server (NTRS)

    Johnson, E. R.; Peck, E. L.; Keefer, T. N.

    1982-01-01

    A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.

  19. Quantification of differences between occupancy and total monitoring periods for better assessment of exposure to particles in indoor environments

    NASA Astrophysics Data System (ADS)

    Wierzbicka, A.; Bohgard, M.; Pagels, J. H.; Dahl, A.; Löndahl, J.; Hussein, T.; Swietlicki, E.; Gudmundsson, A.

    2015-04-01

    For the assessment of personal exposure, the concentrations of pollutants while people are actually present in a given indoor environment (occupancy time) are of prime importance. However, this kind of data is frequently not reported. The aim of this study was to assess differences in particle characteristics between occupancy time and the total monitoring period, the latter being the most frequently used averaging time in published data. Seven indoor environments were selected in Sweden and Finland: an apartment, two houses, two schools, a supermarket, and a restaurant. They were assessed for particle number and mass concentrations and number size distributions. The measurements, using a Scanning Mobility Particle Sizer and two photometers, were conducted for seven consecutive days during winter at each location. Particle concentrations in residences and schools were, as expected, highest during occupancy time. In the apartment, average and median PM2.5 mass concentrations during occupancy were 29% and 17% higher, respectively, than during the total monitoring period. In both schools, the average and median PM2.5 mass concentrations were higher during teaching hours than during the total monitoring period, by 16% and 32%, respectively. For particle number concentrations (PNC) in the apartment, the average and median values during occupancy were 33% and 58% higher, respectively, than during the total monitoring period. In both houses and in the schools, the average and median PNC were similar for the occupancy and total monitoring periods. General conclusions cannot be drawn from measurements in this limited number of indoor environments; however, the results confirm a strong dependence on the type and frequency of particle-generating indoor activities, as well as on site specificity. The results also indicate that excluding data from non-occupancy periods can improve estimates of the particle concentrations and characteristics relevant to exposure assessment, which is crucial for estimating health effects in epidemiological and toxicological studies.

  20. Double dissociation of value computations in orbitofrontal and anterior cingulate neurons

    PubMed Central

    Kennerley, Steven W.; Behrens, Timothy E. J.; Wallis, Jonathan D.

    2011-01-01

    Damage to prefrontal cortex (PFC) impairs decision-making, but the underlying value computations that might cause such impairments remain unclear. Here we report that value computations are doubly dissociable within PFC neurons. While many PFC neurons encoded chosen value, they used opponent encoding schemes such that averaging across the neuronal population eliminated value coding. However, a special population of neurons in anterior cingulate cortex (ACC) - but not orbitofrontal cortex (OFC) - multiplexed chosen value across decision parameters using a unified encoding scheme and encoded reward prediction errors. In contrast, neurons in OFC - but not ACC - encoded chosen value relative to the recent history of choice values. Together, these results suggest complementary valuation processes across PFC areas: OFC neurons dynamically evaluate current choices relative to recent choice values, while ACC neurons encode choice predictions and prediction errors using a common valuation currency reflecting the integration of multiple decision parameters. PMID:22037498

  1. Association between obesity and sperm quality.

    PubMed

    Ramaraju, G A; Teppala, S; Prathigudupu, K; Kalagara, M; Thota, S; Kota, M; Cheemakurthi, R

    2018-04-01

    Obese men are thought to be more likely to produce abnormal spermatozoa; however, results from previous studies are inconclusive. Advances in computer-aided sperm analysis (CASA) enable precise evaluation of sperm quality, including assessment of several parameters. We studied a retrospective cohort of 1285 men with CASA data from our infertility clinic during 2016. Obesity (BMI ≥30) was associated with lower (mean ± SE) volume (-0.28 ± 0.12, p-value = .04), sperm count (-48.36 ± 16.51, p-value = .002), concentration (-15.83 ± 5.40, p-value = .01), progressive motility (-4.45 ± 1.92, p-value = .001), total motility (-5.50 ± 2.12, p-value = .002), average curve velocity (μm/s) (-2.09 ± 0.85, p-value = .001), and average path velocity (μm/s) (-1.59 ± 0.75, p-value = .006), and with higher per cent head defects (0.92 ± 0.81, p-value = .02), thin heads (1.12 ± 0.39, p-value = .007), and pyriform heads (1.36 ± 0.65, p-value = .02). Obese men were also more likely to have (odds ratio, 95% CI) oligospermia (1.67, 1.15-2.41, p-value = .007) and asthenospermia (1.82, 1.20-2.77, p-value = .005). This is the first report of abnormal sperm parameters in obese men based on CASA. Clinicians may need to factor in paternal obesity prior to assisted reproduction. © 2017 Blackwell Verlag GmbH.

  2. Assessment of renal vasoconstriction in vivo after intra-arterial administration of the isosmotic contrast medium iodixanol compared to the low-osmotic contrast medium iopamidol.

    PubMed

    Treitl, Marcus; Rupprecht, Harald; Wirth, Stefan; Korner, Markus; Reiser, Maximilian; Rieger, Johannes

    2009-05-01

    Low-osmotic contrast media (LOCM) such as iopamidol are known to increase the renal resistance index (RRI). The aim of our study was to evaluate in vivo the effects of intra-arterial administration of LOCM on the human RRI, in comparison to an isosmotic contrast medium (IOCM) such as iodixanol. Twenty patients (16 males, 4 females; mean age 66 years) with normal renal function (mean creatinine 1.0 mg/dl) underwent digital subtraction angiography (DSA) of the abdominal and lower-limb arteries. Ten patients received LOCM and ten received IOCM (150 ml on average, 20 ml/s). The RRI was assessed by an experienced nephrologist with duplex ultrasound from 15 min before until 30 min after the first injection, at intervals of 1-5 min. The baseline RRI and the differential RRI were calculated. The baseline RRI was 0.69 in the LOCM group and 0.71 in the IOCM group. After LOCM, a significant increase of the RRI to an average of 0.73 (P ≤ 0.001) was found 2 min after the first injection, whereas IOCM did not produce a significant change (the RRI remained 0.71 on average, P ≥ 0.1). In the LOCM group, the RRI returned to baseline after 30 min (±2.3 min). Intra-arterial administration of IOCM had no influence on renal vascular resistance as expressed by the RRI, unlike LOCM, which induced a highly significant increase of the RRI for up to 30 min.

  3. Seasonal trend analysis and ARIMA modeling of relative humidity and wind speed time series around Yamula Dam

    NASA Astrophysics Data System (ADS)

    Eymen, Abdurrahman; Köylü, Ümran

    2018-02-01

    Local climate change is determined by analysis of long-term meteorological records. In the statistical analysis of the meteorological data, the Mann-Kendall rank test, a non-parametric test, has been used to detect trends, while the Theil-Sen method has been used to determine the magnitude of the trend; both were applied to data from 16 meteorological stations. The stations cover the provinces of Kayseri, Sivas, Yozgat, and Nevşehir in the Central Anatolia region of Turkey. Changes in land use affect local climate, and dams are structures that cause major changes to the land. Yamula Dam is located 25 km northwest of Kayseri and impounds a large water body of approximately 85 km2. The above tests have been used for detecting the presence of any positive or negative trend in the meteorological data. The seasonal average, maximum, and minimum values of relative humidity and the seasonal average wind speed were organized as time series, and the tests were conducted accordingly. As a result of these tests, an increase was observed in minimum relative humidity values in the spring, summer, and autumn seasons. For the seasonal average wind speed, a decrease was detected at nine stations in all seasons, whereas an increase was observed at four stations. After the trend analysis, the pre-dam mean relative humidity time series were modeled with the Autoregressive Integrated Moving Average (ARIMA) model, a statistical modeling tool, and post-dam relative humidity values were predicted by the ARIMA models.
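
    The two tests named above are simple to state: the Mann-Kendall S statistic counts concordant minus discordant pairs in the series, and the Theil-Sen estimator takes the median of all pairwise slopes. A minimal sketch for an evenly spaced series (assuming unit time steps; tie handling and significance testing are omitted):

```python
from itertools import combinations
from statistics import median

def mann_kendall_s(x):
    """Mann-Kendall S statistic: S > 0 suggests an increasing trend."""
    sign = lambda d: (d > 0) - (d < 0)
    return sum(sign(x[j] - x[i]) for i, j in combinations(range(len(x)), 2))

def theil_sen_slope(x):
    """Theil-Sen trend magnitude: median of all pairwise slopes."""
    return median((x[j] - x[i]) / (j - i) for i, j in combinations(range(len(x)), 2))
```

    In practice the S statistic is converted to a Z score (with a tie-aware variance) to decide significance; libraries such as SciPy provide equivalent routines.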

  4. Effects of photosynthetic photon flux density, frequency, duty ratio, and their interactions on net photosynthetic rate of cos lettuce leaves under pulsed light: explanation based on photosynthetic-intermediate pool dynamics.

    PubMed

    Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro

    2018-06-01

    Square-wave pulsed light is characterized by three parameters: average photosynthetic photon flux density (PPFD), pulsed-light frequency, and duty ratio (the ratio of light-period duration to that of the light-dark cycle). The light-period PPFD is in turn determined by the averaged PPFD and the duty ratio. We investigated the effects of these parameters and their interactions on the net photosynthetic rate (Pn) of cos lettuce leaves for every combination of parameters. Averaged PPFD values were 0-500 µmol m⁻² s⁻¹ and frequency values were 0.1-1000 Hz. White LED arrays were used as the light source. Every parameter affected Pn, and interactions between parameters were observed for all combinations. Pn under pulsed light was lower than that measured under continuous light of the same averaged PPFD, and this difference was enhanced with decreasing frequency and increasing light-period PPFD. A mechanistic model was constructed to estimate the amount of stored photosynthetic intermediates over time under pulsed light. The results indicated that all effects of the parameters and their interactions on Pn were explainable by the dynamics of accumulation and consumption of photosynthetic intermediates.
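
    A toy version of such a pool model (our own illustrative sketch, not the authors' model) treats the intermediate pool as filling at a rate proportional to the light-period PPFD, capped at a finite pool size, and being consumed at a rate proportional to the pool level; all parameter values below are arbitrary assumptions:

```python
def mean_pn(avg_ppfd, freq, duty, cap=30.0, rate=0.2, t_end=200.0, dt=0.002):
    """Mean net photosynthetic rate from a toy intermediate-pool model.
    Light phase fills the pool (capped at `cap`, overflow lost);
    consumption is rate * pool at every step. Averages over the second half
    of the run to skip the start-up transient."""
    light_ppfd = avg_ppfd / duty      # light-period PPFD from avg PPFD and duty
    period = 1.0 / freq
    pool, consumed = 0.0, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        if (t % period) < duty * period:          # light phase of the cycle
            pool = min(cap, pool + light_ppfd * dt)
        use = rate * pool * dt                    # consumption = photosynthesis
        pool -= use
        if t >= t_end / 2:
            consumed += use
    return consumed / (t_end / 2)
```

    With these assumptions, low-frequency pulses saturate the pool during the light period (excess input is lost) and drain it during the long dark period, so mean Pn falls below the continuous-light value, while high-frequency pulses approach continuous light — qualitatively matching the trends reported above.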

  5. [Micro-simulation of firms' heterogeneity on pollution intensity and regional characteristics].

    PubMed

    Zhao, Nan; Liu, Yi; Chen, Ji-Ning

    2009-11-01

    Within the same industrial sector, pollution intensity is heterogeneous across firms. Errors arise when the sector's average pollution intensity, calculated from the limited number of firms in the environmental statistics database, is used to represent the sector's regional economic-environmental status. Based on a production function that includes environmental depletion as an input, a micro-simulation model of firms' operational decision-making is proposed, allowing the heterogeneity of firms' pollution intensity to be described mechanistically. Taking the mechanical manufacturing sector of Deyang city in 2005 as a case study, the model's parameters were estimated, and the actual COD emission intensities of the firms in the environmental statistics database were properly matched by the simulation. The model's results also show that the regional average COD emission intensity calculated from the environmental statistics firms (0.0026 t per 10,000 yuan fixed assets; 0.0015 t per 10,000 yuan production value) is lower than the regional average intensity calculated from all firms in the region (0.0030 t per 10,000 yuan fixed assets; 0.0023 t per 10,000 yuan production value). The differences among the average intensities of the six counties are significant as well. These regional characteristics of pollution intensity are attributable to the sector's internal structure (the distributions of firm scale and technology) and its spatial variation.
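
    The gap between the two averages reflects how a regional intensity (total emissions over total output) computed from the statistics-database firms alone can differ from one computed over all firms. A hedged illustration with made-up numbers (not the paper's data): if the database covers only a large, relatively clean firm while the region also contains a small, dirtier one, the sample-based intensity understates the regional value.

```python
def regional_intensity(firms):
    """Regional COD intensity = total emissions (t) / total production value
    (units of 10,000 yuan), i.e. an output-weighted average intensity."""
    total_cod = sum(cod for cod, output in firms)
    total_output = sum(output for cod, output in firms)
    return total_cod / total_output

# hypothetical firms: (COD emissions in t, production value in 10,000 yuan)
statistic_firms = [(15.0, 10000.0)]            # large firm in the database
all_firms = statistic_firms + [(8.0, 2000.0)]  # plus a small, dirtier firm
```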

  6. A Methodology to Detect Abnormal Relative Wall Shear Stress on the Full Surface of the Thoracic Aorta Using 4D Flow MRI

    PubMed Central

    van Ooij, Pim; Potters, Wouter V.; Nederveen, Aart J.; Allen, Bradley D.; Collins, Jeremy; Carr, James; Malaisrie, S. Chris; Markl, Michael; Barker, Alex J.

    2014-01-01

    Purpose To compute cohort-averaged wall shear stress (WSS) maps in the thoracic aorta of patients with aortic dilatation or valvular stenosis and to detect abnormal regional WSS. Methods Systolic WSS vectors, estimated from 4D flow MRI data, were calculated along the thoracic aorta lumen in 10 controls, 10 patients with dilated aortas, and 10 patients with aortic valve stenosis. 3D segmentations of each aorta were co-registered by group and used to create a cohort-specific aortic geometry. The WSS vectors of each subject were interpolated onto the corresponding cohort-specific geometry to create cohort-averaged WSS maps. A Wilcoxon rank sum test was used to generate aortic P-value maps (P<0.05) representing regional relative WSS differences between groups. Results Cohort-averaged systolic WSS maps and P-value maps were successfully created for all cohorts and comparisons. The dilation cohort showed significantly lower WSS on 7% of the ascending aorta surface, whereas the stenosis cohort showed significantly higher WSS on 34% of the ascending aorta surface. Conclusions The findings of this study demonstrated the feasibility of generating cohort-averaged WSS maps for the visualization and identification of regionally altered WSS in the presence of disease, as compared to healthy controls. PMID:24753241

  7. Estimating systemic exposure to ethinyl estradiol from an oral contraceptive.

    PubMed

    Westhoff, Carolyn L; Pike, Malcolm C; Tang, Rosalind; DiNapoli, Marianne N; Sull, Monica; Cremers, Serge

    2015-05-01

    This study was conducted to compare single-dose pharmacokinetics of ethinyl estradiol in an oral contraceptive with steady-state values and to assess whether any simpler measures could provide an adequate proxy of the "gold standard" 24-hour steady-state area under the curve (AUC) value. Identification of a simple, less expensive measure of systemic ethinyl estradiol exposure would be useful for larger studies designed to assess the relationship between an individual's ethinyl estradiol exposure and side-effects. We collected 13 samples over 24 hours for pharmacokinetic analysis on days 1 and 21 of the first cycle of a monophasic oral contraceptive that contained 30 μg ethinyl estradiol and 150 μg levonorgestrel in 17 nonobese healthy white women. We also conducted an abbreviated single-dose 9-sample pharmacokinetic analysis after a one-month washout. Ethinyl estradiol was measured by liquid chromatography-tandem mass spectrometry. We compared results of the full 13-sample steady-state pharmacokinetic analysis with results calculated using fewer samples (9 or 5) and after the single doses. We calculated Pearson correlation coefficients to evaluate the relationships between these estimates of systemic ethinyl estradiol exposure. The AUC, maximum, and 24-hour values were similar after the 2 single oral contraceptive doses (AUC: r=0.92). The steady-state 13-sample 24-hour AUC value was correlated highly with the average 9-sample AUC value after the 2 single doses (r=0.81; P=.0002). This correlation remained the same when the number of single-dose samples was reduced to 4, taken at 1, 2.5, 4, and 24 hours. The 24-hour value at steady state was correlated highly with the 24-hour steady-state AUC value (r=0.92; P<.0001). The average of the 24-hour values after the 2 single doses was also correlated quite highly with the steady-state AUC value (r=0.72; P=.0026). Limited blood sampling, including results from 2 single doses, gave highly correlated estimates of an oral contraceptive user's steady-state ethinyl estradiol exposure. Copyright © 2015 Elsevier Inc. All rights reserved.
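
    The AUC values discussed above are typically computed with the linear trapezoidal rule over the sampled concentration-time points. A minimal generic sketch (the study's actual sampling times and concentrations are not reproduced here):

```python
def auc_trapezoid(times, concs):
    """Area under the concentration-time curve by the linear trapezoidal rule.
    times and concs are equal-length sequences sorted by time."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))
```

    Comparing the AUC from a full sampling grid with the AUC from a reduced grid (e.g. 4 samples instead of 13) is exactly the kind of proxy assessment described in the abstract.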

  8. Toward an organ based dose prescription method for the improved accuracy of murine dose in orthovoltage x-ray irradiators.

    PubMed

    Belley, Matthew D; Wang, Chu; Nguyen, Giao; Gunasingha, Rathnayaka; Chao, Nelson J; Chen, Benny J; Dewhirst, Mark W; Yoshizumi, Terry T

    2014-03-01

    Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Organ doses were simulated in the Geant4 Application for Tomographic Emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Average doses in soft-tissue organs were found to vary by as much as 23%-32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters and using different dose rates for different organs.

  9. Heat flow and hydrocarbon generation in the Transylvanian basin, Romania

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cranganu, C.; Deming, D.

    1996-10-01

    The Transylvanian basin in central Romania is a Neogene depression superimposed on the Cretaceous nappe system of the Carpathian Mountains. The basin contains the main gas reserves of Romania and is one of the most important gas-producing areas of continental Europe; since 1902, gas has been produced from more than 60 fields. Surface heat flow in the Transylvanian basin as estimated in other studies ranges from 26 to 58 mW/m², with a mean value of 38 mW/m², relatively low compared to surrounding areas. The effect of sedimentation on heat flow and temperature in the Transylvanian basin was estimated with a numerical model that solved the heat equation in one dimension. Because both sediment thickness and heat flow vary widely throughout the Transylvanian basin, a wide range of model variables was used to bracket the range of possibilities. Three different burial histories were considered (thin, average, and thick), along with three different values of background heat flow (low, average, and high); altogether, nine model permutations were studied. Modeling results show that average heat flow in the Transylvanian basin was depressed approximately 16% during rapid Miocene sedimentation, whereas present-day heat flow remains depressed, on average, about 17% below equilibrium values. We estimated source rock maturation and the timing of hydrocarbon generation by applying Lopatin's method. Potential source rocks in the Transylvanian basin are Oligocene-Miocene, Cretaceous, and Jurassic black shales. Results show that potential source rocks entered the oil window no earlier than approximately 13 Ma, at depths of between 4200 and 8800 m. Most simulations encompassing a realistic range of sediment thicknesses and background heat flows show that potential source rocks are presently in the oil window; however, no oil has ever been discovered or produced in the Transylvanian basin.
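
    A one-dimensional conduction model of the kind mentioned above can be sketched with an explicit finite-difference (FTCS) scheme. This is a generic illustration with arbitrary boundary temperatures and no sedimentation term, not the authors' model; the scheme is stable for r = α·Δt/Δx² ≤ 0.5 and relaxes toward the linear conductive steady state.

```python
def heat_1d(n=11, steps=5000, r=0.25, t_top=0.0, t_bot=10.0):
    """Explicit FTCS solution of 1-D heat conduction with fixed boundary
    temperatures. r = alpha*dt/dx**2 must be <= 0.5 for stability."""
    T = [0.0] * n
    T[0], T[-1] = t_top, t_bot
    for _ in range(steps):
        T = ([t_top] +
             [T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
              for i in range(1, n - 1)] +
             [t_bot])
    return T
```

    A basin model of this type would additionally advect the grid as sediment accumulates, which is what temporarily depresses the surface heat flow during rapid deposition.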

  10. 7 CFR 59.200 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... these same swine. Average lean percentage. The term “average lean percentage” means the value equal to the average percentage of the carcass weight comprised of lean meat for the swine slaughtered during the applicable reporting period. Whenever the packer changes the manner in which the average lean...

  11. 7 CFR 59.200 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... these same swine. Average lean percentage. The term “average lean percentage” means the value equal to the average percentage of the carcass weight comprised of lean meat for the swine slaughtered during the applicable reporting period. Whenever the packer changes the manner in which the average lean...

  12. The Study of Equilibrium factor between Radon-222 and its Daughters in Bangkok Atmosphere by Gamma-ray Spectrometry

    NASA Astrophysics Data System (ADS)

    Rujiwarodom, Rachanee

    2010-05-01

    To study the equilibrium between radon-222 and its daughters in the Bangkok atmosphere by gamma-ray spectrometry, air samples were collected on 48 activated charcoal canisters and 360 glass fiber filters using a high-volume jet-air sampler from December 2007 to November 2008. The gamma-ray spectra were measured with an HPGe (hyper-pure germanium) detector. Assuming secular equilibrium between radon-222 and its decay products, the radon-222 on the activated charcoal canisters and its daughters on the glass fiber filters collected over the same time interval were calculated. The equilibrium factor (F) in the open air ranged from a minimum of 0.38 to a maximum of 0.75, with an average value of 0.56±0.12. F varied diurnally, with a maximum from night to early morning and lower values in the afternoon; it was also higher in winter than in summer, in keeping with the behavior of the Earth's atmosphere. The equilibrium factor also depended on the concentration of dust in the atmosphere. People living in Bangkok were exposed to an average radon-222 concentration of 30 Bq/m3 in the atmosphere. Given the equilibrium factor (0.56±0.12) and this average radon-222 concentration, people were exposed to alpha energy from radon-222 and its daughters at 0.005 WL (Working Level), which is below the safety standard of 0.02 WL. Keywords: radon, radon daughters, equilibrium factor, gamma-ray spectrum analysis, Bangkok, Thailand
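
    The reported 0.005 WL figure can be reproduced from the abstract's own numbers, assuming the commonly used conversion that 1 WL corresponds to an equilibrium-equivalent radon-222 concentration of roughly 3700 Bq/m3 (this conversion factor is our assumption, not stated in the abstract):

```python
WL_EEC = 3700.0  # Bq/m^3 of equilibrium-equivalent radon-222 per working level

def working_level(radon_bq_m3, equilibrium_factor):
    """Working Level from radon concentration and equilibrium factor F."""
    eec = radon_bq_m3 * equilibrium_factor  # equilibrium-equivalent concentration
    return eec / WL_EEC

wl = working_level(30.0, 0.56)  # the average values reported in the abstract
```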

  13. [Efficacy and problems of bladder volume measurement using portable three dimensional ultrasound scanning device--in particular, on measuring bladder volume lower than 100ml].

    PubMed

    Oh-Oka, Hitoshi; Nose, Ryuichiro

    2005-09-01

    Using a portable three-dimensional ultrasound scanning device (Bladder Scan BVI6100, Diagnostic Ultrasound Corporation), we examined measured values of bladder volume, focusing especially on volumes lower than 100 ml. A total of 100 patients (66 male, 34 female) were enrolled in the study. In each patient we compared the measured value (the average of three measurements of bladder urine volume after a trial, in male and female modes) obtained with the BVI6100 against the actual volume of the sample obtained by urethral catheterization. We examined the factors that could increase the error rate and introduced effective techniques to reduce measurement errors. The actual measured values in all patients correlated well with the average of three BVI6100 measurements after a trial in male mode: the correlation coefficient was 0.887, the error rate was -4.6 ± 24.5%, and the average coefficient of variation was 15.2. The BVI6100 measurement was observed to be influenced by patient-side factors (extracted edges between bladder wall and urine, thickened bladder wall, irregular bladder wall, flattening of the bladder, mistaking the prostate for the bladder in males, mistaking the bladder for the uterus in female mode, etc.) and by examiner-side factors (angle between the probe and the abdominal wall, contact between the abdominal wall and the ultrasound probe, control of probe deflection during use, etc.). When appropriate patients are chosen and proper measurement is performed, the BVI6100 provides significantly higher accuracy in determining bladder volume than existing abdominal ultrasound methods. The BVI6100 is a convenient and extremely effective device also for the measurement of bladder urine volumes over 100 ml.

  14. Evaluation and Treatment of Essential Hypertension During Short Duration Space Flight

    NASA Technical Reports Server (NTRS)

    Rossum, Alfred C.; Baisden, Dennis L.

    2000-01-01

    During the last four decades of manned space flight, two individuals have successfully flown in space with a preflight diagnosis of essential hypertension (HTN). Treatment of this disease process in the astronaut population warrants special consideration, particularly when selecting medication for a mission. A retrospective review of data offers two different clinical scenarios involving the treatment, or lack thereof, of essential hypertension during space flight. Case I: A Caucasian quinquagenarian diagnosed with HTN one year prior to the mission obtained flight certification after a negative diagnostic workup. The patient was placed on a diuretic. Preflight isolated blood pressure (BP) measurements averaged 138/102. Inflight, the patient electively declined medication; a 36-hour BP monitor revealed an average value of 124/87. Postflight, BP measurements returned to preflight values. Case II: A Caucasian quadragenarian diagnosed with HTN 6 months prior to launch completed flight training after a negative diagnostic workup. The patient was placed on an ACE inhibitor. Preflight BP measurements averaged 130/80. Inflight, isolated BP measurements were considerably lower, and normotensive values were obtained postflight. In both cases, BP values inflight were lower than pre- or postflight values; Yelle et al. have confirmed similar findings in the normotensive astronaut population. Spaceflight may result in fluid shifting, mild dehydration, electrolyte imbalance, orthostatic hypotension, and increased heart rates. Based on these factors, certain classes of antihypertensive agents such as vasodilators, beta-blockers, and diuretics are excluded from consideration as a primary therapeutic modality. To date, ACE inhibitors are viewed as the more acceptable drug of choice during spaceflight, and newer classes of drugs may provide additional choices. Presently, astronauts who develop uncomplicated HTN may continue their careers when treated with the appropriate class of antihypertensive medication.

  15. Comparison of edge detection techniques for M7 subtype Leukemic cell in terms of noise filters and threshold value

    NASA Astrophysics Data System (ADS)

    Salam, Afifah Salmi Abdul; Isa, Mohd. Nazrin Md.; Ahmad, Muhammad Imran; Che Ismail, Rizalafande

    2017-11-01

    This paper studies and identifies various threshold values for two commonly used edge detection techniques, Sobel and Canny edge detection, with the aim of determining which values give accurate results in identifying a particular leukemic cell. Evaluating the suitability of edge detectors is also essential, as feature extraction of the cell depends greatly on image segmentation (edge detection). First, an image of the M7 subtype of Acute Myelocytic Leukemia (AML) was chosen because diagnostic studies of this subtype have been found lacking. Next, noise filters were applied to enhance image quality; by comparing images with no filter, a median filter, and an average filter, useful information can be acquired. Each threshold value was fixed at 0, 0.25, and 0.5. The investigation found that, without any filter, Canny with a threshold value of 0.5 yields the best result.
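
    A Sobel gradient with a normalized threshold, as studied above, can be sketched in pure Python (an illustrative implementation; the paper's tooling and exact threshold normalization are not specified, so the peak-normalized threshold below is an assumption):

```python
def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels; img is a 2-D list of floats.
    Border pixels are left at zero."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def threshold_edges(mag, t):
    """Binary edge map: keep pixels whose peak-normalized magnitude exceeds t."""
    peak = max(max(row) for row in mag) or 1.0
    return [[1 if v / peak > t else 0 for v in row] for row in mag]
```

    Raising t from 0 toward 0.5 discards weak gradients (noise) while keeping strong cell boundaries, which is the trade-off the threshold comparison in the paper explores.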

  16. Performance Comparison of Big Data Analytics With NEXUS and Giovanni

    NASA Astrophysics Data System (ADS)

    Jacob, J. C.; Huang, T.; Lynnes, C.

    2016-12-01

    NEXUS is an emerging data-intensive analysis framework, developed with a new approach to handling science data that enables large-scale data analysis, and is available as open source. We compare the performance of NEXUS and Giovanni for 3 statistics algorithms applied to NASA datasets. Giovanni is a statistics web service at the NASA Distributed Active Archive Centers (DAACs); NEXUS is a cloud-computing environment developed at JPL and built on Apache Solr, Cassandra, and Spark. We compute a global time-averaged map, a correlation map, and an area-averaged time series. The first two algorithms average over time to produce a value for each pixel in a 2-D map; the third averages spatially to produce a single value for each time step. This talk reports benchmark comparison findings that indicate a 15x speedup with NEXUS over Giovanni when computing an area-averaged time series of daily precipitation rate for the Tropical Rainfall Measuring Mission (TRMM, 0.25 degree spatial resolution) over the Continental United States for 14 years (2000-2014), with 64-way parallelism and 545 tiles per granule. 16-way parallelism with 16 tiles per granule worked best with NEXUS for computing an 18-year (1998-2015) TRMM daily precipitation global time-averaged map (2.5x speedup) and an 18-year global map of the correlation between TRMM daily precipitation and TRMM real-time daily precipitation (7x speedup). These and other benchmark results will be presented along with key lessons learned in applying the NEXUS tiling approach to big data analytics in the cloud.
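
    The two reductions benchmarked above are simple to state: average over the time axis for each pixel, or over the spatial axes for each time step. A minimal sketch on a small data[t][y][x] cube (illustrative only; NEXUS itself performs these reductions in parallel over distributed tiles):

```python
from statistics import mean

def time_averaged_map(cube):
    """cube[t][y][x] -> 2-D map: each pixel averaged over the time axis."""
    nt, ny, nx = len(cube), len(cube[0]), len(cube[0][0])
    return [[mean(cube[t][y][x] for t in range(nt)) for x in range(nx)]
            for y in range(ny)]

def area_averaged_series(cube):
    """cube[t][y][x] -> 1-D time series: each frame averaged over space."""
    return [mean(v for row in frame for v in row) for frame in cube]
```

    In a tiled system the same reductions are expressed as per-tile partial sums that are combined afterwards, which is what makes the degree of parallelism and the tiles-per-granule choice matter for performance.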

  17. R-Matrix Analysis of Structures in Economic Indices: from Nuclear Reactions to High-Frequency Trading

    NASA Astrophysics Data System (ADS)

    Firk, Frank W. K.

    2014-03-01

    It is shown that the R-matrix theory of nuclear reactions is a viable mathematical theory for the description of the fine, intermediate and gross structure observed in the time-dependence of economic indices in general, and the daily Dow Jones Industrial Average in particular. A Lorentzian approximation to R-matrix theory is used to analyze the complex structures observed in the Dow Jones Industrial Average on a typical trading day. Resonant structures in excited nuclei are characterized by the values of their fundamental strength function, (average total width of the states)/(average spacing between adjacent states). Here, values of the ratios (average lifetime of individual states of a given component of the daily Dow Jones Industrial Average)/(average interval between the adjacent states) are determined. The ratios for the observed fine and intermediate structure of the index are found to be essentially constant throughout the trading day. These quantitative findings are characteristic of the highly statistical nature of many-body, strongly interacting systems, typified by daily trading. It is therefore proposed that the values of these ratios, determined in the first hour-or-so of trading, be used to provide valuable information concerning the likely performance of the fine and intermediate components of the index for the remainder of the trading day.

  18. Dynamic speckle interferometry of microscopic processes in solid state and thin biological objects

    NASA Astrophysics Data System (ADS)

    Vladimirov, A. P.

    2015-08-01

    A modernized theory of dynamic speckle interferometry is considered. It is shown that the time-averaged radiation intensity contains parameters characterizing changes in the wave phase, and an expression for the time autocorrelation function of the radiation intensity is put forward. In the limit of vanishing averaging time, the formulas reduce to the earlier expressions. The results of experiments on high-cycle material fatigue and on cell metabolism, conducted using the time-averaging technique, are discussed, and good reproducibility of the results is demonstrated. The upgraded technique allows analyzing the accumulation of fatigue damage, detecting the moment of crack initiation, and determining crack growth velocity under uninterrupted cyclic load. It is also demonstrated that, in experiments with a cell monolayer, the technique allows studying metabolic changes both in an individual cell and in a group of cells.

  19. An investigation of the key parameters for predicting PV soiling losses

    DOE PAGES

    Micheli, Leonardo; Muller, Matthew

    2017-01-25

One hundred and two environmental and meteorological parameters have been investigated and compared with the performance of 20 soiling stations installed in the USA, in order to determine their ability to predict the soiling losses occurring on PV systems. The results of this investigation showed that the annual average of the daily mean particulate matter values recorded by monitoring stations deployed near the PV systems is the best soiling predictor, with coefficients of determination (R2) as high as 0.82. The precipitation pattern was also found to be relevant: among the different meteorological parameters, the average length of dry periods had the best correlation with the soiling ratio. Lastly, a preliminary investigation of two-variable regressions was attempted and resulted in an adjusted R2 of 0.90 when a combination of PM2.5 and a binary classification for the average length of the dry period was introduced.
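The adjusted R2 quoted above penalizes a regression for each added predictor, so a two-variable model must earn its extra term. A minimal sketch of that computation, using invented synthetic data (the PM2.5 levels, dry-period flags, and coefficients below are illustrative assumptions, not the study's measurements):

```python
import numpy as np

def adjusted_r2(y, y_pred, n_params):
    """Adjusted R^2: penalizes R^2 for the number of regressors."""
    n = len(y)
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_params - 1)

# Two-variable regression: a PM2.5 level plus a binary dry-period flag
# (synthetic data for illustration only).
rng = np.random.default_rng(0)
pm25 = rng.uniform(2, 15, 40)
dry_flag = rng.integers(0, 2, 40)
soiling = 0.02 * pm25 + 0.05 * dry_flag + rng.normal(0, 0.01, 40)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones_like(pm25), pm25, dry_flag])
coef, *_ = np.linalg.lstsq(X, soiling, rcond=None)
print(round(adjusted_r2(soiling, X @ coef, n_params=2), 3))
```

Comparing this value against the single-variable fit shows whether the binary dry-period term genuinely improves the model rather than merely adding a parameter.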

  20. The Value of a Well-Being Improvement Strategy

    PubMed Central

    Guo, Xiaobo; Coberley, Carter; Pope, James E.; Wells, Aaron

    2015-01-01

    Objective: The objective of this study is to evaluate effectiveness of a firm's 5-year strategy toward improving well-being while lowering health care costs amidst adoption of a Consumer-Driven Health Plan. Methods: Repeated measures statistical models were employed to test and quantify association between key demographic factors, employment type, year, individual well-being, and outcomes of health care costs, obesity, smoking, absence, and performance. Results: Average individual well-being trended upward by 13.5% over 5 years, monthly allowed amount health care costs declined 5.2% on average per person per year, and obesity and smoking rates declined by 4.8 and 9.7%, respectively, on average each year. The results show that individual well-being was significantly associated with each outcome and in the expected direction. Conclusions: The firm's strategy was successful in driving statistically significant, longitudinal well-being, biometric and productivity improvements, and health care cost reduction. PMID:26461860

  1. Analytical Assessment of the Relationship between 100MWp Large-scale Grid-connected Photovoltaic Plant Performance and Meteorological Parameters

    NASA Astrophysics Data System (ADS)

    Sheng, Jie; Zhu, Qiaoming; Cao, Shijie; You, Yang

    2017-05-01

This paper studies the relationship between the power generation of a large-scale "fishing and PV complementary" grid-tied photovoltaic system and meteorological parameters, using multi-time-scale power data from the photovoltaic power station and meteorological data over the same whole-year period. The results indicate that PV power generation correlates most significantly with global solar irradiation, followed by diurnal temperature range, sunshine hours, daily maximum temperature, and daily average temperature. Among the months, the maximum monthly average power generation appears in August, which is related to the greater global solar irradiation and longer sunshine hours in that month. However, the maximum daily average power generation appears in October, because the drop in temperature improves the efficiency of the PV panels. A comparison of the monthly average performance ratio (PR) with the monthly average temperature shows that the larger values of monthly average PR appear in April and October, while PR is smaller in summer, when temperatures are higher. The results indicate that temperature strongly influences the performance ratio of a large-scale grid-tied PV power system, and that effective measures to lower the operating temperature of the PV plant are important.
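The performance ratio compared across months is conventionally the actual specific yield divided by the reference yield. A hedged sketch of the standard (IEC 61724-style) calculation, with invented example numbers rather than the plant's data:

```python
def performance_ratio(energy_kwh, p_rated_kw, irradiation_kwh_m2, g_stc=1.0):
    """PR = final yield / reference yield:
    Y_f = E / P_rated, Y_r = H_poa / G_STC (G_STC = 1 kW/m^2)."""
    y_f = energy_kwh / p_rated_kw          # kWh per kWp actually delivered
    y_r = irradiation_kwh_m2 / g_stc       # kWh per kWp an ideal plant would see
    return y_f / y_r

# Hypothetical month for a 100 MWp plant: 14.5 GWh generated,
# 180 kWh/m^2 plane-of-array irradiation (illustrative numbers only).
print(round(performance_ratio(14.5e6, 100e3, 180.0), 3))
```

Because irradiation is divided out, a month-to-month drop in PR isolates loss mechanisms such as the temperature effect the abstract describes.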

  2. Study of average valence and valence electron distribution of several oxides using X-ray photoelectron spectra

    NASA Astrophysics Data System (ADS)

    Ding, L. L.; Wu, L. Q.; Ge, X. S.; Du, Y. N.; Qian, J. J.; Tang, G. D.; Zhong, W.

    2018-06-01

    X-ray photoelectron spectra of the O 1s electrons of MnFe2O4, ZnFe2O4, ZnO, and CaO were used to estimate the average valence, ValO, of the oxygen anions in these samples. The absolute values of ValO for these samples were found to be distinctly lower than the traditional value of 2.0, suggesting that the total average valences of the cations are also lower than the conventionally accepted values owing to valence balance in the compounds. In addition, we analyzed the valence band spectra of the samples and investigated the distribution characteristics of the valence electrons.

  3. Kinematic Characteristics of Meteor Showers by Results of the Combined Radio-Television Observations

    NASA Astrophysics Data System (ADS)

    Narziev, Mirhusen

    2016-07-01

One of the most important tasks of meteor astronomy is the study of the distribution of meteoroid matter in the solar system. The most important inputs for addressing this issue are measurements of the velocities, radiants, and orbits of both shower and sporadic meteors. Radiants and orbits of meteors have been measured repeatedly for data sets obtained from photographic, television, electro-optical, video, Fireball Network, and radar observations. However, radiants, velocities, and orbits of shower meteors based on combined radar-optical observations have not been sufficiently studied. In this paper, we present a method for computing the radiants, velocities, and orbits from the combined radar-TV meteor observations carried out at HisAO in 1978-1980. As a result of the two-year cycle of simultaneous TV-radar observations, 57 simultaneous meteors were identified. Analysis of the TV images showed that some meteor trails appeared as dashed lines: among the simultaneous meteors, 10 of the δ-Aquariids and only 7 of the Perseids produced such dashed images. For these fragmented images of simultaneous meteors, a known method - using the measured radar distance, trace length, and time interval between the segments - allowed the meteor velocity to be determined by the combined technique. In addition, the velocities of the same meteors were measured using diffraction and radar range-time methods based on the radar observations. The mean meteoroid velocities based on the combined radar-TV observations were found to be 1-3 km/s greater than the averaged velocities measured using radar methods alone. Orbits of the simultaneously observed meteors with segmented images were calculated on the basis of the average velocity obtained by the combined radar-TV method. 
The measured radiants, velocities, and orbital elements of individual meteors allowed us to calculate average values for the meteor streams. The radiants, velocities, and orbits of the meteor showers obtained by the combined radar-TV observations are compared with data obtained by other authors.

  4. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  5. Is average chain length of plant lipids a potential proxy for vegetation, environment and climate changes?

    NASA Astrophysics Data System (ADS)

    Wang, M.; Zhang, W.; Hou, J.

    2015-04-01

Average chain length (ACL) of leaf wax components preserved in lacustrine sediments and soil profiles has been widely adopted as a proxy indicator for past changes in vegetation, environment and climate during the late Quaternary. The fundamental assumption is that woody plants produce leaf waxes with shorter ACL values than non-woody plants. However, there has been no systematic survey of modern plants to justify this assumption. Here, we investigated various types of plants at two lakes, Blood Pond in the northeastern USA and Lake Ranwu on the southeastern Tibetan Plateau, and found that the ACL values were not significantly different between woody and non-woody plants. We also compiled the ACL values of modern plants in the literature and performed a meta-analysis to determine whether a significant difference exists between woody and non-woody plants at single sites. The results showed that the ACL values of plants at 19 out of 26 sites did not differ significantly between the two major types of plants. This suggests that extreme caution should be taken in using ACL as a proxy for past changes in vegetation, environment and climate.
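The ACL index discussed here is the abundance-weighted mean carbon number of the n-alkane homologues. A small sketch of the calculation, using hypothetical abundances (the values below are illustrative, not measurements from either lake):

```python
def average_chain_length(abundances):
    """ACL = sum(n * C_n) / sum(C_n), where C_n is the abundance of the
    n-alkane with carbon number n (abundances keyed by carbon number)."""
    total = sum(abundances.values())
    return sum(n * c for n, c in abundances.items()) / total

# Hypothetical n-alkane abundances (ug/g dry leaf) for odd carbon
# numbers C25-C33, the homologues typically used for leaf waxes.
leaf_wax = {25: 2.0, 27: 5.0, 29: 8.0, 31: 6.0, 33: 1.5}
print(round(average_chain_length(leaf_wax), 2))  # → 29.0
```

A distribution dominated by C29, as here, yields an ACL near 29; a shift toward longer homologues raises the index, which is the signal the proxy interpretation relies on.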

  6. [Quantitative estimation of evapotranspiration from Tahe forest ecosystem, Northeast China].

    PubMed

    Qu, Di; Fan, Wen-Yi; Yang, Jin-Ming; Wang, Xu-Peng

    2014-06-01

Evapotranspiration (ET) is an important parameter in agricultural, meteorological and hydrological research, and an important part of the global hydrological cycle. This paper applied the improved DHSVM distributed hydrological model to estimate the daily ET of the Tahe area in 2007, using leaf area index and other surface data extracted from TM remote sensing data, and slope, aspect and other topographic indices obtained from the digital elevation model. The relationship between daily ET and daily watershed outlet flow was built by a BP neural network, and a water balance equation was established for the studied watershed; together these were used to test the accuracy of the estimation. The results showed that the model could be applied in the study area. The annual total ET of the Tahe watershed was 234.01 mm. ET had a significant seasonal variation: it was highest in summer, with an average daily value of 1.56 mm; the average daily ET in autumn and spring was 0.30 and 0.29 mm, respectively; and winter had the lowest ET. Land cover type had a great effect on ET, with broadleaf forest showing higher ET than mixed forest, followed by needle-leaf forest.

  7. Impact of complications and bladder cancer stage on quality of life in patients with different types of urinary diversions.

    PubMed

    Prcic, Alden; Aganovic, Damir; Hadziosmanovic, Osman

    2013-12-01

To determine the correlation between complications and disease stage and their impact on quality of life in patients with different types of ileal urinary diversion after radical cystectomy, and, based on the results, to suggest the most acceptable type of urinary diversion. Over a five-year period, a prospective clinical study was performed on 106 patients who underwent radical cystectomy for bladder cancer. Patients were divided into two groups: 66 patients with ileal conduit diversion and 40 patients with orthotopic diversion; within each group, reflux and anti-reflux techniques of the orthotopic bladder were compared. All patients in both groups completed the Sickness Impact Profile (SIP) questionnaire six months after the operation. All patients had CT urography or intravenous urography performed, as well as standard laboratory tests and vitamin B12 blood values, in order to evaluate early complications (ileus or subileus, wound dehiscence, bladder fistula, rupture of the orthotopic bladder, urine extravasation) and late complications (VUR, urethral stricture, ureteral stenosis, metabolic acidosis, mineral imbalance, vitamin B12 hypovitaminosis, increased resorption of bone calcium, urinary infection, kidney damage, relapse of the primary disease), as well as disease stage and its impact on quality of life. The results show that each category of the SIP score correlates, with differing strength, with the type of operation, group, and T, N, and R grade, except for the work category. The average SIP score rises with the type of operation and T stage, although there is no difference in T1 stage regardless of the type of operation: the average SIP score in T1 stage was 20.3 for the conduit, 17.25 for Abol-Enein and Ghoneim, and 18.75 for Hautmann. The average SIP score in T2 stage was 31 for the conduit, 19.1 for Abol-Enein and Ghoneim, and 17.8 for Hautmann. 
The average SIP score in T3 stage was 38.03 for the conduit, 18.75 for Abol-Enein and Ghoneim, and 19.5 for Hautmann. A T4 stage was present only in patients with a conduit, with an average SIP score of 40.42. Late complications correlate strongly with the psychosocial and physical dimensions and their parameters, while for the independent dimension the correlation is not significant. Early complications correlate insignificantly in all categories of the SIP score. Upon analyzing quality of life and morbidity, a significant advantage is given to orthotopic diversions, especially the Hautmann diversion with the Chimney modification, unless there are absolute contraindications for this type of operation. The factors that most influence quality of life are cancer stage, type of diversion, late complications, and patient age. The SIP score, as a well-validated questionnaire, is applicable in this kind of research.

  8. Langmuir probe measurements in a time-fluctuating-highly ionized non-equilibrium cutting arc: analysis of the electron retarding part of the time-averaged current-voltage characteristic of the probe.

    PubMed

    Prevosto, L; Kelly, H; Mancinelli, B

    2013-12-01

This work describes the application of Langmuir probe diagnostics to the measurement of the electron temperature in a time-fluctuating, highly ionized, non-equilibrium cutting arc. The electron retarding part of the time-averaged current-voltage characteristic of the probe was analysed, assuming that the standard exponential expression describing the electron current to the probe in collision-free plasmas can be applied under the investigated conditions. A procedure is described which allows the determination of the errors introduced in time-averaged probe data by small-amplitude plasma fluctuations. It was found that the experimental points can be gathered into two well-defined groups, defining two quite different averaged electron temperature values. In the low-current region the averaged characteristic was not significantly disturbed by the fluctuations and can reliably be used to obtain the actual value of the averaged electron temperature. In particular, an averaged electron temperature of 0.98 ± 0.07 eV (= 11400 ± 800 K) was found for the central core of the arc (30 A) at 3.5 mm downstream from the nozzle exit. This average included not only a time-average over the fluctuations but also a spatial average along the probe collecting length. Fitting the high-current region of the characteristic using this electron temperature value, together with the corrections given by the fluctuation analysis, showed a relevant departure from local thermal equilibrium in the arc core.
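The standard exponential expression referred to above implies that, in the retarding region, ln(I_e) is linear in the probe voltage with slope 1/T_e (T_e in eV). A sketch of recovering T_e from a synthetic, noise-free characteristic (the numbers are illustrative, chosen to echo the 0.98 eV result; real probe data would carry the fluctuation errors the paper analyzes):

```python
import numpy as np

TE_EV = 0.98  # electron temperature used to generate the synthetic data (eV)

# Synthetic retarding-region characteristic: I_e = I0 * exp(V / Te),
# with V measured relative to the plasma potential (V <= 0).
V = np.linspace(-5.0, -0.5, 20)
I0 = 1.0
I_e = I0 * np.exp(V / TE_EV)

# Te (in eV) is the reciprocal of the slope of ln(I_e) versus V.
slope, _ = np.polyfit(V, np.log(I_e), 1)
Te_est = 1.0 / slope
print(round(Te_est, 2))  # recovers 0.98 for noise-free data
```

With real, fluctuating data the same fit is applied to the time-averaged characteristic, which is why the paper must quantify how the fluctuations bias the inferred slope.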

  9. Method for detection and correction of errors in speech pitch period estimates

    NASA Technical Reports Server (NTRS)

    Bhaskar, Udaya (Inventor)

    1989-01-01

A method of detecting and correcting received values of a pitch period estimate of a speech signal, for use in a speech coder or the like. An average is calculated of the nonzero values of the pitch period estimates received since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct; if not, a correction process is carried out. If correction is required successively more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average is calculated.
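The detect-and-correct logic described above can be sketched as a small state machine. The specific corrective action (substituting the running average) and the reset threshold of three consecutive corrections are assumptions for illustration, not details taken from the patent:

```python
class PitchTracker:
    """Sketch of the detect-and-correct scheme: keep a running average of
    accepted nonzero pitch estimates; accept estimates within 0.75-1.25x
    the average, otherwise correct; reset after too many corrections."""

    def __init__(self, max_corrections=3):
        self.max_corrections = max_corrections
        self.reset()

    def reset(self):
        self.total = 0.0        # sum of accepted nonzero estimates
        self.count = 0          # number of accepted nonzero estimates
        self.consecutive = 0    # corrections in a row

    def update(self, estimate):
        if estimate == 0:       # unvoiced frame: pass through untouched
            return estimate
        avg = estimate if self.count == 0 else self.total / self.count
        if 0.75 * avg <= estimate <= 1.25 * avg:
            accepted = estimate
            self.consecutive = 0
        else:
            self.consecutive += 1
            if self.consecutive > self.max_corrections:
                self.reset()    # likely a new speaker: start a fresh average
                accepted = estimate
            else:
                accepted = avg  # assumed correction: fall back on the average
        self.total += accepted
        self.count += 1
        return accepted
```

For example, after accepting estimates of 100 and 102 samples, an outlier of 300 falls outside 0.75-1.25 times the average (101) and is replaced by 101; only a sustained run of out-of-range values triggers a reset.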

  10. Impact of the ozone monitoring instrument row anomaly on the long-term record of aerosol products

    NASA Astrophysics Data System (ADS)

    Torres, Omar; Bhartia, Pawan K.; Jethva, Hiren; Ahn, Changwoo

    2018-05-01

Since about three years after the launch of the Ozone Monitoring Instrument (OMI) on the EOS-Aura satellite, the sensor's viewing capability has been affected by what is believed to be an internal obstruction that has reduced OMI's spatial coverage. It currently affects about half of the instrument's 60 viewing positions. In this work we carry out an analysis to assess the effect of the reduced spatial coverage on the monthly average values of retrieved aerosol optical depth (AOD), single scattering albedo (SSA) and the UV Aerosol Index (UVAI) using the 2005-2007 three-year period prior to the onset of the row anomaly. Regional monthly average values calculated using viewing positions 1 through 30 were compared to similarly obtained values using positions 31 through 60, with the expectation of finding close agreement between the two calculations. As expected, mean monthly values of AOD and SSA obtained with these two scattering-angle dependent subsets of OMI observations agreed over regions where carbonaceous or sulphate aerosol particles are the predominant aerosol type. However, over arid regions, where desert dust is the main aerosol type, significant differences between the two sets of calculated regional mean values of AOD were observed. As it turned out, the difference in retrieved desert dust AOD between the scattering-angle dependent observation subsets was due to the incorrect representation of the desert dust scattering phase function. A sensitivity analysis using radiative transfer calculations demonstrated that the source of the observed AOD bias was the spherical shape assumption for desert dust particles. A similar analysis in terms of UVAI yielded large differences in the monthly mean values for the two sets of calculations over cloudy regions. On the contrary, in arid regions with minimum cloud presence, the resulting UVAI monthly average values for the two sets of observations were in very close agreement. 
The discrepancy under cloudy conditions was found to be caused by the parameterization of clouds as opaque Lambertian reflectors. When properly accounting for cloud scattering effects using Mie theory, the observed UVAI angular bias was significantly reduced. The analysis discussed here has uncovered important algorithmic deficiencies associated with the model representation of the angular dependence of scattering effects of desert dust aerosols and cloud droplets. The resulting improvements in the handling of desert dust and cloud scattering have been incorporated in an improved version of the OMAERUV algorithm.

  11. How-to-Do-It: Further Improvements to the Steucek & Hill Assay of Photosynthesis.

    ERIC Educational Resources Information Center

    Juliao, Fernando; Butcher, Henry C., IV

    1989-01-01

Several modifications that improve upon the assay of photosynthesis are suggested. Described are the apparatus, materials, light intensity and photosynthesis measurements, and results. A table of the average light intensity values versus the screen number and a sketch of the experimental set-up are included. (RT)

  12. Mid-1974 Population Estimates for Nonmetropolitan Communities in Arizona.

    ERIC Educational Resources Information Center

    Scott, Harold; Williams, Valerie C.

    Rural Arizona population estimates were determined for 67 communities by computing a ratio of 1970 population to a 1970 population indicator and then multiplying the resultant persons per indicator times the 1974 value of the specific indicator. The indicators employed were: average daily elementary school enrollment (Arizona Department of…

  13. The Spots and Activity of Stars in the Beehive Cluster Observed by the Kepler Space Telescope (K2)

    NASA Astrophysics Data System (ADS)

    Savanov, I. S.; Kalinicheva, E. S.; Dmitrienko, E. S.

    2018-05-01

The spottedness parameters S (the fraction of the visible surface of the star occupied by spots) characterizing the activity of 674 stars in the Beehive Cluster (age 650 Myr) are estimated, together with variations of this parameter as a function of the rotation period, Rossby number Ro, and other characteristics of the stars. The activity of the stars in this cluster is lower than the activity of stars in the younger Pleiades (125 Myr). The average S value for the Beehive Cluster stars is 0.014, while Pleiades stars have the much higher average value 0.052. The activity parameters of 61 solar-type stars in the Beehive Cluster, similar Hyades stars (of about the same age), and stars in the younger Pleiades are compared. The average S value of such objects in the Beehive Cluster is 0.014 ± 0.008, nearly coincident with the estimate obtained for solar-type Hyades stars. The rotation periods of these objects are 9.1 ± 3.4 d, on average, in agreement with the average rotation period of the Hyades stars (8.6 d). Stars with periods exceeding 3-4 d are more numerous in the Beehive Cluster than in the Pleiades, and their periods have a larger range, 3-30 d. The characteristic dependence with a kink at Ro(saturation) = 0.13 is not observed in the S-Rossby number diagram for the Beehive and Hyades stars; only a clump of objects with Rossby numbers Ro > 0.7 is seen. The spottedness data for the Beehive Cluster and Hyades stars are in good agreement with the S values for dwarfs with ages of 600-700 Myr, providing evidence for the reliability of the gyrochronological calibrations. The data for the Beehive and Pleiades stars are used to analyze variations in spot-forming activity for a large number of stars of the same age that are members of a single cluster. A joint consideration of the data for the two clusters can be used to draw conclusions about the time evolution of the activity of stars of different masses (over a time interval of the order of 500 Myr).

  14. Indirect and direct methods for measuring a dynamic throat diameter in a solid rocket motor

    NASA Astrophysics Data System (ADS)

    Colbaugh, Lauren

    In a solid rocket motor, nozzle throat erosion is dictated by propellant composition, throat material properties, and operating conditions. Throat erosion has a significant effect on motor performance, so it must be accurately characterized to produce a good motor design. In order to correlate throat erosion rate to other parameters, it is first necessary to know what the throat diameter is throughout a motor burn. Thus, an indirect method and a direct method for determining throat diameter in a solid rocket motor are investigated in this thesis. The indirect method looks at the use of pressure and thrust data to solve for throat diameter as a function of time. The indirect method's proof of concept was shown by the good agreement between the ballistics model and the test data from a static motor firing. The ballistics model was within 10% of all measured and calculated performance parameters (e.g. average pressure, specific impulse, maximum thrust, etc.) for tests with throat erosion and within 6% of all measured and calculated performance parameters for tests without throat erosion. The direct method involves the use of x-rays to directly observe a simulated nozzle throat erode in a dynamic environment; this is achieved with a dynamic calibration standard. An image processing algorithm is developed for extracting the diameter dimensions from the x-ray intensity digital images. Static and dynamic tests were conducted. The measured diameter was compared to the known diameter in the calibration standard. All dynamic test results were within +6% / -7% of the actual diameter. Part of the edge detection method consists of dividing the entire x-ray image by an average pixel value, calculated from a set of pixels in the x-ray image. It was found that the accuracy of the edge detection method depends upon the selection of the average pixel value area and subsequently the average pixel value. An average pixel value sensitivity analysis is presented. 
Both the indirect method and the direct method prove to be viable approaches to determining throat diameter during solid rocket motor operation.
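The edge-detection step described above divides the whole x-ray image by an average pixel value computed from a chosen set of pixels, and the result is sensitive to where that set is taken. A toy illustration of that sensitivity (the image values and regions below are invented, not data from the thesis):

```python
import numpy as np

def normalize_by_region(image, region):
    """Divide the entire image by the mean pixel value of a chosen region.
    `region` is a (row_slice, col_slice) pair; the normalized scale, and
    hence any threshold-based edge location, depends on this choice."""
    avg = image[region].mean()
    return image / avg

# Toy 'radiograph': bright background with a darker simulated throat.
img = np.full((8, 8), 200.0)
img[:, 3:5] = 80.0  # throat columns

norm_a = normalize_by_region(img, (slice(0, 8), slice(0, 2)))   # background patch
norm_b = normalize_by_region(img, (slice(0, 8), slice(None)))   # whole image

# The same physical edge lands at different normalized intensities:
print(norm_a[0, 3], norm_b[0, 3])
```

Normalizing by a pure-background patch maps the throat to 80/200 = 0.4, while averaging over the whole image (which includes the dark throat) shifts the normalized value upward, which is the sensitivity the thesis's average-pixel-value analysis quantifies.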

  15. Exclusion Area Radiation Release during the MIT Reactor Design Basis Accident.

    DTIC Science & Technology

    1983-05-06

[Only extraction fragments of this report survive: table-of-contents entries (6.1 Concrete Wall; 6.2 Concrete Albedo Dose; 6.3 Steel Door Scattering Dose; 7.1 Total Dose Results; A.1 Values of N/N0 for Neutron Capture) and parts of the core description, which note that plate fuel elements are arranged in a compact hexagonal core, a design that maximizes the neutron flux in the D2O reflector region, and which define V_f = volume of the fuel (cm^3), Σ_f = macroscopic fission cross section (cm^-1), and φ = thermal neutron flux (neutrons/cm^2·s), with overbars denoting core-averaged values.]

  16. M shell X-ray production cross sections and fluorescence yields for the elements with 71 <= Z <= 92 using 5.96 keV photons

    NASA Astrophysics Data System (ADS)

    Puri, S.; Mehta, D.; Chand, B.; Singh, Nirmal; Mangal, P. C.; Trehan, P. N.

    1993-03-01

Total M X-ray production (XRP) cross sections for ten elements in the atomic number region 71 ≤ Z ≤ 92 were measured at 5.96 keV incident photon energy. The average M shell fluorescence yields ⟨ω̄M⟩ have also been computed using the present measured cross section values and the theoretical M shell photoionisation cross sections. The results are compared with theoretical values.

  17. 49 CFR 537.9 - Determination of fuel economy values and average fuel economy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Determination of fuel economy values and average fuel economy. 537.9 Section 537.9 Transportation Other Regulations Relating to Transportation (Continued) NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AUTOMOTIVE FUEL ECONOMY REPORTS § 537.9 Determination of fuel...

  18. Comparing the Value of Nonprofit Hospitals’ Tax Exemption to Their Community Benefits

    PubMed Central

    Herring, Bradley; Gaskin, Darrell; Zare, Hossein; Anderson, Gerard

    2018-01-01

The tax-exempt status of nonprofit hospitals has received increased attention from policymakers interested in examining the value they provide instead of paying taxes. We use 2012 data from the Internal Revenue Service (IRS) Form 990, Centers for Medicare and Medicaid Services (CMS) Hospital Cost Reports, and the American Hospital Association's (AHA) Annual Survey to compare the value of community benefits with the tax exemption. We contrast nonprofits' total community benefits with what for-profits provide and distinguish between charity and other community benefits. We find that the value of the tax exemption averages 5.9% of total expenses, while total community benefits average 7.6% of expenses; incremental nonprofit community benefits beyond those provided by for-profits average 5.7% of expenses, and incremental charity alone averages 1.7% of expenses. The incremental community benefit exceeds the tax exemption for only 62% of nonprofits. Policymakers should be aware that the tax exemption is a rather blunt instrument, with many nonprofits benefiting greatly from it while providing relatively few community benefits. PMID:29436247

  19. Geomagnetism during solar cycle 23: Characteristics.

    PubMed

    Zerbo, Jean-Louis; Amory-Mazaudier, Christine; Ouattara, Frédéric

    2013-05-01

On the basis of more than 48 years of morphological analysis of yearly and monthly values of the sunspot number, the aa index, the solar wind speed, and the interplanetary magnetic field, we point out the particularities of geomagnetic activity during the period 1996-2009. We especially investigate the last cycle 23 and the long minimum that followed it. During this period, the lowest yearly averaged IMF (3 nT) and the lowest yearly averaged solar wind speed (364 km/s) were recorded in 1996 and 2009, respectively. The year 2003 stands out, recording the highest yearly averaged solar wind speed (568 km/s), associated with the highest yearly averaged aa index (37 nT). We also find that the observations during 2003 appear to be related to several coronal holes, which are known to generate high-speed wind streams. A study of solar variability over more than a century shows that the present period is similar to the beginning of the twentieth century. We especially present the morphological features of solar cycle 23, which was followed by a deep solar minimum.

  20. Evaluation of the Orssengo-Pye IOP corrective algorithm in LASIK patients with thick corneas.

    PubMed

    Kirstein, Elliot M; Hüsler, André

    2005-09-01

The objective of this study was to evaluate the Orssengo-Pye central corneal thickness (CCT) Goldmann applanation tonometry (GAT) corrective algorithm by observing changes in GAT and CCT before and after laser in situ keratomileusis (LASIK) surgery in patients whose CCT remains greater than 545 microm postoperatively. Tonometric and pachymetric measurements were made on 14 patients (28 eyes) before and after LASIK surgery. The selected patients were required to have average or above-average postoperative central corneal thickness values in both eyes (not less than 545 microm). Preoperatively, all patients had CCT and GAT measurements taken. Postoperatively, patients had CCT, GAT, and dynamic contour tonometric (DCT) measurements taken. Preoperatively, median CCT values were 589.536 microm, median GAT values were 16.750 mmHg, and median corrected GAT values were 14.450 mmHg. After LASIK treatment, median CCT values were 559.417 microm (decrease, 30.119 microm). Median postoperative GAT values were 11.500 mmHg (decrease, 5.250 mmHg). Median corrected postoperative GAT values were 10.775 mmHg (decrease, 3.675 mmHg). Median postoperative DCT values were 17.858 mmHg. LASIK treatment causes a significant reduction in measured GAT intraocular pressure (IOP) values. The Orssengo-Pye formula, which attempts to correct for GAT error associated with individual variation in CCT, appears to yield misleading results in these circumstances. The unexpected 3.675-mmHg decrease in "corrected IOP" by the Orssengo-Pye method seen in this study may be attributed to some limitation or error in the formula. After adjusting for the approximately 1.7-mmHg difference that has been demonstrated between DCT and GAT, postoperative DCT values were similar to preoperative measured GAT values.

  1. Calculation of weighted averages approach for the estimation of ping tolerance values

    USGS Publications Warehouse

    Silalom, S.; Carter, J.L.; Chantaramongkol, P.

    2010-01-01

    A biotic index was created and proposed as a tool to assess water quality in the Upper Mae Ping sub-watersheds. The Ping biotic index was calculated by utilizing Ping tolerance values. This paper presents the calculation of Ping tolerance values of the collected macroinvertebrates. Ping tolerance values were estimated by a weighted averages approach based on the abundance of macroinvertebrates and six chemical constituents that include conductivity, dissolved oxygen, biochemical oxygen demand, ammonia nitrogen, nitrate nitrogen and orthophosphate. Ping tolerance values range from 0 to 10. Macroinvertebrates assigned a 0 are very sensitive to organic pollution while macroinvertebrates assigned 10 are highly tolerant to pollution.
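    The weighted-averages calculation the abstract describes can be sketched as an abundance-weighted mean of a site-level pollution score. A minimal illustration, assuming a single combined 0-10 gradient score per site (the paper used six separate chemical constituents); the taxa and abundance numbers are made up:

```python
# Hypothetical sketch of the weighted-averages approach: a taxon's
# tolerance value is the abundance-weighted mean of a site pollution
# score on the 0-10 scale. Data below are illustrative, not the study's.

def tolerance_value(abundances, site_scores):
    """Abundance-weighted average of site pollution scores (0-10)."""
    total = sum(abundances)
    if total == 0:
        raise ValueError("taxon never observed")
    return sum(a * s for a, s in zip(abundances, site_scores)) / total

sites = [1.0, 2.0, 8.0, 9.0]   # pollution score of each site (0 = clean)
sensitive = [40, 30, 2, 1]     # abundances of a pollution-sensitive taxon
tolerant = [1, 2, 30, 40]      # abundances of a pollution-tolerant taxon
print(round(tolerance_value(sensitive, sites), 2))
print(round(tolerance_value(tolerant, sites), 2))
```

    A taxon found mostly at clean sites ends up near 0 and a taxon found mostly at polluted sites near 10, matching the interpretation of Ping tolerance values given above.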

  2. Business Profile of Boat Lift Net and Stationary Lift Net Fishing Gear in Morodemak Waters Central Java

    NASA Astrophysics Data System (ADS)

    Hapsari, Trisnani D.; Jayanto, Bogi B.; Fitri, Aristi D. P.; Triarso, I.

    2018-02-01

    The lift net is one of the fishing gears most widely used at the Morodemak coastal fishing port (PPP) for catching pelagic fish. The catch from these gears has high economic value and includes beltfish (Trichiurus sp), squid (Loligo sp) and anchovy (Stolephorus sp). The aims of this research were to determine the technical aspects of boat lift net and stationary lift net fishing gear in Morodemak Waters, Demak Regency; to describe the financial aspects of those fishing gears; and to analyze their financial feasibility using the PP, NPV, IRR, and B/C ratio criteria. The research used a case study method with descriptive analysis. The sampling method was purposive sampling with 22 fishermen as respondents. The results showed that the average revenue of a boat lift net was Rp 388,580,000; the financial analysis of the boat lift net fishery gave an NPV of Rp 836,149,272, a PP of 2.44 years, an IRR of 54%, and a B/C ratio of 1.73. The average revenue of a stationary lift net was Rp 27,750,000; the corresponding financial analysis gave an NPV of Rp 37,937,601, a PP of 1.96 years, an IRR of 86%, and a B/C ratio of 1.32. Both gears showed a positive NPV, a B/C ratio > 1, and an IRR above the discount rate (12%). This study concluded that the boat lift net and stationary lift net fishery businesses at the Morodemak coastal fishing port (PPP) are financially feasible.
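    The feasibility criteria used in the abstract (NPV > 0, B/C > 1, IRR above the 12% discount rate, payback period) reduce to a few lines. A minimal sketch with made-up cash flows, not the study's data:

```python
# Illustrative sketch of the feasibility criteria reported above.
# Investment and cash-flow numbers are hypothetical.

def npv(rate, initial_cost, cash_flows):
    """Net present value of annual cash flows against an up-front cost."""
    return -initial_cost + sum(cf / (1 + rate) ** t
                               for t, cf in enumerate(cash_flows, start=1))

def payback_period(initial_cost, annual_cash_flow):
    """Simple (undiscounted) payback period in years."""
    return initial_cost / annual_cash_flow

r = 0.12                        # discount rate used in the study
investment = 100_000_000        # hypothetical initial investment (Rp)
flows = [45_000_000] * 5        # hypothetical net annual cash flows (Rp)
print(npv(r, investment, flows) > 0)            # feasible if True
print(round(payback_period(investment, flows[0]), 2))
```

    The IRR criterion is equivalent to asking for which rate the NPV crosses zero; a business passes when that rate exceeds the 12% discount rate.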

  3. Evaluation of annual effective dose from indoor radon concentration in Eastern Province, Dammam, Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Abuelhia, E.

    2017-11-01

    The aim of this study was to determine indoor radon concentrations and to evaluate the annual effective dose received by inhabitants of Dammam and Al-Khobar, and to compare them with new premises built at the University of Dammam. The measurements were carried out using an active detection method: the RAD-7 electronic radon detector, a solid-state α-detector, with its special accessories. The measured indoor radon concentration varied from 10.2 Bq m-3 to 25.8 Bq m-3 with an average value of 18.8 Bq m-3 in Dammam dwellings, and from 19.7 Bq m-3 to 23.5 Bq m-3 with an average value of 21.7 Bq m-3 in Al-Khobar dwellings. At the University of Dammam, the radon concentration varied from 7.4 Bq m-3 to 15.8 Bq m-3 with an average value of 9.02 Bq m-3. The annual effective doses were found to be 0.47 mSv/y, 0.55 mSv/y, and 0.23 mSv/y in Dammam, Al-Khobar and the university's new premises, respectively. The average radon concentration in the old dwellings was about twice that in the new premises, and was 25.4 Bq m-3 lower than the world average value of 40 Bq m-3 reported by UNSCEAR. The annual effective dose in the old dwellings (0.55 mSv/y) was twice the dose received at the new premises, and below the worldwide average of 1.15 mSv/y reported by ICRP (2010). As far as health hazard is concerned, the indoor radon concentration in the study region is safe.
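    The reported doses are consistent with the standard UNSCEAR conversion E = C × F × O × DCF, with equilibrium factor F = 0.4, indoor occupancy O = 7000 h/y, and dose conversion factor DCF = 9 nSv per (Bq h m-3). These parameter values are the usual UNSCEAR defaults, assumed here rather than quoted from the paper:

```python
# Annual effective dose from mean indoor radon concentration, using the
# standard UNSCEAR conversion. F, occupancy and DCF are the customary
# UNSCEAR defaults (assumed, not taken from the paper).

def annual_effective_dose(radon_bq_m3, F=0.4, occupancy_h=7000, dcf_nsv=9):
    """Annual effective dose in mSv/y for a mean radon level in Bq/m^3."""
    return radon_bq_m3 * F * occupancy_h * dcf_nsv * 1e-6  # nSv -> mSv

for c in (18.8, 21.7, 9.02):   # Dammam, Al-Khobar, new-premises averages
    print(round(annual_effective_dose(c), 2))
```

    With these defaults the three average concentrations reproduce the reported 0.47, 0.55 and 0.23 mSv/y.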

  4. [Influence of trabecular microstructure modeling on finite element analysis of dental implant].

    PubMed

    Shen, M J; Wang, G G; Zhu, X H; Ding, X

    2016-09-01

    To analyze the influence of trabecular microstructure modeling on the biomechanical distribution at the implant-bone interface using a three-dimensional finite element mandible model with trabecular structure. Dental implants were embedded in the mandible of a beagle dog. Three months after implant installation, the mandibles with dental implants were harvested and scanned by micro-CT and cone-beam CT. Two three-dimensional finite element mandible models were built: one with trabecular microstructure (precise model) and one with macrostructure only (simplified model). The stress and strain values at the implant-bone interface were calculated with Ansys 14.0. Compared with the simplified model, the average interface stress values of the precise models increased markedly while the maximum values changed little: the maximum equivalent stress values of the precise models were 80% and 110% of the simplified model's, whereas the average values were 170% and 290%. The maximum and average equivalent strain values of the precise models were markedly lower: the maximum equivalent strain values were 17% and 26% of the simplified model's, and the average values were 21% and 16%, respectively. Stress and strain concentrations at the implant-bone interface were pronounced in the simplified model, whereas the distributions of stress and strain were uniform in the precise model. Trabecular microstructure modeling thus has a significant effect on the computed distribution of stress and strain at the implant-bone interface.

  5. [APHAB scores for individual assessment of the benefit of hearing aid fitting].

    PubMed

    Löhler, J; Wollenberg, B; Schönweiler, R

    2017-11-01

    The Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire measures subjective hearing impairment on four subscales pertaining to different listening situations. Using a very large patient cohort, this study aims to show how answers are distributed within the four subscales before and after hearing aid fitting, and what benefit the patients experience. The results are discussed on the basis of the available literature. Between April 2013 and March 2016, 35,000 APHAB questionnaires from nine German statutory health insurance providers were evaluated. The average values before and after hearing aid fitting, as well as the benefit, were determined for all four APHAB subscales and analyzed graphically. The results of the subjective evaluation of hearing impairment before and after hearing aid fitting, and the resultant benefit, were plotted as percentile distribution graphs and boxplots, and the data were analyzed statistically. There was no overlap of the interquartile ranges before and after hearing aid fitting in any of the APHAB subscales. In three subscales (EC, BN and RV), the median improvement after hearing aid fitting was nearly 30 percentage points; in the AV subscale, this value was slightly negative. The percentile distribution graphs used in this study allow individual evaluation of subjective hearing impairment before and after hearing aid fitting, as well as of the resultant benefit, against the background of a very large database. Additionally, it is demonstrated why presentation as boxplots, and the average benefit values calculated from them, is problematic.

  6. Accuracies of genomic breeding values in American Angus beef cattle using K-means clustering for cross-validation

    PubMed Central

    2011-01-01

    Background Genomic selection is a recently developed technology that is beginning to revolutionize animal breeding. The objective of this study was to estimate marker effects to derive prediction equations for direct genomic values for 16 routinely recorded traits of American Angus beef cattle and to quantify the corresponding accuracies of prediction. Methods Deregressed estimated breeding values were used as observations in a weighted analysis to derive direct genomic values for 3570 sires genotyped using the Illumina BovineSNP50 BeadChip. These bulls were clustered into five groups using K-means clustering on pedigree estimates of additive genetic relationships between animals, with the aim of increasing within-group and decreasing between-group relationships. All five combinations of four groups were used for model training, with cross-validation performed in the group not used in training. Bivariate animal models were used for each trait to estimate the genetic correlation between deregressed estimated breeding values and direct genomic values. Results Accuracies of direct genomic values ranged from 0.22 to 0.69 for the studied traits, with an average of 0.44. Predictions were more accurate when animals in the validation group were more closely related to animals in the training set. When training and validation sets were formed by random allocation, the accuracies of direct genomic values ranged from 0.38 to 0.85, with an average of 0.65, reflecting the greater relationship between animals in training and validation. The accuracies of direct genomic values obtained from training on older animals and validating on younger animals were intermediate to the accuracies obtained from K-means clustering and random clustering for most traits. The genetic correlation between deregressed estimated breeding values and direct genomic values ranged from 0.15 to 0.80 for the traits studied. Conclusions These results suggest that genomic estimates of genetic merit can be produced in beef cattle at a young age, but the recurrent inclusion of genotyped sires in retraining analyses will be necessary to routinely produce the most accurate direct genomic values for the industry. PMID:22122853
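    The K-means grouping step can be sketched in a few lines. Everything below is a toy stand-in, not the study's data: the relationship matrix, the family structure, and the deterministic seeding rule are illustrative assumptions (the study clustered 3570 sires on pedigree-derived additive relationships):

```python
# Toy sketch of K-means cross-validation grouping: cluster animals on the
# rows of an additive-relationship matrix so that relatives land in the
# same fold, then train on all folds but one and validate on the rest.

def kmeans(X, k, iters=20):
    # seed one center per expected block (deterministic, for illustration)
    centers = [list(X[i]) for i in range(0, len(X), len(X) // k)][:k]
    for _ in range(iters):
        labels = []
        for row in X:
            d = [sum((a - b) ** 2 for a, b in zip(row, c)) for c in centers]
            labels.append(d.index(min(d)))
        for j in range(k):
            members = [X[i] for i, lab in enumerate(labels) if lab == j]
            if members:
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Three "families" of 10 animals: related (0.5) within a family, 0 across.
n, fam = 30, 10
A = [[1.0 if i == j else (0.5 if i // fam == j // fam else 0.0)
      for j in range(n)] for i in range(n)]

folds = kmeans(A, 3)
for k in range(3):   # each fold in turn becomes the validation set
    train = [i for i in range(n) if folds[i] != k]
    valid = [i for i in range(n) if folds[i] == k]
    print(k, len(train), len(valid))
```

    Because relatives share similar rows of the relationship matrix, whole families fall into the same fold, which is what drives down the cross-validation accuracies relative to random allocation.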

  7. 'Sportmotorische Bestandesaufnahme': criterion- vs. norm-based reference values of fitness tests for Swiss first grade children.

    PubMed

    Tomatis, Laura; Krebs, Andreas; Siegenthaler, Jessica; Murer, Kurt; de Bruin, Eling D

    2015-01-01

    Health is closely linked to physical activity and fitness. It is therefore important to monitor fitness in children. Although many reports on physical tests have been published, data comparison between studies is an issue. This study reports Swiss first grade norm values of fitness tests and compares these with criterion reference data. A total of 10,565 boys (7.18 ± 0.42 years) and 10,204 girls (7.14 ± 0.41 years) were tested for standing long jump, plate tapping, 20-m shuttle run, lateral jump and 20-m sprint. Average values for six-, seven- and eight-year-olds were analysed and reference curves for age were constructed. Z-values were generated for comparisons with criterion references reported in the literature. Results were better for all disciplines in seven-year-old first grade children compared to six-year-old children (p < 0.01). Eight-year-old children did not perform better compared to seven-year-old children in the sprint run (p = 0.11), standing long jump (p > 0.99) and shuttle run (p = 0.43), whereas they were better in all other disciplines compared to their younger peers. The average performance of boys was better than girls except for tapping at the age of 8 (p = 0.06). Differences in performance due to testing protocol and setting must be considered when test values from a first grade setting are compared to criterion-based benchmarks. In a classroom setting, younger children tended to have better results and older children tended to have worse outcomes when compared to their age group criterion reference values. Norm reference data are valid allowing comparison with other data generated by similar test protocols applied in a classroom setting.

  8. Invasive blood pressure recording comparing nursing charts with an electronic monitor: a technical report.

    PubMed

    Wong, Benjamin T; Glassford, Neil J; Bion, Victoria; Chai, Syn Y; Bellomo, Rinaldo

    2014-03-01

    Blood pressure management (assessed using nursing charts) in the early phase of septic shock may have an effect on renal outcomes. Assessment of mean arterial pressure (MAP) values as recorded on nursing charts may be inaccurate. To determine the difference between hourly blood pressure values as recorded on the nursing charts and hourly average blood pressure values over the corresponding period obtained electronically from the bedside monitor. We studied 20 patients with shock requiring vasopressor support and invasive blood pressure monitoring. Hourly blood pressure measurements were recorded on the nursing charts over a 12-hour period. Blood pressure values recorded every 10 minutes were downloaded from electronic patient monitors over the corresponding period. The hourly average of the 10-minute blood pressure values was compared with the measurements recorded on the nursing charts. We assessed 240 chart readings and 1440 electronic recordings. Average chart MAP was 72.54 mmHg and average electronic monitor MAP was 71.54 mmHg. MAP data from the two sources showed a strong correlation (ρ = 0.71, P < 0.005). Bland-Altman assessment revealed acceptable agreement, with a mean bias of 1 mmHg and 95% limits of agreement of -11.76 mmHg and 13.76 mmHg. Using average data over 6 hours, the 95% limits of agreement narrowed to -6.79 mmHg and 8.79 mmHg. With multiple measurements over time, mean blood pressure as recorded on nursing charts reasonably approximates mean blood pressure recorded on the monitor.
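    The Bland-Altman agreement statistics used above (mean bias and 95% limits of agreement) reduce to a few lines. The paired readings below are illustrative, not the study's 240 chart readings:

```python
# Sketch of the Bland-Altman calculation: bias is the mean of the paired
# differences, and the 95% limits of agreement are bias +/- 1.96 SD.
# The MAP pairs below are made-up illustrative values.
from statistics import mean, stdev

def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    s = stdev(diffs)
    return bias, bias - 1.96 * s, bias + 1.96 * s  # bias, lower/upper LoA

chart   = [70, 75, 68, 80, 72, 77, 65, 74]   # hourly chart MAP (mmHg)
monitor = [69, 73, 70, 78, 71, 78, 64, 72]   # hourly monitor average (mmHg)
bias, lo, hi = bland_altman(chart, monitor)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

    Averaging over longer windows shrinks the SD of the differences, which is why the study's 6-hour limits of agreement are narrower than the hourly ones.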

  9. Determination of representative dimension parameter values of Korean knee joints for knee joint implant design.

    PubMed

    Kwak, Dai Soon; Tao, Quang Bang; Todo, Mitsugu; Jeon, Insu

    2012-05-01

    Knee joint implants developed by western companies have been imported to Korea and used for Korean patients. However, many clinical problems occur in knee joints of Korean patients after total knee joint replacement owing to the geometric mismatch between the western implants and Korean knee joint structures. To solve these problems, a method to determine the representative dimension parameter values of Korean knee joints is introduced to aid in the design of knee joint implants appropriate for Korean patients. Measurements of the dimension parameters of 88 male Korean knee joint subjects were carried out. The distribution of the subjects versus each measured parameter value was investigated. The measured dimension parameter values of each parameter were grouped by suitable intervals called the "size group," and average values of the size groups were calculated. The knee joint subjects were grouped as the "patient group" based on "size group numbers" of each parameter. From the iterative calculations to decrease the errors between the average dimension parameter values of each "patient group" and the dimension parameter values of the subjects, the average dimension parameter values that give less than the error criterion were determined to be the representative dimension parameter values for designing knee joint implants for Korean patients.

  10. [The assessment of the general functional status and of the psychosomatic complaints of workers at hydroelectric power plants].

    PubMed

    Danev, S; Dapov, E; Pavlov, E; Nikolova, R

    1992-01-01

    The general functional status and psychosomatic complaints of 61 workers from hydroelectric power stations (HPS) were evaluated. The following methods were used: 1. Assessment of the general functional state by computer analysis of cardiac variability, examining changes in the values of the following indices: average value of the cardiac intervals (X), their standard deviation (SD), coefficient of variation (CV), amplitude of the mode (AMO), index of stress (IS), index of the vegetative balance (IVB), and homeostatic index (HI); the last three indices serve to determine the complex evaluation of chronic fatigue and work adaptation (ChFWA). 2. Evaluation of psychosomatic complaints, using a questionnaire on subjective psychosomatic complaints. 3. Measurement of systolic and diastolic blood pressure. The average values obtained for the HPS workers were compared with national population averages and with the average values of a group of operators with similar work activity from a thermal power station (TPS). In conclusion, with respect to ChFWA the values obtained in the HPS workers were no more unfavourable than the generalized values measured in workers occupied with similar types of work in other industrial branches of the country; they were, however, more unfavourable than those of the TPS workers. The operators' subjective evaluation of their mental and physical health was moderately worse, both in comparison with the national index values and in comparison with those of the TPS operators.

  11. Effect of polyoxyethylene sorbitan esters and sodium caseinate on physicochemical properties of palm-based functional lipid nanodispersions.

    PubMed

    Cheong, Jean Ne; Mirhosseini, Hamed; Tan, Chin Ping

    2010-06-01

    The main objective of the present study was to investigate the effect of polyoxyethylene sorbitan esters and sodium caseinate on the physicochemical properties of palm-based functional lipid nanodispersions prepared by the emulsification-evaporation technique. The results indicated that the average droplet size increased significantly (P < 0.05) with increasing fatty acid chain length and with increasing hydrophile-lipophile balance value. Among the prepared nanodispersions, the nanoemulsion containing Polysorbate 20 showed the smallest average droplet size (202 nm) and narrowest size distribution for tocopherol-tocotrienol nanodispersions, while sodium caseinate-stabilized nanodispersions containing carotenoids had the largest average droplet size (386 nm), indicating a greater emulsifying role for Polysorbate 20 compared with sodium caseinate.

  12. Relative determination of W-values for alpha particles in tissue equivalent and other gases. [5.4 MeV alpha particles]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krieger, G L

    1976-06-01

    W (the average energy to form an ion pair) for 5.4 MeV 241Am alpha particles in a Rossi-type tissue equivalent (T.E.) gas, argon and methane was determined to an accuracy better than 0.2% using a new automated data handling system. A vibrating reed electrometer and current digitizer were used to measure the current produced by completely stopping the alpha particles in a large cylindrical ionization chamber. A multichannel analyzer, operating in a slow multiscalar mode, was used to store pulses from the current digitizer. The dwell time, on the order of 60 minutes per channel, was selected with an external timer gate. Current measurements were made at reduced pressures (approximately 200 torr) to reduce ion recombination. The average current, over many repeated measurements, was compared to the current produced in nitrogen and its previously published W-value of 36.39 ± 0.04 eV/ion pair. The resulting W-values were (in eV/ion pair): 26.29 ± 0.05 for argon, 29.08 ± 0.03 for methane and 30.72 ± 0.04 for T.E. gas, which had an analyzed composition of 64.6% methane, 32.4% CO2, and 2.7% nitrogen. Although the methane and argon values agree within 0.1% with previously published values, the value for T.E. gas is 1.2% lower than the single previously reported value.
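    The relative method rests on a simple proportionality: with the same alpha energy fully stopped in each gas, the ionization current scales as 1/W, so W_gas = W_ref × I_ref / I_gas. A sketch of that conversion; only the nitrogen reference W comes from the abstract, while the currents are made-up values chosen to land near the reported W-values:

```python
# Relative W-value determination: charge collected per stopped alpha is
# proportional to 1/W, so W_gas = W_ref * I_ref / I_gas. The currents
# below are illustrative, not measured data.
W_N2 = 36.39                 # eV/ion pair, published reference for nitrogen
I_N2 = 1.000                 # nitrogen ionization current (arbitrary units)
currents = {"argon": 1.384, "methane": 1.251, "T.E. gas": 1.185}
for gas, i_gas in currents.items():
    print(gas, round(W_N2 * I_N2 / i_gas, 2))
```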

  13. Leptin levels in patients with anorexia nervosa following day/inpatient treatment do not predict weight 1 year post-referral.

    PubMed

    Seitz, Jochen; Bühren, Katharina; Biemann, Ronald; Timmesfeld, Nina; Dempfle, Astrid; Winter, Sibylle Maria; Egberts, Karin; Fleischhaker, Christian; Wewetzer, Christoph; Herpertz-Dahlmann, Beate; Hebebrand, Johannes; Föcker, Manuel

    2016-09-01

    Elevated serum leptin levels following rapid therapeutically induced weight gain in anorexia nervosa (AN) patients are discussed as a potential biomarker for renewed weight loss as a result of leptin-related suppression of appetite and increased energy expenditure. This study aims to analyze the predictive value of leptin levels at discharge as well as the average rate of weight gain during inpatient or day patient treatment for body weight at 1-year follow-up. 121 patients were recruited from the longitudinal Anorexia Nervosa Day patient versus Inpatient (ANDI) trial. Serum leptin levels were analyzed at referral and discharge. A multiple linear regression analysis to predict age-adjusted body mass index (BMI-SDS) at 1-year follow-up was performed. Leptin levels, the average rate of weight gain, premorbid BMI-SDS, BMI-SDS at referral, age and illness duration were included as independent variables. Neither leptin levels at discharge nor rate of weight gain significantly predicted BMI-SDS at 1-year follow-up explaining only 1.8 and 0.4 % of the variance, respectively. According to our results, leptin levels at discharge and average rate of weight gain did not exhibit any value in predicting weight at 1-year follow-up in our longitudinal observation study of adolescent patients with AN. Thus, research should focus on other potential factors to predict weight at follow-up. As elevated leptin levels and average rate of weight gain did not pose a risk for reduced weight, we found no evidence for the beneficial effect of slow refeeding in patients with acute AN.

  14. Averaging and Adding in Children's Worth Judgements

    ERIC Educational Resources Information Center

    Schlottmann, Anne; Harman, Rachel M.; Paine, Julie

    2012-01-01

    Under the normative Expected Value (EV) model, multiple outcomes are additive, but in everyday worth judgement intuitive averaging prevails. Young children also use averaging in EV judgements, leading to a disordinal, crossover violation of utility when children average the part worths of simple gambles involving independent events (Schlottmann,…

  15. Precipitation interpolation in mountainous areas

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur

    2015-04-01

    Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal. Despite largely violated assumptions, plain Kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one station per 433 km2, higher than the overall density of the Norwegian national network. Admittedly, the cross-validation technique reduces the effective gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
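    The Nash-Sutcliffe score used above can be computed directly: R2 = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))², which goes negative whenever the prediction is worse than simply using the observed mean. A minimal sketch with toy numbers, not the study's data:

```python
# Nash-Sutcliffe efficiency: 1 means perfect prediction, 0 means no
# better than the observed mean, negative means worse than the mean.
def nash_sutcliffe(obs, sim):
    m = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - m) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

obs = [2.0, 5.0, 1.0, 8.0, 4.0]                           # toy observations
print(round(nash_sutcliffe(obs, [2.5, 4.0, 1.5, 7.0, 4.5]), 3))  # close fit
print(nash_sutcliffe(obs, [6.0, 1.0, 7.0, 2.0, 9.0]) < 0)        # poor fit
```

    This makes the negative spatial R2 values above easy to read: the interpolated daily fields were worse, point for point, than a flat field at the observed mean.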

  16. Objective evaluation of linear and nonlinear tomosynthetic reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Hemler, Paul F.; Lavery, John E.

    2000-04-01

    This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced by using the five algorithms. These algorithms were, respectively, (1) conventional back projection, (2) iteratively deconvoluted back projection, (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections for each pixel is computed instead of the average value, (4) a similar algorithm wherein the maximum value was computed instead of the minimum value, and (5) the same type of algorithm except that the median value was computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks then were aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. Resulting images were histogram equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Visual rankings of image differences by five human observers of a subset (breast data only) also were performed to determine whether their subjective observations correlated with homologous FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P < 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.

  17. Optimization of intra-voxel incoherent motion imaging at 3.0 Tesla for fast liver examination.

    PubMed

    Leporq, Benjamin; Saint-Jalmes, Hervé; Rabrait, Cecile; Pilleul, Frank; Guillaud, Olivier; Dumortier, Jérôme; Scoazec, Jean-Yves; Beuf, Olivier

    2015-05-01

    Optimization of a multi-b-value MR protocol for fast intra-voxel incoherent motion (IVIM) imaging of the liver at 3.0 Tesla. A comparison of four different acquisition protocols was carried out based on estimated IVIM parameters (DSlow, DFast, and f) and ADC in 25 healthy volunteers. The effects of respiratory gating compared with free-breathing acquisition, of the diffusion gradient scheme (simultaneous or sequential), and of weighted averaging over the different b-values were assessed. An optimization study based on Cramer-Rao lower bound theory was then performed to minimize the number of b-values required for suitable quantification. The duration-optimized protocol was evaluated in 12 patients with chronic liver disease. No significant differences in IVIM parameters were observed between the assessed protocols. Only four b-values (0, 12, 82, and 1310 s.mm(-2)) were found necessary for suitable quantification of the IVIM parameters. DSlow and DFast decreased significantly between nonadvanced and advanced fibrosis (P < 0.05 and P < 0.01), whereas the variations in perfusion fraction and ADC were not significant. The results showed that IVIM can be performed in free breathing, with a weighted-averaging procedure, a simultaneous diffusion gradient scheme and only four optimized b-values (0, 10, 80, and 800), reducing scan duration by a factor of nine compared with a nonoptimized protocol. Preliminary results showed that parameters such as DSlow and DFast from the optimized IVIM protocol can be relevant biomarkers for distinguishing nonadvanced from advanced fibrosis. © 2014 Wiley Periodicals, Inc.
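    The model being fitted at those b-values is the standard IVIM bi-exponential, S(b)/S0 = f·exp(−b·DFast) + (1 − f)·exp(−b·DSlow). A sketch of the signal model; the parameter values are illustrative, typical liver-range numbers rather than the study's estimates:

```python
# Standard IVIM bi-exponential signal model. The perfusion fraction f
# and the two diffusion coefficients below are illustrative assumptions.
import math

def ivim_signal(b, f, d_fast, d_slow):
    """Normalized IVIM signal at diffusion weighting b (s/mm^2)."""
    return f * math.exp(-b * d_fast) + (1 - f) * math.exp(-b * d_slow)

f, d_fast, d_slow = 0.25, 50e-3, 1.1e-3   # perfusion fraction, mm^2/s
for b in (0, 12, 82, 1310):               # the four retained b-values
    print(b, round(ivim_signal(b, f, d_fast, d_slow), 3))
```

    The Cramer-Rao analysis in the abstract asks which small set of b-values pins down f, DFast and DSlow with the least variance; low b-values constrain the fast (perfusion) term and the high b-value constrains DSlow.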

  18. Dielectric properties of lava flows west of Ascraeus Mons, Mars

    USGS Publications Warehouse

    Carter, L.M.; Campbell, B.A.; Holt, J.W.; Phillips, R.J.; Putzig, N.E.; Mattei, S.; Seu, R.; Okubo, C.H.; Egan, A.F.

    2009-01-01

    The SHARAD instrument on the Mars Reconnaissance Orbiter detects subsurface interfaces beneath lava flow fields northwest of Ascraeus Mons. The interfaces occur in two locations; a northern flow that originates south of Alba Patera, and a southern flow that originates at the rift zone between Ascraeus and Pavonis Montes. The northern flow has permittivity values, estimated from the time delay of echoes from the basal interface, between 6.2 and 17.3, with an average of 12.2. The southern flow has permittivity values of 7.0 to 14.0, with an average of 9.8. The average permittivity values for the northern and southern flows imply densities of 3.7 and 3.4 g cm-3, respectively. Loss tangent values for both flows range from 0.01 to 0.03. The measured bulk permittivity and loss tangent values are consistent with those of terrestrial and lunar basalts, and represent the first measurement of these properties for dense rock on Mars. Copyright 2009 by the American Geophysical Union.

  19. [Effects of Long-term Implementation of the Flow-Sediment Regulation Scheme on Grain and Clay Compositions of Inshore Sediments in the Yellow River Estuary].

    PubMed

    Wang, Miao-miao; Sun, Zhi-gao; Lu, Xiao-ning; Wang, Wei; Wang, Chuan-yuan

    2015-04-01

    Based on laser particle size and X-ray diffraction (XRD) analysis, 28 sediment samples collected from the inshore region of the Yellow River estuary in October 2013 were examined to assess the influence of long-term implementation of the flow-sediment regulation scheme (FSRS, initiated in 2002) on the distributions of grain size and clay components (smectite, illite, kaolinite and chlorite) in sediments. Results showed that, after the FSRS had been implemented for more than 10 years, although the proportion of sand in inshore sediments of the Yellow River estuary was higher (average value, 23.5%) than in sediments of the Bohai Sea and the Yellow River, silt predominated (average value, 59.1%) and clay components were relatively low (average value, 17.4%). The clay components in sediments of the inshore region of the Yellow River estuary were close to those in the Yellow River. The situation was greatly changed by the implementation of the FSRS since 2002, and the clay components were in the order illite > smectite > chlorite > kaolinite. This study also indicated that, compared with a large-scale investigation of the Bohai Sea, local study of the inshore region of the Yellow River estuary is more favorable for revealing the effects of long-term implementation of the FSRS on the sedimentation environment of the Yellow River estuary.

  20. Zero-point corrections for isotropic coupling constants for cyclohexadienyl radical, C₆H₇ and C₆H₆Mu: beyond the bond length change approximation.

    PubMed

    Hudson, Bruce S; Chafetz, Suzanne K

    2013-04-25

    Zero-point vibrational level averaging for electron spin resonance (ESR) and muon spin resonance (µSR) hyperfine coupling constants (HFCCs) is computed for H and Mu isotopomers of the cyclohexadienyl radical. A local mode approximation previously developed for computing the effect of replacement of H by D on ¹³C-NMR chemical shifts is used. DFT methods are used to compute the change in energy and HFCCs when the geometry is changed from the equilibrium values for the stretch and both bend degrees of freedom. This variation is then averaged over the probability distribution for each degree of freedom. The method is tested using data for the methylene group of C₆H₇, the cyclohexadienyl radical, and its Mu analog. Good agreement is found for the difference between the HFCCs for Mu and H of CHMu, and for the difference between the HFCC for H of CHMu and that for the CH₂ methylene group of the parent radical. Because all three of these HFCCs are the same in the absence of the zero-point average, a one-parameter fit of the static HFCC, a(0), can be computed. That value, 45.2 Gauss, is compared to the results of several fixed-geometry electronic structure computations. The HFCC values for the ortho, meta and para H atoms are then discussed.
