Sample records for minimum contrast estimators

  1. Joint Entropy for Space and Spatial Frequency Domains Estimated from Psychometric Functions of Achromatic Discrimination

    PubMed Central

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least-squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained as the square root of the product of the spatial extent entropy and the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/(4π) or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies, since there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised. PMID:24466158
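
    As a concrete sketch of the computation described above, the following Python snippet fits Gaussians to psychometric data points by least squares, takes the fitted standard deviations as the spatial extent and spatial frequency entropies, and compares the joint entropy against the 1/(4π) bound. The data arrays, true SDs, and the identification of each entropy with the squared fit SD are illustrative assumptions, not values or definitions from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    """Gaussian profile used for the least-squares fit of psychometric data."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical discrimination data (proportion of "different" responses).
sd_tested = np.linspace(0.5, 1.5, 21)                         # 21 envelope SDs (deg)
p_space = np.exp(-0.5 * ((sd_tested - 1.0) / 0.12) ** 2)      # invented data
sf_tested = np.linspace(1.0, 3.0, 19)                         # 19 spatial frequencies (cpd)
p_freq = np.exp(-0.5 * ((sf_tested - 2.0) / 0.25) ** 2)       # invented data

(_, _, sigma_x), _ = curve_fit(gaussian, sd_tested, p_space, p0=[1, 1.0, 0.1])
(_, _, sigma_f), _ = curve_fit(gaussian, sf_tested, p_freq, p0=[1, 2.0, 0.2])

# Treat the squared fit SDs as the space and frequency entropies (assumption),
# so the joint entropy is the square root of their product.
joint_entropy = np.sqrt(sigma_x**2 * sigma_f**2)
print(f"joint entropy = {joint_entropy:.4f}, bound 1/(4*pi) = {1 / (4 * np.pi):.4f}")
```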

  2. Joint entropy for space and spatial frequency domains estimated from psychometric functions of achromatic discrimination.

    PubMed

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least-squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained as the square root of the product of the spatial extent entropy and the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/(4π) or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies, since there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised.

  3. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  4. Estimation of Leakage Potential of Selected Sites in Interstate and Tri-State Canals Using Geostatistical Analysis of Selected Capacitively Coupled Resistivity Profiles, Western Nebraska, 2004

    USGS Publications Warehouse

    Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.

    2009-01-01

    With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values over depth levels whose layer thickness increases geometrically with depth, which biased the mean-resistivity values toward the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method, which computes a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model and thereby accounts for homogeneous confining units. The minimum-adjusted method is also developed to incorporate the effect of local lithologic heterogeneity on water transmission. Seven sites with differing geologic contexts were selected following review of the capacitively coupled resistivity data collected in 2004. A reevaluation of these sites using the mean, minimum-unadjusted, and minimum-adjusted methods was performed to compare the different approaches for estimating leakage potential. Five of the seven sites contained underlying confining units, for which the minimum-unadjusted and minimum-adjusted methods accounted. Estimates of overall leakage potential were lower for the minimum-unadjusted and minimum-adjusted methods than those estimated by the mean method. For most sites, the local heterogeneity adjustment procedure of the minimum-adjusted method resulted in slightly larger overall leakage-potential estimates. In contrast to the mean method, the two minimum-based methods allowed the least permeable areas to control the overall vertical permeability of the subsurface. The minimum-adjusted method refined leakage-potential estimation by additionally including local lithologic heterogeneity effects.
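
    For a single vertical column of the resistivity model, the contrast between the mean method and the minimum-based methods reduces to the sketch below; all resistivity values are invented for illustration, and the heterogeneity adjustment of the minimum-adjusted method is omitted.

```python
import numpy as np

# Illustrative inverse-model resistivity column (ohm-m), shallow to deep.
# The low value represents a confining unit beneath more permeable layers.
resistivity = np.array([120.0, 95.0, 60.0, 18.0, 40.0, 85.0])

# Mean method: simple vertical mean over depth levels. With geometrically
# increasing layer thickness, this is biased toward the thin shallow layers.
mean_proxy = resistivity.mean()

# Minimum-unadjusted method: the least-resistive (least-permeable proxy)
# layer controls the relative leakage potential of the whole column.
min_proxy = resistivity.min()

print(f"mean-resistivity proxy:    {mean_proxy:.1f} ohm-m")
print(f"minimum-resistivity proxy: {min_proxy:.1f} ohm-m (confining unit controls)")
```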

  5. Minimum Alcohol Prices and Outlet Densities in British Columbia, Canada: Estimated Impacts on Alcohol-Attributable Hospital Admissions

    PubMed Central

    Zhao, Jinhui; Martin, Gina; Macdonald, Scott; Vallance, Kate; Treno, Andrew; Ponicki, William; Tu, Andrew; Buxton, Jane

    2013-01-01

    Objectives. We investigated whether periodic increases in minimum alcohol prices were associated with reduced alcohol-attributable hospital admissions in British Columbia. Methods. The longitudinal panel study (2002–2009) incorporated minimum alcohol prices, density of alcohol outlets, and age- and gender-standardized rates of acute, chronic, and 100% alcohol-attributable admissions. We applied mixed-method regression models to data from 89 geographic areas of British Columbia across 32 time periods, adjusting for spatial and temporal autocorrelation, moving average effects, season, and a range of economic and social variables. Results. A 10% increase in the average minimum price of all alcoholic beverages was associated with an 8.95% decrease in acute alcohol-attributable admissions and a 9.22% reduction in chronic alcohol-attributable admissions 2 years later. A Can$ 0.10 increase in average minimum price would prevent 166 acute admissions in the 1st year and 275 chronic admissions 2 years later. We also estimated significant, though smaller, adverse impacts of increased private liquor store density on hospital admission rates for all types of alcohol-attributable admissions. Conclusions. Significant health benefits were observed when minimum alcohol prices in British Columbia were increased. By contrast, adverse health outcomes were associated with an expansion of private liquor stores. PMID:23597383

  6. Potential benefits of minimum unit pricing for alcohol versus a ban on below cost selling in England 2014: modelling study.

    PubMed

    Brennan, Alan; Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S

    2014-09-30

    To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Modelling study using the Sheffield Alcohol Policy Model version 2.5. England 2014-15. Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45p, and 50p per unit (7.9 g/10 mL) of pure alcohol. Changes in mean consumption in terms of units of alcohol, drinkers' expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45p minimum unit price. Below cost selling is estimated to reduce harmful drinkers' mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health, saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45p minimum unit price is estimated to save 624 deaths and 23,700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. The ban on below cost selling, implemented in England in May 2014, is estimated to have small effects on consumption and health harm. The previously announced policy of a minimum unit price, if set at expected levels between 40p and 50p per unit, is estimated to have an approximately 40-50 times greater effect. © Brennan et al 2014.

  7. Potential benefits of minimum unit pricing for alcohol versus a ban on below cost selling in England 2014: modelling study

    PubMed Central

    Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S

    2014-01-01

    Objective To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Design Modelling study using the Sheffield Alcohol Policy Model version 2.5. Setting England 2014-15. Population Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Interventions Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45p, and 50p per unit (7.9 g/10 mL) of pure alcohol. Main outcome measures Changes in mean consumption in terms of units of alcohol, drinkers’ expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. Results The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45p minimum unit price. Below cost selling is estimated to reduce harmful drinkers’ mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health—saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45p minimum unit price is estimated to save 624 deaths and 23 700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. Conclusions The ban on below cost selling, implemented in England in May 2014, is estimated to have small effects on consumption and health harm. The previously announced policy of a minimum unit price, if set at expected levels between 40p and 50p per unit, is estimated to have an approximately 40-50 times greater effect. PMID:25270743

  8. Effects of important parameters variations on computing eigenspace-based minimum variance weights for ultrasound tissue harmonic imaging

    NASA Astrophysics Data System (ADS)

    Haji Heidari, Mehdi; Mozaffarzadeh, Moein; Manwar, Rayyan; Nasiriavanaki, Mohammadreza

    2018-02-01

    In recent years, minimum variance (MV) beamforming has been widely studied due to its high resolution and contrast in B-mode ultrasound imaging (USI). However, the performance of the MV beamformer is degraded in the presence of noise, as a result of inaccurate covariance matrix estimation, which leads to a low-quality image. Second harmonic imaging (SHI) provides many advantages over conventional pulse-echo USI, such as enhanced axial and lateral resolutions. However, the low signal-to-noise ratio (SNR) is a major problem in SHI. In this paper, an eigenspace-based minimum variance (EIBMV) beamformer has been employed for second harmonic USI. Tissue Harmonic Imaging (THI) is achieved by the Pulse Inversion (PI) technique. Using the EIBMV weights, instead of the MV ones, leads to reduced sidelobes and improved contrast, without compromising the high resolution of the MV beamformer (even in the presence of strong noise). In addition, we have investigated the effects of variations of the important parameters in computing EIBMV weights, i.e., K, L, and δ, on the resolution and contrast obtained in SHI. The results are evaluated using numerical data (using point target and cyst phantoms), and the proper parameters of EIBMV are indicated for THI.
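
    A rough numerical sketch of the eigenspace projection step is given below: conventional MV weights are formed from a spatially smoothed covariance estimate and then projected onto the signal subspace selected by an eigenvalue threshold δ. The aperture data, subarray length L, loading factor, and threshold value are assumptions for illustration, and the temporal averaging parameter K from the paper is omitted for brevity.

```python
import numpy as np

def eibmv_weights(x, L, delta):
    """Eigenspace-based MV apodization for one imaging point (sketch).

    x     : complex per-channel samples after receive delays, shape (M,)
    L     : subarray length for spatial smoothing of the covariance
    delta : eigenvalues below delta * max eigenvalue are treated as noise
    """
    M = x.size
    # Spatially smoothed covariance estimate (forward subarray averaging).
    R = np.zeros((L, L), dtype=complex)
    for i in range(M - L + 1):
        sub = x[i:i + L]
        R += np.outer(sub, sub.conj())
    R /= (M - L + 1)
    R += 1e-3 * np.trace(R).real / L * np.eye(L)   # diagonal loading (assumed factor)

    a = np.ones(L, dtype=complex)                  # steering vector for pre-delayed data
    w_mv = np.linalg.solve(R, a)
    w_mv /= a.conj() @ w_mv                        # classic MV weights

    # Eigenspace projection: keep only the signal-subspace eigenvectors.
    vals, vecs = np.linalg.eigh(R)
    Es = vecs[:, vals > delta * vals.max()]
    return Es @ (Es.conj().T @ w_mv)               # EIBMV weights

rng = np.random.default_rng(0)
x = np.ones(64) + 0.3 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
w = eibmv_weights(x, L=32, delta=0.5)
print(w.shape)  # (32,)
```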

  9. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds.

    PubMed

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  10. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and can forecast a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. In theory, both scoring rules used as optimization criteria should locate a similar, unknown optimum; discrepancies may result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients, with the log-likelihood estimator being slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
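
    The comparison can be reproduced in miniature with the closed-form CRPS of a Gaussian predictive distribution (Gneiting and Raftery, 2007). The sketch below fits the same Gaussian regression to synthetic data once by maximum likelihood and once by minimum CRPS; the data and the constant-variance simplification are assumptions, since non-homogeneous regression also models the variance (e.g., from ensemble spread).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 500)                     # synthetic predictor (ensemble-mean-like)
y = 1.0 + 2.0 * x + rng.normal(0, 0.8, 500)     # synthetic observations

def unpack(theta):
    a, b, c = theta
    return a + b * x, np.exp(c)                 # mu, sigma > 0

def neg_loglik(theta):
    mu, sigma = unpack(theta)
    return -norm.logpdf(y, mu, sigma).sum()

def mean_crps(theta):
    # Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation y.
    mu, sigma = unpack(theta)
    z = (y - mu) / sigma
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    return crps.mean()

theta0 = np.zeros(3)
ml = minimize(neg_loglik, theta0).x
mc = minimize(mean_crps, theta0).x
print("ML coefficients:  ", np.round(ml[:2], 3), "sigma:", round(float(np.exp(ml[2])), 3))
print("CRPS coefficients:", np.round(mc[:2], 3), "sigma:", round(float(np.exp(mc[2])), 3))
```

    With a correct distributional assumption, as here, the two sets of coefficients come out nearly identical, which is the synthetic-case result the abstract describes.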

  11. Impact of number of repeated scans on model observer performance for a low-contrast detection task in computed tomography.

    PubMed

    Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia

    2016-04-01

    Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.
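
    For readers unfamiliar with the CHO machinery, the sketch below shows the core computation: channel outputs from signal-present and signal-absent images, a Hotelling template from the pooled channel covariance, and Az estimated nonparametrically from the template responses, repeated for subsets of scans. The channel outputs are random placeholders rather than Gabor-filtered CT ROIs.

```python
import numpy as np

rng = np.random.default_rng(2)
n_scans, n_ch = 100, 6

# Placeholder channel outputs (a real study applies e.g. Gabor channels to ROIs).
v_sig = rng.normal(0.5, 1.0, (n_scans, n_ch))   # signal-present images
v_bkg = rng.normal(0.0, 1.0, (n_scans, n_ch))   # signal-absent images

def cho_az(v_sig, v_bkg):
    """Hotelling template and Az (sketch; Az via the Mann-Whitney statistic)."""
    S = 0.5 * (np.cov(v_sig.T) + np.cov(v_bkg.T))          # pooled channel covariance
    w = np.linalg.solve(S, v_sig.mean(0) - v_bkg.mean(0))  # CHO template
    t_sig, t_bkg = v_sig @ w, v_bkg @ w                    # decision variables
    # Az = P(t_sig > t_bkg): nonparametric area under the ROC curve.
    return (t_sig[:, None] > t_bkg[None, :]).mean()

# Effect of the number of repeated scans: compare subsets with the full set.
for n in (10, 30, 100):
    print(n, "scans -> Az =", round(cho_az(v_sig[:n], v_bkg[:n]), 3))
```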

  12. Impact of number of repeated scans on model observer performance for a low-contrast detection task in computed tomography

    PubMed Central

    Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia

    2016-01-01

    Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model’s template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO. PMID:27284547

  13. Virtual forensic entomology: improving estimates of minimum post-mortem interval with 3D micro-computed tomography.

    PubMed

    Richards, Cameron S; Simonsen, Thomas J; Abel, Richard L; Hall, Martin J R; Schwyn, Daniel A; Wicklein, Martina

    2012-07-10

    We demonstrate how micro-computed tomography (micro-CT) can be a powerful tool for describing internal and external morphological changes in Calliphora vicina (Diptera: Calliphoridae) during metamorphosis. Pupae were sampled during the 1st, 2nd, 3rd and 4th quarter of development after the onset of pupariation at 23 °C, and placed directly into 80% ethanol for preservation. To find the optimal contrast, four batches of pupae were treated differently: batch one was stained in 0.5 M aqueous iodine for 1 day; batch two for 7 days; batch three was tagged with a radiopaque dye; batch four was left unstained (control). Pupae stained for 7 days in iodine yielded micro-CT scans with the best contrast. The scans were of sufficiently high spatial resolution (17.2 μm) to visualise the internal morphology of developing pharate adults at all four ages. A combination of external and internal morphological characters was shown to have the potential to estimate the age of blowfly pupae with a higher degree of accuracy and precision than using external morphological characters alone. Age-specific developmental characters are described. The technique could be used to estimate a minimum post-mortem interval in cases of suspicious death where pupae are the oldest stages of insect evidence collected. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Impact of minimum contrast media volumes during elective percutaneous coronary intervention for prevention of contrast-induced nephropathy in patients with stable coronary artery disease.

    PubMed

    Ebisawa, Soichiro; Kurita, Tairo; Tanaka, Nobuyoshi; Nasu, Kenya; Kimura, Masashi; Ito, Tatsuya; Kinoshita, Yoshihisa; Tsuchikane, Etsuo; Terashima, Mitsuyasu; Suzuki, Takahiko

    2016-01-01

    Contrast-induced nephropathy (CIN) is an important complication following percutaneous coronary intervention (PCI). The clinical importance of a minimum contrast media volume (CMV) for PCI to prevent CIN has not been well evaluated. The purpose of this study was to evaluate the impact of minimum CMV to prevent CIN after PCI. In this study, 2052 consecutive patients who underwent elective PCI in our institute were analyzed. We divided patients into two groups according to CMV: a minimum CMV PCI group [CMV ≤50 ml (n = 94)] and a non-minimum CMV PCI group [CMV >50 ml (n = 1958)]. CIN occurred in 160 (7.8 %) patients. The incidence of CIN was significantly lower in the minimum CMV PCI group than in the non-minimum CMV PCI group (2.1 vs. 8.1 %; P = 0.03). According to multivariate analysis, elderly patients and diabetes mellitus patients were at high risk of developing CIN in this study population. When analyzing only high-risk patients, the incidence of CIN was also significantly lower in the minimum CMV group than in the non-minimum CMV group (2.6 vs. 10.3 %; P = 0.03). Minimum CMV PCI could reduce the incidence of CIN, particularly in high-risk patients; as such, defining the minimum CMV clinical cut-off values may be useful for the prevention of CIN.

  15. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds

    DOE PAGES

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error (“risk”) in the worst case. Here we construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  16. Agrometeorological models for forecasting the qualitative attributes of "Valência" oranges

    NASA Astrophysics Data System (ADS)

    Moreto, Victor Brunini; Rolim, Glauco de Souza; Zacarin, Bruno Gustavo; Vanin, Ana Paula; de Souza, Leone Maia; Latado, Rodrigo Rocha

    2017-11-01

    Forecasting is the act of predicting unknown future events using available data. Estimating, in contrast, uses data to simulate an actual condition. Brazil is the world's largest producer of oranges, and the state of São Paulo is the largest producer in Brazil. The "Valência" orange is among the most common cultivars in the state. We analyzed the influence of monthly meteorological variables during the growth cycle of Valência oranges grafted onto "Rangpur" lime rootstocks (VACR) for São Paulo, and developed monthly agrometeorological models for forecasting the qualitative attributes of VACR in mature orchards. For fruits per box, the best accuracy across all months was 0.84%, with a minimum forecast range of 4 months. For the ratio between °Brix and juice acidity (RATIO), the best accuracy was 0.69%, with a minimum forecast range of 5 months. Minimum, mean, and maximum air temperatures and relative evapotranspiration were the most important variables in the models.

  17. How to obtain a shortest mode converter based on periodic waveguide with limited index contrast?

    NASA Astrophysics Data System (ADS)

    Zhang, Lingxuan; Zhang, Wenfu; Wang, Guoxi; Hu, Yaowei; Ge, Zhiqiang; Wang, Leiran; Sun, Qibing; Wang, Weiqiang; Gong, Yongkang; Zhao, Wei

    2017-05-01

    The mode converter is one of the most significant elements in photonic integrated circuits. Shortening its length relies on increasing the index contrast, but the achievable index contrast is limited by fabrication technology. In addition, an excessively large index contrast brings disadvantages such as large scattering loss, reflection loss, and small manufacturing tolerance. Thus, a practical scheme for designing a mode converter is to manipulate the transverse index distribution so as to achieve the minimum length for a given index contrast. We analytically derive the theoretical maximum coupling efficiency in a periodic waveguide, which determines the minimum coupling length of the mode converter. Moreover, we demonstrate how to construct a distribution function of indices over the waveguide cross section that achieves the minimum length, and a case study illustrates the process. Proofs based on both mathematical derivation and numerical simulation are presented in the paper.

  18. Urban-Rural Humidity Contrasts in Mexico City

    NASA Astrophysics Data System (ADS)

    Jáuregui, E.; Tejeda, A.

    1997-02-01

    Data from one pair of urban-suburban (Tacubaya and Airport) and one pair of urban-rural (School of Mines and Plan Texcoco) temperature and humidity measuring stations were used to illustrate specific humidity (q) contrasts in Mexico City. Results show a marked seasonal variation of q from around 7.9 g kg⁻¹ during the dry months to 10 g kg⁻¹ in the wet season (May-October) on both urban and suburban sites. The mean monthly contrasts for this pair of stations, albeit small, show that the city air is somewhat drier during the first half of the year. Comparison of urban and rural q on an hourly basis shows that although urban air is more humid at night, the reverse is true during the afternoon. Areal distribution of q shows two centres of maximum humidity over the city at night and a corresponding minimum during the afternoon. On average the urban-rural contrasts in q were found to be somewhat smaller than the estimated uncertainty. The above results are in agreement with mid-latitude experience.

  19. Display characterization by eye: contrast ratio and discrimination throughout the grayscale

    NASA Astrophysics Data System (ADS)

    Gille, Jennifer; Arend, Larry; Larimer, James O.

    2004-06-01

    We have measured the ability of observers to estimate the contrast ratio (maximum white luminance / minimum black or gray) of various displays and to assess luminous discrimination over the tonescale of the display. This was done using only the computer itself and easily-distributed devices such as neutral density filters. The ultimate goal of this work is to see how much of the characterization of a display can be performed by the ordinary user in situ, in a manner that takes advantage of the unique abilities of the human visual system and measures visually important aspects of the display. We discuss the relationship among contrast ratio, tone scale, display transfer function and room lighting. These results may contribute to the development of applications that allow optimization of displays for the situated viewer / display system without instrumentation and without indirect inferences from laboratory to workplace.

  20. Contrasts between estimates of baseflow help discern multiple sources of water contributing to rivers

    NASA Astrophysics Data System (ADS)

    Cartwright, I.; Gilfedder, B.; Hofmann, H.

    2014-01-01

    This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. During the early stages of high-discharge events, the chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those based on chemical mass balance using Cl calculated from continuous electrical conductivity measurements. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of the annual discharge with a net baseflow contribution of 16% of total discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of discharge annually with a net baseflow contribution between 2001 and 2011 of 35% of total discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge and 26% of total discharge). These differences most probably reflect how the different techniques characterise baseflow. The local minimum and recursive digital filters probably aggregate much of the water from delayed sources as baseflow. However, as many delayed transient water stores (such as bank return flow, floodplain storage, or interflow) are likely to be geochemically similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occur on the rising limbs of hydrographs, followed by inflows of low-salinity water from the transient stores as discharge falls. The joint use of complementary techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
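
    The two families of techniques being contrasted can be sketched in a few lines of Python: chemical mass balance from end-member Cl concentrations, and the one-parameter Lyne-Hollick recursive digital filter. The discharge record, end-member concentrations, and filter parameter below are illustrative assumptions.

```python
import numpy as np

Q = 5 + 20 * np.exp(-0.5 * ((np.arange(120) - 40) / 8.0) ** 2)  # discharge (m3/s), one event
C_bf, C_ro = 300.0, 40.0              # end-member Cl (mg/L): baseflow, runoff (assumed)
C = C_ro + (C_bf - C_ro) * (Q.min() / Q)   # synthetic in-stream Cl, diluted at high flow

# Chemical mass balance: Qb = Q * (C - C_ro) / (C_bf - C_ro)
Qb_cmb = Q * (C - C_ro) / (C_bf - C_ro)

# Lyne-Hollick recursive digital filter (single forward pass, parameter alpha).
alpha = 0.925
qf = np.zeros_like(Q)                 # filtered quickflow
for t in range(1, Q.size):
    qf[t] = alpha * qf[t - 1] + 0.5 * (1 + alpha) * (Q[t] - Q[t - 1])
    qf[t] = min(max(qf[t], 0.0), Q[t])   # constrain quickflow to [0, Q]
Qb_rdf = Q - qf

print(f"baseflow fraction, chemical mass balance: {Qb_cmb.sum() / Q.sum():.2f}")
print(f"baseflow fraction, recursive filter:      {Qb_rdf.sum() / Q.sum():.2f}")
```

    Because the filter attributes slowly receding water to baseflow while the mass balance attributes geochemically fresh water to runoff, the filter estimate comes out higher, mirroring the direction of the differences reported above.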

  21. Brief Report: Investigating Uncertainty in the Minimum Mortality Temperature: Methods and Application to 52 Spanish Cities.

    PubMed

    Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio

    2017-01-01

    The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but the ability to characterize it has been limited by the absence of a method for describing uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of the confidence interval (CI) and standard error (SE) for the minimum mortality temperature of a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to the nominal value (95%) in the simulated datasets, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed higher minimum mortality temperatures in hotter cities, rising at almost exactly the same rate as annual mean temperature. The proposed method for computing CIs and SEs for minimums of spline curves allows minimum mortality temperatures in different cities to be compared, and their associations with climate to be investigated properly, while accounting for estimation uncertainty.
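
    The bootstrap idea can be illustrated compactly: fit a smooth temperature-mortality curve, resample the coefficient vector from its estimated sampling distribution, and take percentiles of the minima of the resampled curves. In the sketch below a quartic polynomial stands in for the spline basis and the data are simulated, so this is a simplified illustration rather than the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(4)
temp = rng.uniform(0, 30, 1000)                  # daily mean temperature (C)
y = 0.002 * (temp - 18.0) ** 2 + rng.normal(0, 0.05, temp.size)  # noisy log rate, min at 18 C

t = temp / 30.0                                  # rescale for numerical conditioning
X = np.vander(t, 5)                              # quartic basis (spline stand-in)
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = res[0] / (temp.size - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)            # estimated coefficient covariance

grid = np.linspace(temp.min(), temp.max(), 301)
G = np.vander(grid / 30.0, 5)

# Approximate parametric bootstrap: resample beta ~ N(beta_hat, cov) and
# record the temperature minimizing each resampled curve.
draws = rng.multivariate_normal(beta, cov, size=2000)
mmt = grid[np.argmin(draws @ G.T, axis=1)]

print(f"MMT = {grid[np.argmin(G @ beta)]:.1f} C, "
      f"95% CI = ({np.percentile(mmt, 2.5):.1f}, {np.percentile(mmt, 97.5):.1f}), "
      f"SE = {mmt.std():.2f}")
```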

  22. A generalised optimal linear quadratic tracker with universal applications. Part 2: discrete-time systems

    NASA Astrophysics Data System (ADS)

    Ebrahimzadeh, Faezeh; Tsai, Jason Sheng-Hong; Chung, Min-Ching; Liao, Ying Ting; Guo, Shu-Mei; Shieh, Leang-San; Wang, Li

    2017-01-01

    In contrast to Part 1, Part 2 presents a generalised optimal linear quadratic digital tracker (LQDT) with universal applications for discrete-time (DT) systems. This includes (1) a generalised optimal LQDT design for the system with pre-specified trajectories of the output and the control input, and additionally with both the input-to-output direct-feedthrough term and known/estimated system disturbances or extra input/output signals; (2) a new optimal filter-shaped proportional-plus-integral state-feedback LQDT design for non-square non-minimum-phase DT systems to achieve a minimum-phase-like tracking performance; (3) a new approach for computing the control zeros of given non-square DT systems; and (4) a one-learning-epoch input-constrained iterative learning LQDT design for repetitive DT systems.

  23. Minimum resolvable power contrast model

    NASA Astrophysics Data System (ADS)

    Qian, Shuai; Wang, Xia; Zhou, Jingjing

    2018-01-01

    Signal-to-noise ratio (SNR) and MTF are important indices for evaluating the performance of optical systems. However, neither used alone nor assessed jointly can they intuitively describe the overall performance of the system. Therefore, an index is proposed to reflect comprehensive system performance: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve the human eye. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and accounts for attenuation by the atmosphere, the optical imaging system, and the detector. Combining the signal-to-noise ratio with the MTF yields the Minimum Resolvable Radiation Performance Contrast. Finally, the detection probability model of the MRP is given.

  24. Relict colluvial boulder deposits as paleoclimatic indicators in the Yucca Mountain region, southern Nevada

    USGS Publications Warehouse

    Whitney, J.W.; Harrington, C.D.

    1993-01-01

    Early to middle Pleistocene boulder deposits are common features on southern Nevada hillslopes. These darkly varnished, ancient colluvial deposits stand out in stark contrast to the underlying light-colored bedrock of volcanic tuffs, and they serve as minor divides between drainage channels on modern hillslopes. To demonstrate the antiquity of these stable hillslope features, six colluvial boulder deposits from Yucca Mountain, Nye County, Nevada, were dated by cation-ratio dating of rock varnish accreted on boulder surfaces. Estimated minimum ages of these boulder deposits range from 760 to 170 ka. Five additional older deposits on nearby Skull and Little Skull Mountains and Buckboard Mesa yielded cation-ratio minimum-age estimates of 1.38 Ma to 800 ka. An independent cosmogenic chlorine-36 surface exposure date was obtained on one deposit, which confirms an estimated early to middle Quaternary age. These deposits have provided the oldest age estimates for unconsolidated hillslope deposits in the southwestern United States. We suggest that the colluvial boulder deposits were produced during early and middle Pleistocene glacial/pluvial episodes and were stabilized during the transition to drier interglacial climates. The preservation of old, thin hillslope deposits and the less-than-2-m incision by hillslope runoff adjacent to these deposits, indicate that extremely low denudation rates have occurred on resistant volcanic hillslopes in the southern Great Basin during Quaternary time. -from Authors

  25. Doubly robust nonparametric inference on the average treatment effect.

    PubMed

    Benkeser, D; Carone, M; Laan, M J Van Der; Gilbert, P B

    2017-12-01

    Doubly robust estimators are widely used to draw inference about the average effect of a treatment. Such estimators are consistent for the effect of interest if either one of two nuisance parameters is consistently estimated. However, if flexible, data-adaptive estimators of these nuisance parameters are used, double robustness does not readily extend to inference. We present a general theoretical study of the behaviour of doubly robust estimators of an average treatment effect when one of the nuisance parameters is inconsistently estimated. We contrast different methods for constructing such estimators and investigate the extent to which they may be modified to also allow doubly robust inference. We find that while targeted minimum loss-based estimation can be used to solve this problem very naturally, common alternative frameworks appear to be inappropriate for this purpose. We provide a theoretical study and a numerical evaluation of the alternatives considered. Our simulations highlight the need for and usefulness of these approaches in practice, while our theoretical developments have broad implications for the construction of estimators that permit doubly robust inference in other problems.
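
    A standard doubly robust construction of the kind analyzed here is the augmented inverse-probability-weighted (AIPW) estimator of the average treatment effect, sketched below with simple parametric nuisance fits on simulated data. The data-generating process and nuisance models are illustrative choices; the paper itself works with targeted minimum loss-based estimation and data-adaptive nuisance estimators.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(5)
n = 5000
X = rng.normal(size=(n, 2))
e = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))   # true propensity score
A = rng.binomial(1, e)                             # treatment indicator
Y = 1.0 * A + X[:, 0] + rng.normal(size=n)         # outcome; true ATE = 1.0

# Nuisance 1: propensity score e(X). Nuisance 2: outcome regressions m_a(X).
e_hat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
m1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
m0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

# AIPW influence-function estimate: consistent if either nuisance is correct.
psi = m1 - m0 + A * (Y - m1) / e_hat - (1 - A) * (Y - m0) / (1 - e_hat)
se = psi.std(ddof=1) / np.sqrt(n)
print(f"ATE = {psi.mean():.3f} +/- {1.96 * se:.3f}")
```

    The inference caveat discussed in the abstract arises exactly here: the standard error above is trustworthy when both nuisances converge fast enough, but with one inconsistent, data-adaptive nuisance fit the naive interval can undercover.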

  26. The Independent Evolution Method Is Not a Viable Phylogenetic Comparative Method

    PubMed Central

    2015-01-01

    Phylogenetic comparative methods (PCMs) use data on species traits and phylogenetic relationships to shed light on evolutionary questions. Recently, Smaers and Vinicius suggested a new PCM, Independent Evolution (IE), which purportedly employs a novel model of evolution based on Felsenstein’s Adaptive Peak Model. The authors found that IE improves upon previous PCMs by producing more accurate estimates of ancestral states, as well as separate estimates of evolutionary rates for each branch of a phylogenetic tree. Here, we document substantial theoretical and computational issues with IE. When data are simulated under a simple Brownian motion model of evolution, IE produces severely biased estimates of ancestral states and changes along individual branches. We show that these branch-specific changes are essentially ancestor-descendant or “directional” contrasts, and draw parallels between IE and previous PCMs such as “minimum evolution”. Additionally, while comparisons of branch-specific changes between variables have been interpreted as reflecting the relative strength of selection on those traits, we demonstrate through simulations that regressing IE estimated branch-specific changes against one another gives a biased estimate of the scaling relationship between these variables, and provides no advantages or insights beyond established PCMs such as phylogenetically independent contrasts. In light of our findings, we discuss the results of previous papers that employed IE. We conclude that Independent Evolution is not a viable PCM, and should not be used in comparative analyses. PMID:26683838

  27. Minimum time and fuel flight profiles for an F-15 airplane with a Highly Integrated Digital Electronic Control (HIDEC) system

    NASA Technical Reports Server (NTRS)

    Haering, E. A., Jr.; Burcham, F. W., Jr.

    1984-01-01

    A simulation study was conducted to optimize minimum time and fuel consumption paths for an F-15 airplane powered by two F100 Engine Model Derivative (EMD) engines. The benefits of using variable stall margin (uptrim) to increase performance were also determined. This study supports the NASA Highly Integrated Digital Electronic Control (HIDEC) program. The basis for this comparison was minimum time and fuel used to reach Mach 2 at 13,716 m (45,000 ft) from the initial conditions of Mach 0.15 at 1524 m (5000 ft). Results were also compared to a pilot's estimated minimum time and fuel trajectory determined from the F-15 flight manual and previous experience. The minimum time trajectory took 15 percent less time than the pilot's estimate for the standard EMD engines, while the minimum fuel trajectory used 1 percent less fuel than the pilot's estimate for the minimum fuel trajectory. The F-15 airplane with EMD engines and uptrim, was 23 percent faster than the pilot's estimate. The minimum fuel used was 5 percent less than the estimate.

  28. Practical implementation of channelized Hotelling observers: effect of ROI size

    NASA Astrophysics Data System (ADS)

    Ferrero, Andrea; Favazza, Christopher P.; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.

    2017-03-01

    Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO's performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO's performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies.
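
    The truncation effect driving the minimum ROI size can be illustrated directly: construct a Gabor channel at a low spatial frequency and measure how much of its energy an ROI of a given width retains. The envelope width rule and all parameter values in this sketch are assumptions for illustration, not the paper's channel design.

```python
import numpy as np

def gabor_channel(roi_px, cycles_per_px, theta=0.0):
    """2-D Gabor channel sampled on an roi_px x roi_px grid (sketch)."""
    half = roi_px / 2.0
    y, x = np.mgrid[-half:half, -half:half] + 0.5
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope_sigma = 0.5 / cycles_per_px        # assumed: envelope ~ one period wide
    return (np.exp(-(x**2 + y**2) / (2 * envelope_sigma**2))
            * np.cos(2 * np.pi * cycles_per_px * xr))

# Energy retained by an ROI, relative to a large reference support, for the
# lowest-frequency (widest) channel: the channel that gets truncated first.
freq = 1 / 32                                   # cycles/pixel, low-frequency channel
ref_energy = np.sum(gabor_channel(256, freq) ** 2)
for roi in (16, 32, 64, 128):
    frac = np.sum(gabor_channel(roi, freq) ** 2) / ref_energy
    print(f"ROI {roi:3d} px: {100 * frac:5.1f}% of channel energy retained")
```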

  29. Practical implementation of Channelized Hotelling Observers: Effect of ROI size.

    PubMed

    Ferrero, Andrea; Favazza, Christopher P; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H

    2017-03-01

    Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO's performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO's performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies.

  30. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space-resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.

  31. SU-E-I-65: The Joint Commission's Requirements for Annual Diagnostic Physics Testing of Nuclear Medicine Equipment, and a Clinically Relevant Methodology for Testing Low-Contrast Resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, W. Geoffrey; Gray, David Clinton

    Purpose: To introduce the Joint Commission's requirements for annual diagnostic physics testing of all nuclear medicine equipment, effective 7/1/2014, and to highlight an acceptable methodology for testing low-contrast resolution of the nuclear medicine imaging system. Methods: The Joint Commission's required diagnostic physics evaluations are to be conducted for all of the image types produced clinically by each scanner. Other accrediting bodies, such as the ACR and the IAC, have similar imaging metrics, but do not emphasize testing low-contrast resolution as it relates clinically. The proposed method for testing low-contrast resolution introduces quantitative metrics that are clinically relevant. The acquisition protocol and calculation of contrast levels utilize a modified version of the protocol defined in AAPM Report #52. Results: Using the Rose criterion for lesion detection with an SNR_pixel = 4.335 and a CNR_lesion = 4, the minimum contrast levels for 25.4 mm and 31.8 mm cold spheres were calculated to be 0.317 and 0.283, respectively. These contrast levels are the minimum thresholds that must be attained to guard against false positive lesion detection. Conclusion: Low-contrast resolution, or detectability, can be properly tested in a clinically relevant manner by measuring the contrast level of cold spheres within a Jaszczak phantom using pixel values within ROIs placed in the background and cold-sphere regions. The measured contrast levels are then compared to a minimum threshold calculated using the Rose criterion and a CNR_lesion = 4. The measured contrast levels must either meet or exceed this minimum threshold to demonstrate acceptable lesion detectability. This research and development activity was performed by the authors while employed at West Physics Consulting, LLC. It is presented with the consent of West Physics, which has authorized the dissemination of the information and/or techniques described in the work.
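
    The measurement side of this test is straightforward to script, as in the sketch below: cold-sphere contrast computed from ROI mean pixel values and compared against the minimum thresholds quoted above. The ROI statistics are placeholders, and the threshold derivation itself (Rose criterion with CNR_lesion = 4, per the modified AAPM Report #52 protocol) is not reproduced here.

```python
def cold_sphere_contrast(mean_bg, mean_sphere):
    """Contrast of a cold (photopenic) sphere from ROI mean pixel values."""
    return (mean_bg - mean_sphere) / mean_bg

# Placeholder ROI statistics from a reconstructed Jaszczak phantom image,
# paired with the minimum contrast thresholds quoted in the abstract.
measurements = {
    "25.4 mm sphere": dict(mean_bg=100.0, mean_sphere=62.0, threshold=0.317),
    "31.8 mm sphere": dict(mean_bg=100.0, mean_sphere=58.0, threshold=0.283),
}

for name, m in measurements.items():
    c = cold_sphere_contrast(m["mean_bg"], m["mean_sphere"])
    verdict = "PASS" if c >= m["threshold"] else "FAIL"
    print(f"{name}: contrast {c:.3f} vs minimum {m['threshold']:.3f} -> {verdict}")
```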

  32. A comparison between temporal and subband minimum variance adaptive beamforming

    NASA Astrophysics Data System (ADS)

    Diamantis, Konstantinos; Voxen, Iben H.; Greenaway, Alan H.; Anderson, Tom; Jensen, Jørgen A.; Sboros, Vassilis

    2014-03-01

    This paper compares the performance between temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions with one transducer element transmitting and all 128 elements receiving each time, provides a FWHM of 0.03 mm (0.14λ) for both implementations at a depth of 40 mm. This value is more than 20 times lower than the one achieved by conventional beamforming. The corresponding values of PSL are -58 dB and -63 dB for time and frequency domain MV beamformers, while a value no lower than -50 dB can be obtained from either Boxcar or Hanning weights. Interestingly, a single emission with central element #64 as the transmitting aperture provides results comparable to the full sequence. The values of FWHM are 0.04 mm and 0.03 mm and those of PSL are -42 dB and -46 dB for temporal and subband approaches. From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB respectively at the same depth, with the initial shape of the cyst being preserved in contrast to conventional beamforming. The difference between the two adaptive beamformers is less significant in the case of a single emission, with the contrast level being estimated at -42 dB for the time domain and -43 dB for the frequency domain implementation. For the estimation of a single MV weight of a low resolution image formed by a single emission, 0.44 × 10⁹ calculations per second are required for the temporal approach. The same numbers for the subband approach are 0.62 × 10⁹ for the point and 1.33 × 10⁹ for the cyst phantom. The comparison demonstrates similar resolution but slightly lower side-lobes and higher contrast for the subband approach at the expense of increased computation time.

  33. Effects of encapsulation damping on the excitation threshold for subharmonic generation from contrast microbubbles.

    PubMed

    Katiyar, Amit; Sarkar, Kausik

    2012-11-01

    A recent study [Katiyar and Sarkar (2011). J. Acoust. Soc. Am. 130, 3137-3147] showed that, in contrast to the analytical result for free bubbles, the minimum threshold for subharmonic generation from contrast microbubbles does not necessarily occur at twice the resonance frequency. Here, increased damping, whether due to small bubble radius or to the encapsulation, is shown to shift the minimum threshold away from twice the resonance frequency. Free bubbles as well as four models of the contrast agent encapsulation are investigated while varying the surface dilatational viscosity. Encapsulation properties are determined using measured attenuation data for a commercial contrast agent. For sufficiently small damping, the models predict two minima in the threshold curve, the one at twice the resonance frequency lying lower than the one at the resonance frequency, in accord with the classical analytical result. However, increased damping damps the bubble response more at twice the resonance frequency than at resonance, leading to a flattening of the threshold curve and a gradual shift of the absolute minimum from twice the resonance frequency toward the resonance frequency. The deviation from the classical result stems from the fact that the perturbation analysis employed to obtain it assumes small damping, which is not always applicable to contrast microbubbles.

  34. Correlation techniques to determine model form in robust nonlinear system realization/identification

    NASA Technical Reports Server (NTRS)

    Stry, Greselda I.; Mook, D. Joseph

    1991-01-01

    The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumption regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions of the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.

  15. Practical implementation of Channelized Hotelling Observers: Effect of ROI size

    PubMed Central

    Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.

    2017-01-01

    Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO’s performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO’s performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies. PMID:28943699
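
    For readers unfamiliar with the CHO computation itself, a minimal numpy sketch follows. It assumes the channel matrix (e.g., Gabor channels) has already been built, and uses the detectability index as the figure of merit, which is one common choice rather than necessarily the exact FOM used in this study.

    ```python
    import numpy as np

    def cho_detectability(signal_rois, noise_rois, channels):
        """Channelized Hotelling observer detectability from ROI samples.

        signal_rois, noise_rois : (N, P) arrays of vectorized ROIs
        channels                : (P, C) channel matrix (e.g., Gabor channels)
        """
        vs = signal_rois @ channels              # channel outputs, signal present
        vn = noise_rois @ channels               # channel outputs, signal absent
        dv = vs.mean(0) - vn.mean(0)             # mean channel-output difference
        S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))  # pooled channel covariance
        w = np.linalg.solve(S, dv)               # Hotelling template (channel space)
        return float(np.sqrt(dv @ w))            # detectability index d_a
    ```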

  16. [Estimation of the number of minimum salaries attributed to professions in function of their prestige].

    PubMed

    Sousa, F A; da Silva, J A

    2000-04-01

    The purpose of this study was to verify the relationship between professional prestige scaled through magnitude estimation and professional prestige scaled through estimation of the number of minimum salaries attributed to professions as a function of their prestige in society. Results showed that: (1) the relationship between the estimation of magnitudes and the estimation of the number of minimum salaries attributed to the professions as a function of their prestige is characterized by a power function with an exponent lower than 1.0; (2) the orders of degrees of prestige of the professions resulting from different experiments involving different samples of subjects are highly concordant (W = 0.85; p < 0.001), considering the modality used as a number (estimation of magnitudes of minimum salaries).

  17. Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates

    PubMed Central

    Curtis, Caroline A.; Bradley, Bethany A.

    2016-01-01

    Background Although increasingly sophisticated environmental measures are being applied to species distribution models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. Methods We compiled expert knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the ‘plant characteristics’ information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity Information Facility (GBIF) and corresponding climate from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Results Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN, while grasses and trees had larger ΔCN for minimum precipitation. Conclusion Our results show that climatic limits derived from distribution data are consistently broader than USDA PLANTS experts’ estimates and likely provide more robust estimates of climatic tolerance, especially for widespread forbs and grasses. These findings suggest that widely available expert-based climatic tolerance estimates underrepresent species’ fundamental niche and likely fail to capture the realized niche. PMID:27870859

  18. State-level minimum wage and heart disease death rates in the United States, 1980-2015: A novel application of marginal structural modeling.

    PubMed

    Van Dyke, Miriam E; Komro, Kelli A; Shah, Monica P; Livingston, Melvin D; Kramer, Michael R

    2018-07-01

    Despite substantial declines since the 1960s, heart disease remains the leading cause of death in the United States (US), and geographic disparities in heart disease mortality have grown. State-level socioeconomic factors might be important contributors to geographic differences in heart disease mortality. This study examined the association between state-level minimum wage increases above the federal minimum wage and heart disease death rates from 1980 to 2015 among 'working age' individuals aged 35-64 years in the US. Annual, inflation-adjusted state and federal minimum wage data were extracted from legal databases, and annual state-level heart disease death rates were obtained from CDC WONDER. Although most minimum wage and health studies to date use conventional regression models, we employed marginal structural models to account for possible time-varying confounding. Quasi-experimental, marginal structural models accounting for state, year, and state × year fixed effects estimated the association between increases in the state-level minimum wage above the federal minimum wage and heart disease death rates. In models of 'working age' adults (35-64 years old), a $1 increase in the state-level minimum wage above the federal minimum wage was on average associated with ~6 fewer heart disease deaths per 100,000 (95% CI: -10.4, -1.99), or a state-level heart disease death rate that was 3.5% lower per year. In contrast, for older adults (65+ years old) a $1 increase was on average associated with a 1.1% lower state-level heart disease death rate per year (b = -28.9 per 100,000, 95% CI: -71.1, 13.3). State-level economic policies are important targets for population health research. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Image detection and compression for memory efficient system analysis

    NASA Astrophysics Data System (ADS)

    Bayraktar, Mustafa

    2015-02-01

    Advances in digital signal processing have been progressing toward efficient use of memory and computation. Both factors can be addressed through feasible image-storage techniques that compute the minimum information of an image, which speeds up later processing. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the number of keypoints by matching their orientations and merging them within different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting keypoints from the contrast shades of the image.
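
    A minimal OpenCV sketch of the recognition pipeline described above (the file name and the stored descriptor set are placeholders; SIFT ships in the main OpenCV module from version 4.4 onward):

    ```python
    import cv2

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)

    # Recognition: match descriptors against a previously saved set
    # (stored_descriptors is assumed to be loaded from an image database).
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    # matches = bf.match(descriptors, stored_descriptors)
    ```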

  20. Constrained signal reconstruction from wavelet transform coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1991-12-31

    A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT. 20 refs.
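
    As a hedged illustration of the unconstrained core of the problem: for an orthonormal wavelet, the minimum-norm signal consistent with a subset of DWT coefficients is obtained by zeroing the unobserved coefficients (PyWavelets sketch below). The paper's a priori amplitude bounds would additionally require projecting onto the feasible set, which this sketch omits.

    ```python
    import numpy as np
    import pywt

    def min_norm_reconstruction(observed, wavelet="db4"):
        """Minimum-norm signal consistent with an incomplete DWT sampling.

        observed : list of (coeff_array, mask_array) pairs in pywt ordering
                   [cA_n, cD_n, ..., cD_1]; mask marks sampled coefficients.
        For an orthonormal wavelet, zeroing unobserved coefficients minimizes
        the signal norm among all signals matching the observed coefficients.
        """
        coeffs = [np.where(mask, c, 0.0) for c, mask in observed]
        return pywt.waverec(coeffs, wavelet)
    ```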

  1. Methods for the quantitative comparison of molecular estimates of clade age and the fossil record.

    PubMed

    Clarke, Julia A; Boyd, Clint A

    2015-01-01

    Approaches quantifying the relative congruence, or incongruence, of molecular divergence estimates and the fossil record have been limited. Previously proposed methods are largely node specific, assessing incongruence at particular nodes for which both fossil data and molecular divergence estimates are available. These existing metrics, and other methods that quantify incongruence across topologies including entirely extinct clades, have so far not taken into account uncertainty surrounding both the divergence estimates and the ages of fossils. They have also treated molecular divergence estimates younger than previously assessed fossil minimum estimates of clade age as if they were the same as cases in which they were older. However, these cases are not the same. Recovered divergence dates younger than compared oldest known occurrences require prior hypotheses regarding the phylogenetic position of the compared fossil record and standard assumptions about the relative timing of morphological and molecular change to be incorrect. Older molecular dates, by contrast, are consistent with an incomplete fossil record and do not require prior assessments of the fossil record to be unreliable in some way. Here, we compare previous approaches and introduce two new descriptive metrics. Both metrics explicitly incorporate information on uncertainty by utilizing the 95% confidence intervals on estimated divergence dates and data on stratigraphic uncertainty concerning the age of the compared fossils. Metric scores are maximized when these ranges are overlapping. MDI (minimum divergence incongruence) discriminates between situations where molecular estimates are younger or older than known fossils reporting both absolute fit values and a number score for incompatible nodes. DIG range (divergence implied gap range) allows quantification of the minimum increase in implied missing fossil record induced by enforcing a given set of molecular-based estimates. These metrics are used together to describe the relationship between time trees and a set of fossil data, which we recommend be phylogenetically vetted and referred on the basis of apomorphy. Differences from previously proposed metrics and the utility of MDI and DIG range are illustrated in three empirical case studies from angiosperms, ostracods, and birds. These case studies also illustrate the ways in which MDI and DIG range may be used to assess time trees resultant from analyses varying in calibration regime, divergence dating approach or molecular sequence data analyzed. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
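
    The following toy function is not the authors' MDI or DIG range metric, but it illustrates the underlying interval comparison they build on: congruence when the molecular confidence interval and the fossil age range overlap, and signed gaps otherwise.

    ```python
    def divergence_fossil_gap(mol_ci, fossil_range):
        """Illustrative interval comparison in the spirit of MDI/DIG range.

        mol_ci       : (young, old) 95% CI on a molecular divergence date, Ma
        fossil_range : (young, old) stratigraphic range of the oldest fossil, Ma
        Returns 0 when the ranges overlap; otherwise the gap, signed negative
        when the molecular estimate is younger than the fossil (incongruent)
        and positive when older (implied missing fossil record).
        """
        m_young, m_old = mol_ci
        f_young, f_old = fossil_range
        if m_old < f_young:        # molecular CI entirely younger than fossil
            return m_old - f_young
        if m_young > f_old:        # molecular CI entirely older than fossil
            return m_young - f_old
        return 0.0
    ```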

  2. Avoidable Burden of Risk Factors for Serious Road Traffic Crashes in Iran: A Modeling Study.

    PubMed

    Khosravi Shadmani, Fatemeh; Mansori, Kamyar; Karami, Manoochehr; Zayeri, Farid; Shadman, Reza Khosravi; Hanis, Shiva Mansouri; Soori, Hamid

    2017-03-01

    The aim of this study was to model the avoidable burden of the risk factors for road traffic crashes in Iran and to prioritize interventions to reduce that burden. The prevalence and the effect size of the risk factors were obtained from data documented by the traffic police of Iran in 2013. The effect size was estimated using an ordinal regression model. The potential impact fraction index was applied to calculate the avoidable burden in order to prioritize interventions. This index was calculated for theoretical, plausible, and feasible minimum risk level scenarios. The joint effects of the risk factors were then estimated for all the scenarios. The highest avoidable burdens in the theoretical, plausible, and feasible minimum risk level scenarios for the non-use of child restraints on urban roads were 52.25, 28.63, and 46.67, respectively. In contrast, the values of this index for speeding on rural roads were 76.24, 37.00, and 62.23, respectively. On the basis of the different scenarios considered in this research, we suggest that future interventions focus on decreasing the prevalence of speeding, the non-use of child restraints, cell phone use while driving, and helmet disuse, and that the laws related to these items should be considered seriously.
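
    For reference, the standard potential impact fraction for a single dichotomous risk factor can be written as in the sketch below; whether the study used exactly this form or a multi-category version is not stated in the abstract.

    ```python
    def potential_impact_fraction(p_current, p_counterfactual, rr):
        """Potential impact fraction for a dichotomous risk factor.

        p_current, p_counterfactual : exposure prevalence before/after intervention
        rr                          : relative risk associated with the exposure
        """
        burden_now = p_current * (rr - 1) + 1
        burden_cf = p_counterfactual * (rr - 1) + 1
        return (burden_now - burden_cf) / burden_now

    # Eliminating the exposure entirely (the theoretical minimum risk scenario)
    # sets p_counterfactual = 0, reducing the PIF to the population
    # attributable fraction.
    ```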

  3. 12 CFR Appendix M1 to Part 1026 - Repayment Disclosures

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... terms of a cardholder's account that will expire in a fixed period of time, as set forth by the card... estimates. (1) Minimum payment formulas. When calculating the minimum payment repayment estimate, card... calculate the minimum payment amount for special purchases, such as a “club plan purchase.” Also, assume...

  4. 12 CFR Appendix M1 to Part 1026 - Repayment Disclosures

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... terms of a cardholder's account that will expire in a fixed period of time, as set forth by the card... estimates. (1) Minimum payment formulas. When calculating the minimum payment repayment estimate, card... calculate the minimum payment amount for special purchases, such as a “club plan purchase.” Also, assume...

  5. Estimating missing daily temperature extremes in Jaffna, Sri Lanka

    NASA Astrophysics Data System (ADS)

    Thevakaran, A.; Sonnadara, D. U. J.

    2018-04-01

    The accuracy of reconstructing missing daily temperature extremes at the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature than for daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values; for daily minimum temperature, the percentage is about 92%. By calculating the standard deviation of the difference between estimated and observed values, we have shown that the errors in estimating the daily maximum and minimum temperatures are ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, which is the nearest station to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available due to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
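
    A minimal sketch of the standard-departure method as described: each neighbour's value is converted to a z-score using its own climatology, the z-scores are averaged, and the result is rescaled to the target station's climatology. Equal weighting of neighbours is an assumption of this sketch; the study may weight stations, e.g., favouring nearby Mannar.

    ```python
    import numpy as np

    def estimate_missing_extreme(neighbor_temps, neighbor_means, neighbor_stds,
                                 target_mean, target_std):
        """Reconstruct one missing daily temperature extreme at the target station.

        neighbor_temps : same-day values at the neighbouring stations
        neighbor_means, neighbor_stds : each neighbour's climatological mean and
            standard deviation for that calendar period
        target_mean, target_std : the target station's own climatology
        """
        z = ((np.asarray(neighbor_temps) - np.asarray(neighbor_means))
             / np.asarray(neighbor_stds))
        return target_mean + target_std * z.mean()
    ```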

  6. Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-04-01

    The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), the measurement angular positions, the object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions, and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and that only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform sampling) are required to obtain the best parametric images. Angular measurements beyond these only spread the total dose across the measurements without improving or worsening the CRLB, though the added measurements may improve the parametric images by reducing estimation bias. Next, using the CRLB we evaluate the multiple-image radiography, diffraction enhanced imaging and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.

  7. Noise and Analyzer-Crystal Angular Position Analysis for Analyzer-Based Phase-Contrast Imaging

    PubMed Central

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-01-01

    The analyzer-based phase-contrast X-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile (AIP) of the X-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), the measurement angular positions, the object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions, and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this manuscript is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and that only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform sampling) are required to obtain the best parametric images. Angular measurements beyond these only spread the total dose across the measurements without improving or worsening the CRLB, though the added measurements may improve the parametric images by reducing estimation bias. Next, using the CRLB we evaluate the Multiple-Image Radiography (MIR), Diffraction Enhanced Imaging (DEI) and Scatter Diffraction Enhanced Imaging (S-DEI) estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique. PMID:24651402
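
    The generic CRLB computation behind this kind of analysis is short enough to sketch, here under a Gaussian-noise assumption (the paper's exact measurement model may differ):

    ```python
    import numpy as np

    def crlb(jacobian, noise_cov):
        """Cramér-Rao lower bounds on parameter variances for Gaussian noise.

        jacobian  : (K, P) sensitivities d(mean measurement)/d(parameter),
                    evaluated at the K analyzer-crystal angular positions
        noise_cov : (K, K) measurement noise covariance
        Returns the vector of per-parameter variance bounds, diag(I^-1).
        """
        J = np.asarray(jacobian)
        fisher = J.T @ np.linalg.solve(noise_cov, J)  # Fisher information matrix
        return np.diag(np.linalg.inv(fisher))
    ```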

  8. Analysis of the Bayesian Cramér-Rao lower bound in astrometry. Studying the impact of prior information in the location of an object

    NASA Astrophysics Data System (ADS)

    Echeverria, Alex; Silva, Jorge F.; Mendez, Rene A.; Orchard, Marcos

    2016-10-01

    Context. The best precision that can be achieved to estimate the location of a stellar-like object is a topic of permanent interest in the astrometric community. Aims: We analyze bounds for the best position estimation of a stellar-like object on a CCD detector array in a Bayesian setting where the position is unknown, but where we have access to a prior distribution. In contrast to a parametric setting where we estimate a parameter from observations, the Bayesian approach estimates a random object (i.e., the position is a random variable) from observations that are statistically dependent on the position. Methods: We characterize the Bayesian Cramér-Rao (CR) bound on the minimum mean square error (MMSE) of the best estimator of the position of a point source on a linear CCD-like detector, as a function of the properties of the detector, the source, and the background. Results: We quantify and analyze the increase in astrometric performance from the use of a prior distribution of the object position, which is not available in the classical parametric setting. This gain is shown to be significant for various observational regimes, in particular in the case of faint objects or when the observations are taken under poor conditions. Furthermore, we present numerical evidence that the MMSE estimator of this problem tightly achieves the Bayesian CR bound. This is a remarkable result, demonstrating that all the performance gains presented in our analysis can be achieved with the MMSE estimator. Conclusions: The Bayesian CR bound can be used as a benchmark indicator of the expected maximum positional precision of a set of astrometric measurements in which prior information can be incorporated. This bound can be achieved through the conditional mean estimator, in contrast to the parametric case where no unbiased estimator precisely reaches the CR bound.
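
    In the scalar, Gaussian-prior special case, the gain from prior information can be sketched in one line. This is the classical Van Trees form, given for illustration rather than as the paper's exact bound:

    ```python
    def bayesian_crlb(expected_fisher_info, prior_std):
        # MMSE >= 1 / (E[I_F] + 1/sigma_prior^2): the prior adds information,
        # tightening the purely parametric bound 1 / I_F.
        return 1.0 / (expected_fisher_info + 1.0 / prior_std**2)
    ```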

  9. Optimal portfolio strategy with cross-correlation matrix composed by DCCA coefficients: Evidence from the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Sun, Xuelian; Liu, Zixian

    2016-02-01

    In this paper, a new estimator of the correlation matrix is proposed, composed of detrended cross-correlation coefficients (DCCA coefficients), to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients acquired by the detrended cross-correlation analysis (DCCA) method can describe the nonlinear correlation between assets and can be decomposed over different time scales. These properties of DCCA make it possible to improve investment performance and make it more valuable to investigate the scale behaviors of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. Stability analysis shows the effect of the two kinds of correlation matrices on the estimation error of the portfolio weights. The observed scale behaviors are significant for risk management and could be used to optimize portfolio selection.
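
    A minimal sketch of the MVP step, assuming the covariance matrix has already been assembled; the assembly from DCCA coefficients indicated in the closing comment is the assumed construction, not a quoted formula from the paper.

    ```python
    import numpy as np

    def min_variance_weights(cov):
        """Global minimum variance portfolio: w = C^{-1} 1 / (1' C^{-1} 1)."""
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    # In the paper's spirit, the covariance would be rebuilt from DCCA
    # cross-correlation coefficients and per-asset volatilities:
    # cov[i, j] = rho_dcca[i, j] * sigma[i] * sigma[j]
    ```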

  10. Incorporating fault zone head wave and direct wave secondary arrival times into seismic tomography: Application at Parkfield, California

    NASA Astrophysics Data System (ADS)

    Bennington, Ninfa L.; Thurber, Clifford; Peng, Zhigang; Zhang, Haijiang; Zhao, Peng

    2013-03-01

    We present a three-dimensional (3D) P wave velocity (Vp) model of the Parkfield region that utilizes existing P wave arrival time data, including fault zone head waves (FZHWs), and data from direct wave secondary arrivals (DWSAs). The first-arrival and DWSA travel times are obtained as the global- and local-minimum travel time paths, respectively. The inclusion of FZHWs and DWSAs results in as much as a 5% and a 10% increase in the across-fault velocity contrast, respectively, for the Vp model at Parkfield relative to that of Thurber et al. [2006]. Viewed along strike, three pronounced velocity contrast regions are observed: a pair of strong positive velocity contrasts (SW fast), one NW of the 1966 Parkfield earthquake hypocenter and the other SE of the 2004 Parkfield earthquake hypocenter, and a strong negative velocity contrast (NE fast) between the two hypocenters. The negative velocity contrast partially to entirely encompasses peak coseismic slip estimated in several slip models for the 2004 earthquake, suggesting that the negative velocity contrast played a part in defining the rupture patch of the 2004 Parkfield earthquake. Following Ampuero and Ben-Zion (2008), the pattern of velocity contrasts is consistent with the observed bilateral rupture propagation for the 2004 Parkfield earthquake. Although the velocity contrasts also suggest bilateral rupture propagation for the 1966 Parkfield earthquake, the fault is creeping to the NW here, i.e., exhibiting velocity-strengthening behavior. Thus, it is not surprising that rupture propagated only SE during this event.

  11. Estimation of the minimum ventilation requirement of dairy cattle barns for different outdoor temperatures and its effects on indoor temperature: Bursa case.

    PubMed

    Yaslioglu, Erkan; Simsek, Ercan; Kilic, Ilker

    2007-04-15

    In this study, 10 different dairy cattle barns with natural ventilation systems were investigated in terms of structural aspects. The VENTGRAPH software package was used to estimate minimum ventilation requirements for three different outdoor design temperatures (-3, 0 and 1.7 °C). Variation in indoor temperature was also determined under the above-mentioned conditions. In the investigated dairy cattle barns, on condition that the minimum ventilation requirement is achieved for -3, 0 and 1.7 °C outdoor design temperatures and 70% or 80% indoor relative humidity (IRH), estimated indoor temperatures ranged from 2.2 to 12.2 °C for 70% IRH and from 4.3 to 15.0 °C for 80% IRH. Barn type, outdoor design temperature, and indoor relative humidity significantly (p < 0.01) affected the indoor temperature. The highest ventilation requirement was calculated for the straw yard (13,879 m³ h⁻¹), while the lowest was estimated for the tie-stall (6,169.20 m³ h⁻¹). Estimated minimum ventilation requirements per animal differed significantly (p < 0.01) according to barn type. The effect of outdoor design temperature on minimum ventilation requirements and on minimum ventilation requirements per animal was also significant (p < 0.05, p < 0.01). Estimated indoor temperatures were within the thermoneutral zone (-2 to 20 °C). Therefore, it can be said that the use of naturally ventilated cold dairy barns in the region will not lead to problems associated with animal comfort in winter.

  12. Annual Estimated Minimum School Program of Utah School Districts, 1984-85.

    ERIC Educational Resources Information Center

    Utah State Office of Education, Salt Lake City. School Finance and Business Section.

    This bulletin presents both the statistical and financial data of the Estimated Annual State-Supported Minimum School Program for the 40 school districts of the State of Utah for the 1984-85 school year. It is published for the benefit of those interested in research into the minimum school programs of the various Utah school districts. A brief…

  13. Probabilistic divergence time estimation without branch lengths: dating the origins of dinosaurs, avian flight and crown birds.

    PubMed

    Lloyd, G T; Bapst, D W; Friedman, M; Davis, K E

    2016-11-01

    Branch lengths, measured in character changes, are an essential requirement of clock-based divergence estimation, regardless of whether the fossil calibrations used represent nodes or tips. However, a separate set of divergence time approaches is typically used to date palaeontological trees, which may lack such branch lengths. Among these methods, sophisticated probabilistic approaches have recently emerged, in contrast with simpler algorithms relying on minimum node ages. Here, using a novel phylogenetic hypothesis for Mesozoic dinosaurs, we apply two such approaches to estimate divergence times for: (i) Dinosauria, (ii) Avialae (the earliest birds) and (iii) Neornithes (crown birds). We find: (i) the plausibility of a Permian origin for dinosaurs to be dependent on whether Nyasasaurus is the oldest dinosaur, (ii) a Middle to Late Jurassic origin of avian flight regardless of whether Archaeopteryx or Aurornis is considered the first bird and (iii) a Late Cretaceous origin for Neornithes that is broadly congruent with other node- and tip-dating estimates. Demonstrating the feasibility of probabilistic time-scaling further opens up divergence estimation to the rich histories of extinct biodiversity in the fossil record, even in the absence of detailed character data. © 2016 The Authors.

  14. Software Development Cost Estimation Executive Summary

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus M.; Menzies, Tim

    2006-01-01

    Identify simple, fully validated cost models that provide estimation uncertainty along with the cost estimate, based on the COCOMO variable set. Use machine learning techniques to determine: a) the minimum number of cost drivers required for NASA domain-based cost models; b) the minimum number of data records required; and c) the estimation uncertainty. Build a repository of software cost estimation information. Coordinate tool development and data collection with: a) tasks funded by PA&E Cost Analysis; b) the IV&V Effort Estimation Task; and c) NASA SEPG activities.

  15. Feature long axis size and local luminance contrast determine ship target acquisition performance: strong evidence for the TOD case

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; Toet, Alexander; Kooi, Frank L.

    2016-10-01

    Visual images of a civilian target ship on a sea background were produced using a CAD model. The total set consisted of 264 images and included 3 different color schemes, 2 ship viewing aspects, 5 sun illumination conditions, 2 sea reflection values, 2 ship positions with respect to the horizon, and 3 values of atmospheric contrast reduction. In a perception experiment, the images were presented on a display in a long darkened corridor. Observers were asked to indicate the range at which they were able to detect the ship and classify the following 5 ship elements: accommodation, funnel, hull, mast, and hat above the bridge. This resulted in a total of 1584 Target Acquisition (TA) range estimates for two observers. Next, the ship contour, ship elements, and corresponding TA ranges were analyzed using several feature size and contrast measures. Most data coincide on a contrast versus angular size plot using (1) the long axis as the characteristic ship/ship feature size and (2) local Weber contrast as the characteristic ship/ship feature contrast. Finally, the data were compared with a variety of visual performance functions assumed to be representative of Target Acquisition: the TOD (Triangle Orientation Discrimination), MRC (Minimum Resolvable Contrast), CTF (Contrast Threshold Function), the TTP (Targeting Task Performance) metric, and circular disc detection data for the unaided eye (Blackwell). The results provide strong evidence for the TOD case: both the position and slope of the TOD curve match the ship detection and classification data without any free parameters. In contrast, the MRC and CTF are too steep, the TTP and disc detection curves are too shallow, and all these curves need an overall scaling factor in order to coincide with the ship and ship feature recognition data.
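
    The local Weber contrast measure referred to above is simply:

    ```python
    def weber_contrast(l_target, l_background):
        # Local Weber contrast: (L_t - L_b) / L_b,
        # with L_b the local background luminance.
        return (l_target - l_background) / l_background
    ```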

  16. A Fast and Robust Beamspace Adaptive Beamformer for Medical Ultrasound Imaging.

    PubMed

    Mohades Deylami, Ali; Mohammadzadeh Asl, Babak

    2017-06-01

    The minimum variance beamformer (MVB) increases the resolution and contrast of medical ultrasound imaging compared with nonadaptive beamformers. These advantages come at the expense of high computational complexity, which prevents this adaptive beamformer from being applied in a real-time imaging system. A new beamspace (BS) based on the discrete cosine transform is proposed in which medical ultrasound signals can be represented with fewer dimensions compared with the standard BS. This is because the beams in the proposed BS have symmetric beampatterns, in contrast to the asymmetric ones in the standard BS. This lets us decrease the dimensionality of the data to two, so that a highly complex algorithm, such as the MVB, can be applied faster in this BS. The results indicated that by keeping only two beams, the MVB in the proposed BS provides very similar resolution and better contrast compared with the standard MVB (SMVB) with only 0.44% of the required flops. This beamformer is also more robust against sound speed estimation errors than the SMVB.
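
    A hedged sketch of the beamspace reduction, using SciPy's DCT and keeping two beams as in the paper; the MV step then runs on the reduced data exactly as it would in element space. All names are placeholders.

    ```python
    import numpy as np
    from scipy.fft import dct

    def beamspace_reduce(channel_data, n_beams=2):
        """Project delay-aligned channel data onto the first DCT beams.

        channel_data : (M, K) array (M elements, K temporal samples)
        Returns (n_beams, K) beamspace data, shrinking the covariance
        matrix the MV beamformer must invert from M x M to n_beams x n_beams.
        """
        bs = dct(channel_data, axis=0, norm="ortho")  # DCT across the aperture
        return bs[:n_beams]
    ```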

  17. Estimated Global Mortality Attributable to Smoke from Landscape Fires

    PubMed Central

    Henderson, Sarah B.; Chen, Yang; Randerson, James T.; Marlier, Miriam; DeFries, Ruth S.; Kinney, Patrick; Bowman, David M.J.S.; Brauer, Michael

    2012-01-01

    Background: Forest, grass, and peat fires release approximately 2 petagrams of carbon into the atmosphere each year, influencing weather, climate, and air quality. Objective: We estimated the annual global mortality attributable to landscape fire smoke (LFS). Methods: Daily and annual exposure to particulate matter ≤ 2.5 μm in aerodynamic diameter (PM2.5) from fire emissions was estimated globally for 1997 through 2006 by combining outputs from a chemical transport model with satellite-based observations of aerosol optical depth. In World Health Organization (WHO) subregions classified as sporadically affected, the daily burden of mortality was estimated using previously published concentration–response coefficients for the association between short-term elevations in PM2.5 from LFS (contrasted with 0 μg/m3 from LFS) and all-cause mortality. In subregions classified as chronically affected, the annual burden of mortality was estimated using the American Cancer Society study coefficient for the association between long-term PM2.5 exposure and all-cause mortality. The annual average PM2.5 estimates were contrasted with theoretical minimum (counterfactual) concentrations in each chronically affected subregion. Sensitivity of mortality estimates to different exposure assessments, counterfactual estimates, and concentration–response functions was evaluated. Strong La Niña and El Niño years were compared to assess the influence of interannual climatic variability. Results: Our principal estimate for the average mortality attributable to LFS exposure was 339,000 deaths annually. In sensitivity analyses the interquartile range of all tested estimates was 260,000–600,000. The regions most affected were sub-Saharan Africa (157,000) and Southeast Asia (110,000). Estimated annual mortality during La Niña was 262,000, compared with 532,000 during El Niño. Conclusions: Fire emissions are an important contributor to global mortality. Adverse health outcomes associated with LFS could be substantially reduced by curtailing burning of tropical rainforests, which rarely burn naturally. The large estimated influence of El Niño suggests a relationship between climate and the burden of mortality attributable to LFS. PMID:22456494

  18. Helium as a Dynamical Tracer in the Thermosphere

    NASA Astrophysics Data System (ADS)

    Thayer, J. P.; Liu, X.; Wang, W.; Burns, A. G.

    2014-12-01

    Helium has been a missing constituent in current thermosphere general circulation models. Although typically a minor gas relative to the more abundant major gases, its unique properties of being chemically inert and light make it an excellent tracer of thermosphere dynamics. Studying helium can help simplify the understanding of transport effects. This understanding can then be projected onto other gases whose overall structure and behavior are complex but which, by contrast with helium, can be evaluated for their transport dependencies. The dynamical influences on composition impact estimates of thermosphere mass density, to which helium during solar minima can contribute directly, as well as of ionosphere electron density. Furthermore, helium in the upper thermosphere has not been observed during a solar minimum since the 1976 minimum. Indirect estimates of helium in the upper thermosphere during the recent extreme solar minimum indicate that winter-time helium concentrations exceeded NRLMSISE-00 estimates by 30%-70% during periods of quiet geomagnetic activity. For times of active geomagnetic conditions, helium concentrations near ~450 km altitude are estimated to decrease while oxygen concentrations increase. An investigation of the altitude structure of thermosphere mass density storm-time perturbations reveals the important effects of composition change, with the maximum perturbation occurring near the He/O transition region and a much weaker maximum occurring near the O/N2 transition region. However, evaluating helium behavior and its role as a dynamical tracer is not straightforward, and model development is necessary to adequately establish the connection to specific dynamical processes. Fortunately, recent efforts have led to the implementation of helium modules in the NCAR TIEGCM and TIME-GCM. In this invited talk, the simulated helium behavior and structure will be shown to reproduce observations (such as the wintertime helium bulge and storm-time response), and its utility as a dynamical tracer of thermosphere dynamics will be elucidated.

  19. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distance is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  20. Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values

    DTIC Science & Technology

    2016-12-01

    MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis functions is applied to target imaging with synthetic aperture sonar. Although the multi-aspect basis functions are not orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of the basis functions.
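
    A minimal sketch of a linear MMSE estimator with a non-orthogonal basis, assuming white noise and i.i.d. Gaussian priors on the weights; the report's exact noise and prior models are not given in this fragment.

    ```python
    import numpy as np

    def mmse_weights(basis, echo, noise_var, prior_var=1.0):
        """Linear MMSE weights for an echo modeled as a weighted sum of
        (possibly non-orthogonal) multi-aspect basis functions.

        basis : (N, B) matrix whose columns are the basis functions
        echo  : (N,) measured echo samples
        """
        A = np.asarray(basis)
        # Regularized normal equations: (A^H A + sigma^2/prior * I) w = A^H y
        G = A.conj().T @ A + (noise_var / prior_var) * np.eye(A.shape[1])
        return np.linalg.solve(G, A.conj().T @ echo)
    ```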

  1. A thrust-sheet propulsion concept using fissionable elements

    NASA Technical Reports Server (NTRS)

    Moeckel, W. E.

    1976-01-01

    A space propulsion concept is proposed and analyzed which consists of a thin sheet coated on one side with fissionable material, so that nuclear power is converted directly into propulsive power. Thrust is available both from ejected fission fragments and from thermal radiation. Optimum thicknesses are determined for the active and substrate layers. This concept is shown to have potential mission capability (in terms of velocity increments) superior to that of all other advanced propulsion concepts for which performance estimates are available. A suitable spontaneously fissioning material such as Cf254 could provide an extremely high-performance first stage beyond earth orbit. In contrast with some other advanced nuclear propulsion concepts, there is no minimum size below which this concept is infeasible.

  2. A thrust-sheet propulsion concept using fissionable elements

    NASA Technical Reports Server (NTRS)

    Moeckel, W. E.

    1976-01-01

    A space propulsion concept is proposed and analyzed which consists of a thin sheet coated on one side with fissionable material, so that nuclear power is converted directly into propulsive power. Thrust is available both from ejected fission fragments and from thermal radiation. Optimum thicknesses are determined for the active and substrate layers. This concept is shown to have potential mission capability (in terms of velocity increments) superior to that of all other advanced propulsion concepts for which performance estimates are available. A suitable spontaneously fissioning material such as Cf-254 could provide an extremely high-performance first stage beyond earth orbit. In contrast with some other advanced nuclear propulsion concepts, there is no minimum size below which this concept is infeasible.

  3. Aorta and pulmonary artery segmentation using optimal surface graph cuts in non-contrast CT

    NASA Astrophysics Data System (ADS)

    Sedghi Gamechi, Zahra; Arias-Lorza, Andres M.; Pedersen, Jesper Holst; de Bruijne, Marleen

    2018-03-01

    Accurate measurements of the size and shape of the aorta and pulmonary arteries are important as risk factors for cardiovascular diseases and for Chronic Obstructive Pulmonary Disease (COPD). The aim of this paper is to propose an automated method for segmenting the aorta and pulmonary arteries in low-dose non-ECG-gated non-contrast CT scans. Low contrast and the high noise level make automatic segmentation in such images a challenging task. In the proposed method, first, a minimum cost path tracking algorithm traces the centerline between user-defined seed points. The cost function is based on a multi-directional medialness filter and a lumen intensity similarity metric. The vessel radius is also estimated from the medialness filter. The extracted centerlines are then smoothed and dilated non-uniformly according to the extracted local vessel radius and subsequently used as initialization for a graph-cut segmentation. The algorithm is evaluated on 225 low-dose non-ECG-gated non-contrast CT scans from a lung cancer screening trial. Quantitatively analyzing 25 scans with full manual annotations, we obtain a Dice overlap of 0.94 ± 0.01 for the aorta and 0.92 ± 0.01 for the pulmonary arteries. Qualitative validation by visual inspection of 200 scans shows successful segmentation in 93% of all cases for the aorta and 94% for the pulmonary arteries.
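
    The Dice overlap used in the quantitative evaluation is:

    ```python
    import numpy as np

    def dice_overlap(seg, ref):
        """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
        seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
        return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
    ```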

  4. Body fat influences departure from stopover sites in migratory birds: evidence from whole-island telemetry

    PubMed Central

    Goymann, Wolfgang; Spina, Fernando; Ferri, Andrea; Fusani, Leonida

    2010-01-01

    Migration remains one of the great mysteries of animal life. Small migratory birds rely on refuelling stopovers after crossing ecological barriers such as deserts or seas. Previous studies have suggested that fuel reserves may determine stopover duration, but this hypothesis could not be tested because of methodological limitations. Here, we provide evidence that subcutaneous fat stores determine stopover duration by measuring the length of stay of migratory garden warblers (Sylvia borin) on a small Mediterranean island during spring migration with telemetry methods. Garden warblers with large amounts of fat stores departed the island significantly sooner than lean birds. All except one fat bird left the island on the same evening after capture, with a mean total stopover estimate of 8.8 hours. In contrast, the mean estimated total stopover duration of lean birds was 41.3 hours. To our knowledge, this is the first study that measures the true minimum stopover duration of a songbird during migration. PMID:20164077

  5. Prevalence of autosomal dominant polycystic kidney disease in the European Union.

    PubMed

    Willey, Cynthia J; Blais, Jaime D; Hall, Anthony K; Krasa, Holly B; Makin, Andrew J; Czerwiec, Frank S

    2017-08-01

    Autosomal dominant polycystic kidney disease (ADPKD) is a leading cause of end-stage renal disease, but estimates of its prevalence vary by >10-fold. The objective of this study was to examine the public health impact of ADPKD in the European Union (EU) by estimating minimum prevalence (point prevalence of known cases) and screening prevalence (minimum prevalence plus cases expected after population-based screening). A review of the epidemiology literature from January 1980 to February 2015 identified population-based studies that met criteria for methodological quality. These examined large German and British populations, providing direct estimates of minimum prevalence and screening prevalence. In a second approach, patients from the 2012 European Renal Association‒European Dialysis and Transplant Association (ERA-EDTA) Registry and literature-based inflation factors that adjust for disease severity and screening yield were used to estimate prevalence across 19 EU countries (N = 407 million). Population-based studies yielded minimum prevalences of 2.41 and 3.89/10 000, respectively, and corresponding estimates of screening prevalences of 3.3 and 4.6/10 000. A close correspondence existed between estimates in countries where both direct and registry-derived methods were compared, which supports the validity of the registry-based approach. Using the registry-derived method, the minimum prevalence was 3.29/10 000 (95% confidence interval 3.27-3.30), and if ADPKD screening was implemented in all countries, the expected prevalence was 3.96/10 000 (3.94-3.98). ERA-EDTA-based prevalence estimates and application of a uniform definition of prevalence to population-based studies consistently indicate that the ADPKD point prevalence is <5/10 000, the threshold for rare disease in the EU. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA.

  6. Proof of concept and dose estimation with binary responses under model uncertainty.

    PubMed

    Klingenberg, B

    2009-01-30

    This article suggests a unified framework for testing Proof of Concept (PoC) and estimating a target dose for the benefit of a more comprehensive, robust and powerful analysis in phase II or similar clinical trials. From a pre-specified set of candidate models, we choose the ones that best describe the observed dose-response. To decide which models, if any, significantly pick up a dose effect, we construct the permutation distribution of the minimum P-value over the candidate set. This allows us to find critical values and multiplicity adjusted P-values that control the familywise error rate of declaring any spurious effect in the candidate set as significant. Model averaging is then used to estimate a target dose. Popular single or multiple contrast tests for PoC, such as the Cochran-Armitage, Dunnett or Williams tests, are only optimal for specific dose-response shapes and do not provide target dose estimates with confidence limits. A thorough evaluation and comparison of our approach to these tests reveal that its power is as good or better in detecting a dose-response under various shapes with many more additional benefits: It incorporates model uncertainty in PoC decisions and target dose estimation, yields confidence intervals for target dose estimates and extends to more complicated data structures. We illustrate our method with the analysis of a Phase II clinical trial. Copyright (c) 2008 John Wiley & Sons, Ltd.
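
    A hedged sketch of the permutation step described above: responses are permuted to break any dose effect, each candidate model is refitted, and the alpha-quantile of the resulting minimum P-values gives the familywise-error-controlling critical value. The `p_value_fn` interface is a placeholder for the candidate-model fits, not the paper's API.

    ```python
    import numpy as np

    def min_p_critical_value(p_value_fn, doses, responses,
                             n_perm=10000, alpha=0.05, seed=0):
        """Permutation distribution of the minimum P-value over candidate models.

        p_value_fn : function (doses, responses) -> array of P-values,
                     one per candidate dose-response model fitted to the data
        """
        rng = np.random.default_rng(seed)
        min_ps = np.empty(n_perm)
        for b in range(n_perm):
            perm = rng.permutation(responses)            # break any dose effect
            min_ps[b] = np.min(p_value_fn(doses, perm))  # min P over candidates
        # The alpha-quantile of the permuted minimum P-values controls the FWER
        return np.quantile(min_ps, alpha)
    ```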

  7. An experimental method for the assessment of color simulation tools.

    PubMed

    Lillo, Julio; Alvaro, Leticia; Moreira, Humberto

    2014-07-22

    The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h_uv values) that generate a minimum response in the yellow–blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L_R values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h_uv and L_R values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h_uv and L_R values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided the expected h_uv and L_R values when performing the two psychophysical tasks included in this method. © 2014 ARVO.

  8. Minimum viable populations: Is there a 'magic number' for conservation practitioners?

    Treesearch

    Curtis H. Flather; Gregory D. Hayward; Steven R. Beissinger; Philip A. Stephens

    2011-01-01

    Establishing species conservation priorities and recovery goals is often enhanced by extinction risk estimates. The need to set goals, even in data-deficient situations, has prompted researchers to ask whether general guidelines could replace individual estimates of extinction risk. To inform conservation policy, recent studies have revived the concept of the minimum...

  9. Minimum Wage Effects on Educational Enrollments in New Zealand

    ERIC Educational Resources Information Center

    Pacheco, Gail A.; Cruickshank, Amy A.

    2007-01-01

    This paper empirically examines the impact of minimum wages on educational enrollments in New Zealand. A significant reform to the youth minimum wage since 2000 has resulted in some age groups undergoing a 91% rise in their real minimum wage over the last 10 years. Three panel least squares multivariate models are estimated from a national sample…

  10. Employment Effects of Minimum and Subminimum Wages. Recent Evidence.

    ERIC Educational Resources Information Center

    Neumark, David

    Using a specially constructed panel data set on state minimum wage laws and labor market conditions, Neumark and Wascher (1992) presented evidence that countered the claim that minimum wages could be raised with no cost to employment. They concluded that estimates indicating that minimum wages reduced employment on the order of 1-2 percent for a…

  11. Does the Minimum Wage Affect Welfare Caseloads?

    ERIC Educational Resources Information Center

    Page, Marianne E.; Spetz, Joanne; Millar, Jane

    2005-01-01

    Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…

  12. Minimum area requirements for an at-risk butterfly based on movement and demography.

    PubMed

    Brown, Leone M; Crone, Elizabeth E

    2016-02-01

    Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
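
    For orientation, the classical one-dimensional critical patch size from reaction-diffusion theory (the KISS model, with absorbing edges) is sketched below; the paper's CMP model for a two-dimensional patch with realistic edge behaviour will differ in its constants.

    ```python
    import numpy as np

    def kiss_critical_patch_size(diffusion_rate, growth_rate):
        """Classical 1-D critical patch size L* = pi * sqrt(D / r).

        Below L*, loss of individuals across the patch edge outpaces
        population growth inside the patch.
        """
        return np.pi * np.sqrt(diffusion_rate / growth_rate)
    ```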

  13. Readability of alphanumeric characters having various contrast levels as a function of age and illumination mode.

    DOT National Transportation Integrated Search

    1977-07-01

    Readability data of alphanumeric characters that vary in figure-to-ground contrast ratio were obtained from 36 subjects; 12 subjects were placed in each of three age groups (20-25 yr, 40-45 yr, and 60-65 yr). Minimum illuminance required to identify...

  14. Effects of phonological contrast on auditory word discrimination in children with and without reading disability: A magnetoencephalography (MEG) study

    PubMed Central

    Wehner, Daniel T.; Ahlfors, Seppo P.; Mody, Maria

    2007-01-01

    Poor readers perform worse than their normal reading peers on a variety of speech perception tasks, which may be linked to their phonological processing abilities. The purpose of the study was to compare the brain activation patterns of normal and impaired readers on speech perception to better understand the phonological basis in reading disability. Whole-head magnetoencephalography (MEG) was recorded as good and poor readers, 7-13 years of age, performed an auditory word discrimination task. We used an auditory oddball paradigm in which the ‘deviant’ stimuli (/bat/, /kat/, /rat/) differed in the degree of phonological contrast (1 vs. 3 features) from a repeated standard word (/pat/). Both good and poor readers responded more slowly to deviants that were phonologically similar compared to deviants that were phonologically dissimilar to the standard word. Source analysis of the MEG data using Minimum Norm Estimation (MNE) showed that compared to good readers, poor readers had reduced left-hemisphere activation to the most demanding phonological condition reflecting their difficulties with phonological processing. Furthermore, unlike good readers, poor readers did not show differences in activation as a function of the degree of phonological contrast. These results are consistent with a phonological account of reading disability. PMID:17675109
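
    The L2 minimum-norm inverse operator underlying MNE can be sketched as follows; the identity source covariance and the generic regularization parameter are assumptions of this sketch, not details reported in the study.

    ```python
    import numpy as np

    def minimum_norm_operator(gain, noise_cov, lam=1.0 / 9.0):
        """L2 minimum-norm inverse operator for MEG source estimation.

        gain      : (K, S) forward (lead-field) matrix, K sensors, S sources
        noise_cov : (K, K) sensor noise covariance
        lam       : regularization; 1/SNR^2 with SNR = 3 is a common heuristic
        Source estimates are then s_hat = W @ measurements.
        """
        G = np.asarray(gain)
        return G.T @ np.linalg.inv(G @ G.T + lam * noise_cov)
    ```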

  15. Investigating the transverse optical structure of spider silk micro-fibers using quantitative optical microscopy

    NASA Astrophysics Data System (ADS)

    Little, Douglas J.; Kane, Deb M.

    2017-01-01

    The transverse optical structure of two orb-weaver (family Araneidae) spider dragline silks was investigated using a variant of the inverse-scattering technique. Immersing the silks in a closely refractive-index-matched liquid, the minimum achievable image contrast was greater than expected for an optically homogeneous silk, given what is currently known about the optical absorption of these silks. This "excess contrast" indicated the presence of transverse optical structure within the spider silk. Applying electromagnetic scattering theory to a transparent double cylinder, the minimum achievable irradiance contrast for the Plebs eburnus and Argiope keyserlingi dragline silks was determined to be consistent with step-index refractive index contrasts of 1–4 × 10⁻⁴ and 6–7 × 10⁻⁴, respectively, supposing outer-layer thicknesses consistent with previous TEM studies (50 nm and 100 nm, respectively). The possibility of graded-index refractive index contrasts within the spider silks is also discussed. This is the strongest evidence, to date, that there is a refractive index contrast associated with the layered morphology of spider silks and/or variation of the proportion of nanocrystalline components within the spider silk structure. The method is more generally applicable to optical micro-fibers, including those with refractive index variations on a sub-wavelength scale.

  16. Objective assessment of chromatic and achromatic pattern adaptation reveals the temporal response properties of different visual pathways.

    PubMed

    Robson, Anthony G; Kulikowski, Janus J

    2012-11-01

    The aim was to investigate the temporal response properties of magnocellular, parvocellular, and koniocellular visual pathways using increment/decrement changes in contrast to elicit visual evoked potentials (VEPs). Static achromatic and isoluminant chromatic gratings were generated on a monitor. Chromatic gratings were modulated along red/green (R/G) or subject-specific tritanopic confusion axes, established using a minimum distinct border criterion. Isoluminance was determined using minimum flicker photometry. Achromatic and chromatic VEPs were recorded to contrast increments and decrements of 0.1 or 0.2 superimposed on the static gratings (masking contrast 0-0.6). Achromatic increment/decrement changes in contrast evoked a percept of apparent motion when the spatial frequency was low; VEPs to such stimuli were positive in polarity and largely unaffected by high levels of static contrast, consistent with transient response mechanisms. VEPs to finer achromatic gratings showed marked attenuation as static contrast was increased. Chromatic VEPs to R/G or tritan chromatic contrast increments were of negative polarity and showed progressive attenuation as static contrast was increased, in keeping with increasing desensitization of the sustained responses of the color-opponent visual pathways. Chromatic contrast decrement VEPs were of positive polarity and less sensitive to pattern adaptation. The relative contribution of sustained/transient mechanisms to achromatic processing is spatial frequency dependent. Chromatic contrast increment VEPs reflect the sustained temporal response properties of parvocellular and koniocellular pathways. Cortical VEPs can provide an objective measure of pattern adaptation and can be used to probe the temporal response characteristics of different visual pathways.

  17. Prestack depth migration for complex 2D structure using phase-screen propagators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, P.; Huang, Lian-Jie; Burch, C.

    1997-11-01

    We present results for the phase-screen propagator method applied to prestack depth migration of the Marmousi synthetic data set. The data were migrated as individual common-shot records and the resulting partial images were superposed to obtain the final complete image. Tests were performed to determine the minimum number of frequency components required to achieve the best quality image, and this in turn provided estimates of the minimum computing time. Running on a single-processor SUN SPARC Ultra I, high-quality images were obtained in as little as 8.7 CPU hours and adequate images in as little as 4.4 CPU hours. Different methods were tested for choosing the reference velocity used for the background phase-shift operation and for defining the slowness perturbation screens. Although the depths of some of the steeply dipping, high-contrast features were shifted slightly, the overall image quality was fairly insensitive to the choice of the reference velocity. Our tests show the phase-screen method to be a reliable and fast algorithm for imaging complex geologic structures, at least for complex 2D synthetic data where the velocity model is known.
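
    As an illustration of the extrapolation step underlying such migration, here is a minimal numpy sketch of one split-step phase-screen depth slab for a single frequency component; the 1-D geometry and all names are ours, not the authors' code.

      # One split-step phase-screen extrapolation step for a single angular
      # frequency omega; 1-D (x) toy geometry, names are illustrative.
      import numpy as np

      def phase_screen_step(field, omega, c0, c_slab, dx, dz):
          """Extrapolate a wavefield down one depth slab of thickness dz."""
          kx = 2.0 * np.pi * np.fft.fftfreq(field.size, d=dx)

          # 1) Background phase shift in the wavenumber domain, using the
          #    constant reference velocity c0 (evanescent energy dropped).
          kz = np.sqrt(np.maximum((omega / c0) ** 2 - kx ** 2, 0.0))
          field = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))

          # 2) Thin "screen" correction in space for the slowness
          #    perturbation of this slab relative to the reference.
          return field * np.exp(1j * omega * (1.0 / c_slab - 1.0 / c0) * dz)

    A full migration loops this step over shots, frequencies and depth slabs, and applies an imaging condition to form the partial images that are then superposed.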

  18. Supercontinent cycles and the calculation of absolute palaeolongitude in deep time.

    PubMed

    Mitchell, Ross N; Kilian, Taylor M; Evans, David A D

    2012-02-08

    Traditional models of the supercontinent cycle predict that the next supercontinent--'Amasia'--will form either where Pangaea rifted (the 'introversion' model) or on the opposite side of the world (the 'extroversion' models). Here, by contrast, we develop an 'orthoversion' model whereby a succeeding supercontinent forms 90° away, within the great circle of subduction encircling its relict predecessor. A supercontinent aggregates over a mantle downwelling but then influences global-scale mantle convection to create an upwelling under the landmass. We calculate the minimum moment of inertia about which oscillatory true polar wander occurs owing to the prolate shape of the non-hydrostatic Earth. By fitting great circles to each supercontinent's true polar wander legacy, we determine that the arc distances between successive supercontinent centres (the axes of the respective minimum moments of inertia) are 88° for Nuna to Rodinia and 87° for Rodinia to Pangaea--as predicted by the orthoversion model. Supercontinent centres can be located back into Precambrian time, providing fixed points for the calculation of absolute palaeolongitude over billion-year timescales. Palaeogeographic reconstructions additionally constrained in palaeolongitude will provide increasingly accurate estimates of ancient plate motions and palaeobiogeographic affinities.

  19. Robust Means and Covariance Matrices by the Minimum Volume Ellipsoid (MVE).

    ERIC Educational Resources Information Center

    Blankmeyer, Eric

    P. Rousseeuw and A. Leroy (1987) proposed a very robust alternative to classical estimates of mean vectors and covariance matrices, the Minimum Volume Ellipsoid (MVE). This paper describes the MVE technique and presents a BASIC program to implement it. The MVE is a "high breakdown" estimator, one that can cope with samples in which as…
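
    The paper distributes a BASIC program; purely for illustration, a compact resampling version of the MVE in Python (numpy only) might look like this.

      # Resampling sketch of the Minimum Volume Ellipsoid (numpy only);
      # the paper itself presents a BASIC implementation.
      import numpy as np

      def mve(X, n_trials=500, seed=0):
          n, p = X.shape
          h = (n + p + 1) // 2                    # points the ellipsoid must cover
          rng = np.random.default_rng(seed)
          best = (np.inf, None, None)
          for _ in range(n_trials):
              idx = rng.choice(n, size=p + 1, replace=False)
              m = X[idx].mean(axis=0)
              C = np.cov(X[idx], rowvar=False)
              if np.linalg.det(C) <= 0:
                  continue                        # degenerate subsample
              d2 = np.einsum("ij,jk,ik->i", X - m, np.linalg.inv(C), X - m)
              scale = np.sort(d2)[h - 1]          # inflate ellipsoid to cover h points
              volume = np.sqrt(np.linalg.det(C)) * scale ** (p / 2)
              if volume < best[0]:
                  best = (volume, m, C * scale)
          return best[1], best[2]                 # robust location and scatter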

  20. Image Data Compression Having Minimum Perceptual Error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1997-01-01

    A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
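
    As a rough illustration of DCT quantization (using the standard JPEG luminance table rather than the patent's perceptually optimized matrix):

      # DCT quantization sketch; Q here is the standard JPEG luminance
      # table, standing in for the patent's perceptually optimized matrix.
      import numpy as np
      from scipy.fft import dctn, idctn

      Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
                    [12, 12, 14, 19, 26, 58, 60, 55],
                    [14, 13, 16, 24, 40, 57, 69, 56],
                    [14, 17, 22, 29, 51, 87, 80, 62],
                    [18, 22, 37, 56, 68, 109, 103, 77],
                    [24, 35, 55, 64, 81, 104, 113, 92],
                    [49, 64, 78, 87, 103, 121, 120, 101],
                    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

      def quantize_block(block):
          """block: 8x8 pixel tile with values in 0..255."""
          coeffs = dctn(block - 128.0, norm="ortho")  # forward 2-D DCT
          return np.round(coeffs / Q)                 # coarser Q -> fewer bits

      def dequantize_block(quantized):
          return idctn(quantized * Q, norm="ortho") + 128.0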

  1. Method to analyze remotely sensed spectral data

    DOEpatents

    Stork, Christopher L [Albuquerque, NM]; Van Benthem, Mark H [Middletown, DE]

    2009-02-17

    A fast and rigorous multivariate curve resolution (MCR) algorithm is applied to remotely sensed spectral data. The algorithm is applicable in the solar-reflective spectral region, comprising the visible to the shortwave infrared (ranging from approximately 0.4 to 2.5 µm), midwave infrared, and thermal emission spectral region, comprising the thermal infrared (ranging from approximately 8 to 15 µm). For example, employing minimal a priori knowledge, notably non-negativity constraints on the extracted endmember profiles and a constant abundance constraint for the atmospheric upwelling component, MCR can be used to successfully compensate thermal infrared hyperspectral images for atmospheric upwelling and, thereby, transmittance effects. Further, MCR can accurately estimate the relative spectral absorption coefficients and thermal contrast distribution of a gas plume component near the minimum detectable quantity.
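
    The core of such an MCR algorithm is an alternating-least-squares loop with non-negativity; a schematic numpy version (with clipping standing in for true non-negative least squares, and matrix names ours) follows.

      # Schematic MCR-ALS core for D ~ C @ S.T (pixels x bands); clipping
      # stands in for proper non-negative least squares.
      import numpy as np

      def mcr_als(D, S0, n_iter=50):
          S = S0.copy()                                         # initial spectra (bands x k)
          for _ in range(n_iter):
              C = np.clip(D @ np.linalg.pinv(S.T), 0.0, None)   # abundances >= 0
              S = np.clip((np.linalg.pinv(C) @ D).T, 0.0, None) # spectra >= 0
              # A constant-abundance constraint on an atmospheric upwelling
              # component would be imposed here, e.g. C[:, 0] = C[:, 0].mean().
          return C, S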

  2. Pollen-based temperature and precipitation inferences for the montane forest of Mt. Kilimanjaro during the last Glacial and the Holocene

    NASA Astrophysics Data System (ADS)

    Schüler, L.; Hemp, A.; Behling, H.

    2014-01-01

    The relationship between modern pollen-rain taxa and measured climate variables was explored along the elevational gradient of the southern slope of Mt. Kilimanjaro, Tanzania. Pollen assemblages in 28 pollen traps positioned on 14 montane forest vegetation plots were identified and their relationship with climate variables was examined using multivariate statistical methods. Canonical correspondence analysis revealed that the mean annual temperature, mean annual precipitation and minimum temperature each account for significant fractions of the variation in pollen taxa. A training set of 107 modern pollen taxa was used to derive temperature and precipitation transfer functions based on pollen subsets using weighted-averaging-partial-least-squares (WA-PLS) techniques. The transfer functions were then applied to a fossil pollen record from the montane forest of Mt. Kilimanjaro, and climate parameter estimates for the Late Glacial and the Holocene on Mt. Kilimanjaro were inferred. Our results present the first quantitatively reconstructed temperature and precipitation estimates for Mt. Kilimanjaro and give valuable insights into the past 45 000 yr of climate dynamics in tropical East Africa. The climate reconstructions are consistent with the interpretation of pollen data in terms of the vegetation and climate history of afro-montane forest in East Africa. Minimum temperatures above the frostline as well as increased precipitation turn out to be crucial for the development and expansion of montane forest during the Holocene. In contrast, consistently low minimum temperatures as well as about 25% drier climate conditions prevailed during the pre-LGM, which kept the montane vegetation composition in a stable state. In future studies, the quantitative climate reconstruction will be improved by additional modern pollen-rain data, especially from lower elevations with submontane dry forests and colline savanna vegetation, in order to extend the reference climate gradient.
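
    For intuition, plain weighted averaging, the simpler relative of the WA-PLS used in the study, can be sketched in a few lines of numpy; real applications add deshrinking and cross-validation.

      # Plain weighted-averaging transfer function (no deshrinking or
      # cross-validation); Y is a site x taxon abundance matrix, t the
      # observed climate variable at each site.
      import numpy as np

      def wa_fit(Y, t):
          # Taxon optima: abundance-weighted means of the climate variable.
          return (Y.T @ t) / Y.sum(axis=0)

      def wa_predict(Y_fossil, optima):
          # Reconstruction: abundance-weighted mean of the taxon optima.
          return (Y_fossil @ optima) / Y_fossil.sum(axis=1)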

  3. Into the Past: A Step Towards a Robust Kimberley Rock Art Chronology

    PubMed Central

    Hayward, John

    2016-01-01

    The recent establishment of a minimum age estimate of 39.9 ka for the origin of rock art in Sulawesi has challenged claims that Western Europe was the locus for the production of the world’s earliest art assemblages. Tantalising excavated evidence found across northern Australia suggests that Australia too contains a wealth of ancient art. However, the dating of rock art itself remains the greatest obstacle to be addressed if the significance of Australian assemblages is to be recognised on the world stage. A recent archaeological project in the northwest Kimberley trialled three dating techniques in order to establish chronological markers for the proposed, regional, relative stylistic sequence. Applications using optically-stimulated luminescence (OSL) provided nine minimum age estimates for fossilised mudwasp nests overlying a range of rock art styles, while Accelerator Mass Spectrometry radiocarbon (AMS 14C) results provided an additional four. Results confirm that at least one phase of the northwest Kimberley rock art assemblage is Pleistocene in origin. A complete motif located on the ceiling of a rockshelter returned a minimum age estimate of 16 ± 1 ka. Further, our results demonstrate the inherent problems in relying solely on stylistic classifications to order rock art assemblages into temporal sequences. An earlier than expected minimum age estimate for one style and a maximum age estimate for another together illustrate that the Holocene Kimberley rock art sequence is likely to be far more complex than generally accepted, with different styles produced contemporaneously well into the last few millennia. It is evident that reliance on techniques that produce minimum age estimates means that many more dating programs will need to be undertaken before the stylistic sequence can be securely dated. PMID:27579865

  4. Minimum target prices for production of direct-acting antivirals and associated diagnostics to combat hepatitis C virus

    PubMed Central

    van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew

    2015-01-01

    Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests plus full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping, US$34 for two HCV antigen tests, and US$22 for two full blood count/clinical chemistry tests. Conclusions: Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that existing large-scale treatment programs can be established. (Hepatology 2015;61:1174–1182) PMID:25482139

  5. Into the Past: A Step Towards a Robust Kimberley Rock Art Chronology.

    PubMed

    Ross, June; Westaway, Kira; Travers, Meg; Morwood, Michael J; Hayward, John

    2016-01-01

    The recent establishment of a minimum age estimate of 39.9 ka for the origin of rock art in Sulawesi has challenged claims that Western Europe was the locus for the production of the world's earliest art assemblages. Tantalising excavated evidence found across northern Australia suggests that Australia too contains a wealth of ancient art. However, the dating of rock art itself remains the greatest obstacle to be addressed if the significance of Australian assemblages is to be recognised on the world stage. A recent archaeological project in the northwest Kimberley trialled three dating techniques in order to establish chronological markers for the proposed, regional, relative stylistic sequence. Applications using optically-stimulated luminescence (OSL) provided nine minimum age estimates for fossilised mudwasp nests overlying a range of rock art styles, while Accelerator Mass Spectrometry radiocarbon (AMS 14C) results provided an additional four. Results confirm that at least one phase of the northwest Kimberley rock art assemblage is Pleistocene in origin. A complete motif located on the ceiling of a rockshelter returned a minimum age estimate of 16 ± 1 ka. Further, our results demonstrate the inherent problems in relying solely on stylistic classifications to order rock art assemblages into temporal sequences. An earlier than expected minimum age estimate for one style and a maximum age estimate for another together illustrate that the Holocene Kimberley rock art sequence is likely to be far more complex than generally accepted, with different styles produced contemporaneously well into the last few millennia. It is evident that reliance on techniques that produce minimum age estimates means that many more dating programs will need to be undertaken before the stylistic sequence can be securely dated.

  6. Minimum target prices for production of direct-acting antivirals and associated diagnostics to combat hepatitis C virus.

    PubMed

    van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew

    2015-04-01

    Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests plus full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping, US$34 for two HCV antigen tests, and US$22 for two full blood count/clinical chemistry tests. Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that existing large-scale treatment programs can be established. © 2014 The Authors. Hepatology published by Wiley Periodicals, Inc., on behalf of the American Association for the Study of Liver Diseases.

  7. The 1868 Hayward fault, California, earthquake: Implications for earthquake scaling relations on partially creeping faults

    USGS Publications Warehouse

    Hough, Susan E.; Martin, Stacey

    2015-01-01

    The 21 October 1868 Hayward, California, earthquake is among the best-characterized historical earthquakes in California. In contrast to many other moderate-to-large historical events, the causative fault is clearly established. Published magnitude estimates have been fairly consistent, ranging from 6.8 to 7.2, with 95% confidence limits including values as low as 6.5. The magnitude is of particular importance for assessment of seismic hazard associated with the Hayward fault and, more generally, to develop appropriate magnitude–rupture length scaling relations for partially creeping faults. The recent reevaluation of archival accounts by Boatwright and Bundock (2008), together with the growing volume of well-calibrated intensity data from the U.S. Geological Survey “Did You Feel It?” (DYFI) system, provide an opportunity to revisit and refine the magnitude estimate. In this study, we estimate the magnitude using two different methods that use DYFI data as calibration. Both approaches yield preferred magnitude estimates of 6.3–6.6, assuming an average stress drop. A consideration of data limitations associated with settlement patterns increases the range to 6.3–6.7, with a preferred estimate of 6.5. Although magnitude estimates for historical earthquakes are inevitably uncertain, we conclude that, at a minimum, a lower-magnitude estimate represents a credible alternative interpretation of available data. We further discuss implications of our results for probabilistic seismic-hazard assessment from partially creeping faults.

  8. An estimate of the number of tropical tree species.

    PubMed

    Slik, J W Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L; Bellingham, Peter J; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L M; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K; Chazdon, Robin L; Clark, Connie; Clark, David B; Clark, Deborah A; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A O; Eisenlohr, Pedro V; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A; Joly, Carlos A; de Jong, Bernardus H J; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F; Lawes, Michael J; Amaral, Ieda Leao do; Letcher, Susan G; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H; Meilby, Henrik; Melo, Felipe P L; Metcalfe, Daniel J; Medjibe, Vincent P; Metzger, Jean Paul; Millet, Jerome; Mohandass, D; Montero, Juan C; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T F; Pitman, Nigel C A; Poorter, Lourens; Poulsen, Axel D; Poulsen, John; Powers, Jennifer; Prasad, Rama C; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; Dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A; Santos, Fernanda; Sarker, Swapan K; Satdichanh, Manichanh; Schmitt, Christine B; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I-Fang; Sunderland, Terry; Suresh, H S; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L C H; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A; Webb, Campbell O; Whitfeld, Timothy; Wich, Serge A; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C Yves; Yap, Sandra L; Yoneda, Tsuyoshi; Zahawi, Rakan A; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L; Garcia Luize, Bruno; Venticinque, Eduardo M

    2015-06-16

    The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher's alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼ 40,000 and ∼ 53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼ 19,000-25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼ 4,500-6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa.
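
    The estimation pivots on Fisher's log-series relation S = α ln(1 + N/α); a short scipy sketch of fitting α from the inventory totals quoted above and extrapolating richness follows (the pantropical stem total below is a made-up placeholder, not the paper's value).

      # Fit Fisher's alpha from S = alpha * ln(1 + N / alpha), then
      # extrapolate richness to a larger stem total.
      import numpy as np
      from scipy.optimize import brentq

      def fishers_alpha(S, N):
          return brentq(lambda a: a * np.log1p(N / a) - S, 1e-3, 1e8)

      alpha = fishers_alpha(S=11_371, N=657_630)  # inventory totals from the paper
      N_total = 3.0e11                            # hypothetical pantropical stems
      S_est = alpha * np.log1p(N_total / alpha)   # extrapolated species count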

  9. An Observationally Constrained Evaluation of the Oxidative Capacity in the Tropical Western Pacific Troposphere

    NASA Technical Reports Server (NTRS)

    Nicely, Julie M.; Anderson, Daniel C.; Canty, Timothy P.; Salawitch, Ross J.; Wolfe, Glenn M.; Apel, Eric C.; Arnold, Steve R.; Atlas, Elliot L.; Blake, Nicola J.; Bresch, James F.

    2016-01-01

    Hydroxyl radical (OH) is the main daytime oxidant in the troposphere and determines the atmospheric lifetimes of many compounds. We use aircraft measurements of O3, H2O, NO, and other species from the Convective Transport of Active Species in the Tropics (CONTRAST) field campaign, which occurred in the tropical western Pacific (TWP) during January-February 2014, to constrain a photochemical box model and estimate concentrations of OH throughout the troposphere. We find that tropospheric column OH (OHCOL) inferred from CONTRAST observations is 12 to 40% higher than found in chemical transport models (CTMs), including CAM-chem-SD run with 2014 meteorology as well as eight models that participated in POLMIP (2008 meteorology). Part of this discrepancy is due to a clear-sky sampling bias that affects CONTRAST observations; accounting for this bias and also for a small difference in chemical mechanism results in our empirically based value of OHCOL being 0 to 20% larger than found within global models. While these global models simulate observed O3 reasonably well, they underestimate NOx (NO + NO2) by a factor of 2, resulting in OHCOL approx. 30% lower than box model simulations constrained by observed NO. Underestimations by CTMs of observed CH3CHO throughout the troposphere and of HCHO in the upper troposphere further contribute to differences between our constrained estimates of OH and those calculated by CTMs. Finally, our calculations do not support the prior suggestion of the existence of a tropospheric OH minimum in the TWP, because during January-February 2014 observed levels of O3 and NO were considerably larger than previously reported values in the TWP.

  10. External validation of EPIWIN biodegradation models.

    PubMed

    Posthumus, R; Traas, T P; Peijnenburg, W J G M; Hulzebos, E M

    2005-01-01

    The BIOWIN biodegradation models were evaluated for their suitability for regulatory purposes. BIOWIN includes the linear and non-linear BIODEG and MITI models for estimating the probability of rapid aerobic biodegradation, and an expert survey model for primary and ultimate biodegradation estimation. Experimental biodegradation data for 110 newly notified substances were compared with the estimations of the different models. The models were applied separately and in combinations to determine which model(s) showed the best performance. The results of this study were compared with the results of other validation studies and other biodegradation models. The BIOWIN models predict not-readily biodegradable substances with high accuracy, but are considerably less accurate for readily biodegradable ones. In view of the high environmental concern over persistent chemicals, and given the large number of not-readily biodegradable chemicals relative to readily biodegradable ones, a model is preferred that gives a minimum of false positives without a correspondingly high percentage of false negatives. A combination of the BIOWIN models (BIOWIN2 or BIOWIN6) showed the highest predictive value for not-readily biodegradable substances. However, the highest overall predictivity with the lowest percentage of false predictions was achieved by applying BIOWIN3 (pass level 2.75) and BIOWIN6.

  11. Reconstructing cortical current density by exploring sparseness in the transform domain

    NASA Astrophysics Data System (ADS)

    Ding, Lei

    2009-05-01

    In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
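
    A toy convex-programming analogue of this idea, minimizing the L1 norm of a first-difference (variation) map subject to a data-fit constraint, can be written with cvxpy; the leadfield and data here are random stand-ins, not an EEG model.

      # Toy convex program in the SCCD spirit: sparse first differences
      # (piecewise-constant sources) subject to a data-fit constraint.
      import cvxpy as cp
      import numpy as np

      rng = np.random.default_rng(0)
      n_sensors, n_sources = 32, 200
      L = rng.standard_normal((n_sensors, n_sources))  # stand-in leadfield
      b = rng.standard_normal(n_sensors)               # stand-in measurements

      # First-difference operator: sparse D @ s <=> few activation boundaries.
      D = np.eye(n_sources, k=1)[:-1] - np.eye(n_sources)[:-1]

      s = cp.Variable(n_sources)
      problem = cp.Problem(cp.Minimize(cp.norm(D @ s, 1)),
                           [cp.norm(L @ s - b, 2) <= 0.1])
      problem.solve()  # s.value approximates a piecewise-constant source map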

  12. Building versatile bipartite probes for quantum metrology

    NASA Astrophysics Data System (ADS)

    Farace, Alessandro; De Pasquale, Antonella; Adesso, Gerardo; Giovannetti, Vittorio

    2016-01-01

    We consider bipartite systems as versatile probes for the estimation of transformations acting locally on one of the subsystems. We investigate what resources are required for the probes to offer a guaranteed level of metrological performance, when the latter is averaged over specific sets of local transformations. We quantify such a performance via the average skew information (AvSk), a convex quantity which we compute in closed form for bipartite states of arbitrary dimensions, and which is shown to be strongly dependent on the degree of local purity of the probes. Our analysis contrasts and complements the recent series of studies focused on the minimum, rather than the average, performance of bipartite probes in local estimation tasks, which was instead determined by quantum correlations other than entanglement. We provide explicit prescriptions to characterize the most reliable states maximizing the AvSk, and elucidate the role of state purity, separability and correlations in the classification of optimal probes. Our results can help in the identification of useful resources for sensing, estimation and discrimination applications when complete knowledge of the interaction mechanism realizing the local transformation is unavailable, and access to pure entangled probes is technologically limited.

  13. Extensive computation of albedo contrast between martian dust devil tracks and their neighboring regions

    NASA Astrophysics Data System (ADS)

    Statella, Thiago; Pina, Pedro; da Silva, Erivaldo Antônio

    2015-04-01

    We have developed a method to compute the albedo contrast between dust devil tracks and their surrounding regions on Mars. It is mainly based on Mathematical Morphology operators and uses all the points of the edges of the tracks to compute the values of the albedo contrast. It permits the extraction of more accurate and complete information, when compared to traditional point sampling, not only providing better statistics but also permitting the analysis of local variations along the entirety of the tracks. This measure of contrast, based on relative quantities, is much better suited to comparisons at regional scales and on a multi-temporal basis using imagery acquired under rather different environmental and operational conditions. Also, the substantial increase in the detail extracted may permit quantifying differential deposition of dust by computing the local temporal fading of the tracks, with consequences for a better estimation of the thickness of the topmost layer of dust and the minimum value needed to create dust devil tracks. The developed tool is tested on 110 HiRISE images depicting regions in the Aeolis, Argyre, Eridania, Noachis and Hellas quadrangles. As a complementary evaluation, we also performed a temporal analysis of the albedo in a region of Russell crater, where high seasonal dust devil activity had already been observed, comprising the years 2007-2012. The mean albedo of the Russell crater is in this case indicative of dust devil track presence and, therefore, can be used to quantify dust devil activity.
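
    A simplified scikit-image sketch of an edge-band albedo-contrast measure in this spirit (not the authors' Mathematical Morphology pipeline; the band width and contrast definition are our choices):

      # Edge-band contrast sketch: compare mean albedo just inside a
      # binary track mask with mean albedo just outside it.
      import numpy as np
      from skimage.morphology import binary_dilation, binary_erosion, disk

      def track_contrast(image, track_mask, width=3):
          selem = disk(width)
          inner = track_mask & ~binary_erosion(track_mask, selem)   # band inside edge
          outer = binary_dilation(track_mask, selem) & ~track_mask  # band outside edge
          return (image[outer].mean() - image[inner].mean()) / image[outer].mean()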

  14. Prevention of contrast-induced acute kidney injury by theophylline in elderly patients with chronic kidney disease.

    PubMed

    Matejka, Jan; Varvarovsky, Ivo; Vojtisek, Petr; Herman, Ales; Rozsival, Vladimir; Borkova, Veronika; Kvasnicka, Jiri

    2010-11-01

    Although the optimal strategy for preventing contrast-induced acute kidney injury (CI-AKI) has not yet been established, the current strategy focuses on adequate periprocedural hydration, the use of a low amount of low- or iso-osmolar contrast medium, and the application of adjunctive therapies, including hemofiltration, hemodialysis and drugs. Previous trials and meta-analyses concerning the use of the adenosine antagonist theophylline have revealed contradictory results. We sought to evaluate the effect of theophylline in CI-AKI prevention in well-hydrated elderly patients with chronic kidney disease. We therefore conducted a randomized, double-blind, placebo-controlled trial involving 56 patients who had been referred for cardiac coronary angiography and/or angioplasty. 31 of these patients were randomly assigned to 200 mg theophylline IV before the procedure, and 25 to a placebo. The iso-osmolar contrast medium iodixanol was used. The primary endpoint was an increase in serum creatinine at study termination 48 h after contrast medium administration. Baseline characteristics in the placebo and theophylline groups were similar in terms of median age (75 years), estimated glomerular filtration rate (33 ± 10 vs. 33 ± 10 ml/min/1.73 m²; p = 0.87), diabetes mellitus (80 vs. 71%; p = 0.54), and amount of contrast used (94 ± 35 vs. 95 ± 38 ml; p = 0.89). There was no difference in serum creatinine at baseline (2.06 ± 0.59 vs. 2.02 ± 0.45 mg/dl; p = 0.62) or study termination (2.06 ± 0.68 vs. 2.10 ± 0.53; p = 0.79). A prophylactic effect of theophylline was not observed. The incidence of renal impairment following exposure to the contrast medium was low. This fact can be attributed to adequate parenteral hydration and the use of the minimum amount of contrast medium necessary.

  15. Balancing Score Adjusted Targeted Minimum Loss-based Estimation

    PubMed Central

    Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.

    2015-01-01

    Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539
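
    For orientation, here is a heavily simplified TMLE for the treated mean E[Y(1)] with binary Y, using the standard clever covariate H = A/g(W) rather than the paper's balancing-score-adjusted variant; model choices and clipping are ours.

      # Simplified TMLE sketch for E[Y(1)], binary treatment A and outcome Y.
      import numpy as np
      import statsmodels.api as sm
      from sklearn.linear_model import LogisticRegression

      def tmle_treated_mean(W, A, Y):
          n = len(Y)
          clip = lambda p: np.clip(p, 1e-6, 1 - 1e-6)
          # 1) Initial outcome regression Qbar(A, W).
          q = LogisticRegression().fit(np.column_stack([A, W]), Y)
          QA = clip(q.predict_proba(np.column_stack([A, W]))[:, 1])
          Q1 = clip(q.predict_proba(np.column_stack([np.ones(n), W]))[:, 1])
          # 2) Propensity score g(W) = P(A = 1 | W).
          g = clip(LogisticRegression().fit(W, A).predict_proba(W)[:, 1])
          # 3) Logistic fluctuation with offset logit(Qbar) and covariate H.
          H = (A / g).reshape(-1, 1)
          eps = sm.GLM(Y, H, family=sm.families.Binomial(),
                       offset=np.log(QA / (1 - QA))).fit().params[0]
          # 4) Update the counterfactual predictions under A = 1 and average.
          Q1_star = 1.0 / (1.0 + np.exp(-(np.log(Q1 / (1 - Q1)) + eps / g)))
          return Q1_star.mean()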

  16. Validity and reliability of the minimum basic data set in estimating nosocomial acute gastroenteritis caused by rotavirus.

    PubMed

    Redondo-González, Olga

    2015-03-01

    Rotavirus is the principal cause of nosocomial acute gastroenteritis (NAGE) in children under 5 years of age. The objective is to evaluate the validity and reliability of the minimum basic data set (MBDS) in estimating NAGE caused by rotavirus (NAGER) and to analyze any changes during the three years that the Rotarix® and Rotateq® vaccines were used in Spain. A descriptive, retrospective study was carried out in the University Hospital of Guadalajara (UHG) (Spain) between 2003-2009 using the MBDS, positive microbiological results for rotavirus (PMRs), and medical histories. Three methods of estimation were used: 1) an ICD-9-CM code 008.61 in the secondary diagnosis fields (DIAG2) of the MBDS; 2) method 1 and/or PMRs with a current or recent hospitalization; and 3) the reference method, or method 2 contrasted with patient medical histories. The validity of methods 1 and 2 was determined - sensitivity, specificity, predictive values and likelihood ratios (LRs) - along with their agreement with method 3 (Kappa coefficient). In addition, the incidence rate ratio between the NAGER rate in 2007-2009 (commercialization period of both vaccines) and that in 2003-2005 (precommercialization period) was calculated. Method 1 identified 65 records with a DIAG2 of 008.61. Method 2 found 62 probable cases, and the reference method, 49 true cases. The sensitivity of the MBDS was 67%, the positive predictive value was 51%, and both the negative LR (LR-) and reliability were moderate (LR- 0.33, Kappa coefficient 0.58). During 2007-2009, NAGER decreased by 5 cases per 10³ hospitalizations and by 9 per 10⁴ days of hospitalization. Method 2 overestimated both the decline in incidence, by 2 per 10³ hospitalizations, and the decreased risk per day of stay, by 10%. The MBDS found no differences between the two three-year periods but, like method 2, showed an excellent level of diagnostic evidence (LR+ 67). The MBDS taken together with microbiological results is more exact, safer and more reliable than the MBDS alone in estimating NAGER, and more useful in ruling it out. Nevertheless, the MBDS alone may be used to estimate and compare such disease in contexts with different prevalences.

  17. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    ERIC Educational Resources Information Center

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
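
    The core MDES computation for an individual random assignment design takes one line; a scipy sketch in Bloom's multiplier form follows (parameters illustrative, simplest degrees-of-freedom case).

      # MDES for individual random assignment, two-tailed test:
      # MDES = (t_{1-alpha/2,df} + t_{power,df}) * sqrt((1-R2)/(P(1-P)n)).
      from scipy.stats import t

      def mdes(n, P=0.5, R2=0.0, alpha=0.05, power=0.80):
          df = n - 2  # subtract additional covariates in practice
          multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
          return multiplier * ((1 - R2) / (P * (1 - P) * n)) ** 0.5

      print(mdes(n=400))  # ~0.28 standard deviations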

  18. Program CONTRAST--A general program for the analysis of several survival or recovery rate estimates

    USGS Publications Warehouse

    Hines, J.E.; Sauer, J.R.

    1989-01-01

    This manual describes the use of program CONTRAST, which implements a generalized procedure for the comparison of several rate estimates. This method can be used to test both simple and composite hypotheses about rate estimates, and we discuss its application to multiple comparisons of survival rate estimates. Several examples of the use of program CONTRAST are presented. Program CONTRAST will run on IBM-compatible computers, and requires estimates of the rates to be tested, along with associated variance and covariance estimates.
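
    The test CONTRAST implements is a chi-square quadratic form in the rate estimates; a numpy/scipy sketch with made-up survival estimates follows.

      # Chi-square contrast test in the spirit of CONTRAST (Sauer &
      # Williams 1989): H0: C @ theta = 0; the numbers are made up.
      import numpy as np
      from scipy.stats import chi2

      def contrast_test(theta, Sigma, C):
          d = C @ theta
          stat = float(d @ np.linalg.solve(C @ Sigma @ C.T, d))
          return stat, chi2.sf(stat, df=C.shape[0])

      theta = np.array([0.52, 0.61, 0.58])       # survival rate estimates
      Sigma = np.diag([0.0030, 0.0040, 0.0035])  # their variances (no covariance)
      C = np.array([[1.0, -1.0, 0.0],            # contrast: "all rates equal"
                    [0.0, 1.0, -1.0]])
      stat, p = contrast_test(theta, Sigma, C)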

  19. Deep Bering Sea Circulation and Variability, 2001-2016, From Argo Data

    NASA Astrophysics Data System (ADS)

    Johnson, Gregory C.; Stabeno, Phyllis J.

    2017-12-01

    The mean structure, seasonal cycle, and interannual variability of temperature and salinity are analyzed in the deep Bering Sea basin using Argo profile data collected from 2001 to 2016. Gyre transports are estimated using geostrophic stream function maps of Argo profile data referenced to a 1,000 dbar nondivergent absolute velocity stream function mapped from Argo parking pressure displacement data. Relatively warm and salty water from the North Pacific enters the basin through the Near Strait and passages between Aleutian Islands to the east. This water then flows in a cyclonic (counterclockwise) direction around the region, cooling (and freshening) along its path. Aleutian North Slope Current transports from 0 to 1,890 dbar are estimated at 3-6 Sverdrups (1 Sv = 10⁶ m³ s⁻¹) eastward, feeding into the northwestward Bering Slope Current with transports of mostly 5-6 Sv. The Kamchatka Current has transports of ∼6 Sv north of Shirshov Ridge, increasing to 14-16 Sv south of the ridge, where it is augmented by westward flow from Near Strait. Temperature exhibits strong interannual variations in the upper ocean, with warm periods in 2004-2005 and 2015-2016, and cold periods around 2009 and 2012. In contrast, upper ocean salinity generally decreases from 2001 to 2016. As a result of this salinity decrease, the density of the subsurface temperature minimum decreased over this time period, despite more interannual variability in the minimum temperature value. The subsurface temperature maximum also exhibits interannual variability, but with values generally warmer than those previously reported for the 1970s and 1980s.

  20. Minimum Wages and the Economic Well-Being of Single Mothers

    ERIC Educational Resources Information Center

    Sabia, Joseph J.

    2008-01-01

    Using pooled cross-sectional data from the 1992 to 2005 March Current Population Survey (CPS), this study examines the relationship between minimum wage increases and the economic well-being of single mothers. Estimation results show that minimum wage increases were ineffective at reducing poverty among single mothers. Most working single mothers…

  1. Contrasts in Sea Ice Deformation and Production in the Arctic Seasonal and Perennial Ice Zones

    NASA Technical Reports Server (NTRS)

    Kwok, K.

    2006-01-01

    Four years (1997-2000) of RADARSAT Geophysical Processor System (RGPS) data are used to contrast the sea ice deformation and production regionally, and in the seasonal (SIZ) and perennial (PIZ) ice zones. Ice production here refers to seasonal ice grown in openings during the winter. Three-day estimates of these quantities are provided within Lagrangian elements initially 10 km on a side. A distinct seasonal cycle is seen in both zones, with these estimates highest in the late fall and at seasonal minima in midwinter. Regional divergence over the winter could be up to 30%. Spatially, the highest deformation is seen in the SIZ north of coastal Alaska. Both ice deformation and production are higher in the SIZ: deformation-related ice production in the SIZ (approx. 0.5 m) is 1.5-2.3 times that of the PIZ (approx. 0.3 m); this is connected to ice strength and thickness. Atmospheric forcing and boundary layer structure contribute only to the seasonal and interannual variability. Seasonal ice growth in ice fractures accounts for approx. 25-40% of the total ice production of the Arctic Ocean. Uncertainties in these estimates are discussed. By itself, this deformation-ice production relationship could be considered a negative feedback when thickness is perturbed. However, the overall effect on ice production in the face of increasing seasonal and thinner/weaker ice coverage could be modified by local destabilization of the water column promoting overturning of warmer water due to increased brine rejection, and by upwelling of the pycnocline associated with the increased occurrence of large shear motion in sea ice. Divergence is shown to be negligibly correlated with cyclonic motion in summer and winter in both ice zones.

  2. Minimum number of measurements for evaluating Bertholletia excelsa.

    PubMed

    Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E

    2017-09-27

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
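
    For the ANOVA route, the repeatability coefficient and the minimum number of measurements for a target accuracy R follow from two textbook formulas; the sketch below uses illustrative mean squares, not the study's values.

      # Repeatability r from one-way ANOVA mean squares, and the minimum
      # number of measurements m0 for a target coefficient of determination R:
      #   r  = (MSG - MSE) / (MSG + (m - 1) * MSE)
      #   m0 = R * (1 - r) / (r * (1 - R))
      def repeatability(ms_genotype, ms_error, m):
          """m = measurements (e.g. years) per genotype."""
          return (ms_genotype - ms_error) / (ms_genotype + (m - 1) * ms_error)

      def min_measurements(r, R=0.85):
          return R * (1 - r) / (r * (1 - R))

      r = repeatability(ms_genotype=12.4, ms_error=2.1, m=5)
      print(min_measurements(r))  # round up to the next whole measurement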

  3. Eigenspace-based minimum variance beamformer combined with Wiener postfilter for medical ultrasound imaging.

    PubMed

    Zeng, Xing; Chen, Cheng; Wang, Yuanyuan

    2012-12-01

    In this paper, a new beamformer which combines the eigenspace-based minimum variance (ESBMV) beamformer with a Wiener postfilter is proposed for medical ultrasound imaging. The primary goal of this work is to further improve medical ultrasound imaging quality on the basis of the ESBMV beamformer. In this method, we optimize the ESBMV weights with a Wiener postfilter. With the optimization of the Wiener postfilter, the output power of the new beamformer becomes closer to the actual signal power at the imaging point than with the ESBMV beamformer. Different from the ordinary Wiener postfilter, the output signal and noise powers needed in calculating the Wiener postfilter are estimated respectively from the orthogonal signal and noise subspaces constructed from the eigenstructure of the sample covariance matrix. We demonstrate the performance of the new beamformer when resolving point scatterers and a cyst phantom using both simulated and experimental data, and compare it with the delay-and-sum (DAS), minimum variance (MV) and ESBMV beamformers. We use the full width at half maximum (FWHM) and the peak side-lobe level (PSL) to quantify imaging resolution and the contrast ratio (CR) to quantify imaging contrast. The FWHM of the new beamformer is only 15%, 50% and 50% of those of the DAS, MV and ESBMV beamformers, while the PSL is 127.2 dB, 115 dB and 60 dB lower. What is more, an improvement of 239.8%, 232.5% and 32.9% in CR using simulated data and an improvement of 814%, 1410.7% and 86.7% in CR using experimental data are achieved compared to the DAS, MV and ESBMV beamformers, respectively. In addition, the effect of sound speed error is investigated by artificially overestimating the speed used in calculating the propagation delay, and the results show that the new beamformer provides better robustness against sound speed errors. Therefore, the proposed beamformer offers better performance than the DAS, MV and ESBMV beamformers, showing its potential in medical ultrasound imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
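
    The eigenspace step itself is compact: compute minimum variance weights, then project them onto the signal subspace of the sample covariance. A toy numpy version (without subarray averaging or the paper's Wiener postfilter; the loading factor is our choice):

      # ESBMV core: MV weights projected onto the signal subspace of the
      # (diagonally loaded) sample covariance R for steering vector a.
      import numpy as np

      def esbmv_weights(R, a, n_signal=1):
          n = len(R)
          Rl = R + 1e-6 * np.trace(R) / n * np.eye(n)  # diagonal loading
          Rinv = np.linalg.inv(Rl)
          w_mv = Rinv @ a / (a.conj() @ Rinv @ a)      # minimum variance weights
          vals, vecs = np.linalg.eigh(Rl)              # ascending eigenvalues
          Es = vecs[:, -n_signal:]                     # signal subspace
          return Es @ (Es.conj().T @ w_mv)             # eigenspace projection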

  4. The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.

    PubMed

    Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar

    2018-03-01

    This study aims to report the minimum test battery needed to screen non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened we aim to identify the most useful test battery when there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were estimated from the Binocular Vision Anomalies and Normative Data (BAND) study, following which cut-off estimates and receiver operating characteristic curves to identify the minimum test battery have been plotted. In the receiver operating characteristic phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with the minimum test battery were found. Receiver operating characteristic analyses revealed that near point of convergence with penlight and red filter (> 7.5 cm), monocular accommodative facility (< 10 cycles per minute), and the difference between near and distance phoria (> 1.25 prism dioptres) were significant factors with cut-off values for best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (two) years with 121 males and 184 females. Using the minimum battery of tests obtained through the receiver operating characteristic analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity). The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near phoria, and monocular accommodative facility yield good sensitivity and specificity for diagnosis of NSBVAs in a community set-up. © 2017 Optometry Australia.
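
    Cut-offs of this kind are typically read off a receiver operating characteristic curve, for example by maximizing Youden's J; a small scikit-learn sketch with hypothetical data (variable names and values are ours) follows.

      # Choosing a screening cut-off from a ROC curve by maximizing
      # Youden's J = sensitivity + specificity - 1; data are hypothetical.
      import numpy as np
      from sklearn.metrics import roc_curve

      rng = np.random.default_rng(0)
      has_nsbva = rng.integers(0, 2, 200)        # reference diagnosis (0/1)
      npc_cm = rng.normal(6 + 4 * has_nsbva, 2)  # e.g. NPC break value in cm

      fpr, tpr, thresholds = roc_curve(has_nsbva, npc_cm)
      best = np.argmax(tpr - fpr)
      print(thresholds[best], tpr[best], 1 - fpr[best])  # cut-off, Se, Sp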

  5. An estimate of the number of tropical tree species

    PubMed Central

    Slik, J. W. Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F.; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L.; Bellingham, Peter J.; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q.; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L. M.; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K.; Chazdon, Robin L.; Clark, Connie; Clark, David B.; Clark, Deborah A.; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S.; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J.; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A. O.; Eisenlohr, Pedro V.; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J.; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T.; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M.; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A.; Joly, Carlos A.; de Jong, Bernardus H. J.; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L.; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F.; Lawes, Michael J.; do Amaral, Ieda Leao; Letcher, Susan G.; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H.; Meilby, Henrik; Melo, Felipe P. L.; Metcalfe, Daniel J.; Medjibe, Vincent P.; Metzger, Jean Paul; Millet, Jerome; Mohandass, D.; Montero, Juan C.; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T. F.; Pitman, Nigel C. A.; Poorter, Lourens; Poulsen, Axel D.; Poulsen, John; Powers, Jennifer; Prasad, Rama C.; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A.; Santos, Fernanda; Sarker, Swapan K.; Satdichanh, Manichanh; Schmitt, Christine B.; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S.; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I.-Fang; Sunderland, Terry; Suresh, H. S.; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W.; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L. C. H.; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A.; Webb, Campbell O.; Whitfeld, Timothy; Wich, Serge A.; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C. Yves; Yap, Sandra L.; Yoneda, Tsuyoshi; Zahawi, Rakan A.; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L.; Garcia Luize, Bruno; Venticinque, Eduardo M.

    2015-01-01

    The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher’s alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼40,000 and ∼53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼19,000–25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼4,500–6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa. PMID:26034279

  6. Nonpharmacological Strategies to Prevent Contrast-Induced Acute Kidney Injury

    PubMed Central

    Eiam-Ong, Somchai

    2014-01-01

    Contrast-induced AKI (CI-AKI) has been one of the leading causes of hospital-acquired AKI and is associated with independent risk for adverse clinical outcomes, including morbidity and mortality. The aim of this review is to provide a brief summary of the studies that focus on nonpharmacological strategies to prevent CI-AKI, including routine identification of at-risk patients, use of appropriate hydration regimens, withdrawal of nephrotoxic drugs, selection of low-osmolar or iso-osmolar contrast media, and use of the minimum possible volume of contrast media. There is no need to schedule dialysis in relation to the injection of contrast media, or injection of contrast media in relation to the dialysis program. Hemodialysis cannot protect the poorly functioning kidney against CI-AKI. PMID:24795882

  7. A physical mechanism for the prediction of the sunspot number during solar cycle 21.

    NASA Technical Reports Server (NTRS)

    Schatten, K. H.; Scherrer, P. H.; Svalgaard, L.; Wilcox, J. M.

    1978-01-01

    On physical grounds it is suggested that the sun's polar field strength near a solar minimum is closely related to the following cycle's solar activity. Four methods of estimating the sun's polar magnetic field strength near solar minimum are employed to provide an estimate of cycle 21's yearly mean sunspot number at solar maximum of 140 plus or minus 20. This estimate is considered to be a first order attempt to predict the cycle's activity using one parameter of physical importance.

  8. A map of the day-night contrast of the extrasolar planet HD 189733b.

    PubMed

    Knutson, Heather A; Charbonneau, David; Allen, Lori E; Fortney, Jonathan J; Agol, Eric; Cowan, Nicolas B; Showman, Adam P; Cooper, Curtis S; Megeath, S Thomas

    2007-05-10

    'Hot Jupiter' extrasolar planets are expected to be tidally locked because they are close (<0.05 astronomical units, where 1 au is the average Sun-Earth distance) to their parent stars, resulting in permanent daysides and nightsides. By observing systems where the planet and star periodically eclipse each other, several groups have been able to estimate the temperatures of the daysides of these planets. A key question is whether the atmosphere is able to transport the energy incident upon the dayside to the nightside, which will determine the temperature at different points on the planet's surface. Here we report observations of HD 189733, the closest of these eclipsing planetary systems, over half an orbital period, from which we can construct a 'map' of the distribution of temperatures. We detected the increase in brightness as the dayside of the planet rotated into view. We estimate a minimum brightness temperature of 973 +/- 33 K and a maximum brightness temperature of 1,212 +/- 11 K at a wavelength of 8 µm, indicating that energy from the irradiated dayside is efficiently redistributed throughout the atmosphere, in contrast to a recent claim for another hot Jupiter. Our data indicate that the peak hemisphere-integrated brightness occurs 16 +/- 6 degrees before opposition, corresponding to a hotspot shifted east of the substellar point. The secondary eclipse (when the planet moves behind the star) occurs 120 +/- 24 s later than predicted, which may indicate a slightly eccentric orbit.

  9. Three-dimensional motion detection of a 20-nm gold nanoparticle using twilight-field digital holography with coherence regulation.

    PubMed

    Goto, Kazufumi; Hayasaki, Yoshio

    2015-07-15

    In the twilight-field method for obtaining interference fringes with high contrast in in-line digital holography, only the intensity of the reference light is regulated to be close to the intensity of the object light, which is the ultra-weak scattered light from a nanoparticle, by using a low-frequency attenuation filter. Coherence of the light also strongly affects the contrast of the interference fringes. High coherence causes a lot of undesired coherent noise, which masks the fringes derived from the nanoparticles. Too-low coherence results in fringes with low contrast and a correspondingly low signal-to-noise ratio. Consequently, proper regulation of the coherence of the light source, in this study the spectral width, improves the minimum detectable size in holographic three-dimensional position measurement of nanoparticles. By using these methods, we were able to measure the position of a gold nanoparticle with a minimum diameter of 20 nm.

  10. Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.

    2013-08-01

    In this paper, we present an extrapolation approach that uses a minimum weighted norm constraint and minimum variance spectrum estimation to improve synthetic aperture radar (SAR) resolution. The minimum variance method (MVM) is a robust, high-resolution spectrum estimator. Based on the theory of SAR imaging, we show that the signal model of SAR imagery is amenable to data extrapolation methods for improving the resolution of the SAR image. The method is used to extrapolate the effective bandwidth in the phase history domain, and on both simulated data and actual measured data it obtains better results than the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method.
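
    For context, the minimum variance (Capon) spectral estimate at the core of MVM can be sketched in a few lines of Python. This is a generic one-dimensional version built on simple snapshot-covariance assumptions, not the authors' SAR phase-history extrapolation pipeline.

      import numpy as np

      def mvm_spectrum(x, order=32, n_freqs=256):
          """Minimum variance (Capon) spectrum: P(f) = 1 / (a^H R^-1 a)."""
          # Sample covariance from overlapping snapshots of length `order`.
          snaps = np.array([x[i:i + order] for i in range(len(x) - order + 1)])
          R = snaps.T @ snaps.conj() / snaps.shape[0]
          Rinv = np.linalg.pinv(R)
          freqs = np.linspace(-0.5, 0.5, n_freqs, endpoint=False)
          P = np.empty(n_freqs)
          for k, f in enumerate(freqs):
              a = np.exp(2j * np.pi * f * np.arange(order))   # steering vector
              P[k] = 1.0 / np.real(a.conj() @ Rinv @ a)
          return freqs, P

      # Two closely spaced sinusoids in noise, for illustration.
      n = np.arange(200)
      x = np.exp(2j*np.pi*0.10*n) + np.exp(2j*np.pi*0.13*n) + 0.1*np.random.randn(200)
      freqs, P = mvm_spectrum(x)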

  11. Hypnosis control based on the minimum concentration of anesthetic drug for maintaining appropriate hypnosis.

    PubMed

    Furutani, Eiko; Nishigaki, Yuki; Kanda, Chiaki; Takeda, Toshihiro; Shirakami, Gotaro

    2013-01-01

    This paper proposes a novel hypnosis control method using the Auditory Evoked Potential Index (aepEX) as a hypnosis index. To avoid anesthetic side effects, it is desirable to reduce the amount of anesthetic drug administered during surgery, and many studies of hypnosis control systems have pursued this goal. Most of them use the Bispectral Index (BIS), another hypnosis index, but BIS depends on the anesthetic drug used and changes nonsmoothly near certain values. The aepEX, by contrast, distinguishes clearly between patient consciousness and unconsciousness and is independent of the anesthetic drug. The control method proposed in this paper consists of two elements: estimating the minimum effect-site concentration for maintaining appropriate hypnosis, and adjusting the infusion rate of an anesthetic drug, propofol, using model predictive control. The minimum effect-site concentration is estimated utilizing the properties of aepEX pharmacodynamics. The infusion rate of propofol is adjusted so that the effect-site concentration of propofol is kept near, and always above, the minimum effect-site concentration. Simulation results of hypnosis control using the proposed method show that the minimum concentration can be estimated appropriately and that the proposed control method can maintain hypnosis adequately while reducing the total infusion amount of propofol.

  12. Heating, moisture, and water budgets of tropical and midlatitude squall lines - Comparisons and sensitivity to longwave radiation

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Simpson, J.; Sui, C.-H.; Ferrier, B.; Lang, S.; Scala, J.; Chou, M.-D.; Pickering, K.

    1993-01-01

    A 2D time-dependent and nonhydrostatic numerical cloud model is presently used to estimate the heating, moisture, and water budgets in the convective and stratiform regions for both a tropical and a midlatitude squall line. The model encompasses a parameterized, three-class ice phase microphysical scheme and longwave radiative transfer process. It is noted that the convective region plays an important role in the generation of stratiform rainfall for both cases. While a midlevel minimum in the moisture profile for the tropical case is due to vertical eddy transport in the convective region, the contribution to the heating budget by the cloud-scale fluxes is minor; by contrast, the vertical eddy heat-flux is relatively important for the midlatitude case due to the stronger vertical velocities present in the convective cells.

  13. Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.

    PubMed

    Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L

    2017-05-31

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
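
    As a sketch of the ANOVA-based procedure (the first of the four listed), the intraclass repeatability is r = (MS_g - MS_e) / (MS_g + (k - 1) MS_e), and the minimum number of measurements for a desired coefficient of determination R2 is n0 = R2 (1 - r) / (r (1 - R2)). The Python below runs on synthetic data, not the soursop measurements.

      import numpy as np

      def repeatability_anova(y):
          """One-way ANOVA repeatability from a genotypes x measurements matrix."""
          g, k = y.shape
          ms_g = k * ((y.mean(axis=1) - y.mean()) ** 2).sum() / (g - 1)  # between genotypes
          ms_e = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (k - 1))
          return (ms_g - ms_e) / (ms_g + (k - 1) * ms_e)

      def n_measurements(r, target_r2=0.9):
          """Minimum measurements for a desired coefficient of determination."""
          return target_r2 * (1.0 - r) / (r * (1.0 - target_r2))

      rng = np.random.default_rng(0)
      y = rng.normal(size=(71, 1)) + 0.5 * rng.normal(size=(71, 16))  # genotype effect + noise
      r = repeatability_anova(y)
      print(f"r = {r:.2f}, measurements needed for R2 = 0.9: {n_measurements(r):.1f}")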

  14. Minimum Wage Increases and the Working Poor. Changing Domestic Priorities Discussion Paper.

    ERIC Educational Resources Information Center

    Mincy, Ronald B.

    Most economists agree that the difficulties of targeting minimum wage increases to low-income families make such increases ineffective tools for reducing poverty. This paper provides estimates of the impact of minimum wage increases on the poverty gap and the number of poor families, and shows which factors are barriers to decreasing poverty…

  15. Minimum Wages and School Enrollment of Teenagers: A Look at the 1990's.

    ERIC Educational Resources Information Center

    Chaplin, Duncan D.; Turner, Mark D.; Pape, Andreas D.

    2003-01-01

    Estimates the effects of higher minimum wages on school enrollment using the Common Core of Data. Controlling for local labor market conditions and state and year fixed effects, finds some evidence that higher minimum wages reduce teen school enrollment in states where students drop out before age 18. (23 references) (Author/PKP)

  16. Influence of regional-scale anthropogenic activity in northeast Asia on seasonal variations of surface ozone and carbon monoxide observed at Oki, Japan

    NASA Astrophysics Data System (ADS)

    Pochanart, Pakpong; Hirokawa, Jun; Kajii, Yoshizumi; Akimoto, Hajime; Nakao, Makoto

    1999-02-01

    Surface O3 and CO measurements were carried out at Oki, Japan from March 1994 to February 1996 in order to elucidate the processes determining temporal variations of O3 and CO in the northeast Asian Pacific rim region. Isentropic trajectory analysis was applied to sort out the influences of the air mass exchange under the Asian monsoon system and the regional-scale photochemical buildup of O3. The trajectories were categorized into five groups covering background and regionally polluted air masses. The seasonal cycles of O3 and CO in the background continental air mass revealed a spring maximum and summer minimum, with average concentrations ranging from 32 to 45 ppb for O3 and from 120 to 208 ppb for CO. In contrast, O3 concentrations in the regionally polluted continental air mass ranged from 44 to 57 ppb and showed a winter minimum and a spring-summer-autumn broad maximum, characterized by photochemical O3 production due to anthropogenic activities in northeast Asia. CO concentrations in the same air mass showed a spring maximum of 271 ppb and a summer-autumn minimum of 180 ppb. The photochemical buildup of O3 resulting from anthropogenic activities in this region was estimated to be 21 ppb in summer, while its production was insignificant, an average of 3 ppb, in winter. A comparison between data in northeast Asia and in Europe shows many similarities, supporting the contention that photochemical buildup of O3 from large-scale precursor emissions in both regions is very significant.

  17. Psychophysical Calibration of Mobile Touch-Screens for Vision Testing in the Field

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    2015-01-01

    The now ubiquitous nature of touch-screen displays in cell phones and tablet computers makes them an attractive option for vision testing outside of the laboratory or clinic. Accurate measurement of parameters such as contrast sensitivity, however, requires precise control of absolute and relative screen luminances. The nonlinearity of the display response (gamma) can be measured or checked using a minimum motion technique similar to that developed by Anstis and Cavanagh (1983) for the determination of isoluminance. While the relative luminances of the color primaries vary between subjects (due to factors such as individual differences in pre-retinal pigment densities), the gamma nonlinearity can be checked in the lab using a photometer. Here we compare results obtained using the psychophysical method with physical measurements for a number of different devices. In addition, we present a novel physical method using the device's built-in front-facing camera in conjunction with a mirror to jointly calibrate the camera and display. A high degree of consistency between devices is found, but some departures from ideal performance are observed. In spite of this, the effects of calibration errors and display artifacts on estimates of contrast sensitivity are found to be small.

  18. Image data compression having minimum perceptual error

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1995-01-01

    A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix is constructed using visual masking by luminance and contrast techniques and an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
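
    For context, the quantization step itself is standard JPEG-style arithmetic; the sketch below shows it with a flat placeholder matrix, whereas the invention's contribution is adapting Q to the image via the masking and pooling models named above.

      import numpy as np
      from scipy.fftpack import dct, idct

      def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
      def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

      def quantize_block(block, Q):
          """Forward DCT, divide by the quantization matrix, round, then invert."""
          coeffs = np.round(dct2(block) / Q)
          return idct2(coeffs * Q)

      Q = np.full((8, 8), 16.0)             # flat placeholder; Watson's Q is image-adaptive
      block = np.random.rand(8, 8) * 255.0
      recon = quantize_block(block, Q)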

  19. Preclinical evaluation of biodegradable macromolecular contrast agents for magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Feng, Yi

    Macromolecular contrast agents have been shown to be superior to small molecular weight contrast agents for MRI in blood pool imaging, tumor diagnosis and grading. However, none has been approved by the FDA because they circulate in the bloodstream much longer than small molecular weight contrast agents and result in high tissue accumulation of toxic Gd(III) ions. Biodegradable macromolecular contrast agents (BMCA) were invented to alleviate the toxic accumulation. They have a cleavable disulfide bond based backbone that can be degraded in vivo and excreted out of the body via renal filtration. Furthermore, the side chain of the backbone can be modified to achieve various degradation rates. Three BMCA, (Gd-DTPA)-cystamine copolymers (GDCC), Gd-DTPA cystine copolymers (GDCP), and Gd-DTPA cystine diethyl ester copolymers (GDCEP), were evaluated as blood pool contrast agents in a rat model. They have excellent blood pool enhancement, preferred pharmacokinetics, and only minimal long-term tissue retention of toxic Gd(III) ions. GDCC and GDCP, the lead agents with desired degradation rates, with molecular weights of 20 kDa and 70 kDa, were chosen for dynamic contrast enhanced MRI (DCE-MRI) to differentiate human prostate tumor models of different malignancy and growth rates. GDCC and GDCP could differentiate these tumor models, providing more accurate estimations of plasma volume, flow leakage rate, and permeability surface area product than a small molecular weight contrast agent Gd-DTPA-BMA when compared to the prototype macromolecular contrast agent albumin-Gd-DTPA. GDCC was favored for its neutral charge side chain and reasonable uptake rate by the tumors. GDCC with a molecular weight of 40 kDa (GDCC-40, above the renal filtration cutoff size) was used to assess the efficacy of two photothermal therapies (interstitial and indocyanine green enhanced). GDCC-40 provided excellent tumor enhancement shortly after its injection. Acute tumor response (4 hr) after therapies was revealed by DCE-MRI using GDCC-40. The region of the tumor with suspicious uptake of GDCC-40 could be correlated to the residual tumor. With only minimum tissue accumulation, BMCA have applications in blood pool imaging, cancer diagnosis, and efficacy assessment of anticancer treatment. Therefore, BMCA are promising for clinical applications.

  20. The Effect of Minimum Wages on Youth Employment in Canada: A Panel Study.

    ERIC Educational Resources Information Center

    Yuen, Terence

    2003-01-01

    Canadian panel data 1988-90 were used to compare estimates of minimum-wage effects based on a low-wage/high-worker sample and a low-wage-only sample. Minimum-wage effect for the latter is nearly zero. Different results for low-wage subgroups suggest a significant effect for those with longer low-wage histories. (Contains 26 references.) (SK)

  1. A Fast Approximate Algorithm for Mapping Long Reads to Large Reference Databases.

    PubMed

    Jain, Chirag; Dilthey, Alexander; Koren, Sergey; Aluru, Srinivas; Phillippy, Adam M

    2018-04-30

    Emerging single-molecule sequencing technologies from Pacific Biosciences and Oxford Nanopore have revived interest in long-read mapping algorithms. Alignment-based seed-and-extend methods demonstrate good accuracy, but face limited scalability, while faster alignment-free methods typically trade decreased precision for efficiency. In this article, we combine a fast approximate read mapping algorithm based on minimizers with a novel MinHash identity estimation technique to achieve both scalability and precision. In contrast to prior methods, we develop a mathematical framework that defines the types of mapping targets we uncover, establish probabilistic estimates of p-value and sensitivity, and demonstrate tolerance for alignment error rates up to 20%. With this framework, our algorithm automatically adapts to different minimum length and identity requirements and provides both positional and identity estimates for each mapping reported. For mapping human PacBio reads to the hg38 reference, our method is 290× faster than Burrows-Wheeler Aligner-MEM with a lower memory footprint and recall rate of 96%. We further demonstrate the scalability of our method by mapping noisy PacBio reads (each ≥5 kbp in length) to the complete NCBI RefSeq database containing 838 Gbp of sequence and >60,000 genomes.
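
    The MinHash identity estimate can be sketched as follows: build bottom-s sketches of k-mer hashes, estimate the Jaccard similarity j from the merged sketch, and convert it to a per-base identity with the Mash-style formula 1 + (1/k) ln(2j/(1+j)). The hash choice, k, s, and the minimizer windowing below are placeholders that differ from the paper's actual implementation.

      import hashlib, math

      def kmer_hashes(seq, k=16):
          """64-bit hashes of all k-mers of seq."""
          return {int.from_bytes(hashlib.blake2b(seq[i:i + k].encode(),
                  digest_size=8).digest(), 'little') for i in range(len(seq) - k + 1)}

      def sketch(seq, k=16, s=200):
          return sorted(kmer_hashes(seq, k))[:s]        # bottom-s sketch

      def jaccard(sk_a, sk_b, s=200):
          merged = set(sorted(set(sk_a) | set(sk_b))[:s])
          return len(set(sk_a) & set(sk_b) & merged) / len(merged)

      def identity(j, k=16):
          """Mash-style conversion from Jaccard similarity to per-base identity."""
          return 0.0 if j <= 0 else 1.0 + math.log(2.0 * j / (1.0 + j)) / k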

  2. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    PubMed

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.

  3. On the impacts of computing daily temperatures as the average of the daily minimum and maximum temperatures

    NASA Astrophysics Data System (ADS)

    Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan

    2017-12-01

    Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that calculating daily temperature as the average of the minimum and maximum daily readings leads to overestimation of the daily values by more than 10% when focusing on extremes and on values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (approximately 5-10% fewer trends detected in comparison with the reference data).
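
    The size of this effect is easy to reproduce on synthetic data: compare (Tmin + Tmax)/2 against the full 24-hour mean for a sinusoidal diurnal cycle plus weather noise. The cycle shape and noise level below are illustrative, not fitted to the study's stations.

      import numpy as np

      hours = np.arange(24)
      rng = np.random.default_rng(0)
      biases = []
      for _ in range(365):
          # Synthetic diurnal cycle: sinusoid plus noise (illustrative only).
          temps = 15 + 8 * np.sin(2 * np.pi * (hours - 9) / 24) + rng.normal(0, 1.5, 24)
          biases.append(0.5 * (temps.min() + temps.max()) - temps.mean())
      print(f"min/max-based mean bias: {np.mean(biases):+.2f} degC (sd {np.std(biases):.2f})")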

  4. Assessment of Refueling Hose Visibility: Post-Cleaning Evaluation

    DTIC Science & Technology

    2013-10-01

    Fragmentary record text (search-snippet and table residue removed): the surviving text cites Van Cott, H.P., and Kinkade, R.G. (Eds.), Human Engineering Guide to Equipment Design, Washington, DC: GPO; references a minimum specification from the F-35 Gen 5 HMDS (truncated in the source record); and reports black and white reflectance and contrast-ratio measurements from three spectral band sensors (visible, near IR, and short-wave IR), captured for each of the samples set up for both geometries.

  5. Retrieving air humidity, global solar radiation, and reference evapotranspiration from daily temperatures: development and validation of new methods for Mexico. Part I: humidity

    NASA Astrophysics Data System (ADS)

    Lobit, P.; López Pérez, L.; Lhomme, J. P.; Gómez Tagle, A.

    2017-07-01

    This study evaluates the dew point method (Allen et al. 1998) for estimating atmospheric vapor pressure from minimum temperature, and proposes an improved model that estimates it from maximum and minimum temperature. Both methods were evaluated at 786 weather stations in Mexico. The dew point method induced positive bias in dry areas and negative bias in coastal areas, and its average root mean square error across all evaluated stations was 0.38 kPa. The improved model assumes a bi-linear relation between the estimated vapor pressure deficit (the difference between saturation vapor pressure at minimum and at average temperature) and the measured vapor pressure deficit. The parameters of these relations were estimated from historical annual median values of relative humidity. This model removed the bias and achieved a root mean square error of 0.31 kPa. When no historical measurements of relative humidity were available, empirical relations were proposed to estimate it from latitude and altitude, with only a slight degradation in model accuracy (RMSE = 0.33 kPa, bias = -0.07 kPa). The applicability of the method to other environments is discussed.
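
    The baseline being evaluated reduces to two lines: the standard FAO-56 saturation vapor pressure curve, with the dew point approximated by the minimum temperature (Allen et al. 1998). The sketch below also forms the estimated vapor pressure deficit used by the improved model; the example temperatures are arbitrary.

      import math

      def e_sat(t_c):
          """Saturation vapor pressure (kPa) at t_c degC (FAO-56 formula)."""
          return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

      def e_a_dewpoint(t_min):
          """Dew point method: assume T_dew ~= T_min, so e_a = e_sat(T_min)."""
          return e_sat(t_min)

      t_min, t_max = 12.0, 28.0                                # arbitrary example day
      ea = e_a_dewpoint(t_min)
      vpd_est = e_sat((t_min + t_max) / 2.0) - e_sat(t_min)    # estimated VPD
      print(f"e_a = {ea:.2f} kPa, estimated VPD = {vpd_est:.2f} kPa")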

  6. Emplacement conditions of komatiite magmas from the 3.49 Ga Komati Formation, Barberton Greenstone Belt, South Africa

    NASA Astrophysics Data System (ADS)

    Parman, S. W.; Dann, J. C.; Grove, T. L.; de Wit, M. J.

    1997-08-01

    This paper provides new constraints on the crystallization conditions of the 3.49 Ga Barberton komatiites. The compositional evidence from igneous pyroxene in the olivine spinifex komatiite units indicates that the magma contained significant quantities of dissolved H2O. Estimates are made from comparisons of the compositions of pyroxene preserved in Barberton komatiites with pyroxene produced in laboratory experiments at 0.1 MPa (1 bar) under anhydrous conditions and at 100 and 200 MPa (1 and 2 kbar) under H2O-saturated conditions on an analog Barberton composition. Pyroxene thermobarometry on high-Ca clinopyroxene compositions from ten samples requires a range of minimum magmatic water contents of 6 wt.% or greater at the time of pyroxene crystallization and minimum emplacement pressures of 190 MPa (6 km depth). Since high-Ca pyroxene appears after 30% crystallization of olivine and spinel, the liquidus H2O contents could be 4 to 6 wt.% H2O. The liquidus temperature of the Barberton komatiite composition studied is between 1370 and 1400°C at 200 MPa under H2O-saturated conditions. When compared to the temperature-depth regime of modern melt generation environments, the komatiite mantle source temperatures are 200°C higher than the hydrous mantle melting temperatures inferred in modern subduction zone environments and 100°C higher than mean mantle melting temperatures estimated at mid-ocean ridges. When compared to previous estimates of komatiite liquidus temperatures, melting under hydrous conditions occurs at temperatures that are ˜ 250°C lower than previous estimates for anhydrous komatiite. Mantle melting by near-fractional, adiabatic decompression takes place in a melting column that spans ˜ 38 km depth range under hydrous conditions. This depth interval for melting is only slightly greater than that observed in modern mid-ocean ridge environments. In contrast, anhydrous fractional melting models of komatiite occur over a larger depth range (˜ 130 km) and place the base of the melting column into the transition zone.

  7. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). To improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, and the amount of precipitation is then estimated separately on wet days. This process reproduces precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs.
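
    A minimal sketch of the two-step idea with scikit-learn on synthetic data: a logistic model generates occurrence, a regression fitted on wet days only supplies amounts, and the two combine into an expected precipitation. The covariates, link functions, and absence of any spatial machinery are placeholder simplifications, not the paper's formulation.

      import numpy as np
      from sklearn.linear_model import LinearRegression, LogisticRegression

      rng = np.random.default_rng(1)
      X = rng.normal(size=(500, 3))        # e.g., elevation, latitude, longitude (synthetic)
      wet = (X[:, 0] + rng.normal(size=500)) > 0
      amount = np.where(wet, np.exp(0.8 * X[:, 1] + rng.normal(0, 0.3, 500)), 0.0)

      occ = LogisticRegression().fit(X, wet)                      # step 1: occurrence
      amt = LinearRegression().fit(X[wet], np.log(amount[wet]))   # step 2: wet-day amounts

      p_wet = occ.predict_proba(X)[:, 1]
      expected = p_wet * np.exp(amt.predict(X))                   # expected precipitation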

  8. Determination of Minimum Training Sample Size for Microarray-Based Cancer Outcome Prediction–An Empirical Assessment

    PubMed Central

    Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu

    2013-01-01

    The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of the statistical parameters involved in classifiers, and these cannot be reliably estimated with only a small number of training samples. It is therefore of vital importance to determine the minimum number of training samples needed to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920

  9. Daily mean temperature estimate at the US SUFRAD stations as an average of the maximum and minimum temperatures

    DOE PAGES

    Chylek, Petr; Augustine, John A.; Klett, James D.; ...

    2017-09-30

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (T max) and minimum (T min) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not reduce significantly the uncertainty of the daily mean temperature. A statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of T max and T min.

  10. Cross-scale modeling of surface temperature and tree seedling establishment in mountain landscapes

    USGS Publications Warehouse

    Dingman, John; Sweet, Lynn C.; McCullough, Ian M.; Davis, Frank W.; Flint, Alan L.; Franklin, Janet; Flint, Lorraine E.

    2013-01-01

    Introduction: Estimating surface temperature from above-ground field measurements is important for understanding the complex landscape patterns of plant seedling survival and establishment, processes which occur at heights of only several centimeters. Currently, climate models predict temperature at 2 m above ground, leaving the ground-surface microclimate poorly characterized. Methods: Using a network of field temperature sensors and climate models, a ground-surface temperature method was used to estimate microclimate variability of minimum and maximum temperature. Temperature lapse rates were derived from field temperature sensors and distributed across the landscape, capturing differences in solar radiation and cold air drainages modeled at a 30-m spatial resolution. Results: The surface temperature estimation method used for this analysis successfully estimated minimum surface temperatures on north-facing, south-facing, valley, and ridgeline topographic settings, and when compared to measured temperatures yielded an R^2 of 0.88, 0.80, 0.88, and 0.80, respectively. Maximum surface temperatures generally had slightly more spatial variability than minimum surface temperatures, resulting in R^2 values of 0.86, 0.77, 0.72, and 0.79 for north-facing, south-facing, valley, and ridgeline topographic settings. Quasi-Poisson regressions predicting recruitment of Quercus kelloggii (black oak) seedlings from temperature variables were significantly improved using these estimates of surface temperature compared to air temperature modeled at 2 m. Conclusion: Predicting minimum and maximum ground-surface temperatures using a downscaled climate model coupled with temperature lapse rates estimated from field measurements provides a method for modeling temperature effects on plant recruitment. Such methods could be applied to improve projections of species' range shifts under climate change. Areas of complex topography can provide intricate microclimates that may allow species to redistribute locally as climate changes.

  11. Daily mean temperature estimate at the US SUFRAD stations as an average of the maximum and minimum temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chylek, Petr; Augustine, John A.; Klett, James D.

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (T max) and minimum (T min) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not reduce significantly the uncertainty of the daily mean temperature. A statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of T max and T min.

  12. The SME gauge sector with minimum length

    NASA Astrophysics Data System (ADS)

    Belich, H.; Louzada, H. L. C.

    2017-12-01

    We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.

  13. Economic policy and the double burden of malnutrition: cross-national longitudinal analysis of minimum wage and women's underweight and obesity.

    PubMed

    Conklin, Annalijn I; Ponce, Ninez A; Crespi, Catherine M; Frank, John; Nandi, Arijit; Heymann, Jody

    2018-04-01

    To examine changes in minimum wage associated with changes in women's weight status. Longitudinal study of legislated minimum wage levels (per month, purchasing power parity-adjusted, 2011 constant US dollar values) linked to anthropometric and sociodemographic data from multiple Demographic and Health Surveys (2000-2014). Separate multilevel models estimated associations of a $10 increase in monthly minimum wage with the rate of change in underweight and obesity, conditioning on individual and country confounders. Post-estimation analysis computed predicted mean probabilities of being underweight or obese associated with higher levels of minimum wage at study start and end. Twenty-four low-income countries. Adult non-pregnant women (n = 150,796). Higher minimum wages were associated (OR; 95% CI) with reduced underweight in women (0.986; 0.977, 0.995); a decrease that accelerated over time (P-interaction = 0.025). Increasing minimum wage was associated with higher obesity (1.019; 1.008, 1.030), but did not alter the rate of increase in obesity prevalence (P-interaction = 0.8). A $10 rise in monthly minimum wage was associated (prevalence difference; 95% CI) with an average decrease of about 0.14 percentage points (-0.14; -0.23, -0.05) for underweight and an increase of about 0.1 percentage points (0.12; 0.04, 0.20) for obesity. The present longitudinal multi-country study showed that a $10 rise in monthly minimum wage significantly accelerated the decline in women's underweight prevalence, but had no association with the pace of growth in obesity prevalence. Thus, modest rises in minimum wage may be beneficial for addressing the protracted underweight problem in poor countries, especially South Asia and parts of Africa.

  14. Lithology-dependent minimum horizontal stress and in-situ stress estimate

    NASA Astrophysics Data System (ADS)

    Zhang, Yushuai; Zhang, Jincai

    2017-04-01

    Based on the generalized Hooke's law coupling stresses and pore pressure, the minimum horizontal stress is solved under the assumption that the vertical, minimum, and maximum horizontal stresses are in equilibrium in the subsurface formations. From this derivation, we find that the uniaxial strain method gives the minimum value, or lower bound, of the minimum stress. Using Anderson's faulting theory and this lower bound of the minimum horizontal stress, the coefficient of friction of the fault is derived. It shows that the coefficient of friction may be much smaller than is commonly assumed (e.g., μf = 0.6-0.7) for in-situ stress estimation. Using the derived coefficient of friction, an improved stress polygon is drawn, which can reduce the uncertainty of in-situ stress calculation by narrowing the area of the conventional stress polygon. It also shows that the coefficient of friction of the fault depends on lithology. For example, if the formation in the fault is composed of weak shales, then the coefficient of friction of the fault may be small (as low as μf = 0.2). This implies that such a fault is weaker and more likely to undergo shear failure than a fault composed of sandstones. To keep a weak fault from shear sliding, a higher minimum stress and a lower shear stress are needed. That is, a critically stressed weak fault maintains a higher minimum stress, which explains why low shear stress appears on frictionally weak faults.
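
    Both relations are compact enough to sketch. Below, the uniaxial-strain expression gives the lower bound on the minimum horizontal stress, and Anderson's normal-faulting condition (sigma_v - P_p)/(sigma_h - P_p) = (sqrt(mu^2 + 1) + mu)^2 is inverted for the friction coefficient of a critically stressed fault; all numbers are illustrative.

      import math

      def sh_min_uniaxial(sv, pp, nu=0.25, alpha=1.0):
          """Uniaxial-strain lower bound on minimum horizontal stress (units of sv)."""
          return nu / (1.0 - nu) * (sv - alpha * pp) + alpha * pp

      def implied_friction(sv, sh, pp):
          """Friction coefficient of a critically stressed normal fault (Anderson)."""
          q = (sv - pp) / (sh - pp)
          return (q - 1.0) / (2.0 * math.sqrt(q))

      sv, pp = 70.0, 30.0                    # ~3 km depth, hydrostatic pore pressure (MPa)
      sh = sh_min_uniaxial(sv, pp, nu=0.35)  # assumed weak-shale Poisson's ratio
      print(f"Sh_min = {sh:.1f} MPa, implied fault mu = {implied_friction(sv, sh, pp):.2f}")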

  15. On the relationship of minimum detectable contrast to dose and lesion size in abdominal CT

    NASA Astrophysics Data System (ADS)

    Zhou, Yifang; Scott, Alexander, II; Allahverdian, Janet; Lee, Christina; Kightlinger, Blake; Azizyan, Avetis; Miller, Joseph

    2015-10-01

    CT dose optimization is typically guided by pixel noise or contrast-to-noise ratio, which does not delineate low contrast details adequately. We utilized the statistically defined low contrast detectability to study its relationship to dose and lesion size in abdominal CT. A realistically shaped, medium sized abdomen phantom was customized to contain a cylindrical void of 4 cm diameter. The void was filled with a low contrast (1% and 2%) insert containing six groups of cylindrical targets ranging from 1.2 mm to 7 mm in size. Helical CT scans were performed using a Siemens 64-slice mCT and a GE Discovery 750 HD at various doses. After subtraction of adjacent slices, the uniform sections of the filtered backprojection reconstructed images were partitioned into matrices of square elements matching the sizes of the targets. It was verified that the mean values of the elements in each matrix follow a Gaussian distribution. The minimum detectable contrast (MDC), quantified as the mean signal-to-background difference equal to the distribution's standard deviation multiplied by 3.29 (corresponding to a 95% confidence level), was found to be related to the phantom specific dose and the element size by a power law (R^2 > 0.990). Independent readings on the 5 mm and 7 mm targets were compared against the ratios of measured contrast to MDC. The results showed that 93% of the cases were detectable when the measured contrast exceeded the MDC. The correlation of the MDC to the pixel noise and target size was also identified, and the relationship was found to be the same for both scanners in the study. To quantify the impact of iterative reconstruction on low contrast detectability, the noise structure was studied in a similar manner at different doses and with different ASIR blending fractions. The relationship of dose to blending fraction and low contrast detectability is presented.
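
    The statistical definition translates directly into code: tile a uniform region of the subtraction image with target-sized elements and take 3.29 times the standard deviation of the element means. The sketch below runs on synthetic Gaussian noise rather than phantom data.

      import numpy as np

      def minimum_detectable_contrast(roi, e):
          """MDC at 95% confidence: 3.29 x std of the means of e x e pixel elements."""
          h, w = (np.array(roi.shape) // e) * e
          blocks = roi[:h, :w].reshape(h // e, e, w // e, e)
          return 3.29 * blocks.mean(axis=(1, 3)).std(ddof=1)

      roi = np.random.default_rng(2).normal(0.0, 10.0, (256, 256))  # synthetic noise (HU)
      for e in (4, 8, 16):                  # element size matched to the target size
          print(e, round(minimum_detectable_contrast(roi, e), 2))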

  16. Airborne optical detection of oil on water.

    NASA Technical Reports Server (NTRS)

    Millard, J. P.; Arvesen, J. C.

    1972-01-01

    Airborne measurements were made over controlled oil-spill test sites to evaluate various techniques, utilizing reflected sunlight, for detecting oil on water. The results of these measurements show that (1) maximum contrast between oil and water is in the UV and red portions of the spectrum; (2) minimum contrast is in the blue-green; (3) differential polarization appears to be a very promising technique; (4) no characteristic absorption bands, which would permit one oil to be distinguished from another, were discovered in the spectral regions measured; (5) sky conditions greatly influence the contrast between oil and water; and (6) highest contrast was achieved under overcast sky conditions.

  17. Thermal biology, population fluctuations and implications of temperature extremes for the management of two globally significant insect pests.

    PubMed

    Nyamukondiwa, Casper; Weldon, Christopher W; Chown, Steven L; le Roux, Peter C; Terblanche, John S

    2013-12-01

    The link between environmental temperature, physiological processes and population fluctuations is a significant aspect of insect pest management. Here, we explore how thermal biology affects the population abundance of two globally significant pest fruit fly species, Ceratitis capitata (medfly) and C. rosa (Natal fruit fly), including irradiated individuals and those expressing a temperature sensitive lethal (tsl) mutation that are used in the sterile insect technique. Results show that upper and lower lethal temperatures are seldom encountered at the field sites, while critical minimum temperatures for activity and lower developmental thresholds are crossed more frequently. Estimates of abundance revealed that C. capitata are active year-round, but abundance declines markedly during winter. Temporal autocorrelation of average fortnightly trap captures and of development time, estimated from an integrated model to calculate available degree days, show similar seasonal lags suggesting that population increases in early spring occur after sufficient degree-days have accumulated. By contrast, population collapses coincide tightly with increasing frequency of low temperature events that fall below critical minimum temperatures for activity. Individuals of C. capitata expressing the tsl mutation show greater critical thermal maxima and greater longevity under field conditions than reference individuals. Taken together, this evidence suggests that low temperatures limit populations in the Western Cape, South Africa and likely do so elsewhere. Increasing temperature extremes and warming climates generally may extend the season over which these species are active, and could increase abundance. The sterile insect technique may prove profitable as climates change given that laboratory-reared tsl flies have an advantage under warmer conditions.

  18. Polydactyly in the American Indian.

    PubMed

    Bingle, G J; Niswander, J D

    1975-01-01

    Polydactyly has an incidence in the American Indian twice that of Caucasians. A minimum estimate of this incidence is 2.40 per 1,000 live births. Preaxial type 1 has an incidence three to four times that reported for Caucasians or Negroes. The overall sex ratio in Indians is distorted with more males affected than females. The preaxial type 1 anomaly has a strong predilection for the hands and always is unilateral in contrast to postaxial type B where more than one-half are bilateral. The evidence to date, consisting of varying incidences of specific types of polydactyly among American whites, Negroes, and Indians in varying environments, suggests different gene-frequencies for polydactyly in each population. The incidence in Indians with 50% Caucasian admixture suggests that the factors controlling polydactyly are in large part genetically determined. Family studies and twin studies reported elsewhere offer no clear-cut genetic model which explains the highly variable gene frequencies.

  19. Polydactyly in the American Indian.

    PubMed Central

    Bingle, G J; Niswander, J D

    1975-01-01

    Polydactyly has an incidence in the American Indian twice that of Caucasians. A minimum estimate of this incidence is 2.40 per 1,000 live births. Preaxial type 1 has an incidence three to four times that reported for Caucasians or Negroes. The overall sex ratio in Indians is distorted with more males affected than females. The preaxial type 1 anomaly has a strong predilection for the hands and always is unilateral in contrast to postaxial type B where more than one-half are bilateral. The evidence to date, consisting of varying incidences of specific types of polydactyly among American whites, Negroes, and Indians in varying environments, suggests different gene-frequencies for polydactyly in each population. The incidence in Indians with 50% Caucasian admixture suggests that the factors controlling polydactyly are in large part genetically determined. Family studies and twin studies reported elsewhere offer no clear-cut genetic model which explains the highly variable gene frequencies. PMID:1155454

  20. On-line photolithography modeling using spectrophotometry and Prolith/2

    NASA Astrophysics Data System (ADS)

    Engstrom, Herbert L.; Beacham, Jeanne E.

    1994-05-01

    Spectrophotometry has been applied to optimizing photolithography processes in semiconductor manufacturing. For many years thin film measurement systems have been used in manufacturing for controlling film deposition processes. The combination of film thickness mapping with photolithography modeling has expanded the applications of this technology. Experimental measurements of dose-to-clear, the minimum light exposure dose required to fully develop a photoresist, are described. It is shown how dose-to-clear and photoresist contrast may be determined rapidly and conveniently from measurements of a dose exposure matrix on a monitor wafer. Such experimental measurements may underestimate the dose-to-clear because of thickness variations of the photoresist and underlying layers on the product wafer. Online modeling of the photolithographic process together with film thickness maps of the entire wafer can overcome this problem. Such modeling also provides maps of dose-to-clear and resist linewidth that can be used to estimate and optimize yield.
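
    One plausible reduction of the dose-matrix measurement to code: fit normalized remaining resist thickness against log10(dose) on its falling segment; the slope magnitude is the contrast gamma and the zero-thickness intercept is the dose-to-clear. This is a simplification of the measurement described above, run here on synthetic data.

      import numpy as np

      def contrast_and_dose_to_clear(doses, thickness):
          """Fit the falling segment of a positive-resist contrast curve."""
          t = thickness / thickness.max()
          seg = (t > 0.05) & (t < 0.95)                     # roughly linear segment
          slope, intercept = np.polyfit(np.log10(doses[seg]), t[seg], 1)
          return abs(slope), 10.0 ** (-intercept / slope)   # gamma, dose-to-clear

      doses = np.linspace(20, 120, 12)                      # synthetic dose matrix (mJ/cm^2)
      thickness = np.clip(1.2 - 0.9 * np.log10(doses / 20.0), 0.0, 1.0)
      gamma, e0 = contrast_and_dose_to_clear(doses, thickness)
      print(f"gamma = {gamma:.2f}, dose-to-clear = {e0:.0f} mJ/cm^2")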

  1. Modeling Encapsulated Microbubble Dynamics at High Pressure Amplitudes

    NASA Astrophysics Data System (ADS)

    Heyse, Jan F.; Bose, Sanjeeb; Iaccarino, Gianluca

    2017-11-01

    Encapsulated microbubbles are commonly used in ultrasound contrast imaging and are of growing interest in therapeutic applications where local cavitation creates temporary perforations in cell membranes allowing for enhanced drug delivery. Clinically used microbubbles are encapsulated by a shell commonly consisting of protein, polymer, or phospholipid; the response of these bubbles to externally imposed ultrasound waves is sensitive to the compressibility of the encapsulating shell. Existing models approximate the shell compressibility via an effective surface tension (Marmottant et al. 2005). We present simulations of microbubbles subjected to high amplitude ultrasound waves (on the order of 10^6 Pa) and compare the results with the experimental measurements of Helfield et al. (2016). Analysis of critical points (corresponding to maximum and minimum expansion) in the governing Rayleigh-Plesset equation is used to make estimates of the parameters used to characterize the effective surface tension of the encapsulating shell.
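
    A sketch of the kind of model involved: a Rayleigh-Plesset equation with a Marmottant-style effective surface tension, integrated with scipy. All parameter values are illustrative, the buckling radius is set equal to the rest radius (one common convention), and the drive amplitude is kept moderate; runs at the paper's ~10^6 Pa regime would need a stiff solver and careful time stepping.

      import numpy as np
      from scipy.integrate import solve_ivp

      rho, p0, mu_l = 998.0, 101325.0, 1.0e-3                  # water (SI units)
      kappa, kappa_s, chi, sig_w = 1.07, 5.0e-9, 0.5, 0.072    # shell (illustrative)
      r0 = 1.5e-6
      r_buck = r0                                              # zero initial shell tension

      def sigma(r):
          """Marmottant-style tension: buckled (0) / elastic / ruptured (capped)."""
          if r <= r_buck:
              return 0.0
          return min(chi * (r * r / (r_buck * r_buck) - 1.0), sig_w)

      def p_drive(t, pa=2.0e5, f=1.0e6):
          return pa * np.sin(2.0 * np.pi * f * t)              # moderate drive for the sketch

      def rhs(t, y):
          r, rdot = y
          p_gas = (p0 + 2.0 * sigma(r0) / r0) * (r0 / r) ** (3.0 * kappa)
          p_wall = (p_gas - p0 - 2.0 * sigma(r) / r - 4.0 * mu_l * rdot / r
                    - 4.0 * kappa_s * rdot / r ** 2 - p_drive(t))
          return [rdot, (p_wall / rho - 1.5 * rdot ** 2) / r]

      sol = solve_ivp(rhs, (0.0, 5.0e-6), [r0, 0.0], max_step=1.0e-9)
      print(f"R_max/R_min = {sol.y[0].max() / sol.y[0].min():.2f}")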

  2. Benefit adequacy among elderly Social Security retired-worker beneficiaries and the SSI federal benefit rate.

    PubMed

    Rupp, Kalman; Strand, Alexander; Davies, Paul; Sears, Jim

    2007-01-01

    Both target effectiveness and administrative simplicity are desirable properties in the design of minimum benefit packages for public retirement programs. The federal benefit rate (FBR) of the Supplemental Security Income (SSI) program has been proposed by some analysts as a potentially attractive basis of establishing a new minimum benefit for Social Security on both of these grounds. This type of proposal is related to a broader array of minimum benefit proposals that would establish a Social Security benefit floor based on the poverty rate. In contrast to Social Security, the SSI program is means tested, including both an income and asset screen and also a categorical eligibility screen (the requirement to qualify as aged or disabled). The SSI FBR provides an inflation-adjusted, guaranteed income floor for aged and disabled people with low assets. The FBR has been perceived by proponents as a minimal measure of Social Security benefit adequacy because it represents a subpoverty income level for a family of one or two depending on marital status. For this same reason it has been seen as a target-effective tool of designing a minimum Social Security benefit. An FBR-based minimum benefit has also been viewed as administratively simple to implement; the benefit can be calculated from Social Security administrative records using a completely automated electronic process. Therefore-in contrast to the SSI program itself-an FBR-based minimum benefit would incur virtually no ongoing administrative costs, would not require a separate application for a means-tested program, and would avoid the perception of welfare stigma. While these ideas have been discussed in the literature and among policymakers in the United States over the years, and similar proposals have been considered or implemented in several foreign countries, there have been no previous analyses measuring the size of the potentially affected beneficiary population. Nor has there been any systematic assessment of the FBR as a measure of benefit adequacy or the tradeoffs between potential target effectiveness and administrative simplicity. Based on a series of simulations, we assess the FBR as a potential foundation for minimum Social Security benefits and we examine the tradeoffs between administrative simplicity and target effectiveness using microdata from the 1996 panel of the Survey of Income and Program Participation (SIPP). Our empirical analysis is limited to Social Security retired-worker beneficiaries aged 65 or older. We start with the assessment of the FBR as a measure of benefit adequacy. We are particularly concerned about two types of error: (1) incorrectly identifying some Social Security beneficiaries as "economically vulnerable," and (2) incorrectly identifying others as "not economically vulnerable." Operationally we measure economic vulnerability by two alternative standards. One of our measures considers beneficiaries with family income below the official poverty threshold as vulnerable. Our second measure is more restrictive; it uses a family income threshold equal to 75 percent of the official poverty threshold. We find that a substantial minority of retired workers have Social Security benefits below the FBR. The results also show that the FBR-based measure of Social Security benefit adequacy is very imprecise in terms of identifying economically vulnerable people. We estimate that the vast majority of beneficiaries with Social Security benefits below the FBR are not economically vulnerable. 
Conversely, an FBR-level Social Security benefit threshold fails to identify some beneficiaries who are economically vulnerable. Thus an FBR-level minimum benefit would be poorly targeted in terms of both types of errors we are concerned about. An FBR-level minimum benefit would provide minimum Social Security benefits to many people who are clearly not poor. Conversely, an FBR-level minimum benefit would not provide any income relief to some who are poor. The administrative simplicity behind these screening errors also results in additional program cost that may be perceived as substantial. We estimate that an FBR-level minimum benefit would increase aggregate program cost for retired workers aged 65 or older by roughly 2 percent. There are two fundamental reasons for these findings. First, the concept of an FBR-level minimum benefit looks at the individual or married couple in artificial isolation; however, the family is the main consumption unit in our society. The income of an unmarried partner or family members other than a married spouse is ignored. Second, individuals and couples may also have income from sources other than Social Security or SSI, which is also ignored by a simple FBR-based minimum benefit concept. The substantial empirical magnitude of measurement error arising from these conceptual simplifications naturally leads to the assessment of the tradeoff between target effectiveness and administrative simplicity. To facilitate this analysis, we simulate the potential effect of alternative screening methods designed to increase target effectiveness; while reducing program cost, such alternatives also may increase administrative complexity. For example, considering the combined Social Security benefit of a married couple (rather than looking at the husband and wife in isolation) might substantially increase target effectiveness with a relatively small increase in administrative complexity. Adding a family income screen might increase administrative complexity to a greater degree, but also would increase target effectiveness dramatically. The results also suggest that at some point adding new screens-such as a comprehensive asset test-may drastically increase administrative complexity with diminishing returns in terms of increased target effectiveness and reduced program cost. Whether a broad-based minimum benefit concept that is not tied to previous work experience is perceived by policymakers as desirable or not may depend on several factors not addressed in this article. However, to the extent that this type of minimum benefit design is regarded as potentially desirable, the tradeoffs between administrative simplicity and target effectiveness need to be considered.

  3. DC-9/JT8D refan, Phase 1. [technical and economic feasibility of retrofitting DC-9 aircraft with refan engine to achieve desired acoustic levels

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Analyses and design studies were conducted on the technical and economic feasibility of installing the JT8D-109 refan engine on the DC-9 aircraft. Design criteria included minimum change to the airframe to achieve desired acoustic levels. Several acoustic configurations were studied with two selected for detailed investigations. The minimum selected acoustic treatment configuration results in an estimated aircraft weight increase of 608 kg (1,342 lb) and the maximum selected acoustic treatment configuration results in an estimated aircraft weight increase of 809 kg (1,784 lb). The range loss for the minimum and maximum selected acoustic treatment configurations based on long range cruise at 10 668 m (35,000 ft) altitude with a typical payload of 6 804 kg (15,000 lb) amounts to 54 km (86 n. mi.) respectively. Estimated reduction in EPNL's for minimum selected treatment show 8 EPNdB at approach, 12 EPNdB for takeoff with power cutback, 15 EPNdB for takeoff without power cutback and 12 EPNdB for sideline using FAR Part 36. Little difference was estimated in EPNL between minimum and maximum treatments due to reduced performance of maximum treatment. No major technical problems were encountered in the study. The refan concept for the DC-9 appears technically feasible and economically viable at approximately $1,000,000 per airplane. An additional study of the installation of JT3D-9 refan engine on the DC-8-50/61 and DC-8-62/63 aircraft is included. Three levels of acoustic treatment were suggested for DC-8-50/61 and two levels for DC-8-62/63. Results indicate the DC-8 technically can be retrofitted with refan engines for approximately $2,500,000 per airplane.

  4. Binoculars with mil scale as a training aid for estimating form class

    Treesearch

    H.W. Camp, J.R.; C.A. Bickford

    1949-01-01

    In an extensive forest inventory, estimates involving personal judgment cannot be eliminated. However, every means should be taken to keep these estimates to a minimum and to provide on-the-job training that is adequate for obtaining the best estimates possible.

  5. Space shuttle engineering and operations support. Orbiter to spacelab electrical power interface. Avionics system engineering

    NASA Technical Reports Server (NTRS)

    Emmons, T. E.

    1976-01-01

    The results are presented of an investigation of the factors which affect the determination of Spacelab (S/L) minimum interface main dc voltage and available power from the orbiter. The dedicated fuel cell mode of powering the S/L is examined along with the minimum S/L interface voltage and available power using the predicted fuel cell power plant performance curves. The values obtained are slightly lower than current estimates and represent a more marginal operating condition than previously estimated.

  6. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
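
    For context, a generic recursive least-squares identifier for a linear-in-parameters discrete-time model is sketched below; the paper's minimum variance filter additionally handles multiplicative noise and carries convergence and consistency proofs, which this plain RLS sketch does not.

      import numpy as np

      def rls_identify(phi, y, lam=1.0, p0=1.0e3):
          """Recursive least squares for y_k = phi_k^T theta + e_k."""
          theta, P = np.zeros(phi.shape[1]), p0 * np.eye(phi.shape[1])
          for k in range(len(y)):
              f = phi[k]
              K = P @ f / (lam + f @ P @ f)            # gain
              theta = theta + K * (y[k] - f @ theta)
              P = (P - np.outer(K, f @ P)) / lam
          return theta, P

      # Identify a second-order ARX model from simulated data (illustrative).
      rng = np.random.default_rng(3)
      u, y = rng.normal(size=300), np.zeros(300)
      for k in range(2, 300):
          y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.05 * rng.normal()
      phi = np.array([[y[k-1], y[k-2], u[k-1]] for k in range(2, 300)])
      theta, _ = rls_identify(phi, y[2:])
      print(theta)        # approaches [1.5, -0.7, 0.5]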

  7. Lung imaging in rodents using dual energy micro-CT

    NASA Astrophysics Data System (ADS)

    Badea, C. T.; Guo, X.; Clark, D.; Johnston, S. M.; Marshall, C.; Piantadosi, C.

    2012-03-01

    Dual energy CT imaging is expected to play a major role in the diagnostic arena as it provides material decomposition on an elemental basis. The purpose of this work is to investigate the use of dual energy micro-CT for the estimation of vascular, tissue, and air fractions in rodent lungs using a post-reconstruction three-material decomposition method. We have tested our method using both simulations and experimental work. Using simulations, we have estimated the accuracy limits of the decomposition for realistic micro-CT noise levels. Next, we performed experiments involving ex vivo lung imaging in which intact lungs were carefully removed from the thorax, injected with an iodine-based contrast agent, and inflated with air at different volume levels. Finally, we performed in vivo imaging studies in (n=5) C57BL/6 mice using fast prospective respiratory gating in end-inspiration and end-expiration for three different levels of positive end-expiratory pressure (PEEP). Prior to imaging, mice were injected with a liposomal blood pool contrast agent. The mean accuracy values were 95.5% for air, 96% for blood, and 92.4% for tissue. The absolute accuracy in determining all material fractions was 94.6%. The minimum difference that we could detect in material fractions was 15%. As expected, an increase in PEEP levels for the living mouse resulted in statistically significant increases in air fractions at end-expiration, but no significant changes at end-inspiration. Our method has applicability in preclinical pulmonary studies where various physiological changes can occur as a result of genetic changes, lung disease, or drug effects.
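
    As a concrete illustration of the post-reconstruction step, a three-material decomposition reduces, per voxel, to a 3x3 linear system: the CT values at the two energies are volume-weighted mixtures of the air, blood, and tissue basis values, and the three fractions sum to one. A minimal sketch follows; the basis CT numbers are illustrative placeholders, not the calibration values used in the study.

```python
# Minimal sketch of per-voxel three-material decomposition for dual energy
# micro-CT. Basis CT numbers (HU) below are placeholders, not calibrated.
import numpy as np

basis = np.array([
    [-1000.0, 600.0, 50.0],   # CT numbers at the low energy: air, blood, tissue
    [-1000.0, 300.0, 45.0],   # CT numbers at the high energy
    [1.0, 1.0, 1.0],          # volume fractions must sum to one
])

def decompose(ct_low, ct_high):
    """Return (f_air, f_blood, f_tissue) for one voxel."""
    return np.linalg.solve(basis, np.array([ct_low, ct_high, 1.0]))

print(decompose(-400.0, -420.0))   # e.g. a voxel that is mostly air
```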

  8. Contrasts in Sea Ice Formation and Production in the Arctic Seasonal and Perennial Ice Zones

    NASA Technical Reports Server (NTRS)

    Kwok, R.

    2006-01-01

    Four years (1997-2000) of RADARSAT Geophysical Processor System (RGPS) data are used to contrast the sea ice deformation and production regionally, and in the seasonal (SIZ) and perennial (PIZ) ice zones. Ice production here refers to the growth of seasonal ice in openings during the winter. Three-day estimates of these quantities are provided within Lagrangian elements initially 10 km on a side. A distinct seasonal cycle is seen in both zones, with these estimates highest in the late fall and at seasonal minima in the mid-winter. Regional divergence over the winter could be up to 30%. Spatially, the highest deformation is in the SIZ north of coastal Alaska. Both ice deformation and production are higher in the SIZ: deformation-related ice production in the SIZ (approx. 0.5 m) is 1.5-2.3 times that of the PIZ (approx. 0.3 m); this is connected to ice strength and thickness. Atmospheric forcing and boundary layer structure contribute only to the seasonal and interannual variability. Seasonal ice growth in ice fractures accounts for approx. 25-40% of the total ice production of the Arctic Ocean. By itself, this deformation-ice production relationship could be considered a negative feedback when thickness is perturbed. However, the overall effect on ice production in the face of increasing seasonal and thinner/weaker ice coverage could be modified by: local destabilization of the water column promoting overturning of warmer water due to increased brine rejection; and the upwelling of the pycnocline associated with increased occurrence of large shear motion in sea ice.

  9. MRTD: man versus machine

    NASA Astrophysics Data System (ADS)

    van Rheenen, Arthur D.; Taule, Petter; Thomassen, Jan Brede; Madsen, Eirik Blix

    2018-04-01

    We present Minimum-Resolvable Temperature Difference (MRTD) curves obtained by letting an ensemble of observers judge how many of the six four-bar patterns they can "see" in a set of images taken with different bar-to-background contrasts. The same images are analyzed using elemental signal analysis algorithms, and machine-analysis-based MRTD curves are obtained. We show that, by adjusting the minimum required signal-to-noise ratio, the machine-based MRTDs become very similar to the ones obtained with the help of the human observers.

  10. The Einstein-Hilbert gravitation with minimum length

    NASA Astrophysics Data System (ADS)

    Louzada, H. L. C.

    2018-05-01

    We study Einstein-Hilbert gravitation with the deformed Heisenberg algebra leading to a minimum length, with the intention of finding and estimating the corrections in this theory and clarifying whether or not it is possible to obtain, by means of the minimum length, a theory in D=4 which is causal, unitary, and provides a massive graviton. To this end, we calculate and analyze the dispersion relations of the considered theory.

  11. Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium

    Treesearch

    Raymond L. Czaplewski

    1991-01-01

    The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines two prior estimates of a population parameter with a weighted average, where each scalar weight is inversely proportional to the corresponding variance. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
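
    In the univariate case, the composite estimate described above reduces to an inverse-variance weighted average; a minimal sketch with illustrative numbers:

```python
# Minimal sketch of the univariate composite (inverse-variance) estimator.
def composite(x1, var1, x2, var2):
    w = var2 / (var1 + var2)             # weight on x1: inversely prop. to var1
    est = w * x1 + (1.0 - w) * x2        # combined estimate
    var = var1 * var2 / (var1 + var2)    # variance of the combined estimate
    return est, var

print(composite(100.0, 25.0, 110.0, 100.0))   # -> (102.0, 20.0)
```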

  12. Spectral factorization of wavefields and wave operators

    NASA Astrophysics Data System (ADS)

    Rickett, James Edward

    Spectral factorization is the problem of finding a minimum-phase function with a given power spectrum. Minimum-phase functions have the property that they are causal with a causal (stable) inverse. In this thesis, I factor multidimensional systems into their minimum-phase components. Helical boundary conditions resolve any ambiguities over causality, allowing me to factor multidimensional systems with conventional one-dimensional spectral factorization algorithms. In the first part, I factor passive seismic wavefields recorded in two-dimensional spatial arrays. The result provides an estimate of the acoustic impulse response of the medium that has higher bandwidth than autocorrelation-derived estimates. Also, the function's minimum-phase nature mimics the physics of the system better than the zero-phase autocorrelation model. I demonstrate this on helioseismic data recorded by the satellite-based Michelson Doppler Imager (MDI) instrument, and on shallow seismic data recorded at Long Beach, California. In the second part of this thesis, I take advantage of the stable-inverse property of minimum-phase functions to solve wave-equation partial differential equations. By factoring multidimensional finite-difference stencils into minimum-phase components, I can invert them efficiently, facilitating rapid implicit extrapolation without the azimuthal anisotropy that is observed with splitting approximations. The final part of this thesis describes how to calculate diagonal weighting functions that approximate the combined operation of seismic modeling and migration. These weighting functions capture the effects of irregular subsurface illumination, which can result from either the surface-recording geometry or the focusing and defocusing of the seismic wavefield as it propagates through the earth. Since they are diagonal, they can easily be both factored and inverted to compensate for uneven subsurface illumination in migrated images. Experimental results show that applying these weighting functions after migration leads to significantly improved estimates of seismic reflectivity.
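
    The one-dimensional building block referred to above can be sketched with the Kolmogorov (cepstral) method: take the log of the power spectrum, keep the causal part of its inverse transform, and exponentiate back. The sketch below assumes a strictly positive spectrum sampled on the FFT grid; the thesis's multidimensional factorization additionally maps data to one dimension with helical boundary conditions.

```python
# Minimal sketch of 1-D Kolmogorov spectral factorization: returns a
# minimum-phase wavelet whose squared amplitude spectrum matches the input.
import numpy as np

def kolmogorov_factor(power_spectrum):
    """power_spectrum: real, strictly positive, length-n array on the FFT grid."""
    n = len(power_spectrum)
    cep = np.fft.ifft(0.5 * np.log(power_spectrum))  # cepstrum of sqrt(S)
    w = np.zeros(n)
    w[0] = 1.0                  # zero lag kept once
    w[1:(n + 1) // 2] = 2.0     # positive lags doubled (causal part)
    if n % 2 == 0:
        w[n // 2] = 1.0         # Nyquist lag kept once for even n
    return np.real(np.fft.ifft(np.exp(np.fft.fft(w * cep))))
```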

  13. The DSM-5 diagnostic criteria for anorexia nervosa may change its population prevalence and prognostic value.

    PubMed

    Mustelin, Linda; Silén, Yasmina; Raevuori, Anu; Hoek, Hans W; Kaprio, Jaakko; Keski-Rahkonen, Anna

    2016-06-01

    The definition of anorexia nervosa was revised for the Fifth Edition of the Diagnostic and Statistical Manual (DSM-5). We examined the impact of these changes on the prevalence and prognosis of anorexia nervosa. In a nationwide longitudinal study of Finnish twins born 1975-1979, the women (N = 2825) underwent a 2-stage screening for eating disorders at mean age 24. Fifty-five women fulfilled DSM-IV criteria for lifetime anorexia nervosa. When we recoded the interviews using DSM-5 criteria, we detected 37 new cases. We contrasted new DSM-5 vs. DSM-IV cases to assess their clinical characteristics and prognosis. We also estimated lifetime prevalences and incidences and tested the association of minimum BMI with prognosis. We observed a 60% increase in the lifetime prevalence of anorexia nervosa using the new diagnostic boundaries, from 2.2% to 3.6%. The new cases had a later age of onset (18.8 y vs. 16.5, p = 0.002), higher minimum BMI (16.9 vs. 15.5 kg/m(2), p = 0.0004), a shorter duration of illness (one year vs. three years, p = 0.002), and a higher 5-year probability of recovery (81% vs. 67%, p = 0.002). Minimum BMI was not associated with prognosis. It therefore appears that the substantial increase in prevalence of anorexia nervosa is offset by a more benign course of illness in new cases. Increased diagnostic heterogeneity underscores the need for reliable indicators of disease severity. Our findings indicate that BMI may not be an ideal severity marker, but should be complemented by prognostically informative criteria. Future studies should focus on identifying such factors in prospective settings. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Minimum information about a single amplified genome (MISAG) and a metagenome-assembled genome (MIMAG) of bacteria and archaea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas

    We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including, but not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.

  15. Minimum information about a single amplified genome (MISAG) and a metagenome-assembled genome (MIMAG) of bacteria and archaea

    DOE PAGES

    Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas; ...

    2017-08-08

    Here, we present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including, but not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.

  16. Minimum information about a single amplified genome (MISAG) and a metagenome-assembled genome (MIMAG) of bacteria and archaea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas

    Here, we present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including, but not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.

  17. Robust linear discriminant models to solve financial crisis in banking sectors

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni

    2014-12-01

    Linear discriminant analysis (LDA) is a widely used technique for pattern classification via an equation that minimizes the probability of misclassifying cases into their respective categories. However, the performance of the classical estimators in LDA depends heavily on the assumptions of normality and homoscedasticity. Several robust estimators for LDA, such as the Minimum Covariance Determinant (MCD), S-estimators, and the Minimum Volume Ellipsoid (MVE), have been proposed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis in the Malaysian banking sector using robust and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia by using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models. The performance of LDA is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
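
    As an illustration of the idea, one robust variant replaces the classical means and pooled covariance in Fisher's rule with MCD estimates. A minimal sketch, assuming two classes given as row-per-case arrays, with scikit-learn's MinCovDet standing in for the estimators discussed in the paper:

```python
# Minimal sketch of a robust LDA direction: plug MCD location/scatter
# estimates into Fisher's rule w = S^-1 (m1 - m0).
import numpy as np
from sklearn.covariance import MinCovDet

def robust_lda_direction(X0, X1):
    """X0, X1: (n_samples, n_features) arrays for the two classes."""
    mcd0, mcd1 = MinCovDet().fit(X0), MinCovDet().fit(X1)
    n0, n1 = len(X0), len(X1)
    pooled = (n0 * mcd0.covariance_ + n1 * mcd1.covariance_) / (n0 + n1)
    return np.linalg.solve(pooled, mcd1.location_ - mcd0.location_)
```

    Cases are then assigned by thresholding the projection X @ w, and the hit ratio is simply the fraction of cases classified correctly.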

  18. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator, under the assumption of samples with a heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing the shrinkage intensity online. Our portfolio optimization method is shown via simulations to outperform existing methods for both synthetic and real market data.
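
    For reference, once a covariance estimate Sigma is in hand, the global minimum variance portfolio has the closed form w = Sigma^-1 1 / (1' Sigma^-1 1). A minimal sketch, with plain Ledoit-Wolf shrinkage standing in for the paper's hybrid Tyler/Ledoit-Wolf estimator:

```python
# Minimal sketch of a global minimum-variance portfolio on a shrinkage
# covariance estimate (Ledoit-Wolf here, not the paper's hybrid estimator).
import numpy as np
from sklearn.covariance import LedoitWolf

def min_variance_weights(returns):
    """returns: (n_samples, n_assets) array of asset returns."""
    sigma = LedoitWolf().fit(returns).covariance_
    w = np.linalg.solve(sigma, np.ones(returns.shape[1]))  # prop. to Sigma^-1 1
    return w / w.sum()                                     # weights sum to one
```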

  19. The minimum follow-up required for radial head arthroplasty: a meta-analysis.

    PubMed

    Laumonerie, P; Reina, N; Kerezoudis, P; Declaux, S; Tibbo, M E; Bonnevialle, N; Mansat, P

    2017-12-01

    The primary aim of this study was to define the standard minimum follow-up required to produce a reliable estimate of the rate of re-operation after radial head arthroplasty (RHA). The secondary objective was to define the leading reasons for re-operation. Four electronic databases were searched between January 2000 and March 2017. Articles reporting reasons for re-operation (Group I) and results (Group II) after RHA were included. In Group I, a meta-analysis was performed to obtain the standard minimum follow-up, the mean time to re-operation, and the reasons for failure. In Group II, the minimum follow-up for each study was compared with the standard minimum follow-up. A total of 40 studies were analysed: three were Group I and included 80 implants, and 37 were Group II and included 1192 implants. In Group I, the mean time to re-operation was 1.37 years (0 to 11.25), the standard minimum follow-up was 3.25 years, and painful loosening was the main indication for re-operation. In Group II, 33 articles (89.2%) reported a minimum follow-up of < 3.25 years. The literature therefore does not provide a reliable estimate of the rate of re-operation after RHA. The reproducibility of results would be improved by using a minimum follow-up of three years combined with a consensus definition of the reasons for failure after RHA. Cite this article: Bone Joint J 2017;99-B:1561-70. ©2017 The British Editorial Society of Bone & Joint Surgery.

  20. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  1. Rates of anterior tooth wear in Middle Pleistocene hominins from Sima de los Huesos (Sierra de Atapuerca, Spain).

    PubMed

    Bermúdez de Castro, J M; Martinón-Torres, M; Sarmiento, S; Lozano, M; Arsuaga, J L; Carbonell, E

    2003-10-14

    This study presents quantitative data on the rates of anterior tooth wear in a Pleistocene human population. The data were obtained for the hominin sample of the Sima de los Huesos site in Atapuerca, Spain. The fossil record belongs to a minimum of 28 individuals of the same biological population, assigned to the species Homo heidelbergensis. We have estimated the original and the preserved crown height of the mandibular incisors (I1 and I2) of 11 individuals, whose age at death can be ascertained from the mineralization stage and tooth eruption. Results provide a range of 0.276-0.348 and 0.288-0.360 mm per year for the mean wear rate of the mandibular I1 and I2, respectively, in individuals approximately 16-18 years old. These data suggest that incisors' crowns would be totally worn out toward the fifth decade of life. Thus, we expect the life expectancy of this population to be seriously limited. These data, which could be contrasted with results obtained on hominins at other sites, could be of interest for estimating the age at death of adult individuals.

  2. Comparison of two probability distributions used to model sizes of undiscovered oil and gas accumulations: Does the tail wag the assessment?

    USGS Publications Warehouse

    Attanasi, E.D.; Charpentier, R.R.

    2002-01-01

    Undiscovered oil and gas assessments are commonly reported as aggregate estimates of hydrocarbon volumes. Potential commercial value and discovery costs are, however, determined by accumulation size, so engineers, economists, decision makers, and sometimes policy analysts are most interested in projected discovery sizes. The lognormal and Pareto distributions have been used to model exploration target sizes. This note contrasts the outcomes of applying these alternative distributions to the play-level assessments of the U.S. Geological Survey's 1995 National Oil and Gas Assessment. Using the same numbers of undiscovered accumulations and the same minimum, medium, and maximum size estimates, substitution of the shifted truncated lognormal distribution for the shifted truncated Pareto distribution reduced assessed undiscovered oil by 16% and gas by 15%. Nearly all of the volume differences resulted because the lognormal had fewer larger fields relative to the Pareto. The lognormal also resulted in a smaller number of small fields relative to the Pareto. For the Permian Basin case study presented here, reserve addition costs were 20% higher with the lognormal size assumption. © 2002 International Association for Mathematical Geology.
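
    The tail behaviour driving these differences is easy to reproduce. A minimal sketch comparing upper-tail probabilities of a lognormal and a Pareto size model that share the same median (parameter values are illustrative, not those of the 1995 National Assessment):

```python
# Minimal sketch: lognormal vs. Pareto upper tails at a common median.
from scipy.stats import lognorm, pareto

median = 10.0                            # accumulation size, arbitrary units
ln = lognorm(s=1.0, scale=median)        # lognormal with median = scale
pa = pareto(b=1.0, scale=median / 2.0)   # Pareto(b=1) has median = 2*scale

for size in (50.0, 100.0, 500.0):
    # the Pareto model retains far more probability of very large fields
    print(size, ln.sf(size), pa.sf(size))
```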

  3. Rates of anterior tooth wear in Middle Pleistocene hominins from Sima de los Huesos (Sierra de Atapuerca, Spain)

    PubMed Central

    de Castro, J. M. Bermúdez; Martinón-Torres, M.; Sarmiento, S.; Lozano, M.; Arsuaga, J. L.; Carbonell, E.

    2003-01-01

    This study presents quantitative data on the rates of anterior tooth wear in a Pleistocene human population. The data were obtained for the hominin sample of the Sima de los Huesos site in Atapuerca, Spain. The fossil record belongs to a minimum of 28 individuals of the same biological population, assigned to the species Homo heidelbergensis. We have estimated the original and the preserved crown height of the mandibular incisors (I1 and I2) of 11 individuals, whose age at death can be ascertained from the mineralization stage and tooth eruption. Results provide a range of 0.276–0.348 and 0.288–0.360 mm per year for the mean wear rate of the mandibular I1 and I2, respectively, in individuals ≈16–18 years old. These data suggest that incisors' crowns would be totally worn out toward the fifth decade of life. Thus, we expect the life expectancy of this population to be seriously limited. These data, which could be contrasted with results obtained on hominins at other sites, could be of interest for estimating the age at death of adult individuals. PMID:14528001

  4. Do dichromats see colours in this way? Assessing simulation tools without colorimetric measurements.

    PubMed

    Lillo Jover, Julio A; Álvaro Llorente, Leticia; Moreira Villegas, Humberto; Melnikova, Anna

    2016-11-01

    Simulcheck evaluates Colour Simulation Tools (CSTs), which transform colours to mimic those seen by people with colour vision deficiencies. Two CSTs (Variantor and Coblis) were used to determine whether the standard Simulcheck version (direct measurement based, DMB) can be substituted by another (RGB values based) that does not require sophisticated measurement instruments. Ten normal trichromats performed the two psychophysical tasks included in the Simulcheck method. The Pseudoachromatic Stimuli Identification task provided the h_uv (hue angle) values of the pseudoachromatic stimuli: colours seen as red or green by normal trichromats but as grey by colour deficient people. The Minimum Achromatic Contrast task was used to compute the L_R (relative luminance) values of the pseudoachromatic stimuli. The Simulcheck DMB version showed that Variantor was accurate in simulating protanopia, but neither Variantor nor Coblis was accurate in simulating deuteranopia. The Simulcheck RGB version provided accurate h_uv values, so this variable can be adequately estimated when a colorimeter (an expensive and uncommon instrument) is unavailable. In contrast, the inaccuracy of the L_R estimates provided by the Simulcheck RGB version makes it advisable to compute this variable from measurements performed with a photometer, a cheap and readily available instrument.

  5. Idiopathic granulomatous mastitis: magnetic resonance imaging findings with diffusion MRI.

    PubMed

    Aslan, Hulya; Pourbagher, Aysin; Colakoglu, Tamer

    2016-07-01

    Idiopathic granulomatous mastitis (IGM) is a rare benign breast disease of unknown etiology that can mimic breast carcinoma both clinically and radiologically. Magnetic resonance imaging (MRI) findings of IGM have been previously described; however, no study has evaluated diffusion-weighted MRI findings of IGM. Our aim was to analyze conventional, dynamic contrast-enhanced, and diffusion-weighted MRI signal characteristics of IGM by comparison with the contralateral normal breast parenchyma. A total of 39 patients were included in the study. On dynamic contrast-enhanced MRI, the distribution and enhancement patterns of the lesions were evaluated. We also recorded the frequencies of involved quadrants, retroareolar involvement, accompanying abscess, and skin edema. T2-weighted (T2W) and STIR signal intensities and both mean and minimum apparent diffusion coefficient (ADC) values were compared with the contralateral normal parenchyma. IGM showed significantly lower mean and minimum ADC values when compared with the normal parenchyma. Signal intensities on T2W and STIR sequences of the lesion were significantly higher than those of the normal parenchyma. On dynamic contrast-enhanced MRI, 7.7% of the patients had mass-like contrast enhancement and 92.3% had non-mass-like contrast enhancement. Abscess was present in 33.3% of the patients. In summary, IGM commonly showed non-mass-like lesions with restricted diffusion. Although it is a benign pathology, it may show clustered ring-like enhancement resembling malignant lesions. © The Foundation Acta Radiologica 2015.

  6. How Dusty Is Alpha Centauri? Excess or Non-excess over the Infrared Photospheres of Main-sequence Stars

    NASA Technical Reports Server (NTRS)

    Wiegert, J.; Liseau, R.; Thebault, P.; Olofsson, G.; Mora, A.; Bryden, G.; Marshall, J. P.; Eiroa, C.; Montesinos, B.; Ardila, D.; hide

    2014-01-01

    Context. Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby, solar-type binary α Centauri have metallicities that are higher than solar, which is thought to promote giant planet formation. Aims. We aim to determine the level of emission from debris around the stars in the α Cen system. This requires knowledge of their photospheres. Having already detected the temperature minimum, Tmin, of α Cen A at far-infrared wavelengths, we here attempt to do the same for the more active companion α Cen B. Using the α Cen stars as templates, we study the possible effects that Tmin may have on the detectability of unresolved dust discs around other stars. Methods. We used Herschel-PACS, Herschel-SPIRE, and APEX-LABOCA photometry to determine the stellar spectral energy distributions in the far infrared and submillimetre. In addition, we used APEX-SHeFI observations for spectral line mapping to study the complex background around α Cen seen in the photometric images. Models of stellar atmospheres and of particulate discs, based on particle simulations and in conjunction with radiative transfer calculations, were used to estimate the amount of debris around these stars. Results. For solar-type stars more distant than α Cen, a fractional dust luminosity f_d = L_dust/L_star ≈ 2 × 10^-7 could account for SEDs that do not exhibit the Tmin effect. This is comparable to estimates of f_d for the Edgeworth-Kuiper belt of the solar system. In contrast to the far infrared, slight excesses at the 2.5σ level are observed at 24 μm for both α Cen A and B which, if interpreted as being due to zodiacal-type dust emission, would correspond to f_d ≈ (1-3) × 10^-5, i.e. some 10^2 times that of the local zodiacal cloud. Assuming simple power-law size distributions of the dust grains, dynamical disc modelling leads to rough mass estimates of the putative Zodi belts around the α Cen stars, viz. ~4 × 10^-6 M⊕ of 4 to 1000 μm size grains, distributed according to n(a) ∝ a^-3.5. Similarly, for filled-in Tmin emission, corresponding Edgeworth-Kuiper belts could account for ~10^-3 M⊕ of dust. Conclusions. Our far-infrared observations lead to estimates of upper limits to the amount of circumstellar dust around the stars α Cen A and B. Light scattered and/or thermally emitted by exo-Zodi discs will have profound implications for future spectroscopic missions designed to search for biomarkers in the atmospheres of Earth-like planets. The far-infrared spectral energy distribution of α Cen B is marginally consistent with the presence of a minimum temperature region in the upper atmosphere of the star. We also show that an α Cen A-like temperature minimum may result in an erroneous apprehension about the presence of dust around other, more distant stars.

  7. Polarized object detection in crabs: a two-channel system.

    PubMed

    Basnak, Melanie Ailín; Pérez-Schuster, Verónica; Hermitte, Gabriela; Berón de Astrada, Martín

    2018-05-25

    Many animal species take advantage of polarization vision for vital tasks such as orientation, communication and contrast enhancement. Previous studies have suggested that decapod crustaceans use a two-channel polarization system for contrast enhancement. Here, we characterize the polarization contrast sensitivity in a grapsid crab. We estimated the polarization contrast sensitivity of the animals by quantifying both their escape response and changes in heart rate when presented with polarized motion stimuli. The motion stimulus consisted of an expanding disk with an 82 deg polarization difference between the object and the background. More than 90% of animals responded by freezing or trying to avoid the polarized stimulus. In addition, we co-rotated the electric vector (e-vector) orientation of the light from the object and background by increments of 30 deg and found that the animals' escape response varied periodically with a 90 deg period. Maximum escape responses were obtained for object and background e-vectors near the vertical and horizontal orientations. Changes in cardiac response showed parallel results but also a minimum response when e-vectors of object and background were shifted by 45 deg with respect to the maxima. These results are consistent with an orthogonal receptor arrangement for the detection of polarized light, in which two channels are aligned with the vertical and horizontal orientations. It has been hypothesized that animals with object-based polarization vision rely on a two-channel detection system analogous to that of color processing in dichromats. Our results, obtained by systematically varying the e-vectors of object and background, provide strong empirical support for this theoretical model of polarized object detection. © 2018. Published by The Company of Biologists Ltd.

  8. The capability of fluoroscopic systems to determine differential Roentgen-ray absorption

    NASA Technical Reports Server (NTRS)

    Baily, N. A.; Crepeau, R. L.

    1975-01-01

    A clinical fluoroscopic unit used in conjunction with a TV image digitization system was investigated to determine its capability to evaluate differential absorption between two areas in the same field. Fractional contrasts and minimum detectability for air, several concentrations of Renografin-60, and aluminum were studied using phantoms of various thicknesses. Results showed that the videometric response, when treated as contrast, varies linearly with absorber thickness up to considerable thicknesses.

  9. Methods and Systems for Characterization of an Anomaly Using Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M. (Inventor)

    2013-01-01

    A method for characterizing an anomaly in a material comprises (a) extracting contrast data; (b) measuring a contrast evolution; (c) filtering the contrast evolution; (d) measuring a peak amplitude of the contrast evolution; (e) determining a diameter and a depth of the anomaly; and (f) repeating the step of determining the diameter and the depth of the anomaly until the change in the estimate of the depth is less than a set value. The step of determining the diameter and the depth of the anomaly comprises estimating the depth using a diameter constant C_D equal to one for the first iteration of determining the diameter and the depth; estimating the diameter; and comparing the estimate of the depth of the anomaly after each iteration of estimating to the prior estimate of the depth to calculate the change in the estimate of the depth of the anomaly.
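
    The iteration in steps (e)-(f) is a simple fixed-point loop; a minimal sketch of that control flow follows, with the thermographic depth/diameter relations (not given in the abstract) passed in as caller-supplied functions:

```python
# Minimal sketch of the iterate-until-depth-converges structure. The
# relations depth_from(diam) and diameter_from(depth) are hypothetical
# stand-ins for the patent's thermographic equations.
def characterize(depth_from, diameter_from, depth0, diam0, tol=1e-4):
    depth, diam = depth0, diam0
    for _ in range(100):                    # safety cap on iterations
        new_depth = depth_from(diam)
        new_diam = diameter_from(new_depth)
        if abs(new_depth - depth) < tol:    # stop once the depth settles
            break
        depth, diam = new_depth, new_diam
    return new_depth, new_diam

# toy relations, for demonstration only
print(characterize(lambda a: 1.0 / (1.0 + a), lambda d: 0.5 * d, 1.0, 1.0))
```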

  10. Estimation of Surface Air Temperature from MODIS 1km Resolution Land Surface Temperature Over Northern China

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, Gregory G.; Gerasimov, Irina

    2010-01-01

    Surface air temperature is a critical variable to describe the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. It is a very important variable in agricultural applications and climate change studies. This is a preliminary study to examine statistical relationships between station-measured daily maximum/minimum surface air temperature and satellite remotely sensed land surface temperature from MODIS over the dry and semiarid regions of northern China. Studies were conducted for both MODIS-Terra and MODIS-Aqua using year 2009 data. Results indicate that the relationships between surface air temperature and remotely sensed land surface temperature are statistically significant. The relationship between the maximum air temperature and daytime land surface temperature depends significantly on land surface type and vegetation index, but the minimum air temperature and nighttime land surface temperature show little dependence on surface conditions. Based on the linear regression relationship between surface air temperature and MODIS land surface temperature, surface maximum and minimum air temperatures are estimated from 1 km MODIS land surface temperature under clear-sky conditions. The statistical errors (sigma) of the estimated daily maximum (minimum) air temperature are about 3.8 °C (3.7 °C).
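
    The estimation step is ordinary least-squares regression of station air temperature on collocated MODIS land surface temperature. A minimal sketch with illustrative numbers (in the study, the coefficients for the maximum temperature are additionally stratified by land cover type):

```python
# Minimal sketch: fit Ta_min = a * LST_night + b at station pixels, then
# apply the line to new LST values. Data below are illustrative only.
import numpy as np

lst_night = np.array([2.0, 5.0, 9.0, 14.0, 20.0])  # MODIS nighttime LST (C)
ta_min = np.array([0.5, 3.8, 7.6, 12.9, 18.4])     # station minimum Ta (C)

a, b = np.polyfit(lst_night, ta_min, 1)            # least-squares line
resid = a * lst_night + b - ta_min
print(a, b, np.sqrt(np.mean(resid ** 2)))          # slope, intercept, RMSE
```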

  11. Low Streamflow Forecasting using Minimum Relative Entropy

    NASA Astrophysics Data System (ADS)

    Cui, H.; Singh, V. P.

    2013-12-01

    Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation structure in such a way that the relative entropy of the underlying process is minimized, so that the time series can be forecast. Different priors, such as uniform, exponential, and Gaussian assumptions, are taken to estimate the spectral density depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecast using the proposed method. Minimum relative entropy determines the spectrum of the low streamflow series with higher resolution than conventional methods. The forecast streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.

  12. Minimum area thresholds for rattlesnakes and colubrid snakes on islands in the Gulf of California, Mexico.

    PubMed

    Meik, Jesse M; Makowsky, Robert

    2018-01-01

    We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting single species versus coexistence of two or more species relate to hypotheses of the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km2), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km2) was over five times greater than it was for supporting two or more species of colubrids (6.7 km2). The great differences between rattlesnakes and colubrids in minimum area required to support more than one species imply that for islands in the Gulf of California relative extinction risks are higher for coexistence of multiple species of rattlesnakes and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.

  13. Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage.

    PubMed

    Cadena, Brian C

    2014-03-01

    This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants' location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage, yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the demand elasticity estimated from teen employment: employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents.

  14. The Effect of Minimum Wages on Adolescent Fertility: A Nationwide Analysis.

    PubMed

    Bullinger, Lindsey Rose

    2017-03-01

    To investigate the effect of minimum wage laws on adolescent birth rates in the United States. I used a difference-in-differences approach and vital statistics data measured quarterly at the state level from 2003 to 2014. All models included state covariates, state and quarter-year fixed effects, and state-specific quarter-year nonlinear time trends, which provided plausibly causal estimates of the effect of minimum wage on adolescent birth rates. A $1 increase in minimum wage reduces adolescent birth rates by about 2%. The effects are driven by non-Hispanic White and Hispanic adolescents. Nationwide, increasing minimum wages by $1 would likely result in roughly 5000 fewer adolescent births annually.

  15. Tritium as an indicator of ground-water age in Central Wisconsin

    USGS Publications Warehouse

    Bradbury, Kenneth R.

    1991-01-01

    In regions where ground water is generally younger than about 30 years, developing the tritium input history of an area for comparison with the current tritium content of ground water allows quantitative estimates of minimum ground-water age. The tritium input history for central Wisconsin has been constructed using precipitation tritium measured at Madison, Wisconsin and elsewhere. Weighted tritium inputs to ground water reached a peak of over 2,000 TU in 1964, and have declined since that time to about 20-30 TU at present. In the Buena Vista basin in central Wisconsin, most ground-water samples contained elevated levels of tritium, and estimated minimum ground-water ages in the basin ranged from less than one year to over 33 years. Ground water in mapped recharge areas was generally younger than ground water in discharge areas, and estimated ground-water ages were consistent with flow system interpretations based on other data. Estimated minimum ground-water ages increased with depth in areas of downward ground-water movement. However, water recharging through thick moraine sediments was older than water in other recharge areas, reflecting slower infiltration through the sandy till of the moraine.

  16. Local introduction and heterogeneous spatial spread of dengue-suppressing Wolbachia through an urban population of Aedes aegypti

    PubMed Central

    Schmidt, Tom L.; Barton, Nicholas H.; Rašić, Gordana; Turley, Andrew P.; Montgomery, Brian L.; Iturbe-Ormaetxe, Inaki; Cook, Peter E.; Ryan, Peter A.; Ritchie, Scott A.; Hoffmann, Ary A.; O’Neill, Scott L.

    2017-01-01

    Dengue-suppressing Wolbachia strains are promising tools for arbovirus control, particularly as they have the potential to self-spread following local introductions. To test this, we followed the frequency of the transinfected Wolbachia strain wMel through Ae. aegypti in Cairns, Australia, following releases at 3 nonisolated locations within the city in early 2013. Spatial spread was analysed graphically using interpolation and by fitting a statistical model describing the position and width of the wave. For the larger 2 of the 3 releases (covering 0.97 km2 and 0.52 km2), we observed slow but steady spatial spread, at about 100–200 m per year, roughly consistent with theoretical predictions. In contrast, the smallest release (0.11 km2) produced erratic temporal and spatial dynamics, with little evidence of spread after 2 years. This is consistent with the prediction concerning fitness-decreasing Wolbachia transinfections that a minimum release area is needed to achieve stable local establishment and spread in continuous habitats. Our graphical and likelihood analyses produced broadly consistent estimates of wave speed and wave width. Spread at all sites was spatially heterogeneous, suggesting that environmental heterogeneity will affect large-scale Wolbachia transformations of urban mosquito populations. The persistence and spread of Wolbachia in release areas meeting minimum area requirements indicates the promise of successful large-scale population transformation. PMID:28557993

  17. Local introduction and heterogeneous spatial spread of dengue-suppressing Wolbachia through an urban population of Aedes aegypti.

    PubMed

    Schmidt, Tom L; Barton, Nicholas H; Rašić, Gordana; Turley, Andrew P; Montgomery, Brian L; Iturbe-Ormaetxe, Inaki; Cook, Peter E; Ryan, Peter A; Ritchie, Scott A; Hoffmann, Ary A; O'Neill, Scott L; Turelli, Michael

    2017-05-01

    Dengue-suppressing Wolbachia strains are promising tools for arbovirus control, particularly as they have the potential to self-spread following local introductions. To test this, we followed the frequency of the transinfected Wolbachia strain wMel through Ae. aegypti in Cairns, Australia, following releases at 3 nonisolated locations within the city in early 2013. Spatial spread was analysed graphically using interpolation and by fitting a statistical model describing the position and width of the wave. For the larger 2 of the 3 releases (covering 0.97 km2 and 0.52 km2), we observed slow but steady spatial spread, at about 100-200 m per year, roughly consistent with theoretical predictions. In contrast, the smallest release (0.11 km2) produced erratic temporal and spatial dynamics, with little evidence of spread after 2 years. This is consistent with the prediction concerning fitness-decreasing Wolbachia transinfections that a minimum release area is needed to achieve stable local establishment and spread in continuous habitats. Our graphical and likelihood analyses produced broadly consistent estimates of wave speed and wave width. Spread at all sites was spatially heterogeneous, suggesting that environmental heterogeneity will affect large-scale Wolbachia transformations of urban mosquito populations. The persistence and spread of Wolbachia in release areas meeting minimum area requirements indicates the promise of successful large-scale population transformation.

  18. Determination of the minimal clinically important difference for seven measures of fatigue in Swedish patients with systemic lupus erythematosus.

    PubMed

    Pettersson, S; Lundberg, I E; Liang, M H; Pouchot, J; Henriksson, E Welin

    2015-05-01

    To estimate the minimal clinically important difference (MCID) in seven self-administered measures assessing fatigue in Swedish patients with systemic lupus erythematosus (SLE). The participants (n = 51, women 98%, age 52.8 ± 12.1 years, disease duration 18.7 ± 13.6 years) met in groups of six to nine persons. After completing seven fatigue questionnaires [the Fatigue Severity Scale (FSS); the Multidimensional Assessment of Fatigue (MAF) scale; the 20-item Multidimensional Fatigue Inventory (MFI); the Chalder Fatigue Scale (CFS); the Short Form-36 Vitality subscale (VT); the Functional Assessment of Chronic Illness Therapy - Fatigue (FACIT-F) scale; and the Numeric Rating Scale (NRS)], each respondent had a minimum of five face-to-face discussions, followed by an individual comparative assessment of their own level of fatigue (seven-grade scale). This method resulted in 260 contrasting assessments; MCIDs were first calculated using the paired differences and then established by a regression approach. Patients were asked to comment on their experience with the questionnaires and whether they captured their fatigue adequately. The paired approach (using 'little more fatigue' as an anchor for the MCID during the face-to-face comparative assessments) provided estimates of 4.6-17.0; the regression approach provided estimates of 4.3-10.8. Estimates using the regression approach were consistently lower than those using the paired model. The MCID estimates were least favourable for the NRS, and fewer respondents supported its use compared with the other self-reported questionnaires. All seven instruments detect MCIDs for fatigue in Swedish patients with SLE. However, the single-question measure was not supported by the MCID estimates or by comments from the respondents.

  19. Updating estimates of low streamflow statistics to account for possible trends

    NASA Astrophysics Data System (ADS)

    Blum, A. G.; Archfield, S. A.; Hirsch, R. M.; Vogel, R. M.; Kiang, J. E.; Dudley, R. W.

    2017-12-01

    Given evidence of both increasing and decreasing trends in low flows in many streams, methods are needed to update estimators of the low flow statistics used in water resources management. One such metric is the 10-year annual low-flow statistic (7Q10), calculated as the annual minimum seven-day streamflow that is exceeded in nine out of ten years on average. Historical streamflow records may not be representative of current conditions at a site if environmental conditions are changing. We present a new approach to frequency estimation under nonstationary conditions that applies a stationary nonparametric quantile estimator to a subset of the annual minimum flow record. Monte Carlo simulation experiments were used to evaluate this approach across a range of trend and no-trend scenarios. Relative to the standard practice of using the entire available streamflow record, use of a nonparametric quantile estimator combined with selection of the most recent 30 or 50 years for 7Q10 estimation was found to improve accuracy and reduce bias. Benefits of the data subset selection approaches were greater for higher-magnitude trends and for annual minimum flow records with lower coefficients of variation. A nonparametric trend test approach for subset selection did not significantly improve upon always selecting the last 30 years of record. At 174 stream gages in the Chesapeake Bay region, 7Q10 estimators based on the most recent 30 years of flow record were compared to estimators based on the entire period of record. Given the availability of long records of low streamflow, using only a subset of the flow record (~30 years) can serve to update 7Q10 estimators to better reflect current streamflow conditions.
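
    The proposed estimator is straightforward to state: take only the most recent 30 (or 50) annual minimum seven-day flows and read off the 0.10 nonexceedance quantile nonparametrically. A minimal sketch with synthetic data (the Weibull plotting position is one reasonable choice of quantile rule; the paper's exact rule may differ):

```python
# Minimal sketch: 7Q10 as the 0.10 nonparametric quantile of the most
# recent 30 annual minimum 7-day flows (synthetic data; NumPy >= 1.22).
import numpy as np

def q7_10(annual_min_7day_flows, window=30):
    recent = np.asarray(annual_min_7day_flows)[-window:]   # latest years only
    return np.quantile(recent, 0.10, method="weibull")     # nonparametric

flows = np.random.default_rng(0).gamma(shape=4.0, scale=2.0, size=60)
print(q7_10(flows))
```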

  20. Estimating Renewable Energy Economic Potential in the United States. Methodology and Initial Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Austin; Beiter, Philipp; Heimiller, Donna

    This report describes a geospatial analysis method to estimate the economic potential of several renewable resources available for electricity generation in the United States. Economic potential, one measure of renewable generation potential, may be defined in several ways. For example, one definition might be expected revenues (based on local market prices) minus generation costs, considered over the expected lifetime of the generation asset. Another definition might be generation costs relative to a benchmark (e.g., a natural gas combined cycle plant) using assumptions of fuel prices, capital cost, and plant efficiency. Economic potential in this report is defined as the subset of the available resource technical potential where the cost required to generate the electricity (which determines the minimum revenue requirements for development of the resource) is below the revenue available in terms of displaced energy and displaced capacity. The assessment is conducted at a high geospatial resolution (more than 150,000 technology-specific sites in the continental United States) to capture the significant variation in local resource, costs, and revenue potential. This metric can be a useful screening factor for understanding the economic viability of renewable generation technologies at a specific location. In contrast to many common estimates of renewable energy potential, economic potential does not consider market dynamics, customer demand, or most policy drivers that may incent renewable energy generation.

  1. Requirements for Coregistration Accuracy in On-Scalp MEG.

    PubMed

    Zetter, Rasmus; Iivanainen, Joonas; Stenroos, Matti; Parkkonen, Lauri

    2018-06-22

    Recent advances in magnetic sensing have made on-scalp magnetoencephalography (MEG) possible. In particular, optically-pumped magnetometers (OPMs) have reached sensitivity levels that enable their use in MEG. In contrast to the SQUID sensors used in current MEG systems, OPMs do not require cryogenic cooling and can thus be placed within millimetres of the head, enabling the construction of sensor arrays that conform to the shape of an individual's head. To properly estimate the location of neural sources within the brain, one must accurately know the position and orientation of sensors in relation to the head. With the adaptable on-scalp MEG sensor arrays, this coregistration becomes more challenging than in current SQUID-based MEG systems that use rigid sensor arrays. Here, we used simulations to quantify how accurately one needs to know the position and orientation of sensors in an on-scalp MEG system. The effects that different types of localisation errors have on forward modelling and source estimates obtained by minimum-norm estimation, dipole fitting, and beamforming are detailed. We found that sensor position errors generally have a larger effect than orientation errors and that these errors affect the localisation accuracy of superficial sources the most. To obtain similar or higher accuracy than with current SQUID-based MEG systems, RMS sensor position and orientation errors should be [Formula: see text] and [Formula: see text], respectively.

  2. 49 CFR 38.75 - Priority seating signs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., with a minimum character height (using an upper case “X”) of 5/8 inch, with “wide” spacing (generally, the space between letters shall be 1/16 the height of upper case letters), and shall contrast with the...

  3. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  4. Minimum-error quantum distinguishability bounds from matrix monotone functions: A comment on 'Two-sided estimates of minimum-error distinguishability of mixed quantum states via generalized Holevo-Curlander bounds' [J. Math. Phys. 50, 032106 (2009)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, Jon

    2009-06-15

    Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.

  5. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…

  6. Time-Series Evidence of the Effect of the Minimum Wage on Youth Employment and Unemployment.

    ERIC Educational Resources Information Center

    Brown, Charles; And Others

    1983-01-01

    The study finds that a 10 percent increase in the federal minimum wage (or the coverage rate) would reduce teenage (16-19) employment by about one percent, which is at the lower end of the range of estimates from previous studies. (Author/SSH)

  7. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent 3-D holoscopic content, so efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean-square-error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal's statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical model. As bandwidth estimation (BE) is a key issue in KDE, we also propose a BE method based on the kernel trick. The experimental results demonstrate that the proposed scheme achieves better rate-distortion performance and better visual rendering quality.
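
    The density-estimation step underlying the MMSE predictor can be sketched with a standard Gaussian KDE; here SciPy's rule-of-thumb bandwidth stands in for the paper's kernel-trick bandwidth estimator:

```python
# Minimal sketch of the KDE step: estimate a pdf from sample data and
# evaluate it on a grid (Scott's rule bandwidth, not the paper's method).
import numpy as np
from scipy.stats import gaussian_kde

samples = np.random.default_rng(1).laplace(size=500)  # stand-in signal data
kde = gaussian_kde(samples)                           # Gaussian kernels
grid = np.linspace(-5.0, 5.0, 11)
print(kde(grid))                                      # estimated density values
```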

  8. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = Σ_{i=1}^{m} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal-based techniques provides satisfactory results.
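
    For a two-component mixture with known normal components, an MD estimate of the proportion can be obtained by minimizing the Cramér-von Mises distance between the model CDF and the empirical CDF; a minimal sketch follows (one flavour of minimum distance estimation, not necessarily the paper's exact criterion):

```python
# Minimal sketch of minimum-distance proportion estimation for
# f(x) = p*N(0,1) + (1-p)*N(3,1) with known components.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = np.sort(np.where(rng.random(400) < 0.3,
                     rng.normal(0.0, 1.0, 400), rng.normal(3.0, 1.0, 400)))
ecdf = (np.arange(1, len(x) + 1) - 0.5) / len(x)   # empirical CDF at the data

def cvm(p):  # squared distance between model CDF and empirical CDF
    model = p * norm.cdf(x, 0, 1) + (1 - p) * norm.cdf(x, 3, 1)
    return np.sum((model - ecdf) ** 2)

p_hat = minimize_scalar(cvm, bounds=(0.0, 1.0), method="bounded").x
print(p_hat)   # close to the true mixing proportion 0.3
```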

  9. Estimation of daily minimum land surface air temperature using MODIS data in southern Iran

    NASA Astrophysics Data System (ADS)

    Didari, Shohreh; Norouzi, Hamidreza; Zand-Parsa, Shahrokh; Khanbilvardi, Reza

    2017-11-01

    Land surface air temperature (LSAT) is a key variable in agricultural, climatological, hydrological, and environmental studies. Many of these processes are affected by the LSAT at about 5 cm above the ground surface (LSAT5cm). Most previous studies sought statistical models to estimate LSAT at 2 m height (LSAT2m), which is considered the standardized height, and there are few studies of LSAT5cm estimation models. Accurate measurements of LSAT5cm are generally acquired from meteorological stations, which are sparse in remote areas. Nonetheless, remote sensing data, by providing rather extensive spatial coverage, can complement the spatiotemporal shortcomings of meteorological stations. The main objective of this study was to find a statistical model based on previous-day data to accurately estimate the spatial daily minimum LSAT5cm, which is very important in agricultural frost studies, in Fars province in southern Iran. Land surface temperature (LST) data were obtained using the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Aqua and Terra satellites for daytime and nighttime periods, together with normalized difference vegetation index (NDVI) data. These data, along with geometric temperature and elevation information, were used in a stepwise linear model to estimate minimum LSAT5cm during 2003-2011. The results revealed that utilization of the previous day's MODIS Aqua nighttime data provides the most applicable and accurate model. According to the validation results, the accuracy of the proposed model was suitable during 2012 (root mean square difference (RMSD) = 3.07 °C, R^2_adj = 87%). The model underestimated (overestimated) high (low) minimum LSAT5cm. The accuracy of estimation in wintertime was lower than in the other seasons (RMSD = 3.55 °C); in summer and winter, the errors were larger than in the remaining seasons.

  10. Film viewing conditions in mammography.

    PubMed

    Hill, S J; Faulkner, K; Law, J; Starritt, H C

    1997-04-01

    A requirement for a minimum viewing box brightness of 3000 cd m⁻² for reading mammograms has been widely advocated. Some recent work has challenged that opinion by reporting no significant variation in the visibility of low contrast and fine detail objects over a wide range of brightness levels. This paper provides further experimental evidence to support the latter conclusion, at least over the range 1340-4190 cd m⁻², and suggests that the currently recommended minimum viewing box brightness levels need to be revised. The importance of reducing room lighting levels is fully confirmed.

  11. Low-signal, coronagraphic wavefront estimation with Kalman filtering in the high contrast imaging testbed

    NASA Astrophysics Data System (ADS)

    Riggs, A. J. Eldorado; Cady, Eric J.; Prada, Camilo M.; Kern, Brian D.; Zhou, Hanying; Kasdin, N. Jeremy; Groff, Tyler D.

    2016-07-01

    For direct imaging and spectral characterization of cold exoplanets in reflected light, the proposed Wide-Field Infrared Survey Telescope (WFIRST) Coronagraph Instrument (CGI) will carry two types of coronagraphs. The High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory has been testing both coronagraph types and demonstrated their abilities to achieve high contrast. Focal plane wavefront correction is used to estimate and mitigate aberrations. As the most time-consuming part of correction during a space mission, the acquisition of probed images for electric field estimation needs to be as short as possible. We present results from the HCIT of narrowband, low-signal wavefront estimation tests using a shaped pupil Lyot coronagraph (SPLC) designed for the WFIRST CGI. In the low-flux regime, the Kalman filter and iterated extended Kalman filter provide faster correction, better achievable contrast, and more accurate estimates than batch process estimation.
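
    As a generic illustration (not the HCIT pipeline itself), a linear Kalman measurement update of a per-pixel electric-field state [Re E, Im E] from probe-difference measurements might look like the following; the probe matrix H, noise levels, and toy numbers are assumptions.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One linear Kalman measurement update.
    x: state estimate (e.g. [Re E, Im E] at one focal-plane pixel)
    P: state covariance; z: measurements (probe-difference intensities)
    H: measurement Jacobian; R: measurement noise covariance."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: two probes, each sensitive to one field quadrature
E_true = np.array([1e-4, -5e-5])                 # [Re, Im] of true field
H = 4 * np.array([[1e-4, 0.0], [0.0, 1e-4]])     # assumed probe sensitivities
z = H @ E_true + 1e-9 * np.random.default_rng(2).standard_normal(2)
x, P = np.zeros(2), np.eye(2) * 1e-6             # diffuse prior
x, P = kalman_update(x, P, z, H, R=np.eye(2) * 1e-18)
print(x)  # close to E_true
```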

  12. Estimation of Surface Air Temperature Over Central and Eastern Eurasia from MODIS Land Surface Temperature

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, Gregory G.

    2011-01-01

    Surface air temperature (T(sub a)) is a critical variable in the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate estimation of T(sub a) from satellite remotely sensed land surface temperature (T(sub s)) by using MODIS-Terra data over two Eurasian regions: northern China and the former USSR (fUSSR). High correlations are observed in both regions between station-measured T(sub a) and MODIS T(sub s). The relationships between the maximum T(sub a) and daytime T(sub s) depend significantly on land cover types, but the minimum T(sub a) and nighttime T(sub s) have little dependence on the land cover types. The largest difference between maximum T(sub a) and daytime T(sub s) appears over the barren and sparsely vegetated area during the summer time. Using a linear regression method, the daily maximum T(sub a) values were estimated from 1 km resolution MODIS T(sub s) under clear-sky conditions with coefficients calculated based on land cover types, while the minimum T(sub a) values were estimated without considering land cover types. The uncertainty, mean absolute error (MAE), of the estimated maximum T(sub a) varies from 2.4 °C over closed shrublands to 3.2 °C over grasslands, and the MAE of the estimated minimum T(sub a) is about 3.0 °C.

  13. Waking the undead: Implications of a soft explosive model for the timing of placental mammal diversification.

    PubMed

    Springer, Mark S; Emerling, Christopher A; Meredith, Robert W; Janečka, Jan E; Eizirik, Eduardo; Murphy, William J

    2017-01-01

    The explosive, long fuse, and short fuse models represent competing hypotheses for the timing of placental mammal diversification. Support for the explosive model, which posits both interordinal and intraordinal diversification after the KPg mass extinction, derives from morphological cladistic studies that place Cretaceous eutherians outside of crown Placentalia. By contrast, most molecular studies favor the long fuse model wherein interordinal cladogenesis occurred in the Cretaceous followed by intraordinal cladogenesis after the KPg boundary. Phillips (2016) proposed a soft explosive model that allows for the emergence of a few lineages (Xenarthra, Afrotheria, Euarchontoglires, Laurasiatheria) in the Cretaceous, but otherwise agrees with the explosive model in positing the majority of interordinal diversification after the KPg mass extinction. Phillips (2016) argues that rate transference errors associated with large body size and long lifespan have inflated previous estimates of interordinal divergence times, and further suggests that most interordinal divergences are positioned after the KPg boundary when rate transference errors are avoided through the elimination of calibrations in large-bodied and/or long lifespan clades. Here, we show that rate transference errors can also occur in the opposite direction and drag forward estimated divergence dates when calibrations in large-bodied/long lifespan clades are omitted. This dragging forward effect results in the occurrence of more than half a billion years of 'zombie lineages' on Phillips' preferred timetree. By contrast with ghost lineages, which are a logical byproduct of an incomplete fossil record, zombie lineages occur when estimated divergence dates are younger than the minimum age of the oldest crown fossils. We also present the results of new timetree analyses that address the rate transference problem highlighted by Phillips (2016) by deleting taxa that exceed thresholds for body size and lifespan. These analyses recover all interordinal divergence times in the Cretaceous and are consistent with the long fuse model of placental diversification. Finally, we outline potential problems with morphological cladistic analyses of higher-level relationships among placental mammals that may account for the perceived discrepancies between molecular and paleontological estimates of placental divergence times. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Deriving the number of jobs in proximity services from the number of inhabitants in French rural municipalities.

    PubMed

    Lenormand, Maxime; Huet, Sylvie; Deffuant, Guillaume

    2012-01-01

    We use a minimum requirement approach to derive the number of jobs in proximity services per inhabitant in French rural municipalities. We first classify the municipalities according to their time distance in minutes by car to the municipality where the inhabitants go most frequently to obtain services (called the MFM). For each set corresponding to a range of time distances to the MFM, we perform a quantile regression estimating the minimum number of service jobs per inhabitant, which we interpret as an estimate of the number of proximity jobs per inhabitant. We observe that the minimum number of service jobs per inhabitant is smaller in small municipalities. Moreover, for municipalities of similar size, the number of proximity service jobs per inhabitant increases as the distance to the MFM increases.
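
    As an illustration of the quantile-regression step (the toy data, covariates, and the 5th-percentile choice are assumptions, not the paper's exact specification), using statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
pop = rng.uniform(100, 5000, 400)                # inhabitants (toy data)
jobs = 0.01 * pop + rng.gamma(2.0, 2.0, 400)     # service jobs (toy data)

X = sm.add_constant(np.log(pop))
# A low quantile approximates the "minimum requirement" floor of jobs
res = sm.QuantReg(np.log1p(jobs), X).fit(q=0.05)
print(res.params)
```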

  15. Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage*

    PubMed Central

    Cadena, Brian C.

    2014-01-01

    This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants’ location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, accounting for this mobility reduces the demand elasticity estimated from teen employment; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents. PMID:24999288

  16. Continuity vs. the Crowd-Tradeoffs Between Continuous and Intermittent Citizen Hydrology Streamflow Observations.

    PubMed

    Davids, Jeffrey C; van de Giesen, Nick; Rutten, Martine

    2017-07-01

    Hydrologic data have traditionally been collected with permanent installations of sophisticated and accurate but expensive monitoring equipment at a limited number of sites. Consequently, observation frequency and costs are high, but spatial coverage of the data is limited. Citizen Hydrology can potentially overcome these challenges by leveraging easily scaled mobile technology and local residents to collect hydrologic data at many sites. However, understanding of how decreased observation frequency impacts the accuracy of key streamflow statistics such as minimum flow, maximum flow, and runoff is limited. To evaluate this impact, we randomly selected 50 active United States Geological Survey streamflow gauges in California. We used 7 years of historical 15-min flow data from 2008 to 2014 to develop minimum flow, maximum flow, and runoff values for each gauge. To mimic lower-frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, and their respective distributions, from 50 subsample iterations at four subsampling frequencies ranging from daily to monthly. Minimum flows were estimated within 10% for half of the subsample iterations at 39 (daily) and 23 (monthly) of the 50 sites. However, maximum flows were estimated within 10% at only 7 (daily) and 0 (monthly) sites. Runoff volumes were estimated within 10% for half of the iterations at 44 (daily) and 12 (monthly) sites. Watershed flashiness most strongly impacted the accuracy of minimum flow, maximum flow, and runoff estimates from subsampled data. Depending on the questions being asked, lower frequency Citizen Hydrology observations can provide useful hydrologic information.
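
    A minimal sketch of the subsampling idea on a synthetic 15-min flow series; the paper's actual procedure (50 bootstrap iterations per frequency, real USGS records) is only loosely mirrored here.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy stand-in for 7 years of 15-min flow records
q = rng.lognormal(mean=2.0, sigma=1.0, size=7 * 365 * 96)
dt = 15 * 60                                   # seconds per record
truth = {"min": q.min(), "max": q.max(), "runoff": q.sum() * dt}

def subsample_stats(flow, step, n_iter=50):
    """Mimic lower-frequency observations by keeping one record every
    `step` samples, with a random phase per iteration (a simplified
    stand-in for the paper's bootstrap subsampling with replacement)."""
    out = []
    for _ in range(n_iter):
        s = flow[rng.integers(step)::step]
        out.append({"min": s.min(), "max": s.max(),
                    "runoff": s.mean() * len(flow) * dt})
    return out

daily = subsample_stats(q, step=96)            # one observation per day
frac = np.mean([abs(d["max"] - truth["max"]) / truth["max"] < 0.1
                for d in daily])
print(f"daily iterations with max flow within 10%: {frac:.0%}")
```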

  17. Comparison of irrigation pumpage with change in ground-water storage in the High Plains aquifer in Chase, Dundy, and Perkins counties, Nebraska, 1975-83

    USGS Publications Warehouse

    Heimes, F.J.; Ferrigno, C.F.; Gutentag, E.D.; Lucky, R.R.; Stephens, D.M.; Weeks, J.B.

    1987-01-01

    The relation between pumpage and change in storage was evaluated for most of a three-county area in southwestern Nebraska from 1975 through 1983. Initial comparison of the 1975-83 pumpage with change in storage in the study area indicated that the 1,042,300 acre-ft of change in storage was only about 30% of the 3,425,000 acre-ft of pumpage. An evaluation of the data used to calculate pumpage and change in storage indicated that there was a relatively large potential for error in estimates of specific yield. As a result, minimum and maximum values of specific yield were estimated and used to recalculate change in storage. Estimates also were derived for the minimum and maximum amounts of recharge that could occur as a result of cultivation practices. The minimum and maximum estimates for specific yield and for recharge from cultivation practices were used to compute a range of values for the potential amount of additional recharge that occurred as a result of irrigation. The minimum and maximum amounts of recharge that could be caused by irrigation in the study area were 953,200 acre-ft (28% of pumpage) and 2,611,200 acre-ft (76% of pumpage), respectively. These values indicate that a substantial percentage of the water pumped from the aquifer is resupplied to storage in the aquifer as a result of a combination of irrigation return flow and enhanced recharge from precipitation that results from cultivation and irrigation practices. (Author's abstract)

  18. Dual Energy Method for Breast Imaging: A Simulation Study.

    PubMed

    Koukou, V; Martini, N; Michail, C; Sotiropoulou, P; Fountzoula, C; Kalyvas, N; Kandarakis, I; Nikiforidis, G; Fountos, G

    2015-01-01

    Dual energy methods can suppress the contrast between adipose and glandular tissues in the breast and therefore enhance the visibility of calcifications. In this study, a dual energy (DE) method based on analytical modeling was developed for the detection of minimum microcalcification thickness. To this aim, a modified radiographic X-ray unit was considered, in order to overcome the limited kVp range of mammographic units used in previous DE studies, combined with a high resolution CMOS sensor (pixel size of 22.5 μm) for improved resolution. Various filter materials were examined based on their K-absorption edge. Hydroxyapatite (HAp) was used to simulate microcalcifications. The contrast-to-noise ratio (CNRtc) of the subtracted images was calculated for both monoenergetic and polyenergetic X-ray beams. The optimum monoenergetic pair was 23/58 keV for the low and high energy, respectively, resulting in a minimum detectable microcalcification thickness of 100 μm. In the polyenergetic X-ray study, the optimal spectral combination was 40/70 kVp filtered with 100 μm cadmium and 1000 μm copper, respectively. In this case, the minimum detectable microcalcification thickness was 150 μm. The proposed dual energy method provides improved microcalcification detectability in breast imaging with mean glandular dose values within acceptable levels.
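
    For orientation (a generic dual-energy scheme, not the paper's analytical model): weighted log subtraction with the weight chosen to cancel the adipose/glandular tissue term leaves calcifications as the residual signal. All numbers below are toy values.

```python
import numpy as np

def dual_energy_subtract(I_low, I_high, w):
    """Weighted log subtraction of low/high-energy images; w is tuned so
    the tissue contrast cancels, leaving calcifications visible."""
    return np.log(I_high) - w * np.log(I_low)

# Toy demo: tissue contrast cancels at the right w, calcification survives
tissue_low, tissue_high = 0.80, 0.90          # transmitted fractions (toy)
w = np.log(tissue_high) / np.log(tissue_low)  # cancels the tissue term
I_low = np.full((8, 8), tissue_low);  I_low[4, 4] *= 0.95   # calcification
I_high = np.full((8, 8), tissue_high); I_high[4, 4] *= 0.97
img = dual_energy_subtract(I_low, I_high, w)
print(img[4, 4] - img[0, 0])   # nonzero only at the calcification pixel
```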

  19. Dual Energy Method for Breast Imaging: A Simulation Study

    PubMed Central

    2015-01-01

    Dual energy methods can suppress the contrast between adipose and glandular tissues in the breast and therefore enhance the visibility of calcifications. In this study, a dual energy method based on analytical modeling was developed for the detection of minimum microcalcification thickness. To this aim, a modified radiographic X-ray unit was considered, in order to overcome the limited kVp range of mammographic units used in previous DE studies, combined with a high resolution CMOS sensor (pixel size of 22.5 μm) for improved resolution. Various filter materials were examined based on their K-absorption edge. Hydroxyapatite (HAp) was used to simulate microcalcifications. The contrast to noise ratio (CNRtc) of the subtracted images was calculated for both monoenergetic and polyenergetic X-ray beams. The optimum monoenergetic pair was 23/58 keV for the low and high energy, respectively, resulting in a minimum detectable microcalcification thickness of 100 μm. In the polyenergetic X-ray study, the optimal spectral combination was 40/70 kVp filtered with 100 μm cadmium and 1000 μm copper, respectively. In this case, the minimum detectable microcalcification thickness was 150 μm. The proposed dual energy method provides improved microcalcification detectability in breast imaging with mean glandular dose values within acceptable levels. PMID:26246848

  20. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268

  1. Relative dynamics and motion control of nanosatellite formation flying

    NASA Astrophysics Data System (ADS)

    Pimnoo, Ammarin; Hiraki, Koju

    2016-04-01

    Orbit selection is a necessary factor in nanosatellite formation mission design; meanwhile, keeping the formation requires fuel. Therefore, the best orbit design for nanosatellite formation flying should be the one that requires the minimum fuel consumption. The purpose of this paper is to analyse orbit selection with respect to minimum fuel consumption, to provide a convenient way to estimate the fuel consumption needed to keep a nanosatellite formation flying, and to present a simplified method of formation control. The formation structure is disturbed by the J2 gravitational perturbation and other perturbing accelerations such as atmospheric drag. First, Gauss' variational equations (GVE) are used to estimate the essential ΔV due to the J2 perturbation and atmospheric drag. The essential ΔV indicates which orbit is preferable with respect to minimum fuel consumption. Then, the linear equations of Schweighart-Sedwick, which account for the J2 gravitational perturbation, are presented and used to estimate the fuel consumption required to maintain the formation structure. Finally, the relative dynamics of the motion are presented, as well as a simplified motion control of the formation structure using GVE.
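
    For reference (standard background, not reproduced from the paper): the Schweighart-Sedwick model extends the Hill-Clohessy-Wiltshire (HCW) linearized relative-motion equations with J2 correction terms. The HCW baseline, for a circular reference orbit with mean motion n and differential perturbing accelerations f, is:

```latex
% Hill-Clohessy-Wiltshire equations (circular reference orbit, no J2);
% Schweighart-Sedwick modifies the in-plane coefficients with a J2 factor.
\ddot{x} - 2n\dot{y} - 3n^{2}x = f_{x}, \qquad
\ddot{y} + 2n\dot{x} = f_{y}, \qquad
\ddot{z} + n^{2}z = f_{z}
```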

  2. Minimum-norm cortical source estimation in layered head models is robust against skull conductivity error

    PubMed Central

    Stenroos, Matti; Hauk, Olaf

    2013-01-01

    The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity of the most important compartment, the skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented, realistically-shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on the amplitudes of lead fields and spatial filter vectors, but the effect on the corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
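
    A minimal sketch of a minimum-norm spatial filter and the kind of morphology comparison described above; the SNR-based regularization convention and the toy lead fields are assumptions, not the paper's pipeline.

```python
import numpy as np

def minimum_norm_filters(L, snr=3.0):
    """Minimum-norm spatial filters W with s_hat = W @ data.
    L: lead field (n_sensors x n_sources). The Tikhonov parameter is set
    from an assumed SNR, one common convention."""
    gram = L @ L.T
    lam = np.trace(gram) / (L.shape[0] * snr**2)
    return L.T @ np.linalg.inv(gram + lam * np.eye(L.shape[0]))

# Toy use: compare filters from two skull-conductivity forward models
rng = np.random.default_rng(5)
L_a = rng.standard_normal((64, 500))                     # lead field, model A
L_b = 0.8 * L_a + 0.02 * rng.standard_normal((64, 500))  # perturbed model B
W_a, W_b = minimum_norm_filters(L_a), minimum_norm_filters(L_b)
# Morphology comparison: correlation of corresponding filter vectors
corr = [np.corrcoef(W_a[i], W_b[i])[0, 1] for i in range(5)]
print(np.round(corr, 3))   # amplitudes differ, morphology barely changes
```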

  3. How long is enough to detect terrestrial animals? Estimating the minimum trapping effort on camera traps

    PubMed Central

    Si, Xingfeng; Kays, Roland

    2014-01-01

    Camera trapping is an important wildlife inventory tool for estimating species diversity at a site. Knowing the minimum trapping effort needed to detect target species is also important for designing efficient studies, considering both the number of camera locations and survey length. Here, we take advantage of a two-year camera trapping dataset from a small (24-ha) study plot in Gutianshan National Nature Reserve, eastern China, to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites versus running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40 days, or long enough to record about 20 independent photographs. Our analysis of adding camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period. PMID:24868493

  4. Exploratory Factor Analysis with Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.

    2009-01-01

    Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…

  5. The size of the irregular migrant population in the European Union – counting the uncountable?

    PubMed

    Vogel, Dita; Kovacheva, Vesela; Prescott, Hannah

    2011-01-01

    It is difficult to estimate the size of the irregular migrant population in a specific city or country, and even more difficult to arrive at estimates at the European level. A review of past attempts at European-level estimates reveals that they rely on rough and outdated rules-of-thumb. In this paper, we present our own European level estimates for 2002, 2005, and 2008. We aggregate country-specific information, aiming at approximate comparability by consistent use of minimum and maximum estimates and by adjusting for obvious differences in definition and timescale. While the aggregated estimates are not considered highly reliable, they do -- for the first time -- provide transparency. The provision of more systematic medium quality estimates is shown to be the most promising way for improvement. The presented estimate indicates a minimum of 1.9 million and a maximum of 3.8 million irregular foreign residents in the 27 member states of the European Union (2008). Unlike rules-of-thumb, the aggregated EU estimates indicate a decline in the number of irregular foreign residents between 2002 and 2008. This decline has been influenced by the EU enlargement and legalisation programmes.

  6. Behavioral and physiological significance of minimum resting metabolic rate in king penguins.

    PubMed

    Halsey, L G; Butler, P J; Fahlman, A; Woakes, A J; Handrich, Y

    2008-01-01

    Because fasting king penguins (Aptenodytes patagonicus) need to conserve energy, it is possible that they exhibit particularly low metabolic rates during periods of rest. We investigated the behavioral and physiological aspects of periods of minimum metabolic rate in king penguins under different circumstances. Heart rate (f(H)) measurements were recorded to estimate rate of oxygen consumption during periods of rest. Furthermore, apparent respiratory sinus arrhythmia (RSA) was calculated from the f(H) data to determine probable breathing frequency in resting penguins. The most pertinent results were that minimum f(H) achieved (over 5 min) was higher during respirometry experiments in air than during periods ashore in the field; that minimum f(H) during respirometry experiments on water was similar to that while at sea; and that RSA was apparent in many of the f(H) traces during periods of minimum f(H) and provides accurate estimates of breathing rates of king penguins resting in specific situations in the field. Inferences made from the results include that king penguins do not have the capacity to reduce their metabolism to a particularly low level on land; that they can, however, achieve surprisingly low metabolic rates at sea while resting in cold water; and that during respirometry experiments king penguins are stressed to some degree, exhibiting an elevated metabolism even when resting.

  7. Probe-Specific Procedure to Estimate Sensitivity and Detection Limits for 19F Magnetic Resonance Imaging.

    PubMed

    Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M

    2016-01-01

    Due to the low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
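
    A toy sketch of the kind of estimate such a calibration enables, assuming SNR scales linearly with spin concentration and with the square root of the number of averages; the numbers and the SNR = 3 detection threshold are illustrative assumptions, not the paper's calibration factor.

```python
import numpy as np

def min_detectable_conc(c_ref, snr_ref, snr_min=3.0, n_avg_ratio=1.0):
    """Minimum detectable concentration from one calibration scan on the
    same coil/scanner setup, under linear SNR-vs-concentration scaling."""
    return c_ref * (snr_min / snr_ref) / np.sqrt(n_avg_ratio)

# Toy figures: a 50 mM phantom gave SNR 120; the in vivo protocol
# uses 16x more averages
print(f"{min_detectable_conc(50.0, 120.0, 3.0, 16.0):.2f} mM")
```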

  8. Estimating the dose response relationship for occupational radiation exposure measured with minimum detection level.

    PubMed

    Xue, Xiaonan; Shore, Roy E; Ye, Xiangyang; Kim, Mimi Y

    2004-10-01

    Occupational exposures are often recorded as zero when the exposure is below the minimum detection level (BMDL). This can lead to an underestimation of the doses received by individuals and can lead to biased estimates of risk in occupational epidemiologic studies. The extent of the exposure underestimation is increased with the magnitude of the minimum detection level (MDL) and the frequency of monitoring. This paper uses multiple imputation methods to impute values for the missing doses due to BMDL. A Gibbs sampling algorithm is developed to implement the method, which is applied to two distinct scenarios: when dose information is available for each measurement (but BMDL is recorded as zero or some other arbitrary value), or when the dose information available represents the summation of a series of measurements (e.g., only yearly cumulative exposure is available but based on, say, weekly measurements). Then the average of the multiple imputed exposure realizations for each individual is used to obtain an unbiased estimate of the relative risk associated with exposure. Simulation studies are used to evaluate the performance of the estimators. As an illustration, the method is applied to a sample of historical occupational radiation exposure data from the Oak Ridge National Laboratory.
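
    A sketch of the imputation step under stated assumptions: doses recorded as zero are treated as below-MDL and replaced by draws from a lognormal exposure distribution truncated to (0, MDL). In the paper the distribution parameters come from a Gibbs sampler; here they are taken as given.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def impute_bmdl(doses, mdl, mu, sigma, n_imp=10):
    """Multiple imputation of below-MDL doses (recorded as 0) from an
    assumed lognormal(mu, sigma) exposure distribution truncated to
    (0, MDL). Returns n_imp imputed copies of the dose vector."""
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    below = doses == 0.0
    p_mdl = dist.cdf(mdl)
    imps = np.tile(doses, (n_imp, 1))
    for k in range(n_imp):
        u = rng.uniform(0, p_mdl, below.sum())   # inverse-CDF sampling
        imps[k, below] = dist.ppf(u)
    return imps

doses = np.array([0.0, 0.42, 0.0, 1.30, 0.07])   # toy dosimetry record
print(impute_bmdl(doses, mdl=0.10, mu=-2.0, sigma=1.0).mean(axis=0))
```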

  9. Estimation of transformation parameters for microarray data.

    PubMed

    Durbin, Blythe; Rocke, David M

    2003-07-22

    Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
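
    One common parameterization of the generalized-log family (the cited papers use variants of this form; conventions differ in whether the parameter enters as λ or λ², and λ is what is estimated, e.g. by profiling a Box-Cox-style likelihood):

```python
import numpy as np

def glog(x, lam):
    """Generalized-log transform: glog(x) = ln(x + sqrt(x**2 + lam)).
    Behaves like ln(2x) for large x and is nearly linear (and defined)
    around and below zero, stabilizing the additive-noise regime."""
    return np.log(x + np.sqrt(x**2 + lam))

# Two-component noise model (toy stand-in for microarray intensities):
# multiplicative noise dominates at high intensity, additive at low
rng = np.random.default_rng(0)
mu = np.array([10.0, 100.0, 1000.0, 10000.0])
y = mu[:, None] * np.exp(0.1 * rng.standard_normal((4, 2000))) \
    + 50.0 * rng.standard_normal((4, 2000))
print(np.round(y.std(axis=1), 1))                 # raw SD grows with mean
print(np.round(glog(y, lam=50.0**2 / 0.1**2).std(axis=1), 3))  # ~0.1 each
```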

  10. Water Content of Lunar Alkali Feldspar

    NASA Technical Reports Server (NTRS)

    Mills, R. D.; Simon, J. I.; Wang, J.; Alexander, C. M. O'D.; Hauri, E. H.

    2016-01-01

    Detection of indigenous hydrogen in a diversity of lunar materials, including volcanic glass, melt inclusions, apatite, and plagioclase, suggests water may have played a role in the chemical differentiation of the Moon. Spectroscopic data from the Moon indicate a positive correlation between water and Th. Modeling of lunar magma ocean crystallization predicts a similar chemical differentiation with the highest levels of water in the K- and Th-rich melt residuum of the magma ocean (i.e. urKREEP). Until now, the only sample-based estimates of the water content of KREEP-rich magmas come from measurements of OH, F, and Cl in lunar apatites, which suggest a water concentration of <1 ppm in urKREEP. Using these data, earlier workers predicted that the bulk magma ocean would have had <10 ppm water. In contrast, others have estimated water contents of 320 ppm for the bulk Moon and 1.4 wt % for urKREEP from plagioclase in ferroan anorthosites. Results and interpretation: NanoSIMS data from granitic clasts from Apollo sample 15405,78 show that alkali feldspar, a common mineral in K-enriched rocks, can have approx. 20 ppm of water, which implies magmatic water contents of approx. 1 wt % in the high-silica magmas. This estimate is 2 to 3 orders of magnitude higher than that estimated from apatite in similar rocks. However, the Cl and F contents of apatite in chemically similar rocks suggest that these melts also had high Cl/F ratios, which leads to spuriously low water estimates from the apatite. We can only estimate the minimum water content of urKREEP (and the bulk Moon) from our alkali feldspar data because of the unknown amount of degassing that led to the formation of the granites. Assuming a reasonable 10 to 100 times enrichment of water from urKREEP into the granites produces an estimate of 100-1000 ppm of water for the urKREEP reservoir. Combining the magma ocean modeling with the 100-1000 ppm of water in urKREEP suggests a minimum bulk silicate Moon water content between 2 and 20 ppm. However, hydrogen loss was likely very significant in the evolution of the lunar mantle. Conclusions: Lunar granites crystallized between 4.3-3.8 Ga from relatively wet melts that degassed upon crystallization. The formation of these granites likely removed significant amounts of water from some mantle source regions, e.g., the source regions of later mare basalts, whose compositions predict derivation from a mantle with <10 ppm water. However, this would have been a heterogeneous process based on K distribution. Thus some, if not most, of the mantle may not have been devolatilized by this process, as evidenced by water in volcanic glasses and melt inclusions.

  11. Advances in Focal Plane Wavefront Estimation for Directly Imaging Exoplanets

    NASA Astrophysics Data System (ADS)

    Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Groff, Tyler Dean

    2015-01-01

    To image cold exoplanets directly in visible light, an instrument on a telescope needs to suppress starlight by about 9 orders of magnitude at small separations from the star. A coronagraph changes the point spread function to create regions of high contrast where exoplanets or disks can be seen. Aberrations on the optics degrade the contrast by several orders of magnitude, so all high-contrast imaging systems incorporate one or more deformable mirrors (DMs) to recover regions of high contrast. With a coronagraphic instrument planned for the WFIRST-AFTA space telescope, there is a pressing need for faster, more robust estimation and control schemes for the DMs. Non-common path aberrations limit conventional phase conjugation schemes to medium star-to-planet contrast ratios of about 1e-6. High-contrast imaging requires estimation and control of both phase and amplitude in the same beam path as the science camera. Field estimation is a challenge since only intensity is measured; the most common approach, including that planned for WFIRST-AFTA, is to use DMs to create diversity, via pairs of small probe shapes, thereby allowing disambiguation of the electric field. Most implementations of DM Diversity require at least five images per electric field estimate and require narrowband measurements. This paper describes our new estimation algorithms that improve the speed (by using fewer images) and bandwidth of focal plane wavefront estimation. For narrowband estimation, we are testing nonlinear, recursive algorithms such as an iterative extended Kalman filter (IEKF) to use three images each iteration and build better, more robust estimates. We are also exploring the use of broadband estimation without the need for narrowband sub-filters and measurements. Here we present simulations of these algorithms with realistic noise and small signals to show how they might perform for WFIRST-AFTA. Once validated in simulations, we will test these algorithms experimentally in Princeton's HCIL and in the Jet Propulsion Laboratory's (JPL's) High Contrast Imaging Testbed (HCIT). Developing these faster, more robust wavefront estimators is crucial for increasing the science yield of the WFIRST-AFTA coronagraphic instrument.
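
    A per-pixel sketch of the pair-wise probe idea under ideal assumptions: each probe pair yields a difference image ΔI_k ≈ 4 Re{conj(p_k) E}, so with two or more probes one solves a small least-squares system for Re E and Im E. The probe fields below are toy values, and real pipelines must model the probes from DM commands.

```python
import numpy as np

def estimate_field(delta_I, probes):
    """Batch pairwise-probe estimate at one focal-plane pixel.
    delta_I: length-K array of probe-pair difference intensities
    probes: length-K complex array of modeled probe fields p_k
    Model: delta_I[k] ~= 4 * Re(conj(p_k) * E); solve for E."""
    H = 4 * np.column_stack([probes.real, probes.imag])  # maps [ReE, ImE]
    x, *_ = np.linalg.lstsq(H, delta_I, rcond=None)
    return complex(x[0], x[1])

E = 2e-5 - 1e-5j                                   # "true" field (toy)
probes = np.array([1e-4 + 0j, 0 + 1e-4j, 7e-5 + 7e-5j])
delta_I = 4 * (np.conj(probes) * E).real           # noiseless measurements
print(estimate_field(delta_I, probes))             # ~ (2e-05-1e-05j)
```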

  12. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
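
    For a multivariate normal model, the ML (minimum-variance) combination of correlated estimates of a common eigenvalue is the generalized-least-squares weighted mean; a minimal sketch with toy covariance values follows.

```python
import numpy as np

def ml_combine(estimates, cov):
    """Maximum-likelihood combination of correlated estimates of the same
    quantity under multivariate normality:
    weights w = C^{-1} 1 / (1^T C^{-1} 1), variance 1 / (1^T C^{-1} 1)."""
    ones = np.ones(len(estimates))
    ci1 = np.linalg.solve(cov, ones)
    var = 1.0 / (ones @ ci1)
    return (ci1 / (ones @ ci1)) @ estimates, var

k = np.array([1.010, 1.014, 1.008])        # correlated eigenvalue estimates
C = np.array([[1.0, 0.6, 0.4],
              [0.6, 1.2, 0.5],
              [0.4, 0.5, 0.9]]) * 1e-5      # sample covariance (toy)
est, var = ml_combine(k, C)
print(f"{est:.4f} +/- {np.sqrt(var):.4f}")
```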

  13. Sparse EEG/MEG source estimation via a group lasso

    PubMed Central

    Lim, Michael; Ales, Justin M.; Cottereau, Benoit R.; Hastie, Trevor

    2017-01-01

    Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source-geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches. PMID:28604790
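
    A generic sketch of a group-lasso inverse (proximal gradient with group soft-thresholding, which zeroes whole ROI groups at once); the toy lead field, group structure, and λ are assumptions, not the paper's implementation.

```python
import numpy as np

def prox_group(v, groups, tau):
    """Proximal operator of tau * sum_g ||v_g||_2 (group soft-threshold)."""
    out = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > tau:
            out[g] = (1 - tau / norm) * v[g]
    return out

def group_lasso(L, y, groups, lam, n_iter=500):
    """Proximal gradient for min_s 0.5||y - L s||^2 + lam * sum_g ||s_g||."""
    step = 1.0 / np.linalg.norm(L, 2) ** 2       # 1 / Lipschitz constant
    s = np.zeros(L.shape[1])
    for _ in range(n_iter):
        grad = L.T @ (L @ s - y)
        s = prox_group(s - step * grad, groups, step * lam)
    return s

rng = np.random.default_rng(7)
L = rng.standard_normal((64, 90))                      # toy lead field
groups = [np.arange(i, i + 10) for i in range(0, 90, 10)]  # 9 "ROIs"
s_true = np.zeros(90); s_true[20:30] = rng.standard_normal(10)
y = L @ s_true + 0.05 * rng.standard_normal(64)
s_hat = group_lasso(L, y, groups, lam=2.0)
print(np.round([np.linalg.norm(s_hat[g]) for g in groups], 2))
# largest norm at the true active group; most others shrink to zero
```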

  14. Trends in annual minimum exposed snow and ice cover in High Mountain Asia from MODIS

    NASA Astrophysics Data System (ADS)

    Rittger, Karl; Brodzik, Mary J.; Painter, Thomas H.; Racoviteanu, Adina; Armstrong, Richard; Dozier, Jeff

    2016-04-01

    Though a relatively short record on climatological scales, data from the Moderate Resolution Imaging Spectroradiometer (MODIS) from 2000-2014 can be used to evaluate changes in the cryosphere and provide a robust baseline for future observations from space. We use the MODIS Snow Covered Area and Grain size (MODSCAG) algorithm, based on spectral mixture analysis, to estimate daily fractional snow and ice cover, and the MODIS Persistent Ice (MODICE) algorithm to estimate the annual minimum snow and ice fraction (fSCA) for each year from 2000 to 2014 in High Mountain Asia. We have found that MODSCAG performs better at detecting snow than other approaches, such as those based on the Normalized Difference Snow Index (NDSI). We use MODICE because it minimizes false positives (compared to maximum extents), for example when bright soils or clouds are incorrectly classified as snow, a common problem in optical satellite snow mapping. We analyze changes in area using the annual MODICE maps of minimum snow and ice cover for over 15,000 individual glaciers as defined by the Randolph Glacier Inventory (RGI) Version 5, focusing on the Amu Darya, Syr Darya, Upper Indus, Ganges, and Brahmaputra River basins. For each glacier with an area of at least 1 km2 as defined by the RGI, we sum the total minimum snow and ice covered area for each year from 2000 to 2014 and estimate the trend in area loss or gain. We find the largest losses in annual minimum snow and ice extent for 2000-2014 in the Brahmaputra and Ganges basins, where 57% and 40% of analyzed glaciers, respectively, show significant losses (p-value < 0.05). In the Upper Indus River basin, we see both gains and losses in minimum snow and ice extent, but more glaciers with losses than gains. A smaller proportion of glaciers in the Amu Darya and Syr Darya show significant changes in minimum snow and ice extent (3.5% and 12.2%), possibly because more of the glaciers in these basins are smaller than 1 km2 than in the Indus, Ganges, and Brahmaputra, making analysis from MODIS (pixel area ~0.25 km2) difficult. Overall, 23% of the glaciers across the five river basins show significant trends (in either direction). We relate these changes in area to topography and climate to understand the driving processes. In addition to the annual minimum snow and ice cover, the MODICE algorithm also provides the date of minimum fSCA for each pixel. To determine whether the surface was snow or ice, we use the date of minimum fSCA from MODICE to index daily maps of snow on ice (SOI) or exposed glacier ice (EGI) and systematically derive an equilibrium line altitude (ELA) for each year from 2000 to 2014. We test this new algorithm in the Upper Indus basin and produce annual ELA estimates ranging from 5350 m to 5450 m, slightly higher than published values of 5200 m for this region.
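
    A minimal sketch of the per-glacier trend screening described above (simple linear regression per glacier, counting significant losses at p < 0.05); the synthetic areas are stand-ins for the MODICE annual minima.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
years = np.arange(2000, 2015)
# Toy stand-in for per-glacier annual minimum snow/ice area (km^2)
areas = (5 + 0.2 * rng.standard_normal((1000, years.size))
         - 0.03 * (years - 2000) * rng.uniform(0, 2, (1000, 1)))

n_sig_loss = 0
for a in areas:
    res = stats.linregress(years, a)
    if res.pvalue < 0.05 and res.slope < 0:
        n_sig_loss += 1
print(f"{100 * n_sig_loss / len(areas):.1f}% of glaciers "
      "with significant loss")
```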

  15. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    USGS Publications Warehouse

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and in aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units are uniformly subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE-simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times smaller than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable and, therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water-transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity: 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.

  16. Probabilistic evaluation of earthquake detection and location capability for Illinois, Indiana, Kentucky, Ohio, and West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauk, F.J.; Christensen, D.H.

    1980-09-01

    Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978) which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum m/sub b/ magnitude contour maps. Regional 90% confidence error ellipsoids are included for m/sub b/ magnitude events from 2.0 through 5.0 at 0.5 m/sub b/ unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggests that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.

  17. Reassessing Wind Potential Estimates for India: Economic and Policy Implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phadke, Amol; Bharvirkar, Ranjit; Khangura, Jagmeet

    2011-09-15

    We assess developable on-shore wind potential in India at three different hub-heights and under two sensitivity scenarios – one with no farmland included, the other with all farmland included. Under the “no farmland included” case, the total wind potential in India ranges from 748 GW at 80m hub-height to 976 GW at 120m hub-height. Under the “all farmland included” case, the potential with a minimum capacity factor of 20 percent ranges from 984 GW to 1,549 GW. High quality wind energy sites, at 80m hub-height with a minimum capacity factor of 25 percent, have a potential between 253 GW (no farmland included) and 306 GW (all farmland included). Our estimates are more than 15 times the current official estimate of wind energy potential in India (estimated at 50m hub height) and are about one tenth of the official estimate of the wind energy potential in the US.

  18. Sunspot variation and selected associated phenomena: A look at solar cycle 21 and beyond

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.

    1982-01-01

    Solar sunspot cycles 8 through 21 are reviewed. Mean time intervals are calculated for maximum to maximum, minimum to minimum, minimum to maximum, and maximum to minimum phases for cycles 8 through 20 and 8 through 21. Simple cosine functions with a period of 132 years are compared to, and found to be representative of, the variation of smoothed sunspot numbers at solar maximum and minimum. A comparison of cycles 20 and 21 is given, leading to a projection for activity levels during the Spacelab 2 era (tentatively, November 1984). A prediction is made for cycle 22. Major flares are observed to peak several months subsequent to the solar maximum during cycle 21 and to be at minimum level several months after the solar minimum. Additional remarks are given for flares, gradual rise and fall radio events and 2800 MHz radio emission. Certain solar activity parameters, especially as they relate to the near term Spacelab 2 time frame are estimated.

  19. Lesion contrast and detection using sonoelastographic shear velocity imaging: preliminary results

    NASA Astrophysics Data System (ADS)

    Hoyt, Kenneth; Parker, Kevin J.

    2007-03-01

    This paper assesses lesion contrast and detection using sonoelastographic shear velocity imaging. Shear wave interference patterns, termed crawling waves, were simulated for a two-phase medium assuming plane wave conditions. Shear velocity estimates were computed using a spatial autocorrelation algorithm that operates in the direction of shear wave propagation for a given kernel size. Contrast was determined by analyzing the shear velocity estimate transition between media. Experimental results were obtained using heterogeneous phantoms with spherical inclusions (5 or 10 mm in diameter) characterized by elevated shear velocities. Two vibration sources were applied to opposing phantom edges and scanned (orthogonal to shear wave propagation) with an ultrasound scanner equipped for sonoelastography. Demodulated data were saved and transferred to an external computer for processing shear velocity images. Simulation results demonstrate that the shear velocity transition between contrasting media is governed by both estimator kernel size and source vibration frequency. Experimental results from phantoms further indicate that decreasing the estimator kernel size produces a corresponding decrease in the shear velocity estimate transition between background and inclusion material, albeit with an increase in estimator noise. Overall, results demonstrate the ability to generate high contrast shear velocity images using sonoelastographic techniques and detect millimeter-sized lesions.

  20. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L(sub 1) or L(sub infinity) norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
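
    To illustrate the LP reduction for the L1 norm (a standard construction, not the paper's exact formulation): the minimum L1 distance between polyhedra {A1 x ≤ b1} and {A2 y ≤ b2} is found by minimizing Σ t subject to t ≥ x − y and t ≥ y − x componentwise.

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_distance(A1, b1, A2, b2):
    """Minimum L1 distance between {A1 x <= b1} and {A2 y <= b2}.
    Variables z = [x, y, t]; minimize sum(t) with t >= +/-(x - y)."""
    d = A1.shape[1]
    m1, m2 = A1.shape[0], A2.shape[0]
    I = np.eye(d)
    A_ub = np.block([
        [A1,                np.zeros((m1, d)), np.zeros((m1, d))],
        [np.zeros((m2, d)), A2,                np.zeros((m2, d))],
        [ I, -I, -I],      # x - y - t <= 0
        [-I,  I, -I],      # y - x - t <= 0
    ])
    b_ub = np.concatenate([b1, b2, np.zeros(2 * d)])
    c = np.concatenate([np.zeros(2 * d), np.ones(d)])
    bounds = [(None, None)] * (2 * d) + [(0, None)] * d
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun

# Two unit boxes: [0,1]^2 and [3,4] x [0,1]  ->  L1 distance 2
A = np.vstack([np.eye(2), -np.eye(2)])
print(min_l1_distance(A, np.array([1, 1, 0, 0]),
                      A, np.array([4, 1, -3, 0])))  # 2.0
```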

  1. A model for estimating pathogen variability in shellfish and predicting minimum depuration times.

    PubMed

    McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick

    2018-01-01

    Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist norovirus risk managers with future control strategies.
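
    A toy sketch of the second stage only, assuming first-order (exponential) pathogen decay during depuration; the rate constant and load figures below are illustrative, and the paper additionally models the distribution of initial loads across the shellfish population.

```python
import numpy as np

def min_depuration_time(c0, c_target, k):
    """Hours for first-order decay c(t) = c0 * exp(-k t) to reach
    c_target, given a per-hour depuration rate constant k."""
    return np.log(c0 / c_target) / k

# Toy figures (not from the paper): initial load 1e4 copies/g,
# target 200 copies/g, rate constant 0.05 per hour
print(f"{min_depuration_time(1e4, 200.0, 0.05):.0f} h")
```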

  2. The Lactate Minimum Test: Concept, Methodological Aspects and Insights for Future Investigations in Human and Animal Models

    PubMed Central

    Messias, Leonardo H. D.; Gobatto, Claudio A.; Beck, Wladimir R.; Manchado-Gobatto, Fúlvia B.

    2017-01-01

    In 1993, Uwe Tegtbur proposed a useful physiological protocol named the lactate minimum test (LMT). This test consists of three distinct phases. First, subjects perform high intensity efforts to induce hyperlactatemia (phase 1). Subsequently, 8 min of recovery are allowed for transposition of lactate from the myocytes (for instance) to the bloodstream (phase 2). Right after the recovery, subjects are submitted to an incremental test until exhaustion (phase 3). The blood lactate concentration is expected to fall during the first stages of the incremental test and, as the intensity increases in subsequent stages, to rise again, forming a “U” shaped blood lactate kinetic. The minimum point of this curve, named the lactate minimum intensity (LMI), provides an estimation of the intensity that represents the balance between the appearance and clearance of arterial blood lactate, known as the maximal lactate steady state intensity (iMLSS). Furthermore, in addition to the iMLSS estimation, studies have also determined anaerobic parameters (e.g., peak, mean, and minimum force/power) during phase 1 and the maximum oxygen consumption in phase 3; therefore, the LMT is considered a robust physiological protocol. Although encouraging reports have been published for both human and animal models, there are still some controversies regarding three main factors: (1) the influence of methodological aspects on the LMT parameters; (2) LMT effectiveness for monitoring training effects; and (3) the LMI as a valid iMLSS estimator. Therefore, the aim of this review is to provide a balanced discussion of the scientific evidence on the aforementioned issues, and insights for future investigations are suggested. In summary, further analysis is necessary to determine the importance of these factors, since the LMT is relevant in several contexts of health sciences. PMID:28642717

  3. Minimum number of days required for a reliable estimate of daily step count and energy expenditure, in people with MS who walk unaided.

    PubMed

    Norris, Michelle; Anderson, Ross; Motl, Robert W; Hayes, Sara; Coote, Susan

    2017-03-01

    The purpose of this study was to examine the minimum number of days needed to reliably estimate daily step count and energy expenditure (EE) in people with multiple sclerosis (MS) who walked unaided. Seven days of activity monitor data were collected for 26 participants with MS (age = 44.5 ± 11.9 years; time since diagnosis = 6.5 ± 6.2 years; Patient Determined Disease Steps ≤ 3). Mean daily step count and mean daily EE (kcal) were calculated for all combinations of days (127 combinations) and compared to the respective 7-day mean daily step count or mean daily EE using intra-class correlations (ICC), the Generalizability Theory, and Bland-Altman analyses. For step count, ICC values of 0.94-0.98 and a G-coefficient of 0.81 indicate that a minimum of any random 2-day combination is required to reliably calculate mean daily step count. For EE, ICC values of 0.96-0.99 and a G-coefficient of 0.83 indicate that a minimum of any random 4-day combination is required to reliably calculate mean daily EE. For Bland-Altman analyses, all combinations of days, bar single-day combinations, resulted in a mean bias within ±10% when expressed as a percentage of the 7-day mean daily step count or mean daily EE. A minimum of 2 days for step count and 4 days for EE, regardless of day type, is needed to reliably estimate daily step count and daily EE in people with MS who walk unaided. Copyright © 2017 Elsevier B.V. All rights reserved.
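
    A sketch of the all-combinations comparison on toy data (the study's ICC and G-coefficient computations are not reproduced; a simple Bland-Altman-style relative bias stands in for them here):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(9)
# Toy 7-day step counts for 26 participants (the paper uses measured data)
steps = rng.normal(6000, 1500, size=(26, 7)).clip(min=0)
week_mean = steps.mean(axis=1)

for k in range(1, 8):   # all 127 nonempty combinations across k = 1..7
    rel_bias = []
    for combo in combinations(range(7), k):
        m = steps[:, list(combo)].mean(axis=1)
        rel_bias.append(100 * np.mean((m - week_mean) / week_mean))
    print(f"{k}-day combinations: worst |mean bias| "
          f"{np.max(np.abs(rel_bias)):.1f}%")
```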

  4. Surround-Masking Affects Visual Estimation Ability

    PubMed Central

    Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.

    2017-01-01

    Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved in the removal of unessential visual input functionally contribute to number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtering of unessential visual information. These surround masking results may aid understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845

  5. A genome-wide signature of positive selection in ancient and recent invasive expansions of the honey bee Apis mellifera

    PubMed Central

    Zayed, Amro; Whitfield, Charles W.

    2008-01-01

    Apis mellifera originated in Africa and extended its range into Eurasia in two or more ancient expansions. In 1956, honey bees of African origin were introduced into South America, their descendants admixing with previously introduced European bees, giving rise to the highly invasive and economically devastating “Africanized” honey bee. Here we ask whether the honey bee's out-of-Africa expansions, both ancient and recent (invasive), were associated with a genome-wide signature of positive selection, detected by contrasting genetic differentiation estimates (FST) between coding and noncoding SNPs. In native populations, SNPs in protein-coding regions had significantly higher FST estimates than those in noncoding regions, indicating adaptive evolution in the genome driven by positive selection. This signal of selection was associated with the expansion of honey bees from Africa into Western and Northern Europe, perhaps reflecting adaptation to temperate environments. We estimate that positive selection acted on a minimum of 852–1,371 genes or ≈10% of the bee's coding genome. We also detected positive selection associated with the invasion of African-derived honey bees in the New World. We found that introgression of European-derived alleles into Africanized bees was significantly greater for coding than noncoding regions. Our findings demonstrate that Africanized bees exploited the genetic diversity present from preexisting introductions in an adaptive way. Finally, we found a significant negative correlation between FST estimates and the local GC content surrounding coding SNPs, suggesting that AT-rich genes play an important role in adaptive evolution in the honey bee. PMID:18299560

  6. A genome-wide signature of positive selection in ancient and recent invasive expansions of the honey bee Apis mellifera.

    PubMed

    Zayed, Amro; Whitfield, Charles W

    2008-03-04

    Apis mellifera originated in Africa and extended its range into Eurasia in two or more ancient expansions. In 1956, honey bees of African origin were introduced into South America, their descendants admixing with previously introduced European bees, giving rise to the highly invasive and economically devastating "Africanized" honey bee. Here we ask whether the honey bee's out-of-Africa expansions, both ancient and recent (invasive), were associated with a genome-wide signature of positive selection, detected by contrasting genetic differentiation estimates (F(ST)) between coding and noncoding SNPs. In native populations, SNPs in protein-coding regions had significantly higher F(ST) estimates than those in noncoding regions, indicating adaptive evolution in the genome driven by positive selection. This signal of selection was associated with the expansion of honey bees from Africa into Western and Northern Europe, perhaps reflecting adaptation to temperate environments. We estimate that positive selection acted on a minimum of 852-1,371 genes or approximately 10% of the bee's coding genome. We also detected positive selection associated with the invasion of African-derived honey bees in the New World. We found that introgression of European-derived alleles into Africanized bees was significantly greater for coding than noncoding regions. Our findings demonstrate that Africanized bees exploited the genetic diversity present from preexisting introductions in an adaptive way. Finally, we found a significant negative correlation between F(ST) estimates and the local GC content surrounding coding SNPs, suggesting that AT-rich genes play an important role in adaptive evolution in the honey bee.

  7. Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model

    PubMed Central

    Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.

    2012-01-01

    Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315

  8. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    PubMed Central

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129

  9. Variable Selection for Confounder Control, Flexible Modeling and Collaborative Targeted Minimum Loss-Based Estimation in Causal Inference.

    PubMed

    Schnitzer, Mireille E; Lok, Judith J; Gruber, Susan

    2016-05-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010 [27]) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios.
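
    A minimal Python sketch of the pure-cause-of-exposure pitfall described above: a simulated variable z affects treatment but not outcome, and including it in the propensity score makes the IPTW weights far more variable. This toy simulation is illustrative only and does not reproduce the authors' study design:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 5000
        z = rng.normal(size=n)                    # pure cause of exposure
        x = rng.normal(size=n)                    # true confounder
        p = 1.0 / (1.0 + np.exp(-(1.5 * z + x)))  # treatment probability
        a = rng.binomial(1, p)                    # treatment indicator
        y = a + x + rng.normal(size=n)            # outcome; true E[Y(1)] = 1

        def iptw_mean(covars):
            """Normalized (Hajek) IPTW estimate of the mean outcome under treatment."""
            ps = LogisticRegression().fit(covars, a).predict_proba(covars)[:, 1]
            w = a / ps
            return np.sum(w * y) / np.sum(w)

        print("PS on confounder only:     ", round(iptw_mean(x.reshape(-1, 1)), 3))
        print("PS including pure cause z: ", round(iptw_mean(np.column_stack([x, z])), 3))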

  10. A basin-scale approach to estimating stream temperatures of tributaries to the lower Klamath River, California

    USGS Publications Warehouse

    Flint, L.E.; Flint, A.L.

    2008-01-01

    Stream temperature is an important component of salmonid habitat and is often above levels suitable for fish survival in the Lower Klamath River in northern California. The objective of this study was to provide boundary conditions for models that are assessing stream temperature on the main stem for the purpose of developing strategies to manage stream conditions using Total Maximum Daily Loads. For model input, hourly stream temperatures for 36 tributaries were estimated for 1 Jan. 2001 through 31 Oct. 2004. A basin-scale approach incorporating spatially distributed energy balance data was used to estimate the stream temperatures with measured air temperature and relative humidity data and simulated solar radiation, including topographic shading and corrections for cloudiness. Regression models were developed on the basis of available stream temperature data to predict temperatures for unmeasured periods of time and for unmeasured streams. The most significant factor in matching measured minimum and maximum stream temperatures was the seasonality of the estimate. Adding minimum and maximum air temperature to the regression model improved the estimate, and air temperature data over the region are available and easily distributed spatially. The addition of simulated solar radiation and vapor saturation deficit to the regression model significantly improved predictions of maximum stream temperature but was not required to predict minimum stream temperature. The average SE in estimated maximum daily stream temperature for the individual basins was 0.9 ± 0.6°C at the 95% confidence interval. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
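
    A minimal Python sketch of the incremental-predictor idea described above, on synthetic data: seasonality alone, then adding air temperature, then adding solar radiation. The coefficients and noise levels are invented and do not reproduce the study's regressions:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(8)
        doy = rng.integers(1, 366, 300)                          # day of year
        season = np.column_stack([np.sin(2 * np.pi * doy / 365),
                                  np.cos(2 * np.pi * doy / 365)])
        air_tmax = 15 + 10 * season[:, 0] + rng.normal(0, 2, 300)
        solar = 250 + 150 * season[:, 0] + rng.normal(0, 30, 300)
        stream_tmax = (10 + 5 * season[:, 0] + 0.3 * air_tmax
                       + 0.01 * solar + rng.normal(0, 0.9, 300))

        for X, name in [(season, "seasonality only"),
                        (np.column_stack([season, air_tmax]), "+ air temperature"),
                        (np.column_stack([season, air_tmax, solar]), "+ solar radiation")]:
            r2 = LinearRegression().fit(X, stream_tmax).score(X, stream_tmax)
            print(f"{name}: R^2 = {r2:.3f}")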

  11. The cyclical variation of energy flux and photospheric magnetic field strength from coronal holes

    NASA Technical Reports Server (NTRS)

    Webb, D. F.; Davis, J. M.

    1985-01-01

    The average soft X-ray emission from coronal holes observed on images obtained during rocket flights from 1974 to 1981 was measured. The variation of this emission over the solar cycle was then compared with photospheric magnetic flux measurements within coronal holes over the same period. It was found that coronal hole soft X-ray emission could be detected and that this emission appeared to increase with the rise of the sunspot cycle from activity minimum to maximum. These quantitative results confirmed previous suggestions that the coronal brightness contrast between holes and large-scale structure decreased during this period of the cycle. Gas pressures at the hole base were estimated for assumed temperatures and found to vary from about 0.03 dyne/sq cm in 1974 to 0.35 dyne/sq cm in 1981. The increase in coronal hole X-ray emission was accompanied by a similar trend in the surface magnetic flux of near-equatorial holes between 1975 and 1980 (Harvey et al., 1982).

  12. A semi-automatic method for extracting thin line structures in images as rooted tree network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brazzini, Jacopo; Dillard, Scott; Soille, Pierre

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
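
    A minimal Python sketch of the gradient structure tensor eigen-decomposition mentioned above, implemented directly with NumPy/SciPy; the coherence measure shown is one standard way to expose line-like structures and could seed an anisotropic metric, but it is not the authors' full extraction pipeline:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def structure_tensor_coherence(img, sigma=2.0):
            """Per-pixel eigenvalues and coherence of the gradient structure tensor."""
            gy, gx = np.gradient(img.astype(float))
            jxx = gaussian_filter(gx * gx, sigma)
            jxy = gaussian_filter(gx * gy, sigma)
            jyy = gaussian_filter(gy * gy, sigma)
            # Closed-form eigenvalues of the 2x2 symmetric tensor at every pixel
            tr, det = jxx + jyy, jxx * jyy - jxy ** 2
            root = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
            lam1, lam2 = tr / 2.0 + root, tr / 2.0 - root
            coherence = np.where(tr > 0, (lam1 - lam2) / (lam1 + lam2 + 1e-12), 0.0)
            return lam1, lam2, coherence

        img = np.zeros((64, 64))
        img[30:34, :] = 1.0                       # synthetic horizontal "vessel"
        _, _, coh = structure_tensor_coherence(img)
        print("Mean coherence on the line:", round(float(coh[31].mean()), 2))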

  13. Midlatitude cooling caused by geomagnetic field minimum during polarity reversal.

    PubMed

    Kitaba, Ikuko; Hyodo, Masayuki; Katoh, Shigehiro; Dettman, David L; Sato, Hiroshi

    2013-01-22

    The climatic effects of cloud formation induced by galactic cosmic rays (CRs) have recently become a topic of much discussion. The CR-cloud connection suggests that variations in geomagnetic field intensity could change climate through modulation of CR flux. This hypothesis, however, is not well-tested using robust geological evidence. Here we present paleoclimate and paleoenvironment records of five interglacial periods that include two geomagnetic polarity reversals. Marine oxygen isotope stages 19 and 31 contain both anomalous cooling intervals during the sea-level highstands and the Matuyama-Brunhes and Lower Jaramillo reversals, respectively. This contrasts strongly with the typical interglacial climate that has the temperature maximum at the sea-level peak. The cooling occurred when the field intensity dropped to <40% of its present value, for which we estimate >40% increase in CR flux. The climate warmed rapidly when field intensity recovered. We suggest that geomagnetic field intensity can influence global climate through the modulation of CR flux.

  14. Midlatitude cooling caused by geomagnetic field minimum during polarity reversal

    PubMed Central

    Kitaba, Ikuko; Hyodo, Masayuki; Katoh, Shigehiro; Dettman, David L.; Sato, Hiroshi

    2013-01-01

    The climatic effects of cloud formation induced by galactic cosmic rays (CRs) have recently become a topic of much discussion. The CR–cloud connection suggests that variations in geomagnetic field intensity could change climate through modulation of CR flux. This hypothesis, however, is not well-tested using robust geological evidence. Here we present paleoclimate and paleoenvironment records of five interglacial periods that include two geomagnetic polarity reversals. Marine oxygen isotope stages 19 and 31 contain both anomalous cooling intervals during the sea-level highstands and the Matuyama–Brunhes and Lower Jaramillo reversals, respectively. This contrasts strongly with the typical interglacial climate that has the temperature maximum at the sea-level peak. The cooling occurred when the field intensity dropped to <40% of its present value, for which we estimate >40% increase in CR flux. The climate warmed rapidly when field intensity recovered. We suggest that geomagnetic field intensity can influence global climate through the modulation of CR flux. PMID:23297205

  15. A Simulation-Based Comparison of Several Stochastic Linear Regression Methods in the Presence of Outliers.

    ERIC Educational Resources Information Center

    Rule, David L.

    Several regression methods were examined within the framework of weighted structural regression (WSR), comparing their regression weight stability and score estimation accuracy in the presence of outlier contamination. The methods compared are: (1) ordinary least squares; (2) WSR ridge regression; (3) minimum risk regression; (4) minimum risk 2;…

  16. 12 CFR Appendix M1 to Part 226 - Repayment Disclosures

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... a fixed period of time, as set forth by the card issuer. (2) “Deferred interest or similar plan... calculating the minimum payment repayment estimate, card issuers must use the minimum payment formula(s) that... purchases, such as a “club plan purchase.” Also, assume that based on a consumer's balances in these...

  17. 12 CFR Appendix M1 to Part 226 - Repayment Disclosures

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... a fixed period of time, as set forth by the card issuer. (2) “Deferred interest or similar plan... calculating the minimum payment repayment estimate, card issuers must use the minimum payment formula(s) that... purchases, such as a “club plan purchase.” Also, assume that based on a consumer's balances in these...

  18. Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm

    Treesearch

    Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney

    2014-01-01

    Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as a module of FlamMap version five, provides valuable fire behavior functions while enabling multi-core utilization for the...

  19. UNUSUAL TRENDS IN SOLAR P-MODE FREQUENCIES DURING THE CURRENT EXTENDED MINIMUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathy, S. C.; Jain, K.; Hill, F.

    2010-03-10

    We investigate the behavior of the intermediate-degree mode frequencies of the Sun during the current extended minimum phase to explore the time-varying conditions in the solar interior. Using contemporaneous helioseismic data from the Global Oscillation Network Group (GONG) and the Michelson Doppler Imager (MDI), we find that the changes in resonant mode frequencies during the activity minimum period are significantly greater than the changes in solar activity as measured by different proxies. We detect a seismic minimum in MDI p-mode frequency shifts during 2008 July-August but no such signature is seen in mean shifts computed from GONG frequencies. We also analyze the frequencies of individual oscillation modes from GONG data as a function of latitude and observe a signature of the onset of solar cycle 24 in early 2009. Thus, the intermediate-degree modes do not confirm the onset of cycle 24 during late 2007 as reported from the analysis of the low-degree Global Oscillations at Low Frequency frequencies. Further, both the GONG and MDI frequencies show a surprising anti-correlation between frequencies and activity proxies during the current minimum, in contrast to the behavior during the minimum between cycles 22 and 23.

  20. SU-F-18C-01: Minimum Detectability Analysis for Comprehensive Sized Based Optimization of Image Quality and Radiation Dose Across CT Protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smitherman, C; Chen, B; Samei, E

    2014-06-15

    Purpose: This work involved a comprehensive modeling of task-based performance of CT across a wide range of protocols. The approach was used for optimization and consistency of dose and image quality within a large multi-vendor clinical facility. Methods: 150 adult protocols from the Duke University Medical Center were grouped into sub-protocols with similar acquisition characteristics. A size based image quality phantom (Duke Mercury Phantom) was imaged using these sub-protocols for a range of clinically relevant doses on two CT manufacturer platforms (Siemens, GE). The images were analyzed to extract task-based image quality metrics such as the Task Transfer Function (TTF), Noise Power Spectrum, and Az based on designer nodule task functions. The data were analyzed in terms of the detectability of a lesion size/contrast as a function of dose, patient size, and protocol. A graphical user interface (GUI) was developed to predict image quality and dose to achieve a minimum level of detectability. Results: Image quality trends with variations in dose, patient size, and lesion contrast/size were evaluated and calculated data behaved as predicted. The GUI proved effective to predict the Az values representing radiologist confidence for a targeted lesion, patient size, and dose. As an example, an abdomen pelvis exam for the GE scanner, with a task size/contrast of 5-mm/50-HU, and an Az of 0.9 requires a dose of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm, the minimum detected lesion size at those dose levels would be 8.4, 5, and 3.9 mm, respectively. Conclusion: The designed CT protocol optimization platform can be used to evaluate minimum detectability across dose levels and patient diameters. The method can be used to improve individual protocols as well as to improve protocol consistency across CT scanners.
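
    A minimal Python sketch of reading a minimum dose off a detectability curve, as the GUI described above does; the Az-versus-dose points below are invented for one hypothetical task and patient size:

        import numpy as np

        dose = np.array([2.0, 4.0, 8.0, 16.0, 32.0])   # CTDIvol, mGy (hypothetical)
        az = np.array([0.78, 0.84, 0.90, 0.94, 0.96])  # detectability at each dose

        target = 0.90
        # Interpolate on log-dose, where Az tends to grow roughly linearly
        log_dose = np.interp(target, az, np.log(dose))
        print(f"Minimum dose for Az >= {target}: {np.exp(log_dose):.1f} mGy")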

  1. Sunspot Observations During the Maunder Minimum from the Correspondence of John Flamsteed

    NASA Astrophysics Data System (ADS)

    Carrasco, V. M. S.; Vaquero, J. M.

    2016-11-01

    We compile and analyze the sunspot observations made by John Flamsteed for the period 1672 - 1703, which corresponds to the second part of the Maunder Minimum. They appear in the correspondence of the famous astronomer. We include in an appendix the original texts of the sunspot records kept by Flamsteed. We compute an estimate of the level of solar activity using these records, and compare the results with the latest reconstructions of solar activity during the Maunder Minimum, obtaining values characteristic of a grand solar minimum. Finally, we discuss a phenomenon observed and described by Stephen Gray in 1705 that has been interpreted as a white-light flare.

  2. Estimation of beverage consumption and associated caloric intake in adult Czech population. An observational study.

    PubMed

    Adámková, Věra; Hubáček, Jaroslav A; Zimmelová, Petra; Velemínský, Miloš

    2011-01-01

    Food intake is a commonly monitored issue in many studies. In contrast, almost no information has been published on beverage intake in adults. To evaluate beverage intake, we studied a population of 1,200 adults (656 males and 544 females, aged 18-54 years). The volumes and types of beverages were obtained from self-reported questionnaires. The mean beverage intake was highly variable, with a minimum of 450 mL/day and a maximum of 5,330 mL/day. A mean of 1,575 mL/day was found in the entire population (2,300 mL in males and 840 mL in females). Different patterns in the consumption of beverage types were observed between the males and females. For both males and females, the most common beverage consumed was water followed by tea. The next preferable beverages were alcoholic beer, coffee, and non-alcoholic beer in males and coffee, milk, and alcoholic beer in females. The estimated caloric intake from beverages covers, in most individuals, 10-30% of the recommended daily caloric intake. There is substantial variation among individuals, both in beverage intake and in caloric intake through beverages. The caloric intake from beverages reaches, in some individuals, one-third of the recommended daily caloric rate. © 2011 Neuroendocrinology Letters

  3. Education and health knowledge: evidence from UK compulsory schooling reform.

    PubMed

    Johnston, David W; Lordan, Grace; Shields, Michael A; Suziedelyte, Agne

    2015-02-01

    We investigate if there is a causal link between education and health knowledge using data from the 1984/85 and 1991/92 waves of the UK Health and Lifestyle Survey (HALS). Uniquely, the survey asks respondents what they think are the main causes of ten common health conditions, and we compare these answers to those given by medical professionals to form an index of health knowledge. For causal identification we use increases in the UK minimum school leaving age in 1947 (from 14 to 15) and 1972 (from 15 to 16) to provide exogenous variation in education. These reforms predominantly induced adolescents who would have left school to stay for one additionally mandated year. OLS estimates suggest that education significantly increases health knowledge, with a one-year increase in schooling increasing the health knowledge index by 15% of a standard deviation. In contrast, estimates from instrumental-variable models show that increased schooling due to the education reforms did not significantly affect health knowledge. This main result is robust to numerous specification tests and alternative formulations of the health knowledge index. Further research is required to determine whether there is also no causal link between higher levels of education - such as post-school qualifications - and health knowledge. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Harmonic demodulation and minimum enhancement factors in field-enhanced near-field optical microscopy.

    PubMed

    Scarpettini, A F; Bragas, A V

    2015-01-01

    Field-enhanced scanning optical microscopy relies on the design and fabrication of plasmonic probes that must provide optical and chemical contrast at the nanoscale. In order to do so, the scattering containing the near-field information recorded in a field-enhanced scanning optical microscopy experiment has to surpass the background light, which is always present due to multiple interferences between the macroscopic probe and sample. In this work, we show that when the probe-sample distance is modulated with very low amplitude, the higher the harmonic of demodulation, the better the ratio between the near-field signal and the interferometric background. The choice of working at a given harmonic n is dictated by the experiment: one works at n when the signal at the n + 1 harmonic falls below the experimental noise. We demonstrate that the optical contrast comes from the nth derivative of the near-field scattering, amplified by the interferometric background. By modelling the far and near field we calculate the probe-sample approach curves, which fit the experimental ones very well. After collecting a large amount of experimental data for different probes and samples, we conclude with a table of the minimum enhancement factors needed to obtain optical contrast with field-enhanced scanning optical microscopy. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.

  5. Minimum Expected Risk Estimation for Near-neighbor Classification

    DTIC Science & Technology

    2006-04-01

    We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN)... estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant... the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights
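
    A minimal Python sketch contrasting the maximum likelihood class-probability estimate of a weighted kNN classifier with a prior-shrunken estimate in the spirit of minimum expected risk estimation; the shrinkage rule here is a generic illustration, not the report's estimator:

        import numpy as np

        def knn_class_prob(dists, labels, k, prior=0.5, strength=1.0):
            """Class-1 probability from the k nearest neighbors.
            Returns (prior-shrunken estimate, maximum likelihood estimate)."""
            idx = np.argsort(dists)[:k]
            w = 1.0 / (dists[idx] + 1e-9)          # inverse-distance weights
            vote = np.sum(w * labels[idx])
            ml = vote / np.sum(w)
            shrunk = (vote + strength * prior) / (np.sum(w) + strength)
            return shrunk, ml

        rng = np.random.default_rng(7)
        dists = rng.uniform(0.1, 1.0, 50)          # distances to training points
        labels = rng.integers(0, 2, 50)            # their binary class labels
        shrunk, ml = knn_class_prob(dists, labels, k=5)
        print(f"ML estimate: {ml:.3f}, shrunken estimate: {shrunk:.3f}")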

  6. Thermodynamics of RNA structures by Wang–Landau sampling

    PubMed Central

    Lou, Feng; Clote, Peter

    2010-01-01

    Motivation: Thermodynamics-based dynamic programming RNA secondary structure algorithms have been of immense importance in molecular biology, where applications range from the detection of novel selenoproteins using expressed sequence tag (EST) data, to the determination of microRNA genes and their targets. Dynamic programming algorithms have been developed to compute the minimum free energy secondary structure and partition function of a given RNA sequence, the minimum free-energy and partition function for the hybridization of two RNA molecules, etc. However, the applicability of dynamic programming methods depends on disallowing certain types of interactions (pseudoknots, zig-zags, etc.), as their inclusion renders structure prediction a nondeterministic polynomial time (NP)-complete problem. Nevertheless, such interactions have been observed in X-ray structures. Results: A non-Boltzmannian Monte Carlo algorithm was designed by Wang and Landau to estimate the density of states for complex systems, such as the Ising model, that exhibit a phase transition. In this article, we apply the Wang-Landau (WL) method to compute the density of states for secondary structures of a given RNA sequence, and for hybridizations of two RNA sequences. Our method is shown to be much faster than existing software, such as RNAsubopt. From density of states, we compute the partition function over all secondary structures and over all pseudoknot-free hybridizations. The advantage of the WL method is that by adding a function to evaluate the free energy of arbitrary pseudoknotted structures and of arbitrary hybridizations, we can estimate thermodynamic parameters for situations known to be NP-complete. This extension to pseudoknots will be made in the sequel to this article; in contrast, the current article describes the WL algorithm applied to pseudoknot-free secondary structures and hybridizations. Availability: The WL RNA hybridization web server is under construction at http://bioinformatics.bc.edu/clotelab/. Contact: clote@bc.edu PMID:20529917
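
    A minimal Python sketch of Wang-Landau sampling on a toy system where the exact density of states is known (N independent spins, with "energy" defined as the number of up spins, so g(E) = C(N, E)); an RNA application would replace the spin flips with structure moves and the spin count with free energy bins:

        import numpy as np
        from math import comb, log

        rng = np.random.default_rng(2)
        N = 20
        spins = rng.integers(0, 2, N)
        log_g = np.zeros(N + 1)          # running estimate of log g(E)
        hist = np.zeros(N + 1)
        f = 1.0                          # log modification factor

        while f > 1e-4:
            for _ in range(10000):
                i = rng.integers(N)
                e_old = spins.sum()
                e_new = e_old + (1 - 2 * spins[i])
                # Accept the flip with probability min(1, g(E_old) / g(E_new))
                if np.log(rng.random()) < log_g[e_old] - log_g[e_new]:
                    spins[i] ^= 1
                e = spins.sum()
                log_g[e] += f
                hist[e] += 1
            visited = hist[hist > 0]
            if visited.min() > 0.8 * visited.mean():   # histogram flat enough
                f /= 2.0
                hist[:] = 0

        exact = np.array([log(comb(N, e)) for e in range(N + 1)])
        err = (log_g - log_g[0]) - (exact - exact[0])
        print("Max abs error in log g(E):", round(float(np.abs(err).max()), 2))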

  7. δ13C values of soil organic matter in semiarid grassland with mesquite (Prosopis) encroachment in southeastern Arizona

    USGS Publications Warehouse

    Biggs, Thomas H.; Quade, Jay; Webb, Robert H.

    2002-01-01

    Over the past century, C3 woody plants and trees have increased in abundance in many semiarid ecosystems, displacing native C4 grasses. Livestock grazing, climatic fluctuations, and fire suppression are several reasons proposed for this shift. Soil carbon isotopic signatures are an ideal technique to evaluate carbon turnover rates in such ecosystems. On the gunnery ranges of Fort Huachuca in southeastern Arizona, study sites were established on homogeneous granitic alluvium to investigate the effects of fire frequency on δ13C values in surface soil organic matter (SOM). These ranges have had no livestock grazing for 50 years and a well-documented history of fires. Prosopis velutina Woot. (mesquite) trees have altered SOM δ13C pools by the concentration of plant nutrients and the addition of isotopically light litter. These soil carbon changes do not extend beyond canopy margins. Elevated total organic carbon (TOC), plant nutrient (N and P) concentrations, and depleted SOM δ13C values are associated with C3 Prosopis on an unburned plot, which enables recognition of former Prosopis-occupied sites on plots with recent fire histories. Elevated nutrient concentrations associated with former Prosopis are retained in SOM for many decades. Surface SOM δ13C values indicate the estimated minimum turnover time of C4-derived carbon beneath large mature Prosopis is about 100–300 years. In contrast, complete turnover of original C3 carbon to C4 carbon under grasslands is estimated to take a minimum of 150–500 years. Our study confirms that C4 grass cover has declined over the past 100 years, although isolated C3 trees or shrubs were not uncommon on the historic C4-dominated grasslands. We find evidence in surface soil layers for a modern C3 plant expansion reflected in the substantial shift of SOM δ13C values from C4 grasses to C3 shrublands.

  8. Estimation of proliferative potentiality of central neurocytoma: correlational analysis of minimum ADC and maximum SUV with MIB-1 labeling index.

    PubMed

    Sakamoto, Ryo; Okada, Tomohisa; Kanagaki, Mitsunori; Yamamoto, Akira; Fushimi, Yasutaka; Kakigi, Takahide; Arakawa, Yoshiki; Takahashi, Jun C; Mikami, Yoshiki; Togashi, Kaori

    2015-01-01

    Central neurocytoma was initially believed to be a benign tumor type, although atypical cases with more aggressive behavior have been reported. Preoperative estimation of the proliferative activity of central neurocytoma is one of the most important considerations in determining tumor management. The aim was to investigate the predictive value of image characteristics and of quantitative measurements of minimum apparent diffusion coefficient (ADCmin) and maximum standardized uptake value (SUVmax) for the proliferative activity of central neurocytoma, as measured by the MIB-1 labeling index (LI). Twelve cases of central neurocytoma, including one recurrence, from January 2001 to December 2011 were included. Preoperative scans were conducted in 11, nine, and five patients for computed tomography (CT), diffusion-weighted imaging (DWI), and fluorine-18-fluorodeoxyglucose positron emission tomography (FDG-PET), respectively, and the ADCmin and SUVmax of the tumors were measured. Image characteristics were investigated using CT, T2-weighted (T2W) imaging, and contrast-enhanced T1-weighted (T1W) imaging, and their differences were examined using Fisher's exact test between cases with MIB-1 LI below and above 2%, recognized as typical and atypical central neurocytoma, respectively. Correlational analysis was conducted for ADCmin and SUVmax with MIB-1 LI. A P value <0.05 was considered significant. Morphological appearances varied widely, and there was no significant correlation with MIB-1 LI, except for a tendency for strong enhancement in central neurocytomas with higher MIB-1 LI (P = 0.061). High linearity with MIB-1 LI was observed for ADCmin and SUVmax (r = -0.91 and 0.74, respectively), but only ADCmin was statistically significant (P = 0.0006). Central neurocytoma has a wide variety of imaging appearances, and assessment of proliferative potential is difficult on morphological grounds alone. ADCmin was recognized as a potential marker for differentiating atypical central neurocytomas from typical ones. © The Foundation Acta Radiologica 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  9. Flying After Conducting an Aircraft Excessive Cabin Leakage Test.

    PubMed

    Houston, Stephen; Wilkinson, Elizabeth

    2016-09-01

    Aviation medical specialists should be aware that commercial airline aircraft engineers may undertake a 'dive equivalent' operation while conducting maintenance activities on the ground. We present a worked example of an occupational risk assessment to determine a minimum safe preflight surface interval (PFSI) for an engineer before flying home to base after conducting an Excessive Cabin Leakage Test (ECLT) on an unserviceable aircraft overseas. We use published dive tables to determine the minimum safe PFSI. The estimated maximum depth acquired during the procedure varies between 10 and 20 fsw and the typical estimated bottom time varies between 26 and 53 min for the aircraft types operated by the airline. Published dive tables suggest that no minimum PFSI is required for such a dive profile. Diving tables suggest that no minimum PFSI is required for the typical ECLT dive profile within the airline; however, having conducted a risk assessment, which considered peak altitude exposure during commercial flight, the worst-case scenario test dive profile, the variability of interindividual inert gas retention, and our existing policy among other occupational groups within the airline, we advised that, in the absence of a bespoke assessment of the particular circumstances on the day, the minimum PFSI after conducting ECLT should be 24 h. Houston S, Wilkinson E. Flying after conducting an aircraft excessive cabin leakage test. Aerosp Med Hum Perform. 2016; 87(9):816-820.

  10. High Intensity Focused Ultrasound Monitoring using Harmonic Motion Imaging for Focused Ultrasound (HMIFU) under boiling or slow denaturation conditions

    PubMed Central

    Hou, Gary Y.; Marquet, Fabrice; Wang, Shutao; Apostolakis, Iason-Zacharias; Konofagou, Elisa E.

    2015-01-01

    Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a recently developed High-Intensity Focused Ultrasound (HIFU) treatment monitoring method that utilizes an amplitude-modulated therapeutic ultrasound beam to induce an oscillatory radiation force at the HIFU focus and estimates the focal tissue displacement to monitor the HIFU thermal treatment. In this study, the performance of HMIFU under acoustic, thermal, and mechanical effects was investigated. The performance of HMIFU was assessed in ex vivo canine liver specimens (n=13) under slow denaturation or boiling regimes. A Passive Cavitation Detector (PCD) was used to assess the acoustic cavitation activity while a bare-wire thermocouple was used to monitor the focal temperature change. During lesioning with slow denaturation, high quality displacements (correlation coefficient above 0.97) were observed under minimum cavitation noise, indicating the tissue's initial-softening-then-stiffening property change. During HIFU with boiling, HMIFU monitored a consistent change in lesion-to-background displacement contrast (0.46±0.37) despite the presence of strong cavitation noise due to boiling during lesion formation. Therefore, HMIFU effectively monitored softening-then-stiffening during lesioning under slow denaturation, and detected lesioning under boiling with a distinct change in displacement contrast in the presence of cavitation. In conclusion, HMIFU was shown to be effective in HIFU monitoring and lesion identification without being significantly affected by cavitation noise. PMID:26168177

  11. Prediction of HDR quality by combining perceptually transformed display measurements with machine learning

    NASA Astrophysics Data System (ADS)

    Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott

    2017-09-01

    We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
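
    A minimal Python sketch of the regression stage described above, fitting an SVM (SVR) to map display parameters to quality scores; the parameter ranges and the synthetic "subjective quality" function are invented, and the perceptual transform is omitted:

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(3)
        # Columns: max luminance, min luminance, gamut coverage, bit depth, contrast
        lo = [100.0, 1e-4, 0.7, 8.0, 100.0]
        hi = [4000.0, 0.1, 1.0, 12.0, 10000.0]
        X = rng.uniform(lo, hi, size=(200, 5))
        quality = (2 * np.log10(X[:, 0]) - np.log10(X[:, 1])
                   + 3 * X[:, 2] + rng.normal(0, 0.3, 200))

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
        model.fit(X[:150], quality[:150])
        pred = model.predict(X[150:])
        print("Held-out Pearson r:", round(pearsonr(pred, quality[150:])[0], 3))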

  12. Electric Field Reconstruction in the Image Plane of a High-Contrast Coronagraph Using a Set of Pinholes Around the Lyot Plane

    NASA Technical Reports Server (NTRS)

    Giveon, Amir; Kern, Brian; Shaklan, Stuart; Wallace, Kent; Noecker, Charley

    2012-01-01

    Pair-wise estimation has now been used on various testbeds with different coronagraphs, with the best contrast results to date. The pinhole estimate has been implemented and is ready to be tested in closed-loop correction, and it offers an independent method. We hope to improve the calibration process to obtain better estimates.

  13. Comparison of intravascular ultrasound to contrast-enhanced 64-slice computed tomography to assess the significance of angiographically ambiguous coronary narrowings.

    PubMed

    Okabe, Teruo; Weigold, Wm Guy; Mintz, Gary S; Roswell, Robert; Joshi, Subodh; Lee, Sung Yun; Lee, Bongryeol; Steinberg, Daniel H; Roy, Probal; Slottow, Tina L Pinto; Smith, Kimberly; Torguson, Rebecca; Xue, Zhenyi; Satler, Lowell F; Kent, Kenneth M; Pichard, Augusto D; Weissman, Neil J; Lindsay, Joseph; Waksman, Ron

    2008-10-15

    The efficacy of contrast-enhanced multislice computed tomography (MSCT) for assessment of ambiguous lesions is unknown. We compared both quantitative coronary angiography (QCA) and MSCT to the gold standard for a significant stenosis, the minimum luminal area (MLA) by intravascular ultrasound (IVUS), in 51 patients (64 +/- 10 years old, 19 men) with 69 angiographically ambiguous, nonleft main lesions. The MSCT was performed 17 +/- 18 days before IVUS analysis. Overall diameter stenosis by QCA was 51.0 +/- 9.8%; 39 of 51 patients (76%) eventually underwent revascularization (38 by percutaneous coronary intervention and 1 by coronary artery bypass graft). By univariate analysis, minimum luminal diameter, MLA, lumen visibility by MSCT, and minimum luminal diameter by QCA were significant predictors of MLA by IVUS.

  14. Low-flow characteristics of streams in Ohio through water year 1997

    USGS Publications Warehouse

    Straub, David E.

    2001-01-01

    This report presents selected low-flow and flow-duration characteristics for 386 sites throughout Ohio. These sites include 195 long-term continuous-record stations with streamflow data through water year 1997 (October 1 to September 30) and for 191 low-flow partial-record stations with measurements into water year 1999. The characteristics presented for the long-term continuous-record stations are minimum daily streamflow; average daily streamflow; harmonic mean flow; 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 5-, 10-, 20-, and 50-year recurrence intervals; and 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 20-, and 10-percent daily duration flows. The characteristics presented for the low-flow partial-record stations are minimum observed streamflow; estimated 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 10-, and 20-year recurrence intervals; and estimated 98-, 95-, 90-, 85- and 80-percent daily duration flows. The low-flow frequency and duration analyses were done for three seasonal periods (warm weather, May 1 to November 30; winter, December 1 to February 28/29; and autumn, September 1 to November 30), plus the annual period based on the climatic year (April 1 to March 31).

  15. General Learning at a Women's College.

    ERIC Educational Resources Information Center

    Ratcliff, James L.; Jones, Elizabeth A.

    A study was conducted at a women's college to compare contrasting ideas of general education curriculum requirements: the prescribed core curriculum in which all students take the same courses versus the distributive model in which students are required to choose a minimum number of courses within specified academic areas. Graduating seniors were…

  16. Establishment of a center of excellence for applied mathematical and statistical research

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Gray, H. L.

    1983-01-01

    The state of the art was assessed with regard to efforts in support of the crop production estimation problem, and alternative generic proportion estimation techniques were investigated. Topics covered include modeling the greenness profile (Badhwar's model), parameter estimation using mixture models such as CLASSY, and minimum distance estimation as an alternative to maximum likelihood estimation. Approaches to the problem of obtaining proportion estimates when the underlying distributions are asymmetric are examined, including the properties of the Weibull distribution.

  17. Inference of pCO2 Levels during the Late Cretaceous Using Fossil Lauraceae

    NASA Astrophysics Data System (ADS)

    Richey, J. D.; Upchurch, G. R.

    2011-12-01

    Botanical estimates of pCO2 for the Late Cretaceous have most commonly used Stomatal Index (SI) in fossil Ginkgo. Recently, SI in fossil Lauraceae has been used to infer changes in pCO2 across the Cenomanian-Turonian boundary, based on the relation between SI and pCO2 in extant Laurus and Hypodaphnis. To provide a broad-scale picture of pCO2 based on fossil Lauraceae, we examined dispersed cuticle of the leaf macrofossil genus Pandemophyllum from: 1) the early to middle Cenomanian of the Potomac Group of Maryland (Mauldin Mountain locality, lower Zone III) and 2) the Maastrichtian of southern Colorado (Raton Basin, Starkville South and Berwind Canyon localities). These samples fall within the Late Cretaceous decline in pCO2 inferred from geochemical modeling and other proxies. SI was calculated from fossil cuticle fragments using ImageJ and counts of up to 56,000 cells per sample, a far greater number of cells than are counted in most studies. CO2 levels were estimated using the relation between SI and CO2 published for Laurus nobilis and Hypodaphnis zenkeri. Early to middle Cenomanian atmospheric pCO2 is estimated at 362-536 parts per million (ppm). This represents the absolute minimum and maximum estimated CO2 levels from the ±95% confidence intervals (CI) of the relation between SI and CO2 for the modern equivalents, and SI ± 1 Standard Deviation (SD) in the fossil genus Pandemophyllum. Late Maastrichtian atmospheric pCO2 is estimated at 358-534 ppm. The Maastrichtian estimate falls within the range of published estimates from other proxies. The Cenomanian estimate, in contrast, is low relative to most other estimates. The 95% confidence intervals of our pCO2 estimates overlap each other and those of many of the assemblages published by Barclay et al. (2010) for Lauraceae across the Cenomanian-Turonian boundary. This could indicate that 1) pCO2 did not undergo a major long-term decline during the Late Cretaceous, 2) Lauraceae show low sensitivity to high pCO2, or 3) additional sampling is necessary to find the mid-Cretaceous pCO2 maximum inferred by other proxy methods.

  18. Geodesy by radio interferometry: Water vapor radiometry for estimation of the wet delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elgered, G.; Davis, J.L.; Herring, T.A.

    1991-04-10

    An important source of error in very-long-baseline interferometry (VLBI) estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. The authors present and discuss the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The lengths of the baselines range from 919 to 7,941 km. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. The use of WVR data yielded a 13% smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the best minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass. For use of WVR data along with accurate determinations of total surface pressure, the best minimum is about 20°; for use of a model for the wet delay based on the humidity and temperature at the ground, the best minimum is about 35°.

  19. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    PubMed

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

    We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can possibly be attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models corresponding to deep and shallow waters, resp., and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, resp. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
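
    A minimal Python sketch of the CRB computation for a locally linear forward model with Gaussian noise, CRB = (J^T C^-1 J)^-1; the Jacobian and noise covariance below are invented stand-ins for a bio-optical reflectance model and a sensor noise model:

        import numpy as np

        def crb(jacobian, noise_cov):
            """Cramer-Rao bound: inverse of the Fisher information J^T C^-1 J."""
            J = np.asarray(jacobian)
            fim = J.T @ np.linalg.solve(noise_cov, J)
            return np.linalg.inv(fim)

        # Toy forward model: reflectance in 4 bands, linear in 2 parameters
        J = np.array([[0.8, 0.1],
                      [0.6, 0.3],
                      [0.4, 0.5],
                      [0.2, 0.7]])
        C = np.diag([1e-4] * 4)                   # per-band noise variances
        print("Minimum parameter std devs:", np.sqrt(np.diag(crb(J, C))))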

  20. Comparing different policy scenarios to reduce the consumption of ultra-processed foods in UK: impact on cardiovascular disease mortality using a modelling approach.

    PubMed

    Moreira, Patricia V L; Baraldi, Larissa Galastri; Moubarac, Jean-Claude; Monteiro, Carlos Augusto; Newton, Alex; Capewell, Simon; O'Flaherty, Martin

    2015-01-01

    The global burden of non-communicable diseases partly reflects growing exposure to ultra-processed food products (UPPs). These heavily marketed UPPs are cheap and convenient for consumers and profitable for manufacturers, but contain high levels of salt, fat and sugars. This study aimed to explore the potential mortality reduction associated with future policies for substantially reducing ultra-processed food intake in the UK. We obtained data from the UK Living Cost and Food Survey and from the National Diet and Nutrition Survey. Following the NOVA food typology, all food items were categorized into three groups according to the extent of food processing: Group 1 describes unprocessed/minimally processed foods. Group 2 comprises processed culinary ingredients. Group 3 includes all processed or ultra-processed products. Using UK nutrient conversion tables, we estimated the energy and nutrient profile of each food group. We then used the IMPACT Food Policy model to estimate reductions in cardiovascular mortality from improved nutrient intakes reflecting shifts from processed or ultra-processed to unprocessed/minimally processed foods. We then conducted probabilistic sensitivity analyses using Monte Carlo simulation. Approximately 175,000 cardiovascular disease (CVD) deaths might be expected in 2030 if current mortality patterns persist. However, halving the intake of Group 3 (processed) foods could result in approximately 22,055 fewer CVD-related deaths in 2030 (minimum estimate 10,705, maximum estimate 34,625). An ideal scenario in which salt and fat intakes are reduced to the low levels observed in Groups 1 and 2 could lead to approximately 14,235 (minimum estimate 6,680, maximum estimate 22,525) fewer coronary deaths and approximately 7,820 (minimum estimate 4,025, maximum estimate 12,100) fewer stroke deaths, comprising almost 13% mortality reduction. This study shows a substantial potential for reducing the cardiovascular disease burden through a healthier food system. It highlights the crucial importance of implementing healthier UK food policies.
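
    A minimal Python sketch of the probabilistic sensitivity analysis step, propagating parameter uncertainty by Monte Carlo simulation; the distributions are loosely anchored to the central figures quoted above (22,055 of ~175,000 deaths, range 10,705-34,625) and are not the IMPACT Food Policy model itself:

        import numpy as np

        rng = np.random.default_rng(4)
        n_sim = 100_000
        baseline_deaths = rng.normal(175_000, 8_000, n_sim)   # projected 2030 CVD deaths
        # Fractional mortality reduction under the halved-intake scenario (assumed range)
        reduction = rng.triangular(0.06, 0.126, 0.198, n_sim)
        averted = baseline_deaths * reduction
        lo, mid, hi = np.percentile(averted, [2.5, 50, 97.5])
        print(f"Deaths averted: {mid:,.0f} (95% interval {lo:,.0f}-{hi:,.0f})")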

  1. Estimated Minimum Discharge Rates of the Deepwater Horizon Spill-Interim Report to the Flow Rate Technical Group from the Mass Balance Team

    USGS Publications Warehouse

    Labson, Victor F.; Clark, Roger N.; Swayze, Gregg A.; Hoefen, Todd M.; Kokaly, Raymond F.; Livo, K. Eric; Powers, Michael H.; Plumlee, Geoffrey S.; Meeker, Gregory P.

    2010-01-01

    All of the calculations and results in this report are preliminary and intended for the purpose, and only for the purpose, of aiding the incident team in assessing the extent of the spilled oil for ongoing response efforts. Other applications of this report are not authorized and are not considered valid. Because of time constraints and limitations of data available to the experts, many of their estimates are approximate, are subject to revision, and certainly should not be used as the Federal Government's final values for assessing volume of the spill or its impact to the environment or to coastal communities. Each expert that contributed to this report reserves the right to alter his conclusions based upon further analysis or additional information. An estimated minimum total oil discharge was determined by calculations of oil volumes measured as of May 17, 2010. This included oil on the ocean surface measured with satellite and airborne images and with spectroscopic data (129,000 barrels to 246,000 barrels using less and more aggressive assumptions, respectively), oil skimmed off the surface (23,500 barrels from U.S. Coast Guard [USCG] estimates), oil burned off the surface (11,500 barrels from USCG estimates), dispersed subsea oil (67,000 to 114,000 barrels), and oil evaporated or dissolved (109,000 to 185,000 barrels). Sedimentation (oil captured from Mississippi River silt and deposited on the ocean bottom), biodegradation, and other processes may indicate significant oil volumes beyond our analyses, as will any subsurface volumes such as suspended tar balls or other emulsions that are not included in our estimates. The lower bounds of total measured volumes are estimated to be within the range of 340,000 to 580,000 barrels as of May 17, 2010, for an estimated average minimum discharge rate of 12,500 to 21,500 barrels per day for 27 days from April 20 to May 17, 2010.
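
    The mass balance arithmetic above can be reproduced directly; a minimal Python sketch summing the component ranges quoted in the report:

        # Low/high component estimates (barrels) as of May 17, 2010
        components = {
            "surface oil":          (129_000, 246_000),
            "skimmed":              (23_500, 23_500),
            "burned":               (11_500, 11_500),
            "dispersed subsea":     (67_000, 114_000),
            "evaporated/dissolved": (109_000, 185_000),
        }
        low = sum(v[0] for v in components.values())
        high = sum(v[1] for v in components.values())
        days = 27   # April 20 to May 17, 2010
        print(f"Total: {low:,}-{high:,} bbl")
        print(f"Average rate: {low / days:,.0f}-{high / days:,.0f} bbl/day")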

  2. Community reductions in youth smoking after raising the minimum tobacco sales age to 21.

    PubMed

    Kessel Schneider, Shari; Buka, Stephen L; Dash, Kim; Winickoff, Jonathan P; O'Donnell, Lydia

    2016-05-01

    Raising the tobacco sales age to 21 has gained support as a promising strategy to reduce youth cigarette access, but there is little direct evidence of its impact on adolescent smoking. Using regional youth survey data, we compared youth smoking trends in Needham, Massachusetts, which raised the minimum purchase age in 2005, with those of 16 surrounding communities. The MetroWest Adolescent Health Survey is a biennial census survey of high school youth in communities west of Boston; over 16,000 students participated at each of four time points from 2006 to 2012. With these pooled cross-sectional data, we used generalised estimating equation models to compare trends in current cigarette smoking and cigarette purchases in Needham relative to 16 comparison communities without similar ordinances. To determine whether trends were specific to tobacco, we also examined trends in youth alcohol use over the same time period. From 2006 to 2010, the decrease in 30-day smoking in Needham (from 13% to 7%) was significantly greater than in the comparison communities (from 15% to 12%; p<.001). This larger decline was consistent for both genders, Caucasian and non-Caucasian youth, and grades 10, 11 and 12. Cigarette purchases among current smokers also declined significantly more in Needham than in the comparison communities during this time. In contrast, there were no comparable differences for current alcohol use. Our results suggest that raising the minimum sales age to 21 for tobacco contributes to a greater decline in youth smoking relative to communities that did not pass this ordinance. These findings support local community-level action to raise the tobacco sales age to 21. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
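
    A minimal Python sketch of the difference-in-trends GEE described above, on simulated data: a policy-by-time interaction with an exchangeable working correlation within communities. The smoking rates and effect sizes are invented:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(6)
        rows = []
        for year in (2006, 2008, 2010, 2012):
            for town in range(17):               # town 0 plays the role of the policy town
                t = (year - 2006) / 2.0
                p = 0.15 - 0.01 * t - (0.02 * t if town == 0 else 0.0)
                for s in rng.binomial(1, max(p, 0.01), 400):
                    rows.append({"town": town, "year": year,
                                 "policy": int(town == 0), "smoke": s})
        df = pd.DataFrame(rows)

        # The year:policy coefficient tests for a steeper decline in the policy town
        model = smf.gee("smoke ~ year * policy", groups="town", data=df,
                        family=sm.families.Binomial(),
                        cov_struct=sm.cov_struct.Exchangeable())
        print(model.fit().params)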

  3. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    NASA Astrophysics Data System (ADS)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain causes many harmful effects. It is formed from two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where the sulfuric acid is derived from SO2 and the nitric acid from NOx (x = 1, 2). The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are repeated observations of the same individuals over time; a panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
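
    statsmodels has no MIVQUE estimator, so as a rough analog one can fit the random effects model with a mixed linear model, letting (RE)ML play the role MIVQUE plays in the paper. A minimal sketch (file and column names are hypothetical):

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical incomplete panel: pH plus SO4 and NO3 concentrations,
      # observed at several stations with differing numbers of observations.
      df = pd.read_csv("rainwater_panel.csv")  # columns: station, pH, SO4, NO3

      # Random effects model with a station-level random intercept.
      model = smf.mixedlm("pH ~ SO4 + NO3", data=df, groups=df["station"])
      result = model.fit()
      print(result.params)  # intercept and the two slope estimates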

  4. Refractive Error and Visual Functions in Children with Special Needs Compared with the First Grade School Students in Oman

    PubMed Central

    Vora, Urmi; Khandekar, Rajiv; Natrajan, Sarvanan; Al-Hadrami, Khalfan

    2010-01-01

    Background: We evaluated the refractive status and visual function of children with special needs in 2010 and compared them with healthy 1st grade school students in Oman. Materials and Methods: This was a cohort study. Optometrists recorded vision using a logarithm of minimum angle of resolution (LogMAR) chart. The preferential looking method was used for testing 31 children. Cycloplegic refraction was performed on all children. Contrast sensitivity was tested using 2.5%, 10%, and 100% contrast charts. Ocular movement, alignment, and anterior segment were also assessed. A pediatrician reviewed the health records of all the children at the time of their enrollment in this study to determine if the child had been diagnosed with a systemic condition or syndrome. The visual functions were assessed by study investigators. We estimated the rates and the risk of different visual function defects in children with special needs. Results: The prevalence of refractive error was 58.5% in 70 children (4.7 ± 0.8 years) with special needs (group 1) and 2.9% in 175 normal healthy first-grade students (group 2). The risk of refractive error was significantly higher in children with special needs [relative risk, 48.1 (95% confidence interval, 17.54–131.8)]. Hyperopia (>1.00 D), myopia (≥1.00 D) and astigmatism (≥±1.00 D) were found in 18.6%, 24.3%, and 27.1%, respectively, in group 1. Six children in this group had defective near vision. Sixteen (80%) children with Down syndrome had refractive error. Seven (50%) children with developmental disorder showed decreased contrast sensitivity. Conclusion: The prevalence of uncorrected refractive error was much higher in children with special needs. The prevalence of strabismus, nystagmus, and reduced contrast sensitivity was also higher in children with special needs. Early vision screening, visual function assessment, correction of refractive error, and frequent follow-up are recommended. PMID:21180428

  5. Electrofishing distance needed to estimate consistent Index of Biotic Integrity (IBI) scores in raftable Oregon rivers

    EPA Science Inventory

    An important issue surrounding assessment of riverine fish assemblages is the minimum amount of sampling distance needed to adequately determine biotic condition. Determining adequate sampling distance is important because sampling distance affects estimates of fish assemblage c...

  6. 38 CFR 36.4365 - Appraisal requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...

  7. 38 CFR 36.4365 - Appraisal requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...

  8. 38 CFR 36.4365 - Appraisal requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...

  9. 38 CFR 36.4365 - Appraisal requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...

  10. The minimum distance approach to classification

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Landgrebe, D. A.

    1971-01-01

    Work to advance the state of the art of minimum distance classification is reported. This is accomplished through a combination of theoretical and comprehensive experimental investigations based on multispectral scanner data. A survey of the literature for suitable distance measures was conducted, and the results of this survey are presented. It is shown that minimum distance classification, using density estimators and Kullback-Leibler numbers as the distance measure, is equivalent to a form of maximum likelihood sample classification. It is also shown that, for the parametric case, minimum distance classification is equivalent to nearest neighbor classification in the parameter space.
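
    To make the idea concrete, here is a minimal sketch (our construction, not the report's code) of minimum distance classification of one-dimensional Gaussian class models, using the Kullback-Leibler number as the distance measure:

      import numpy as np

      def kl_gauss(mu0, var0, mu1, var1):
          # KL divergence KL(N(mu0, var0) || N(mu1, var1)) for 1-D Gaussians.
          return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

      def classify(sample, class_params):
          # Assign the sample to the class whose density is closest, in KL
          # distance, to the density estimated from the sample itself.
          mu_s, var_s = np.mean(sample), np.var(sample)
          dists = {name: kl_gauss(mu_s, var_s, mu, var)
                   for name, (mu, var) in class_params.items()}
          return min(dists, key=dists.get)

      # Toy class models (mean, variance) for a single spectral band.
      classes = {"water": (0.2, 0.01), "forest": (0.5, 0.04), "soil": (0.8, 0.02)}
      obs = np.random.default_rng(0).normal(0.52, 0.2, size=50)
      print(classify(obs, classes))  # -> "forest"

    Classifying a whole sample of pixels at once, rather than a single pixel, is what makes this a form of maximum likelihood sample classification in the sense of the abstract.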

  11. The Effect of an Increased Minimum Wage on Infant Mortality and Birth Weight

    PubMed Central

    Livingston, Melvin D.; Markowitz, Sara; Wagenaar, Alexander C.

    2016-01-01

    Objectives. To investigate the effects of state minimum wage laws on low birth weight and infant mortality in the United States. Methods. We estimated the effects of state-level minimum wage laws using a difference-in-differences approach on rates of low birth weight (< 2500 g) and postneonatal mortality (28–364 days) by state and month from 1980 through 2011. All models included state and year fixed effects as well as state-specific covariates. Results. Across all models, a dollar increase in the minimum wage above the federal level was associated with a 1% to 2% decrease in low birth weight births and a 4% decrease in postneonatal mortality. Conclusions. If all states in 2014 had increased their minimum wages by 1 dollar, there would likely have been 2790 fewer low birth weight births and 518 fewer postneonatal deaths for the year. PMID:27310355

  12. The Effect of an Increased Minimum Wage on Infant Mortality and Birth Weight.

    PubMed

    Komro, Kelli A; Livingston, Melvin D; Markowitz, Sara; Wagenaar, Alexander C

    2016-08-01

    To investigate the effects of state minimum wage laws on low birth weight and infant mortality in the United States. We estimated the effects of state-level minimum wage laws using a difference-in-differences approach on rates of low birth weight (< 2500 g) and postneonatal mortality (28-364 days) by state and month from 1980 through 2011. All models included state and year fixed effects as well as state-specific covariates. Across all models, a dollar increase in the minimum wage above the federal level was associated with a 1% to 2% decrease in low birth weight births and a 4% decrease in postneonatal mortality. If all states in 2014 had increased their minimum wages by 1 dollar, there would likely have been 2790 fewer low birth weight births and 518 fewer postneonatal deaths for the year.
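
    A hedged sketch of the difference-in-differences specification described, with state and year fixed effects (the data frame and variable names are illustrative assumptions):

      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical state-by-month panel, 1980-2011: low-birth-weight rate
      # and the state minimum wage in dollars above the federal floor.
      panel = pd.read_csv("state_month_panel.csv")

      # Two-way fixed effects: identification comes from within-state
      # changes in the minimum wage over time.
      did = smf.ols(
          "lbw_rate ~ min_wage_above_federal + C(state) + C(year)",
          data=panel,
      ).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})
      print(did.params["min_wage_above_federal"])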

  13. Comparison of linear and nonlinear implementation of the compartmental tissue uptake model for dynamic contrast-enhanced MRI.

    PubMed

    Kallehauge, Jesper F; Sourbron, Steven; Irving, Benjamin; Tanderup, Kari; Schnabel, Julia A; Chappell, Michael A

    2017-06-01

    Fitting tracer kinetic models using linear methods is much faster than using their nonlinear counterparts, although this often comes at the expense of reduced accuracy and precision. The aim of this study was to derive the linear compartmental tissue uptake (CTU) model and compare its performance with that of its nonlinear version with respect to percentage error and precision. The linear and nonlinear CTU models were initially compared using simulations with varying noise and temporal sampling. Subsequently, the clinical applicability of the linear model was demonstrated on 14 patients with locally advanced cervical cancer examined with dynamic contrast-enhanced magnetic resonance imaging. Simulations revealed equal percentage error and precision when noise was within clinically achievable ranges (contrast-to-noise ratio >10). The linear method was significantly faster than the nonlinear method, with a minimum speedup of around 230 across all tested sampling rates. Clinical analysis revealed that parameters estimated using the linear and nonlinear CTU models were highly correlated (ρ ≥ 0.95). The linear CTU model is computationally more efficient and more stable against temporal downsampling, whereas the nonlinear method is more robust to variations in noise. The two methods may be used interchangeably within clinically achievable ranges of temporal sampling and noise.

  14. Image-adapted visually weighted quantization matrices for digital image compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    1994-01-01

    A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
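
    The quantization step the patent builds on can be sketched in a few lines; the image-adapted masking model itself is the patented part and is only stood in for here by an arbitrary scaling (our toy values, not the invention's model):

      import numpy as np
      from scipy.fft import dctn, idctn

      def quantize_block(block, Q):
          # Forward 8x8 DCT, quantize each coefficient by its matrix entry,
          # then reconstruct as a decoder would. Larger Q entries discard
          # more of the corresponding spatial frequency.
          coeffs = dctn(block, norm="ortho")
          quantized = np.round(coeffs / Q)            # the lossy step
          return idctn(quantized * Q, norm="ortho")

      # Stand-in for an image-adapted matrix: a flat base matrix scaled up
      # where a (hypothetical) masking model says errors would be invisible.
      Q = np.full((8, 8), 16.0) * np.linspace(1.0, 4.0, 8)[None, :]

      block = np.random.default_rng(1).uniform(0, 255, (8, 8))
      print(np.abs(block - quantize_block(block, Q)).max())  # worst-case error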

  15. Hybrid Stochastic Models for Remaining Lifetime Prognosis

    DTIC Science & Technology

    2004-08-01

    literature for techniques and comparisons. Osogami and Harchol-Balter [70], Perros [73], Johnson [36], and Altiok [5] provide excellent summaries of...and type of PH-distribution approximation for c2 > 0.5 is not as obvious. In order to use the minimum distance estimation, Perros [73] indicated that...moment-matching techniques. Perros [73] indicated that the maximum likelihood and minimum distance techniques require nonlinear optimization. Johnson

  16. Theoretical considerations in measurement of time discrepancies between input and myocardial time-signal intensity curves in estimates of regional myocardial perfusion with first-pass contrast-enhanced MRI.

    PubMed

    Natsume, Takahiro; Ishida, Masaki; Kitagawa, Kakuya; Nagata, Motonori; Sakuma, Hajime; Ichihara, Takashi

    2015-11-01

    The purpose of this study was to develop a method to determine time discrepancies between input and myocardial time-signal intensity (TSI) curves for accurate estimation of myocardial perfusion with first-pass contrast-enhanced MRI. Estimation of myocardial perfusion with contrast-enhanced MRI using kinetic models requires faithful recording of contrast content in the blood and myocardium. Typically, the arterial input function (AIF) is obtained by setting a region of interest in the left ventricular (LV) cavity. However, there is a small delay between the AIF and the myocardial curves, and such time discrepancies can lead to errors in flow estimation using Patlak plot analysis. In this study, the time discrepancies between the arterial TSI curve and the myocardial tissue TSI curve were estimated based on the compartment model. In the early phase after the arrival of the contrast agent in the myocardium, the relationship between the rate constant K1 and the concentrations of Gd-DTPA contrast agent in the myocardium and arterial blood (LV blood) can be described by the equation K1 = {dCmyo(tpeak)/dt}/Ca(tpeak), where Cmyo(t) and Ca(t) are the relative concentrations of Gd-DTPA contrast agent in the myocardium and in the LV blood, respectively, and tpeak is the time corresponding to the peak of Ca(t). In the ideal case, the time corresponding to the maximum upslope of Cmyo(t), tmax, is equal to tpeak. In practice, however, there is a small difference in the arrival times of the contrast agent into the LV and into the myocardium. This difference was estimated to correspond to the difference between tpeak and tmax. The magnitudes of such time discrepancies and the effectiveness of the correction for these time discrepancies were measured in 18 subjects who underwent myocardial perfusion MRI under rest and stress conditions. The effects of the time discrepancies could be corrected effectively in the myocardial perfusion estimates.
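
    The quoted relation translates directly into code. A minimal sketch under idealized assumptions (noise-free curves on a common time axis; our function, not the paper's implementation):

      import numpy as np

      def k1_and_time_discrepancy(t, ca, cmyo):
          # Estimate K1 and the AIF-to-myocardium time discrepancy from
          # first-pass curves. Per the abstract, the discrepancy is the
          # difference between the AIF peak time (tpeak) and the time of
          # maximum tissue upslope (tmax); ideally the two coincide.
          dcmyo = np.gradient(cmyo, t)       # tissue upslope dCmyo(t)/dt
          t_peak = t[np.argmax(ca)]          # peak of the arterial curve
          t_max = t[np.argmax(dcmyo)]        # maximum tissue upslope
          delay = t_max - t_peak             # correction for Patlak analysis

          # After aligning the curves by `delay`, the maximum upslope
          # coincides with the AIF peak, so K1 reduces to their ratio.
          k1 = np.max(dcmyo) / np.max(ca)
          return k1, delay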

  17. On the minimum quantum requirement of photosynthesis.

    PubMed

    Zeinalov, Yuzeir

    2009-01-01

    An analysis of the shape of photosynthetic light curves is presented, and the existence of the initial non-linear part is shown to be a consequence of the operation of the non-cooperative (Kok's) mechanism of oxygen evolution or of the effect of dark respiration. The effect of non-linearity on the quantum efficiency (yield) and quantum requirement is reconsidered. The essential conclusions are: 1) The non-linearity of the light curves cannot be compensated for by using suspensions of algae or chloroplasts with high (>1.0) optical density or absorbance. 2) The values of the maxima of the quantum efficiency curves, or of the minima of the quantum requirement curves, cannot be used to estimate the exact value of the maximum quantum efficiency or the minimum quantum requirement. These quantities should be estimated only by extrapolating the linear part of the quantum requirement curves, measured at higher light intensities, to zero light intensity.

  18. Biochemical methane potential (BMP) tests: Reducing test time by early parameter estimation.

    PubMed

    Da Silva, C; Astals, S; Peces, M; Campos, J L; Guerrero, L

    2018-01-01

    The biochemical methane potential (BMP) test is a key analytical technique to assess the implementation and optimisation of anaerobic biotechnologies. However, the technique is characterised by long testing times (from 20 to >100 days), which is not suitable for waste utilities, consulting companies or plant operators whose decision-making processes cannot be held up for such a long time. This study develops a statistically robust mathematical strategy using sensitivity functions for early prediction of BMP first-order model parameters, i.e. the methane yield (B0) and the kinetic rate constant (k). The minimum testing time for early parameter estimation showed a potential correlation with the k value, where (i) slowly biodegradable substrates (k ≤ 0.1 d-1) have minimum testing times of ≥15 days, (ii) moderately biodegradable substrates (0.1

  19. Stacking fault effects in Mg-doped GaN

    NASA Astrophysics Data System (ADS)

    Schmidt, T. M.; Miwa, R. H.; Orellana, W.; Chacham, H.

    2002-01-01

    First-principles total energy calculations are performed to investigate the interaction of a stacking fault with a p-type impurity in both zinc-blende and wurtzite GaN. For both structures we find that, in the presence of a stacking fault, the impurity level is a more localized state in the band gap. In zinc-blende GaN, the minimum energy position of the substitutional Mg atom is at the plane of the stacking fault. In contrast, in wurtzite GaN the substitutional Mg atom at the plane of the stacking fault is a local minimum and the global minimum is the substitutional Mg far from the fault. This behavior can be understood as a packing effect which induces a distinct strain relief process, since the local structure of the stacking fault in zinc-blende GaN is similar to fault-free wurtzite GaN and vice-versa.

  20. MRI contrast agent concentration and tumor interstitial fluid pressure.

    PubMed

    Liu, L J; Schlesinger, M

    2016-10-07

    The present work describes the relationship between tumor interstitial fluid pressure (TIFP) and the concentration of contrast agent for dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We predict the spatial distribution of TIFP based on that of contrast agent concentration. We also discuss approaches for estimating the tumor interstitial volume fraction (the void fraction, or porosity, of the porous medium), ve, and the contrast volume transfer constant, Ktrans, by measuring the ratio of contrast agent concentration in tissue to that in plasma. A linear fluid velocity distribution may reflect a quadratic TIFP distribution and lead to a practical method for TIFP estimation. To calculate TIFP, the parameters or variables should preferably be measured along the direction of the linear fluid velocity (the same direction as the gray-value distribution of the image, which is also linear). This method may simplify the calculation for estimating TIFP.

  1. Efficiency and large deviations in time-asymmetric stochastic heat engines

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...

    2014-10-24

    In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.

  2. Pair-Wise, Deformable Mirror, Image Plane-Based Diversity Electric Field Estimation for High Contrast Coronagraphy

    NASA Technical Reports Server (NTRS)

    Give'on, Amir; Kern, Brian D.; Shaklan, Stuart

    2011-01-01

    In this paper we describe the reconstruction of the complex electric field from image-plane intensity measurements for high-contrast coronagraphic imaging. A deformable mirror (DM) surface is modified with pairs of complementary shapes to create diversity in the image plane of the science camera, where the intensity of the light is measured. Along with the Electric Field Conjugation correction algorithm, this estimation method has been used in various high-contrast imaging testbeds to achieve the best contrasts to date, both in narrow-band and in broad-band light. We present the basic methodology of the estimation as an easy-to-follow list of steps, present results from HCIT, and raise several open questions we are confronted with when using this method.
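
    The core of pair-wise probing reduces to a small least-squares problem per focal-plane pixel: each probe pair yields an intensity difference that is linear in the unknown field. A minimal sketch under ideal assumptions (perfectly known probe fields, no noise):

      import numpy as np

      def estimate_field(probe_fields, intensity_diffs):
          # Estimate the complex field E at one pixel. For a probe field p,
          # |E+p|^2 - |E-p|^2 = 4*Re(conj(p)*E), which is linear in
          # (Re E, Im E); stack one equation per probe pair and solve.
          A = np.array([[4 * p.real, 4 * p.imag] for p in probe_fields])
          x, *_ = np.linalg.lstsq(A, np.asarray(intensity_diffs), rcond=None)
          return complex(x[0], x[1])

      # Self-test with a made-up field and two orthogonal probes.
      E_true = 0.3 - 0.7j
      probes = [1.0 + 0.0j, 0.0 + 1.0j]
      diffs = [abs(E_true + p) ** 2 - abs(E_true - p) ** 2 for p in probes]
      print(estimate_field(probes, diffs))  # -> (0.3-0.7j)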

  3. Portfolio optimization and the random magnet problem

    NASA Astrophysics Data System (ADS)

    Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.

    2002-08-01

    Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated, and therefore knowledge of cross-correlations among asset price movements is of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment which carries a minimum level of risk.
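
    A hedged sketch of the eigenvalue-clipping recipe from random matrix theory that underlies this line of work (our illustration; the paper's exact procedure may differ):

      import numpy as np

      def rmt_minvar_weights(returns):
          # Minimum-variance weights from an RMT-cleaned correlation matrix:
          # eigenvalues below the Marchenko-Pastur upper edge are treated as
          # noise and flattened to their average.
          T, N = returns.shape
          std = returns.std(axis=0)
          corr = np.corrcoef(returns, rowvar=False)

          lam, vecs = np.linalg.eigh(corr)
          lam_max = (1 + np.sqrt(N / T)) ** 2      # MP upper edge for Q = T/N
          lam_clean = lam.copy()
          lam_clean[lam < lam_max] = lam[lam < lam_max].mean()
          corr_clean = (vecs * lam_clean) @ vecs.T

          cov = corr_clean * np.outer(std, std)    # back to a covariance
          w = np.linalg.solve(cov, np.ones(N))     # w proportional to C^-1 1
          return w / w.sum()

      weights = rmt_minvar_weights(np.random.default_rng(2).normal(size=(500, 50)))
      print(weights.sum())  # 1.0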

  4. Estimation of additive forces and moments for supersonic inlets

    NASA Technical Reports Server (NTRS)

    Perkins, Stanley C., Jr.; Dillenius, Marnix F. E.

    1991-01-01

    A technique for estimating the additive forces and moments associated with supersonic, external compression inlets as a function of mass flow ratio has been developed. The technique makes use of a low order supersonic paneling method for calculating minimum additive forces at maximum mass flow conditions. A linear relationship between the minimum additive forces and the maximum values for fully blocked flow is employed to obtain the additive forces at a specified mass flow ratio. The method is applicable to two-dimensional inlets at zero or nonzero angle of attack, and to axisymmetric inlets at zero angle of attack. Comparisons with limited available additive drag data indicate fair to good agreement.

  5. Auger electron and characteristic energy loss spectra for electro-deposited americium-241

    NASA Astrophysics Data System (ADS)

    Varma, Matesh N.; Baum, John W.

    1983-07-01

    Auger electron energy spectra for americium-241 electro-deposited on a platinum substrate were obtained using a cylindrical mirror analyzer. Characteristic energy loss spectra for this sample were also obtained at primary electron beam energies of 990 and 390 eV. From these measurements the PI, PII, and PIII energy levels for americium-241 are determined. Auger electron energies are compared with theoretically calculated values. Minimum detectability under the present conditions of sample preparation and equipment was estimated at approximately 1.2×10⁻⁸ g/cm² or 3.9×10⁻⁸ Ci/cm². Minimum detectability for plutonium-239 under similar conditions was estimated at about 7.2×10⁻¹⁰ Ci/cm².

  6. Forecasting the global shortage of physicians: an economic- and needs-based approach

    PubMed Central

    Liu, Jenny X; Kinfu, Yohannes; Dal Poz, Mario R

    2008-01-01

    Objective: Global achievements in health may be limited by critical shortages of health-care workers. To help guide workforce policy, we estimate the future demand for, need for and supply of physicians, by WHO region, to determine where likely shortages will occur by 2015, the target date of the Millennium Development Goals. Methods: Using World Bank and WHO data on physicians per capita from 1980 to 2001 for 158 countries, we employ two modelling approaches for estimating the future global requirement for physicians. A needs-based model determines the number of physicians per capita required to achieve 80% coverage of live births by a skilled health-care attendant. In contrast, our economic model identifies the number of physicians per capita that are likely to be demanded, given each country's economic growth. These estimates are compared to the future supply of physicians projected by extrapolating the historical rate of increase in physicians per capita for each country. Findings: By 2015, the global supply of physicians appears to be in balance with projected economic demand. Because our measure of need reflects the minimum level of workforce density required to provide a basic health service that is met in all but the least developed countries, the needs-based estimates predict a global surplus of physicians. However, on a regional basis, both models predict shortages for many countries in the WHO African Region in 2015, with some countries experiencing a needs-based shortage, a demand-based shortage, or both. Conclusion: The type of policy intervention needed to alleviate projected shortages, such as increasing health-care training or adopting measures to discourage migration, depends on the type of shortage projected. PMID:18670663

  7. AOAC SMPR 2015.009: Estimation of total phenolic content using Folin-C Assay

    USDA-ARS?s Scientific Manuscript database

    This AOAC Standard Method Performance Requirements (SMPR) is for estimation of total soluble phenolic content in dietary supplement raw materials and finished products using the Folin-C assay for comparison within same matrices. SMPRs describe the minimum recommended performance characteristics to b...

  8. ELECTROFISHING DISTANCE NEEDED TO ESTIMATE FISH SPECIES RICHNESS IN RAFTABLE WESTERN USA RIVERS

    EPA Science Inventory

    A critical issue in river monitoring is the minimum amount of sampling distance required to adequately represent the fish assemblage of a reach. Determining adequate sampling distance is important because it affects estimates of fish assemblage integrity and diversity at local a...

  9. 32 CFR 218.4 - Dose estimate reporting standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...

  10. 32 CFR 218.4 - Dose estimate reporting standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...

  11. 32 CFR 218.4 - Dose estimate reporting standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...

  12. 32 CFR 218.4 - Dose estimate reporting standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...

  13. 32 CFR 218.4 - Dose estimate reporting standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...

  14. Color Discrimination Is Affected by Modulation of Luminance Noise in Pseudoisochromatic Stimuli

    PubMed Central

    Cormenzana Méndez, Iñaki; Martín, Andrés; Charmichael, Teaire L.; Jacob, Mellina M.; Lacerda, Eliza M. C. B.; Gomes, Bruno D.; Fitzgerald, Malinda E. C.; Ventura, Dora F.; Silveira, Luiz C. L.; O'Donell, Beatriz M.; Souza, Givago S.

    2016-01-01

    Pseudoisochromatic stimuli have been widely used to evaluate color discrimination and to identify color vision deficits. Luminance noise is one of the stimulus parameters used to ensure that the subject's response reflects the ability to discriminate the target stimulus from the background based solely on the hue difference between the colors that compose the stimulus. We studied the influence of contrast modulation of the stimulus luminance noise on color discrimination thresholds and reaction times (RTs). We evaluated color discrimination thresholds using the Cambridge Color Test (CCT) at six different stimulus mean luminances. Each mean luminance condition was tested using two protocols: constant absolute difference between the maximum and minimum luminance of the luminance noise (constant delta protocol, CDP), and constant contrast modulation of the luminance noise (constant contrast protocol, CCP). MacAdam ellipses were fitted to the color discrimination thresholds in the CIE 1976 color space to quantify the color discrimination ellipses at threshold level. The same CDP and CCP protocols were applied in the experiment measuring RTs at three levels of stimulus mean luminance. The color threshold measurements showed that, for the CDP, ellipse areas decreased as a function of mean luminance and were significantly larger at the two lowest mean luminances, 10 cd/m2 and 13 cd/m2, than at the highest, 25 cd/m2. For the CCP, ellipse areas also decreased as a function of mean luminance, but there was no significant difference between ellipse areas estimated at the six stimulus mean luminances. The exponent of the decrease in ellipse area as a function of stimulus mean luminance was steeper in the CDP than in the CCP. Further, reaction time increased linearly with the reciprocal of the length of the chromatic vectors varying along the four chromatic half-axes. It decreased as a function of stimulus mean luminance in the CDP but not in the CCP. The findings indicate that visual performance with pseudoisochromatic stimuli depends on the Weber contrast of the luminance noise. Low Weber contrast in the luminance noise is suggested to have a reduced effect on chromatic information and, hence, to facilitate segregation of the hue-defined target from the background. PMID:27458404

  15. Do minimum wages improve early life health? Evidence from developing countries.

    PubMed

    Majid, Muhammad Farhan; Mendoza Rodríguez, José M; Harper, Sam; Frank, John; Nandi, Arijit

    2016-06-01

    The impact of legislated minimum wages on the early-life health of children living in low- and middle-income countries has not been examined. For our analyses, we used data from the Demographic and Health Surveys (DHS) from 57 countries conducted between 1999 and 2013. Our analyses focus on height-for-age z scores (HAZ) for children under 5 years of age who were surveyed as part of the DHS. To identify the causal effect of minimum wages, we utilized plausibly exogenous variation in the legislated minimum wage during each child's year of birth, the identifying assumption being that mothers do not time their births around changes in the minimum wage. As a sensitivity exercise, we also made within-family comparisons (mother fixed-effect models). Our final analysis of 49 countries reveals that a 1% increase in the minimum wage was associated with a 0.1% (95% CI = -0.2, 0) decrease in HAZ scores. Adverse effects of an increase in the minimum wage were observed among girls and for children of fathers who were less than 35 years old, mothers aged 20-29, parents who were married, parents who were less educated, and parents involved in manual work. We also explored heterogeneity by region and GDP per capita at baseline (1999). Adverse effects were concentrated in lower-income countries and were most pronounced in South Asia. By contrast, increases in the minimum wage improved children's HAZ in Latin America and among children of parents working in a skilled sector. Our findings are inconsistent with the hypothesis that increases in the minimum wage unconditionally improve child health in lower-income countries, and they highlight heterogeneity in the impact of minimum wages around the globe. Future work should involve country- and occupation-specific studies which can explore not only other outcomes, such as infant mortality rates, but also the role of parental investments in shaping these effects.

  16. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    PubMed

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for future Mars pinpoint landing missions. Unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum-variance filter for Mars entry navigation. The filter addresses this problem by estimating the state and the unknown measurement biases simultaneously, in a derivative-free manner, yielding a high-precision algorithm for Mars entry navigation. IMU/radio-beacon integrated navigation is used in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability for high-precision Mars entry navigation.
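
    The paper's derivative-free nonlinear filter is not reproduced here; as a simplified linear illustration of the same idea of simultaneous state and bias estimation, one can augment a standard Kalman filter's state with the unknown measurement bias (entirely our sketch):

      import numpy as np

      def kf_with_bias(F, H, Q, R, zs, x0, P0):
          # Linear Kalman filter with the state augmented by a constant,
          # unknown measurement bias b, so z = H x + b + noise; b is
          # estimated jointly with x.
          n, m = F.shape[0], H.shape[0]
          Fa = np.block([[F, np.zeros((n, m))],
                         [np.zeros((m, n)), np.eye(m)]])  # bias is constant
          Ha = np.hstack([H, np.eye(m)])                   # bias adds to z
          Qa = np.block([[Q, np.zeros((n, m))],
                         [np.zeros((m, n)), 1e-9 * np.eye(m)]])
          x = np.concatenate([x0, np.zeros(m)])
          P = np.block([[P0, np.zeros((n, m))],
                        [np.zeros((m, n)), np.eye(m)]])
          for z in zs:
              x, P = Fa @ x, Fa @ P @ Fa.T + Qa            # predict
              K = P @ Ha.T @ np.linalg.inv(Ha @ P @ Ha.T + R)
              x = x + K @ (z - Ha @ x)                     # update
              P = (np.eye(n + m) - K @ Ha) @ P
          return x[:n], x[n:]  # state estimate, bias estimate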

  17. Estimation of the transmissivity of thin leaky-confined aquifers from single-well pumping tests

    NASA Astrophysics Data System (ADS)

    Worthington, Paul F.

    1981-01-01

    Data from the quasi-equilibrium phases of a step-drawdown test are used to evaluate the coefficient of non-linear head losses subject to the assumption of a constant effective well radius. After applying a well-loss correction to the observed drawdowns of the first step, an approximation method is used to estimate a pseudo-transmissivity of the aquifer from a single value of time-variant drawdown. The pseudo-transmissivities computed for each of a sequence of values of time pass through a minimum when there is least manifestation of casing-storage and leakage effects, phenomena to which pumping-test data of this kind are particularly susceptible. This minimum pseudo-transmissivity, adjusted for partial penetration effects where appropriate, constitutes the best possible estimate of aquifer transmissivity. The ease of application of the overall procedure is illustrated by a practical example.

  18. Virtual K-12 Leadership: A Postmodern Paradigm

    ERIC Educational Resources Information Center

    Tucker, Tommy N.

    2014-01-01

    This phenomenological, mixed-method study compared and contrasted virtual K-12 school leadership with traditional face-to-face leadership. All 106 participants served for a minimum of two years in each setting. The study was conducted in two phases in order to reveal consensus and dissensus points of view. Conceptually, a postmodern framework was…

  19. A Contrastive Study of Determiner Usage in EST Research Articles

    ERIC Educational Resources Information Center

    Master, Peter

    1993-01-01

    The determiners in English include three categories: predeterminers, central determiners, and postdeterminers. The focus of the present study is the central determiners because they comprise the largest group and because a minimum of one central determiner is required in the generation of any noun phrase. Furthermore, the central determiners have…

  20. Age validation of canary rockfish (Sebastes pinniger) using two independent otolith techniques: lead-radium and bomb radiocarbon dating.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, A H; Kerr, L A; Cailliet, G M

    2007-11-04

    Canary rockfish (Sebastes pinniger) have long been an important part of recreational and commercial rockfish fishing from southeast Alaska to southern California, but localized stock abundances have declined considerably. Based on age estimates from otoliths and other structures, lifespan estimates vary from about 20 years to over 80 years. For the purpose of monitoring stocks, age composition is routinely estimated by counting growth zones in otoliths; however, age estimation procedures and lifespan estimates remain largely unvalidated. Typical age validation techniques have limited application for canary rockfish because they are deep dwelling and may be long lived. In this study, the unaged otolith of the pair from fish aged at the Department of Fisheries and Oceans Canada was used in one of two age validation techniques: (1) lead-radium dating and (2) bomb radiocarbon (¹⁴C) dating. Age estimate accuracy and the validity of age estimation procedures were validated based on the results from each technique. Lead-radium dating proved successful in determining that a minimum estimate of lifespan was 53 years and provided support for age estimation procedures up to about 50-60 years. These findings were further supported by Δ¹⁴C data, which indicated a minimum estimate of lifespan of 44 ± 3 years. Both techniques validate, to differing degrees, age estimation procedures and provide support for inferring that canary rockfish can live more than 80 years.

  1. ERTS evaluation for land use inventory

    NASA Technical Reports Server (NTRS)

    Hardy, E. E. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The feasibility of accomplishing a general inventory of any given region based on spectral categories from satellite data has been demonstrated in a pilot study for an area of 6300 square kilometers in central New York State. This was accomplished by developing special processing techniques to improve and balance contrast and density for each spectral band of an image scene to compare with a standard range of density and contrast found to be acceptable for interpretation of the scene. Diazo film transparencies were made from enlarged black and white transparencies of each spectral band. Color composites were constructed from these diazo films in combinations of hue and spectral bands to enhance different spectral features in the scene. Interpretation and data takeoff were accomplished manually by translating interpreted areas onto an overlay to construct a spectral map. The minimum area interpreted was 25 hectares. The minimum area geographically referenced was one square kilometer. The interpretation and referencing of data from ERTS-1 was found to be about 88% accurate for eight primary spectral categories.

  2. UNCOVERING THE INTRINSIC VARIABILITY OF GAMMA-RAY BURSTS

    NASA Astrophysics Data System (ADS)

    Golkhou, V. Zach; Butler, Nathaniel R

    2014-08-01

    We develop a robust technique to determine the minimum variability timescale for gamma-ray burst (GRB) light curves, utilizing Haar wavelets. Our approach averages over the data for a given GRB, providing an aggregate measure of signal variation while also retaining sensitivity to narrow pulses within complicated time series. In contrast to previous studies using wavelets, which simply define the minimum timescale in reference to the measurement noise floor, our approach identifies the signature of temporally smooth features in the wavelet scaleogram and then additionally identifies a break in the scaleogram on longer timescales as a signature of a true, temporally unsmooth light curve feature or features. We apply our technique to the large sample of Swift GRB gamma-ray light curves and for the first time—due to the presence of a large number of GRBs with measured redshift—determine the distribution of minimum variability timescales in the source frame. We find a median minimum timescale for long-duration GRBs in the source frame of Δtmin = 0.5 s, with the shortest timescale found being on the order of 10 ms. This short timescale suggests a compact central engine (3000 km). We discuss further implications for the GRB fireball model and present a tantalizing correlation between the minimum timescale and redshift, which may in part be due to cosmological time dilation.
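
    A minimal sketch of the Haar-wavelet variability measure (our illustration; the paper's method additionally models the measurement noise floor and locates the break in the scaleogram):

      import numpy as np

      def haar_scaleogram(flux, dt):
          # RMS Haar wavelet coefficient of a light curve versus timescale.
          # The minimum variability timescale is read off where this curve
          # departs from its small-scale, noise-dominated behavior.
          scales, sigma = [], []
          s = 1
          while 2 * s <= len(flux):
              n = (len(flux) // (2 * s)) * 2 * s
              x = flux[:n].reshape(-1, 2 * s)
              left = x[:, :s].mean(axis=1)
              right = x[:, s:].mean(axis=1)
              coeff = 0.5 * (right - left)   # Haar difference at scale s
              scales.append(s * dt)
              sigma.append(np.sqrt(np.mean(coeff ** 2)))
              s *= 2
          return np.array(scales), np.array(sigma)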

  3. Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera

    PubMed Central

    Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin

    2016-01-01

    This study aims to demonstrate the feasibility of classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting the minimum diameter of the equatorial section and the volume, and to establish the corresponding estimation models. Results show that length, maximum diameter of the equatorial section and weight are selected to predict the minimum diameter of the equatorial section, with a coefficient of determination of only 0.82 relative to manual measurements. Weight and length are then selected to estimate the volume, which agrees well with the measured volume, with a coefficient of determination of 0.98. Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved using a linear combination of the length/maximum-diameter and projected-area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to meet international standards for grading kiwifruit by shape through the addition of a single camera. PMID:27376292
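
    The volume model described (volume regressed on weight and length) is an ordinary least-squares fit. A minimal sketch with made-up calibration data (the numbers below are illustrative, not the study's):

      import numpy as np

      # Hypothetical calibration fruit: weight (g), length (mm), and
      # water-displacement volume (cm^3) measured by hand.
      weight = np.array([90.0, 102.0, 115.0, 88.0, 130.0, 95.0])
      length = np.array([60.0, 64.0, 68.0, 58.0, 72.0, 62.0])
      volume = np.array([88.0, 100.0, 113.0, 86.0, 128.0, 93.0])

      # Least-squares fit of volume ~ weight + length, mirroring the
      # stepwise-selection result reported in the abstract.
      X = np.column_stack([np.ones_like(weight), weight, length])
      coef, *_ = np.linalg.lstsq(X, volume, rcond=None)

      pred = X @ coef
      r2 = 1 - ((volume - pred) ** 2).sum() / ((volume - volume.mean()) ** 2).sum()
      print(coef, r2)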

  4. Enhancing interferometer phase estimation, sensing sensitivity, and resolution using robust entangled states

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-11-01

    With the goal of designing interferometers and interferometer sensors, e.g., LADARs with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Closed-form expressions are provided for the optimal detection operators; the visibility, a measure of the state's robustness to loss and noise; a resolution measure; and the phase-estimate error. The optimal resolution for maximum visibility and minimum phase error is found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS over PMMSs and N00N states is given. It is found that, for a resolution 12 times the classical result, LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMSs and 29.7 times smaller than that of N00N states.

  5. Minimum average 7-day, 10-year flows in the Hudson River basin, New York, with release-flow data on Rondout and Ashokan reservoirs

    USGS Publications Warehouse

    Archer, Roger J.

    1978-01-01

    Minimum average 7-day, 10-year flows at 67 gaging stations and 173 partial-record stations in the Hudson River basin are given in tabular form. Variation of the 7-day, 10-year low flow from point to point in selected reaches, and the corresponding times of travel, are shown graphically for Wawayanda Creek, Wallkill River, Woodbury-Moodna Creek, and the Fishkill Creek basins. The 7-day, 10-year low flow for the Saw Kill basin, and estimates of the 7-day, 10-year low flow of the Roeliff Jansen Kill at Ancram and of Birch Creek at Pine Hill, are given. Summaries of discharge from Rondout and Ashokan Reservoirs, in Ulster County, are also included. Minimum average 7-day, 10-year flows for gaging stations with 10 years or more of record were determined by log-Pearson Type III computation; those for partial-record stations were developed by correlation of discharge measurements made at the partial-record stations with discharge data from appropriate long-term gaging stations. The variation in low flows from point to point within the selected subbasins was estimated from available data and a regional regression formula. Time of travel at these flows in the four subbasins was estimated from available data and Boning's equations.

  6. Setting population targets for mammals using body mass as a predictor of population persistence.

    PubMed

    Hilbers, Jelle P; Santini, Luca; Visconti, Piero; Schipper, Aafke M; Pinto, Cecilia; Rondinini, Carlo; Huijbregts, Mark A J

    2017-04-01

    Conservation planning and biodiversity assessments need quantitative targets to optimize planning options and assess the adequacy of current species protection. However, targets aiming at persistence require population-specific data, which limit their use in favor of fixed and nonspecific targets, likely leading to unequal distribution of conservation efforts among species. We devised a method to derive equitable population targets; that is, quantitative targets of population size that ensure equal probabilities of persistence across a set of species and that can be easily inferred from species-specific traits. In our method, we used models of population dynamics across a range of life-history traits related to species' body mass to estimate minimum viable population targets. We applied our method to a range of body masses of mammals, from 2 g to 3825 kg. The minimum viable population targets decreased asymptotically with increasing body mass and were on the same order of magnitude as minimum viable population estimates from species- and context-specific studies. Our approach provides a compromise between pragmatic, nonspecific population targets and detailed context-specific estimates of population viability for which only limited data are available. It enables a first estimation of species-specific population targets based on a readily available trait and thus allows setting equitable targets for population persistence in large-scale and multispecies conservation assessments and planning.

  7. Retrieving air humidity, global solar radiation, and reference evapotranspiration from daily temperatures: development and validation of new methods for Mexico. Part III: reference evapotranspiration

    NASA Astrophysics Data System (ADS)

    Lobit, P.; Gómez Tagle, A.; Bautista, F.; Lhomme, J. P.

    2017-07-01

    We evaluated two methods to estimate reference evapotranspiration (ETo) from minimal weather records (daily maximum and minimum temperatures) in Mexico: a modified reduced-set FAO-Penman-Monteith method (Allen et al. 1998, Rome, Italy) and the Hargreaves and Samani (Appl Eng Agric 1(2): 96-99, 1985) method. In the reduced-set method, the FAO-Penman-Monteith equation was applied with vapor pressure and radiation estimated from temperature data using two new models (see the first and second articles in this series), with mean temperature taken as the average of maximum and minimum temperature corrected for a constant bias, and with a constant assumed wind speed. The Hargreaves-Samani method combines two empirical relationships: one between the diurnal temperature range ΔT and shortwave radiation Rs, and another between average temperature and the ratio ETo/Rs; both relationships were evaluated and calibrated for Mexico. After performing a sensitivity analysis to evaluate the impact of different approximations on the estimation of Rs and ETo, several model combinations were tested to predict ETo from daily maximum and minimum temperature alone. The quality of fit of these models was evaluated on 786 weather stations covering most of the territory of Mexico. The best method was found to be a combination of the FAO-Penman-Monteith reduced-set equation with the new radiation estimation and vapor pressure models. As an alternative, a recalibration of the Hargreaves-Samani equation is proposed.
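
    The Hargreaves-Samani relationship mentioned here is compact enough to state in full. A minimal sketch (the standard coefficient 0.0023 is used; the paper proposes a recalibration for Mexico, whose value is not reproduced here):

      import numpy as np

      def eto_hargreaves_samani(tmax, tmin, ra, c=0.0023):
          # ETo (mm/day) = c * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
          # with Ra the extraterrestrial radiation expressed in mm/day of
          # evaporation equivalent and temperatures in degrees C.
          tmean = (tmax + tmin) / 2.0
          return c * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

      # Example: a warm day with Ra of about 15 mm/day equivalent.
      print(eto_hargreaves_samani(tmax=32.0, tmin=18.0, ra=15.0))  # ~5.5 mm/day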

  8. Local scattering property scales flow speed estimation in laser speckle contrast imaging

    NASA Astrophysics Data System (ADS)

    Miao, Peng; Chao, Zhen; Feng, Shihan; Yu, Hang; Ji, Yuanyuan; Li, Nan; Thakor, Nitish V.

    2015-07-01

    Laser speckle contrast imaging (LSCI) has been widely used in in vivo blood flow imaging. However, the effect of the local scattering property (scattering coefficient µs) on blood flow speed estimation has not been well investigated. In this study, this effect was quantified and incorporated into the relation between the speckle autocorrelation time τc and the flow speed v, based on simulated flow experiments. For in vivo blood flow imaging, an improved estimation strategy was developed to eliminate the estimation bias due to the inhomogeneous distribution of the scattering property. Compared to traditional LSCI, the new estimation method significantly suppressed imaging noise and improved the imaging contrast of the vasculature. Furthermore, the new method successfully captured the blood flow changes and vascular constriction patterns in rat cerebral cortex from normothermia to mild and moderate hypothermia.
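
    The spatial speckle contrast map at the heart of LSCI is straightforward to compute. A minimal sketch (the window size and the simple 1/K^2 flow surrogate are conventional choices, not the paper's corrected estimator):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def speckle_contrast(image, win=7):
          # Spatial contrast K = sigma / mean over a sliding window. Lower K
          # corresponds to faster flow (shorter autocorrelation time tau_c).
          img = image.astype(float)
          mean = uniform_filter(img, win)
          var = np.maximum(uniform_filter(img ** 2, win) - mean ** 2, 0.0)
          return np.sqrt(var) / np.maximum(mean, 1e-12)

      raw = np.random.default_rng(3).gamma(4.0, 50.0, size=(256, 256))
      K = speckle_contrast(raw)                      # stand-in speckle frame
      flow_index = 1.0 / np.maximum(K, 1e-12) ** 2   # relative flow map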

  9. Optimal wavefront estimation of incoherent sources

    NASA Astrophysics Data System (ADS)

    Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler

    2014-08-01

    Direct imaging is in general necessary to characterize exoplanets and disks. A coronagraph is an instrument used to create a dim (high-contrast) region in a star's PSF where faint companions can be detected. All coronagraphic high-contrast imaging systems use one or more deformable mirrors (DMs) to correct quasi-static aberrations and recover contrast in the focal plane. Simulations show that existing wavefront control algorithms can correct for diffracted starlight in just a few iterations, but in practice tens or hundreds of control iterations are needed to achieve high contrast. The discrepancy largely arises from the fact that simulations have perfect knowledge of the wavefront and DM actuation. Thus, wavefront correction algorithms are currently limited by the quality and speed of wavefront estimates. Exposures in space will take orders of magnitude more time than any calculations, so a nonlinear estimation method that needs fewer images but more computational time would be advantageous. In addition, current wavefront correction routines seek only to reduce diffracted starlight. Here we present nonlinear estimation algorithms that include optimal estimation of sources incoherent with a star such as exoplanets and debris disks.

  10. Minimizing instrumentation requirement for estimating crop water stress index and transpiration of maize

    USDA-ARS?s Scientific Manuscript database

    Research was conducted in northern Colorado in 2011 to estimate the Crop Water Stress Index (CWSI) and actual water transpiration (Ta) of maize under a range of irrigation regimes. The main goal was to obtain these parameters with minimum instrumentation and measurements. The results confirmed that ...

  11. State Estimation for Linear Systems Driven Simultaneously by Wiener and Poisson Processes.

    DTIC Science & Technology

    1978-12-01

    The state estimation problem of linear stochastic systems driven simultaneously by Wiener and Poisson processes is considered, especially the case...where the incident intensities of the Poisson processes are low and the system is observed in an additive white Gaussian noise. The minimum mean squared

  12. Applications of harvesting system simulation to timber management and utilization analyses

    Treesearch

    John E. Baumgras; Chris B. LeDoux

    1990-01-01

    Applications of timber harvesting system simulation to the economic analysis of forest management and wood utilization practices are presented. These applications include estimating thinning revenue by stand age, estimating impacts of minimum merchantable tree diameter on harvesting revenue, and evaluating wood utilization alternatives relative to pulpwood quotas and...

  13. Sampling effort needed to estimate condition and species richness in the Ohio River, USA

    EPA Science Inventory

    The level of sampling effort required to characterize fish assemblage condition in a river for the purposes of bioassessment may be estimated via different approaches. However, the goal with any approach is to determine the minimum level of effort necessary to reach some specific...

  14. Optical parameters of TN display with dichroic dye

    NASA Astrophysics Data System (ADS)

    Olifierczuk, Marek; Zielinski, Jerzy; Perkowski, Pawel

    2000-05-01

    The present work contains studies of the optical parameters (contrast ratio, viewing angle, birefringence and brightness) of a twisted nematic display with a black dichroic dye, designed for application in large-area information and advertising systems. A numerical optimization of the display with the dye has been performed. The absorption characteristic of the dye has been obtained. The birefringence Δn of the doped mixtures has been measured. The contrast ratio of the doped mixtures has been measured over a wide temperature range, from -25 °C to +70 °C. The angular characteristics of the contrast ratio at +20 °C have been obtained. Detailed results describing the effect of the dye on the temperature dependence of birefringence and contrast ratio, as well as its effect on the viewing angle for the first and second transmission minimum, are presented. Additionally, the dielectric characteristics of different mixtures are shown.

  15. Treating Vomiting

    MedlinePlus

    ... those described below. [Table: Estimated Oral Fluid and Electrolyte Requirements by Body Weight; columns: Body Weight (in pounds), Minimum Daily Fluid Requirements (in ounces)*, Electrolyte Solution Requirements for Mild Diarrhea (...)]

  16. Estimation of Nasal Tip Support Using Computer-Aided Design and 3-Dimensional Printed Models

    PubMed Central

    Gray, Eric; Maducdoc, Marlon; Manuel, Cyrus; Wong, Brian J. F.

    2016-01-01

    IMPORTANCE Palpation of the nasal tip is an essential component of the preoperative rhinoplasty examination. Measuring tip support is challenging, and the forces that correspond to ideal tip support are unknown. OBJECTIVE To identify the integrated reaction force and the minimum and ideal mechanical properties associated with nasal tip support. DESIGN, SETTING, AND PARTICIPANTS Three-dimensional (3-D) printed anatomic silicone nasal models were created using a computed tomographic scan and computer-aided design software. From this model, 3-D printing and casting methods were used to create 5 anatomically correct nasal models of varying constitutive Young moduli (0.042, 0.086, 0.098, 0.252, and 0.302 MPa) from silicone. Thirty rhinoplasty surgeons who attended a regional rhinoplasty course evaluated the reaction force (nasal tip recoil) of each model by palpation and selected the model that satisfied their requirements for minimum and ideal tip support. Data were collected from May 3 to 4, 2014. RESULTS Of the 30 respondents, 4 surgeons had been in practice for 1 to 5 years; 9 surgeons, 6 to 15 years; 7 surgeons, 16 to 25 years; and 10 surgeons, 26 or more years. Seventeen surgeons considered themselves in the advanced to expert skill competency levels. Logistic regression estimated the minimum threshold for the Young moduli for adequate and ideal tip support to be 0.096 and 0.154 MPa, respectively. Logistic regression estimated the thresholds for the reaction force associated with the absolute minimum and ideal requirements for good tip recoil to be 0.26 to 4.74 N and 0.37 to 7.19 N during 1- to 8-mm displacement, respectively. CONCLUSIONS AND RELEVANCE This study presents a method to estimate clinically relevant nasal tip reaction forces, which serve as a proxy for nasal tip support. This information will become increasingly important in computational modeling of nasal tip mechanics and ultimately will enhance surgical planning for rhinoplasty. LEVEL OF EVIDENCE NA. PMID:27124818
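
    A sketch of how such palpation thresholds are obtained: fit a logistic curve of acceptance probability against Young's modulus and read off the 50% point (the data below are fabricated for illustration; the study's estimates are quoted in the abstract):

      import numpy as np
      import statsmodels.api as sm

      # Hypothetical responses: 30 surgeons accept (1) or reject (0) each of
      # the five models; moduli in MPa match the models described above.
      moduli = np.repeat([0.042, 0.086, 0.098, 0.252, 0.302], 30)
      rng = np.random.default_rng(4)
      accepted = ((moduli + rng.normal(0, 0.08, moduli.size)) > 0.15).astype(int)

      fit = sm.Logit(accepted, sm.add_constant(moduli)).fit(disp=0)
      b0, b1 = fit.params
      print(-b0 / b1)  # modulus at 50% acceptance: the estimated threshold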

  17. Empirical and Bayesian approaches to fossil-only divergence times: A study across three reptile clades.

    PubMed

    Turner, Alan H; Pritchard, Adam C; Matzke, Nicholas J

    2017-01-01

    Estimating divergence times on phylogenies is critical in paleontological and neontological studies. Chronostratigraphically-constrained fossils are the only direct evidence of absolute timing of species divergence. Strict temporal calibration of fossil-only phylogenies provides minimum divergence estimates, and various methods have been proposed to estimate divergences beyond these minimum values. We explore the utility of simultaneous estimation of tree topology and divergence times using BEAST tip-dating on datasets consisting only of fossils by using relaxed morphological clocks and birth-death tree priors that include serial sampling (BDSS) at a constant rate through time. We compare BEAST results to those from the traditional maximum parsimony (MP) and undated Bayesian inference (BI) methods. Three overlapping datasets were used that span 250 million years of archosauromorph evolution leading to crocodylians. The first dataset focuses on early Sauria (31 taxa, 240 chars.), the second on early Archosauria (76 taxa, 400 chars.) and the third on Crocodyliformes (101 taxa, 340 chars.). For each dataset three time-calibrated trees (timetrees) were calculated: a minimum-age timetree with node ages based on earliest occurrences in the fossil record; a 'smoothed' timetree using a range of time added to the root that is then averaged over zero-length internodes; and a tip-dated timetree. Comparisons within datasets show that the smoothed and tip-dated timetrees provide similar estimates. Only near the root node do BEAST estimates fall outside the smoothed timetree range. The BEAST model is not able to overcome limited sampling to correctly estimate divergences considerably older than sampled fossil occurrence dates. Conversely, the smoothed timetrees consistently provide node-ages far older than the strict dates or BEAST estimates for morphologically conservative sister-taxa when they sit on long ghost lineages. In this latter case, the relaxed-clock model appears to be correctly moderating the node-age estimate based on the limited morphological divergence. Topologies are generally similar across analyses, but BEAST trees for crocodyliforms differ when clades are deeply nested but contain very old taxa. It appears that the constant-rate sampling assumption of the BDSS tree prior influences topology inference by disfavoring long, unsampled branches.
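
    The first of the three timetrees has a simple recursive definition: each internal node must be at least as old as the oldest first appearance among its descendant fossils. A minimal sketch on a hypothetical three-taxon tree (taxon names and occurrence dates are invented):

```python
# A minimal sketch of the "minimum-age timetree" step described above:
# each internal node is assigned the oldest first-appearance date (Ma)
# found among its descendant fossil tips. Tree and dates are hypothetical.

tree = {"root": ["A", "clade1"], "clade1": ["B", "C"]}   # child lists
first_occurrence = {"A": 250.0, "B": 240.0, "C": 228.0}  # tip ages in Ma

def node_age(node):
    """Oldest descendant first-occurrence = minimum age of the node."""
    if node in first_occurrence:              # tip
        return first_occurrence[node]
    return max(node_age(child) for child in tree[node])

for node in tree:
    print(node, node_age(node))  # root -> 250.0, clade1 -> 240.0
```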

  18. Empirical and Bayesian approaches to fossil-only divergence times: A study across three reptile clades

    PubMed Central

    Turner, Alan H.; Pritchard, Adam C.; Matzke, Nicholas J.

    2017-01-01

    Estimating divergence times on phylogenies is critical in paleontological and neontological studies. Chronostratigraphically-constrained fossils are the only direct evidence of absolute timing of species divergence. Strict temporal calibration of fossil-only phylogenies provides minimum divergence estimates, and various methods have been proposed to estimate divergences beyond these minimum values. We explore the utility of simultaneous estimation of tree topology and divergence times using BEAST tip-dating on datasets consisting only of fossils by using relaxed morphological clocks and birth-death tree priors that include serial sampling (BDSS) at a constant rate through time. We compare BEAST results to those from the traditional maximum parsimony (MP) and undated Bayesian inference (BI) methods. Three overlapping datasets were used that span 250 million years of archosauromorph evolution leading to crocodylians. The first dataset focuses on early Sauria (31 taxa, 240 chars.), the second on early Archosauria (76 taxa, 400 chars.) and the third on Crocodyliformes (101 taxa, 340 chars.). For each dataset three time-calibrated trees (timetrees) were calculated: a minimum-age timetree with node ages based on earliest occurrences in the fossil record; a ‘smoothed’ timetree using a range of time added to the root that is then averaged over zero-length internodes; and a tip-dated timetree. Comparisons within datasets show that the smoothed and tip-dated timetrees provide similar estimates. Only near the root node do BEAST estimates fall outside the smoothed timetree range. The BEAST model is not able to overcome limited sampling to correctly estimate divergences considerably older than sampled fossil occurrence dates. Conversely, the smoothed timetrees consistently provide node-ages far older than the strict dates or BEAST estimates for morphologically conservative sister-taxa when they sit on long ghost lineages. In this latter case, the relaxed-clock model appears to be correctly moderating the node-age estimate based on the limited morphological divergence. Topologies are generally similar across analyses, but BEAST trees for crocodyliforms differ when clades are deeply nested but contain very old taxa. It appears that the constant-rate sampling assumption of the BDSS tree prior influences topology inference by disfavoring long, unsampled branches. PMID:28187191

  19. Estimated Effects of Different Alcohol Taxation and Price Policies on Health Inequalities: A Mathematical Modelling Study.

    PubMed

    Meier, Petra S; Holmes, John; Angus, Colin; Ally, Abdallah K; Meng, Yang; Brennan, Alan

    2016-02-01

    While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO "best buy" intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax increase, -3.2%; value-based tax, -2.9%; strength-based tax, -6.1%; minimum unit pricing, -7.8%) and lesser impacts among drinkers in professional/managerial occupations (for heavy drinkers: current tax increase, -1.3%; value-based tax, -1.4%; strength-based tax, +0.2%; minimum unit pricing, +0.8%). Results from the PSA give slightly greater mean effects for both the routine/manual (current tax increase, -3.6% [95% uncertainty interval (UI) -6.1%, -0.6%]; value-based tax, -3.3% [UI -5.1%, -1.7%]; strength-based tax, -7.5% [UI -13.7%, -3.9%]; minimum unit pricing, -10.3% [UI -10.3%, -7.0%]) and professional/managerial occupation groups (current tax increase, -1.8% [UI -4.7%, +1.6%]; value-based tax, -1.9% [UI -3.6%, +0.4%]; strength-based tax, -0.8% [UI -6.9%, +4.0%]; minimum unit pricing, -0.7% [UI -5.6%, +3.6%]). Impacts of price changes on moderate drinkers were small regardless of income or socioeconomic group. Analysis of uncertainty shows that the relative effectiveness of the four policies is fairly stable, although uncertainty in the absolute scale of effects exists. 
Volumetric taxation and minimum unit pricing consistently outperform increasing the current tax or adding an ad valorem tax in terms of reducing mortality among the heaviest drinkers and reducing alcohol-related health inequalities (e.g., in the routine/manual occupation group, volumetric taxation reduces deaths more than increasing the current tax in 26 out of 30 probabilistic runs, minimum unit pricing reduces deaths more than volumetric tax in 21 out of 30 runs, and minimum unit pricing reduces deaths more than increasing the current tax in 30 out of 30 runs). Study limitations include reducing model complexity by not considering a largely ineffective ban on below-tax alcohol sales, special duty rates covering only small shares of the market, and the impact of tax fraud or retailer non-compliance with minimum unit prices. Our model estimates that, compared to tax increases under the current system or introducing taxation based on product value, alcohol-content-based taxation or minimum unit pricing would lead to larger reductions in health inequalities across income groups. We also estimate that alcohol-content-based taxation and minimum unit pricing would have the largest impact on harmful drinking, with minimal effects on those drinking in moderation.
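
    The engine of such models is the mapping from a price change to a consumption change through own-price elasticities. A minimal sketch of that step with invented numbers; the elasticity, baseline consumption, and pass-through below are placeholders, not the study's inputs.

```python
# A minimal sketch of the elasticity logic underlying such models:
# a price rise maps to a consumption change via an own-price elasticity.
# All values below are invented for illustration, not the study's inputs.

baseline_units_per_week = 35.0   # a heavy drinker (hypothetical)
own_price_elasticity = -0.5      # % consumption change per % price change
price_change_pct = 10.0          # e.g. a tax rise fully passed to price

consumption_change_pct = own_price_elasticity * price_change_pct
new_units = baseline_units_per_week * (1 + consumption_change_pct / 100)
print(f"consumption: {baseline_units_per_week} -> {new_units:.1f} units/week")
```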

  20. Contrast media controversies in 2015: imaging patients with renal impairment or risk of contrast reaction.

    PubMed

    Davenport, Matthew S; Cohan, Richard H; Ellis, James H

    2015-06-01

    The incidence and significance of complications related to intravascular contrast material administration have become increasingly controversial. This review will highlight current thinking regarding the imaging of patients with renal impairment and those at risk for an allergic-like contrast reaction. The risk of contrast-induced acute kidney injury remains uncertain for patients with an estimated glomerular filtration rate (GFR) less than 45 mL/min/1.73 m², but if there is a risk, it is greatest in those with estimated GFR less than 30 mL/min/1.73 m². In this population, low-risk gadolinium-based contrast agents appear to have a large safety margin. Corticosteroid prophylaxis remains the standard of care in the United States for patients identified to be at high risk of a contrast reaction, but it has an incomplete mitigating effect on contrast reaction rates and the number needed to treat is large.

  1. Probability of reduced renal function after contrast-enhanced CT: a model based on serum creatinine level, patient age, and estimated glomerular filtration rate.

    PubMed

    Herts, Brian R; Schneider, Erika; Obuchowski, Nancy; Poggio, Emilio; Jain, Anil; Baker, Mark E

    2009-08-01

    The objectives of our study were to develop a model to predict the probability of reduced renal function after outpatient contrast-enhanced CT (CECT), based on patient age, sex, and race and on serum creatinine level before CT, or directly on estimated glomerular filtration rate (GFR) before CT, and to determine the relationship between patients with changes in creatinine level that characterize contrast-induced nephropathy and patients with reduced GFR after CECT. Of 5,187 outpatients who underwent CECT, 963 (18.6%) had serum creatinine levels obtained within 6 months before and 4 days after CECT. The estimated GFR was calculated before and after CT using the four-variable Modification of Diet in Renal Disease (MDRD) Study equation. Pre-CT serum creatinine level, age, race, sex, and pre-CT estimated GFR were tested using multiple-variable logistic regression models to determine the probability of having an estimated GFR of < 60 and < 45 mL/min/1.73 m² after CECT. Two thirds of the patients were used to create and one third to test the models. We also determined discordance between patients who met standard definitions of contrast-induced nephropathy and those with a reduced estimated GFR after CECT. Significant (p < 0.002) predictors for a post-CT estimated GFR of < 60 mL/min/1.73 m² were age, race, sex, pre-CT serum creatinine level, and pre-CT estimated GFR. Sex, serum creatinine level, and pre-CT estimated GFR were significant factors (p < 0.001) for predicting a post-CT estimated GFR of < 45 mL/min/1.73 m². The probability is exp(y) / (1 + exp(y)), where y = 6.21 - (0.10 × pre-CT estimated GFR) for an estimated GFR of < 60 mL/min/1.73 m², and y = 3.66 - (0.087 × pre-CT estimated GFR) for an estimated GFR of < 45 mL/min/1.73 m². A discrepancy between those who met contrast-induced nephropathy criteria by creatinine changes and those with a post-CT estimated GFR of < 60 mL/min/1.73 m² was detected in 208 of the 963 patients (21.6%). The probability of a reduced estimated GFR after CECT can be predicted from the pre-CT estimated GFR using the four-variable MDRD equation. Furthermore, standard criteria for contrast-induced nephropathy are poor predictors of poor renal function after CECT. Criteria need to be established for what is an acceptable risk to manage patients undergoing CECT.
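
    The two logistic models quoted above translate directly into code; a small sketch follows (the coefficients are the paper's, while the function name and example inputs are ours):

```python
import math

def prob_reduced_gfr(pre_ct_egfr, threshold=60):
    """Probability of a post-CECT estimated GFR below `threshold`
    (mL/min/1.73 m^2), using the logistic models reported above."""
    if threshold == 60:
        y = 6.21 - 0.10 * pre_ct_egfr
    elif threshold == 45:
        y = 3.66 - 0.087 * pre_ct_egfr
    else:
        raise ValueError("models reported only for thresholds 60 and 45")
    return math.exp(y) / (1 + math.exp(y))

print(prob_reduced_gfr(70))        # chance of post-CT eGFR < 60
print(prob_reduced_gfr(50, 45))    # chance of post-CT eGFR < 45
```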

  2. Are There Long-Run Effects of the Minimum Wage?

    PubMed Central

    Sorkin, Isaac

    2014-01-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices. PMID:25937790

  3. Are There Long-Run Effects of the Minimum Wage?

    PubMed

    Sorkin, Isaac

    2015-04-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.

  4. Remote sensing-based estimation of annual soil respiration at two contrasting forest sites

    DOE PAGES

    Gu, Lianhong; Huang, Ni; Black, T. Andrew; ...

    2015-11-23

    Soil respiration (Rs), an important component of the global carbon cycle, can be estimated using remotely sensed data, but the accuracy of this technique has not been thoroughly investigated. In this article, we propose a methodology for the remote estimation of annual Rs at two contrasting FLUXNET forest sites (a deciduous broadleaf forest and an evergreen needleleaf forest).

  5. Linking climate change projections for an Alaskan watershed to future coho salmon production.

    PubMed

    Leppi, Jason C; Rinella, Daniel J; Wilson, Ryan R; Loya, Wendy M

    2014-06-01

    Climate change is predicted to dramatically change hydrologic processes across Alaska, but estimates of how these impacts will influence specific watersheds and aquatic species are lacking. Here, we linked climate, hydrology, and habitat models within a coho salmon (Oncorhynchus kisutch) population model to assess how projected climate change could affect survival at each freshwater life stage and, in turn, production of coho salmon smolts in three subwatersheds of the Chuitna (Chuit) River watershed, Alaska. Based on future climate scenarios and projections from a three-dimensional hydrology model, we simulated coho smolt production over a 20-year span at the end of the century (2080-2100). The direction (i.e., positive vs. negative) and magnitude of changes in smolt production varied substantially by climate scenario and subwatershed. Projected smolt production decreased in all three subwatersheds under the minimum air temperature and maximum precipitation scenario due to elevated peak flows and a resulting 98% reduction in egg-to-fry survival. In contrast, the maximum air temperature and minimum precipitation scenario led to an increase in smolt production in all three subwatersheds through an increase in fry survival. Other climate change scenarios led to mixed responses, with projected smolt production increasing and decreasing in different subwatersheds. Our analysis highlights the complexity inherent in predicting climate-change-related impacts to salmon populations and demonstrates that population effects may depend on interactions between the relative magnitude of hydrologic and thermal changes and their interactions with features of the local habitat. © 2013 The Authors. Global Change Biology published by John Wiley & Sons Ltd.

  6. Simulations of Fractal Star Cluster Formation. I. New Insights for Measuring Mass Segregation of Star Clusters with Substructure

    NASA Astrophysics Data System (ADS)

    Yu, Jincheng; Puzia, Thomas H.; Lin, Congping; Zhang, Yiwei

    2017-05-01

    We compare existing methods, including the minimum spanning tree based method and the local stellar density based method, for measuring mass segregation of star clusters. We find that the minimum spanning tree method reflects more the compactness, which represents the global spatial distribution of massive stars, while the local stellar density method reflects more the crowdedness, which provides the local gravitational potential information. We suggest measuring the local and the global mass segregation simultaneously. We also develop a hybrid method that takes both aspects into account. This hybrid method balances the local and the global mass segregation in the sense that the predominant one is either caused by dynamical evolution or purely accidental, especially when such information is unknown a priori. In addition, we test our prescriptions with numerical models and show the impact of binaries in estimating the mass segregation value. As an application, we use these methods on the Orion Nebula Cluster (ONC) observations and the Taurus cluster. We find that the ONC is significantly mass segregated down to the 20th most massive stars. In contrast, the massive stars of the Taurus cluster are sparsely distributed in many different subclusters, showing a low degree of compactness. The massive stars of Taurus are also found to be distributed in the high-density region of the subclusters, showing significant mass segregation at subcluster scales. Meanwhile, we also apply these methods to discuss the possible mechanisms of the dynamical evolution of the simulated substructured star clusters.
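
    One widely used form of the minimum-spanning-tree diagnostic compares the MST length of the N most massive stars with that of random N-star subsets; a ratio above 1 indicates that the massive stars are more concentrated than average. A minimal sketch with synthetic positions and masses (the cluster data and the choice N = 20 here are illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, size=(200, 2))     # synthetic star positions (pc)
mass = rng.pareto(2.35, size=200) + 0.1     # synthetic masses (Msun)

def mst_length(points):
    """Total edge length of the Euclidean minimum spanning tree."""
    dist = squareform(pdist(points))
    return minimum_spanning_tree(dist).sum()

n = 20
massive = pos[np.argsort(mass)[-n:]]                 # N most massive stars
random_lengths = [mst_length(pos[rng.choice(len(pos), n, replace=False)])
                  for _ in range(100)]

# Ratio > 1: massive stars more spatially concentrated than random sets.
lambda_msr = np.mean(random_lengths) / mst_length(massive)
print(f"mass-segregation ratio ~ {lambda_msr:.2f}")
```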

  7. Simulations of Fractal Star Cluster Formation. I. New Insights for Measuring Mass Segregation of Star Clusters with Substructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jincheng; Puzia, Thomas H.; Lin, Congping

    2017-05-10

    We compare existing methods, including the minimum spanning tree based method and the local stellar density based method, for measuring mass segregation of star clusters. We find that the minimum spanning tree method reflects more the compactness, which represents the global spatial distribution of massive stars, while the local stellar density method reflects more the crowdedness, which provides the local gravitational potential information. We suggest measuring the local and the global mass segregation simultaneously. We also develop a hybrid method that takes both aspects into account. This hybrid method balances the local and the global mass segregation in the sense that the predominant one is either caused by dynamical evolution or purely accidental, especially when such information is unknown a priori. In addition, we test our prescriptions with numerical models and show the impact of binaries in estimating the mass segregation value. As an application, we use these methods on the Orion Nebula Cluster (ONC) observations and the Taurus cluster. We find that the ONC is significantly mass segregated down to the 20th most massive stars. In contrast, the massive stars of the Taurus cluster are sparsely distributed in many different subclusters, showing a low degree of compactness. The massive stars of Taurus are also found to be distributed in the high-density region of the subclusters, showing significant mass segregation at subcluster scales. Meanwhile, we also apply these methods to discuss the possible mechanisms of the dynamical evolution of the simulated substructured star clusters.

  8. Spatially valid data of atmospheric deposition of heavy metals and nitrogen derived by moss surveys for pollution risk assessments of ecosystems.

    PubMed

    Schröder, Winfried; Nickel, Stefan; Schönrock, Simon; Meyer, Michaela; Wosniok, Werner; Harmens, Harry; Frontasyeva, Marina V; Alber, Renate; Aleksiayenak, Julia; Barandovski, Lambe; Carballeira, Alejo; Danielsson, Helena; de Temmermann, Ludwig; Godzik, Barbara; Jeran, Zvonka; Karlsson, Gunilla Pihl; Lazo, Pranvera; Leblond, Sebastien; Lindroos, Antti-Jussi; Liiv, Siiri; Magnússon, Sigurður H; Mankovska, Blanka; Martínez-Abaigar, Javier; Piispanen, Juha; Poikolainen, Jarmo; Popescu, Ion V; Qarri, Flora; Santamaria, Jesus Miguel; Skudnik, Mitja; Špirić, Zdravko; Stafilov, Trajce; Steinnes, Eiliv; Stihi, Claudia; Thöni, Lotti; Uggerud, Hilde Thelle; Zechmeister, Harald G

    2016-06-01

    For analysing element input into ecosystems and associated risks due to atmospheric deposition, element concentrations in moss provide complementary and time-integrated data at high spatial resolution every 5 years since 1990. The paper reviews (1) minimum sample sizes needed for reliable, statistical estimation of mean values at four different spatial scales (European and national level as well as landscape-specific level covering Europe and single countries); (2) trends of heavy metal (HM) and nitrogen (N) concentrations in moss in Europe (1990-2010); (3) correlations between concentrations of HM in moss and soil specimens collected across Norway (1990-2010); and (4) canopy drip-induced site-specific variation of N concentration in moss sampled in seven European countries (1990-2013). While the minimum sample sizes on the European and national level were achieved without exception, for some ecological land classes and elements, the coverage with sampling sites should be improved. The decline in emission and subsequent atmospheric deposition of HM across Europe has resulted in decreasing HM concentrations in moss between 1990 and 2010. In contrast, hardly any changes were observed for N in moss between 2005, when N was included in the survey for the first time, and 2010. In Norway, both the moss and the soil survey data sets were correlated, indicating a decrease of HM concentrations in moss and soil. At the site level, the average N deposition inside of forests was almost three times higher than the average N deposition outside of forests.
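
    The minimum-sample-size logic can be sketched with the textbook normal-approximation formula for estimating a mean to within a margin E at a given confidence; this is a generic illustration, not necessarily the exact procedure used in the survey, and the numbers are invented:

```python
# Generic sketch: number of moss sites needed to estimate a mean
# concentration to within +/- margin at 95% confidence, assuming
# approximate normality (textbook formula, illustrative numbers).
from math import ceil
from scipy.stats import norm

def min_sample_size(sigma, margin, confidence=0.95):
    z = norm.ppf(0.5 + confidence / 2)
    return ceil((z * sigma / margin) ** 2)

# e.g. sd = 0.8 mg/kg, desired margin = +/-0.2 mg/kg (invented values)
print(min_sample_size(sigma=0.8, margin=0.2))  # -> 62 sites
```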

  9. Estimating health state utility values for comorbid health conditions using SF-6D data.

    PubMed

    Ara, Roberta; Brazier, John

    2011-01-01

    When health state utility values for comorbid health conditions are not available, data from cohorts with single conditions are used to estimate scores. The methods used can produce very different results and there is currently no consensus on which is the most appropriate approach. The objective of the current study was to compare the accuracy of five different methods within the same dataset. Data collected during five Welsh Health Surveys were subgrouped by health status. Mean short-form 6 dimension (SF-6D) scores for cohorts with a specific health condition were used to estimate mean SF-6D scores for cohorts with comorbid conditions using the additive, multiplicative, and minimum methods, the adjusted decrement estimator (ADE), and a linear regression model. The mean SF-6D for subgroups with comorbid health conditions ranged from 0.4648 to 0.6068. The linear model produced the most accurate scores for the comorbid health conditions with 88% of values accurate to within the minimum important difference for the SF-6D. The additive and minimum methods underestimated or overestimated the actual SF-6D scores respectively. The multiplicative and ADE methods both underestimated the majority of scores. However, both methods performed better when estimating scores smaller than 0.50. Although the range in actual health state utility values (HSUVs) was relatively small, our data covered the lower end of the index and the majority of previous research has involved actual HSUVs at the upper end of possible ranges. Although the linear model gave the most accurate results in our data, additional research is required to validate our findings. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
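
    The additive, multiplicative, and minimum methods named above have simple closed forms, shown here as they are usually defined for utilities on a 0-1 scale (the ADE and the regression model require fitted parameters and are omitted); the example utilities are illustrative:

```python
# The three simplest estimators named above, as usually defined
# (utilities on a 0-1 scale).
def additive(u1, u2):
    """Sum the utility decrements from perfect health (1.0)."""
    return 1.0 - ((1.0 - u1) + (1.0 - u2))

def multiplicative(u1, u2):
    return u1 * u2

def minimum(u1, u2):
    """Assume the worse condition alone determines the score."""
    return min(u1, u2)

u_cond_a, u_cond_b = 0.62, 0.71   # illustrative single-condition means
for f in (additive, multiplicative, minimum):
    print(f.__name__, round(f(u_cond_a, u_cond_b), 3))
```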

  10. Operative needs in HIV+ populations: An estimation for sub-Saharan Africa.

    PubMed

    Cherewick, Megan L; Cherewick, Steven D; Kushner, Adam L

    2017-05-01

    In 2015, it was estimated that approximately 36.7 million people were living with HIV globally and approximately 25.5 million of those people were living in sub-Saharan Africa. Limitations in the availability and access to adequate operative care require policy and planning to enhance operative capacity. Data estimating the total number of persons living with HIV by country, sex, and age group were obtained from the Joint United Nations Programme on HIV/AIDS (UNAIDS) in 2015. Using minimum proposed surgical rates per 100,000 for 4, defined, sub-Saharan regions of Africa, country-specific and regional estimates were calculated. The total need and unmet need for operative procedures were estimated. A minimum of 1,539,138 operative procedures were needed in 2015 for the 25.5 million persons living with HIV in sub-Saharan Africa. In 2015, there was an unmet need of 908,513 operative cases in sub-Saharan Africa with the greatest unmet need in eastern sub-Saharan Africa (427,820) and western sub-Saharan Africa (325,026). Approximately 55.6% of the total need for operative cases is adult women, 38.4% are adult men, and 6.0% are among children under the age of 15. A minimum of 1.5 million operative procedures annually are required to meet the needs of persons living with HIV in sub-Saharan Africa. The unmet need for operative care is greatest in eastern and western sub-Saharan Africa and will require investments in personnel, infrastructure, facilities, supplies, and equipment. We highlight the need for global planning and investment in resources to meet targets of operative capacity. Copyright © 2016 Elsevier Inc. All rights reserved.
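
    The underlying arithmetic is a rate-based projection. A minimal sketch, where the operative rate and delivered-capacity figures are placeholders for illustration, not the study's region-specific inputs:

```python
# Rate-based projection: need = rate per 100,000 x persons living with
# HIV, minus delivered capacity. Rate and capacity are placeholders.
plhiv = 25_500_000                    # persons living with HIV (2015)
min_rate_per_100k = 6_000             # hypothetical minimum operative rate
delivered_per_year = 630_000          # hypothetical cases actually done

total_need = plhiv * min_rate_per_100k // 100_000
unmet_need = total_need - delivered_per_year
print(f"total need: {total_need:,}; unmet need: {unmet_need:,}")
```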

  11. Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.

    PubMed

    Bouhrara, Mustapha; Spencer, Richard G

    2018-06-01

    The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
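
    For the familiar Gaussian baseline the paper compares against, the Fisher matrix for a deterministic signal model reduces to F = JᵀJ/σ² with J the Jacobian of the model, and the CRLB is the diagonal of F⁻¹. A sketch of that baseline computation on a monoexponential stand-in model (the model and numbers are illustrative; the noncentral χ case requires the paper's expressions):

```python
import numpy as np

# CRLB sketch for the Gaussian-noise baseline: F = J^T J / sigma^2,
# CRLB = diag(F^{-1}). Monoexponential decay as a stand-in model.
def model(params, b):
    s0, d = params
    return s0 * np.exp(-b * d)

def crlb_gaussian(params, b, sigma, eps=1e-6):
    p = np.asarray(params, dtype=float)
    jac = np.empty((len(b), len(p)))
    for j in range(len(p)):          # numerical Jacobian, column by column
        dp = np.zeros_like(p); dp[j] = eps
        jac[:, j] = (model(p + dp, b) - model(p - dp, b)) / (2 * eps)
    fisher = jac.T @ jac / sigma**2
    return np.sqrt(np.diag(np.linalg.inv(fisher)))  # minimum SDs

b_values = np.linspace(0, 2.0, 8)                   # illustrative b-values
print(crlb_gaussian([1.0, 1.0], b_values, sigma=0.02))
```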

  12. The incorporation of fault zone head wave and direct wave secondary arrival times and arrival polarizations into seismic tomography: Application to the Parkfield, California area

    NASA Astrophysics Data System (ADS)

    Bennington, N. L.; Thurber, C. H.; Peng, Z.; Zhao, P.

    2012-12-01

    We present a 3D P-wave velocity (Vp) model of the Parkfield region that utilizes existing P-wave arrival time data, including fault zone head waves (FZHW), plus new data from direct wave secondary arrivals (DWSA). The first-arrival and DWSA travel times are obtained as the global and local minimum travel time paths, respectively. The inclusion of DWSA results in as much as a 10% increase in the across-fault velocity contrast for the Vp model at Parkfield relative to Thurber et al. (2006). Viewed along strike, three pronounced velocity contrast regions are observed: a pair of strong positive velocity contrasts (SW fast), one NW of the 1966 Parkfield hypocenter and the other SE of the 2004 Parkfield hypocenter, and a strong negative velocity contrast (NE fast) between the two hypocenters. The negative velocity contrast partially to entirely encompasses peak coseismic slip estimated in several slip models for the 2004 earthquake, suggesting that the negative velocity contrast played a part in defining the rupture patch of the 2004 Parkfield earthquake. We expand on this work by modifying our seismic tomography algorithm to incorporate arrival polarizations (azimuths). Synthetic tests will be presented to demonstrate the improvements in velocity structure when arrival polarizations are incorporated. These tests will compare the synthetic model recovered when FZHW/DWSA arrivals as well as existing P-wave arrival time data are inverted to that recovered with the same dataset with the inclusion of arrival polarizations. We plan to extend this work to carry out a full-scale seismic tomography/relocation inversion at Parkfield, CA, utilizing arrival polarizations from all first-P arrivals, and FZHW/DWSA arrivals as well as existing P-wave arrival time data. This effort requires the determination of polarization data for all P-waves and FZHWs at Parkfield. To this end, we use changes in the arrival azimuth from fault normal to source-receiver direction to identify FZHW and DWSA arrivals. We also use an eigenvalue decomposition to determine the direction of the incoming wave field, and to measure the arrival azimuths. This work is supported by the USGS Earthquake Hazards Program under grant numbers G11AP20027 and G11AP20028.
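
    The eigenvalue-decomposition step mentioned above has a compact form: the principal eigenvector of the covariance of a three-component window gives the dominant particle-motion direction, and its horizontal projection gives the arrival azimuth. A sketch on a synthetic window (geometry and noise level are illustrative):

```python
import numpy as np

# Principal eigenvector of the covariance of a 3-component window
# (Z, N, E) gives the dominant particle-motion direction; its horizontal
# projection yields the arrival azimuth. Synthetic, illustrative window.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 8 * t)
azimuth_true = np.deg2rad(40.0)
window = np.vstack([
    0.3 * signal,                        # Z
    np.cos(azimuth_true) * signal,       # N
    np.sin(azimuth_true) * signal,       # E
]) + 0.05 * rng.standard_normal((3, t.size))

cov = np.cov(window)
eigvals, eigvecs = np.linalg.eigh(cov)
principal = eigvecs[:, np.argmax(eigvals)]           # dominant direction
azimuth = np.degrees(np.arctan2(principal[2], principal[1])) % 180
print(f"estimated arrival azimuth: {azimuth:.1f} deg")  # ~40
```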

  13. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.

  14. Different Cortical Dynamics in Face and Body Perception: An MEG study

    PubMed Central

    Meeren, Hanneke K. M.; de Gelder, Beatrice; Ahlfors, Seppo P.; Hämäläinen, Matti S.; Hadjikhani, Nouchine

    2013-01-01

    Evidence from functional neuroimaging indicates that visual perception of human faces and bodies is carried out by distributed networks of face and body-sensitive areas in the occipito-temporal cortex. However, the dynamics of activity in these areas, needed to understand their respective functional roles, are still largely unknown. We monitored brain activity with millisecond time resolution by recording magnetoencephalographic (MEG) responses while participants viewed photographs of faces, bodies, and control stimuli. The cortical activity underlying the evoked responses was estimated with anatomically-constrained noise-normalised minimum-norm estimate and statistically analysed with spatiotemporal cluster analysis. Our findings point to distinct spatiotemporal organization of the neural systems for face and body perception. Face-selective cortical currents were found at early latencies (120–200 ms) in a widespread occipito-temporal network including the ventral temporal cortex (VTC). In contrast, early body-related responses were confined to the lateral occipito-temporal cortex (LOTC). These were followed by strong sustained body-selective responses in the orbitofrontal cortex from 200–700 ms, and in the lateral temporal cortex and VTC after 500 ms latency. Our data suggest that the VTC region has a key role in the early processing of faces, but not of bodies. Instead, the LOTC, which includes the extra-striate body area (EBA), appears the dominant area for early body perception, whereas the VTC contributes to late and post-perceptual processing. PMID:24039712
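
    The (unnormalized) minimum-norm inverse underlying such estimates has a closed form: with leadfield L and data y, the L2 minimum-norm source estimate is ŝ = Lᵀ(LLᵀ + λI)⁻¹y. A minimal sketch with a random placeholder leadfield; the noise-normalization step used in the study is omitted:

```python
import numpy as np

# Minimal L2 minimum-norm inverse: s_hat = L^T (L L^T + lam I)^{-1} y.
# Leadfield and data are random placeholders; noise normalization
# (dividing each source estimate by its noise sensitivity) is omitted.
rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))
y = rng.standard_normal(n_sensors)          # one time sample of MEG data
lam = 0.1 * np.trace(L @ L.T) / n_sensors   # simple regularization choice

s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
print(s_hat.shape)  # (500,) source amplitude estimates
```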

  15. The "Nursing Home Compare" measure of urinary/fecal incontinence: cross-sectional variation, stability over time, and the impact of case mix.

    PubMed

    Li, Yue; Schnelle, John; Spector, William D; Glance, Laurent G; Mukamel, Dana B

    2010-02-01

    To assess the impact of facility case mix on cross-sectional variations and short-term stability of the "Nursing Home Compare" incontinence quality measure (QM) and to determine whether multivariate risk adjustment can minimize such impacts. Retrospective analyses of the 2005 national minimum data set (MDS) that included approximately 600,000 long-term care residents in over 10,000 facilities in each quarterly sample. Mixed logistic regression was used to construct the risk-adjusted QM (nonshrinkage estimator). Facility-level ordinary least-squares models and adjusted R(2) were used to estimate the impact of case mix on cross-sectional and short-term longitudinal variations of currently published and risk-adjusted QMs. At least 50 percent of the cross-sectional variation and 25 percent of the short-term longitudinal variation of the published QM are explained by facility case mix. In contrast, the cross-sectional and short-term longitudinal variations of the risk-adjusted QM are much less susceptible to case-mix variations (adjusted R(2)<0.10), even for facilities with more extreme or more unstable outcome. Current "Nursing Home Compare" incontinence QM reflects considerable case-mix variations across facilities and over time, and therefore it may be biased. This issue can be largely addressed by multivariate risk adjustment using risk factors available in the MDS.

  16. Physical constraints in the condensation of eukaryotic chromosomes. Local concentration of DNA versus linear packing ratio in higher order chromatin structures.

    PubMed

    Daban, J R

    2000-04-11

    The local concentration of DNA in metaphase chromosomes of different organisms has been determined in several laboratories. The average of these measurements is 0.17 g/mL. In the first level of chromosome condensation, DNA is wrapped around histones forming nucleosomes. This organization limits the DNA concentration in nucleosomes to 0.3-0.4 g/mL. Furthermore, in the structural models suggested in different laboratories for the 30-40 nm chromatin fiber, the estimated DNA concentration is significantly reduced; it ranges from 0.04 to 0.27 g/mL. The DNA concentration is further reduced when the fiber is folded into the successive higher order structures suggested in different models for metaphase chromosomes; the estimated minimum decrease of DNA concentration represents an additional 40%. These observations suggest that most of the models proposed for the 30-40 nm chromatin fiber are not dense enough for the construction of metaphase chromosomes. In contrast, it is well-known that the linear packing ratio increases dramatically in each level of DNA folding in chromosomes. Thus, the consideration of the linear packing ratio is not enough for the study of chromatin condensation; the constraint resulting from the actual DNA concentration in metaphase chromosomes must be considered for the construction of models for condensed chromatin.

  17. A dynamo model for ɛ Eridani

    NASA Astrophysics Data System (ADS)

    Sraibman, L.; Buccino, A. P.; Minotti, F.

    2017-10-01

    ɛ Eridani is an active young K2V star (0.8 Gyr) that exhibits short- and long-term chromospheric cycles with periods of 3 and 13 yr. Between 1985 and 1992 the star went through a broad activity minimum, similar to the solar Maunder Minimum state. Motivated by these results, we found in ɛ Eridani a great opportunity to test the solar kinematic dynamo model built in sraibman16. In this work we present the components of the magnetic field at the stellar surface derived from the model. To compare these results with the activity record obtained from stellar observations, we also computed an activity index associated with the magnetic field.

  18. Estimate of Cost-Effective Potential for Minimum Efficiency Performance Standards in 13 Major World Economies: Energy Savings, Environmental and Financial Impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Letschert, Virginie E.; Bojda, Nicholas; Ke, Jing

    2012-07-01

    This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?

  19. A revised burial dose estimation procedure for optical dating of young and modern-age sediments

    USGS Publications Warehouse

    Arnold, L.J.; Roberts, R.G.; Galbraith, R.F.; DeLong, S.B.

    2009-01-01

    The presence of genuinely zero-age or near-zero-age grains in modern-age and very young samples poses a problem for many existing burial dose estimation procedures used in optical (optically stimulated luminescence, OSL) dating. This difficulty currently necessitates consideration of relatively simplistic and statistically inferior age models. In this study, we investigate the potential for using modified versions of the statistical age models of Galbraith et al. [Galbraith, R.F., Roberts, R.G., Laslett, G.M., Yoshida, H., Olley, J.M., 1999. Optical dating of single and multiple grains of quartz from Jinmium rock shelter, northern Australia: Part I, experimental design and statistical models. Archaeometry 41, 339-364.] to provide reliable equivalent dose (De) estimates for young and modern-age samples that display negative, zero or near-zero De estimates. For this purpose, we have revised the original versions of the central and minimum age models, which are based on log-transformed De values, so that they can be applied to un-logged De estimates and their associated absolute standard errors. The suitability of these 'un-logged' age models is tested using a series of known-age fluvial samples deposited within two arroyo systems from the American Southwest. The un-logged age models provide accurate burial doses and final OSL ages for roughly three-quarters of the total number of samples considered in this study. Sensitivity tests reveal that the un-logged versions of the central and minimum age models are capable of producing accurate burial dose estimates for modern-age and very young (<350 yr) fluvial samples that contain (i) more than 20% of well-bleached grains in their De distributions, or (ii) smaller sub-populations of well-bleached grains for which the De values are known with high precision. Our results indicate that the original (log-transformed) versions of the central and minimum age models are still preferable for most routine dating applications, since these age models are better suited to the statistical properties of typical single-grain and multi-grain single-aliquot De datasets. However, the unique error properties of modern-age samples, combined with the problems of calculating natural logarithms of negative or zero-Gy De values, mean that the un-logged versions of the central and minimum age models currently offer the most suitable means of deriving accurate burial dose estimates for very young and modern-age samples. © 2009 Elsevier Ltd. All rights reserved.
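
    A sketch of our reading of the 'un-logged' central age model: each equivalent dose is modelled as normal around a central dose, with variance equal to its squared standard error plus an overdispersion term, and both parameters are fitted by maximum likelihood. The dose values below are synthetic, and this is an illustration of the model's idea, not the authors' published code:

```python
import numpy as np
from scipy.optimize import minimize

# D_i ~ N(delta, sigma_overdisp^2 + s_i^2) with known errors s_i;
# (delta, sigma_overdisp) found by maximum likelihood. Synthetic doses,
# which may legitimately be <= 0 Gy in the un-logged formulation.
de = np.array([-0.05, 0.02, 0.10, 0.04, 0.30, -0.01])  # Gy
se = np.array([0.04, 0.03, 0.05, 0.04, 0.08, 0.03])    # absolute SEs (Gy)

def neg_log_lik(params):
    delta, log_sigma = params
    var = np.exp(log_sigma) ** 2 + se**2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (de - delta) ** 2 / var)

fit = minimize(neg_log_lik, x0=[de.mean(), np.log(0.05)],
               method="Nelder-Mead")
delta_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"central dose: {delta_hat:.3f} Gy, overdispersion: {sigma_hat:.3f} Gy")
```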

  20. Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong

    2011-01-01

    A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor fault or component fault, but not both, this method considers sensor fault, actuator fault, and component fault under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.
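
    The residual-evaluation idea reduces to comparing a measured-minus-estimated signal against a threshold that adapts to operating conditions. A toy sketch (the signals, fault size, and threshold rule are all illustrative, not the paper's TBM scheme):

```python
import numpy as np

# Residual = measured - estimated output; a fault is declared when the
# residual exceeds an adaptive threshold that loosens during transients.
rng = np.random.default_rng(2)
t = np.arange(500)
measured = 1.0 + 0.02 * rng.standard_normal(t.size)
measured[300:] += 0.15            # inject a sensor bias fault at t = 300
estimated = np.ones(t.size)       # output of a diagnostic estimator

residual = measured - estimated
baseline = 0.02                           # nominal noise level
activity = np.abs(np.gradient(estimated)) # crude transient indicator
threshold = 4 * baseline + 2 * activity   # adaptive threshold rule

fault = np.abs(residual) > threshold
print("first fault alarm at t =", int(np.argmax(fault)))  # ~300
```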

  1. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

    Combining the Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which results in time-consuming training and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also proposed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to test the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), and with an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using a wavelet kernel function in a Reproducing Kernel Hilbert Space improves classification accuracy markedly.
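
    A sketch of the wavelet-kernel construction with a Morlet-type mother wavelet, a common translation-invariant choice (the paper's best result used a Coiflet kernel; the dataset and scale parameter here are ours):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Wavelet kernel of the form K(x, y) = prod_j h(d_j) with d = x - y and
# h a Morlet-type mother wavelet: h(d) = cos(1.75 d / a) exp(-d^2 / 2a^2).
A = 1.0  # wavelet scale (our choice)

def wavelet_kernel(X, Y):
    d = X[:, None, :] - Y[None, :, :]        # pairwise differences
    return np.prod(np.cos(1.75 * d / A) * np.exp(-d**2 / (2 * A**2)),
                   axis=2)

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
clf = SVC(kernel=wavelet_kernel).fit(X[:200], y[:200])
print("held-out accuracy:", clf.score(X[200:], y[200:]))
```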

  2. The Solar System Large Planets influence on a new Maunder Minimum

    NASA Astrophysics Data System (ADS)

    Yndestad, Harald; Solheim, Jan-Erik

    2016-04-01

    In the 1890s, G. Spörer and E. W. Maunder (1890) reported that solar activity stopped for a period of 70 years, from 1645 to 1715. Later reconstructions of solar activity confirmed the grand minima of Maunder (1640-1720), Spörer (1390-1550), and Wolf (1270-1340), and the minima of Oort (1010-1070) and Dalton (1785-1810), since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with reduced irradiation from the Sun and cold climate periods on Earth. The identification of three grand Maunder-type periods and two Dalton-type periods within a thousand years indicates that sooner or later a new Maunder- or Dalton-type period will bring a colder climate on Earth. The cause of these minimum periods is not well understood. Anticipating a new Maunder-type period depends on the properties of solar variability: if solar variability has a deterministic element, we can better estimate the next Maunder grand minimum, whereas a purely random solar variability can only explain the past. This investigation is based on the simple idea that if solar variability has a deterministic property, it must have a deterministic source as a first cause; if that source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611, and a solar barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis, used to identify stationary periods, coincidence periods, and their phase relations. The results show that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus, and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder-type sunspot minimum period from about 2018 to 2055. The deterministic model of the TSI ACRIM data series from 1700 computes a new Maunder-type grand minimum period from 2015 to 2071. A model of the longer TSI ACRIM data series from 1000 computes a new Dalton-to-Maunder-type minimum irradiation period from 2047 to 2068.
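
    The wavelet-spectrum step can be sketched with a continuous wavelet transform of a sunspot-like series, where the scale of maximum time-averaged power identifies a stationary period. The series below is synthetic, and PyWavelets stands in for whatever analysis software the authors used:

```python
import numpy as np
import pywt

# Morlet CWT of a synthetic annual series with an 11-yr cycle; the scale
# of maximum time-averaged power recovers the dominant stationary period.
years = np.arange(1700, 2000)
series = 50 + 40 * np.sin(2 * np.pi * (years - 1700) / 11.0)

scales = np.arange(2, 64)
coefs, freqs = pywt.cwt(series - series.mean(), scales, "morl",
                        sampling_period=1.0)        # 1 sample = 1 year
power = np.abs(coefs) ** 2
dominant_period = 1.0 / freqs[power.mean(axis=1).argmax()]
print(f"dominant period ~ {dominant_period:.1f} yr")  # ~11
```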

  3. Contrast of slightly complex patterns: computing the perceived contrast of Gabor patches

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1996-04-01

    The local contrast in an image may be approximated by the contrast of a Gabor patch of varying phase and bandwidth. In a search for a metric for such local contrast, perceived (apparent) contrast, as indicated by matching of such patterns, was compared here to the physical contrast calculated by a number of methods. The 2 cycles/deg, 1-octave Gabor patch stimuli of different phases were presented side by side, separated by 4 degrees. During each session the subjects (n = 5) were adapted to the average luminance, and four different contrast levels (0.1, 0.3, 0.6, and 0.8) were randomly interleaved. The task was repeated at four mean luminance levels between 0.75 and 37.5 cd/m2. The subject's task was to indicate which of the two patterns was lower in contrast. Equal apparent contrast was determined by fitting a psychometric function to the data from 40 to 70 presentations. There was no effect of mean luminance on the subjects' settings. The matching results rejected the hypothesis that either the Michelson formula or the King-Smith & Kulikowski contrast (CKK = (Lmax - Laverage)/Laverage) was used by the subjects to set the match. The use of the Nominal contrast (the Michelson contrast of the underlying sinusoid) as an estimate of apparent contrast could not be rejected. In a second experiment the apparent contrast of a 1-octave Gabor patch was matched to the apparent contrast of a 2-octave Gabor patch (of Nominal contrast 0.1, 0.3, 0.6, or 0.8) using the method of adjustment. The results of this experiment rejected the prediction of the Nominal contrast definition. The local band-limited contrast measure (Peli, 1990), when used with the modifications suggested by Lubin (1995), as an estimate of apparent contrast could not be rejected by the results of either experiment. These results suggest that a computational contrast measure based on multiscale bandpass filtering is a better estimate of apparent perceived contrast than any of the other measures tested.
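
    The competing physical definitions above are easy to compute on a synthetic Gabor patch, which makes their disagreement concrete (the patch parameters below are illustrative):

```python
import numpy as np

# Michelson contrast (Lmax - Lmin)/(Lmax + Lmin) and the King-Smith &
# Kulikowski contrast (Lmax - Lavg)/Lavg, evaluated on a Gabor patch
# whose underlying sinusoid has a known Nominal contrast.
size, cpd, sigma_deg, deg_per_px = 256, 2.0, 0.5, 1 / 64
x = (np.arange(size) - size / 2) * deg_per_px
X, Y = np.meshgrid(x, x)
nominal_contrast = 0.6
L_mean = 1.0
gabor = L_mean * (1 + nominal_contrast *
                  np.cos(2 * np.pi * cpd * X) *
                  np.exp(-(X**2 + Y**2) / (2 * sigma_deg**2)))

Lmax, Lmin, Lavg = gabor.max(), gabor.min(), gabor.mean()
print("Michelson:", (Lmax - Lmin) / (Lmax + Lmin))
print("King-Smith & Kulikowski:", (Lmax - Lavg) / Lavg)
print("Nominal (of underlying sinusoid):", nominal_contrast)
```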

  4. Cues for Lexical Tone Perception in Children: Acoustic Correlates and Phonetic Context Effects

    ERIC Educational Resources Information Center

    Tong, Xiuli; McBride, Catherine; Burnham, Denis

    2014-01-01

    Purpose: The authors investigated the effects of acoustic cues (i.e., pitch height, pitch contour, and pitch onset and offset) and phonetic context cues (i.e., syllable onsets and rimes) on lexical tone perception in Cantonese-speaking children. Method: Eight minimum pairs of tonal contrasts were presented in either an identical phonetic context…

  5. 49 CFR 172.315 - Limited quantities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... and bottom portions of the square-on-point and the border forming the square-on-point must be black and the center white or of a suitable contrasting background as follows: ER30DE11.004 (2) The square... the square-on-point must be at least 2 mm and the minimum dimension of each side must be 100 mm unless...

  6. 49 CFR 172.315 - Limited quantities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... and bottom portions of the square-on-point and the border forming the square-on-point must be black and the center white or of a suitable contrasting background as follows: ER30DE11.004 (2) The square... the square-on-point must be at least 2 mm and the minimum dimension of each side must be 100 mm unless...

  7. 49 CFR 172.315 - Limited quantities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... and bottom portions of the square-on-point and the border forming the square-on-point must be black and the center white or of a suitable contrasting background as follows: ER30DE11.004 (2) The square... the square-on-point must be at least 2 mm and the minimum dimension of each side must be 100 mm unless...

  8. Estimated minimum savings to the Medicaid budget in Florida by implementing a primary seat belt law

    DOT National Transportation Integrated Search

    2007-03-01

    A 2003 study estimated that if all States had primary laws from 1995 to 2002, over 12,000 lives would have been saved. Failure to implement a primary belt law creates a real cost to a State's budget for Medicaid and other State medical expenditures...

  9. Estimated minimum savings to the Medicaid budget in Arkansas by implementing a primary seat belt law

    DOT National Transportation Integrated Search

    2007-03-01

    A 2003 study estimated that if all States had primary laws from 1995 to 2002, over 12,000 lives would have been saved. Failure to implement a primary belt law creates a real cost to a State's budget for Medicaid and other State medical expenditures...

  10. 7 CFR 1781.17 - Docket preparation and processing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...

  11. 7 CFR 1781.17 - Docket preparation and processing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...

  12. 7 CFR 1781.17 - Docket preparation and processing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...

  13. 7 CFR 1781.17 - Docket preparation and processing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...

  14. 7 CFR 1781.17 - Docket preparation and processing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...

  15. Testing for Seed Quality in Southern Oaks

    Treesearch

    F.T. Bonner

    1984-01-01

    Expressions of germination rate, such as peak value (PV) or mean germination time (MGT), provide good estimates of acorn quality, but test completion requires a minimum of 3 weeks. For more rapid estimates, tetrazolium staining is recommended. Some seed test results were significantly correlated with nursery germination of cherrybark and water oaks, but not with...
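
    The two rate expressions have simple forms as usually defined: mean germination time weights each observation day by the number of new germinants, and peak value is the maximum of cumulative germination percent divided by days elapsed. A sketch with invented counts:

```python
# MGT = sum(n_i * t_i) / sum(n_i); PV = max over days of cumulative
# germination percent / days elapsed. Counts below are illustrative.
days   = [7, 10, 14, 17, 21]      # observation days
counts = [5, 18, 12, 6, 2]        # acorns newly germinated on each day
total_sown = 50

mgt = sum(n * t for n, t in zip(counts, days)) / sum(counts)

cumulative_pct = []
running = 0
for n in counts:
    running += n
    cumulative_pct.append(100 * running / total_sown)
pv = max(pct / t for pct, t in zip(cumulative_pct, days))

print(f"MGT = {mgt:.1f} days, PV = {pv:.2f} %/day")
```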

  16. Estimated minimum savings to the Medicaid budget in Missouri by implementing a primary seat belt law

    DOT National Transportation Integrated Search

    2007-03-01

    A 2003 study estimated that if all States had primary laws from 1995 to 2002, over 12,000 lives would have been saved. Failure to implement a primary belt law creates a real cost to a State's budget for Medicaid and other State medical expenditures...

  17. Mars surface radiation exposure for solar maximum conditions and 1989 solar proton events

    NASA Technical Reports Server (NTRS)

    Simonsen, Lisa C.; Nealy, John E.

    1992-01-01

    The Langley heavy-ion/nucleon transport code, HZETRN, and the high-energy nucleon transport code, BRYNTRN, are used to predict the propagation of galactic cosmic rays (GCR's) and solar flare protons through the carbon dioxide atmosphere of Mars. Particle fluences and the resulting doses are estimated on the surface of Mars for GCR's during solar maximum conditions and the Aug., Sep., and Oct. 1989 solar proton events. These results extend previously calculated surface estimates for GCR's at solar minimum conditions and the Feb. 1956, Nov. 1960, and Aug. 1972 solar proton events. Surface doses are estimated with both a low-density and a high-density carbon dioxide model of the atmosphere for altitudes of 0, 4, 8, and 12 km above the surface. A solar modulation function is incorporated to estimate the GCR dose variation between solar minimum and maximum conditions over the 11-year solar cycle. By using current Mars mission scenarios, doses to the skin, eye, and blood-forming organs are predicted for short- and long-duration stay times on the Martian surface throughout the solar cycle.

  18. Fast adaptive diamond search algorithm for block-matching motion estimation using spatial correlation

    NASA Astrophysics Data System (ADS)

    Park, Sang-Gon; Jeong, Dong-Seok

    2000-12-01

    In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block-matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity through the UESA (Unimodal Error Surface Assumption), under which the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) have exploited the fact that global minimum points in real-world video sequences are centered at the position of zero motion. But these BMAs, especially for large motion, are easily trapped in local minima, resulting in poor matching accuracy. We therefore propose a new motion estimation algorithm that uses the spatial correlation among neighboring blocks: the search origin is moved according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). Computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but a higher PSNR. Moreover, it gives almost the same PSNR as FS (Full Search), even for large motion, at half the computational load.
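
    As a rough illustration of the search strategy described above, the following Python sketch implements a small-diamond block search around a predicted origin taken from neighboring motion vectors. It is a minimal sketch under simplified assumptions (MAE cost, small diamond pattern only, origin assumed inside the frame), not the authors' FADS implementation; all names are illustrative.

        import numpy as np

        def mae(a, b):
            # Mean absolute error between two equal-sized blocks.
            return np.mean(np.abs(a.astype(float) - b.astype(float)))

        def diamond_search(ref, cur, x, y, bs=16, origin=(0, 0)):
            # ref, cur : reference and current frames (2D arrays)
            # (x, y)   : top-left corner of the block in the current frame
            # origin   : predicted motion vector, e.g. the median of the
            #            motion vectors of spatially neighboring blocks
            h, w = ref.shape
            block = cur[y:y + bs, x:x + bs]
            best = tuple(origin)
            pattern = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # small diamond
            while True:
                costs = {}
                for dx, dy in pattern:
                    mx, my = best[0] + dx, best[1] + dy
                    px, py = x + mx, y + my
                    if 0 <= px <= w - bs and 0 <= py <= h - bs:
                        costs[(mx, my)] = mae(block, ref[py:py + bs, px:px + bs])
                new_best = min(costs, key=costs.get)
                if new_best == best:      # center of the diamond is best: stop
                    return best           # final motion vector (dx, dy)
                best = new_best

    In a full encoder the origin would be predicted from the already-estimated left, top, and top-right neighbors, which is what lets the search escape the zero-motion basin for large motion.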

  19. DOSIMETRIC CONSEQUENCES OF USING CONTRAST-ENHANCED COMPUTED TOMOGRAPHIC IMAGES FOR INTENSITY-MODULATED STEREOTACTIC BODY RADIOTHERAPY PLANNING.

    PubMed

    Yoshikawa, Hiroto; Roback, Donald M; Larue, Susan M; Nolan, Michael W

    2015-01-01

    Potential benefits of planning radiation therapy on a contrast-enhanced computed tomography scan (ceCT) should be weighed against the possibility that this practice may be associated with an inadvertent risk of overdosing nearby normal tissues. This study investigated the influence of ceCT on intensity-modulated stereotactic body radiotherapy (IM-SBRT) planning. Dogs with head and neck, pelvic, or appendicular tumors were included in this retrospective cross-sectional study. All IM-SBRT plans were constructed on a pre- or ceCT. Contours for tumor and organs at risk (OAR) were manually constructed and copied onto both CTs; IM-SBRT plans were calculated on each CT in a manner that resulted in equal radiation fluence. The maximum and mean doses for OAR, and minimum, maximum, and mean doses for targets were compared. Data were collected from 40 dogs per anatomic site (head and neck, pelvis, and limbs). The average dose difference between minimum, maximum, and mean doses as calculated on pre- and ceCT plans for the gross tumor volume was less than 1% for all anatomic sites. Similarly, the differences between mean and maximum doses for OAR were less than 1%. The difference in dose distribution between plans made on CTs with and without contrast enhancement was tolerable at all treatment sites. Therefore, although caution would be recommended when planning IM-SBRT for tumors near "reservoirs" for contrast media (such as the heart and urinary bladder), findings supported the use of ceCT with this dose calculation algorithm for both target delineation and IM-SBRT treatment planning. © 2015 American College of Veterinary Radiology.

  20. Estimation of representative elementary volume for DNAPL saturation and DNAPL-water interfacial areas in 2D heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Wu, Ming; Cheng, Zhou; Wu, Jianfeng; Wu, Jichun

    2017-06-01

    Representative elementary volume (REV) is important for determining properties of porous media and of the contaminants migrating through them, especially dense nonaqueous phase liquids (DNAPLs) in the subsurface environment. In this study, an experiment on the long-term migration of a commonly used DNAPL, perchloroethylene (PCE), is performed in a two-dimensional (2D) sandbox, where several system variables including porosity, PCE saturation (S_oil), and PCE-water interfacial area (A_OW) are accurately quantified by light transmission techniques over the entire PCE migration process. The REVs for these system variables are estimated by a criterion of relative gradient error (ε_gi); results indicate that the frequency of the minimum porosity-REV size closely follows a Gaussian distribution in the range of 2.0 mm to 8.0 mm. As the experiment proceeds through the PCE infiltration process, the frequency and cumulative frequency of both the minimum S_oil-REV and minimum A_OW-REV sizes change from irregular and random shapes to regular and smooth ones. When the experiment enters the redistribution process, the cumulative frequency of the minimum S_oil-REV size shows a linear positive correlation, while the frequency of the minimum A_OW-REV size tends to a Gaussian distribution in the range of 2.0 mm-7.0 mm and shows a peak at 13.0 mm-14.0 mm. This study facilitates the quantification of REVs for material and fluid properties in a rapid, handy, and economical manner, which helps improve our understanding of porous media and DNAPL properties at the micro scale, as well as the accuracy of DNAPL contamination modeling at the field scale.
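
    A relative gradient criterion of this kind can be sketched as follows: grow a window around a point, track the windowed mean of the property, and accept the smallest window beyond which the mean stops changing appreciably. The Python sketch below is a generic reading of such a criterion; the tolerance, the normalization, and all names are assumptions, and the paper's exact ε_gi definition may differ.

        import numpy as np

        def min_rev_size(field, center, sizes, tol=0.05):
            # field  : 2D array of a local property (porosity, saturation, ...)
            # center : (row, col) around which square windows are grown
            # sizes  : increasing window half-widths, in pixels
            r, c = center
            means = []
            for s in sizes:
                win = field[max(r - s, 0):r + s + 1, max(c - s, 0):c + s + 1]
                means.append(win.mean())
            means = np.asarray(means)
            # relative gradient of the windowed mean between successive sizes
            eg = np.abs(np.diff(means)) / (np.abs(means[1:]) * np.diff(sizes))
            below = np.where(eg < tol)[0]
            # smallest window size at which the relative gradient falls below tol
            return sizes[below[0] + 1] if below.size else None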

  1. An adaptive technique for estimating the atmospheric density profile during the AE mission

    NASA Technical Reports Server (NTRS)

    Argentiero, P.

    1973-01-01

    A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and the probe parameters are treated in a 'consider' mode: their estimates are not improved, but their associated uncertainties are allowed to influence the filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.

  2. Indian monsoon variations during three contrasting climatic periods: The Holocene, Heinrich Stadial 2 and the last interglacial-glacial transition

    NASA Astrophysics Data System (ADS)

    Zorzi, Coralie; Sanchez Goñi, Maria Fernanda; Anupama, Krishnamurthy; Prasad, Srinivasan; Hanquiez, Vincent; Johnson, Joel; Giosan, Liviu

    2015-10-01

    In contrast to the East Asian and African monsoons the Indian monsoon is still poorly documented throughout the last climatic cycle (last 135,000 years). Pollen analysis from two marine sediment cores (NGHP-01-16A and NGHP-01-19B) collected from the offshore Godavari and Mahanadi basins, both located in the Core Monsoon Zone (CMZ) reveals changes in Indian summer monsoon variability and intensity during three contrasting climatic periods: the Holocene, the Heinrich Stadial (HS) 2 and the Marine Isotopic Stage (MIS) 5/4 during the ice sheet growth transition. During the first part of the Holocene between 11,300 and 4200 cal years BP, characterized by high insolation (minimum precession, maximum obliquity), the maximum extension of the coastal forest and mangrove reflects high monsoon rainfall. This climatic regime contrasts with that of the second phase of the Holocene, from 4200 cal years BP to the present, marked by the development of drier vegetation in a context of low insolation (maximum precession, minimum obliquity). The historical period in India is characterized by an alternation of strong and weak monsoon centennial phases that may reflect the Medieval Climate Anomaly and the Little Ice Age, respectively. During the HS 2, a period of low insolation and extensive iceberg discharge in the North Atlantic Ocean, vegetation was dominated by grassland and dry flora indicating pronounced aridity as the result of a weak Indian summer monsoon. The MIS 5/4 glaciation, also associated with low insolation but moderate freshwater fluxes, was characterized by a weaker reduction of the Indian summer monsoon and a decrease of seasonal contrast as recorded by the expansion of dry vegetation and the development of Artemisia, respectively. Our results support model predictions suggesting that insolation changes control the long term trend of the Indian monsoon precipitation, but its millennial scale variability and intensity are instead modulated by atmospheric teleconnections to remote phenomena in the North Atlantic, Eurasia or the Indian Ocean.

  3. Scores on Riley's stuttering severity instrument versions three and four for samples of different length and for different types of speech material.

    PubMed

    Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter

    2014-12-01

    Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This was investigated. Procedures supplied for the assessment of readers and non-readers were examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples 200 syllables long are the minimum appropriate for obtaining stable Riley severity scores. The procedural variants provide similar severity scores.

  4. Defining Incident Cases of Epilepsy in Administrative Data

    PubMed Central

    Bakaki, Paul M.; Koroukian, Siran M.; Jackson, Leila W.; Albert, Jeffrey M.; Kaiboriboon, Kitti

    2013-01-01

    Purpose To determine the minimum enrollment duration for identifying incident cases of epilepsy in administrative data. Methods We performed a retrospective dynamic cohort study using Ohio Medicaid data from 1992–2006 to identify a total of 5,037 incident epilepsy cases who had at least 1 year of follow-up prior to epilepsy diagnosis (epilepsy-free interval). The incidence for epilepsy-free intervals from 1 to 8 years, overall and stratified by pre-existing disability status, was examined. A graphical comparison of the slopes of the incidence estimates across epilepsy-free intervals was used to identify the minimum epilepsy-free interval that minimized misclassification of prevalent as incident epilepsy cases. Results As the length of the epilepsy-free interval increased, the incidence rates decreased. A graphical plot showed that the decline in incidence of epilepsy became nearly flat beyond the third epilepsy-free interval. Conclusion A minimum 3-year epilepsy-free interval is needed to differentiate incident from prevalent cases in administrative data. Shorter or longer epilepsy-free intervals could result in over- or under-estimation of epilepsy incidence. PMID:23791310

  5. Uncertainty of the potential curve minimum for diatomic molecules extrapolated from Dunham type coefficients

    NASA Astrophysics Data System (ADS)

    Ilieva, T.; Iliev, I.; Pashov, A.

    2016-12-01

    In the traditional description of electronic states of diatomic molecules by means of molecular constants or Dunham coefficients, one of the important fitting parameters is the value of the zero-point energy, i.e., the minimum of the potential curve or the energy of the lowest vibrational-rotational level, E_00. Their values are almost always the result of an extrapolation and it may be difficult to estimate their uncertainties, because they are connected not only with the uncertainty of the experimental data, but also with the distribution of experimentally observed energy levels and the particular realization of the set of Dunham coefficients. This paper presents a comprehensive analysis based on Monte Carlo simulations, which aims to demonstrate the influence of all these factors on the uncertainty of the extrapolated minimum of the potential energy curve U(R_e) and the value of E_00. The very good extrapolation properties of the Dunham coefficients are quantitatively confirmed and it is shown that for a proper estimate of the uncertainties, the ambiguity in the composition of the Dunham coefficients should be taken into account.
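
    A minimal Monte Carlo illustration of this idea in Python: synthetic vibrational term values are generated, noise is added repeatedly, a Dunham-type polynomial in (v + 1/2) is refitted, and the spread of the constant term, i.e., the extrapolated potential minimum, is reported. The parameter values, noise level, and fit order below are illustrative assumptions, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic term values G(v) = we*(v+1/2) - wexe*(v+1/2)^2, offset so
        # the potential minimum sits at Tmin (the quantity to recover).
        we, wexe, Tmin = 100.0, 1.5, 5000.0          # cm^-1, illustrative
        v = np.arange(0, 15)
        x = v + 0.5
        levels = Tmin + we * x - wexe * x**2
        sigma = 0.05                                  # measurement noise, cm^-1

        order = 3                                     # number of Dunham-type terms
        mins = []
        for _ in range(2000):                         # Monte Carlo resampling
            noisy = levels + rng.normal(0.0, sigma, size=levels.size)
            # fit E(v) = sum_k c_k (v + 1/2)^k; c_0 is the extrapolated minimum
            coef = np.polynomial.polynomial.polyfit(x, noisy, order)
            mins.append(coef[0])

        mins = np.array(mins)
        print(f"extrapolated minimum: {mins.mean():.3f} +/- {mins.std():.3f} cm^-1")

    Repeating the experiment with a different polynomial order shows the dependence on the composition of the coefficient set that the paper emphasizes.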

  6. Galactic Cosmic Ray Intensity in the Upcoming Minimum of the Solar Activity Cycle

    NASA Astrophysics Data System (ADS)

    Krainev, M. B.; Bazilevskaya, G. A.; Kalinin, M. S.; Svirzhevskaya, A. K.; Svirzhevskii, N. S.

    2018-03-01

    During the prolonged and deep minimum of solar activity between cycles 23 and 24, an unusual behavior of the heliospheric characteristics and increased intensity of galactic cosmic rays (GCRs) near the Earth's orbit were observed. The maximum of the current solar cycle 24 is lower than the previous one, and the decline in solar and, therefore, heliospheric activity is expected to continue in the next cycle. In these conditions, it is important, both for an understanding of the process of GCR modulation in the heliosphere and for applied purposes (evaluation of the radiation safety of planned space flights, etc.), to estimate quantitatively the possible GCR characteristics near the Earth in the upcoming solar minimum (~2019-2020). Our estimation is based on the prediction of the heliospheric characteristics that are important for cosmic ray modulation, as well as on numeric calculations of GCR intensity. Additionally, we consider the distribution of the intensity and other GCR characteristics in the heliosphere and discuss the intercycle variations in the GCR characteristics that are integral for the whole heliosphere (total energy, mean energy, and charge).

  7. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    NASA Astrophysics Data System (ADS)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation, as it affects estimates of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile, and I investigate its properties through numerical simulations. These show that the maximum eigenvector of the tensor points to the high-density causative body and that its dip closely follows the dip of a normal fault; likewise, the minimum eigenvector points to the low-density causative body and its dip closely follows the dip of a reverse fault. The eigenvector of the gravity gradient tensor appropriate for estimating fault dip is thus determined by fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result consistent with conventional fault dip estimates from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I also present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
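
    For a 2D profile the idea reduces to the eigen-decomposition of a symmetric 2 x 2 tensor at each station. The numpy sketch below (array names and the dip convention are assumptions) returns the dips of the maximum and minimum eigenvectors, which per the abstract track normal-fault and reverse-fault dips, respectively.

        import numpy as np

        def eigenvector_dips(gxx, gxz, gzz):
            # Dips, in degrees from horizontal, of the eigenvectors of the
            # 2D gravity gradient tensor [[gxx, gxz], [gxz, gzz]] at each
            # station of a profile.
            dips_max, dips_min = [], []
            for xx, xz, zz in zip(gxx, gxz, gzz):
                T = np.array([[xx, xz], [xz, zz]])
                vals, vecs = np.linalg.eigh(T)       # eigenvalues ascending
                v_min, v_max = vecs[:, 0], vecs[:, 1]
                dips_min.append(np.degrees(np.arctan2(abs(v_min[1]), abs(v_min[0]))))
                dips_max.append(np.degrees(np.arctan2(abs(v_max[1]), abs(v_max[0]))))
            return np.array(dips_max), np.array(dips_min)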

  8. Climatic significance of the ostracode fauna from the Pliocene Kap København Formation, north Greenland

    USGS Publications Warehouse

    Brouwers, E.M.; Jorgensen, N.O.; Cronin, T. M.

    1991-01-01

    The Kap København Formation crops out in Greenland at 80°N latitude and marks the most northerly onshore Pliocene locality known. The sands and silts that comprise the formation were deposited in marginal marine and shallow marine environments. An abundant and diverse vertebrate and invertebrate fauna and plant megafossil flora provide age and paleoclimatic constraints. The age estimated for the Kap København Formation ranges from 2.0 to 3.0 million years. Winter and summer bottom water paleotemperatures were estimated on the basis of the ostracode assemblages. The marine ostracode fauna in units B1 and B2 indicates a subfrigid to frigid marine climate, with estimated minimum sea bottom temperatures (SBT) of -2°C and estimated maximum SBT of 6-8°C. Sediments assigned to unit B2 at locality 72 contain a higher proportion of warm water genera, and the maximum SBT is estimated at 9-10°C. The marginal marine fauna in the uppermost unit B3 (locality 68) indicates a cold temperate to subfrigid marine climate, with an estimated minimum SBT of -2°C and an estimated maximum SBT ranging as high as 12-14°C. These temperatures indicate that, on average, the Kap København winters in the late Pliocene were similar to or perhaps 1-2°C warmer than winters today and that summer temperatures were 7-8°C warmer than today. -from Authors

  9. Multiple-rule bias in the comparison of classification rules

    PubMed Central

    Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.

    2011-01-01

    Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two practices contributing to overoptimism in classification are (i) reporting results on datasets for which a proposed classification rule performs well and (ii) comparing multiple classification rules on a single dataset in a way that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’ that results from choosing the classification rule having minimum estimated error on the dataset. It quantifies the bias incurred when the expected true error of that rule is estimated by its minimum estimated error, and it characterizes the bias incurred when the true comparative advantage of the chosen rule over the others is estimated by its apparent advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390
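
    The mechanism behind the multiple-rule bias is easy to reproduce: the minimum of several unbiased error estimates is an optimistically biased estimate of the chosen rule's true error. A minimal Python sketch, with all parameters illustrative:

        import numpy as np

        rng = np.random.default_rng(1)

        K, n, trials = 20, 100, 5000      # rules, test-set size, MC trials
        true_err = np.full(K, 0.25)       # all rules equally good on purpose

        gap = []
        for _ in range(trials):
            # estimated error of each rule: binomial counting error on n samples
            est = rng.binomial(n, true_err) / n
            best = np.argmin(est)         # pick the rule with minimum estimated error
            gap.append(true_err[best] - est[best])

        # positive mean gap = optimistic (downward) bias of the reported error
        print(f"mean multiple-rule bias: {np.mean(gap):.4f}")

    Even though every rule here has the same true error, the reported minimum estimated error understates it, and the gap grows with the number of rules compared.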

  10. Threshold of long-term survival of a coastal delphinid in anthropogenically degraded environment: Indo-Pacific humpback dolphins in Pearl River Delta.

    PubMed

    Karczmarski, Leszek; Huang, Shiang-Lin; Chan, Stephen C Y

    2017-02-23

    Defining demographic and ecological thresholds of population persistence can assist in informing conservation management. We undertook such analyses for the Indo-Pacific humpback dolphin (Sousa chinensis) in the Pearl River Delta (PRD) region, southeast China. We use adult survival estimates to assess population status and annual rate of change. Our estimates indicate that, given a stationary population structure and a minimal-risk scenario, ~2000 individuals (minimum viable population at carrying capacity, MVP_K) can maintain population persistence across 40 generations. However, under the current population trend (~2.5% decline/annum), the population is fast approaching its viability threshold and may soon face effects of demographic stochasticity. The population demographic trajectory and the minimum area of critical habitat (MACH) that could prevent stochastic extinction are both highly sensitive to fluctuations in adult survival. For a hypothetical stationary population, MACH should approximate 3000 km². However, this estimate increases four-fold with a 5% increase in adult mortality and exceeds the size of the PRD when calculated for the current population status. On the other hand, all current MPAs within the PRD cumulatively fail to secure the minimum habitat requirement to accommodate a sufficiently viable population size. Our findings indicate that the PRD population is bound for extinction unless effective conservation measures can rapidly reverse the current population trend.

  11. ESPC Coupled Global Prediction System - Develop and Test Coupled Physical Parameterizations: NAVGEM/CICE/HYCOM

    DTIC Science & Technology

    2013-09-30

    the Study of the Environmental Arctic Change (SEARCH) Sea Ice Outlook (SIO) effort. The SIO is an international effort to provide a community-wide... summary of the expected September Arctic sea ice minimum. Monthly reports released throughout the summer synthesize community estimates of the current... state and expected minimum of sea ice. Along with the backbone components of this system (NAVGEM/HYCOM/CICE), other data models have been used to...

  12. Turbine Engine Fault Detection and Isolation Program. Volume I. Turbine Engine Performance Estimation Methods

    DTIC Science & Technology

    1982-08-01

    Excerpt of the tabulated per-channel data ranges (flattened in the source scan):

        NUMBER OF POINTS: 1988
        CHANNEL          MINIMUM    MAXIMUM
        1  PHMG          -130.13     130.00
        2  PS3           -218.12     294.77
        3  T3            -341.54     738.15
        4  T5            -464.78     623.47
        5  PT51            12.317    ...

        (Continued) CRUISE AND TAKE-OFF MODE DATA
        NUMBER OF POINTS: 4137
        CHANNEL          MINIMUM    MAXIMUM
        1  PHMG          -130.13     130.00
        2  PS3           -218.12     376.60
        3  T3            -482.72    ...

  13. Measuring and Estimating Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2013-01-01

    Infrared flash thermography (IRFT) is used to detect void-like flaws in a test object. The IRFT technique involves heating the part surface with a flash from flash lamps. The post-flash evolution of the part surface temperature is sensed by an IR camera in terms of the pixel intensity of image pixels. The technique involves recording the IR video image data and analyzing them using the normalized pixel-intensity and temperature-contrast analysis method to characterize void-like flaws for depth and width. This work introduces a new definition of the normalized IR pixel intensity contrast and the normalized surface temperature contrast. A procedure is provided to compute the pixel intensity contrast from the camera pixel intensity evolution data. The pixel intensity contrast and the corresponding surface temperature contrast differ but are related. This work provides a method to estimate the temperature evolution and the normalized temperature contrast from the measured pixel intensity evolution data and some additional measurements made during data acquisition.
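
    One common definition of normalized contrast, used here only as an assumption since the paper introduces its own definition, compares the intensity evolution over a suspected flaw with that over a sound region, each normalized by a reference frame:

        import numpy as np

        def normalized_contrast(I_def, I_ref, t0):
            # One common normalized-contrast form (an assumption here, not
            # necessarily the paper's new definition):
            #     C(t) = I_def(t) / I_def(t0) - I_ref(t) / I_ref(t0)
            # I_def : pixel-intensity evolution over a suspected flaw (1D array)
            # I_ref : evolution over a neighboring sound (defect-free) region
            # t0    : index of the reference frame used for normalization
            I_def = np.asarray(I_def, float)
            I_ref = np.asarray(I_ref, float)
            return I_def / I_def[t0] - I_ref / I_ref[t0]

    The time and amplitude of the peak of C(t) are then the features typically inverted for flaw depth and width.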

  14. About an adaptively weighted Kaplan-Meier estimate.

    PubMed

    Plante, Jean-François

    2009-09-01

    The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to draw inference about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the performance of the weighted Kaplan-Meier estimate on finite samples exceeds that of the usual Kaplan-Meier estimate. A case study is also presented.
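
    For concreteness, a plain Kaplan-Meier estimator and a convex combination of per-sample curves can be sketched as below; the data-driven MAMSE weights of the paper are replaced by fixed illustrative weights, so this is only a sketch of the construction, not the adaptive estimator itself.

        import numpy as np

        def kaplan_meier(time, event, t):
            # Kaplan-Meier survival estimate S(t); event = 1 for an observed
            # failure, 0 for right censoring.
            time, event = np.asarray(time, float), np.asarray(event, int)
            s = 1.0
            for u in np.sort(np.unique(time[event == 1])):
                if u <= t:
                    d = np.sum((time == u) & (event == 1))   # deaths at u
                    r = np.sum(time >= u)                    # at risk at u
                    s *= 1.0 - d / r
            return s

        def weighted_km(samples, weights, t):
            # Convex combination of per-sample Kaplan-Meier curves.
            return sum(w * kaplan_meier(tm, ev, t)
                       for w, (tm, ev) in zip(weights, samples))

        t1, e1 = np.array([2., 3., 5., 7.]), np.array([1, 0, 1, 1])
        t2, e2 = np.array([1., 4., 4., 8.]), np.array([1, 1, 0, 1])
        print(weighted_km([(t1, e1), (t2, e2)], [0.7, 0.3], t=4.0))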

  15. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
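
    A parametric cousin of this idea is the Gamma-Poisson empirical Bayes estimator, sketched below with method-of-moments prior fitting; the paper's smooth estimator is nonparametric, so this is only an illustration of the shrinkage principle it studies.

        import numpy as np

        def eb_poisson(counts):
            # x_i ~ Poisson(lam_i), lam_i ~ Gamma(a, b) (rate b), so the
            # marginal mean is m = a/b and marginal variance v = a/b + a/b**2.
            x = np.asarray(counts, float)
            m, v = x.mean(), x.var(ddof=1)
            if v <= m:             # no evidence of over-dispersion: no shrinkage
                return x
            b = m / (v - m)
            a = m * b
            # posterior mean of lam_i given x_i shrinks each count toward m
            return (x + a) / (1.0 + b)

        print(eb_poisson([0, 2, 5, 1, 3, 8, 2]))

    The shrinkage toward the grand mean is what reduces the mean-squared error relative to the maximum-likelihood estimate, mirroring the simulation result reported above.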

  16. Simple Form of MMSE Estimator for Super-Gaussian Prior Densities

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-04-01

    Denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE). For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator using a Taylor series and show that the proposed estimator leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. Experimental results show that the proposed approximation to the original MMSE nonlinearity is reasonably good.
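
    The MMSE estimator being approximated is E[x|y] for y = x + n with Gaussian n and a super-Gaussian prior on x. Before any Taylor approximation, it can be evaluated by brute-force quadrature, as in this sketch with a Laplacian prior (one example of a super-Gaussian density; all parameters are illustrative):

        import numpy as np

        def mmse_laplace(y, sigma_n=1.0, b=1.0):
            # Numerical E[x|y] for y = x + n, n ~ N(0, sigma_n^2),
            # x ~ Laplace(0, b). On a uniform grid the quadrature weights
            # cancel in the ratio, so plain sums suffice.
            grid = np.linspace(-30.0, 30.0, 4001)
            prior = np.exp(-np.abs(grid) / b)                  # unnormalized
            like = np.exp(-0.5 * ((y - grid) / sigma_n) ** 2)  # Gaussian likelihood
            post = prior * like
            return float((grid * post).sum() / post.sum())

        for y in (0.5, 1.0, 3.0):
            print(y, round(mmse_laplace(y), 4))

    Plotting this nonlinearity against y shows the soft-shrinkage shape that the paper's simple closed-form expression is meant to reproduce.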

  17. Changes in risk of immediate adverse reactions to iodinated contrast media by repeated administrations in patients with hepatocellular carcinoma.

    PubMed

    Fujiwara, Naoto; Tateishi, Ryosuke; Akahane, Masaaki; Taguri, Masataka; Minami, Tatsuya; Mikami, Shintaro; Sato, Masaya; Uchino, Koji; Uchino, Kouji; Enooku, Kenichiro; Kondo, Yuji; Asaoka, Yoshinari; Yamashiki, Noriyo; Goto, Tadashi; Shiina, Shuichiro; Yoshida, Haruhiko; Ohtomo, Kuni; Koike, Kazuhiko

    2013-01-01

    To elucidate whether repeated exposures to iodinated contrast media increase the risk of adverse reaction, we retrospectively reviewed 1,861 patients with hepatocellular carcinoma who visited the authors' institution, a tertiary referral center, between 2004 and 2008. We analyzed the cumulative probability of adverse reactions and risk factors. We categorized all symptoms into hypersensitivity reactions, physiologic reactions, and other reactions, according to the American College of Radiology guidelines, and evaluated each category as an event. We estimated the association between the hazard for adverse reactions and the number of cumulative exposures to contrast media. We also evaluated subsequent contrast media injections and adverse reactions. There were 23,684 contrast media injections in 1,729 patients. One hundred and thirty-two patients were excluded because they were given no contrast media during the study period. Adverse reactions occurred in 196 (0.83%) patients. The cumulative incidence at the 10th, 20th, and 30th examinations was 7.9%, 15.2%, and 24.1%, respectively. Renal impairment was found to be one of the risk factors for adverse reactions. The estimated hazard of overall adverse reaction gradually decreased until around the 10th exposure and rose with subsequent exposures. The estimated hazard of hypersensitivity showed a V-shaped change with the cumulative number of exposures. The estimated hazard of physiologic reactions tended to decrease, while that of other reactions tended to increase. A second adverse reaction was more severe than the first in only one of 130 patients receiving subsequent injections. Repeated exposures to iodinated contrast media increase the risk of adverse reaction.

  18. New requirements for digital radiographic testing of welds according to ISO standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zscherpel, U.; Ewert, U.; Jechow, M.

    Users of EN 14784-2 (general principles for computed radiography with phosphor imaging plates) reported difficulties in achieving testing class B in weld testing with imaging plates. One of the reasons is the insufficient consideration of the inherent detector unsharpness (u_i) in the minimum requirements. Digital detectors have a higher inherent unsharpness than film, which can even exceed the geometrical unsharpness (u_g) of the typical contact technique. In EN 444 and ISO 5579 (general principles for film radiography), u_i is neglected in the calculation of the minimum source-to-object distance (SOD) because it is small compared to the geometric unsharpness (u_g). Considering u_i for digital detectors results in a new equation for the SOD (see ISO/FDIS 17636-2): the increase in total image unsharpness requires compensation by a larger SOD to reduce u_g. This contribution discusses the need to change the SOD for different setups (detectors, focal spots, etc.) and explains the difference in image quality achieved on the basis of the extended equation of ISO/FDIS 17636-2. Furthermore, the detection of image quality indicators depends on the achieved contrast-to-noise ratio (CNR) and the total image unsharpness; both are essential parameters that influence the contrast sensitivity. Additionally, new compensation principles (e.g., compensation of missing spatial resolution by enhanced contrast sensitivity) allow the application range of digital detectors for radiographic weld testing to be widened.

  19. An opening criterion for dust gaps in protoplanetary discs

    NASA Astrophysics Data System (ADS)

    Dipierro, Giovanni; Laibe, Guillaume

    2017-08-01

    We aim to understand under which conditions a low-mass planet can open a gap in viscous dusty protoplanetary discs. For this purpose, we extend the theory of dust radial drift to include the contribution from the tides of an embedded planet and from the gas viscous forces. From this formalism, we derive (I) a grain-size-dependent criterion for dust gap opening in discs, (II) an estimate of the location of the outer edge of the dust gap and (III) an estimate of the minimum Stokes number above which low-mass planets are able to carve gaps that appear only in the dust disc. These analytical estimates are particularly helpful to appraise the minimum mass of a hypothetical planet carving gaps in discs observed at long wavelengths and high resolution. We validate the theory against 3D smoothed particle hydrodynamics simulations of planet-disc interaction in a broad range of dusty protoplanetary discs. We find a remarkable agreement between the theoretical model and the numerical experiments.

  20. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
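
    The estimator at issue is simple: weight each within-stratum sample mean by its known population share. The sketch below (all names illustrative) also makes the small-sample hazard visible, since a stratum holding only one or two observations contributes a very noisy, or even undefined, mean.

        import numpy as np

        def post_stratified_mean(y, strata, weights):
            # y       : sample observations
            # strata  : stratum label of each observation
            # weights : known population proportions W_h per stratum
            y = np.asarray(y, float)
            strata = np.asarray(strata)
            return sum(w * y[strata == h].mean() for h, w in weights.items())

        y = [3.1, 2.8, 4.0, 5.2, 4.9]
        strata = ["a", "a", "a", "b", "b"]
        print(post_stratified_mean(y, strata, {"a": 0.7, "b": 0.3}))

    With only two observations in stratum "b", the stratum mean carries most of the estimator's variance, which is the minimum within-strata sample-size question the paper addresses.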

  1. 12 CFR Appendix A to Subpart A of... - Appendix A to Subpart A of Part 327

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... pricing multipliers are derived from: • A model (the Statistical Model) that estimates the probability..., which is four basis points higher than the minimum rate. II. The Statistical Model The Statistical Model... to 1997. As a result, and as described in Table A.1, the Statistical Model is estimated using a...

  2. PROPERTY APPRAISAL PROVIDES CONTROL, INSURANCE BASIS, AND VALUE ESTIMATE.

    ERIC Educational Resources Information Center

    THOMSON, JACK

    A COMPLETE PROPERTY APPRAISAL SERVES AS A BASIS FOR CONTROL, INSURANCE AND VALUE ESTIMATE. A PROFESSIONAL APPRAISAL FIRM SHOULD PERFORM THIS FUNCTION BECAUSE (1) IT IS FAMILIAR WITH PROPER METHODS, (2) IT CAN PREPARE THE REPORT WITH MINIMUM CONFUSION AND INTERRUPTION OF THE COLLEGE OPERATION, (3) USE OF ITS PRICING LIBRARY REDUCES TIME NEEDED AND…

  3. 3D depth-to-basement and density contrast estimates using gravity and borehole data

    NASA Astrophysics Data System (ADS)

    Barbosa, V. C.; Martins, C. M.; Silva, J. B.

    2009-05-01

    We present a gravity inversion method for simultaneously estimating the 3D basement relief of a sedimentary basin and the parameters defining the parabolic decay of the density contrast with depth in a sedimentary pack, assuming prior knowledge of the basement depth at a few points. The sedimentary pack is approximated by a grid of 3D vertical prisms juxtaposed in both horizontal directions, x and y, of a right-handed coordinate system. The prisms' thicknesses represent the depths to the basement and are the parameters to be estimated from the gravity data. To produce stable depth-to-basement estimates we impose smoothness on the basement depths through minimization of the spatial derivatives of the parameters in the x and y directions. To estimate the parameters defining the parabolic decay of the density contrast with depth we map a functional containing the prior information about the basement depths at a few points. We apply our method to synthetic data from a simulated complex 3D basement relief with two sedimentary sections having distinct parabolic laws describing the density contrast variation with depth. Our method retrieves the true parameters of the parabolic law of density contrast decay with depth and produces good estimates of the basement relief if the number and the distribution of boreholes are sufficient. We also applied our method to real gravity data from the onshore and part of the shallow offshore Almada Basin, on Brazil's northeastern coast. The estimated 3D Almada basement shows geologic structures that cannot be easily inferred just from inspection of the gravity anomaly. The estimated Almada relief presents steep borders evidencing the presence of gravity faults, and we note the existence of three terraces separating two local subbasins. These geologic features are consistent with Almada's geodynamic origin (the Mesozoic breakup of Gondwana and the opening of the South Atlantic Ocean) and are important in understanding the basin evolution and in detecting structural oil traps.
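
    A heavily simplified forward-model sketch: the anomaly of one sediment column under a parabolic density-contrast decay, using the Bouguer-slab approximation instead of the full 3D prism formula. Both the particular decay law written below and all parameter values are assumptions for illustration, not necessarily the law used by the authors.

        import numpy as np

        G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

        def parabolic_contrast(z, drho0, alpha):
            # One published form of a parabolic decay law (an assumption here):
            # drho(z) = drho0**3 / (drho0 - alpha*z)**2, magnitude decaying with z.
            return drho0**3 / (drho0 - alpha * z) ** 2

        def column_gravity(thickness, drho0, alpha, nz=200):
            # Bouguer-slab approximation g = 2*pi*G * integral(drho dz): a crude
            # stand-in for the 3D prism formula used in the actual inversion,
            # acceptable only when columns are wide relative to their depth.
            dz = thickness / nz
            z = (np.arange(nz) + 0.5) * dz          # midpoint rule
            return 2.0 * np.pi * G * parabolic_contrast(z, drho0, alpha).sum() * dz

        # 2 km of sediments, -400 kg/m^3 surface contrast, decay 0.2 (kg/m^3)/m:
        print(column_gravity(2000.0, -400.0, 0.2))  # ~ -1.7e-4 m/s^2 (~ -17 mGal)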

  4. On the Trend of the Annual Mean, Maximum, and Minimum Temperature and the Diurnal Temperature Range in the Armagh Observatory, Northern Ireland, Dataset, 1844-2012

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2013-01-01

    Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadal- and SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.

  5. Influence of Spatial and Chromatic Noise on Luminance Discrimination.

    PubMed

    Miquilini, Leticia; Walker, Natalie A; Odigie, Erika A; Guimarães, Diego Leite; Salomão, Railson Cruz; Lacerda, Eliza Maria Costa Brito; Cortes, Maria Izabel Tentes; de Lima Silveira, Luiz Carlos; Fitzgerald, Malinda E C; Ventura, Dora Fix; Souza, Givago Silva

    2017-12-05

    Pseudoisochromatic figures are designed to base discrimination of a chromatic target from a background solely on chromatic differences. This is accomplished by the introduction of luminance and spatial noise, thereby eliminating these two dimensions as cues. The inverse rationale can also be applied to luminance discrimination, if spatial and chromatic noise are used to mask those cues. In the current study, luminance contrast thresholds were estimated using a novel stimulus based on the use of chromatic and spatial noise to mask these cues in a luminance discrimination task. This was accomplished by presenting stimuli composed of a mosaic of randomly colored circles, with a Landolt-C target that differed from the background only in luminance. The luminance contrast thresholds were estimated for different chromatic noise saturation conditions and compared to luminance contrast thresholds estimated using the same target in a non-mosaic stimulus. Moreover, the influence of the chromatic content of the noise on the luminance contrast threshold was also investigated. The luminance contrast threshold depended on the strength of the chromatic noise: it was 10-fold higher than thresholds estimated from the non-mosaic stimulus but independent of the colour-space location in which the noise was modulated. The present study introduces a new method to investigate luminance vision, intended for both basic science and clinical applications.

  6. The effect of atmospheric drag on the design of solar-cell power systems for low Earth orbit

    NASA Technical Reports Server (NTRS)

    Kyser, A. C.

    1983-01-01

    The feasibility of reducing the atmospheric drag of low-orbit solar-powered satellites by operating the solar-cell array in a minimum-drag attitude, rather than in the conventional Sun-pointing attitude, was assessed. The weights of the solar array, the energy storage batteries, and the fuel required to overcome the drag of the solar array were considered for a range of design lifetimes in orbit. The drag of the array was estimated by free-molecule flow theory, and the system weights were calculated from unit weight estimates for 1990 technology. The trailing, minimum-drag system was found to require 80% more solar-array area and 30% more battery capacity; for reasonable lifetimes, the system weights were dominated by the thruster fuel requirements.

  7. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness on one or more machines and minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered; an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
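
    The flavor of the approach can be shown on 1|r_j|L_max, which is NP-hard: drop the release dates to get an instance solved optimally by the EDD (earliest due date) order, then apply that order to the original instance. Since removing release dates shifts any schedule's maximum lateness by at most max r_j, the absolute error is bounded by that distance. A sketch under a simpler metric than the paper's (instance data illustrative):

        import numpy as np

        def lmax(order, p, d, r):
            # Maximum lateness of a job order on one machine with release dates.
            t, worst = 0.0, float("-inf")
            for j in order:
                t = max(t, r[j]) + p[j]
                worst = max(worst, t - d[j])
            return worst

        # Approximating instance: release dates dropped (distance = max r_j
        # in this crude metric); EDD is optimal for it. Its schedule, applied
        # to the original instance, has absolute error at most max r_j.
        p = np.array([3.0, 2.0, 4.0, 1.0])   # processing times
        d = np.array([5.0, 9.0, 7.0, 4.0])   # due dates
        r = np.array([0.5, 0.0, 1.0, 0.3])   # release dates
        edd = np.argsort(d)                   # optimal order for the r = 0 instance
        print("Lmax:", lmax(edd, p, d, r), " error bound:", r.max())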

  8. Estimating Potential Reductions in Premature Mortality in New York City From Raising the Minimum Wage to $15

    PubMed Central

    Konty, Kevin J.; Van Wye, Gretchen; Barbot, Oxiris; Hadler, James L.; Linos, Natalia; Bassett, Mary T.

    2016-01-01

    Objectives. To assess potential reductions in premature mortality that could have been achieved in 2008 to 2012 if the minimum wage had been $15 per hour in New York City. Methods. Using the 2008 to 2012 American Community Survey, we performed simulations to assess how the proportion of low-income residents in each neighborhood might change with a hypothetical $15 minimum wage under alternative assumptions of labor market dynamics. We developed an ecological model of premature death to determine the differences between the levels of premature mortality as predicted by the actual proportions of low-income residents in 2008 to 2012 and the levels predicted by the proportions of low-income residents under a hypothetical $15 minimum wage. Results. A $15 minimum wage could have averted 2800 to 5500 premature deaths between 2008 and 2012 in New York City, representing 4% to 8% of total premature deaths in that period. Most of these avertable deaths would be realized in lower-income communities, in which residents are predominantly people of color. Conclusions. A higher minimum wage may have substantial positive effects on health and should be considered as an instrument to address health disparities. PMID:27077350

  9. Estimating Potential Reductions in Premature Mortality in New York City From Raising the Minimum Wage to $15.

    PubMed

    Tsao, Tsu-Yu; Konty, Kevin J; Van Wye, Gretchen; Barbot, Oxiris; Hadler, James L; Linos, Natalia; Bassett, Mary T

    2016-06-01

    To assess potential reductions in premature mortality that could have been achieved in 2008 to 2012 if the minimum wage had been $15 per hour in New York City. Using the 2008 to 2012 American Community Survey, we performed simulations to assess how the proportion of low-income residents in each neighborhood might change with a hypothetical $15 minimum wage under alternative assumptions of labor market dynamics. We developed an ecological model of premature death to determine the differences between the levels of premature mortality as predicted by the actual proportions of low-income residents in 2008 to 2012 and the levels predicted by the proportions of low-income residents under a hypothetical $15 minimum wage. A $15 minimum wage could have averted 2800 to 5500 premature deaths between 2008 and 2012 in New York City, representing 4% to 8% of total premature deaths in that period. Most of these avertable deaths would be realized in lower-income communities, in which residents are predominantly people of color. A higher minimum wage may have substantial positive effects on health and should be considered as an instrument to address health disparities.

  10. Characterizing and contrasting instream and riparian coarse wood in western Montana basins

    Treesearch

    Michael K. Young; Ethan A. Mace; Eric T. Ziegler; Elaine K. Sutherland

    2006-01-01

    The importance of coarse wood to aquatic biota and stream channel structure is widely recognized, yet characterizations of large-scale patterns in coarse wood dimensions and loads are rare. To address these issues, we censused instream coarse wood (≥2 m long and ≥10 cm minimum diameter) and sampled riparian coarse wood and channel characteristics in and along 13 streams...

  11. 49 CFR 38.25 - Doors, steps and thresholds.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... lift platforms shall have a band of color(s) running the full width of the step or edge which contrasts from the step tread and riser, or lift or ramp surface, either light-on-dark or dark-on-light. (c) Door... opening and the raised lift platform, or highest point of a ramp, shall be a minimum of 68 inches. For...

  12. 49 CFR 38.25 - Doors, steps and thresholds.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... lift platforms shall have a band of color(s) running the full width of the step or edge which contrasts from the step tread and riser, or lift or ramp surface, either light-on-dark or dark-on-light. (c) Door... opening and the raised lift platform, or highest point of a ramp, shall be a minimum of 68 inches. For...

  13. Point focusing using loudspeaker arrays from the perspective of optimal beamforming.

    PubMed

    Bai, Mingsian R; Hsieh, Yu-Hao

    2015-06-01

    Sound focusing seeks to create a concentrated acoustic field in a region surrounded by a loudspeaker array. This problem has been tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess audio quality using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive, owing to the distortionless constraint in the formulation of the array filters, which helps enhance audio quality and focusing performance.
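
    The MVDR weights referenced above have the classical closed form w = R^(-1) d / (d^H R^(-1) d), minimizing output power subject to unit gain at the focal point. A numpy sketch with a toy steering vector; the geometry, correlation model, and diagonal loading are assumptions, not the paper's setup.

        import numpy as np

        def mvdr_weights(R, d, load=1e-3):
            # MVDR: minimize w^H R w subject to w^H d = 1, giving
            # w = R^{-1} d / (d^H R^{-1} d). Diagonal loading stabilizes R.
            n = len(R)
            R = R + load * np.trace(R).real / n * np.eye(n)
            Rinv_d = np.linalg.solve(R, d)
            return Rinv_d / (d.conj() @ Rinv_d)

        # Toy example: 8 sources on a line, steering vector of a focal point
        # (spherical-wave phases and 1/r decay at wavenumber k, ~1 kHz in air).
        k = 2 * np.pi * 1000.0 / 343.0
        src = np.linspace(0.0, 1.0, 8)          # hypothetical source positions, m
        dist = np.abs(src - 0.4) + 0.5          # hypothetical distances to focus
        d = np.exp(-1j * k * dist) / dist
        R = np.eye(8)                           # simplistic correlation model
        w = mvdr_weights(R, d)
        print(np.abs(w.conj() @ d))             # distortionless constraint: = 1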

  14. Visualizing dispersive features in 2D image via minimum gradient method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yu; Wang, Yan; Shen, Zhi -Xun

    Here, we developed a minimum gradient based method to track ridge features in a 2D image plot, which is a typical data representation in many momentum resolved spectroscopy experiments. Through both analytic formulation and numerical simulation, we compare this new method with existing DC (distribution curve) based and higher order derivative based analyses. We find that the new method has good noise resilience and enhanced contrast, especially for weak intensity features, and meanwhile preserves the quantitative local maxima information from the raw image. An algorithm is proposed to extract the 1D ridge dispersion from the 2D image plot, whose quantitative application to angle-resolved photoemission spectroscopy measurements on high temperature superconductors is demonstrated.

  15. Visualizing dispersive features in 2D image via minimum gradient method

    DOE PAGES

    He, Yu; Wang, Yan; Shen, Zhi -Xun

    2017-07-24

    Here, we developed a minimum gradient based method to track ridge features in a 2D image plot, which is a typical data representation in many momentum resolved spectroscopy experiments. Through both analytic formulation and numerical simulation, we compare this new method with existing DC (distribution curve) based and higher order derivative based analyses. We find that the new method has good noise resilience and enhanced contrast, especially for weak intensity features, and meanwhile preserves the quantitative local maxima information from the raw image. An algorithm is proposed to extract the 1D ridge dispersion from the 2D image plot, whose quantitative application to angle-resolved photoemission spectroscopy measurements on high temperature superconductors is demonstrated.
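
    A simplified reading of the method in Python: compute the gradient magnitude of the image and, in each column (momentum slice), take the row near the intensity peak where the gradient magnitude is minimal. The band half-width and the peak-anchoring below are assumptions, not the authors' exact algorithm.

        import numpy as np

        def minimum_gradient_ridge(img, band=5):
            # img : 2D intensity plot (rows = energy, cols = momentum).
            # Returns, per column, the row index of the ridge estimate.
            gy, gx = np.gradient(img.astype(float))
            gmag = np.hypot(gx, gy)          # gradient magnitude map
            ridge = np.empty(img.shape[1], dtype=int)
            for c in range(img.shape[1]):
                # restrict the search to a band around the column's peak,
                # where the ridge (a local maximum) has a gradient minimum
                peak = int(np.argmax(img[:, c]))
                lo, hi = max(peak - band, 0), min(peak + band + 1, img.shape[0])
                ridge[c] = lo + int(np.argmin(gmag[lo:hi, c]))
            return ridge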

  16. In vitro and ex vivo evaluation of silica-coated super paramagnetic iron oxide nanoparticles (SPION) as biomedical photoacoustic contrast agent

    NASA Astrophysics Data System (ADS)

    Alwi, Rudolf; Telenkov, Sergey A.; Mandelis, Andreas; Leshuk, Timothy; Gu, Frank; Oladepo, Sulayman; Michaelian, Kirk; Dickie, Kristopher

    2013-03-01

    Contrast agents for photoacoustic imaging have gained significant attention within the past few years for their biomedical applications. In this study, the use of silica-coated superparamagnetic iron oxide (Fe3O4) nanoparticles (SPION) was investigated as a contrast agent in biomedical photoacoustic imaging. SPIONs have been widely used as Food and Drug Administration (FDA)-approved contrast agents for magnetic resonance imaging (MRI) and are known to have an excellent safety profile. Using our frequency-domain photoacoustic correlation technique ("the photoacoustic radar") with modulated laser excitation, we examined the effects of nanoparticle size, concentration, and biological medium (e.g., serum, sheep blood) on the photoacoustic response in turbid media (Intralipid solution). The maximum detection depth and minimum measurable SPION concentration were determined experimentally. The detection was performed using a single-element transducer. The nanoparticle-induced optical contrast ex vivo in dense muscular tissue (avian pectus) was evaluated using a phased-array photoacoustic probe, and the strong potential of silica-coated SPION as a photoacoustic contrast agent was demonstrated. This study opens the way for future clinical applications of nanoparticle-enhanced photoacoustic imaging in cancer therapy.

  17. Optimization of the K-edge imaging for vulnerable plaques using gold nanoparticles and energy-resolved photon counting detectors: a simulation study

    PubMed Central

    Alivov, Yahya; Baturin, Pavlo; Le, Huy Q.; Ducote, Justin; Molloi, Sabee

    2014-01-01

    We investigated the effect of different imaging parameters such as dose, beam energy, energy resolution, and number of energy bins on the image quality of K-edge spectral computed tomography (CT) of gold nanoparticles (GNP) accumulated in an atherosclerotic plaque. A maximum likelihood technique was employed to estimate the concentration of GNP, which served as a targeted intravenous contrast material intended to detect the degree of the plaque's inflammation. The simulation studies used a single-slice parallel-beam CT geometry with an X-ray beam energy ranging between 50 and 140 kVp. The synthetic phantoms included a small cylindrical phantom (3 cm in diameter) and a chest phantom (33 × 24 cm²), where both phantoms contained tissue, calcium, and gold. In the simulation studies, GNP quantification and background (calcium and tissue) suppression tasks were pursued. The X-ray detection sensor was represented by an energy-resolved photon counting detector (e.g., CdZnTe) with adjustable energy bins. Both an ideal and a more realistic (12% FWHM energy resolution) implementation of the photon counting detector were simulated. The simulations were performed for the CdZnTe detector with a pixel pitch of 0.5-1 mm, which corresponds to performance without significant charge sharing and cross-talk effects. The Rose model was employed to estimate the minimum detectable concentration of GNPs. A figure of merit (FOM) was used to optimize the X-ray beam energy (kVp) to achieve the highest signal-to-noise ratio (SNR) with respect to patient dose. As a result, successful identification of gold and background suppression was demonstrated. The highest FOM was observed at 125 kVp X-ray beam energy. The minimum detectable GNP concentration was determined to be approximately 1.06 μmol/mL (0.21 mg/mL) for an ideal detector and about 2.5 μmol/mL (0.49 mg/mL) for the more realistic (12% FWHM) detector. These studies show the optimal imaging parameters at the lowest patient dose using an energy-resolved photon counting detector to image GNP in an atherosclerotic plaque. PMID:24334301
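
    The Rose-model step reduces to simple arithmetic: a feature is taken as detectable when its signal-to-noise ratio exceeds a threshold k (commonly ~5), so with the image signal roughly linear in concentration, the minimum detectable concentration is k times the noise over the sensitivity. A sketch with hypothetical numbers:

        def rose_min_concentration(sensitivity, noise_sd, k=5.0):
            # Rose criterion: detectable when SNR = |signal| / noise >= k.
            # With signal ~ sensitivity * concentration (e.g. HU per mg/mL),
            # the minimum detectable concentration is k * noise / sensitivity.
            return k * noise_sd / sensitivity

        # hypothetical values: 25 HU per mg/mL of gold, 2.5 HU image noise
        print(rose_min_concentration(25.0, 2.5))   # -> 0.5 mg/mL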

  18. Oxygen Sensitivity of Anammox and Coupled N-Cycle Processes in Oxygen Minimum Zones

    PubMed Central

    Kalvelage, Tim; Jensen, Marlene M.; Contreras, Sergio; Revsbech, Niels Peter; Lam, Phyllis; Günter, Marcel; LaRoche, Julie; Lavik, Gaute; Kuypers, Marcel M. M.

    2011-01-01

    Nutrient measurements indicate that 30–50% of the total nitrogen (N) loss in the ocean occurs in oxygen minimum zones (OMZs). This pelagic N-removal takes place within only ∼0.1% of the ocean volume, hence moderate variations in the extent of OMZs due to global warming may have a large impact on the global N-cycle. We examined the effect of oxygen (O2) on anammox, NH3 oxidation and NO3− reduction in 15N-labeling experiments with varying O2 concentrations (0–25 µmol L−1) in the Namibian and Peruvian OMZs. Our results show that O2 is a major controlling factor for anammox activity in OMZ waters. Based on our O2 assays we estimate the upper limit for anammox to be ∼20 µmol L−1. In contrast, NH3 oxidation to NO2− and NO3− reduction to NO2− as the main NH4+ and NO2− sources for anammox were only moderately affected by changing O2 concentrations. Intriguingly, aerobic NH3 oxidation was active at non-detectable concentrations of O2, while anaerobic NO3− reduction was fully active up to at least 25 µmol L−1 O2. Hence, aerobic and anaerobic N-cycle pathways in OMZs can co-occur over a larger range of O2 concentrations than previously assumed. The zone where N-loss can occur is primarily controlled by the O2-sensitivity of anammox itself, and not by any effects of O2 on the tightly coupled pathways of aerobic NH3 oxidation and NO3− reduction. With anammox bacteria in the marine environment being active at O2 levels ∼20 times higher than those known to inhibit their cultured counterparts, the oceanic volume potentially acting as a N-sink increases tenfold. The predicted expansion of OMZs may enlarge this volume even further. Our study provides the first robust estimates of O2 sensitivities for processes directly and indirectly connected with N-loss. These are essential to assess the effects of ocean de-oxygenation on oceanic N-cycling. PMID:22216239

  19. Vehicle longitudinal velocity estimation during the braking process using unknown input Kalman filter

    NASA Astrophysics Data System (ADS)

    Moaveni, Bijan; Khosravi Roqaye Abad, Mahdi; Nasiri, Sayyad

    2015-10-01

    In this paper, vehicle longitudinal velocity during the braking process is estimated by measuring the wheel speeds. A new algorithm based on the unknown input Kalman filter is developed to estimate the vehicle longitudinal velocity with minimum mean square error and without using the value of the braking torque in the estimation procedure. The stability and convergence of the filter are analysed and proved. Effectiveness of the method is shown by designing a real experiment and comparing the estimation result with the actual longitudinal velocity computed from a three-axis accelerometer output.
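
    A stripped-down stand-in for such a filter: a constant-deceleration Kalman filter driven by wheel-speed measurements. Unlike the paper's unknown-input formulation, the unknown braking torque is absorbed here into process noise and wheel slip is ignored; all tuning values are assumptions.

        import numpy as np

        def velocity_kf(wheel_speed, dt=0.01, q=5.0, r=0.05):
            # wheel_speed : measured wheel linear speeds, m/s (1D array)
            # State [v, v_dot]; measurement is the wheel speed itself.
            F = np.array([[1.0, dt], [0.0, 1.0]])
            H = np.array([[1.0, 0.0]])
            Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                              [dt**3 / 2, dt**2]])      # white-accel model
            R = np.array([[r]])
            x = np.array([wheel_speed[0], 0.0])
            P = np.eye(2)
            out = []
            for z in wheel_speed:
                x = F @ x                                # predict
                P = F @ P @ F.T + Q
                S = H @ P @ H.T + R                      # update
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ (np.array([z]) - H @ x)
                P = (np.eye(2) - K @ H) @ P
                out.append(x[0])
            return np.array(out)

    During hard braking with wheel lock, the wheel speed underestimates the vehicle speed, which is precisely why the paper treats the braking torque as an unknown input rather than relying on this simplification.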

  20. Novel Non-invasive Estimation of Coronary Blood Flow using Contrast Advection in Computed Tomography Angiography

    NASA Astrophysics Data System (ADS)

    Eslami, Parastou; Seo, Jung-Hee; Rahsepar, Amirali; George, Richard; Lardo, Albert; Mittal, Rajat

    2014-11-01

    Coronary computed tomography angiography (CTA) is a promising tool for assessment of coronary stenosis and plaque burden. Recent studies have shown the presence of axial contrast concentration gradients in obstructed arteries, but the mechanism responsible for this phenomenon is not well understood. We use computational fluid dynamics to study intracoronary contrast dispersion and the correlation of concentration gradients with intracoronary blood flow and stenotic severity. Data from our patient-specific CFD simulations reveal that contrast dispersion is generated by intracoronary advection effects and therefore encodes the coronary flow velocity. This novel method, Transluminal Attenuation Flow Encoding (TAFE), is used to estimate the flow rate in phantom studies as well as in preclinical experiments. Our results indicate a strong correlation between the values estimated from TAFE and the values measured in these experiments. The flow physics of contrast dispersion associated with TAFE will be discussed. This work is funded by grants from the Coulter Foundation and the Maryland Innovation Initiative. The authors have pending patents on this technology, and RM and ACL have other financial interests associated with TAFE.

  1. Microdisk Injection Lasers for the 1.27-μm Spectral Range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kryzhanovskaya, N. V.; Maximov, M. V.; Blokhin, S. A.

    2016-03-15

    Microdisk injection lasers on GaAs substrates, with a minimum diameter of 15 μm and an active region based on InAs/InGaAs quantum dots, are fabricated. The lasers operate in the continuous-wave mode at room temperature without external cooling. The lasing wavelength is around 1.27 μm at a minimum threshold current of 1.6 mA. The specific thermal resistance is estimated to be 5 × 10^-3 °C cm^2/W.

  2. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    USGS Publications Warehouse

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from a user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculation of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density by up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.

  3. Extreme Brightness Temperatures and Refractive Substructure in 3C273 with RadioAstron

    NASA Astrophysics Data System (ADS)

    Johnson, Michael D.; Kovalev, Yuri Y.; Gwinn, Carl R.; Gurvits, Leonid I.; Narayan, Ramesh; Macquart, Jean-Pierre; Jauncey, David L.; Voitsik, Peter A.; Anderson, James M.; Sokolovsky, Kirill V.; Lisakov, Mikhail M.

    2016-03-01

    Earth-space interferometry with RadioAstron provides the highest direct angular resolution ever achieved in astronomy at any wavelength. RadioAstron detections of the classic quasar 3C 273 on interferometric baselines up to 171,000 km suggest brightness temperatures exceeding expected limits from the “inverse-Compton catastrophe” by two orders of magnitude. We show that at 18 cm, these estimates most likely arise from refractive substructure introduced by scattering in the interstellar medium. We use the scattering properties to estimate an intrinsic brightness temperature of 7 × 10^12 K, which is consistent with expected theoretical limits, but which is ~15 times lower than estimates that neglect substructure. At 6.2 cm, the substructure influences the measured values appreciably but gives an estimated brightness temperature that is comparable to models that do not account for the substructure. At 1.35 cm, the substructure does not affect the extremely high inferred brightness temperatures, in excess of 10^13 K. We also demonstrate that for a source having a Gaussian surface brightness profile, a single long-baseline estimate of refractive substructure determines an absolute minimum brightness temperature, if the scattering properties along a given line of sight are known, and that this minimum accurately approximates the apparent brightness temperature over a wide range of total flux densities.

  4. Estimated Effects of Different Alcohol Taxation and Price Policies on Health Inequalities: A Mathematical Modelling Study

    PubMed Central

    Meier, Petra S.; Holmes, John; Angus, Colin; Ally, Abdallah K.; Meng, Yang; Brennan, Alan

    2016-01-01

    Introduction While evidence that alcohol pricing policies reduce alcohol-related health harm is robust, and alcohol taxation increases are a WHO “best buy” intervention, there is a lack of research comparing the scale and distribution across society of health impacts arising from alternative tax and price policy options. The aim of this study is to test whether four common alcohol taxation and pricing strategies differ in their impact on health inequalities. Methods and Findings An econometric epidemiological model was built with England 2014/2015 as the setting. Four pricing strategies implemented on top of the current tax were equalised to give the same 4.3% population-wide reduction in total alcohol-related mortality: current tax increase, a 13.4% all-product duty increase under the current UK system; a value-based tax, a 4.0% ad valorem tax based on product price; a strength-based tax, a volumetric tax of £0.22 per UK alcohol unit (= 8 g of ethanol); and minimum unit pricing, a minimum price threshold of £0.50 per unit, below which alcohol cannot be sold. Model inputs were calculated by combining data from representative household surveys on alcohol purchasing and consumption, administrative and healthcare data on 43 alcohol-attributable diseases, and published price elasticities and relative risk functions. Outcomes were annual per capita consumption, consumer spending, and alcohol-related deaths. Uncertainty was assessed via partial probabilistic sensitivity analysis (PSA) and scenario analysis. The pricing strategies differ as to how effects are distributed across the population, and, from a public health perspective, heavy drinkers in routine/manual occupations are a key group as they are at greatest risk of health harm from their drinking. Strength-based taxation and minimum unit pricing would have greater effects on mortality among drinkers in routine/manual occupations (particularly for heavy drinkers, where the estimated policy effects on mortality rates are as follows: current tax increase, −3.2%; value-based tax, −2.9%; strength-based tax, −6.1%; minimum unit pricing, −7.8%) and lesser impacts among drinkers in professional/managerial occupations (for heavy drinkers: current tax increase, −1.3%; value-based tax, −1.4%; strength-based tax, +0.2%; minimum unit pricing, +0.8%). Results from the PSA give slightly greater mean effects for both the routine/manual (current tax increase, −3.6% [95% uncertainty interval (UI) −6.1%, −0.6%]; value-based tax, −3.3% [UI −5.1%, −1.7%]; strength-based tax, −7.5% [UI −13.7%, −3.9%]; minimum unit pricing, −10.3% [UI −10.3%, −7.0%]) and professional/managerial occupation groups (current tax increase, −1.8% [UI −4.7%, +1.6%]; value-based tax, −1.9% [UI −3.6%, +0.4%]; strength-based tax, −0.8% [UI −6.9%, +4.0%]; minimum unit pricing, −0.7% [UI −5.6%, +3.6%]). Impacts of price changes on moderate drinkers were small regardless of income or socioeconomic group. Analysis of uncertainty shows that the relative effectiveness of the four policies is fairly stable, although uncertainty in the absolute scale of effects exists. 
Volumetric taxation and minimum unit pricing consistently outperform increasing the current tax or adding an ad valorem tax in terms of reducing mortality among the heaviest drinkers and reducing alcohol-related health inequalities (e.g., in the routine/manual occupation group, volumetric taxation reduces deaths more than increasing the current tax in 26 out of 30 probabilistic runs, minimum unit pricing reduces deaths more than volumetric tax in 21 out of 30 runs, and minimum unit pricing reduces deaths more than increasing the current tax in 30 out of 30 runs). Study limitations include reducing model complexity by not considering a largely ineffective ban on below-tax alcohol sales, special duty rates covering only small shares of the market, and the impact of tax fraud or retailer non-compliance with minimum unit prices. Conclusions Our model estimates that, compared to tax increases under the current system or introducing taxation based on product value, alcohol-content-based taxation or minimum unit pricing would lead to larger reductions in health inequalities across income groups. We also estimate that alcohol-content-based taxation and minimum unit pricing would have the largest impact on harmful drinking, with minimal effects on those drinking in moderation. PMID:26905063
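
    The elasticity step at the heart of such models is simple to state. The toy sketch below applies a single own-price elasticity to a baseline consumption level; the published model instead uses a full matrix of own- and cross-price elasticities across beverage types and purchase settings, so the numbers here are purely illustrative.

    ```python
    # Toy sketch of the elasticity step in alcohol-pricing models
    # (illustrative numbers only, not the study's inputs).

    baseline_units = 35.0        # weekly units for a hypothetical heavy drinker
    own_price_elasticity = -0.5  # % consumption change per % price change
    price_change = 0.10          # a 10% price rise, e.g. from a volumetric tax

    new_units = baseline_units * (1.0 + own_price_elasticity * price_change)
    print(f"consumption: {baseline_units} -> {new_units:.1f} units/week")
    ```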

  5. Optimal estimation of the optomechanical coupling strength

    NASA Astrophysics Data System (ADS)

    Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André

    2018-06-01

    We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.
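
    For reference, the Cramér-Rao-type inequality referred to here has the following standard quantum form (notation ours, not necessarily the paper's):

    ```latex
    % Quantum Cramér-Rao bound: for M independent probes and an unbiased
    % estimator \hat{g} of the coupling strength g,
    \[
      \operatorname{Var}(\hat{g}) \;\ge\; \frac{1}{M\,F_Q(g)},
    \]
    % where F_Q(g) is the quantum Fisher information, i.e. the classical
    % Fisher information maximized over all quantum measurements (POVMs).
    ```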

  6. Software for the grouped optimal aggregation technique

    NASA Technical Reports Server (NTRS)

    Brown, P. M.; Shaw, G. W. (Principal Investigator)

    1982-01-01

    The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
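
    The minimum-variance weighting at the core of the technique can be illustrated for the simplest case of combining independent unbiased estimates of a common quantity, where weights inversely proportional to the variances are optimal. The sketch below uses invented numbers and omits the ratio model and grouping steps.

    ```python
    import numpy as np

    # Minimum-variance combination of independent unbiased estimates x_i of
    # the same quantity with known variances v_i: weights w_i proportional
    # to 1/v_i. Numbers are invented; the full technique also applies a
    # ratio model over historical acreage and chooses the stratum grouping.

    x = np.array([120.0, 95.0, 210.0])    # independent unbiased estimates
    v = np.array([16.0, 9.0, 36.0])       # their variances

    w = (1.0 / v) / np.sum(1.0 / v)       # optimal weights, summing to 1
    combined = np.sum(w * x)              # minimum-variance unbiased combination
    var_combined = 1.0 / np.sum(1.0 / v)  # its variance (smaller than min(v))
    print(combined, var_combined)
    ```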

  7. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
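
    The gamma prior is conjugate to the Poisson likelihood, so the Bayes estimator has a closed form: with a Gamma(a, b) prior (shape a, rate b) and n observations summing to S, the posterior is Gamma(a + S, b + n) and the posterior mean is (a + S)/(b + n). The sketch below reproduces the flavor of the Monte Carlo comparison, with illustrative prior settings rather than those of the paper.

    ```python
    import numpy as np

    # Monte Carlo MSE comparison of the conjugate-prior Bayes estimator for
    # a Poisson rate against the MLE (the sample mean, also the MVUE here).
    # Prior shape/rate and the true rate are illustrative.

    rng = np.random.default_rng(1)
    a, b = 2.0, 1.0              # Gamma(shape=a, rate=b) prior on the rate
    lam, n, reps = 1.5, 10, 20000

    mse_bayes = mse_mle = 0.0
    for _ in range(reps):
        x = rng.poisson(lam, n)
        bayes = (a + x.sum()) / (b + n)  # posterior mean under the gamma prior
        mle = x.mean()                   # maximum likelihood estimate
        mse_bayes += (bayes - lam) ** 2
        mse_mle += (mle - lam) ** 2

    print("MSE Bayes:", mse_bayes / reps, " MSE MLE:", mse_mle / reps)
    ```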

  8. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604. On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.

  9. Regulating the medical loss ratio: implications for the individual market.

    PubMed

    Abraham, Jean M; Karaca-Mandic, Pinar

    2011-03-01

    To provide state-level estimates of the size and structure of the US individual market for health insurance and to investigate the potential impact of new medical loss ratio (MLR) regulation in 2011, as indicated by the Patient Protection and Affordable Care Act (PPACA). Using data from the National Association of Insurance Commissioners, we provided state-level estimates of the size and structure of the US individual market from 2002 to 2009. We estimated the number of insurers expected to have MLRs below the legislated minimum and their corresponding enrollment. In the case of noncompliant insurers exiting the market, we estimated the number of enrollees that may be vulnerable to major coverage disruption given poor health status. In 2009, using a PPACA-adjusted MLR definition, we estimated that 29% of insurer-state observations in the individual market would have MLRs below the 80% minimum, corresponding to 32% of total enrollment. Nine states would have at least one-half of their health insurers below the threshold. If insurers below the MLR threshold exit the market, major coverage disruption could occur for those in poor health; we estimated the range to be between 104,624 and 158,736 member-years. The introduction of MLR regulation as part of the PPACA has the potential to significantly affect the functioning of the individual market for health insurance.
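
    The compliance test itself reduces to a ratio check, sketched below with invented figures; the PPACA-adjusted MLR definition used in the study additionally credits quality-improvement spending and excludes certain taxes, which this sketch ignores.

    ```python
    # Toy MLR compliance check (numbers invented; the PPACA-adjusted
    # definition involves further adjustments ignored here).

    insurers = [
        {"name": "A", "claims": 72.0, "premiums": 100.0},
        {"name": "B", "claims": 85.0, "premiums": 100.0},
    ]
    MLR_MINIMUM = 0.80   # individual-market threshold under the PPACA

    for ins in insurers:
        mlr = ins["claims"] / ins["premiums"]
        status = "below minimum" if mlr < MLR_MINIMUM else "compliant"
        print(f'{ins["name"]}: MLR={mlr:.2f} ({status})')
    ```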

  10. Estimating contrast transfer function and associated parameters by constrained non-linear optimization.

    PubMed

    Yang, C; Jiang, W; Chen, D-H; Adiga, U; Ng, E G; Chiu, W

    2009-03-01

    The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.
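
    A minimal version of such a fit, using a common textbook CTF parameterization and bound-constrained non-linear least squares on a synthetic 1-D power spectrum, is sketched below. The published method estimates more parameters (including the envelope function) and works on 2-D spectra, and sign conventions for the phase function vary, so treat this only as an illustration.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Sketch: fit defocus and amplitude contrast of a textbook CTF model to
    # a synthetic noisy 1-D power spectrum, with physical bounds on the
    # parameters (a simplified stand-in for the constrained optimization).

    wavelength = 0.0197   # electron wavelength at 300 kV [Angstrom]
    cs = 2.0e7            # spherical aberration, 2.0 mm in Angstrom

    def ctf(s, defocus, amp_contrast):
        chi = (np.pi * wavelength * defocus * s**2
               - 0.5 * np.pi * cs * wavelength**3 * s**4)
        return (-np.sqrt(1 - amp_contrast**2) * np.sin(chi)
                - amp_contrast * np.cos(chi))

    s = np.linspace(0.005, 0.2, 400)    # spatial frequency [1/Angstrom]
    true = (18000.0, 0.07)              # defocus [A], amplitude contrast
    rng = np.random.default_rng(2)
    power = ctf(s, *true) ** 2 + rng.normal(0, 0.02, s.size)

    # In practice a grid of starting defocus values guards against the
    # local minima of this oscillatory objective.
    fit = least_squares(
        lambda p: ctf(s, *p) ** 2 - power,       # residuals against |CTF|^2
        x0=(17000.0, 0.1),
        bounds=([5000.0, 0.0], [40000.0, 0.3]),  # physical constraints
    )
    print("estimated defocus, amplitude contrast:", fit.x)
    ```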

  11. Respiratory motion estimation in x-ray angiography for improved guidance during coronary interventions

    NASA Astrophysics Data System (ADS)

    Baka, N.; Lelieveldt, B. P. F.; Schultz, C.; Niessen, W.; van Walsum, T.

    2015-05-01

    During percutaneous coronary interventions (PCI), catheters and arteries are visualized by x-ray angiography (XA) sequences, using brief contrast injections to show the coronary arteries. If we could continue visualizing the coronary arteries after the contrast agent has passed (thus in non-contrast XA frames), we could potentially lower contrast use, which is advantageous due to the toxicity of the contrast agent. This paper explores the possibility of such visualization in mono-plane XA acquisitions, with a special focus on respiratory-based coronary artery motion estimation. We use the patient-specific coronary artery centerlines from pre-interventional 3D CTA images to project onto the XA sequence for artery visualization. To achieve this, a framework for registering the 3D centerlines with the mono-plane 2D + time XA sequences is presented. During the registration, the patient-specific cardiac and respiratory motion is learned. We investigate several respiratory motion estimation strategies with respect to accuracy, plausibility and ease of use for motion prediction in XA frames with and without contrast. The investigated strategies include diaphragm-motion-based prediction, and respiratory motion extraction from the guiding catheter tip motion. We furthermore compare translational and rigid respiratory-based heart motion. We validated the accuracy of the 2D/3D registration and the respiratory and cardiac motion estimations on XA sequences of 12 interventions. The diaphragm-based motion model and the catheter-tip-derived motion achieved 1.58 mm and 1.83 mm median 2D accuracy, respectively. On a subset of four interventions we evaluated the artery visualization accuracy for non-contrast cases. Both diaphragm- and catheter-tip-based prediction performed similarly, with about half of the cases providing satisfactory accuracy (median error < 2 mm).

  12. Changes in Risk of Immediate Adverse Reactions to Iodinated Contrast Media by Repeated Administrations in Patients with Hepatocellular Carcinoma

    PubMed Central

    Fujiwara, Naoto; Tateishi, Ryosuke; Akahane, Masaaki; Taguri, Masataka; Minami, Tatsuya; Mikami, Shintaro; Sato, Masaya; Uchino, Kouji; Enooku, Kenichiro; Kondo, Yuji; Asaoka, Yoshinari; Yamashiki, Noriyo; Goto, Tadashi; Shiina, Shuichiro; Yoshida, Haruhiko; Ohtomo, Kuni; Koike, Kazuhiko

    2013-01-01

    Background To elucidate whether repeated exposures to iodinated contrast media increase the risk of adverse reaction. Materials and Methods We retrospectively reviewed 1,861 patients with hepatocellular carcinoma who visited the authors' institution, a tertiary referral center, between 2004 and 2008. We analyzed the cumulative probability of adverse reactions and risk factors. We categorized all symptoms into hypersensitivity reactions, physiologic reactions, and other reactions, according to the American College of Radiology guidelines, and evaluated each category as an event. We estimated the association between the hazard for adverse reactions and the number of cumulative exposures to contrast media. We also evaluated subsequent contrast media injections and adverse reactions. Results There were 23,684 contrast media injections in 1,729 patients. One hundred and thirty-two patients were excluded because they were given no contrast media during the study period. Adverse reactions occurred in 196 (0.83%) patients. The cumulative incidence at the 10th, 20th, and 30th examination was 7.9%, 15.2%, and 24.1%, respectively. Renal impairment was found to be one of the risk factors for adverse reactions. The estimated hazard of overall adverse reaction gradually decreased until around the 10th exposure and rose with subsequent exposures. The estimated hazard of hypersensitivity showed a V-shaped change with the cumulative number of exposures. The estimated hazard of physiologic reaction tended to decrease, while that of other reactions tended to increase. A second adverse reaction was more severe than the initial one in only 1 of 130 patients receiving subsequent injections. Conclusion Repeated exposures to iodinated contrast media increase the risk of adverse reaction. PMID:24098420

  13. Patterning optimization for 55nm design rule DRAM/flash memory using production-ready customized illuminations

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Van Den Broeke, Doug; Hsu, Stephen; Hsu, Michael; Park, Sangbong; Berger, Gabriel; Coskun, Tamer; de Vocht, Joep; Chen, Fung; Socha, Robert; Park, JungChul; Gronlund, Keith

    2005-11-01

    Illumination optimization, often combined with optical proximity corrections (OPC) to the mask, is becoming one of the critical components of a production-worthy lithography process for 55nm-node DRAM/Flash memory devices and beyond. At low k1, e.g. k1 < 0.31, both resolution and imaging contrast can be severely limited by the current imaging tools when using the standard illumination sources. Illumination optimization is a process where the source shape is varied, in both profile and intensity distribution, to achieve enhancement in the final image contrast as compared to using the non-optimized sources. The optimization can be done efficiently for repetitive patterns such as DRAM/Flash memory cores. However, illumination optimization often produces source shapes that are "free-form" like; they can be too complex to be directly applicable for production and lack the radial and annular symmetries desirable for the diffractive optical element (DOE) based illumination systems in today's leading lithography tools. As a result, post-optimization rendering and verification of the optimized source shape are often necessary to meet the production-ready or manufacturability requirements and ensure optimal performance gains. In this work, we describe our approach to the illumination optimization for k1 < 0.31 DRAM/Flash memory patterns, using an ASML XT:1400i at NA 0.93, where all the necessary manufacturability requirements are fully accounted for during the optimization. The imaging contrast in the resist is optimized in a reduced solution space constrained by the manufacturability requirements, which include minimum distance between poles, minimum opening pole angles, minimum ring width and minimum source filling factor in the sigma space. For additional performance gains, the intensity within the optimized source can vary in a gray-tone fashion (eight shades used in this work). Although this new optimization approach can sometimes produce closely spaced solutions as gauged by NILS-based metrics, we show that the optimal and production-ready source shape solution can be easily determined by comparing the best solutions to the "free-form" solution and, more importantly, by their respective imaging fidelity and process latitude ranking. Imaging fidelity and process latitude simulations are performed to analyze the impact and sensitivity of the manufacturability requirements on pattern-specific illumination optimizations using the ASML XT:1400i and other latest imaging systems. Mask-model-based OPC (MOPC) is applied and optimized sequentially to ensure that the CD uniformity requirements are met.

  14. Is the difference between chemical and numerical estimates of baseflow meaningful?

    NASA Astrophysics Data System (ADS)

    Cartwright, Ian; Gilfedder, Ben; Hofmann, Harald

    2014-05-01

    Both chemical and numerical techniques are commonly used to calculate baseflow inputs to gaining rivers. In general, the chemical methods yield lower estimates of baseflow than the numerical techniques. In part, this may be due to the techniques assuming two components (event water and baseflow) whereas there may also be multiple transient stores of water. Bank return waters, interflow, or waters stored on floodplains are delayed components that may be geochemically similar to the surface water from which they are derived; numerical techniques may record these components as baseflow whereas chemical mass balance studies are likely to aggregate them with the surface water component. This study compares baseflow estimates using chemical mass balance, local minimum methods, and recursive digital filters in the upper reaches of the Barwon River, southeast Australia. While more sophisticated techniques exist, these methods of estimating baseflow are readily applied with the available data and have been used widely elsewhere. During the early stages of high-discharge events, chemical mass balance overestimates groundwater inflows, probably due to flushing of saline water from wetlands and marshes, soils, or the unsaturated zone. Overall, however, estimates of baseflow from the local minimum and recursive digital filters are higher than those from chemical mass balance using Cl calculated from continuous electrical conductivity. Between 2001 and 2011, the baseflow contribution to the upper Barwon River calculated using chemical mass balance is between 12 and 25% of annual discharge. Recursive digital filters predict higher baseflow contributions of 19 to 52% of annual discharge. These estimates are similar to those from the local minimum method (16 to 45% of annual discharge). These differences most probably reflect how the different techniques characterise the transient water sources in this catchment. The local minimum and recursive digital filters aggregate much of the water from delayed sources as baseflow. However, as many of these delayed transient water stores (such as bank return flow, floodplain storage, or interflow) have Cl concentrations that are similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The difference between the estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months at that time. Cl vs. discharge variations during individual flow events also demonstrate that inflows of high-salinity older water occur on the rising limbs of hydrographs, followed by inflows of low-salinity water from the transient stores as discharge falls. The use of complementary techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
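
    Both families of estimates are easy to sketch. Below, a one-parameter recursive digital filter of the Lyne-Hollick form is compared with a two-component chemical mass balance using Cl as the tracer; the filter parameter, end-member concentrations, and discharge series are all invented, and operational use typically applies multiple filter passes.

    ```python
    import numpy as np

    # Two illustrative baseflow separations: a Lyne-Hollick recursive digital
    # filter (single forward pass) and a two-component chemical mass balance
    # with Cl as tracer. All parameter values and data are invented.

    def lyne_hollick(q, alpha=0.925):
        """One forward pass of the Lyne-Hollick filter; returns baseflow."""
        qf = np.zeros_like(q)  # filtered quickflow
        for k in range(1, q.size):
            qf[k] = alpha * qf[k - 1] + 0.5 * (1 + alpha) * (q[k] - q[k - 1])
            qf[k] = min(max(qf[k], 0.0), q[k])  # keep 0 <= quickflow <= Q
        return q - qf

    def chemical_mass_balance(q, c_river, c_baseflow, c_runoff):
        """Qb = Q * (C - C_runoff) / (C_baseflow - C_runoff), clipped to [0, Q]."""
        frac = (c_river - c_runoff) / (c_baseflow - c_runoff)
        return q * np.clip(frac, 0.0, 1.0)

    q = np.array([2.0, 8.0, 20.0, 12.0, 6.0, 3.5, 2.5])       # discharge [m3/s]
    cl = np.array([60, 30, 15, 22, 35, 45, 52], dtype=float)  # river Cl [mg/L]

    print("filter:", lyne_hollick(q).round(2))
    print("CMB:   ", chemical_mass_balance(q, cl, 70.0, 8.0).round(2))
    ```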

  15. Correlation of soft palate length with velum obstruction and severity of obstructive sleep apnea syndrome.

    PubMed

    Lim, Ju-Shin; Lee, Jae Woo; Han, Chun; Kwon, Jang-Woo

    2018-06-01

    Our aim in this study was to analyze whether soft palate length and velum obstruction during sleep are correlated and to determine the effects of related parameters on obstructive sleep apnea syndrome (OSAS) severity. We used computed tomography to measure soft palate length and drug-induced sleep endoscopy (DISE) to evaluate velum obstruction severity. Patients also underwent polysomnography (PSG) for evaluation of OSAS severity. A retrospective cohort of 67 patients with OSAS treated between May 1st, 2013 and July 31st, 2016 was analyzed. Each patient underwent DISE, PSG, and computed tomography. Using DISE, velum obstruction was categorized by the VOTE classification method. Using computed tomography, soft palate length was measured as the length from the posterior nasal spine to the uvula. Correlations of velum obstruction in DISE and PSG parameters (obstructive apnea, hypopnea, apnea-hypopnea index (AHI), respiratory effort related arousal (RERA), respiratory disturbance index (RDI), baseline SaO2, and minimum SaO2) with soft palate length were also analyzed. Among the 67 patients, the average PNS-U length was 39.90 ± 4.19 mm. Length differed significantly by age but not by other demographic characteristics such as sex, past history, or BMI. DISE revealed a statistically significant difference in velum obstruction degree; the cutoff value for PNS-U was 39.47 mm. The PSG results obstructive apnea, AHI, RDI, baseline SaO2, and minimum SaO2 were correlated with PNS-U length, while other results such as hypopnea and RERA showed no correlation. Analysis of soft palate length showed that increased PNS-U length was associated with higher rates of obstructive apnea, AHI, and RDI as assessed by PSG. In contrast, lower baseline SaO2 and minimum SaO2 values were seen by PSG, and more severe velum obstruction was seen by DISE. We propose that when soft palate obstruction is suspected in OSAS, computed tomography measurement of soft palate length is a valid method for estimating the degree of velum obstruction and the severity of OSAS. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Potential for thermal tolerance to mediate climate change effects on three members of a cool temperate lizard genus, Niveoscincus.

    PubMed

    Caldwell, Amanda J; While, Geoffrey M; Beeton, Nicholas J; Wapstra, Erik

    2015-08-01

    Climatic changes are predicted to be greater in higher-latitude and mountainous regions, but species-specific impacts are difficult to predict. This is partly due to inter-specific variance in the physiological traits which mediate environmental temperature effects at the organismal level. We examined variation in the critical thermal minimum (CTmin), critical thermal maximum (CTmax) and evaporative water loss rates (EWL) of a widespread lowland (Niveoscincus ocellatus) and two range-restricted highland (N. microlepidotus and N. greeni) members of a cool temperate Tasmanian lizard genus. The widespread lowland species had significantly higher CTmin and CTmax and significantly lower EWL than both highland species. Implications of inter-specific variation in thermal tolerance for activity were examined under contemporary and future climate change scenarios. Instances of air temperatures below CTmin were predicted to decline in frequency for the widespread lowland and both highland species. Air temperatures of high altitude sites were not predicted to exceed the CTmax of either highland species throughout the 21st century. In contrast, the widespread lowland species is predicted to experience air temperatures in excess of CTmax on 1 or 2 days by three of six global circulation models from 2068 to 2096. To estimate climate change effects on activity we reran the thermal tolerance models using minimum and maximum temperatures selected for activity. A net gain in available activity time was predicted under climate change for all three species; while air temperatures were predicted to exceed maximum temperatures selected for activity with increasing frequency, the change was not as great as the predicted decline in air temperatures below minimum temperatures selected for activity. We hypothesise that the major effect of rising air temperatures under climate change is an increase in the available activity period for both the widespread lowland and highland species. The consequences of a greater available activity period will depend on the extent to which changes in climate alter other related factors, such as the nature and level of competition between the respective species. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Effect of bait delivery rate in a GreenFeed system on methane emission estimates from cattle grazing rangeland

    USDA-ARS?s Scientific Manuscript database

    Two experiments were conducted to evaluate the effect of bait delivery rate on methane emission estimates measured by a GreenFeed system (GFS; C-Lock, Inc., Rapid City, SD). The manufacturer recommends that cattle have a minimum visit time of 3 minutes so that at least 3 eructations are captured to ...

  18. Monthly leaf area index estimates from point-in-time measurements and needle phenology for Pinus taeda

    Treesearch

    D.A. Sampson; T.J. Albaugh; Kurt H. Johnsen; H.L. Allen; Stanley J. Zarnoch

    2003-01-01

    Abstract: Leaf area index (LAI) of loblolly pine (Pinus taeda L.) trees of the southern United States varies almost twofold interannually; loblolly pine, essentially, carries two foliage cohorts at peak LAI (September) and one at minimum (March–April). Herein, we present an approach that may be site invariant to estimate monthly...

  19. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
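
    A minimal sketch of the remedy, assuming the variance model Var(e_i) = σ²·x_i^(2k): once k has been estimated from the error structure, weighted least squares with weights 1/x_i^(2k) restores minimum-variance estimation. The power k = 1 and the data below are invented.

    ```python
    import numpy as np

    # Weighted least squares under a multiplicative error variance model,
    # Var(e_i) = sigma^2 * x_i^(2k). Here k = 1 and the data are invented;
    # in practice k is estimated from the error structure first.

    rng = np.random.default_rng(3)
    x = np.linspace(5.0, 60.0, 80)           # predictor, e.g. stump diameter
    y = 1.1 * x + rng.normal(0.0, 0.05 * x)  # error sd grows with x (k = 1)

    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(1.0 / x**2)                  # weights 1 / x^(2k) with k = 1

    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    print("OLS:", beta_ols, " WLS:", beta_wls)
    ```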

  20. Anticipating Cycle 24 Minimum and its Consequences: An Update

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2008-01-01

    This Technical Publication updates estimates for cycle 24 minimum and discusses consequences associated with cycle 23 being a longer than average period cycle and cycle 24 having parametric minimum values smaller (or larger for the case of spotless days) than long-term medians. Through December 2007, cycle 23 has persisted 140 mo from its 12-mo moving average (12-mma) minimum monthly mean sunspot number occurrence date (May 1996). Longer than average period cycles of the modern era (since cycle 12) have minimum-to-minimum periods of about 139.0 ± 6.3 mo (the 90-percent prediction interval), implying that cycle 24's minimum monthly mean sunspot number should be expected before July 2008. The major consequence of this is that, unless cycle 24 is a statistical outlier (like cycle 21), its maximum amplitude (RM) likely will be smaller than previously forecast. If, however, in the course of its rise cycle 24's 12-mma of the weighted mean latitude (L) of spot groups exceeds 24 deg, then one expects RM > 131, and if its 12-mma of highest-latitude (H) spot groups exceeds 38 deg, then one expects RM > 127. High-latitude new cycle spot groups, while first reported in January 2008, have not, as yet, become the dominant form of spot groups. Minimum values in L and H were observed in mid-2007 and values are now slowly increasing, a precondition for the imminent onset of the new sunspot cycle.

  1. Committed sea-level rise for the next century from Greenland ice sheet dynamics during the past decade

    PubMed Central

    Price, Stephen F.; Payne, Antony J.; Howat, Ian M.; Smith, Benjamin E.

    2011-01-01

    We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland’s three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing. PMID:21576500

  2. Constraints on Water Reservoir Lifetimes From Catchment-Wide 10Be Erosion Rates—A Case Study From Western Turkey

    NASA Astrophysics Data System (ADS)

    Heineke, Caroline; Hetzel, Ralf; Akal, Cüneyt; Christl, Marcus

    2017-11-01

    The functionality and retention capacity of water reservoirs is generally impaired by upstream erosion and reservoir sedimentation, making a reliable assessment of erosion indispensable to estimate reservoir lifetimes. Widely used river gauging methods may underestimate sediment yield, because they do not record rare, high-magnitude events and may underestimate bed load transport. Hence, reservoir lifetimes calculated from short-term erosion rates should be regarded as maximum values. We propose that erosion rates from cosmogenic 10Be, which commonly integrate over hundreds to thousands of years, are useful to complement short-term sediment yield estimates and should be employed to estimate minimum reservoir lifetimes. Here we present 10Be erosion rates for the drainage basins of six water reservoirs in Western Turkey, which are located in a tectonically active region with easily erodible bedrock. Our 10Be erosion rates for these catchments are high, ranging from ~170 to ~1,040 t/km2/yr. When linked to reservoir volumes, they yield minimum reservoir lifetimes between 25 ± 5 and 1,650 ± 360 years until complete filling, with four reservoirs having minimum lifespans of ≤110 years. In a neighboring region with more resistant bedrock and less tectonic activity, we obtain much lower catchment-wide 10Be erosion rates of ~33 to ~95 t/km2/yr, illustrating that differences in lithology and tectonic boundary conditions can cause substantial variations in erosion even at a spatial scale of only ~50 km. In conclusion, we suggest that both short-term sediment yield estimates and 10Be erosion rates should be employed to predict the lifetimes of reservoirs.
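
    The lifetime arithmetic implied by linking erosion rates to reservoir volumes is shown below with invented numbers; a bulk density converts the mass flux in t/km²/yr into the volume accumulating in the reservoir each year.

    ```python
    # Reservoir-lifetime arithmetic (illustrative numbers, not the paper's):
    # erosion rate [t/km^2/yr] x catchment area [km^2] gives a sediment mass
    # flux, which a bulk density [t/m^3] converts to a volume influx.

    erosion_rate = 500.0     # catchment-wide 10Be erosion rate [t/km^2/yr]
    catchment_area = 400.0   # [km^2]
    bulk_density = 1.5       # deposited sediment bulk density [t/m^3]
    reservoir_volume = 50e6  # usable reservoir volume [m^3]

    volume_influx = erosion_rate * catchment_area / bulk_density  # [m^3/yr]
    lifetime = reservoir_volume / volume_influx
    print(f"influx: {volume_influx:.0f} m^3/yr -> ~{lifetime:.0f} yr to fill")
    ```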

  3. Committed sea-level rise for the next century from Greenland ice sheet dynamics during the past decade.

    PubMed

    Price, Stephen F; Payne, Antony J; Howat, Ian M; Smith, Benjamin E

    2011-05-31

    We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland's three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing.

  4. Probable flood predictions in ungauged coastal basins of El Salvador

    USGS Publications Warehouse

    Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.

    2008-01-01

    A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated, revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.

  5. Trends in Arctic Sea Ice Volume 2010-2013 from CryoSat-2

    NASA Astrophysics Data System (ADS)

    Tilling, R.; Ridout, A.; Wingham, D.; Shepherd, A.; Haas, C.; Farrell, S. L.; Schweiger, A. J.; Zhang, J.; Giles, K.; Laxon, S.

    2013-12-01

    Satellite records show a decline in Arctic sea ice extent over the past three decades with a record minimum in September 2012, and results from the Pan-Arctic Ice-Ocean Modelling and Assimilation System (PIOMAS) suggest that this has been accompanied by a reduction in volume. We use three years of measurements recorded by the European Space Agency CryoSat-2 (CS-2) mission, validated with in situ data, to generate estimates of seasonal variations and inter-annual trends in Arctic sea ice volume between 2010 and 2013. The CS-2 estimates of sea ice thickness agree with in situ estimates derived from upward looking sonar measurements of ice draught and airborne measurements of ice thickness and freeboard to within 0.1 metres. Prior to the record minimum in summer 2012, autumn and winter Arctic sea ice volume had fallen by ~1300 km3 relative to the previous year. Using the full 3-year period of CS-2 observations, we estimate that winter Arctic sea ice volume has decreased by ~700 km3/yr since 2010, approximately twice the average rate since 1980 as predicted by the PIOMAS.

  6. Archaeal Abundance across a pH Gradient in an Arable Soil and Its Relationship to Bacterial and Fungal Growth Rates

    PubMed Central

    Sterngren, Anna E.; Rousk, Johannes

    2012-01-01

    Soil pH is one of the most influential factors for the composition of bacterial and fungal communities, but the influence of soil pH on the distribution and composition of soil archaeal communities has yet to be systematically addressed. The primary aim of this study was to determine how total archaeal abundance (quantitative PCR [qPCR]-based estimates of 16S rRNA gene copy numbers) is related to soil pH across a pH gradient (pH 4.0 to 8.3). Secondarily, we wanted to assess how archaeal abundance related to bacterial and fungal growth rates across the same pH gradient. We identified two distinct and opposite effects of pH on the archaeal abundance. In the lowest pH range (pH 4.0 to 4.7), the abundance of archaea did not seem to correspond to pH. Above this pH range, there was a sharp, almost 4-fold decrease in archaeal abundance, reaching a minimum at pH 5.1 to 5.2. The low abundance of archaeal 16S rRNA gene copy numbers at this pH range then sharply increased almost 150-fold with pH, resulting in an increase in the ratio between archaeal and bacterial copy numbers from a minimum of 0.002 to more than 0.07 at pH 8. The nonuniform archaeal response to pH could reflect variation in the archaeal community composition along the gradient, with some archaea adapted to acidic conditions and others to neutral to slightly alkaline conditions. This suggestion is reinforced by observations of contrasting outcomes of the (competitive) interactions between archaea, bacteria, and fungi toward the lower and higher ends of the examined pH gradient. PMID:22706045

  7. Evidence for pre-Taconic metamorphism in the Potomac terrane, Maryland and Virginia: Hornblende and Muscovite [sup 40]Ar/[sup 39]Ar results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, J.L.; Wintsch, R.P.; Kunk, M.J.

    1993-03-01

    New 40Ar/39Ar age spectra of hornblende and white mica from the Great Falls area of the Potomac terrane of Maryland and Virginia indicate pre-Taconic metamorphism. Age spectra of hornblende samples are interpreted to represent cooling from peak metamorphic conditions through their closure temperatures for argon diffusion (~500 °C) at about 490 Ma. These older Ordovician postmetamorphic cooling ages strongly contrast with younger post-Ordovician metamorphic cooling ages now being reported in the Blue Ridge and Goochland terranes to the west and east, respectively. A late phyllitic sheen observed on rocks in the field, and petrographic observations of undulose plagioclase and amphibole, older muscovite, and kinked primary muscovite in the Bear Island Granodiorite, reflect a younger retrogressive metamorphism involving the growth of secondary muscovite (Fisher's S4). 40Ar/39Ar age spectra of white micas from the Bear Island Granodiorite are complex and probably indicate both primary and secondary white mica, the latter apparently growing below the closure temperature for retention of argon in muscovite (~350 °C). The age spectra permit an estimate of a minimum age of 420 Ma for cooling through closure of the older generation of white mica. The above ages of hornblende and muscovite closure imply a minimum cooling rate of ~2 °C/m.y. and an exhumation rate of about 1 mm/yr. The projected time of peak metamorphism at upper amphibolite facies for the Great Falls area clearly predates the Ordovician Taconic orogeny and suggests that these rocks escaped this event and largely escaped younger Paleozoic metamorphic events, which are well documented in adjacent terranes.

  8. Protein folding: Over half a century lasting quest. Comment on "There and back again: Two views on the protein folding puzzle" by Alexei V. Finkelstein et al.

    NASA Astrophysics Data System (ADS)

    Krokhotin, Andrey; Dokholyan, Nikolay V.

    2017-07-01

    Most proteins fold into unique three-dimensional (3D) structures that determine their biological functions, such as catalytic activity or macromolecular binding. Misfolded proteins can pose a threat through aberrant interactions with other proteins, leading to a number of diseases including Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis [1,2]. What does determine the 3D structure of proteins? The first clue to this question came more than fifty years ago when Anfinsen demonstrated that unfolded proteins can spontaneously fold to their native 3D structures [3,4]. Anfinsen's experiments led to the conclusion that proteins fold to a unique native structure corresponding to the stable and kinetically accessible free energy minimum, and that protein native structure is solely determined by its amino acid sequence. The question of how exactly proteins find their free energy minimum proved to be a difficult problem. One of the puzzles, initially pointed out by Levinthal, was an inconsistency between observed protein folding times and theoretical estimates. A self-avoiding polymer model of a globular protein of 100-residue length on a cubic lattice can sample at least 10^47 states. Based on the assumption that conformational sampling occurs at the highest vibrational mode of proteins (∼picoseconds), the predicted folding time from searching among all possible conformations is ∼10^27 years (much larger than the age of the universe) [5]. In contrast, observed protein folding times range from microseconds to minutes. Due to the tremendous theoretical progress in the protein folding field achieved in past decades, the source of this inconsistency is now understood, as thoroughly described in the review by Finkelstein et al. [6].
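
    The Levinthal arithmetic is worth spelling out (order-of-magnitude only):

    ```python
    # Exhaustive search over 10^47 conformations at one per picosecond.

    states = 1e47
    rate = 1e12              # conformations sampled per second (~1 per ps)
    seconds = states / rate  # 1e35 s
    years = seconds / 3.15e7 # ~3e27 yr, vastly exceeding the universe's age
    print(f"{years:.1e} years")
    ```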

  9. Usefulness of the apparent diffusion coefficient for the evaluation of the white matter to differentiate between glioblastoma and brain metastases.

    PubMed

    Miquelini, L A; Pérez Akly, M S; Funes, J A; Besada, C H

    2016-01-01

    To determine whether there are significant differences in the apparent diffusion coefficient (ADC) between the apparently normal peritumor white matter surrounding glioblastomas and that surrounding brain metastases. We retrospectively reviewed 42 patients with histologically confirmed glioblastomas and 42 patients with a single cerebral metastasis. We measured the signal intensity in the apparently normal peritumor white matter and in the abnormal peritumor white matter on the ADC maps. We used mean ADC values in the contralateral occipital white matter as a reference from which to design normalized ADC indices. We compared mean values between the two tumor types. We calculated the area under the receiver operator characteristic curve and estimated the sensitivity and specificity of the measurements taken. Supratentorial lesions and compromise of the corpus callosum were more common in patients with glioblastoma than in patients with brain metastases. The maximum diameter of the enhanced area after injection of a contrast agent was greater in the glioblastomas (p<0.001). The minimum ADC value measured in the apparently normal peritumor white matter was higher for the glioblastomas than for the metastases (p=0.002). Significant differences in the ADC index were found only for the minimum ADC value in apparently normal peritumor white matter. The sensitivity and specificity were less than 70% for all variables analyzed. There are differences in the ADC values of apparently normal peritumor white matter between glioblastomas and cerebral metastases, but the magnitude of these differences is slight and the application of these differences in clinical practice is still limited. Copyright © 2015 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  10. Archaeal abundance across a pH gradient in an arable soil and its relationship to bacterial and fungal growth rates.

    PubMed

    Bengtson, Per; Sterngren, Anna E; Rousk, Johannes

    2012-08-01

    Soil pH is one of the most influential factors for the composition of bacterial and fungal communities, but the influence of soil pH on the distribution and composition of soil archaeal communities has yet to be systematically addressed. The primary aim of this study was to determine how total archaeal abundance (quantitative PCR [qPCR]-based estimates of 16S rRNA gene copy numbers) is related to soil pH across a pH gradient (pH 4.0 to 8.3). Secondarily, we wanted to assess how archaeal abundance related to bacterial and fungal growth rates across the same pH gradient. We identified two distinct and opposite effects of pH on the archaeal abundance. In the lowest pH range (pH 4.0 to 4.7), the abundance of archaea did not seem to correspond to pH. Above this pH range, there was a sharp, almost 4-fold decrease in archaeal abundance, reaching a minimum at pH 5.1 to 5.2. The low abundance of archaeal 16S rRNA gene copy numbers at this pH range then sharply increased almost 150-fold with pH, resulting in an increase in the ratio between archaeal and bacterial copy numbers from a minimum of 0.002 to more than 0.07 at pH 8. The nonuniform archaeal response to pH could reflect variation in the archaeal community composition along the gradient, with some archaea adapted to acidic conditions and others to neutral to slightly alkaline conditions. This suggestion is reinforced by observations of contrasting outcomes of the (competitive) interactions between archaea, bacteria, and fungi toward the lower and higher ends of the examined pH gradient.

  11. Biased visualization of hypoperfused tissue by computed tomography due to short imaging duration: improved classification by image down-sampling and vascular models.

    PubMed

    Mikkelsen, Irene Klærke; Jones, P Simon; Ribe, Lars Riisgaard; Alawneh, Josef; Puig, Josep; Bekke, Susanne Lise; Tietze, Anna; Gillard, Jonathan H; Warburton, Elisabeth A; Pedraza, Salva; Baron, Jean-Claude; Østergaard, Leif; Mouridsen, Kim

    2015-07-01

    Lesion detection in acute stroke by computed tomography perfusion (CTP) can be affected by incomplete bolus coverage in veins and hypoperfused tissue, so-called bolus truncation (BT), and by low contrast-to-noise ratio (CNR). We examined the BT frequency and hypothesized that image down-sampling and a vascular model (VM) for perfusion calculation would improve classification of normo- and hypoperfused tissue. CTP datasets from 40 acute stroke patients were retrospectively analysed for BT. In 16 patients with hypoperfused tissue but no BT, repeated 2-by-2 image down-sampling and uniform filtering were performed, comparing CNR to perfusion-MRI levels and tissue classification to that of unprocessed data. By simulating reduced scan duration, the minimum scan duration at which estimated lesion volumes came within 10% of their true volume was compared for VM and state-of-the-art algorithms. BT in veins and hypoperfused tissue was observed in 9/40 (22.5%) and 17/40 patients (42.5%), respectively. Down-sampling to 128 × 128 resolution yielded CNR comparable to MR data and improved tissue classification (p = 0.0069). VM reduced the minimum scan duration, providing reliable maps of cerebral blood flow and mean transit time (5 s, p = 0.03, and 7 s, p < 0.0001, respectively). BT is not uncommon in stroke CTP with 40-s scan duration. Applying image down-sampling and VM improves tissue classification. • Too-short imaging duration is common in clinical acute stroke CTP imaging. • The consequence is impaired identification of hypoperfused tissue in acute stroke patients. • The vascular model is less sensitive than current algorithms to imaging duration. • Noise reduction by image down-sampling improves identification of hypoperfused tissue by CTP.
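
    The benefit of 2-by-2 down-sampling follows from simple noise statistics: averaging four pixels with independent noise halves the noise standard deviation while ideally preserving the perfusion signal. The sketch below demonstrates this on synthetic data.

    ```python
    import numpy as np

    # 2x2 block averaging on a synthetic image: four independent noise
    # samples averaged halves the noise sd, roughly doubling signal/noise.

    rng = np.random.default_rng(4)
    signal, sigma = 10.0, 8.0
    img = signal + rng.normal(0.0, sigma, (256, 256))

    small = img.reshape(128, 2, 128, 2).mean(axis=(1, 3))  # 2x2 block average

    print("SNR before:", signal / img.std())
    print("SNR after: ", signal / small.std())  # roughly twice as large
    ```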

  12. A visual detection model for DCT coefficient quantization

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Watson, Andrew B.

    1994-01-01

    The discrete cosine transform (DCT) is widely used in image compression and is part of the JPEG and MPEG compression standards. The degree of compression and the amount of distortion in the decompressed image are controlled by the quantization of the transform coefficients. The standards do not specify how the DCT coefficients should be quantized. One approach is to set the quantization level for each coefficient so that the quantization error is near the threshold of visibility. Results from previous work are combined to form the current best detection model for DCT coefficient quantization noise. This model predicts sensitivity as a function of display parameters, enabling quantization matrices to be designed for display situations varying in luminance, veiling light, and spatial frequency related conditions (pixel size, viewing distance, and aspect ratio). It also allows arbitrary color space directions for the representation of color. A model-based method of optimizing the quantization matrix for an individual image was developed. The model described above provides visual thresholds for each DCT frequency. These thresholds are adjusted within each block for visual light adaptation and contrast masking. For a given quantization matrix, the DCT quantization errors are scaled by the adjusted thresholds to yield perceptual errors. These errors are pooled nonlinearly over the image to yield total perceptual error. With this model one may estimate the quantization matrix for a particular image that yields minimum bit rate for a given total perceptual error, or minimum perceptual error for a given bit rate. Custom matrices for a number of images show clear improvement over image-independent matrices. Custom matrices are compatible with the JPEG standard, which requires transmission of the quantization matrix.
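
    The error-pooling step can be sketched compactly: DCT quantization errors are divided by per-frequency visual thresholds and pooled with a Minkowski sum into a single perceptual error. The threshold matrix and pooling exponent below are illustrative stand-ins for the model's calibrated values.

    ```python
    import numpy as np
    from scipy.fft import dctn

    # Sketch of threshold-scaled quantization error with Minkowski pooling.
    # The toy threshold matrix and beta = 4 stand in for calibrated values.

    rng = np.random.default_rng(5)
    block = rng.normal(128.0, 30.0, (8, 8))      # one 8x8 image block

    u = np.arange(8)
    thresholds = 1.0 + 0.5 * (u[:, None] + u[None, :])  # toy visual thresholds
    qmatrix = 2.0 * thresholds                   # quantize near threshold

    coeffs = dctn(block, norm="ortho")
    quantized = np.round(coeffs / qmatrix) * qmatrix
    perceptual = (quantized - coeffs) / thresholds  # error in threshold units

    beta = 4.0                                   # Minkowski pooling exponent
    pooled = (np.abs(perceptual) ** beta).sum() ** (1.0 / beta)
    print("pooled perceptual error:", pooled)
    ```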

  13. Coastal Upwelling and Deep Fog: 50-year Worldwide Climatology

    NASA Astrophysics Data System (ADS)

    Koracin, D. R.

    2015-12-01

    An analysis is presented of the marine fog distribution based upon the International Comprehensive Ocean-Atmosphere Data Set (ICOADS) ship observations taken during 1950-2007. Deep fog occurrence is reported in routine weather reports that are encoded in an ICOADS ship observation. Occurrence is estimated by the number of deep fog observations divided by the total present weather observations in a one-degree area centered on latitude and longitude grid point intersections. The mean fog occurrence for the summer (June-July-August) 1950-2007 was computed for each one degree point for the world. There are five major world locations with coastal SST minimums due to wind driven upwelling. Four of these are during the local summer on the eastern side of a semi-permanent anticyclone on eastern sides of northern and southern mid-latitudes of the Pacifica and the Atlantic. The fifth is during the SW monsoon in the Indian Ocean. For all five of these locations, the deep fog occurrence is at maximum during the upwelling season, with the greatest occurrences concentrated along the coast and isolated over the SST minimum. For the five coastal fog maxima, the greatest and longest duration occurrence along coast occurrence is associated with the coldest sea surface temperature and longest along coast occurrence, which is along N. California- S. Oregon. In contrast, the lowest occurrence of fog and the least along coast occurrence is associated with the warmest sea surface temperatures and least along coast occurrence along the SE Arabian Peninsula. The remaining three zones, Peru-Chile, NW Africa, and SW Africa are between the two extremes in fog occurrence, along coast coverage and sea surface temperature. Peru-Chile is more complex than the others as the Peru upwelling and fog appears the more dominant although ship observations are sparse along Chile.

  14. Ocean mixing in deep-sea trenches: New insights from the Challenger Deep, Mariana Trench

    NASA Astrophysics Data System (ADS)

    van Haren, Hans; Berndt, Christian; Klaucke, Ingo

    2017-11-01

    Reliable very deep shipborne SBE 911plus Conductivity Temperature Depth (CTD) data to within 60 m of the bottom and Kongsberg EM122 0.5° × 1° multibeam echosounder data were collected in the Challenger Deep, Mariana Trench. A new position and depth are given for the deepest point in the world's ocean. The data provide insight into the interplay between topography and internal waves in the ocean that leads to mixing of the lowermost water masses on Earth. Below 5000 m, the vertical density stratification is weak, with a minimum buoyancy frequency N = 1.0 ± 0.6 cpd (cycles per day) between 6500 and 8500 m. In that depth range, the average turbulence, coarsely estimated from Thorpe-overturning scales with limited statistics, is about ten times higher than the mean dissipation rate εT = 3 ± 2 × 10⁻¹¹ m² s⁻³ and eddy diffusivity KzT = 2 ± 1.5 × 10⁻⁴ m² s⁻¹ estimated for the depth range between 10,300 and 10,850 m, where N = 2.5 ± 0.6 cpd. Inertial and meridionally directed tidal inertio-gravity waves can propagate between the differently stratified layers. These waves are suggested to be responsible for the observed turbulence. The turbulence values are similar to those recently estimated from CTD and moored observations in the Puerto Rico Trench. Yet, in contrast to the Puerto Rico Trench, the seafloor morphology in the Mariana Trench shows up to 500 m-high fault scarps on the incoming tectonic plate and a very narrow trench, suggesting that seafloor topography does not play a crucial role for mixing.
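
    The Thorpe-scale estimates quoted above follow a common scaling in which the dissipation rate is eps = (c * L_T)² N³ and the eddy diffusivity is Kz = Γ eps / N², with assumed literature constants c ≈ 0.8 and Γ ≈ 0.2. A minimal sketch under those assumptions (N converted from cycles per day to rad/s):

        import numpy as np

        def thorpe_dissipation(thorpe_scale_m, N_cpd, c=0.8, gamma=0.2):
            """Dissipation and diffusivity from Thorpe scales (sketch).

            c and gamma are assumed literature constants; N is given in
            cycles per day, as in the abstract, and converted to rad/s.
            """
            N = N_cpd * 2.0 * np.pi / 86400.0          # cpd -> rad s^-1
            eps = (c * thorpe_scale_m) ** 2 * N ** 3   # m^2 s^-3
            Kz = gamma * eps / N ** 2                  # m^2 s^-1
            return eps, Kz

    With N = 2.5 cpd and an overturn scale of about 3 m, this reproduces the order of magnitude of the quoted εT and KzT.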

  15. Advancing the detection of steady-state visual evoked potentials in brain-computer interfaces.

    PubMed

    Abu-Alqumsan, Mohammad; Peer, Angelika

    2016-06-01

    Spatial filtering has proved to be a powerful pre-processing step in the detection of steady-state visual evoked potentials (SSVEPs) and has boosted typical detection rates both in offline analysis and in online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations, as they all build upon the second-order statistics of the acquired electroencephalographic (EEG) data, that is, its spatial autocovariance and its cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of canonical variates with that of autoregressive spectral analysis in estimating the signal and noise power levels. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination method were found to provide better estimates for different SNR levels. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately and reliably.
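
    Since the record treats canonical correlation analysis as the common basis of these detectors, a minimal sketch of generic CCA-based SSVEP scoring may help: the EEG segment is correlated against sine/cosine references at each candidate frequency and its harmonics, and the frequency with the largest canonical correlation is selected. The function names and the two-harmonic default are illustrative; this is the baseline CCA detector, not the CVARS method itself.

        import numpy as np

        def cca_max_corr(X, Y):
            """Largest canonical correlation between X and Y (samples x dims)."""
            X = X - X.mean(axis=0)
            Y = Y - Y.mean(axis=0)
            qx, _ = np.linalg.qr(X)
            qy, _ = np.linalg.qr(Y)
            return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

        def ssvep_detect(eeg, fs, freqs, n_harm=2):
            """Pick the candidate frequency with the highest correlation.

            eeg : (n_samples, n_channels) EEG segment, fs in Hz.
            """
            t = np.arange(eeg.shape[0]) / fs
            scores = []
            for f in freqs:
                ref = np.column_stack([fn(2 * np.pi * f * h * t)
                                       for h in range(1, n_harm + 1)
                                       for fn in (np.sin, np.cos)])
                scores.append(cca_max_corr(eeg, ref))
            return freqs[int(np.argmax(scores))], scores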

  16. Climatology of cloud-base height from long-term radiosonde measurements in China

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Zhang, Lejian; Guo, Jianping; Feng, Jinming; Cao, Lijuan; Wang, Yang; Zhou, Qing; Li, Liangxu; Li, Bai; Xu, Hui; Liu, Lin; An, Ning; Liu, Huan

    2018-02-01

    Clouds are critical to the global radiation budget and hydrological cycle, but knowledge is still poor concerning the observed climatology of cloud-base height (CBH) in China. Based on fine-resolution sounding observations from the China Radiosonde Network (CRN), the method used to estimate CBH was modified, and uncertainty analyses indicated that the estimated CBHs are sufficiently accurate. The accuracy of the CBH estimation is verified by comparison between the sounding-derived CBHs and those estimated from a micro-pulse lidar and a millimeter-wave cloud radar. As such, the CBH climatology was compiled for the period 2006-16. Overall, the CBH exhibits large geographic variability across China, at both 0800 Local Standard Time (LST) and 2000 LST, irrespective of season. In addition, the summertime cloud base tends to be elevated to higher altitudes in dry regions [i.e., Inner Mongolia and the North China Plain (NCP)]. By comparison, the Tibetan Plateau (TP), Pearl River Delta (PRD) and Sichuan Basin (SCB) have relatively low CBHs (< 2.4 km above ground level). In terms of seasonality, the CBH reaches its maximum in summer and its minimum in winter. A low cloud base tends to occur frequently (> 70%) over the TP, PRD and SCB. In contrast, at most sites over the Yangtze River Delta (YRD) and the NCP, about half the cloud belongs to the high-cloud category. The CBH does not exhibit marked diurnal variation in summer across all CRN sites, probably due to the persistent cloud coverage caused by the East Asian summer monsoon. To the best of our knowledge, this is the first CBH climatology produced from sounding measurements in China, and it provides a useful reference for obtaining observational cloud-base information.
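
    The abstract does not spell out the modified estimation algorithm; methods in this family commonly scan the sounding upward for the first level where relative humidity crosses a threshold. A generic sketch under that assumption (the 84% threshold is only an example value, not the paper's):

        import numpy as np

        def cloud_base_height(height_m, rh_percent, rh_threshold=84.0):
            """First level where RH exceeds a threshold, as a CBH proxy.

            height_m, rh_percent : profiles ordered from the surface upward.
            Returns CBH above ground level, or None if no level qualifies.
            """
            idx = np.flatnonzero(np.asarray(rh_percent) >= rh_threshold)
            return float(height_m[idx[0]] - height_m[0]) if idx.size else None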

  17. [Comparison of diagnostic quality in hysterosalpingography between iodinated non-ionic contrast media with low and high osmolarity].

    PubMed

    Piccotti, K; Guida, D; Carbonetti, F; Stefanetti, L; Macioce, A; Cremona, A; David, V

    To compare the diagnostic quality of hysterosalpingography (HSG) between low- and high-osmolarity contrast media, we performed a retrospective evaluation of two cohorts of patients who underwent HSG using contrast media of different osmolarity: the first group, 47 patients, underwent HSG in the period September 2011-December 2012 using Iopromide 370 mg/ml; the second group, 50 patients, underwent HSG from January 2013 to October 2013 using Iomeprol 400 mg/ml. Three radiologists, in consensus reading, reviewed the radiographs by assessing the following four parameters: opacification of the uterine cavity, definition of the uterine profiles, visualization of the Fallopian tubes, and contrast media spillage into the peritoneum. A score from 0 to 3 was assigned for each of these parameters (0 = non-diagnostic examination; 1 = sufficient examination; 2 = good-quality examination; 3 = high-quality images). We documented a statistically significant higher quality in displaying the Fallopian tubes among patients studied with the high-osmolarity contrast medium (Iopromide 370 mg/ml) than with the lower-osmolarity contrast medium (Iomeprol 400 mg/ml). The use of the high-osmolarity contrast medium enabled better visualization of the tubes and a greater number of diagnoses of chronic nonspecific salpingitis, owing to the increased osmolality and viscosity of Iomeprol 400 mg/ml. There were no significant differences between the two contrast agents in the evaluation of intra-uterine pathology or of tubal patency.

  18. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    PubMed Central

    Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.

    2013-01-01

    Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty than plans for reptiles and amphibians. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531

  19. Generalizing boundaries for triangular designs, and efficacy estimation at extended follow-ups.

    PubMed

    Allison, Annabel; Edwards, Tansy; Omollo, Raymond; Alves, Fabiana; Magirr, Dominic; E Alexander, Neal D

    2015-11-16

    Visceral leishmaniasis (VL) is a parasitic disease transmitted by sandflies and is fatal if left untreated. Phase II trials of new treatment regimens for VL are primarily carried out to evaluate safety and efficacy, while pharmacokinetic data are also important to inform future combination treatment regimens. The efficacy of VL treatments is evaluated at two time points: initial cure, when treatment is completed, and definitive cure, commonly 6 months post end of treatment, to allow for slow response to treatment and detection of relapses. This paper investigates a generalization of the triangular design to impose a minimum sample size for pharmacokinetic or other analyses, and methods to estimate efficacy at extended follow-up, accounting for the sequential design and for changes in cure status during extended follow-up. We provide R functions that generalize the triangular design to impose a minimum sample size before allowing stopping for efficacy. For estimation of efficacy at a second, extended, follow-up time, the performance of a shrinkage estimator (SHE), a probability tree estimator (PTE) and the maximum likelihood estimator (MLE) was assessed by simulation. The SHE and the PTE are viable approaches to estimate efficacy at extended follow-up, although the SHE performed better than the PTE: its bias and root mean square error were lower and its coverage probabilities higher. Generalization of the triangular design is simple to implement for adaptations to meet requirements for pharmacokinetic analyses. Using the simple MLE approach to estimate efficacy at extended follow-up will lead to biased results, generally over-estimating treatment success. The SHE is recommended in trials of two or more treatments. The PTE is an acceptable alternative for one-arm trials or where use of the SHE is not possible due to computational complexity. Trial registration: NCT01067443, February 2010.

  20. Calculation of Appropriate Minimum Size of Isolation Rooms based on Questionnaire Survey of Experts and Analysis on Conditions of Isolation Room Use

    NASA Astrophysics Data System (ADS)

    Won, An-Na; Song, Hae-Eun; Yang, Young-Kwon; Park, Jin-Chul; Hwang, Jung-Ha

    2017-07-01

    After the outbreak of the MERS (Middle East Respiratory Syndrome) epidemic, issues were raised regarding the response capabilities of medical institutions, including the lack of isolation rooms at hospitals. Since then, the government of Korea has been revising regulations to enforce medical laws in order to expand the operation of isolation rooms and to strengthen standards regarding their mandatory installation at hospitals. Among general and tertiary hospitals in Korea, a total of 159 are estimated to be required to install isolation rooms to meet minimum standards. For the purpose of contributing to hospital construction plans in the future, this study conducted a questionnaire survey of experts and analysed the environment and devices necessary in isolation rooms, to determine their appropriate minimum size to treat patients. The result of the analysis is as follows: First, isolation rooms at hospitals are required to have a minimum 3,300 mm minor axis and a minimum 5,000 mm major axis for the isolation room itself, and a minimum 1,800 mm minor axis for the antechamber where personal protective equipment is donned and removed. Second, the 15 m²-or-larger standard for the floor area of isolation rooms will have to be reviewed, and standards for the minimum width of isolation rooms will have to be established.
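
    Read as plan dimensions (an interpretation of the abstract, not stated there explicitly), the recommended minimums imply a floor area of at least 3.3 m × 5.0 m = 16.5 m² for the isolation room proper, which already exceeds the current 15 m² floor-area standard; this is consistent with the authors' call to review that standard.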

  1. Energy and IAQ Implications of Alternative Minimum Ventilation Rates in California Retail and School Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dutton, Spencer M.; Fisk, William J.

    For a stand-alone retail building, a primary school, and a secondary school in each of the 16 California climate zones, the EnergyPlus building energy simulation model was used to estimate how minimum mechanical ventilation rates (VRs) affect energy use and indoor air concentrations of an indoor-generated contaminant. The modeling indicates large changes in heating energy use, but only moderate changes in total building energy use, as minimum VRs in the retail building are changed. For example, predicted state-wide heating energy consumption in the retail building decreases by more than 50% and total building energy consumption decreases by approximately 10% as the minimum VR decreases from the Title 24 requirement to no mechanical ventilation. The primary and secondary schools have notably higher internal heat gains than the retail building models, resulting in significantly reduced demand for heating. The school heating energy use was correspondingly less sensitive to changes in the minimum VR. The modeling indicates that minimum VRs influence HVAC energy and total energy use in schools by only a few percent. For both the retail building and the school buildings, minimum VRs substantially affected the predicted annual-average indoor concentrations of an indoor-generated contaminant, with larger effects in schools. The shape of the curves relating contaminant concentrations to VRs illustrates the importance of avoiding particularly low VRs.

  2. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values, while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise requires constraining the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
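
    The bias/variance bookkeeping described above is straightforward to reproduce. A minimal sketch, assuming a stack of repeated reconstructions of a known test image, uses the identity MSE = bias² + variance, averaged over pixels:

        import numpy as np

        def image_mse_decomposition(recons, truth):
            """Bias/variance decomposition of image MSE (sketch).

            recons : (n_repeats, ny, nx) reconstructions from noisy data
            truth  : (ny, nx) known test image
            Returns (mse, bias2, variance) with mse = bias2 + variance.
            """
            mean_img = recons.mean(axis=0)
            bias2 = ((mean_img - truth) ** 2).mean()
            variance = recons.var(axis=0).mean()
            return bias2 + variance, bias2, variance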

  3. Influence of Observed Diurnal Cycles of Aerosol Optical Depth on Aerosol Direct Radiative Effect

    NASA Technical Reports Server (NTRS)

    Arola, A.; Eck, T. F.; Huttunen, J.; Lehtinen, K. E. J.; Lindfors, A. V.; Myhre, G.; Smirinov, A.; Tripathi, S. N.; Yu, H.

    2013-01-01

    The diurnal variability of aerosol optical depth (AOD) can be significant, depending on location and dominant aerosol type. However, these diurnal cycles have rarely been taken into account in measurement-based estimates of aerosol direct radiative forcing (ADRF) or aerosol direct radiative effect (ADRE). The objective of our study was to estimate the influence of diurnal aerosol variability on top-of-atmosphere ADRE estimates. By including all possible AERONET sites, we assessed the influence on global ADRE estimates, while also focusing in more detail on selected sites of strongest impact to examine the possible regional effects. We calculated the ADRE with different assumptions about the daily AOD variability: taking the observed daily AOD cycle into account, and assuming diurnally constant AOD. Moreover, we estimated the corresponding differences in ADREs if the single AOD value for the daily mean was taken from the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra or Aqua overpass times, instead of accounting for the true observed daily variability. The mean impact of diurnal AOD variability on 24 h ADRE estimates, averaged over all AERONET sites, was rather small, and it was relatively small even for the cases when the AOD was chosen to correspond to the Terra or Aqua overpass time. This was true on average over all AERONET sites, although there can clearly be a much stronger impact at individual sites. Examples from selected sites demonstrated that the strongest observed AOD variability (the strongest morning-afternoon contrast) does not typically result in a significant impact on the 24 h ADRE. In those cases, the morning and afternoon AOD patterns are opposite, and thus the impact on the 24 h ADRE, when integrated over all solar zenith angles, is reduced. The most significant effect on daily ADRE was induced by AOD cycles with either maximum or minimum AOD close to local noon. In these cases, the impact on the 24 h ADRE was typically around 0.1-0.2 W/m² (both positive and negative) in absolute terms, or 5-10% in relative terms.

  4. The transition from the open minimum to the ring minimum on the ground state and on the lowest excited state of like symmetry in ozone: A configuration interaction study

    DOE PAGES

    Theis, Daniel; Ivanic, Joseph; Windus, Theresa L.; ...

    2016-03-10

    The metastable ring structure of the ozone 1¹A₁ ground state, which theoretical calculations have shown to exist, has so far eluded experimental detection. An accurate prediction for the energy difference between this isomer and the lower open structure is therefore of interest, as is a prediction for the isomerization barrier between them, which results from interactions between the lowest two ¹A₁ states. In the present work, valence correlated energies of the 1¹A₁ state and the 2¹A₁ state were calculated at the 1¹A₁ open minimum, the 1¹A₁ ring minimum, the transition state between these two minima, the minimum of the 2¹A₁ state, and the conical intersection between the two states. The geometries were determined at the full-valence multi-configuration self-consistent-field level. Configuration interaction (CI) expansions up to quadruple excitations were calculated with triple-zeta atomic basis sets. CI expansions based on eight different reference configuration spaces were explored. To obtain some of the quadruple excitation energies, the method of Correlation Energy Extrapolation by Intrinsic Scaling was generalized to the simultaneous extrapolation for two states. This extrapolation method was shown to be very accurate. On the other hand, none of the CI expansions were found to have converged to millihartree (mh) accuracy at the quadruple excitation level. The data suggest that convergence to mh accuracy is probably attained at the sextuple excitation level. On the 1¹A₁ state, the present calculations yield estimates of (ring minimum minus open minimum) ~45-50 mh and (transition state minus open minimum) ~85-90 mh. For the (2¹A₁ − 1¹A₁) excitation energy, an estimate of ~130-170 mh is found at the open minimum and 270-310 mh at the ring minimum. At the transition state, the difference (2¹A₁ − 1¹A₁) is found to be between 1 and 10 mh. The geometry of the transition state on the 1¹A₁ surface and that of the minimum on the 2¹A₁ surface nearly coincide. More accurate predictions of the energy differences also require CI expansions to at least sextuple excitations with respect to the valence space. Furthermore, for every wave function considered, the omission of the correlations of the 2s oxygen orbitals, which is a widely used approximation, was found to cause errors of about ±10 mh in the energy differences.

  5. The transition from the open minimum to the ring minimum on the ground state and on the lowest excited state of like symmetry in ozone: A configuration interaction study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theis, Daniel; Ivanic, Joseph; Windus, Theresa L.

    The metastable ring structure of the ozone 1¹A₁ ground state, which theoretical calculations have shown to exist, has so far eluded experimental detection. An accurate prediction for the energy difference between this isomer and the lower open structure is therefore of interest, as is a prediction for the isomerization barrier between them, which results from interactions between the lowest two ¹A₁ states. In the present work, valence correlated energies of the 1¹A₁ state and the 2¹A₁ state were calculated at the 1¹A₁ open minimum, the 1¹A₁ ring minimum, the transition state between these two minima, the minimum of the 2¹A₁ state, and the conical intersection between the two states. The geometries were determined at the full-valence multi-configuration self-consistent-field level. Configuration interaction (CI) expansions up to quadruple excitations were calculated with triple-zeta atomic basis sets. CI expansions based on eight different reference configuration spaces were explored. To obtain some of the quadruple excitation energies, the method of Correlation Energy Extrapolation by Intrinsic Scaling was generalized to the simultaneous extrapolation for two states. This extrapolation method was shown to be very accurate. On the other hand, none of the CI expansions were found to have converged to millihartree (mh) accuracy at the quadruple excitation level. The data suggest that convergence to mh accuracy is probably attained at the sextuple excitation level. On the 1¹A₁ state, the present calculations yield estimates of (ring minimum minus open minimum) ~45-50 mh and (transition state minus open minimum) ~85-90 mh. For the (2¹A₁ − 1¹A₁) excitation energy, an estimate of ~130-170 mh is found at the open minimum and 270-310 mh at the ring minimum. At the transition state, the difference (2¹A₁ − 1¹A₁) is found to be between 1 and 10 mh. The geometry of the transition state on the 1¹A₁ surface and that of the minimum on the 2¹A₁ surface nearly coincide. More accurate predictions of the energy differences also require CI expansions to at least sextuple excitations with respect to the valence space. Furthermore, for every wave function considered, the omission of the correlations of the 2s oxygen orbitals, which is a widely used approximation, was found to cause errors of about ±10 mh in the energy differences.

  6. Investigation of Color Constancy in 4.5-Month-Old Infants under a Strict Control of Luminance Contrast for Individual Participants

    ERIC Educational Resources Information Center

    Yang, Jiale; Kanazawa, So; Yamaguchi, Masami K.; Kuriki, Ichiro

    2013-01-01

    The current study examined color constancy in infants using a familiarization paradigm. We first obtained isoluminance in each infant as defined by the minimum motion paradigm and used these data to control the luminance of stimuli in the main experiments. In the familiarization phase of the main experiment, two identical smiling face patterns…

  7. The energy requirements of an aircraft triggered discharge

    NASA Astrophysics Data System (ADS)

    Bicknell, J. A.; Shelton, R. W.

    The corona produced at aircraft surfaces requires an energy input before it can develop into a high-current discharge and, thus, a possible lightning stroke. This energy must be drawn from the space-charge field of the thundercloud and, since this is of low density, the unique propagation characteristics of positive corona streamers may be important. Estimates of the energy made available by the propagation are compared with laboratory measurements of the minimum energy input required to trigger a breakdown. The comparison indicates a minimum streamer range for breakdown of several tens of meters. Also estimated is the energy released as a consequence of streamer-hydrometeor interactions; this is shown to be significant, so that breakdown could depend upon the precipitation rate within the cloud. Inhibiting streamer production may therefore provide an aircraft with a degree of corona protection.

  8. Effect of intravascular contrast agent on diffusion and perfusion fraction coefficients in the peripheral zone and prostate cancer.

    PubMed

    Mazaheri, Yousef; Hötker, Andreas M; Shukla-Dave, Amita; Akin, Oguz; Hricak, Hedvig

    2018-09-01

    To determine whether water diffusion and perfusion fraction coefficients in the prostate peripheral zone (PZ) and prostate cancer (PCa) are affected by intravenous contrast injection, and to explore the potential mechanism behind previously reported differences between pre- and post-contrast ADC values. Our institutional review board waived informed consent for this HIPAA-compliant, retrospective study, which included 32 patients (median age, 63 years; range, 47-77 years) with biopsy-proven, untreated PCa who underwent 3-Tesla MRI, including DW-MRI at b-values of 0, 400, 700, and 1000 s/mm² before and after gadolinium injection. For regions of interest (ROIs) in presumed benign PZ and PZ PCa, the apparent diffusion coefficient (ADC), perfusion fraction f, and diffusion coefficient D were estimated voxel-wise, and the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were estimated. Pre- and post-contrast measurements were compared by the Wilcoxon signed-rank test; P < 0.05 was considered significant. In PZ, f (P = 0.002) was significantly higher on post-contrast imaging than on pre-contrast imaging, but ADC and D values did not change significantly (P = 0.562 and 0.295, respectively). In PCa, all parameters differed significantly between post-contrast and pre-contrast imaging (P < 0.0001 for ADC, P = 0.0084 for D, and P = 0.029 for f). On post-contrast imaging, the SNR was not significantly different in PZ (P = 0.260) but was significantly lower in PCa (P < 0.0001); the CNR did not change significantly (P = 0.059). After contrast injection, ADC and D declined significantly in PCa only, while f increased significantly in both PCa and PZ. Pre- and post-contrast diffusion parameters cannot be used interchangeably for diagnostic purposes that require quantitative diffusion estimates. Copyright © 2018. Published by Elsevier Inc.
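
    For readers unfamiliar with IVIM-type fitting, a common way to obtain D and the perfusion fraction f from a handful of b-values is a segmented fit: D from a log-linear fit to the high-b points, then f from the zero-b intercept. The sketch below assumes that approach and a 400 s/mm² cutoff, neither of which is stated in the abstract.

        import numpy as np

        def ivim_segmented(b, S, b_cut=400.0):
            """Segmented IVIM estimate of D and perfusion fraction f (sketch).

            Fits ln S = ln(S0*(1 - f)) - b*D on points with b >= b_cut,
            then f = 1 - intercept/S(b=0). b_cut is an assumed value.
            """
            b = np.asarray(b, float)
            S = np.asarray(S, float)
            hi = b >= b_cut
            slope, log_intercept = np.polyfit(b[hi], np.log(S[hi]), 1)
            D = -slope
            f = 1.0 - np.exp(log_intercept) / S[b.argmin()]
            return D, f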

  9. How dusty is α Centauri?. Excess or non-excess over the infrared photospheres of main-sequence stars

    NASA Astrophysics Data System (ADS)

    Wiegert, J.; Liseau, R.; Thébault, P.; Olofsson, G.; Mora, A.; Bryden, G.; Marshall, J. P.; Eiroa, C.; Montesinos, B.; Ardila, D.; Augereau, J. C.; Bayo Aran, A.; Danchi, W. C.; del Burgo, C.; Ertel, S.; Fridlund, M. C. W.; Hajigholi, M.; Krivov, A. V.; Pilbratt, G. L.; Roberge, A.; White, G. J.; Wolf, S.

    2014-03-01

    Context. Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby, solar-type binary α Centauri have metallicities that are higher than solar, which is thought to promote giant planet formation. Aims: We aim to determine the level of emission from debris around the stars in the α Cen system. This requires knowledge of their photospheres. Having already detected the temperature minimum, Tmin, of α Cen A at far-infrared wavelengths, we here attempt to do the same for the more active companion α Cen B. Using the α Cen stars as templates, we study the possible effects that Tmin may have on the detectability of unresolved dust discs around other stars. Methods: We used Herschel-PACS, Herschel-SPIRE, and APEX-LABOCA photometry to determine the stellar spectral energy distributions in the far infrared and submillimetre. In addition, we used APEX-SHeFI observations for spectral line mapping to study the complex background around α Cen seen in the photometric images. Models of stellar atmospheres and of particulate discs, based on particle simulations and in conjunction with radiative transfer calculations, were used to estimate the amount of debris around these stars. Results: For solar-type stars more distant than α Cen, a fractional dust luminosity fd ≡ Ldust/Lstar ≈ 2 × 10⁻⁷ could account for SEDs that do not exhibit the Tmin effect. This is comparable to estimates of fd for the Edgeworth-Kuiper belt of the solar system. In contrast to the far infrared, slight excesses at the 2.5σ level are observed at 24 μm for both α Cen A and B, which, if interpreted as due to zodiacal-type dust emission, would correspond to fd ≈ (1-3) × 10⁻⁵, i.e. some 10² times that of the local zodiacal cloud. Assuming simple power-law size distributions of the dust grains, dynamical disc modelling leads to rough mass estimates of the putative Zodi belts around the α Cen stars, viz. ≲4 × 10⁻⁶ lunar masses of 4 to 1000 μm size grains, distributed according to n(a) ∝ a^-3.5. Similarly, for filled-in Tmin emission, the corresponding Edgeworth-Kuiper belts could account for ~10⁻³ lunar masses of dust. Conclusions: Our far-infrared observations lead to estimates of upper limits to the amount of circumstellar dust around the stars α Cen A and B. Light scattered and/or thermally emitted by exo-Zodi discs will have profound implications for future spectroscopic missions designed to search for biomarkers in the atmospheres of Earth-like planets. The far-infrared spectral energy distribution of α Cen B is marginally consistent with the presence of a minimum-temperature region in the upper atmosphere of the star. We also show that an α Cen A-like temperature minimum may lead to an erroneous inference about the presence of dust around other, more distant stars. Based on observations with Herschel, which is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA, and also based on observations with APEX, which is a 12 m diameter submillimetre telescope at 5100 m altitude on Llano Chajnantor in Chile. The telescope is operated by Onsala Space Observatory, Max-Planck-Institut für Radioastronomie (MPIfR), and European Southern Observatory (ESO).

  10. The Separation of Blood Components Using Standing Surface Acoustic Waves (SSAWs) Microfluidic Devices: Analysis and Simulation.

    PubMed

    Soliman, Ahmed M; Eldosoky, Mohamed A; Taha, Taha E

    2017-03-29

    The separation of blood components (WBCs, RBCs, and platelets) is important for medical applications. Standing surface acoustic wave (SSAW) microfluidic devices have recently been used for the separation of particles. In this paper, the design analysis of SSAW microfluidics is presented. The analysis of the SSAW force, with the Rayleigh angle effect and its attenuation in a liquid-loaded substrate, and of the viscous drag, hydrodynamic, and diffusion forces is explained. These analyses guide the selection of the piezoelectric material, the width of the main microchannel, the working area of the SAW, the wavelength, the minimum input power required for the separation process, and the widths of the outlet collecting microchannels. The design analysis of SSAW microfluidics is provided for determining the minimum input power required for the separation process with an appropriate displacement contrast of the particles. The analyses are applied to simulate the separation of blood components. The piezoelectric material, width of the main microchannel, working area of the SAW, wavelength, and minimum input power required for the separation process are selected as LiNbO₃, 120 μm, 1.08 mm², 300 μm, and 371 mW, respectively. The results are compared to other published results. The results of these simulations achieve minimum power consumption, a less complicated setup, and high collecting efficiency. All simulation programs are built in MATLAB.

  11. The Separation of Blood Components Using Standing Surface Acoustic Waves (SSAWs) Microfluidic Devices: Analysis and Simulation

    PubMed Central

    Soliman, Ahmed M.; Eldosoky, Mohamed A.; Taha, Taha E.

    2017-01-01

    The separation of blood components (WBCs, RBCs, and platelets) is important for medical applications. Standing surface acoustic wave (SSAW) microfluidic devices have recently been used for the separation of particles. In this paper, the design analysis of SSAW microfluidics is presented. The analysis of the SSAW force, with the Rayleigh angle effect and its attenuation in a liquid-loaded substrate, and of the viscous drag, hydrodynamic, and diffusion forces is explained. These analyses guide the selection of the piezoelectric material, the width of the main microchannel, the working area of the SAW, the wavelength, the minimum input power required for the separation process, and the widths of the outlet collecting microchannels. The design analysis of SSAW microfluidics is provided for determining the minimum input power required for the separation process with an appropriate displacement contrast of the particles. The analyses are applied to simulate the separation of blood components. The piezoelectric material, width of the main microchannel, working area of the SAW, wavelength, and minimum input power required for the separation process are selected as LiNbO3, 120 μm, 1.08 mm², 300 μm, and 371 mW, respectively. The results are compared to other published results. The results of these simulations achieve minimum power consumption, a less complicated setup, and high collecting efficiency. All simulation programs are built in MATLAB. PMID:28952506

  12. Scattering linear polarization of late-type active stars

    NASA Astrophysics Data System (ADS)

    Yakobchuk, T. M.; Berdyugina, S. V.

    2018-05-01

    Context. Many active stars are covered in spots, much more so than the Sun, as indicated by spectroscopic and photometric observations. It has been predicted that star spots induce non-zero intrinsic linear polarization by breaking the symmetry of the visible stellar disk. Although small, this effect might be useful for star spot studies, and it is particularly significant for future polarimetric atmosphere characterization of exoplanets orbiting active host stars. Aims: Using models for the center-to-limb variation of the intensity and polarization in the presence of continuum scattering, and adopting a simplified two-temperature photosphere model, we aim to estimate the intrinsic linear polarization for late-type stars of different gravity, effective temperature, and spottedness. Methods: We developed a code that simulates various spot configurations or uses arbitrary surface maps, performs numerical disk integration, and builds Stokes-parameter phase curves for a star over a rotation period at a selected wavelength. It allows estimating minimum and maximum polarization values for a given set of stellar parameters and spot coverages. Results: Based on assumptions about photosphere-to-spot temperature contrasts and spot size distributions, we calculate the linear polarization for late-type stars with Teff = 3500 K-6000 K, log g = 1.0-5.0, using plane-parallel and spherical atmosphere models. Employing random spot surface distributions, we analyze the relation between spot coverage and polarization and determine the influence of different input parameters on the results. Furthermore, we consider spot configurations with polar spots and active latitudes and longitudes.

  13. Depth Of Modulation And Spot Size Selection In Bar-Code Laser Scanners

    NASA Astrophysics Data System (ADS)

    Barkan, Eric; Swartz, Jerome

    1982-04-01

    Many optical and electronic considerations enter into the selection of optical spot size in flying-spot laser scanners of the type used in modern industrial and commercial environments. These include the scale of the symbols to be read, optical background noise present in the symbol substrate, and factors relating to the characteristics of the signal processor. Many 'front ends' consist of a linear signal conditioner followed by nonlinear conditioning and digitizing circuitry. Although the nonlinear portions of the circuit can be difficult to characterize mathematically, it is frequently possible to at least give a minimum depth-of-modulation measure to yield a worst-case guarantee of adequate performance with respect to digitization accuracy. The depth of modulation actually delivered to the nonlinear circuitry will depend on the scale, contrast, and noise content of the scanned symbol, as well as the characteristics of the linear conditioning circuitry (e.g., transfer function and electronic noise). Time- and frequency-domain techniques are applied in order to estimate the effects of these factors in selecting a spot size for a given system environment. Results obtained include estimates of the effects of the linear front-end transfer function on effective spot size, and asymmetries which can affect digitization accuracy. Plots of convolution-computed modulation patterns and other important system properties are presented. Considerations are limited primarily to Gaussian spot profiles but also apply to more general cases. Attention is paid to realistic symbol models and to implications with respect to printing tolerances.

  14. Vessel packaging effect in laser speckle contrast imaging and laser Doppler imaging.

    PubMed

    Fredriksson, Ingemar; Larsson, Marcus

    2017-10-01

    Laser speckle-based techniques are frequently used to assess microcirculatory blood flow. Perfusion estimates are calculated either by analyzing the speckle fluctuations over time, as in laser Doppler flowmetry (LDF), or by analyzing the speckle contrast, as in laser speckle contrast imaging (LSCI). The perfusion estimates depend on the amount of blood and its speed distribution. However, the perfusion estimates are commonly given in arbitrary units, as they are nonlinear and depend on the magnitude and the spatial distribution of the optical properties in the tissue under investigation. We describe how the spatial confinement of blood to vessels, called the vessel packaging effect, affects the Doppler power spectra and the speckle contrast, how it can be modeled in LDF and LSCI, and the underlying bio-optical mechanisms for these effects. As an example, the perfusion estimate is reduced by 25% for LDF, and often by more than 50% for LSCI, when blood is located in vessels with an average diameter of 40 μm instead of being homogeneously distributed within the tissue. This significant effect can be compensated for only with knowledge of the average diameter of the vessels in the tissue. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
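
    The speckle contrast that LSCI analyzes is the local standard deviation divided by the local mean of the raw speckle image; a common perfusion index is then proportional to 1/K². A minimal sketch (the 7 × 7 window is an assumed typical choice):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_contrast(img, win=7):
            """Spatial speckle contrast K = std/mean over a window (sketch)."""
            img = img.astype(float)
            m = uniform_filter(img, size=win)        # local mean
            m2 = uniform_filter(img ** 2, size=win)  # local mean of squares
            var = np.clip(m2 - m ** 2, 0.0, None)    # local variance
            return np.sqrt(var) / np.maximum(m, 1e-12)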

  15. Estimating Fluctuating Pressures From Distorted Measurements

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Leondes, Cornelius T.

    1994-01-01

    Two algorithms extract estimates of time-dependent input (upstream) pressures from the outputs of pressure sensors located at the downstream ends of pneumatic tubes. They effect deconvolutions that account for the distorting effects of the tube upon the pressure signal. The distortion of pressure measurements by pneumatic tubes is also discussed in "Distortion of Pressure Signals in Pneumatic Tubes" (ARC-12868). The time-varying input pressure is estimated from the measured time-varying output pressure by one of two deconvolution algorithms that take account of measurement noise. The algorithms are based on minimum-covariance (Kalman filtering) theory.
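
    As a toy illustration of the deconvolution idea (not the minimum-covariance algorithms of the record), a first-order tube model tau * dp_out/dt + p_out = p_in can be inverted directly, with a moving-average smoothing of the derivative standing in for proper statistical noise handling; tau and the smoothing length are assumptions.

        import numpy as np

        def estimate_input_pressure(p_out, dt, tau, smooth=5):
            """Naive inverse of a first-order pneumatic-tube model (sketch).

            Recovers p_in = p_out + tau * dp_out/dt; the derivative is
            smoothed crudely, unlike the Kalman-based noise treatment
            of the actual algorithms.
            """
            dp = np.gradient(p_out, dt)
            if smooth > 1:
                dp = np.convolve(dp, np.ones(smooth) / smooth, mode="same")
            return p_out + tau * dp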

  16. Quantitative hard x-ray phase contrast imaging of micropipes in SiC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kohn, V. G.; Argunova, T. S.; Je, J. H., E-mail: jhje@postech.ac.kr

    2013-12-15

    Peculiarities of quantitative hard x-ray phase contrast imaging of micropipes in SiC are discussed. The micropipe is modeled as a hollow cylinder with an elliptical cross section. The major and minor diameters can be recovered using a least-squares fitting procedure, by comparing the experimental data, i.e. the profile across the micropipe axis, with profiles calculated from phase contrast theory. It is shown that a single projection image gives information that does not allow a complete determination of the elliptical cross section if the orientation of the micropipe is not known. Another problem is the limited accuracy in estimating the diameters, partly because of the use of pink synchrotron radiation, which is necessary because the intensity of a monochromatic beam is not sufficient to reveal the weak contrast from a very small object. The general problems of accuracy in estimating the two diameters using the least-squares procedure are discussed. Two experimental examples are considered to demonstrate small as well as modest accuracies in estimating the diameters.

  17. Region-wide trends of nesting ospreys in northwestern Mexico: a three-decade perspective

    USGS Publications Warehouse

    Henny, Charles J.; Anderson, Daniel W.; Vera, Aradit Castellanos; Cartron, Jean-Luc E.

    2008-01-01

    We used a double-sampling technique (air plus ground survey) in 2006, with partial double coverage, to estimate the present size of the Osprey (Pandion haliaetus) nesting population in northwestern Mexico (coastal Baja California, islands in the Gulf of California, and coastal Sonora and Sinaloa). With the exception of Natividad, Cedros, and San Benitos islands along the Pacific coast of Baja California (all three excluded from our coverage in 2006 due to fog), this survey was a repeat of previous surveys conducted by us with the same protocol in 1977 and 1992/1993, allowing for estimates of regional population trends. The minimum population estimate for the area we surveyed in 2006 was 1343 nesting pairs, an 81% increase since 1977, but only a 3% increase since 1992/1993. The population on the Gulf side of Baja California generally remained stable during the three surveys (255, 236, and 252 pairs, respectively). The population of the Midriff Islands (Gulf of California in the vicinity of 29°N latitude) remained similar from 1992/1993 (308 pairs) to 2006 (289 pairs), but with notable population changes on the largest two islands (Guardian Angel: 45 to 105 pairs [133% increase]; Tiburón: 164 to 109 pairs [34% decrease]). The minimum estimated Osprey population on the Sonora mainland decreased in a manner similar to adjacent Isla Tiburón, i.e., by 26%, from 214 pairs in 1993 to 158 pairs in 2006. In contrast, the population in coastal Sinaloa, which had increased by 150% between 1977 and 1993, grew again by 58% between 1993 and 2006, from 180 to 285 pairs. Our survey confirmed previously described patterns of rapid population changes at a local level, coupled with apparent shifts in spatial distribution. The large ground-nesting population that until recently nested on two islands in San Ignacio Lagoon (Pacific Ocean side, Baja California) was no longer present on the islands in 2006, but an equivalent number of pairs were found to the north and south of the lagoon, nesting in small towns and along adjoining overhead electric lines, with no overall change in population size for that general area (198 pairs in 1992; 199 in 2006). Use of artificial nesting structures was 4.3% in 1977 and 6.2% in 1992/1993, but jumped to 26.4% in 2006. Use of poles that support overhead electric lines poses a risk of electrocution to Ospreys and also causes power outages and fires. We recommend modification of these poles to safely accommodate Osprey nests, as has been successfully accomplished in many countries.
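
    The double-sampling technique mentioned above corrects an extensive aerial count by the ground/air detection ratio observed in the double-covered sub-area. A generic sketch of that ratio estimator follows (not necessarily the study's exact form):

        def double_sample_estimate(air_total, air_sub, ground_sub):
            """Ratio-type double-sampling population estimate (sketch).

            air_total  : nests counted from the air over the whole area
            air_sub    : air count in the sub-area also surveyed on the ground
            ground_sub : ground count in that same sub-area
            """
            return air_total * (ground_sub / air_sub)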

  18. Microfluidic breakups of confined droplets against a linear obstacle: The importance of the viscosity contrast

    NASA Astrophysics Data System (ADS)

    Salkin, Louis; Courbin, Laurent; Panizza, Pascal

    2012-09-01

    Combining experiments and theory, we investigate the break-up dynamics of deformable objects, such as drops and bubbles, against a linear micro-obstacle. Our experiments bring the role of the viscosity contrast Δη between dispersed and continuous phases to light: the evolution of the critical capillary number to break a drop as a function of its size is either nonmonotonic (Δη>0) or monotonic (Δη≤0). In the case of positive viscosity contrasts, experiments and modeling reveal the existence of an unexpected critical object size for which the critical capillary number for breakup is minimum. Using simple physical arguments, we derive a model that well describes observations, provides diagrams mapping the four hydrodynamic regimes identified experimentally, and demonstrates that the critical size originating from confinement solely depends on geometrical parameters of the obstacle.

  19. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model-based control techniques to safely enhance the operation of lithium-ion batteries. An overview of the contributions that address the challenges that arise is provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first-principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial-differential-equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full-order, complex multi-PDE electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced-order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum-time charge protocols are analyzed in detail, subject to solid- and electrolyte-phase concentration constraints as well as temperature constraints. The optimization scheme is examined using different input current bounds, and insight on battery design for fast charging is provided.
Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model-based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where the electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are thereby optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and a traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.
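
    To make the reference-governor idea of Chapter 2 concrete, here is a toy scalar version: at each step the governor scales the requested reference toward the last admissible one so that the predicted state trajectory stays within a constraint. The battery RGs act on nonlinear electrochemical PDE states; this linear example only shows the mechanism, and all names and parameters are illustrative.

        import numpy as np

        def reference_governor_step(x, v_prev, r, a, b, x_max,
                                    horizon=50, grid=101):
            """One step of a scalar reference governor (illustrative sketch).

            System: x[k+1] = a*x[k] + b*v with constraint x <= x_max. Picks
            the largest kappa in [0, 1] such that the adjusted reference
            v = v_prev + kappa*(r - v_prev), held constant, keeps the
            predicted trajectory admissible over the horizon.
            """
            for kappa in np.linspace(1.0, 0.0, grid):
                v = v_prev + kappa * (r - v_prev)
                xk, ok = x, True
                for _ in range(horizon):
                    xk = a * xk + b * v
                    if xk > x_max:
                        ok = False
                        break
                if ok:
                    return v
            return v_prev  # fall back to the last admissible reference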

  20. Geologic and geophysical investigations of the Zuni-Bandera volcanic field, New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ander, M.E.; Heiken, G.; Eichelberger, J.

    1981-05-01

    A positive, northeast-trending gravity anomaly, 90 km long and 30 km wide, extends southwest from the Zuni uplift, New Mexico. The Zuni-Bandera volcanic field, an alignment of 74 basaltic vents, is parallel to the eastern edge of the anomaly. Lavas display a bimodal distribution of tholeiitic and alkalic compositions, and were erupted over a period from 4 Myr ago to the present. A residual gravity profile taken perpendicular to the major axis of the anomaly was analyzed using linear programming and ideal-body theory to obtain bounds on the density contrast, depth, and minimum thickness of the gravity body. Two-dimensionality was assumed. The limiting case where the anomalous body reaches the surface gives 0.1 g/cm³ as the greatest lower bound on the maximum density contrast. If 0.4 g/cm³ is taken as the geologically reasonable upper limit on the maximum density contrast, the least upper bound on the depth of burial is 3.5 km and the minimum thickness is 2 km. A shallow mafic intrusion, emplaced sometime before Laramide deformation, is proposed to account for the positive gravity anomaly. Analysis of a magnetotelluric survey suggests that the intrusion is not due to recent basaltic magma associated with the Zuni-Bandera volcanic field. This large basement structure has controlled the development of the volcanic field; vent orientations have changed somewhat through time, but the trend of the volcanic chain followed the edge of the basement structure. It has also exhibited some control on deformation of the sedimentary section.

  1. Comment on "The Predicted Size of Cycle 23 Based on the Inferred three-cycle Quasiperiodicity of the Planetary Index Ap"

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    1999-01-01

    Recently, Ahluwalia reviewed the solar and geomagnetic data for the last 6 decades and remarked that these data "indicate the existence of a three-solar-activity-cycle quasiperiodicity in them." Furthermore, on the basis of this inferred quasiperiodicity, he asserted that cycle 23 represents the initial cycle in a new three-cycle string, implying that it "will be more modest (a la cycle 17) with an annual mean sunspot number count of 119.3 +/- 30 at the maximum", a prediction that is considerably below the consensus prediction of 160 +/- 30 by Joselyn et al. and similar predictions by others based on a variety of predictive techniques. Several major sticking points of Ahluwalia's presentation, however, must be readdressed, and these issues form the basis of this comment. First, Ahluwalia appears to have based his analysis on a data set of Ap index values that is erroneous. For example, he depicts for the interval 1932-1997 the variation of the Ap index in terms of annual averages, contrasting them against annual averages of sunspot number (SSN), and he lists for cycles 17-23 the minimum and maximum value of each, as well as the years in which they occur and a quantity which he calls "Amplitude" (defined as the numeric difference between the maximum and minimum values). In particular, he identifies the minimum Ap index (i.e., the minimum value of the Ap index in the vicinity of sunspot cycle minimum, which usually occurs in the year following sunspot minimum and which will hereafter be called, simply, Ap min) and the year in which it occurs, for cycles 17-23, respectively.

  2. Evaluation of the dependence of the exposure dose on the attenuation correction in brain PET/CT scans using 18F-FDG

    NASA Astrophysics Data System (ADS)

    Choi, Eun-Jin; Jeong, Moon-Taeg; Jang, Seong-Joo; Choi, Nam-Gil; Han, Jae-Bok; Yang, Nam-Hee; Dong, Kyung-Rae; Chung, Woon-Kwan; Lee, Yun-Jong; Ryu, Young-Hwan; Choi, Sung-Hyun; Seong, Kyeong-Jeong

    2014-01-01

    This study examined whether scanning could be performed with a minimum dose and minimum exposure to the patient after an attenuation correction. A Hoffman 3D Brain Phantom was used in BIO_40 and D_690 PET/CT scanners, and the CT dose for the equipment was classified as low dose (minimum dose), medium dose (general dose for scanning), and high dose (dose with use of a contrast medium) before obtaining the image at a fixed kilovoltage peak (kVp), with the tube current (mA) adjusted gradually in 17-20 stages. A PET image was then obtained to perform an attenuation correction based on an attenuation map before analyzing the dose difference. For tube current-time products in the range of 33-190 milliampere-seconds (mAs) with BIO_40, a significant difference in the effective dose was observed between the minimum and the maximum mAs (p < 0.05). According to a Scheffé post-hoc test, the effective dose increased approximately 5.26-fold from the minimum to the maximum. For tube currents in the range of 10-200 mA with D_690, a significant difference in the effective dose was observed between the minimum and the maximum mA (p < 0.05). The Scheffé post-hoc test revealed a 20.5-fold difference. In conclusion, because the effective exposure dose increases with increasing operating current, the exposure in a brain scan can be reduced if the CT dose for the transmission scan is minimized.

  3. A measurement of the holographic minimum-observable beam branching ratio in the FERMILAB 15-ft bubble chamber

    NASA Astrophysics Data System (ADS)

    Aderholz, M.; Aggarwal, M. M.; Akbari, H.; Allport, P. P.; Badyal, S. K.; Ballagh, H. C.; Barth, M.; Baton, J. P.; Bingham, H. H.; Bjelkhagen, H.; Brucker, E. B.; Burnstein, R. A.; Campbell, J. R.; Cence, R. J.; Chatterjee, T. K.; Clayton, E. F.; Corrigan, G.; Coutures, C.; DeProspo, D.; Devanand; De Wolf, E. A.; Faulkner, P. J. W.; Foeth, H.; Fretter, W. B.; Geissler, K.; Gupta, V. K.; Hanlon, J.; Harigel, G. G.; Harris, F. A.; Hawkins, J.; Jabiol, M. A.; Jacques, P.; Jones, G. T.; Jones, M. D.; Kafka, T.; Kalelkar, M.; Kasper, P.; Kohli, J. M.; Koller, E. L.; Krawiec, R. J.; Lauko, M.; Lys, J. E.; Marage, P.; Milburn, R. H.; Miller, D. B.; Mittra, I. S.; Mobayyen, M. M.; Moreels, J.; Morrison, D. R. O.; Myatt, G.; Naon, R.; Napier, A.; Naylor, P.; Neveu, M.; Passmore, D.; Peters, M. W.; Peterson, V. Z.; Plano, R.; Rao, N. K.; Rubin, H. A.; Sacton, J.; Sambyal, S. S.; Schmitz, N.; Schneps, J.; Sekulin, R. L.; Sewell, S.; Singh, J. B.; Smart, W.; Stamer, P.; Varvell, K. E.; Verluyten, L.; Voyvodic, L.; Wachsmuth, H.; Wainstein, S.; Williams, W.; Willocq, S.; Yost, G. P.; E-632 Collaboration

    1999-01-01

    Holography has been used successfully in combination with conventional optics for the first time in a large cryogenic bubble chamber, the 15-foot bubble chamber at Fermilab, during a physics run. The innovative system combined the reference beam with the object beam, irradiating a conical volume of ~1.4 m³. Bubble tracks from neutrino interactions with a width of ~120 μm have been recorded with good contrast. The ratio of intensities of the object light to the reference light striking the film is called the beam branching ratio. In our experiment we obtained an exceedingly small minimum-observable ratio of (0.54 ± 0.21) × 10⁻⁷. The technology has the potential for a wide range of applications.

  4. System design of the Pioneer Venus spacecraft. Volume 7: Communication subsystem studies

    NASA Technical Reports Server (NTRS)

    Newlands, D. M.

    1973-01-01

    Communications subsystem tradeoffs were undertaken to establish a low cost and low weight design consistent with the mission requirements. Because of the weight constraint of the Thor/Delta launched configuration, minimum weight was emphasized in determining the Thor/Delta design. In contrast, because of the greatly relaxed weight constraint of the Atlas/Centaur launched configuration, minimum cost and off the shelf hardware were emphasized and the attendant weight penalties accepted. Communication subsystem hardware elements identified for study included probe and bus antennas (CM-6, CM-17), power amplifiers (CM-10), and the large probe transponder and small probe stable oscillator required for doppler tracking (CM-11, CM-16). In addition, particular hardware problems associated with the probe high temperature and high-g environment were investigated (CM-7).

  5. Channel estimation based on quantized MMP for FDD massive MIMO downlink

    NASA Astrophysics Data System (ADS)

    Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie

    2016-10-01

    In this paper, we consider channel estimation for Massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the Massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods including the LS, LMMSE, CoSaMP and conventional MMP estimators.
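
    As a rough illustration of the greedy sparse-recovery idea behind MMP, the sketch below implements the single-path special case (orthogonal matching pursuit) for pilot-based channel estimation; the measurement matrix Phi, the observations y, and the assumed path count are placeholders, not the authors' quantized formulation.

        import numpy as np

        def omp_channel_estimate(Phi, y, sparsity):
            """Greedy sparse channel estimation (OMP, the single-path
            special case of multipath matching pursuit).

            Phi      : (M, N) pilot/measurement matrix, M << N.
            y        : (M,) received pilot observations.
            sparsity : assumed number of propagation paths.
            """
            residual = y.copy()
            support = []
            for _ in range(sparsity):
                # Pick the column most correlated with the residual.
                idx = int(np.argmax(np.abs(Phi.conj().T @ residual)))
                support.append(idx)
                # Least-squares re-estimate on the current support.
                h_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
                residual = y - Phi[:, support] @ h_s
            h = np.zeros(Phi.shape[1], dtype=complex)
            h[support] = h_s
            return h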

  6. Theoretical study of X⁻ · 1 · YF (1 = triazine, X = Cl, Br and I, Y = H, Cl, Br, I, PH₂ and AsH₂): noncovalently electron-withdrawing effects on anion-arene interactions.

    PubMed

    Chen, Yishan; Yao, Lifeng

    2014-01-01

    The ternary complexes X(-) · 1 · YF (1 = triazine, X = Cl, Br and I, Y = H, Cl, Br, I, PH2 and AsH2) have been investigated by MP2 calculations to understand the noncovalently electron-withdrawing effects on anion-arene interactions. The results indicate that in the binary complexes (1 · X(-)), both weak σ-type and anion-π complexes can be formed for Cl(-) and Br(-), but only the anion-π complex can be formed for I(-). Moreover, the hydrogen-bonding complex is the global minimum for all three halides in the binary complexes. However, in the ternary complexes, the anion-π complex becomes unstable and only the σ complex is retained in many cases for Cl(-) and Br(-). The anion-π complex remains stable only when YF = HF. In contrast with the binary complexes, the σ complex becomes the global minimum for Cl(-) and Br(-) in the ternary complexes. These changes in binding mode and strength are consistent with the results of covalently electron-withdrawing effects. However, in contrast with covalently electron-withdrawing substituents, Cl(-) and Br(-) can attack the aromatic carbon atom to form a strong σ complex when the noncovalently electron-withdrawing effect is induced by halogen bonding. The binding behavior of I(-) differs from that of Cl(-) and Br(-) in two respects. First, the anion-π complex for I(-) also remains stable when the noncovalent interaction is halogen bonding. Second, the anion-π complex for I(-) is the global minimum whenever it is retained as a stable structure.

  7. Contrasts between chemical and physical estimates of baseflow help discern multiple sources of water contributing to rivers

    NASA Astrophysics Data System (ADS)

    Cartwright, I.; Gilfedder, B.; Hofmann, H.

    2013-05-01

    This study compares geochemical and physical methods of estimating baseflow in the upper reaches of the Barwon River, southeast Australia. Estimates of baseflow from physical techniques such as local minima and recursive digital filters are higher than those based on chemical mass balance using continuous electrical conductivity (EC). Between 2001 and 2011 the baseflow flux calculated using chemical mass balance is between 1.8 × 10³ and 1.5 × 10⁴ ML yr⁻¹ (15 to 25% of the total discharge in any one year) whereas recursive digital filters yield baseflow fluxes of 3.6 × 10³ to 3.8 × 10⁴ ML yr⁻¹ (19 to 52% of discharge) and the local minimum method yields baseflow fluxes of 3.2 × 10³ to 2.5 × 10⁴ ML yr⁻¹ (13 to 44% of discharge). These differences most probably reflect how the different techniques characterise baseflow. Physical methods probably aggregate much of the water from delayed sources as baseflow. However, as many delayed transient water stores (such as bank return flow or floodplain storage) are likely to be geochemically similar to surface runoff, chemical mass balance calculations aggregate them with the surface runoff component. The mismatch between geochemical and physical estimates is greatest following periods of high discharge in winter, implying that these transient stores of water feed the river for several weeks to months. Consistent with these interpretations, modelling of bank storage indicates that bank return flows provide water to the river for several weeks after flood events. EC vs. discharge variations during individual flow events also imply that an inflow of low EC water stored within the banks or on the floodplain occurs as discharge falls. The joint use of physical and geochemical techniques allows a better understanding of the different components of water that contribute to river flow, which is important for the management and protection of water resources.
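
    For readers unfamiliar with the two families of techniques being contrasted, the sketch below pairs a one-parameter recursive digital filter (Lyne-Hollick form, with the commonly used alpha = 0.925) with a two-component chemical (EC) mass balance; the filter parameter and the end-member EC values are illustrative assumptions, not values from this study.

        import numpy as np

        def lyne_hollick_baseflow(Q, alpha=0.925):
            """One-parameter recursive digital filter (Lyne-Hollick form):
            separates quickflow, then returns baseflow = Q - quickflow."""
            qf = np.zeros_like(Q, dtype=float)
            for t in range(1, len(Q)):
                qf[t] = alpha * qf[t - 1] + 0.5 * (1 + alpha) * (Q[t] - Q[t - 1])
                qf[t] = min(max(qf[t], 0.0), Q[t])  # keep 0 <= quickflow <= Q
            return Q - qf

        def cmb_baseflow(Q, ec_river, ec_runoff, ec_baseflow):
            """Chemical mass balance: two-component mixing on EC,
            with end-member ECs supplied by the analyst."""
            frac = (ec_river - ec_runoff) / (ec_baseflow - ec_runoff)
            return Q * np.clip(frac, 0.0, 1.0)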

  8. 20 CFR 1001.152 - Hold-harmless criteria and minimum funding level.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...: (1) Update, as appropriate, the States' estimates of TAP workload and reserve sufficient funds for that purpose from the total amount available for allocation to the States. Beyond TAP workload, no...

  9. Wave-Based Algorithms and Bounds for Target Support Estimation

    DTIC Science & Technology

    2015-05-15

    vector electromagnetic formalism in [5]. This theory leads to three main variants of the optical theorem detector, in particular, three alternative...further expands the applicability for transient pulse change detection of arbitrary nonlinear-media and time-varying targets [9]. This report... electromagnetic methods a new methodology to estimate the minimum convex source region and the (possibly nonconvex) support of a scattering target from knowledge of

  10. Void Growth and Coalescence Simulations

    DTIC Science & Technology

    2013-08-01

    distortion and damage, minimum time step, and appropriate material model parameters. Further, a temporal and spatial convergence study was used to...estimate errors, thus, this study helps to provide guidelines for modeling of materials with voids. Finally, we use a Gurson model with Johnson-Cook...

  11. Estimates of downed woody debris decay class transitions for forests across the eastern United States

    Treesearch

    Matthew B. Russell; Christopher W. Woodall; Shawn Fraver; Anthony W. D'Amato

    2013-01-01

    Large-scale inventories of downed woody debris (DWD; downed dead wood of a minimum size) often record decay status by assigning pieces to classes of decay according to their visual/structural attributes (e.g., presence of branches, log shape, and texture and color of wood). DWD decay classes are not only essential for estimating current DWD biomass and carbon stocks,...

  12. Development of a predictive model to estimate the effect of soil solarization on survival of soilborne inoculum of Phytophthora ramorum and Phytophthora pini

    Treesearch

    Fumiaki Funahashi; Jennifer L. Parke

    2017-01-01

    Soil solarization has been shown to be an effective tool to manage Phytophthora spp. within surface soils, but the minimum time required to achieve local eradication under variable weather conditions remains unknown. A mathematical model could help predict the effectiveness of solarization at different sites and soil depths....

  13. Using regression methods to estimate stream phosphorus loads at the Illinois River, Arkansas

    USGS Publications Warehouse

    Haggard, B.E.; Soerens, T.S.; Green, W.R.; Richards, R.P.

    2003-01-01

    The development of total maximum daily loads (TMDLs) requires evaluating existing constituent loads in streams. Accurate estimates of constituent loads are needed to calibrate watershed and reservoir models for TMDL development. The best approach to estimate constituent loads is high frequency sampling, particularly during storm events, and mass integration of constituents passing a point in a stream. Most often, resources are limited and discrete water quality samples are collected on fixed intervals and sometimes supplemented with directed sampling during storm events. When resources are limited, mass integration is not an accurate means to determine constituent loads and other load estimation techniques such as regression models are used. The objective of this work was to determine a minimum number of water-quality samples needed to provide constituent concentration data adequate to estimate constituent loads at a large stream. Twenty sets of water quality samples with and without supplemental storm samples were randomly selected at various fixed intervals from a database at the Illinois River, northwest Arkansas. The random sets were used to estimate total phosphorus (TP) loads using regression models. The regression-based annual TP loads were compared to the integrated annual TP load estimated using all the data. At a minimum, monthly sampling plus supplemental storm samples (six samples per year) was needed to produce a root mean square error of less than 15%. Water quality samples should be collected at least semi-monthly (every 15 days) in studies lasting less than two years if seasonal time factors are to be used in the regression models. Annual TP loads estimated from independently collected discrete water quality samples further demonstrated the utility of using regression models to estimate annual TP loads in this stream system.
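
    A minimal sketch of the kind of regression load model described here, assuming a log-linear rating curve with sinusoidal seasonal terms (the exact regressors used in the study may differ); the bias correction normally applied when back-transforming from log space (e.g., a smearing estimator) is omitted for brevity.

        import numpy as np

        def _design_matrix(Q, dayofyear):
            # ln C = b0 + b1 ln Q + b2 sin(2*pi*t) + b3 cos(2*pi*t)
            t = dayofyear / 365.25
            return np.column_stack([np.ones_like(Q), np.log(Q),
                                    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])

        def fit_load_model(Q, C, dayofyear):
            """Fit the log-linear concentration model by least squares."""
            X = _design_matrix(Q, dayofyear)
            beta, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
            return beta

        def annual_load(beta, Q_daily, dayofyear):
            """Predict daily concentrations and integrate the annual load
            (log-space bias correction omitted in this sketch)."""
            C_hat = np.exp(_design_matrix(Q_daily, dayofyear) @ beta)
            return np.sum(C_hat * Q_daily)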

  14. Real or perceived impediments to minimum pricing of alcohol in Australia: public opinion, the industry and the law.

    PubMed

    Chalmers, Jenny; Carragher, Natacha; Davoren, Sondra; O'Brien, Paula

    2013-11-01

    A burgeoning body of empirical evidence demonstrates that increases in the price of alcohol can reduce per capita alcohol consumption and harmful drinking. Taxes on alcohol can be raised to increase prices, but this strategy can be undermined if the industry absorbs the tax increase and cross-subsidises the price of one alcoholic beverage with other products. Such loss-leading strategies are not possible with minimum pricing. We argue that a minimum (or floor) price for alcohol should be used as a complement to alcohol taxation. Several jurisdictions have already introduced minimum pricing (e.g., Canada, Ukraine) and others are currently investigating pathways to introduce a floor price (e.g., Scotland). Tasked by the Australian government to examine the public interest case for a minimum price, Australia's peak preventative health agency recommended against setting one at the present time. The agency was concerned that there was insufficient Australian specific modelling evidence to make robust estimates of the net benefits. Nonetheless, its initial judgement was that it would be difficult for a minimum price to produce benefits for Australia at the national level. Whilst modelling evidence is certainly warranted to support the introduction of the policy, the development and uptake of policy is influenced by more than just empirical evidence. This article considers three potential impediments to minimum pricing: public opinion and misunderstandings or misgivings about the operation of a minimum price; the strength of alcohol industry objections and measures to undercut the minimum price through discounts and promotions; and legal obstacles including competition and trade law. The analysis of these factors is situated in an Australian context, but has salience internationally. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Implications of Extended Solar Minima

    NASA Technical Reports Server (NTRS)

    Adams, Mitzi L.; Davis, J. M.

    2009-01-01

    Since the discovery of periodicity in the solar cycle, the historical record of sunspot number has been carefully examined in attempts to make predictions about the next cycle. Much emphasis has been placed on predicting the maximum amplitude and length of the next cycle. Because current space-based and suborbital instruments are designed to study active phenomena, there is considerable interest in estimating the length and depth of the current minimum. We have developed criteria for the definition of a minimum and applied them to the historical sunspot record starting in 1749. In doing so, we find that 1) the current minimum is not yet unusually long and 2) there is no obvious way of predicting when, using our definition, the current minimum may end. However, by grouping the data into 22-year cycles there is an interesting pattern of extended minima that recurs every fourth or fifth 22-year cycle. A preliminary comparison of this pattern with other records suggests the possibility of a correlation between extended minima and lower levels of solar irradiance.
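
    As a sketch of how such a minimum definition might be applied to the historical record, the code below flags contiguous runs where a smoothed sunspot number stays below a threshold and reports their durations; the threshold is a hypothetical placeholder, not the authors' criterion.

        import numpy as np

        def minimum_episodes(ssn_smoothed, threshold=10.0):
            """Return (start_index, length) for each contiguous run where
            the smoothed sunspot number stays below the threshold."""
            below = np.asarray(ssn_smoothed) < threshold
            episodes, start = [], None
            for i, flag in enumerate(below):
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    episodes.append((start, i - start))
                    start = None
            if start is not None:
                episodes.append((start, len(below) - start))
            return episodes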

  16. Hybrid Weighted Minimum Norm Method: a new method based on LORETA to solve the EEG inverse problem.

    PubMed

    Song, C; Zhuang, T; Wu, Q

    2005-01-01

    This paper puts forward a new method to solve the EEG inverse problem. It builds on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to be active synchronously; second, the distribution of the source space is sparse; third, the activity of the sources is highly centralized. We take this prior knowledge as the prerequisite for developing the EEG inverse solution, assuming no other characteristics of the inverse solution, to realize the most common 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA, a low-resolution method that emphasizes 'localization', and of FOCUSS, a high-resolution method that emphasizes 'separability'. The method remains within the framework of the weighted minimum norm method. The keystone is to construct a weighting matrix that draws on the existing smoothness operator, a competition mechanism and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using information from the initial solution, and repeat this process until the last two estimates remain unchanged.
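
    A minimal sketch of the iterative reweighting idea underlying such LORETA/FOCUSS hybrids, within the weighted minimum norm framework; the diagonal reweighting rule and the regularization parameter lam are generic assumptions, not the authors' exact weighting matrix.

        import numpy as np

        def reweighted_min_norm(L, b, lam=1e-2, n_iter=10):
            """Iteratively reweighted minimum-norm EEG inverse (FOCUSS-style).

            L : (n_sensors, n_sources) lead-field matrix.
            b : (n_sensors,) measured scalp potentials.
            """
            w = np.ones(L.shape[1])                  # start unweighted
            for _ in range(n_iter):
                W = np.diag(w)
                G = L @ W @ L.T + lam * np.eye(L.shape[0])
                x = W @ L.T @ np.linalg.solve(G, b)  # weighted minimum-norm step
                w = np.abs(x)                        # reweight: reinforce strong sources
            return x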

  17. The trajectory and atmospheric impact of asteroid 2014 AA

    NASA Astrophysics Data System (ADS)

    Farnocchia, Davide; Chesley, Steven R.; Brown, Peter G.; Chodas, Paul W.

    2016-08-01

    Near-Earth asteroid 2014 AA entered the Earth's atmosphere on 2014 January 2, only 21 h after being discovered by the Catalina Sky Survey. In this paper we compute the trajectory of 2014 AA by combining the available optical astrometry, seven ground-based observations over 69 min, and the International Monitoring System detection of the atmospheric impact infrasonic airwaves in a least-squares orbit estimation filter. The combination of these two sources of observations results in a tremendous improvement in the orbit uncertainties. The impact time is 3:05 UT with a 1σ uncertainty of 6 min, while the impact location corresponds to a west longitude of 44.2° and a latitude of 13.1° with a 1σ uncertainty of 140 km. The minimum impact energy estimated from the infrasound data and the impact velocity result in an estimated minimum mass of 22.6 t. By propagating the trajectory of 2014 AA backwards we find that the only window for finding precovery observations is for the three days before its discovery.

  18. Vision and the dimensions of nerve fibers.

    PubMed

    Wade, Nicholas J

    2005-12-01

    Vision provided the obvious source of determining the dimensions of nerve fibers when suitable achromatic microscopes were directed at neural tissue in the 1830s. The earlier microscopes of Hooke and Leeuwenhoek were unable to resolve such small structures adequately. However, it was not Hooke's microscope that led to an estimate of the dimensions of nerve fibers, but his experiments on the limits of visual resolution; he determined that a separation of one minute of arc was the minimum that could normally be seen. Descartes had earlier speculated that the retina consisted of the ends of fibers of the optic nerve, and that their size defined the limits of what could be seen. Estimates of the diameters of nerve fibers were made on the basis of human visual acuity by Porterfield in 1738; he calculated the diameters of nerve fibers in the retina as one 7,200th part of an inch (0.0035 mm), based on the resolution of one minute as the minimum visible. In the same year, Jurin questioned the reliability of such estimates because of variations in visual resolution with different stimuli.

  19. Comment on "Inference with minimal Gibbs free energy in information field theory".

    PubMed

    Iatsenko, D; Stefanovska, A; McClintock, P V E

    2012-03-01

    Enßlin and Weig [Phys. Rev. E 82, 051112 (2010)] have introduced a "minimum Gibbs free energy" (MGFE) approach for estimation of the mean signal and signal uncertainty in Bayesian inference problems: it aims to combine the maximum a posteriori (MAP) and maximum entropy (ME) principles. We point out, however, that there are some important questions to be clarified before the new approach can be considered fully justified, and therefore able to be used with confidence. In particular, after obtaining a Gaussian approximation to the posterior in terms of the MGFE at some temperature T, this approximation should always be raised to the power of T to yield a reliable estimate. In addition, we show explicitly that MGFE indeed incorporates the MAP principle, as well as the MDI (minimum discrimination information) approach, but not the well-known ME principle of Jaynes [E.T. Jaynes, Phys. Rev. 106, 620 (1957)]. We also illuminate some related issues and resolve apparent discrepancies. Finally, we investigate the performance of MGFE estimation for different values of T, and we discuss the advantages and shortcomings of the approach.

  20. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509
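
    To make the fading idea concrete, the sketch below shows one step of a linear Kalman filter in which the squared Mahalanobis distance of the innovation triggers inflation of the predicted covariance; the chi-square threshold and the fading rule are simplified assumptions, and the paper's unscented, multi-sensor formulation is considerably richer.

        import numpy as np

        def fading_kf_step(x, P, z, F, H, Q, R, chi2_thresh=7.88):
            """One linear Kalman step with an adaptive fading factor."""
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            v = z - H @ x_pred                       # innovation
            S = H @ P_pred @ H.T + R
            m2 = float(v @ np.linalg.solve(S, v))    # squared Mahalanobis distance
            if m2 > chi2_thresh:                     # model mismatch suspected
                lam = m2 / chi2_thresh               # simple fading factor
                P_pred = lam * (F @ P @ F.T) + Q     # inflate predicted covariance
                S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ v
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new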

  1. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in the process of estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and to keep it from falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate in this study. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy minimum algorithm. This algorithm can be applied to the correction of MR image bias fields.
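
    A minimal sketch of an adaptive-inertia-weight PSO in the spirit described above, using normalized swarm spread as a crude premature-convergence indicator; the indicator and the update rule are assumptions, since the paper's exact formulas are not given here.

        import numpy as np

        def adaptive_pso(cost, lo, hi, n_particles=30, n_iter=100,
                         w_min=0.4, w_max=0.9, c1=2.0, c2=2.0, seed=0):
            """PSO whose inertia weight rises when the swarm spread is low
            (premature convergence suspected), encouraging escape from
            local optima, and falls otherwise to refine the search."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x = rng.uniform(lo, hi, (n_particles, lo.size))
            v = np.zeros_like(x)
            pbest, pcost = x.copy(), np.array([cost(p) for p in x])
            g = pbest[np.argmin(pcost)]
            diag = np.linalg.norm(hi - lo)           # search-space diagonal
            for _ in range(n_iter):
                spread = np.mean(np.linalg.norm(x - x.mean(axis=0), axis=1))
                # Low spread -> raise inertia toward w_max.
                w = np.clip(w_min + (w_max - w_min) * (1.0 - spread / diag),
                            w_min, w_max)
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                c = np.array([cost(p) for p in x])
                improved = c < pcost
                pbest[improved], pcost[improved] = x[improved], c[improved]
                g = pbest[np.argmin(pcost)]
            return g, float(pcost.min())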

  2. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    PubMed

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.

  3. Prevention of Contrast-Induced Nephropathy (CIN) in Interventional Radiology Practice

    PubMed Central

    Gupta, Rajan K.; Bang, Tami J.

    2010-01-01

    Contrast-induced nephropathy (CIN) is a widely recognized and clinically significant problem in patients undergoing an increasing number of minimally invasive procedures that require contrast administration. Contrast-induced nephropathy is the third most common cause of hospital-acquired renal failure and has significant prognostic implications on patient outcomes. Interventional practitioners are faced with challenging decisions regarding prophylaxis and patient management. The major risk factor for developing CIN is preexisting renal dysfunction, particularly in association with diabetes. Patients are considered to be at risk when estimated glomerular filtration rate (eGFR) or estimated creatinine clearance (eCCr) is less than 60. The cornerstone of prevention of CIN is appropriate risk stratification, intravenous hydration with normal saline or sodium bicarbonate, appropriate withholding of nephrotoxic medications, use of low or iso-osmolar contrast media, and various intraprocedural methods for iodinated contrast dose reduction. Although N-acetylcysteine administration is popular, it remains unproven. Practitioners must be familiar with prevention strategies and diagnosis of CIN to minimize its clinical impact. PMID:22550376

  4. Statistical analysis of multivariate atmospheric variables. [cloud cover

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.

    1979-01-01

    Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.

  5. A Minimum ΔV Orbit Maintenance Strategy for Low-Altitude Missions Using Burn Parameter Optimization

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.

    2011-01-01

    Orbit maintenance is the series of burns performed during a mission to ensure the orbit satisfies mission constraints. Low-altitude missions often require non-trivial orbit maintenance ΔV due to sizable orbital perturbations and minimum altitude thresholds. A strategy is presented for minimizing this ΔV using impulsive burn parameter optimization. An initial estimate for the burn parameters is generated by considering a feasible solution to the orbit maintenance problem. An example demonstrates the ΔV savings from the feasible solution to the optimal solution.
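
    A schematic of the burn-parameter optimization loop, assuming a hypothetical propagator propagate_min_altitude that returns the lowest altitude reached for a given set of burns; the feasible initial guess dv0 plays the role of the initial estimate described in the abstract.

        import numpy as np
        from scipy.optimize import minimize

        def total_dv(burn_dvs):
            """Objective: sum of impulsive burn magnitudes (m/s)."""
            return np.sum(np.abs(burn_dvs))

        def solve_orbit_maintenance(dv0, propagate_min_altitude, alt_floor_km):
            """Minimize total ΔV subject to a minimum-altitude constraint.
            propagate_min_altitude(burn_dvs) is a hypothetical helper that
            returns the lowest altitude (km) over the planning horizon."""
            cons = {"type": "ineq",
                    "fun": lambda dv: propagate_min_altitude(dv) - alt_floor_km}
            res = minimize(total_dv, dv0, constraints=cons, method="SLSQP")
            return res.x, res.fun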

  6. Minimum Bayes risk image correlation

    NASA Technical Reports Server (NTRS)

    Minter, T. C., Jr.

    1980-01-01

    In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
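
    The fast Fourier implementation of the correlation step can be sketched as follows; per the abstract, the matched-filter template would be the least-squares approximation to the Bayes discriminant function, which is simply taken as given here.

        import numpy as np

        def fft_correlate(image, template):
            """Cross-correlate an image with a matched-filter template via
            FFT; the peak location is the candidate target fix."""
            F_img = np.fft.fft2(image)
            F_tpl = np.fft.fft2(template, s=image.shape)
            corr = np.real(np.fft.ifft2(F_img * np.conj(F_tpl)))
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            return corr, peak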

  7. Indian monsoon variations during three contrasting climatic periods: the Holocene, Heinrich Stadial 2 and the last interglacial-glacial transition

    NASA Astrophysics Data System (ADS)

    Zorzi, Coralie; Fernanda Sanchez Goñi, Maria; Anupama, Krishnamurthy; Prasad, Srinivasan; Hanquiez, Vincent; Johnson, Joel; Giosan, Liviu

    2016-04-01

    In contrast to the East Asian and African monsoons, the Indian monsoon is still poorly documented throughout the last climatic cycle (last 135,000 years). Pollen analysis from two marine sediment cores (NGHP-01-16A and NGHP-01-19B) collected from the offshore Godavari and Mahanadi basins, both located in the Core Monsoon Zone (CMZ), reveals changes in Indian summer monsoon variability and intensity during three contrasting climatic periods: the Holocene, the Heinrich Stadial (HS) 2 and the Marine Isotopic Stage (MIS) 5/4 ice sheet growth transition. During the first part of the Holocene, between 11,300 and 4,200 cal years BP, characterized by high insolation (minimum precession, maximum obliquity), the maximum extension of the coastal forest and mangrove reflects high monsoon rainfall. This climatic regime contrasts with that of the second phase of the Holocene, from 4,200 cal years BP to the present, marked by the development of drier vegetation in a context of low insolation (maximum precession, minimum obliquity). The historical period in India is characterized by an alternation of strong and weak monsoon centennial phases that may reflect the Medieval Climate Anomaly and the Little Ice Age, respectively. During HS 2, a period of low insolation and extensive iceberg discharge in the North Atlantic Ocean, vegetation was dominated by grassland and dry flora, indicating pronounced aridity as the result of a weak Indian summer monsoon. The MIS 5/4 glaciation, also associated with low insolation but moderate freshwater fluxes, was characterized by a weaker reduction of the Indian summer monsoon and a decrease of seasonal contrast, as recorded by the expansion of dry vegetation and the development of Artemisia, respectively. Our results support model predictions suggesting that insolation changes control the long-term trend of Indian monsoon precipitation, but that its millennial-scale variability and intensity are modulated by atmospheric teleconnections to remote phenomena in the North Atlantic, Eurasia or the Indian Ocean.

  8. Temperature fine-tunes Mediterranean Arabidopsis thaliana life-cycle phenology geographically.

    PubMed

    Marcer, A; Vidigal, D S; James, P M A; Fortin, M-J; Méndez-Vigo, B; Hilhorst, H W M; Bentsink, L; Alonso-Blanco, C; Picó, F X

    2018-01-01

    To understand how adaptive evolution in life-cycle phenology operates in plants, we need to unravel the effects of geographic variation in putative agents of natural selection on life-cycle phenology by considering all key developmental transitions and their co-variation patterns. We address this goal by quantifying the temperature-driven and geographically varying relationship between seed dormancy and flowering time in the annual Arabidopsis thaliana across the Iberian Peninsula. We used data on genetic variation in two major life-cycle traits, seed dormancy (DSDS50) and flowering time (FT), in a collection of 300 A. thaliana accessions from the Iberian Peninsula. The geographically varying relationship between life-cycle traits and minimum temperature, a major driver of variation in DSDS50 and FT, was explored with geographically weighted regressions (GWR). The environmentally varying correlation between DSDS50 and FT was analysed by means of sliding window analysis across a minimum temperature gradient. Maximum local adjustments between minimum temperature and life-cycle traits were obtained in the southwest Iberian Peninsula, an area with the highest minimum temperatures. In contrast, in off-southwest locations, the effects of minimum temperature on DSDS50 were rather constant across the region, whereas those of minimum temperature on FT were more variable, with peaks of strong local adjustments of GWR models in central and northwest Spain. Sliding window analysis identified a minimum temperature turning point in the relationship between DSDS50 and FT around a minimum temperature of 7.2 °C. Above this minimum temperature turning point, the variation in the FT/DSDS50 ratio became rapidly constrained and the negative correlation between FT and DSDS50 did not increase any further with increasing minimum temperatures. The southwest Iberian Peninsula emerges as an area where variation in life-cycle phenology appears to be restricted by the duration and severity of the hot summer drought. The temperature-driven varying relationship between DSDS50 and FT detected environmental boundaries for the co-evolution between FT and DSDS50 in A. thaliana. In the context of global warming, we conclude that A. thaliana phenology from the southwest Iberian Peninsula, determined by early flowering and deep seed dormancy, might become the most common life-cycle phenotype for this annual plant in the region. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.

  9. Subharmonic Imaging and Pressure Estimation for Monitoring Neoadjuvant Chemotherapy

    DTIC Science & Technology

    2015-11-01

    ultrasound contrast agents to improve the monitoring of breast cancer treatment response to neoadjuvant therapies in women diagnosed with LABC by imaging...estimation (SHAPE). Software for analyzing RF data from a Logiq 9 ultrasound scanner (GE Healthcare, Milwaukee, WI) to produce 3D SHAPE pressure...responders; albeit not statistically significant (p > 0.19). Subject terms: Breast Cancer, Ultrasound Imaging, Ultrasound Contrast Agent, Pressure

  10. Age determination of soft tissue hematomas.

    PubMed

    Neumayer, Bernhard; Hassler, Eva; Petrovic, Andreas; Widek, Thomas; Ogris, Kathrin; Scheurer, Eva

    2014-11-01

    In clinical forensic medicine, the estimation of the age of injuries such as externally visible subcutaneous hematomas is important for the reconstruction of violent events, particularly to include or exclude potential suspects. Since the estimation of the time of origin based on external inspection is unreliable, the aim of this study was to use contrast in MRI to develop an easy-to-use model for hematoma age estimation. In a longitudinal study, artificially created subcutaneous hematomas were repetitively imaged using MRI over a period of two weeks. The hemorrhages were created by injecting autologous blood into the subcutaneous tissue of the thigh in 20 healthy volunteers. For MRI, standard commercially available sequences, namely proton-density-weighted, T2-weighted and inversion recovery sequences, were used. The hematomas' MRI data were analyzed regarding their contrast behavior using the most suitable sequences to derive a model allowing an objective estimation of the age of soft tissue hematomas. The Michelson contrast between hematoma and muscle in the proton-density-weighted sequence showed an exponentially decreasing behavior with a dynamic range of 0.6 and a maximum standard deviation of 0.1. The contrast of the inversion recovery sequences showed increasing characteristics and was hypointense for TI = 200 ms and hyperintense for TI = 1000 ms. These sequences were used to create a contrast model. The cross-validation of the model finally yielded limits of agreement for hematoma age determination (corresponding to ±1.96 SD) of ±38.7 h during the first three days and ±54 h for the entire investigation period. The developed model provides lookup tables which allow for the estimation of a hematoma's age given a single contrast measurement, applicable by a radiologist or a forensic physician. This is a first step towards an accurate and objective dating method for subcutaneous hematomas, which will be particularly useful in cases of child abuse. Copyright © 2014 John Wiley & Sons, Ltd.
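
    A sketch of the contrast-based dating pipeline, assuming a single-exponential decay model for the proton-density-weighted Michelson contrast; the initial-guess parameters are placeholders, not the fitted values from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def michelson_contrast(I_hematoma, I_muscle):
            """Michelson contrast between two mean ROI intensities."""
            return (I_hematoma - I_muscle) / (I_hematoma + I_muscle)

        def fit_contrast_decay(t_hours, contrast):
            """Fit C(t) = a * exp(-t / tau) + c to contrast samples;
            inverting the fit gives a lookup from contrast to age."""
            model = lambda t, a, tau, c: a * np.exp(-t / tau) + c
            popt, _ = curve_fit(model, t_hours, contrast, p0=(0.6, 100.0, 0.0))
            return popt  # (a, tau, c); p0 values are illustrative guesses

        def age_from_contrast(C, a, tau, c):
            """Invert the fitted decay to estimate hematoma age in hours."""
            return -tau * np.log((C - c) / a)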

  11. Feasibility of spectral CT imaging for the detection of liver lesions with gold-based contrast agents - A simulation study.

    PubMed

    Müllner, Marie; Schlattl, Helmut; Hoeschen, Christoph; Dietrich, Olaf

    2015-12-01

    To demonstrate the feasibility of gold-specific spectral CT imaging for the detection of liver lesions in humans at low concentrations of gold as targeted contrast agent. A Monte Carlo simulation study of spectral CT imaging with a photon-counting and energy-resolving detector (with 6 energy bins) was performed in a realistic phantom of the human abdomen. The detector energy thresholds were optimized for the detection of gold. The simulation results were reconstructed with the K-edge imaging algorithm; the reconstructed gold-specific images were filtered and evaluated with respect to signal-to-noise ratio and contrast-to-noise ratio (CNR). The simulations demonstrate the feasibility of spectral CT with CNRs of the specific gold signal between 2.7 and 4.8 after bilateral filtering. Using the optimized bin thresholds increases the CNRs of the lesions by up to 23% compared to bin thresholds described in former studies. Gold is a promising new CT contrast agent for spectral CT in humans; minimum tissue mass fractions of 0.2 wt% of gold are required for sufficient image contrast. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
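
    For reference, one common definition of the CNR figure reported above (the paper's exact ROI and noise conventions may differ):

        import numpy as np

        def cnr(roi_lesion, roi_background):
            """Contrast-to-noise ratio between a lesion ROI and a
            background ROI in a reconstructed gold-specific image."""
            contrast = np.mean(roi_lesion) - np.mean(roi_background)
            noise = np.std(roi_background)
            return contrast / noise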

  12. Observations of Rotation Reversal and Fluctuation Hysteresis in Alcator C-Mod L-Mode Plasmas

    NASA Astrophysics Data System (ADS)

    Cao, N. M.; Rice, J. E.; White, A. E.; Baek, S. G.; Creely, A. J.; Ennever, P. C.; Hubbard, A. E.; Hughes, J. W.; Irby, J.; Rodriguez-Fernandez, P.; Chilenski, M. A.; Diamond, P. H.; Reinke, M. L.; Alcator C-Mod Team

    2017-10-01

    Intrinsic core toroidal rotation in Alcator C-Mod L-mode plasmas has been observed to spontaneously reverse direction when the minimum value of the normalized collisionality, ν*, crosses approximately 0.4. In Ohmic plasmas, the rotation is co-current in the low density linear Ohmic confinement (LOC) regime and counter-current in the higher density saturated Ohmic confinement (SOC) regime. The reversal manifests a hysteresis loop in ν*, where the critical collisionalities for the forward and reverse transitions differ by 10-15%. Temperature and density profiles of the two rotation states are observed to be indistinguishable to within experimental error, estimated with Gaussian process regression. However, qualitative differences between the two rotation states are observed in fluctuation spectra, including the broadening of reflectometry spectra and, under certain conditions, the appearance of high-k features in phase contrast imaging (PCI) spectra (kθρs up to 1). These results suggest that the turbulent state can decouple from local profiles, and that turbulent self-regulation may play a role in the LOC/SOC transition. This work is supported by the US DOE under Grant DE-FC02-99ER54512 (C-Mod).

  13. A comparative look at sunspot cycles

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.

    1984-01-01

    On the basis of cycles 8 through 20, spanning about 143 years, observations of sunspot number, smoothed sunspot number, and their temporal properties were used to compute means, standard deviations, ranges, and frequency of occurrence histograms for a number of sunspot cycle parameters. The resultant schematic sunspot cycle was contrasted with the mean sunspot cycle, obtained by averaging smoothed sunspot number as a function of time, tying all cycles (8 through 20) to their minimum occurrence date. A relatively good approximation of the time variation of smoothed sunspot number for a given cycle is possible if sunspot cycles are regarded as being either HIGH- or LOW-R(MAX) cycles or LONG- or SHORT-PERIOD cycles, especially the latter. Linear regression analyses were performed comparing late cycle parameters with early cycle parameters and solar cycle number. The early occurring cycle parameters can be used to estimate later occurring cycle parameters with relatively good success, based on cycle 21 as an example. The sunspot cycle record clearly shows that the trend for both R(MIN) and R(MAX) was toward decreasing value between cycles 8 through 14 and toward increasing value between cycles 14 through 20. Linear regression equations were also obtained for several measures of solar activity.

  14. Mass-mortality of guillemots (Uria aalge) in the Gulf of Alaska in 1993

    USGS Publications Warehouse

    Piatt, John F.; van Pelt, Thomas I.

    1997-01-01

    During the first six months of 1993, about 3500 dead and moribund guillemots (Uria aalge) were observed throughout the northern Gulf of Alaska coast (ca 1800 km range). Mortality peaked during March. Highest numbers were observed in western Prince William Sound and along the south coast of the Kenai Peninsula. Large flocks of live guillemots gathered in nearshore waters, in contrast to most winters when guillemots reside offshore. Most guillemots recovered were extremely emaciated (ca 60% of normal weight) and sub-adult (80%). Based on carcass deposition and persistence experiments, we calculate that about 10 900 birds eventually came ashore on beaches that were surveyed. Even if most birds killed made it to shore, only a fraction of beaches in the Gulf of Alaska were surveyed, and we estimate that a minimum total of 120 000 guillemots died. Results of other investigations on potential causes of mortality (biotoxins, pathogens, parasites, metals, etc.) were either negative or inconclusive, and necropsies led us to believe that starvation was the proximate cause of death. Reduced food availability could have been related to anomalous sea conditions found during the prolonged 1990–1995 El Niño-Southern Oscillation event.

  15. EEG functional network topology is associated with disability in patients with amyotrophic lateral sclerosis

    NASA Astrophysics Data System (ADS)

    Fraschini, Matteo; Demuru, Matteo; Hillebrand, Arjan; Cuccu, Lorenza; Porcu, Silvia; di Stefano, Francesca; Puligheddu, Monica; Floris, Gianluca; Borghero, Giuseppe; Marrosu, Francesco

    2016-12-01

    Amyotrophic Lateral Sclerosis (ALS) is one of the most severe neurodegenerative diseases, which is known to affect upper and lower motor neurons. In contrast to the classical tenet that ALS represents the outcome of extensive and progressive impairment of a fixed set of motor connections, recent neuroimaging findings suggest that the disease spreads along vast non-motor connections. Here, we hypothesised that functional network topology is perturbed in ALS, and that this reorganization is associated with disability. We tested this hypothesis in 21 patients affected by ALS at several stages of impairment using resting-state electroencephalography (EEG) and compared the results to 16 age-matched healthy controls. We estimated functional connectivity using the Phase Lag Index (PLI), and characterized the network topology using the minimum spanning tree (MST). We found a significant difference between groups in terms of MST dissimilarity and MST leaf fraction in the beta band. Moreover, some MST parameters (leaf, hierarchy and kappa) significantly correlated with disability. These findings suggest that the topology of resting-state functional networks in ALS is affected by the disease in relation to disability. EEG network analysis may be of help in monitoring and evaluating the clinical status of ALS patients.
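
    A minimal sketch of the PLI-plus-MST pipeline, assuming band-passed analytic phases as input; leaf fraction is computed here as the fraction of tree nodes with degree 1, one of several conventions in the MST literature.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree

        def phase_lag_index(phases):
            """PLI for analytic phases of shape (n_channels, n_samples)."""
            n = phases.shape[0]
            pli = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    dphi = phases[i] - phases[j]
                    pli[i, j] = pli[j, i] = abs(np.mean(np.sign(np.sin(dphi))))
            return pli

        def mst_leaf_fraction(pli):
            """Build the MST on 1 - PLI (strong coupling = short edge) and
            return the fraction of tree nodes that are leaves (degree 1)."""
            dist = 1.0 - pli
            np.fill_diagonal(dist, 0.0)
            tree = minimum_spanning_tree(dist).toarray()
            degrees = ((tree + tree.T) > 0).sum(axis=1)
            return np.sum(degrees == 1) / pli.shape[0]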

  16. Pseudo-time methods for constrained optimization problems governed by PDE

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1995-01-01

    In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.

  17. Laser biostimulation therapy planning supported by imaging

    NASA Astrophysics Data System (ADS)

    Mester, Adam R.

    2018-04-01

    Ultrasonography and MR imaging can help to identify the area and depth of different lesions, such as injury, overuse, inflammation and degenerative diseases. The appropriate power density, sufficient dose and direction of the laser treatment can then be optimally estimated. If the required minimum photon density of 5 mW and the optimal energy dose of 2-4 J/cm² would not reach the depth of the target volume, additional techniques can help: slight compression of the soft tissues can decrease the tissue thickness, or multiple laser diodes can be used. In the case of multiple diode clusters, light scattering results in deeper penetration. Another method to increase the penetration depth is a secondary pulsation (in the kHz range) of the laser light. (A so-called continuous-wave laser itself has an inherent THz pulsation due to temporal coherence.) A third way to achieve higher light intensity in the target volume is the multi-gate technique: the same joint can be reached from different angles based on the imaging findings. Recent developments in ultrasonography, namely elastosonography and tissue harmonic imaging with contrast material, offer optimal therapy planning. While MRI is too expensive a modality for laser planning alone, images can be used optimally if a diagnostic MRI has already been done. Usual DICOM images offer "postprocessing" measurements in the mm range.

  18. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise-bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of the anterior and posterior eye segments as well as in skin imaging. The new estimator shows superior performance and also yields clearer image contrast.
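
    The estimator's final lookup step can be sketched as follows, assuming the posterior over true birefringence has already been tabulated on a grid by the Monte-Carlo simulation described above (the grid shape and indexing are illustrative assumptions):

        import numpy as np

        def map_birefringence(pdf_grid, b_axis, measured_b_idx, measured_snr_idx):
            """MAP estimate of true local retardation/birefringence.

            pdf_grid : precomputed P(true_b | measured_b, snr) on a grid of
                       shape (n_true_b, n_measured_b, n_snr), e.g. from a
                       Monte-Carlo simulation of the JM-OCT signal model.
            b_axis   : grid of candidate true birefringence values.
            """
            posterior = pdf_grid[:, measured_b_idx, measured_snr_idx]
            return b_axis[np.argmax(posterior)]  # posterior mode = MAP estimate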

  19. Infrared Contrast Analysis Technique for Flash Thermography Nondestructive Evaluation

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay

    2014-01-01

    The paper deals with infrared flash thermography inspection to detect and analyze delamination-like anomalies in nonmetallic materials. It provides information on an IR Contrast technique that involves extracting normalized contrast versus time evolutions from the flash thermography infrared video data. The paper provides the analytical model used in the simulation of infrared image contrast. The contrast evolution simulation is achieved through calibration on measured contrast evolutions from many flat-bottom holes in the subject material. The paper also provides formulas to calculate values of the thermal measurement features from the measured contrast evolution curve. Many thermal measurement features of the contrast evolution that relate to the anomaly characteristics are calculated. The measurement features and the contrast simulation are used to evaluate flash thermography inspection data in order to characterize the delamination-like anomalies. In addition, the contrast evolution prediction is matched to the measured anomaly contrast evolution to provide an assessment of the anomaly depth and width in terms of the depth and diameter of the corresponding equivalent flat-bottom hole (EFBH) or equivalent uniform gap (EUG). The paper describes an anomaly edge detection technique, called the half-max technique, which is also used to estimate the width of an indication. The EFBH/EUG and half-max width estimations are used to assess anomaly size. The paper also provides some information on the "IR Contrast" software application, the half-max technique and the IR Contrast feature imaging application, which are based on the models provided in this paper.
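
    A sketch of the normalized-contrast extraction and two simple measurement features, under one common normalization convention (the IR Contrast definition, calibrated against flat-bottom holes, may differ in detail):

        import numpy as np

        def normalized_contrast(T_meas, T_ref):
            """Normalized contrast per frame: measurement-ROI temperature
            rise relative to the reference-ROI rise, both taken relative
            to the pre-flash frame."""
            return (T_meas - T_ref) / T_ref

        def contrast_features(contrast, dt):
            """Two simple features of a contrast-evolution curve."""
            k = int(np.argmax(contrast))
            return {"peak_contrast": float(contrast[k]), "peak_time_s": k * dt}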

  20. Methods and Systems for Measurement and Estimation of Normalized Contrast in Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M. (Inventor)

    2017-01-01

    Methods and systems for converting an image contrast evolution of an object to a temperature contrast evolution and vice versa are disclosed, including methods for assessing an emissivity of the object; calculating an afterglow heat flux evolution; calculating a measurement region of interest temperature change; calculating a reference region of interest temperature change; calculating a reflection temperature change; calculating the image contrast evolution or the temperature contrast evolution; and converting the image contrast evolution to the temperature contrast evolution or vice versa, respectively.
