Sample records for estimated minimum detectable

  1. "PowerUp"!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

    ERIC Educational Resources Information Center

    Dong, Nianbo; Maynard, Rebecca

    2013-01-01

    This paper and the accompanying tool are intended to complement existing power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used to determine sample size requirements and to estimate minimum detectable effect sizes for a range of individual- and…
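    The record names the MDES framework without reproducing its formulae. As a rough illustration, the sketch below implements the textbook MDES expression for a simple individually randomized design (a degrees-of-freedom multiplier times the standard error of the standardized effect); the function name, defaults, and degrees-of-freedom convention are assumptions for illustration, not PowerUp!'s exact implementation.

```python
# Sketch of the basic MDES formula for an individually randomized trial;
# PowerUp! covers many more complex (multilevel, quasi-experimental) designs.
from scipy.stats import t

def mdes_individual(n, p_treat=0.5, r2=0.0, alpha=0.05, power=0.80, n_covariates=0):
    """MDES for individual random assignment (two-tailed test).

    n            -- total sample size
    p_treat      -- proportion assigned to treatment
    r2           -- outcome variance explained by baseline covariates
    n_covariates -- number of covariates in the impact model
    """
    df = n - n_covariates - 2
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    return multiplier * ((1 - r2) / (p_treat * (1 - p_treat) * n)) ** 0.5

print(round(mdes_individual(400), 3))  # ~0.281 SD with no covariates
```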

  2. How long is enough to detect terrestrial animals? Estimating the minimum trapping effort on camera traps

    PubMed Central

    Si, Xingfeng; Kays, Roland

    2014-01-01

    Camera traps are an important wildlife inventory tool for estimating species diversity at a site. Knowing what minimum trapping effort is needed to detect target species is also important for designing efficient studies, considering both the number of camera locations and survey length. Here, we take advantage of a two-year camera-trapping dataset from a small (24-ha) study plot in Gutianshan National Nature Reserve, eastern China to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites or running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera-days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera-days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera-days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40 days, or long enough to record about 20 independent photographs. Our analysis of the value of adding camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period. PMID:24868493
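    A minimal sketch of the rarefaction logic the abstract describes: assume a per-camera-day detection probability for each species (the values below are invented, not estimated from the study), compute the expected number of species detected as a function of effort, and search for the smallest effort reaching a target richness.

```python
# Expected species detected after d camera-days, assuming independent days:
# each species is seen at least once with probability 1 - (1 - p)**d.
import numpy as np

p_daily = np.array([0.05, 0.03, 0.02, 0.01, 0.008,
                    0.005, 0.003, 0.002, 0.001, 0.0005])  # hypothetical

def expected_species(days):
    return np.sum(1.0 - (1.0 - p_daily) ** days)

def min_effort(target_richness, max_days=20000):
    """Smallest number of camera-days whose expected richness meets the target."""
    for d in range(1, max_days + 1):
        if expected_species(d) >= target_richness:
            return d
    return None

print(min_effort(9.0))    # camera-days to expect 9 of the 10 species
print(min_effort(9.95))   # far more effort to expect (nearly) all 10
```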

  3. On the impacts of computing daily temperatures as the average of the daily minimum and maximum temperatures

    NASA Astrophysics Data System (ADS)

    Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan

    2017-12-01

    Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that the calculation of daily temperature based on the average of minimum and maximum daily readings leads to an overestimation of the daily values of roughly 10% or more when focusing on extremes and values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (roughly 5-10% fewer trends detected in comparison with the reference data).
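    To make the direction of the bias concrete, here is a toy comparison of the two conventions on a synthetic, asymmetric diurnal cycle (a warm daytime half-wave and a flat night); the profile and noise level are illustrative assumptions, not the study's data.

```python
# Compare the true 24-hour mean with (Tmin + Tmax) / 2 on an asymmetric
# diurnal cycle; the min/max convention overshoots whenever the warm part
# of the day is short relative to the night.
import numpy as np

rng = np.random.default_rng(42)
hours = np.arange(24)
for day in range(3):
    temps = 10.0 + 10.0 * np.clip(np.sin(2 * np.pi * (hours - 6) / 24), 0.0, None)
    temps += rng.normal(0.0, 1.0, size=24)   # hourly weather noise
    true_mean = temps.mean()
    minmax_mean = 0.5 * (temps.min() + temps.max())
    print(f"day {day}: 24-h mean = {true_mean:5.2f}, "
          f"(Tmin+Tmax)/2 = {minmax_mean:5.2f}, "
          f"bias = {minmax_mean - true_mean:+.2f}")
```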

  4. Probabilistic evaluation of earthquake detection and location capability for Illinois, Indiana, Kentucky, Ohio, and West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauk, F.J.; Christensen, D.H.

    1980-09-01

    Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978) which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum m_b magnitude contour maps. Regional 90% confidence error ellipsoids are included for m_b magnitude events from 2.0 through 5.0 at 0.5 m_b unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggests that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.

  5. Estimating the dose response relationship for occupational radiation exposure measured with minimum detection level.

    PubMed

    Xue, Xiaonan; Shore, Roy E; Ye, Xiangyang; Kim, Mimi Y

    2004-10-01

    Occupational exposures are often recorded as zero when the exposure is below the minimum detection level (BMDL). This can lead to an underestimation of the doses received by individuals and to biased estimates of risk in occupational epidemiologic studies. The extent of the exposure underestimation increases with the magnitude of the minimum detection level (MDL) and the frequency of monitoring. This paper uses multiple imputation methods to impute values for the missing doses due to BMDL. A Gibbs sampling algorithm is developed to implement the method, which is applied to two distinct scenarios: when dose information is available for each measurement (but BMDL is recorded as zero or some other arbitrary value), or when the available dose information represents the summation of a series of measurements (e.g., only yearly cumulative exposure is available but based on, say, weekly measurements). The average of the multiple imputed exposure realizations for each individual is then used to obtain an unbiased estimate of the relative risk associated with exposure. Simulation studies are used to evaluate the performance of the estimators. As an illustration, the method is applied to a sample of historical occupational radiation exposure data from the Oak Ridge National Laboratory.
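    The paper couples a Gibbs sampler to a dose-response model; the sketch below illustrates only the core imputation step, under the simplifying assumption that the dose distribution is a known lognormal. The MDL, distribution parameters, and recorded doses are all hypothetical.

```python
# Multiple imputation of doses recorded as zero because they fell below the
# minimum detection level (MDL): draw each missing dose from the assumed
# distribution truncated to (0, MDL) via inverse-CDF sampling.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(1)
mdl = 0.10                            # detection level, illustrative units
dist = lognorm(s=1.2, scale=0.05)     # assumed true dose distribution

recorded = np.array([0.0, 0.25, 0.0, 0.13, 0.0, 0.40])  # zeros are BMDL
below = recorded == 0.0

imputed_sets = []
for _ in range(20):                   # 20 imputation draws
    doses = recorded.copy()
    u = rng.uniform(0.0, dist.cdf(mdl), size=below.sum())
    doses[below] = dist.ppf(u)        # inverse CDF of the truncated part
    imputed_sets.append(doses)

# Averaging the draws gives each worker's expected dose given BMDL status.
print(np.mean(imputed_sets, axis=0).round(3))
```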

  6. Auger electron and characteristic energy loss spectra for electro-deposited americium-241

    NASA Astrophysics Data System (ADS)

    Varma, Matesh N.; Baum, John W.

    1983-07-01

    Auger electron energy spectra for electro-deposited americium-241 on a platinum substrate were obtained using a cylindrical mirror analyzer. Characteristic energy loss spectra for this sample were also obtained at primary electron beam energies of 990 and 390 eV. From these measurements the PI, PII, and PIII energy levels for americium-241 are determined. Auger electron energies are compared with theoretically calculated values. Minimum detectability under the present conditions of sample preparation and equipment was estimated at approximately 1.2×10⁻⁸ g/cm² or 3.9×10⁻⁸ Ci/cm². Minimum detectability for plutonium-239 under similar conditions was estimated at about 7.2×10⁻¹⁰ Ci/cm².

  7. Probe-Specific Procedure to Estimate Sensitivity and Detection Limits for 19F Magnetic Resonance Imaging.

    PubMed

    Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M

    2016-01-01

    Due to the low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
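    A back-of-envelope sketch of the calibration idea: measure the SNR once at a known concentration, fold the hardware dependence into a single factor, and scale to a planned experiment. The simple SNR proportional to concentration times the square root of scan time, and every number below, are assumptions for illustration, not the paper's exact calibration factor.

```python
# Calibrate once, then predict the minimum detectable concentration for a
# planned scan without a pilot experiment (simplified scaling assumed).

def calibration_factor(snr_cal, conc_cal, t_cal):
    """Factor k such that SNR ~ k * concentration * sqrt(scan time)."""
    return snr_cal / (conc_cal * t_cal ** 0.5)

def min_detectable_conc(k, t_exp, snr_min=5.0):
    """Smallest concentration reaching `snr_min` in acquisition time `t_exp`."""
    return snr_min / (k * t_exp ** 0.5)

k = calibration_factor(snr_cal=40.0, conc_cal=50e-3, t_cal=600.0)  # 50 mM, 10 min
print(f"{min_detectable_conc(k, t_exp=3600.0) * 1e3:.2f} mM in a 1-hour scan")
```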

  8. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  9. Detectability of change in winter precipitation within mountain landscapes: Spatial patterns and uncertainty

    NASA Astrophysics Data System (ADS)

    Silverman, N. L.; Maneta, M. P.

    2016-06-01

    Detecting long-term change in seasonal precipitation using ground observations depends on the representativity of the point measurement to the surrounding landscape. In mountainous regions, representativity can be poor and lead to large uncertainties in precipitation estimates at high elevations or in areas where observations are sparse. If the uncertainty in the estimate is large compared to the long-term shifts in precipitation, then the change will likely go undetected. In this analysis, we examine the minimum detectable change across mountainous terrain in western Montana, USA. We ask the question: What is the minimum amount of change that is necessary to be detected using our best estimates of precipitation in complex terrain? We evaluate the spatial uncertainty in the precipitation estimates by conditioning historic regional climate model simulations to ground observations using Bayesian inference. By using this uncertainty as a null hypothesis, we test for detectability across the study region. To provide context for the detectability calculations, we look at a range of future scenarios from the Coupled Model Intercomparison Project 5 (CMIP5) multimodel ensemble downscaled to 4 km resolution using the MACAv2-METDATA data set. When using the ensemble averages, we find that approximately 65% of the significant increases in winter precipitation go undetected at mid-elevations. At high elevations, approximately 75% of significant increases in winter precipitation are undetectable. Areas where change can be detected are largely controlled by topographic features. Elevation and aspect are key characteristics that determine whether or not changes in winter precipitation can be detected. Furthermore, we find that undetected increases in winter precipitation at high elevation will likely remain as snow under climate change scenarios. Therefore, there is potential for these areas to offset snowpack loss at lower elevations and confound the effects of climate change on water resources.

  10. Real-time stop sign detection and distance estimation using a single camera

    NASA Astrophysics Data System (ADS)

    Wang, Wenpeng; Su, Yuxuan; Cheng, Ming

    2018-04-01

    In the modern world, the rapid development of driver assistance systems has made driving much easier than before. In order to increase safety onboard, a method was proposed to detect STOP signs and estimate their distance using a single camera. For STOP sign detection, an LBP-cascade classifier was applied to identify the sign in the image, and distance estimation was based on the principle of pinhole imaging. A road test was conducted using a detection system built with a CMOS camera and software developed in Python with the OpenCV library. Results show that the proposed system reaches a detection accuracy of at most 97.6% at 10 m and at least 95.0% at 20 m, with a maximum error of 5% in distance estimation. The results indicate that the system is effective and has the potential to be used in both autonomous driving and advanced driver assistance systems.
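    The distance estimate follows from similar triangles in the pinhole model: distance = focal length (in pixels) times real object height divided by image height (in pixels). A minimal sketch with an assumed calibration and a nominal sign height:

```python
# Pinhole-model range estimation for a detected stop sign; the focal length
# and sign height below are illustrative assumptions, not the paper's values.

def distance_from_pinhole(focal_px, real_height_m, pixel_height):
    """Distance (m) via similar triangles: Z = f * H / h."""
    return focal_px * real_height_m / pixel_height

focal_px = 1400.0       # focal length in pixels, from camera calibration
sign_height_m = 0.75    # nominal stop-sign face height
print(distance_from_pinhole(focal_px, sign_height_m, pixel_height=105.0))  # 10.0 m
```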

  11. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal-amplitude testing, where signal amplitudes are reduced to Hit-Miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
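    The arithmetic behind the familiar 29-of-29 demonstration makes the 90/95 criterion concrete: if the true POD at a flaw size were only 0.90, then 29 consecutive hits would occur with probability 0.9^29, which is about 0.047, below 0.05. A short sketch of that check (the helper is illustrative, not DOEPOD itself):

```python
# Confidence demonstrated by a hit/miss POD experiment under the binomial model.
from scipy.stats import binom

def demonstrated_confidence(hits, trials, pod=0.90):
    """Confidence that POD exceeds `pod`, given `hits` detections in `trials`.

    binom.sf(hits - 1, trials, pod) is the chance of doing at least this well
    when the POD is only `pod`; the demonstrated confidence is its complement.
    """
    return 1.0 - binom.sf(hits - 1, trials, pod)

print(0.90 ** 29)                       # ~0.047: luck alone rarely gives 29/29
print(demonstrated_confidence(29, 29))  # ~0.953: the 90/95 criterion is met
```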

  12. Verification of Minimum Detectable Activity for Radiological Threat Source Search

    NASA Astrophysics Data System (ADS)

    Gardiner, Hannah; Myjak, Mitchell; Baciak, James; Detwiler, Rebecca; Seifert, Carolyn

    2015-10-01

    The Department of Homeland Security's Domestic Nuclear Detection Office is working to develop advanced technologies that will improve the ability to detect, localize, and identify radiological and nuclear sources from airborne platforms. The Airborne Radiological Enhanced-sensor System (ARES) program is developing advanced data fusion algorithms for analyzing data from a helicopter-mounted radiation detector. This detector platform provides a rapid, wide-area assessment of radiological conditions at ground level. The NSCRAD (Nuisance-rejection Spectral Comparison Ratios for Anomaly Detection) algorithm was developed to distinguish low-count sources of interest from benign naturally occurring radiation and irrelevant nuisance sources. It uses a number of broad, overlapping regions of interest to statistically compare each newly measured spectrum with the current estimate for the background to identify anomalies. We recently developed a method to estimate the minimum detectable activity (MDA) of NSCRAD in real time. We present this method here and report on the MDA verification using both laboratory measurements and simulated injects on measured backgrounds at or near the detection limits. This work is supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-X-00376. This support does not constitute an express or implied endorsement on the part of the Gov't.

  13. Optimal electrode selection for multi-channel electroencephalogram based detection of auditory steady-state responses.

    PubMed

    Van Dun, Bram; Wouters, Jan; Moonen, Marc

    2009-07-01

    Auditory steady-state responses (ASSRs) are used for hearing threshold estimation at audiometric frequencies. Hearing impaired newborns, in particular, benefit from this technique as it allows for a more precise diagnosis than traditional techniques, and a hearing aid can be better fitted at an early age. However, measurement duration of current single-channel techniques is still too long for widespread clinical use. This paper evaluates the practical performance of a multi-channel electroencephalogram (EEG) processing strategy based on a detection theory approach. A minimum electrode set is determined for ASSRs with frequencies between 80 and 110 Hz using eight-channel EEG measurements of ten normal-hearing adults. This set provides a near-optimal hearing threshold estimate for all subjects and improves response detection significantly for EEG data with numerous artifacts. Multi-channel processing does not significantly improve response detection for EEG data with few artifacts. In this case, best response detection is obtained when noise-weighted averaging is applied on single-channel data. The same test setup (eight channels, ten normal-hearing subjects) is also used to determine a minimum electrode setup for 10-Hz ASSRs. This configuration allows recording near-optimal signal-to-noise ratios for 80% of subjects.

  14. Residual toxicity of Cypermethrin in the larvae of coconut pest Oryctes rhinoceros (Coleoptera: Scarabaeidae).

    PubMed

    Venkatarajappa, P

    2001-01-01

    The toxic effect of Cypermethrin 10 EC was estimated in the body wall and digestive system of Oryctes rhinoceros larvae by HPLC after exposing them to different concentrations (0.125, 0.25 and 0.5%). Among the concentrations used, the maximum residue was detected in the body wall at 0.25%, whereas at the higher concentration (0.5%) the detected residue was minimal. The Cypermethrin treatment was found to be highly toxic up to 12 h after treatment, after which toxicity declined, reaching a minimum by 24 h. No residue of Cypermethrin could be detected in the digestive system. The experiments indicate that the pesticide becomes concentrated in the body wall to a maximum extent.

  15. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides a discussion of optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on minimum required PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.

  16. Kidney function endpoints in kidney transplant trials: a struggle for power.

    PubMed

    Ibrahim, A; Garg, A X; Knoll, G A; Akbari, A; White, C A

    2013-03-01

    Kidney function endpoints are commonly used in randomized controlled trials (RCTs) in kidney transplantation (KTx). We conducted this study to estimate the proportion of ongoing RCTs with kidney function endpoints in KTx where the proposed sample size is large enough to detect meaningful differences in glomerular filtration rate (GFR) with adequate statistical power. RCTs were retrieved using the key word "kidney transplantation" from the National Institute of Health online clinical trial registry. Included trials had at least one measure of kidney function tracked for at least 1 month after transplant. We determined the proportion of two-arm parallel trials that had sufficient sample sizes to detect a minimum 5, 7.5 and 10 mL/min difference in GFR between arms. Fifty RCTs met inclusion criteria. Only 7% of the trials were above a sample size of 562, the number needed to detect a minimum 5 mL/min difference between the groups should one exist (assumptions: α = 0.05; power = 80%, 10% loss to follow-up, common standard deviation of 20 mL/min). The result increased modestly to 36% of trials when a minimum 10 mL/min difference was considered. Only a minority of ongoing trials have adequate statistical power to detect between-group differences in kidney function using conventional sample size estimating parameters. For this reason, some potentially effective interventions which ultimately could benefit patients may be abandoned from future assessment. © Copyright 2013 The American Society of Transplantation and the American Society of Transplant Surgeons.
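    The abstract's threshold of 562 is close to what the standard two-sample formula for comparing means gives under the stated assumptions; the sketch below reproduces that arithmetic (small differences from 562 come from rounding conventions).

```python
# Total randomized sample size for a two-arm comparison of mean GFR,
# alpha = 0.05 (two-sided), 80% power, SD = 20 mL/min, 10% loss to follow-up.
import math
from scipy.stats import norm

def total_n(delta, sd=20.0, alpha=0.05, power=0.80, loss=0.10):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    per_arm = 2 * (z * sd / delta) ** 2
    return math.ceil(2 * per_arm / (1 - loss))   # inflate for dropout

for delta in (5.0, 7.5, 10.0):
    print(f"detect {delta:4.1f} mL/min -> total n = {total_n(delta)}")
```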

  17. Improved Time-Lapsed Angular Scattering Microscopy of Single Cells

    NASA Astrophysics Data System (ADS)

    Cannaday, Ashley E.

    By measuring angular scattering patterns from biological samples and fitting them with a Mie theory model, one can estimate the organelle size distribution within many cells. Quantitative organelle sizing of ensembles of cells using this method has been well established. Our goal is to develop the methodology to extend this approach to the single-cell level, measuring the angular scattering at multiple time points and estimating the non-nuclear organelle size distribution parameters. The diameters of individual organelle-size beads were successfully extracted using scattering measurements with a minimum deflection angle of 20 degrees. However, the accuracy of size estimates can be limited by the angular range detected. In particular, simulations by our group suggest that, for cell organelle populations with a broader size distribution, the accuracy of size prediction improves substantially if the minimum detection angle is 15 degrees or less. The system was therefore modified to collect scattering angles down to 10 degrees. To confirm experimentally that size predictions become more stable when lower scattering angles are detected, initial validations were performed on individual polystyrene beads ranging in diameter from 1 to 5 microns. We found that the lower minimum angle enabled the width of this delta-function size distribution to be predicted more accurately. Scattering patterns were then acquired and analyzed from single mouse squamous cell carcinoma cells at multiple time points. The scattering patterns exhibit angular dependencies that look unlike those of any single sphere size, but are well fit by a broad distribution of sizes, as expected. To determine the fluctuation level in the estimated size distribution due to measurement imperfections alone, formaldehyde-fixed cells were measured. Subsequent measurements on live (non-fixed) cells revealed an order of magnitude greater fluctuation in the estimated sizes compared to fixed cells. With our improved and better-understood approach to single-cell angular scattering, we are now capable of reliably detecting changes in organelle size predictions due to biological causes above our measurement error of 20 nm, which enables us to apply our system to future studies of various single-cell biological processes.

  18. A minimum distance estimation approach to the two-sample location-scale problem.

    PubMed

    Zhang, Zhiyi; Yu, Qiqing

    2002-09-01

    As reported by Kalbfleisch and Prentice (1980), the generalized Wilcoxon test fails to detect a difference between the lifetime distributions of male and female mice that died from thymic leukemia. This failure results from the test's inability to detect a distributional difference when a location shift and a scale change exist simultaneously. In this article, we propose an estimator based on minimizing an average distance between two independent quantile processes under a location-scale model. Large-sample inference on the proposed estimator, with possible right-censorship, is discussed. The mouse leukemia data are used as an example for illustration purposes.

  19. Turbine Engine Fault Detection and Isolation Program. Volume I. Turbine Engine Performance Estimation Methods

    DTIC Science & Technology

    1982-08-01

    [Abstract not available; the record contains fragments of the report's sensor-range tables.] Data, 1988 points: channel 1 PHMG, -130.13 to 130.00; 2 PS3, -218.12 to 294.77; 3 T3, -341.54 to 738.15; 4 T5, -464.78 to 623.47; 5 PT51, 12.317... (Continued) Cruise and take-off mode data, 4137 points: channel 1 PHMG, -130.13 to 130.00; 2 P53, -218.12 to 376.60; 3 T3, -482.72...

  20. Quiescent and Eruptive Prominences at Solar Minimum: A Statistical Study via an Automated Tracking System

    NASA Astrophysics Data System (ADS)

    Loboda, I. P.; Bogachev, S. A.

    2015-07-01

    We employ an automated detection algorithm to perform a global study of solar prominence characteristics. We process four months of TESIS observations in the He II 304 Å line taken close to the solar minimum of 2008-2009 and mainly focus on quiescent and quiescent-eruptive prominences. We detect a total of 389 individual features ranging from 25×25 to 150×500 Mm² in size and obtain distributions of many of their spatial characteristics, such as latitudinal position, height, size, and shape. To study their dynamics, we classify prominences as either stable or eruptive and calculate their average centroid velocities, which are found to rarely exceed 3 km/s. In addition, we give rough estimates of mass and gravitational energy for every detected prominence and use these values to estimate the total mass and gravitational energy of all simultaneously existing prominences (10¹²-10¹⁴ kg and 10²⁹-10³¹ erg). Finally, we investigate the form of the gravitational energy spectrum of prominences and derive it to be a power law of index -1.1 ± 0.2.

  1. Are There Long-Run Effects of the Minimum Wage?

    PubMed Central

    Sorkin, Isaac

    2014-01-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices. PMID:25937790

  2. Are There Long-Run Effects of the Minimum Wage?

    PubMed

    Sorkin, Isaac

    2015-04-01

    An empirical consensus suggests that there are small employment effects of minimum wage increases. This paper argues that these are short-run elasticities. Long-run elasticities, which may differ from short-run elasticities, are policy relevant. This paper develops a dynamic industry equilibrium model of labor demand. The model makes two points. First, long-run regressions have been misinterpreted because even if the short- and long-run employment elasticities differ, standard methods would not detect a difference using US variation. Second, the model offers a reconciliation of the small estimated short-run employment effects with the commonly found pass-through of minimum wage increases to product prices.

  3. Wave-Based Algorithms and Bounds for Target Support Estimation

    DTIC Science & Technology

    2015-05-15

    vector electromagnetic formalism in [5]. This theory leads to three main variants of the optical theorem detector, in particular, three alternative...further expands the applicability for transient pulse change detection of arbitrary nonlinear-media and time-varying targets [9]. This report... electromagnetic methods a new methodology to estimate the minimum convex source region and the (possibly nonconvex) support of a scattering target from knowledge of

  4. Optimizing occupancy surveys by maximizing detection probability: application to amphibian monitoring in the Mediterranean region.

    PubMed

    Petitot, Maud; Manceau, Nicolas; Geniez, Philippe; Besnard, Aurélien

    2014-09-01

    Setting up effective conservation strategies requires the precise determination of the targeted species' distribution area and, if possible, its local abundance. However, detection issues make these objectives complex for most vertebrates. The detection probability is usually <1 and is highly dependent on species phenology and other environmental variables. The aim of this study was to define an optimized survey protocol for the Mediterranean amphibian community, that is, to determine the most favorable periods and the most effective sampling techniques for detecting all species present on a site in a minimum number of field sessions and a minimum amount of prospecting effort. We visited 49 ponds located in the Languedoc region of southern France on four occasions between February and June 2011. Amphibians were detected using three methods: nighttime call count, nighttime visual encounter, and daytime netting. The detection/nondetection data obtained were then modeled using site-occupancy models. The detection probability of amphibians differed sharply between species, the survey method used, and the date of the survey. These three covariates also interacted. Thus, a minimum of three visits spread over the breeding season, using a combination of all three survey methods, is needed to reach a 95% detection level for all species in the Mediterranean region. Synthesis and applications: detection/nondetection surveys combined with a site-occupancy modeling approach are a powerful method that can be used to estimate the detection probability and to determine the prospecting effort necessary to assert that a species is absent from a site.
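    The survey-design logic reduces to the cumulative detection probability 1 - (1 - p)^k over k visits with per-visit detection probability p. A sketch of the minimum visits needed to reach the 95% level (the p values are illustrative):

```python
# Minimum number of site visits so that a species present at the site is
# detected at least once with the target probability.
import math

def visits_needed(p, target=0.95):
    return math.ceil(math.log(1 - target) / math.log(1 - p))

for p in (0.3, 0.5, 0.7):
    print(f"per-visit p = {p}: {visits_needed(p)} visits")
```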

  5. What does it take to detect a change in soil carbon stock? A regional comparison of minimum detectable difference and experiment duration in the North-Central United States

    USDA-ARS?s Scientific Manuscript database

    Accurate estimation of soil organic carbon (SOC) is crucial to efforts to improve soil fertility and stabilize atmospheric CO2 concentrations by sequestering carbon (C) in soils. Soil organic C measurements are, however, often highly variable and management practices can take a long time to produce ...

  6. DETECTION OF KOI-13.01 USING THE PHOTOMETRIC ORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shporer, Avi; Jenkins, Jon M.; Seader, Shawn E.

    2011-12-15

    We use the KOI-13 transiting star-planet system as a test case for the recently developed BEER algorithm, aimed at identifying non-transiting low-mass companions by detecting the photometric variability induced by the companion along its orbit. Such photometric variability is generated by three mechanisms: the beaming effect, tidal ellipsoidal distortion, and reflection/heating. We use data from three Kepler quarters, from the first year of the mission, while ignoring measurements within the transit and occultation, and show that the planet's ephemeris is clearly detected. We fit for the amplitude of each of the three effects and use the beaming effect amplitude to estimate the planet's minimum mass, which results in M_p sin i = 9.2 ± 1.1 M_J (assuming the host star parameters derived by Szabo et al.). Our results show that non-transiting star-planet systems similar to KOI-13.01 can be detected in Kepler data, including a measurement of the orbital ephemeris and the planet's minimum mass. Moreover, we derive a realistic estimate of the amplitude uncertainties, and use it to show that data obtained during the entire lifetime of the Kepler mission of 3.5 years will allow detecting non-transiting close-in low-mass companions orbiting bright stars, down to the few-Jupiter-mass level. Data from the Kepler Extended Mission, if funded by NASA, will further improve the detection capabilities.

  7. DOA Estimation for Underwater Wideband Weak Targets Based on Coherent Signal Subspace and Compressed Sensing.

    PubMed

    Li, Jun; Lin, Qiu-Hua; Kang, Chun-Yu; Wang, Kai; Yang, Xiu-Ting

    2018-03-18

    Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accompanied by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and the compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response beamformers (MVDR), this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection for weak targets.
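    As a greatly simplified narrowband stand-in for the wideband CSS-plus-CS pipeline described above, the sketch below recovers a sparse spatial spectrum by orthogonal matching pursuit over a grid of steering vectors; the coherent wideband focusing step is omitted and all array parameters are assumptions.

```python
# Sparse DOA recovery on a half-wavelength line array via OMP.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, spacing = 16, 0.5
grid = np.deg2rad(np.arange(-90, 91))          # 1-degree angle grid

def steering(theta):
    k = np.arange(n_sensors)
    return np.exp(2j * np.pi * spacing * k * np.sin(theta))

A = np.stack([steering(t) for t in grid], axis=1)   # dictionary: sensors x angles

# One strong source at -20 degrees, one weak source at 35 degrees, plus noise.
x = steering(np.deg2rad(-20.0)) + 0.2 * steering(np.deg2rad(35.0))
x += 0.05 * (rng.normal(size=n_sensors) + 1j * rng.normal(size=n_sensors))

def omp(A, y, n_sources):
    support, residual = [], y.copy()
    for _ in range(n_sources):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(np.rad2deg(grid[support]))

print(omp(A, x, 2))   # -> approximately [-20.0, 35.0]
```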

  8. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
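    A tiny illustration of the quantity being tuned: the Tikhonov-regularized minimum norm estimate s_hat = L'(LL' + lambda^2 C)^(-1) y for leadfield L and noise covariance C. The random leadfield and data below are placeholders meant only to show how strongly lambda shapes the solution.

```python
# Minimum norm estimate with Tikhonov regularization (noise covariance C = I).
import numpy as np

rng = np.random.default_rng(7)
n_sensors, n_sources = 32, 500
L = rng.normal(size=(n_sensors, n_sources))      # placeholder leadfield
s_true = np.zeros(n_sources)
s_true[[40, 41, 42]] = 1.0                       # a small active patch
y = L @ s_true + rng.normal(0.0, 0.5, n_sensors)

def mne(L, y, lam):
    gram = L @ L.T + lam ** 2 * np.eye(len(y))
    return L.T @ np.linalg.solve(gram, y)

for lam in (0.1, 3.0, 100.0):
    s_hat = mne(L, y, lam)
    print(f"lambda = {lam:6.1f}: peak |source| at index "
          f"{np.argmax(np.abs(s_hat))}, solution norm {np.linalg.norm(s_hat):.2f}")
```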

  9. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  10. Turbofan engine demonstration of sensor failure detection

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Abdelwahab, Mahmood

    1991-01-01

    In this paper, the results of a full-scale engine demonstration of a sensor failure detection algorithm are presented. The algorithm detects, isolates, and accommodates sensor failures using analytical redundancy. The experimental hardware, including the F100 engine, is described. Demonstration results were obtained over a large portion of a typical flight envelope for the F100 engine. They include both subsonic and supersonic conditions at both medium and full, nonafterburning, power. Estimated accuracy, minimum detectable levels of sensor failures, and failure accommodation performance for an F100 turbofan engine control system are discussed.

  11. A Portable Electronic Nose for Toxic Vapor Detection, Identification, and Quantification

    NASA Technical Reports Server (NTRS)

    Linnell, B. R.; Young, R. C.; Griffin, T. P.; Meneghelli, B. J.; Peterson, B. V.; Brooks, K. B.

    2005-01-01

    The Space Program and military use large quantities of hydrazine and monomethyl hydrazine as rocket propellant, which are very toxic and suspected human carcinogens. Current off-the-shelf portable instruments require 10 to 20 minutes of exposure to detect these compounds at the minimum required concentrations and are prone to false positives, making them unacceptable for many operations. In addition, post-mission analyses of grab bag air samples from the Shuttle have confirmed the occasional presence of on-board volatile organic contaminants, which also need to be monitored to ensure crew safety. A new prototype instrument based on electronic nose (e-nose) technology has demonstrated the ability to qualify (identify) and quantify many of these vapors at their minimum required concentrations, and may easily be adapted to detect many other toxic vapors. To do this, it was necessary to develop algorithms to classify unknown vapors, recognize when a vapor is not any of the vapors of interest, and estimate the concentrations of the contaminants. This paper describes the design of the portable e-nose instrument, test equipment setup, test protocols, pattern recognition algorithms, concentration estimation methods, and laboratory test results.

  12. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    PubMed Central

    Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.

    2013-01-01

    Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this, we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531

  13. Estimating detection probability for Canada lynx Lynx canadensis using snow-track surveys in the northern Rocky Mountains, Montana, USA

    Treesearch

    John R. Squires; Lucretia E. Olson; David L. Turner; Nicholas J. DeCesare; Jay A. Kolbe

    2012-01-01

    We used snow-tracking surveys to determine the probability of detecting Canada lynx Lynx canadensis in known areas of lynx presence in the northern Rocky Mountains, Montana, USA during the winters of 2006 and 2007. We used this information to determine the minimum number of survey replicates necessary to infer the presence and absence of lynx in areas of similar lynx...

  14. Minimum depth of investigation for grounded-wire TEM due to self-transients

    NASA Astrophysics Data System (ADS)

    Zhou, Nannan; Xue, Guoqiang

    2018-05-01

    The grounded-wire transient electromagnetic method (TEM) has been widely used for near-surface metalliferous prospecting, oil and gas exploration, and hydrogeological surveying of the subsurface. However, it is commonly observed that the TEM signal is contaminated by a self-transient process occurring at the early stage of data acquisition. Correspondingly, there exists a minimum depth of investigation, above which the observed signal is not usable for reliable data processing and interpretation. Therefore, for a more comprehensive understanding of the TEM method, it is necessary to study the self-transient process and to develop an approach for quantifying the minimum detection depth. In this paper, we first analyze the temporal behavior of the equivalent circuit of the TEM method and present a theoretical equation for estimating the self-induction voltage based on the inductance of the transmitting wire. Then, numerical modeling is applied to establish the relationship between the minimum depth of investigation and various properties, including the resistivity of the earth, the offset, and the source length. This provides a guide for the design of survey parameters when grounded-wire TEM is applied to shallow detection. Finally, the approach is verified through application to a coal field in China.

  15. DOA Estimation for Underwater Wideband Weak Targets Based on Coherent Signal Subspace and Compressed Sensing

    PubMed Central

    2018-01-01

    Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accompanied by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and the compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response beamformers (MVDR), this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection for weak targets. PMID:29562642

  16. Evaluation of Precipitation Detection over Various Surfaces from Passive Microwave Imagers and Sounders

    NASA Technical Reports Server (NTRS)

    Munchak, S. Joseph; Skofronick-Jackson, Gail

    2012-01-01

    During the middle part of this decade a wide variety of passive microwave imagers and sounders will be unified in the Global Precipitation Measurement (GPM) mission to provide a common basis for frequent (3 hr), global precipitation monitoring. The ability of these sensors to detect precipitation by discerning it from non-precipitating background depends upon the channels available and characteristics of the surface and atmosphere. This study quantifies the minimum detectable precipitation rate and fraction of precipitation detected for four representative instruments (TMI, GMI, AMSU-A, and AMSU-B) that will be part of the GPM constellation. Observations for these instruments were constructed from equivalent channels on the SSMIS instrument on DMSP satellites F16 and F17 and matched to precipitation data from NOAA's National Mosaic and QPE (NMQ) during 2009 over the continental United States. A variational optimal estimation retrieval of non-precipitation surface and atmosphere parameters was used to determine the consistency between the observed brightness temperatures and these parameters, with high cost function values shown to be related to precipitation. The minimum detectable precipitation rate, defined as the lowest rate for which probability of detection exceeds 50%, and the detected fraction of precipitation, are reported for each sensor, surface type (ocean, coast, bare land, snow cover) and precipitation type (rain, mix, snow). The best sensors over ocean and bare land were GMI (0.22 mm/hr minimum threshold and 90% of precipitation detected) and AMSU (0.26 mm/hr minimum threshold and 81% of precipitation detected), respectively. Over coasts (0.74 mm/hr threshold and 12% detected) and snow-covered surfaces (0.44 mm/hr threshold and 23% detected), AMSU again performed best but with much lower detection skill, whereas TMI had no skill over these surfaces. The sounders (particularly over water) benefited from the use of re-analysis data (vs. climatology) to set the a-priori atmospheric state and all instruments benefit from the use of a conditional snow cover emissivity database over land. It is recommended that real-time sources of these data be used in the operational GPM precipitation algorithms.

  17. Quantitative test for concave aspheric surfaces using a Babinet compensator.

    PubMed

    Saxena, A K

    1979-08-15

    A quantitative test for the evaluation of surface figures of concave aspheric surfaces using a Babinet compensator is reported. A theoretical estimate of the sensitivity is 0.002λ for a minimum detectable phase change of 2π × 10⁻³ rad over a segment length of 1.0 cm.

  18. MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks

    NASA Astrophysics Data System (ADS)

    Vahidi, Vahid; Saberinia, Ebrahim

    2018-01-01

    A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the multiple-input multiple-output sparse channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method, which considers the interblock and block mutual coherence simultaneously, is proposed. The real-time phase contains three steps. In the first step, it obtains an a priori estimate of the channel by block orthogonal matching pursuit; afterward, it utilizes that estimated channel to compute the linear minimum mean square error estimate of the received pilots. Finally, block compressive sampling matching pursuit uses the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and evaluating its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique considerably enhances the performance of car detection in a traffic image.

  19. Minimum Detectable Activity for Tomographic Gamma Scanning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkataraman, Ram; Smith, Susan; Kirkpatrick, J. M.

    2015-01-01

    For any radiation measurement system, it is useful to explore and establish the detection limits and a minimum detectable activity (MDA) for the radionuclides of interest, even if the system is to be used at far higher values. The MDA serves as an important figure of merit, and often a system is optimized and configured so that it can meet the MDA requirements of a measurement campaign. Non-destructive assay (NDA) systems based on gamma-ray analysis are no exception, and well-established conventions, such as the Currie method, exist for estimating the detection limits and the MDA. However, the Tomographic Gamma Scanning (TGS) technique poses some challenges for the estimation of detection limits and MDAs. The TGS combines high-resolution gamma-ray spectrometry (HRGS) with low spatial resolution image reconstruction techniques. In non-imaging gamma-ray-based NDA techniques, measured counts in a full-energy peak can be used to estimate the activity of a radionuclide independently of other counting trials. However, in the case of the TGS, each “view” is a full spectral grab (each a counting trial), and each scan consists of 150 spectral grabs in the transmission and emission scans per vertical layer of the item. The set of views in a complete scan is then used to solve for the radionuclide activities on a voxel-by-voxel basis, over 16 layers of a 10x10 voxel grid. Thus, the raw count data are no longer independent trials, but rather constitute input to a matrix solution for the emission image values at the various locations inside the item volume used in the reconstruction. So the validity of the methods used to estimate MDA for an imaging technique such as TGS warrants close scrutiny, because the pair-counting concept of Currie is not directly applicable. One can also raise questions as to whether the TGS, along with other image reconstruction techniques that heavily intertwine data, is a suitable method if one expects to measure samples whose activities are at or just above MDA levels. The paper examines methods used to estimate MDAs for a TGS system, and explores possible solutions that can be rigorously defended.
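    For reference, the Currie convention that the abstract contrasts with TGS can be written, for a simple counting measurement with background B counts, as a detection limit L_D = 2.71 + 4.65 sqrt(B) counts, converted to activity by efficiency, branching ratio, and live time. A sketch with illustrative numbers:

```python
# Currie-style minimum detectable activity for a gamma counting measurement
# (5% false-positive and false-negative risks); all inputs are illustrative.

def currie_mda(background_counts, live_time_s, efficiency, gamma_yield):
    """Minimum detectable activity (Bq) from the Currie detection limit."""
    l_d = 2.71 + 4.65 * background_counts ** 0.5   # detection limit, counts
    return l_d / (efficiency * gamma_yield * live_time_s)

mda = currie_mda(background_counts=400, live_time_s=600,
                 efficiency=0.02, gamma_yield=0.85)
print(f"MDA = {mda:.2f} Bq")
```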

  20. Correlation Dimension Estimates of Global and Local Temperature Data.

    NASA Astrophysics Data System (ADS)

    Wang, Qiang

    1995-11-01

    The author has attempted to detect the presence of low-dimensional deterministic chaos in temperature data by estimating the correlation dimension with the Hill estimate recently developed by Mikosch and Wang. There is no convincing evidence of low dimensionality with either the global dataset (Southern Hemisphere monthly average temperatures from 1858 to 1984) or the local temperature dataset (daily minimums at Auckland, New Zealand). Any apparent reduction in the dimension estimates appears to be due largely, if not entirely, to effects of statistical bias, but neither is the data a purely random stochastic process. The dimension of the climatic attractor may be significantly larger than 10.

  1. Weighted network analysis of high-frequency cross-correlation measures

    NASA Astrophysics Data System (ADS)

    Iori, Giulia; Precup, Ovidiu V.

    2007-03-01

    In this paper we implement a Fourier method to estimate high-frequency correlation matrices from small data sets. The Fourier estimates are shown to be considerably less noisy than the standard Pearson correlation measures and thus capable of detecting subtle changes in correlation matrices with just a month of data. The evolution of correlation at different time scales is analyzed from the full correlation matrix and its minimum spanning tree representation. The analysis is performed by implementing measures from the theory of random weighted networks.
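    The minimum-spanning-tree step can be sketched directly: map the correlation matrix to the standard distance d_ij = sqrt(2(1 - rho_ij)) and extract the MST. The random returns below stand in for real high-frequency data, and the Fourier correlation estimator itself is not reproduced.

```python
# Correlation matrix -> distance matrix -> minimum spanning tree.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(5)
returns = rng.normal(size=(500, 6))        # 500 observations, 6 assets
returns[:, 1] += 0.8 * returns[:, 0]       # make assets 0 and 1 correlated

rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - rho))
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist).toarray()
edges = [(int(i), int(j), round(float(mst[i, j]), 3))
         for i, j in zip(*np.nonzero(mst))]
print(edges)   # 5 edges linking 6 assets; the correlated pair gets the shortest
```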

  2. A Portable Electronic Nose For Toxic Vapor Detection, Identification, and Quantification

    NASA Technical Reports Server (NTRS)

    Linnell, B. R.; Young, R. C.; Griffin, T. P.; Meneghelli, B. J.; Peterson, B. V.; Brooks, K. B.

    2005-01-01

    A new prototype instrument based on electronic nose (e-nose) technology has demonstrated the ability to identify and quantify many vapors of interest to the Space Program at their minimum required concentrations for both single vapors and two-component vapor mixtures, and may easily be adapted to detect many other toxic vapors. To do this, it was necessary to develop algorithms to classify unknown vapors, recognize when a vapor is not any of the vapors of interest, and estimate the concentrations of the contaminants. This paper describes the design of the portable e-nose instrument, test equipment setup, test protocols, pattern recognition algorithms, concentration estimation methods, and laboratory test results.

  3. Estimation of Rain Intensity Spectra over the Continental US Using Ground Radar-Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Lin, Xin; Hou, Arthur Y.

    2013-01-01

    A high-resolution surface rainfall product is used to estimate rain characteristics over the continental US as a function of rain intensity. By treating each observation at 4-km horizontal and 1-h temporal resolution as an individual precipitating/nonprecipitating sample, statistics of rain occurrence and rain volume, including their geographical and seasonal variations, are documented. Quantitative estimates are also made of the impact of light rain events missed because of the detection limits of satellite sensors. Statistics of rain characteristics are found to have large seasonal and geographical variations across the continental US. Although heavy rain events (>10 mm/hr) occupy only 2.6% of total rain occurrence, they may contribute 27% of total rain volume. Light rain events (<1.0 mm/hr), occurring much more frequently (65%) than heavy rain events, also make important contributions (15%) to the total rain volume. For minimum detectable rain rates set at 0.5 and 0.2 mm/hr, close to the sensitivities of current and future space-borne precipitation radars, about 43% and 11% of total rain occurrence falls below these thresholds, representing 7% and 0.8% of total rain volume, respectively. For passive microwave sensors, with rain pixel sizes ranging from 14 to 16 km and minimum detectable rain rates around 1 mm/hr, the missed light rain events may account for 70% of rain occurrence and 16% of rain volume. Statistics of rain characteristics are also examined on domains with different temporal and spatial resolutions. Current issues in estimating rain characteristics from satellite measurements and model outputs are discussed.
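
    A minimal sketch of the bookkeeping behind the threshold statistics quoted above: given a series of hourly rain samples, it computes the fraction of rain occurrence and of rain volume falling below a minimum detectable rain rate. The synthetic lognormal intensities are an assumption, not the study's product.

```python
import numpy as np

def rain_fractions(rain_mm_per_hr, threshold):
    """Fraction of rain occurrence and of rain volume below a minimum
    detectable rain rate, from a 1-D array of hourly rain samples."""
    raining = rain_mm_per_hr > 0.0
    below = raining & (rain_mm_per_hr < threshold)
    occ_frac = below.sum() / max(raining.sum(), 1)
    vol_frac = rain_mm_per_hr[below].sum() / max(rain_mm_per_hr.sum(), 1e-12)
    return occ_frac, vol_frac

# Synthetic example only: 10% of hours rain, lognormal intensities
rng = np.random.default_rng(2)
rain = np.where(rng.random(10000) < 0.1, rng.lognormal(-0.5, 1.0, 10000), 0.0)
for thr in (0.2, 0.5, 1.0):
    occ, vol = rain_fractions(rain, thr)
    print(f"<{thr} mm/hr: {occ:.0%} of occurrence, {vol:.0%} of volume")
```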

  4. Brief Report: Investigating Uncertainty in the Minimum Mortality Temperature: Methods and Application to 52 Spanish Cities.

    PubMed

    Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio

    2017-01-01

    The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but the ability to characterize it has been limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of the confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality curve estimated by splines. The coverage of the estimated CIs was close to the nominal value (95%) in the simulated datasets, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising at almost exactly the same rate as annual mean temperature. The proposed method for computing CIs and SEs for minimums from spline curves makes it possible to compare minimum mortality temperatures across cities and to investigate their associations with climate properly, accounting for estimation uncertainty.
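
    A minimal sketch of the approximate parametric bootstrap idea, with a quadratic fit standing in for the paper's spline: coefficients are resampled from their estimated covariance and each resampled curve's minimum is located, giving a CI and SE for the minimum mortality temperature. The data and model settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
temp = np.linspace(0.0, 30.0, 200)                      # daily mean temperature (degC)
mort = 0.02 * (temp - 21.0) ** 2 + rng.normal(0.0, 0.3, temp.size)

# Quadratic fit stands in for the spline; polyfit also returns the
# coefficient covariance needed for the parametric bootstrap.
coef, cov = np.polyfit(temp, mort, deg=2, cov=True)

draws = rng.multivariate_normal(coef, cov, size=2000)   # resampled coefficient sets
grid = np.linspace(temp.min(), temp.max(), 1001)
V = np.vstack([grid ** 2, grid, np.ones_like(grid)])    # quadratic design on the grid
mmt_draws = grid[np.argmin(draws @ V, axis=1)]          # each resampled curve's minimum

lo, hi = np.percentile(mmt_draws, [2.5, 97.5])
print(f"MMT 95% CI: ({lo:.1f}, {hi:.1f}) degC, SE = {mmt_draws.std():.2f}")
```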

  5. Enhancing interferometer phase estimation, sensing sensitivity, and resolution using robust entangled states

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-11-01

    With the goal of designing interferometers and interferometer sensors, e.g., LADARs, with enhanced sensitivity, resolution, and phase estimation, states using quantum entanglement are discussed. These states include N00N states, plain M and M states (PMMSs), and linear combinations of M and M states (LCMMS). Closed-form expressions are provided for the optimal detection operators; the visibility, a measure of the state's robustness to loss and noise; a resolution measure; and the phase-estimate error. The optimal resolution for the maximum visibility and minimum phase error are found. For the visibility, comparisons between PMMSs, LCMMS, and N00N states are provided. For the minimum phase error, comparisons between LCMMS, PMMSs, N00N states, separate photon states (SPSs), the shot noise limit (SNL), and the Heisenberg limit (HL) are provided. A representative collection of computational results illustrating the superiority of LCMMS when compared to PMMSs and N00N states is given. It is found that for a resolution 12 times the classical result, LCMMS has visibility 11 times that of N00N states and 4 times that of PMMSs. For the same case, the minimum phase error for LCMMS is 10.7 times smaller than that of PMMSs and 29.7 times smaller than that of N00N states.

  6. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.

  7. Sampling methods, dispersion patterns, and fixed precision sequential sampling plans for western flower thrips (Thysanoptera: Thripidae) and cotton fleahoppers (Hemiptera: Miridae) in cotton.

    PubMed

    Parajulee, M N; Shrestha, R B; Leser, J F

    2006-04-01

    A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for the cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave results similar to the visual method in detecting adult thrips, but it detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed-precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's power law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with increasing fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
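
    The record reports minimum sample sizes at fixed precision derived from Taylor's power law; a common way to obtain such numbers is the standard fixed-precision sample-size formula built on that law, sketched below. The Taylor coefficients a and b here are hypothetical placeholders, not the study's fitted values.

```python
def min_sample_size(mean_density, a, b, precision=0.25):
    """Minimum sample size at fixed precision C from Taylor's power law
    (variance = a * mean**b):  n = a * m**(b - 2) / C**2."""
    return a * mean_density ** (b - 2.0) / precision ** 2

# Hypothetical Taylor coefficients, for illustration only: note that the
# required sample size falls as pest density rises, as in the abstract.
for m in (1.0, 10.0):
    print(f"density {m}: n = {min_sample_size(m, a=2.5, b=1.4):.0f} plants")
```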

  8. Photometry of symbiotic stars. XI. EG And, Z And, BF Cyg, CH Cyg, CI Cyg, V1329 Cyg, TX CVn, AG Dra, RW Hya, AR Pav, AG Peg, AX Per, QW Sge, IV Vir and the LMXB V934 Her

    NASA Astrophysics Data System (ADS)

    Skopal, A.; Pribulla, T.; Vaňko, M.; Velič, Z.; Semkov, E.; Wolf, M.; Jones, A.

    2004-02-01

    We present new photometric observations of EG And, Z And, BF Cyg, CH Cyg, CI Cyg, V1329 Cyg, TX CVn, AG Dra, RW Hya, AG Peg, AX Per, IV Vir and the peculiar M giant V934 Her, which were made in the standard Johnson UBV(R) system. QW Sge was measured in the Kron-Cousin B, V, RC, IC system and for AR Pav we present its new visual estimates. The current issue gathers observations of these objects to December 2003. The main results can be summarized as follows: EG And: The primary minimum in the U light curve (LC) occurred at the end of 2002. A 0.2 -- 0.3 mag brightening in U was detected in the autumn of 2003. Z And: At around August 2002 we detected for the first time a minimum, which is due to eclipse of the active object by the red giant. Measurements from 2003.3 are close to those of a quiescent phase. BF Cyg: In February 2003 a short-term flare developed in the LC. A difference in the depth of recent minima was detected. CH Cyg: This star was in a quiescent phase at a rather bright state. A shallow minimum occurred at ˜ JD 2 452 730, close to the position of the inferior conjunction of the giant in the inner binary of the triple-star model of CH Cyg. CI Cyg: Our observations cover the descending branch of a broad minimum. TX CVn: At/around the beginning of 2003 the star entered a bright stage containing a minimum at ˜ JD 2 452 660. AG Dra: New observations revealed two eruptions, which peaked in October 2002 and 2003 at ˜ 9.3 in U. AR Pav: Our new visual estimates showed a transient disappearance of a wave-like modulation in the star's brightness between the minima at epochs E = 66 and E = 68 and its reappearance. AG Peg: Our measurements from the end of 2001 showed rather complex profile of the LC. RW Hya: Observations follow behaviour of the wave-like variability of quiet symbiotics. AX Per: In May 2003 a 0.5 mag flare was detected following a rapid decrease of the light to a minimum. QW Sge: CCD observations in B, V, RC, IC bands cover a period from 1994.5 to 2003.5. An increase in the star's brightness by about 1 mag was observed in all passbands in 1997. Less pronounced brightening was detected in 1999/2000. V934 Her: Our observations did not show any larger variation in the optical as a reaction to its X-ray activity.

  9. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement in practice.

  10. Zika and Chikungunya virus detection in naturally infected Aedes aegypti in Ecuador.

    PubMed

    Cevallos, Varsovia; Ponce, Patricio; Waggoner, Jesse J; Pinsky, Benjamin A; Coloma, Josefina; Quiroga, Cristina; Morales, Diego; Cárdenas, Maria José

    2018-01-01

    The wide and rapid spread of the Chikungunya (CHIKV) and Zika (ZIKV) viruses represents a global public health problem, especially in tropical and subtropical environments. Early detection of CHIKV and ZIKV in mosquitoes may help in understanding the dynamics of the diseases in high-risk areas and in designing data-based epidemiological surveillance to activate the preparedness and response of the public health system and vector control programs. This study was done to detect ZIKV and CHIKV in naturally infected, fed female Aedes aegypti (L.) mosquitoes from active epidemic urban areas in Ecuador. Pools (n=193 mosquitoes; 22 pools) and individuals (n=22) of field-collected Ae. aegypti mosquitoes from high-risk arbovirus infection sites in Ecuador were analyzed for the presence of CHIKV and ZIKV using RT-PCR. Phylogenetic analysis demonstrated that both the ZIKV and CHIKV viruses circulating in Ecuador correspond to the Asian lineages. The minimum infection rate (MIR) of CHIKV for Esmeraldas city was 2.3%, and the maximum likelihood estimate (MLE) was 3.3%. The MIR of ZIKV was 5.3% for Portoviejo city and 2.1% for Manta city; the corresponding MLEs were 6.9% and 2.6%, respectively. Detection of arboviruses and infection rates in the arthropod vectors may help to predict an outbreak and serve as a warning tool in surveillance programs. Copyright © 2017 Elsevier B.V. All rights reserved.
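
    A minimal sketch of the two infection-rate statistics quoted above, under the simplifying assumption of equal pool sizes: the MIR assumes one infected mosquito per positive pool, while the MLE solves for the per-mosquito infection probability consistent with the observed proportion of positive pools. The example counts are illustrative, not the study's raw pool data.

```python
def mir(n_positive_pools, n_mosquitoes):
    """Minimum infection rate (%): assumes exactly one infected mosquito
    per positive pool, hence a lower bound on the true rate."""
    return 100.0 * n_positive_pools / n_mosquitoes

def mle_infection_rate(n_positive_pools, n_pools, pool_size):
    """MLE of the per-mosquito infection rate (%) for equal-size pools.
    P(pool positive) = 1 - (1 - p)**pool_size, so the closed-form MLE is
    p = 1 - (fraction of negative pools)**(1/pool_size)."""
    prop_negative = 1.0 - n_positive_pools / n_pools
    return 100.0 * (1.0 - prop_negative ** (1.0 / pool_size))

# Illustrative numbers only: 2 positive pools out of 10 pools of 9 mosquitoes
print(mir(2, 90), mle_infection_rate(2, 10, 9))
```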

  11. Longevity in Calumma parsonii, the World's largest chameleon.

    PubMed

    Tessa, Giulia; Glaw, Frank; Andreone, Franco

    2017-03-01

    Large body size in ectothermic species can be correlated with high life expectancy. We assessed the longevity of the world's largest chameleon, the Parson's chameleon Calumma parsonii from Madagascar, using skeletochronology of phalanges taken from preserved specimens held in European natural history museums. Owing to high bone resorption, we can provide only the minimum age of each specimen. The highest minimum age detected was nine years for a male and eight years for a female, confirming that this species is considerably long-lived among chameleons. Our data also show a strong correlation between snout-vent length and estimated age. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Minimum time and fuel flight profiles for an F-15 airplane with a Highly Integrated Digital Electronic Control (HIDEC) system

    NASA Technical Reports Server (NTRS)

    Haering, E. A., Jr.; Burcham, F. W., Jr.

    1984-01-01

    A simulation study was conducted to optimize minimum-time and minimum-fuel flight paths for an F-15 airplane powered by two F100 Engine Model Derivative (EMD) engines. The benefits of using variable stall margin (uptrim) to increase performance were also determined. This study supports the NASA Highly Integrated Digital Electronic Control (HIDEC) program. The basis for comparison was the minimum time and fuel used to reach Mach 2 at 13,716 m (45,000 ft) from initial conditions of Mach 0.15 at 1524 m (5000 ft). Results were also compared to a pilot's estimated minimum-time and minimum-fuel trajectory determined from the F-15 flight manual and previous experience. The minimum-time trajectory took 15 percent less time than the pilot's estimate for the standard EMD engines, while the minimum-fuel trajectory used 1 percent less fuel than the pilot's estimate. The F-15 airplane with EMD engines and uptrim was 23 percent faster than the pilot's estimate, and the minimum fuel used was 5 percent less than the estimate.

  13. A Unified Framework for Estimating Minimum Detectable Effects for Comparative Short Interrupted Time Series Designs

    ERIC Educational Resources Information Center

    Price, Cristofer; Unlu, Fatih

    2014-01-01

    The Comparative Short Interrupted Time Series (C-SITS) design is a frequently employed quasi-experimental method, in which the pre- and post-intervention changes observed in the outcome levels of a treatment group is compared with those of a comparison group where the difference between the former and the latter is attributed to the treatment. The…

  14. Estimating occupancy and predicting numbers of gray wolf packs in Montana using hunter surveys

    USGS Publications Warehouse

    Rich, Lindsey N.; Russell, Robin E.; Glenn, Elizabeth M.; Mitchell, Michael S.; Gude, Justin A.; Podruzny, Kevin M.; Sime, Carolyn A.; Laudon, Kent; Ausband, David E.; Nichols, James D.

    2013-01-01

    Reliable knowledge of the status and trend of carnivore populations is critical to their conservation and management. Methods for monitoring carnivores, however, are challenging to conduct across large spatial scales. In the Northern Rocky Mountains, wildlife managers need a time- and cost-efficient method for monitoring gray wolf (Canis lupus) populations. Montana Fish, Wildlife and Parks (MFWP) conducts annual telephone surveys of >50,000 deer and elk hunters. We explored how survey data on hunters' sightings of wolves could be used to estimate the occupancy and distribution of wolf packs and predict their abundance in Montana for 2007–2009. We assessed model utility by comparing our predictions to MFWP minimum known number of wolf packs. We minimized false positive detections by identifying a patch as occupied if 2–25 wolves were detected by ≥3 hunters. Overall, estimates of the occupancy and distribution of wolf packs were generally consistent with known distributions. Our predictions of the total area occupied increased from 2007 to 2009 and predicted numbers of wolf packs were approximately 1.34–1.46 times the MFWP minimum counts for each year of the survey. Our results indicate that multi-season occupancy models based on public sightings can be used to monitor populations and changes in the spatial distribution of territorial carnivores across large areas where alternative methods may be limited by personnel, time, accessibility, and budget constraints.

  15. Post Detection Target State Estimation Using Heuristic Information Processing - A Preliminary Investigation

    DTIC Science & Technology

    1977-09-01

    Interpolation algorithm allows this to be done when the transition boundaries are defined close together and parallel to one another. (The remainder of this scanned abstract survives only as column-interleaved OCR fragments, mentioning variable kernel estimates, a goodness-of-fit criterion for a set of samples [2], and, for the unimodal case, an absolute minimum occurring at k = 100.)

  16. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    USGS Publications Warehouse

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity; 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.

  17. Palila abundance estimates and trends

    USGS Publications Warehouse

    Banko, Paul C.; Brink, Kevin W.; Camp, Richard

    2014-01-01

    The palila (Loxioides bailleui) population was surveyed annually during 1998−2014 on Mauna Kea Volcano to determine abundance, population trend, and spatial distribution. In the latest surveys, the 2013 population was estimated at 1,492−2,132 birds (point estimate: 1,799) and the 2014 population at 1,697−2,508 (point estimate: 2,070). Similar numbers of palila were detected during the first and subsequent counts within each year during 2012−2014, and there was no difference in detection probability due to count sequence, suggesting that greater precision in population estimates can be achieved if future surveys include repeat visits. No palila were detected outside the core survey area in 2013 or 2014, suggesting that most if not all palila inhabit the western slope during the survey period. Since 2003, the size of the area containing all annual palila detections has not changed significantly among years, suggesting that the range of the species has remained stable, although this area represents only about 5% of its historical extent. During 1998−2003, palila numbers fluctuated moderately (coefficient of variation [CV] = 0.21). After peaking in 2003, population estimates declined steadily through 2011; since 2010, estimates have fluctuated moderately above the 2011 minimum (CV = 0.18). The average rate of decline during 1998−2014 was 167 birds per year, with very strong statistical support for an overall declining trend in abundance. Over the 16-year monitoring period, the estimated rate of change equated to a 68% decline in the population.

  18. A Coupled Approach for Structural Damage Detection with Incomplete Measurements

    NASA Technical Reports Server (NTRS)

    James, George; Cao, Timothy; Kaouk, Mo; Zimmerman, David

    2013-01-01

    This historical work couples model order reduction, damage detection, dynamic residual/mode shape expansion, and damage extent estimation to overcome the incomplete measurements problem by using an appropriate undamaged structural model. A contribution of this work is the development of a process to estimate the full dynamic residuals using the columns of a spring connectivity matrix obtained by disassembling the structural stiffness matrix. Another contribution is the extension of an eigenvector filtering procedure to produce full-order mode shapes that more closely match the measured active partition of the mode shapes using a set of modified Ritz vectors. The full dynamic residuals and full mode shapes are used as inputs to the minimum rank perturbation theory to provide an estimate of damage location and extent. The issues associated with this process are also discussed as drivers of near-term development activities to understand and improve this approach.

  19. Estimation of completeness magnitude with a Bayesian modeling of daily and weekly variations in earthquake detectability

    NASA Astrophysics Data System (ADS)

    Iwata, T.

    2014-12-01

    In the analysis of seismic activity, assessment of the earthquake detectability of a seismic network is a fundamental issue. For this assessment, the completeness magnitude Mc, the minimum magnitude above which all earthquakes are recorded, is frequently estimated. In most cases, Mc is estimated from an earthquake catalog of duration longer than several weeks. However, owing to human activity, the noise level in seismic data is higher on weekdays than on weekends, so that earthquake detectability has a weekly variation [e.g., Atef et al., 2009, BSSA]; considering such a variation contributes significantly to the precise assessment of earthquake detectability and Mc. For a quantitative evaluation of the weekly variation, we introduced the statistical model of a magnitude-frequency distribution of earthquakes covering an entire magnitude range [Ogata & Katsura, 1993, GJI]. The frequency distribution is represented as the product of the Gutenberg-Richter law and a detection rate function. Then, the weekly variation in one of the model parameters, which corresponds to the magnitude at which the detection rate of earthquakes is 50%, was estimated. Because earthquake detectability also has a daily variation [e.g., Iwata, 2013, GJI], the weekly and daily variations were estimated simultaneously by adopting a modification of a Bayesian smoothing spline method for temporal change in earthquake detectability developed in Iwata [2014, Aust. N. Z. J. Stat.]. Based on the estimated variations in the parameter, the value of Mc was estimated. In this study, the Japan Meteorological Agency catalog from 2006 to 2010 was analyzed; this dataset is the same as that analyzed in Iwata [2013], where only the daily variation in earthquake detectability was considered in the estimation of Mc. A rectangular grid with 0.1° intervals covering in and around Japan was deployed, and the value of Mc was estimated for each gridpoint. Consequently, a clear weekly variation was revealed; detectability is better on Sundays than on the other days. The estimated spatial variation in Mc was compared with that estimated in Iwata [2013]; the maximum difference between Mc values with and without considering the weekly variation is approximately 0.2, suggesting the importance of accounting for the weekly variation in the estimation of Mc.
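
    For reference, the Ogata & Katsura [1993] magnitude-frequency model the abstract builds on can be written as the Gutenberg-Richter law multiplied by a cumulative-normal detection-rate function; μ, the magnitude at which 50% of events are detected, is the parameter given the daily and weekly variation here.

```latex
% Observed magnitude-frequency density: Gutenberg-Richter law times a
% cumulative-normal detection rate q(M); mu is the 50%-detection magnitude.
\lambda(M) \propto 10^{-bM}\, q(M), \qquad
q(M) = \int_{-\infty}^{M} \frac{1}{\sqrt{2\pi}\,\sigma}
       \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \mathrm{d}x .
```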

  20. Simulating future uncertainty to guide the selection of survey designs for long-term monitoring

    USGS Publications Warehouse

    Garman, Steven L.; Schweiger, E. William; Manier, Daniel J.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

    2012-01-01

    A goal of environmental monitoring is to provide sound information on the status and trends of natural resources (Messer et al. 1991, Theobald et al. 2007, Fancy et al. 2009). When monitoring observations are acquired by measuring a subset of the population of interest, probability sampling as part of a well-constructed survey design provides the most reliable and legally defensible approach to achieve this goal (Cochran 1977, Olsen et al. 1999, Schreuder et al. 2004; see Chapters 2, 5, 6, 7). Previous works have described the fundamentals of sample surveys (e.g. Hansen et al. 1953, Kish 1965). Interest in survey designs and monitoring over the past 15 years has led to extensive evaluations and new developments of sample selection methods (Stevens and Olsen 2004), of strategies for allocating sample units in space and time (Urquhart et al. 1993, Overton and Stehman 1996, Urquhart and Kincaid 1999), and of estimation (Lesser and Overton 1994, Overton and Stehman 1995) and variance properties (Larsen et al. 1995, Stevens and Olsen 2003) of survey designs. Carefully planned, “scientific” (Chapter 5) survey designs have become a standard in contemporary monitoring of natural resources. Based on our experience with the long-term monitoring program of the US National Park Service (NPS; Fancy et al. 2009; Chapters 16, 22), operational survey designs tend to be selected using the following procedures. For a monitoring indicator (i.e. variable or response), a minimum detectable trend requirement is specified, based on the minimum level of change that would result in meaningful change (e.g. degradation). A probability of detecting this trend (statistical power) and an acceptable level of uncertainty (Type I error; see Chapter 2) within a specified time frame (e.g. 10 years) are specified to ensure timely detection. Explicit statements of the minimum detectable trend, the time frame for detecting the minimum trend, power, and acceptable probability of Type I error (α) collectively form the quantitative sampling objective.
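
    A minimal sketch of how such a quantitative sampling objective can be checked: simulate surveys under the minimum detectable trend and count how often a regression test detects it at level α. The noise level, site count, and effect size below are hypothetical.

```python
import numpy as np
from scipy import stats

def trend_power(slope, sigma, n_years, n_sites, alpha=0.05, n_sims=2000, seed=4):
    """Fraction of simulated surveys in which a linear trend of the given
    slope is detected (two-sided test on the regression slope)."""
    rng = np.random.default_rng(seed)
    years = np.arange(n_years, dtype=float)
    hits = 0
    for _ in range(n_sims):
        # site-averaged annual index: trend plus sampling noise
        y = slope * years + rng.normal(0.0, sigma / np.sqrt(n_sites), n_years)
        hits += stats.linregress(years, y).pvalue < alpha
    return hits / n_sims

# E.g., can a decline of 0.02 units/yr be detected within 10 years?
print(trend_power(slope=-0.02, sigma=0.10, n_years=10, n_sites=30))
```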

  1. The trajectory and atmospheric impact of asteroid 2014 AA

    NASA Astrophysics Data System (ADS)

    Farnocchia, Davide; Chesley, Steven R.; Brown, Peter G.; Chodas, Paul W.

    2016-08-01

    Near-Earth asteroid 2014 AA entered the Earth's atmosphere on 2014 January 2, only 21 h after being discovered by the Catalina Sky Survey. In this paper we compute the trajectory of 2014 AA by combining the available optical astrometry, seven ground-based observations over 69 min, and the International Monitoring System detection of the atmospheric impact infrasonic airwaves in a least-squares orbit estimation filter. The combination of these two sources of observations results in a tremendous improvement in the orbit uncertainties. The impact time is 3:05 UT with a 1σ uncertainty of 6 min, while the impact location corresponds to a west longitude of 44.2° and a latitude of 13.1° with a 1σ uncertainty of 140 km. The minimum impact energy estimated from the infrasound data and the impact velocity result in an estimated minimum mass of 22.6 t. By propagating the trajectory of 2014 AA backwards we find that the only window for finding precovery observations is for the three days before its discovery.
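
    The minimum-mass figure follows from equating the infrasound-derived minimum impact energy with kinetic energy; symbolically (the record does not restate the energy and speed values, so none are filled in here):

```latex
E_{\min} = \tfrac{1}{2}\, m_{\min}\, v_{\mathrm{impact}}^{2}
\qquad\Longrightarrow\qquad
m_{\min} = \frac{2\,E_{\min}}{v_{\mathrm{impact}}^{2}} \approx 22.6\ \mathrm{t}.
```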

  2. Extreme Brightness Temperatures and Refractive Substructure in 3C273 with RadioAstron

    NASA Astrophysics Data System (ADS)

    Johnson, Michael D.; Kovalev, Yuri Y.; Gwinn, Carl R.; Gurvits, Leonid I.; Narayan, Ramesh; Macquart, Jean-Pierre; Jauncey, David L.; Voitsik, Peter A.; Anderson, James M.; Sokolovsky, Kirill V.; Lisakov, Mikhail M.

    2016-03-01

    Earth-space interferometry with RadioAstron provides the highest direct angular resolution ever achieved in astronomy at any wavelength. RadioAstron detections of the classic quasar 3C 273 on interferometric baselines up to 171,000 km suggest brightness temperatures exceeding expected limits from the “inverse-Compton catastrophe” by two orders of magnitude. We show that at 18 cm, these estimates most likely arise from refractive substructure introduced by scattering in the interstellar medium. We use the scattering properties to estimate an intrinsic brightness temperature of 7 × 10^12 K, which is consistent with expected theoretical limits, but which is ˜15 times lower than estimates that neglect substructure. At 6.2 cm, the substructure influences the measured values appreciably but gives an estimated brightness temperature that is comparable to models that do not account for the substructure. At 1.35 cm, the substructure does not affect the extremely high inferred brightness temperatures, in excess of 10^13 K. We also demonstrate that for a source having a Gaussian surface brightness profile, a single long-baseline estimate of refractive substructure determines an absolute minimum brightness temperature, if the scattering properties along a given line of sight are known, and that this minimum accurately approximates the apparent brightness temperature over a wide range of total flux densities.

  3. Automated Land Cover Change Detection and Mapping from Hidden Parameter Estimates of Normalized Difference Vegetation Index (NDVI) Time-Series

    NASA Astrophysics Data System (ADS)

    Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.

    2017-12-01

    Multitemporal observations acquired frequently by satellites with short revisit periods, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), are an important source for modeling land cover. Owing to the inherent seasonality of the land cover, harmonic modeling reveals hidden state parameters characteristic of it, which are used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use eight-day MODIS composites to create a Normalized Difference Vegetation Index (NDVI) time series spanning ten years. Improved hidden-parameter estimates of the nonlinear harmonic NDVI model are obtained using the particle filter (PF), a sequential Monte Carlo estimator. The PF-based nonlinear estimation is shown to improve parameter estimation for different land cover types compared with existing techniques that use the Extended Kalman Filter (EKF) and therefore require linearization of the harmonic model. As these parameters are representative of a given land cover, their applicability to near-real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by treating change as a rare-class problem, and this approach is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated with high spatial resolution change maps of the given regions.
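
    A minimal particle-filter sketch in the spirit described above, assuming the harmonic coefficients evolve as a slow random walk and NDVI observations carry Gaussian noise; the harmonic form, noise levels, and particle count are assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 46                                     # 8-day composites per year
t = np.arange(10 * T)                      # ten years of synthetic observations
w = 2.0 * np.pi / T
y = (0.5 + 0.25 * np.cos(w * t) + 0.10 * np.sin(w * t)
     + rng.normal(0.0, 0.03, t.size))      # synthetic NDVI series

n_p = 500
particles = rng.normal([0.5, 0.0, 0.0], 0.2, (n_p, 3))    # states (a, b, c)
est = np.empty((t.size, 3))
for k in range(t.size):
    particles += rng.normal(0.0, 0.005, particles.shape)  # random-walk dynamics
    pred = (particles[:, 0]
            + particles[:, 1] * np.cos(w * t[k])
            + particles[:, 2] * np.sin(w * t[k]))
    wts = np.exp(-0.5 * ((y[k] - pred) / 0.03) ** 2)      # Gaussian likelihood
    wts /= wts.sum()
    est[k] = wts @ particles                              # posterior-mean state
    particles = particles[rng.choice(n_p, n_p, p=wts)]    # multinomial resampling
print("final harmonic coefficients (a, b, c):", est[-1].round(3))
```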

  4. Small negative cloud-to-ground lightning reports at the NASA Kennedy Space Center and Air Force Eastern Range

    NASA Astrophysics Data System (ADS)

    Wilson, Jennifer G.; Cummins, Kenneth L.; Krider, E. Philip

    2009-12-01

    The NASA Kennedy Space Center (KSC) and Air Force Eastern Range (ER) use data from two cloud-to-ground (CG) lightning detection networks, the Cloud-to-Ground Lightning Surveillance System (CGLSS) and the U.S. National Lightning Detection Network™ (NLDN), and a volumetric lightning mapping array, the Lightning Detection and Ranging (LDAR) system, to monitor and characterize lightning that is potentially hazardous to launch or ground operations. Data obtained from these systems during June-August 2006 have been examined to check the classification of small, negative CGLSS reports that have an estimated peak current, ∣Ip∣ less than 7 kA, and to determine the smallest values of Ip that are produced by first strokes, by subsequent strokes that create a new ground contact (NGC), and by subsequent strokes that remain in a preexisting channel (PEC). The results show that within 20 km of the KSC-ER, 21% of the low-amplitude negative CGLSS reports were produced by first strokes, with a minimum Ip of -2.9 kA; 31% were by NGCs, with a minimum Ip of -2.0 kA; and 14% were by PECs, with a minimum Ip of -2.2 kA. The remaining 34% were produced by cloud pulses or lightning events that we were not able to classify.

  5. Optimal use of land surface temperature data to detect changes in tropical forest cover

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Thijs T.; Frank, Andrew J.; Jin, Yufang; Smyth, Padhraic; Goulden, Michael L.; van der Werf, Guido R.; Randerson, James T.

    2011-06-01

    Rapid and accurate assessment of global forest cover change is needed to focus conservation efforts and to better understand how deforestation is contributing to the buildup of atmospheric CO2. Here we examined different ways to use land surface temperature (LST) to detect changes in tropical forest cover. In our analysis we used monthly 0.05° × 0.05° Terra Moderate Resolution Imaging Spectroradiometer (MODIS) observations of LST and Program for the Estimation of Deforestation in the Brazilian Amazon (PRODES) estimates of forest cover change. We also compared MODIS LST observations with an independent estimate of forest cover loss derived from MODIS and Landsat observations. Our study domain of approximately 10° × 10° included the Brazilian state of Mato Grosso. For optimal use of LST data to detect changes in tropical forest cover in our study area, we found that using data sampled during the end of the dry season (˜1-2 months after minimum monthly precipitation) had the greatest predictive skill. During this part of the year, precipitation was low, surface humidity was at a minimum, and the difference between day and night LST was the largest. We used this information to develop a simple temporal sampling algorithm appropriate for use in pantropical deforestation classifiers. Combined with the normalized difference vegetation index, a logistic regression model using day-night LST did moderately well at predicting forest cover change. Annual changes in day-night LST decreased during 2006-2009 relative to 2001-2005 in many regions within the Amazon, providing independent confirmation of lower deforestation levels during the latter part of this decade as reported by PRODES.
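
    A small sketch of the temporal sampling rule described above: pick the month a fixed lag after the climatological precipitation minimum and form the day-night LST difference there as the classifier feature. The climatology and LST stacks below are synthetic placeholders.

```python
import numpy as np

def sampling_month(monthly_precip_clim, lag_months=2):
    """Month index (0-11) at which to sample LST: `lag_months` after the
    climatological precipitation minimum, wrapping around the year."""
    m0 = int(np.argmin(monthly_precip_clim))
    return (m0 + lag_months) % 12

# Synthetic climatology (mm/month) and day/night LST stacks (12 months x 100 pixels)
precip = np.array([260, 280, 300, 220, 110, 30, 15, 25, 70, 150, 210, 250])
day_lst, night_lst = np.random.default_rng(6).normal(size=(2, 12, 100))
m = sampling_month(precip)
delta_lst = day_lst[m] - night_lst[m]      # day-night LST feature at the chosen month
print("sampling month index:", m, "mean dLST:", round(delta_lst.mean(), 2))
```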

  6. Ellipsoids for anomaly detection in remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Grosklos, Guenchik; Theiler, James

    2015-05-01

    For many target and anomaly detection algorithms, a key step is the estimation of a centroid (relatively easy) and a covariance matrix (somewhat harder) that characterize the background clutter. For a background that can be modeled as a multivariate Gaussian, the centroid and covariance lead to an explicit probability density function that can be used in likelihood ratio tests for optimal detection statistics. But ellipsoidal contours can characterize a much larger class of multivariate density functions, and the ellipsoids that characterize the outer periphery of the distribution are most appropriate for detection in the low false alarm rate regime. Traditionally the sample mean and sample covariance are used to estimate ellipsoid location and shape, but these quantities are confounded both by large lever-arm outliers and by non-Gaussian distributions within the ellipsoid of interest. This paper compares a variety of centroid and covariance estimation schemes with the aim of characterizing the periphery of the background distribution. In particular, we consider a robust variant of the Khachiyan algorithm for the minimum-volume enclosing ellipsoid. The performance of these different approaches is evaluated on multispectral and hyperspectral remote sensing imagery using coverage plots of ellipsoid volume versus false alarm rate.
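
    The minimum-volume enclosing ellipsoid mentioned above is classically computed with Khachiyan's algorithm; a compact (non-robust) sketch follows. The tolerance and test points are assumptions, and the robust variant discussed in the paper would additionally down-weight outliers.

```python
import numpy as np

def mvee(points, tol=1e-4):
    """Khachiyan's algorithm for the minimum-volume enclosing ellipsoid.
    Returns center c and matrix A with (x-c)^T A (x-c) <= 1 on the boundary."""
    n, d = points.shape
    Q = np.column_stack([points, np.ones(n)]).T        # (d+1, n) lifted points
    u = np.full(n, 1.0 / n)                            # initial uniform weights
    err = tol + 1.0
    while err > tol:
        X = Q @ (u[:, None] * Q.T)                     # (d+1, d+1) moment matrix
        M = np.einsum('in,ij,jn->n', Q, np.linalg.inv(X), Q)
        j = int(np.argmax(M))                          # most "outlying" point
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = u @ points
    A = np.linalg.inv(points.T @ (u[:, None] * points) - np.outer(c, c)) / d
    return c, A

pts = np.random.default_rng(7).normal(size=(200, 2))
c, A = mvee(pts)
print("center:", c.round(3))
```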

  7. [Estimation of the number of minimum salaries attributed to professions as a function of their prestige].

    PubMed

    Sousa, F A; da Silva, J A

    2000-04-01

    The purpose of this study was to verify the relationship between professional prestige scaled through magnitude estimation and professional prestige scaled through estimation of the number of minimum salaries attributed to professions as a function of their prestige in society. Results showed that: (1) the relationship between magnitude estimates and estimates of the number of minimum salaries attributed to professions as a function of their prestige is characterized by a power function with an exponent lower than 1.0; and (2) the orderings of prestige of the professions resulting from different experiments involving different samples of subjects are highly concordant (W = 0.85; p < 0.001) when number is used as the response modality (magnitude estimation of minimum salaries).
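
    The reported power function is the usual Stevens-style relation ψ = k·φ^β, conventionally fitted by linear regression in log-log coordinates, as sketched below on synthetic data (the exponent and noise are assumptions).

```python
import numpy as np

# psi = k * phi**beta  =>  log psi = log k + beta * log phi
rng = np.random.default_rng(8)
prestige = np.linspace(1.0, 100.0, 30)                 # prestige magnitude estimates
salaries = 3.0 * prestige ** 0.7 * rng.lognormal(0.0, 0.1, 30)
beta, log_k = np.polyfit(np.log(prestige), np.log(salaries), 1)
print(f"exponent beta = {beta:.2f} (k = {np.exp(log_k):.2f})")
```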

  8. Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates

    PubMed Central

    Curtis, Caroline A.; Bradley, Bethany A.

    2016-01-01

    Background: Although increasingly sophisticated environmental measures are being applied to species distribution models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. Methods: We compiled expert-knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the ‘plant characteristics’ information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity Information Facility (GBIF) and corresponding climate from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Results: Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. Forbs and grasses tended to have larger ΔCN for maximum precipitation and minimum temperature, while grasses and trees had larger ΔCN for minimum precipitation. Conclusion: Our results show that climatic tolerances derived from distribution data are consistently broader than USDA PLANTS expert estimates and likely provide more robust estimates of climatic tolerance, especially for widespread forbs and grasses. These findings suggest that widely available expert-based climatic tolerance estimates underrepresent species’ fundamental niche and likely fail to capture the realized niche. PMID:27870859

  9. Adaptive Quadrature Detection for Multicarrier Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Gyongyosi, Laszlo; Imre, Sandor

    2015-03-01

    We propose adaptive quadrature detection for multicarrier continuous-variable quantum key distribution (CVQKD). A multicarrier CVQKD scheme uses Gaussian subcarrier continuous variables to convey information and Gaussian sub-channels for transmission. The proposed multicarrier detection scheme dynamically adapts to the sub-channel conditions using statistics provided by a sub-channel estimation procedure. The sub-channel estimation phase determines the transmittance coefficients of the sub-channels, and this information is then used in the adaptive quadrature decoding process. We define a technique called subcarrier spreading to estimate the transmittance conditions of the sub-channels with a theoretical error minimum in the presence of Gaussian noise. We introduce the terms of single and collective adaptive quadrature detection, and we extend the results to a multiuser multicarrier CVQKD scenario. We prove the achievable error probabilities and signal-to-noise ratios, and quantify the attributes of the framework. The adaptive detection scheme makes it possible to utilize the extra resources of multicarrier CVQKD and to maximize the amount of transmittable information. This work was partially supported by the GOP-1.1.1-11-2012-0092 (Secure quantum key distribution between two units on optical fiber network) project sponsored by the EU and European Structural Fund, and by the COST Action MP1006.

  10. Optimal use of land surface temperature data to detect changes in tropical forest cover

    NASA Astrophysics Data System (ADS)

    Van Leeuwen, T. T.; Frank, A. J.; Jin, Y.; Smyth, P.; Goulden, M.; van der Werf, G.; Randerson, J. T.

    2011-12-01

    Rapid and accurate assessment of global forest cover change is needed to focus conservation efforts and to better understand how deforestation is contributing to the build up of atmospheric CO2. Here we examined different ways to use remotely sensed land surface temperature (LST) to detect changes in tropical forest cover. In our analysis we used monthly 0.05×0.05 degree Terra MODerate Resolution Imaging Spectroradiometer (MODIS) observations of LST and PRODES (Program for the Estimation of Deforestation in the Brazilian Amazon) estimates of forest cover change. We also compared MODIS LST observations with an independent estimate of forest cover loss derived from MODIS and Landsat observations. Our study domain of approximately 10×10 degree included most of the Brazilian state of Mato Grosso. For optimal use of LST data to detect changes in tropical forest cover in our study area, we found that using data sampled during the end of the dry season (~1-2 months after minimum monthly precipitation) had the greatest predictive skill. During this part of the year, precipitation was low, surface humidity was at a minimum, and the difference between day and night LST was the largest. We used this information to develop a simple temporal sampling algorithm appropriate for use in pan-tropical deforestation classifiers. Combined with the normalized difference vegetation index (NDVI), a logistic regression model using day-night LST did moderately well at predicting forest cover change. Annual changes in day-night LST difference decreased during 2006-2009 relative to 2001-2005 in many regions within the Amazon, providing independent confirmation of lower deforestation levels during the latter part of this decade as reported by PRODES. The use of day-night LST differences may be particularly valuable for use with satellites that do not have spectral bands that allow for the estimation of NDVI or other vegetation indices.

  11. Finding a fox: an evaluation of survey methods to estimate abundance of a small desert carnivore.

    PubMed

    Dempsey, Steven J; Gese, Eric M; Kluever, Bryan M

    2014-01-01

    The status of many carnivore species is a growing concern for wildlife agencies, conservation organizations, and the general public. Historically, kit foxes (Vulpes macrotis) were classified as abundant and distributed in the desert and semi-arid regions of southwestern North America, but they are now considered rare throughout their range. Survey methods have been evaluated for kit foxes, but often in populations where abundance is high, and there is little consensus on which technique is best for monitoring abundance. We conducted a 2-year study to evaluate four survey methods (scat deposition surveys, scent station surveys, spotlight surveys, and trapping) for detecting kit foxes and measuring fox abundance. We determined the probability of detection for each method, and examined the correlation between the relative abundance estimated by each survey method and the known minimum kit fox abundance determined from radio-collared animals. All surveys were conducted on 15 5-km transects during the 3 biological seasons of the kit fox. Scat deposition surveys had both the highest detection probabilities (p = 0.88) and were most closely related to minimum known fox abundance (r2 = 0.50, P = 0.001). The next best method for kit fox detection was the scent station survey (p = 0.73), which had the second highest correlation to fox abundance (r2 = 0.46, P<0.001). For detecting kit foxes in a low-density population we suggest using scat deposition transects during the breeding season. Scat deposition surveys have low costs, resilience to weather, low labor requirements, and pose no risk to the study animals. The breeding season was ideal for monitoring kit fox population size, as detections consisted of the resident population and had the highest detection probabilities. Using appropriate monitoring techniques will be critical for future conservation actions for this rare desert carnivore.

  12. Finding a Fox: An Evaluation of Survey Methods to Estimate Abundance of a Small Desert Carnivore

    PubMed Central

    Dempsey, Steven J.; Gese, Eric M.; Kluever, Bryan M.

    2014-01-01

    The status of many carnivore species is a growing concern for wildlife agencies, conservation organizations, and the general public. Historically, kit foxes (Vulpes macrotis) were classified as abundant and distributed in the desert and semi-arid regions of southwestern North America, but they are now considered rare throughout their range. Survey methods have been evaluated for kit foxes, but often in populations where abundance is high, and there is little consensus on which technique is best for monitoring abundance. We conducted a 2-year study to evaluate four survey methods (scat deposition surveys, scent station surveys, spotlight surveys, and trapping) for detecting kit foxes and measuring fox abundance. We determined the probability of detection for each method, and examined the correlation between the relative abundance estimated by each survey method and the known minimum kit fox abundance determined from radio-collared animals. All surveys were conducted on 15 5-km transects during the 3 biological seasons of the kit fox. Scat deposition surveys had both the highest detection probabilities (p = 0.88) and were most closely related to minimum known fox abundance (r2 = 0.50, P = 0.001). The next best method for kit fox detection was the scent station survey (p = 0.73), which had the second highest correlation to fox abundance (r2 = 0.46, P<0.001). For detecting kit foxes in a low-density population we suggest using scat deposition transects during the breeding season. Scat deposition surveys have low costs, resilience to weather, low labor requirements, and pose no risk to the study animals. The breeding season was ideal for monitoring kit fox population size, as detections consisted of the resident population and had the highest detection probabilities. Using appropriate monitoring techniques will be critical for future conservation actions for this rare desert carnivore. PMID:25148102

  13. Integrated Fusion, Performance Prediction, and Sensor Management for Automatic Target Exploitation

    DTIC Science & Technology

    2007-05-30

    with large region of attraction about the true minimum. The physical optics models provide features for high confidence identification of stationary... the detection test are used to estimate 3D object scattering; multiple images can be noncoherently combined to reconstruct a more complete object...

  14. Diallel analysis for sex-linked and maternal effects.

    PubMed

    Zhu, J; Weir, B S

    1996-01-01

    Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
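
    A minimal sketch of the delete-one jackknife procedure referenced above for sampling variances: recompute the estimator with each observation left out and combine the leave-one-out spread into a standard error, which can then feed a t-test. The data and estimator here are placeholders, not the mice or silkworm examples.

```python
import numpy as np

def jackknife_se(data, estimator):
    """Delete-one jackknife standard error of an arbitrary estimator."""
    n = len(data)
    theta = np.array([estimator(np.delete(data, i)) for i in range(n)])
    var = (n - 1) / n * np.sum((theta - theta.mean()) ** 2)
    return np.sqrt(var)

x = np.random.default_rng(9).normal(5.0, 2.0, 40)
se = jackknife_se(x, np.var)
print(f"jackknife SE of the variance estimate: {se:.3f}")
# A t-statistic for testing significance of variation is then estimate / SE.
```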

  15. Novel high-resolution computed tomography-based radiomic classifier for screen-identified pulmonary nodules in the National Lung Screening Trial.

    PubMed

    Peikert, Tobias; Duan, Fenghai; Rajagopalan, Srinivasan; Karwoski, Ronald A; Clay, Ryan; Robb, Richard A; Qin, Ziling; Sicks, JoRean; Bartholmai, Brian J; Maldonado, Fabien

    2018-01-01

    Optimization of the clinical management of screen-detected lung nodules is needed to avoid unnecessary diagnostic interventions. Herein we demonstrate the potential value of a novel radiomics-based approach for the classification of screen-detected indeterminate nodules. Independent quantitative variables assessing various radiologic nodule features, such as sphericity, flatness, elongation, spiculation, lobulation, and curvature, were developed from the NLST dataset using 726 indeterminate nodules (all ≥7 mm; benign, n = 318; malignant, n = 408). Multivariate analysis was performed using the least absolute shrinkage and selection operator (LASSO) method for variable selection and regularization, in order to enhance the prediction accuracy and interpretability of the multivariate model. The bootstrapping method was then applied for internal validation, and the optimism-corrected AUC was reported for the final model. Eight of the 57 quantitative radiologic features originally considered were selected by LASSO multivariate modeling. These 8 features include variables capturing Location: vertical location (Offset carina centroid z); Size: volume estimate (Minimum enclosing brick); Shape: flatness; Density: texture analysis (Score Indicative of Lesion/Lung Aggression/Abnormality (SILA) texture); and surface characteristics: surface complexity (Maximum shape index and Average shape index) and estimates of surface curvature (Average positive mean curvature and Minimum mean curvature), all with P<0.01. The optimism-corrected AUC for these 8 features is 0.939. Our novel radiomic LDCT-based approach to indeterminate screen-detected nodule characterization appears extremely promising; however, independent external validation is needed.
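
    A minimal sketch of the bootstrap optimism correction reported above, applied to an L1-penalized logistic model standing in for the paper's LASSO pipeline; the penalty setting, feature matrix, and labels are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, n_boot=200, seed=10):
    """Bootstrap optimism correction for the apparent AUC of an
    L1-penalized logistic model (C=0.5 is a hypothetical setting)."""
    rng = np.random.default_rng(seed)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    apparent = roc_auc_score(y, model.fit(X, y).predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))            # bootstrap resample
        m = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        m.fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)             # per-replicate optimism
    return apparent - float(np.mean(optimism))

X = np.random.default_rng(11).normal(size=(300, 8))      # 8 radiomic-style features
y = (X[:, 0] + 0.5 * X[:, 1] + np.random.default_rng(12).normal(0, 1, 300) > 0)
print(round(optimism_corrected_auc(X, y.astype(int), n_boot=50), 3))
```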

  16. Minimal Model of Prey Localization through the Lateral-Line System

    NASA Astrophysics Data System (ADS)

    Franosch, Jan-Moritz P.; Sobotka, Marion C.; Elepfandt, Andreas; van Hemmen, J. Leo

    2003-10-01

    The clawed frog Xenopus is an aquatic predator that catches prey at night by detecting the water movements caused by its prey. We present a general method, a “minimal model” based on a minimum-variance estimator, to explain prey detection through the frog's many lateral-line organs, even when several of them are defunct. We show how waveform reconstruction allows Xenopus' neuronal system to determine both the direction and the character of the prey, and even to distinguish two simultaneous wave sources. The results can be applied to many aquatic amphibians, fish, and reptiles such as crocodilians.
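
    The minimum-variance estimator underlying the “minimal model” is, in its simplest form, inverse-variance weighting across sensors, which naturally tolerates defunct organs (they are simply omitted from the inputs). The sketch below shows that estimator class on synthetic bearings; it is not the frog-specific wave-reconstruction model.

```python
import numpy as np

def fuse(estimates, variances):
    """Minimum-variance (inverse-variance weighted) combination of noisy
    scalar estimates; defunct sensors are just left out of the inputs."""
    w = 1.0 / np.asarray(variances)
    fused = np.sum(w * estimates) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Synthetic bearings (radians) from 5 organs, two of them much noisier
est = np.array([0.52, 0.48, 0.55, 0.70, 0.30])
var = np.array([0.01, 0.01, 0.01, 0.20, 0.20])
print(fuse(est, var))
```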

  17. The convective noise floor for the spectroscopic detection of low mass companions to solar type stars

    NASA Technical Reports Server (NTRS)

    Deming, D.; Espenak, F.; Jennings, D. E.; Brault, J. W.

    1986-01-01

    The threshold mass for the unambiguous spectroscopic detection of low-mass companions to solar-type stars is defined here by the condition that the maximum acceleration in the stellar radial velocity due to the Doppler reflex of the companion exceed the apparent acceleration produced by changes in convection. An apparent acceleration of 11 m/s/yr in integrated sunlight was measured using near-infrared Fourier transform spectroscopy. This drift in the apparent solar velocity is attributed to a lessening in the magnetic inhibition of granular convection as solar minimum approaches. The threshold mass for spectroscopic detection of companions to a one solar mass star is estimated to be below one Jupiter mass.

  18. Minimum detectable gas concentration performance evaluation method for gas leak infrared imaging detection systems.

    PubMed

    Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo

    2017-04-01

    Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluating the performance of gas leak infrared imaging detection systems because of their several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of the MDTD. We propose direct and equivalent calculation methods for the MDGC based on the MDTD measurement system. We built an experimental MDGC measurement system, whose results indicate that the MDGC model can describe the detection performance of a thermal imaging system for typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and the minimum resolvable gas concentration (MRGC) models effectively describe the "detection" and "spatial detail resolution" performance of thermal imaging systems for gas leaks, respectively, and constitute the main performance indicators of gas leak detection systems.
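
    The connection between MDTD and MDGC can be pictured with a simple thin-plume radiometric argument; the sketch below is an illustration under assumed physics, not the paper's derivation. A gas layer with effective in-band absorption coefficient alpha and concentration-pathlength CL, seen against a background at temperature contrast ΔT, produces an apparent contrast of roughly (1 − exp(−alpha·CL))·ΔT, and the minimum detectable concentration-pathlength is the CL at which this equals the MDTD.

```python
import numpy as np

def mdgc_cl(mdtd_K, delta_T_K, alpha_per_ppm_m):
    """Minimum detectable concentration-pathlength (ppm·m).

    Solves (1 - exp(-alpha * CL)) * delta_T = MDTD for CL, a thin-plume
    approximation; alpha is an assumed effective absorption coefficient.
    """
    ratio = mdtd_K / delta_T_K
    if ratio >= 1.0:
        return np.inf          # contrast too small: never detectable
    return -np.log(1.0 - ratio) / alpha_per_ppm_m

# e.g. a 50 mK MDTD, 10 K gas-to-background contrast, alpha = 2e-4 (ppm·m)^-1
print(mdgc_cl(0.05, 10.0, 2e-4))   # ~25 ppm·m
```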

  19. Soil C and N minimum detectable changes and treatment differences in a multi-treatment forest experiment.

    Treesearch

    P.S. Homann; B.T. Bormann; J.R. Boyle; R.L. Darbyshire; R. Bigley

    2008-01-01

    Detecting changes in forest soil C and N is vital to the study of global budgets and long-term ecosystem productivity. Identifying differences among land-use practices may guide future management. Our objective was to determine the relation of minimum detectable changes (MDCs) and minimum detectable differences between treatments (MDDs) to soil C and N variability at...

  20. A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair

    We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high-order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.

  1. Unbalance detection in rotor systems with active bearings using self-sensing piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    Ambur, Ramakrishnan; Rinderknecht, Stephan

    2018-03-01

    Machines developed today are highly automated due to the increased use of mechatronic systems. To ensure their reliable operation, fault detection and isolation (FDI) is an important feature, along with better control. This research work aims to achieve and integrate both of these functions with a minimum number of components in a mechatronic system. This article investigates a rotating machine with active bearings equipped with piezoelectric actuators. There is an inherent coupling between their electrical and mechanical properties, because of which they can also be used as sensors. Mechanical deflection can be reconstructed from these self-sensing actuators using measured voltage and current signals. These virtual sensor signals are utilised to detect unbalance in a rotor system. Parameters of the unbalance, such as its magnitude and phase, are detected by a parametric estimation method in the frequency domain. The unbalance location is identified using a hypothesis of fault localization. Robustness of the estimates against outliers in the measurements is improved using a weighted least squares method. Unbalances are detected both in simulation using a model of the system and on a real test bench. Experiments are performed in the stationary as well as the transient case. As a further step, unbalances are estimated during simultaneous actuation of the actuators in closed loop with an adaptive algorithm for vibration minimisation. This strategy could be used in systems which aim for both fault detection and control action.
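
    The frequency-domain estimation step amounts to fitting a once-per-revolution sinusoid to the reconstructed deflection signal. The sketch below is a generic illustration, not the authors' estimator: it recovers a synchronous component's magnitude and phase by (optionally weighted) least squares at a known rotation frequency, where down-weighting suspect samples is the idea behind the robustness improvement mentioned above.

```python
import numpy as np

def estimate_unbalance(t, y, f_rot, weights=None):
    """Fit y(t) ≈ a·cos(2πf t) + b·sin(2πf t) + c by (weighted) least squares.

    Returns (magnitude, phase in rad) of the synchronous (1x) component.
    """
    w = np.ones_like(t) if weights is None else np.asarray(weights, float)
    A = np.column_stack([np.cos(2*np.pi*f_rot*t),
                         np.sin(2*np.pi*f_rot*t),
                         np.ones_like(t)])
    sw = np.sqrt(w)                                  # standard WLS row scaling
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    a, b, _ = coef
    return np.hypot(a, b), np.arctan2(-b, a)

# synthetic check: 25 Hz rotor, magnitude 2.0, phase 0.5 rad
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
y = 2.0*np.cos(2*np.pi*25*t + 0.5) + 0.1*rng.standard_normal(t.size)
print(estimate_unbalance(t, y, 25.0))
```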

  2. Results and evaluation of a survey to estimate Pacific walrus population size, 2006

    USGS Publications Warehouse

    Speckman, Suzann G.; Chernook, Vladimir I.; Burn, Douglas M.; Udevitz, Mark S.; Kochnev, Anatoly A.; Vasilev, Alexander; Jay, Chadwick V.; Lisovsky, Alexander; Fischbach, Anthony S.; Benter, R. Bradley

    2011-01-01

    In spring 2006, we conducted a collaborative U.S.-Russia survey to estimate abundance of the Pacific walrus (Odobenus rosmarus divergens). The Bering Sea was partitioned into survey blocks, and a systematic random sample of transects within a subset of the blocks was surveyed with airborne thermal scanners using standard strip-transect methodology. Counts of walruses in photographed groups were used to model the relation between thermal signatures and the number of walruses in groups, which was used to estimate the number of walruses in groups that were detected by the scanner but not photographed. We also modeled the probability of thermally detecting various-sized walrus groups to estimate the number of walruses in groups undetected by the scanner. We used data from radio-tagged walruses to adjust on-ice estimates to account for walruses in the water during the survey. The estimated area of available habitat averaged 668,000 km2 and the area of surveyed blocks was 318,204 km2. The number of Pacific walruses within the surveyed area was estimated at 129,000 with 95% confidence limits of 55,000 to 507,000 individuals. This value can be used by managers as a minimum estimate of the total population size.

  3. Compensating for estimation smoothing in kriging

    USGS Publications Warehouse

    Olea, R.A.; Pawlowsky, Vera

    1996-01-01

    Smoothing is a characteristic inherent to all minimum mean-square-error spatial estimators such as kriging. Cross-validation can be used to detect and model such smoothing. Inversion of the model produces a new estimator, compensated kriging. A numerical comparison based on an exhaustive permeability sampling of a 4-ft2 slab of Berea Sandstone shows that the estimation surface generated by compensated kriging has properties intermediate between those generated by ordinary kriging and stochastic realizations resulting from simulated annealing and sequential Gaussian simulation. The frequency distribution is well reproduced by the compensated kriging surface, which also approximates the experimental semivariogram well - better than ordinary kriging, but not as well as stochastic realizations. Compensated kriging produces surfaces that are more accurate than stochastic realizations, but not as accurate as ordinary kriging. © 1996 International Association for Mathematical Geology.

  4. Functional Brain Networks: Does the Choice of Dependency Estimator and Binarization Method Matter?

    NASA Astrophysics Data System (ADS)

    Jalili, Mahdi

    2016-07-01

    The human brain can be modelled as a complex networked structure with brain regions as individual nodes and their anatomical/functional links as edges. Functional brain networks are constructed by first extracting weighted connectivity matrices, and then binarizing them to minimize the noise level. Different methods have been used to estimate the dependency values between the nodes and to obtain a binary network from a weighted connectivity matrix. In this work we study topological properties of EEG-based functional networks in Alzheimer’s Disease (AD). To estimate the connectivity strength between two time series, we use Pearson correlation, coherence, phase order parameter and synchronization likelihood. In order to binarize the weighted connectivity matrices, we use Minimum Spanning Tree (MST), Minimum Connected Component (MCC), uniform threshold and density-preserving methods. We find that the detected AD-related abnormalities highly depend on the methods used for dependency estimation and binarization. Topological properties of networks constructed using coherence method and MCC binarization show more significant differences between AD and healthy subjects than the other methods. These results might explain contradictory results reported in the literature for network properties specific to AD symptoms. The analysis method should be seriously taken into account in the interpretation of network-based analysis of brain signals.
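
    As a concrete example of one of the binarization routes above, the following sketch (using networkx; the strength matrix W and the strength-to-distance mapping are illustrative assumptions, not the paper's code) keeps only the minimum spanning tree of a weighted connectivity matrix as the binary network.

```python
import numpy as np
import networkx as nx

def mst_binarize(W):
    """Binarize a weighted connectivity matrix via its minimum spanning tree.

    W: symmetric (n x n) matrix of connection strengths in [0, 1].
    Returns a 0/1 adjacency matrix keeping only the MST edges.
    """
    n = W.shape[0]
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            # stronger connection -> shorter distance
            G.add_edge(i, j, weight=1.0 - W[i, j])
    mst = nx.minimum_spanning_tree(G)
    A = np.zeros_like(W, dtype=int)
    for i, j in mst.edges:
        A[i, j] = A[j, i] = 1
    return A

# e.g. a random symmetric strength matrix:
rng = np.random.default_rng(1)
W = rng.random((6, 6)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
print(mst_binarize(W).sum() // 2)   # an MST on 6 nodes keeps 5 edges
```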

  5. Pairing call-response surveys and distance sampling for a mammalian carnivore

    USGS Publications Warehouse

    Hansen, Sara J. K.; Frair, Jacqueline L.; Underwood, Harold B.; Gibbs, James P.

    2015-01-01

    Density estimates accounting for differential animal detectability are difficult to acquire for wide-ranging and elusive species such as mammalian carnivores. Pairing distance sampling with call-response surveys may provide an efficient means of tracking changes in populations of coyotes (Canis latrans), a species of particular interest in the eastern United States. Blind field trials in rural New York State indicated 119-m linear error for triangulated coyote calls, and a 1.8-km distance threshold for call detectability, which was sufficient to estimate a detection function with precision using distance sampling. We conducted statewide road-based surveys with sampling locations spaced ≥6 km apart from June to August 2010. Each detected call (be it a single or group) counted as a single object, representing 1 territorial pair, because of uncertainty in the number of vocalizing animals. From 524 survey points and 75 detections, we estimated the probability of detecting a calling coyote to be 0.17 ± 0.02 SE, yielding a detection-corrected index of 0.75 pairs/10 km2 (95% CI: 0.52–1.1, 18.5% CV) for a minimum of 8,133 pairs across rural New York State. Importantly, we consider this an index rather than true estimate of abundance given the unknown probability of coyote availability for detection during our surveys. Even so, pairing distance sampling with call-response surveys provided a novel, efficient, and noninvasive means of monitoring populations of wide-ranging and elusive, albeit reliably vocal, mammalian carnivores. Our approach offers an effective new means of tracking species like coyotes, one that is readily extendable to other species and geographic extents, provided key assumptions of distance sampling are met.

  6. Anti-D immunoglobulin preparations: the stability of anti-D concentrations and the error of the assay of anti-D.

    PubMed

    Hughes-Jones, N C; Hunt, V A; Maycock, W D; Wesley, E D; Vallet, L

    1978-01-01

    An analysis of the assay of 28 preparations of anti-D immunoglobulin using a radioisotope method, carried out at 6-monthly intervals for 2-4.5 years, showed an average fall in anti-D concentration of 10.6% each year, with 99% confidence limits of 6.8-14.7%. The fall in anti-D concentration after storage at 37 degrees C for 1 month was less than 8%, the minimum change that could be detected. No significant changes in the physical characteristics of the immunoglobulin were detected. The error of a single estimate of anti-D by the radioisotope method (125I-labelled anti-IgG) used here was calculated to be such that the true value probably (p = 0.95) lay between 66 and 150% of the estimated value.

  7. Constrained signal reconstruction from wavelet transform coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1991-12-31

    A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT. 20 refs.
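
    The flavour of such constrained reconstructions can be conveyed with a simple projection-onto-convex-sets loop; this is not Cole's representation theory, only an illustration of alternating between the two constraint sets: consistency with the observed DWT coefficients, and a priori amplitude bounds. The sketch assumes an orthonormal Haar DWT in PyWavelets' 'periodization' mode, so a length-N signal (N a power of two) has exactly N coefficients.

```python
import numpy as np
import pywt

def pocs_reconstruct(coeff_vec, observed_mask, lo, hi, n_iter=50):
    """Alternating projections between DWT consistency and amplitude bounds.

    coeff_vec:     length-N coefficient vector (unobserved entries arbitrary)
    observed_mask: boolean length-N vector, True where a coefficient was measured
    lo, hi:        a priori lower/upper bounds on the signal samples
    """
    n = coeff_vec.size
    # fix the coefficient layout once, from a zero signal of the right length
    _, slices = pywt.coeffs_to_array(
        pywt.wavedec(np.zeros(n), "haar", mode="periodization"))

    x = np.zeros(n)
    for _ in range(n_iter):
        # project onto the set consistent with the observed coefficients
        c, _ = pywt.coeffs_to_array(
            pywt.wavedec(x, "haar", mode="periodization"))
        c[observed_mask] = coeff_vec[observed_mask]
        x = pywt.waverec(
            pywt.array_to_coeffs(c, slices, output_format="wavedec"),
            "haar", mode="periodization")
        # project onto the a priori bound constraints
        x = np.clip(x, lo, hi)
    return x
```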

  8. Impact of number of repeated scans on model observer performance for a low-contrast detection task in computed tomography.

    PubMed

    Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia

    2016-04-01

    Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under the receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.
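
    The convergence question, how many repeated scans give a stable Az, can be illustrated without the full CHO pipeline by substituting synthetic observer scores. The sketch below (illustrative score distributions, not the study's code) tracks how the AUC estimated from subsets of scans approaches the value from all 100.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_scans, d_prime = 100, 1.0        # number of repeats, assumed detectability

# one decision-variable score per scan, without and with the low-contrast object
absent  = rng.normal(0.0, 1.0, n_scans)
present = rng.normal(d_prime, 1.0, n_scans)

auc_ref = roc_auc_score(np.r_[np.zeros(n_scans), np.ones(n_scans)],
                        np.r_[absent, present])

for n in (10, 25, 50, 100):
    idx = rng.choice(n_scans, n, replace=False)
    auc_n = roc_auc_score(np.r_[np.zeros(n), np.ones(n)],
                          np.r_[absent[idx], present[idx]])
    print(f"n = {n:3d}: Az = {auc_n:.3f} (|error| = {abs(auc_n - auc_ref):.3f})")
```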

  10. Trends in annual minimum exposed snow and ice cover in High Mountain Asia from MODIS

    NASA Astrophysics Data System (ADS)

    Rittger, Karl; Brodzik, Mary J.; Painter, Thomas H.; Racoviteanu, Adina; Armstrong, Richard; Dozier, Jeff

    2016-04-01

    Though a relatively short record on climatological scales, data from the Moderate Resolution Imaging Spectroradiometer (MODIS) from 2000-2014 can be used to evaluate changes in the cryosphere and provide a robust baseline for future observations from space. We use the MODIS Snow Covered Area and Grain size (MODSCAG) algorithm, based on spectral mixture analysis, to estimate daily fractional snow and ice cover and the MODICE Persistent Ice (MODICE) algorithm to estimate the annual minimum snow and ice fraction (fSCA) for each year from 2000 to 2014 in High Mountain Asia. We have found that MODSCAG performs better than other algorithms, such as the Normalized Difference Snow Index (NDSI), at detecting snow. We use MODICE because it minimizes false positives (compared to maximum extents), for example when bright soils or clouds are incorrectly classified as snow, a common problem with optical satellite snow mapping. We analyze changes in area using the annual MODICE maps of minimum snow and ice cover for over 15,000 individual glaciers as defined by the Randolph Glacier Inventory (RGI) Version 5, focusing on the Amu Darya, Syr Darya, Upper Indus, Ganges, and Brahmaputra River basins. For each glacier with an area of at least 1 km2 as defined by the RGI, we sum the total minimum snow and ice covered area for each year from 2000 to 2014 and estimate the trends in area loss or gain. We find the largest loss in annual minimum snow and ice extent for 2000-2014 in the Brahmaputra and Ganges, with 57% and 40%, respectively, of analyzed glaciers showing significant losses (p-value < 0.05). In the Upper Indus River basin, we see both gains and losses in minimum snow and ice extent, but more glaciers with losses than gains. Our analysis shows that a smaller proportion of glaciers in the Amu Darya and Syr Darya are experiencing significant changes in minimum snow and ice extent (3.5% and 12.2%), possibly because more of the glaciers in this region than in the Indus, Ganges, and Brahmaputra are smaller than 1 km2, making analysis from MODIS (pixel area ~0.25 km2) difficult. Overall, we see 23% of the glaciers in the 5 river basins with significant trends (in either direction). We relate these changes in area to topography and climate to understand the driving processes behind them. In addition to the annual minimum snow and ice cover, the MODICE algorithm also provides the date of minimum fSCA for each pixel. To determine whether the surface was snow or ice, we use the date of minimum fSCA from MODICE to index daily maps of snow on ice (SOI) or exposed glacier ice (EGI) and systematically derive an equilibrium line altitude (ELA) for each year from 2000-2014. We test this new algorithm in the Upper Indus basin, deriving annual ELAs that range from 5350 m to 5450 m, slightly higher than published values of 5200 m for this region.

  11. Estimation of Leakage Potential of Selected Sites in Interstate and Tri-State Canals Using Geostatistical Analysis of Selected Capacitively Coupled Resistivity Profiles, Western Nebraska, 2004

    USGS Publications Warehouse

    Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.

    2009-01-01

    With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for a groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated as the simple vertical mean of inverse-model resistivity values; because layer thickness increases geometrically with depth, the resulting mean-resistivity values were biased towards the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method, which computes a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model. The minimum-unadjusted method considers the effects of homogeneous confining units. The minimum-adjusted method is also developed, to incorporate the effect of local lithologic heterogeneity on water transmission. Seven sites with differing geologic contexts were selected following review of the capacitively coupled resistivity data collected in 2004. A reevaluation of these sites using the mean, minimum-unadjusted, and minimum-adjusted methods was performed to compare the different approaches for estimating leakage potential. Five of the seven sites contained underlying confining units, whose effect the minimum-unadjusted and minimum-adjusted methods accounted for. Estimates of overall leakage potential were lower for the minimum-unadjusted and minimum-adjusted methods than those estimated by the mean method. For most sites, the local heterogeneity adjustment procedure of the minimum-adjusted method resulted in slightly larger overall leakage-potential estimates. In contrast to the mean method, the two minimum-based methods allowed the least permeable areas to control the overall vertical permeability of the subsurface. The minimum-adjusted method refined leakage-potential estimation by additionally including local lithologic heterogeneity effects.
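
    The contrast between the mean and minimum-based methods can be made concrete with a toy resistivity column; the numbers and the resistivity-to-leakage mapping below are hypothetical, for illustration only. A thin conductive (clay-like) confining layer barely moves the vertical mean but controls the minimum, so the two methods rank leakage potential very differently.

```python
import numpy as np

# inverse-model resistivity (ohm-m) down one vertical column beneath the canal;
# the 5 ohm-m layer plays the role of a thin clay-like confining unit
column = np.array([60.0, 55.0, 5.0, 70.0, 80.0])

mean_rho    = column.mean()   # barely notices the confining unit
minimum_rho = column.min()    # lets the confining unit control the column

# hypothetical monotone mapping: lower resistivity -> finer sediment -> less leakage
leakage_potential = lambda rho: rho / 100.0
print(leakage_potential(mean_rho), leakage_potential(minimum_rho))   # 0.54 vs 0.05
```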

  12. Generalizing boundaries for triangular designs, and efficacy estimation at extended follow-ups.

    PubMed

    Allison, Annabel; Edwards, Tansy; Omollo, Raymond; Alves, Fabiana; Magirr, Dominic; E Alexander, Neal D

    2015-11-16

    Visceral leishmaniasis (VL) is a parasitic disease transmitted by sandflies and is fatal if left untreated. Phase II trials of new treatment regimens for VL are primarily carried out to evaluate safety and efficacy, while pharmacokinetic data are also important to inform future combination treatment regimens. The efficacy of VL treatments is evaluated at two time points: initial cure, when treatment is completed, and definitive cure, commonly 6 months post end of treatment, to allow for slow response to treatment and detection of relapses. This paper investigates a generalization of the triangular design to impose a minimum sample size for pharmacokinetic or other analyses, and methods to estimate efficacy at extended follow-up that account for the sequential design and for changes in cure status during extended follow-up. We provide R functions that generalize the triangular design to impose a minimum sample size before allowing stopping for efficacy. For estimation of efficacy at a second, extended, follow-up time, the performance of a shrinkage estimator (SHE), a probability tree estimator (PTE) and the maximum likelihood estimator (MLE) was assessed by simulation. The SHE and PTE are viable approaches to estimate efficacy at extended follow-up, although the SHE performed better than the PTE: the bias and root mean square error were lower and coverage probabilities higher. Generalization of the triangular design is simple to implement for adaptations to meet requirements for pharmacokinetic analyses. Using the simple MLE approach to estimate efficacy at extended follow-up will lead to biased results, generally over-estimating treatment success. The SHE is recommended in trials of two or more treatments. The PTE is an acceptable alternative for one-arm trials or where use of the SHE is not possible due to computational complexity. NCT01067443, February 2010.

  13. A 2D eye gaze estimation system with low-resolution webcam images

    NASA Astrophysics Data System (ADS)

    Ince, Ibrahim Furkan; Kim, Jin Woo

    2011-12-01

    In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose: one for eyeball detection with a stable approximate pupil center, and the other for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for deciding stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right and left eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass center of the eyeball border vertices, which is used for initial deformable template alignment. DTBGE starts with this initial alignment and updates the template alignment with the resulting eye movements and eyeball size frame by frame. The horizontal and vertical deviations of eye movements, normalized by eyeball size, are treated as directly proportional to the deviations of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil center as a reference point for gaze estimation, which makes it more robust to corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is presented and experimental results are shown.

  14. AH Leo and the Blazhko Effect

    NASA Astrophysics Data System (ADS)

    Phillips, J.; Gay, P. L.

    2004-12-01

    We obtained 563 V-band observations of AH Leo between January 27 and May 12, 2004. All observations were obtained with a 12-inch Schmidt-Cassegrain located on the island of Saipan, in the Commonwealth of the Northern Mariana Islands. We show that AH Leo is an RRab-type RR Lyrae star with a minimum magnitude of V = 14.658, a maximum amplitude of 0.989 magnitudes and a minimum amplitude of perhaps just 0.4 magnitudes. Its primary period is 0.4662609 days. Our observations also confirm the presence of the Blazhko effect, which had previously been detected by Smith and Gay (private communication) in 1993 and 1994. We estimate the Blazhko period to be roughly 20 days; however, poor phase coverage at maximum light makes an exact determination impossible. We also note that the bump during minimum, which is common in many RR Lyraes, varied throughout the Blazhko cycle, with amplitudes between 0 and 0.15 magnitudes. We would like to thank Sarah Maddison and Swinburne Astronomy Online for supporting this project.

  15. 12 CFR Appendix M1 to Part 1026 - Repayment Disclosures

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... terms of a cardholder's account that will expire in a fixed period of time, as set forth by the card... estimates. (1) Minimum payment formulas. When calculating the minimum payment repayment estimate, card... calculate the minimum payment amount for special purchases, such as a “club plan purchase.” Also, assume...

  16. 12 CFR Appendix M1 to Part 1026 - Repayment Disclosures

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... terms of a cardholder's account that will expire in a fixed period of time, as set forth by the card... estimates. (1) Minimum payment formulas. When calculating the minimum payment repayment estimate, card... calculate the minimum payment amount for special purchases, such as a “club plan purchase.” Also, assume...

  17. Estimating missing daily temperature extremes in Jaffna, Sri Lanka

    NASA Astrophysics Data System (ADS)

    Thevakaran, A.; Sonnadara, D. U. J.

    2018-04-01

    The accuracy of reconstructing missing daily temperature extremes at the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature than for daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values. For daily minimum temperature, the percentage is about 92. By calculating the standard deviation of the difference between estimated and observed values, we have shown that the error in estimating the daily maximum and minimum temperatures is ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, which is the nearest station to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available due to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
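
    The adopted method amounts to averaging the neighbours' standardized anomalies and rescaling by the target station's own climatology. A minimal sketch of that idea (variable names and values are illustrative, not the authors' code) is:

```python
import numpy as np

def estimate_missing(neighbor_temps, neighbor_means, neighbor_stds,
                     target_mean, target_std):
    """Estimate a missing daily temperature at the target station.

    neighbor_temps: same-day temperatures at the reference stations
    *_means, *_stds: station climatologies for that calendar day
    """
    z = (np.asarray(neighbor_temps) - np.asarray(neighbor_means)) \
        / np.asarray(neighbor_stds)        # standard departures
    return target_mean + target_std * z.mean()

# e.g. four neighbours (Mannar, Anuradhapura, Puttalam, Trincomalee):
print(estimate_missing([33.1, 34.0, 33.5, 32.8],
                       [32.0, 33.2, 32.9, 32.1],
                       [1.1, 1.2, 1.0, 1.3],
                       31.5, 1.2))          # ≈ 32.3 °C
```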

  18. Pseudorange Measurement Method Based on AIS Signals.

    PubMed

    Zhang, Jingbo; Zhang, Shufang; Wang, Jinpeng

    2017-05-22

    In order to use the existing automatic identification system (AIS) to provide additional navigation and positioning services, a complete pseudorange measurement solution is presented in this paper. Through mathematical analysis of the AIS signal, the bit-0-phases in the digital sequences were determined as the timestamps. A Monte Carlo simulation was carried out to compare the accuracy of zero-crossing and differential-peak detection, two timestamp detection methods, in the additive white Gaussian noise (AWGN) channel. Considering the low-speed and low-dynamic motion characteristics of ships, an optimal estimation method based on the minimum mean square error is proposed to improve detection accuracy. Furthermore, the α difference filter algorithm was used to fuse the optimal estimation results of the two detection methods. The results show that the algorithm can greatly improve the accuracy of pseudorange estimation under low signal-to-noise ratio (SNR) conditions. In order to verify the effectiveness of the scheme, prototypes containing the measurement scheme were developed and field tests in Xinghai Bay of Dalian (China) were performed. The test results show that the pseudorange measurement accuracy was better than 28 m (σ) without any modification of the existing AIS system.
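
    Zero-crossing timestamping, one of the two detection methods compared above, reduces to locating a sign change and interpolating between samples. The sketch below is a generic illustration, not the paper's receiver code; the waveform and sampling rate are made up for the example.

```python
import numpy as np

def first_rising_zero_crossing(x, fs):
    """Sub-sample time (s) of the first - to + zero crossing of x sampled at fs."""
    s = np.signbit(x)
    idx = np.where(s[:-1] & ~s[1:])[0]          # sample before each rising crossing
    if idx.size == 0:
        return None
    i = idx[0]
    frac = -x[i] / (x[i + 1] - x[i])            # linear interpolation
    return (i + frac) / fs

# e.g. a noisy 9.6 kHz tone sampled at 192 kHz, crossing zero at t = 1e-5 s:
rng = np.random.default_rng(0)
fs, f = 192e3, 9.6e3
t = np.arange(200) / fs
x = np.sin(2*np.pi*f*(t - 1e-5)) + 0.01*rng.standard_normal(t.size)
print(first_rising_zero_crossing(x, fs))        # ≈ 1e-5
```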

  20. Activation product analysis in a mixed sample containing both fission and neutron activation products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrison, Samuel S.; Clark, Sue B.; Eggemeyer, Tere A.

    Activation analysis of gold (Au) is used to estimate neutron fluence resulting from a criticality event; however, such analyses are complicated by simultaneous production of other gamma-emitting fission products. Confidence in neutron fluence estimates can be increased by quantifying additional activation products such as platinum (Pt), tantalum (Ta), and tungsten (W). This work describes a radiochemical separation procedure for the determination of these activation products. Anion exchange chromatography is used to separate anionic forms of these metals in a nitric acid matrix; thiourea is used to isolate the Au and Pt fraction, followed by removal of the Ta fraction using hydrogen peroxide. W, which is not retained on the first anion exchange column, is transposed to an HCl/HF matrix to enhance retention on a second anion exchange column and finally eluted using HNO3/HF. Chemical separations result in a reduction in the minimum detectable activity by a factor of 287, 207, 141, and 471 for 182Ta, 187W, 197Pt, and 198Au, respectively, with greater than 90% recovery for all elements. These results represent the highest recoveries and lowest minimum detectable activities for 182Ta, 187W, 197Pt, and 198Au from mixed fission-activation product samples to date, enabling considerable refinement in the measurement uncertainties for neutron fluences in highly complex sample matrices.

  1. Lung counting: comparison of detector performance with a four detector array that has either metal or carbon fibre end caps, and the effect on MDA calculation.

    PubMed

    Ahmed, Asm Sabbir; Hauck, Barry; Kramer, Gary H

    2012-08-01

    This study describes the performance of an array of high-purity germanium detectors designed with two different end cap materials: steel and carbon fibre. The advantages and disadvantages of using this detector type in the estimation of the minimum detectable activity (MDA) for different energy peaks of the isotope (152)Eu are illustrated. A Monte Carlo model was developed to study the detection efficiency of the detector array. A voxelised Lawrence Livermore torso phantom, equipped with lung, chest plates and overlay plates, was used to mimic a typical lung counting protocol with the array of detectors. The lung of the phantom simulated the volumetric source organ. A significantly low MDA was estimated for the 40 keV energy peak at a chest wall thickness of 6.64 cm.
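
    For context, MDA estimates of this kind commonly follow a Currie-style formulation, converting a detection limit in counts to activity through detection efficiency, counting time and gamma emission probability. The sketch below is a generic calculation with illustrative values, not the study's calibration.

```python
import math

def mda_bq(background_counts, efficiency, live_time_s, gamma_yield):
    """Currie-style minimum detectable activity (Bq).

    L_D ≈ 2.71 + 4.65 * sqrt(B) counts, converted to activity by
    dividing by (efficiency * live time * gamma emission probability).
    """
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * live_time_s * gamma_yield)

# e.g. 400 background counts, 0.5% efficiency, 1800 s count time,
# ~28.5% emission probability for the 121.8 keV line of Eu-152:
print(mda_bq(400, 0.005, 1800, 0.285))   # ≈ 37 Bq
```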

  2. Using Aerosol Reflectance for Dust Detection

    NASA Astrophysics Data System (ADS)

    Bahramvash Shams, S.; Mohammadzade, A.

    2013-09-01

    In this study we propose an approach for dust detection using aerosol reflectance over arid and urban regions under clear-sky conditions. In urban and arid areas, surface reflectance in the red and infrared bands is bright, and hence shorter wavelengths are required for this detection. The main steps of our approach are: cloud masking to exclude cloudy pixels from the calculation, calculation of the Rayleigh path radiance, construction of a surface reflectance database, estimation of aerosol reflectance, detection of dust aerosol, and evaluation of the dust detection. Bands at wavelengths of 0.66, 0.55, and 0.47 μm were used in our dust detection. Estimating surface reflectance is the most challenging step in obtaining aerosol reflectance from top-of-atmosphere (TOA) reflectance. For surface estimation we created a surface reflectance database at 0.05 degree latitude by 0.05 degree longitude resolution using the minimum reflectivity technique (MRT). To evaluate our dust detection algorithm, the MODIS aerosol product MOD04 and a common dust detection method, the Brightness Temperature Difference (BTD), were used. We applied the method to Moderate Resolution Imaging Spectroradiometer (MODIS) images of part of Iran (7 degrees of latitude by 8 degrees of longitude) for the spring 2005 dust events from April to June. This study uses MODIS L1B calibrated reflectance at high spatial resolution (500 m, MOD02HKM) from the TERRA spacecraft. Hence our dust detection has a higher spatial resolution than the MODIS aerosol product MOD04, which is 10 × 10 km2, while the BTD resolution is 1 km due to the spatial resolutions of bands 29 (8.7 μm), 31 (11 μm), and 32 (12 μm).

  3. Estimating contaminant loads in rivers: An application of adjusted maximum likelihood to type 1 censored data

    USGS Publications Warehouse

    Cohn, Timothy A.

    2005-01-01

    This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored‐data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Frechet‐Cramér‐Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real‐time water quality monitoring.
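
    The censored-data MLE (Tobit) that serves as a comparison point above is straightforward to write down for a lognormal concentration model: uncensored observations contribute the log-density and censored ones the log-probability of falling below the detection limit. The sketch below is a minimal illustration (synthetic data, and without the AMLE's bias adjustment):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_lognormal_mle(values, censored, detection_limit):
    """Tobit-style MLE for log-transformed concentrations.

    values:   observed concentrations (entries where censored is True are unused)
    censored: True where the observation fell below the detection limit
    """
    logv, logdl = np.log(values), np.log(detection_limit)

    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)              # keep sigma positive
        ll_obs = norm.logpdf(logv[~censored], mu, sigma).sum()
        ll_cen = norm.logcdf(logdl, mu, sigma) * censored.sum()
        return -(ll_obs + ll_cen)

    res = minimize(nll, x0=[logv[~censored].mean(), 0.0])
    return res.x[0], np.exp(res.x[1])          # (mu, sigma) on the log scale

# e.g. phosphorus-like synthetic data with a 0.02 mg/L detection limit:
rng = np.random.default_rng(0)
conc = np.exp(rng.normal(-3.5, 0.8, 100))
cens = conc < 0.02
print(censored_lognormal_mle(np.where(cens, 0.02, conc), cens, 0.02))
```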

  4. Novel utilisation of a circular multi-reflection cell applied to materials ageing experiments

    NASA Astrophysics Data System (ADS)

    Knox, D. A.; King, A. K.; McNaghten, E. D.; Brooks, S. J.; Martin, P. A.; Pimblott, S. M.

    2015-04-01

    We report on the novel utilisation of a circular multi-reflection (CMR) cell applied to materials ageing experiments. This enabled trace gas detection within a narrow interfacial region located between two sample materials, remotely interrogated with near-infrared sources combined with fibre-optic coupling. Tunable diode laser absorption spectroscopy was used to detect water vapour and carbon dioxide at wavelengths near 1,358 and 2,004 nm, respectively, with corresponding detection limits of 7 and 1,139 ppm m Hz^-1/2. The minimum detectable absorption was estimated to be 2.82 × 10^-3 over a 1-s average. In addition, broadband absorption spectroscopy was carried out for the detection of acetic acid, using a super-luminescent light emitting diode centred around 1,430 nm. The 69 cm measurement pathlength was limited by poor manufacturing tolerances of the spherical CMR mirrors and the consequent difficulty of collecting all the cell output light.

  5. Cost-effective sampling of (137)Cs-derived net soil redistribution: part 2 - estimating the spatial mean change over time.

    PubMed

    Chappell, A; Li, Y; Yu, H Q; Zhang, Y Z; Li, X Y

    2015-06-01

    The caesium-137 ((137)Cs) technique for estimating net, time-integrated soil redistribution by the processes of wind, water and tillage is increasingly being used with repeated sampling to form a baseline to evaluate change over small (years to decades) timeframes. This interest stems from knowledge that since the 1950s soil redistribution has responded dynamically to different phases of land use change and management. Currently, there is no standard approach to detect change in (137)Cs-derived net soil redistribution and thereby identify the driving forces responsible for change. We outline recent advances in space-time sampling in the soil monitoring literature which provide a rigorous statistical and pragmatic approach to estimating the change over time in the spatial mean of environmental properties. We apply the space-time sampling framework, estimate the minimum detectable change of net soil redistribution and consider the information content and cost implications of different sampling designs for a study area in the Chinese Loess Plateau. Three phases (1954-1996, 1954-2012 and 1996-2012) of net soil erosion were detectable and attributed to well-documented historical change in land use and management practices in the study area and across the region. We recommend that the design for space-time sampling is considered carefully alongside cost-effective use of the spatial mean to detect and correctly attribute cause of change over time particularly across spatial scales of variation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. The minimum mass of detectable planets in protoplanetary discs and the derivation of planetary masses from high-resolution observations.

    PubMed

    Rosotti, Giovanni P; Juhasz, Attila; Booth, Richard A; Clarke, Cathie J

    2016-07-01

    We investigate the minimum planet mass that produces observable signatures in infrared scattered light and submillimetre (submm) continuum images and demonstrate how these images can be used to measure planet masses to within a factor of about 2. To this end, we perform multi-fluid gas and dust simulations of discs containing low-mass planets, generating simulated observations at 1.65, 10 and 850 μm. We show that the minimum planet mass that produces a detectable signature is ∼15 M⊕: this value is strongly dependent on disc temperature and changes slightly with wavelength (favouring the submm). We also confirm previous results that there is a minimum planet mass of ∼20 M⊕ that produces a pressure maximum in the disc: only planets above this threshold mass generate a dust trap that can eventually create a hole in the submm dust. Below this mass, planets produce annular enhancements in dust outwards of the planet and a reduction in the vicinity of the planet. These features are in steady state and can be understood in terms of variations in the dust radial velocity, imposed by the perturbed gas pressure radial profile, analogous to a traffic jam. We also show how planet masses can be derived from structure in scattered light and submm images. We emphasize that simulations with dust need to be run over thousands of planetary orbits so as to allow the gas profile to achieve a steady state, and caution against the estimation of planet masses using gas-only simulations.

  7. New Hypervelocity Terminal Intercept Guidance Systems for Deflecting/Disrupting Hazardous Asteroids

    NASA Astrophysics Data System (ADS)

    Lyzhoft, Joshua Richard

    Computational modeling and simulations of visual and infrared (IR) sensors are investigated for a new hypervelocity terminal guidance system of intercepting small asteroids (50 to 150 meters in diameter). Computational software tools for signal-to-noise ratio estimation of visual and IR sensors, estimation of minimum and maximum ranges of target detection, and GPU (Graphics Processing Units)-accelerated simulations of the IR-based terminal intercept guidance systems are developed. Scaled polyhedron models of known objects, such as the Rosetta mission's Comet 67P/C-G, NASA's OSIRIS-REx Bennu, and asteroid 433 Eros, are utilized in developing a GPU-based simulation tool for the IR-based terminal intercept guidance systems. A parallelized-ray tracing algorithm for simulating realistic surface-to-surface shadowing of irregular-shaped asteroids or comets is developed. Polyhedron solid-angle approximation is also considered. Using these computational models, digital image processing is investigated to determine single or multiple impact locations to assess the technical feasibility of new planetary defense mission concepts of utilizing a Hypervelocity Asteroid Intercept Vehicle (HAIV) or a Multiple Kinetic-energy Interceptor Vehicle (MKIV). Study results indicate that the IR-based guidance system outperforms the visual-based system in asteroid detection and tracking. When using an IR sensor, predicting impact locations from filtered images resulted in less jittery spacecraft control accelerations than conducting missions with a visual sensor. Infrared sensors have also the possibility to detect asteroids at greater distances, and if properly used, can aid in terminal phase guidance for proper impact location determination for the MKIV system. Emerging new topics of the Minimum Orbit Intersection Distance (MOID) estimation and the Full-Two-Body Problem (F2BP) formulation are also investigated to assess a potential near-Earth object collision risk and the proximity gravity effects of an irregular-shaped binary-asteroid target on a standoff nuclear explosion mission.

  8. Estimation of minimum ventilation requirement of dairy cattle barns for different outdoor temperatures and its effects on indoor temperature: Bursa case.

    PubMed

    Yaslioglu, Erkan; Simsek, Ercan; Kilic, Ilker

    2007-04-15

    In the study, 10 different dairy cattle barns with natural ventilation systems were investigated in terms of structural aspects. The VENTGRAPH software package was used to estimate minimum ventilation requirements for three different outdoor design temperatures (-3, 0 and 1.7 degrees C). Variation in indoor temperature was also determined under the above-mentioned conditions. In the investigated dairy cattle barns, assuming the minimum ventilation requirement is achieved for the -3, 0 and 1.7 degrees C outdoor design temperatures and 70 or 80% indoor relative humidity (IRH), estimated indoor temperatures ranged from 2.2 to 12.2 degrees C for 70% IRH and from 4.3 to 15.0 degrees C for 80% IRH. Barn type, outdoor design temperature and indoor relative humidity significantly (p < 0.01) affected the indoor temperature. The highest ventilation requirement was calculated for the straw yard (13,879 m3 h(-1)) while the lowest was estimated for the tie-stall (6,169.20 m3 h(-1)). Estimated minimum ventilation requirements per animal differed significantly (p < 0.01) according to barn type. The effect of outdoor design temperature on minimum ventilation requirements and on minimum ventilation requirements per animal was also significant (p < 0.05, p < 0.01). Estimated indoor temperatures were within the thermoneutral zone (-2 to 20 degrees C). Therefore, it can be said that the use of naturally ventilated cold dairy barns in the region will not lead to problems associated with animal comfort in winter.

  9. Annual Estimated Minimum School Program of Utah School Districts, 1984-85.

    ERIC Educational Resources Information Center

    Utah State Office of Education, Salt Lake City. School Finance and Business Section.

    This bulletin presents both the statistical and financial data of the Estimated Annual State-Supported Minimum School Program for the 40 school districts of the State of Utah for the 1984-85 school year. It is published for the benefit of those interested in research into the minimum school programs of the various Utah school districts. A brief…

  10. Airborne Collision Detection and Avoidance for Small UAS Sense and Avoid Systems

    NASA Astrophysics Data System (ADS)

    Sahawneh, Laith Rasmi

    The increasing demand to integrate unmanned aircraft systems (UAS) into the national airspace is motivated by the rapid growth of the UAS industry, especially small UAS weighing less than 55 pounds. Their use, however, has been limited by Federal Aviation Administration regulations due to the collision risk they pose and related safety and regulatory concerns. Therefore, before civil aviation authorities can approve routine UAS flight operations, UAS must be equipped with sense-and-avoid technology comparable to the see-and-avoid requirements for manned aircraft. The sense-and-avoid problem includes several important aspects, including regulatory and system-level requirements, design specifications and performance standards, intruder detection and tracking, collision risk assessment, and finally path planning and collision avoidance. In this dissertation, our primary focus is on developing a collision detection, risk assessment and avoidance framework that is computationally affordable and suitable to run on board small UAS. To begin with, we address the minimum sensing range for the sense-and-avoid (SAA) system. We present an approximate closed-form analytical solution to compute the minimum sensing range required to safely avoid an imminent collision. The approach is then demonstrated using a radar sensor prototype that achieves the required minimum sensing range. In the area of collision risk assessment and collision prediction, we present two approaches to estimate the collision risk of an encounter scenario. The first is a deterministic approach similar to those developed for the Traffic Alert and Collision Avoidance System (TCAS) in manned aviation. We extend the approach to account for uncertainties of state estimates by deriving an analytic expression to propagate the error variance using a Taylor series approximation. To address unanticipated intruder maneuvers, we propose an innovative probabilistic approach to quantify likely intruder trajectories and estimate the probability of collision risk using the uncorrelated encounter model (UEM) developed by MIT Lincoln Laboratory. We evaluate the proposed approach using Monte Carlo simulations and compare the performance with linearly extrapolated collision detection logic. For the path planning and collision avoidance part, we present multiple reactive path planning algorithms. We first propose a collision avoidance algorithm based on a simulated chain that responds to a virtual force field produced by encountering intruders. The key feature of the proposed approach is to model the future motion of both the intruder and the ownship using a chain of waypoints that are equally spaced in time. This timing information is used to continuously re-plan paths that minimize the probability of collision. Second, we present an innovative collision avoidance logic using an ownship-centered coordinate system. The technique builds a graph in the local-level frame and uses Dijkstra's algorithm to find the least-cost path. An advantage of this approach is that collision avoidance is inherently a local phenomenon and can be more naturally represented in local coordinates than in global coordinates. Finally, we propose a two-step path planner for ground-based SAA systems. In the first step, an initial suboptimal path is generated using A* search. In the second step, using the A* solution as an initial condition, a chain of unit masses connected by springs and dampers evolves in a simulated force field.
The chain is described by a set of ordinary differential equations that is driven by virtual forces to find the steady-state equilibrium. The simulation results show that the proposed approach produces collision-free plans while minimizing the path length. To move towards a deployable system, we apply collision detection and avoidance techniques to a variety of simulation and sensor modalities including camera, radar and ADS-B along with suitable tracking schemes. Keywords: unmanned aircraft system, small UAS, sense and avoid, minimum sensing range, airborne collision detection and avoidance, collision detection, collision risk assessment, collision avoidance, conflict detection, conflict avoidance, path planning.

  11. Biological nitrogen fixation in the oxygen-minimum region of the eastern tropical North Pacific ocean.

    PubMed

    Jayakumar, Amal; Chang, Bonnie X; Widner, Brittany; Bernhardt, Peter; Mulholland, Margaret R; Ward, Bess B

    2017-10-01

    Biological nitrogen fixation (BNF) was investigated above and within the oxygen-depleted waters of the oxygen-minimum zone of the Eastern Tropical North Pacific Ocean. BNF rates were estimated using an isotope tracer method that overcame the uncertainty of the conventional bubble method by directly measuring the tracer enrichment during the incubations. The highest rates of BNF (~4 nM day^-1) occurred in coastal surface waters and the lowest detectable rates (~0.2 nM day^-1) were found in the anoxic region of offshore stations. BNF was not detectable in most samples from oxygen-depleted waters. The composition of the N2-fixing assemblage was investigated by sequencing of nifH genes. The diazotrophic assemblage in surface waters contained mainly Proteobacterial sequences (Cluster I nifH), while both Proteobacterial sequences and sequences with high identities to those of anaerobic microbes characterized as Cluster III and IV type nifH sequences were found in the anoxic waters. Our results indicate modest input of N through BNF in oxygen-depleted zones, mainly due to the activity of proteobacterial diazotrophs.

  12. Trend analysis of long-term temperature time series in the Greater Toronto Area (GTA)

    NASA Astrophysics Data System (ADS)

    Mohsin, Tanzina; Gough, William A.

    2010-08-01

    As the majority of the world’s population is living in urban environments, there is growing interest in studying local urban climates. In this paper, for the first time, the long-term trends (31-162 years) of temperature change have been analyzed for the Greater Toronto Area (GTA). Annual and seasonal time series for a number of urban, suburban, and rural weather stations are considered. Non-parametric statistical techniques, the Mann-Kendall test and Theil-Sen slope estimation, are used primarily for detecting trends and assessing their significance, and the sequential Mann test is used to detect any abrupt climate change. Statistically significant trends in annual mean and minimum temperatures are detected for almost all stations in the GTA. Winter is found to be the most coherent season, contributing substantially to the increase in annual minimum temperature. The analyses of the abrupt changes in temperature suggest that the increasing trend in Toronto began after the 1920s and continued to the 1960s. For all stations, there is a significant increase in annual and seasonal (particularly winter) temperatures after the 1980s. In terms of the linkage between urbanization and spatiotemporal thermal patterns, significant linear trends in annual mean and minimum temperature are detected for the period 1878-1978 for the urban station, Toronto, while for the rural counterparts the trends are not significant. Also, for all stations in the GTA situated in all directions except south of Toronto, substantial temperature change is detected for the periods 1970-2000 and 1989-2000. It is concluded that urbanization in the GTA has contributed significantly to the increase of annual mean temperatures during the past three decades. In addition to urbanization, the influence of local climate, topography, and larger-scale warming is incorporated in the analysis of the trends.
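
    Both non-parametric tools used above are available off the shelf. The sketch below (synthetic data, for illustration) estimates a Theil-Sen slope with scipy and tests significance with Kendall's tau of temperature against time, which is the statistic underlying the Mann-Kendall test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1878, 1979)
# synthetic annual minimum temperature with a warming trend
tmin = -5.0 + 0.02 * (years - years[0]) + rng.normal(0, 0.8, years.size)

slope, intercept, lo, hi = stats.theilslopes(tmin, years)  # deg C / year
tau, p_value = stats.kendalltau(years, tmin)               # Mann-Kendall basis

print(f"Theil-Sen slope: {slope:.3f} degC/yr (95% CI {lo:.3f}..{hi:.3f})")
print(f"Kendall tau = {tau:.2f}, p = {p_value:.4f}")
```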

  13. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

    We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than that of estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ±1 year of actual age, but underestimated ages of wolves ≥9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear, or with severely broken and missing teeth.

  14. A point of minimal important difference (MID): a critique of terminology and methods.

    PubMed

    King, Madeleine T

    2011-04-01

    The minimal important difference (MID) is a phrase with instant appeal in a field struggling to interpret health-related quality of life and other patient-reported outcomes. The terminology can be confusing, with several terms differing only slightly in definition (e.g., minimal clinically important difference, clinically important difference, minimally detectable difference, the subjectively significant difference), and others that seem similar despite having quite different meanings (minimally detectable difference versus minimum detectable change). Often, nuances of definition are of little consequence in the way that these quantities are estimated and used. Four methods are commonly employed to estimate MIDs: patient rating of change (global transition items); clinical anchors; standard error of measurement; and effect size. These are described and critiqued in this article. There is no universal MID, despite the appeal of the notion. Indeed, for a particular patient-reported outcome instrument or scale, the MID is not an immutable characteristic, but may vary by population and context. At both the group and individual level, the MID may depend on the clinical context and decision at hand, the baseline from which the patient starts, and whether they are improving or deteriorating. Specific estimates of MIDs should therefore not be overinterpreted. For a given health-related quality-of-life scale, all available MID estimates (and their confidence intervals) should be considered, amalgamated into general guidelines and applied judiciously to any particular clinical or research context.

  15. Antarctic meteor observations using the Davis MST and meteor radars

    NASA Astrophysics Data System (ADS)

    Holdsworth, David A.; Murphy, Damian J.; Reid, Iain M.; Morris, Ray J.

    2008-07-01

    This paper presents the meteor observations obtained using two radars installed at Davis (68.6°S, 78.0°E), Antarctica. The Davis MST radar was installed primarily for observation of polar mesosphere summer echoes, with additional transmit and receive antennas installed to allow all-sky interferometric meteor radar observations. The Davis meteor radar performs dedicated all-sky interferometric meteor radar observations. The annual count rate variation for both radars peaks in mid-summer and reaches a minimum in early spring. The height distribution shows significant annual variation, with minimum (maximum) peak heights and maximum (minimum) height widths in early spring (mid-summer). Although the meteor radar count rate and height distribution variations are consistent with those of a similar-frequency meteor radar operating at Andenes (69.3°N), the peak heights show a much larger variation than at Andenes, while the count rate maximum-to-minimum ratios show a much smaller variation. Investigation of the effects of the temporal sampling parameters suggests that these differences are consistent with the different temporal sampling strategies used by the Davis and Andenes meteor radars. The new radiant mapping procedure of [Jones, J., Jones, W., Meteor radiant activity mapping using single-station radar observations, Mon. Not. R. Astron. Soc., 367(3), 1050-1056, doi: 10.1111/j.1365-2966.2006.10025.x, 2006] is investigated. The technique is used to detect the Southern delta-Aquarid meteor shower, and a previously unknown weak shower. Meteoroid speeds obtained using the Fresnel transform are presented. The diurnal, annual, and height variations of meteoroid speeds are presented, with the results found to be consistent with those obtained using specular meteor radars. Meteoroid speed estimates for echoes identified as Southern delta-Aquarid and Sextantid meteor candidates show good agreement with the theoretical pre-atmospheric speeds of these showers (41 km s⁻¹ and 32 km s⁻¹, respectively). The meteoroid speeds estimated for these showers show decreasing speed with decreasing height, consistent with the effects of meteoroid deceleration. Finally, we illustrate how the new radiant mapping and meteoroid speed techniques can be combined for unambiguous meteor shower detection, and use these techniques to detect a previously unknown weak shower.

  16. Non-thermal emission in the core of Perseus: results from a long XMM-Newton observation

    NASA Astrophysics Data System (ADS)

    Molendi, S.; Gastaldello, F.

    2009-01-01

    We employ a long XMM-Newton observation of the core of the Perseus cluster to validate claims of a non-thermal component discovered with Chandra. From a meticulous analysis of our dataset, which includes a detailed treatment of systematic errors, we find the 2-10 keV surface brightness of the non-thermal component to be less than about 5 × 10⁻¹⁶ erg cm⁻² s⁻¹ arcsec⁻². The most likely explanation for the discrepancy between the XMM-Newton and Chandra estimates is a problem in the effective area calibration of the latter. Our EPIC-based magnetic field lower limits do not disagree with Faraday rotation measure estimates on a few cool cores and with a minimum energy estimate on Perseus. In the not-too-distant future, Simbol-X may allow detection of non-thermal components with intensities more than 10 times lower than those that can be measured with EPIC; nonetheless, even the exquisite sensitivity within reach for Simbol-X might be insufficient to detect the IC emission from Perseus.

  17. Airborne multispectral detection of regrowth cotton fields

    NASA Astrophysics Data System (ADS)

    Westbrook, John K.; Suh, Charles P.-C.; Yang, Chenghai; Lan, Yubin; Eyster, Ritchie S.

    2015-01-01

    Effective methods are needed for timely areawide detection of regrowth cotton plants because boll weevils (a quarantine pest) can feed and reproduce on these plants beyond the cotton production season. Airborne multispectral images of regrowth cotton plots were acquired on several dates after three shredding (i.e., stalk destruction) dates. Linear spectral unmixing (LSU) classification was applied to high-resolution airborne multispectral images of regrowth cotton plots to estimate the minimum detectable size and subsequent growth of plants. We found that regrowth cotton fields can be identified when the mean plant width is ~0.2 m for an image resolution of 0.1 m. LSU estimates of canopy cover of regrowth cotton plots correlated well (r² = 0.81) with the ratio of mean plant width to row spacing, a surrogate measure of plant canopy cover. The height and width of regrowth plants were both well correlated (r² = 0.94) with accumulated degree-days after shredding. The results will help boll weevil eradication program managers use airborne multispectral images to detect and monitor the regrowth of cotton plants after stalk destruction, and identify fields that may require further inspection and mitigation of boll weevil infestations.
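
    Linear spectral unmixing treats each pixel as a nonnegative mixture of endmember spectra. A minimal sketch with two hypothetical endmembers (cotton canopy and bare soil) in three bands; all reflectance values are illustrative, not taken from the study:

        import numpy as np
        from scipy.optimize import nnls

        # Columns are endmember reflectances in (green, red, NIR) bands.
        E = np.array([[0.10, 0.08, 0.50],     # cotton canopy
                      [0.18, 0.22, 0.30]]).T  # bare soil

        pixel = np.array([0.14, 0.15, 0.40])  # observed mixed-pixel reflectance

        # Solve pixel ~ E @ f with f >= 0, then normalize abundances to sum to 1.
        f, _ = nnls(E, pixel)
        f /= f.sum()
        print(f"canopy fraction ~ {f[0]:.2f}, soil fraction ~ {f[1]:.2f}")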

  18. Software Development Cost Estimation Executive Summary

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus M.; Menzies, Tim

    2006-01-01

    Identify simple, fully validated cost models that provide estimation uncertainty along with the cost estimate, based on the COCOMO variable set. Use machine learning techniques to determine: (a) the minimum number of cost drivers required for NASA domain-based cost models; (b) the minimum number of data records required; and (c) the estimation uncertainty. Build a repository of software cost estimation information. Coordinate tool development and data collection with: (a) tasks funded by PA&E Cost Analysis; (b) the IV&V Effort Estimation Task; and (c) NASA SEPG activities.

  19. Advancing the detection of steady-state visual evoked potentials in brain-computer interfaces.

    PubMed

    Abu-Alqumsan, Mohammad; Peer, Angelika

    2016-06-01

    Spatial filtering has proved to be a powerful pre-processing step in detection of steady-state visual evoked potentials and boosted typical detection rates both in offline analysis and online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations as they all build upon the second order statistics of the acquired Electroencephalographic (EEG) data, that is, its spatial autocovariance and cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of the canonical variates and that of the autoregressive spectral analysis in estimating the signal and noise power levels. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination methods were found to provide better estimates for different SNR levels. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately and reliably.
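
    The core of the CCA method discussed above is to correlate the multichannel EEG with sine-cosine reference signals at each candidate stimulus frequency and pick the frequency with the largest canonical correlation. A minimal sketch on synthetic data (channel count, duration, and frequencies are illustrative):

        import numpy as np
        from sklearn.cross_decomposition import CCA

        fs = 250.0
        t = np.arange(0, 2.0, 1.0 / fs)
        rng = np.random.default_rng(1)
        # Synthetic 8-channel EEG containing a weak 10 Hz SSVEP plus noise.
        X = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, (8, t.size))

        def cca_score(X, f, n_harmonics=2):
            """Canonical correlation between EEG and references at frequency f."""
            refs = []
            for h in range(1, n_harmonics + 1):
                refs += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
            u, v = CCA(n_components=1).fit_transform(X.T, np.array(refs).T)
            return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

        for f in (8.0, 10.0, 12.0):   # detection: take the argmax over frequencies
            print(f"{f:4.1f} Hz: rho = {cca_score(X, f):.3f}")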

  20. Wavefront reconstruction method based on wavelet fractal interpolation for coherent free space optical communication

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng

    2018-03-01

    Existing wavefront reconstruction methods are usually low in resolution, restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, resulting in weak homodyne detection efficiency for free space optical (FSO) communication. To solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed, following a self-similarity analysis of the wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is applied for multiresolution analysis of the wavefront phase spectrum, during which soft-threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction is used to recover the wavefront phase. Simulation results reflect the superiority of our method in homodyne detection. Compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower computational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.

  1. Metameric MIMO-OOK transmission scheme using multiple RGB LEDs.

    PubMed

    Bui, Thai-Chien; Cusani, Roberto; Scarano, Gaetano; Biagi, Mauro

    2018-05-28

    In this work, we propose a novel visible light communication (VLC) scheme utilizing multiple red-green-blue (RGB) LED triplets, each with different red, green, and blue emission spectra, to mitigate inter-color interference under spatial multiplexing. On-off keying modulation is considered, and its effect on light emission in terms of flickering, dimming, and color rendering is discussed so as to demonstrate how metameric properties have been considered. At the receiver, multiple photodiodes, each with a color filter tuned to one transmit light-emitting diode (LED), are employed. Three different detection mechanisms are then proposed: color zero forcing, minimum mean-square-error estimation, and minimum mean-square-error equalization. The system performance of the proposed scheme is evaluated both with computer simulations and with tests on an Arduino board implementation.
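
    For a linear channel model y = Hx + n, the zero-forcing and linear MMSE detectors named above differ only in a regularization term. A minimal sketch with an illustrative 3×3 crosstalk matrix (the standard linear-MMSE form below assumes unit-power symbols, a simplification for OOK):

        import numpy as np

        rng = np.random.default_rng(2)
        H = np.eye(3) + 0.3 * rng.random((3, 3))  # crosstalk between color channels
        x = rng.integers(0, 2, 3).astype(float)   # OOK symbols (0 or 1)
        sigma2 = 0.05
        y = H @ x + np.sqrt(sigma2) * rng.normal(size=3)

        x_zf = np.linalg.solve(H, y)                           # zero forcing
        W = np.linalg.inv(H.T @ H + sigma2 * np.eye(3)) @ H.T  # linear MMSE
        x_mmse = W @ y

        print("sent :", x)
        print("ZF   :", np.round(x_zf).clip(0, 1))
        print("MMSE :", np.round(x_mmse).clip(0, 1))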

  2. Structural physical approximation for the realization of the optimal singlet fraction with two measurements

    NASA Astrophysics Data System (ADS)

    Adhikari, Satyabrata

    2018-04-01

    Structural physical approximation (SPA) has been exploited to approximate nonphysical operations such as the partial transpose. It has already been studied in the context of entanglement detection, where it was found that if the minimum eigenvalue of the SPA of the partial transpose is less than 2/9, then the two-qubit state is entangled. We find an application of the SPA of the partial transpose in estimating the optimal singlet fraction. We show that the optimal singlet fraction can be expressed in terms of the minimum eigenvalue of the SPA of the partial transpose. We also show that the optimal singlet fraction can be realized using Hong-Ou-Mandel interferometry with only two detectors. Further, we show that the generated hybrid entangled state between a qubit and a binary coherent state can be used as a resource state in quantum teleportation.

  3. Sample size and allocation of effort in point count sampling of birds in bottomland hardwood forests

    USGS Publications Warehouse

    Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.; Ralph, C. John; Sauer, John R.; Droege, Sam

    1995-01-01

    To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of the mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of the number of points visited, the number of visits to each point, and the interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat, nor the interaction between region and habitat, was significant. For α = 0.05 and α = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas ±25 percent of the mean could be achieved with five counts per factor level. The sample size sufficient to detect actual differences for the Wood Thrush (Hylocichla mustelina) was >200, whereas the Prothonotary Warbler (Protonotaria citrea) required <10 counts. Differences in mean cumulative species were detected among numbers of points visited and among numbers of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits. Although no interaction was detected between the number of points and the number of visits, when paired reciprocals were compared, more points invariably yielded a significantly greater cumulative number of species than more visits to a point. Still, 36 point counts per stand during each of two breeding seasons detected only 52 percent of the known available species pool in DEF.

  4. The method of pulsed x-ray detection with a diode laser.

    PubMed

    Liu, Jun; Ouyang, Xiaoping; Zhang, Zhongbing; Sheng, Liang; Chen, Liang; Tan, Xinjian; Weng, Xiufeng

    2016-12-01

    A new class of pulsed X-ray detection methods, based on sensing carrier changes in a diode laser cavity, is presented and demonstrated. Proof-of-principle experiments on detecting the temporal profile of a pulsed X-ray source were performed with a diode laser with a multiple-quantum-well active layer. The results show that the method can recover the temporal profile of a pulsed X-ray source. By analyzing the carrier rate equation, we predict a minimum value for the pre-bias current of the diode laser, which in experiments lies near the threshold current of the diode laser chip. This behaviour generally agrees with the theoretical analysis. The relative sensitivity is estimated at about 3.3 × 10⁻¹⁷ C·cm². We have analyzed the approximately 10 ps response time scale with both rate-equation and Monte Carlo methods.

  5. Particle Filtering for Obstacle Tracking in UAS Sense and Avoid Applications

    PubMed Central

    Moccia, Antonio

    2014-01-01

    Obstacle detection and tracking is a key function for UAS sense and avoid applications. In fact, obstacles in the flight path must be detected and tracked in an accurate and timely manner in order to execute a collision avoidance maneuver in case of a collision threat. The most important parameter for the assessment of a collision risk is the Distance at Closest Point of Approach, that is, the predicted minimum distance between own aircraft and intruder given their current positions and speeds. Since established methodologies can lose accuracy in the presence of nonlinearities, advanced filtering methodologies, such as particle filters, can provide more accurate estimates of the target state in nonlinear problems, thus improving system performance in terms of collision risk estimation. The paper focuses on algorithm development and performance evaluation for an obstacle tracking system based on a particle filter. The particle filter algorithm was tested in off-line simulations based on data gathered during flight tests. In particular, radar-based tracking was considered in order to evaluate the impact of particle filtering in a single sensor framework. The analysis shows some accuracy improvements in the estimation of the Distance at Closest Point of Approach, thus reducing the delay in collision detection. PMID:25105154
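
    For straight-line own-ship and intruder tracks, the Distance at Closest Point of Approach follows in closed form from the relative position and velocity. A minimal sketch (positions and speeds are illustrative):

        import numpy as np

        p_own = np.array([0.0, 0.0]);       v_own = np.array([60.0, 0.0])    # m, m/s
        p_int = np.array([4000.0, -500.0]); v_int = np.array([-50.0, 10.0])

        dp = p_int - p_own   # relative position
        dv = v_int - v_own   # relative velocity

        # Time of closest approach, clipped at zero (no closest point in the past).
        t_cpa = max(0.0, -(dp @ dv) / (dv @ dv))
        dcpa = np.linalg.norm(dp + dv * t_cpa)
        print(f"t_CPA = {t_cpa:.1f} s, DCPA = {dcpa:.1f} m")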

  6. Rotation of the optical polarization angle associated with the 2008 γ-ray flare of blazar W Comae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorcia, Marco; Benítez, Erika; Cabrera, José I.

    2014-10-10

    An R-band photopolarimetric variability analysis of the TeV bright blazar W Comae between 2008 February 28 and 2013 May 17 is presented. The source showed a gradual tendency to decrease its mean flux level, with a total change of 3 mJy. Maximum and minimum brightness states in the R band of 14.25 ± 0.04 and 16.52 ± 0.1 mag, respectively, were observed, corresponding to a maximum variation of ΔF = 5.40 mJy. We estimated a minimum variability timescale of Δt = 3.3 days. A maximum polarization degree P = 33.8% ± 1.6%, with a maximum variation of ΔP = 33.2%, was found. One of our main results is the detection of a large rotation of the polarization angle from 78° to 315° (Δθ ∼ 237°) that coincides in time with the γ-ray flare observed in 2008 June. This result indicates that both optical and γ-ray emission regions could be co-spatial. During this flare, a correlation between the R-band flux and polarization degree was found, with a correlation coefficient of r_F-P = 0.93 ± 0.11. From the Stokes parameters, we infer the existence of two optically thin synchrotron components that contribute to the polarized flux. One of them is stable, with a constant polarization degree of 11%. Assuming a shock-in-jet model during the 2008 flare, we estimated a maximum Doppler factor δ_D ∼ 27 and a minimum of δ_D ∼ 16, a minimum viewing angle of the jet of ∼2.0°, and a magnetic field B ∼ 0.12 G.

  7. Reliability and convergent validity of the five-step test in people with chronic stroke.

    PubMed

    Ng, Shamay S M; Tse, Mimi M Y; Tam, Eric W C; Lai, Cynthia Y Y

    2018-01-10

    (i) To estimate the intra-rater, inter-rater and test-retest reliabilities of the Five-Step Test (FST), as well as the minimum detectable change in FST completion times in people with stroke. (ii) To estimate the convergent validity of the FST with other measures of stroke-specific impairments. (iii) To identify the best cut-off times for distinguishing FST performance in people with stroke from that of healthy older adults. A cross-sectional study. University-based rehabilitation centre. Forty-eight people with stroke and 39 healthy controls. None. The FST, along with (for the stroke survivors only) scores on the Fugl-Meyer Lower Extremity Assessment (FMA-LE), the Berg Balance Scale (BBS), Limits of Stability (LOS) tests, and Activities-specific Balance Confidence (ABC) scale were tested. The FST showed excellent intra-rater (intra-class correlation coefficient; ICC = 0.866-0.905), inter-rater (ICC = 0.998), and test-retest (ICC = 0.838-0.842) reliabilities. A minimum detectable change of 9.16 s was found for the FST in people with stroke. The FST correlated significantly with the FMA-LE, BBS, and LOS results in the forward and sideways directions (r = -0.411 to -0.716, p < 0.004). The FST completion time of 13.35 s was shown to discriminate reliably between people with stroke and healthy older adults. The FST is a reliable, easy-to-administer clinical test for assessing stroke survivors' ability to negotiate steps and stairs.
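
    The minimum detectable change reported above is conventionally derived from the test-retest ICC via the standard error of measurement. A minimal sketch, assuming an illustrative between-subject SD of FST completion times (only the ICC of ~0.84 is taken from the abstract):

        import math

        sd = 8.3      # between-subject SD of FST completion times (s), assumed
        icc = 0.84    # test-retest reliability, as reported above

        sem = sd * math.sqrt(1 - icc)        # standard error of measurement
        mdc95 = 1.96 * math.sqrt(2) * sem    # 95% minimum detectable change
        print(f"SEM = {sem:.2f} s, MDC95 = {mdc95:.2f} s")  # ~9.2 s, cf. 9.16 s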

  8. The effectiveness of robust RMCD control chart as outliers’ detector

    NASA Astrophysics Data System (ADS)

    Darmanto; Astutik, Suci

    2017-12-01

    A well-known control chart for monitoring a multivariate process is Hotelling's T², whose parameters are classically estimated; it is very sensitive to, and also marred by, the masking and swamping effects of outliers. To overcome these situations, robust estimators are strongly recommended. One such robust estimator is the re-weighted minimum covariance determinant (RMCD), which has the same robustness characteristics as the MCD. In this paper, effectiveness means the accuracy of the RMCD control chart in detecting outliers as real outliers; in other words, how effectively the control chart can identify and remove the masking and swamping effects of outliers. We assessed the effectiveness of the robust control chart by simulation, considering different scenarios: sample size n, proportion of outliers, and number of quality characteristics p. We found that in some scenarios this RMCD robust control chart works effectively.
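
    A minimal sketch of the idea using scikit-learn's MCD estimator (which includes the re-weighting step) to flag multivariate outliers against a chi-square control limit; the data, contamination, and limit are illustrative:

        import numpy as np
        from scipy.stats import chi2
        from sklearn.covariance import MinCovDet

        rng = np.random.default_rng(3)
        X = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=100)
        X[:5] += 6.0                             # plant a few outliers

        mcd = MinCovDet(random_state=0).fit(X)   # robust location and scatter
        d2 = mcd.mahalanobis(X)                  # robust squared distances
        ucl = chi2.ppf(0.999, df=X.shape[1])     # upper control limit
        print("flagged points:", np.flatnonzero(d2 > ucl))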

  9. Effective channel estimation and efficient symbol detection for multi-input multi-output underwater acoustic communications

    NASA Astrophysics Data System (ADS)

    Ling, Jun

    Achieving reliable underwater acoustic communications (UAC) has long been recognized as a challenging problem owing to the scarce bandwidth available and the reverberant spread in both time and frequency domains. To pursue high data rates, we consider a multi-input multi-output (MIMO) UAC system, and our focus is placed on two main issues regarding a MIMO UAC system: (1) channel estimation, which involves the design of the training sequences and the development of a reliable channel estimation algorithm, and (2) symbol detection, which requires interference cancelation schemes due to simultaneous transmission from multiple transducers. To enhance channel estimation performance, we present a cyclic approach for designing training sequences with good auto- and cross-correlation properties, and a channel estimation algorithm called the iterative adaptive approach (IAA). Sparse channel estimates can be obtained by combining IAA with the Bayesian information criterion (BIC). Moreover, we present sparse learning via iterative minimization (SLIM) and demonstrate that SLIM gives similar performance to IAA but at a much lower computational cost. Furthermore, an extension of the SLIM algorithm is introduced to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). Regarding symbol detection, a linear minimum mean-squared error based detection scheme, called RELAX-BLAST, which is a combination of vertical Bell Labs layered space-time (V-BLAST) algorithm and the cyclic principle of the RELAX algorithm, is presented and it is shown that RELAX-BLAST outperforms V-BLAST. We show that RELAX-BLAST can be implemented efficiently by making use of the conjugate gradient method and diagonalization properties of circulant matrices. This fast implementation approach requires only simple fast Fourier transform operations and facilitates parallel implementations. The effectiveness of the proposed MIMO schemes is verified by both computer simulations and experimental results obtained by analyzing the measurements acquired in multiple in-water experiments.

  10. Heterogeneous autoregressive model with structural break using nearest neighbor truncation volatility estimators for DAX.

    PubMed

    Chin, Wen Cheong; Lee, Min Cherng; Yap, Grace Lee Ching

    2016-01-01

    High-frequency financial data modelling has become one of the important research areas in financial econometrics. However, possible structural breaks in volatile financial time series often trigger inconsistency issues in volatility estimation. In this study, we propose a structural-break, heavy-tailed heterogeneous autoregressive (HAR) volatility econometric model enhanced with jump-robust estimators. Breakpoints in the volatility are captured by dummy variables after detection by the Bai-Perron sequential multiple-breakpoint procedure. To further deal with possible abrupt jumps in the volatility, the jump-robust volatility estimators are composed using the nearest neighbor truncation approach, namely the minimum and median realized volatility. With the structural-break improvements in both the models and the volatility estimators, the empirical findings show that the modified HAR model provides the best-performing in-sample and out-of-sample forecast evaluations compared with the standard HAR models. Accurate volatility forecasts directly influence the application of risk management and investment portfolio analysis.
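
    A minimal sketch of the nearest-neighbor-truncation estimators (the MinRV and MedRV of Andersen, Dobrev and Schaumburg) next to the plain realized variance, on a simulated intraday return series with one planted jump:

        import numpy as np

        rng = np.random.default_rng(4)
        r = rng.normal(0, 0.001, 390)   # 1-minute log returns for one day
        r[200] += 0.02                  # add a jump

        n = r.size
        rv = np.sum(r**2)               # realized variance (jump-sensitive)
        min_rv = (np.pi / (np.pi - 2)) * (n / (n - 1)) * np.sum(
            np.minimum(np.abs(r[:-1]), np.abs(r[1:]))**2)
        med_rv = (np.pi / (6 - 4 * np.sqrt(3) + np.pi)) * (n / (n - 2)) * np.sum(
            np.median(np.abs(np.column_stack([r[:-2], r[1:-1], r[2:]])), axis=1)**2)
        print(f"RV={rv:.6f}  MinRV={min_rv:.6f}  MedRV={med_rv:.6f}")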

  11. Detection of tripping gait patterns in the elderly using autoregressive features and support vector machines.

    PubMed

    Lai, Daniel T H; Begg, Rezaul K; Taylor, Simon; Palaniswami, Marimuthu

    2008-01-01

    Elderly tripping falls cost billions annually in medical funds and result in high mortality rates, often precipitated by pulmonary embolism and infected fractures that do not heal well. In this paper, we propose an intelligent gait detection system (AR-SVM) for screening elderly individuals at risk of suffering tripping falls. The motivation for this system is to provide early detection of elderly gait reminiscent of tripping characteristics so that preventive measures can be administered. Our system is composed of two stages: a predictor model estimated by an autoregressive (AR) process and a support vector machine (SVM) classifier. The system input is a digital signal constructed from consecutive measurements of minimum toe clearance (MTC) representative of steady-state walking. The AR-SVM system was tested on 23 individuals (13 healthy and 10 having suffered at least one tripping fall in the past year), each of whom completed a minimum of 10 min of walking on a treadmill at a self-selected pace. In the first stage, a fourth-order AR model required at least 64 MTC values to correctly detect all fallers and non-fallers. Detection was further improved to less than 1 min of walking when the model coefficients were used as input features to the SVM classifier. The system achieved a detection accuracy of 95.65% with the leave-one-out method using only 16 MTC samples, but this was reduced to 69.57% when eight MTC samples were used. These results demonstrate a fast and efficient system requiring a small number of strides and only MTC measurements for accurate detection of tripping gait characteristics.
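
    A minimal sketch of the two-stage pipeline: fit an AR(4) model to each MTC series and feed the coefficients to an SVM classifier. The MTC series are simulated stand-ins (only the AR order, series length, and group sizes follow the abstract):

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg
        from sklearn.svm import SVC

        rng = np.random.default_rng(5)

        def ar_features(mtc, order=4):
            """AR coefficients (constant excluded) of an MTC series."""
            return AutoReg(mtc, lags=order).fit().params[1:]

        def make_series(faller, n=64):   # simulated MTC series (m)
            return (1.0 if faller else 1.2) + rng.normal(
                0, 0.15 if faller else 0.08, n)

        labels = [0] * 13 + [1] * 10     # 13 healthy, 10 fallers
        X = np.array([ar_features(make_series(y)) for y in labels])

        clf = SVC(kernel="rbf").fit(X, labels)
        print("training accuracy:", clf.score(X, labels))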

  12. Roughness Influence on Initiation of Fretting Fatigue Scar of Ti-6Al-4V Alloy

    NASA Astrophysics Data System (ADS)

    Capitanu, L.; Badita, L. L.; Florescu, V.; Tiganesteanu, C.

    2018-01-01

    This paper reports on experimental studies undertaken to detect the early stage at which fretting wear appears in the Ti-6Al-4V alloy used for hip prostheses. Wear is a critical aspect for estimating fretting fatigue. Studies were performed on samples of special shape in order to study the influence of the roughness of the contacting surfaces on fretting durability. Fretting buffers with contact-surface roughnesses Ra of 0.015 and 0.045 μm, and Ti-6Al-4V samples with roughnesses Ra = 0.045 μm, Ra = 0.075 μm and Ra = 0.19 μm, were used. Testing periods of 3 seconds, 1 minute and 5 minutes were selected to capture the moment of fretting scar appearance, long before it initiates eventual fretting cracking. The friction coefficient was measured simultaneously with the fretting wear of the surface. The time evolution of the fretting wear showed that, under the experimental conditions used, the minimum wear occurs at a certain value of the roughness and not at the minimum roughness. Surprisingly, the minimum friction coefficient does not coincide with the minimum fretting wear.

  13. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distance is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  14. CE with a boron-doped diamond electrode for trace detection of endocrine disruptors in water samples.

    PubMed

    Browne, Damien J; Zhou, Lin; Luong, John H T; Glennon, Jeremy D

    2013-07-01

    Off-line SPE and CE coupled with electrochemical detection have been used for the determination of bisphenol A (BPA), bisphenol F, 4-ethylphenol, and bisphenol A diglycidyl ether in bottled drinking water. The use of a boron-doped diamond electrode as an electrochemical detector in amperometric mode provides favorable analytical performance for detecting these endocrine-disrupting compounds, such as lower noise levels, higher peak resolution with enhanced sensitivity, and improved resistance against electrode passivation. The oxidative electrochemical detection of the endocrine-disrupting compounds was accomplished with the boron-doped diamond electrode poised at +1.4 V versus Ag/AgCl, without electrode pretreatment. An off-line SPE procedure (Bond Elut® C18 SPE cartridge) was utilized to extract and preconcentrate the compounds prior to separation and detection. The minimum detectable concentrations for the four compounds ranged from 0.01 to 0.06 μM at a signal-to-noise ratio of three. After exposing the plastic water bottle to sunlight for 7 days, the concentration of BPA in the bottled drinking water was estimated to be 0.03 μM. This proposed approach has great potential for rapid and effective determination of the BPA content in water from plastic bottles that have been exposed to sunlight for an extended period of time. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values

    DTIC Science & Technology

    2016-12-01

    MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis… orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of the multi-aspect basis… Minimum mean-square error (MMSE) estimation is applied to target imaging with synthetic aperture sonar.

  16. Landsat 8 and ICESat-2: Performance and potential synergies for quantifying dryland ecosystem vegetation cover and biomass

    USGS Publications Warehouse

    Glenn, Nancy F.; Neuenschwander, Amy; Vierling, Lee A.; Spaete, Lucas; Li, Aihua; Shinneman, Douglas; Pilliod, David S.; Arkle, Robert; McIlroy, Susan

    2016-01-01

    To estimate the potential synergies of OLI and ICESat-2 we used simulated ICESat-2 photon data to predict vegetation structure. In a shrubland environment with a vegetation mean height of 1 m and mean vegetation cover of 33%, vegetation photons are able to explain nearly 50% of the variance in vegetation height. These results, and those from a comparison site, suggest that a lower detection threshold of ICESat-2 may be in the range of 30% canopy cover and roughly 1 m height in comparable dryland environments and these detection thresholds could be used to combine future ICESat-2 photon data with OLI spectral data for improved vegetation structure. Overall, the synergistic use of Landsat 8 and ICESat-2 may improve estimates of above-ground biomass and carbon storage in drylands that meet these minimum thresholds, increasing our ability to monitor drylands for fuel loading and the potential to sequester carbon.

  17. Detection of hail signatures from single-polarization C-band radar reflectivity

    NASA Astrophysics Data System (ADS)

    Kunz, Michael; Kugel, Petra I. S.

    2015-02-01

    Five different criteria that estimate hail signatures from single-polarization radar data are statistically evaluated over a 15-year period by categorical verification against loss data provided by a building insurance company. The criteria consider different levels or thresholds of radar reflectivity, some of them complemented by estimates of the 0 °C level or cloud top temperature. Applied to reflectivity data from a single C-band radar in southwest Germany, it is found that all criteria are able to reproduce most of the past damage-causing hail events. However, the criteria substantially overestimate hail occurrence by up to 80%, mainly due to the verification process using damage data. Best results in terms of highest Heidke Skill Score HSS or Critical Success Index CSI are obtained for the Hail Detection Algorithm (HDA) and the Probability of Severe Hail (POSH). Radar-derived hail probability shows a high spatial variability with a maximum on the lee side of the Black Forest mountains and a minimum in the broad Rhine valley.
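
    Both skill scores used above come from the 2×2 contingency table of predicted versus observed hail. A minimal sketch with illustrative counts (hits a, false alarms b, misses c, correct negatives d):

        a, b, c, d = 40, 30, 10, 920   # illustrative event counts
        n = a + b + c + d

        csi = a / (a + b + c)                                    # Critical Success Index
        expected = ((a + b) * (a + c) + (b + d) * (c + d)) / n   # chance agreement
        hss = (a + d - expected) / (n - expected)                # Heidke Skill Score
        print(f"CSI = {csi:.2f}, HSS = {hss:.2f}")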

  18. Changes in seasonal streamflow extremes experienced in rivers of Northwestern South America (Colombia)

    NASA Astrophysics Data System (ADS)

    Pierini, J. O.; Restrepo, J. C.; Aguirre, J.; Bustamante, A. M.; Velásquez, G. J.

    2017-04-01

    A measure of the variability in seasonal extreme streamflow was estimated for the Colombian Caribbean coast using monthly time series of freshwater discharge from ten watersheds. The aim was to detect modifications in the monthly streamflow distribution, seasonal trends, variance, and extreme monthly values. A 20-year moving window, shifted in successive 1-year steps, was applied to the monthly series to analyze the seasonal variability of streamflow. The seasonally windowed data were statistically fitted with the Gamma distribution function. Scale and shape parameters were computed using maximum likelihood estimation (MLE) and the bootstrap method with 1000 resamples. A trend analysis was performed for each windowed series, allowing detection of the window with the maximum absolute trend values. Significant temporal shifts in the seasonal streamflow distribution and quantiles (QT) were obtained for different frequencies. Wet and dry extreme periods increased significantly in the last decades. This increase did not occur simultaneously throughout the region. Some locations exhibited continuous increases only in the minimum QT.
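
    A minimal sketch of the seasonal fitting step: maximum-likelihood Gamma parameters with a 1000-resample bootstrap confidence interval on the shape parameter. The monthly discharges are simulated (the study's data are not reproduced here):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        q = stats.gamma.rvs(a=2.0, scale=150.0, size=240, random_state=rng)  # m3/s

        shape, loc, scale = stats.gamma.fit(q, floc=0)   # MLE, location fixed at 0

        boot = np.array([stats.gamma.fit(rng.choice(q, q.size, replace=True), floc=0)
                         for _ in range(1000)])
        lo, hi = np.percentile(boot[:, 0], [2.5, 97.5])
        print(f"shape = {shape:.2f} (95% CI {lo:.2f}-{hi:.2f}), scale = {scale:.0f}")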

  19. Detection of cow milk adulteration in yak milk by ELISA.

    PubMed

    Ren, Q R; Zhang, H; Guo, H Y; Jiang, L; Tian, M; Ren, F Z

    2014-10-01

    In the current study, a simple, sensitive, and specific ELISA using a high-affinity anti-bovine β-casein monoclonal antibody was developed for the rapid detection of cow milk in adulterated yak milk. The developed ELISA was highly specific and could be applied to detect bovine β-casein (10-8,000 μg/mL) and cow milk (dilutions of 1:1,300 to 1:2) in yak milk. Cross-reactivity was <1% when tested against yak milk. The linear range of adulterant concentration was 1 to 80% (vol/vol), and the minimum detection limit was 1% (vol/vol) cow milk in yak milk. Different treatments, including heating, acidification, and rennet addition, did not interfere with the assay. Moreover, the results were highly reproducible (coefficient of variation <10%), and we detected no significant differences between known and estimated values. Therefore, this assay is appropriate for the routine analysis of yak milk adulterated with cow milk. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  20. Method and apparatus for detecting dilute concentrations of radioactive xenon in samples of xenon extracted from the atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warburton, William K.; Hennig, Wolfgang G.

    A method and apparatus for measuring the concentrations of radioxenon isotopes in a gaseous sample, wherein the sample cell is surrounded by N sub-detectors that are sensitive to both electrons and photons from radioxenon decays. Signal processing electronics are provided that can detect events within the sub-detectors, measure their energies, determine whether they arise from electrons or photons, and detect coincidences between events within the same or different sub-detectors. The energies of detected two- or three-event coincidences are recorded as points in associated two- or three-dimensional histograms. Counts within regions of interest in the histograms are then used to compute estimates of the radioxenon isotope concentrations. The method achieves lower backgrounds and lower minimum detectable concentrations by using smaller detector crystals, eliminating interference between double and triple coincidence decay branches, and segregating double coincidences within the same sub-detector from those occurring between different sub-detectors.

  1. Estimate of the influence of muzzle smoke on function range of infrared system

    NASA Astrophysics Data System (ADS)

    Luo, Yan-ling; Wang, Jun; Wu, Jiang-hui; Wu, Jun; Gao, Meng; Gao, Fei; Zhao, Yu-jie; Zhang, Lei

    2013-09-01

    Muzzle smoke produced by weapon firing has an important influence on infrared (IR) systems detecting targets. Based on a theoretical model of an IR system detecting spot targets and surface targets through muzzle smoke, the function ranges for detecting spot targets and surface targets are derived separately according to the definitions of noise equivalent temperature difference (NETD) and minimum resolvable temperature difference (MRTD). The parameters of muzzle smoke affecting the function range of an IR system are also analyzed. Based on measured data of muzzle smoke from single shots, the function ranges of an IR system for detecting typical targets are calculated separately with and without muzzle smoke in the 8-12 micron waveband. For our IR system, the function range for detecting a tank is reduced by over 10% when muzzle smoke is present. The results provide evidence for evaluating the influence of muzzle smoke on IR systems and will help researchers improve ammunition design.

  2. Modeling bird mortality associated with the M/V Citrus oil spill off St. Paul Island, Alaska

    USGS Publications Warehouse

    Flint, Paul L.; Fowler, Ada C.; Rockwell, Robert F.

    1999-01-01

    We developed a model to estimate the number of bird carcasses that were likely deposited on the beaches of St. Paul Island, Alaska, following the M/V Citrus oil spill in February 1996. Most of the island's beaches were searched on an irregular schedule, resulting in the recovery of 876 King Eider carcasses. A sub-sample of beaches was intensively studied to estimate the daily persistence rate and detection probability [Fowler, A.C., Flint, P.L., 1997. Marine Pollution Bulletin]. Using these data, our model predicted that an additional 733±70 King Eider carcasses were not detected during our searches. Therefore, we estimate that at least 1609±70 King Eider carcasses occurred on beaches as a result of the spill. We lacked a sufficient sample size to model losses for other species; thus, we applied the estimated recovery rate for King Eiders (54%) to other species and estimated a total combined loss of 1765 birds. In addition, 165 birds were captured alive, bringing the total estimated number of birds impacted by the M/V Citrus spill to 1930. Given that oiled birds occurred in places on the island that could not be systematically searched, and that it was unlikely that oiled birds that died at sea would have been recovered during our searches [Flint, P.L., Fowler, A.C., 1998. Marine Pollution Bulletin], our estimate of total mortality associated with the spill should be considered a minimum.
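
    The correction implied above reduces to scaling the recovered count by the estimated recovery probability. A minimal sketch using the two figures given in the abstract (876 carcasses recovered, 54% recovery rate); the small difference from 1609 reflects rounding of the rate:

        recovered = 876
        recovery_rate = 0.54   # estimated probability a deposited carcass was recovered

        total = recovered / recovery_rate
        print(f"estimated carcasses deposited: {total:.0f}")   # ~1622, cf. 1609 +/- 70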

  3. Extracting harmonic signal from a chaotic background with local linear model

    NASA Astrophysics Data System (ADS)

    Li, Chenlong; Su, Liyun

    2017-02-01

    In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods based on a local linear (LL) model are put forward. The LL model has been exhaustively researched and successfully applied to fitting and forecasting chaotic signals in many chaotic fields. We enlarge the modeling capacity substantially. First, we predict the short-term chaotic signal and obtain the fitting error based on the LL model. We then detect the frequencies from the fitting error by periodogram; a property of the fitting error that has not been addressed before is proposed, and this property ensures that the detected frequencies are similar to those of the harmonic signal. Second, we establish a two-layer LL model to estimate the deterministic harmonic signal in the strong chaotic background. To estimate this simply and effectively, we develop an efficient backfitting algorithm to select and optimize the parameters, which are hard to search exhaustively. In the method, given the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function to estimate the parameters of the two-layer LL model. Simulation shows that the two-layer LL model and its estimation technique have appreciable flexibility in modeling the deterministic harmonic signal in different chaotic backgrounds (Lorenz, Henon and Mackey-Glass (M-G) equations). Specifically, the harmonic signal can be extracted well at low SNR, and the backfitting algorithm converges within 3-5 iterations.

  4. A baseline study on mental disorders in Guiné-Bissau.

    PubMed

    de Jong, J T; de Klein, G A; ten Horn, S G

    1986-01-01

    Adults attending general health facilities in Guiné-Bissau were screened for the presence of mental disorder; the minimum estimate of definite cases of mental illness was 12%. The proportion correctly identified by general health workers was low: only one of every three patients with a mental disorder was recognised, and of every 100 non-cases, 12 patients were wrongly diagnosed by the health worker as suffering from psychiatric illness. On the basis of these results, health workers are now being taught how to detect mental illness.

  5. [Comparison of the "Trigger" tool with the minimum basic data set for detecting adverse events in general surgery].

    PubMed

    Pérez Zapata, A I; Gutiérrez Samaniego, M; Rodríguez Cuéllar, E; Gómez de la Cámara, A; Ruiz López, P

    Surgery carries a high risk of adverse events (AE). The main objective of this study is to compare the effectiveness of the Trigger tool with the Hospital National Health System registry of discharges, the minimum basic data set (MBDS), in detecting adverse events in patients admitted to General Surgery and undergoing surgery. Observational, descriptive, retrospective study of patients admitted to the general surgery department of a tertiary hospital and undergoing surgery in 2012. Adverse events were identified by reviewing the medical records, using an adaptation of the "Global Trigger Tool" methodology, as well as the MBDS records of the same patients. Once the AE were identified, they were classified according to the harm caused and the extent to which they could have been avoided. The area under the ROC curve was used to determine the discriminatory power of the tools, and the Hanley and McNeil test was used to compare them. AE prevalence was 36.8%. The TT detected 89.9% of all AE, while the MBDS detected 28.48%. The TT provides more information on the nature and characteristics of the AE. The area under the curve was 0.89 for the TT and 0.66 for the MBDS; this difference was statistically significant (P<.001). The Trigger tool detects three times more adverse events than the MBDS registry. The prevalence of adverse events in General Surgery is higher than that estimated in other studies. Copyright © 2017 SECA. Published by Elsevier España, S.L.U. All rights reserved.

  6. Stratospheric sounding by infrared heterodyne spectroscopy

    NASA Technical Reports Server (NTRS)

    Abbas, M. M.; Kunde, V. G.; Mumma, M. J.; Kostiuk, T.; Buhl, D.; Frerking, M. A.

    1978-01-01

    Intensity profiles of infrared spectral lines of stratospheric constituents can be fully resolved with a heterodyne spectrometer of sufficiently high resolution. The constituents' vertical distributions can then be evaluated accurately by analytic inversion of the measured line profiles. Estimates of the detection sensitivity of a heterodyne receiver are given in terms of minimum detectable volume mixing ratios of stratospheric constituents, indicating a large number of minor constituents which can be studied. Stratospheric spectral line shapes, and the resolution required to measure them are discussed in light of calculated synthetic line profiles for some stratospheric molecules in a model atmosphere. The inversion technique for evaluation of gas concentration profiles is briefly described and applications to synthetic lines of O3, CO2, CH4 and N2O are given.

  7. Theoretical assessment of whole body counting performances using numerical phantoms of different gender and sizes.

    PubMed

    Marzocchi, O; Breustedt, B; Mostacci, D; Zankl, M; Urban, M

    2011-03-01

    A goal of whole body counting (WBC) is the estimation of the total body burden of radionuclides regardless of their actual position within the body. To achieve this goal, the detectors need to be placed in regions where the photon flux is as independent as possible of the distribution of the source. At the same time, the detectors need high photon fluxes in order to achieve better efficiency and lower minimum detectable activities. This work presents a method able to define the layout of new WBC systems and to study the behaviour of existing ones, using both the detection efficiency and its dependence on the position of the source within the bodies of computational phantoms.

  8. A portable detection system for in vivo monitoring of 131I in routine and emergency situations

    NASA Astrophysics Data System (ADS)

    Lucena, EA; Dantas, ALA; Dantas, BM

    2018-03-01

    In vivo monitoring of 131I in the human thyroid is often used to evaluate occupational exposure in nuclear medicine facilities and, in the case of accidental intakes in nuclear power plants, for the monitoring of workers and the population. The device presented in this work consists of a Pb-collimated 3″ × 3″ NaI(Tl) scintillation detector assembled on a tripod and connected to a portable PC. The evaluation of the applicability and limitations of the system is based on the estimation of the committed effective doses associated with the minimum detectable activities in different facilities. It has been demonstrated that the system is suitable for use in routine and accidental situations.

  9. The transition from the open minimum to the ring minimum on the ground state and on the lowest excited state of like symmetry in ozone: A configuration interaction study

    DOE PAGES

    Theis, Daniel; Ivanic, Joseph; Windus, Theresa L.; ...

    2016-03-10

    The metastable ring structure of the ozone 1 ¹A₁ ground state, which theoretical calculations have shown to exist, has so far eluded experimental detection. An accurate prediction for the energy difference between this isomer and the lower open structure is therefore of interest, as is a prediction for the isomerization barrier between them, which results from interactions between the lowest two ¹A₁ states. In the present work, valence correlated energies of the 1 ¹A₁ state and the 2 ¹A₁ state were calculated at the 1 ¹A₁ open minimum, the 1 ¹A₁ ring minimum, the transition state between these two minima, the minimum of the 2 ¹A₁ state, and the conical intersection between the two states. The geometries were determined at the full-valence multi-configuration self-consistent-field level. Configuration interaction (CI) expansions up to quadruple excitations were calculated with triple-zeta atomic basis sets. CI expansions based on eight different reference configuration spaces were explored. To obtain some of the quadruple excitation energies, the method of Correlation Energy Extrapolation by Intrinsic Scaling was generalized to the simultaneous extrapolation for two states. This extrapolation method was shown to be very accurate. On the other hand, none of the CI expansions were found to have converged to millihartree (mh) accuracy at the quadruple excitation level. The data suggest that convergence to mh accuracy is probably attained at the sextuple excitation level. For the 1 ¹A₁ state, the present calculations yield estimates of (ring minimum − open minimum) ~45-50 mh and (transition state − open minimum) ~85-90 mh. For the (2 ¹A₁ − 1 ¹A₁) excitation energy, an estimate of ~130-170 mh is found at the open minimum and 270-310 mh at the ring minimum. At the transition state, the difference (2 ¹A₁ − 1 ¹A₁) is found to be between 1 and 10 mh. The geometry of the transition state on the 1 ¹A₁ surface and that of the minimum on the 2 ¹A₁ surface nearly coincide. More accurate predictions of the energy differences also require CI expansions to at least sextuple excitations with respect to the valence space. Furthermore, for every wave function considered, the omission of the correlations of the 2s oxygen orbitals, which is a widely used approximation, was found to cause errors of about ±10 mh in the energy differences.

  11. Estimating black bear density using DNA data from hair snares

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.; Rainbolt, R.E.; Curtis, P.D.

    2010-01-01

    DNA-based mark-recapture has become a methodological cornerstone of research focused on bear species. The objective of such studies is often to estimate population size; however, doing so is frequently complicated by movement of individual bears. Movement affects the probability of detection and the assumption of closure of the population required in most models. To mitigate the bias caused by movement of individuals, population size and density estimates are often adjusted using ad hoc methods, including buffering the minimum polygon of the trapping array. We used a hierarchical, spatial capture-recapture model that contains explicit components for the spatial point process that governs the distribution of individuals and their exposure to (via movement), and detection by, traps. We modeled detection probability as a function of each individual's distance to the trap and an indicator variable for previous capture, to account for possible behavioral responses. We applied our model to a 2006 hair-snare study of a black bear (Ursus americanus) population in northern New York, USA. Based on the microsatellite marker analysis of collected hair samples, 47 individuals were identified. We estimated mean density at 0.20 bears/km². A positive estimate for the indicator variable suggests that bears are attracted to baited sites; therefore, including a trap-dependence covariate is important when using bait to attract individuals. Bayesian analysis of the model was implemented in WinBUGS, and we provide the model specification. The model can be applied to any spatially organized trapping array (hair snares, camera traps, mist nets, etc.) to estimate density and can also account for heterogeneity and covariate information at the trap or individual level. © The Wildlife Society.
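
    A minimal sketch of the detection component of such a model: capture probability declining with distance from the trap under a half-normal form, with a previous-capture indicator acting on the baseline. All parameter values are illustrative, not the fitted estimates:

        import numpy as np

        def p_detect(d, captured_before, p0=0.10, sigma=2.0, beta=0.5):
            """Half-normal detection with a behavioral (trap-response) effect."""
            logit = np.log(p0 / (1 - p0)) + beta * captured_before
            base = 1.0 / (1.0 + np.exp(-logit))
            return base * np.exp(-d**2 / (2 * sigma**2))

        for d in (0.0, 1.0, 3.0):   # distance from activity center to trap (km)
            print(f"d={d}: naive={p_detect(d, 0):.3f}, after capture={p_detect(d, 1):.3f}")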

  12. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
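
    The recipe can be made concrete for the simplest case of Poisson counts over a known background. The sketch below is a hedged illustration under assumptions not taken from the paper: a known background rate b, a threshold fixed by the Type I error alpha, and bisection on the source intensity to find the minimum intensity detected with probability at least 1 - beta.

```python
from scipy import stats

def detection_threshold(b, alpha):
    """Smallest count n* with P(N >= n* | background b) <= alpha (Type I error)."""
    return int(stats.poisson.ppf(1 - alpha, b)) + 1

def upper_limit(b, alpha=0.0013, beta=0.5, s_max=1000.0, tol=1e-4):
    """Minimum source intensity s such that counts from b + s exceed the
    detection threshold with probability >= 1 - beta (Type II error <= beta)."""
    n_star = detection_threshold(b, alpha)
    lo, hi = 0.0, s_max
    while hi - lo > tol:
        s = 0.5 * (lo + hi)
        p_detect = 1.0 - stats.poisson.cdf(n_star - 1, b + s)
        if p_detect >= 1.0 - beta:
            hi = s          # s is detectable often enough; try smaller
        else:
            lo = s          # s is too faint; try larger
    return hi

# e.g., 10 expected background counts, ~3-sigma threshold, 50% power:
print(upper_limit(b=10.0))
```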

  13. Effects of Phasor Measurement Uncertainty on Power Line Outage Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chen; Wang, Jianhui; Zhu, Hao

    2014-12-01

    Phasor measurement unit (PMU) technology provides an effective tool to enhance the wide-area monitoring systems (WAMSs) in power grids. Although extensive studies have been conducted to develop several PMU applications in power systems (e.g., state estimation, oscillation detection and control, voltage stability analysis, and line outage detection), the uncertainty aspects of PMUs have not been adequately investigated. This paper focuses on quantifying the impact of PMU uncertainty on power line outage detection and identification, in which a limited number of PMUs installed at a subset of buses are utilized to detect and identify the line outage events. Specifically, the line outage detection problem is formulated as a multi-hypothesis test, and a general Bayesian criterion is used for the detection procedure, in which the PMU uncertainty is analytically characterized. We further apply the minimum detection error criterion for the multi-hypothesis test and derive the expected detection error probability in terms of PMU uncertainty. The framework proposed provides fundamental guidance for quantifying the effects of PMU uncertainty on power line outage detection. Case studies are provided to validate our analysis and show how PMU uncertainty influences power line outage detection.
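
    The minimum-detection-error rule reduces to choosing the hypothesis with the largest posterior. The following toy Monte Carlo, with made-up Gaussian measurement models standing in for the PMU angle changes under each outage hypothesis, illustrates how an expected detection error probability can be estimated numerically; none of the dimensions or noise levels are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K outage hypotheses, each predicting a mean change mu_k
# in the monitored phasor angles; PMU uncertainty modeled as iid Gaussian noise.
K, m = 4, 6                       # hypotheses and number of monitored buses
mu = rng.normal(size=(K, m))      # predicted angle changes (placeholder values)
sigma = 0.1                       # PMU noise standard deviation (assumption)
prior = np.full(K, 1.0 / K)       # uniform prior over hypotheses

def map_detect(y):
    """Maximum a posteriori rule, i.e. the minimum detection error rule."""
    loglik = -np.sum((y - mu) ** 2, axis=1) / (2 * sigma**2)
    return np.argmax(loglik + np.log(prior))

# Monte Carlo estimate of the expected detection error probability
trials, errors = 20000, 0
for _ in range(trials):
    k = rng.integers(K)                       # true outage hypothesis
    y = mu[k] + sigma * rng.normal(size=m)    # noisy PMU measurement
    errors += map_detect(y) != k
print("estimated detection error:", errors / trials)
```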

  14. Robust Spacecraft Component Detection in Point Clouds.

    PubMed

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives such as planes, cuboids and cylinders. Based on this prior, we propose a robust scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and is robust to noise and to variations in point distribution density.
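
    As a simplified stand-in for the plane-detection step (the paper uses a Hough transform; RANSAC is swapped in here for brevity), a single dominant plane can be pulled out of a point cloud as follows. All tolerances and iteration counts are illustrative.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.02, rng=None):
    """Fit one dominant plane to an (N, 3) point cloud by RANSAC: repeatedly
    sample 3 points, form the plane through them, and keep the plane with the
    most inliers within distance `tol`."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:              # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)   # point-to-plane distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p0, normal)
    return best_model, best_inliers
```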

  15. Robust Spacecraft Component Detection in Point Clouds

    PubMed Central

    Wei, Quanmao; Jiang, Zhiguo

    2018-01-01

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives such as planes, cuboids and cylinders. Based on this prior, we propose a robust scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and is robust to noise and to variations in point distribution density. PMID:29561828

  16. Optimal Search for an Astrophysical Gravitational-Wave Background

    NASA Astrophysics Data System (ADS)

    Smith, Rory; Thrane, Eric

    2018-04-01

    Roughly every 2-10 min, a pair of stellar-mass black holes merge somewhere in the Universe. A small fraction of these mergers are detected as individually resolvable gravitational-wave events by advanced detectors such as LIGO and Virgo. The rest contribute to a stochastic background. We derive the statistically optimal search strategy (producing minimum credible intervals) for a background of unresolved binaries. Our method applies Bayesian parameter estimation to all available data. Using Monte Carlo simulations, we demonstrate that the search is both "safe" and effective: it is not fooled by instrumental artifacts such as glitches and it recovers simulated stochastic signals without bias. Given realistic assumptions, we estimate that the search can detect the binary black hole background with about 1 day of design sensitivity data versus ≈40 months using the traditional cross-correlation search. This framework independently constrains the merger rate and black hole mass distribution, breaking a degeneracy present in the cross-correlation approach. The search provides a unified framework for population studies of compact binaries, which is cast in terms of hyperparameter estimation. We discuss a number of extensions and generalizations, including application to other sources (such as binary neutron stars and continuous-wave sources), simultaneous estimation of a continuous Gaussian background, and applications to pulsar timing.

  17. Simple automatic strategy for background drift correction in chromatographic data analysis.

    PubMed

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-03

    Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers, which belong to the chromatographic peaks, in this vector, and update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight samples used in the metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
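
    The strategy lends itself to a compact implementation. The sketch below follows the description above (local minima as baseline candidates, iterative outlier rejection, linear interpolation) but uses a low-order polynomial as the smooth trend and a MAD-based outlier cut, which are illustrative choices rather than the paper's exact procedure.

```python
import numpy as np
from scipy.signal import argrelmin

def baseline_correct(y, poly_order=3, k=3.0, max_iter=50):
    """Background drift correction in the spirit described above: local minima
    are taken as baseline candidates, candidates riding on peaks are rejected
    iteratively, and the survivors are linearly interpolated over the signal.
    poly_order and k (outlier cut in robust sigmas) are illustrative choices."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float)
    idx = argrelmin(y, order=5)[0]                      # candidate baseline points
    for _ in range(max_iter):
        coef = np.polyfit(x[idx], y[idx], poly_order)   # smooth baseline trend
        resid = y[idx] - np.polyval(coef, x[idx])
        mad = np.median(np.abs(resid - np.median(resid))) + 1e-12
        keep = resid < k * 1.4826 * mad                 # drop peak-like outliers
        if keep.all():
            break
        idx = idx[keep]
    baseline = np.interp(x, x[idx], y[idx])             # expand to full length
    return y - baseline, baseline
```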

  18. Potential benefits of minimum unit pricing for alcohol versus a ban on below cost selling in England 2014: modelling study.

    PubMed

    Brennan, Alan; Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S

    2014-09-30

    To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Modelling study using the Sheffield Alcohol Policy Model version 2.5. England 2014-15. Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45 p, and 50 p per unit (7.9 g/10 mL) of pure alcohol. Changes in mean consumption in terms of units of alcohol, drinkers' expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45 p minimum unit price. Below cost selling is estimated to reduce harmful drinkers' mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45 p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health, saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45 p minimum unit price is estimated to save 624 deaths and 23,700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. The ban on below cost selling, implemented in England in May 2014, is estimated to have small effects on consumption and health harm. The previously announced policy of a minimum unit price, if set at expected levels between 40 p and 50 p per unit, is estimated to have an approximately 40-50 times greater effect. © Brennan et al 2014.

  19. Potential benefits of minimum unit pricing for alcohol versus a ban on below cost selling in England 2014: modelling study

    PubMed Central

    Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S

    2014-01-01

    Objective To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Design Modelling study using the Sheffield Alcohol Policy Model version 2.5. Setting England 2014-15. Population Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Interventions Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45p, and 50p per unit (7.9 g/10 mL) of pure alcohol. Main outcome measures Changes in mean consumption in terms of units of alcohol, drinkers’ expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. Results The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45p minimum unit price. Below cost selling is estimated to reduce harmful drinkers’ mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health—saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45p minimum unit price is estimated to save 624 deaths and 23 700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. Conclusions The ban on below cost selling, implemented in England in May 2014, is estimated to have small effects on consumption and health harm. The previously announced policy of a minimum unit price, if set at expected levels between 40p and 50p per unit, is estimated to have an approximately 40-50 times greater effect. PMID:25270743

  20. Estimating the breeding population of long-billed curlew in the United States

    USGS Publications Warehouse

    Stanley, T.R.; Skagen, S.K.

    2007-01-01

    Determining population size and long-term trends in population size for species of high concern is a priority of international, national, and regional conservation plans. Long-billed curlews (Numenius americanus) are a species of special concern in North America due to apparent declines in their population. Because long-billed curlews are not adequately monitored by existing programs, we undertook a 2-year study with the goals of 1) determining present long-billed curlew distribution and breeding population size in the United States and 2) providing recommendations for a long-term long-billed curlew monitoring protocol. We selected a stratified random sample of survey routes in 16 western states for sampling in 2004 and 2005, and we analyzed count data from these routes to estimate detection probabilities and abundance. In addition, we evaluated habitat along roadsides to determine how well roadsides represented habitat throughout the sampling units. We estimated there were 164,515 (SE = 42,047) breeding long-billed curlews in 2004, and 109,533 (SE = 31,060) breeding individuals in 2005. These estimates far exceed currently accepted estimates based on expert opinion. We found that habitat along roadsides was representative of long-billed curlew habitat in general. We make recommendations for improving sampling methodology, and we present power curves to provide guidance on minimum sample sizes required to detect trends in abundance.

  1. Pattern recognition for passive polarimetric data using nonparametric classifiers

    NASA Astrophysics Data System (ADS)

    Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.

    2005-08-01

    Passive polarization based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can approximate Bayes-optimal decision boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
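
    The PNN step amounts to Parzen-window density estimation per class followed by a maximum-posterior decision. A minimal sketch, with a Gaussian kernel and an illustrative bandwidth (the paper's kernel settings are not reproduced here):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, priors, h=0.5):
    """Probabilistic neural network as a Parzen-window Bayes classifier:
    estimate each class-conditional density with Gaussian kernels of
    bandwidth h, then pick the class maximizing prior * likelihood.
    `priors` must be ordered to match np.unique(train_y)."""
    classes = np.unique(train_y)
    posteriors = []
    for c, pi in zip(classes, priors):
        Xc = train_X[train_y == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)               # squared distances to x
        likelihood = np.mean(np.exp(-d2 / (2 * h**2)))   # unnormalized KDE
        posteriors.append(pi * likelihood)
    return classes[int(np.argmax(posteriors))]
```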

  2. Screening for tinea unguium by Dermatophyte Test Strip.

    PubMed

    Tsunemi, Y; Takehara, K; Miura, Y; Nakagami, G; Sanada, H; Kawashima, M

    2014-02-01

    The direct microscopy, fungal culture and histopathology that are necessary for the definitive diagnosis of tinea unguium are disadvantageous in that detection sensitivity is affected by the level of skill of the person who performs the testing, and the procedures take a long time. The Dermatophyte Test Strip, which was developed recently, can simply and easily detect filamentous fungi in samples in a short time, and there are expectations for its use as a method for tinea unguium screening. With this in mind, we examined the detection capacity of the Dermatophyte Test Strip for tinea unguium. The presence or absence of fungal elements was judged by direct microscopy and Dermatophyte Test Strip in 165 nail samples obtained from residents in nursing homes for the elderly. Moreover, the minimum sample amount required for positive determination was estimated using 32 samples that showed positive results by Dermatophyte Test Strip. The Dermatophyte Test Strip showed 98% sensitivity, 78% specificity, 84.8% positive predictive value, 97% negative predictive value and a positive and negative concordance rate of 89.1%. The minimum sample amount required for positive determination was 0.002-0.722 mg. The Dermatophyte Test Strip showed very high sensitivity and negative predictive value, and was considered a potentially useful method for tinea unguium screening. Positive determination was considered to be possible with a sample amount of about 1 mg. © 2013 British Association of Dermatologists.

  3. Prevalence of autosomal dominant polycystic kidney disease in the European Union.

    PubMed

    Willey, Cynthia J; Blais, Jaime D; Hall, Anthony K; Krasa, Holly B; Makin, Andrew J; Czerwiec, Frank S

    2017-08-01

    Autosomal dominant polycystic kidney disease (ADPKD) is a leading cause of end-stage renal disease, but estimates of its prevalence vary by >10-fold. The objective of this study was to examine the public health impact of ADPKD in the European Union (EU) by estimating minimum prevalence (point prevalence of known cases) and screening prevalence (minimum prevalence plus cases expected after population-based screening). A review of the epidemiology literature from January 1980 to February 2015 identified population-based studies that met criteria for methodological quality. These examined large German and British populations, providing direct estimates of minimum prevalence and screening prevalence. In a second approach, patients from the 2012 European Renal Association‒European Dialysis and Transplant Association (ERA-EDTA) Registry and literature-based inflation factors that adjust for disease severity and screening yield were used to estimate prevalence across 19 EU countries (N = 407 million). Population-based studies yielded minimum prevalences of 2.41 and 3.89/10 000, respectively, and corresponding estimates of screening prevalences of 3.3 and 4.6/10 000. A close correspondence existed between estimates in countries where both direct and registry-derived methods were compared, which supports the validity of the registry-based approach. Using the registry-derived method, the minimum prevalence was 3.29/10 000 (95% confidence interval 3.27-3.30), and if ADPKD screening was implemented in all countries, the expected prevalence was 3.96/10 000 (3.94-3.98). ERA-EDTA-based prevalence estimates and application of a uniform definition of prevalence to population-based studies consistently indicate that the ADPKD point prevalence is <5/10 000, the threshold for rare disease in the EU. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA.

  4. Development of loop-mediated isothermal amplification and SYBR green real-time PCR methods for the detection of Citrus yellow mosaic badnavirus in citrus species.

    PubMed

    Anthony Johnson, A M; Dasgupta, I; Sai Gopal, D V R

    2014-07-01

    Citrus yellow mosaic badnavirus (CMBV) is an important pathogen in southern India spread by infected citrus propagules. One of the measures to arrest the spread of CMBV is to develop methods to screen and certify citrus propagules as CMBV-free. The methods loop-mediated isothermal amplification (LAMP) and SYBR green real-time PCR (SGRTPCR) have been developed for the efficient detection of CMBV in citrus propagules. This paper compares the sensitivities of LAMP and SGRTPCR with polymerase chain reaction (PCR) for the detection of CMBV. Whereas PCR and LAMP were able to detect CMBV from a minimum of 10 ng of total DNA of infected leaf samples, SGRTPCR could detect the same from 1 ng of total DNA. Using SGRTPCR, the viral titres were estimated to be the highest in rough lemon and lowest in Nagpur Mandarin of the five naturally infected citrus species tested. The results will help in designing suitable strategies for the sensitive detection of CMBV from citrus propagules. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Minimum Detectable Dose as a Measure of Bioassay Programme Capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbaugh, Eugene H.

    2003-01-01

    This paper suggests that minimum detectable dose (MDD) be used to describe the capability of bioassay programs for which intakes are expected to be rare. This allows expression of the capability in units that correspond directly to primary dose limits. The concept uses the well-established analytical statistic minimum detectable amount (MDA) as the starting point and assumes MDA detection at a prescribed time post intake. The resulting dose can then be used as an indication of the adequacy or capability of the program for demonstrating compliance with the performance criteria. MDDs can be readily tabulated or plotted to demonstrate the effectiveness of different types of monitoring programs. The inclusion of cost factors for bioassay measurements can allow optimisation.

  6. Minimum detectable dose as a measure of bioassay programme capability.

    PubMed

    Carbaugh, E H

    2003-01-01

    This paper suggests that minimum detectable dose (MDD) be used to describe the capability of bioassay programmes for which intakes are expected to be rare. This allows expression of the capability in units that correspond directly to primary dose limits. The concept uses the well established analytical statistic minimum detectable amount (MDA) as the starting point, and assumes MDA detection at a prescribed time post-intake. The resulting dose can then be used as an indication of the adequacy or capability of the programme for demonstrating compliance with the performance criteria. MDDs can be readily tabulated or plotted to demonstrate the effectiveness of different types of monitoring programmes. The inclusion of cost factors for bioassay measurements can allow optimisation.

  7. Minimum viable populations: Is there a 'magic number' for conservation practitioners?

    Treesearch

    Curtis H. Flather; Gregory D. Hayward; Steven R. Beissinger; Philip A. Stephens

    2011-01-01

    Establishing species conservation priorities and recovery goals is often enhanced by extinction risk estimates. The need to set goals, even in data-deficient situations, has prompted researchers to ask whether general guidelines could replace individual estimates of extinction risk. To inform conservation policy, recent studies have revived the concept of the minimum...

  8. Minimum Wage Effects on Educational Enrollments in New Zealand

    ERIC Educational Resources Information Center

    Pacheco, Gail A.; Cruickshank, Amy A.

    2007-01-01

    This paper empirically examines the impact of minimum wages on educational enrollments in New Zealand. A significant reform to the youth minimum wage since 2000 has resulted in some age groups undergoing a 91% rise in their real minimum wage over the last 10 years. Three panel least squares multivariate models are estimated from a national sample…

  9. Employment Effects of Minimum and Subminimum Wages. Recent Evidence.

    ERIC Educational Resources Information Center

    Neumark, David

    Using a specially constructed panel data set on state minimum wage laws and labor market conditions, Neumark and Wascher (1992) presented evidence that countered the claim that minimum wages could be raised with no cost to employment. They concluded that estimates indicating that minimum wages reduced employment on the order of 1-2 percent for a…

  10. Does the Minimum Wage Affect Welfare Caseloads?

    ERIC Educational Resources Information Center

    Page, Marianne E.; Spetz, Joanne; Millar, Jane

    2005-01-01

    Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…

  11. Minimum area requirements for an at-risk butterfly based on movement and demography.

    PubMed

    Brown, Leone M; Crone, Elizabeth E

    2016-02-01

    Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
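
    The diffusion reasoning behind a critical minimum patch size is captured by the classical KISS result: for a one-dimensional patch with absorbing edges, a population with intrinsic growth rate r and diffusion coefficient D persists only if the patch length exceeds L = π√(D/r). A sketch with illustrative numbers (not the paper's estimates):

```python
import math

def critical_patch_length(D, r):
    """Classical KISS critical patch size in one dimension: the minimum patch
    length (absorbing boundaries) at which growth at rate r balances the
    diffusive loss of individuals across the edges."""
    return math.pi * math.sqrt(D / r)

# Illustrative values only (D in km^2/day, r in 1/day), not the paper's numbers:
print(critical_patch_length(D=0.002, r=0.05))   # patch length in km
```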

  12. Structural Properties and Estimation of Delay Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kwong, R. H. S.

    1975-01-01

    Two areas in the theory of delay systems were studied: structural properties and their applications to feedback control, and optimal linear and nonlinear estimation. The concepts of controllability, stabilizability, observability, and detectability were investigated. The property of pointwise degeneracy of linear time-invariant delay systems is considered. Necessary and sufficient conditions for three dimensional linear systems to be made pointwise degenerate by delay feedback were obtained, while sufficient conditions for this to be possible are given for higher dimensional linear systems. These results were applied to obtain solvability conditions for the minimum time output zeroing control problem by delay feedback. A representation theorem is given for conditional moment functionals of general nonlinear stochastic delay systems, and stochastic differential equations are derived for conditional moment functionals satisfying certain smoothness properties.

  13. Effect of censoring trace-level water-quality data on trend-detection capability

    USGS Publications Warehouse

    Gilliom, R.J.; Hirsch, R.M.; Gilroy, E.J.

    1984-01-01

    Monte Carlo experiments were used to evaluate whether trace-level water-quality data that are routinely censored (not reported) contain valuable information for trend detection. Measurements are commonly censored if they fall below a level associated with some minimum acceptable level of reliability (detection limit). Trace-level organic data were simulated with best- and worst-case estimates of measurement uncertainty, various concentrations and degrees of linear trend, and different censoring rules. The resulting classes of data were subjected to a nonparametric statistical test for trend. For all classes of data evaluated, trends were most effectively detected in uncensored data as compared to censored data even when the data censored were highly unreliable. Thus, censoring data at any concentration level may eliminate valuable information. Whether or not valuable information for trend analysis is, in fact, eliminated by censoring of actual rather than simulated data depends on whether the analytical process is in statistical control and bias is predictable for a particular type of chemical analyses.
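
    The effect being tested can be reproduced in miniature with the Mann-Kendall S statistic, whose sign-based pairwise comparisons lose information when censored values are replaced by the detection limit (ties contribute nothing). A toy version of the experiment, with made-up trend and noise levels:

```python
import numpy as np

def mann_kendall_S(y):
    """Mann-Kendall S statistic: sum of signs of all pairwise differences."""
    s = 0
    for i in range(len(y) - 1):
        s += np.sign(y[i + 1:] - y[i]).sum()
    return s

# Simulate a trace-level series with a linear trend plus noise, then censor at
# a hypothetical detection limit by substituting the limit for low values.
rng = np.random.default_rng(1)
t = np.arange(100)
y = 0.01 * t + rng.normal(0, 0.3, size=t.size)
dl = 0.4                                  # hypothetical detection limit
y_cens = np.maximum(y, dl)
print(mann_kendall_S(y), mann_kendall_S(y_cens))   # censoring typically shrinks |S|
```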

  14. Detection of bremsstrahlung radiation of 90Sr-90Y for emergency lung counting.

    PubMed

    Ho, A; Hakmana Witharana, S S; Jonkmans, G; Li, L; Surette, R A; Dubeau, J; Dai, X

    2012-09-01

    This study explores the possibility of developing a field-deployable ⁹⁰Sr detector for rapid lung counting in emergency situations. The detection of the beta-emitter ⁹⁰Sr and its daughter ⁹⁰Y inside the human lung via bremsstrahlung radiation was performed using a 3″ × 3″ NaI(Tl) crystal detector and a polyethylene-encapsulated source to emulate human lung tissue. The simulation results show that this method is a viable technique for detecting ⁹⁰Sr with a minimum detectable activity (MDA) of 1.07 × 10⁴ Bq, using a realistic dual-shielded detector system in a 0.25 µGy h⁻¹ background field for a 100-s scan. The MDA is sufficiently sensitive to meet the requirement for emergency lung counting of Type S ⁹⁰Sr intake. The experimental data were verified using Monte Carlo calculations, including an estimate for internal bremsstrahlung, and an optimisation of the detector geometry was performed. Optimisations in background reduction techniques and in the electronic acquisition systems are suggested.
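
    For context, MDA figures of this kind are conventionally built from the Currie construction; a standard approximation (stated here as general background, not as this paper's exact formula) is MDA = (2.71 + 4.65√B)/(ε t) for B background counts, detection efficiency ε, and counting time t.

```python
import math

def currie_mda(background_counts, efficiency, count_time):
    """Currie approximation for minimum detectable activity (Bq) at the usual
    5%/5% Type I/II error levels, for a paired-blank counting measurement."""
    ld = 2.71 + 4.65 * math.sqrt(background_counts)   # detection limit, counts
    return ld / (efficiency * count_time)

# Illustrative values only: 100 s scan, 0.1% detection efficiency,
# 500 background counts in the region of interest.
print(currie_mda(background_counts=500, efficiency=1e-3, count_time=100.0))
```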

  15. Sources of Variation in a Two-Step Monitoring Protocol for Species Clustered in Conspicuous Points: Dolichotis patagonum as a Case Study.

    PubMed

    Alonso Roldán, Virginia; Bossio, Luisina; Galván, David E

    2015-01-01

    In species showing distributions attached to particular features of the landscape or conspicuous signs, counts are commonly made through focal observations where animals concentrate. However, to obtain density estimates for a given area, independent searching for signs and occupancy rates of suitable sites is needed. In both cases, it is important to estimate detection probability and other possible sources of variation to avoid confounding effects on measurements of abundance variation. Our objective was to assess possible bias and sources of variation in a two-step protocol in which random designs were applied to search for signs while continuously recording video cameras were used to perform abundance counts where animals are concentrated, using mara (Dolichotis patagonum) as a case study. The protocol was successfully applied to maras within the Península Valdés protected area, given that the protocol was logistically suitable, allowed warrens to be found, the associated adults to be counted, and the detection probability to be estimated. Variability was documented in both components of the two-step protocol. These sources of variation should be taken into account when applying this protocol. Warren detectability was approximately 80% with little variation. Factors related to false positive detection were more important than imperfect detection. The detectability for individuals was approximately 90% using the entire day of observations. The shortest sampling period with a detection capacity similar to that of a full day was approximately 10 hours, and during this period, the visiting dynamic did not show trends. For individual maras, the detection capacity of the camera was not significantly different from that of the observer during fieldwork. The presence of the camera did not affect the visiting behavior of adults to the warren. Application of this protocol will allow monitoring of the near-threatened mara, providing a minimum local population size and a baseline for measuring long-term trends.

  16. Advancing the detection of steady-state visual evoked potentials in brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Abu-Alqumsan, Mohammad; Peer, Angelika

    2016-06-01

    Objective. Spatial filtering has proved to be a powerful pre-processing step in detection of steady-state visual evoked potentials and boosted typical detection rates both in offline analysis and online SSVEP-based brain-computer interface applications. State-of-the-art detection methods and the spatial filters used thereby share many common foundations as they all build upon the second order statistics of the acquired Electroencephalographic (EEG) data, that is, its spatial autocovariance and cross-covariance with what is assumed to be a pure SSVEP response. The present study aims at highlighting the similarities and differences between these methods. Approach. We consider the canonical correlation analysis (CCA) method as a basis for the theoretical and empirical (with real EEG data) analysis of the state-of-the-art detection methods and the spatial filters used thereby. We build upon the findings of this analysis and prior research and propose a new detection method (CVARS) that combines the power of the canonical variates and that of the autoregressive spectral analysis in estimating the signal and noise power levels. Main results. We found that the multivariate synchronization index method and the maximum contrast combination method are variations of the CCA method. All three methods were found to provide relatively unreliable detections in low signal-to-noise ratio (SNR) regimes. CVARS and the minimum energy combination methods were found to provide better estimates for different SNR levels. Significance. Our theoretical and empirical results demonstrate that the proposed CVARS method outperforms other state-of-the-art detection methods when used in an unsupervised fashion. Furthermore, when used in a supervised fashion, a linear classifier learned from a short training session is able to estimate the hidden user intention, including the idle state (when the user is not attending to any stimulus), rapidly, accurately and reliably.
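
    The CCA baseline that the compared methods build on is short to implement: correlate the multichannel EEG with sine/cosine references at each candidate stimulus frequency and pick the frequency with the largest canonical correlation. A minimal sketch (the harmonic count and the QR-based CCA are standard choices, not taken from the paper):

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between EEG X (n x channels) and
    reference signals Y (n x 2*harmonics), via QR whitening + SVD."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def ssvep_detect(eeg, freqs, fs, harmonics=2):
    """Score each candidate stimulus frequency by CCA against sine/cosine
    references and return the best-scoring frequency with all scores."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        refs = []
        for h in range(1, harmonics + 1):
            refs += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
        scores.append(cca_max_corr(eeg, np.column_stack(refs)))
    return freqs[int(np.argmax(scores))], scores
```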

  17. Detection Thresholds of Falling Snow From Satellite-Borne Active and Passive Sensors

    NASA Technical Reports Server (NTRS)

    Skofronick-Jackson, Gail M.; Johnson, Benjamin T.; Munchak, S. Joseph

    2013-01-01

    There is an increased interest in detecting and estimating the amount of falling snow reaching the Earth's surface in order to fully capture the global atmospheric water cycle. An initial step toward global spaceborne falling snow algorithms for current and future missions includes determining the thresholds of detection for various active and passive sensor channel configurations and falling snow events over land surfaces and lakes. In this paper, cloud resolving model simulations of lake effect and synoptic snow events were used to determine the minimum amount of snow (threshold) that could be detected by the following instruments: the W-band radar of CloudSat, the Global Precipitation Measurement (GPM) Dual-Frequency Precipitation Radar (DPR) Ku- and Ka-bands, and the GPM Microwave Imager. Eleven different nonspherical snowflake shapes were used in the analysis. Notable results include the following: 1) The W-band radar has detection thresholds more than an order of magnitude lower than the future GPM radars; 2) the cloud structure macrophysics influences the thresholds of detection for passive channels (e.g., snow events with larger ice water paths and thicker clouds are easier to detect); 3) the snowflake microphysics (mainly shape and density) plays a large role in the detection threshold for active and passive instruments; 4) with reasonable assumptions, the passive 166-GHz channel has detection threshold values comparable to those of the GPM DPR Ku- and Ka-band radars, with approximately 0.05 g m⁻³ detected at the surface, or an approximately 0.5-1.0 mm h⁻¹ melted snow rate. This paper provides information on the light snowfall events missed by the sensors and not captured in global estimates.

  18. A rapid screening of ancestry for genetic association studies in an admixed population from Pernambuco, Brazil.

    PubMed

    Coelho, A V C; Moura, R R; Cavalcanti, C A J; Guimarães, R L; Sandrin-Garcia, P; Crovella, S; Brandão, L A C

    2015-03-31

    Genetic association studies determine how genes influence traits. However, non-detected population substructure may bias the analysis, resulting in spurious results. One method to detect substructure is to genotype ancestry informative markers (AIMs) besides the candidate variants, quantifying how much ancestral populations contribute to the samples' genetic background. The present study aimed to use a minimum quantity of markers, while retaining full potential to estimate ancestries. We tested the feasibility of a subset of the 12 most informative markers from a previously established study to estimate influence from three ancestral populations: European, African and Amerindian. The results showed that in a sample with a diverse ethnicity (N = 822) derived from 1000 Genomes database, the 12 AIMs had the same capacity to estimate ancestries when compared to the original set of 128 AIMs, since estimates from the two panels were closely correlated. Thus, these 12 SNPs were used to estimate ancestry in a new sample (N = 192) from an admixed population in Recife, Northeast Brazil. The ancestry estimates from Recife subjects were in accordance with previous studies, showing that Northeastern Brazilian populations show great influence from European ancestry (59.7%), followed by African (23.0%) and Amerindian (17.3%) ancestries. Ethnicity self-classification according to skin-color was confirmed to be a poor indicator of population substructure in Brazilians, since ancestry estimates overlapped between classifications. Thus, our streamlined panel of 12 markers may substitute panels with more markers, while retaining the capacity to control for population substructure and admixture, thereby reducing sample processing time.

  19. Intracranial EEG potentials estimated from MEG sources: A new approach to correlate MEG and iEEG data in epilepsy.

    PubMed

    Grova, Christophe; Aiguabella, Maria; Zelmann, Rina; Lina, Jean-Marc; Hall, Jeffery A; Kobayashi, Eliane

    2016-05-01

    Detection of epileptic spikes in MagnetoEncephaloGraphy (MEG) requires synchronized neuronal activity over a minimum of 4 cm². We previously validated the Maximum Entropy on the Mean (MEM) as a source localization method able to recover the spatial extent of the epileptic spike generators. The purpose of this study was to evaluate quantitatively, using intracranial EEG (iEEG), the spatial extent recovered from MEG sources by estimating iEEG potentials generated by these MEG sources. We evaluated five patients with focal epilepsy who had a pre-operative MEG acquisition and iEEG with MRI-compatible electrodes. Individual MEG epileptic spikes were localized along the cortical surface segmented from a pre-operative MRI, which was co-registered with the MRI obtained with iEEG electrodes in place for identification of iEEG contacts. An iEEG forward model estimated the influence of every dipolar source of the cortical surface on each iEEG contact. This iEEG forward model was applied to MEG sources to estimate iEEG potentials that would have been generated by these sources. MEG-estimated iEEG potentials were compared with measured iEEG potentials using four source localization methods: two variants of MEM and two standard methods equivalent to minimum norm and LORETA estimates. Our results demonstrated an excellent MEG/iEEG correspondence in the presumed focus for four out of five patients. In one patient, the deep generator identified in iEEG could not be localized in MEG. Estimating iEEG potentials from MEG sources is a promising method to evaluate which MEG sources could be retrieved and validated with iEEG data, providing accurate results especially when applied to MEM localizations. Hum Brain Mapp 37:1661-1683, 2016. © 2016 Wiley Periodicals, Inc.

  20. Channel estimation in few mode fiber mode division multiplexing transmission system

    NASA Astrophysics Data System (ADS)

    Hei, Yongqiang; Li, Li; Li, Wentao; Li, Xiaohui; Shi, Guangming

    2018-03-01

    It is abundantly clear that obtaining the channel state information (CSI) is of great importance for equalization and detection in coherent receivers. However, to the best of the authors' knowledge, most of the existing literature assumes that CSI is perfectly known at the receiver, and little work has discussed the effects on MDM system performance of imperfect CSI caused by channel estimation. Motivated by this, in this paper, channel estimation in the few mode fiber (FMF) mode division multiplexing (MDM) system is investigated, in which two classical channel estimation methods, i.e., the least squares (LS) method and the minimum mean square error (MMSE) method, are discussed under the assumption of spatially white noise lumped at the receiver side of the MDM system. Both the capacity and the BER performance of an MDM system affected by mode-dependent gain or loss (MDL) are studied under different channel estimation errors. Simulation results show that channel estimation errors further degrade the capacity and BER performance of the MDM system, and that a channel estimation error variance of 1e-3 is acceptable in an MDM system with MDL values of 0-6 dB.
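
    The two estimators compared have closed forms for a pilot-based model Y = HX + N, which is a reasonable abstraction of the mode-coupling channel here. The sketch below uses made-up dimensions, BPSK pilots, and an i.i.d. unit-variance prior on the channel entries for the MMSE step; none of these values are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pilot-based estimation of an M x M mode-coupling matrix H, standing in
# for the FMF MDM channel; all dimensions and noise levels are made up.
M, P = 4, 32                                   # modes, pilot symbols
H = (rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))) / np.sqrt(2)
X = (2 * rng.integers(0, 2, size=(M, P)) - 1).astype(complex)   # BPSK pilots
sigma2 = 0.01
N = np.sqrt(sigma2 / 2) * (rng.normal(size=(M, P)) + 1j * rng.normal(size=(M, P)))
Y = H @ X + N

# Least squares: H_ls = Y X^H (X X^H)^(-1)
H_ls = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

# MMSE with an iid unit-variance prior on the entries of H:
# H_mmse = Y X^H (X X^H + sigma2 * I)^(-1)
H_mmse = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T + sigma2 * np.eye(M))

for name, Hhat in (("LS", H_ls), ("MMSE", H_mmse)):
    print(name, "MSE:", np.mean(np.abs(Hhat - H) ** 2))
```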

  1. Unmanned Aircraft Systems (UAS) Integration in the National Airspace System (NAS) Project: Detect and Avoid Display Evaluations in Support of SC-228 Minimum Operational Performance Standards Development

    NASA Technical Reports Server (NTRS)

    Fern, Lisa Carolynn

    2017-01-01

    The primary activity for the UAS-NAS Human Systems Integration (HSI) sub-project in Phase 1 was support of RTCA Special Committee 228 Minimum Operational Performance Standards (MOPS). We provide data on the effect of various Detect and Avoid (DAA) display features with respect to pilot performance of the remain well clear function in order to determine the minimum requirements for DAA displays.

  2. Robust Means and Covariance Matrices by the Minimum Volume Ellipsoid (MVE).

    ERIC Educational Resources Information Center

    Blankmeyer, Eric

    P. Rousseeuw and A. Leroy (1987) proposed a very robust alternative to classical estimates of mean vectors and covariance matrices, the Minimum Volume Ellipsoid (MVE). This paper describes the MVE technique and presents a BASIC program to implement it. The MVE is a "high breakdown" estimator, one that can cope with samples in which as…

  3. Into the Past: A Step Towards a Robust Kimberley Rock Art Chronology

    PubMed Central

    Hayward, John

    2016-01-01

    The recent establishment of a minimum age estimate of 39.9 ka for the origin of rock art in Sulawesi has challenged claims that Western Europe was the locus for the production of the world’s earliest art assemblages. Tantalising excavated evidence found across northern Australia suggests that Australia too contains a wealth of ancient art. However, the dating of rock art itself remains the greatest obstacle to be addressed if the significance of Australian assemblages are to be recognised on the world stage. A recent archaeological project in the northwest Kimberley trialled three dating techniques in order to establish chronological markers for the proposed, regional, relative stylistic sequence. Applications using optically-stimulated luminescence (OSL) provided nine minimum age estimates for fossilised mudwasp nests overlying a range of rock art styles, while Accelerator Mass Spectrometry radiocarbon (AMS 14C) results provided an additional four. Results confirm that at least one phase of the northwest Kimberley rock art assemblage is Pleistocene in origin. A complete motif located on the ceiling of a rockshelter returned a minimum age estimate of 16 ± 1 ka. Further, our results demonstrate the inherent problems in relying solely on stylistic classifications to order rock art assemblages into temporal sequences. An earlier than expected minimum age estimate for one style and a maximum age estimate for another together illustrate that the Holocene Kimberley rock art sequence is likely to be far more complex than generally accepted with different styles produced contemporaneously well into the last few millennia. It is evident that reliance on techniques that produce minimum age estimates means that many more dating programs will need to be undertaken before the stylistic sequence can be securely dated. PMID:27579865

  4. Minimum target prices for production of direct-acting antivirals and associated diagnostics to combat hepatitis C virus

    PubMed Central

    van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew

    2015-01-01

    Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests plus full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping, US$34 for two HCV antigen tests, and US$22 for two full blood count/clinical chemistry tests. Conclusions: Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that existing large-scale treatment programs can be established. (Hepatology 2015;61:1174–1182) PMID:25482139

  5. Into the Past: A Step Towards a Robust Kimberley Rock Art Chronology.

    PubMed

    Ross, June; Westaway, Kira; Travers, Meg; Morwood, Michael J; Hayward, John

    2016-01-01

    The recent establishment of a minimum age estimate of 39.9 ka for the origin of rock art in Sulawesi has challenged claims that Western Europe was the locus for the production of the world's earliest art assemblages. Tantalising excavated evidence found across northern Australia suggests that Australia too contains a wealth of ancient art. However, the dating of rock art itself remains the greatest obstacle to be addressed if the significance of Australian assemblages are to be recognised on the world stage. A recent archaeological project in the northwest Kimberley trialled three dating techniques in order to establish chronological markers for the proposed, regional, relative stylistic sequence. Applications using optically-stimulated luminescence (OSL) provided nine minimum age estimates for fossilised mudwasp nests overlying a range of rock art styles, while Accelerator Mass Spectrometry radiocarbon (AMS 14C) results provided an additional four. Results confirm that at least one phase of the northwest Kimberley rock art assemblage is Pleistocene in origin. A complete motif located on the ceiling of a rockshelter returned a minimum age estimate of 16 ± 1 ka. Further, our results demonstrate the inherent problems in relying solely on stylistic classifications to order rock art assemblages into temporal sequences. An earlier than expected minimum age estimate for one style and a maximum age estimate for another together illustrate that the Holocene Kimberley rock art sequence is likely to be far more complex than generally accepted with different styles produced contemporaneously well into the last few millennia. It is evident that reliance on techniques that produce minimum age estimates means that many more dating programs will need to be undertaken before the stylistic sequence can be securely dated.

  6. Minimum target prices for production of direct-acting antivirals and associated diagnostics to combat hepatitis C virus.

    PubMed

    van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew

    2015-04-01

    Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests plus full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping, US$34 for two HCV antigen tests, and US$22 for two full blood count/clinical chemistry tests. Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that existing large-scale treatment programs can be established. © 2014 The Authors. Hepatology published by Wiley Periodicals, Inc., on behalf of the American Association for the Study of Liver Diseases.

  7. Long-term comparison of the climate extremes variability in different climate types located in coastal and inland regions of Iran

    NASA Astrophysics Data System (ADS)

    Ghiami-Shamami, Fereshteh; Sabziparvar, Ali Akbar; Shinoda, Seirou

    2018-06-01

    The present study examined annual and seasonal trends in climate-based and location-based indices after detecting artificial change points and applying homogenization. Thirteen temperature and eight precipitation indices were generated at 27 meteorological stations over Iran during 1961-2012. The Mann-Kendall test and Sen's slope estimator were applied for trend detection. Results revealed that almost all indices based on minimum temperature followed warmer conditions. Indicators based on minimum temperature showed less consistency, with more cold events and fewer warm events. Climate-based results for all extremes indicated that the semi-arid climate had the most warming events. Moreover, based on location-based results, inland areas showed the most signs of warming. Indices based on precipitation exhibited a negative trend in warm seasons, with the most changes in coastal areas and inland. Results provided evidence of warming and drying since the 1990s. Changes in precipitation indices were much weaker and less spatially coherent. Summer was found to be the most sensitive season in comparison with winter. For arid and semi-arid regions, increasing latitude was associated with fewer warm events, while increasing longitude was associated with more warming events. Overall, Iran is dominated by a significant increase in warm events, especially in minimum temperature-based (nighttime) indices. This result, in addition to fewer precipitation events, suggests a generally drier regime for the future, which is more evident in the warm season of semi-arid sites. The results could provide beneficial references for water resources and eco-environmental policymakers.
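
    Sen's slope, used above for trend magnitude, is simply the median of all pairwise slopes, which makes it robust to outliers in an extremes series. A minimal sketch with an invented warming trend:

```python
import numpy as np

def sens_slope(y):
    """Sen's slope estimator: the median of all pairwise slopes, a robust
    nonparametric measure of trend magnitude."""
    n = len(y)
    slopes = [(y[j] - y[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)

# Toy series with a 0.03 degC/yr warming trend plus noise (illustrative only)
rng = np.random.default_rng(0)
years = np.arange(52)                       # 1961-2012, as in the study period
tmin = 0.03 * years + rng.normal(0, 0.5, years.size)
print(sens_slope(tmin))                     # close to 0.03
```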

  8. A practical algorithm to estimate soil thawing onset with the soil moisture active passive (SMAP) data

    NASA Astrophysics Data System (ADS)

    Chen, X.; Liu, L.

    2016-12-01

    The Soil Moisture Active Passive (SMAP) satellite simultaneously collected active and passive microwave data at L-band from April to July 2015. The L-band radiometer brightness temperature (TB) data are strongly sensitive to changes in soil moisture and can therefore be used to estimate the freeze/thaw state of soil. We applied an edge detection method to detect the onset of thawing based on the SMAP level-1C TB data. This method convolves the TB time series with the first derivative of a Gaussian function as a kernel. When thawing occurs, soil moisture increases abruptly and leads to a decrease in TB; therefore, a primary thaw event can be identified when the convolved signal reaches a local minimum. Given the noise of the radiometer data, not all local minima correspond to a thaw event, so we further applied a filter based on a priori or in situ soil temperature observations to eliminate false events. We compared the TB-based estimates with in situ measurements of soil temperature, moisture, and snow depth from April to June at 5 SNOTEL sites in Alaska. Our results show that at 4 of the 5 sites the estimated thawing onsets and in situ data agree within 5 to 10 days. However, we found a distinct inconsistency of 41 days at the fifth site. One possible reason is the mismatch in spatial coverage: one pixel of SMAP radiometer data has a size of 36 km, within which different areas may have different freeze/thaw states. The SMAP radar backscatter coefficient (σ0) data are also very sensitive to soil moisture and have a finer spatial resolution of 1 km, making them more directly comparable with the in situ measurements. We applied a seasonal threshold method to estimate thawing onset from these data. First, we set a thaw onset based on the in situ soil temperature and moisture measurements at 5 cm depth. Then we averaged σ0 observations from April 14th to 7 days before the thaw onset to represent frozen soil, and used the mean value from 7 days after the thawing onset to June 1st as the thawed reference. From these references, the σ0-based freeze/thaw distribution within the radiometer pixel can be obtained. Assuming TB and σ0 have a linear relationship at the 36-km scale over a short time, SMAP provides a downscaling method to obtain 9-km-resolution TB data. In further work, we plan to apply the edge detection method to these TB data to estimate the soil state at 9 km resolution.
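
    The edge-detection step described above is a convolution followed by a minimum search. A minimal sketch, assuming daily-sampled TB values and an illustrative kernel width; the subsequent filtering of false events against soil temperature is omitted.

```python
import numpy as np

def thaw_onset_candidates(tb, sigma=3.0):
    """Convolve a TB time series with the first derivative of a Gaussian and
    return local minima of the result, sharpest TB drop first.
    sigma is the kernel width in samples (illustrative value)."""
    half = int(4 * sigma)
    t = np.arange(-half, half + 1, dtype=float)
    kernel = -t / sigma**2 * np.exp(-t**2 / (2 * sigma**2))   # d/dt of Gaussian
    edge = np.convolve(tb, kernel, mode="same")
    interior = (edge[1:-1] < edge[:-2]) & (edge[1:-1] < edge[2:])
    minima = np.where(interior)[0] + 1
    return minima[np.argsort(edge[minima])]    # most negative response first
```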

  9. An estimate of the number of tropical tree species.

    PubMed

    Slik, J W Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L; Bellingham, Peter J; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L M; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K; Chazdon, Robin L; Clark, Connie; Clark, David B; Clark, Deborah A; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A O; Eisenlohr, Pedro V; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A; Joly, Carlos A; de Jong, Bernardus H J; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F; Lawes, Michael J; Amaral, Ieda Leao do; Letcher, Susan G; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H; Meilby, Henrik; Melo, Felipe P L; Metcalfe, Daniel J; Medjibe, Vincent P; Metzger, Jean Paul; Millet, Jerome; Mohandass, D; Montero, Juan C; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T F; Pitman, Nigel C A; Poorter, Lourens; Poulsen, Axel D; Poulsen, John; Powers, Jennifer; Prasad, Rama C; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; Dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A; Santos, Fernanda; Sarker, Swapan K; Satdichanh, Manichanh; Schmitt, Christine B; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I-Fang; Sunderland, Terry; Suresh, H S; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L C H; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A; Webb, Campbell O; Whitfeld, Timothy; Wich, Serge A; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C Yves; Yap, Sandra L; Yoneda, Tsuyoshi; Zahawi, Rakan A; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L; Garcia Luize, Bruno; Venticinque, Eduardo M

    2015-06-16

    The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher's alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼ 40,000 and ∼ 53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼ 19,000-25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼ 4,500-6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa.
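
    The log-series arithmetic behind such estimates can be sketched in a few lines: fit Fisher's alpha from the quoted totals via S = alpha * ln(1 + N/alpha), then extrapolate to a pantropical stem total. The stem total below is a placeholder, not the paper's value, so the output only illustrates the mechanics.

      import numpy as np
      from scipy.optimize import brentq

      def fishers_alpha(n_trees, n_species):
          # Solve S = alpha * ln(1 + N / alpha) for alpha.
          f = lambda a: a * np.log1p(n_trees / a) - n_species
          return brentq(f, 1e-3, 1e6)

      alpha = fishers_alpha(657_630, 11_371)   # totals quoted in the abstract
      stem_total = 3e11                        # hypothetical pantropical stem count
      richness = alpha * np.log1p(stem_total / alpha)
      print(f"alpha = {alpha:.0f}, extrapolated richness = {richness:.0f} species")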

  10. Estimating the sensitivity of passive surveillance for HPAI H5N1 in Bayelsa state, Nigeria.

    PubMed

    Ojimelukwe, Agatha E; Prakarnkamanant, Apisit; Rushton, Jonathan

    2016-07-01

    This study identified characteristics of poultry farming, with a focus on practices that affect the detection of HPAI, and estimated the system sensitivity of passive surveillance for HPAI H5N1 in commercial and backyard chicken farms in Bayelsa State, Nigeria. Field studies were carried out in Yenegoa and Ogbia local government areas in Bayelsa state. Willingness to report HPAI was higher in commercial poultry farms (13/13) than in backyard farms (8/13). Poor means of dead bird disposal was common to both commercial and backyard farms. Administering some form of treatment to sick birds without prior consultation with a professional was more common in backyard farms (8/13) than in commercial farms (4/13). Consumption of sick birds was reported in 4/13 backyard farms, and sale of dead birds was recorded in one commercial farm. The sensitivity of passive surveillance for HPAI was assessed using scenario tree modelling. A scenario tree model was developed and applied to estimate the sensitivity, i.e. the probability of detecting one or more infected chicken farms in Bayelsa state at different levels of disease prevalence. The model showed a median sensitivity of 100%, 67% and 23% for detecting HPAI by passive surveillance at a disease prevalence of 0.1%, and at design prevalences of a minimum of 10 and 3 infected poultry farms, respectively. Passive surveillance system sensitivity at a design prevalence of 10 infected farms could be increased to 86% if disease detection in backyard chicken farms were enhanced. Copyright © 2016 Elsevier B.V. All rights reserved.
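
    The scenario-tree logic reduces to simple probability chaining, sketched below: the system sensitivity for n infected farms is 1 - (1 - p_farm)^n, where p_farm multiplies the branch probabilities along the tree. The branch values are loosely inspired by the survey fractions in the abstract but are illustrative assumptions, not the authors' fitted parameters.

      def farm_level_sensitivity(p_type, p_willing_report, p_signs_noticed):
          """Chain the branch probabilities for one farm-type branch of the tree."""
          return p_type * p_willing_report * p_signs_noticed

      def system_sensitivity(p_farm, n_infected):
          # Probability that at least one infected farm is detected and reported.
          return 1 - (1 - p_farm) ** n_infected

      # Two branches: commercial vs backyard chicken farms (assumed 50/50 split).
      p_farm = (farm_level_sensitivity(0.5, 13 / 13, 0.9)    # commercial
                + farm_level_sensitivity(0.5, 8 / 13, 0.9))  # backyard
      for n in (3, 10):
          print(n, round(system_sensitivity(p_farm, n), 3))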

  11. The transition from the open minimum to the ring minimum on the ground state and on the lowest excited state of like symmetry in ozone: A configuration interaction study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theis, Daniel; Windus, Theresa L.; Ruedenberg, Klaus

    The metastable ring structure of the ozone 1¹A₁ ground state, which theoretical calculations have shown to exist, has so far eluded experimental detection. An accurate prediction for the energy difference between this isomer and the lower open structure is therefore of interest, as is a prediction for the isomerization barrier between them, which results from interactions between the lowest two ¹A₁ states. In the present work, valence correlated energies of the 1¹A₁ state and the 2¹A₁ state were calculated at the 1¹A₁ open minimum, the 1¹A₁ ring minimum, the transition state between these two minima, the minimum of the 2¹A₁ state, and the conical intersection between the two states. The geometries were determined at the full-valence multi-configuration self-consistent-field level. Configuration interaction (CI) expansions up to quadruple excitations were calculated with triple-zeta atomic basis sets. The CI expansions based on eight different reference configuration spaces were explored. To obtain some of the quadruple excitation energies, the method of Correlation Energy Extrapolation by Intrinsic Scaling was generalized to the simultaneous extrapolation for two states. This extrapolation method was shown to be very accurate. On the other hand, none of the CI expansions were found to have converged to millihartree (mh) accuracy at the quadruple excitation level. The data suggest that convergence to mh accuracy is probably attained at the sextuple excitation level. On the 1¹A₁ state, the present calculations yield the estimates of (ring minimum − open minimum) ∼45-50 mh and (transition state − open minimum) ∼85-90 mh. For the (2¹A₁ − 1¹A₁) excitation energy, the estimate of ∼130-170 mh is found at the open minimum and 270-310 mh at the ring minimum. At the transition state, the difference (2¹A₁ − 1¹A₁) is found to be between 1 and 10 mh. The geometry of the transition state on the 1¹A₁ surface and that of the minimum on the 2¹A₁ surface nearly coincide. More accurate predictions of the energy differences also require CI expansions to at least sextuple excitations with respect to the valence space. For every wave function considered, the omission of the correlations of the 2s oxygen orbitals, which is a widely used approximation, was found to cause errors of about ±10 mh with respect to the energy differences.

  12. A Minimum Fuel Based Estimator for Maneuver and Natural Dynamics Reconstruction

    NASA Astrophysics Data System (ADS)

    Lubey, D.; Scheeres, D.

    2013-09-01

    The vast and growing population of objects in Earth orbit (active and defunct spacecraft, orbital debris, etc.) offers many unique challenges when it comes to tracking these objects and associating the resulting observations. Complicating these challenges are the inaccurate natural dynamical models of these objects, the active maneuvers of spacecraft that deviate them from their ballistic trajectories, and the fact that spacecraft are tracked and operated by separate agencies. Maneuver detection and reconstruction algorithms can help with each of these issues by estimating mismodeled and unmodeled dynamics through indirect observation of spacecraft. It also helps to verify the associations made by an object correlation algorithm or aid in making those associations, which is essential when tracking objects in orbit. The algorithm developed in this study applies an Optimal Control Problem (OCP) Distance Metric approach to the problems of Maneuver Reconstruction and Dynamics Estimation. This was first developed by Holzinger, Scheeres, and Alfriend (2011), with a subsequent study by Singh, Horwood, and Poore (2012). This method estimates the minimum fuel control policy rather than the state as a typical Kalman Filter would. This difference ensures that the states are connected through a given dynamical model and allows for automatic covariance manipulation, which can help to prevent filter saturation. Using a string of measurements (either verified or hypothesized to correlate with one another), the algorithm outputs a corresponding string of adjoint and state estimates with associated noise. Post-processing techniques are implemented, which when applied to the adjoint estimates can remove noise and expose unmodeled maneuvers and mismodeled natural dynamics. Specifically, the estimated controls are used to determine spacecraft dependent accelerations (atmospheric drag and solar radiation pressure) using an adapted form of the Optimal Control based natural dynamics estimation scheme developed by Lubey and Scheeres (2012). In order to allow for direct comparison, the estimator developed here was modeled after a typical Kalman Filter. The estimator forces the terminal state to lie on a manifold that satisfies the least squares with a priori information cost function, thus establishing a link with a typical Kalman filter. Terms are collected into a pseudo-Kalman Gain, which creates an equivalent form in the state estimates and covariances between the two estimators. While the two estimators share common roots, the inclusion of control in the Minimum Fuel Estimator gives it special properties. For instance, the inclusion of adjoint noise can help to automatically prevent filter saturation in a manner similar to a State Noise Compensation Algorithm. This property is quite important when considering dynamics mismodeling as filter saturation will cause estimate divergence for mismodeled systems. Additional properties and alternative forms of the estimator are also explored in this study. Several implementations of this estimator are given in this paper. It is applied to LEO, GEO, and GTO orbits with drag and SRP mismodeling. The inclusion of unmodeled maneuvers is also considered. These numerical simulations verify the mathematical properties of this estimator, and demonstrate the advantages that this estimator has over typical Kalman Filters.

  13. Atmospheric control on ground and space based early warning system for hazard linked to ash injection into the atmosphere

    NASA Astrophysics Data System (ADS)

    Caudron, Corentin; Taisne, Benoit; Whelley, Patrick; Garces, Milton; Le Pichon, Alexis

    2014-05-01

    Violent volcanic eruptions are common in Southeast Asia, which is bordered by active subduction zones with hundreds of active volcanoes. The physical conditions at the eruptive vent are difficult to estimate, especially when there are only a few sensors distributed around the volcano. New methods are therefore required to tackle this problem. Among them, satellite imagery and infrasound may rapidly provide information on strong eruptions at volcanoes which are not closely monitored by on-site instruments. The deployment of an infrasonic array at Singapore will increase the detection capability of the existing IMS network. In addition, the location of Singapore with respect to those volcanoes makes it the perfect site to identify erupting blasts based on the wavefront characteristics of the recorded signal. There are ~750 active or potentially active volcanoes within 4000 kilometers of Singapore. They have been combined into 23 volcanic zones that have a clear azimuth with respect to Singapore. Each of those zones has been assessed for probabilities of eruptive styles, from moderate (Volcanic Explosivity Index of 3) to cataclysmic (VEI 8), based on remote morphologic analysis. Ash dispersal models have been run using wind velocity profiles from 2010 to 2012 and hypothetical eruption scenarios for a range of eruption explosivities. Results can be used to estimate the likelihood of volcanic ash at any location in SE Asia. Seasonal changes in atmospheric conditions will strongly affect the potential to detect small volcanic eruptions with infrasound, and clouds can hide eruption plumes from satellites. We use the average cloud cover for each zone to estimate the probability of eruption detection from space, and atmospheric models to estimate the probability of eruption detection with infrasound. Using remote sensing in conjunction with infrasound improves detection capabilities, as each method is capable of detecting eruptions when the other is 'blind' or 'deafened' by adverse atmospheric conditions. Depending on its location, each volcanic zone is associated with a threshold value (minimum detectable VEI) that reflects the seasonality of the wind velocity profile in the region and the cloud cover.
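
    The per-zone threshold logic can be sketched as follows: given an assumed detection probability per channel as a function of eruption size (VEI), the "minimum VEI detectable" for a zone is the smallest VEI whose combined satellite-or-infrasound detection probability clears a chosen level. All probabilities below are invented placeholders, not the study's modelled values.

      def combined(p_sat, p_inf):
          # Either channel suffices; assumes the two channels miss independently.
          return 1 - (1 - p_sat) * (1 - p_inf)

      def minimum_detectable_vei(p_sat_by_vei, p_inf_by_vei, required=0.9):
          for vei in sorted(p_sat_by_vei):
              if combined(p_sat_by_vei[vei], p_inf_by_vei[vei]) >= required:
                  return vei
          return None  # no eruption size reaches the required confidence

      # Illustrative seasonal probabilities for one zone (VEI 3 to 6).
      p_sat = {3: 0.2, 4: 0.5, 5: 0.8, 6: 0.95}   # limited by cloud cover
      p_inf = {3: 0.4, 4: 0.7, 5: 0.9, 6: 0.99}   # limited by wind conditions
      print(minimum_detectable_vei(p_sat, p_inf))  # -> 5 for these numbers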

  14. Recommended number of strides for automatic assessment of gait symmetry and regularity in above-knee amputees by means of accelerometry and autocorrelation analysis

    PubMed Central

    2012-01-01

    Background Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm that automatically computes symmetry and regularity indices, and to assess the minimum number of strides needed for appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of the step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned of initial and final strides and the whole signals). At each stage, the differences between the Ad1 and Ad2 values and the corresponding reference values were compared with the minimum detectable difference (MDD) of the index. If the difference was less than the MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results All Ad1 and Ad2 indices were lower in AMP than in CTRL (P < 0.0001). Excluding initial and final strides from the analysis, the minimum number of strides needed for reliable computation of step symmetry and stride regularity was about 2.2 and 3.5, respectively. Analyzing the whole signals, the minimum number of strides increased to about 15 and 20, respectively. Conclusions Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees. PMID:22316184
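
    The regularity indices rest on the unbiased autocorrelation of the trunk acceleration signal, which a few lines of Python can sketch: Ad1 is the autocorrelation at a lag of one step and Ad2 at one stride, so left/right asymmetry depresses Ad1 relative to Ad2. The signal below is synthetic and the lag-picking deliberately naive; this is not the authors' pipeline.

      import numpy as np

      def unbiased_autocorr(x, max_lag):
          x = x - x.mean()
          n = len(x)
          ac = np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(max_lag)])
          return ac / ac[0]  # normalise so ac[0] == 1

      fs = 100                               # Hz, assumed sampling rate
      t = np.arange(0, 20, 1 / fs)
      step_f, asym = 1.8, 0.25               # step frequency; left/right asymmetry
      acc = (np.sin(2 * np.pi * step_f * t)          # step-periodic component
             + asym * np.sin(np.pi * step_f * t)     # stride-periodic asymmetry
             + np.random.default_rng(2).normal(0, 0.1, t.size))

      ac = unbiased_autocorr(acc, max_lag=3 * fs)
      step_lag = int(fs / step_f)            # samples per step
      ad1, ad2 = ac[step_lag], ac[2 * step_lag]
      print(f"Ad1 (step) = {ad1:.2f}, Ad2 (stride) = {ad2:.2f}")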

  15. CE separation of proteins and yeasts dynamically modified by PEG pyrenebutanoate with fluorescence detection.

    PubMed

    Horká, Marie; Růzicka, Filip; Holá, Veronika; Slais, Karel

    2007-07-01

    Optimized protocols for the separation of bioanalytes, proteins and yeasts, dynamically modified by the nonionic surfactant PEG pyrenebutanoate were applied in CZE and in CIEF with an acidic gradient in the pH range 2-5.5, both with fluorescence detection. PEG pyrenebutanoate was used as a buffer additive for dynamic modification of the protein and/or yeast samples. Narrow peaks of the modified analytes were detected. The pI values of the labeled proteins were calculated using new fluorescent pI markers in CIEF and were found to be comparable with the pI's of the native compounds. As an example of the possible use of the suggested CIEF technique, mixed cultures of yeasts, Candida albicans, Candida glabrata, Candida kefyr, Candida krusei, Candida lusitaniae, Candida parapsilosis, Candida tropicalis, Candida zeylanoides, Geotrichum candidum, Saccharomyces cerevisiae, Trichosporon asahii and Yarrowia lipolytica, were reproducibly focused and separated with high sensitivity. Using UV excitation for on-column fluorometric detection, the minimum detectable amounts of analytes were estimated at femtograms for proteins and down to ten cells injected onto the separation capillary.

  16. Observed changes in extremes of daily rainfall and temperature in Jemma Sub-Basin, Upper Blue Nile Basin, Ethiopia

    NASA Astrophysics Data System (ADS)

    Worku, Gebrekidan; Teferi, Ermias; Bantider, Amare; Dile, Yihun T.

    2018-02-01

    Climate variability has been a threat to the socio-economic development of Ethiopia. This paper examined the changes in rainfall, minimum, and maximum temperature extremes of Jemma Sub-Basin of the Upper Blue Nile Basin for the period of 1981 to 2014. The nonparametric Mann-Kendall, seasonal Mann-Kendall, and Sen's slope estimator were used to estimate annual trends. Ten rainfall and 12 temperature indices were used to study changes in rainfall and temperature extremes. The results showed an increasing trend of annual and summer rainfall in more than 78% of the stations and a decreasing trend of spring rainfall in most of the stations. An increase in rainfall extreme events was detected in the majority of the stations. Several rainfall extreme indices showed wetting trends in the sub-basin, whereas limited indices indicated dryness in most of the stations. Annual maximum and minimum temperature and extreme temperature indices showed warming trend in the sub-basin. Presence of extreme rainfall and a warming trend of extreme temperature indices may suggest signs of climate change in the Jemma Sub-Basin. This study, therefore, recommended the need for exploring climate induced risks and implementing appropriate climate change adaptation and mitigation strategies.

  17. Modeling Dengue Vector Dynamics under Imperfect Detection: Three Years of Site-Occupancy by Aedes aegypti and Aedes albopictus in Urban Amazonia

    PubMed Central

    Padilla-Torres, Samael D.; Ferraz, Gonçalo; Luz, Sergio L. B.; Zamora-Perea, Elvira; Abad-Franch, Fernando

    2013-01-01

    Aedes aegypti and Ae. albopictus are the vectors of dengue, the most important arboviral disease of humans. To date, Aedes ecology studies have assumed that the vectors are truly absent from sites where they are not detected; since no perfect detection method exists, this assumption is questionable. Imperfect detection may bias estimates of key vector surveillance/control parameters, including site-occupancy (infestation) rates and control intervention effects. We used a modeling approach that explicitly accounts for imperfect detection and a 38-month, 55-site detection/non-detection dataset to quantify the effects of municipality/state control interventions on Aedes site-occupancy dynamics, considering meteorological and dwelling-level covariates. Ae. aegypti site-occupancy estimates (mean 0.91; range 0.79–0.97) were much higher than reported by routine surveillance based on ‘rapid larval surveys’ (0.03; 0.02–0.11) and moderately higher than directly ascertained with oviposition traps (0.68; 0.50–0.91). Regular control campaigns based on breeding-site elimination had no measurable effects on the probabilities of dwelling infestation by dengue vectors. Site-occupancy fluctuated seasonally, mainly due to the negative effects of high maximum (Ae. aegypti) and minimum (Ae. albopictus) summer temperatures (June-September). Rainfall and dwelling-level covariates were poor predictors of occupancy. The marked contrast between our estimates of adult vector presence and the results from ‘rapid larval surveys’ suggests, together with the lack of effect of local control campaigns on infestation, that many Aedes breeding sites were overlooked by vector control agents in our study setting. Better sampling strategies are urgently needed, particularly for the reliable assessment of infestation rates in the context of control program management. The approach we present here, combining oviposition traps and site-occupancy models, could greatly contribute to that crucial aim. PMID:23472194
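
    The core of such site-occupancy modelling fits occupancy psi and detection p jointly, so that never-detected sites are split between true absence and missed presence. Below is a minimal single-season likelihood in the MacKenzie tradition, fitted to simulated detection histories; it is a sketch of the idea, not the authors' covariate-rich model.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      n_sites, K, psi_true, p_true = 200, 6, 0.9, 0.3
      z = rng.random(n_sites) < psi_true                    # true occupancy state
      y = (rng.random((n_sites, K)) < p_true) & z[:, None]  # detection histories
      detections = y.sum(axis=1)

      def nll(params):
          psi, p = 1 / (1 + np.exp(-params))                # logit -> (0, 1)
          like = psi * p**detections * (1 - p)**(K - detections)
          like = like + (1 - psi) * (detections == 0)       # never-detected sites
          return -np.log(like).sum()

      fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
      psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
      naive = (detections > 0).mean()                       # ignores imperfect detection
      print(f"naive occupancy {naive:.2f} vs estimated psi {psi_hat:.2f} (p = {p_hat:.2f})")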

  18. Optimization of the K-edge imaging for vulnerable plaques using gold nanoparticles and energy-resolved photon counting detectors: a simulation study

    PubMed Central

    Alivov, Yahya; Baturin, Pavlo; Le, Huy Q.; Ducote, Justin; Molloi, Sabee

    2014-01-01

    We investigated the effect of different imaging parameters such as dose, beam energy, energy resolution, and number of energy bins on the image quality of K-edge spectral computed tomography (CT) of gold nanoparticles (GNP) accumulated in an atherosclerotic plaque. A maximum likelihood technique was employed to estimate the concentration of GNP, which served as a targeted intravenous contrast material intended to detect the degree of the plaque's inflammation. The simulation studies used a single-slice parallel-beam CT geometry with an X-ray beam energy ranging between 50 and 140 kVp. The synthetic phantoms included a small (3 cm diameter) cylinder phantom and a chest (33 × 24 cm²) phantom, where both phantoms contained tissue, calcium, and gold. In the simulation studies, GNP quantification and background (calcium and tissue) suppression tasks were pursued. The X-ray detection sensor was represented by an energy-resolved photon counting detector (e.g., CdZnTe) with adjustable energy bins. Both ideal and more realistic (12% FWHM energy resolution) implementations of the photon counting detector were simulated. The simulations were performed for a CdZnTe detector with a pixel pitch of 0.5-1 mm, which corresponds to performance without significant charge sharing and cross-talk effects. The Rose model was employed to estimate the minimum detectable concentration of GNPs. A figure of merit (FOM) was used to optimize the X-ray beam energy (kVp) to achieve the highest signal-to-noise ratio (SNR) with respect to patient dose. As a result, successful identification of gold and background suppression was demonstrated. The highest FOM was observed at 125 kVp X-ray beam energy. The minimum detectable GNP concentration was determined to be approximately 1.06 μmol/mL (0.21 mg/mL) for an ideal detector and about 2.5 μmol/mL (0.49 mg/mL) for the more realistic (12% FWHM) detector. The studies show the optimal imaging parameters at the lowest patient dose using an energy-resolved photon counting detector to image GNP in an atherosclerotic plaque.
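
    The Rose-criterion step reduces to a one-line calculation: the minimum detectable concentration is the one whose signal over background reaches k times the image noise (k = 5 in the classic Rose model). The calibration numbers below are invented and merely chosen so the output lands on the same scale as the abstract's figures; they are not the paper's calibration.

      def min_detectable_concentration(signal_per_conc, noise_sigma, k=5.0):
          """Rose model: require SNR = signal / noise >= k."""
          return k * noise_sigma / signal_per_conc

      sensitivity = 12.0                      # image units per (mg/mL), hypothetical
      sigma = {"ideal": 0.5, "realistic (12% FWHM)": 1.2}   # assumed noise levels
      for name, s in sigma.items():
          print(f"{name}: c_min = {min_detectable_concentration(sensitivity, s):.2f} mg/mL")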

  19. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.

    PubMed

    Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L

    2013-08-13

    United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size, and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty than those for reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.
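
    The calculation the authors recommend is straightforward, as the sketch below shows: the minimum detectable difference (MDD) between two population estimates is (z_alpha/2 + z_beta) times the standard error of their difference, so changes smaller than the MDD cannot be distinguished from sampling noise. The survey numbers are hypothetical.

      import numpy as np
      from scipy.stats import norm

      def minimum_detectable_difference(se1, se2, alpha=0.05, power=0.8):
          se_diff = np.hypot(se1, se2)   # SE of the difference of two estimates
          return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * se_diff

      # Hypothetical monitoring: two surveys, each estimating 500 +/- 60 animals.
      print(f"MDD = {minimum_detectable_difference(60, 60):.0f} animals")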

  20. Balancing Score Adjusted Targeted Minimum Loss-based Estimation

    PubMed Central

    Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.

    2015-01-01

    Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539

  1. CCOMP: An efficient algorithm for complex roots computation of determinantal equations

    NASA Astrophysics Data System (ADS)

    Zouros, Grigorios P.

    2018-01-01

    In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of the candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points that guarantee the existence of minima, and finally, the computation of minima via bound constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
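
    The core idea is easy to sketch (this is an illustrative toy, not the released CCOMP code): scan a complex rectangle for points where the minimum-modulus eigenvalue of the system matrix A(z) is small, then polish each candidate by a 2-D minimisation over (Re z, Im z); the bound-constrained step of the real algorithm is replaced here by a plain simplex search. The toy matrix has det A(z) = z² + 1, so the roots are z = ±i.

      import numpy as np
      from scipy.optimize import minimize

      def A(z):
          return np.array([[z, -1.0], [1.0, z]])  # det = z**2 + 1

      def min_eig_modulus(xy):
          x, y = xy
          return np.abs(np.linalg.eigvals(A(x + 1j * y))).min()

      # Coarse grid scan for candidate starting points.
      xs = ys = np.linspace(-2, 2, 41)
      candidates = [(x, y) for x in xs for y in ys if min_eig_modulus((x, y)) < 0.2]

      roots = set()
      for x0 in candidates:
          res = minimize(min_eig_modulus, x0, method="Nelder-Mead")
          if res.fun < 1e-3:
              roots.add((round(res.x[0], 2), round(res.x[1], 2)))
      print(roots)  # expect approximately {(0.0, 1.0), (0.0, -1.0)}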

  2. Forensic Entomology in Animal Cruelty Cases.

    PubMed

    Brundage, A; Byrd, J H

    2016-09-01

    Forensic entomology can be useful to the veterinary professional in cases of animal cruelty. A main application of forensic entomology is to determine the minimum postmortem interval by estimating the time of insect colonization, based on knowledge of the rate of development of pioneer colonizers and on insect species succession during decomposition of animal remains. Since insect development is temperature dependent, these estimates require documentation of the environmental conditions, including ambient temperature. It can also aid in the detection and recognition of wounds, as well as estimate the timing of periods of neglect. Knowledge of the geographic distribution of insects that colonize animal remains may suggest that there has been movement or concealment of the carcass or can create associations between a suspect, a victim, and a crime scene. In some instances, it can aid in the detection of drugs or toxins within decomposed or skeletonized remains. During animal cruelty investigations, it may become the responsibility of the veterinary professional to document and collect entomological evidence from live animals or during the necropsy. The applications of forensic entomology are discussed. A protocol is described for documenting and collecting entomological evidence at the scene and during the necropsy, with additional emphasis on recording geographic location, meteorological data, and collection and preservation of insect specimens. © The Author(s) 2016.
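
    The degree-day arithmetic behind a minimum postmortem interval can be sketched in a few lines: insect development is summed as accumulated degree days (ADD) above a species-specific base temperature, working backwards from discovery until the requirement of the observed life stage is met. The base temperature and ADD requirement below are placeholders, not validated values for any species.

      def min_postmortem_interval(daily_mean_temps, base_temp, add_required):
          """Days back from discovery needed to accumulate the required ADD."""
          total = 0.0
          for days_back, t in enumerate(reversed(daily_mean_temps), start=1):
              total += max(t - base_temp, 0.0)   # no development below base temp
              if total >= add_required:
                  return days_back
          return None  # temperature record too short to explain the development

      temps = [24, 23, 25, 26, 22, 21, 24, 25]   # daily means, deg C (hypothetical)
      print(min_postmortem_interval(temps, base_temp=10.0, add_required=60.0))  # -> 5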

  3. Application of wavefield imaging to characterize scattering from artificial and impact damage in composite laminate panels

    NASA Astrophysics Data System (ADS)

    Williams, Westin B.; Michaels, Thomas E.; Michaels, Jennifer E.

    2018-04-01

    Composite materials used for aerospace applications are highly susceptible to impacts, which can result in barely visible delaminations. Reliable and fast detection of such damage is needed before structural failures occur. One approach is to use ultrasonic guided waves generated from a sparse array consisting of permanently mounted or embedded transducers for performing structural health monitoring. This array can detect introduction of damage after baseline subtraction, and also provide localization and characterization information via the minimum variance imaging algorithm. Imaging performance can vary considerably depending upon where damage is located with respect to the array; however, prior work has shown that knowledge of expected scattering can improve imaging consistency for artificial damage at various locations. In this study, anisotropic material attenuation and wave speed are estimated as a function of propagation angle using wavefield data recorded along radial lines at multiple angles with respect to an omnidirectional guided wave source. Additionally, full wavefield data are recorded before and after the introduction of artificial and impact damage so that wavefield baseline subtraction may be applied. 3-D filtering techniques are then used to reduce noise and isolate scattered waves. A model for estimating scattering of a circular defect is developed and scattering estimates for both artificial and impact damage are presented and compared.

  4. Spatial regression test for ensuring temperature data quality in southern Spain

    NASA Astrophysics Data System (ADS)

    Estévez, J.; Gavilán, P.; García-Marín, A. P.

    2018-01-01

    Quality assurance of meteorological data is crucial for ensuring the reliability of applications and models that use such data as input variables, especially in the field of environmental sciences. Spatial validation of meteorological data is based on the application of quality control procedures using data from neighbouring stations to assess the validity of data from a candidate station (the station of interest). These kinds of tests, which are referred to in the literature as spatial consistency tests, take data from neighbouring stations in order to estimate the corresponding measurement at the candidate station. These estimations can be made by weighting values according to the distance between the stations or to the coefficient of correlation, among other methods. The test applied in this study relies on statistical decision-making and uses a weighting based on the standard error of the estimate. This paper summarizes the results of the application of this test to maximum, minimum and mean temperature data from the Agroclimatic Information Network of Andalusia (southern Spain). This quality control procedure includes a decision based on a factor f, the fraction of potential outliers for each station across the region. Using GIS techniques, the geographic distribution of the errors detected has been also analysed. Finally, the performance of the test was assessed by evaluating its effectiveness in detecting known errors.
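
    A minimal version of such a spatial consistency test is sketched below: each neighbour predicts the candidate station by linear regression on their shared history, the predictions are combined with weights inversely proportional to the squared standard error of estimate, and observations falling outside the weighted confidence band are flagged. The data, weights, and threshold are illustrative assumptions, not the network's operational settings.

      import numpy as np

      rng = np.random.default_rng(4)
      days = 365
      candidate = 20 + 8 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 1, days)
      neighbours = [candidate + rng.normal(b, 1.2, days) for b in (0.5, -1.0, 2.0)]
      candidate[200] += 12.0                       # inject a gross error to catch

      preds, weights = [], []
      for x in neighbours:
          slope, intercept = np.polyfit(x, candidate, 1)    # regression on history
          fitted = slope * x + intercept
          se = np.sqrt(np.mean((candidate - fitted) ** 2))  # standard error of estimate
          preds.append(fitted)
          weights.append(1.0 / se**2)

      estimate = np.average(preds, axis=0, weights=weights)
      se_combined = 1.0 / np.sqrt(np.sum(weights))
      flagged = np.where(np.abs(candidate - estimate) > 4 * se_combined)[0]
      print(flagged)  # should include day 200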

  5. Observations of the 63 micron forbidden OI emission line in the Orion and Omega Nebulae

    NASA Technical Reports Server (NTRS)

    Melnick, G.; Gull, G. E.; Harwit, M.

    1979-01-01

    Observations of 63-micron neutral oxygen emission from the Orion and Omega Nebulae are reported, carried out from the NASA Lear Jet flying at an altitude of approximately 13.7 km. The best estimate for the ³P₁-³P₂ transition wavelength is shown to be 63.2 microns, and the detected fluxes are found to be extraordinarily high (amounting to approximately 600 solar luminosities in M42 at 0.5 kpc and to about 2900 solar luminosities in the line in M17 at 2 kpc). Attempts are made to estimate the minimum temperature and other parameters of the emitting region in Orion. It is concluded that conditions not too different from those permitted by some current models appear to provide fluxes that agree in order of magnitude with those observed.

  6. Temporal modulation transfer functions in auditory receptor fibres of the locust ( Locusta migratoria L.).

    PubMed

    Prinz, P; Ronacher, B

    2002-08-01

    The temporal resolution of auditory receptors of locusts was investigated by applying noise stimuli with sinusoidal amplitude modulations and by computing temporal modulation transfer functions. These transfer functions showed mostly bandpass characteristics, which are rarely found in other species at the level of receptors. From the upper cut-off frequencies of the modulation transfer functions the minimum integration times were calculated. Minimum integration times showed no significant correlation with the receptor spike rates but depended strongly on the body temperature. At 20 degrees C the average minimum integration time was 1.7 ms, dropping to 0.95 ms at 30 degrees C. The values found in this study correspond well to the range of minimum integration times found in birds and mammals. Gap detection is another standard paradigm for investigating temporal resolution. In locusts and other grasshoppers, application of this paradigm yielded values of the minimum detectable gap widths that are approximately twice as large as the minimum integration times reported here.

  7. Minimum Wages and the Economic Well-Being of Single Mothers

    ERIC Educational Resources Information Center

    Sabia, Joseph J.

    2008-01-01

    Using pooled cross-sectional data from the 1992 to 2005 March Current Population Survey (CPS), this study examines the relationship between minimum wage increases and the economic well-being of single mothers. Estimation results show that minimum wage increases were ineffective at reducing poverty among single mothers. Most working single mothers…

  8. DIF Detection Using Multiple-Group Categorical CFA with Minimum Free Baseline Approach

    ERIC Educational Resources Information Center

    Chang, Yu-Wei; Huang, Wei-Kang; Tsai, Rung-Ching

    2015-01-01

    The aim of this study is to assess the efficiency of using the multiple-group categorical confirmatory factor analysis (MCCFA) and the robust chi-square difference test in differential item functioning (DIF) detection for polytomous items under the minimum free baseline strategy. While testing for DIF items, despite the strong assumption that all…

  9. Minimum number of measurements for evaluating Bertholletia excelsa.

    PubMed

    Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E

    2017-09-27

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
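
    The ANOVA route to a repeatability coefficient, and the step from r to a minimum number of measurements, can be sketched as below. The inversion assumes the usual Spearman-Brown-type relation for the reliability of a genotype mean and takes "accuracy" as the square root of that reliability, one common convention; the data are simulated, not the Brazil nut measurements.

      import numpy as np

      rng = np.random.default_rng(5)
      g, m = 75, 5                                  # genotypes, annual measurements
      genotype_effect = rng.normal(0, 3.0, g)
      y = genotype_effect[:, None] + rng.normal(0, 2.0, (g, m))

      ms_g = m * y.mean(axis=1).var(ddof=1)         # genotype mean square
      ms_e = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (m - 1))
      r = (ms_g - ms_e) / (ms_g + (m - 1) * ms_e)   # repeatability coefficient

      def n_measurements(r, target_accuracy=0.85):
          R = target_accuracy ** 2                  # reliability = accuracy squared
          return R * (1 - r) / ((1 - R) * r)

      print(f"r = {r:.2f}, measurements needed = {n_measurements(r):.1f}")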

  10. Predicting the performance of local seismic networks using Matlab and Google Earth.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chael, Eric Paul

    2009-11-01

    We have used Matlab and Google Earth to construct a prototype application for modeling the performance of local seismic networks for monitoring small, contained explosions. Published equations based on refraction experiments provide estimates of peak ground velocities as a function of event distance and charge weight. Matlab routines implement these relations to calculate the amplitudes across a network of stations from sources distributed over a geographic grid. The amplitudes are then compared to ambient noise levels at the stations, and scaled to determine the smallest yield that could be detected at each source location by a specified minimum number of stations. We use Google Earth as the primary user interface, both for positioning the stations of a hypothetical local network, and for displaying the resulting detection threshold contours.
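
    A Python stand-in for the Matlab routines described above is sketched below: a published-style amplitude-distance-yield relation predicts peak ground velocity at each station, and a source location is "covered" at the smallest charge weight seen above the noise floor by at least k stations. The scaling-law coefficients, station layout, and noise levels are invented, not the published values.

      import numpy as np

      def peak_velocity(yield_kg, dist_km, a=1.0, b=0.8, c=1.6):
          """Toy scaling law v = a * W**b / r**c (coefficients hypothetical)."""
          return a * yield_kg**b / np.maximum(dist_km, 0.1) ** c

      def min_detectable_yield(src, stations, noise, k_min=3):
          dists = np.hypot(*(stations - src).T)    # km, flat geometry
          for w in np.logspace(-1, 4, 200):        # 0.1 kg .. 10 t search grid
              if (peak_velocity(w, dists) > noise).sum() >= k_min:
                  return w
          return np.inf

      stations = np.array([[0, 0], [30, 5], [10, 40], [-25, 20], [5, -35]], float)
      noise = np.full(len(stations), 1e-2)         # per-station noise floor
      for src in ([5.0, 5.0], [80.0, 80.0]):
          print(src, round(min_detectable_yield(np.array(src), stations, noise), 2))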

  11. A comparative study of optimum and suboptimum direct-detection laser ranging receivers

    NASA Technical Reports Server (NTRS)

    Abshire, J. B.

    1978-01-01

    A summary of previously proposed receiver strategies for direct-detection laser ranging receivers is presented. Computer simulations are used to compare performance of candidate implementation strategies in the 1- to 100-photoelectron region. Under the condition of no background radiation, the maximum-likelihood and minimum mean-square error estimators were found to give the same performance for both bell-shaped and rectangular optical-pulse shapes. For signal energies greater than 100 photoelectrons, the root-mean-square range error is shown to decrease as Q to the -1/2 power for bell-shaped pulses and Q to the -1 power for rectangular pulses, where Q represents the average pulse energy. Of several receiver implementations presented, the matched-filter peak detector was found to be preferable. A similar configuration, using a constant-fraction discriminator, exhibited a signal-level dependent time bias.

  12. How Dusty Is Alpha Centauri? Excess or Non-excess over the Infrared Photospheres of Main-sequence Stars

    NASA Technical Reports Server (NTRS)

    Wiegert, J.; Liseau, R.; Thebault, P.; Olofsson, G.; Mora, A.; Bryden, G.; Marshall, J. P.; Eiroa, C.; Montesinos, B.; Ardila, D.

    2014-01-01

    Context. Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby, solar-type binary α Centauri have metallicities that are higher than solar, which is thought to promote giant planet formation. Aims. We aim to determine the level of emission from debris around the stars in the α Cen system. This requires knowledge of their photospheres. Having already detected the temperature minimum, Tmin, of α Cen A at far-infrared wavelengths, we here attempt to do the same for the more active companion α Cen B. Using the α Cen stars as templates, we study the possible effects that Tmin may have on the detectability of unresolved dust discs around other stars. Methods. We used Herschel-PACS, Herschel-SPIRE, and APEX-LABOCA photometry to determine the stellar spectral energy distributions in the far infrared and submillimetre. In addition, we used APEX-SHeFI observations for spectral line mapping to study the complex background around α Cen seen in the photometric images. Models of stellar atmospheres and of particulate discs, based on particle simulations and in conjunction with radiative transfer calculations, were used to estimate the amount of debris around these stars. Results. For solar-type stars more distant than α Cen, a fractional dust luminosity f_d = L_dust/L_star of ∼2 × 10^-7 could account for SEDs that do not exhibit the Tmin effect. This is comparable to estimates of f_d for the Edgeworth-Kuiper belt of the solar system. In contrast to the far infrared, slight excesses at the 2.5σ level are observed at 24 μm for both α Cen A and B, which, if interpreted as due to zodiacal-type dust emission, would correspond to f_d ∼ (1-3) × 10^-5, i.e. some 10^2 times that of the local zodiacal cloud. Assuming simple power-law size distributions of the dust grains, dynamical disc modelling leads to rough mass estimates of the putative Zodi belts around the α Cen stars, viz. ≲4 × 10^-6 M⊕ of 4 to 1000 μm size grains, distributed according to n(a) ∝ a^-3.5. Similarly, for filled-in Tmin emission, corresponding Edgeworth-Kuiper belts could account for ∼10^-3 M⊕ of dust. Conclusions. Our far-infrared observations lead to estimates of upper limits to the amount of circumstellar dust around the stars α Cen A and B. Light scattered and/or thermally emitted by exo-Zodi discs will have profound implications for future spectroscopic missions designed to search for biomarkers in the atmospheres of Earth-like planets. The far-infrared spectral energy distribution of α Cen B is marginally consistent with the presence of a minimum temperature region in the upper atmosphere of the star. We also show that an α Cen A-like temperature minimum may result in an erroneous apprehension about the presence of dust around other, more distant stars.

  13. New approaches to removing cloud shadows and evaluating the 380 nm surface reflectance for improved aerosol optical thickness retrievals from the GOSAT/TANSO-Cloud and Aerosol Imager

    NASA Astrophysics Data System (ADS)

    Fukuda, Satoru; Nakajima, Teruyuki; Takenaka, Hideaki; Higurashi, Akiko; Kikuchi, Nobuyuki; Nakajima, Takashi Y.; Ishida, Haruma

    2013-12-01

    A satellite aerosol retrieval algorithm was developed to utilize a near-ultraviolet band of the Greenhouse gases Observing SATellite/Thermal And Near infrared Sensor for carbon Observation (GOSAT/TANSO)-Cloud and Aerosol Imager (CAI). At near-ultraviolet wavelengths, the surface reflectance over land is smaller than that at visible wavelengths. Therefore, it is thought possible to reduce retrieval error by using the near-ultraviolet spectral region. In the present study, we first developed a cloud shadow detection algorithm that uses the first and second minimum reflectances at 380 nm and 680 nm, based on the difference in Rayleigh scattering contribution for these two bands. Then, we developed a new surface reflectance correction algorithm, the modified Kaufman method, which uses minimum reflectance data at 680 nm and the NDVI to estimate the surface reflectance at 380 nm. This algorithm was found to be particularly effective at reducing the aerosol effect remaining in the 380 nm minimum reflectance; this effect has previously proven difficult to remove owing to the infrequent sampling rate associated with the three-day recursion period of GOSAT and the narrow CAI swath of 1000 km. Finally, we applied these two algorithms to retrieve aerosol optical thicknesses over a land area. Our results exhibited better agreement with sun-sky radiometer observations than results obtained using a simple surface reflectance correction technique based on minimum radiances.

  14. The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.

    PubMed

    Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar

    2018-03-01

    This study aims to report the minimum test battery needed to screen for non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened, we aim to identify the most useful test battery for situations in which there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were estimated from the Binocular Vision Anomalies and Normative Data (BAND) study, after which cut-off estimates were derived and receiver operating characteristic (ROC) curves were plotted to identify the minimum test battery. In the ROC phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with the minimum test battery were found. ROC analyses revealed that near point of convergence with penlight and red filter (> 7.5 cm), monocular accommodative facility (< 10 cycles per minute), and the difference between near and distance phoria (> 1.25 prism dioptres) were significant factors, with cut-off values giving the best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (two) years, with 121 males and 184 females. Using the minimum battery of tests obtained through the ROC analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity). The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near phoria, and monocular accommodative facility yields good sensitivity and specificity for the diagnosis of NSBVAs in a community set-up. © 2017 Optometry Australia.

  15. Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images

    PubMed Central

    Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.

    2014-01-01

    Segmentation-free direct methods are quite efficient for automated nuclei extraction from high dimensional images. A few such methods do exist, but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at positions of initial centroids to estimate all nuclei diameters. This procedure continues for processing subsequent images in the sequence. This mechanism thus ensures proper enhancement by automated estimation of major parameters, which brings robustness and safeguards the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended for nuclei volume segmentation. The same optimization technique is applied to final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image-sequences (each having 271 3D sequential images) corresponding to five different mouse embryos show promising performances of our methods in terms of nuclear detection, segmentation, and tracking. A detailed analysis with a sub-sequence of 101 3D images from an embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over the previous methods, which used inappropriately large parameter values. Results also confirm that the proposed method and its variants achieve high detection accuracies (∼98% mean F-measure) irrespective of the large variations of filter parameters and noise levels. PMID:25020042

  16. An estimate of the number of tropical tree species

    PubMed Central

    Slik, J. W. Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F.; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L.; Bellingham, Peter J.; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q.; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L. M.; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K.; Chazdon, Robin L.; Clark, Connie; Clark, David B.; Clark, Deborah A.; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S.; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J.; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A. O.; Eisenlohr, Pedro V.; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J.; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T.; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M.; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A.; Joly, Carlos A.; de Jong, Bernardus H. J.; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L.; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F.; Lawes, Michael J.; do Amaral, Ieda Leao; Letcher, Susan G.; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H.; Meilby, Henrik; Melo, Felipe P. L.; Metcalfe, Daniel J.; Medjibe, Vincent P.; Metzger, Jean Paul; Millet, Jerome; Mohandass, D.; Montero, Juan C.; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T. F.; Pitman, Nigel C. A.; Poorter, Lourens; Poulsen, Axel D.; Poulsen, John; Powers, Jennifer; Prasad, Rama C.; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A.; Santos, Fernanda; Sarker, Swapan K.; Satdichanh, Manichanh; Schmitt, Christine B.; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S.; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I.-Fang; Sunderland, Terry; Suresh, H. S.; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W.; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L. C. H.; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A.; Webb, Campbell O.; Whitfeld, Timothy; Wich, Serge A.; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C. Yves; Yap, Sandra L.; Yoneda, Tsuyoshi; Zahawi, Rakan A.; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L.; Garcia Luize, Bruno; Venticinque, Eduardo M.

    2015-01-01

    The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher’s alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼40,000 and ∼53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼19,000–25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼4,500–6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa. PMID:26034279

  17. A physical mechanism for the prediction of the sunspot number during solar cycle 21

    NASA Technical Reports Server (NTRS)

    Schatten, K. H.; Scherrer, P. H.; Svalgaard, L.; Wilcox, J. M.

    1978-01-01

    On physical grounds it is suggested that the sun's polar field strength near a solar minimum is closely related to the following cycle's solar activity. Four methods of estimating the sun's polar magnetic field strength near solar minimum are employed to provide an estimate of cycle 21's yearly mean sunspot number at solar maximum of 140 plus or minus 20. This estimate is considered to be a first order attempt to predict the cycle's activity using one parameter of physical importance.

  18. Stormwater plume detection by MODIS imagery in the southern California coastal ocean

    USGS Publications Warehouse

    Nezlin, N.P.; DiGiacomo, P.M.; Diehl, D.W.; Jones, B.H.; Johnson, S.C.; Mengel, M.J.; Reifel, K.M.; Warrick, J.A.; Wang, M.

    2008-01-01

    Stormwater plumes in the southern California coastal ocean were detected by MODIS-Aqua satellite imagery and compared to ship-based data on surface salinity and fecal indicator bacterial (FIB) counts collected during the Bight'03 Regional Water Quality Program surveys in February-March of 2004 and 2005. MODIS imagery was processed using a combined near-infrared/shortwave-infrared (NIR-SWIR) atmospheric correction method, which substantially improved normalized water-leaving radiation (nLw) optical spectra in coastal waters with high turbidity. Plumes were detected using a minimum-distance supervised classification method based on nLw spectra averaged within the training areas, defined as circular zones of 1.5-5.0-km radii around field stations with a surface salinity of S < 32.0 ('plume') and S > 33.0 ('ocean'). The plume optical signatures (i.e., the nLw differences between 'plume' and 'ocean') were most evident during the first 2 days after the rainstorms. To assess the accuracy of plume detection, stations were classified into 'plume' and 'ocean' using two criteria: (1) 'plume' included the stations with salinity below a certain threshold estimated from the maximum accuracy of plume detection; and (2) FIB counts in 'plume' exceeded the California State Water Board standards. The salinity threshold between 'plume' and 'ocean' was estimated as 32.2. The total accuracy of plume detection in terms of surface salinity was not high (68% on average), seemingly because of imperfect correlation between plume salinity and ocean color. The accuracy of plume detection in terms of FIB exceedances was even lower (64% on average), resulting from low correlation between ocean color and bacterial contamination. Nevertheless, satellite imagery was shown to be a useful tool for the estimation of the extent of potentially polluted plumes, which was hardly achievable by direct sampling methods (in particular, because the grids of ship-based stations covered only small parts of the plumes detected via synoptic MODIS imagery). In most southern California coastal areas, the zones of bacterial contamination were much smaller than the areas of turbid plumes; an exception was the plume of the Tijuana River, where the zone of bacterial contamination was comparable with the zone of plume detected by ocean color. © 2008 Elsevier Ltd.
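
    Minimum-distance supervised classification of nLw spectra amounts to assigning each pixel to the class whose mean training spectrum is nearest in Euclidean distance, as the sketch below shows. The four-band spectra are synthetic stand-ins for MODIS nLw bands, not values from the study.

      import numpy as np

      def minimum_distance_classify(pixels, class_means):
          """pixels: (n, bands); class_means: dict name -> (bands,) mean spectrum."""
          names = list(class_means)
          means = np.stack([class_means[n] for n in names])          # (k, bands)
          d = np.linalg.norm(pixels[:, None, :] - means[None], axis=2)
          return np.array(names)[d.argmin(axis=1)]

      rng = np.random.default_rng(6)
      plume_mean = np.array([2.0, 1.8, 1.5, 1.2])   # turbid water: brighter nLw (made up)
      ocean_mean = np.array([1.0, 0.8, 0.5, 0.3])
      pixels = np.vstack([plume_mean + rng.normal(0, 0.2, (50, 4)),
                          ocean_mean + rng.normal(0, 0.2, (50, 4))])
      labels = minimum_distance_classify(pixels, {"plume": plume_mean,
                                                  "ocean": ocean_mean})
      print((labels[:50] == "plume").mean(), (labels[50:] == "ocean").mean())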

  19. Stormwater plume detection by MODIS imagery in the southern California coastal ocean

    NASA Astrophysics Data System (ADS)

    Nezlin, Nikolay P.; DiGiacomo, Paul M.; Diehl, Dario W.; Jones, Burton H.; Johnson, Scott C.; Mengel, Michael J.; Reifel, Kristen M.; Warrick, Jonathan A.; Wang, Menghua

    2008-10-01

    Stormwater plumes in the southern California coastal ocean were detected by MODIS-Aqua satellite imagery and compared to ship-based data on surface salinity and fecal indicator bacterial (FIB) counts collected during the Bight'03 Regional Water Quality Program surveys in February-March of 2004 and 2005. MODIS imagery was processed using a combined near-infrared/shortwave-infrared (NIR-SWIR) atmospheric correction method, which substantially improved normalized water-leaving radiation (nLw) optical spectra in coastal waters with high turbidity. Plumes were detected using a minimum-distance supervised classification method based on nLw spectra averaged within the training areas, defined as circular zones of 1.5-5.0-km radii around field stations with a surface salinity of S < 32.0 ("plume") and S > 33.0 ("ocean"). The plume optical signatures (i.e., the nLw differences between "plume" and "ocean") were most evident during the first 2 days after the rainstorms. To assess the accuracy of plume detection, stations were classified into "plume" and "ocean" using two criteria: (1) "plume" included the stations with salinity below a certain threshold estimated from the maximum accuracy of plume detection; and (2) FIB counts in "plume" exceeded the California State Water Board standards. The salinity threshold between "plume" and "ocean" was estimated as 32.2. The total accuracy of plume detection in terms of surface salinity was not high (68% on average), seemingly because of imperfect correlation between plume salinity and ocean color. The accuracy of plume detection in terms of FIB exceedances was even lower (64% on average), resulting from low correlation between ocean color and bacterial contamination. Nevertheless, satellite imagery was shown to be a useful tool for the estimation of the extent of potentially polluted plumes, which was hardly achievable by direct sampling methods (in particular, because the grids of ship-based stations covered only small parts of the plumes detected via synoptic MODIS imagery). In most southern California coastal areas, the zones of bacterial contamination were much smaller than the areas of turbid plumes; an exception was the plume of the Tijuana River, where the zone of bacterial contamination was comparable with the zone of plume detected by ocean color.

  20. Experimental demonstration of all-optical weak magnetic field detection using beam-deflection of single-mode fiber coated with cobalt-doped nickel ferrite nanoparticles.

    PubMed

    Pradhan, Somarpita; Chaudhuri, Partha Roy

    2015-07-10

    We experimentally demonstrate a single-mode optical-fiber beam-deflection configuration for weak magnetic-field detection using an optimized (low coercive-field) composition of cobalt-doped nickel ferrite nanoparticles. Devising a fiber double-slit type experiment, we measure the surrounding magnetic field by precisely measuring interference fringes, yielding a minimum detectable field of ∼100 mT, and we obtain magnetization data for the sample that agree well with SQUID measurements. To improve sensitivity, we incorporate an etched single-mode fiber in the double-slit arrangement and record a minimum detectable field of ∼30 mT. To improve further, we redefine the experiment as modulating fiber-to-fiber light transmission and demonstrate a minimum detectable field of 2.0 mT. The device will be uniquely suited for electrical or otherwise hazardous environments.

  1. Small Negative Cloud-to-Ground Lightning Reports at the KSC-ER

    NASA Technical Reports Server (NTRS)

    Wilson, Jennifer G.; Cummins, Kenneth L.; Krider, E. Philip

    2009-01-01

    The NASA Kennedy Space Center (KSC) and Air Force Eastern Range (ER) use data from two cloud-to-ground (CG) lightning detection networks, the CGLSS and the NLDN, and a volumetric lightning mapping array, LDAR, to monitor and characterize lightning that is potentially hazardous to ground or launch operations. Data obtained from these systems during June-August 2006 have been examined to check the classification of small, negative CGLSS reports that have an estimated peak current, |Ip|, less than 7 kA, and to determine the smallest values of |Ip| that are produced by first strokes, by subsequent strokes that create a new ground contact (NGC), and by subsequent strokes that remain in a pre-existing channel (PEC). The results show that within 20 km of the KSC-ER, 21% of the low-amplitude negative CGLSS reports were produced by first strokes, with a minimum |Ip| of -2.9 kA; 31% were by NGCs, with a minimum |Ip| of -2.0 kA; and 14% were by PECs, with a minimum |Ip| of -2.2 kA. The remaining 34% were produced by cloud pulses or lightning events that we were not able to classify.

  2. Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.

    2013-08-01

    In this paper, we present an extrapolation approach, which uses a minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. The minimum variance method (MVM) is a robust, high-resolution spectrum estimator. Based on the theory of SAR imaging, we analyze the signal model of SAR imagery and show that it is amenable to data extrapolation methods for improving the resolution of SAR images. The method is used to extrapolate the effective bandwidth in the phase-history domain, and better results are obtained than with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method on both simulated data and actual measured data.
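
    At the heart of this approach is the minimum variance (Capon) spectral estimator, P(f) = 1 / (a(f)^H R^(-1) a(f)), where a(f) is a steering vector and R a covariance matrix estimated from the data. The Python sketch below shows the idea on a toy one-dimensional signal; the snapshot-based covariance estimate and the regularization constant are assumptions, not details taken from the paper.

      import numpy as np

      def mvm_spectrum(x, order, freqs):
          # Minimum variance (Capon) spectrum: P(f) = 1 / (a^H R^-1 a),
          # with R an order-by-order covariance matrix estimated from x.
          n = len(x)
          snaps = np.array([x[i:i + order] for i in range(n - order + 1)])
          R = snaps.conj().T @ snaps / snaps.shape[0]
          Rinv = np.linalg.inv(R + 1e-6 * np.eye(order))  # light regularization
          k = np.arange(order)
          P = []
          for f in freqs:
              a = np.exp(2j * np.pi * f * k)  # steering vector at frequency f
              P.append(1.0 / np.real(a.conj() @ Rinv @ a))
          return np.asarray(P)

      # Two closely spaced tones in noise (illustrative only):
      t = np.arange(256)
      x = (np.exp(2j * np.pi * 0.20 * t) + np.exp(2j * np.pi * 0.23 * t)
           + 0.1 * np.random.randn(256))
      spec = mvm_spectrum(x, order=32, freqs=np.linspace(0.0, 0.5, 501))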

  3. Hypnosis control based on the minimum concentration of anesthetic drug for maintaining appropriate hypnosis.

    PubMed

    Furutani, Eiko; Nishigaki, Yuki; Kanda, Chiaki; Takeda, Toshihiro; Shirakami, Gotaro

    2013-01-01

    This paper proposes a novel hypnosis control method using the Auditory Evoked Potential Index (aepEX) as a hypnosis index. In order to avoid side effects of an anesthetic drug, it is desirable to reduce the amount administered during surgery. For this purpose many studies of hypnosis control systems have been conducted. Most of them use the Bispectral Index (BIS), another hypnosis index, but BIS depends on the anesthetic drug used and changes nonsmoothly near certain values. The aepEX, in contrast, distinguishes clearly between patient consciousness and unconsciousness and is independent of the anesthetic drug. The control method proposed in this paper consists of two elements: estimating the minimum effect-site concentration for maintaining appropriate hypnosis and adjusting the infusion rate of an anesthetic drug, propofol, using model predictive control. The minimum effect-site concentration is estimated utilizing the properties of aepEX pharmacodynamics. The infusion rate of propofol is adjusted so that the effect-site concentration of propofol is kept near, and always above, the minimum effect-site concentration. Simulation results of hypnosis control using the proposed method show that the minimum concentration can be estimated appropriately and that the proposed control method can maintain hypnosis adequately while reducing the total infusion amount of propofol.

  4. Application of statistical process control to qualitative molecular diagnostic assays.

    PubMed

    O'Brien, Cathal P; Finn, Stephen P

    2014-01-01

    Modern pathology laboratories, and in particular high-throughput laboratories such as clinical chemistry, have developed reliable systems for statistical process control (SPC). Such a system is absent from the majority of molecular laboratories and, where present, is confined to quantitative assays. As the inability to apply SPC to an assay is an obvious disadvantage, this study aimed to solve the problem by using a frequency estimate coupled with a confidence interval calculation to detect deviations from an expected mutation frequency. The results of this study demonstrate the strengths and weaknesses of this approach and highlight minimum sample number requirements. Notably, assays with low mutation frequencies and detection of small deviations from an expected value require greater sample numbers to mitigate a protracted time to detection. Modeled laboratory data were also used to illustrate how this approach might be applied in a routine molecular laboratory. This article is the first to describe the application of SPC to qualitative laboratory data.
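
    The frequency-plus-confidence-interval logic described above can be sketched in a few lines of Python. The Wilson interval used here is one reasonable choice of binomial confidence interval; the paper's exact interval construction is not quoted, so treat this as an assumption.

      import math

      def wilson_interval(k, n, z=1.96):
          # Wilson 95% confidence interval for a binomial proportion k/n.
          p = k / n
          denom = 1 + z * z / n
          centre = (p + z * z / (2 * n)) / denom
          half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
          return centre - half, centre + half

      def out_of_control(k, n, expected):
          # Flag a run whose observed mutation-frequency CI excludes
          # the expected frequency.
          lo, hi = wilson_interval(k, n)
          return not (lo <= expected <= hi)

      # 2 mutations in 40 samples vs an expected frequency of 30%:
      print(out_of_control(2, 40, expected=0.30))  # True -> investigate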

  5. Multi-thresholds for fault isolation in the presence of uncertainties.

    PubMed

    Touati, Youcef; Mellal, Mohamed Arezki; Benazzouz, Djamel

    2016-05-01

    Monitoring of faults is an important task in mechatronics. It involves the detection and isolation of faults, which are performed using residuals. These residuals are numerical values confined to certain intervals bounded by thresholds; a fault is detected when a residual exceeds its threshold. In addition, each considered fault must activate a unique set of residuals in order to be isolated. However, in the presence of uncertainties, false decisions can occur due to the low sensitivity of certain residuals to faults. In this paper, an efficient approach for making fault-isolation decisions in the presence of uncertainties is proposed. Based on the bond graph tool, the approach systematically generates the relations between residuals and faults. The generated relations allow estimation of the minimum detectable and isolable fault values, which are used to calculate the isolation thresholds for each residual. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Estimating Species Richness and Modelling Habitat Preferences of Tropical Forest Mammals from Camera Trap Data

    PubMed Central

    Rovero, Francesco; Martin, Emanuel; Rosa, Melissa; Ahumada, Jorge A.; Spitale, Daniel

    2014-01-01

    Medium-to-large mammals within tropical forests represent a rich and functionally diversified component of this biome; however, they continue to be threatened by hunting and habitat loss. Assessing these communities implies studying species’ richness and composition, and determining a state variable of species abundance in order to infer changes in species distribution and habitat associations. The Tropical Ecology, Assessment and Monitoring (TEAM) network fills a chronic gap in standardized data collection by implementing a systematic monitoring framework of biodiversity, including mammal communities, across several sites. In this study, we used TEAM camera trap data collected in the Udzungwa Mountains of Tanzania, an area of exceptional importance for mammal diversity, to propose an example of a baseline assessment of species’ occupancy. We used 60 camera trap locations and accumulated 1,818 camera days in 2009. Sampling yielded 10,647 images of 26 species of mammals. We estimated that a minimum of 32 species are in fact present, matching available knowledge from other sources. Estimated species richness at camera sites did not vary with a suite of habitat covariates derived from remote sensing; however, detection probability varied among functional guilds, with herbivores being more detectable than other guilds. Species-specific occupancy modelling revealed novel ecological knowledge for the 11 most detected species, highlighting patterns such as ‘montane forest dwellers’, e.g. the endemic Sanje mangabey (Cercocebus sanjei), and ‘lowland forest dwellers’, e.g. suni antelope (Neotragus moschatus). Our results show that the analysis of camera trap data that accounts for imperfect detection can provide a solid ecological assessment of mammal communities that can be systematically replicated across sites. PMID:25054806

  7. Long-term trends in daily temperature extremes in Iraq

    NASA Astrophysics Data System (ADS)

    Salman, Saleem A.; Shahid, Shamsuddin; Ismail, Tarmizi; Chung, Eun-Sung; Al-Abadi, Alaa M.

    2017-12-01

    The existence of long-term persistence (LTP) in hydro-climatic time series can lead to considerable change in the significance of trends. Therefore, past findings of climatic trend studies that did not consider LTP have become disputable. A study was conducted to assess the trends in temperature and temperature extremes in Iraq in recent years (1965-2015) using both the ordinary Mann-Kendall (MK) test and the modified Mann-Kendall (m-MK) test, which can differentiate multi-decadal oscillatory variations from secular trends. Trends in annual and seasonal minimum and maximum temperatures, diurnal temperature range (DTR), and 14 temperature-related extremes were assessed. The MK test detected significant increases in minimum and maximum temperature at all stations, whereas the m-MK test detected them at 86% and 80% of the stations, respectively. The temperature in Iraq is increasing 2 to 7 times faster than the global mean temperature. The minimum temperature is increasing more (0.48-1.17 °C/decade) than the maximum temperature (0.25-1.01 °C/decade). The temperature rise is higher in northern Iraq and in summer. Hot extremes, particularly warm nights, are increasing all over Iraq at a rate of 2.92-10.69 days/decade. On the other hand, the number of cold days is decreasing at some stations at a rate of -2.65 to -8.40 days/decade. The use of the m-MK test along with the MK test confirms the significant increase in temperature and some of the temperature extremes in Iraq. This study suggests that trends in many temperature extremes in the region estimated in previous studies using the MK test may be due to natural climate variability, which emphasizes the need to validate trends by considering LTP in time series.
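
    For reference, the ordinary Mann-Kendall test reduces to a simple rank statistic; a minimal Python sketch follows (no tie correction and no LTP-adjusted variance, so this is the plain MK test, not the modified m-MK variant used above).

      import math
      import itertools

      def mann_kendall(x):
          # Ordinary Mann-Kendall trend test: S statistic and
          # normal-approximation Z score (ties ignored).
          n = len(x)
          s = sum((x[j] > x[i]) - (x[j] < x[i])
                  for i, j in itertools.combinations(range(n), 2))
          var_s = n * (n - 1) * (2 * n + 5) / 18.0
          if s > 0:
              z = (s - 1) / math.sqrt(var_s)
          elif s < 0:
              z = (s + 1) / math.sqrt(var_s)
          else:
              z = 0.0
          return s, z

      s, z = mann_kendall([10.2, 10.5, 10.4, 10.9, 11.1, 11.0, 11.4])
      print(s, round(z, 2))  # |z| > 1.96 indicates a trend at the 5% level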

  8. Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.

    PubMed

    Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L

    2017-05-31

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
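
    For context, a relation commonly used in repeatability analysis (stated here as an assumption, since the paper's exact formulation is not quoted) gives the minimum number of measurements m needed to reach a target coefficient of determination R^2 when the repeatability coefficient is r:

      m = \frac{R^{2}\,(1 - r)}{\bigl(1 - R^{2}\bigr)\, r}

    For example, with r = 0.55 and a target of R^2 = 0.90, m = (0.90 x 0.45) / (0.10 x 0.55) ≈ 7.4, so eight measurements would suffice.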

  9. Infrared upconversion for astronomical applications. [laser applications to astronomical spectroscopy of infrared spectra

    NASA Technical Reports Server (NTRS)

    Abbas, M. M.; Kostiuk, T.; Ogilvie, K. W.

    1975-01-01

    The performance of an upconversion system is examined for observation of astronomical sources in the low to middle infrared spectral range. Theoretical values for the performance parameters of an upconversion system for astronomical observations are evaluated in view of the conversion efficiencies, spectral resolution, field of view, minimum detectable source brightness and source flux. Experimental results of blackbody measurements and molecular absorption spectrum measurements using a lithium niobate upconverter with an argon-ion laser as the pump are presented. Estimates of the expected optimum sensitivity of an upconversion device which may be built with the presently available components are given.

  10. Study of wear between piston ring and cylinder housing of an internal combustion engine by thin layer activation technique

    NASA Astrophysics Data System (ADS)

    Chowdhury, D. P.; Chaudhuri, Jayanta; Raju, V. S.; Das, S. K.; Bhattacharjee, B. B.; Gangadharan, S.

    1989-07-01

    The wear analysis of a compression ring and cylinder housing of an internal combustion engine by thin layer activation (TLA) with 40 MeV α-particles from the Variable Energy Cyclotron at Calcutta is reported. Calibration curves were obtained for Fe and Ni using the stacked foil activation technique to determine the absolute wear in these machine parts. It has been possible to determine the pattern of wear at points along the surface of the machine components. The minimum detectable depth in this wear study has been estimated at 0.11 ± 0.04 μm.

  11. Minimum Wage Increases and the Working Poor. Changing Domestic Priorities Discussion Paper.

    ERIC Educational Resources Information Center

    Mincy, Ronald B.

    Most economists agree that the difficulties of targeting minimum wage increases to low-income families make such increases ineffective tools for reducing poverty. This paper provides estimates of the impact of minimum wage increases on the poverty gap and the number of poor families, and shows which factors are barriers to decreasing poverty…

  12. Minimum Wages and School Enrollment of Teenagers: A Look at the 1990's.

    ERIC Educational Resources Information Center

    Chaplin, Duncan D.; Turner, Mark D.; Pape, Andreas D.

    2003-01-01

    Estimates the effects of higher minimum wages on school enrollment using the Common Core of Data. Controlling for local labor market conditions and state and year fixed effects, finds some evidence that higher minimum wages reduce teen school enrollment in states where students drop out before age 18. (23 references) (Author/PKP)

  13. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency at the millimetre level have been inferred from 163 interferograms. The method distinguishes itself by reliability and rigorous geometric modelling of the orbital error signal but does not consider interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.

  14. Retrospective determination of the contamination in the HML's counting chambers.

    PubMed

    Kramer, Gary H; Hauck, Barry; Capello, Kevin; Phan, Quoc

    2008-09-01

    The original documentation surrounding the purchase of the Human Monitoring Laboratory's (HML) counting chambers clearly showed that the steel contained low levels of radioactivity, presumably as a result of A-bomb fallout or perhaps of the inadvertent mixing of radioactive sources with scrap steel. Monte Carlo simulations have been combined with experimental measurements to estimate the level of contamination in the steel of the HML's whole body counting chamber. A 24-h empty-chamber background count showed the presence of 137Cs and 60Co. The estimated activity of 137Cs in the 51 tons of steel was 2.7 kBq in 2007 (51.3 microBq g(-1) steel), which would have been 8 kBq at the time of manufacture. The 60Co found in the background spectrum is postulated to be contained in the bed-frame. The estimated amount in 2007 was 5 Bq, and its origin is likely contaminated scrap metal entering the steel production cycle sometime in the past. The estimated activities are 10 to 25 times higher than the estimated minimum detectable activity for this measurement. These amounts have no impact on the usefulness of the whole body counter.

  15. New reporting procedures based on long-term method detection levels and some considerations for interpretations of water-quality data provided by the U.S. Geological Survey National Water Quality Laboratory

    USGS Publications Warehouse

    Childress, Carolyn J. Oblinger; Foreman, William T.; Connor, Brooke F.; Maloney, Thomas J.

    1999-01-01

    This report describes the U.S. Geological Survey National Water Quality Laboratory's approach for determining long-term method detection levels and establishing reporting levels, details relevant new reporting conventions, and provides preliminary guidance on interpreting data reported with the new conventions. At the long-term method detection level concentration, the risk of a false positive detection (analyte reported present at the long-term method detection level when not in sample) is no more than 1 percent. However, at the long-term method detection level, the risk of a false negative occurrence (analyte reported not present when present at the long-term method detection level concentration) is up to 50 percent. Because this false negative rate is too high for use as a default 'less than' reporting level, a more reliable laboratory reporting level is set at twice the determined long-term method detection level. For all methods, concentrations measured between the laboratory reporting level and the long-term method detection level will be reported as estimated concentrations. Non-detections will be censored to the laboratory reporting level. Adoption of the new reporting conventions requires a full understanding of how low-concentration data can be used and interpreted and places responsibility for using and presenting final data with the user rather than with the laboratory. Users must consider that (1) new laboratory reporting levels may differ from previously established minimum reporting levels, (2) long-term method detection levels and laboratory reporting levels may change over time, and (3) estimated concentrations are less certain than concentrations reported above the laboratory reporting level. The availability of uncensored but qualified low-concentration data for interpretation and statistical analysis is a substantial benefit to the user. A decision to censor data after they are reported from the laboratory may still be made by the user, if merited, on the basis of the intended use of the data.
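
    The reporting convention described above is mechanical enough to sketch in code. In this Python sketch the 'E' prefix marking estimated concentrations is an assumed label; the LRL = 2 x LT-MDL rule and the censoring of non-detections to '<LRL' follow the text.

      def report_value(conc, lt_mdl):
          # LRL = 2 * LT-MDL; values in [LT-MDL, LRL) are reported as
          # estimated; values below LT-MDL are censored to '<LRL'.
          lrl = 2.0 * lt_mdl
          if conc >= lrl:
              return f"{conc:g}"
          if conc >= lt_mdl:
              return f"E{conc:g}"  # 'E' estimated-value code (assumed label)
          return f"<{lrl:g}"

      for c in (0.02, 0.07, 0.15):             # concentrations in mg/L
          print(report_value(c, lt_mdl=0.05))  # -> <0.1, E0.07, 0.15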

  16. Joint sparsity based heterogeneous data-level fusion for target detection and estimation

    NASA Astrophysics Data System (ADS)

    Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe

    2017-05-01

    Typical surveillance systems employ decision- or feature-level fusion approaches, which are sub-optimal and incur information loss, to integrate heterogeneous sensor data. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.

  17. Ultra-Low Background Measurements Of Decayed Aerosol Filters

    NASA Astrophysics Data System (ADS)

    Miley, H.

    2009-04-01

    To experimentally evaluate the opportunity to apply ultra-low background measurement methods to samples collected, for instance, by the Comprehensive Test Ban Treaty International Monitoring System (IMS), aerosol samples collected on filter media were measured using HPGe spectrometers embodying a range of low-background technology approaches. In this way, realistic estimates can be made of the impact of low-background methodology on the Minimum Detectable Activities obtained in systems such as the IMS. The current measurement requirement of stations in the IMS is 30 microBq per cubic meter of air for 140Ba, or about 10(6) fissions per daily sample. Importantly, this is for a fresh aerosol filter. Decay times varying from 3 days to one week reduce the intrinsic background from radon daughters in the sample. Computational estimates indicate that Minimum Detectable Activities for such decayed filters, measured with underground HPGe detectors in clean shielding materials, are orders of magnitude lower, even when the decay of the isotopes of interest is included.

  18. Improvements of low-level radioxenon detection sensitivity by a state-of-the art coincidence setup.

    PubMed

    Cagniant, A; Le Petit, G; Gross, P; Douysset, G; Richard-Bressand, H; Fontaine, J-P

    2014-05-01

    The ability to quantify isotopic ratios of (135)Xe, (133m)Xe, (133)Xe and (131m)Xe is essential for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In order to improve detection limits, CEA has developed a new on-site setup using photon/electron coincidence (Le Petit et al., 2013, J. Radioanal. Nucl. Chem., DOI: 10.1007/s10697-013-2525-8). Alternatively, the electron detection cell equipped with large silicon chips (PIPS) can be used with an HPGe detector for laboratory analysis purposes. This setup allows the measurement of β/γ coincidences for the detection of (133)Xe and (135)Xe, and K-shell conversion electron (K-CE)/X-ray coincidences for the detection of (131m)Xe, (133m)Xe and (133)Xe as well. A good energy resolution of 11 keV at 130 keV and a low energy threshold of 29 keV for electron detection were obtained. This provides direct discrimination between K-CE from (133)Xe, (133m)Xe and (131m)Xe. The estimated Minimum Detectable Activity (MDA) for (131m)Xe is on the order of 1 mBq over a 4-day measurement. An analysis of an environmental radioxenon sample using this method is shown. © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. The Effect of Minimum Wages on Youth Employment in Canada: A Panel Study.

    ERIC Educational Resources Information Center

    Yuen, Terence

    2003-01-01

    Canadian panel data 1988-90 were used to compare estimates of minimum-wage effects based on a low-wage/high-worker sample and a low-wage-only sample. Minimum-wage effect for the latter is nearly zero. Different results for low-wage subgroups suggest a significant effect for those with longer low-wage histories. (Contains 26 references.) (SK)

  20. Historical space weather monitoring of prolonged aurora activities in Japan and in China

    NASA Astrophysics Data System (ADS)

    Kataoka, Ryuho; Isobe, Hiroaki; Hayakawa, Hisashi; Tamazawa, Harufumi; Kawamura, Akito Davis; Miyahara, Hiroko; Iwahashi, Kiyomi; Yamamoto, Kazuaki; Takei, Masako; Terashima, Tsuneyo; Suzuki, Hidehiko; Fujiwara, Yasunori; Nakamura, Takuji

    2017-02-01

    Great magnetic storms are recorded as aurora sightings in historical documents. The earliest known example of "prolonged" aurora sightings, with aurora persistent for two or more nights within a 7 day interval at low latitudes, in Japan was documented on 21-23 February 1204 in Meigetsuki, when a big sunspot was also recorded in China. We have searched for prolonged events over the 600 year interval since 620 in Japan based on the catalogue of Kanda and over the 700 year interval since 581 in China based on the catalogues of Tamazawa et al. (2017) and Hayakawa et al. (2015). Before the Meigetsuki event, a significant fraction of the 200 possible aurora sightings in Sòng dynasty (960-1279) of China was detected at least twice within a 7 day interval and sometimes recurred with approximately the solar rotation period of 27 days. The majority of prolonged aurora activity events occurred around the maximum phase of solar cycles rather than around the minimum, as estimated from the 14C analysis of tree rings. They were not reported during the Oort Minimum (1010-1050). We hypothesize that the prolonged aurora sightings are associated with great magnetic storms resulting from multiple coronal mass ejections from the same active region. The historical documents therefore provide useful information to support estimation of great magnetic storm frequency, which are often associated with power outages and other societal concerns.

  1. Detection of the agents of human ehrlichioses in ixodid ticks from California.

    PubMed

    Kramer, V L; Randolph, M P; Hui, L T; Irwin, W E; Gutierrez, A G; Vugia, D J

    1999-01-01

    A study was conducted in northern California to estimate the prevalence and distribution in ixodid ticks of the rickettsial agents of human monocytic (HME) and human granulocytic (HGE) ehrlichioses. More than 650 ixodid ticks were collected from 17 sites in six California counties over a 15-month period. Ehrlichia chaffeensis, the causative agent of HME, was detected by a nested polymerase chain reaction (PCR) in Ixodes pacificus (minimum infection rate [MIR] = 13.3%) and Dermacentor variabilis (infection rate = 20.0%) from a municipal park in Santa Cruz County. The HGE agent was detected by nested PCR in I. pacificus adults from a heavily used recreational area in Alameda County (MIR = 4.7%) and a semirural community in Sonoma County (MIR = 6.7%). Evidence of infection with Ehrlichia spp. was not detected in D. occidentalis adults or I. pacificus nymphs. This study represents the first detection of E. chaffeensis in California ticks and the first report of infection in Ixodes spp. The competency of I. pacificus to be coinfected with and to transmit multiple disease agents, including those of human ehrlichioses and Lyme disease, has yet to be determined.

  2. Event-related potential measures of gap detection threshold during natural sleep.

    PubMed

    Muller-Gass, Alexandra; Campbell, Kenneth

    2014-08-01

    The minimum time interval between two stimuli that can be reliably detected is called the gap detection threshold. The present study examines whether an unconscious state, natural sleep, affects the gap detection threshold. Event-related potentials were recorded in 10 young adults while awake and during all-night sleep to provide an objective estimate of this threshold. These subjects were presented with 2, 4, 8 or 16 ms gaps occurring in 1.5 s duration white noise. During wakefulness, a significant N1 was elicited for the 8 and 16 ms gaps. N1 was difficult to observe during stage N2 sleep, even for the longest gap. However, a large P2 was elicited, significant for the 8 and 16 ms gaps. Also, a later, very large N350 was elicited by the 16 ms gap. An N1 and P2 were significant only for the 16 ms gap during REM sleep. ERPs to gaps occurring in noise segments can therefore be successfully elicited during natural sleep. The gap detection threshold is similar in the waking and sleeping states. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.

  3. Reducing uncertainty in dust monitoring to detect aeolian sediment transport responses to land cover change

    NASA Astrophysics Data System (ADS)

    Webb, N.; Chappell, A.; Van Zee, J.; Toledo, D.; Duniway, M.; Billings, B.; Tedela, N.

    2017-12-01

    Anthropogenic land use and land cover change (LULCC) influence global rates of wind erosion and dust emission, yet our understanding of the magnitude of the responses remains poor. Field measurements and monitoring provide essential data to resolve aeolian sediment transport patterns and assess the impacts of human land use and management intensity. Data collected in the field are also required for dust model calibration and testing, as models have become the primary tool for assessing LULCC-dust cycle interactions. However, there is considerable uncertainty in estimates of dust emission due to the spatial variability of sediment transport. Field sampling designs are currently rudimentary and considerable opportunities are available to reduce the uncertainty. Establishing the minimum detectable change is critical for measuring spatial and temporal patterns of sediment transport, detecting potential impacts of LULCC and land management, and for quantifying the uncertainty of dust model estimates. Here, we evaluate the effectiveness of common sampling designs (e.g., simple random sampling, systematic sampling) used to measure and monitor aeolian sediment transport rates. Using data from the US National Wind Erosion Research Network across diverse rangeland and cropland cover types, we demonstrate how only large changes in sediment mass flux (of the order 200% to 800%) can be detected when small sample sizes are used, crude sampling designs are implemented, or when the spatial variation is large. We then show how statistical rigour and the straightforward application of a sampling design can reduce the uncertainty and detect change in sediment transport over time and between land use and land cover types.
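
    The link between sample size, spatial variability, and the minimum detectable change can be illustrated with a standard two-sample power calculation (normal approximation). This is a sketch under assumed values, not the Network's actual design analysis.

      import math

      def min_detectable_change(sigma, n, z_alpha=1.960, z_power=0.842):
          # Smallest mean difference detectable when comparing two groups
          # of n samples each (alpha = 0.05 two-sided, power = 0.80):
          # delta = (z_alpha + z_power) * sigma * sqrt(2 / n)
          return (z_alpha + z_power) * sigma * math.sqrt(2.0 / n)

      # With sigma equal to 150% of the mean flux and n = 5 samplers,
      # only changes of roughly 270% of the mean are detectable:
      print(round(min_detectable_change(sigma=1.5, n=5), 2))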

  4. Investigation of levels in ambient air near sources of Polychlorinated Biphenyls (PCBs) in Kanpur, India, and risk assessment due to inhalation.

    PubMed

    Goel, Anubha; Upadhyay, Kritika; Chakraborty, Mrinmoy

    2016-05-01

    Polychlorinated biphenyls (PCBs) are a class of organic compounds listed as persistent organic pollutants and banned for use under the Stockholm Convention. They were used primarily in transformers and capacitors, paint, flame retardants, plasticizers, and lubricants. PCBs can be emitted from primary and secondary sources into the atmosphere and undergo long-range atmospheric transport, and hence have been detected worldwide. Reported levels in ambient air are generally higher in urban areas. Active sampling of ambient air was conducted in Kanpur, a densely populated and industrialized city in the Indo-Gangetic Plain, for detection of 32 priority PCBs, with the aim of determining gas- and particle-phase concentrations and assessing exposure risk. More than 50% of the PCBs were detected in air. Occurrence in particles was dominated by heavier congeners, and levels in the gas phase were below detection. Levels determined in this study are lower than those in coastal areas of India but on par with other Asian countries, where the majority of sampling sites were urban industrial areas. Human health risk estimates for the air inhalation pathway were made in terms of lifetime average daily dose (LADD) and incremental lifetime cancer risk (ILCR). The study found PCB concentrations below guideline values and low inhalation health risk estimates within acceptable levels, indicating minimal risk to adults from exposure to PCBs in ambient air in Kanpur.
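
    The two risk metrics follow the standard screening equations LADD = C x IR x EF x ED / (BW x AT) and ILCR = LADD x SF. A hedged Python sketch follows; all parameter values, including the cancer slope factor, are illustrative placeholders rather than the study's inputs.

      def ladd_inhalation(c_air, inh_rate=20.0, ef=350.0, ed=30.0,
                          bw=70.0, at_days=25550.0):
          # LADD (mg/kg/day) = C * IR * EF * ED / (BW * AT), with C in
          # mg/m3, IR in m3/day, EF in days/yr, ED in yr, BW in kg,
          # and AT in days (70-yr lifetime).
          return c_air * inh_rate * ef * ed / (bw * at_days)

      def ilcr(ladd, slope_factor):
          # Incremental lifetime cancer risk; slope factor in (mg/kg/day)^-1.
          return ladd * slope_factor

      dose = ladd_inhalation(c_air=1.0e-6)  # 1 ng/m3 of PCBs, illustrative
      print(ilcr(dose, slope_factor=2.0))   # ~2.3e-7, below a 1e-6 benchmark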

  5. PATTERNS OF PERSISTENT GENITAL HUMAN PAPILLOMAVIRUS INFECTION AMONG WOMEN WORLDWIDE: A LITERATURE REVIEW AND META-ANALYSIS

    PubMed Central

    Rositch, Anne F.; Koshiol, Jill; Hudgens, Michael; Razzaghi, Hilda; Backes, Danielle M.; Pimenta, Jeanne M.; Franco, Eduardo L.; Poole, Charles; Smith, Jennifer S.

    2013-01-01

    Persistent high-risk human papillomavirus (HR-HPV) infection is the strongest risk factor for high-grade cervical precancer. We performed a systematic review and meta-analysis of HPV persistence patterns worldwide. Medline and ISI Web of Science were searched through January 1, 2010 for articles estimating HPV persistence or duration of detection. Descriptive and meta-regression techniques were used to summarize variability and the influence of study definitions and characteristics on duration and persistence of cervical HPV infections in women. Among 86 studies providing data on over 100,000 women, 73% defined persistence as HPV positivity at a minimum of two time points. Persistence varied notably across studies and was largely mediated by study region and HPV type, with HPV-16, 31, 33 and 52 being most persistent. Weighted median duration of any-HPV detection was 9.8 months. HR-HPV (9.3 months) persisted longer than low-risk HPV (8.4 months), and HPV-16 (12.4 months) persisted longer than HPV-18 (9.8 months). Among populations of HPV positive women with normal cytology, the median duration of any-HPV detection was 11.5 months and of HR-HPV detection was 10.9 months. In conclusion, we estimated that approximately half of HPV infections persist past 6–12 months. Repeat HPV testing at 12 month intervals could identify women at increased risk of high-grade cervical precancer due to persistent HPV infections. PMID:22961444

  6. Comparison of imaging modalities and source-localization algorithms in locating the induced activity during deep brain stimulation of the STN.

    PubMed

    Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M

    2016-08-01

    One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the most optimal target area for placement of the DBS electrodes has become an area of intensive research. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level, validated using a priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is better suited to mapping DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole approaches (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) approaches (minimum norm and sLORETA). The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions affected by stimulating the STN. The results from the CDR approaches confirmed the capability of sLORETA in detecting the STN compared to the minimum-norm solution. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.
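
    Of the two CDR approaches compared, the minimum-norm solution has the simpler closed form, x = L^T (L L^T + λI)^(-1) y for leadfield L and sensor data y. The following is a minimal numerical sketch (random leadfield, assumed regularization), not the authors' pipeline; the plain minimum-norm estimate is known to favor superficial sources, one reason standardized variants such as sLORETA fare better for deep targets like the STN.

      import numpy as np

      def minimum_norm(L, y, lam=1e-2):
          # Minimum-norm estimate: x = L^T (L L^T + lam * I)^(-1) y
          G = L @ L.T + lam * np.eye(L.shape[0])
          return L.T @ np.linalg.solve(G, y)

      rng = np.random.default_rng(0)
      L = rng.standard_normal((32, 500))    # 32 sensors, 500 candidate sources
      x_true = np.zeros(500)
      x_true[123] = 1.0                     # single active source
      y = L @ x_true + 0.01 * rng.standard_normal(32)
      x_hat = minimum_norm(L, y)
      print(int(np.argmax(np.abs(x_hat))))  # peak should land near index 123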

  7. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2017-02-11

    This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce the dimensionality of the optimization, and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially varying regularization strength (β): the former via an exhaustive search through discrete values and the latter using an alternating optimization in which β was exhaustively optimized locally and interpolated to form a spatially varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  8. Application of a hybrid model to reduce bias and improve precision in population estimates for elk (Cervus elaphus) inhabiting a cold desert ecosystem

    USGS Publications Warehouse

    Schoenecker, Kathryn A.; Lubow, Bruce C.

    2016-01-01

    Accurately estimating the size of wildlife populations is critical to wildlife management and conservation of species. Raw counts or “minimum counts” are still used as a basis for wildlife management decisions. Uncorrected raw counts are not only negatively biased due to failure to account for undetected animals, but also provide no estimate of precision on which to judge the utility of counts. We applied a hybrid population estimation technique that combined sightability modeling, radio collar-based mark-resight, and simultaneous double count (double-observer) modeling to estimate the population size of elk in a high-elevation desert ecosystem. Combining several models maximizes the strengths of each individual model while minimizing their singular weaknesses. We collected data with aerial helicopter surveys of the elk population in the San Luis Valley and adjacent mountains in Colorado, USA in 2005 and 2007. We present estimates from 7 alternative analyses: 3 based on different methods for obtaining a raw count and 4 based on different statistical models to correct for sighting probability bias. The most reliable of these approaches is a hybrid double-observer sightability model (model MH), which uses detection patterns of 2 independent observers in a helicopter plus telemetry-based detections of radio-collared elk groups. Data were fit to customized mark-resight models with individual sighting covariates. Error estimates were obtained by a bootstrapping procedure. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to double-observer modeling. The resulting population estimate corrected for multiple sources of undercount bias that, if left uncorrected, would have underestimated the true population size by as much as 22.9%. Our comparison of these alternative methods shows how the various components of our method contribute to improving the final estimate and why each is necessary.
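
    Two of the hybrid's ingredients are simple enough to state compactly: Chapman's bias-corrected mark-resight estimator and the combined detection probability of two independent observers. The Python sketch below uses invented numbers purely for illustration.

      def chapman_estimate(marked, sighted, marked_resighted):
          # Chapman's bias-corrected mark-resight estimator:
          # N = (M + 1)(C + 1) / (R + 1) - 1
          return (marked + 1) * (sighted + 1) / (marked_resighted + 1) - 1

      def double_observer_p(p1, p2):
          # Probability a group is seen by at least one of two
          # independent observers.
          return 1 - (1 - p1) * (1 - p2)

      print(round(chapman_estimate(40, 300, 25)))  # ~474 elk (toy numbers)
      print(double_observer_p(0.7, 0.6))           # 0.88 combined detection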

  9. Resolution Limits of Nanoimprinted Patterns by Fluorescence Microscopy

    NASA Astrophysics Data System (ADS)

    Kubo, Shoichi; Tomioka, Tatsuya; Nakagawa, Masaru

    2013-06-01

    The authors investigated optical resolution limits to identify minimum distances between convex lines of fluorescent dye-doped nanoimprinted resist patterns by fluorescence microscopy. Fluorescent ultraviolet (UV)-curable resin and thermoplastic resin films were transformed into line-and-space patterns by UV nanoimprinting and thermal nanoimprinting, respectively. Fluorescence immersion observation required an immersion medium immiscible with the resist films, and an ionic liquid, triisobutyl methylphosphonium tosylate, was appropriate for soluble thermoplastic polystyrene patterns. Observation with various numerical aperture (NA) values and two detection wavelength ranges showed that the resolution limits were smaller than the values estimated by the Sparrow criterion. The space width needed to identify line patterns became narrower as the line width increased. A space width of 100 nm was demonstrated to be sufficient to resolve 300-nm-wide lines in the detection wavelength range of 575-625 nm using an objective lens of NA = 1.40.
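
    The Sparrow two-point limit for incoherent imaging is commonly approximated as d ≈ 0.47 λ/NA (the exact prefactor depends on the imaging model, so treat this as an approximation). A quick check with the reported parameters:

      def sparrow_limit(wavelength_nm, na):
          # Sparrow two-point resolution limit, d ~ 0.47 * lambda / NA.
          return 0.47 * wavelength_nm / na

      # Mid-band detection wavelength 600 nm (575-625 nm band), NA = 1.40:
      print(round(sparrow_limit(600.0, 1.40)))  # ~201 nm, well above the
                                                # 100-nm space width resolved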

  10. The study of in vivo quantification of aluminum (Al) in human bone with a compact DD generator-based neutron activation analysis (NAA) system.

    PubMed

    Byrne, Patrick; Mostafaei, Farshad; Liu, Yingzi; Blake, Scott P; Koltick, David; Nie, Linda H

    2016-05-01

    The feasibility and methodology of using a compact DD generator-based neutron activation analysis system to measure aluminum in hand bone have been investigated. Monte Carlo simulations were used to model the moderator, reflector, and shielding assembly and to estimate the radiation dose. A high-purity germanium (HPGe) detector was used to detect the Al gamma-ray signals. The minimum detectable limit (MDL) was found to be 11.13 μg g(-1) dry bone (ppm). An additional HPGe detector would improve the MDL by a factor of 1.4, to 7.9 ppm. The equivalent dose delivered to the irradiated hand was calculated by Monte Carlo to be 11.9 mSv. In vivo bone aluminum measurement with the DD generator was found to be feasible for the general population with an acceptable dose to the subject.
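
    The quoted factor-of-1.4 gain from a second detector is what Poisson counting statistics predict, assuming the extra detector doubles both the detection efficiency ε and the background counts B under a Currie-type detection limit:

      L_D \approx 2.71 + 4.65\sqrt{B}, \qquad \mathrm{MDL} \propto \frac{L_D}{\varepsilon\, t}

    With ε → 2ε and B → 2B (and B large), the MDL scales by \sqrt{2}/2 = 1/\sqrt{2} \approx 0.71, i.e., an improvement factor of \sqrt{2} \approx 1.4.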

  11. Retrieving air humidity, global solar radiation, and reference evapotranspiration from daily temperatures: development and validation of new methods for Mexico. Part I: humidity

    NASA Astrophysics Data System (ADS)

    Lobit, P.; López Pérez, L.; Lhomme, J. P.; Gómez Tagle, A.

    2017-07-01

    This study evaluates the dew point method (Allen et al. 1998) for estimating atmospheric vapor pressure from minimum temperature and proposes an improved model for estimating it from maximum and minimum temperature. Both methods were evaluated on 786 weather stations in Mexico. The dew point method induced positive bias in dry areas and negative bias in coastal areas, and its average root mean square error for all evaluated stations was 0.38 kPa. The improved model assumes a bi-linear relation between the estimated vapor pressure deficit (the difference between saturation vapor pressure at minimum and at average temperature) and the measured vapor pressure deficit. The parameters of this relation were estimated from historical annual median values of relative humidity. This model removed the bias and achieved a root mean square error of 0.31 kPa. When no historical measurements of relative humidity were available, empirical relations were proposed to estimate it from latitude and altitude, with only a slight degradation in model accuracy (RMSE = 0.33 kPa, bias = -0.07 kPa). The applicability of the method to other environments is discussed.
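
    The dew point method cited above (Allen et al. 1998) sets the actual vapor pressure to the saturation vapor pressure at the minimum temperature, using the FAO-56 saturation curve. A minimal Python sketch:

      import math

      def e_sat(t_celsius):
          # FAO-56 saturation vapor pressure (kPa):
          # es(T) = 0.6108 * exp(17.27 * T / (T + 237.3))
          return 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

      def ea_dew_point_method(t_min):
          # Dew point method: assume Tdew ~ Tmin, so ea = es(Tmin).
          return e_sat(t_min)

      print(round(ea_dew_point_method(12.0), 3))  # ~1.403 kPa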

  12. First images of a digital autoradiography system based on a Medipix2 hybrid silicon pixel detector.

    PubMed

    Mettivier, Giovanni; Montesi, Maria Cristina; Russo, Paolo

    2003-06-21

    We present the first images of beta autoradiography obtained with the high-resolution hybrid pixel detector consisting of the Medipix2 single photon counting read-out chip bump-bonded to a 300 microm thick silicon pixel detector. This room temperature system has 256 x 256 square pixels of 55 microm pitch (total sensitive area of 14 x 14 mm2), with a double threshold discriminator and a 13-bit counter in each pixel. It is read out via a dedicated electronic interface and control software, also developed in the framework of the European Medipix2 Collaboration. Digital beta autoradiograms of 14C microscale standard strips (containing separate bands of increasing specific activity in the range 0.0038-32.9 kBq g(-1)) indicate system linearity down to a total background noise of 1.8 x 10(-3) counts mm(-2) s(-1). The minimum detectable activity is estimated to be 0.012 Bq for a 36,000 s exposure and 0.023 Bq for a 10,800 s exposure. The measured minimum detection threshold is less than 1600 electrons (equivalent to about 6 keV in Si). This real-time system for beta autoradiography offers a smaller pixel pitch and a larger sensitive area than the previous Medipix1-based system. Its 14C sensitivity is better than that of microchannel-plate-based systems, which, however, offer higher spatial resolution and a larger sensitive area.

  13. Temporal resolution in children.

    PubMed

    Wightman, F; Allen, P; Dolan, T; Kistler, D; Jamieson, D

    1989-06-01

    The auditory temporal resolving power of young children was measured using an adaptive forced-choice psychophysical paradigm that was disguised as a video game. Twenty children between 3 and 7 years of age and 5 adults were asked to detect the presence of a temporal gap in a burst of half-octave-band noise at band center frequencies of 400 and 2,000 Hz. The minimum detectable gap (gap threshold) was estimated adaptively in 20-trial runs. The mean gap thresholds in the 400-Hz condition were higher for the younger children than for the adults, with the 3-year-old children producing the highest thresholds. Gap thresholds in the 2,000-Hz condition were generally lower than in the 400-Hz condition and showed a similar age effect. All the individual adaptive runs were "adult-like," suggesting that the children were generally attentive to the task during each run. However, the variability of threshold estimates from run to run was substantial, especially in the 3-5-year-old children. Computer simulations suggested that this large within-subjects variability could have resulted from frequent, momentary lapses of attention, which would lead to "guessing" on a substantial portion of the trials.

  14. Blind information-theoretic multiuser detection algorithms for DS-CDMA and WCDMA downlink systems.

    PubMed

    Waheed, Khuram; Salem, Fathi M

    2005-07-01

    Code division multiple access (CDMA) is based on spread-spectrum technology and is a dominant air interface for 2.5G, 3G, and future wireless networks. For the CDMA downlink, the transmitted CDMA signals from the base station (BS) propagate through a noisy multipath fading communication channel before arriving at the receiver of the user equipment/mobile station (UE/MS). Classical CDMA single-user detection (SUD) algorithms implemented in the UE/MS receiver do not provide the required performance for modern high data-rate applications. In contrast, multi-user detection (MUD) approaches require a great deal of a priori information not available to the UE/MS. In this paper, three promising adaptive Riemannian contra-variant (or natural) gradient based user detection approaches, capable of handling highly dynamic wireless environments, are proposed. The first approach, blind multiuser detection (BMUD), is the process of simultaneously estimating multiple symbol sequences associated with all the users in the downlink of a CDMA communication system using only the received wireless data and without any knowledge of the user spreading codes. This approach is applicable to CDMA systems with relatively short spreading codes but becomes impractical for systems using long spreading codes. We also propose two other adaptive approaches, namely, RAKE-blind source recovery (RAKE-BSR) and RAKE-principal component analysis (RAKE-PCA), that fuse an adaptive stage into a standard RAKE receiver. This adaptation results in robust user detection algorithms with performance exceeding that of linear minimum mean squared error (LMMSE) detectors for both direct-sequence CDMA (DS-CDMA) and wideband CDMA (WCDMA) systems under conditions of congestion, imprecise channel estimation, and unmodeled multiple access interference (MAI).

  15. The weak-line T Tauri star V410 Tau. I. A multi-wavelength study of variability

    NASA Astrophysics Data System (ADS)

    Stelzer, B.; Fernández, M.; Costa, V. M.; Gameiro, J. F.; Grankin, K.; Henden, A.; Guenther, E.; Mohanty, S.; Flaccomio, E.; Burwitz, V.; Jayawardhana, R.; Predehl, P.; Durisen, R. H.

    2003-12-01

    We present the results of an intensive coordinated monitoring campaign in the optical and X-ray wavelength ranges of the low-mass, pre-main sequence star V410 Tau carried out in November 2001. The aim of this project was to study the relation between various indicators for magnetic activity that probe different emitting regions and would allow us to obtain clues on the interplay of the different atmospheric layers: optical photometric star spot (rotation) cycle, chromospheric Hα emission, and coronal X-rays. Our optical photometric monitoring has allowed us to measure the time of the minimum of the lightcurve with high precision. Joining the result with previous data we provide a new estimate for the dominant periodicity of V410 Tau (1.871970 +/- 0.000010 d). This updated value removes systematic offsets of the time of minimum observed in data taken over the last decade. The recurrence of the minimum in the optical lightcurve over such a long timescale emphasizes the extraordinary stability of the largest spot. This is confirmed by radial velocity measurements: data from 1993 and 2001 fit almost exactly onto each other when folded with the new period. The combination of the new data from November 2001 with published measurements taken during the last decade allows us to examine long-term changes in the mean light level of the photometry of V410 Tau. A variation on the timescale of 5.4 yr is suggested. Assuming that this behavior is truly cyclic V410 Tau is the first pre-main sequence star on which an activity cycle is detected. Two X-ray pointings were carried out with the Chandra satellite simultaneously with the optical observations, and centered near the maximum and minimum levels of the optical lightcurve. A relation of their different count levels to the rotation period of the dominating spot is not confirmed by a third Chandra observation carried out some months later, during another minimum of the 1.87 d cycle. Similarly we find no indications for a correlation of the Hα emission with the spots' rotational phase. The lack of detected rotational modulation in two important activity diagnostics seems to argue against a direct association of chromospheric and coronal emission with the spot distribution.

  16. From the field: Efficacy of detecting Chronic Wasting Disease via sampling hunter-killed white-tailed deer

    USGS Publications Warehouse

    Diefenbach, D.R.; Rosenberry, C.S.; Boyd, Robert C.

    2004-01-01

    Surveillance programs for Chronic Wasting Disease (CWD) in free-ranging cervids often use a standard of being able to detect 1% prevalence when determining minimum sample sizes. However, 1% prevalence may represent >10,000 infected animals in a population of 1 million, and most wildlife managers would prefer to detect the presence of CWD when far fewer infected animals exist. We wanted to detect the presence of CWD in white-tailed deer (Odocoileus virginianus) in Pennsylvania when the disease was present in only 1 of 21 wildlife management units (WMUs) statewide. We used computer simulation to estimate the probability of detecting CWD based on a sampling design to detect the presence of CWD at 0.1% and 1.0% prevalence (23-76 and 225-762 infected deer, respectively) using tissue samples collected from hunter-killed deer. The probability of detection at 0.1% prevalence was <30% with sample sizes of up to 6,000 deer, and the probability of detection at 1.0% prevalence was 46-72% with statewide sample sizes of 2,000-6,000 deer. We believe that testing of hunter-killed deer is an essential part of any surveillance program for CWD, but our results demonstrate the importance of a multifaceted surveillance approach for CWD detection rather than sole reliance on testing hunter-killed deer.
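
    The familiar design rule behind "detect 1% prevalence" comes from the binomial detection probability; a Python sketch (assuming perfect test sensitivity and well-mixed sampling) also shows why statewide sampling fares poorly when infection is confined to one WMU:

      import math

      def p_detect(n, prevalence):
          # Probability that at least one infected animal appears
          # among n sampled animals.
          return 1.0 - (1.0 - prevalence) ** n

      def n_required(prevalence, p_target=0.95):
          # Sample size for P(detect) >= p_target:
          # n = ln(1 - p_target) / ln(1 - prevalence)
          return math.ceil(math.log(1.0 - p_target)
                           / math.log(1.0 - prevalence))

      print(n_required(0.01))  # ~299 samples at 1% prevalence
      # ~50 infected deer among ~1 million is an effective statewide
      # prevalence of 5e-5, so even 6,000 samples detect infrequently:
      print(round(p_detect(6000, 5e-5), 2))  # ~0.26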

  17. Determination of Minimum Training Sample Size for Microarray-Based Cancer Outcome Prediction–An Empirical Assessment

    PubMed Central

    Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu

    2013-01-01

    The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers, which cannot be reliably estimated from only a small number of training samples. It is therefore of vital importance to determine the minimum number of training samples needed to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into routine clinical applications, the SSNR-based protocol would provide a convenient way to improve classifier reliability in microarray-based cancer outcome prediction. PMID:23861920

  18. Daily mean temperature estimate at the US SURFRAD stations as an average of the maximum and minimum temperatures

    DOE PAGES

    Chylek, Petr; Augustine, John A.; Klett, James D.; ...

    2017-09-30

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures, and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 3, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature; a statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the population parameters of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.
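
    The comparison above is easy to reproduce in outline. Below is a minimal sketch in Python with synthetic data: a hypothetical sinusoidal diurnal cycle plus noise stands in for a SURFRAD-style 1-min record, and the (Tmax + Tmin)/2 estimate and k-observation subsampled means are compared against the full 1440-observation daily mean. All numbers are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-min temperatures for one day (1440 samples): a sinusoidal
# diurnal cycle plus weather noise, standing in for a SURFRAD-style record.
minutes = np.arange(1440)
temps = (10.0 + 8.0 * np.sin(2 * np.pi * (minutes - 540) / 1440)
         + rng.normal(0, 0.5, 1440))

full_mean = temps.mean()                          # reference: mean of all 1440 obs
minmax_mean = 0.5 * (temps.max() + temps.min())   # conventional (Tmax + Tmin) / 2

# Daily means from k evenly spaced observations per day
for k in (3, 4, 6, 12, 24):
    sub = temps[:: 1440 // k]
    print(f"{k:2d} obs/day: bias = {sub.mean() - full_mean:+.3f} °C")

print(f"(Tmax+Tmin)/2 bias = {minmax_mean - full_mean:+.3f} °C")
```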

  19. Cross-scale modeling of surface temperature and tree seedling establishment in mountain landscapes

    USGS Publications Warehouse

    Dingman, John; Sweet, Lynn C.; McCullough, Ian M.; Davis, Frank W.; Flint, Alan L.; Franklin, Janet; Flint, Lorraine E.

    2013-01-01

    Introduction: Estimating surface temperature from above-ground field measurements is important for understanding the complex landscape patterns of plant seedling survival and establishment, processes which occur at heights of only several centimeters. Climate models currently predict future temperature at 2 m above the ground, leaving ground-surface microclimate poorly characterized. Methods: Using a network of field temperature sensors and climate models, a ground-surface temperature method was used to estimate microclimate variability of minimum and maximum temperature. Temperature lapse rates were derived from field temperature sensors and distributed across the landscape, capturing differences in solar radiation and cold air drainages modeled at a 30-m spatial resolution. Results: The surface temperature estimation method used for this analysis successfully estimated minimum surface temperatures on north-facing, south-facing, valley, and ridgeline topographic settings, and when compared to measured temperatures yielded an R2 of 0.88, 0.80, 0.88, and 0.80, respectively. Maximum surface temperatures generally had slightly more spatial variability than minimum surface temperatures, resulting in R2 values of 0.86, 0.77, 0.72, and 0.79 for north-facing, south-facing, valley, and ridgeline topographic settings. Quasi-Poisson regressions predicting recruitment of Quercus kelloggii (black oak) seedlings from temperature variables were significantly improved using these estimates of surface temperature compared to air temperature modeled at 2 m. Conclusion: Predicting minimum and maximum ground-surface temperatures using a downscaled climate model coupled with temperature lapse rates estimated from field measurements provides a method for modeling temperature effects on plant recruitment. Such methods could be applied to improve projections of species’ range shifts under climate change. Areas of complex topography can provide intricate microclimates that may allow species to redistribute locally as climate changes.

  20. Daily mean temperature estimate at the US SURFRAD stations as an average of the maximum and minimum temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chylek, Petr; Augustine, John A.; Klett, James D.

    At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures, and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 3, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature; a statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the population parameters of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.

  1. Soil Carbon Variability and Change Detection in the Forest Inventory Analysis Database of the United States

    NASA Astrophysics Data System (ADS)

    Wu, A. M.; Nater, E. A.; Dalzell, B. J.; Perry, C. H.

    2014-12-01

    The USDA Forest Service's Forest Inventory Analysis (FIA) program is a national effort assessing current forest resources to ensure sustainable management practices, to assist planning activities, and to report critical status and trends. For example, estimates of carbon stocks and stock change in FIA are reported as the official United States submission to the United Nations Framework Convention on Climate Change. While the main effort in FIA has been focused on aboveground biomass, soil is a critical component of this system. FIA sampled forest soils in the early 2000s, and remeasurement is now underway. However, soil sampling is repeated on a 10-year interval (or longer), and it is uncertain what magnitude of changes in soil organic carbon (SOC) may be detectable with the current sampling protocol. We aim to identify the sensitivity and variability of SOC in the FIA database, and to determine the amount of SOC change that can be detected with the current sampling scheme. For this analysis, we attempt to answer the following questions: 1) What is the sensitivity (power) of SOC data in the current FIA database? 2) How does the minimum detectable change in forest SOC respond to changes in sampling intervals and/or sample point density? Soil samples in the FIA database represent 0-10 cm and 10-20 cm depth increments with a 10-year sampling interval. We are investigating the variability of SOC and its change over time for composite soil data in each FIA region (Pacific Northwest, Interior West, Northern, and Southern). To guide future sampling efforts, we are employing statistical power analysis to examine the minimum detectable change in SOC storage. We are also investigating the sensitivity of SOC storage changes under various scenarios of sample size and/or sample frequency. This research will inform the design of future FIA soil sampling schemes and improve the information available to international policy makers, university and industry partners, and the public.
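
    A sketch of the kind of power analysis described is below, assuming a paired (remeasurement) design and the common (t_alpha + t_beta) formulation of the minimum detectable change; the plot counts and SOC variability are hypothetical placeholders, not FIA values.

```python
import numpy as np
from scipy import stats

def minimum_detectable_change(sd_diff, n, alpha=0.05, power=0.80):
    # Two-sided minimum detectable change for a paired remeasurement design:
    # MDC = (t_{1-alpha/2} + t_{power}) * sd_diff / sqrt(n), with df = n - 1.
    df = n - 1
    t_alpha = stats.t.ppf(1 - alpha / 2, df)
    t_beta = stats.t.ppf(power, df)
    return (t_alpha + t_beta) * sd_diff / np.sqrt(n)

# Hypothetical: SD of plot-level 10-yr SOC change = 15 Mg C/ha
for n_plots in (100, 500, 1000, 5000):
    mdc = minimum_detectable_change(15.0, n_plots)
    print(f"{n_plots:5d} plots -> MDC ~ {mdc:.2f} Mg C/ha")
```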

  2. Detecting Hardware-assisted Hypervisor Rootkits within Nested Virtualized Environments

    DTIC Science & Technology

    2012-06-14

    When prompted for Memory, allocate at least the minimum required for the guest OS and click “Next”; for 64-bit Windows 7 the minimum required is 2048 MB (Figures 66, 79). Within the virtual disk creation wizard, select VDI for the file type (Figure 81), then select Dynamically…

  3. The SME gauge sector with minimum length

    NASA Astrophysics Data System (ADS)

    Belich, H.; Louzada, H. L. C.

    2017-12-01

    We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.

  4. Economic policy and the double burden of malnutrition: cross-national longitudinal analysis of minimum wage and women's underweight and obesity.

    PubMed

    Conklin, Annalijn I; Ponce, Ninez A; Crespi, Catherine M; Frank, John; Nandi, Arijit; Heymann, Jody

    2018-04-01

    Objective: To examine changes in minimum wage associated with changes in women's weight status. Design: Longitudinal study of legislated minimum wage levels (per month, purchasing power parity-adjusted, 2011 constant US dollar values) linked to anthropometric and sociodemographic data from multiple Demographic and Health Surveys (2000-2014). Separate multilevel models estimated associations of a $10 increase in monthly minimum wage with the rate of change in underweight and obesity, conditioning on individual and country confounders. Post-estimation analysis computed predicted mean probabilities of being underweight or obese associated with higher levels of minimum wage at study start and end. Setting: Twenty-four low-income countries. Subjects: Adult non-pregnant women (n = 150 796). Results: Higher minimum wages were associated (OR; 95 % CI) with reduced underweight in women (0·986; 0·977, 0·995); a decrease that accelerated over time (P-interaction = 0·025). Increasing minimum wage was associated with higher obesity (1·019; 1·008, 1·030), but did not alter the rate of increase in obesity prevalence (P-interaction = 0·8). A $10 rise in monthly minimum wage was associated (prevalence difference; 95 % CI) with an average decrease of about 0·14 percentage points (-0·14; -0·23, -0·05) for underweight and an increase of about 0·1 percentage points (0·12; 0·04, 0·20) for obesity. Conclusions: The present longitudinal multi-country study showed that a $10 rise in monthly minimum wage significantly accelerated the decline in women's underweight prevalence, but had no association with the pace of growth in obesity prevalence. Thus, modest rises in minimum wage may be beneficial for addressing the protracted underweight problem in poor countries, especially South Asia and parts of Africa.

  5. Lithology-dependent minimum horizontal stress and in-situ stress estimate

    NASA Astrophysics Data System (ADS)

    Zhang, Yushuai; Zhang, Jincai

    2017-04-01

    Based on the generalized Hooke's law with coupling of stresses and pore pressure, the minimum horizontal stress is solved with the assumption that the vertical, minimum, and maximum horizontal stresses are in equilibrium in the subsurface formations. From this derivation, we find that the uniaxial strain method gives the minimum value, or lower bound, of the minimum stress. Using Anderson's faulting theory and this lower bound of the minimum horizontal stress, the coefficient of friction of the fault is derived. It shows that the coefficient of friction may have a much smaller value than what is commonly assumed (e.g., μf = 0.6-0.7) for in-situ stress estimates. Using the derived coefficient of friction, an improved stress polygon is drawn, which can reduce the uncertainty of in-situ stress calculation by narrowing the area of the conventional stress polygon. The coefficient of friction of the fault is also dependent on lithology. For example, if the formation in the fault is composed of weak shales, then the coefficient of friction of the fault may be small (as low as μf = 0.2). This implies that such a fault is weaker and more likely to undergo shear failure than a fault composed of sandstones. To keep the weak fault from shear sliding, a higher minimum stress and a lower shear stress are needed. That is, a critically stressed weak fault maintains a higher minimum stress, which explains why a low shear stress appears in the frictionally weak fault.
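
    Both quantities in the abstract can be sketched directly. Assuming the standard poroelastic uniaxial-strain expression σh = ν/(1-ν)·(σv - αPp) + αPp for the lower bound, and the Anderson-theory frictional limit (σv - Pp)/(σh - Pp) = (√(μf²+1) + μf)², the implied fault friction follows as μf = (R - 1)/(2√R), where R is the effective stress ratio. The inputs below are illustrative, not values from the paper.

```python
import math

def sh_min_lower_bound(sv, pp, nu, alpha=1.0):
    # Uniaxial-strain (lower-bound) minimum horizontal stress, assuming
    # Biot coefficient alpha; a standard poroelastic result.
    return nu / (1.0 - nu) * (sv - alpha * pp) + alpha * pp

def implied_fault_friction(sv, sh_min, pp):
    # Friction coefficient of a critically stressed normal fault from
    # Anderson faulting theory: (sv-pp)/(sh-pp) = (sqrt(mu^2+1)+mu)^2.
    r = (sv - pp) / (sh_min - pp)
    return (r - 1.0) / (2.0 * math.sqrt(r))

sv, pp = 60.0, 27.0                        # vertical stress, pore pressure [MPa]
sh = sh_min_lower_bound(sv, pp, nu=0.32)   # Poisson ratio typical of a weak shale
print(f"Sh_min lower bound ~ {sh:.1f} MPa, "
      f"implied mu_f ~ {implied_fault_friction(sv, sh, pp):.2f}")
```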

  6. Characterization of Fissile Assemblies Using Low-Efficiency Detection Systems

    DOE PAGES

    Chapline, George F.; Verbeke, Jerome M.

    2017-02-02

    Here, we have investigated the possibility that the amount, chemical form, multiplication, and shape of the fissile material in an assembly can be passively assayed using scintillator detection systems by only measuring the fast neutron pulse height distribution and the distribution of time intervals Δt between fast neutrons. We have previously demonstrated that the alpha-ratio can be obtained from the observed pulse height distribution for fast neutrons. In this paper we report that when the distribution of time intervals is plotted as a function of log Δt, the position of the correlated neutron peak is nearly independent of detector efficiency and determines the internal relaxation rate for fast neutrons. If this information is combined with knowledge of the alpha-ratio, then the position of the minimum between the correlated and uncorrelated peaks can be used to rapidly estimate the mass, multiplication, and shape of fissile material. This method does not require a priori knowledge of either the efficiency for neutron detection or the alpha-ratio. Although our method neglects 3-neutron correlations, we have used previously obtained experimental data for metallic and oxide forms of Pu to demonstrate that our method yields good estimates for multiplications as large as 2, and that the only constraint on detector efficiency/observation time is that a peak in the interval time distribution due to correlated neutrons is visible.

  7. Survey of carbamate and organophosphorous pesticide export from a south Florida (U.S.A.) agricultural watershed: implications of sampling frequency on ecological risk estimation.

    PubMed

    Wilson, P. Chris; Foos, Jane Ferguson

    2006-11-01

    The objectives of the present study were to characterize the presence of selected carbamate and organophosphorous pesticides in Ten Mile Creek (Fort Pierce, FL, U.S.A.) and to evaluate the implications of sampling frequency on ecological risk estimates. Ten Mile Creek originates in a predominately agricultural watershed that is drained by an extensive network of cross-linked canals. Water samples were collected daily or every other day and were analyzed for azinphos-methyl, chlorpyrifos, diazinon, dimethoate, ethion, fenamiphos, malathion, methidathion, carbaryl, carbofuran, 3-hydroxycarbofuran, methiocarb, methomyl, oxamyl, and propoxur. A total of 457 samples were analyzed for the carbamate suite, and a total of 332 samples were analyzed for the organophosphorous suite. Carbaryl was detected in eight samples; half of these detections occurred on four consecutive days (October 26-29, 2001) at concentrations ranging from 0.33 to 0.95 microg/L. Methomyl was detected in samples collected on five consecutive days (March 30-April 3, 2002) at concentrations ranging from 1.0 to 2.2 microg/L. Oxamyl was detected in four samples, three of which occurred on three consecutive days (February 17-19, 2002) at concentrations ranging from 6.2 to 6.8 microg/L. The carbamates propoxur, 3-hydroxycarbofuran, carbofuran, and methiocarb were not detected. Diazinon and ethion were the only organophosphorous pesticides detected. Diazinon was detected at 0.9 and 0.7 microg/L on January 5, 2002, and on January 6, 2002, respectively. Ethion was detected in 18 consecutive samples (August 3-20, 2001). The mean, maximum, minimum, and median detected concentrations were 0.38, 0.61, 0.30, and 0.33 microg/L, respectively. Results indicate that frequent sampling is necessary to characterize the presence of these pesticides in this intensively drained watershed. This conclusion also may apply to similar canalized watersheds.

  8. Evaluation of quality-control data collected by the U.S. Geological Survey for routine water-quality activities at the Idaho National Laboratory and vicinity, southeastern Idaho, 2002-08

    USGS Publications Warehouse

    Rattray, Gordon W.

    2014-01-01

    Quality-control (QC) samples were collected from 2002 through 2008 by the U.S. Geological Survey, in cooperation with the U.S. Department of Energy, to ensure data robustness by documenting the variability and bias of water-quality data collected at surface-water and groundwater sites at and near the Idaho National Laboratory. QC samples consisted of 139 replicates and 22 blanks (approximately 11 percent of the number of environmental samples collected). Measurements from replicates were used to estimate variability (from field and laboratory procedures and sample heterogeneity), as reproducibility and reliability, of water-quality measurements of radiochemical, inorganic, and organic constituents. Measurements from blanks were used to estimate the potential contamination bias of selected radiochemical and inorganic constituents in water-quality samples, with an emphasis on identifying any cross contamination of samples collected with portable sampling equipment. The reproducibility of water-quality measurements was estimated with calculations of normalized absolute difference for radiochemical constituents and relative standard deviation (RSD) for inorganic and organic constituents. The reliability of water-quality measurements was estimated with pooled RSDs for all constituents. Reproducibility was acceptable for all constituents except dissolved aluminum and total organic carbon. Pooled RSDs were equal to or less than 14 percent for all constituents except for total organic carbon, which had pooled RSDs of 70 percent for the low concentration range and 4.4 percent for the high concentration range. Source-solution and equipment blanks were measured for concentrations of tritium, strontium-90, cesium-137, sodium, chloride, sulfate, and dissolved chromium. Field blanks were measured for the concentration of iodide. No detectable concentrations were measured from the blanks except for strontium-90 in one source solution and one equipment blank collected in September and October 2004, respectively. The detectable concentrations of strontium-90 in the blanks probably were from a small source of strontium-90 contamination or large measurement variability, or both. Order statistics and the binomial probability distribution were used to estimate the magnitude and extent of any potential contamination bias of tritium, strontium-90, cesium-137, sodium, chloride, sulfate, dissolved chromium, and iodide in water-quality samples. These statistical methods indicated that, with (1) 87 percent confidence, contamination bias of cesium-137 and sodium in 60 percent of water-quality samples was less than the minimum detectable concentration or reporting level; (2) 92‒94 percent confidence, contamination bias of tritium, strontium-90, chloride, sulfate, and dissolved chromium in 70 percent of water-quality samples was less than the minimum detectable concentration or reporting level; and (3) 75 percent confidence, contamination bias of iodide in 50 percent of water-quality samples was less than the reporting level for iodide. These results support the conclusion that contamination bias of water-quality samples from sample processing, storage, shipping, and analysis was insignificant and that cross-contamination of perched groundwater samples collected with bailers during 2002–08 was insignificant.
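
    In one common order-statistics formulation, the blank-based confidence statements above reduce to a one-line expression: if all n blanks are non-detects, the confidence that at least a fraction f of samples have contamination below the reporting level is 1 - f^n. A minimal sketch follows; the blank counts below are hypothetical, and the report's per-constituent counts differ.

```python
def blank_confidence(n_blanks, fraction):
    # Confidence that at least `fraction` of samples have contamination
    # below the reporting level, given n_blanks non-detect blanks
    # (one common order-statistics formulation): C = 1 - fraction**n_blanks.
    return 1.0 - fraction ** n_blanks

for n in (4, 7, 10):   # hypothetical numbers of non-detect blanks
    c = blank_confidence(n, 0.70)
    print(f"{n:2d} blanks -> {100 * c:.0f}% confidence for 70% of samples")
```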

  9. Marine-target craters on Mars? An assessment study

    USGS Publications Warehouse

    Ormo, J.; Dohm, J.M.; Ferris, J.C.; Lepinette, A.; Fairen, A.G.

    2004-01-01

    Observations of impact craters on Earth show that a water column at the target strongly influences the lithology and morphology of the resultant crater. The degree of influence varies with the target water depth and impactor diameter. Morphological features detectable in satellite imagery include a concentric shape with an inner crater inset within a shallower outer crater, which is cut by gullies excavated by the resurge of water. In this study, we show that if oceans, large seas, and lakes existed on Mars for periods of time, marine-target craters must have formed. We make an assessment of the minimum and maximum numbers of such craters based on published data on water depths, extent, and duration of putative oceans within "contacts 1 and 2," the cratering rate during the different oceanic phases, and computer modeling of minimum impactor diameters required to form long-lasting craters in the seafloor of the oceans. We also discuss the influence of erosion and sedimentation on the preservation and exposure of the craters. For an ocean within the smaller "contact 2" with a duration of 100,000 yr and the low present crater formation rate, only ~1-2 detectable marine-target craters would have formed. In a maximum estimate with a duration of 0.8 Gyr, as many as 1400 craters may have formed. An ocean within the larger "contact 1-Meridiani," with a duration of 100,000 yr, would not have received any seafloor craters despite the higher crater formation rate estimated before 3.5 Gyr. On the other hand, with a maximum duration of 0.8 Gyr, about 160 seafloor craters may have formed. However, terrestrial examples show that most marine-target craters may be covered by thick sediments. Ground penetrating radar surveys planned for the ESA Mars Express and NASA 2005 missions may reveal buried craters, though it is uncertain whether the resolution will allow the detection of diagnostic features of marine-target craters. The implications regarding the discovery of marine-target craters on Mars are not without significance, as such discoveries would help address the ongoing debate of whether large water bodies occupied the northern plains of Mars and would help constrain future paleoclimatic reconstructions. © Meteoritical Society, 2004.

  10. DC-9/JT8D refan, Phase 1. [technical and economic feasibility of retrofitting DC-9 aircraft with refan engine to achieve desired acoustic levels

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Analyses and design studies were conducted on the technical and economic feasibility of installing the JT8D-109 refan engine on the DC-9 aircraft. Design criteria included minimum change to the airframe to achieve desired acoustic levels. Several acoustic configurations were studied, with two selected for detailed investigation. The minimum selected acoustic treatment configuration results in an estimated aircraft weight increase of 608 kg (1,342 lb), and the maximum selected acoustic treatment configuration results in an estimated aircraft weight increase of 809 kg (1,784 lb). The range loss for the minimum and maximum selected acoustic treatment configurations, based on long range cruise at 10 668 m (35,000 ft) altitude with a typical payload of 6 804 kg (15,000 lb), amounts to 54 km (86 n. mi.) respectively. Estimated reductions in EPNL for the minimum selected treatment are 8 EPNdB at approach, 12 EPNdB for takeoff with power cutback, 15 EPNdB for takeoff without power cutback, and 12 EPNdB for sideline, using FAR Part 36. Little difference in EPNL was estimated between the minimum and maximum treatments due to the reduced performance of the maximum treatment. No major technical problems were encountered in the study. The refan concept for the DC-9 appears technically feasible and economically viable at approximately $1,000,000 per airplane. An additional study of the installation of the JT3D-9 refan engine on the DC-8-50/61 and DC-8-62/63 aircraft is included. Three levels of acoustic treatment were suggested for the DC-8-50/61 and two levels for the DC-8-62/63. Results indicate the DC-8 technically can be retrofitted with refan engines for approximately $2,500,000 per airplane.

  11. Detection of all four dengue serotypes in Aedes aegypti female mosquitoes collected in a rural area in Colombia

    PubMed Central

    Pérez-Castro, Rosalía; Castellanos, Jaime E; Olano, Víctor A; Matiz, María Inés; Jaramillo, Juan F; Vargas, Sandra L; Sarmiento, Diana M; Stenström, Thor Axel; Overgaard, Hans J

    2016-01-01

    The Aedes aegypti vector for dengue virus (DENV) has been reported in urban and periurban areas. The information about DENV circulation in mosquitoes in Colombian rural areas is limited, so we aimed to evaluate the presence of DENV in Ae. aegypti females caught in rural locations of two Colombian municipalities, Anapoima and La Mesa. Mosquitoes from 497 rural households in 44 different rural settlements were collected. Pools of about 20 Ae. aegypti females were processed for DENV serotype detection. DENV in mosquitoes was detected in 74% of the analysed settlements with a pool positivity rate of 62%. The estimated individual mosquito infection rate was 4.12% and the minimum infection rate was 33.3/1,000 mosquitoes. All four serotypes were detected; the most frequent being DENV-2 (50%) and DENV-1 (35%). Two-three serotypes were detected simultaneously in separate pools. This is the first report on the co-occurrence of natural DENV infection of mosquitoes in Colombian rural areas. The findings are important for understanding dengue transmission and planning control strategies. A potential latent virus reservoir in rural areas could spill over to urban areas during population movements. Detecting DENV in wild-caught adult mosquitoes should be included in the development of dengue epidemic forecasting models. PMID:27074252

  12. Detection of all four dengue serotypes in Aedes aegypti female mosquitoes collected in a rural area in Colombia.

    PubMed

    Pérez-Castro, Rosalía; Castellanos, Jaime E; Olano, Víctor A; Matiz, María Inés; Jaramillo, Juan F; Vargas, Sandra L; Sarmiento, Diana M; Stenström, Thor Axel; Overgaard, Hans J

    2016-04-01

    The Aedes aegypti vector for dengue virus (DENV) has been reported in urban and periurban areas. The information about DENV circulation in mosquitoes in Colombian rural areas is limited, so we aimed to evaluate the presence of DENV in Ae. aegypti females caught in rural locations of two Colombian municipalities, Anapoima and La Mesa. Mosquitoes from 497 rural households in 44 different rural settlements were collected. Pools of about 20 Ae. aegypti females were processed for DENV serotype detection. DENV in mosquitoes was detected in 74% of the analysed settlements with a pool positivity rate of 62%. The estimated individual mosquito infection rate was 4.12% and the minimum infection rate was 33.3/1,000 mosquitoes. All four serotypes were detected; the most frequent being DENV-2 (50%) and DENV-1 (35%). Two-three serotypes were detected simultaneously in separate pools. This is the first report on the co-occurrence of natural DENV infection of mosquitoes in Colombian rural areas. The findings are important for understanding dengue transmission and planning control strategies. A potential latent virus reservoir in rural areas could spill over to urban areas during population movements. Detecting DENV in wild-caught adult mosquitoes should be included in the development of dengue epidemic forecasting models.

  13. Response of stream benthic macroinvertebrates to current water management in Alpine catchments massively developed for hydropower.

    PubMed

    Quadroni, Silvia; Crosa, Giuseppe; Gentili, Gaetano; Espa, Paolo

    2017-12-31

    The present work focuses on evaluating the ecological effects of hydropower-induced streamflow alteration within four catchments in the central Italian Alps. Downstream from the water diversions, minimum flows are released as an environmental protection measure, ranging from approximately 5 to 10% of the mean annual natural flow estimated at the intake section. Benthic macroinvertebrates as well as daily averaged streamflow were monitored for five years at twenty regulated stream reaches, and possible relationships between benthos-based stream quality metrics and environmental variables were investigated. Despite the non-negligible inter-site differences in basic streamflow metrics, benthic macroinvertebrate communities were generally dominated by a few highly resilient taxa. The highest level of diversity was detected at sites where upstream minimum flow exceedance is higher and further anthropogenic pressures (other than hydropower) are lower. However, according to the current Italian normative index, the ecological quality was good/high on average at all of the investigated reaches, thus complying with the Water Framework Directive standards. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Optimization of K-edge imaging for vulnerable plaques using gold nanoparticles and energy resolved photon counting detectors: a simulation study.

    PubMed

    Alivov, Yahya; Baturin, Pavlo; Le, Huy Q; Ducote, Justin; Molloi, Sabee

    2014-01-06

    We investigated the effect of different imaging parameters, such as dose, beam energy, energy resolution and the number of energy bins, on the image quality of K-edge spectral computed tomography (CT) of gold nanoparticles (GNP) accumulated in an atherosclerotic plaque. A maximum likelihood technique was employed to estimate the concentration of GNP, which served as a targeted intravenous contrast material intended to detect the degree of the plaque's inflammation. The simulation studies used a single-slice parallel beam CT geometry with an x-ray beam energy ranging between 50 and 140 kVp. The synthetic phantoms included small (3 cm in diameter) cylinder and chest (33 × 24 cm²) phantoms, where both phantoms contained tissue, calcium and gold. In the simulation studies, GNP quantification and background (calcium and tissue) suppression tasks were pursued. The x-ray detection sensor was represented by an energy resolved photon counting detector (e.g., CdZnTe) with adjustable energy bins. Both ideal and more realistic (12% full width at half maximum (FWHM) energy resolution) implementations of the photon counting detector were simulated. The simulations were performed for the CdZnTe detector with a pixel pitch of 0.5-1 mm, which corresponds to a performance without significant charge sharing and cross-talk effects. The Rose model was employed to estimate the minimum detectable concentration of GNPs. A figure of merit (FOM) was used to optimize the x-ray beam energy (kVp) to achieve the highest signal-to-noise ratio with respect to the patient dose. As a result, the successful identification of gold and background suppression was demonstrated. The highest FOM was observed at the 125 kVp x-ray beam energy. The minimum detectable GNP concentration was determined to be approximately 1.06 µmol mL⁻¹ (0.21 mg mL⁻¹) for an ideal detector and about 2.5 µmol mL⁻¹ (0.49 mg mL⁻¹) for a more realistic (12% FWHM) detector. The studies show the optimal imaging parameters at the lowest patient dose using an energy resolved photon counting detector to image GNP in an atherosclerotic plaque.
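
    A rough sketch of a Rose-criterion detectability estimate of the kind used in the paper is shown below; the noise level, calibration slope, and region-of-interest size are placeholders, not values from the study.

```python
import numpy as np

def rose_minimum_detectable_concentration(sigma_voxel, slope, n_voxels, k=5.0):
    # Rose criterion: SNR = slope * c * sqrt(n_voxels) / sigma_voxel >= k.
    # Solving for the concentration c gives the minimum detectable value.
    #   sigma_voxel : noise std dev per voxel in the decomposed gold image
    #   slope       : image signal per unit gold concentration (calibration)
    #   n_voxels    : voxels covered by the plaque region of interest
    return k * sigma_voxel / (slope * np.sqrt(n_voxels))

# All three inputs below are illustrative placeholders.
c_min = rose_minimum_detectable_concentration(sigma_voxel=2.0, slope=1.5,
                                              n_voxels=40)
print(f"minimum detectable concentration ~ {c_min:.2f} (arbitrary units)")
```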

  15. Binoculars with mil scale as a training aid for estimating form class

    Treesearch

    H.W. Camp, J.R.; C.A. Bickford

    1949-01-01

    In an extensive forest inventory, estimates involving personal judgment cannot be eliminated. However, every means should be taken to keep these estimates to a minimum and to provide on-the-job training that is adequate for obtaining the best estimates possible.

  16. Physiological and biochemical responses of Prorocentrum minimum to high light stress

    NASA Astrophysics Data System (ADS)

    Park, So Yun; Choi, Eun Seok; Hwang, Jinik; Kim, Donggiun; Ryu, Tae Kwon; Lee, Taek-Kyun

    2009-12-01

    Prorocentrum minimum is a common bloom-forming photosynthetic dinoflagellate found along the southern coast of Korea. To investigate the adaptive responses of P. minimum to high light stress, we measured growth rate, and generation of reactive oxidative species (ROS), superoxide dismutase (SOD), catalase (CAT), and malondialdehyde (MDA) in cultures exposed to normal (NL) and high light levels (HL). The results showed that HL (800 μmol m⁻² s⁻¹) inhibited growth of P. minimum, with maximal inhibition after 7-9 days. HL also increased the amount of ROS and MDA, suggesting that HL stress leads to oxidative damage and lipid peroxidation in this species. Under HL, we first detected superoxide on day 4 and H2O2 on day 5. We also detected SOD activity on day 5 and CAT activity on day 6. The level of lipid peroxidation, an indicator of cell death, was high on day 8. Addition of diphenyleneiodonium (DPI), an NAD(P)H inhibitor, decreased the levels of superoxide generation and lipid peroxidation. Our results indicate that the production of ROS which results from HL stress in P. minimum also induces antioxidative enzymes that counteract oxidative damage and allow P. minimum to survive.

  17. Efficient scalable solid-state neutron detector.

    PubMed

    Moses, Daniel

    2015-06-01

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a ⁶Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector unit is 23.8%, and that of a detector element comprising a stack of five detector units is 60%. Based on the measured performance of this detector unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m², is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.

  18. Of Detection Limits and Effective Mitigation: The Use of Infrared Cameras for Methane Leak Detection

    NASA Astrophysics Data System (ADS)

    Ravikumar, A. P.; Wang, J.; McGuire, M.; Bell, C.; Brandt, A. R.

    2017-12-01

    Mitigating methane emissions, a short-lived and potent greenhouse gas, is critical to limiting global temperature rise to two degrees Celsius as outlined in the Paris Agreement. A major source of anthropogenic methane emissions in the United States is the oil and gas sector. To this effect, state and federal governments have recommended the use of optical gas imaging systems in periodic leak detection and repair (LDAR) surveys to detect for fugitive emissions or leaks. The most commonly used optical gas imaging (OGI) systems are infrared cameras. In this work, we systematically evaluate the limits of infrared (IR) camera based OGI systems for use in methane leak detection programs. We analyze the effect of various parameters that influence the minimum detectable leak rates of infrared cameras. Blind leak detection tests were carried out at the Department of Energy's MONITOR natural gas test-facility in Fort Collins, CO. Leak sources included natural gas wellheads, separators, and tanks. With an EPA mandated 60 g/hr leak detection threshold for IR cameras, we test leak rates ranging from 4 g/hr to over 350 g/hr at imaging distances between 5 ft and 70 ft from the leak source. We perform these experiments over the course of a week, encompassing a wide range of wind and weather conditions. Using repeated measurements at a given leak rate and imaging distance, we generate detection probability curves as a function of leak-size for various imaging distances and measurement conditions. In addition, we estimate the median detection threshold - the leak-size at which the probability of detection is 50% - under various scenarios to reduce uncertainty in mitigation effectiveness. Preliminary analysis shows that the median detection threshold varies from 3 g/hr at an imaging distance of 5 ft to over 150 g/hr at 50 ft (ambient temperature: 80 °F, winds < 4 m/s). Results from this study can be directly used to improve OGI based LDAR protocols and reduce uncertainty in estimated mitigation effectiveness. Furthermore, detection limits determined in this study can be used as standards to compare new detection technologies.
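
    Detection-probability curves and the median threshold can be extracted by fitting a sigmoid to the binary detection outcomes at a given imaging distance. A minimal sketch with made-up outcomes follows (a least-squares fit of a log-logistic curve; a fuller analysis would use maximum-likelihood logistic regression):

```python
import numpy as np
from scipy.optimize import curve_fit

# Binary detection outcomes (1 = leak imaged) vs leak size, pooled at one
# imaging distance; these outcomes are made up for illustration.
leak_gph = np.array([4, 8, 15, 25, 40, 60, 90, 150, 250, 350], dtype=float)
detected = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1], dtype=float)

def logistic(q, q50, s):
    # Detection probability vs leak rate q; q50 is the median threshold
    # (the leak size at which the probability of detection is 50%).
    return 1.0 / (1.0 + np.exp(-(np.log(q) - np.log(q50)) / s))

(q50, s), _ = curve_fit(logistic, leak_gph, detected, p0=(50.0, 1.0))
print(f"median detection threshold ~ {q50:.0f} g/hr")
```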

  19. Space shuttle engineering and operations support. Orbiter to spacelab electrical power interface. Avionics system engineering

    NASA Technical Reports Server (NTRS)

    Emmons, T. E.

    1976-01-01

    The results are presented of an investigation of the factors which affect the determination of Spacelab (S/L) minimum interface main dc voltage and available power from the orbiter. The dedicated fuel cell mode of powering the S/L is examined along with the minimum S/L interface voltage and available power using the predicted fuel cell power plant performance curves. The values obtained are slightly lower than current estimates and represent a more marginal operating condition than previously estimated.

  20. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
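
    A recursive least-squares (RLS) identifier is a close, simpler relative of the scheme described (the paper's filter additionally carries the covariance of the identification error). A minimal sketch on a synthetic first-order system:

```python
import numpy as np

def rls_identify(phi, y, lam=1.0, p0=1e3):
    # Recursive least-squares parameter identification: theta converges to
    # the minimum-variance estimate under white measurement noise.
    n, p = phi.shape
    theta = np.zeros(p)
    P = p0 * np.eye(p)                      # parameter covariance
    history = np.empty((n, p))
    for k in range(n):
        x = phi[k]
        gain = P @ x / (lam + x @ P @ x)    # Kalman-style gain vector
        theta = theta + gain * (y[k] - x @ theta)
        P = (P - np.outer(gain, x @ P)) / lam
        history[k] = theta
    return history

# Identify y[k] = 0.8*y[k-1] + 0.5*u[k-1] + noise from input/output data
rng = np.random.default_rng(1)
u = rng.normal(size=200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.05 * rng.normal()

est = rls_identify(np.column_stack([y[:-1], u[:-1]]), y[1:])
print(est[-1])   # should approach [0.8, 0.5]
```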

  1. The Application of Speaker Recognition Techniques in the Detection of Tsunamigenic Earthquakes

    NASA Astrophysics Data System (ADS)

    Gorbatov, A.; O'Connell, J.; Paliwal, K.

    2015-12-01

    Tsunami warning procedures adopted by national tsunami warning centres largely rely on the classical approach of earthquake location, magnitude determination, and the consequent modelling of tsunami waves. Although this approach is based on known physical theories of earthquake and tsunami generation processes, it has a main shortcoming: the need to satisfy minimum seismic data requirements before those physical parameters can be estimated. At least four seismic stations are necessary to locate the earthquake, and a minimum of approximately 10 minutes of seismic waveform observation is needed to reliably estimate the magnitude of a large earthquake similar to the 2004 Indian Ocean Tsunami Earthquake of M9.2. Consequently, the total time to tsunami warning could be more than half an hour. In an attempt to reduce the time to tsunami alert, a new approach is proposed based on the classification of tsunamigenic and non-tsunamigenic earthquakes using speaker recognition techniques. A Tsunamigenic Dataset (TGDS) was compiled to promote the development of machine learning techniques for application to seismic trace analysis and, in particular, tsunamigenic event detection, and to compare them to existing seismological methods. The TGDS contains 227 offshore events (87 tsunamigenic and 140 non-tsunamigenic earthquakes with M≥6) from Jan 2000 to Dec 2011, inclusive. A Support Vector Machine classifier using a radial-basis function kernel was applied to spectral features derived from 400 sec frames of 3-comp. 1-Hz broadband seismometer data. Ten-fold cross-validation was used during training to choose classifier parameters. Voting was applied to the classifier predictions provided from each station to form an overall prediction for an event. The F1 score (harmonic mean of precision and recall) was chosen to rate each classifier as it provides a compromise between type-I and type-II errors, and due to the imbalance between the representative number of events in the tsunamigenic and non-tsunamigenic classes. The described classifier achieved an F1 score of 0.923, with tsunamigenic classification precision and recall/sensitivity of 0.928 and 0.919, respectively. The system requires a minimum of 3 stations with ~400 seconds of data each to make a prediction. The accuracy improves as further stations and data become available.
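
    The pipeline described maps naturally onto standard tooling. A minimal sketch with synthetic stand-in features (the real TGDS spectral features are not reproduced in the abstract): feature scaling, an RBF-kernel SVM, 10-fold cross-validation, and the F1 score.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Stand-in data: rows are spectral feature vectors from 400 s seismic frames,
# labels 1 = tsunamigenic, 0 = non-tsunamigenic. Features here are synthetic;
# only the class counts (87 vs 140) follow the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(227, 64))
y = rng.permutation(np.r_[np.ones(87, dtype=int), np.zeros(140, dtype=int)])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
print("10-fold F1:", scores.mean().round(3))
```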

  2. Estimating abundance of mountain lions from unstructured spatial sampling

    USGS Publications Warehouse

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.

  3. MMSE Estimator for Children’s Speech with Car and Weather Noise

    NASA Astrophysics Data System (ADS)

    Sayuthi, V.

    2018-04-01

    Vehicles are used today, and will be in the future, not just as a means of traveling but also for activities such as entertainment and work. In this study, we examine the speech of a child (a girl) in a vehicle, degraded by noise from two sources: the car itself (engine or air conditioner) and the surrounding weather, in this case rain. A minimum mean square error (MMSE) estimator is used to recover the child's clean speech, with the signals represented in simulation as random processes characterized by the autocorrelations of both the child's voice and the disturbing noise. This MMSE estimator can be regarded as a Wiener filter, since it reconstructs the clean sound. We expect the results of this study to serve as a basis for developing entertainment and communication technology for vehicle passengers, particularly technology using MMSE estimators.
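
    A minimal sketch of a frequency-domain Wiener (MMSE) gain of the general kind described, with the clean-speech power spectral density estimated by spectral subtraction; this illustrates the idea, not the paper's exact estimator, and the noise PSD here is assumed known.

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd):
    # Per-frequency Wiener (MMSE) gain H = S_clean / S_noisy, with the
    # clean-speech PSD estimated by spectral subtraction (floored near 0).
    clean_psd = np.maximum(noisy_psd - noise_psd, 1e-12)
    return clean_psd / noisy_psd

def denoise_frame(frame, noise_psd):
    # Filter one windowed frame of noisy speech in the frequency domain.
    win = np.hanning(len(frame))
    spec = np.fft.rfft(frame * win)
    gain = wiener_gain(np.abs(spec) ** 2, noise_psd)
    return np.fft.irfft(gain * spec, n=len(frame))

# Synthetic noisy frame: a 300 Hz tone (stand-in for voiced speech) plus
# white noise standing in for car/rain noise; fs = 8 kHz, 512 samples.
rng = np.random.default_rng(2)
n, fs, sigma = 512, 8000, 0.5
frame = np.sin(2 * np.pi * 300 * np.arange(n) / fs) + sigma * rng.normal(size=n)

# Noise PSD assumed known for the sketch (expected |FFT|^2 of windowed
# white noise); in practice it is estimated from speech-free segments.
noise_psd = sigma ** 2 * np.sum(np.hanning(n) ** 2)
print(denoise_frame(frame, noise_psd)[:4])
```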

  4. Application of Twin Beams in Mach-Zehnder Interferometer

    NASA Technical Reports Server (NTRS)

    Zhang, J. X.; Xie, C. D.; Peng, K. C.

    1996-01-01

    Using the twin beams generated from a parametric amplifier to drive the two ports of a Mach-Zehnder interferometer, it is shown that the minimum detectable optical phase shift can be greatly reduced, approaching the Heisenberg limit (1/n), far below the shot-noise limit (1/√n), in the large-gain limit. The dependence of the minimum detectable phase shift on the parametric gain and on inefficient photodetectors is discussed.

  5. Analysis of Aircraft Fuels and Related Materials

    DTIC Science & Technology

    1982-09-01

    …water content by the Karl Fischer method. Each 2040 solvent sample represented a different step in a clean-up procedure conducted by Aero Propulsion… Method 163-80, which utilizes a potentiometric titration with alcoholic silver nitrate; this method has a minimum detectability of 1 ppm and a repeatability of 0.1 ppm.

  6. Methods for the preparation and analysis of solids and suspended solids for methylmercury

    USGS Publications Warehouse

    DeWild, John F.; Olund, Shane D.; Olson, Mark L.; Tate, Michael T.

    2004-01-01

    This report presents the methods and method performance data for the determination of methylmercury concentrations in solids and suspended solids. Using the methods outlined here, the U.S. Geological Survey's Wisconsin District Mercury Laboratory can consistently detect methylmercury in solids and suspended solids at environmentally relevant concentrations. Solids can be analyzed wet or freeze dried with a minimum detection limit of 0.08 ng/g (as-processed). Suspended solids must first be isolated from aqueous matrices by filtration. The minimum detection limit for suspended solids is 0.01 ng per filter resulting in a minimum reporting limit ranging from 0.2 ng/L for a 0.05 L filtered volume to 0.01 ng/L for a 1.0 L filtered volume. Maximum concentrations for both matrices can be extended to cover nearly any amount of methylmercury by limiting sample size.

  7. The Einstein-Hilbert gravitation with minimum length

    NASA Astrophysics Data System (ADS)

    Louzada, H. L. C.

    2018-05-01

    We study Einstein-Hilbert gravitation with the deformed Heisenberg algebra leading to a minimum length, aiming to find and estimate the corrections to the theory and to clarify whether, by means of the minimum length, one can obtain a theory in D=4 that is causal and unitary and provides a massive graviton. To this end, we calculate and analyze the dispersion relations of the considered theory.

  8. Minimum Alcohol Prices and Outlet Densities in British Columbia, Canada: Estimated Impacts on Alcohol-Attributable Hospital Admissions

    PubMed Central

    Zhao, Jinhui; Martin, Gina; Macdonald, Scott; Vallance, Kate; Treno, Andrew; Ponicki, William; Tu, Andrew; Buxton, Jane

    2013-01-01

    Objectives. We investigated whether periodic increases in minimum alcohol prices were associated with reduced alcohol-attributable hospital admissions in British Columbia. Methods. The longitudinal panel study (2002–2009) incorporated minimum alcohol prices, density of alcohol outlets, and age- and gender-standardized rates of acute, chronic, and 100% alcohol-attributable admissions. We applied mixed-method regression models to data from 89 geographic areas of British Columbia across 32 time periods, adjusting for spatial and temporal autocorrelation, moving average effects, season, and a range of economic and social variables. Results. A 10% increase in the average minimum price of all alcoholic beverages was associated with an 8.95% decrease in acute alcohol-attributable admissions and a 9.22% reduction in chronic alcohol-attributable admissions 2 years later. A Can$ 0.10 increase in average minimum price would prevent 166 acute admissions in the 1st year and 275 chronic admissions 2 years later. We also estimated significant, though smaller, adverse impacts of increased private liquor store density on hospital admission rates for all types of alcohol-attributable admissions. Conclusions. Significant health benefits were observed when minimum alcohol prices in British Columbia were increased. By contrast, adverse health outcomes were associated with an expansion of private liquor stores. PMID:23597383

  9. Kalman filter for statistical monitoring of forest cover across sub-continental regions [Symposium

    Treesearch

    Raymond L. Czaplewski

    1991-01-01

    The Kalman filter is a generalization of the composite estimator. The univariate composite estimate combines 2 prior estimates of a population parameter with a weighted average, where the scalar weight is inversely proportional to the variances. The composite estimator is a minimum variance estimator that requires no distributional assumptions other than estimates of the...
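
    In the scalar case the composite estimate is inverse-variance weighting, which is also the Kalman measurement update. A minimal sketch with illustrative numbers:

```python
def composite_estimate(x1, var1, x2, var2):
    # Univariate composite (inverse-variance weighted) estimate and its
    # variance: the scalar case of the Kalman measurement update.
    w = var2 / (var1 + var2)             # weight on x1 (inversely prop. to var1)
    x = w * x1 + (1.0 - w) * x2
    var = var1 * var2 / (var1 + var2)    # combined variance <= min(var1, var2)
    return x, var

# Illustrative: prior estimate 62% cover (var 25) vs new estimate 55% (var 9)
print(composite_estimate(62.0, 25.0, 55.0, 9.0))   # -> (56.85..., 6.61...)
```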

  10. A comparison of behavioural (Landolt C) and anatomical estimates of visual acuity in archerfish (Toxotes chatareus).

    PubMed

    Temple, S E; Manietta, D; Collin, S P

    2013-05-03

    Archerfish forage by shooting jets of water at insects above the water's surface. The challenge of detecting small prey items against a complex background suggests that they have good visual acuity, but to date this has never been tested, despite archerfish becoming an increasingly important model species for vertebrate vision. We used a modified Landolt C test to measure visual acuity behaviourally, and compared the results to their predicted minimum separable angle based on both photoreceptor and ganglion cell spacing in the retina. Both measures yielded similar estimates of visual acuity; between 3.23 and 3.57 cycles per degree (0.155-0.140° of visual arc). Such a close match between behavioural and anatomical estimates of visual acuity in fishes is unusual and may be due to our use of an ecologically relevant task that measured the resolving power of the part of the retina that has the highest photoreceptor density and that is used in aligning their spitting angle with potential targets. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Innovative trend analysis of annual and seasonal air temperature and rainfall in the Yangtze River Basin, China during 1960-2015

    NASA Astrophysics Data System (ADS)

    Cui, Lifang; Wang, Lunche; Lai, Zhongping; Tian, Qing; Liu, Wen; Li, Jun

    2017-11-01

    The variation characteristics of air temperature and precipitation in the Yangtze River Basin (YRB), China during 1960-2015 were analysed using a linear regression (LR) analysis, a Mann-Kendall (MK) test with Sen's slope estimator and Sen's innovative trend analysis (ITA). The results showed that the annual maximum, minimum and mean temperature significantly increased at the rate of 0.15°C/10yr, 0.23°C/10yr and 0.19°C/10yr, respectively, over the whole study area during 1960-2015. The warming magnitudes for the above variables during 1980-2015 were much higher than those during 1960-2015:0.38°C/10yr, 0.35°C/10yr and 0.36°C/10yr, respectively. The seasonal maximum, minimum and mean temperature significantly increased in the spring, autumn and winter seasons during 1960-2015. Although the summer temperatures also increased at some extent, only the minimum temperature showed a significant increasing trend. Meanwhile, the highest rate of increase of seasonal mean temperature occurred in winter (0.24°C/10yr) during 1960-2015 and spring (0.50°C/10yr) during 1980-2015, which indicated that the significant warming trend for the whole YRB could be attributed to the remarkable temperature increases in winter and spring months. However, both the annual and seasonal warming magnitudes showed large regional differences, and a higher warming rate was detected in the eastern YRB and the western source region of the Yangtze River on the Qinghai-Tibetan Plateau (QTP). Additionally, annual precipitation increased by approximately 12.02 mm/10yr during 1960-2015 but decreased at the rate of 19.63 mm/10yr during 1980-2015. There were decreasing trends for precipitation in all four seasons since 1980 in the YRB, and a significant increasing trend was only detected in summer since 1960 (12.37 mm/10yr). Overall, a warming-wetting trend was detected in the south-eastern and north-western YRB, while there was a warming-drying trend in middle regions.

  12. Statistical and population genetics issues of two Hungarian datasets from the aspect of DNA evidence interpretation.

    PubMed

    Szabolcsi, Zoltán; Farkas, Zsuzsa; Borbély, Andrea; Bárány, Gusztáv; Varga, Dániel; Heinrich, Attila; Völgyi, Antónia; Pamjav, Horolma

    2015-11-01

    When the DNA profile from a crime scene matches that of a suspect, the weight of the DNA evidence depends on the unbiased estimation of the match probability of the profiles. For this reason, it is necessary to establish and expand databases that reflect the actual allele frequencies in the relevant population. 21,473 complete DNA profiles from Databank samples were used to establish the allele frequency database to represent the population of Hungarian suspects. We used fifteen STR loci (PowerPlex ESI16), including five new ESS loci. The aim was to calculate the statistical, forensic efficiency parameters for the Databank samples and to compare the newly detected data to the earlier report. Population substructure caused by relatedness may influence the estimated profile frequencies. As our Databank profiles were considered non-random samples, possible relationships between the suspects can be assumed. Therefore, the population inbreeding effect was estimated using the FIS calculation. The overall inbreeding parameter was found to be 0.0106. Furthermore, we tested the impact of the two allele frequency datasets on 101 randomly chosen STR profiles, including full and partial profiles. The 95% confidence interval estimates for the profile frequencies (pM) resulted in a tighter range when we used the new dataset compared to the previously published ones. We found that the FIS had less effect on frequency values in the 21,473 samples than the application of minimum allele frequency. No genetic substructure was detected by STRUCTURE analysis. Due to the low level of inbreeding effect and the high number of samples, the new dataset provides unbiased and precise estimates of LR for statistical interpretation of forensic casework and allows us to use lower allele frequencies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Spectral factorization of wavefields and wave operators

    NASA Astrophysics Data System (ADS)

    Rickett, James Edward

    Spectral factorization is the problem of finding a minimum-phase function with a given power spectrum. Minimum phase functions have the property that they are causal with a causal (stable) inverse. In this thesis, I factor multidimensional systems into their minimum-phase components. Helical boundary conditions resolve any ambiguities over causality, allowing me to factor multi-dimensional systems with conventional one-dimensional spectral factorization algorithms. In the first part, I factor passive seismic wavefields recorded in two-dimensional spatial arrays. The result provides an estimate of the acoustic impulse response of the medium that has higher bandwidth than autocorrelation-derived estimates. Also, the function's minimum-phase nature mimics the physics of the system better than the zero-phase autocorrelation model. I demonstrate this on helioseismic data recorded by the satellite-based Michelson Doppler Imager (MDI) instrument, and shallow seismic data recorded at Long Beach, California. In the second part of this thesis, I take advantage of the stable-inverse property of minimum-phase functions to solve wave-equation partial differential equations. By factoring multi-dimensional finite-difference stencils into minimum-phase components, I can invert them efficiently, facilitating rapid implicit extrapolation without the azimuthal anisotropy that is observed with splitting approximations. The final part of this thesis describes how to calculate diagonal weighting functions that approximate the combined operation of seismic modeling and migration. These weighting functions capture the effects of irregular subsurface illumination, which can be the result of either the surface-recording geometry, or focusing and defocusing of the seismic wavefield as it propagates through the earth. Since they are diagonal, they can be easily both factored and inverted to compensate for uneven subsurface illumination in migrated images. Experimental results show that applying these weighting functions after migration leads to significantly improved estimates of seismic reflectivity.
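
    The helical trick reduces multidimensional factorization to a one-dimensional problem, for which the classic Kolmogorov (cepstral) method applies: take the log of the power spectrum, keep the causal part of its cepstrum, and exponentiate. The sketch below (numpy assumed) illustrates that 1D engine; it is not the thesis code.

```python
# Kolmogorov (cepstral) spectral factorization: given a power spectrum,
# recover the minimum-phase time series with that spectrum.
import numpy as np

def kolmogorov_factor(power_spectrum):
    """Minimum-phase signal whose squared amplitude spectrum is the input."""
    log_s = np.log(np.maximum(power_spectrum, 1e-30))
    cep = np.fft.ifft(log_s)                 # cepstrum of the log spectrum
    n = len(cep)
    causal = np.zeros(n, dtype=complex)
    causal[0] = cep[0] / 2                   # split the zero-quefrency term
    causal[1:n // 2] = cep[1:n // 2]         # keep positive quefrencies
    if n % 2 == 0:
        causal[n // 2] = cep[n // 2] / 2     # split the Nyquist term
    # Exponentiating a causal log-spectrum yields a minimum-phase spectrum.
    return np.real(np.fft.ifft(np.exp(np.fft.fft(causal))))

# Check: the factor reproduces the given power spectrum.
x = np.random.default_rng(1).normal(size=256)
spec = np.abs(np.fft.fft(x)) ** 2
f = kolmogorov_factor(spec)
print(np.allclose(np.abs(np.fft.fft(f)) ** 2, spec, rtol=1e-4))
```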

  14. Minimum information about a single amplified genome (MISAG) and a metagenome-assembled genome (MIMAG) of bacteria and archaea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas

    We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG); they cover, but are not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.

  15. Minimum information about a single amplified genome (MISAG) and a metagenome-assembled genome (MIMAG) of bacteria and archaea

    DOE PAGES

    Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas; ...

    2017-08-08

    Here, we present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG); they cover, but are not limited to, assembly quality and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.

  17. A genome-wide signature of positive selection in ancient and recent invasive expansions of the honey bee Apis mellifera

    PubMed Central

    Zayed, Amro; Whitfield, Charles W.

    2008-01-01

    Apis mellifera originated in Africa and extended its range into Eurasia in two or more ancient expansions. In 1956, honey bees of African origin were introduced into South America, their descendants admixing with previously introduced European bees, giving rise to the highly invasive and economically devastating “Africanized” honey bee. Here we ask whether the honey bee's out-of-Africa expansions, both ancient and recent (invasive), were associated with a genome-wide signature of positive selection, detected by contrasting genetic differentiation estimates (FST) between coding and noncoding SNPs. In native populations, SNPs in protein-coding regions had significantly higher FST estimates than those in noncoding regions, indicating adaptive evolution in the genome driven by positive selection. This signal of selection was associated with the expansion of honey bees from Africa into Western and Northern Europe, perhaps reflecting adaptation to temperate environments. We estimate that positive selection acted on a minimum of 852-1,371 genes, or ≈10% of the bee's coding genome. We also detected positive selection associated with the invasion of African-derived honey bees in the New World. We found that introgression of European-derived alleles into Africanized bees was significantly greater for coding than noncoding regions. Our findings demonstrate that Africanized bees exploited the genetic diversity present from preexisting introductions in an adaptive way. Finally, we found a significant negative correlation between FST estimates and the local GC content surrounding coding SNPs, suggesting that AT-rich genes play an important role in adaptive evolution in the honey bee. PMID:18299560

  18. A genome-wide signature of positive selection in ancient and recent invasive expansions of the honey bee Apis mellifera.

    PubMed

    Zayed, Amro; Whitfield, Charles W

    2008-03-04

    Apis mellifera originated in Africa and extended its range into Eurasia in two or more ancient expansions. In 1956, honey bees of African origin were introduced into South America, their descendants admixing with previously introduced European bees, giving rise to the highly invasive and economically devastating "Africanized" honey bee. Here we ask whether the honey bee's out-of-Africa expansions, both ancient and recent (invasive), were associated with a genome-wide signature of positive selection, detected by contrasting genetic differentiation estimates (FST) between coding and noncoding SNPs. In native populations, SNPs in protein-coding regions had significantly higher FST estimates than those in noncoding regions, indicating adaptive evolution in the genome driven by positive selection. This signal of selection was associated with the expansion of honey bees from Africa into Western and Northern Europe, perhaps reflecting adaptation to temperate environments. We estimate that positive selection acted on a minimum of 852-1,371 genes, or approximately 10% of the bee's coding genome. We also detected positive selection associated with the invasion of African-derived honey bees in the New World. We found that introgression of European-derived alleles into Africanized bees was significantly greater for coding than noncoding regions. Our findings demonstrate that Africanized bees exploited the genetic diversity present from preexisting introductions in an adaptive way. Finally, we found a significant negative correlation between FST estimates and the local GC content surrounding coding SNPs, suggesting that AT-rich genes play an important role in adaptive evolution in the honey bee.

  19. Robust linear discriminant models to solve financial crisis in banking sectors

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni

    2014-12-01

    Linear discriminant analysis (LDA) is a widely used technique in pattern classification via a rule that minimizes the probability of misclassifying cases into their respective categories. However, the performance of the classical estimators in LDA depends strongly on the assumptions of normality and homoscedasticity. Several robust estimators in LDA, such as the Minimum Covariance Determinant (MCD), S-estimators and the Minimum Volume Ellipsoid (MVE), have been proposed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis of the Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia by using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models. The performance of LDA is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
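
    As an illustration of the idea, the sketch below builds a two-class linear discriminant from robust Minimum Covariance Determinant (MCD) location and scatter estimates, one of the robust variants named above. It uses scikit-learn's MinCovDet, and the "distress"/"non-distress" data are synthetic stand-ins, not the Malaysian banking data.

```python
# Two-class LDA with robust MCD estimates in place of the classical
# mean/covariance. Synthetic data; not the authors' study data.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 1.0, size=(80, 2))          # "non-distress" banks
X1 = rng.normal([2, 1], 1.0, size=(60, 2))          # "distress" banks
X1[:3] += 15                                         # a few gross outliers

# Robust per-class estimates, then a pooled scatter matrix.
mcd0, mcd1 = MinCovDet().fit(X0), MinCovDet().fit(X1)
n0, n1 = len(X0), len(X1)
pooled = ((n0 - 1) * mcd0.covariance_ + (n1 - 1) * mcd1.covariance_) / (n0 + n1 - 2)

w = np.linalg.solve(pooled, mcd1.location_ - mcd0.location_)   # Fisher direction
c = w @ (mcd0.location_ + mcd1.location_) / 2                  # midpoint cutoff

def classify(X):
    return (X @ w > c).astype(int)      # 1 = "distress"

# Hit ratio (apparent correct-classification rate) on the training data.
pred = np.concatenate([classify(X0), classify(X1)])
truth = np.concatenate([np.zeros(n0), np.ones(n1)])
print("hit ratio:", (pred == truth).mean())
```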

  20. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distributions. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing the shrinkage intensity online. Our portfolio optimization method is shown via simulations to outperform existing methods for both synthetic and real market data.
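
    A minimal sketch of the two ingredients, assuming a fixed shrinkage intensity rather than the paper's online optimization: a shrinkage-regularized Tyler fixed point for the covariance, followed by global-minimum-variance weights w = S^-1 1 / (1' S^-1 1). The fixed-point form follows the regularized-Tyler literature in spirit and is not the authors' exact estimator.

```python
# Shrinkage-regularized Tyler M-estimator plus minimum-variance weights.
import numpy as np

def regularized_tyler(X, rho=0.1, iters=50):
    n, p = X.shape
    S = np.eye(p)
    for _ in range(iters):
        inv = np.linalg.inv(S)
        d = np.einsum('ij,jk,ik->i', X, inv, X)      # x_i' S^-1 x_i
        S = (1 - rho) * (p / n) * (X / d[:, None]).T @ X + rho * np.eye(p)
        S *= p / np.trace(S)                          # fix the scale
    return S

def gmv_weights(S):
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

# Heavy-tailed synthetic returns with n of similar order to p.
rng = np.random.default_rng(0)
X = rng.standard_t(df=3, size=(250, 100))
w = gmv_weights(regularized_tyler(X))
print("weights sum:", w.sum(), " realized var:", w @ np.cov(X.T) @ w)
```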

  1. The minimum follow-up required for radial head arthroplasty: a meta-analysis.

    PubMed

    Laumonerie, P; Reina, N; Kerezoudis, P; Declaux, S; Tibbo, M E; Bonnevialle, N; Mansat, P

    2017-12-01

    The primary aim of this study was to define the standard minimum follow-up required to produce a reliable estimate of the rate of re-operation after radial head arthroplasty (RHA). The secondary objective was to define the leading reasons for re-operation. Four electronic databases were searched for articles published between January 2000 and March 2017. Articles reporting reasons for re-operation (Group I) and results (Group II) after RHA were included. In Group I, a meta-analysis was performed to obtain the standard minimum follow-up, the mean time to re-operation and the reasons for failure. In Group II, the minimum follow-up for each study was compared with the standard minimum follow-up. A total of 40 studies were analysed: three were in Group I and included 80 implants, and 37 were in Group II and included 1192 implants. In Group I, the mean time to re-operation was 1.37 years (0 to 11.25), the standard minimum follow-up was 3.25 years, and painful loosening was the main indication for re-operation. In Group II, 33 articles (89.2%) reported a minimum follow-up of < 3.25 years. The literature does not provide a reliable estimate of the rate of re-operation after RHA. The reproducibility of results would be improved by using a minimum follow-up of three years combined with a consensus definition of the reasons for failure after RHA. Cite this article: Bone Joint J 2017;99-B:1561-70. ©2017 The British Editorial Society of Bone & Joint Surgery.

  2. Infrared imaging based hyperventilation monitoring through respiration rate estimation

    NASA Astrophysics Data System (ADS)

    Basu, Anushree; Routray, Aurobinda; Mukherjee, Rashmi; Shit, Suprosanna

    2016-07-01

    A change in skin temperature is used as an indicator of physical illness and can be detected through infrared thermography. Thermograms, or thermal images, can be used as an effective diagnostic tool for the monitoring and diagnosis of various diseases. This paper describes an infrared thermography based approach for detecting hyperventilation caused by stress and anxiety in human beings by computing their respiration rates. The work employs computer vision techniques for tracking the region of interest in thermal video to compute the breath rate. Experiments have been performed on 30 subjects. Corner feature extraction using the Minimum Eigenvalue (Shi-Tomasi) algorithm and registration using the Kanade-Lucas-Tomasi algorithm have been used here. The thermal signature around the extracted region is detected and subsequently filtered through a band-pass filter to compute the respiration profile of an individual. If the respiration profile shows an unusual pattern and exceeds the threshold, we conclude that the person is stressed and tending to hyperventilate. The results obtained are compared with standard contact-based methods and show significant correlations. It is envisaged that the thermal image based approach will not only help in detecting hyperventilation but can also assist in regular stress monitoring, as it is a non-invasive method.
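
    The processing chain described above can be sketched with standard tools: Shi-Tomasi corners seed a region of interest, Kanade-Lucas-Tomasi optical flow tracks it, and a band-pass filter on the mean region intensity exposes the breathing frequency. OpenCV and SciPy are assumed, and 'thermal.avi' is a placeholder file name, not the study's data.

```python
# ROI tracking plus band-pass filtering to estimate a respiration rate.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

cap = cv2.VideoCapture('thermal.avi')                # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01, minDistance=7)

signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
    pts = pts[status.ravel() == 1].reshape(-1, 1, 2)
    x, y = pts.reshape(-1, 2).mean(axis=0).astype(int)   # ROI centre
    roi = gray[max(y - 10, 0):y + 10, max(x - 10, 0):x + 10]
    signal.append(roi.mean())                            # thermal signature
    prev = gray

# Band-pass 0.1-0.85 Hz (~6-51 breaths/min), then take the dominant frequency.
b, a = butter(2, [0.1 / (fps / 2), 0.85 / (fps / 2)], btype='band')
resp = filtfilt(b, a, np.asarray(signal))
freqs = np.fft.rfftfreq(len(resp), d=1 / fps)
rate_hz = freqs[np.argmax(np.abs(np.fft.rfft(resp)))]
print(f"respiration rate ~ {rate_hz * 60:.1f} breaths/min")
```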

  3. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to conditional maximum likelihood method, at least with small numbers of items. The minimum Chi-square estimation procedure produces unbiased…

  4. Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme

    NASA Technical Reports Server (NTRS)

    Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong

    2011-01-01

    A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults, but not both, this method considers sensor faults, actuator faults, and component faults under one systematic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.

  5. Ion-neutral Coupling During Deep Solar Minimum

    NASA Technical Reports Server (NTRS)

    Huang, Cheryl Y.; Roddy, Patrick A.; Sutton, Eric K.; Stoneback, Russell; Pfaff, Robert F.; Gentile, Louise C.; Delay, Susan H.

    2013-01-01

    The equatorial ionosphere under conditions of deep solar minimum exhibits structuring due to tidal forces. Data from instruments carried by the Communication/Navigation Outage Forecasting System (C/NOFS), which was launched in April 2008, have been analyzed for the first 2 years following launch. The Planar Langmuir Probe (PLP), Ion Velocity Meter (IVM) and Vector Electric Field Investigation (VEFI) all detect periodic structures during the 2008-2010 period which appear to be tides. However, when the tidal features detected by these instruments are compared, there are distinctive and significant differences between the observations. Tides in neutral densities measured by the Gravity Recovery and Climate Experiment (GRACE) satellite were also observed during June 2008. In addition, Broad Plasma Decreases (BPDs) appear as a deep absolute minimum in the plasma and neutral density tidal pattern. These are co-located with regions of large downward-directed ion meridional velocities and minima in the zonal drifts, all on the nightside. The region in which BPDs occur coincides with a peak in the occurrence rate of dawn depletions in plasma density observed by the Defense Meteorological Satellite Program (DMSP) spacecraft, as well as a minimum in radiance detected by UV imagers on the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) and IMAGE satellites.

  6. Characterization of Terahertz Bi-Material Sensors with Integrated Metamaterial Absorbers

    DTIC Science & Technology

    2013-09-01

    … Kumar, Qing Hu, and J. L. Reno, "Real-time imaging using a 4.3-THz quantum cascade laser and a 320x240 microbolometer focal-plane array," IEEE … The responsivity, the speed of operation, and the minimum detected incident power were measured using a quantum cascade laser (QCL) operating at 3.8 THz.

  7. Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection (PREPRINT)

    DTIC Science & Technology

    2011-01-01

    Minimum Error Bounded Efficient ℓ1 Tracker with Occlusion Detection. Xue Mei, Haibin Ling, Yi Wu, Erik Blasch, Li Bai. … The proposed BPR-L1 tracker is tested on several challenging benchmark sequences involving challenges such as occlusion and illumination changes. … The interior point method depends on the value of the regularization parameter λ. In the experiments, we found that the total number of PCG iterations is a few hundred.

  8. Cloud-to-Ground Lightning Estimates Derived from SSMI Microwave Remote Sensing and NLDN

    NASA Technical Reports Server (NTRS)

    Winesett, Thomas; Magi, Brian; Cecil, Daniel

    2015-01-01

    Lightning observations are collected using ground-based and satellite-based sensors. The National Lightning Detection Network (NLDN) in the United States uses multiple ground sensors to triangulate the electromagnetic signals created when lightning strikes the Earth's surface. Satellite-based lightning observations have been made from 1998 to present using the Lightning Imaging Sensor (LIS) on the NASA Tropical Rainfall Measuring Mission (TRMM) satellite, and from 1995 to 2000 using the Optical Transient Detector (OTD) on the Microlab-1 satellite. Both LIS and OTD are staring imagers that detect lightning as momentary changes in an optical scene. Passive microwave remote sensing (85 and 37 GHz brightness temperatures) from the TRMM Microwave Imager (TMI) has also been used to quantify characteristics of thunderstorms related to lightning. Each lightning detection system has fundamental limitations. TRMM satellite coverage is limited to the tropics and subtropics between 38 deg N and 38 deg S, so lightning at the higher latitudes of the northern and southern hemispheres is not observed. The detection efficiency of NLDN sensors exceeds 95%, but the sensors are only located in the USA. Even if data from other ground-based lightning sensors (World Wide Lightning Location Network, the European Cooperation for Lightning Detection, and Canadian Lightning Detection Network) were combined with TRMM and NLDN, there would be enormous spatial gaps in present-day coverage of lightning. In addition, a globally-complete time history of observed lightning activity is currently not available either, with network coverage and detection efficiencies varying through the years. Previous research using the TRMM LIS and TMI showed that there is a statistically significant correlation between lightning flash rates and passive microwave brightness temperatures. The physical basis for this correlation emerges because lightning in a thunderstorm occurs where ice is first present in the cloud and electric charge separation occurs. These ice particles efficiently scatter the microwave radiation at the 85 and 37 GHz frequencies, thus leading to large brightness temperature depressions. Lightning flash rate is related to the total amount of ice passing through the convective updraft regions of thunderstorms. Confirmation of this relationship using TRMM LIS and TMI data, however, remains constrained to TRMM observational limits of the tropics and subtropics. Satellites from the Defense Meteorological Satellite Program (DMSP) have global coverage and are equipped with passive microwave imagers that, like TMI, observe brightness temperatures at 85 and 37 GHz. Unlike the TRMM satellite, however, DMSP satellites do not have a lightning sensor, and the DMSP microwave data has never been used to derive global lightning. In this presentation, a relationship between DMSP Special Sensor Microwave Imager (SSMI) data and ground-based cloud-to-ground (CG) lightning data from NLDN is investigated to derive a spatially complete time history of CG lightning for the USA study area. This relationship is analogous to that established using TRMM LIS and TMI data. NLDN has the most spatially and temporally complete CG lightning data for the USA, and therefore provides the best opportunity to find geospatially coincident observations with SSMI sensors. The strongest thunderstorms generally have minimum 85 GHz polarization-corrected brightness temperatures (PCT) less than 150 K.
Archived radar data was used to resolve the spatial extent of the individual storms. NLDN data for that storm spatial extent defined by radar data was used to calculate the CG flash rate for the storm. Similar to results using TRMM sensors, a linear model best explained the relationship between storm-specific CG flash rates and minimum 85 GHz PCT. However, the results in this study apply only to CG lightning. To extend the results to weaker storms, the probability of CG lightning (instead of the flash rate) was calculated for storms having 85 GHz PCT greater than 150 K. NLDN data was used to determine if a CG strike occurred for a storm. This probability of CG lightning was plotted as a function of minimum 85 GHz PCT and minimum 37 GHz PCT. These probabilities were used in conjunction with the linear model to estimate the CG flash rate for weaker storms with minimum 85 GHz PCTs greater than 150 K. Results from the investigation of CG lightning and passive microwave radiation signals agree with the previous research investigating total lightning and brightness temperature. Future work will take the established relationships and apply them to the decades of available DMSP data for the USA to derive a map of CG lightning flash rates. Validation of this method and uncertainty analysis will be done by comparing the derived maps of CG lightning flash rates against existing NLDN maps of CG lightning flash rates.

  9. Demonstration of a portable near-infrared CH4 detection sensor based on tunable diode laser absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Zheng, Chuan-Tao; Huang, Jian-Qiang; Ye, Wei-Lin; Lv, Mo; Dang, Jing-Min; Cao, Tian-Shu; Chen, Chen; Wang, Yi-Ding

    2013-11-01

    A portable near-infrared (NIR) CH4 detection sensor based on a distributed feedback (DFB) laser modulated at 1.654 μm is experimentally demonstrated. An intelligent temperature controller with an accuracy of -0.07 to +0.09 °C, as well as a scan and modulation module generating saw-wave and cosine-wave signals, was developed to drive the DFB laser, and a cost-effective lock-in amplifier used to extract the second harmonic signal was integrated. Thorough experiments were carried out to characterize the detection performance, including the detection range, accuracy, stability and minimum detection limit (MDL). Measurement results show that the absolute detection error relative to the standard value is less than 7% within the range of 0-100%, and the MDL is estimated to be about 11 ppm under an absorption length of 0.2 m and a noise level of 2 mVpp. Twenty-four-hour monitoring of two gas samples (0.1% and 20%) indicates that the absolute errors are less than 7% and 2.5%, respectively, suggesting good long-term stability. The sensor exhibits competitive characteristics compared with other reported portable or handheld sensors. The developed sensor can also be used for the detection of other gases by adopting other DFB lasers with different center wavelengths, using the same hardware and slightly modified software.
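
    The second-harmonic extraction step can be illustrated with a digital lock-in: multiply the detector signal by references at twice the modulation frequency and low-pass filter. The sketch below runs on a simulated signal (numpy/scipy assumed); the frequencies and amplitudes are illustrative, not the sensor's actual parameters.

```python
# Digital lock-in recovery of the 2f harmonic used in wavelength-modulation
# spectroscopy. The detector signal below is simulated, not sensor data.
import numpy as np
from scipy.signal import butter, filtfilt

fs, f_mod = 100_000.0, 5_000.0
t = np.arange(0, 0.05, 1 / fs)
# Simulated detector output: absorption puts power at 2*f_mod plus noise.
detector = (0.02 * np.cos(2 * np.pi * 2 * f_mod * t + 0.3)
            + 1.0 * np.cos(2 * np.pi * f_mod * t)
            + 0.005 * np.random.default_rng(0).normal(size=t.size))

ref_i = np.cos(2 * np.pi * 2 * f_mod * t)    # in-phase 2f reference
ref_q = np.sin(2 * np.pi * 2 * f_mod * t)    # quadrature reference
b, a = butter(4, 200 / (fs / 2))             # 200 Hz low-pass
i_out = filtfilt(b, a, detector * ref_i)
q_out = filtfilt(b, a, detector * ref_q)
r = 2 * np.sqrt(i_out ** 2 + q_out ** 2)     # 2f magnitude, phase-insensitive
print(f"recovered 2f amplitude ~ {r[len(r) // 2]:.4f} (true 0.02)")
```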

  10. Estimation of Surface Air Temperature from MODIS 1km Resolution Land Surface Temperature Over Northern China

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, Gregory G.; Gerasimov, Irina

    2010-01-01

    Surface air temperature is a critical variable in describing the energy and water cycle of the Earth-atmosphere system and is a key input for hydrology and land surface models. It is a very important variable in agricultural applications and climate change studies. This is a preliminary study examining the statistical relationships between surface daily maximum/minimum air temperature measured at ground meteorological stations and land surface temperature remotely sensed by MODIS over the dry and semiarid regions of northern China. Analyses were conducted for both MODIS-Terra and MODIS-Aqua using data from 2009. Results indicate that the relationships between surface air temperature and remotely sensed land surface temperature are statistically significant. The relationship between maximum air temperature and daytime land surface temperature depends significantly on land surface type and vegetation index, but the relationship between minimum air temperature and nighttime land surface temperature shows little dependence on surface conditions. Based on the linear regression relationship between surface air temperature and MODIS land surface temperature, surface maximum and minimum air temperatures are estimated from 1 km MODIS land surface temperature under clear sky conditions. The statistical error (sigma) of the estimated daily maximum (minimum) air temperature is about 3.8°C (3.7°C).
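
    A minimal sketch of the regression step described above, with synthetic stand-ins for the collocated MODIS LST and station observations (scipy assumed):

```python
# Fit daily maximum air temperature against daytime LST and report the
# regression and its residual error (sigma). Synthetic placeholder arrays.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
lst_day = rng.uniform(5, 45, 400)                       # daytime LST, deg C
t_max = 0.78 * lst_day + 2.1 + rng.normal(0, 3.8, 400)  # station daily Tmax

fit = linregress(lst_day, t_max)
t_max_hat = fit.slope * lst_day + fit.intercept
sigma = np.std(t_max - t_max_hat, ddof=2)
print(f"Tmax = {fit.slope:.2f}*LST + {fit.intercept:.2f}, "
      f"r^2 = {fit.rvalue ** 2:.2f}, sigma = {sigma:.1f} C")
```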

  11. An Empirical Study of Design Parameters for Assessing Differential Impacts for Students in Group Randomized Trials.

    PubMed

    Jaciw, Andrew P; Lin, Li; Ma, Boya

    2016-10-18

    Prior research has investigated design parameters for assessing average program impacts on achievement outcomes with cluster randomized trials (CRTs). Less is known about the parameters important for assessing differential impacts. This article develops a statistical framework for designing CRTs to assess differences in impact among student subgroups and presents initial estimates of critical parameters. Effect sizes and minimum detectable effect sizes for average and differential impacts are calculated before and after conditioning on the effects of covariates, using results from several CRTs. Relative sensitivities to detect average and differential impacts are also examined. Student outcomes from six CRTs are analyzed: achievement in math, science, reading, and writing. The ratio of the between-cluster variation in the slope of the moderator to the total variance, the "moderator gap variance ratio", is important for designing studies to detect differences in impact between student subgroups. This quantity is the analogue of the intraclass correlation coefficient. Typical values were .02 for gender and .04 for socioeconomic status. For the studies considered, in many cases estimates of differential impact were larger than those of average impact, and after conditioning on the effects of covariates, similar power was achieved for detecting average and differential impacts of the same size. Measuring differential impacts is important for addressing questions of equity and generalizability and for guiding the interpretation of subgroup impact findings. Adequate power for doing this is in some cases reachable with CRTs designed to measure average impacts. The continuing collection of parameters for assessing differential impacts is the next step. © The Author(s) 2016.

  12. Low Streamflow Forcasting using Minimum Relative Entropy

    NASA Astrophysics Data System (ADS)

    Cui, H.; Singh, V. P.

    2013-12-01

    Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation in such a way that the relative entropy of the underlying process is minimized, so that the time series can be forecasted. Different priors, such as uniform, exponential and Gaussian assumptions, are taken to estimate the spectral density, depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectrum of a low streamflow series with higher resolution than conventional methods. The forecasted streamflow is compared to predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
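
    The minimum-relative-entropy estimator itself is specialized, but the Burg MESA benchmark mentioned above is compact enough to sketch: fit an autoregressive model by Burg's recursion and extrapolate it forward. This is a generic illustration (numpy assumed), not the study's code or data.

```python
# Burg maximum-entropy AR fit plus a simple out-of-sample forecast.
import numpy as np

def burg_ar(x, order):
    """AR coefficients a of the error filter [1, a1, ..., ap], Burg's method."""
    f, b = np.asarray(x[1:], float).copy(), np.asarray(x[:-1], float).copy()
    a = np.zeros(0)
    for _ in range(order):
        k = -2.0 * f.dot(b) / (f.dot(f) + b.dot(b))   # reflection coefficient
        a = np.concatenate([a + k * a[::-1], [k]])
        f, b = (f + k * b)[1:], (b + k * f)[:-1]      # update, then shrink
    return a

def ar_forecast(x, a, steps):
    """Iterate x_hat[n] = -sum_i a[i] * x[n-i] beyond the end of x."""
    hist = list(x)
    for _ in range(steps):
        hist.append(-np.dot(a, hist[-1:-len(a) - 1:-1]))
    return np.array(hist[len(x):])

# Example: a seasonal low-flow-like series (annual cycle plus noise).
rng = np.random.default_rng(0)
t = np.arange(240)
x = 20 + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)
a = burg_ar(x - x.mean(), order=24)
print(ar_forecast(x - x.mean(), a, steps=6) + x.mean())
```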

  13. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography

    PubMed Central

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-01-01

    Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
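
    A shape-only sketch of the maxi-min search, with the `cma` package (pycma) assumed and `detectability` a hypothetical stand-in for the paper's d' model:

```python
# CMA-ES search over Gaussian basis coefficients to maximize the minimum
# detectability across sample locations. `detectability` is hypothetical.
import numpy as np
import cma

n_basis, locations = 8, np.linspace(0, 1, 25)

def detectability(coeffs, loc):
    # Hypothetical smooth response of d' to the fluence pattern at `loc`.
    centers = np.linspace(0, 1, len(coeffs))
    fluence = np.dot(coeffs, np.exp(-(loc - centers) ** 2 / 0.02))
    return fluence / (1.0 + fluence)          # saturating, positive

def neg_min_dprime(coeffs):
    coeffs = np.abs(coeffs)                   # fluence must be non-negative
    return -min(detectability(coeffs, l) for l in locations)

es = cma.CMAEvolutionStrategy(n_basis * [1.0], 0.5, {'maxiter': 100, 'verbose': -9})
es.optimize(neg_min_dprime)
print("best min d' =", -es.result.fbest)
```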

  14. Joint optimization of fluence field modulation and regularization in task-driven computed tomography

    NASA Astrophysics Data System (ADS)

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-03-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  15. Minimum area thresholds for rattlesnakes and colubrid snakes on islands in the Gulf of California, Mexico.

    PubMed

    Meik, Jesse M; Makowsky, Robert

    2018-01-01

    We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting single species versus coexistence of two or more species relate to hypotheses of the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km²), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km²) was over five times greater than it was for supporting two or more species of colubrids (6.7 km²). The great differences between rattlesnakes and colubrids in the minimum area required to support more than one species imply that for islands in the Gulf of California relative extinction risks are higher for coexistence of multiple species of rattlesnakes and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.

  16. Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage.

    PubMed

    Cadena, Brian C

    2014-03-01

    This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants' location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents.

  17. The Effect of Minimum Wages on Adolescent Fertility: A Nationwide Analysis.

    PubMed

    Bullinger, Lindsey Rose

    2017-03-01

    To investigate the effect of minimum wage laws on adolescent birth rates in the United States. I used a difference-in-differences approach and vital statistics data measured quarterly at the state level from 2003 to 2014. All models included state covariates, state and quarter-year fixed effects, and state-specific quarter-year nonlinear time trends, which provided plausibly causal estimates of the effect of minimum wage on adolescent birth rates. A $1 increase in minimum wage reduces adolescent birth rates by about 2%. The effects are driven by non-Hispanic White and Hispanic adolescents. Nationwide, increasing minimum wages by $1 would likely result in roughly 5000 fewer adolescent births annually.
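
    A shape-only sketch of the difference-in-differences specification described above, using statsmodels' formula API; the data frame, its column names, and the synthetic data are hypothetical stand-ins, not the vital statistics records.

```python
# Two-way fixed-effects DiD: birth rates on the minimum wage with state and
# quarter-year fixed effects, standard errors clustered by state.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "state": np.repeat([f"S{i}" for i in range(10)], 48),
    "qtr": np.tile(pd.period_range("2003Q1", periods=48, freq="Q").astype(str), 10),
    "min_wage": rng.uniform(5.15, 10.0, 480),
})
df["birth_rate"] = 40 - 0.8 * df["min_wage"] + rng.normal(0, 2, 480)

model = smf.ols("birth_rate ~ min_wage + C(state) + C(qtr)", data=df)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
print(res.params["min_wage"], res.bse["min_wage"])
```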

  18. A High-Frequency Linear Ultrasonic Array Utilizing an Interdigitally Bonded 2-2 Piezo-Composite

    PubMed Central

    Cannata, Jonathan M.; Williams, Jay A.; Zhang, Lequan; Hu, Chang-Hong; Shung, K. Kirk

    2011-01-01

    This paper describes the development of a high-frequency 256-element linear ultrasonic array utilizing an interdigitally bonded (IB) piezo-composite. Several IB composites were fabricated with different commercial and experimental piezoelectric ceramics and evaluated to determine a suitable formulation for use in high-frequency linear arrays. It was found that the fabricated fine-scale 2–2 IB composites outperformed 1–3 IB composites with identical pillar- and kerf-widths. This result was not expected and led to the conclusion that dicing damage was likely the cause of the discrepancy. Ultimately, a 2–2 composite fabricated using a fine-grain piezoelectric ceramic was chosen for the array. The composite was manufactured using one IB operation in the azimuth direction to produce approximately 19-μm-wide pillars separated by 6-μm-wide kerfs. The array had a 50 μm (one wavelength in water) azimuth pitch, two matching layers, and a 2 mm elevation length focused to 7.3 mm using a polymethylpentene (TPX) lens. The measured pulse-echo center frequency for a representative array element was 28 MHz and the −6-dB bandwidth was 61%. The single-element transmit −6-dB directivity was estimated to be 50°. The measured insertion loss was 19 dB after compensating for the effects of attenuation and diffraction in the water bath. A fine-wire phantom was used to assess the lateral and axial resolution of the array when paired with a prototype system utilizing a 64-channel analog beamformer. The −6-dB lateral and axial resolutions were estimated to be 125 and 68 μm, respectively. An anechoic cyst phantom was also imaged to determine the minimum detectable spherical inclusion, and thus the 3-D resolution of the array and beamformer. The minimum anechoic cyst detected was approximately 300 μm in diameter. PMID:21989884

  19. Evaluating the U.S. Food Safety Modernization Act Produce Safety Rule Standard for Microbial Quality of Agricultural Water for Growing Produce.

    PubMed

    Havelaar, Arie H; Vazquez, Kathleen M; Topalcengiz, Zeynal; Muñoz-Carpena, Rafael; Danyluk, Michelle D

    2017-10-09

    The U.S. Food and Drug Administration (FDA) has defined standards for the microbial quality of agricultural surface water used for irrigation. According to the FDA produce safety rule (PSR), a microbial water quality profile requires analysis of a minimum of 20 samples for Escherichia coli over 2 to 4 years. The geometric mean (GM) level of E. coli should not exceed 126 CFU/100 mL, and the statistical threshold value (STV) should not exceed 410 CFU/100 mL. The water quality profile should be updated by analysis of a minimum of five samples per year. We used an extensive set of data on levels of E. coli and other fecal indicator organisms, the presence or absence of Salmonella, and physicochemical parameters in six agricultural irrigation ponds in West Central Florida to evaluate the empirical and theoretical basis of this PSR. We found highly variable log-transformed E. coli levels, with standard deviations exceeding those assumed in the PSR by up to threefold. Lognormal distributions provided an acceptable fit to the data in most cases but may underestimate extreme levels. Replacing censored data with the detection limit of the microbial tests underestimated the true variability, leading to biased estimates of GM and STV. Maximum likelihood estimation using truncated lognormal distributions is recommended. Twenty samples are not sufficient to characterize the bacteriological quality of irrigation ponds, and a rolling data set of five samples per year used to update GM and STV values results in highly uncertain results and delays in detecting a shift in water quality. In these ponds, E. coli was an adequate predictor of the presence of Salmonella in 150-mL samples, and turbidity was a second significant variable. The variability in levels of E. coli in agricultural water was higher than that anticipated when the PSR was finalized, and more detailed information based on mechanistic modeling is necessary to develop targeted risk management strategies.
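
    For concreteness, the sketch below computes GM and STV from a set of E. coli counts, taking STV as the estimated 90th percentile of a lognormal fit, which is the usual interpretation of the PSR threshold; the counts are illustrative, not study data.

```python
# Geometric mean (GM) and statistical threshold value (STV) from n E. coli
# counts in CFU/100 mL, assuming lognormally distributed levels.
import numpy as np

ecoli = np.array([35, 120, 12, 310, 88, 15, 240, 56, 410, 72,
                  19, 150, 95, 28, 610, 44, 130, 23, 77, 180.0])
logs = np.log10(ecoli)
gm = 10 ** logs.mean()
stv = 10 ** (logs.mean() + 1.282 * logs.std(ddof=1))   # ~90th percentile
print(f"GM = {gm:.0f} CFU/100 mL (limit 126), STV = {stv:.0f} (limit 410)")
```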

  20. Rapid Characterization of Large Earthquakes in Chile

    NASA Astrophysics Data System (ADS)

    Barrientos, S. E.; Team, C.

    2015-12-01

    Chile, along 3000 km of its 4200-km-long coast, is regularly affected by very large earthquakes (up to magnitude 9.5) resulting from the convergence and subduction of the Nazca plate beneath the South American plate. These megathrust earthquakes exhibit long rupture regions reaching several hundreds of km, with fault displacements of several tens of meters. Minimum-delay characterization of these giant events to establish their rupture extent and slip distribution is of the utmost importance for rapid estimation of the shaking area and the corresponding tsunamigenic potential, particularly when there are only a few minutes to warn the coastal population to take immediate action. The task of rapid evaluation of large earthquakes is accomplished in Chile through a network of sensors being implemented by the National Seismological Center of the University of Chile. The network is composed of approximately one hundred broadband and strong-motion instruments and 130 GNSS devices, all of which will be connected in real time. Forty units include an optional RTX capability, where satellite orbits and clock corrections are sent to the field device, producing a 1-Hz stream at the 4-cm level. Tests are being conducted to stream the real-time raw data to be processed later at the central facility. Hypocentral locations and magnitudes are estimated within a few minutes by automatic processing software based on wave arrivals; for magnitudes less than 7.0, the rapid estimation works within acceptable bounds. For larger events, we are currently developing automatic detectors and amplitude estimators of displacement derived from the real-time GNSS streams. This software has been tested on several cases, showing that, for plate-interface events, the minimum magnitude detectability threshold reaches values between 6.2 and 6.5 (1-2 cm coastal displacement), providing an excellent tool for early earthquake characterization from a tsunamigenic perspective.

  1. Tritium as an indicator of ground-water age in Central Wisconsin

    USGS Publications Warehouse

    Bradbury, Kenneth R.

    1991-01-01

    In regions where ground water is generally younger than about 30 years, developing the tritium input history of an area for comparison with the current tritium content of ground water allows quantitative estimates of minimum ground-water age. The tritium input history for central Wisconsin has been constructed using precipitation tritium measured at Madison, Wisconsin and elsewhere. Weighted tritium inputs to ground water reached a peak of over 2,000 TU in 1964, and have declined since that time to about 20-30 TU at present. In the Buena Vista basin in central Wisconsin, most ground-water samples contained elevated levels of tritium, and estimated minimum ground-water ages in the basin ranged from less than one year to over 33 years. Ground water in mapped recharge areas was generally younger than ground water in discharge areas, and estimated ground-water ages were consistent with flow system interpretations based on other data. Estimated minimum ground-water ages increased with depth in areas of downward ground-water movement. However, water recharging through thick moraine sediments was older than water in other recharge areas, reflecting slower infiltration through the sandy till of the moraine.
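
    The decay bookkeeping behind this kind of dating can be sketched in a few lines: decay-correct the input history (tritium half-life 12.32 years) to the sampling date and flag recharge years consistent with the measured concentration. The input curve below is a rough illustrative stand-in, not the Madison record.

```python
# Decay-correct a tritium input history and flag plausible recharge years;
# the youngest consistent year bounds the minimum ground-water age.
import numpy as np

HALF_LIFE = 12.32                      # tritium half-life, years
lam = np.log(2) / HALF_LIFE

input_history = {1960: 300.0, 1964: 2000.0, 1970: 400.0,
                 1980: 80.0, 1988: 25.0}   # TU in recharge, illustrative
sample_year, measured_tu = 1989, 30.0

for year, tu_in in sorted(input_history.items()):
    tu_now = tu_in * np.exp(-lam * (sample_year - year))
    flag = "<- consistent" if abs(tu_now - measured_tu) / measured_tu < 0.5 else ""
    print(f"recharge {year}: {tu_in:6.0f} TU -> {tu_now:7.1f} TU at sampling {flag}")
```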

  2. Minimum requirements for adequate nighttime conspicuity of highway signs

    DOT National Transportation Integrated Search

    1988-02-01

    A laboratory and field study were conducted to assess the minimum luminance levels of signs to ensure that they will be detected and identified at adequate distances under nighttime driving conditions. A total of 30 subjects participated in the field...

  3. Mobile measurement of methane emissions from natural gas developments in northeastern British Columbia, Canada

    NASA Astrophysics Data System (ADS)

    Atherton, Emmaline; Risk, David; Fougère, Chelsea; Lavoie, Martin; Marshall, Alex; Werring, John; Williams, James P.; Minions, Christina

    2017-10-01

    North American leaders recently committed to reducing methane emissions from the oil and gas sector, but information on current emissions from upstream oil and gas developments in Canada is lacking. This study examined the occurrence of methane plumes in an area of unconventional natural gas development in northwestern Canada. From August to September 2015, we completed almost 8000 km of vehicle-based survey campaigns on public roads dissecting oil and gas infrastructure, such as well pads and processing facilities. We surveyed six routes 3-6 times each, which brought us past over 1600 unique well pads and facilities managed by more than 50 different operators. To attribute on-road plumes to oil- and gas-related sources, we used gas signatures of residual excess concentrations (anomalies above background) less than 500 m downwind of potential oil and gas emission sources. All results represent emissions greater than our minimum detection limit of 0.59 g s-1 at our average detection distance (319 m). Unlike many other oil and gas developments in the US for which methane measurements have been reported recently, the methane concentrations we measured were close to normal atmospheric levels, except inside natural gas plumes. Roughly 47% of active wells emitted methane-rich plumes above our minimum detection limit. Multiple sites that pre-date the recent unconventional natural gas development were found to be emitting, and we observed that the majority of these older wells were associated with emissions on all survey repeats. We also observed emissions from gas processing facilities that were highly repeatable. Emission patterns in this area were best explained by infrastructure age and type. Extrapolating our results across all oil and gas infrastructure in the Montney area, we estimate that the emission sources we located (emitting at a rate > 0.59 g s-1) contribute more than 111 800 t of methane annually to the atmosphere. This value exceeds reported bottom-up estimates of 78 000 t of methane for all oil and gas sector sources in British Columbia. Current bottom-up methods for estimating methane emissions do not normally calculate the fraction of emitting oil and gas infrastructure with thorough on-ground measurements. However, this study demonstrates that mobile surveys could provide a more accurate representation of the number of emission sources in an oil and gas development. This study presents the first mobile collection of methane emissions from oil and gas infrastructure in British Columbia, and these results can be used to inform policy development in an era of methane emission reduction efforts.

  4. Automatic detection of kidney in 3D pediatric ultrasound images using deep neural networks

    NASA Astrophysics Data System (ADS)

    Tabrizi, Pooneh R.; Mansoor, Awais; Biggs, Elijah; Jago, James; Linguraru, Marius George

    2018-02-01

    Ultrasound (US) imaging is the routine and safe diagnostic modality for detecting pediatric urology problems, such as hydronephrosis of the kidney. Hydronephrosis is the swelling of one or both kidneys because of the build-up of urine. Early detection of hydronephrosis can lead to a substantial improvement in kidney health outcomes. Generally, US imaging is a challenging modality for the evaluation of pediatric kidneys, which have varying shape, size, and texture characteristics. The aim of this study is to present an automatic detection method to aid kidney analysis in pediatric 3DUS images. The method localizes the kidney based on its minimum-volume oriented bounding box using deep neural networks. Separate deep neural networks are trained to estimate the kidney position, orientation, and scale, making the method computationally efficient by avoiding full parameter training. The performance of the method was evaluated using a dataset of 45 kidneys (18 normal and 27 diseased kidneys diagnosed with hydronephrosis) through leave-one-out cross-validation. Quantitative results show the proposed detection method could extract the kidney position, orientation, and scale ratio with root-mean-square errors of 1.3 +/- 0.9 mm, 6.34 +/- 4.32 degrees, and 1.73 +/- 0.04, respectively. This method could be helpful in automating kidney segmentation for routine clinical evaluation.

  5. Proof of concept and dose estimation with binary responses under model uncertainty.

    PubMed

    Klingenberg, B

    2009-01-30

    This article suggests a unified framework for testing Proof of Concept (PoC) and estimating a target dose for the benefit of a more comprehensive, robust and powerful analysis in phase II or similar clinical trials. From a pre-specified set of candidate models, we choose the ones that best describe the observed dose-response. To decide which models, if any, significantly pick up a dose effect, we construct the permutation distribution of the minimum P-value over the candidate set. This allows us to find critical values and multiplicity adjusted P-values that control the familywise error rate of declaring any spurious effect in the candidate set as significant. Model averaging is then used to estimate a target dose. Popular single or multiple contrast tests for PoC, such as the Cochran-Armitage, Dunnett or Williams tests, are only optimal for specific dose-response shapes and do not provide target dose estimates with confidence limits. A thorough evaluation and comparison of our approach to these tests reveal that its power is as good or better in detecting a dose-response under various shapes with many more additional benefits: It incorporates model uncertainty in PoC decisions and target dose estimation, yields confidence intervals for target dose estimates and extends to more complicated data structures. We illustrate our method with the analysis of a Phase II clinical trial. Copyright (c) 2008 John Wiley & Sons, Ltd.
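
    The permutation adjustment described above is easy to sketch generically: compute the minimum p-value over a set of candidate dose-response contrasts, then calibrate it against its permutation distribution. The contrasts and data below are illustrative, not those of the cited trial (numpy/scipy assumed).

```python
# Permutation distribution of the minimum p-value over candidate contrasts,
# giving a familywise-error-controlled adjusted p-value for PoC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
doses = np.repeat([0, 1, 2, 3], 25)                 # 4 dose groups
y = rng.binomial(1, 0.2 + 0.05 * doses)             # binary response

contrasts = {"linear": np.array([-3, -1, 1, 3.0]),
             "emax":   np.array([-3, 1, 1, 1.0]),
             "step":   np.array([-1, -1, 1, 1.0])}  # candidate shapes

def min_p(y, doses):
    ps = []
    for c in contrasts.values():
        stat = sum(c[d] * y[doses == d].mean() for d in range(4))
        se = np.sqrt(sum(c[d] ** 2 * y[doses == d].var(ddof=1) /
                         (doses == d).sum() for d in range(4)))
        ps.append(1 - stats.norm.cdf(stat / se))    # one-sided, increasing
    return min(ps)

observed = min_p(y, doses)
null = [min_p(rng.permutation(y), doses) for _ in range(2000)]
print("adjusted p =", np.mean(np.array(null) <= observed))
```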

  6. Tritium concentrations in flow from selected springs that discharge to the Snake River, Twin Falls-Hagerman area, Idaho

    USGS Publications Warehouse

    Mann, L.J.

    1989-01-01

    Concern has been expressed that some of the approximately 30,900 curies of tritium disposed to the Snake River Plain aquifer from 1952 to 1988 at the INEL (Idaho National Engineering Laboratory) have migrated to springs discharging to the Snake River in the Twin Falls-Hagerman area. To document tritium concentrations in springflow, 17 springs were sampled in November 1988 and 19 springs were sampled in March 1989. Tritium concentrations were less than the minimum detectable concentration of 0.5 pCi/mL (picocuries per milliliter) in November 1988 and less than the minimum detectable concentration of 0.2 pCi/mL in March 1989; the minimum detectable concentration was smaller in March 1989 owing to a longer counting time in the liquid scintillation system. The maximum contaminant level for tritium in drinking water established by the U.S. Environmental Protection Agency is 20 pCi/mL. U.S. Environmental Protection Agency sample analyses indicate that the tritium concentration in the Snake River near Buhl has decreased since the 1970's. In 1974-79, tritium concentrations were less than 0.3 +/-0.2 pCi/mL in 3 of 20 samples; in 1983-88, 17 of 23 samples contained less than 0.3 +/-0.2 pCi/mL of tritium; the minimum detectable concentration is 0.2 pCi/mL. On the basis of the decreasing tritium concentrations in the Snake River, their correlation with the cessation of atmospheric weapons tests, tritium concentrations in springflow less than the minimum detectable concentration, and the distribution of tritium in groundwater at the INEL, aqueous disposal of tritium at the INEL has had no measurable effect on tritium concentrations in springflow from the Snake River Plain aquifer or in the Snake River near Buhl. (USGS)

  7. Updating estimates of low streamflow statistics to account for possible trends

    NASA Astrophysics Data System (ADS)

    Blum, A. G.; Archfield, S. A.; Hirsch, R. M.; Vogel, R. M.; Kiang, J. E.; Dudley, R. W.

    2017-12-01

    Given evidence of both increasing and decreasing trends in low flows in many streams, methods are needed to update estimators of low-flow statistics used in water resources management. One such metric is the 10-year annual low-flow statistic (7Q10), calculated as the annual minimum seven-day streamflow which is exceeded in nine out of ten years on average. Historical streamflow records may not be representative of current conditions at a site if environmental conditions are changing. We present a new approach to frequency estimation under nonstationary conditions that applies a stationary nonparametric quantile estimator to a subset of the annual minimum flow record. Monte Carlo simulation experiments were used to evaluate this approach across a range of trend and no-trend scenarios. Relative to the standard practice of using the entire available streamflow record, the use of a nonparametric quantile estimator combined with selection of the most recent 30 or 50 years for 7Q10 estimation was found to improve accuracy and reduce bias. The benefits of the data subset selection approaches were greater for higher-magnitude trends and for annual minimum flow records with lower coefficients of variation. A nonparametric trend test approach to subset selection did not significantly improve upon always selecting the last 30 years of record. At 174 stream gages in the Chesapeake Bay region, 7Q10 estimators based on the most recent 30 years of flow record were compared to estimators based on the entire period of record. Given the availability of long records of low streamflow, using only a subset of the flow record (~30 years) can update 7Q10 estimators to better reflect current streamflow conditions.
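
    A minimal sketch of the estimator described above, assuming pandas/numpy and synthetic daily flows: compute annual minima of the 7-day moving average, keep the most recent 30 years, and take a nonparametric 0.1 quantile via Weibull plotting positions.

```python
# Nonparametric 7Q10 from the most recent 30 years of annual 7-day minima.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("1950-01-01", "2016-12-31", freq="D")
flow = pd.Series(np.exp(rng.normal(3, 0.6, len(idx))), index=idx)  # cfs

seven_day = flow.rolling(7).mean()
annual_min = seven_day.groupby(seven_day.index.year).min().dropna()
recent = annual_min.iloc[-30:]                    # most recent 30 years

# Rank the annual minima and interpolate the 0.1 nonexceedance quantile.
x = np.sort(recent.values)
p = np.arange(1, len(x) + 1) / (len(x) + 1)       # Weibull plotting positions
q7q10 = np.interp(0.1, p, x)
print(f"7Q10 (last 30 yr) ~ {q7q10:.1f} cfs")
```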

  8. Hydrocarbon profiles throughout adult Calliphoridae aging: A promising tool for forensic entomology.

    PubMed

    Pechal, Jennifer L; Moore, Hannah; Drijfhout, Falko; Benbow, M Eric

    2014-12-01

    Blow flies (Diptera: Calliphoridae) are typically the first insects to arrive at human remains and carrion. Predictable succession patterns and known larval development of necrophagous insects on vertebrate remains can assist a forensic entomologist with estimates of a minimum post-mortem interval (PMImin) range. However, adult blow flies are infrequently used to estimate the PMImin, but rather are used for confirmation of larval species identification. Cuticular hydrocarbons have demonstrated potential for estimating adult blow fly age, as hydrocarbons are present throughout blow fly development, from egg to adult, and are stable structures. The goal of this study was to identify hydrocarbon profiles associated with the adults of a North American native blow fly species, Cochliomyia macellaria (Fabricius), and a North American invasive species, Chrysomya rufifacies (Macquart). Flies were reared at a constant temperature (25°C), a photoperiod of 14:10 (L:D) (h), and were provided water, sugar and powdered milk ad libitum. Ten adult females from each species were collected at days 1, 5, 10, 20, and 30 post-emergence. Hydrocarbon compounds were extracted and then identified using gas chromatography-mass spectrometry (GC-MS) analysis. A total of 37 and 35 compounds were detected from C. macellaria and Ch. rufifacies, respectively. There were 24 and 23 n-alkene and methyl-branched alkane hydrocarbons from C. macellaria and Ch. rufifacies, respectively (10 compounds were shared between species), used for statistical analysis. Non-metric multidimensional scaling analysis and permutational multivariate analysis of variance were used to analyze the hydrocarbon profiles, with significant differences (P<0.001) detected among post-emergence age cohorts for each species, and unique hydrocarbon profiles detected as each adult blow fly species aged. This work provides empirical data that serve as a foundation for future research into improving PMImin estimates made by forensic practitioners and potentially increasing the use of adult insects during death investigations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
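
    The multivariate tests named above can be sketched generically. Below is a simplified one-way PERMANOVA on Bray-Curtis distances between hydrocarbon profiles, assuming a (flies x compounds) abundance matrix and one age-cohort label per fly; it is a plain permutation test, not the authors' exact software pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pseudo_f(dist, groups):
    """PERMANOVA-style pseudo-F from a square distance matrix."""
    n = len(groups)
    total_ss = (dist ** 2).sum() / (2 * n)
    within_ss = 0.0
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        sub = dist[np.ix_(idx, idx)]
        within_ss += (sub ** 2).sum() / (2 * len(idx))
    a = len(np.unique(groups))
    return ((total_ss - within_ss) / (a - 1)) / (within_ss / (n - a))

def permanova(profiles, groups, n_perm=999, seed=0):
    """Permutation p-value for differences among group centroids."""
    rng = np.random.default_rng(seed)
    dist = squareform(pdist(profiles, metric="braycurtis"))
    groups = np.asarray(groups)
    f_obs = pseudo_f(dist, groups)
    hits = sum(pseudo_f(dist, rng.permutation(groups)) >= f_obs
               for _ in range(n_perm))
    return f_obs, (hits + 1) / (n_perm + 1)
```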

  9. Generic Sensor Modeling Using Pulse Method

    NASA Technical Reports Server (NTRS)

    Helder, Dennis L.; Choi, Taeyoung

    2005-01-01

    Recent development of high spatial resolution satellites such as IKONOS, QuickBird and OrbView enables observation of the Earth's surface with sub-meter resolution. Compared to the 30 meter resolution of Landsat 5 TM, the amount of information in the output image is dramatically increased. In this era of high spatial resolution, the estimation of the spatial quality of images is gaining attention. Historically, the Modulation Transfer Function (MTF) concept has been used to estimate an imaging system's spatial quality. Various methods, sometimes classified by target shape, were developed in laboratory environments utilizing sinusoidal inputs, periodic bar patterns and narrow slits. On-orbit sensor MTF estimation was performed on 30-meter GSD Landsat 4 Thematic Mapper (TM) data using a bridge target as a pulse input. Because of a high-resolution sensor's small Ground Sampling Distance (GSD), reasonably sized man-made edge, pulse, and impulse targets can be deployed on a uniform grassy area with accurate control of ground targets using tarps and convex mirrors. All the previous work cited calculated MTF without testing the MTF estimator's performance. In a previous report, a numerical generic sensor model had been developed to simulate and improve the performance of on-orbit MTF estimation techniques. Results from that sensor modeling report that have been incorporated into standard MTF estimation work include Fermi edge detection and the newly developed 4th-order modified Savitzky-Golay (MSG) interpolation technique. Noise sensitivity had been studied by performing simulations on known noise sources and a sensor model. Extensive investigation was done to characterize multi-resolution ground noise. Finally, angle simulation was tested by using synthetic pulse targets with angles from 2 to 15 degrees, several brightness levels, and different noise levels from both ground targets and the imaging system. As a continuation of the research activity using the developed sensor model, this report is dedicated to characterizing MTF estimation via the pulse input method using Fermi edge detection and the 4th-order MSG interpolation method. The relationship between pulse width and the MTF value at Nyquist was studied, including error detection and correction schemes. Pulse target angle sensitivity was studied by using synthetic targets angled from 2 to 12 degrees. From the ground and system noise simulations, a minimum SNR value is suggested for a stable MTF value at Nyquist for the pulse method. A target width error detection and adjustment technique based on a smooth transition of the MTF profile is presented, which is specifically applicable only to the pulse method with 3-pixel-wide targets.
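
    The core of the pulse method can be stated compactly: the system MTF is the ratio of the Fourier magnitude of the measured pulse profile to that of an ideal rectangular pulse of the known target width. The sketch below assumes an oversampled, background-subtracted profile and omits the Fermi edge detection and MSG interpolation refinements described in the report.

```python
import numpy as np

def pulse_mtf(profile, pulse_width, sample_spacing=1.0):
    """MTF from a pulse target: |FFT(measured)| / |FFT(ideal rect)|,
    normalized so MTF(0) = 1.  `pulse_width` is the known target width
    in the same (sub)pixel units as `sample_spacing`."""
    profile = np.asarray(profile, dtype=float)
    profile = profile - profile.min()          # crude background removal
    freqs = np.fft.rfftfreq(len(profile), d=sample_spacing)
    measured = np.abs(np.fft.rfft(profile))
    ideal = np.abs(np.sinc(pulse_width * freqs))   # rect spectrum shape
    valid = ideal > 0.05                       # avoid sinc-null blow-up
    mtf = np.where(valid, measured / np.maximum(ideal, 1e-12), np.nan)
    return freqs, mtf / mtf[0]   # MTF at Nyquist: f = 0.5 cycles/pixel
```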

  10. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  11. Minimum-error quantum distinguishability bounds from matrix monotone functions: A comment on 'Two-sided estimates of minimum-error distinguishability of mixed quantum states via generalized Holevo-Curlander bounds' [J. Math. Phys. 50, 032106 (2009)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, Jon

    2009-06-15

    Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.

  12. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
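
    Of the three estimators compared, the EM algorithm for a two-component univariate normal mixture is the easiest to sketch; the median-split initialization and fixed iteration count below are illustrative choices, not the study's settings.

```python
import numpy as np
from scipy.stats import norm

def em_two_normals(x, n_iter=200):
    """EM for f(x) = p*N(mu1, sd1) + (1 - p)*N(mu2, sd2)."""
    x = np.asarray(x, dtype=float)
    lo, hi = x[x <= np.median(x)], x[x > np.median(x)]
    p, mu = 0.5, np.array([lo.mean(), hi.mean()])
    sd = np.array([lo.std() + 1e-6, hi.std() + 1e-6])
    for _ in range(n_iter):
        # E-step: posterior membership probability for component 1
        f1 = p * norm.pdf(x, mu[0], sd[0])
        f2 = (1 - p) * norm.pdf(x, mu[1], sd[1])
        w = f1 / (f1 + f2)
        # M-step: weighted updates of proportion, means, and SDs
        p = w.mean()
        mu = np.array([np.average(x, weights=w),
                       np.average(x, weights=1 - w)])
        sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=w),
                      np.average((x - mu[1]) ** 2, weights=1 - w)])
    return p, mu, sd
```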

  13. Time-Series Evidence of the Effect of the Minimum Wage on Youth Employment and Unemployment.

    ERIC Educational Resources Information Center

    Brown, Charles; And Others

    1983-01-01

    The study finds that a 10 percent increase in the federal minimum wage (or the coverage rate) would reduce teenage (16-19) employment by about one percent, which is at the lower end of the range of estimates from previous studies. (Author/SSH)

  14. Performance measures for lower gastrointestinal endoscopy: a European Society of Gastrointestinal Endoscopy (ESGE) quality improvement initiative

    PubMed Central

    Thomas-Gibson, Siwan; Bugajski, Marek; Bretthauer, Michael; Rees, Colin J; Dekker, Evelien; Hoff, Geir; Jover, Rodrigo; Suchanek, Stepan; Ferlitsch, Monika; Anderson, John; Roesch, Thomas; Hultcranz, Rolf; Racz, Istvan; Kuipers, Ernst J; Garborg, Kjetil; East, James E; Rupinski, Maciej; Seip, Birgitte; Bennett, Cathy; Senore, Carlo; Minozzi, Silvia; Bisschops, Raf; Domagk, Dirk; Valori, Roland; Spada, Cristiano; Hassan, Cesare; Dinis-Ribeiro, Mario; Rutter, Matthew D

    2017-01-01

    The European Society of Gastrointestinal Endoscopy and United European Gastroenterology present a short list of key performance measures for lower gastrointestinal endoscopy. We recommend that endoscopy services across Europe adopt the following seven key performance measures for lower gastrointestinal endoscopy for measurement and evaluation in daily practice at a center and endoscopist level: (1) rate of adequate bowel preparation (minimum standard 90%); (2) cecal intubation rate (minimum standard 90%); (3) adenoma detection rate (minimum standard 25%); (4) appropriate polypectomy technique (minimum standard 80%); (5) complication rate (minimum standard not set); (6) patient experience (minimum standard not set); (7) appropriate post-polypectomy surveillance recommendations (minimum standard not set). Other identified performance measures have been listed as less relevant based on an assessment of their importance, scientific acceptability, feasibility, usability, and comparison to competing measures. PMID:28507745

  15. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
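
    The kernel-based MMSE idea can be illustrated outside the video-coding context: given training pairs (y_i, x_i), the MMSE prediction E[X | Y = y] can be approximated by a Nadaraya-Watson estimator whose weights come from a kernel density fit. The sketch below uses a Silverman rule-of-thumb bandwidth in place of the paper's kernel-trick bandwidth estimation.

```python
import numpy as np

def mmse_kernel_predict(y_query, y_train, x_train, bandwidth=None):
    """Approximate E[X | Y = y_query] with Gaussian-kernel weights."""
    y_train = np.asarray(y_train, dtype=float)
    x_train = np.asarray(x_train, dtype=float)
    if bandwidth is None:
        # Silverman's rule of thumb for a Gaussian reference density
        bandwidth = 1.06 * y_train.std() * len(y_train) ** (-0.2)
    w = np.exp(-0.5 * ((y_query - y_train) / bandwidth) ** 2)
    return (w * x_train).sum() / w.sum()
```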

  16. A comparison of minimum distance and maximum likelihood techniques for proportion estimation

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Schucany, W. R.; Lindsey, H.; Gray, H. L.

    1982-01-01

    The estimation of mixing proportions p_1, p_2, ..., p_m in the mixture density f(x) = Σ_{i=1}^{m} p_i f_i(x) is often encountered in agricultural remote sensing problems, in which case the p_i's usually represent crop proportions. In these remote sensing applications, component densities f_i(x) have typically been assumed to be normally distributed, and parameter estimation has been accomplished using maximum likelihood (ML) techniques. Minimum distance (MD) estimation is examined as an alternative to ML where, in this investigation, both procedures are based upon normal components. Results indicate that ML techniques are superior to MD when component distributions actually are normal, while MD estimation provides better estimates than ML under symmetric departures from normality. When component distributions are not symmetric, however, it is seen that neither of these normal-based techniques provides satisfactory results.
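
    A minimal sketch of MD estimation for the simplest case, m = 2 with known component parameters, is given below: the mixing proportion is chosen to minimize the Cramer-von Mises distance between the model CDF and the empirical CDF. This is a generic illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def md_proportion(x, mu1, sd1, mu2, sd2):
    """Minimum Cramer-von Mises estimate of p in
    f(x) = p*N(mu1, sd1) + (1 - p)*N(mu2, sd2)."""
    x = np.sort(np.asarray(x, dtype=float))
    ecdf = (np.arange(1, len(x) + 1) - 0.5) / len(x)

    def cvm(p):
        model = p * norm.cdf(x, mu1, sd1) + (1 - p) * norm.cdf(x, mu2, sd2)
        return np.sum((model - ecdf) ** 2)

    return minimize_scalar(cvm, bounds=(0.0, 1.0), method="bounded").x
```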

  17. Development and Application of an Objective Tracking Algorithm for Tropical Cyclones over the North-West Pacific purely based on Wind Speeds

    NASA Astrophysics Data System (ADS)

    Befort, Daniel J.; Kruschke, Tim; Leckebusch, Gregor C.

    2017-04-01

    Tropical Cyclones over East Asia have huge socio-economic impacts due to their strong wind fields and large rainfall amounts. The most severe events in particular are associated with huge economic losses; e.g., Typhoon Herb in 1996 is related to overall losses exceeding 5 billion US$ (Munich Re, 2016). In this study, an objective tracking algorithm is applied to JRA55 reanalysis data from 1979 to 2014 over the Western North Pacific. For this purpose, a purely wind-based algorithm, formerly used to identify extra-tropical wind storms, has been further developed. The algorithm is based on the exceedance of the local 98th percentile to define strong wind fields in gridded climate data. To be detected as a tropical cyclone candidate, the following criteria must be fulfilled: 1) the wind storm must exist for at least eight 6-hourly time steps and 2) the wind field must exceed a minimum size of 130,000 km² at each time step. The usage of wind information is motivated by a focus on damage-related events; however, a pre-selection based on the affected region is necessary to remove events of extra-tropical nature. Using IBTrACS Best Tracks for validation, it is found that about 62% of all detected tropical cyclone events in JRA55 reanalysis can be matched to an observed best track. As expected, the relative amount of matched tracks increases with the wind intensity of the event, with a hit rate of about 98% for Violent Typhoons, above 90% for Very Strong Typhoons and about 75% for Typhoons. Overall these results are encouraging, as the parameters used to detect tropical cyclones in JRA55, e.g. minimum area, are also suitable for detecting TCs in most CMIP5 simulations and will thus allow estimates of potential future changes.
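
    The detection criteria translate directly into array operations. The sketch below flags candidate events in a (time, lat, lon) wind-speed field using a fixed 98th-percentile climatology, a minimum area, and a minimum persistence; the constant grid-cell area and the per-step area test are simplifications of the full event-tracking logic.

```python
import numpy as np
from scipy import ndimage

def has_tc_candidate(wind, p98, cell_area_km2,
                     min_area_km2=130_000, min_steps=8):
    """True if the field contains a contiguous region above the local
    98th percentile, larger than `min_area_km2`, at `min_steps`
    consecutive 6-hourly steps (a stand-in for full event tracking)."""
    exceed = wind > p98                  # p98: (lat, lon) climatology
    ok_step = np.zeros(wind.shape[0], dtype=bool)
    for t in range(wind.shape[0]):
        labels, n = ndimage.label(exceed[t])
        if n:
            sizes = ndimage.sum(exceed[t], labels, index=range(1, n + 1))
            ok_step[t] = sizes.max() * cell_area_km2 >= min_area_km2
    run = 0
    for flag in ok_step:
        run = run + 1 if flag else 0
        if run >= min_steps:
            return True
    return False
```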

  18. Estimation of daily minimum land surface air temperature using MODIS data in southern Iran

    NASA Astrophysics Data System (ADS)

    Didari, Shohreh; Norouzi, Hamidreza; Zand-Parsa, Shahrokh; Khanbilvardi, Reza

    2017-11-01

    Land surface air temperature (LSAT) is a key variable in agricultural, climatological, hydrological, and environmental studies. Many of their processes are affected by the LSAT at about 5 cm above the ground surface (LSAT5cm). Most previous studies tried to find statistical models to estimate LSAT at 2 m height (LSAT2m), which is considered the standardized height, and there are not enough studies of LSAT5cm estimation models. Accurate measurements of LSAT5cm are generally acquired from meteorological stations, which are sparse in remote areas. Nonetheless, remote sensing data, by providing rather extensive spatial coverage, can complement the spatiotemporal shortcomings of meteorological stations. The main objective of this study was to find a statistical model based on the previous day's data to accurately estimate spatial daily minimum LSAT5cm, which is very important for agricultural frost, in Fars province in southern Iran. Land surface temperature (LST) data were obtained using the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Aqua and Terra satellites at daytime and nighttime periods, together with normalized difference vegetation index (NDVI) data. These data, along with geometric temperature and elevation information, were used in a stepwise linear model to estimate minimum LSAT5cm during 2003-2011. The results revealed that utilization of MODIS Aqua nighttime data from the previous day provides the most applicable and accurate model. According to the validation results, the accuracy of the proposed model was suitable during 2012 (root mean square difference (RMSD) = 3.07 °C, adjusted R² = 87%). The model underestimated (overestimated) high (low) minimum LSAT5cm. The accuracy of estimation in the wintertime was found to be lower than in the other seasons (RMSD = 3.55 °C), and in summer and winter the errors were larger than in the remaining seasons.
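
    At its core, a stepwise scheme of this kind reduces to a linear regression of minimum LSAT on the previous night's LST plus auxiliary predictors. The sketch below uses synthetic data and invented coefficients purely to show the fitting and RMSD computation; it omits the stepwise variable selection.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
lst_night = rng.normal(8.0, 6.0, n)        # previous-night MODIS LST (C)
ndvi = rng.uniform(0.05, 0.6, n)           # vegetation index
elev = rng.uniform(300.0, 2500.0, n)       # station elevation (m)
# Synthetic "truth" with invented coefficients plus noise
tmin_5cm = 0.9 * lst_night - 2.5 * ndvi - 0.002 * elev + rng.normal(0, 3, n)

X = np.column_stack([lst_night, ndvi, elev])
model = LinearRegression().fit(X, tmin_5cm)
pred = model.predict(X)
rmsd = float(np.sqrt(np.mean((pred - tmin_5cm) ** 2)))
print(f"RMSD = {rmsd:.2f} C, R^2 = {model.score(X, tmin_5cm):.2f}")
```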

  19. Concentration of 3H in ground water and estimation of committed effective dose due to ground water ingestion in some places in the Maharashtra state, India.

    PubMed

    Reddy, P J; Bhade, S P D; Kolekar, R V; Singh, Rajvir; Pradeepkumar, K S

    2014-01-01

    The measurement of tritium in environmental samples requires the highest possible sensitivity. In the present study, the authors have optimised the counting window for the analysis of ³H in environmental samples using the recently installed Ultra Low Level Quantulus 1220 Liquid Scintillation Counting system at BARC to improve the detection limit of the system. The optimised counting window corresponding to the highest figure of merit of 883.8 was found to be channels 20-162. Different brands of packaged drinking water were analysed to select a blank that would define the system background. The minimum detectable activity (MDA) achieved was 1.5 Bq l⁻¹ for a total counting time of 500 min. The concentration of tritium in well and bore well water samples collected from the villages of Pune, villages located at 1.8 km from Tarapur Atomic Power Station, Kolhapur and Ratnagiri, was analysed. The activity concentration ranged from 0.55 to 3.66 Bq l⁻¹. The associated age-dependent dose from water ingestion in the study area was estimated. The effective committed dose recorded for different age classes is negligible compared with World Health Organization and US Environmental Protection Agency dose guidelines.
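
    MDA figures such as the 1.5 Bq l⁻¹ quoted above are conventionally derived from the Currie formulation. The sketch below shows that calculation; the background rate, counting efficiency, and sample volume are illustrative values, not the paper's instrument parameters.

```python
import numpy as np

def currie_mda(bkg_cpm, t_min, efficiency, volume_l):
    """Currie MDA in Bq/l: L_D = 2.71 + 4.65*sqrt(B), with B the
    expected background counts accumulated over the counting time."""
    b_counts = bkg_cpm * t_min
    ld_counts = 2.71 + 4.65 * np.sqrt(b_counts)
    # counts -> activity concentration (Bq/l); t_min*60 gives seconds
    return ld_counts / (efficiency * t_min * 60.0 * volume_l)

# e.g. 2 cpm background, 500 min count, 25% efficiency, 8 ml of sample
print(currie_mda(bkg_cpm=2.0, t_min=500.0, efficiency=0.25, volume_l=0.008))
```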

  20. An efficient fully unsupervised video object segmentation scheme using an adaptive neural-network classifier architecture.

    PubMed

    Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S

    2003-01-01

    In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).

  1. Colonial waterbird predation on Lost River and Shortnose suckers in the Upper Klamath Basin

    USGS Publications Warehouse

    Evans, Allen F.; Hewitt, David A.; Payton, Quinn; Cramer, Bradley M.; Collis, Ken; Roby, Daniel D.

    2016-01-01

    We evaluated predation on Lost River Suckers Deltistes luxatus and Shortnose Suckers Chasmistes brevirostris by American white pelicans Pelecanus erythrorhynchos and double-crested cormorants Phalacrocorax auritus nesting at mixed-species colonies in the Upper Klamath Basin of Oregon and California during 2009–2014. Predation was evaluated by recovering (detecting) PIT tags from tagged fish on bird colonies and calculating minimum predation rates, as the percentage of available suckers consumed, adjusted for PIT tag detection probabilities but not deposition probabilities (i.e., probability an egested tag was deposited on- or off-colony). Results indicate that impacts of avian predation varied by sucker species, age-class (adult, juvenile), bird colony location, and year, demonstrating dynamic predator–prey interactions. Tagged suckers ranging in size from 72 to 730 mm were susceptible to cormorant or pelican predation; all but the largest Lost River Suckers were susceptible to bird predation. Minimum predation rate estimates ranged annually from <0.1% to 4.6% of the available PIT-tagged Lost River Suckers and from <0.1% to 4.2% of the available Shortnose Suckers, and predation rates were consistently higher on suckers in Clear Lake Reservoir, California, than on suckers in Upper Klamath Lake, Oregon. There was evidence that bird predation on juvenile suckers (species unknown) in Upper Klamath Lake was higher than on adult suckers in Upper Klamath Lake, where minimum predation rates ranged annually from 5.7% to 8.4% of available juveniles. Results suggest that avian predation is a factor limiting the recovery of populations of Lost River and Shortnose suckers, particularly juvenile suckers in Upper Klamath Lake and adult suckers in Clear Lake Reservoir. Additional research is needed to measure predator-specific PIT tag deposition probabilities (which, based on other published studies, could increase predation rates presented herein by a factor of roughly 2.0) and to better understand biotic and abiotic factors that regulate sucker susceptibility to bird predation.

  2. Mesospheric temperature estimation from meteor decay times during Geminids meteor shower

    NASA Astrophysics Data System (ADS)

    Kozlovsky, Alexander; Lukianova, Renata; Shalimov, Sergey; Lester, Mark

    2016-02-01

    Meteor radar observations at the Sodankylä Geophysical Observatory (67° 22'N, 26° 38'E, Finland) indicate that the mesospheric temperature derived from meteor decay times is systematically underestimated by 20-50 K during the Geminids meteor shower, which peaks on 13 December. A very close coincidence of the minimum of the routinely calculated temperature and the maximum of meteor flux (the number of meteors detected per day) was observed regularly on that day in December 2008-2014. These observations reflect a specific height-lifetime distribution of the Geminid meteor trails and indicate a larger percentage of overdense trails compared to that for sporadic meteors. A consequence of this is that the routine estimates of mesospheric temperature during the Geminids are in fact underestimates. The observations do, however, indicate unusual properties (e.g., mass, speed, or chemical composition) of the Geminid meteoroids. Similar properties were found for the Quadrantids in January 2009-2015, which, like the Geminids, has an asteroid as a parent body, but not for other meteor showers.

  3. Weak hydrogen bond topology in 1,1-difluoroethane dimer: A rotational study

    NASA Astrophysics Data System (ADS)

    Chen, Junhua; Zheng, Yang; Wang, Juan; Feng, Gang; Xia, Zhining; Gou, Qian

    2017-09-01

    The rotational spectrum of the 1,1-difluoroethane dimer has been investigated by pulsed-jet Fourier transform microwave spectroscopy. The two most stable isomers have been detected, both stabilized by a network of three C—H⋯F—C weak hydrogen bonds: in the most stable isomer, two difluoromethyl C—H groups and one methyl C—H group act as the weak proton donors, whilst in the second isomer, two methyl C—H groups and one difluoromethyl C—H group act as the weak proton donors. For the global minimum, the measurements have also been extended to its four ¹³C isotopologues in natural abundance, allowing a precise, although partial, structural determination. Relative intensity measurements on a set of μa-type transitions allowed estimation of the relative population ratio of the two isomers as N_I/N_II ≈ 6/1 in the pulsed jet, indicating a much larger energy gap between the two isomers than expected from ab initio calculation, consistent with the estimated pseudo-diatomic dissociation energies.

  4. Estimation of Surface Air Temperature Over Central and Eastern Eurasia from MODIS Land Surface Temperature

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, Gregory G.

    2011-01-01

    Surface air temperature (T_a) is a critical variable in the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate estimation of T_a from satellite remotely sensed land surface temperature (T_s) by using MODIS-Terra data over two Eurasia regions: northern China and the former USSR. High correlations are observed in both regions between station-measured T_a and MODIS T_s. The relationships between the maximum T_a and daytime T_s depend significantly on land cover types, but the minimum T_a and nighttime T_s have little dependence on the land cover types. The largest difference between maximum T_a and daytime T_s appears over the barren and sparsely vegetated area during the summer time. Using a linear regression method, the daily maximum T_a was estimated from 1 km resolution MODIS T_s under clear-sky conditions with coefficients calculated based on land cover types, while the minimum T_a was estimated without considering land cover types. The uncertainty, mean absolute error (MAE), of the estimated maximum T_a varies from 2.4 °C over closed shrublands to 3.2 °C over grasslands, and the MAE of the estimated minimum T_a is about 3.0 °C.

  5. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    DOE PAGES

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; ...

    2017-01-13

    The aim of this study is to compare radioxenon beta-gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Finally, our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used.

  7. Nearest Neighbor Averaging and its Effect on the Critical Level and Minimum Detectable Concentration for Scanning Radiological Survey Instruments that Perform Facility Release Surveys.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L

    2014-08-01

    Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations, the algorithm was mathematically demonstrated to determine the Lc and MDC accurately when a nearest-neighbor averaging (NNA) technique was used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
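
    The intuition behind NNA can be reproduced with a toy Monte Carlo: averaging a cell with its neighbors shrinks the background scatter, which lowers the critical level at a fixed false-positive rate. The simulation below treats the nine cells of a 3x3 neighborhood as independent Poisson draws, ignoring the spatial correlation present in a real scan.

```python
import numpy as np

rng = np.random.default_rng(1)
bkg_mean = 25.0                      # mean background counts per map cell
n_cells, k_alpha = 100_000, 1.645    # one-sided 5% false-positive rate

single = rng.poisson(bkg_mean, n_cells).astype(float)
nna = rng.poisson(bkg_mean, (n_cells, 9)).mean(axis=1)   # 3x3 averaging

for name, data in [("single cell", single), ("3x3 NNA", nna)]:
    lc = bkg_mean + k_alpha * data.std()   # critical level (gross counts)
    print(f"{name}: Lc = {lc:.1f}, false-positive rate ~ "
          f"{np.mean(data > lc):.3f}")
```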

  8. Deriving the number of jobs in proximity services from the number of inhabitants in French rural municipalities.

    PubMed

    Lenormand, Maxime; Huet, Sylvie; Deffuant, Guillaume

    2012-01-01

    We use a minimum requirement approach to derive the number of jobs in proximity services per inhabitant in French rural municipalities. We first classify the municipalities according to their time distance in minutes by car to the municipality where the inhabitants go most frequently to get services (called the MFM). For each set corresponding to a range of time distance to the MFM, we perform a quantile regression estimating the minimum number of service jobs per inhabitant, which we interpret as an estimate of the number of proximity jobs per inhabitant. We observe that the minimum number of service jobs per inhabitant is smaller in small municipalities. Moreover, for municipalities of similar sizes, the number of jobs in proximity services per inhabitant increases as the distance to the MFM increases.
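
    The "minimum requirement" step is essentially a low-quantile regression of service jobs on population. A sketch with statsmodels on synthetic data follows; the linear model form and the 0.05 quantile are illustrative choices, not the paper's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
pop = rng.uniform(100, 5000, 400)                  # inhabitants (synthetic)
jobs = 0.01 * pop + rng.gamma(2.0, 0.005 * pop)    # service jobs (synthetic)
df = pd.DataFrame({"pop": pop, "jobs": jobs})

# The low conditional quantile of jobs given population approximates the
# minimum number of proximity-service jobs per inhabitant (the slope).
res = smf.quantreg("jobs ~ pop", df).fit(q=0.05)
print(res.params)
```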

  9. Passive blast pressure sensor

    DOEpatents

    King, Michael J.; Sanchez, Roberto J.; Moss, William C.

    2013-03-19

    A passive blast pressure sensor for detecting blast overpressures of at least a predetermined minimum threshold pressure. The blast pressure sensor includes a piston-cylinder arrangement with one end of the piston having a detection surface exposed to a blast event monitored medium through one end of the cylinder and the other end of the piston having a striker surface positioned to impact a contact stress sensitive film that is positioned against a strike surface of a rigid body, such as a backing plate. The contact stress sensitive film is of a type which changes color in response to at least a predetermined minimum contact stress which is defined as a product of the predetermined minimum threshold pressure and an amplification factor of the piston. In this manner, a color change in the film arising from impact of the piston accelerated by a blast event provides visual indication that a blast overpressure encountered from the blast event was not less than the predetermined minimum threshold pressure.

  10. Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage*

    PubMed Central

    Cadena, Brian C.

    2014-01-01

    This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants’ location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents. PMID:24999288

  11. Continuity vs. the Crowd-Tradeoffs Between Continuous and Intermittent Citizen Hydrology Streamflow Observations.

    PubMed

    Davids, Jeffrey C; van de Giesen, Nick; Rutten, Martine

    2017-07-01

    Hydrologic data has traditionally been collected with permanent installations of sophisticated and accurate but expensive monitoring equipment at limited numbers of sites. Consequently, observation frequency and costs are high, but spatial coverage of the data is limited. Citizen Hydrology can possibly overcome these challenges by leveraging easily scaled mobile technology and local residents to collect hydrologic data at many sites. However, understanding of how decreased observational frequency impacts the accuracy of key streamflow statistics such as minimum flow, maximum flow, and runoff is limited. To evaluate this impact, we randomly selected 50 active United States Geological Survey streamflow gauges in California. We used 7 years of historical 15-min flow data from 2008 to 2014 to develop minimum flow, maximum flow, and runoff values for each gauge. To mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, and their respective distributions, from 50 subsample iterations with four different subsampling frequencies ranging from daily to monthly. Minimum flows were estimated within 10% for half of the subsample iterations at 39 (daily) and 23 (monthly) of the 50 sites. However, maximum flows were estimated within 10% at only 7 (daily) and 0 (monthly) sites. Runoff volumes were estimated within 10% for half of the iterations at 44 (daily) and 12 (monthly) sites. Watershed flashiness most strongly impacted accuracy of minimum flow, maximum flow, and runoff estimates from subsampled data. Depending on the questions being asked, lower frequency Citizen Hydrology observations can provide useful hydrologic information.
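
    The frequency experiment can be sketched as follows; this is a simplified stand-in for the paper's bootstrap subsampling, using one random phase offset per iteration and the subsample mean as a runoff proxy (runoff volume being mean flow times record duration).

```python
import numpy as np

def subsample_stats(flow_15min, step, n_iter=50, seed=0):
    """Flow statistics under reduced observation frequency.
    `flow_15min` is one year of 15-minute discharges; `step` is the
    subsampling stride (96 ~ daily, 2880 ~ monthly observations)."""
    rng = np.random.default_rng(seed)
    flow_15min = np.asarray(flow_15min, dtype=float)
    out = []
    for _ in range(n_iter):
        sub = flow_15min[rng.integers(0, step)::step]
        out.append((sub.min(), sub.max(), sub.mean()))
    return np.array(out)   # rows of (min flow, max flow, mean flow)
```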

  12. Comparison of irrigation pumpage with change in ground-water storage in the High Plains aquifer in Chase, Dundy, and Perkins counties, Nebraska, 1975-83

    USGS Publications Warehouse

    Heimes, F.J.; Ferrigno, C.F.; Gutentag, E.D.; Lucky, R.R.; Stephens, D.M.; Weeks, J.B.

    1987-01-01

    The relation between pumpage and change in storage was evaluated for most of a three-county area in southwestern Nebraska from 1975 through 1983. Initial comparison of the 1975-83 pumpage with change in storage in the study area indicated that the 1,042,300 acre-ft of change in storage was only about 30% of the 3,425,000 acre-ft of pumpage. An evaluation of the data used to calculate pumpage and change in storage indicated that there was a relatively large potential for error in estimates of specific yield. As a result, minimum and maximum values of specific yield were estimated and used to recalculate change in storage. Estimates also were derived for the minimum and maximum amounts of recharge that could occur as a result of cultivation practices. The minimum and maximum estimates for specific yield and for recharge from cultivation practices were used to compute a range of values for the potential amount of additional recharge that occurred as a result of irrigation. The minimum and maximum amounts of recharge that could be caused by irrigation in the study area were 953,200 acre-ft (28% of pumpage) and 2,611,200 acre-ft (76% of pumpage), respectively. These values indicate that a substantial percentage of the water pumped from the aquifer is resupplied to storage in the aquifer as a result of a combination of irrigation return flow and enhanced recharge from precipitation that results from cultivation and irrigation practices. (Author's abstract)

  13. Simultaneous use of mark-recapture and radiotelemetry to estimate survival, movement, and capture rates

    USGS Publications Warehouse

    Powell, L.A.; Conroy, M.J.; Hines, J.E.; Nichols, J.D.; Krementz, D.G.

    2000-01-01

    Biologists often estimate separate survival and movement rates from radio-telemetry and mark-recapture data from the same study population. We describe a method for combining these data types in a single model to obtain joint, potentially less biased estimates of survival and movement that use all available data. We furnish an example using wood thrushes (Hylocichla mustelina) captured at the Piedmont National Wildlife Refuge in central Georgia in 1996. The model structure allows estimation of survival and capture probabilities, as well as estimation of movements away from and into the study area. In addition, the model structure provides many possibilities for hypothesis testing. Using the combined model structure, we estimated that wood thrush weekly survival was 0.989 ± 0.007 (±SE). Survival rates of banded and radio-marked individuals were not different (α̂[S_radioed, S_banded] = log(Ŝ_radioed/Ŝ_banded) = 0.0239 ± 0.0435). Fidelity rates (weekly probability of remaining in a stratum) did not differ between geographic strata (ψ̂ = 0.911 ± 0.020; α̂[ψ11, ψ22] = 0.0161 ± 0.047), and recapture rates (p̂ = 0.097 ± 0.016) of banded and radio-marked individuals were not different (α̂[p_radioed, p_banded] = 0.145 ± 0.655). Combining these data types in a common model resulted in more precise estimates of movement and recapture rates than separate estimation, but the ability to detect stratum- or mark-specific differences in parameters was weak. We conducted simulation trials to investigate the effects of varying study designs on parameter accuracy and statistical power to detect important differences. Parameter accuracy was high (relative bias [RBIAS] <2%) and confidence interval coverage close to nominal, except for survival estimates of banded birds for the 'off study area' stratum, which were negatively biased (RBIAS -7 to -15%) when sample sizes were small (5-10 banded or radioed animals 'released' per time interval). To provide adequate data for useful inference from this model, study designs should seek a minimum of 25 animals of each marking type (marked or observed via telemetry) in each time period and geographic stratum.

  14. Software simulations of the detection of rapidly moving asteroids by a charge-coupled device

    NASA Astrophysics Data System (ADS)

    McMillan, R. S.; Stoll, C. P.

    1982-10-01

    A rendezvous of an unmanned probe with an earth-approaching asteroid has been given a high priority in the planning of interplanetary missions for the 1990s. Even without a space mission, much could be learned about the history of asteroids and comet nuclei if more information were available concerning asteroids with orbits which cross or approach the orbit of earth. It is estimated that the total number of earth-crossers accessible to ground-based survey telescopes should be approximately 1000. However, owing to the small size and rapid angular motion expected of many of these objects, an average of only one object is discovered per year. Attention is given to the development of the software necessary to distinguish such rapidly moving asteroids from stars and noise in continuously scanned CCD exposures of the night sky. Model and input parameters are considered along with detector sensitivity, aspects of minimum detectable displacement, and the point-spread function of the CCD.

  15. Assessment of age-dependent uranium intake due to drinking water in Hyderabad, India.

    PubMed

    Balbudhe, A Y; Srivastava, S K; Vishwaprasad, K; Srivastava, G K; Tripathi, R M; Puranik, V D

    2012-03-01

    A study has been done to assess the uranium intake through drinking water. The area of study is the twin cities of Hyderabad and Secunderabad, India. Uranium concentration in water samples was analysed by laser-induced fluorimetry. The associated age-dependent uranium intake was estimated by taking the prescribed water intake values. The concentration of uranium varies from below detectable level (minimum detectable level = 0.20 ± 0.02 μg l⁻¹) to 2.50 ± 0.18 μg l⁻¹, with a geometric mean (GM) of 0.67 μg l⁻¹ in tap water, whereas in ground water, the range is 0.60 ± 0.05 to 82 ± 7.1 µg l⁻¹ with a GM of 10.07 µg l⁻¹. The daily intake of uranium by the drinking water pathway through tap water for various age groups is found to vary from 0.14 to 9.50 µg d⁻¹ with a mean of 1.55 µg d⁻¹.
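
    The intake arithmetic is a one-liner: daily intake equals concentration times daily water consumption. The consumption values below are assumed purely for illustration, not the prescribed values used in the paper.

```python
# Assumed daily drinking-water intakes (litres/day) by age class
water_intake_l = {"infant": 0.7, "child": 1.5, "adult": 2.2}
uranium_ug_per_l = 0.67   # geometric mean reported for tap water

for age, litres in water_intake_l.items():
    print(f"{age}: {uranium_ug_per_l * litres:.2f} ug/day")
```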

  16. Fiber-optic annular detector array for large depth of field photoacoustic macroscopy.

    PubMed

    Bauer-Marschallinger, Johannes; Höllinger, Astrid; Jakoby, Bernhard; Burgholzer, Peter; Berer, Thomas

    2017-03-01

    We report on a novel imaging system for large depth of field photoacoustic scanning macroscopy. Instead of commonly used piezoelectric transducers, fiber-optic based ultrasound detection is applied. The optical fibers are shaped into rings and mainly receive ultrasonic signals stemming from the ring symmetry axes. Four concentric fiber-optic rings with varying diameters are used in order to increase the image quality. Imaging artifacts, originating from the off-axis sensitivity of the rings, are reduced by coherence weighting. We discuss the working principle of the system and present experimental results on tissue-mimicking phantoms. The lateral resolution is estimated to be below 200 μm at a depth of 1.5 cm and below 230 μm at a depth of 4.5 cm. The minimum detectable pressure is on the order of 3 Pa. The introduced method has the potential to provide larger imaging depths than acoustic resolution photoacoustic microscopy and an imaging resolution similar to that of photoacoustic computed tomography.

  17. Near-infrared diode laser hydrogen fluoride monitor for dielectric etch

    NASA Astrophysics Data System (ADS)

    Xu, Ning; Pirkle, David R.; Jeffries, Jay B.; McMillin, Brian; Hanson, Ronald K.

    2004-11-01

    A hydrogen fluoride (HF) monitor, using a tunable diode laser, is designed and used to detect the etch endpoints for dielectric film etching in a commercial plasma reactor. The reactor plasma contains HF, a reaction product of feedstock gas CF4 and the hydrogen-containing films (photoresist, SiOCH) on the substrate. A near-infrared diode laser is used to scan the P(3) transition in the first overtone of HF near 1.31 μm to monitor changes in the level of HF concentration in the plasma. Using 200 ms averaging and a signal modulation technique, we estimate a minimum detectable HF absorbance of 6×10⁻⁵ in the etch plasma, corresponding to an HF partial pressure of 0.03 mTorr. The sensor could indicate, in situ, the SiOCH over tetraethoxysilane oxide (TEOS) trench endpoint, which was not readily discerned by optical emission. These measurements demonstrate the feasibility of a real-time diode laser-based sensor for etch endpoint monitoring and a potential for process control.

  18. Relative dynamics and motion control of nanosatellite formation flying

    NASA Astrophysics Data System (ADS)

    Pimnoo, Ammarin; Hiraki, Koju

    2016-04-01

    Orbit selection is a necessary factor in nanosatellite formation mission design; meanwhile, keeping the formation requires fuel consumption. Therefore, the best orbit design for nanosatellite formation flying should be the one that requires the minimum fuel consumption. The purpose of this paper is to analyse orbit selection with respect to minimum fuel consumption, to provide a convenient way to estimate the fuel consumption needed to keep nanosatellites flying in formation, and to present a simplified method of formation control. The formation structure is disturbed by the J2 gravitational perturbation and other perturbing accelerations such as atmospheric drag. First, Gauss' Variational Equations (GVE) are used to estimate the essential ΔV due to the J2 perturbation and atmospheric drag. The essential ΔV indicates which orbit is preferable with respect to minimum fuel consumption. Then, the linear equations of Schweighart-Sedwick, which account for the J2 gravitational perturbation, are presented and used to estimate the fuel consumption needed to maintain the formation structure. Finally, the relative dynamics of the motion are presented, as well as a simplified motion control of the formation structure using GVE.

  19. Minimum-norm cortical source estimation in layered head models is robust against skull conductivity error

    PubMed Central

    Stenroos, Matti; Hauk, Olaf

    2013-01-01

    The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
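
    A minimal numerical sketch of the MN estimator discussed above, with Tikhonov regularization scaled by an assumed SNR (a common heuristic, not necessarily the authors' exact regularization scheme):

```python
import numpy as np

def minimum_norm_estimate(leadfield, data, snr=3.0):
    """MN estimate j_hat = L^T (L L^T + lam^2 I)^(-1) y for a
    leadfield L of shape (n_sensors, n_sources) and data y of shape
    (n_sensors, n_times)."""
    L = np.asarray(leadfield, dtype=float)
    gram = L @ L.T
    lam2 = np.trace(gram) / gram.shape[0] / snr ** 2   # SNR-scaled reg.
    filt = L.T @ np.linalg.inv(gram + lam2 * np.eye(gram.shape[0]))
    return filt @ data
```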

  20. Design and Implementation of a C++ Multithreaded Operational Tool for the Generation of Detection Time Grids in 2D for P- and S-waves taking into Consideration Seismic Network Topology and Data Latency

    NASA Astrophysics Data System (ADS)

    Sardina, V.

    2017-12-01

    The Pacific Tsunami Warning Center's round-the-clock operations rely on the rapid determination of the source parameters of earthquakes occurring around the world. To rapidly estimate source parameters such as earthquake location and magnitude, the PTWC analyzes data streams ingested in near-real time from a global network of more than 700 seismic stations. Both the density of this network and the data latency of its member stations at any given time have a direct impact on the speed at which the PTWC scientists on duty can locate an earthquake and estimate its magnitude. In this context, it turns out to be operationally advantageous to be able to assess how quickly the PTWC operational system can reasonably detect and locate an earthquake, estimate its magnitude, and send the corresponding tsunami message whenever appropriate. For this purpose, we designed and implemented a multithreaded C++ software package to generate detection time grids for both P- and S-waves after taking into consideration the seismic network topology and the data latency of its member stations. We first encapsulate all the parameters of interest at a given geographic point, such as geographic coordinates, P- and S-wave detection times at a minimum number of stations, and maximum allowed azimuth gap, into a DetectionTimePoint class. Then we apply composition and inheritance to define a DetectionTimeLine class that handles a vector of DetectionTimePoint objects along a given latitude. A DetectionTimesGrid class in turn handles the dynamic allocation of new DetectionTimeLine objects and assigns the calculation of the corresponding P- and S-wave detection times to new threads. Finally, we added a GUI that allows the user to interactively set all initial calculation parameters and output options. Initial testing on an eight-core system shows that generation of a global 2D grid at 1 degree resolution, with detection set at a minimum of 5 stations and no azimuth gap restriction, takes under 25 seconds. Under the same initial conditions, generation of a 2D grid at 0.1 degree resolution (2.6 million grid points) takes no more than 22 minutes. These preliminary results show a significant gain in grid generation speed when compared to other implementations via either scripts or previous versions of the C++ code that did not implement multithreading.
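
    The per-latitude-line decomposition described above maps naturally onto a thread pool. In the sketch below the travel-time curve, station set, latencies, and planar distance are all toy assumptions standing in for the real seismological calculation, and Python threads merely illustrate the work split that the original tool performs with C++ threads.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(3)
stations = rng.uniform([-60, -180], [60, 180], (700, 2))  # lat, lon
latency_s = rng.gamma(2.0, 15.0, len(stations))           # data latency

def detection_time(lat, lon, min_stations=5):
    """Time until `min_stations` report a P arrival (toy model)."""
    d = np.hypot(stations[:, 0] - lat, stations[:, 1] - lon)  # crude deg
    arrivals = 7.5 * d + latency_s     # ~7.5 s/deg P travel + latency
    return np.sort(arrivals)[min_stations - 1]

def grid_line(lat):
    """One latitude line of the grid: one unit of threaded work."""
    return [detection_time(lat, lon) for lon in np.arange(-180, 180, 1.0)]

with ThreadPoolExecutor(max_workers=8) as pool:
    grid = list(pool.map(grid_line, np.arange(-90, 91, 1.0)))
```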

  1. Exploratory Factor Analysis with Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.

    2009-01-01

    Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…

  2. The size of the irregular migrant population in the European Union – counting the uncountable?

    PubMed

    Vogel, Dita; Kovacheva, Vesela; Prescott, Hannah

    2011-01-01

    It is difficult to estimate the size of the irregular migrant population in a specific city or country, and even more difficult to arrive at estimates at the European level. A review of past attempts at European-level estimates reveals that they rely on rough and outdated rules-of-thumb. In this paper, we present our own European level estimates for 2002, 2005, and 2008. We aggregate country-specific information, aiming at approximate comparability by consistent use of minimum and maximum estimates and by adjusting for obvious differences in definition and timescale. While the aggregated estimates are not considered highly reliable, they do -- for the first time -- provide transparency. The provision of more systematic medium quality estimates is shown to be the most promising way for improvement. The presented estimate indicates a minimum of 1.9 million and a maximum of 3.8 million irregular foreign residents in the 27 member states of the European Union (2008). Unlike rules-of-thumb, the aggregated EU estimates indicate a decline in the number of irregular foreign residents between 2002 and 2008. This decline has been influenced by the EU enlargement and legalisation programmes.

  3. 77 FR 51807 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-27

    ... Minimum Data Elements (MDEs) for the National Breast and Cervical Cancer Early Detection Program (NBCCEDP... screening and early detection tests for breast and cervical cancer. Mammography is extremely valuable as an early detection tool because it can detect breast cancer well before the woman can feel the lump, when...

  4. Behavioral and physiological significance of minimum resting metabolic rate in king penguins.

    PubMed

    Halsey, L G; Butler, P J; Fahlman, A; Woakes, A J; Handrich, Y

    2008-01-01

    Because fasting king penguins (Aptenodytes patagonicus) need to conserve energy, it is possible that they exhibit particularly low metabolic rates during periods of rest. We investigated the behavioral and physiological aspects of periods of minimum metabolic rate in king penguins under different circumstances. Heart rate (f_H) measurements were recorded to estimate the rate of oxygen consumption during periods of rest. Furthermore, apparent respiratory sinus arrhythmia (RSA) was calculated from the f_H data to determine the probable breathing frequency in resting penguins. The most pertinent results were that the minimum f_H achieved (over 5 min) was higher during respirometry experiments in air than during periods ashore in the field; that the minimum f_H during respirometry experiments on water was similar to that while at sea; and that RSA was apparent in many of the f_H traces during periods of minimum f_H and provides accurate estimates of the breathing rates of king penguins resting in specific situations in the field. Inferences made from the results include that king penguins do not have the capacity to reduce their metabolism to a particularly low level on land; that they can, however, achieve surprisingly low metabolic rates at sea while resting in cold water; and that during respirometry experiments king penguins are stressed to some degree, exhibiting an elevated metabolism even when resting.

  5. The investigation of solar activity signals by analyzing of tree ring chronological scales

    NASA Astrophysics Data System (ADS)

    Nickiforov, M. G.

    2017-07-01

    The present study examines the possibility of detecting short cycles and global minima of solar activity by analyzing dendrochronologies. Since the study of Douglass, which was devoted to the question of climatic cycles and the growth of trees, it has been believed that the analysis of dendrochronologies allows detection of the Wolf-Schwabe cycle. According to his results, the cycle was absent during the Maunder minimum and appeared after its completion. Having checked Douglass's conclusions using 10 dendrochronologies of yellow pines from Arizona, which cover the time period from 1600 to 1900, we have come to the opposite result. The verification shows that: (a) none of the considered dendroscales allows detection of an 11-year cycle; and (b) the behaviour of the short-period signal does not undergo significant changes before, during, or after the Maunder minimum. A similar attempt to detect global minima of solar activity using five dendrochronologies from different areas did not lead to positive results. On the one hand, the signal of a global extremum is not always recorded in dendrochronology; on the other hand, a deep depression of annual rings may suggest the existence of a global minimum of solar activity that is actually absent.

  6. Building Change Detection from LIDAR Point Cloud Data Based on Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Awrangjeb, M.; Fraser, C. S.; Lu, G.

    2015-08-01

    Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid undersegmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. However, for demolished or extended building parts, a connected component analysis algorithm is applied, and for each connected component its area, width and height are estimated in order to ascertain whether it can be considered a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to update detected changes in the existing building map. Experimental results show that the improved building detection technique can offer not only higher performance in terms of completeness and correctness, but also a lower number of undersegmentation errors compared to its original counterpart. The proposed change detection technique produces no omission errors and thus can be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.
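
    The connected-component test for demolished or extended parts can be sketched with scipy.ndimage; the area and width thresholds below are illustrative, and component width is approximated by the short side of the bounding box rather than a true width estimate.

```python
import numpy as np
from scipy import ndimage

def changed_parts(old_mask, new_mask, cell_m=1.0,
                  min_area_m2=10.0, min_width_m=2.0):
    """Classify building-mask differences via component size tests."""
    old_mask = np.asarray(old_mask, dtype=bool)
    new_mask = np.asarray(new_mask, dtype=bool)
    results = []
    for kind, diff in [("demolished", old_mask & ~new_mask),
                       ("new", new_mask & ~old_mask)]:
        labels, n = ndimage.label(diff)
        for i, sl in enumerate(ndimage.find_objects(labels), start=1):
            comp = labels[sl] == i
            area = comp.sum() * cell_m ** 2
            width = min(comp.shape) * cell_m   # bounding-box short side
            if area >= min_area_m2 and width >= min_width_m:
                results.append((kind, area, width))
    return results
```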

  7. Detecting anthropogenic footprints in sea level rise: the role of complex colored noise

    NASA Astrophysics Data System (ADS)

    Dangendorf, Sönke; Marcos, Marta; Müller, Alfred; Zorita, Eduardo; Jensen, Jürgen

    2015-04-01

    While there is scientific consensus that global mean sea level (MSL) has been rising since the late 19th century, it remains unclear how much of this rise is due to natural variability or anthropogenic forcing. Uncovering the anthropogenic contribution requires profound knowledge about the persistence of natural MSL variations. This is challenging, since observational time series represent the superposition of various processes with different spectral properties. Here we statistically estimate the upper bounds of naturally forced centennial MSL trends on the basis of two separate components: a slowly varying volumetric component (mass and density changes) and a more rapidly changing atmospheric component. Based on a combination of spectral analyses of tide gauge records, ocean reanalysis data and numerical Monte-Carlo experiments, we find that in records where transient atmospheric processes dominate, the persistence of natural volumetric changes is underestimated. If each component is assessed separately, natural centennial trends are locally up to ~0.5 mm/yr larger than in the case of an integrated assessment. This implies that external trends in MSL rise related to anthropogenic forcing might generally be overestimated. By applying our approach to the outputs of a centennial ocean reanalysis (SODA), we estimate maximum natural trends on the order of 1 mm/yr for the global average. This value is larger than previous estimates, but consistent with recent paleo evidence from periods in which the anthropogenic contribution was absent. Comparing our estimate to the observed 20th century MSL rise of 1.7 mm/yr suggests a minimum external contribution of at least 0.7 mm/yr. We conclude that an accurate detection of anthropogenic footprints in MSL rise requires a more careful assessment of the persistence of intrinsic natural variability.

  8. Consideraciones para la estimacion de abundancia de poblaciones de mamiferos [Considerations for the estimation of abundance of mammal populations]

    USGS Publications Warehouse

    Walker, R.S.; Novare, A.J.; Nichols, J.D.

    2000-01-01

    Estimation of abundance of mammal populations is essential for monitoring programs and for many ecological investigations. The first step in any study of variation in mammal abundance over space or time is to define the objectives of the study and how and why the abundance data are to be used. The data used to estimate abundance are count statistics in the form of counts of animals or their signs. There are two major sources of uncertainty that must be considered in the design of the study: spatial variation and the relationship between abundance and the count statistic. Spatial variation in the distribution of animals or signs may be taken into account with appropriate spatial sampling. Count statistics may be viewed as random variables, with the expected value of the count statistic equal to the true abundance of the population multiplied by a coefficient p. With direct counts, p represents the probability of detection or capture of individuals, and with indirect counts it represents the rate of production of the signs as well as their probability of detection. Comparisons of abundance using count statistics from different times or places assume that the coefficients p_i are the same for all times or places being compared (p_i = p). In spite of considerable evidence that this assumption rarely holds true, it is commonly made in studies of mammal abundance, as when the minimum number alive or indices based on sign counts are used to compare abundance in different habitats or times. Alternatives to relying on this assumption are to calibrate the index used by testing the assumption p_i = p, or to incorporate the estimation of p into the study design.
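
    The key identity is E[C] = p*N, so raw-count comparisons are valid only when detection probabilities match; a toy numerical example (invented numbers):

        # Equal true abundances can yield very different raw counts when
        # detection probabilities differ; numbers are purely illustrative.
        N1, N2 = 100, 100          # true abundances at two sites
        p1, p2 = 0.30, 0.15        # detection probabilities
        C1, C2 = p1 * N1, p2 * N2  # expected counts: E[C] = p * N
        print(C1, C2)              # 30 vs 15: a spurious two-fold "difference"
        print(C1 / p1, C2 / p2)    # both 100 once p is estimated and divided out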

  9. Estimation of transformation parameters for microarray data.

    PubMed

    Durbin, Blythe; Rocke, David M

    2003-07-22

    Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
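
    For context, a minimal sketch of the generalized-log transformation, with a rough grid-search surrogate for the parameter estimation (the paper estimates the parameter by maximum likelihood in tandem with a linear model; the mean-variance criterion below only mimics that goal):

        # Generalized-log (glog) transform; lam plays the role of the
        # transformation parameter (lam = c**2 in the usual notation).
        import numpy as np

        def glog(y, lam):
            return np.log(y + np.sqrt(y ** 2 + lam))

        def pick_lambda(y, grid):
            """Pick lam minimizing dependence of spread on level across
            replicate columns of y (rows = genes, columns = replicates)."""
            scores = []
            for lam in grid:
                z = glog(y, lam)
                scores.append(abs(np.corrcoef(z.mean(axis=1),
                                              z.std(axis=1))[0, 1]))
            return grid[int(np.argmin(scores))]

        y = np.abs(np.random.default_rng(0).normal(100.0, 40.0, (500, 4)))
        print(pick_lambda(y, grid=[1.0, 10.0, 100.0, 1000.0]))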

  10. 50 CFR 218.174 - Requirements for monitoring and reporting.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-based surveys shall be designed to maximize detections of marine mammals near mission activity event. (2... Navy to implement, at a minimum, the monitoring activities summarized below: (1) Visual Surveys: (i) The Holder of this Authorization shall conduct a minimum of 2 special visual surveys per year to...

  11. Theoretical detection threshold of the proton-acoustic range verification technique.

    PubMed

    Ahmad, Moiz; Xiang, Liangzhong; Yousefi, Siavash; Xing, Lei

    2015-10-01

    Range verification in proton therapy using the proton-acoustic signal induced in the Bragg peak was investigated for typical clinical scenarios. The signal generation and detection processes were simulated in order to determine the signal-to-noise limits. An analytical model was used to calculate the dose distribution and local pressure rise (per proton) for beams of different energy (100 and 160 MeV) and spot widths (1, 5, and 10 mm) in a water phantom. In this method, the acoustic waves propagating from the Bragg peak were generated by the general 3D pressure wave equation implemented using a finite element method. Various beam pulse widths (0.1-10 μs) were simulated by convolving the acoustic waves with Gaussian kernels. A realistic PZT ultrasound transducer (5 cm diameter) was simulated with a Butterworth bandpass filter with consideration of random noise based on a model of thermal noise in the transducer. The signal-to-noise ratio on a per-proton basis was calculated, determining the minimum number of protons required to generate a detectable pulse. The maximum spatial resolution of the proton-acoustic imaging modality was also estimated from the signal spectrum. The calculated noise in the transducer was 12-28 mPa, depending on the transducer central frequency (70-380 kHz). The minimum number of protons detectable by the technique was on the order of 3-30 × 10^6 per pulse, with 30-800 mGy dose per pulse at the Bragg peak. Wider pulses produced signals with lower acoustic frequencies, with 10 μs pulses producing signals with frequency less than 100 kHz. The proton-acoustic process was simulated using a realistic model and the minimal detection limit was established for proton-acoustic range validation. These limits correspond to a best case scenario with a single large detector with no losses and detector thermal noise as the sensitivity limiting factor. Our study indicated practical proton-acoustic range verification may be feasible with approximately 5 × 10^6 protons/pulse and beam current.
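
    The detectability limit reduces to simple amplitude-SNR arithmetic; the per-proton pressure below is an assumed round number chosen to be consistent with the reported ranges, not a value taken from the simulation:

        # Back-of-envelope minimum proton count per pulse; p_per_proton
        # is an assumption, not a simulated quantity.
        sigma_noise = 20e-3       # transducer thermal noise, Pa (12-28 mPa range)
        p_per_proton = 2e-9       # assumed peak pressure per proton at detector, Pa
        snr_required = 1.0        # amplitude SNR at the detection threshold
        n_min = snr_required * sigma_noise / p_per_proton
        print("minimum protons per pulse: %.1e" % n_min)   # ~1e7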

  12. Theoretical detection threshold of the proton-acoustic range verification technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmad, Moiz; Yousefi, Siavash; Xing, Lei, E-mail: lei@stanford.edu

    2015-10-15

    Purpose: Range verification in proton therapy using the proton-acoustic signal induced in the Bragg peak was investigated for typical clinical scenarios. The signal generation and detection processes were simulated in order to determine the signal-to-noise limits. Methods: An analytical model was used to calculate the dose distribution and local pressure rise (per proton) for beams of different energy (100 and 160 MeV) and spot widths (1, 5, and 10 mm) in a water phantom. In this method, the acoustic waves propagating from the Bragg peak were generated by the general 3D pressure wave equation implemented using a finite element method. Various beam pulse widths (0.1–10 μs) were simulated by convolving the acoustic waves with Gaussian kernels. A realistic PZT ultrasound transducer (5 cm diameter) was simulated with a Butterworth bandpass filter with consideration of random noise based on a model of thermal noise in the transducer. The signal-to-noise ratio on a per-proton basis was calculated, determining the minimum number of protons required to generate a detectable pulse. The maximum spatial resolution of the proton-acoustic imaging modality was also estimated from the signal spectrum. Results: The calculated noise in the transducer was 12–28 mPa, depending on the transducer central frequency (70–380 kHz). The minimum number of protons detectable by the technique was on the order of 3–30 × 10^6 per pulse, with 30–800 mGy dose per pulse at the Bragg peak. Wider pulses produced signals with lower acoustic frequencies, with 10 μs pulses producing signals with frequency less than 100 kHz. Conclusions: The proton-acoustic process was simulated using a realistic model and the minimal detection limit was established for proton-acoustic range validation. These limits correspond to a best case scenario with a single large detector with no losses and detector thermal noise as the sensitivity limiting factor. Our study indicated practical proton-acoustic range verification may be feasible with approximately 5 × 10^6 protons/pulse and beam current.

  13. Theoretical detection threshold of the proton-acoustic range verification technique

    PubMed Central

    Ahmad, Moiz; Xiang, Liangzhong; Yousefi, Siavash; Xing, Lei

    2015-01-01

    Purpose: Range verification in proton therapy using the proton-acoustic signal induced in the Bragg peak was investigated for typical clinical scenarios. The signal generation and detection processes were simulated in order to determine the signal-to-noise limits. Methods: An analytical model was used to calculate the dose distribution and local pressure rise (per proton) for beams of different energy (100 and 160 MeV) and spot widths (1, 5, and 10 mm) in a water phantom. In this method, the acoustic waves propagating from the Bragg peak were generated by the general 3D pressure wave equation implemented using a finite element method. Various beam pulse widths (0.1–10 μs) were simulated by convolving the acoustic waves with Gaussian kernels. A realistic PZT ultrasound transducer (5 cm diameter) was simulated with a Butterworth bandpass filter with consideration of random noise based on a model of thermal noise in the transducer. The signal-to-noise ratio on a per-proton basis was calculated, determining the minimum number of protons required to generate a detectable pulse. The maximum spatial resolution of the proton-acoustic imaging modality was also estimated from the signal spectrum. Results: The calculated noise in the transducer was 12–28 mPa, depending on the transducer central frequency (70–380 kHz). The minimum number of protons detectable by the technique was on the order of 3–30 × 10^6 per pulse, with 30–800 mGy dose per pulse at the Bragg peak. Wider pulses produced signals with lower acoustic frequencies, with 10 μs pulses producing signals with frequency less than 100 kHz. Conclusions: The proton-acoustic process was simulated using a realistic model and the minimal detection limit was established for proton-acoustic range validation. These limits correspond to a best case scenario with a single large detector with no losses and detector thermal noise as the sensitivity limiting factor. Our study indicated practical proton-acoustic range verification may be feasible with approximately 5 × 10^6 protons/pulse and beam current. PMID:26429247

  14. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.

  15. Sparse EEG/MEG source estimation via a group lasso

    PubMed Central

    Lim, Michael; Ales, Justin M.; Cottereau, Benoit R.; Hastie, Trevor

    2017-01-01

    Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source-geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches. PMID:28604790
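
    The mechanism that lets the group lasso switch whole regions of interest on or off is blockwise soft-thresholding; below is a minimal proximal-gradient sketch of that penalty (a generic illustration, not the authors' solver):

        # Group lasso via proximal gradient (ISTA); groups is a list of
        # index arrays, one per region of interest.
        import numpy as np

        def group_soft_threshold(v, lam):
            """prox of lam*||v||_2: shrink the group, or zero it entirely."""
            norm = np.linalg.norm(v)
            return np.zeros_like(v) if norm <= lam else (1.0 - lam / norm) * v

        def prox_grad_step(w, X, y, groups, lam, step):
            w = w - step * (X.T @ (X @ w - y))   # gradient of 0.5*||y - Xw||^2
            for g in groups:
                w[g] = group_soft_threshold(w[g], step * lam)
            return w

        rng = np.random.default_rng(0)
        X, y = rng.normal(size=(50, 6)), rng.normal(size=50)
        w, groups = np.zeros(6), [np.arange(0, 3), np.arange(3, 6)]
        for _ in range(200):
            w = prox_grad_step(w, X, y, groups, lam=5.0, step=0.005)
        print(w)                                 # whole groups may be exactly zero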

  16. Using genetic pedigree reconstruction to estimate effective spawner abundance from redd surveys: an example involving Pacific lamprey (Entosphenus tridentatus)

    USGS Publications Warehouse

    Whitlock, S.L.; Schultz, L.D.; Schreck, Carl B.; Hess, J.E.

    2017-01-01

    Redd surveys are a commonly used technique for indexing the abundance of sexually mature fish in streams; however, substantial effort is often required to link redd counts to actual spawner abundance. In this study, we describe how genetic pedigree reconstruction can be used to estimate effective spawner abundance in a stream reach, using Pacific lamprey (Entosphenus tridentatus) as an example. Lamprey embryos were sampled from redds within a 2.5 km reach of the Luckiamute River, Oregon, USA. Embryos were found in only 20 of the 48 redds sampled (suggesting 58% false redds); however, multiple sets of parents were detected in 44% of the true redds. Estimates from pedigree reconstruction suggested that there were 0.48 (95% CI: 0.29–0.88) effective spawners per redd and revealed that individual lamprey contributed gametes to a minimum of between one and six redds, and in one case, spawned in patches that were separated by over 800 m. Our findings demonstrate the utility of pedigree reconstruction techniques for both inferring spawning-ground behaviors and providing useful information for refining lamprey redd survey methodologies.

  17. Reassessing Wind Potential Estimates for India: Economic and Policy Implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phadke, Amol; Bharvirkar, Ranjit; Khangura, Jagmeet

    2011-09-15

    We assess developable on-shore wind potential in India at three different hub-heights and under two sensitivity scenarios – one with no farmland included, the other with all farmland included. Under the “no farmland included” case, the total wind potential in India ranges from 748 GW at 80m hub-height to 976 GW at 120m hub-height. Under the “all farmland included” case, the potential with a minimum capacity factor of 20 percent ranges from 984 GW to 1,549 GW. High quality wind energy sites, at 80m hub-height with a minimum capacity factor of 25 percent, have a potential between 253 GW (no farmland included) and 306 GW (all farmland included). Our estimates are more than 15 times the current official estimate of wind energy potential in India (estimated at 50m hub height) and are about one tenth of the official estimate of the wind energy potential in the US.

  18. Sunspot variation and selected associated phenomena: A look at solar cycle 21 and beyond

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.

    1982-01-01

    Solar sunspot cycles 8 through 21 are reviewed. Mean time intervals are calculated for maximum to maximum, minimum to minimum, minimum to maximum, and maximum to minimum phases for cycles 8 through 20 and 8 through 21. Simple cosine functions with a period of 132 years are compared to, and found to be representative of, the variation of smoothed sunspot numbers at solar maximum and minimum. A comparison of cycles 20 and 21 is given, leading to a projection for activity levels during the Spacelab 2 era (tentatively, November 1984). A prediction is made for cycle 22. Major flares are observed to peak several months subsequent to the solar maximum during cycle 21 and to be at minimum level several months after the solar minimum. Additional remarks are given for flares, gradual rise and fall radio events and 2800 MHz radio emission. Certain solar activity parameters, especially as they relate to the near term Spacelab 2 time frame are estimated.

  19. Measurement of caesium-137 in the human body using a whole body counter

    NASA Astrophysics Data System (ADS)

    Elessawi, Elkhadra Abdulmula

    Gamma radiation in the environment is mainly due to naturally occurring radionuclides. However, there is also a contribution from anthropogenic radionuclides such as 137Cs which originate from nuclear fission processes. Since 1986, the accident at the Chernobyl power plant has been a significant source of artificial environmental radioactivity. In order to assess the radiological impact of these radionuclides, it is necessary to measure their activities in samples drawn from the environment and in plants and animals, including human populations. The whole body counter (WBC) at the University Hospital of Wales in Cardiff makes in vivo measurements of gamma emitting radionuclides using a scanning ring of six large-volume thallium-doped sodium iodide (NaI(Tl)) scintillation detectors. In this work the WBC was upgraded by the addition of two high purity germanium (HPGe) detectors. The performance and suitability of the detection systems were evaluated by comparing their detection limits for 137Cs. Sensitivities were measured using sources of known activity in a water-filled anthropomorphic phantom, and theoretical minimum detectable count rates were estimated from phantom background pulse height spectra. The theoretical minimum detectable activity was about 24 Bq for the combination of six NaI(Tl) detectors, whereas for the individual HPGe detectors it was 64 Bq and 65 Bq, despite their much improved energy resolution. Activities of 137Cs in the human body between 1993 and 2007 were estimated from the background NaI(Tl) spectra of 813 patients and compared with recent measurements in 14 volunteers. The body burden of 137Cs in Cardiff patients increased from an average of about 60 Bq in the early and mid 1990s to a maximum of about 100 Bq in 2000. By 2007 it had decreased to about 40 Bq. This latter value was similar to that of Cardiff residents at the time of the Chernobyl accident and to that of the volunteers measured in 2007 (51 Bq). However, it was less than the mean activity of Cardiff residents in 1988 (130 Bq), indicating an overall decrease over a period of about 20 years. The variation in the in vivo activity is probably due to complex inter-relationships between a number of factors, such as the removal of deposited 137Cs into the sea by rainfall, individual dietary choices, the imposition and removal of restrictions on foodstuffs from Chernobyl-affected areas, and travel to countries that suffered greater initial fall-out than the UK.
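
    Minimum detectable activities of this kind are conventionally derived from the background count via a Currie-style formula; a sketch with illustrative inputs (the thesis's exact convention and numbers may differ):

        # Currie 95%/95% detection limit: MDA = (2.71 + 4.65*sqrt(B)) / (eff*t);
        # the background, efficiency and live time below are assumptions.
        import math

        def mda_bq(background_counts, efficiency, live_time_s):
            ld = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit, counts
            return ld / (efficiency * live_time_s)

        print(round(mda_bq(background_counts=4.0e4, efficiency=0.02,
                           live_time_s=600.0), 1))           # tens of Bq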

  20. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If the L1 or L∞ norm is used to represent distance instead of the Euclidean norm, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
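
    To see why the L1 or L∞ choice linearizes the problem, the sketch below casts the minimum L∞ distance between convex polyhedra {x : A1 x <= b1} and {y : A2 y <= b2} as a linear program in (x, y, t); the example boxes are hypothetical:

        # Minimum L-infinity distance between two convex polyhedra as an LP.
        import numpy as np
        from scipy.optimize import linprog

        def linf_distance(A1, b1, A2, b2, dim=3):
            n = 2 * dim + 1                        # variables: x, y, t
            c = np.zeros(n); c[-1] = 1.0           # minimize t
            I = np.eye(dim)
            A_ub = np.vstack([
                np.hstack([A1, np.zeros((A1.shape[0], dim + 1))]),  # x in P1
                np.hstack([np.zeros((A2.shape[0], dim)), A2,
                           np.zeros((A2.shape[0], 1))]),            # y in P2
                np.hstack([I, -I, -np.ones((dim, 1))]),             #  x - y <= t
                np.hstack([-I, I, -np.ones((dim, 1))])])            # y - x <= t
            b_ub = np.concatenate([b1, b2, np.zeros(2 * dim)])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] * (n - 1) + [(0, None)])
            return res.fun

        # Two unit boxes, the second shifted so the gap along axis 1 is 1:
        A_box = np.vstack([np.eye(3), -np.eye(3)])
        print(linf_distance(A_box, np.ones(6),
                            A_box, np.array([4., 1., 1., -2., 1., 1.])))  # 1.0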

  1. A model for estimating pathogen variability in shellfish and predicting minimum depuration times.

    PubMed

    McMenemy, Paul; Kleczkowski, Adam; Lees, David N; Lowther, James; Taylor, Nick

    2018-01-01

    Norovirus is a major cause of viral gastroenteritis, with shellfish consumption being identified as one potential norovirus entry point into the human population. Minimising shellfish norovirus levels is therefore important for both the consumer's protection and the shellfish industry's reputation. One method used to reduce microbiological risks in shellfish is depuration; however, this process also presents additional costs to industry. Providing a mechanism to estimate norovirus levels during depuration would therefore be useful to stakeholders. This paper presents a mathematical model of the depuration process and its impact on norovirus levels found in shellfish. Two fundamental stages of norovirus depuration are considered: (i) the initial distribution of norovirus loads within a shellfish population and (ii) the way in which the initial norovirus loads evolve during depuration. Realistic assumptions are made about the dynamics of norovirus during depuration, and mathematical descriptions of both stages are derived and combined into a single model. Parameters to describe the depuration effect and norovirus load values are derived from existing norovirus data obtained from U.K. harvest sites. However, obtaining population estimates of norovirus variability is time-consuming and expensive; this model addresses the issue by assuming a 'worst case scenario' for variability of pathogens, which is independent of mean pathogen levels. The model is then used to predict minimum depuration times required to achieve norovirus levels which fall within possible risk management levels, as well as predictions of minimum depuration times for other water-borne pathogens found in shellfish. Times for Escherichia coli predicted by the model all fall within the minimum 42 hours required for class B harvest sites, whereas minimum depuration times for norovirus and FRNA+ bacteriophage are substantially longer. Thus this study provides relevant information and tools to assist norovirus risk managers with future control strategies.
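
    Under first-order decay the minimum depuration time follows directly; a sketch with a hypothetical rate and levels rather than the paper's fitted parameters:

        # If loads decay as C(t) = C0*exp(-k*t), the minimum time to reach
        # a risk-management level C_lim is t = ln(C0/C_lim)/k.
        import math

        def min_depuration_hours(c0, c_lim, k_per_hour):
            return math.log(c0 / c_lim) / k_per_hour

        # e.g. a 10-fold reduction with a slow decay rate (values assumed):
        print(round(min_depuration_hours(1000.0, 100.0, 0.02), 1))   # ~115 h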

  2. The Lactate Minimum Test: Concept, Methodological Aspects and Insights for Future Investigations in Human and Animal Models

    PubMed Central

    Messias, Leonardo H. D.; Gobatto, Claudio A.; Beck, Wladimir R.; Manchado-Gobatto, Fúlvia B.

    2017-01-01

    In 1993, Uwe Tegtbur proposed a useful physiological protocol named the lactate minimum test (LMT). This test consists of three distinct phases. First, subjects perform high intensity efforts to induce hyperlactatemia (phase 1). Subsequently, 8 min of recovery are allowed for transposition of lactate from the myocytes to the bloodstream (phase 2). Immediately after the recovery, subjects undergo an incremental test until exhaustion (phase 3). The blood lactate concentration is expected to fall during the first stages of the incremental test and, as the intensity increases in subsequent stages, to rise again, forming a "U"-shaped blood lactate curve. The minimum point of this curve, named the lactate minimum intensity (LMI), provides an estimation of the intensity that represents the balance between the appearance and clearance of arterial blood lactate, known as the maximal lactate steady state intensity (iMLSS). Furthermore, in addition to the iMLSS estimation, studies have also determined anaerobic parameters (e.g., peak, mean, and minimum force/power) during phase 1 and the maximum oxygen consumption in phase 3; the LMT is therefore considered a robust physiological protocol. Although encouraging reports have been published in both human and animal models, there are still some controversies regarding three main factors: (1) the influence of methodological aspects on the LMT parameters; (2) LMT effectiveness for monitoring training effects; and (3) the LMI as a valid iMLSS estimator. Therefore, the aim of this review is to provide a balanced discussion of the scientific evidence on these issues, and insights for future investigations are suggested. In summary, further analysis is necessary to resolve these questions, since the LMT is relevant in several contexts of health sciences. PMID:28642717
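
    A common way to extract the LMI from the phase 3 data, shown with invented numbers: fit a second-order polynomial to the U-shaped lactate curve and take its vertex.

        # Lactate minimum intensity as the vertex of a fitted parabola;
        # the intensity/lactate values are invented for illustration.
        import numpy as np

        intensity = np.array([8.0, 9.0, 10.0, 11.0, 12.0, 13.0])   # km/h
        lactate = np.array([5.1, 4.2, 3.8, 3.9, 4.6, 5.8])         # mmol/L
        a, b, c = np.polyfit(intensity, lactate, 2)
        print(round(-b / (2 * a), 2))               # estimated LMI, km/h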

  3. Minimum number of days required for a reliable estimate of daily step count and energy expenditure, in people with MS who walk unaided.

    PubMed

    Norris, Michelle; Anderson, Ross; Motl, Robert W; Hayes, Sara; Coote, Susan

    2017-03-01

    The purpose of this study was to examine the minimum number of days needed to reliably estimate daily step count and energy expenditure (EE) in people with multiple sclerosis (MS) who walk unaided. Seven days of activity monitor data were collected for 26 participants with MS (age = 44.5±11.9 years; time since diagnosis = 6.5±6.2 years; Patient Determined Disease Steps ≤ 3). Mean daily step count and mean daily EE (kcal) were calculated for all combinations of days (127 combinations) and compared to the respective 7-day mean daily step count or mean daily EE using intra-class correlations (ICC), Generalizability Theory and Bland-Altman analysis. For step count, ICC values of 0.94-0.98 and a G-coefficient of 0.81 indicate that a minimum of any random 2-day combination is required to reliably calculate mean daily step count. For EE, ICC values of 0.96-0.99 and a G-coefficient of 0.83 indicate that a minimum of any random 4-day combination is required to reliably calculate mean daily EE. In the Bland-Altman analyses, all combinations of days except single-day combinations resulted in a mean bias within ±10% when expressed as a percentage of the 7-day mean daily step count or mean daily EE. A minimum of 2 days for step count and 4 days for EE, regardless of day type, is needed to reliably estimate daily step count and daily EE in people with MS who walk unaided.

  4. HYPOTHESIS SETTING AND ORDER STATISTIC FOR ROBUST GENOMIC META-ANALYSIS.

    PubMed

    Song, Chi; Tseng, George C

    2014-01-01

    Meta-analysis techniques have been widely developed and applied in genomic applications, especially for combining multiple transcriptomic studies. In this paper, we propose an order statistic of p-values (the rth ordered p-value, rOP) across combined studies as the test statistic. We illustrate different hypothesis settings that detect gene markers differentially expressed (DE) "in all studies", "in the majority of studies", or "in one or more studies", and specify rOP as a suitable method for detecting DE genes "in the majority of studies". We develop methods to estimate the parameter r in rOP for real applications. Statistical properties such as its asymptotic behavior and a one-sided testing correction for detecting markers of concordant expression changes are explored. Power calculations and simulation show better performance of rOP compared to the classical Fisher, Stouffer, minimum p-value and maximum p-value methods under the focused hypothesis setting. Theoretically, rOP is found to be connected to the naïve vote counting method and can be viewed as a generalized form of vote counting with better statistical properties. The method is applied to three microarray meta-analysis examples including major depressive disorder, brain cancer and diabetes. The results demonstrate rOP as a more generalizable, robust and sensitive statistical framework to detect disease-related markers.
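
    The rOP null distribution follows from the order statistics of uniforms: the rth smallest of n independent null p-values is Beta(r, n - r + 1). A sketch with invented p-values:

        # rOP meta-analysis p-value from the Beta order-statistic null.
        import numpy as np
        from scipy.stats import beta

        def rop_pvalue(pvals, r):
            p_sorted = np.sort(pvals)
            return beta.cdf(p_sorted[r - 1], r, len(pvals) - r + 1)

        pvals = np.array([0.001, 0.004, 0.03, 0.20, 0.45])   # five studies
        print(rop_pvalue(pvals, r=3))   # "DE in the majority of studies"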

  5. Unsupervised Detection of Planetary Craters by a Marked Point Process

    NASA Technical Reports Server (NTRS)

    Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.

    2011-01-01

    With the launch of several planetary missions in the last decade, a large amount of planetary imagery is being acquired. Because of the huge volume of acquired data, automatic and robust processing techniques are needed for data analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered as a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications. One such application area is image registration by matching the extracted features.

  6. Sinusoidal synthesis based adaptive tracking for rotating machinery fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    This paper presents a novel Sinusoidal Synthesis Based Adaptive Tracking (SSBAT) technique for vibration-based rotating machinery fault detection. The proposed SSBAT algorithm is an adaptive time series technique that makes use of both frequency and time domain information of vibration signals. Such information is incorporated in a time-varying dynamic model. Signal tracking is then realized by applying adaptive sinusoidal synthesis to the vibration signal. A modified Least-Squares (LS) method is adopted to estimate the model parameters. In addition to tracking, the proposed vibration synthesis model is mainly used as a linear time-varying predictor. The health condition of the rotating machine is monitored by checking the residual between the predicted and measured signals. The SSBAT method takes advantage of the sinusoidal nature of vibration signals and transforms the nonlinear problem into a linear adaptive problem in the time domain based on a state-space realization. It has a low computational burden and does not need a priori knowledge of the machine in the no-fault condition, which makes the algorithm ideal for on-line fault detection. The method is validated using both numerical simulation and practical application data, and the fault detection results are compared with the commonly adopted autoregressive (AR) and autoregressive Minimum Entropy Deconvolution (ARMED) methods to verify the feasibility and performance of the SSBAT method.
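
    A stripped-down version of the idea, with placeholder frequencies: fit a small sinusoid bank by least squares on healthy data, predict forward, and monitor the residual.

        # Least-squares sinusoidal synthesis and residual-based monitoring;
        # sampling rate, frequencies and signal are placeholders.
        import numpy as np

        def design(t, freqs):
            cols = []
            for f in freqs:
                cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
            return np.column_stack(cols)

        fs, freqs = 1000.0, [29.5, 59.0]             # Hz: shaft rate + harmonic
        t = np.arange(0, 1.0, 1 / fs)
        x = np.cos(2 * np.pi * 29.5 * t) + 0.3 * np.sin(2 * np.pi * 59.0 * t)
        theta, *_ = np.linalg.lstsq(design(t[:800], freqs), x[:800], rcond=None)
        residual = x[800:] - design(t[800:], freqs) @ theta
        print(residual.std())                        # near zero on healthy data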

  7. Is the Oswestry Disability Index a valid measure of response to sacroiliac joint treatment?

    PubMed

    Copay, Anne G; Cher, Daniel J

    2016-02-01

    Disease-specific measures of the impact of sacroiliac (SI) joint pain on back/pelvis function are not available. The Oswestry Disability Index (ODI) is a validated functional measure for lower back pain, but its responsiveness to SI joint treatment has yet to be established. We sought to assess the validity of the ODI to capture disability caused by SI joint pain and the minimum clinically important difference (MCID) after SI joint treatment. Patients (n = 155) participating in a prospective clinical trial of minimally invasive SI joint fusion underwent baseline and follow-up assessments using the ODI, visual analog scale (VAS) pain assessment, Short Form 36 (SF-36), EuroQoL-5D, and questions (at follow-up only) regarding satisfaction with the SI joint fusion and whether the patient would have the fusion surgery again. All outcomes were compared from baseline to 12 months postsurgery. The health transition item of the SF-36 and the satisfaction scale were used as external anchors to calculate the MCID. The MCID was estimated for the ODI using four calculation methods: (1) minimum detectable change, (2) average ODI change of patient subsets, (3) change difference between patient subsets, and (4) receiver operating characteristic (ROC) curve. After SI fusion, patients improved significantly (p < .0001) on all measures: SI joint pain (48.8 points), ODI (23.8 points), EQ-5D (0.29 points), EQ-5D VAS (11.7 points), SF-36 physical component summary (PCS, 8.9 points), and mental component summary (MCS, 9.2 points). The improvement in ODI was significantly correlated (p < .0001) with SI joint pain improvement (r = .48) and with the two external anchors: the SF-36 health transition item (r = .49) and satisfaction level (r = .34). The MCID values calculated for the ODI using the various methods ranged from 3.5 to 19.5 points. The ODI minimum detectable change was 15.5 with the health transition item as the anchor and 13.5 with the satisfaction scale as the anchor. The ODI is a valid measure of change in SI joint health; hence, researchers and clinicians may rely on ODI scores to measure disability caused by SI pain. We estimated the MCID for the ODI to be 13-15 points, which falls within the range previously reported for lumbar back pain and indicates that an improvement in disability should be at least 15% to be beyond random variation.
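
    For context, the standard minimum-detectable-change arithmetic is MDC95 = 1.96 * sqrt(2) * SEM with SEM = SD * sqrt(1 - ICC); the SD and ICC inputs below are illustrative, not values reported in this study:

        # Minimum detectable change at 95% confidence from test-retest data.
        import math

        def mdc95(sd, icc):
            sem = sd * math.sqrt(1.0 - icc)         # standard error of measurement
            return 1.96 * math.sqrt(2.0) * sem

        print(round(mdc95(sd=16.0, icc=0.90), 1))   # ~14 ODI points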

  8. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    PubMed Central

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129

  9. Variable Selection for Confounder Control, Flexible Modeling and Collaborative Targeted Minimum Loss-Based Estimation in Causal Inference.

    PubMed

    Schnitzer, Mireille E; Lok, Judith J; Gruber, Susan

    2016-05-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010 [27]) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios.

  10. A basin-scale approach to estimating stream temperatures of tributaries to the lower Klamath River, California

    USGS Publications Warehouse

    Flint, L.E.; Flint, A.L.

    2008-01-01

    Stream temperature is an important component of salmonid habitat and is often above levels suitable for fish survival in the Lower Klamath River in northern California. The objective of this study was to provide boundary conditions for models that are assessing stream temperature on the main stem for the purpose of developing strategies to manage stream conditions using Total Maximum Daily Loads. For model input, hourly stream temperatures for 36 tributaries were estimated for 1 Jan. 2001 through 31 Oct. 2004. A basin-scale approach incorporating spatially distributed energy balance data was used to estimate the stream temperatures with measured air temperature and relative humidity data and simulated solar radiation, including topographic shading and corrections for cloudiness. Regression models were developed on the basis of available stream temperature data to predict temperatures for unmeasured periods of time and for unmeasured streams. The most significant factor in matching measured minimum and maximum stream temperatures was the seasonality of the estimate. Adding minimum and maximum air temperature to the regression model improved the estimate, and air temperature data over the region are available and easily distributed spatially. The addition of simulated solar radiation and vapor saturation deficit to the regression model significantly improved predictions of maximum stream temperature but was not required to predict minimum stream temperature. The average SE in estimated maximum daily stream temperature for the individual basins was 0.9 ± 0.6°C at the 95% confidence interval.

  11. A Simulation-Based Comparison of Several Stochastic Linear Regression Methods in the Presence of Outliers.

    ERIC Educational Resources Information Center

    Rule, David L.

    Several regression methods were examined within the framework of weighted structural regression (WSR), comparing their regression weight stability and score estimation accuracy in the presence of outlier contamination. The methods compared are: (1) ordinary least squares; (2) WSR ridge regression; (3) minimum risk regression; (4) minimum risk 2;…

  12. 12 CFR Appendix M1 to Part 226 - Repayment Disclosures

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... a fixed period of time, as set forth by the card issuer. (2) “Deferred interest or similar plan... calculating the minimum payment repayment estimate, card issuers must use the minimum payment formula(s) that... purchases, such as a “club plan purchase.” Also, assume that based on a consumer's balances in these...

  13. 12 CFR Appendix M1 to Part 226 - Repayment Disclosures

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... a fixed period of time, as set forth by the card issuer. (2) “Deferred interest or similar plan... calculating the minimum payment repayment estimate, card issuers must use the minimum payment formula(s) that... purchases, such as a “club plan purchase.” Also, assume that based on a consumer's balances in these...

  14. Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm

    Treesearch

    Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney

    2014-01-01

    Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as a module in version five of FlamMap, provides valuable fire behavior functions while enabling multi-core utilization for the...

  15. Fully invariant wavelet enhanced minimum average correlation energy filter for object recognition in cluttered and occluded environments

    NASA Astrophysics Data System (ADS)

    Tehsin, Sara; Rehman, Saad; Riaz, Farhan; Saeed, Omer; Hassan, Ali; Khan, Muazzam; Alam, Muhammad S.

    2017-05-01

    A fully invariant system helps in resolving difficulties in object detection when the camera or object orientation and position are unknown. In this paper, the proposed correlation filter based mechanism provides the capability to suppress noise, clutter and occlusion. The Minimum Average Correlation Energy (MACE) filter yields sharp correlation peaks while controlling the correlation peak value. A Difference of Gaussian (DOG) wavelet has been added at the preprocessing stage in the proposed filter design, which facilitates target detection in orientation-variant cluttered environments. A logarithmic transformation is combined with the DOG composite minimum average correlation energy filter (WMACE), which is capable of producing sharp correlation peaks despite any kind of geometric distortion of the target object. The proposed filter has shown improved performance over several other correlation filter variants, as discussed in the results section.

  16. Sunspot Observations During the Maunder Minimum from the Correspondence of John Flamsteed

    NASA Astrophysics Data System (ADS)

    Carrasco, V. M. S.; Vaquero, J. M.

    2016-11-01

    We compile and analyze the sunspot observations made by John Flamsteed for the period 1672 - 1703, which corresponds to the second part of the Maunder Minimum. They appear in the correspondence of the famous astronomer. We include in an appendix the original texts of the sunspot records kept by Flamsteed. We compute an estimate of the level of solar activity using these records, and compare the results with the latest reconstructions of solar activity during the Maunder Minimum, obtaining values characteristic of a grand solar minimum. Finally, we discuss a phenomenon observed and described by Stephen Gray in 1705 that has been interpreted as a white-light flare.

  17. Assessing Seasonal and Inter-Annual Variations of Lake Surface Areas in Mongolia during 2000-2011 Using Minimum Composite MODIS NDVI

    PubMed Central

    Kang, Sinkyu; Hong, Suk Young

    2016-01-01

    A minimum composite method was applied to produce a 15-day interval normalized difference vegetation index (NDVI) dataset from Moderate Resolution Imaging Spectroradiometer (MODIS) daily 250 m reflectance in the red and near-infrared bands. This dataset was applied to determine lake surface areas in Mongolia. A total of 73 lakes greater than 6.25 km² in area were selected, and 28 of these lakes were used to evaluate detection errors. The minimum composite NDVI showed a better detection performance on lake water pixels than did the official MODIS 16-day 250 m NDVI based on a maximum composite method. The overall lake area detection performance based on the 15-day minimum composite NDVI showed -2.5% error relative to the Landsat-derived lake area for the 28 evaluated lakes. The errors increased with increases in the perimeter-to-area ratio but decreased with lake size over 10 km². The lake area decreased by 9.3%, at an annual rate of -53.7 km² yr⁻¹, during 2000 to 2011 for the 73 lakes. However, considerable spatial variations, such as slight-to-moderate lake area reductions in semi-arid regions and rapid lake area reductions in arid regions, were also detected. This study demonstrated the applicability of MODIS 250 m reflectance data for biweekly monitoring of lake area change and diagnosed considerable lake area reduction and its spatial variability in arid and semi-arid regions of Mongolia. Future studies are required to explain the causes of these lake area changes and their spatial variability. PMID:27007233
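
    A simplified version of the compositing step, with toy shapes and no cloud masking: compute daily NDVI from red/NIR reflectance and take the per-pixel minimum over each 15-day window, which favors the persistently low NDVI of water pixels.

        # Minimum-composite NDVI over 15-day windows; arrays are
        # (day, row, col) stacks and the inputs here are synthetic.
        import numpy as np

        def min_composite_ndvi(red, nir, window=15):
            ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
            n_days = ndvi.shape[0] - ndvi.shape[0] % window
            blocks = ndvi[:n_days].reshape(-1, window, *ndvi.shape[1:])
            return np.nanmin(blocks, axis=1)        # one composite per window

        red = np.random.rand(30, 4, 4) * 0.2        # toy reflectance stacks
        nir = np.random.rand(30, 4, 4) * 0.5
        print(min_composite_ndvi(red, nir).shape)   # (2, 4, 4)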

  18. Assessing Seasonal and Inter-Annual Variations of Lake Surface Areas in Mongolia during 2000-2011 Using Minimum Composite MODIS NDVI.

    PubMed

    Kang, Sinkyu; Hong, Suk Young

    2016-01-01

    A minimum composite method was applied to produce a 15-day interval normalized difference vegetation index (NDVI) dataset from Moderate Resolution Imaging Spectroradiometer (MODIS) daily 250 m reflectance in the red and near-infrared bands. This dataset was applied to determine lake surface areas in Mongolia. A total of 73 lakes greater than 6.25 km² in area were selected, and 28 of these lakes were used to evaluate detection errors. The minimum composite NDVI showed a better detection performance on lake water pixels than did the official MODIS 16-day 250 m NDVI based on a maximum composite method. The overall lake area detection performance based on the 15-day minimum composite NDVI showed -2.5% error relative to the Landsat-derived lake area for the 28 evaluated lakes. The errors increased with increases in the perimeter-to-area ratio but decreased with lake size over 10 km². The lake area decreased by 9.3%, at an annual rate of -53.7 km² yr⁻¹, during 2000 to 2011 for the 73 lakes. However, considerable spatial variations, such as slight-to-moderate lake area reductions in semi-arid regions and rapid lake area reductions in arid regions, were also detected. This study demonstrated the applicability of MODIS 250 m reflectance data for biweekly monitoring of lake area change and diagnosed considerable lake area reduction and its spatial variability in arid and semi-arid regions of Mongolia. Future studies are required to explain the causes of these lake area changes and their spatial variability.

  19. Differential detection of Gaussian MSK in a mobile radio environment

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Wang, C. C.

    1984-01-01

    Minimum shift keying with Gaussian shaped transmit pulses is a strong candidate for a modulation technique that satisfies the stringent out-of-band radiated power requirements of the mobile radio application. Numerous studies and field experiments have been conducted by the Japanese on urban and suburban mobile radio channels with systems employing Gaussian minimum-shift keying (GMSK) transmission and differentially coherent reception. A comprehensive analytical treatment is presented of the performance of such systems, emphasizing the important trade-offs among the various system design parameters such as transmit and receiver filter bandwidths and detection threshold level. It is shown that two-bit differential detection of GMSK is capable of offering far superior performance to the more conventional one-bit detection method, both in the presence of an additive Gaussian noise background and Rician fading.

  20. Differential detection of Gaussian MSK in a mobile radio environment

    NASA Astrophysics Data System (ADS)

    Simon, M. K.; Wang, C. C.

    1984-11-01

    Minimum shift keying with Gaussian shaped transmit pulses is a strong candidate for a modulation technique that satisfies the stringent out-of-band radiated power requirements of the mobile radio application. Numerous studies and field experiments have been conducted by the Japanese on urban and suburban mobile radio channels with systems employing Gaussian minimum-shift keying (GMSK) transmission and differentially coherent reception. A comprehensive analytical treatment is presented of the performance of such systems, emphasizing the important trade-offs among the various system design parameters such as transmit and receiver filter bandwidths and detection threshold level. It is shown that two-bit differential detection of GMSK is capable of offering far superior performance to the more conventional one-bit detection method, both in the presence of an additive Gaussian noise background and Rician fading.

  1. Statistical modeling, detection, and segmentation of stains in digitized fabric images

    NASA Astrophysics Data System (ADS)

    Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.

    2007-02-01

    This paper will describe a novel and automated system based on a computer vision approach for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision theoretic problem, where the null hypothesis corresponds to the absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first order GMM and a second order GMM, respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the verity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, Ragu sauce, Revlon makeup and grape juice. The decision theoretic part of the algorithm produced a correct detection (true positive) rate of 93% and a false alarm rate of 5% on this set of images.
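
    A sketch of the order-selection step using scikit-learn, with BIC standing in for the paper's MDL statistic (the two criteria coincide up to constants for this comparison) and synthetic pixel data:

        # Decide "stain" vs "no stain" by comparing 1- and 2-component
        # Gaussian mixtures; data are synthetic stand-ins for pixel values.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        clean = rng.normal(0.7, 0.02, (2000, 1))             # unstained fabric
        stained = np.vstack([clean, rng.normal(0.4, 0.03, (300, 1))])

        for name, pixels in [("clean", clean), ("stained", stained)]:
            bic = [GaussianMixture(k, random_state=0).fit(pixels).bic(pixels)
                   for k in (1, 2)]
            print(name, "->", "stain" if bic[1] < bic[0] else "no stain")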

  2. Investigation of primordial black hole bursts using interplanetary network gamma-ray bursts

    DOE PAGES

    Ukwatta, Tilan Niranjan; Hurley, Kevin; MacGibbon, Jane H.; ...

    2016-07-25

    The detection of a gamma-ray burst (GRB) in the solar neighborhood would have very important implications for GRB phenomenology. The leading theories for cosmological GRBs would not be able to explain such events. The final bursts of evaporating primordial black holes (PBHs), however, would be a natural explanation for local GRBs. We present a novel technique that can constrain the distance to GRBs using detections from widely separated, non-imaging spacecraft. This method can determine the actual distance to the burst if it is local. We applied this method to constrain distances to a sample of 36 short-duration GRBs detected by the Interplanetary Network (IPN) that show observational properties that are expected from PBH evaporations. These bursts have minimum possible distances in the 10^13-10^18 cm (7-10^5 au) range, which are consistent with the expected PBH energetics and with a possible origin in the solar neighborhood, although none of the bursts can be unambiguously demonstrated to be local. Furthermore, assuming that these bursts are real PBH events, we estimate lower limits on the PBH burst evaporation rate in the solar neighborhood.

  3. Segmentation of human face using gradient-based approach

    NASA Astrophysics Data System (ADS)

    Baskan, Selin; Bulut, M. Mete; Atalay, Volkan

    2001-04-01

    This paper describes a method for automatic segmentation of facial features such as eyebrows, eyes, nose, mouth and ears in color images. This work is an initial step for a wide range of applications based on feature-based approaches, such as face recognition, lip-reading, gender estimation, facial expression analysis, etc. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed. No restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighboring maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic. These characteristics are derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is accompanied by anthropometrical information for robustness. Ear detection is performed using contour-based shape descriptors. This method detects the facial features and circumscribes each facial feature with the smallest possible rectangle. The AR database is used for testing. The developed method is also suitable for real-time systems.

  4. Low level radioactivity measurements with phoswich detectors using coincident techniques and digital pulse processing analysis.

    PubMed

    de la Fuente, R; de Celis, B; del Canto, V; Lumbreras, J M; de Celis Alonso, B; Martín-Martín, A; Gutierrez-Villanueva, J L

    2008-10-01

    A new system has been developed for the detection of low radioactivity levels of fission products and actinides using coincidence techniques. The device combines a phoswich detector for alpha/beta/gamma-ray recognition with a fast digital card for electronic pulse analysis. The phoswich can be used in a coincident mode by identifying the composite signal produced by the simultaneous detection of alpha/beta particles and X-rays/gamma particles. The technique of coincidences with phoswich detectors was recently proposed to verify the Nuclear Test Ban Treaty (NTBT), which established the necessity of monitoring low levels of gaseous fission products produced by underground nuclear explosions. With the device proposed here it is possible to identify the coincidence events and determine the energy and type of the coincident particles. The sensitivity of the system has been improved by employing liquid scintillators and a high resolution low energy germanium detector. In this case it is possible to identify transuranic nuclides present in environmental samples simultaneously by alpha/gamma coincidence, without the necessity of performing radiochemical separation. The minimum detectable activity was estimated to be 0.01 Bq kg⁻¹ for 0.1 kg of soil and 1000 min of counting.

  5. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8%, but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.

  6. Estimation of Mesospheric Densities at Low Latitudes Using the Kunming Meteor Radar Together With SABER Temperatures

    NASA Astrophysics Data System (ADS)

    Yi, Wen; Xue, Xianghui; Reid, Iain M.; Younger, Joel P.; Chen, Jinsong; Chen, Tingdi; Li, Na

    2018-04-01

    Neutral mesospheric densities at a low latitude have been derived during April 2011 to December 2014 using data from the Kunming meteor radar in China (25.6°N, 103.8°E). The daily mean density at 90 km was estimated using the ambipolar diffusion coefficients from the meteor radar and temperatures from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument. The seasonal variations of the meteor radar-derived density are consistent with the density from the Mass Spectrometer and Incoherent Scatter (MSIS) model, showing a dominant annual variation with a maximum during winter and a minimum during summer. A simple linear model was used to separate the effects of atmospheric density and meteor velocity on the meteor radar peak detection height. We find that a 1 km/s difference in the vertical meteor velocity yields a change of approximately 0.42 km in peak height. The strong correlation between the meteor radar density and the velocity-corrected peak height indicates that the meteor radar density estimates accurately reflect changes in neutral atmospheric density and that meteor peak detection heights, when adjusted for meteoroid velocity, can serve as a convenient tool for measuring density variations around the mesopause. A comparison of the ambipolar diffusion coefficient and peak height observed simultaneously by two co-located meteor radars indicates that the relative errors of the daily mean ambipolar diffusion coefficient and peak height should be less than 5% and 6%, respectively, and that the absolute error of the peak height is less than 0.2 km.

  7. Rate based failure detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward

    This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.
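
    The patent text stays at the level of components, but the core rate check can be illustrated in a few lines; the Subscription fields and the three-way health classification below are illustrative assumptions, not the patented implementation.

        from dataclasses import dataclass

        @dataclass
        class Subscription:
            desired_rate_hz: float       # rate the subscriber would like
            min_rate_hz: float           # minimum acceptable rate
            priority: int                # higher value = more important

        def classify_rate(sub: Subscription, measured_rate_hz: float) -> str:
            """Classify the health of one subscription from its observed data rate."""
            if measured_rate_hz >= sub.desired_rate_hz:
                return "healthy"
            if measured_rate_hz >= sub.min_rate_hz:
                return "degraded"        # candidate for QoS renegotiation
            return "failed"              # reroute, or shed lower-priority load

        print(classify_rate(Subscription(60.0, 30.0, priority=2), 25.0))   # failed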

  8. Efficiency calibration and minimum detectable activity concentration of a real-time UAV airborne sensor system with two gamma spectrometers.

    PubMed

    Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2016-04-01

    A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV.

  9. Natural radionuclide and radiological assessment of building materials in high background radiation areas of Ramsar, Iran.

    PubMed

    Bavarnegin, Elham; Moghaddam, Masoud Vahabi; Fathabadi, Nasrin

    2013-04-01

    Building materials, collected from different sites in Ramsar, a northern coastal city in Iran, were analyzed for their natural radionuclide contents. The measurements were carried out using a high-resolution high-purity germanium (HPGe) gamma-ray spectrometer system. The activity concentrations of (226)Ra, (232)Th, and (40)K varied from below the minimum detection limit up to 86,400 Bqkg(-1), 187 Bqkg(-1), and 1350 Bqkg(-1), respectively. The radiological hazards incurred from the use of these building materials were estimated through various radiation hazard indices. The results of this survey show that the values obtained for some samples exceed the internationally accepted maximum limits; as such, their use as building materials poses a significant radiation hazard to individuals.

  10. Method to analyze remotely sensed spectral data

    DOEpatents

    Stork, Christopher L [Albuquerque, NM; Van Benthem, Mark H [Middletown, DE

    2009-02-17

    A fast and rigorous multivariate curve resolution (MCR) algorithm is applied to remotely sensed spectral data. The algorithm is applicable in the solar-reflective spectral region, comprising the visible to the shortwave infrared (ranging from approximately 0.4 to 2.5 μm) and midwave infrared, and in the thermal emission spectral region, comprising the thermal infrared (ranging from approximately 8 to 15 μm). For example, employing minimal a priori knowledge, notably non-negativity constraints on the extracted endmember profiles and a constant abundance constraint for the atmospheric upwelling component, MCR can be used to successfully compensate thermal infrared hyperspectral images for atmospheric upwelling and, thereby, transmittance effects. Further, MCR can accurately estimate the relative spectral absorption coefficients and thermal contrast distribution of a gas plume component near the minimum detectable quantity.
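
    As a rough sketch of the alternating-least-squares flavor of MCR named in the abstract, the loop below alternately solves for abundances and spectra and enforces non-negativity by clipping; the initialization, iteration count, and synthetic data are arbitrary choices, not the patented algorithm.

        import numpy as np

        def mcr_als(D, n_components, n_iter=100):
            """Factor D (pixels x bands) as D ~= C @ S.T with non-negative
            abundances C and spectra S, via alternating least squares."""
            rng = np.random.default_rng(0)
            S = rng.random((D.shape[1], n_components))          # random spectral init
            for _ in range(n_iter):
                C = np.clip(D @ np.linalg.pinv(S).T, 0, None)   # abundance step
                S = np.clip(D.T @ np.linalg.pinv(C).T, 0, None) # spectral step
            return C, S

        # Two synthetic endmember spectra mixed at random abundances:
        S_true = np.array([[1, 2, 3, 2, 1], [0, 1, 2, 3, 4]], float).T
        C_true = np.random.default_rng(1).random((50, 2))
        C_est, S_est = mcr_als(C_true @ S_true.T, n_components=2)
        print(C_est.shape, S_est.shape)   # (50, 2) (5, 2)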

  11. Data from the 2011 International Piping Plover Census

    USGS Publications Warehouse

    Elliott-Smith, Elise; Bidwell, Mark; Holland, Amanda E.; Haig, Susan M.

    2015-01-01

    This report provides results from the 2011 International Census of Piping Plovers (Charadrius melodus). Distribution and abundance data for wintering and breeding Piping Plovers are summarized in tabular format. An appendix provides census data for every site surveyed in every state, province, and island. The 2011 winter census resulted in the observation of 3,973 Piping Plovers. Expanded coverage outside of the United States led to the discovery of more than 1,000 Piping Plovers wintering in the Bahamas. The breeding census detected 2,771 birds in Atlantic Canada and the Plains, Prairies, and Great Lakes regions of the United States and Canada. Combining the census count with the U.S. Atlantic “window census” provides a total minimum estimate of 5,723 breeding birds for the species.

  12. A spectrophotometric determination of cyanate using reaction with 2-aminobenzoic acid.

    PubMed

    Guilloton, M; Karst, F

    1985-09-01

    A specific method has been devised for the assay of cyanate, based on the reaction with 2-aminobenzoic acid. Cyclization of the product in 6 N HCl results in the formation of 2,4(1H,3H)-quinazolinedione. Cyanate content of the samples can be measured by their absorbances at 310 nm. Alternatively, the second derivatives of the spectra can be recorded; the peak-to-peak height between the first maximum (330 nm) and the first minimum (317 nm) was shown to be proportional to the cyanate content. This method is suitable for the estimation of cyanate in aqueous solutions in the concentration range 0.01 to 2 mM. When added to blood plasma, cyanate could be detected down to 0.1 mM.
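
    The second-derivative measurement described above reduces to a short numerical recipe, sketched below on a synthetic spectrum; the wavelength grid and band shape are stand-ins for a recorded cyanate spectrum.

        import numpy as np

        def peak_to_peak_2nd_derivative(wavelengths_nm, absorbance):
            """Height between the second-derivative maximum near 330 nm and the
            minimum near 317 nm, taken as proportional to cyanate content."""
            d2 = np.gradient(np.gradient(absorbance, wavelengths_nm), wavelengths_nm)
            near = lambda target: np.abs(wavelengths_nm - target).argmin()
            return d2[near(330.0)] - d2[near(317.0)]

        wl = np.linspace(300, 350, 251)                   # 0.2 nm steps
        spectrum = np.exp(-((wl - 310.0) / 8.0) ** 2)     # synthetic band near 310 nm
        print(peak_to_peak_2nd_derivative(wl, spectrum))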

  13. Minimum Expected Risk Estimation for Near-neighbor Classification

    DTIC Science & Technology

    2006-04-01

    We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN). ... estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant ... the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights
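
    The abstract is truncated, but the general idea of class-probability estimation with weighted kNN can be illustrated generically; the inverse-distance weights and Laplace (add-alpha) smoothing below are common choices and are not necessarily the estimators studied in the report.

        import numpy as np

        def weighted_knn_proba(X_train, y_train, x, k=5, n_classes=2, alpha=1.0):
            """Class-probability estimate from the k nearest neighbors, using
            inverse-distance weights and Laplace (add-alpha) smoothing."""
            d = np.linalg.norm(X_train - x, axis=1)
            idx = np.argsort(d)[:k]
            w = 1.0 / (d[idx] + 1e-12)
            counts = np.zeros(n_classes)
            for label, weight in zip(y_train[idx], w):
                counts[label] += weight
            return (counts + alpha) / (counts.sum() + alpha * n_classes)

        X = np.array([[0.0], [0.1], [0.9], [1.0]])
        y = np.array([0, 0, 1, 1])
        print(weighted_knn_proba(X, y, np.array([0.2]), k=3))   # ~[0.87, 0.13]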

  14. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as an optimization score, should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
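
    For a Gaussian predictive distribution the CRPS has a closed form, which makes minimum-CRPS coefficient estimation easy to sketch; the toy data and the intercept-plus-slope model below are illustrative and are not the study's setup.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        def crps_gaussian(y, mu, sigma):
            # Closed-form CRPS of N(mu, sigma^2) against observation y.
            z = (y - mu) / sigma
            return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                            - 1 / np.sqrt(np.pi))

        rng = np.random.default_rng(0)
        ens_mean = rng.normal(15, 5, 500)                 # toy ensemble-mean forecasts
        obs = 1.0 + 0.9 * ens_mean + rng.normal(0, 2, 500)

        def mean_crps(params):
            a, b, log_sigma = params
            return crps_gaussian(obs, a + b * ens_mean, np.exp(log_sigma)).mean()

        fit = minimize(mean_crps, x0=[0.0, 1.0, 0.0])
        print(fit.x)   # intercept, slope, log of predictive spread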

  15. Flying After Conducting an Aircraft Excessive Cabin Leakage Test.

    PubMed

    Houston, Stephen; Wilkinson, Elizabeth

    2016-09-01

    Aviation medical specialists should be aware that commercial airline aircraft engineers may undertake a 'dive equivalent' operation while conducting maintenance activities on the ground. We present a worked example of an occupational risk assessment to determine a minimum safe preflight surface interval (PFSI) for an engineer before flying home to base after conducting an Excessive Cabin Leakage Test (ECLT) on an unserviceable aircraft overseas. We use published dive tables to determine the minimum safe PFSI. The estimated maximum depth acquired during the procedure varies between 10 and 20 fsw and the typical estimated bottom time varies between 26 and 53 min for the aircraft types operated by the airline. Published dive tables suggest that no minimum PFSI is required for such a dive profile. Diving tables suggest that no minimum PFSI is required for the typical ECLT dive profile within the airline; however, having conducted a risk assessment, which considered peak altitude exposure during commercial flight, the worst-case scenario test dive profile, the variability of interindividual inert gas retention, and our existing policy among other occupational groups within the airline, we advised that, in the absence of a bespoke assessment of the particular circumstances on the day, the minimum PFSI after conducting ECLT should be 24 h. Houston S, Wilkinson E. Flying after conducting an aircraft excessive cabin leakage test. Aerosp Med Hum Perform. 2016; 87(9):816-820.

  16. Low-Level detections of halogenated volatile organic compounds in groundwater: Use in vulnerability assessments

    USGS Publications Warehouse

    Plummer, Niel; Busenberg, E.; Eberts, S.M.; Bexfield, L.M.; Brown, C.J.; Fahlquist, L.S.; Katz, B.G.; Landon, M.K.

    2008-01-01

    Concentrations of halogenated volatile organic compounds (VOCs) were determined by gas chromatography (GC) with an electron-capture detector (GC-ECD) and by gas chromatography with mass spectrometry (GC-MS) in 109 groundwater samples from five study areas in the United States. In each case, the untreated water sample was used for drinking-water purposes or was from a monitoring well in an area near a drinking-water source. The minimum detection levels (MDLs) for 25 VOCs that were identified in GC-ECD chromatograms were typically two to more than four orders of magnitude below the GC-MS MDLs. At least six halogenated VOCs were detected in all of the water samples analyzed by GC-ECD, although one or more VOCs were detected in only 43% of the water samples analyzed by GC-MS. In nearly all of the samples, VOC concentrations were very low and presented no known health risk. Most of the low-level VOC detections indicated post-1940s recharge, or mixtures of recharge that contained a fraction of post-1940s water. Concentrations of selected halogenated VOCs in groundwater from natural and anthropogenic atmospheric sources were estimated and used to recognize water samples that are being impacted by nonatmospheric sources. A classification is presented to perform vulnerability assessments at the scale of individual wells using the number of halogenated VOC detections and total dissolved VOC concentrations in samples of untreated drinking water. The low-level VOC detections are useful in vulnerability assessments, particularly for samples in which no VOCs are detected by GC-MS analysis.

  17. Visualization and curve-parameter estimation strategies for efficient exploration of phenotype microarray kinetics.

    PubMed

    Vaas, Lea A I; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter

    2012-01-01

    The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed '-omics' techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data.
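
    The framework described here is implemented in R; purely for consistency with the other sketches in this collection, the following Python snippet illustrates the spline-based extraction of curve parameters on a synthetic respiration curve. The parameter names and the smoothing factor are assumptions, not the package's API.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def curve_parameters(t_hours, respiration):
            """Spline-based estimates of common kinetic parameters: maximum
            height, maximum slope, and area under the curve."""
            spline = UnivariateSpline(t_hours, respiration, s=len(t_hours))
            grid = np.linspace(t_hours.min(), t_hours.max(), 500)
            values, slopes = spline(grid), spline.derivative()(grid)
            return {"A_max": values.max(), "mu_max": slopes.max(),
                    "AUC": np.trapz(values, grid)}

        t = np.linspace(0, 96, 97)                    # hourly readings for 4 days
        curve = 200 / (1 + np.exp(-(t - 30) / 6))     # logistic-shaped toy curve
        noisy = curve + np.random.default_rng(2).normal(0, 3, t.size)
        print(curve_parameters(t, noisy))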

  18. Visualization and Curve-Parameter Estimation Strategies for Efficient Exploration of Phenotype Microarray Kinetics

    PubMed Central

    Vaas, Lea A. I.; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter

    2012-01-01

    Background The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed ‘-omics’ techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. Methodology The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. Conclusions We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data. PMID:22536335

  19. Low-flow characteristics of streams in Ohio through water year 1997

    USGS Publications Warehouse

    Straub, David E.

    2001-01-01

    This report presents selected low-flow and flow-duration characteristics for 386 sites throughout Ohio. These sites include 195 long-term continuous-record stations with streamflow data through water year 1997 (October 1 to September 30) and 191 low-flow partial-record stations with measurements into water year 1999. The characteristics presented for the long-term continuous-record stations are minimum daily streamflow; average daily streamflow; harmonic mean flow; 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 5-, 10-, 20-, and 50-year recurrence intervals; and 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 20-, and 10-percent daily duration flows. The characteristics presented for the low-flow partial-record stations are minimum observed streamflow; estimated 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 10-, and 20-year recurrence intervals; and estimated 98-, 95-, 90-, 85- and 80-percent daily duration flows. The low-flow frequency and duration analyses were done for three seasonal periods (warm weather, May 1 to November 30; winter, December 1 to February 28/29; and autumn, September 1 to November 30), plus the annual period based on the climatic year (April 1 to March 31).

  20. Dual Energy Method for Breast Imaging: A Simulation Study.

    PubMed

    Koukou, V; Martini, N; Michail, C; Sotiropoulou, P; Fountzoula, C; Kalyvas, N; Kandarakis, I; Nikiforidis, G; Fountos, G

    2015-01-01

    Dual energy methods can suppress the contrast between adipose and glandular tissues in the breast and therefore enhance the visibility of calcifications. In this study, a dual energy method based on analytical modeling was developed for the detection of minimum microcalcification thickness. To this aim, a modified radiographic X-ray unit was considered, in order to overcome the limited kVp range of mammographic units used in previous DE studies, combined with a high resolution CMOS sensor (pixel size of 22.5 μm) for improved resolution. Various filter materials were examined based on their K-absorption edge. Hydroxyapatite (HAp) was used to simulate microcalcifications. The contrast to noise ratio (CNRtc) of the subtracted images was calculated for both monoenergetic and polyenergetic X-ray beams. The optimum monoenergetic pair was 23/58 keV for the low and high energy, respectively, resulting in a minimum detectable microcalcification thickness of 100 μm. In the polyenergetic X-ray study, the optimal spectral combination was 40/70 kVp filtered with 100 μm cadmium and 1000 μm copper, respectively. In this case, the minimum detectable microcalcification thickness was 150 μm. The proposed dual energy method provides improved microcalcification detectability in breast imaging with mean glandular dose values within acceptable levels.
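
    The record does not spell out the subtraction scheme, so the sketch below uses a generic weighted log-subtraction and the usual CNR definition; the weight and the toy Poisson counts are illustrative assumptions, not the study's simulated spectra.

        import numpy as np

        def dual_energy_cnr(low_sig, low_bg, high_sig, high_bg, w):
            """CNR of the weighted log-subtraction image I = ln(I_high) - w*ln(I_low)."""
            sig = np.log(high_sig) - w * np.log(low_sig)
            bg = np.log(high_bg) - w * np.log(low_bg)
            return abs(sig.mean() - bg.mean()) / np.sqrt(sig.var() + bg.var())

        rng = np.random.default_rng(3)
        # Toy Poisson detector counts for a calcification patch vs. background:
        low_sig, low_bg = rng.poisson(900, 10_000), rng.poisson(1000, 10_000)
        high_sig, high_bg = rng.poisson(1980, 10_000), rng.poisson(2000, 10_000)
        print(dual_energy_cnr(low_sig, low_bg, high_sig, high_bg, w=0.5))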

  1. Dual Energy Method for Breast Imaging: A Simulation Study

    PubMed Central

    2015-01-01

    Dual energy methods can suppress the contrast between adipose and glandular tissues in the breast and therefore enhance the visibility of calcifications. In this study, a dual energy method based on analytical modeling was developed for the detection of minimum microcalcification thickness. To this aim, a modified radiographic X-ray unit was considered, in order to overcome the limited kVp range of mammographic units used in previous DE studies, combined with a high resolution CMOS sensor (pixel size of 22.5 μm) for improved resolution. Various filter materials were examined based on their K-absorption edge. Hydroxyapatite (HAp) was used to simulate microcalcifications. The contrast to noise ratio (CNRtc) of the subtracted images was calculated for both monoenergetic and polyenergetic X-ray beams. The optimum monoenergetic pair was 23/58 keV for the low and high energy, respectively, resulting in a minimum detectable microcalcification thickness of 100 μm. In the polyenergetic X-ray study, the optimal spectral combination was 40/70 kVp filtered with 100 μm cadmium and 1000 μm copper, respectively. In this case, the minimum detectable microcalcification thickness was 150 μm. The proposed dual energy method provides improved microcalcification detectability in breast imaging with mean glandular dose values within acceptable levels. PMID:26246848

  2. Methodological Considerations When Quantifying High-Intensity Efforts in Team Sport Using Global Positioning System Technology.

    PubMed

    Varley, Matthew C; Jaspers, Arne; Helsen, Werner F; Malone, James J

    2017-09-01

    Sprints and accelerations are popular performance indicators in applied sport. The methods used to define these efforts using athlete-tracking technology could affect the number of efforts reported. This study aimed to determine the influence of different techniques and settings for detecting high-intensity efforts using global positioning system (GPS) data. Velocity and acceleration data from a professional soccer match were recorded via 10-Hz GPS. Velocity data were filtered using either a median or an exponential filter. Acceleration data were derived from velocity data over a 0.2-s time interval (with and without an exponential filter applied) and a 0.3-s time interval. High-speed-running (≥4.17 m/s), sprint (≥7.00 m/s), and acceleration (≥2.78 m/s²) efforts were then identified using minimum-effort durations (0.1-0.9 s) to assess differences in the total number of efforts reported. Different velocity-filtering methods resulted in small to moderate differences (effect size [ES] 0.28-1.09) in the number of high-speed-running and sprint efforts detected when minimum duration was <0.5 s, and small to very large differences (ES -5.69 to 0.26) in the number of accelerations when minimum duration was <0.7 s. There was an exponential decline in the number of all efforts as minimum duration increased, regardless of filtering method, with the largest declines in acceleration efforts. Filtering techniques and minimum durations substantially affect the number of high-speed-running, sprint, and acceleration efforts detected with GPS. Changes to how high-intensity efforts are defined affect reported data. Therefore, consistency in data processing is advised.
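
    The effect of a minimum-effort duration can be reproduced with a simple run-length check on a thresholded velocity trace; the toy 10 Hz trace below is illustrative, while the 4.17 m/s threshold is the one quoted above.

        import numpy as np

        def count_efforts(velocity, threshold, min_duration_s, hz=10):
            """Count runs where velocity stays >= threshold for at least min_duration_s."""
            above = velocity >= threshold
            edges = np.diff(above.astype(int))          # +1 marks a run start, -1 a run end
            starts = np.where(edges == 1)[0] + 1
            ends = np.where(edges == -1)[0] + 1
            if above[0]:
                starts = np.r_[0, starts]
            if above[-1]:
                ends = np.r_[ends, above.size]
            min_samples = int(round(min_duration_s * hz))
            return int(np.sum((ends - starts) >= min_samples))

        v = np.array([3, 5, 5, 5, 3, 7, 7, 3, 5, 5], float)          # toy 10 Hz trace
        print(count_efforts(v, threshold=4.17, min_duration_s=0.3))  # -> 1
        print(count_efforts(v, threshold=4.17, min_duration_s=0.2))  # -> 3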

  3. Method to improve reliability of a fuel cell system using low performance cell detection at low power operation

    DOEpatents

    Choi, Tayoung; Ganapathy, Sriram; Jung, Jaehak; Savage, David R.; Lakshmanan, Balasubramanian; Vecasey, Pamela M.

    2013-04-16

    A system and method for detecting a low performing cell in a fuel cell stack using measured cell voltages. The method includes determining that the fuel cell stack is running, the stack coolant temperature is above a certain temperature and the stack current density is within a relatively low power range. The method further includes calculating the average cell voltage, and determining whether the difference between the average cell voltage and the minimum cell voltage is greater than a predetermined threshold. If the difference between the average cell voltage and the minimum cell voltage is greater than the predetermined threshold and the minimum cell voltage is less than another predetermined threshold, then the method increments a low performing cell timer. A ratio of the low performing cell timer and a system run timer is calculated to identify a low performing cell.
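
    The detection logic described above translates almost directly into code; all numeric thresholds below are illustrative placeholders, not the patented values.

        def update_low_cell_timer(cell_voltages, coolant_temp_c, current_density,
                                  timers, dt_s, min_temp_c=40.0,
                                  low_power=(0.05, 0.2), delta_thresh=0.15,
                                  abs_thresh=0.55):
            """One evaluation step of the low-performing-cell check: only counts
            time when the stack is warm and in the low-power window."""
            timers["run"] += dt_s
            in_window = (coolant_temp_c > min_temp_c and
                         low_power[0] <= current_density <= low_power[1])
            if in_window:
                avg_v = sum(cell_voltages) / len(cell_voltages)
                min_v = min(cell_voltages)
                if (avg_v - min_v) > delta_thresh and min_v < abs_thresh:
                    timers["low_cell"] += dt_s
            return timers["low_cell"] / timers["run"]    # ratio used to flag a cell

        timers = {"run": 0.0, "low_cell": 0.0}
        # One cell sags to 0.45 V while the stack idles at 0.1 A/cm^2 and 60 degC:
        print(update_low_cell_timer([0.80, 0.79, 0.45], 60.0, 0.1, timers, dt_s=1.0))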

  4. Establishment of a center of excellence for applied mathematical and statistical research

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Gray, H. L.

    1983-01-01

    The state of the art was assessed with regard to efforts in support of the crop production estimation problem, and alternative generic proportion estimation techniques were investigated. Topics covered include modeling the greenness profile (Badhwar's model), parameter estimation using mixture models such as CLASSY, and minimum distance estimation as an alternative to maximum likelihood estimation. Approaches to the problem of obtaining proportion estimates when the underlying distributions are asymmetric are examined, including the properties of the Weibull distribution.

  5. Geo-spatial analysis of temporal trends of temperature and its extremes over India using daily gridded (1°×1°) temperature data of 1969-2005

    NASA Astrophysics Data System (ADS)

    Chakraborty, Abhishek; Seshasai, M. V. R.; Rao, S. V. C. Kameswara; Dadhwal, V. K.

    2017-10-01

    Daily gridded (1°×1°) temperature data (1969-2005) were used to detect spatial patterns of temporal trends of maximum and minimum temperature (monthly and seasonal), growing degree days (GDDs) over the crop-growing seasons (kharif, rabi, and zaid), and annual frequencies of temperature extremes over India. The direction and magnitude of trends, at each grid level, were estimated using the Mann-Kendall statistic (α = 0.05) and further assessed over the homogeneous temperature regions using a field significance test (α = 0.05). General warming trends were observed over India with considerable variations in direction and magnitude over space and time. The spatial extent and the magnitude of the increasing trends of minimum temperature (0.02-0.04 °C per year) were found to be higher than those of maximum temperature (0.01-0.02 °C per year) during the winter and pre-monsoon seasons. Significant negative trends of minimum temperature were found over eastern India during the monsoon months. Such trends were also observed for the maximum temperature over northern and eastern parts, particularly in the winter month of January. The general warming patterns also changed the thermal environment of the crop-growing season, causing a significant increase in GDDs during the kharif and rabi seasons across India. The warming climate has also caused a significant increase in occurrences of hot extremes such as hot days and hot nights, and a significant decrease in cold extremes such as cold days and cold nights.
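
    A minimal version of the Mann-Kendall trend test used in this study (without tie correction) can be sketched as follows; the synthetic minimum-temperature series is an illustration, not the study's data.

        import numpy as np
        from scipy.stats import norm

        def mann_kendall(x, alpha=0.05):
            """Two-sided Mann-Kendall trend test (no tie correction; n > 10)."""
            x = np.asarray(x, float)
            n = x.size
            s = 0.0
            for i in range(n - 1):
                s += np.sign(x[i + 1:] - x[i]).sum()    # S statistic
            var_s = n * (n - 1) * (2 * n + 5) / 18.0    # variance of S without ties
            z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
            p = 2 * (1 - norm.cdf(abs(z)))
            return {"S": int(s), "z": z, "p": p, "significant": p < alpha}

        years = np.arange(1969, 2006)
        tmin = (20 + 0.03 * (years - 1969)
                + np.random.default_rng(4).normal(0, 0.3, years.size))
        print(mann_kendall(tmin))   # a ~0.03 degC/yr warming trend tests significant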

  6. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

    Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
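
    Assuming independent detections, the joint detection probability with multiple probabilistic sensors that underlies the ϵ-detection coverage criterion is a one-liner; the probabilities and ϵ below are arbitrary examples, not the paper's sensing model.

        import numpy as np

        def joint_detection_probability(per_sensor_p):
            """P(at least one sensor detects the target), assuming independence:
            P = 1 - prod(1 - p_i)."""
            return 1.0 - np.prod(1.0 - np.asarray(per_sensor_p))

        def is_epsilon_covered(per_sensor_p, epsilon):
            # A target is epsilon-covered if the joint probability reaches epsilon.
            return joint_detection_probability(per_sensor_p) >= epsilon

        print(joint_detection_probability([0.5, 0.6, 0.7]))   # 0.94
        print(is_epsilon_covered([0.5, 0.6], epsilon=0.9))    # False (joint = 0.80)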

  7. Geodesy by radio interferometry: Water vapor radiometry for estimation of the wet delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elgered, G.; Davis, J.L.; Herring, T.A.

    1991-04-10

    An important source of error in very-long-baseline interferometry (VLBI) estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. The authors present and discuss the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The lengths of the baselines range from 919 to 7,941 km. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. The use of WVR data yielded a 13% smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the best minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass. For use of WVR data along with accurate determinations of total surface pressure, the best minimum is about 20°; for use of a model for the wet delay based on the humidity and temperature at the ground, the best minimum is about 35°.

  8. Predicting minimum uncertainties in the inversion of ocean color geophysical parameters based on Cramer-Rao bounds.

    PubMed

    Jay, Sylvain; Guillaume, Mireille; Chami, Malik; Minghelli, Audrey; Deville, Yannick; Lafrance, Bruno; Serfaty, Véronique

    2018-01-22

    We present an analytical approach based on Cramer-Rao Bounds (CRBs) to investigate the uncertainties in estimated ocean color parameters resulting from the propagation of uncertainties in the bio-optical reflectance modeling through the inversion process. Based on given bio-optical and noise probabilistic models, CRBs can be computed efficiently for any set of ocean color parameters and any sensor configuration, directly providing the minimum estimation variance that can possibly be attained by any unbiased estimator of any targeted parameter. Here, CRBs are explicitly developed using (1) two water reflectance models, corresponding to deep and shallow waters, respectively, and (2) four probabilistic models describing the environmental noises observed within four Sentinel-2 MSI, HICO, Sentinel-3 OLCI and MODIS images, respectively. For both deep and shallow waters, CRBs are shown to be consistent with the experimental estimation variances obtained using two published remote-sensing methods, while not requiring one to perform any inversion. CRBs are also used to investigate to what extent perfect a priori knowledge on one or several geophysical parameters can improve the estimation of remaining unknown parameters. For example, using pre-existing knowledge of bathymetry (e.g., derived from LiDAR) within the inversion is shown to greatly improve the retrieval of bottom cover for shallow waters. Finally, CRBs are shown to provide valuable information on the best estimation performances that may be achieved with the MSI, HICO, OLCI and MODIS configurations for a variety of oceanic, coastal and inland waters. CRBs are thus demonstrated to be an informative and efficient tool to characterize minimum uncertainties in inverted ocean color geophysical parameters.
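
    For additive Gaussian noise and a model linearized around the parameters of interest, the CRB computation reduces to inverting a Fisher information matrix; the 4-band Jacobian and noise variances below are hypothetical stand-ins for a real bio-optical model, which is nonlinear.

        import numpy as np

        def gaussian_crb(jacobian, noise_cov):
            """CRBs for unbiased estimation under additive Gaussian noise:
            Fisher information J = H^T C^-1 H, CRB = diag(J^-1)."""
            H = np.asarray(jacobian, float)
            fisher = H.T @ np.linalg.inv(noise_cov) @ H
            return np.diag(np.linalg.inv(fisher))

        # Hypothetical 4-band model linearized around 2 geophysical parameters:
        H = np.array([[0.8, 0.1], [0.6, 0.3], [0.4, 0.5], [0.2, 0.7]])
        C = np.diag([1e-4] * 4)              # per-band environmental noise variances
        print(np.sqrt(gaussian_crb(H, C)))   # minimum attainable standard deviations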

  9. Diapycnal diffusivity in the core and oxycline of the tropical North Atlantic oxygen minimum zone

    NASA Astrophysics Data System (ADS)

    Köllner, Manuela; Visbeck, Martin; Tanhua, Toste; Fischer, Tim

    2016-08-01

    Diapycnal diffusivity estimates from two Tracer Release Experiments (TREs) and microstructure measurements in the oxycline and core of the oxygen minimum zone (OMZ) in the Eastern Tropical North Atlantic (ETNA) are compared. For the first time, two TREs within the same area at different depths were realized: the Guinea Upwelling Tracer Release Experiment (GUTRE) initiated in 2008 in the oxycline at approximately 320 m depth, and the Oxygen Supply Tracer Release Experiment (OSTRE) initiated in 2012 in the core of the OMZ at approximately 410 m depth. The mean diapycnal diffusivity Dz was found to be insignificantly smaller in the OMZ core, at (1.06 ± 0.24) × 10^-5 m^2 s^-1, compared to (1.11 ± 0.22) × 10^-5 m^2 s^-1 found 90 m shallower in the oxycline. Unexpectedly, GUTRE tracer was detected during two of the OSTRE surveys, which showed that the estimated diapycnal diffusivity from GUTRE over a time period of seven years was within the uncertainty of the previous estimates over a time period of three years. The results are consistent with the Dz estimates from microstructure measurements and demonstrate that Dz does not vary significantly in the vertical within the 200-600 m depth range of the OMZ and does not change with time. The presence of a seamount chain in the vicinity of the GUTRE injection region did not cause enhanced Dz compared to the smoother bottom topography of the OSTRE injection region, although the analysis of vertical shear spectra from ship ADCP data showed elevated internal wave energy levels in the seamount vicinity. However, the two tracer patches covered increasingly overlapping areas with time, and thus spatially integrated increasingly similar fields of local diffusivity; in addition, the difference in local stratification counteracted the influence of roughness on Dz. For both experiments no significant vertical displacements of the tracer were observed; thus diapycnal upwelling within the ETNA OMZ is below the uncertainty level of 5 m yr^-1.

  10. Comparing different policy scenarios to reduce the consumption of ultra-processed foods in UK: impact on cardiovascular disease mortality using a modelling approach.

    PubMed

    Moreira, Patricia V L; Baraldi, Larissa Galastri; Moubarac, Jean-Claude; Monteiro, Carlos Augusto; Newton, Alex; Capewell, Simon; O'Flaherty, Martin

    2015-01-01

    The global burden of non-communicable diseases partly reflects growing exposure to ultra-processed food products (UPPs). These heavily marketed UPPs are cheap and convenient for consumers and profitable for manufacturers, but contain high levels of salt, fat and sugars. This study aimed to explore the potential mortality reduction associated with future policies for substantially reducing ultra-processed food intake in the UK. We obtained data from the UK Living Cost and Food Survey and from the National Diet and Nutrition Survey. Using the NOVA food typology, all food items were categorized into three groups according to the extent of food processing: Group 1 describes unprocessed/minimally processed foods; Group 2 comprises processed culinary ingredients; Group 3 includes all processed or ultra-processed products. Using UK nutrient conversion tables, we estimated the energy and nutrient profile of each food group. We then used the IMPACT Food Policy model to estimate reductions in cardiovascular mortality from improved nutrient intakes reflecting shifts from processed or ultra-processed to unprocessed/minimally processed foods. We then conducted probabilistic sensitivity analyses using Monte Carlo simulation. Approximately 175,000 cardiovascular disease (CVD) deaths might be expected in 2030 if current mortality patterns persist. However, halving the intake of Group 3 (processed) foods could result in approximately 22,055 fewer CVD-related deaths in 2030 (minimum estimate 10,705, maximum estimate 34,625). An ideal scenario in which salt and fat intakes are reduced to the low levels observed in Groups 1 and 2 could lead to approximately 14,235 (minimum estimate 6,680, maximum estimate 22,525) fewer coronary deaths and approximately 7,820 (minimum estimate 4,025, maximum estimate 12,100) fewer stroke deaths, corresponding to an almost 13% reduction in mortality. This study shows a substantial potential for reducing the cardiovascular disease burden through a healthier food system. It highlights the crucial importance of implementing healthier UK food policies.

  11. Estimated Minimum Discharge Rates of the Deepwater Horizon Spill-Interim Report to the Flow Rate Technical Group from the Mass Balance Team

    USGS Publications Warehouse

    Labson, Victor F.; Clark, Roger N.; Swayze, Gregg A.; Hoefen, Todd M.; Kokaly, Raymond F.; Livo, K. Eric; Powers, Michael H.; Plumlee, Geoffrey S.; Meeker, Gregory P.

    2010-01-01

    All of the calculations and results in this report are preliminary and intended for the purpose, and only for the purpose, of aiding the incident team in assessing the extent of the spilled oil for ongoing response efforts. Other applications of this report are not authorized and are not considered valid. Because of time constraints and limitations of data available to the experts, many of their estimates are approximate, are subject to revision, and certainly should not be used as the Federal Government's final values for assessing volume of the spill or its impact to the environment or to coastal communities. Each expert that contributed to this report reserves the right to alter his conclusions based upon further analysis or additional information. An estimated minimum total oil discharge was determined by calculations of oil volumes measured as of May 17, 2010. This included oil on the ocean surface measured with satellite and airborne images and with spectroscopic data (129,000 barrels to 246,000 barrels using less and more aggressive assumptions, respectively), oil skimmed off the surface (23,500 barrels from U.S. Coast Guard [USCG] estimates), oil burned off the surface (11,500 barrels from USCG estimates), dispersed subsea oil (67,000 to 114,000 barrels), and oil evaporated or dissolved (109,000 to 185,000 barrels). Sedimentation (oil captured from Mississippi River silt and deposited on the ocean bottom), biodegradation, and other processes may indicate significant oil volumes beyond our analyses, as will any subsurface volumes such as suspended tar balls or other emulsions that are not included in our estimates. The lower bounds of total measured volumes are estimated to be within the range of 340,000 to 580,000 barrels as of May 17, 2010, for an estimated average minimum discharge rate of 12,500 to 21,500 barrels per day for 27 days from April 20 to May 17, 2010.

  12. Robust high-contrast companion detection from interferometric observations. The CANDID algorithm and an application to six binary Cepheids

    NASA Astrophysics Data System (ADS)

    Gallenne, A.; Mérand, A.; Kervella, P.; Monnier, J. D.; Schaefer, G. H.; Baron, F.; Breitfelder, J.; Le Bouquin, J. B.; Roettenbacher, R. M.; Gieren, W.; Pietrzyński, G.; McAlister, H.; ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.; Ridgway, S.; Kraus, S.

    2015-07-01

    Context. Long-baseline interferometry is an important technique to spatially resolve binary or multiple systems in close orbits. By combining several telescopes together and spectrally dispersing the light, it is possible to detect faint components around bright stars in a few hours of observations. Aims: We provide a rigorous and detailed method to search for high-contrast companions around stars, determine the detection level, and estimate the dynamic range from interferometric observations. Methods: We developed the code CANDID (Companion Analysis and Non-Detection in Interferometric Data), a set of Python tools that allows us to search systematically for point-source, high-contrast companions and estimate the detection limit using all interferometric observables, i.e., the squared visibilities, closure phases and bispectrum amplitudes. The search procedure is performed on an N × N grid of fits, whose minimum required resolution is estimated a posteriori. It includes a tool to estimate the detection level of the companion, expressed in numbers of sigma. The code CANDID also incorporates a robust method to set a 3σ detection limit on the flux ratio, which is based on an analytical injection of a fake companion at each point in the grid. Our injection method also allows us to analytically remove a detected component to 1) search for a second companion; and 2) set an unbiased detection limit. Results: We used CANDID to search for the companions around the binary Cepheids V1334 Cyg, AX Cir, RT Aur, AW Per, SU Cas, and T Vul. First, we showed that our previous discoveries of the components orbiting V1334 Cyg and AX Cir were detected at >25σ and >13σ, respectively. The astrometric positions and flux ratios provided by CANDID for these two stars are in good agreement with our previously published values. The companion around AW Per is detected at more than 15σ with a flux ratio of f = 1.22 ± 0.30%, and it is located at ρ = 32.16 ± 0.29 mas and PA = 67.1 ± 0.3°. We made a possible detection of the companion orbiting RT Aur with f = 0.22 ± 0.11%, at ρ = 2.10 ± 0.23 mas and PA = -136 ± 6°. It was detected at 3.8σ using the closure phases only, and so more observations are needed to confirm the detection. No companions were detected around SU Cas and T Vul. We also set the detection limit for possible undetected companions around these stars. We found that there is no companion with a spectral type earlier than B7V, A5V, F0V, B9V, A0V, and B9V orbiting the Cepheids V1334 Cyg, AX Cir, RT Aur, AW Per, SU Cas, and T Vul, respectively. This work also demonstrates the capabilities of the MIRC and PIONIER instruments, which can reach a dynamic range of 1:200, depending on the angular distance of the companion and the (u,v) plane coverage. In the future, we plan to work on improving the sensitivity limits for realistic data through better handling of the correlations. The current version of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/579/A68

  13. Hydrological Retrospective of floods and droughts: Case study in the Amazon

    NASA Astrophysics Data System (ADS)

    Wongchuig Correa, Sly; Cauduro Dias de Paiva, Rodrigo; Carlo Espinoza Villar, Jhan; Collischonn, Walter

    2017-04-01

    Recent studies have reported an increase in the intensity and frequency of hydrological extreme events in many regions of the Amazon basin over recent decades; events such as seasonal floods and droughts have had significant impacts on human and natural systems. Methodologies such as climatic reanalysis have been developed to create a coherent record of climatic systems. Following this notion, this research develops a methodology called Hydrological Retrospective (HR), which essentially runs large rainfall datasets through hydrological models in order to build a record of past hydrology, enabling the analysis of past floods and droughts. We developed our methodology for the Amazon basin, using eight large precipitation datasets (more than 30 years) as forcing for a large-scale hydrological and hydrodynamic model (MGB-IPH). The HR products were then validated against several in situ discharge gauges dispersed throughout the Amazon basin, with a focus on maximum and minimum events. For the HR products with the best performance metrics, we assessed the skill of HR in detecting floods and droughts recorded by in situ observations. Furthermore, a statistical trend analysis of the time series was performed for the intensity of seasonal floods and droughts in the whole Amazon basin. Results indicate that the best HR products represented well most past extreme events registered by in situ observations and were coherent with many events cited in the literature. We therefore consider it viable to use some large precipitation datasets, such as climatic reanalyses mainly based on the land surface component and datasets based on merged products, to represent past regional hydrology and seasonal hydrological extreme events. In addition, an increasing trend in intensity was found for maximum annual discharges (related to floods) in north-western regions and for minimum annual discharges (related to droughts) in central-southern regions of the Amazon basin, features previously detected by other researchers. For the basin as a whole, we estimated an upward trend in maximum annual discharges of the Amazon River. In order to better estimate future hydrological behavior and its impacts on society, HR could be used as a methodology to understand the occurrence of past extreme events in many places, given the global coverage of rainfall datasets.

  14. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    NASA Astrophysics Data System (ADS)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations recorded over time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.

  15. Minimum viewing angle for visually guided ground speed control in bumblebees.

    PubMed

    Baird, Emily; Kornfeldt, Torill; Dacke, Marie

    2010-05-01

    To control flight, flying insects extract information from the pattern of visual motion generated during flight, known as optic flow. To regulate their ground speed, insects such as honeybees and Drosophila hold the rate of optic flow in the axial direction (front-to-back) constant. A consequence of this strategy is that its performance varies with the minimum viewing angle (the deviation from the frontal direction of the longitudinal axis of the insect) at which changes in axial optic flow are detected. The greater this angle, the later changes in the rate of optic flow, caused by changes in the density of the environment, will be detected. The aim of the present study is to examine the mechanisms of ground speed control in bumblebees and to identify the extent of the visual range over which optic flow for ground speed control is measured. Bumblebees were trained to fly through an experimental tunnel consisting of parallel vertical walls. Flights were recorded when (1) the distance between the tunnel walls was either 15 or 30 cm, (2) the visual texture on the tunnel walls provided either strong or weak optic flow cues and (3) the distance between the walls changed abruptly halfway along the tunnel's length. The results reveal that bumblebees regulate ground speed using optic flow cues and that changes in the rate of optic flow are detected at a minimum viewing angle of 23-30 deg., with a visual field that extends to approximately 155 deg. By measuring optic flow over a visual field that has a low minimum viewing angle, bumblebees are able to detect and respond to changes in the proximity of the environment well before they are encountered.

  16. Electrofishing distance needed to estimate consistent Index of Biotic Integrity (IBI) scores in raftable Oregon rivers

    EPA Science Inventory

    An important issue surrounding assessment of riverine fish assemblages is the minimum amount of sampling distance needed to adequately determine biotic condition. Determining adequate sampling distance is important because sampling distance affects estimates of fish assemblage c...

  17. 38 CFR 36.4365 - Appraisal requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...

  18. 38 CFR 36.4365 - Appraisal requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...

  19. 38 CFR 36.4365 - Appraisal requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...

  20. 38 CFR 36.4365 - Appraisal requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...

  1. The minimum distance approach to classification

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Landgrebe, D. A.

    1971-01-01

    The work to advance the state of the art of minimum distance classification is reported. This is accomplished through a combination of theoretical and comprehensive experimental investigations based on multispectral scanner data. A survey of the literature for suitable distance measures was conducted and the results of this survey are presented. It is shown that minimum distance classification, using density estimators and Kullback-Leibler numbers as the distance measure, is equivalent to a form of maximum likelihood sample classification. It is also shown that for the parametric case, minimum distance classification is equivalent to nearest neighbor classification in the parameter space.
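
    In its simplest parametric form, minimum distance classification assigns each sample to the class with the nearest mean vector; the sketch below shows that case with Euclidean distance, which is only one of the distance measures surveyed in the report.

        import numpy as np

        def nearest_mean_classify(X_train, y_train, X_test):
            """Minimum (Euclidean) distance classification: assign each sample
            to the class whose mean feature vector is closest."""
            classes = np.unique(y_train)
            means = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
            d = np.linalg.norm(X_test[:, None, :] - means[None, :, :], axis=2)
            return classes[d.argmin(axis=1)]

        X = np.array([[0, 0], [1, 1], [9, 9], [10, 10]], float)
        y = np.array([0, 0, 1, 1])
        print(nearest_mean_classify(X, y, np.array([[2.0, 2.0], [8.0, 8.0]])))  # [0 1]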

  2. Exploration-driven NEO Detection Requirements

    NASA Astrophysics Data System (ADS)

    Head, J. N.; Sykes, M. V.

    2005-12-01

    The Vision for Space Exploration calls for use of in situ resources to support human solar system exploration goals. Focus has been on potential lunar polar ice, Martian subsurface water and resource extraction from Phobos. Near-earth objects (NEOs) offer easily accessible targets that may represent a critical component to achieving sustainable human operations, in particular small, newly discovered asteroids within a specified dynamical range having requisite composition and frequency. A minimum size requirement is estimated assuming a CONOPS in which an NEO harvester is on station at L1. When the NEO launch window opens, the vehicle departs, rendezvousing within 30 days. Mining and processing operations (~60 days) produce dirty water for the return trip (~30 days) to L1 for final refinement into propellants. A market for propellant at L1 is estimated to be ~700 mT/year: 250 mT for Mars missions, 100 mT for GTO services (Blair et al. 2002), 50 mT for L1 to lunar surface services, and 300 mT for bringing NEO-derived propellants to L1. Assuming an appropriate NEO has 5% recoverable water, exploited with 50% efficiency, ~23,000 mT/year must be processed. At 1500 kg/m3, this corresponds to one object per year with a radius of ~15 meters, or two 5 m radius objects per month, of which it is estimated there are ~10,000 having delta-v < 4.2 km/s and ~200/year of these available for short roundtrip missions to meet resource requirements (Jones et al. 2002). The importance of these potential resource objects should drive a requirement that next generation NEO detection systems (e.g., Pan-STARRS/LSST) be capable by 2010 of detecting dark NEOs fainter than V=24, allowing for identification 3 months before closest approach. Blair et al. 2002. Final Report to NASA Exploration Team, December 20, 2002. Jones et al. 2002. ASP Conf. Series Vol. 202 (M. Sykes, Ed.), pp. 141-154.
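
    The sizing arithmetic can be checked directly from the figures quoted in the abstract:

        import math

        processed_mt = 23_000            # regolith processed per year (abstract)
        recoverable, efficiency = 0.05, 0.50
        density_t_per_m3 = 1.5           # 1500 kg/m^3

        water_mt = processed_mt * recoverable * efficiency       # ~575 mT of water
        volume_m3 = processed_mt / density_t_per_m3
        radius_m = (3 * volume_m3 / (4 * math.pi)) ** (1 / 3)    # single-object radius
        print(f"{water_mt:.0f} mT water/yr from one object of radius {radius_m:.1f} m")

    This reproduces the ~15 m single-object radius quoted above.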

  3. VLBI imaging of a flare in the Crab nebula: more than just a spot

    NASA Astrophysics Data System (ADS)

    Lobanov, A. P.; Horns, D.; Muxlow, T. W. B.

    2011-09-01

    We report on very long baseline interferometry (VLBI) observations of the radio emission from the inner region of the Crab nebula, made at 1.6 GHz and 5 GHz after a recent high-energy flare in this object. The 5 GHz data have provided only upper limits of 0.4 milli-Jansky (mJy) on the flux density of the pulsar and 0.4 mJy/beam on the brightness of the putative flaring region. The 1.6 GHz data have enabled imaging the inner regions of the nebula on scales of up to ≈ 40''. The emission from the inner "wisps" is detected for the first time with VLBI observations. A likely radio counterpart (designated "C1") of the putative flaring region observed with Chandra and HST is detected in the radio image, with an estimated flux density of 0.5 ± 0.3 mJy and a size of 0.2 arcsec - 0.6 arcsec. Another compact feature ("C2") is also detected in the VLBI image closer to the pulsar, with an estimated flux density of 0.4 ± 0.2 mJy and a size smaller than 0.2 arcsec. Combined with the broad-band SED of the flare, the radio properties of C1 yield a lower limit of ≈ 0.5 mG for the magnetic field and a total minimum energy of 1.2 × 10^41 erg vested in the flare (corresponding to using about 0.2% of the pulsar spin-down power). The 1.6 GHz observations provide upper limits for the brightness (0.2 mJy/beam) and total flux density (0.4 mJy) of the optical Knot 1 located at 0.6 arcsec from the pulsar. The absolute position of the Crab pulsar is determined, and an estimate of the pulsar proper motion (μα = -13.0 ± 0.2 mas/yr, μδ = +2.9 ± 0.1 mas/yr) is obtained.

  4. The Effect of an Increased Minimum Wage on Infant Mortality and Birth Weight

    PubMed Central

    Livingston, Melvin D.; Markowitz, Sara; Wagenaar, Alexander C.

    2016-01-01

    Objectives. To investigate the effects of state minimum wage laws on low birth weight and infant mortality in the United States. Methods. We estimated the effects of state-level minimum wage laws using a difference-in-differences approach on rates of low birth weight (< 2500 g) and postneonatal mortality (28–364 days) by state and month from 1980 through 2011. All models included state and year fixed effects as well as state-specific covariates. Results. Across all models, a dollar increase in the minimum wage above the federal level was associated with a 1% to 2% decrease in low birth weight births and a 4% decrease in postneonatal mortality. Conclusions. If all states in 2014 had increased their minimum wages by 1 dollar, there would likely have been 2790 fewer low birth weight births and 518 fewer postneonatal deaths for the year. PMID:27310355

  5. The Effect of an Increased Minimum Wage on Infant Mortality and Birth Weight.

    PubMed

    Komro, Kelli A; Livingston, Melvin D; Markowitz, Sara; Wagenaar, Alexander C

    2016-08-01

    To investigate the effects of state minimum wage laws on low birth weight and infant mortality in the United States. We estimated the effects of state-level minimum wage laws using a difference-in-differences approach on rates of low birth weight (< 2500 g) and postneonatal mortality (28-364 days) by state and month from 1980 through 2011. All models included state and year fixed effects as well as state-specific covariates. Across all models, a dollar increase in the minimum wage above the federal level was associated with a 1% to 2% decrease in low birth weight births and a 4% decrease in postneonatal mortality. If all states in 2014 had increased their minimum wages by 1 dollar, there would likely have been 2790 fewer low birth weight births and 518 fewer postneonatal deaths for the year.

  6. Segmentation-based L-filtering of speckle noise in ultrasonic images

    NASA Astrophysics Data System (ADS)

    Kofidis, Eleftherios; Theodoridis, Sergios; Kotropoulos, Constantine L.; Pitas, Ioannis

    1994-05-01

    We introduce segmentation-based L-filters, that is, filtering processes combining segmentation and (nonadaptive) optimum L-filtering, and use them for the suppression of speckle noise in ultrasonic (US) images. With the aid of a suitable modification of the learning vector quantizer self-organizing neural network, the image is segmented into regions of approximately homogeneous first-order statistics. For each such region a minimum mean-squared error L-filter is designed on the basis of a multiplicative noise model, using the histogram of grey values as an estimate of the parent distribution of the noisy observations and a suitable estimate of the original signal in the corresponding region. Thus, we obtain a bank of L-filters, each corresponding to and operating on a different image region. Simulation results on a simulated US B-mode image of a tissue-mimicking phantom are presented, which verify the superiority of the proposed method over a number of conventional filtering strategies in terms of a suitably defined signal-to-noise ratio measure and detection-theoretic performance measures.
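
    The core operation of an L-filter is a weighted sum of the order statistics within each window; the moving average and the median are special cases. A minimal one-dimensional sketch (the MMSE weight design from order-statistic moments, the segmentation step, and 2-D windowing are all omitted, and the weights below are illustrative):

    ```python
    import numpy as np

    def l_filter(x, weights):
        """Apply a 1-D L-filter with window length len(weights)."""
        n = len(weights)
        pad = n // 2
        xp = np.pad(x, pad, mode="edge")
        out = np.empty(len(x))
        for i in range(len(x)):
            window = np.sort(xp[i:i + n])       # order statistics
            out[i] = np.dot(weights, window)    # weighted order statistics
        return out

    # Multiplicative-noise-like test signal and illustrative weights.
    noisy = 10.0 * np.random.default_rng(1).normal(1.0, 0.3, 200)
    w = np.array([0.1, 0.2, 0.4, 0.2, 0.1])     # sums to 1
    smoothed = l_filter(noisy, w)
    ```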

  7. Weak hydrogen bond topology in 1,1-difluoroethane dimer: A rotational study.

    PubMed

    Chen, Junhua; Zheng, Yang; Wang, Juan; Feng, Gang; Xia, Zhining; Gou, Qian

    2017-09-07

    The rotational spectrum of the 1,1-difluoroethane dimer has been investigated by pulsed-jet Fourier transform microwave spectroscopy. The two most stable isomers have been detected, both stabilized by a network of three C-H⋯F-C weak hydrogen bonds: in the most stable isomer, two difluoromethyl C-H groups and one methyl C-H group act as the weak proton donors, whilst in the second isomer, two methyl C-H groups and one difluoromethyl C-H group act as the weak proton donors. For the global minimum, the measurements have also been extended to its four 13C isotopologues in natural abundance, allowing a precise, although partial, structural determination. Relative intensity measurements on a set of μa-type transitions allowed estimating the relative population ratio of the two isomers as NI/NII ≈ 6/1 in the pulsed jet, indicating a much larger energy gap between the two isomers than expected from ab initio calculations, consistent with the estimate from pseudo-diatomic dissociation energies.

  8. An Improved Compressive Sensing and Received Signal Strength-Based Target Localization Algorithm with Unknown Target Population for Wireless Local Area Networks.

    PubMed

    Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang

    2017-05-30

    In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.
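
    The coarse phase can be sketched with a generic sparse-recovery routine. The snippet below uses a log-distance path-loss dictionary and plain orthogonal matching pursuit in place of whatever solver the authors used; all parameter values are assumptions, and recovery is not guaranteed for every sensor geometry:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    grid = np.array([(i, j) for i in range(10) for j in range(10)], float)
    sensors = rng.uniform(0, 9, (12, 2))

    def linear_power(d, p0=1e-3, n=3.0):
        # Log-distance path loss expressed as linear received power.
        return p0 / np.maximum(d, 0.5) ** n

    dist = np.linalg.norm(sensors[:, None, :] - grid[None, :, :], axis=2)
    Phi = linear_power(dist)              # 12 sensors x 100 grid cells

    targets = [23, 77]                    # unknown target population
    y = Phi[:, targets].sum(axis=1) * (1 + rng.normal(0, 0.01, len(sensors)))

    def omp(Phi, y, k=5):
        residual, support = y.copy(), []
        norms = np.linalg.norm(Phi, axis=0)
        for _ in range(k):
            support.append(int(np.argmax(np.abs(Phi.T @ residual) / norms)))
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        theta = np.zeros(Phi.shape[1])
        theta[support] = coef
        return theta

    theta = omp(Phi, y)
    # Threshold rule of the coarse phase: keep grids with large components.
    print(sorted(np.where(theta > 0.5 * theta.max())[0]), "true:", targets)
    ```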

  9. Nanonewton thrust measurement of photon pressure propulsion using semiconductor laser

    NASA Astrophysics Data System (ADS)

    Iwami, K.; Akazawa, Taku; Ohtsuka, Tomohiro; Nishida, Hiroyuki; Umeda, Norihiro

    2011-09-01

    To evaluate the thrust produced by photon pressure from a 100 W class continuous-wave semiconductor laser, a torsion-balance precision thrust stand was designed and tested. Photon emission propulsion using semiconductor light sources attracts interest as a possible candidate for deep-space propellant-less propulsion and attitude control systems. However, measuring a photon thrust of only several tens of nanonewtons requires a precise thrust stand. A resonant method is adopted to enhance the sensitivity of the bifilar torsional-spring thrust stand. The torsional spring constant and resonant frequency of the stand are 1.245 × 10^-3 N·m/rad and 0.118 Hz, respectively. The experimental results showed good agreement with the theoretical estimation. A thrust efficiency for photon propulsion was also defined. A maximum thrust of 499 nN was produced by the laser at 208 W input power (75 W of optical output), corresponding to a thrust efficiency of 36.7%. The minimum detectable thrust of the stand was estimated to be 2.62 nN under oscillation at a frequency close to resonance.
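
    The quoted numbers are mutually consistent under a simple reading. An ideal, perfectly reflected beam transfers momentum F = 2P/c, which for 75 W of optical power gives almost exactly the measured 499 nN; and if the 36.7% efficiency is referenced to the electrical input (an assumption here, since the abstract does not give the definition), the arithmetic nearly closes:

    ```python
    c = 2.998e8                  # speed of light, m/s
    P_opt, P_in = 75.0, 208.0    # optical output and electrical input, W
    F_meas = 499e-9              # measured maximum thrust, N

    F_ideal = 2 * P_opt / c      # perfectly reflected 75 W beam
    print(f"ideal thrust: {F_ideal * 1e9:.0f} nN")          # ~500 nN

    # Assumed efficiency definition: measured thrust vs 2*P_in/c.
    print(f"efficiency: {F_meas * c / (2 * P_in):.1%}")     # ~36%
    ```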

  10. Current trends in Natural Gas Flaring Observed from Space with VIIRS

    NASA Astrophysics Data System (ADS)

    Zhizhin, M. N.; Elvidge, C.; Baugh, K.

    2017-12-01

    The five-year survey of natural gas flaring in 2012-2016 has been completed with nighttime Visible Infrared Imaging Radiometer Suite (VIIRS) data. The survey identifies flaring site locations and annual duty cycles, and provides an estimate of the flared gas volumes in methane equivalents. VIIRS is particularly well-suited for detecting and measuring the radiant emissions from gas flares through the collection of shortwave and near-infrared data at night, recording the peak radiant emissions from flares. The total flared gas volume is estimated at 140 ± 30 billion cubic meters (BCM) per year, corresponding to 3.5% of global natural gas production. While Russia leads in terms of flared gas volume (>20 BCM), the U.S. has the largest number of flares (8,199 of 19,057 worldwide). The two countries show opposite trends in flaring: for the U.S. the peak was reached in 2015, while for Russia that year was the minimum. On the regional scale in the U.S., Texas has the largest number of flares (3,749), with North Dakota, the second highest, having half of this number (2,003). The number of flares in most states has decreased in the last 3 years, following the trend in oil prices. The presentation will compare the global estimates and the regional trends observed in the U.S. Preliminary estimates for global gas flaring in 2017 will also be presented.

  11. Hybrid Stochastic Models for Remaining Lifetime Prognosis

    DTIC Science & Technology

    2004-08-01

    literature for techniques and comparisons. Osogami and Harchol-Balter [70], Perros [73], Johnson [36], and Altiok [5] provide excellent summaries of... and type of PH-distribution approximation for c2 > 0.5 is not as obvious. In order to use the minimum distance estimation, Perros [73] indicated that... moment-matching techniques. Perros [73] indicated that the maximum likelihood and minimum distance techniques require nonlinear optimization. Johnson

  12. Statistical detection of patterns in unidimensional distributions by continuous wavelet transforms

    NASA Astrophysics Data System (ADS)

    Baluev, R. V.

    2018-04-01

    Objective detection of specific patterns in statistical distributions, such as groupings, gaps, or abrupt transitions between different subsets, is a task with a rich range of applications in astronomy: Milky Way stellar population analysis, investigations of exoplanet diversity, Solar System minor body statistics, extragalactic studies, etc. We adapt the powerful technique of wavelet transforms to this generalized task, placing strong emphasis on assessing the significance of detected patterns. Among other things, our method involves optimal minimum-noise wavelets and minimum-noise reconstruction of the distribution density function. Based on this development, we construct a self-contained algorithmic pipeline for processing statistical samples. It is currently applicable to one-dimensional distributions only, but it is flexible enough to undergo further generalization and development.

  13. Determination of limonin in grapefruit juice and other citrus juices by high-performance liquid chromatography.

    PubMed

    Van Beek, T A; Blaakmeer, A

    1989-03-03

    A method has been developed for the quantitation of the bitter component limonin in grapefruit juice and other citrus juices. The sample clean-up consisted of centrifugation, filtration and a selective, rapid and reproducible purification with a C2 solid-phase extraction column. The limonin concentration was determined by high-performance liquid chromatography on a C18 column with UV detection at 210 nm. A linear response was obtained from 0.0 to 45 ppm limonin. The minimum detectable amount was 2 ng. The minimum concentration that could be determined with good precision, without a preconcentration step, was 0.1 ppm. The method was also used for the determination of limonin in different citrus fruits, including navel oranges, mandarins, lemons, limes, pomelos and uglis.

  14. Detection of large scale geomagnetic pulsations by MAGDAS-egypt stations during the solar minimum of the solar cycle 24

    NASA Astrophysics Data System (ADS)

    Fathy, Ibrahim

    2016-07-01

    This paper presents a statistical study of different types of large-scale geomagnetic pulsations (Pc3, Pc4, Pc5 and Pi2) detected simultaneously by two MAGDAS stations located at Fayum (geographic coordinates 29.18 N, 30.50 E) and Aswan (geographic coordinates 23.59 N, 32.51 E) in Egypt. A second-order Butterworth band-pass filter was used to filter and analyze the horizontal H component of the geomagnetic field in one-second data. The data were collected during the solar minimum of the current solar cycle 24. We list the most energetic pulsations detected by the two stations simultaneously; in addition, the average amplitude of the pulsation signals was calculated.
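
    The filtering step is standard; a minimal sketch with scipy, assuming the conventional Pc3 band edges (periods of 10-45 s, i.e. roughly 22-100 mHz) rather than the authors' exact settings:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 1.0                         # one-second data -> 1 Hz sampling
    low, high = 1 / 45.0, 1 / 10.0   # assumed Pc3 band, Hz

    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="bandpass")

    t = np.arange(3600.0)
    # Synthetic H component: a Pc3-period wave mixed with a Pc5-period wave.
    h = 20 * np.sin(2 * np.pi * t / 30) + 50 * np.sin(2 * np.pi * t / 300)
    pc3 = filtfilt(b, a, h)          # zero-phase filtering isolates Pc3
    print(f"average Pc3 amplitude: {np.sqrt(2 * np.mean(pc3 ** 2)):.1f}")
    ```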

  15. The thermoluminescence response of doped SiO2 optical fibres subjected to photon and electron irradiations.

    PubMed

    Hashim, S; Al-Ahbabi, S; Bradley, D A; Webb, M; Jeynes, C; Ramli, A T; Wagiran, H

    2009-03-01

    Modern linear accelerators, the predominant teletherapy machines in major radiotherapy centres worldwide, provide multiple electron and photon beam energies. To obtain reasonable treatment times, intense electron beam currents are achievable. In association with this capability, there is considerable demand to validate patient dose using systems of dosimetry offering characteristics that include good spatial resolution, high precision and accuracy. Present interest is in the thermoluminescence response and dosimetric utility of commercially available doped optical fibres. An important parameter for obtaining the highest TL yield in this study is the dopant concentration of the SiO2 fibre, because during the production of the optical fibres the dopants tend to diffuse. To determine it, proton-induced X-ray emission (PIXE), which has no depth resolution but can unambiguously identify elements and analyse for trace elements with detection limits approaching microg/g, was used. For Al-doped fibres, dopant concentrations in the range 0.98-2.93 mol% have been estimated, with the equivalent range for Ge-doped fibres being 0.53-0.71 mol%. In making central-axis irradiation measurements, a solid water phantom was used. For 6-MV photons and electron energies of 6, 9 and 12 MeV, a source-to-surface distance of 100 cm was used, with a dose rate of 400 cGy/min for both photons and electrons. The TL measurements show a linear dose-response over the delivered range of absorbed dose from 1 to 4 Gy. Fading was found to be minimal, less than 10% over the five days following irradiation. The minimum detectable dose for 6-MV photons was found to be 4, 30 and 900 microGy for TLD-100 chips, Ge- and Al-doped fibres, respectively. For 6-, 9- and 12-MeV electrons, the minimum detectable doses were in the ranges 3-5, 30-50 and 800-1400 microGy for TLD-100 chips, Ge-doped and Al-doped fibres, respectively.

  16. Cassini Radio Occultation by Enceladus Plume

    NASA Astrophysics Data System (ADS)

    Kliore, A.; Armstrong, J.; Flasar, F.; French, R.; Marouf, E.; Nagy, A.; Rappaport, N.; McGhee, C.; Schinder, P.; Anabtawi, A.; Asmar, S.; Barbinis, E.; Fleischman, D.; Goltz, G.; Aguilar, R.; Rochblatt, D.

    2006-12-01

    A fortuitous Cassini radio occultation by the Enceladus plume occurs on September 15, 2006. The occultation track (the spacecraft trajectory in the plane of the sky as viewed from the Earth) has been designed to pass behind the plume (above the south polar region of Enceladus) in a roughly symmetrical geometry centered on a minimum altitude above the surface of about 20 km. The minimum altitude was selected primarily to ensure probing much of the plume with good confidence given the uncertainty in the spacecraft trajectory. Three nearly pure sinusoidal signals of 0.94, 3.6, and 13 cm wavelength (Ka-, X-, and S-band, respectively) are simultaneously transmitted from Cassini and are monitored at two 34-m Earth receiving stations of the Deep Space Network (DSN) in Madrid, Spain (DSS-55 and DSS-65). The occultation of the visible plume is extremely fast, lasting less than about two minutes. The actual observation time extends over a much longer interval, however, to provide a good reference baseline for potential detection of signal perturbations introduced by the tenuous neutral and ionized plume environment. Given the likely very small fraction of optical depth due to neutral particles of sizes larger than about 1 mm, detectable changes in signal intensity are perhaps unlikely. Detection of plume plasma along the radio path as perturbations in the signals' frequency/phase is more likely, and the magnitude will depend on the electron columnar density probed. The occultation occurs not far from solar conjunction (Sun-Earth-probe angle of about 33 degrees), causing phase scintillations due to the solar wind to be the primary limiting noise source. We estimate a detectability limit of about 1-3 × 10^16 electrons per square meter columnar density, assuming about 100 seconds integration time. Potential measurement of the profile of electron columnar density along the occultation track is an exciting prospect at this time.
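
    The quoted detectability limit can be expressed in phase terms with the standard ionospheric dispersion relation (a textbook formula, not taken from the abstract): a plasma column of TEC electrons per square meter advances the carrier phase by roughly 40.3·TEC/(c·f) cycles at frequency f. A quick check at assumed downlink frequencies:

    ```python
    c = 2.998e8  # speed of light, m/s

    # Approximate Cassini downlink frequencies (assumed values).
    bands = {"S": 2.3e9, "X": 8.4e9, "Ka": 32.0e9}

    for name, f in bands.items():
        for tec in (1e16, 3e16):   # quoted detectability range, el/m^2
            cycles = 40.3 * tec / (c * f)
            print(f"{name}-band, TEC={tec:.0e}: {cycles:.3f} cycles")
    # At S-band the quoted column densities amount to a sizeable fraction
    # of a cycle; the effect scales as 1/f toward Ka-band.
    ```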

  17. Estimating abundance and density of Amur tigers along the Sino-Russian border.

    PubMed

    Xiao, Wenhong; Feng, Limin; Mou, Pu; Miquelle, Dale G; Hebblewhite, Mark; Goldberg, Joshua F; Robinson, Hugh S; Zhao, Xiaodan; Zhou, Bo; Wang, Tianming; Ge, Jianping

    2016-07-01

    As an apex predator the Amur tiger (Panthera tigris altaica) could play a pivotal role in maintaining the integrity of forest ecosystems in Northeast Asia. Due to habitat loss and harvest over the past century, tigers rapidly declined in China and are now restricted to the Russian Far East and bordering habitat in nearby China. To facilitate restoration of the tiger in its historical range, reliable estimates of population size are essential to assess effectiveness of conservation interventions. Here we used camera trap data collected in Hunchun National Nature Reserve from April to June 2013 and 2014 to estimate tiger density and abundance using both maximum likelihood and Bayesian spatially explicit capture-recapture (SECR) methods. A minimum of 8 individuals were detected in both sample periods and the documentation of marking behavior and reproduction suggests the presence of a resident population. Using Bayesian SECR modeling within the 11,400 km² state space, density estimates were 0.33 and 0.40 individuals/100 km² in 2013 and 2014, respectively, corresponding to an estimated abundance of 38 and 45 animals for this transboundary Sino-Russian population. In a maximum likelihood framework, we estimated densities of 0.30 and 0.24 individuals/100 km², corresponding to abundances of 34 and 27, in 2013 and 2014, respectively. These density estimates are comparable to other published estimates for resident Amur tiger populations in the Russian Far East. This study reveals promising signs of tiger recovery in Northeast China, and demonstrates the importance of connectivity between the Russian and Chinese populations for recovering tigers in Northeast China. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  18. Improving Nocturnal Fire Detection with the VIIRS Day-Night Band

    NASA Technical Reports Server (NTRS)

    Polivka, Thomas N.; Wang, Jun; Ellison, Luke T.; Hyer, Edward J.; Ichoku, Charles M.

    2016-01-01

    Building on existing techniques for satellite remote sensing of fires, this paper takes advantage of the day-night band (DNB) aboard the Visible Infrared Imaging Radiometer Suite (VIIRS) to develop the Firelight Detection Algorithm (FILDA), which characterizes fire pixels based on both visible-light and infrared (IR) signatures at night. By adjusting the fire pixel selection criteria to include visible-light signatures, FILDA allows for significantly improved detection of pixels with smaller and/or cooler subpixel hotspots than the operational Interface Data Processing System (IDPS) algorithm. VIIRS scenes with near-coincident Advanced Spaceborne Thermal Emission and Reflection (ASTER) overpasses are examined after applying the operational VIIRS fire product algorithm and including a modified "candidate fire pixel selection" approach from FILDA that lowers the 4-µm brightness temperature (BT) threshold but requires a minimum DNB radiance. FILDA is shown to be effective in detecting gas flares and characterizing fire lines during large forest fires (such as the Rim Fire in California and the High Park fire in Colorado). Compared with the operational VIIRS fire algorithm for the study period, FILDA shows a large increase (up to 90%) in the number of detected fire pixels that can be verified with the finer-resolution ASTER data (90 m). Part (30%) of this increase is likely due to the combined use of the DNB and lower 4-µm BT thresholds for fire detection in FILDA. Although further studies are needed, quantitative use of the DNB to improve fire detection could lead to reduced response times to wildfires and better estimates of fire characteristics (smoldering and flaming) at night.
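
    The modified candidate-selection idea lends itself to a compact sketch: accept a pixel on the usual 4-µm test alone, or on a relaxed 4-µm threshold combined with a minimum DNB radiance. All threshold values below are illustrative placeholders, not the operational or FILDA constants:

    ```python
    import numpy as np

    def candidate_fire_pixels(bt4, dnb,
                              bt4_strong=320.0,  # K, IR-only test (assumed)
                              bt4_weak=305.0,    # K, relaxed IR test (assumed)
                              dnb_min=5e-9):     # DNB radiance floor (assumed)
        ir_only = bt4 > bt4_strong
        ir_plus_light = (bt4 > bt4_weak) & (dnb > dnb_min)
        return ir_only | ir_plus_light

    bt4 = np.array([330.0, 308.0, 308.0, 295.0])   # brightness temperatures
    dnb = np.array([1e-10, 8e-9, 1e-10, 8e-9])     # visible-light radiances
    print(candidate_fire_pixels(bt4, dnb))         # [ True  True False False]
    ```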

  19. On the minimum quantum requirement of photosynthesis.

    PubMed

    Zeinalov, Yuzeir

    2009-01-01

    An analysis of the shape of photosynthetic light curves is presented, and the existence of the initial non-linear part is shown to be a consequence of the operation of the non-cooperative (Kok's) mechanism of oxygen evolution or the effect of dark respiration. The effect of non-linearity on the quantum efficiency (yield) and quantum requirement is reconsidered. The essential conclusions are: 1) The non-linearity of the light curves cannot be compensated for by using suspensions of algae or chloroplasts with high (>1.0) optical density or absorbance. 2) The values of the maxima of the quantum efficiency curves, or the minima of the quantum requirement curves, cannot be used to estimate the exact values of the maximum quantum efficiency and the minimum quantum requirement. The maximum quantum efficiency or the minimum quantum requirement should be estimated only after extrapolating the linear part of the quantum requirement curves at higher light intensities to zero light intensity.
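
    The recommended procedure amounts to fitting the linear part of the quantum-requirement curve at higher intensities and reading off its intercept at zero intensity, instead of taking the curve minimum. A minimal sketch with synthetic data:

    ```python
    import numpy as np

    # Quantum requirement measured on the linear part of the curve
    # (synthetic values; arbitrary intensity units).
    intensity = np.array([20, 30, 40, 50, 60, 80, 100.0])
    quantum_req = np.array([10.2, 10.5, 10.9, 11.3, 11.7, 12.5, 13.3])

    slope, intercept = np.polyfit(intensity, quantum_req, 1)
    print(f"extrapolated minimum quantum requirement: {intercept:.1f}")
    ```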

  20. Biochemical methane potential (BMP) tests: Reducing test time by early parameter estimation.

    PubMed

    Da Silva, C; Astals, S; Peces, M; Campos, J L; Guerrero, L

    2018-01-01

    The biochemical methane potential (BMP) test is a key analytical technique to assess the implementation and optimisation of anaerobic biotechnologies. However, this technique is characterised by long testing times (from 20 to >100 days), which is not suitable for waste utilities, consulting companies or plant operators whose decision-making processes cannot be held up for such a long time. This study develops a statistically robust mathematical strategy using sensitivity functions for early prediction of BMP first-order model parameters, i.e. the methane yield (B0) and the kinetic constant (k). The minimum testing time for early parameter estimation showed a potential correlation with the k value, where (i) slowly biodegradable substrates (k ≤ 0.1 d^-1) have minimum testing times of ≥15 days, and (ii) moderately biodegradable substrates (0.1
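
    Fitting the first-order model B(t) = B0·(1 - exp(-k·t)) to a short early time series is the mechanical core of the strategy (the sensitivity-function machinery for deciding when the estimates have stabilized is omitted). A sketch with synthetic data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order(t, B0, k):
        return B0 * (1.0 - np.exp(-k * t))

    t = np.arange(0.0, 16.0)    # only 15 days of an assumed BMP test
    rng = np.random.default_rng(3)
    B = first_order(t, 420.0, 0.18) + rng.normal(0, 8, t.size)

    (B0_hat, k_hat), cov = curve_fit(first_order, t, B, p0=(300.0, 0.1))
    print(f"B0 ~ {B0_hat:.0f} (methane yield), k ~ {k_hat:.2f} 1/d")
    ```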

  1. Research on Abnormal Detection Based on Improved Combination of K - means and SVDD

    NASA Astrophysics Data System (ADS)

    Hao, Xiaohong; Zhang, Xiaofeng

    2018-01-01

    In order to improve the efficiency of network intrusion detection and reduce the false alarm rate, this paper proposes an anomaly detection algorithm based on improved K-means and SVDD. The algorithm first uses the improved K-means algorithm to cluster the training samples of each class, so that each class is independent and internally compact. Then, for each cluster of training samples, the SVDD algorithm is used to construct a minimum enclosing hypersphere. The class membership of a test sample is determined from its distances to the hyperspheres constructed by SVDD: if the distance from the test sample to a hypersphere's center is less than the radius, the sample belongs to that class; otherwise it does not. After several such comparisons, the test sample is finally classified, achieving effective detection. In this paper, we use the KDD CUP99 data set to simulate the proposed anomaly detection algorithm. The results show that the algorithm has a high detection rate and a low false alarm rate, making it an effective network security protection method.
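
    A hedged sketch of the combination, using scikit-learn stand-ins: plain KMeans (not the paper's improved variant) to split a class into compact clusters, and OneClassSVM with an RBF kernel, which is equivalent to SVDD for this kernel choice, as the minimum enclosing hypersphere:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(4)
    normal_train = rng.normal(0.0, 1.0, (300, 5))   # one training class

    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal_train)
    spheres = [OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
               .fit(normal_train[km.labels_ == c]) for c in range(3)]

    def belongs_to_class(x):
        # Inside any cluster's hypersphere -> sample belongs to the class.
        return any(s.decision_function(x.reshape(1, -1))[0] > 0
                   for s in spheres)

    print(belongs_to_class(np.zeros(5)))       # True: near the class
    print(belongs_to_class(np.full(5, 6.0)))   # False: anomalous
    ```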

  2. How dusty is α Centauri?. Excess or non-excess over the infrared photospheres of main-sequence stars

    NASA Astrophysics Data System (ADS)

    Wiegert, J.; Liseau, R.; Thébault, P.; Olofsson, G.; Mora, A.; Bryden, G.; Marshall, J. P.; Eiroa, C.; Montesinos, B.; Ardila, D.; Augereau, J. C.; Bayo Aran, A.; Danchi, W. C.; del Burgo, C.; Ertel, S.; Fridlund, M. C. W.; Hajigholi, M.; Krivov, A. V.; Pilbratt, G. L.; Roberge, A.; White, G. J.; Wolf, S.

    2014-03-01

    Context. Debris discs around main-sequence stars indicate the presence of larger rocky bodies. The components of the nearby, solar-type binary α Centauri have metallicities that are higher than solar, which is thought to promote giant planet formation. Aims: We aim to determine the level of emission from debris around the stars in the α Cen system. This requires knowledge of their photospheres. Having already detected the temperature minimum, Tmin, of α Cen A at far-infrared wavelengths, we here attempt to do the same for the more active companion α Cen B. Using the α Cen stars as templates, we study the possible effects that Tmin may have on the detectability of unresolved dust discs around other stars. Methods: We used Herschel-PACS, Herschel-SPIRE, and APEX-LABOCA photometry to determine the stellar spectral energy distributions in the far infrared and submillimetre. In addition, we used APEX-SHeFI observations for spectral line mapping to study the complex background around α Cen seen in the photometric images. Models of stellar atmospheres and of particulate discs, based on particle simulations and in conjunction with radiative transfer calculations, were used to estimate the amount of debris around these stars. Results: For solar-type stars more distant than α Cen, a fractional dust luminosity fd ≡ Ldust/Lstar of ~2 × 10^-7 could account for SEDs that do not exhibit the Tmin effect. This is comparable to estimates of fd for the Edgeworth-Kuiper belt of the solar system. In contrast to the far infrared, slight excesses at the 2.5σ level are observed at 24 μm for both α Cen A and B, which, if interpreted as due to zodiacal-type dust emission, would correspond to fd ≈ (1-3) × 10^-5, i.e. some 10^2 times that of the local zodiacal cloud. Assuming simple power-law size distributions of the dust grains, dynamical disc modelling leads to rough mass estimates of the putative Zodi belts around the α Cen stars, viz. ≲4 × 10^-6 lunar masses of 4 to 1000 μm size grains, distributed according to n(a) ∝ a^-3.5. Similarly, for filled-in Tmin emission, corresponding Edgeworth-Kuiper belts could account for ~10^-3 lunar masses of dust. Conclusions: Our far-infrared observations lead to estimates of upper limits to the amount of circumstellar dust around the stars α Cen A and B. Light scattered and/or thermally emitted by exo-Zodi discs will have profound implications for future spectroscopic missions designed to search for biomarkers in the atmospheres of Earth-like planets. The far-infrared spectral energy distribution of α Cen B is marginally consistent with the presence of a minimum temperature region in the upper atmosphere of the star. We also show that an α Cen A-like temperature minimum may result in an erroneous apprehension about the presence of dust around other, more distant stars. Based on observations with Herschel, which is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA, and on observations with APEX, which is a 12 m diameter submillimetre telescope at 5100 m altitude on Llano Chajnantor in Chile. The telescope is operated by Onsala Space Observatory, Max-Planck-Institut für Radioastronomie (MPIfR), and European Southern Observatory (ESO).

  3. Portfolio optimization and the random magnet problem

    NASA Astrophysics Data System (ADS)

    Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.

    2002-08-01

    Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated, and therefore knowledge of the cross-correlations among asset price movements is of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment that carries a minimum level of risk.
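
    The construction suggested by this record can be sketched in a few lines: clean the sample correlation matrix by keeping only eigenvalues above the Marchenko-Pastur edge, then form minimum-variance weights w ∝ C⁻¹1. The returns below are synthetic, and this cleaning recipe is one common variant, not necessarily the authors' exact procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    T, N = 1000, 100                       # observation days, stocks
    returns = rng.normal(0, 1, (T, N))
    returns += 0.3 * rng.normal(0, 1, (T, 1))   # one common "market" mode

    C = np.corrcoef(returns, rowvar=False)
    lam, V = np.linalg.eigh(C)             # ascending eigenvalues

    lam_max = (1 + np.sqrt(N / T)) ** 2    # Marchenko-Pastur upper edge
    noise = lam < lam_max
    lam_clean = lam.copy()
    lam_clean[noise] = lam[noise].mean()   # flatten the noise band
    C_clean = V @ np.diag(lam_clean) @ V.T

    w = np.linalg.solve(C_clean, np.ones(N))
    w /= w.sum()                           # minimum-variance weights
    print(f"top eigenvalue {lam[-1]:.1f} vs MP edge {lam_max:.2f}")
    ```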

  4. Estimation of additive forces and moments for supersonic inlets

    NASA Technical Reports Server (NTRS)

    Perkins, Stanley C., Jr.; Dillenius, Marnix F. E.

    1991-01-01

    A technique for estimating the additive forces and moments associated with supersonic, external compression inlets as a function of mass flow ratio has been developed. The technique makes use of a low order supersonic paneling method for calculating minimum additive forces at maximum mass flow conditions. A linear relationship between the minimum additive forces and the maximum values for fully blocked flow is employed to obtain the additive forces at a specified mass flow ratio. The method is applicable to two-dimensional inlets at zero or nonzero angle of attack, and to axisymmetric inlets at zero angle of attack. Comparisons with limited available additive drag data indicate fair to good agreement.

  5. Design of an ultraviolet fluorescence lidar for biological aerosol detection

    NASA Astrophysics Data System (ADS)

    Rao, Zhimin; Hua, Dengxin; He, Tingyao; Le, Jing

    2016-09-01

    In order to investigate biological aerosols in the atmosphere, we have designed an ultraviolet laser-induced fluorescence lidar based on the lidar measurement principle. The fluorescence lidar employs a 266 nm Nd:YAG laser as the excitation transmitter and examines the intensity of the received light at 400 nm to measure biological aerosol concentration. In this work, we first describe the design configuration and simulations estimating the measurement range and system resolution for biological aerosol concentration under given background radiation. With a relative error of less than 10%, numerical simulations show the system is able to monitor biological aerosols out to distances of 1.8 km in the daytime and 7.3 km at night. Simulated results demonstrate the designed fluorescence lidar is capable of identifying a minimum biological aerosol concentration of 5.0 × 10^-5 ppb in the daytime and 1.0 × 10^-7 ppb at night at a range of 0.1 km. We believe the ultraviolet laser-induced fluorescence lidar can find wide application in remote sensing of biological aerosols in the atmosphere.

  6. Combined optimization of image-gathering and image-processing systems for scene feature detection

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.

    1987-01-01

    The relationship between the image-gathering and image-processing systems for minimum mean-squared error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated where the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled-image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image-gathering optics and, notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image-gathering optical transfer function is also investigated, and the results of a sensitivity analysis are shown.
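
    For orientation, the circularly symmetric part of such a detector is the familiar Laplacian-of-Gaussian response, available directly in scipy (the lattice-dependent asymmetry derived in the paper is not modeled here):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    img = np.zeros((64, 64))
    img[:, 32:] = 1.0                     # vertical step edge at column 32

    response = gaussian_laplace(img, sigma=2.0)

    # The positive and negative lobes of the LoG response straddle the
    # edge; their midpoint locates it.
    row = response[32]
    print((np.argmax(row) + np.argmin(row)) / 2)   # ~31.5
    ```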

  7. Dielectric relaxation behavior and impedance studies of Cu2+ ion doped Mg - Zn spinel nanoferrites

    NASA Astrophysics Data System (ADS)

    Choudhary, Pankaj; Varshney, Dinesh

    2018-03-01

    Cu2+-substituted Mg-Zn nanoferrites were synthesized by a low-temperature-fired sol-gel auto-combustion method. The spinel nature of the nanoferrites was confirmed by laboratory X-ray diffraction. Williamson-Hall (W-H) analysis estimates the average crystallite size (22.25-29.19 ± 3 nm) and the microstrain induced in Mg0.5Zn0.5-xCuxFe2O4 (0.0 ≤ x ≤ 0.5). Raman scattering measurements confirm the presence of four active phonon modes; a red shift is observed with increasing Cu concentration. The dielectric parameters exhibit a non-monotonic dispersion with Cu concentration, interpreted in terms of the hopping mechanism and Maxwell-Wagner interfacial polarization. The ac conductivity of the nanoferrites increases with increasing frequency. The complex electrical modulus reveals a non-Debye type of dielectric relaxation in the nanoferrites. The reactive impedance (Z″) shows an anomalous behavior related to a resonance effect. The complex impedance exhibits one semicircle, corresponding to the intergrain (grain boundary) resistance, and also indicates the conducting nature of the nanoferrites. For x = 0.2, a large semicircle is observed, revealing ohmic behavior (minimum potential drop at the electrode surface). The dielectric properties are best for nanoferrites with x = 0.2, owing to the high dielectric constant, high conductivity and minimum loss value (∼0.009) at 1 MHz.

  8. An echolocation model for the restoration of an acoustic image from a single-emission echo

    NASA Astrophysics Data System (ADS)

    Matsuo, Ikuo; Yano, Masafumi

    2004-12-01

    Bats can form a fine acoustic image of an object using frequency-modulated echolocation sounds. The acoustic image is an impulse response, known as the reflected-intensity distribution, which is composed of amplitude and phase spectra over a range of frequencies. However, bats detect only the amplitude spectrum, owing to the low time resolution of their peripheral auditory system, and the frequency range of the emission is restricted. It is therefore necessary to restore the acoustic image from limited information. The amplitude spectrum varies with changes in the configuration of the reflected-intensity distribution, while the phase spectrum varies with changes in both its configuration and its location. Here, by introducing some reasonable constraints, a method is proposed for restoring an acoustic image from the echo. The configuration is extrapolated from the amplitude spectrum of the restricted frequency range by using the continuity condition of the amplitude spectrum at the minimum frequency of the emission and the minimum phase condition. Determining the location requires extracting the amplitude spectra that vary with location. For this purpose, Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates were used. The location is estimated from the temporal changes of the amplitude spectra.

  9. UAS Well Clear Recovery Against Non-Cooperative Intruders Using Vertical Maneuvers

    NASA Technical Reports Server (NTRS)

    Cone, Andrew; Thipphavong, David; Lee, Seung Man; Santiago, Confesor

    2017-01-01

    This paper documents a study that drove the development of a mathematical expression in the minimum operational performance standards (MOPS) of detect-and-avoid (DAA) systems for unmanned aircraft systems (UAS). This equation describes the conditions under which vertical maneuver guidance could be provided during recovery of well clear separation with a non-cooperative VFR aircraft in addition to horizontal maneuver guidance. Although suppressing vertical maneuver guidance in these situations increased the minimum horizontal separation from 500 to 800 feet, the maximum severity of loss of well clear increased in about 35 of the encounters compared to when a vertical maneuver was preferred and allowed. Additionally, analysis of individual cases led to the identification of a class of encounter where vertical rate error had a large effect on horizontal maneuvers due to the difficulty of making the correct left-right turn decision: crossing conflict with intruder changing altitude. These results supported allowing vertical maneuvers when UAS vertical performance exceeds the relative vertical position and velocity accuracy of the DAA tracker given the current velocity of the UAS and the relative vertical position and velocity estimated by the DAA tracker. Looking ahead, these results indicate a need to improve guidance algorithms by utilizing maneuver stability and near mid-air collision risk when determining maneuver guidance to regain well clear separation.

  10. Temporal variation of VOC emission from solvent and water based wood stains

    NASA Astrophysics Data System (ADS)

    de Gennaro, Gianluigi; Loiotile, Annamaria Demarinis; Fracchiolla, Roberta; Palmisani, Jolanda; Saracino, Maria Rosaria; Tutino, Maria

    2015-08-01

    Solvent- and water-based wood stains were monitored using a small test emission chamber in order to characterize their emission profiles in terms of Total and individual VOCs. The study of the concentration-time profiles of individual VOCs made it possible to identify the compounds emitted at the highest concentrations for each type of stain, to examine their decay curves, and finally to estimate the concentrations in a reference room. The solvent-based wood stain was characterized by the highest Total VOC emission level (5.7 mg/m3), which decreased over time more slowly than those of the water-based ones. The same finding was observed for the main detected compounds: Benzene, Toluene, Ethylbenzene, Xylenes, Styrene, alpha-Pinene and Camphene. On the other hand, the highest level of Limonene was emitted by a water-based wood stain. However, the concentration-time profiles showed that the water-based product reached its times of maximum and minimum emission considerably sooner: the Limonene concentration reached its minimum in about half the time compared to the solvent-based product. According to the AgBB evaluation scheme, only one of the investigated water-based wood stains can be classified as a low-emitting product whose use may not determine any potential adverse effect on human health.

  11. AOAC SMPR 2015.009: Estimation of total phenolic content using Folin-C Assay

    USDA-ARS?s Scientific Manuscript database

    This AOAC Standard Method Performance Requirements (SMPR) is for estimation of total soluble phenolic content in dietary supplement raw materials and finished products using the Folin-C assay for comparison within same matrices. SMPRs describe the minimum recommended performance characteristics to b...

  12. ELECTROFISHING DISTANCE NEEDED TO ESTIMATE FISH SPECIES RICHNESS IN RAFTABLE WESTERN USA RIVERS

    EPA Science Inventory

    A critical issue in river monitoring is the minimum amount of sampling distance required to adequately represent the fish assemblage of a reach. Determining adequate sampling distance is important because it affects estimates of fish assemblage integrity and diversity at local a...

  13. 32 CFR 218.4 - Dose estimate reporting standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...

  14. 32 CFR 218.4 - Dose estimate reporting standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...

  15. 32 CFR 218.4 - Dose estimate reporting standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...

  16. 32 CFR 218.4 - Dose estimate reporting standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...

  17. 32 CFR 218.4 - Dose estimate reporting standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) MISCELLANEOUS GUIDANCE FOR THE DETERMINATION AND REPORTING OF NUCLEAR RADIATION DOSE FOR DOD PARTICIPANTS IN THE ATMOSPHERIC NUCLEAR TEST PROGRAM (1945-1962) § 218.4 Dose estimate reporting standards. The following minimum... of the radiation environment to which the veteran was exposed and shall include inhaled, ingested...

  18. Identification of the release history of a groundwater contaminant in non-uniform flow field through the minimum relative entropy method

    NASA Astrophysics Data System (ADS)

    Cupola, F.; Tanda, M. G.; Zanini, A.

    2014-12-01

    Interest in approaches that allow the estimation of pollutant source release in groundwater has increased greatly over the last decades. This is due to the large number of groundwater reclamation procedures that have been carried out: remediation is expensive, and the costs can be shared more easily among the different actors if the release history is known. Moreover, a reliable release history is a useful tool for predicting plume evolution and for minimizing the harmful effects of the contamination. In this framework, Woodbury and Ulrych (1993, 1996) adopted and improved the minimum relative entropy (MRE) method to solve linear inverse problems for the recovery of the pollutant release history in an aquifer. In this work, the MRE method has been extended to detect the source release history in a 2-D aquifer characterized by a non-uniform flow field. The approach has been tested on two cases: a 2-D homogeneous conductivity field and a strongly heterogeneous one (the hydraulic conductivity varies over three orders of magnitude). In the latter case the transfer function could not be described analytically, so the transfer functions were estimated by means of the method developed by Butera et al. (2006). To demonstrate its scope, the method was applied to two different datasets: observations collected at the same time at 20 different monitoring points, and observations collected at 2 monitoring points at different times (15-25 monitoring points). The observations were assumed to be affected by random error. These study cases were carried out considering a boxcar and a Gaussian function as the expected value of the prior distribution of the release history. The agreement between the true and the estimated release histories was evaluated through the normalized root mean square error (nRMSE), which demonstrated the ability of the method to recover the release history even in the most severe cases. Finally, a forward simulation was carried out using the estimated release history in order to compare the true data with the estimated data: the best agreement was obtained in the homogeneous case, although the nRMSE is acceptable in the heterogeneous case as well.

  19. Development and evaluation of a technique for in vivo monitoring of 60Co in human lungs

    NASA Astrophysics Data System (ADS)

    de Mello, J. Q.; Lucena, E. A.; Dantas, A. L. A.; Dantas, B. M.

    2016-07-01

    60Co is an activation product generated in nuclear reactors and represents a risk of internal exposure for workers in nuclear power plants, especially those involved in the maintenance of potentially contaminated parts and equipment. The control of 60Co intake by inhalation can be performed through in vivo monitoring. This work describes the evaluation of such a technique in terms of the minimum detectable activity and the corresponding minimum detectable effective doses, based on biokinetic and dosimetric models of 60Co in the human body. The results show that the technique is suitable both for the monitoring of occupational exposures and for the evaluation of accidental intakes.
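
    Minimum-detectable-activity figures of this kind conventionally rest on the Currie criterion. A sketch with assumed counting parameters for an in vivo measurement (all values illustrative, not the paper's calibration):

    ```python
    import math

    def mda_currie(background_counts, efficiency, gamma_yield, t_seconds):
        """Currie MDA in Bq: (2.71 + 4.65*sqrt(B)) / (eff * yield * t)."""
        return (2.71 + 4.65 * math.sqrt(background_counts)) / (
            efficiency * gamma_yield * t_seconds)

    # Assumed example: 1200 background counts in 1800 s, 2% counting
    # efficiency, ~1 gamma per decay for the 60Co 1173 keV line.
    print(f"MDA ~ {mda_currie(1200, 0.02, 1.0, 1800):.1f} Bq")
    ```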

  20. THREE PLANETS ORBITING WOLF 1061

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, D. J.; Wittenmyer, R. A.; Tinney, C. G.

    We use archival HARPS spectra to detect three planets orbiting the M3 dwarf Wolf 1061 (GJ 628). We detect a 1.36 M⊕ minimum-mass planet with an orbital period P = 4.888 days (Wolf 1061b), a 4.25 M⊕ minimum-mass planet with orbital period P = 17.867 days (Wolf 1061c), and a likely 5.21 M⊕ minimum-mass planet with orbital period P = 67.274 days (Wolf 1061d). All of the planets are of sufficiently low mass that they may be rocky in nature. The 17.867 day planet falls within the habitable zone for Wolf 1061 and the 67.274 day planet falls just outside the outer boundary of the habitable zone. There are no signs of activity observed in the bisector spans, cross-correlation FWHMs, calcium H and K indices, NaD indices, or Hα indices near the planetary periods. We use custom methods to generate a cross-correlation template tailored to the star. The resulting velocities do not suffer the strong annual variation observed in the HARPS DRS velocities. This differential technique should deliver better exploitation of the archival HARPS data for the detection of planets at extremely low amplitudes.

  1. Wind adaptive modeling of transmission lines using minimum description length

    NASA Astrophysics Data System (ADS)

    Jaw, Yoonseok; Sohn, Gunho

    2017-03-01

    Transmission lines are moving objects whose positions are dynamically affected by wind-induced conductor motion while they are being acquired by airborne laser scanners. This wind effect results in a noisy distribution of laser points, which often hinders accurate representation of transmission lines and thus leads to various types of modeling errors. This paper presents a new method for complete 3D transmission line model reconstruction in the framework of inner-span and across-span analysis. A highlight is that the proposed method is capable of indirectly estimating the noise scales that corrupt the quality of laser observations under different wind speeds, through a linear regression analysis. In the inner-span analysis, individual transmission line models of each span are evaluated based on Minimum Description Length theory, and erroneous transmission line segments are subsequently replaced by precise transmission line models with a wind-adaptive noise scale estimated. In the subsequent across-span analysis, detecting the precise start and end positions of the transmission line models, known as Points of Attachment, is the key to correcting partial modeling errors as well as refining the transmission line models. Finally, geometric and topological completion of the transmission line models is achieved over the entire network. A performance evaluation was conducted over 138.5 km of corridor data. In a modest wind condition, the results demonstrate that the proposed method can improve non-wind-adaptive initial models from an average success rate of 48% to complete transmission line models in the range between 85% and 99.5%, with root-mean-square positional accuracies of 9.55 cm for the transmission line models and 28 cm for the Points of Attachment.
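
    The Minimum Description Length criterion used for the inner-span evaluation can be illustrated generically: among candidate models of a span, pick the one minimizing (n/2)·log(RSS/n) + (k/2)·log(n), which trades fit quality against the number of parameters k. The sketch below uses polynomial models of a synthetic sagging span; the paper's wind-adaptive noise-scale regression is not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    x = np.linspace(0.0, 1.0, 200)                 # normalized span position
    z = 30 + 8 * (x - 0.5) ** 2 + rng.normal(0, 0.3, x.size)  # sag + noise

    def mdl(order):
        coeffs = np.polyfit(x, z, order)
        rss = np.sum((z - np.polyval(coeffs, x)) ** 2)
        n, k = x.size, order + 1
        return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

    best = min(range(1, 7), key=mdl)
    print(f"model order selected by MDL: {best}")   # 2, the true parabola
    ```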

  2. Airborne LiDAR analysis and geochronology of faulted glacial moraines in the Tahoe-Sierra frontal fault zone reveal substantial seismic hazards in the Lake Tahoe region, California-Nevada USA

    USGS Publications Warehouse

    Howle, James F.; Bawden, Gerald W.; Schweickert, Richard A.; Finkel, Robert C.; Hunter, Lewis E.; Rose, Ronn S.; von Twistern, Brent

    2012-01-01

    We integrated high-resolution bare-earth airborne light detection and ranging (LiDAR) imagery with field observations and modern geochronology to characterize the Tahoe-Sierra frontal fault zone, which forms the neotectonic boundary between the Sierra Nevada and the Basin and Range Province west of Lake Tahoe. The LiDAR imagery clearly delineates active normal faults that have displaced late Pleistocene glacial moraines and Holocene alluvium along 30 km of linear, right-stepping range front of the Tahoe-Sierra frontal fault zone. Herein, we illustrate and describe the tectonic geomorphology of faulted lateral moraines. We have developed new, three-dimensional modeling techniques that utilize the high-resolution LiDAR data to determine tectonic displacements of moraine crests and alluvium. The statistically robust displacement models combined with new ages of the displaced Tioga (20.8 ± 1.4 ka) and Tahoe (69.2 ± 4.8 ka; 73.2 ± 8.7 ka) moraines are used to estimate the minimum vertical separation rate at 17 sites along the Tahoe-Sierra frontal fault zone. Near the northern end of the study area, the minimum vertical separation rate is 1.5 ± 0.4 mm/yr, which represents a two- to threefold increase in estimates of seismic moment for the Lake Tahoe basin. From this study, we conclude that potential earthquake moment magnitudes (Mw) range from 6.3 ± 0.25 to 6.9 ± 0.25. A close spatial association of landslides and active faults suggests that landslides have been seismically triggered. Our study underscores that the Tahoe-Sierra frontal fault zone poses substantial seismic and landslide hazards.

  3. Post-Newtonian evolution of massive black hole triplets in galactic nuclei - III. A robust lower limit to the nHz stochastic background of gravitational waves

    NASA Astrophysics Data System (ADS)

    Bonetti, Matteo; Sesana, Alberto; Barausse, Enrico; Haardt, Francesco

    2018-06-01

    Inspiraling massive black hole binaries (MBHBs) forming in the aftermath of galaxy mergers are expected to be the loudest gravitational-wave (GW) sources relevant for pulsar-timing arrays (PTAs) at nHz frequencies. The incoherent overlap of signals from a cosmic population of MBHBs gives rise to a stochastic GW background (GWB) with characteristic strain around hc ~ 10^-15 at a reference frequency of 1 yr^-1, although uncertainties around this value are large. Current PTAs are piercing into the GW amplitude range predicted by MBHB-population models, but no detection has been reported so far. To assess the future success prospects of PTA experiments, it is therefore important to estimate the minimum GWB level consistent with our current understanding of the formation and evolution of galaxies and massive black holes (MBHs). To this purpose, we couple a semi-analytic model of galaxy evolution and an extensive study of the statistical outcome of triple MBH interactions. We show that even in the most pessimistic scenario where all MBHBs stall before entering the GW-dominated regime, triple interactions resulting from subsequent galaxy mergers inevitably drive a considerable fraction of the MBHB population to coalescence. At frequencies relevant for PTA, the resulting GWB is only a factor of 2-3 suppressed compared to a fiducial model where binaries are allowed to merge over Gyr time-scales. Coupled with current estimates of the expected GWB amplitude range, our findings suggest that the minimum GWB from cosmic MBHBs is unlikely to be lower than hc ~ 10^-16 (at f = 1 yr^-1), well within the expected sensitivity of projected PTAs based on future observations with FAST, MeerKAT, and SKA.

  4. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    PubMed

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum-variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously while remaining derivative-free, yielding a high-precision algorithm for Mars entry navigation. Integrated IMU/radio-beacon navigation is introduced in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its suitability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Estimation of the transmissivity of thin leaky-confined aquifers from single-well pumping tests

    NASA Astrophysics Data System (ADS)

    Worthington, Paul F.

    1981-01-01

    Data from the quasi-equilibrium phases of a step-drawdown test are used to evaluate the coefficient of non-linear head losses subject to the assumption of a constant effective well radius. After applying a well-loss correction to the observed drawdowns of the first step, an approximation method is used to estimate a pseudo-transmissivity of the aquifer from a single value of time-variant drawdown. The pseudo-transmissivities computed for each of a sequence of values of time pass through a minimum when there is least manifestation of casing-storage and leakage effects, phenomena to which pumping-test data of this kind are particularly susceptible. This minimum pseudo-transmissivity, adjusted for partial penetration effects where appropriate, constitutes the best possible estimate of aquifer transmissivity. The ease of application of the overall procedure is illustrated by a practical example.
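
    The first step described above is the classic Jacob decomposition of step-drawdown data, s_w = B·Q + C·Q², with C the non-linear well-loss coefficient; B and C follow from a linear least-squares fit. A sketch with synthetic step data (the paper's subsequent pseudo-transmissivity iteration is not reproduced):

    ```python
    import numpy as np

    Q = np.array([500.0, 1000.0, 1500.0, 2000.0])   # assumed rates, m^3/d
    s = np.array([1.12, 2.45, 4.00, 5.75])          # equilibrium drawdown, m

    A = np.column_stack([Q, Q ** 2])
    (B_hat, C_hat), *_ = np.linalg.lstsq(A, s, rcond=None)

    well_loss = C_hat * Q ** 2     # subtracted before transmissivity work
    print(f"B ~ {B_hat:.2e}, C ~ {C_hat:.2e}")
    ```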

  6. Muon tomography in the Mont Terri underground rock laboratory

    NASA Astrophysics Data System (ADS)

    Lesparre, N.; Gibert, D.; Marteau, J.; Carlus, B.; Nussbaum, C.

    2012-04-01

    The Mont Terri underground rock laboratory (Switzerland) was excavated in a Mesozoic shale formation constituted by Opalinus clay. This impermeable formation presents suitable properties for hosting repository sites for radioactive waste. A muon telescope was installed in this laboratory in October 2009 to establish the feasibility of muon tomography and to test the sensor performance in a calm environment, shielded from noisy atmospheric particles. However, the presence of radon in the gallery, as well as charged particles from the decay of gamma rays, may create background noise. This noise shifts and smooths the signal, inducing an underestimation of the rock density. The uncorrelated background was measured by placing the detection planes in anti-coincidence. This estimate is essential and has to be combined with the theoretical feasibility evaluation to determine the best experimental set-up for observing muon flux fluctuations due to density variations. The muon densitometry experiment is presented here together with the estimation of its feasibility. The data acquired from different locations inside the underground laboratory are presented. They are compared to two models of the overburden above the laboratory, corresponding to the minimum and maximum expected muon flux depending on the assumed rock density.

  7. Retrospective dosimetry using OSL of tooth enamel and dental repair materials irradiated under wet and dry conditions.

    PubMed

    Geber-Bergstrand, Therése; Bernhardsson, Christian; Mattsson, Sören; Rääf, Christopher L

    2012-11-01

    Following a radiological or nuclear emergency, there is a need for quick and reliable dose estimates for potentially exposed people. In situations where dosimeters are not readily available, the dose estimations must be carried out using alternative methods. In the present study, the optically stimulated luminescence (OSL) properties of tooth enamel and different dental repair materials have been examined. Specimens of the materials were exposed to gamma and beta radiation in different types of liquid environments to mimic the actual irradiation situation in the mouth. Measurements were taken using a Risø TL/OSL reader, and irradiations were made using a 90Sr/90Y source and a linear accelerator (6 MV photons). Results show that the OSL signal from tooth enamel decreases substantially when the enamel is kept in a wet environment. Thus, tooth enamel is not reliable for retrospective dose assessment without further studies of this phenomenon. Dental repair materials, on the other hand, do not exhibit the same effect when exposed to liquids. In addition, dose-response and fading measurements of the dental repair materials show promising results, making these materials highly interesting for retrospective dosimetry. The minimum detectable dose for the dental repair materials has been estimated to be 20-185 mGy.

  8. A combined joint diagonalization-MUSIC algorithm for subsurface targets localization

    NASA Astrophysics Data System (ADS)

    Wang, Yinlin; Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon

    2014-06-01

    This paper presents a combined joint diagonalization (JD) and multiple signal classification (MUSIC) algorithm for estimating subsurface object locations from electromagnetic induction (EMI) sensor data, without solving ill-posed inverse-scattering problems. JD is a numerical technique that finds the common eigenvectors that diagonalize a set of multistatic response (MSR) matrices measured by a time-domain EMI sensor. Eigenvalues from targets of interest (TOI) can then be distinguished automatically from noise-related eigenvalues. Filtering is also carried out within JD to improve the signal-to-noise ratio (SNR) of the data. The MUSIC algorithm exploits the orthogonality between the signal and noise subspaces of the MSR matrix, which can be separated using information provided by JD. An array of theoretically calculated Green's functions is then projected onto the noise subspace, and the location of the target is estimated from the minimum of the projection, owing to this orthogonality. This combined method is applied to data from the Time-Domain Electromagnetic Multisensor Towed Array Detection System (TEMTADS). Examples of TEMTADS test-stand data and field data collected at Spencer Range, Tennessee are analyzed and presented. Results indicate that, owing to its noniterative mechanism, the method can be executed fast enough to provide real-time estimation of object locations in the field.
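
    A toy illustration (Python) of the MUSIC step: the noise subspace of a multistatic response matrix is estimated (here via an SVD rather than the paper's joint diagonalization), a simple scalar falloff stands in for the theoretically calculated Green's functions, and the target is located at the minimum of the projection onto the noise subspace. The geometry, falloff law, and signal model are all invented.

        import numpy as np

        rng = np.random.default_rng(0)
        rx = np.stack([np.linspace(-1, 1, 9), np.zeros(9), np.zeros(9)], axis=1)  # receiver line

        def green(p):
            d = np.linalg.norm(rx - p, axis=1)
            return 1.0 / d**3          # crude quasi-static EMI falloff (assumption)

        target = np.array([0.2, 0.1, -0.5])
        msr = np.outer(green(target), green(target)) + 1e-6 * rng.standard_normal((9, 9))

        _, _, vt = np.linalg.svd(msr)
        noise = vt[1:]                 # one source -> rank-1 signal subspace

        best, best_val = None, np.inf
        for x in np.linspace(-1, 1, 41):
            for z in np.linspace(-1.0, -0.1, 41):   # y fixed at the true value (2-D grid)
                g = green(np.array([x, 0.1, z]))
                g /= np.linalg.norm(g)
                proj = np.linalg.norm(noise @ g)    # projection onto the noise subspace
                if proj < best_val:
                    best, best_val = (x, z), proj
        print("estimated (x, z):", best)            # should land near (0.2, -0.5)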

  9. Hance_WFSR flasher locations

    EPA Pesticide Factsheets

    This entry contains two files. The first file, Hance_WFSR Flasher locations.xlxs, contains information describing the location of installed landmark 'flashers' consisting of 2 square aluminum metal tags. Each tag was inscribed with a number to aid field personnel in the identification of landmark location within the West Fork Smith River watershed in southern coastal Oregon. These landmarks were used to calculate stream distances between points in the watershed, including distances between tagging locations and detection events for tagged fish. A second file, named Hance_fish_detection_data1.xlxs, contains information on the detection of tagged fish within the West Fork Smith River stream network. The file includes both the location where the fish were tagged and where they were subsequently detected. Together with the information in the WFSR flasher location dataset, these data allow estimation of the minimum distances and directions moved by juvenile coho salmon during the fall transition period. A map locator is provided in Figure 1 of the accompanying manuscript: Dalton J. Hance, Lisa M. Ganio, Kelly M. Burnett & Joseph L. Ebersole (2016) Basin-Scale Variation in the Spatial Pattern of Fall Movement of Juvenile Coho Salmon in the West Fork Smith River, Oregon, Transactions of the American Fisheries Society, 145:5, 1018-1034, DOI: 10.1080/00028487.2016.1194892. This dataset is associated with the following publication: Hance, D.J., L.M. Ganio, K.M. Burnett, an
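
    A hypothetical sketch (Python, pandas) of how the two files could be combined to estimate per-fish minimum movement distances, assuming the .xlxs files (extensions as listed above, presumably Excel workbooks) can be read directly. Every column name below is an assumption, not the dataset's actual schema.

        import pandas as pd

        flashers = pd.read_excel("Hance_WFSR Flasher locations.xlxs")    # tag_no, stream_dist_m
        detections = pd.read_excel("Hance_fish_detection_data1.xlxs")    # fish_id, tag_site, detect_site

        d = (detections
             .merge(flashers.rename(columns={"tag_no": "tag_site",
                                             "stream_dist_m": "tag_dist"}), on="tag_site")
             .merge(flashers.rename(columns={"tag_no": "detect_site",
                                             "stream_dist_m": "detect_dist"}), on="detect_site"))
        # Minimum stream distance between the tagging site and any detection site.
        d["move_m"] = (d["detect_dist"] - d["tag_dist"]).abs()
        print(d.groupby("fish_id")["move_m"].min())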

  10. Age validation of canary rockfish (Sebastes pinniger) using two independent otolith techniques: lead-radium and bomb radiocarbon dating.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, A H; Kerr, L A; Cailliet, G M

    2007-11-04

    Canary rockfish (Sebastes pinniger) have long been an important part of recreational and commercial rockfish fishing from southeast Alaska to southern California, but localized stock abundances have declined considerably. Based on age estimates from otoliths and other structures, lifespan estimates vary from about 20 years to over 80 years. For the purpose of monitoring stocks, age composition is routinely estimated by counting growth zones in otoliths; however, age estimation procedures and lifespan estimates remain largely unvalidated. Typical age validation techniques have limited application for canary rockfish because they are deep dwelling and may be long lived. In this study, the unaged otolith of the pair from fish aged at the Department of Fisheries and Oceans Canada was used in one of two age validation techniques: (1) lead-radium dating and (2) bomb radiocarbon ((14)C) dating. Age estimate accuracy and the validity of age estimation procedures were evaluated based on the results from each technique. Lead-radium dating proved successful, yielding a minimum lifespan estimate of 53 years and providing support for age estimation procedures up to about 50-60 years. These findings were further supported by Δ(14)C data, which indicated a minimum lifespan estimate of 44 ± 3 years. Both techniques validate, to differing degrees, the age estimation procedures and support the inference that canary rockfish can live more than 80 years.
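
    A back-of-envelope version (Python) of the lead-radium clock behind the 53-year figure: assuming no initial lead-210 in the otolith core, the 210Pb/226Ra activity ratio grows as 1 - exp(-λt), so a measured ratio yields a minimum age. The ratio below is invented to reproduce the quoted estimate.

        import math

        HALF_LIFE_PB210 = 22.3                      # years
        lam = math.log(2) / HALF_LIFE_PB210

        ratio = 0.81                                # hypothetical measured 210Pb/226Ra
        age = -math.log(1.0 - ratio) / lam          # invert the ingrowth equation
        print(f"core age ~ {age:.0f} years")        # ~53 yr for a ratio of 0.81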

  11. Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera

    PubMed Central

    Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin

    2016-01-01

    This study aims to demonstrate the feasibility of classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting the minimum diameter of the equatorial section and the volume, and to establish the corresponding estimation models. Results show that length, maximum diameter of the equatorial section, and weight are selected to predict the minimum diameter of the equatorial section, with a coefficient of determination of only 0.82 compared to manual measurements. Weight and length are then selected to estimate the volume, which agrees well with the measured volume (coefficient of determination 0.98). Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved by using a linear combination of the length/maximum-diameter and projected-area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to meet international grading standards for fruit shape classification by adding a single camera. PMID:27376292
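
    A minimal sketch (Python, scikit-learn) of the regression step described above, on made-up data: volume is regressed on weight and length, and the two shape ratios used by the classifier are assembled as features. The variable choices follow the abstract; all data and coefficients are placeholders.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 200
        weight = rng.uniform(60, 140, n)              # g
        length = rng.uniform(50, 80, n)               # mm
        max_diam = rng.uniform(40, 60, n)             # mm
        proj_area = 0.8 * length * max_diam + rng.normal(0, 30, n)   # mm^2
        volume = 0.9 * weight + rng.normal(0, 3, n)   # cm^3 (toy relation)

        X_vol = np.column_stack([weight, length])
        vol_model = LinearRegression().fit(X_vol, volume)
        print("R^2 for volume:", round(vol_model.score(X_vol, volume), 3))

        # The two ratios that fed the shape classifier reaching 98.3%:
        shape_features = np.column_stack([length / max_diam, proj_area / length])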

  12. Minimum average 7-day, 10-year flows in the Hudson River basin, New York, with release-flow data on Rondout and Ashokan reservoirs

    USGS Publications Warehouse

    Archer, Roger J.

    1978-01-01

    Minimum average 7-day, 10-year flows at 67 gaging stations and 173 partial-record stations in the Hudson River basin are given in tabular form. Variation of the 7-day, 10-year low flow from point to point in selected reaches, and the corresponding times of travel, are shown graphically for the Wawayanda Creek, Wallkill River, Woodbury-Moodna Creek, and Fishkill Creek basins. The 7-day, 10-year low flow for the Saw Kill basin, and estimates of the 7-day, 10-year low flow of the Roeliff Jansen Kill at Ancram and of Birch Creek at Pine Hill, are given. Summaries of discharge from the Rondout and Ashokan Reservoirs, in Ulster County, are also included. Minimum average 7-day, 10-year flows for gaging stations with 10 years or more of record were determined by log-Pearson Type III computation; those for partial-record stations were developed by correlating discharge measurements made at the partial-record stations with discharge data from appropriate long-term gaging stations. The variation in low flows from point to point within the selected subbasins was estimated from available data and a regional regression formula. Time of travel at these flows in the four subbasins was estimated from available data and Boning's equations.
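
    A minimal sketch (Python, scipy) of the log-Pearson Type III computation used for the gaged sites: fit the distribution to the logarithms of the annual minimum 7-day flows, then take the 10% non-exceedance quantile (a 10-year recurrence interval). The flow series is invented, and a simple maximum-likelihood fit stands in for the moment-based procedure typically used in practice.

        import numpy as np
        from scipy import stats

        ann_min_7day = np.array([12.0, 9.5, 15.2, 8.1, 11.3, 10.0, 7.6,
                                 13.4, 9.9, 8.8, 14.1, 10.7])   # cfs, one per year
        logq = np.log10(ann_min_7day)
        skew, loc, scale = stats.pearson3.fit(logq)
        q7_10 = 10 ** stats.pearson3.ppf(0.1, skew, loc=loc, scale=scale)
        print(f"7Q10 ~ {q7_10:.1f} cfs")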

  13. Setting population targets for mammals using body mass as a predictor of population persistence.

    PubMed

    Hilbers, Jelle P; Santini, Luca; Visconti, Piero; Schipper, Aafke M; Pinto, Cecilia; Rondinini, Carlo; Huijbregts, Mark A J

    2017-04-01

    Conservation planning and biodiversity assessments need quantitative targets to optimize planning options and assess the adequacy of current species protection. However, targets aiming at persistence require population-specific data, which limit their use in favor of fixed and nonspecific targets, likely leading to unequal distribution of conservation efforts among species. We devised a method to derive equitable population targets; that is, quantitative targets of population size that ensure equal probabilities of persistence across a set of species and that can be easily inferred from species-specific traits. In our method, we used models of population dynamics across a range of life-history traits related to species' body mass to estimate minimum viable population targets. We applied our method to a range of body masses of mammals, from 2 g to 3825 kg. The minimum viable population targets decreased asymptotically with increasing body mass and were on the same order of magnitude as minimum viable population estimates from species- and context-specific studies. Our approach provides a compromise between pragmatic, nonspecific population targets and detailed context-specific estimates of population viability for which only limited data are available. It enables a first estimation of species-specific population targets based on a readily available trait and thus allows setting equitable targets for population persistence in large-scale and multispecies conservation assessments and planning. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
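
    The paper's fitted relationship between body mass and the minimum viable population target is not given in the abstract; the curve below (Python) is a purely illustrative power law with the qualitative behavior reported, namely targets that decrease asymptotically with increasing body mass. Both the functional form and the coefficients are inventions.

        # Illustrative only: an invented asymptotically decreasing curve,
        # not the model fitted in the study.
        def mvp_target(mass_kg, a=5000.0, b=0.25, floor=500.0):
            return floor + a * mass_kg ** (-b)

        for m in [0.002, 1.0, 100.0, 3825.0]:   # the body-mass range studied
            print(f"{m:>8} kg -> target ~ {mvp_target(m):,.0f} individuals")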

  14. Retrieving air humidity, global solar radiation, and reference evapotranspiration from daily temperatures: development and validation of new methods for Mexico. Part III: reference evapotranspiration

    NASA Astrophysics Data System (ADS)

    Lobit, P.; Gómez Tagle, A.; Bautista, F.; Lhomme, J. P.

    2017-07-01

    We evaluated two methods to estimate reference evapotranspiration (ETo) from minimal weather records (daily maximum and minimum temperatures) in Mexico: a modified reduced-set FAO-Penman-Monteith method (Allen et al. 1998, Rome, Italy) and the Hargreaves and Samani (Appl Eng Agric 1(2): 96-99, 1985) method. In the reduced-set method, the FAO-Penman-Monteith equation was applied with vapor pressure and radiation estimated from temperature data using two new models (see the first and second articles in this series), with mean temperature taken as the average of maximum and minimum temperature corrected for a constant bias, and a constant wind speed assumed. The Hargreaves-Samani method combines two empirical relationships: one between the diurnal temperature range ΔT and shortwave radiation Rs, and another between average temperature and the ratio ETo/Rs; both relationships were evaluated and calibrated for Mexico. After performing a sensitivity analysis to evaluate the impact of different approximations on the estimation of Rs and ETo, several model combinations were tested to predict ETo from daily maximum and minimum temperature alone. The quality of fit of these models was evaluated at 786 weather stations covering most of the territory of Mexico. The best method was found to be a combination of the FAO-Penman-Monteith reduced-set equation with the new radiation and vapor pressure models. As an alternative, a recalibration of the Hargreaves-Samani equation is proposed.
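
    For reference, the standard form of the Hargreaves-Samani (1985) relationship discussed above, in Python: ETo = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax - Tmin), with extraterrestrial radiation Ra expressed as equivalent evaporation (mm/day). The recalibration proposed in the paper would adjust the coefficient; here Ra is a fixed example value rather than computed from latitude and day of year, and the inputs are illustrative.

        import math

        def hargreaves_samani(tmax, tmin, ra_mm_day, k=0.0023):
            # k is the empirical coefficient the paper proposes recalibrating.
            tmean = (tmax + tmin) / 2.0
            return k * ra_mm_day * (tmean + 17.8) * math.sqrt(tmax - tmin)

        print(hargreaves_samani(tmax=32.0, tmin=18.0, ra_mm_day=15.0))   # ~5.5 mm/day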

  15. Measurement of formaldehyde in clean air

    NASA Astrophysics Data System (ADS)

    Neitzert, Volker; Seiler, Wolfgang

    1981-01-01

    A method for the measurement of small amounts of formaldehyde in air has been developed. The method is based on the derivatization of HCHO with 2,4-dinitrophenylhydrazine, forming the 2,4-dinitrophenylhydrazone, which is measured with a GC-ECD technique. HCHO is preconcentrated using a cryogenic sampling technique. The detection limit is 0.05 ppbv for a sampling volume of 200 liters. The method has been applied to measurements in continental and marine air masses, showing HCHO mixing ratios of 0.4-5.0 ppbv and 0.2-1.0 ppbv, respectively. HCHO mixing ratios show diurnal variations, with maximum values during the early afternoon and minimum values during the early morning. In continental air, HCHO mixing ratios are positively correlated with CO and SO2, indicating anthropogenic HCHO sources, which are estimated to be 6-11 × 10(12) g/year on a global scale.
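
    For scale, the arithmetic behind that detection limit (Python): 0.05 ppbv in a 200-liter sample corresponds to roughly 13 ng of HCHO, assuming an ideal gas at standard conditions.

        # Absolute amount of HCHO at the stated detection limit
        # (0.05 ppbv mixing ratio in a 200 L air sample).
        v_sample_l = 200.0
        mol_air = v_sample_l / 22.414          # mol of air (ideal gas, STP)
        mol_hcho = 0.05e-9 * mol_air           # 0.05 ppbv mixing ratio
        mass_ng = mol_hcho * 30.03 * 1e9       # HCHO molar mass 30.03 g/mol
        print(f"~{mass_ng:.0f} ng HCHO at the detection limit")   # ~13 ng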

  16. Image information content and patient exposure.

    PubMed

    Motz, J W; Danos, M

    1978-01-01

    Presently, patient exposure and x-ray tube kilovoltage are determined by image visibility requirements on x-ray film. With the employment of image-processing techniques, image visibility may be manipulated and the exposure may be determined only by the desired information content, i.e., by the required degree of tissue-density discrimination and spatial resolution. This work gives quantitative relationships between the image information content and the patient exposure, and gives estimates of the minimum exposures required for the detection of image signals associated with particular radiological exams. Also, for subject thicknesses larger than approximately 5 cm, the results show that the maximum information content may be obtained at a single kilovoltage and filtration with the simultaneous employment of image-enhancement and antiscatter techniques. This optimization may be used either to reduce the patient exposure or to increase the retrieved information.

  17. Robust spike sorting of retinal ganglion cells tuned to spot stimuli.

    PubMed

    Ghahari, Alireza; Badea, Tudor C

    2016-08-01

    We propose an automatic spike sorting approach for data recorded from a microelectrode array during visual stimulation of wild-type retinas with tiled spot stimuli. The approach first detects individual spikes on each electrode by their signature local minima. With the mixture probability distribution of the local minima estimated, it then applies a minimum-squared-error clustering algorithm to sort the spikes into clusters. A template waveform is defined for each cluster on each electrode, and a number of reliability tests are performed on the template and its corresponding spikes. Finally, a divisive hierarchical clustering algorithm is used to handle correlated templates of the same cluster type across all electrodes. By its performance measures, the spike sorting approach is robust even for recordings with a low signal-to-noise ratio.
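
    A schematic version (Python) of the first two stages described above: spikes are detected as signature local minima below a noise-scaled threshold, and a minimum-squared-error clustering (k-means here) groups the extracted waveforms. The trace is simulated and every parameter is illustrative, not taken from the paper.

        import numpy as np
        from scipy.signal import argrelmin
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)
        trace = rng.normal(0, 1, 20000)                    # simulated electrode trace
        pos = rng.choice(np.arange(50, 19950, 60), 40, replace=False)
        amps = rng.choice([5.0, 9.0], size=40)             # two spike "units"
        for i, a in zip(pos, amps):
            trace[i - 5:i + 5] -= a * np.hanning(10)       # inject negative spikes

        thresh = -4 * np.median(np.abs(trace)) / 0.6745    # robust noise scale
        minima = argrelmin(trace, order=10)[0]
        idx = [i for i in minima if trace[i] < thresh and 20 <= i < len(trace) - 20]
        waveforms = np.array([trace[i - 20:i + 20] for i in idx])

        labels = KMeans(n_clusters=2, n_init=10).fit_predict(waveforms)
        print("spikes per cluster:", np.bincount(labels))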

  18. A review of the theory of interstellar communication

    NASA Technical Reports Server (NTRS)

    Billingham, J.; Wolfe, J. H.; Oliver, B. M.

    1975-01-01

    The probability that intelligent civilizations capable of interstellar communication exist in the galaxy is analyzed. Drake's (1960) equation for the prevalence of communicative civilizations is used in the calculations, and attempts are made to place limits on the search range that must be covered to contact other civilizations, the longevity of the communicative phase of such civilizations, and the possible number of two-way exchanges between civilizations in contact with each other. The minimum estimates indicate that some 100,000 civilizations probably coexist within several tens of astronomical units of each other and that some 1,000,000 probably coexist within 10 light years of each other. Attempts to detect coherent signals characteristic of intelligent life are briefly noted, including Projects Ozma and Cyclops as well as some Soviet attempts. Recently proposed American and Soviet programs for interstellar communication are outlined.
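
    For reference, the Drake equation used in these calculations, evaluated in Python with one illustrative set of parameter values (not the authors' choices):

        # N = R* x fp x ne x fl x fi x fc x L
        R_star = 10      # star formation rate (stars/year)
        f_p = 0.5        # fraction of stars with planets
        n_e = 2          # habitable planets per planetary system
        f_l = 1.0        # fraction of those on which life arises
        f_i = 0.1        # fraction developing intelligence
        f_c = 0.1        # fraction becoming communicative
        L = 1.0e6        # longevity of the communicative phase (years)

        N = R_star * f_p * n_e * f_l * f_i * f_c * L
        print(f"coexisting communicative civilizations: {N:,.0f}")   # 100,000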

  19. Automated Track Recognition and Event Reconstruction in Nuclear Emulsion

    NASA Technical Reports Server (NTRS)

    Deines-Jones, P.; Cherry, M. L.; Dabrowska, A.; Holynski, R.; Jones, W. V.; Kolganova, E. D.; Kudzia, D.; Nilsen, B. S.; Olszewski, A.; Pozharova, E. A.; hide

    1998-01-01

    The major advantages of nuclear emulsion for detecting charged particles are its submicron position resolution and its sensitivity to minimum ionizing particles. These must be balanced, however, against the difficult manual microscope measurement by skilled observers required for the analysis. We have developed an automated system to acquire and analyze the microscope images from emulsion chambers. Each emulsion plate is analyzed independently, allowing coincidence techniques to be used to reject background and estimate error rates. The system has been used to analyze a sample of high-multiplicity Pb-Pb interactions (charged particle multiplicities approx. 1100) produced by the 158 GeV/c per nucleon Pb-208 beam at CERN. Automatically reconstructed track lists agree with our best manual measurements to 3%. We describe the image analysis and track reconstruction techniques, and discuss the measurement and reconstruction uncertainties.

  20. Methane concentration and isotopic composition measurements with a mid-infrared quantum-cascade laser

    NASA Technical Reports Server (NTRS)

    Kosterev, A. A.; Curl, R. F.; Tittel, F. K.; Gmachl, C.; Capasso, F.; Sivco, D. L.; Baillargeon, J. N.; Hutchinson, A. L.; Cho, A. Y.

    1999-01-01

    A quantum-cascade laser operating at a wavelength of 8.1 micrometers was used for high-sensitivity absorption spectroscopy of methane (CH4). The laser frequency was continuously scanned with current over more than 3 cm(-1), and absorption spectra of the CH4 ν4 P branch were recorded. The measured laser linewidth was 50 MHz. A CH4 concentration of 15.6 parts in 10(6) (ppm) in 50 Torr of air was measured over a 43-cm path length with ±0.5-ppm accuracy when the signal was averaged over 400 scans. The minimum detectable absorption in such direct absorption measurements is estimated to be 1.1 x 10(-4). The content of the 13CH4 and CH3D species in a CH4 sample was determined.
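
    A rough sense (Python) of the concentration detection limit implied by the quoted figures: in the optically thin Beer-Lambert limit, fractional absorption scales linearly with concentration, so the 1.1 x 10(-4) minimum detectable absorption maps onto a minimum detectable concentration. The peak fractional absorption assumed below is hypothetical, not a value given in the abstract.

        conc_ppm = 15.6                 # measured CH4 concentration (quoted)
        peak_absorption = 0.02          # assumed fractional absorption at line center
        min_absorption = 1.1e-4         # minimum detectable absorption (quoted)

        detection_limit = conc_ppm * min_absorption / peak_absorption
        print(f"detection limit ~ {detection_limit:.2f} ppm")   # ~0.09 ppm here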
